
30 April 2026

BBC’s deepfake TV series hits uncomfortably close to home – FREE TO READ

A technology-fueled conspiracy thriller from the BBC has taken audiences by storm following the recent conclusion of its third season, exploring themes of deepfakes and manipulated video evidence. These unsettling topics are far from futuristic; they feel tangible and visceral to modern audiences, and they are highly relevant to video industry professionals.

First aired in 2019, ‘The Capture’ explores how deepfake technology, the manipulation of someone’s face and/or body into a false scenario, can be applied to surveillance for the supposed greater good of society. The show’s controversy stems from the way it “corrects” missing parts of CCTV footage using video manipulation techniques, creating a joined-up picture that can be presented as credible evidence when convicting criminals in a court of law.

Though a work of fiction, the reason Faultline is spotlighting this TV series (no spoilers ahead, only insights) is that some of the fabricated technologies presented in the show draw palpable parallels with real-world techniques. More importantly, many of the video technology companies under our remit are already working in the surveillance space, or are actively exploring it.


Surveillance, like anti-piracy, is a clandestine industry. Companies are reluctant to talk about their work in areas that monitor rather than entertain. Yet we have a few examples up our sleeve.

In fact, Faultline explored remarkably similar themes at IBC 2025, where conversations with technology companies (some off the record) revealed the legal complexities of expanding from media and entertainment into closed-circuit television.

Why does CCTV still suck in 2026?

For the countless companies the world over offering some flavor of video optimization, video upscaling, or video enhancement technology, at some stage surveillance opportunities will have been discussed in meetings.

The problem in the real world is that if CCTV footage used in criminal proceedings has been enhanced by proprietary, non-standardized image enhancement software, it opens an opportunity for defense lawyers to pick the evidence apart.

As one image enhancement vendor told Faultline, the concern is that when you introduce uncertainty, it becomes possible to reframe clear evidence: “Was it really a gun being pointed by my client, or was it a pencil that has been manipulated to look like a gun by [insert company]’s video enhancement software?”

The introduction of ambiguity could be enough to undermine a case and complicate prosecutions built primarily on video evidence.

However, there is a way in. If you are smart or lucky enough to have developed a standardized video enhancement technology, like LCEVC (Low Complexity Enhancement Video Coding), then your chances of grabbing a slice of the surveillance market are markedly improved.

V-Nova, the brains behind LCEVC (MPEG-5 Part 2), has informed Faultline of deployments of its image enhancement technology in surveillance and defense. The London-based firm’s only publicly known client in this field is the aerospace company Leonardo, which uses LCEVC to improve the image quality and bandwidth efficiency of drone camera footage.

That said, while streaming platforms are embracing newer video codecs like HEVC, AV1, and now even VVC, surveillance is even more stubbornly attached to the legacy H.264 (AVC) codec than broadcasters are—not because H.264 is the best codec, but because it comes royalty-light and is guaranteed to work with legacy DVRs.

Money is the ultimate barrier, as most CCTV systems are installed on tight budgets where quality is an afterthought.

This low-quality problem is compounded by storage economics. Customers buying surveillance systems are encouraged to take the more financially viable option of 60 days of blocky 480p video over 10 days of crystal-clear 4K. Add to this the creaking networks these cameras sit on (still uploading over DSL-grade uplinks) and the ceiling for picture quality is set remarkably low.
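The trade-off described above is easy to quantify. The sketch below uses illustrative, assumed bitrates (roughly 1 Mbps for low-grade 480p H.264 and 15 Mbps for 4K; real figures vary widely with codec and scene complexity) to show why long retention at low resolution wins on cost:

```python
# Back-of-the-envelope storage math for a single CCTV camera.
# Bitrates are assumptions for illustration, not vendor figures.

def storage_tb(bitrate_mbps: float, days: int) -> float:
    """Terabytes needed to retain `days` of continuous footage."""
    seconds = days * 24 * 3600
    bits = bitrate_mbps * 1e6 * seconds
    return bits / 8 / 1e12  # bits -> bytes -> terabytes

low_res = storage_tb(1.0, 60)    # 60 days of blocky 480p
high_res = storage_tb(15.0, 10)  # 10 days of crystal-clear 4K

print(f"60 days @ 480p ~ {low_res:.2f} TB")   # ~0.65 TB
print(f"10 days @ 4K   ~ {high_res:.2f} TB")  # ~1.62 TB
```

Even with a sixth of the retention window, the 4K option needs more than twice the disk, which is why the low-quality, long-retention configuration keeps winning procurement battles.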

The obvious question then is why not let AI handle the clean-up?

Consumer-facing apps can now denoise and upscale old photos into near-cinematic quality in seconds. But in surveillance, once generative pixels are introduced, the evidential integrity is compromised.

The role of watermarking and provenance:

Beyond image enhancement use cases, forensic watermarking has a role to play in media security workflows, particularly in protecting pre-release content from leaks.

For instance, in studio and post-production environments, each user or session can be assigned uniquely watermarked media, enabling any leaked frame or short clip (even down to something like a screenshot) to be traced back to its source. This capability is increasingly relevant as editing and review workflows move into cloud-based remote environments outside of secure facilities.
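The tracing logic described above can be sketched in a few lines. This is a toy model of the bookkeeping only: a real forensic watermarking system hides the identifier imperceptibly in the pixels, whereas here the embedding step is skipped and the function and key names are hypothetical:

```python
import hashlib

# Toy sketch of session-keyed forensic watermarking bookkeeping.
# A real system embeds the ID invisibly in the video; this only
# shows how a recovered ID maps back to the leaking user/session.

def watermark_id(user: str, session: str, secret: bytes = b"studio-key") -> str:
    """Derive a short, unique watermark ID for one user's session."""
    return hashlib.sha256(secret + user.encode() + session.encode()).hexdigest()[:12]

# Issue uniquely watermarked copies and record who received which ID.
registry = {}
for user, session in [("editor_a", "s1"), ("reviewer_b", "s2")]:
    registry[watermark_id(user, session)] = (user, session)

# Later: an ID extracted from a leaked frame identifies its source.
leaked_id = watermark_id("reviewer_b", "s2")
print(registry[leaked_id])  # -> ('reviewer_b', 's2')
```

The key property is that every issued copy carries a distinct ID, so even a single recovered frame or screenshot is enough to identify the session that leaked it.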

In the TV show, the Met Police’s Counter Terrorism Command develops a proprietary twin-lens camera system designed to detect digital manipulation in real time. Similarities can be drawn to C2PA (the Coalition for Content Provenance and Authenticity), a project founded in 2021 by Adobe, Arm, BBC, Intel, Microsoft, and Truepic with the aim of bringing transparency to the origin of media in the age of rampant AI and dangerous deepfakes.

Last year, Sony’s PXW-Z300 camcorder launched as the world’s first to natively embed digital signatures into video files, supporting C2PA. We are awaiting further device support for C2PA ahead of IBC 2026.
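The core idea behind such in-camera signing can be illustrated with a simplified sketch: bind a cryptographic signature to a hash of the media at capture time, so any later pixel change invalidates verification. Real C2PA uses signed manifests with X.509 certificate chains; the HMAC and key name below are stand-ins for illustration only:

```python
import hashlib
import hmac

# Simplified sketch of capture-time provenance. Real C2PA embeds a
# signed manifest with certificates; HMAC here is a stand-in and
# DEVICE_KEY is a hypothetical device secret.

DEVICE_KEY = b"camera-private-key"

def sign_capture(media: bytes) -> dict:
    """Produce a minimal 'manifest' binding a signature to the media hash."""
    digest = hashlib.sha256(media).hexdigest()
    sig = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"hash": digest, "signature": sig}

def verify(media: bytes, manifest: dict) -> bool:
    """Check that the media still matches its signed manifest."""
    digest = hashlib.sha256(media).hexdigest()
    expected = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == manifest["hash"] and hmac.compare_digest(expected, manifest["signature"])

frame = b"\x00raw-frame-bytes"
manifest = sign_capture(frame)
print(verify(frame, manifest))              # True: untouched footage
print(verify(frame + b"tamper", manifest))  # False: manipulated footage
```

The point mirrors the show’s twin-lens premise: authenticity is established at the moment of capture, so downstream manipulation, however convincing visually, breaks the cryptographic chain.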

Preempting the AI video bubble:

Produced by Heyday Television, ‘The Capture’ was ahead of its time, preceding the AI-generated content boom. Early experiments in AI-generated video emerged around 2019, but these were restricted to short animations, which suffered from visual stuttering and inconsistent motion.

It wasn’t until 2023 that commercial models like Runway introduced the first high-quality video-to-video and text-to-video AI generators, opening the floodgates for the realism breakthrough from 2024 onwards through models like OpenAI’s Sora (recently shuttered) and Google’s Veo.

In related news, YouTube just last week announced the expansion of its likeness detection tool to actors, athletes, and musicians, allowing them to scan for AI-generated impersonations. The system flags content but does not guarantee removal and offers no revenue-sharing model, unlike Content ID for copyright protection.

Hot off the press is news that the first fully AI-generated animated feature film, ‘Sanxingdui: Future Memories’, was approved for theatrical release in China this month.

While deepfake manipulation existed back in 2019, its use was limited and its application messy, unlike the meticulous replicas portrayed in the TV show, or the indiscernible AI videos that flood social media today.

The show’s real sci-fi trickery comes from the real-time deepfake manipulation of live CCTV feeds. While it appears far-fetched, scammers today are using real-time deepfake manipulation to impersonate others during live video calls, with highly accurate results.

There are techniques for debunking real-time facial deepfakes over video calls, such as asking the caller to raise a hand in front of their face, which disrupts the deepfake overlay; but the pace of progress in AI-generated video will surely see these flaws patched soon.

London, one of the most watched cities outside of China, was the perfect setting for ‘The Capture’, though the technology wouldn’t feel out of place at IBC.