BBC defends ambiguous ITU AI report, pointing to ethical minefield

As promised, Faultline Online Reporter picked the brains of the authors behind the ITU-R’s AI broadcast report published just last week, which we lambasted for its distinctly equivocal conclusion. In its defense, the organization explained that vagueness is the only way to approach AI and machine learning in today’s climate, with ethical constraints forcing it to strip the document back to bare bones at this preliminary stage.

Andy Quested, who works on standards at the BBC’s design and engineering department, helped curate the report and assured us the latest reference document is part of a much larger beast arriving in July – around the time of a workshop being held in Geneva with Japanese broadcaster NHK. Really then, last week’s 20-page Artificial Intelligence Systems for Program Production and Exchange report was a snapshot in time. “We had to get the document out very quickly, in under a year, and had to adhere to terms of ethics,” admitted Quested, which essentially meant having to skimp on the conclusion.

That word ethics is a real sticking point in any field, but particularly in the AI sector and even more so for a public service broadcaster like the BBC, which apparently has to jump through significantly more hoops than a company like Sky with its paying subscriber base. The ethical minefield essentially boils down to whether it is moral to collect viewer data in order to serve personalized content, a question which clearly isn’t exclusive to AI and machine learning, but the uncertainty surrounding the technology certainly won’t help its cause.

As a quick recap, the radiocommunications sector of the ITU (ITU-R) concludes that while AI and machine learning algorithms can positively influence program production workflows as well as audience experiences, the development of a training framework is the fundamental element needed to enable AI in broadcast workflows. This framework must be fed real task data to build applications with sufficient accuracy for broadcasting. Our initial issue was that this supposedly vital framework was never elaborated upon, and although these reports aren’t designed to be instruction manuals, some additional guidance would be useful for readers. Quested conceded that the wide-ranging nature of the task at hand meant the report had to be vague. “We simply can’t draw conclusions at this stage,” he said, “but we are beginning to see progress.”
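
For illustration only, here is a minimal sketch of what that prescription might look like in practice, with synthetic data standing in for real task data and an off-the-shelf scikit-learn classifier standing in for whatever model a broadcaster might pick, since the report names no toolchain:

```python
# Minimal sketch only: synthetic stand-in for "real task data" and an
# arbitrary classifier; the ITU-R report specifies no stack or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical task data: feature vectors extracted from program
# segments, with editorial labels (0 = reject, 1 = usable).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# "Sufficient accuracy for broadcasting" implies validating on held-out
# data before the model goes anywhere near a production workflow.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```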

A conclusion of our own is that the ambiguity in AI and machine learning most definitely boosts its buzzword rating.

Moving on, we decided to get a little more specific and US-centric, with Quested referencing some AI and machine learning projects being carried out by Disney, which apply algorithms to making cultural adaptations in content. Food, for example, can be changed from its original form in the US version of a movie to food more commonly consumed in various local markets when the movie is exported, automatically switching out sustenance scene-by-scene. AI can take the cultural element even deeper, said Quested, citing how facial expressions or hand gestures can mean entirely different things across different cultures. Of course, tasking AI and machine learning systems with making cultural changes in turn opens up a multitude of new ethical cans of worms.
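
Nothing about the Disney work has been published, so the following is a purely hypothetical sketch of the shape such a scene-by-scene localization pass could take; the substitution table and function names are ours, not Disney’s:

```python
# Purely hypothetical: the substitution table and helper are ours,
# illustrating the shape of a scene-by-scene localization pass.
FOOD_SWAPS = {
    "jp": {"hot dog": "onigiri", "apple pie": "dorayaki"},
    "in": {"hamburger": "samosa"},
}

def localize_scene(detected_items, market):
    """Swap each detected food item for its local substitute, if any."""
    swaps = FOOD_SWAPS.get(market, {})
    return [swaps.get(item, item) for item in detected_items]

# An upstream detector (not shown) would tag food items per scene; the
# swap list then drives whatever compositing step replaces the asset.
print(localize_scene(["hot dog", "soda"], "jp"))  # ['onigiri', 'soda']
```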

Circling back to the BBC, we probed into whether any significant results had emerged from a BBC Four project last year, when the channel carried out two full days of programming scheduled entirely by AI algorithms developed to optimize the schedule according to the channel’s audience demographic. The idea behind the “high risk” R&D initiative was to help filmmakers and schedulers find hidden gems within the vast BBC archive, which houses millions of hours’ worth of programming. Computers trawled through more than 270,000 programs across the archive available in digital formats, and then ranked the top 150 most relevant factual titles by what the broadcaster has coined “BBC Four-ness”.

Although insightful, the ambitious test didn’t result in a roaring success, according to Quested, who admitted that AI is probably the wrong term to use here, with machine learning being the more applicable label given the way the data is fed in. In any case, the BBC’s R&D technologists devised four techniques – Object & Scene Recognition, Subtitle Analysis, Visual Energy, and a combination of the three.

“We tested each of these techniques by creating compilations of up to 15 minutes. It didn’t always go to plan. The machine didn’t always get it right, and sometimes got stuck in its own data loops. At one point it had a particular fascination with buses, another test run focused almost solely on clips from one documentary about the garden of an English country house, with barely a mention of the other 150 programs,” noted a BBC blog post sent to us by Quested.
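
The BBC has not published how the three signals were weighted, so strictly as a toy sketch, ranking by “BBC Four-ness” presumably reduces to blending per-technique scores and sorting the archive; the scores and weights below are invented:

```python
# Toy sketch only: per-technique scores and weights are invented, since
# the BBC has not published its actual "BBC Four-ness" formula.
def four_ness(title, weights=(0.4, 0.3, 0.3)):
    """Blend the three technique scores into one ranking score."""
    w_obj, w_sub, w_vis = weights
    return (w_obj * title["object_scene"]      # object & scene recognition
            + w_sub * title["subtitles"]       # subtitle analysis
            + w_vis * title["visual_energy"])  # visual energy

archive = [  # stand-in for the 270,000-program digital archive
    {"name": "Programme A", "object_scene": 0.8, "subtitles": 0.7, "visual_energy": 0.4},
    {"name": "Programme B", "object_scene": 0.6, "subtitles": 0.9, "visual_energy": 0.2},
]

# Rank the archive and keep the most relevant factual titles,
# as the BBC did for its top 150.
for title in sorted(archive, key=four_ness, reverse=True)[:150]:
    print(title["name"], round(four_ness(title), 2))
```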

Another point of contention is that with many broadcasters struggling to grasp the complex transition to IP, how are companies lacking the extensive resources of the BBC expected to keep pace in AI and machine learning? Quested argued that AI and machine learning algorithms should be treated as an entirely separate field from IP. “These algorithms are not a distribution system. They can be applied to email or finance, archive retrieval or data gathering, basically for looking at any business system,” he said, giving the example that if an AI is asked to find images of the Taj Mahal, not only would it locate thousands of items from archives, it would do so from every angle and in every weather condition, including, he jested, the Indian restaurant of the same name down the road.
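
Quested’s Taj Mahal example is, at heart, a description of similarity search over an image archive. The sketch below shows that idea with random vectors standing in for real image embeddings; it describes no actual BBC or ITU system:

```python
# Minimal sketch: random vectors stand in for real image embeddings;
# this is the retrieval idea itself, not any BBC or ITU system.
import numpy as np

rng = np.random.default_rng(0)
archive = [{"id": f"clip-{i}", "vec": rng.normal(size=512)} for i in range(10000)]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, top_k=20):
    """Return the archive items whose embeddings best match the query."""
    return sorted(archive, key=lambda item: cosine(query_vec, item["vec"]),
                  reverse=True)[:top_k]

# A "Taj Mahal" query embedding would surface the monument from every
# angle and weather condition, and, as Quested jokes, possibly the
# restaurant of the same name, since the match is purely by similarity.
hits = search(rng.normal(size=512))
print(len(hits), hits[0]["id"])
```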

So, next month the ITU-R and BBC plan to publish a follow-up report, which will hopefully expand on the topics discussed here and provide a little more conclusive color.
