
5 December 2019

Why is China’s AI content crackdown targeting VR?

The Great Firewall of China just got a little tighter. The Chinese government has slammed the door on the creation of fake news, specifically taking aim at AI-based videos. Far from being just another draconian internet content crackdown, this is a case where the rest of the world would do well to take a leaf out of China’s book – but it does raise questions over how the new legislation could spill over and hinder the wider video industry.

Comparisons could be drawn to Europe’s contentious Article 13 reform, which went down like a lead balloon with industry groups such as the European Broadcasting Union, despite its intention to control content scraping and therefore help extinguish fake news.

While there is plenty of emphasis on how simple AI-based techniques are driving the proliferation of so-called “deepfakes”, virtual reality (VR) content has also been outed as a surprising contributor to the fake news pandemic.

Deepfakes are widely used across social media to show highly lifelike videos of people – primarily politicians and celebrities – saying or doing things that are entirely fictional. The creation of deepfakes and other false information online, without explicit disclosure that the content was created with AI or VR technology, is now illegal under Chinese law, with the rules enforced by the Cyberspace Administration of China. The move comes a month after California criminalized the use of AI-enabled imagery or audio in political campaigns, becoming the first US state to do so.

Reports do not, however, point the finger at any specific VR platforms, headsets or software supporting the creation of VR content. This is rather mysterious, and the only reason we can think of for VR being targeted is that hyper-realistic content can be created relatively easily in VR environments. The technology has therefore been viewed as a weapon with dangerous potential in the spread of false information.

Similarly, people are reluctant to highlight the specific AI techniques being exploited in the fake news war – we suppose so as not to inspire further deepfake-makers and thereby contain the spread. We can identify one source in particular though – the non-profit research organization OpenAI, which received an indirect $1 billion investment from Microsoft not too long ago.

Ironically, OpenAI’s intention was to create a safe Artificial General Intelligence system that would not harm humans, yet it became best known for its GPT-2 text generator – widely labeled a fake news generator. Elon Musk, AWS and Peter Thiel were among the early backers of OpenAI, founded in 2015, although Musk has long since departed the board over concerns about the company’s direction.
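To illustrate just how low the barrier to entry is, here is a minimal sketch of prompting GPT-2 to continue a news-style headline – assuming the publicly released small checkpoint and the Hugging Face transformers library, neither of which the original reports specify:

```python
# Minimal sketch of machine-generated text, assuming the publicly
# released small GPT-2 checkpoint via Hugging Face's transformers
# library (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Seed the model with a news-style prompt; it invents a plausible
# continuation with no grounding in fact whatsoever.
prompt = "Officials confirmed on Thursday that"
output = generator(prompt, max_length=50, num_return_sequences=1)
print(output[0]["generated_text"])
```

A few seconds of compute per paragraph is all it takes – which is exactly why regulators are nervous.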

The South China Morning Post stated, “With the adoption of new technologies, such as deepfake, in online video and audio industries, there have been risks in using such content to disrupt social order and violate people’s interests, creating political risks and bringing negative impact to national security and social stability.”

Social media platforms must take responsibility for stemming the spread by fighting fire with fire – igniting an AI versus AI online content war. Twitter earlier this year acquired Fabula AI, a UK-based specialist in detecting and stamping out fake news online using machine learning algorithms. The start-up says its patented geometric deep learning algorithms are used to authenticate sources of information circulating on social media.
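Fabula’s patented algorithms have not been published, but geometric deep learning generally means neural networks that operate on graphs – here, plausibly the graph of accounts through which a story spreads. The sketch below is our generic illustration of that idea using the PyTorch Geometric library, not Fabula’s actual model:

```python
# Hypothetical sketch: classifying a story as real/fake from its
# propagation graph, in the spirit of geometric deep learning.
# Fabula AI's actual (patented) algorithms are not public.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

class PropagationClassifier(torch.nn.Module):
    def __init__(self, num_account_features: int, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(num_account_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.readout = torch.nn.Linear(hidden, 2)  # real vs fake

    def forward(self, data: Data) -> torch.Tensor:
        # Each node is an account that shared the story; edges are reshares.
        x = F.relu(self.conv1(data.x, data.edge_index))
        x = F.relu(self.conv2(x, data.edge_index))
        # Pool node embeddings into one vector per story, then classify.
        batch = torch.zeros(x.size(0), dtype=torch.long)
        return self.readout(global_mean_pool(x, batch))

# Toy propagation graph: 4 accounts, 3 reshare edges, 8 features each.
edge_index = torch.tensor([[0, 0, 1], [1, 2, 3]])
story = Data(x=torch.randn(4, 8), edge_index=edge_index)
logits = PropagationClassifier(num_account_features=8)(story)
print(logits.softmax(dim=-1))  # untrained, so roughly uniform
```

The appeal of the approach is that it looks at who spreads a story and how, rather than trying to fact-check the content itself.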

Facebook, probably the guiltiest party for enabling the spread of fake news, has pledged to develop in-house technology for detecting deepfakes, but has repeatedly refused to remove deepfakes from its properties, including one on Instagram defaming its own CEO Mark Zuckerberg.

But what about the network owners? Orange is one tier 1 operator taking preventative action, signing up for a blockchain-based certification program called Safe.press. Developed by French start-up Block Expert, Safe.press aims to stamp out fake news through a platform for mutual authentication. Orange said at the time, in March this year, that it had thereby become the first company globally to display a blockchain-secured digital trust label on its websites – allowing one-click verification of the authenticity of online content broadcast to the media in real time.
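Safe.press has not published its protocol, but the general pattern behind such trust labels is simple: the publisher signs a cryptographic hash of the content, a blockchain anchors that hash for tamper evidence, and the reader verifies the signature in one step. A minimal sketch of the signing and verification half, using Ed25519 keys from Python’s cryptography package – all details here are our assumptions, not Block Expert’s design:

```python
# Illustrative sketch only: Safe.press's actual protocol is not public.
# Pattern: publisher signs a hash of the content; anyone can verify it.
# A blockchain would additionally anchor the hash for tamper evidence.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Publisher side: sign the SHA-256 digest of the article body.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Press release: example article body ..."
signature = private_key.sign(hashlib.sha256(article).digest())

# Reader side: recompute the digest and check the signature.
# verify() raises InvalidSignature if content or signature was altered.
public_key.verify(signature, hashlib.sha256(article).digest())
print("content verified against publisher's key")
```

Any one-character change to the article body makes verification fail, which is what makes the “one-click” trust label possible.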

Even video technology vendors are bulking up on deepfake detection capabilities. Ad platform SpotX has integrated technology from brand safety partner zvelo, a US web software outfit, which uses a combination of static and heuristic analysis techniques, aggregation of third-party fake news feeds, and human-supervised machine learning – all processed by its AI Cloud Network.
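None of those layers is exotic on its own; the value is in combining them. A hypothetical sketch of how such signals might be fused – every name, rule and threshold below is illustrative, not zvelo’s actual product:

```python
# Hypothetical layered detector in the vein zvelo describes: static
# heuristics, third-party fake-news feed lookups, and an ML score.
from urllib.parse import urlparse

KNOWN_FAKE_DOMAINS = {"totally-real-news.example"}  # aggregated feeds

def heuristic_flags(headline: str) -> int:
    """Crude static checks on the headline text."""
    flags = 0
    if headline.isupper():
        flags += 1          # ALL-CAPS headline
    if headline.count("!") >= 2:
        flags += 1          # excessive punctuation
    return flags

def classify(url: str, headline: str, ml_score: float) -> str:
    """ml_score: probability of 'fake' from a trained model (0..1)."""
    if urlparse(url).netloc in KNOWN_FAKE_DOMAINS:
        return "fake (feed match)"
    # Combine signals; human reviewers would audit borderline cases.
    if ml_score > 0.8 or (ml_score > 0.5 and heuristic_flags(headline)):
        return "likely fake"
    return "likely legitimate"

print(classify("https://totally-real-news.example/a", "SHOCKING!!", 0.3))
```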

China’s new rules will go live on January 1, 2020, although the severity of punishments is still to be decided.