deepfakes and AI tricks call IoT Objectivity into question

‘Fake News’ has been a talking point for what feels like a very long time, but AI-based tools are now making headlines for creating ‘fake porn’ – with online platforms reacting as best they can to remove these ‘deepfakes’ from the web. As we increasingly experience and understand the world through digital media, these advances in machine-learning capabilities call into question the relevance of Objectivity (the philosophical concept) in the digital age.

‘deepfakes’ was the Reddit username of the person who created the namesake tool, which can map the faces of people onto the bodies of others – with mixed success. Of course, as many will know or could have guessed, the first community uses of the tool saw people create pornography of celebrities using publicly available images (often scraped from an Instagram account).

However, comedy videos were also pretty popular on Reddit, with Donald Trump and Nicolas Cage popular candidates for remapping. Some of the videos work better than others, but all are presented with the knowledge that the content is inherently fake – the absurdity or voyeurism is the novelty, and the viewer values the clip precisely because it depicts something that never happened.

And of course, the possible ramifications of these kinds of tools are huge. With improvements, they present a way to completely invent new realities for viewers. If you thought ‘fake news’ was bad when all it took was an obscure, shadowy blog, imagine the propaganda power of altered video – especially when the source footage is of poor enough quality to be easily manipulated. ‘Revenge porn’ is another area expected to see an influx of the practice.


Motherboard puts the sentiment nicely, although it’s worth stressing that your own research on this topic is probably going to land you in trouble with your IT department. These tools could allow political actors to completely reshape the media landscape, with alarming results. You may only need to spoof a few words of audio, or insert a face into a crowd for a few seconds of video – and the tools to do this have become unnervingly accessible.

The barrier to entry has been utterly demolished in the past couple of years, with consumer-grade hardware now able to churn out results that were previously only accessible to academics and commercial researchers – who are usually bound by ethical considerations, corporate responsibility, or liability. Now, these AI-based tools are available to any computer-literate kid with access to a PC and the inclination to use them – with a supportive hobbyist community on hand to help.

The likes of Reddit, Gfycat, and Twitter have all taken steps to remove the content from their platforms, but it persists. There’s no taking back the likes of FakeApp, a piece of software designed to be as user-friendly as possible. It’s out in the wild now – that’s sort of the beauty of open source. As Quartz puts it, “no matter what happens to the original deepfakes software, this is only the beginning.”

This is the latest in a string of exploits that demonstrate how far AI technologies have to go before they could be commercially viable. In November, Riot covered a paper from Kyushu University which found it was possible to utterly bamboozle an image-recognition AI by changing just a single pixel in its input image – something imperceptible to the human eye.
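
To make the idea concrete, here is a minimal Python sketch of a one-pixel attack. The Kyushu researchers used differential evolution against real convolutional networks; the version below substitutes a random search and a stand-in classify function purely to illustrate how little of the input needs to change – it is not the paper’s method.

```python
import numpy as np

# Toy stand-in for an image classifier: returns scores over 10 classes for a
# 32x32 RGB image. The real attack targets a trained CNN; the search is the point.
def classify(image: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    return rng.dirichlet(np.ones(10))

def one_pixel_attack(image: np.ndarray, true_label: int, attempts: int = 500):
    """Randomly search for a single-pixel change that flips the prediction.
    (The paper uses differential evolution; random search just shows the idea.)"""
    for _ in range(attempts):
        candidate = image.copy()
        x = np.random.randint(0, image.shape[0])
        y = np.random.randint(0, image.shape[1])
        candidate[x, y] = np.random.randint(0, 256, size=3)  # overwrite one pixel
        if classify(candidate).argmax() != true_label:
            return candidate, (x, y)
    return None, None

image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
label = int(classify(image).argmax())
adversarial, pixel = one_pixel_attack(image, label)
if adversarial is not None:
    print(f"Prediction flipped by changing only pixel {pixel}")
```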

Another paper, from Berkeley, showed how a speech-recognition system could be fooled by altering 0.1% of the source waveform – and, more worryingly, could be tricked into outputting a result of the attacker’s choosing. A human ear listening to the file would never hear the difference. As The Register points out, similar attacks have tricked a Google system into classifying a 3D-printed turtle as a gun, and Google researchers have used patterned stickers to make a system think a banana was a toaster.
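
The audio attack rests on the same principle: the perturbation is kept so small relative to the waveform that a listener cannot hear it. The sketch below shows only that amplitude constraint – in the Berkeley work the perturbation is optimised using gradients from the target speech-to-text model, which is assumed here rather than implemented.

```python
import numpy as np

def add_bounded_perturbation(waveform: np.ndarray, delta: np.ndarray, budget: float = 0.001):
    """Clip a candidate perturbation to roughly 0.1% of the signal's peak
    amplitude, then add it to the original waveform."""
    limit = budget * np.abs(waveform).max()
    return waveform + np.clip(delta, -limit, limit)

# Toy 1-second, 16 kHz tone. In the real attack, delta is optimised against the
# target speech model until the transcription becomes whatever text the attacker
# chose; here it is just random noise for illustration.
x = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 16000))
delta = np.random.randn(16000)
x_adv = add_bounded_perturbation(x, delta)
print(np.abs(x_adv - x).max() / np.abs(x).max())  # <= 0.001, effectively inaudible
```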

The problem with these kinds of spoofs is that they are almost impossible for a human to diagnose. A researcher could spend hours looking for the specific pixel or sound sample that was altered in an image or audio file, and never find it. Meanwhile, the AI-based system that was fed that data will continue to spit out wrong answers – which could very easily go unnoticed for long enough to cause severe damage if the tech makes it to a commercial deployment.

But one could add any number of redundancies to a system, in order to make sure that such mistakes are not damaging. However, at that point you are significantly increasing the complexity of the system – which isn’t so much of a problem in a data center environment, but could completely negate the viability of such an AI-based project in a network-edge deployment that is constrained by power supply, network bandwidth, or computational grunt.
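
As a rough illustration of what that redundancy costs, consider a simple majority-vote check across several independent models – a hypothetical sketch, not a description of any shipping system. Every extra model in the quorum multiplies the compute, memory, and power budget, which is exactly what an edge deployment lacks.

```python
from collections import Counter

def redundant_predict(models, sample, quorum=2):
    """Run the same input through several independent models and only accept a
    label when at least `quorum` of them agree; otherwise flag it for review.
    Each extra model multiplies compute, memory and power draw - the cost that
    makes this awkward at the network edge."""
    votes = Counter(model(sample) for model in models)
    label, count = votes.most_common(1)[0]
    return label if count >= quorum else None  # None = disagreement, escalate

# Hypothetical stand-in models; a real deployment would load distinct networks.
models = [lambda s: "cat", lambda s: "cat", lambda s: "toaster"]
print(redundant_predict(models, sample=b"image bytes"))  # -> "cat"
```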

Having an ML system check the outputs of another ML system that is in turn checking the output of another process might be a secure way of doing things, but it is not something that appears scalable for the IoT – at this stage, anyway. Sure, videos could be protected by some sort of cryptographic signature, to determine authenticity and therefore any deviation from the source – but that won’t stop conspiracy theorists, or the average Facebook user who simply likes or shares a post without conducting any further research.
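
A minimal sketch of the signing idea might hash a video file at publication and sign the digest with the publisher’s private key, so any later edit breaks verification. The example below assumes the third-party `cryptography` package and a placeholder filename, and ignores the harder problems of key distribution and platform re-encoding.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_video(path: str, private_key: Ed25519PrivateKey) -> bytes:
    """Hash the video file and sign the digest at publication time."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return private_key.sign(digest)

def verify_video(path: str, signature: bytes, public_key) -> bool:
    """Any re-encode, face swap or single-frame edit changes the hash,
    so the signature check fails."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
sig = sign_video("clip.mp4", key)                       # "clip.mp4" is a placeholder
print(verify_video("clip.mp4", sig, key.public_key()))  # True until the file changes
```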
