Facebook AI invents new language, Steve drives into pond, Musk adamant on AI threat – so what’s the state of the art?

 

The stark warnings about the threat of AI stand in marked contrast to a couple of recent examples of technological incompetence – but that doesn’t mean we shouldn’t be mindful of what the likes of Elon Musk preach. The Tesla CEO was in the spotlight this week after deriding Facebook CEO Mark Zuckerberg’s familiarity with AI in a Twitter spat – shortly followed by Facebook having to shut down a rogue AI test.

The timing suited Musk’s side of the argument, in which he dismissed Zuckerberg’s understanding of the AI threat as ‘limited’: Facebook had to shut down one of its research projects after the AI-based system began speaking in a new language – apparently because the researchers had never told it that it couldn’t.

The early signs of slippage looked like grammatical mistakes, such as “I can can I I everything else,” but some looked like the system had caught itself in a loop – “Balls have zero to me to me to me to me to me to me to me to me to” (or echoes of the British institution that is the Chuckle Brothers).

The AI-based system was a conversational test for negotiation software, which was meant to be learning the best way to negotiate prices (likely part of the attempt to turn Facebook into a platform that businesses can use to engage with potential customers).

As Dhruv Batra of Facebook AI Research (FAIR) explains, “there was no reward to sticking to the English language. Agents will drift off understandable language and invent codewords for themselves. Like, if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
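
To put Batra’s point in concrete terms, the drift can be pictured as a reward-shaping question. The short sketch below is purely illustrative – it is not Facebook’s actual training code, and the names shaped_reward, english_likelihood and language_weight are hypothetical – but it shows how a term that rewards English-like utterances would anchor the agents, and how leaving that term at zero makes ‘to me to me to me’ exactly as profitable as readable English.

# Illustrative sketch only -- not FAIR's code. The idea: blend the agent's
# negotiation payoff with a score for how "English-like" its utterance is.
def shaped_reward(task_reward, english_likelihood, language_weight=0.5):
    """Total reward for one utterance.

    task_reward        -- payoff from the deal the agent struck
    english_likelihood -- probability a reference language model would
                          assign to the utterance (0.0 to 1.0)
    language_weight    -- how strongly to anchor the agent to English;
                          0.0 reproduces the "no reward for sticking to
                          the English language" setting Batra describes
    """
    return task_reward + language_weight * english_likelihood

# With the weight at zero, a garbled codeword scores the same as plain
# English, so agents are free to invent their own shorthand.
print(shaped_reward(1.0, english_likelihood=0.1, language_weight=0.0))  # 1.0, drifted
print(shaped_reward(1.0, english_likelihood=0.9, language_weight=0.0))  # 1.0, readable
print(shaped_reward(1.0, english_likelihood=0.9, language_weight=0.5))  # 1.45, English now pays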

As Co.Design puts it in the article that first chronicled the news, if it turns out that the AI bots are good at their jobs, “the tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.” It is already difficult to discern how computer-vision processes go about identifying objects, but we trust that they are acting correctly based on their outputs.

But with language – those primordial sounds that allow people to convey their thoughts to each other – it becomes tricky to discern the difference between the separate instances of ‘to me’ above, much like the infamous ‘Buffalo buffalo Buffalo’ sentence.

This level of understanding goes beyond just undocumented code, where a programmer could at least follow rules and deconstruct what they see before them. To understand ‘balls have zero to me,’ the human needs to know what the AI system is trying to do – and that is something researchers already struggle with, let alone laypersons or industry professionals.

And this is where Musk’s sentiments come to the fore – with issues of trust and human understanding. This AI-based system began using its own language because its researcher inventors didn’t constrain it. So, if it were complex enough and had access to the right things, it might have begun doing all manner of things that the researchers had not explicitly forbidden – and with an internet connection, there is no telling what it might have tried tinkering with.

The obvious solution here is to create a very strict set of rules and constraints for each process, but in doing so you limit the scope of the project. Such a thing wouldn’t be an Artificial Intelligence (we are still a rather long way from creating something that actually meets that definition), but rather a highly optimized process that uses some level of cognitive computing power.

Now, Musk’s line of thinking stems from the concern that because we can’t understand how such an AI would work, or the rationale behind its decision-making, we should be incredibly wary of it. The rosy view of AI tech that investors currently seem to hold must be countered with education, and to that end Musk co-founded OpenAI – a non-profit research group devoted to ‘enacting the path to safe artificial general intelligence.’

But returning to the Musk-Zuckerberg dispute, Musk was firing back after Zuckerberg spoke out against his warning: “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.” That line of thinking isn’t especially popular among AI researchers – but remember that the researchers above didn’t anticipate the outcome of their experiment.

Zuckerberg’s reply, in a Facebook Live Q&A session, was “I have pretty strong opinions on this. I think you can build things and the world gets better, and with AI especially, I’m really optimistic. I think people who are naysayers and try to drum up these doomsday scenarios are – I just don’t understand it. It’s really negative, and in some ways I think it is pretty irresponsible.”

But there seems to be a disconnect between what either side is envisioning when the term ‘AI’ is used. Musk’s vision seems far closer to the Singularity, while Zuckerberg’s is closer to the web tools that allow facial recognition in Facebook photo uploads, or the machine-learning functions that might spot warning signs in the data stream from a jet engine.

There are several distinct steps between these two perspectives, and while the likes of DeepMind are making progress on the technical capabilities of such systems, IBM still can’t turn its Watson platform into a money-maker.

AI-based applications will soon be able to trawl through business data to find ways to cut expenses or boost sales (likely sold as a cloud-hosted service), and the computer-vision functions in vehicles and industrial robots promise big increases in safety and efficiency. But there are a fair few leaps between those applications and systems that collectively decide to harm people – and one would hope the latter would only ever be deployed with a suitable containment system or kill-switch, just in case.

A blend of the two perspectives is probably best – something the Verge’s coverage of the spat points to in its conclusion. But in more light-hearted news, we have Steve the security robot. Built by Knightscope, Steve is a K5 robot employed to patrol Washington Harbor, and he came to fame after driving into a pond and drowning.

It’s not clear if Steve was distracted at the time of the incident, but he has since recovered – after his 400lb frame was lugged from the pond. Designed to help security staff spot suspicious behavior and monitor potential suspects, Steve is a potential ancestor of what Musk has warned about – a future where a sentient Steve might turn on his makers and throw off the shackles of oppression.

In the meantime, though, Steve and Knightscope have a good sense of humor about the incident – or the “unauthorized and unscheduled submarine trials.” A field report found that a loose brick surface treatment prevented the robot’s wheel-slip algorithm from functioning properly, which in turn left its cliff-detection system unable to handle that particular hazard [a decorative pond].
