Welcome signs of reality check in AI industry emerge

The mood in the world of AI changed subtly during 2018, driven by a desire to avoid a hard landing from the hype that has lifted the field to stratospheric heights of over-expectation. This has not tempered enthusiasm so much as dampened some of the rhetoric associated with deep learning in particular, where the essential error has been to exaggerate the level of general intelligence actually achieved.

That echoed earlier mistakes dating right back to the dawn of AI, when progress on a very limited use case, confined to a small vocabulary, led to wild prognostications that machine translation would be solved within a few years. Such mindless over-excitement has caused interest and funding in the larger AI field to fizzle out several times over the last 60 years, and there is a sense that this must not happen again. While it seems unlikely history will repeat itself, there is still a danger of investment collapsing as large amounts of venture capital disappear amid the frenzy of start-ups.

There has been spectacular progress in some areas, especially vision and natural language processing, as well as in deriving insights from complex data sets, with great promise in medical diagnosis for example. But this progress has deluded practitioners who ought to know better into promoting the idea that we are on the threshold of a new industrial revolution more profound than the advent of the steam engine or electric power.

AI today is described in breathless terms as computer algorithms that use silicon incarnations of our organic brains to learn and reason about the world, producing superhuman intelligences that will rapidly make their creators obsolete. The reality is far more down to earth, because the phenomenal aspect of contemporary AI is not the algorithms themselves but the hardware they run on.

This has of course galvanized research into algorithms that make the most efficient use of that hardware, which in turn has percolated down into silicon design. AI and machine learning research therefore straddles both hardware and software, with much exciting work being done, but it is the increase in processing power and storage, coupled with the liberation of massive data sets, that has gone into the mix that is contemporary AI.

The other point to make is that the AI and machine learning algorithms we have are still rooted in statistical and probabilistic analysis, and while they could loosely be said to be inspired by the human brain, they are not modelled on it. They cannot be, because we lack a clear enough appreciation of exactly how the brain tackles complex problems, beyond some of the basic circuitry. Machine learning algorithms operate essentially by adjusting their parameters, or weights, in response to feedback, much like the statistical algorithms used in game theory for many years, only at a greater level of depth.
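That feedback loop can be illustrated with a deliberately minimal sketch: a single linear unit whose weights are nudged against the prediction error, the same basic mechanism that, stacked in many layers, underlies modern deep learning. This is an illustrative toy, not modelled on any particular library; the function and variable names are hypothetical.

```python
def train(samples, labels, lr=0.1, epochs=200):
    """Fit weights w and bias b for a linear predictor via gradient descent."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Prediction: a weighted sum of the inputs plus a bias.
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            error = pred - y  # the feedback signal
            # Nudge each weight slightly against the error gradient.
            w = [wi - lr * error * xi for wi, xi in zip(w, x)]
            b -= lr * error
    return w, b

# Toy data: learn the logical AND of two binary inputs.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
```

After training, the learned score for (1, 1) is well above the score for the other inputs, even though no rule for AND was ever codified by hand, which is the point of the contrast with expert systems drawn above.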

Of course, the brain could also be described as just a bunch of neurons and circuits, but in the human case it has evolved to a much deeper level of cognition, one that allows insights and knowledge to be transferred flexibly and fluidly between domains. By contrast machine learning, despite all the talk of unsupervised algorithms, is very much confined to individual domains. It is a machine tool rather than an intelligent entity, taking computers a step beyond the codification of rules in expert systems but still requiring human hands, or minds, to guide it. Anthropomorphizing these algorithms into intelligent and sentient silicon beings is very premature.

There was a sense of this institutional self-awareness at the recent AI World conference in Boston, where presentations veered between further breathless accounts of great advances and an almost sheepish admission that hype was exceeding accomplishment. There was also a reality check in the second annual AI Index, a partnership comprising, among others, Harvard, MIT, Stanford, the nonprofit OpenAI, and the Partnership on AI industry consortium, which collates and analyzes data on the state of the field. Its first report a year ago had been criticized for being too effusive and too US-centric, reflecting the origin of its contributors.

Both of these criticisms have been at least partly addressed in the second edition, which emphasized, for example, how funding is now exploding almost everywhere, not just in the US. Funding is becoming more evenly distributed across Europe and Asia, with China, Japan, and South Korea leading Eastern countries in AI research paper publication, university enrollment, and patent applications.

Europe, though, spearheaded by the UK, has forged ahead faster than is often acknowledged and has become the region publishing the largest number of AI papers. The continent accounted for 28% of all AI-related publications in 2018, just ahead of China on 25%, with North America well behind on 17%. The number of papers published does not equate to investment and implementation, but the index is clearly keen to redress the US-centricity of its first edition.

The rhetoric has also been reined back significantly, with the admission that the rate of progress in AI has itself become a hotly contested issue. The report continues to insist that AI is set to change the world but now concedes that how and when are very much open questions.

The second index goes further than the first in stressing that progress has so far been skewed towards certain fields, such as limited game-playing and vision, where extraordinary advances have been made. AI remains far behind in general intelligence tasks that would result in, say, total automation of more than a limited variety of jobs, the report admitted. For that reason, talk of AI decimating jobs is also premature.

The report nominates computer vision as the hottest field of research, because of its foundational status as the discipline helping to develop self-driving cars as well as usher in augmented reality and object recognition. No surprise there, but less expected was the suggestion that natural language processing research is receiving less attention and investment at present. This rather seems to ignore the extensive work on natural language processing at Google, Amazon, Apple, and Microsoft, among others.

While the AI Index may have erred on that point, its measured, reality-imbued tone is welcome and timely as AI applications begin to enter mission-critical environments, where an appreciation of their limitations in learning, adaptation, and robustness should help avoid over-hasty deployment of algorithms that are not fully tried and tested.