It was refreshing to hear Google Cloud’s chief scientist for AI, Fei-Fei Li, admit that the achievements of her field have been overblown, and that the greatest challenges remain to be solved. After all, Google, along with the other big names in the AI field such as Microsoft, Amazon, IBM and China’s Baidu, has been among the most guilty of stoking the hype for its own competitive reasons. But Li is definitely taking a softer line, emphasizing that the objective of AI is to enhance rather than replace human jobs, even if such sentiments may sound fatuous to those most immediately threatened.
It looks like Google has also been humbled by the furor over its role in the US Department of Defense’s Project Maven, designed to employ advanced algorithms in war zones. Li had warned in an email last year that any mention of AI in that context would be toxic and backfire badly on Google, which, as we now know, it did. But there was a sense that Google could not have it both ways, trumpeting how AI was entering every field while pretending it was not involved in military applications. The episode reflects not just how the field’s achievements have been overblown, but how the term “AI” itself is overused, often tagged onto algorithms that are just forms of statistical regression or inference.
However, Li, who was speaking at Google’s Cloud Next 2018 conference in San Francisco, was not so much dousing the flames of AI as resetting expectations. In fact, Google used the conference to intensify its campaign to win the battle for the full AI stack by entering the cloud edge computing market with a slimmed-down version of its Tensor Processing Unit (TPU). TPUs are ASICs (application-specific integrated circuits) designed by Google to accelerate machine learning workloads.
This is a significant move, coming two years after Google announced it was adding TPUs to its cloud platform to provide the computational power needed to train complex machine learning models based on deep neural networks. Now in its second version, the TPU delivers 180 teraflops of performance with 64 GB of high-bandwidth memory. But the main point is that it runs TensorFlow, Google Brain’s neural network framework.
The name TensorFlow derives from the operations such neural networks perform on their multidimensional data arrays, referred to as tensors. This lets them exploit the mathematics of tensors, which describes operations between such arrays and played a crucial role in Einstein’s development of general relativity, and more recently in signal processing, before being coopted for machine learning. TPUs score by outperforming general-purpose GPUs, which in turn are more efficient at machine learning tasks than most CPUs.
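To make the terminology concrete, here is a minimal NumPy sketch (illustrative only, not Google’s implementation): a “tensor” in the machine learning sense is simply an n-dimensional array, and the workhorse neural-network operation that TPUs accelerate is a contraction between such arrays, such as a batched matrix multiply.

```python
import numpy as np

# A "tensor" in the machine learning sense is just an n-dimensional array.
batch = np.random.rand(32, 64)    # rank-2 tensor: 32 input vectors of size 64
weights = np.random.rand(64, 10)  # rank-2 tensor: one layer's weight matrix

# The core neural-network operation is a tensor contraction: summing over a
# shared index. Here each 64-dimensional input is mapped to 10 outputs.
logits = np.einsum('bi,ij->bj', batch, weights)
print(logits.shape)  # (32, 10)

# Higher-rank tensors arise naturally, e.g. a batch of RGB images:
images = np.zeros((32, 224, 224, 3))  # rank-4 tensor
print(images.ndim)  # 4
```

The same contraction, repeated across many layers, dominates both training and inference, which is why hardware built around dense tensor arithmetic pays off so handsomely here.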
Alongside the slimmed-down Edge TPU, Google has announced Cloud IoT Edge, an associated edge computing platform that extends Google Cloud’s data processing and machine learning to the emerging Edge TPU devices. Target applications for the new edge platform include predictive maintenance, robotics, and machine vision.
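Edge accelerators like the Edge TPU target low-precision arithmetic, with models quantized to 8-bit integers before deployment. The sketch below shows the basic affine quantization idea in plain NumPy; it is illustrative only, and the `quantize`/`dequantize` helpers are hypothetical names, not part of any real toolchain (real pipelines such as TensorFlow Lite handle per-channel scales, zero points, and calibration).

```python
import numpy as np

def quantize(x, num_bits=8):
    """Affine quantization of a float array to unsigned integers.

    Hypothetical helper for illustration -- not the Edge TPU toolchain.
    """
    qmax = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax or 1.0  # avoid zero scale for constant inputs
    q = np.clip(np.round((x - lo) / scale), 0, qmax).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map the 8-bit codes back to approximate float values."""
    return q.astype(np.float32) * scale + lo

weights = np.random.randn(64, 10).astype(np.float32)
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)

# 8-bit storage quarters the memory of float32 at a small accuracy cost:
# each value is recovered to within one quantization step.
print(q.dtype, float(np.abs(weights - restored).max()) <= scale)
```

Shrinking weights and activations this way is what lets a pared-down ASIC run vision or predictive-maintenance models within the power and memory budget of an edge device.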
Google will now compete here with Amazon and Microsoft, which already have well-developed edge computing ranges. Amazon has extended its IoT platform through AWS Greengrass, while Microsoft recently announced general availability of its Azure IoT Edge. Google is therefore playing catch-up on the edge, although it is well placed, being able to draw on the proven success of TensorFlow and the huge scope of its AI program.
Google plans in October 2018 to release the Edge TPU as part of a development kit including a reference design for experimenting with the chip inside edge devices.