Google I/O: for the third year, AI-driven Android is a key theme

For the third year running, the combination of artificial intelligence (AI) and Android mobile devices was a big theme at Google’s annual developer event, Google I/O.

The company outlined how it plans to use deep learning in forthcoming smartphones, as well as its home displays and web services, in particular to reinvent the user interface around voice recognition rather than the touchscreen. This has started in virtual assistants, smart speakers and home hubs, but Google envisages all devices and interactions making use of voice – and in future, tactile and virtual reality interfaces. The degree to which each of the web giants can define and control the next generation of Internet and mobile experiences will shape their commercial success in the 5G era.

Google showed off a prototype handset which responded quickly to a combination of voice commands and dictation across different applications, while remaining aware of context. This code will be made available for Google’s Pixel smartphones later this year.

“We envision a paradigm shift to voice interfaces,” said Scott Huffman, VP of engineering for Google Assistant.

There was a far deeper focus on the privacy issues of AI than there has been in the past, as Google, Facebook and others come under intense scrutiny for how they handle users’ data in the AI environment. Google has “a deep sense of responsibility to create things that improve people’s lives … and benefit society as a whole,” said CEO Sundar Pichai in his keynote address.

Google may have made its fortune with software, but it has been increasingly engaged with hardware developments in recent years, not to compete head-on with Samsung or Huawei – the engines of Android sales – for market share, but to drive the most advanced designs and user experiences in order to move the market forward and keep ahead of Apple.

It announced two new Pixel models – the Pixel 3a, which starts at $399 with a 5.6-inch OLED display; and the 3a XL, with a 6-inch display and priced from $479. Both use Qualcomm’s 10nm Snapdragon 670 system-on-chip with a Category 11/12 LTE modem, and pair 12.2-megapixel rear cameras with 8-megapixel front cameras. These use AI to match the performance of more expensive cameraphones like the iPhone X, especially in poor light.

The company talked about future support for foldable handsets, which will be enabled in the next release of its operating system, Android Q. “Android Q is native with 5G. Twenty carriers will launch 5G services this year, and more than a dozen 5G Android phones will ship this year,” said Stephanie Cuthbertson, a senior Android director. “Multiple OEMs will launch foldables this year running Android.”

Other features of Android Q include easier access to privacy and location controls. The new OS is now available as a beta release on 21 devices from 12 OEMs.

Google is also active in creating integrated hardware/software platforms for the home and used I/O to show its latest products, and to rebrand its Nest smart devices (see separate item).

On the AI side, it is racing with Facebook and others to allow more AI-enabled applications to run on resource-constrained systems like smartphones and speakers, and to move a larger percentage of the processing to the device rather than the cloud.
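To give a concrete flavour of the on-device approach, the sketch below runs a hypothetical quantized classifier locally with TensorFlow Lite’s Python interpreter – the sort of runtime used for this class of resource-constrained inference. The model file name and the input frame are placeholders, not anything Google has shipped.

```python
# A minimal sketch of on-device inference with TensorFlow Lite.
# "mobile_classifier.tflite" is a hypothetical model exported for mobile use.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobile_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake a single input frame matching the model's expected shape.
frame = np.random.rand(*input_details[0]["shape"]).astype(np.float32)

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()  # all computation happens locally, with no cloud round trip
scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))
```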

At I/O it described a new tool, called Testing with Concept Activation Vectors (TCAV), which is designed to detect biases in the models that underpin AI systems. In a demonstration conducted by Pichai, the tool flagged an algorithm that unfairly favored men in a search for images of doctors.
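For illustration only, the following sketch reproduces the core idea behind TCAV – train a linear separator in activation space to obtain a concept direction, then count how often a class prediction moves with that direction. The activations, gradients and the “doctor”/“male” concepts here are random stand-ins, not Google’s models or data.

```python
# Illustrative TCAV-style scoring on synthetic activations and gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64  # assumed width of one hidden layer

# Layer activations for images showing the concept vs. random images.
concept_acts = rng.normal(0.5, 1.0, size=(200, dim))   # e.g. "male" concept images
random_acts = rng.normal(0.0, 1.0, size=(200, dim))

# The concept activation vector (CAV) is the normal of a linear separator.
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]),
    np.array([1] * 200 + [0] * 200),
)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Gradients of the "doctor" class score w.r.t. the same layer, one per image.
class_grads = rng.normal(0.1, 1.0, size=(500, dim))

# TCAV score: fraction of images whose prediction moves with the concept.
tcav_score = float(np.mean(class_grads @ cav > 0))
print(f"TCAV score for concept vs. class: {tcav_score:.2f}")
```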

Google, Facebook and others are also working to shrink the large data sets needed to train neural models, and so reduce the time and resources required to get a model to the stage where it is usable and trusted. The techniques involve hiding known parts of a data set, such as words in a transcribed audio text, and getting the model to predict the missing words. Google’s approach is called ‘Bidirectional Encoder Representations from Transformers’ (BERT), which it uses to understand the context of words in natural language tasks. Facebook claimed that its similar technique was able, in one scenario, to reduce the amount of transcribed audio needed to train a model from 12,000 hours to just 80 hours while reducing word error rates.
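The masked-prediction idea is easy to demonstrate with the open-source release of BERT – here via the Hugging Face transformers library rather than Google’s production systems: hide a word and ask the model to recover it from the surrounding context.

```python
# A small illustration of BERT-style masked prediction using the public
# "bert-base-uncased" checkpoint; the example sentence is invented.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model is asked to recover a deliberately hidden word from context.
for candidate in fill_mask("The patient was examined by the [MASK]."):
    print(f"{candidate['token_str']:>12}  {candidate['score']:.3f}")
```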

Such advances at Google are helping it to build on the foundations for mobile AI which it has been putting in place over the past two years and more. At last year’s I/O, the central theme of the launch of Android P was AI-driven enrichment of the user experience.

In particular, AI was harnessed to give users greater control over how they use their smartphones. For instance, the OS can learn user preferences and habits and then make sure the most-used applications and functions load more quickly, or that battery and screen performance is optimized to suit the customer’s behaviors.

Google collaborated with its own AI subsidiary DeepMind to produce Adaptive Battery and Adaptive Brightness, which are billed as a better approach to power management and battery life, ensuring that only the power needed for a particular task is used.
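As a purely illustrative sketch of the personalization idea, one could imagine fitting a tiny on-device model to a user’s own brightness corrections. The data and model below are invented for illustration and do not represent the DeepMind/Google algorithm.

```python
# Hypothetical on-device personalization: learn the user's preferred screen
# brightness from their past manual corrections at different light levels.
import numpy as np
from sklearn.linear_model import LinearRegression

# (ambient lux, user-chosen screen brightness %) pairs observed over time.
ambient_lux = np.array([[5], [40], [120], [400], [1200], [5000]], dtype=float)
chosen_brightness = np.array([8, 20, 35, 55, 75, 100], dtype=float)

# Fitting in log-lux space keeps the curve sensible across the huge lux range.
model = LinearRegression().fit(np.log10(ambient_lux + 1), chosen_brightness)

# Predict a starting brightness for a new lighting condition.
new_lux = np.log10(np.array([[250.0]]) + 1)
print(f"Suggested brightness: {model.predict(new_lux)[0]:.0f}%")
```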

And at the 2017 event, many of the developments which are now transforming the Google Android experience could already be glimpsed. In particular, we saw indications of how Google would try to redefine the way users interact with the Internet, so that even if the search box which underpins its success becomes outdated, the company remains in control of consumers’ experience. Much of the race to dominate that user interaction is focused on AI, and some of its offshoots like machine vision.

The rather crude first generation examples of the new web experience, like Apple Siri, Google Home and Amazon Alexa, are already evolving into far more sophisticated tools, harnessing machine learning to understand and predict the customer’s choices, together with virtual reality and computer vision to reinvent the user interface. At the heart of all that is Google’s Assistant software, which has developed into an interface which spans mobile, home and vehicle devices, and into a broad developer platform.

At I/O 2017, Google CEO Sundar Pichai said the company was being forced to reimagine how its products work for a world where people will interact in a “natural and seamless” way with technology.

He said: “It’s important to us to make these advances work better for everyone—not just for the users of Google products. We believe huge breakthroughs in complex social problems will be possible if scientists and engineers can have better, more powerful computing tools and research at their fingertips. But today, there are too many barriers to making this happen.”

To support the massive AI/ML workloads which lie behind these changes, Google has increasingly been populating its cloud with its own-designed TPU (Tensor Processing Unit) chips. Google is also providing the specialized ML processor via its Compute Engine Infrastructure as a Service (IaaS) system, meaning that customers will be able to spin up compute instances to use the TPUs in their applications.
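By way of example, the snippet below shows one way a Compute Engine customer might target a Cloud TPU from a later TensorFlow 2.x release; the TPU name “demo-tpu” and the toy model are assumptions for illustration, and the code requires a provisioned Cloud TPU to run.

```python
# A sketch of connecting TensorFlow 2.x training to a Cloud TPU instance.
import tensorflow as tf

# "demo-tpu" is a placeholder for the name of a TPU node the customer created.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="demo-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Build and compile the model inside the TPU strategy scope so its variables
# and training steps are placed on the TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```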

Google is integrating AI-based functions into many of its web services, used by both consumers and businesses. For instance, Google Lens, the visual search app, was first demonstrated in public at I/O 2017, as an image recognition function inside a smartphone camera application, using machine vision to identify types of flowers. This is now part of the Photos app and Google Assistant.
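Google has not published Lens’s internals, but the underlying machine-vision step can be approximated with an off-the-shelf ImageNet classifier, as in the sketch below; “flower.jpg” is a placeholder path for a local photo.

```python
# Not Google Lens itself: a rough stand-in for the image-recognition step,
# using a pretrained MobileNetV2 classifier on a local photo.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("flower.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(
    tf.keras.preprocessing.image.img_to_array(img), axis=0))

# Print the most likely labels, e.g. 'daisy' for a flower photo.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label:>15}  {score:.2f}")
```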

Geoff Blaber, VP, Americas at CCS Insight, said: “Google I/O 2017 saw Google taking clear measures to respond to competitor moves in artificial intelligence, the home, IoT and VR. Google’s scale puts it in a strong position but whilst the mobile OS battle is over, we’re only at the dawn of a broader war in artificial intelligence and the home.”
