Please note that Wireless Watch will take its annual spring break next week so
there will be a combined issue for the weeks commencing May 8 and May 15, to be
published on Monday May 15.
Voice-activated speakers and digital assistants may just seem like the modern, more annoying version of the old Microsoft paperclip, continually offering help and reminders where they are not needed. But while Apple Siri, Microsoft Cortana, Google Home/Now and Amazon Echo/Alexa may have a long way to go, we will look back on them as the advance guard in a crucial battle – to define and control the next generation of the mobile and web experience, on the long route to the Tactile Internet.
Voice-activated assistants – which use powerful AI (artificial intelligence) engines to deliver detailed, context-aware and personalized answers to users’ questions – are the way in which web giants hope to place themselves at the heart of a user’s whole range of activities, whether they are in their smart home, connected car, at work or on the phone. This, in turn, should drive usage, purchasing and new revenue streams, and provide stepping stones towards a future experience centered around virtual reality and advanced machine learning, in which the boundaries between the physical and digital world blur (at least in the view of some futurists).
The current crop of digital assistants is a very long way from delivering that kind of vision – even supposing it would be welcome to most internet users. But these assistants provide valuable pointers to how users will interact with the web and cloud, and therefore how service providers may be able to boost their revenues. With every shift to a new internet experience, however, comes risk for those which dominate the current one. Back in 2011, shortly after Apple launched Siri, Google’s then executive chairman Eric Schmidt testified to a US Senate committee that new approaches to search, such as voice activation and context-aware results, could threaten Google’s traditional model – a business founded on the search norms it made its own.
Many Google activities since then have been geared to staying ahead of the game in search and the broader web experience. These have included a string of AI-related acquisitions, the launch of Google Now, and updates to the search engine itself, such as 2013’s Hummingbird, which aimed to make the experience “more human” and more brain-like in how it processes queries.
But there is a big race going on here. Facebook, Baidu, Microsoft and Amazon have all invested heavily in machine learning and AI engines, which will drive future connected experiences for people, things and robots. There has been less progress, however, in the devices which will deliver these services naturally and intuitively. These started as digital assistant apps running on smartphones, led by Siri and followed by Cortana, Google Now and Samsung Bixby; now the intelligent, proactive interactions they promise have been extended to other hardware – notably home hubs incorporated into speakers like Amazon Echo or Google Home, and car dashboards. Even some of the mobile operators, most recently Orange, are piling into the digital assistant game.
But Amazon is currently setting the pace in divorcing the digital assistant from specific hardware and pushing it into every kind of device, so that it becomes as ubiquitous as the current web browsers and search boxes. Last week it extended its own home device range but, more importantly, teamed with chipmaker NXP (soon to be part of Qualcomm) on a reference design which would allow Alexa to be incorporated quickly and cheaply into all kinds of gadgets.
Amazon has come from behind to take its new position as the driving force in digital assistants. It originally designed Alexa to provide an AI-based differentiator for its Fire Phone, but the software appeared to have flopped along with the handset. However, necessity became the mother of invention and Alexa was revived as the engine behind the Echo speaker – which Amazon describes as “a Star Trek computer for your home”, and which has proved to be a surprise hit with consumers and has forced Apple and Google to respond with their own home offerings.
Now it has pushed back into the all-important mobile platform – where a rising percentage of internet usage takes place, and which is essential to making an experience the consumer default. It has partnered with Huawei to launch Alexa on that firm’s flagship smartphone, in a deal which echoes Amazon’s dual strategy for Kindle, which is available on its own ereaders and tablets, or as an app for other devices.
And the NXP alliance could accelerate the pace of getting Alexa into lower end devices and into new form factors. The chip company has created a hardware/software reference design using the same 7-microphone array as the Amazon Echo, and the same far-field audio processing, echo cancellation and beamforming. All this is combined with an NXP i.MX7 application processor (which is not used in the Echo itself), lower level stacks, and access to the Alexa Voice Service (AVS) in the cloud.
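The beamforming that the reference design relies on can be illustrated with a minimal delay-and-sum sketch – a drastic simplification of what the NXP/Amazon far-field pipeline actually does, using synthetic signals and hypothetical per-microphone delays, but it shows the core idea: align the channels on the wavefront, then average, so that uncorrelated noise partially cancels while the speech adds up coherently.

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Align each mic channel by its (integer) sample delay, then average.

    channels: 2-D array, shape (n_mics, n_samples)
    delays_samples: per-mic arrival delays, in samples
    """
    n_mics = channels.shape[0]
    out = np.zeros(channels.shape[1])
    for ch, d in zip(channels, delays_samples):
        out += np.roll(ch, -d)  # advance the channel so wavefronts line up
    return out / n_mics

# Simulate one source arriving at 3 mics with different delays, plus noise
rng = np.random.default_rng(0)
t = np.arange(1600)
signal = np.sin(2 * np.pi * t / 80.0)   # periodic, so np.roll wrap is harmless
delays = [0, 3, 7]                       # hypothetical geometry-derived delays
mics = np.stack([np.roll(signal, d) + 0.5 * rng.standard_normal(t.size)
                 for d in delays])

beamformed = delay_and_sum(mics, delays)

# Averaging N aligned channels cuts uncorrelated noise power roughly N-fold
err_beam = np.mean((beamformed - signal) ** 2)
err_single = np.mean((mics[0] - signal) ** 2)
print(f"single-mic MSE: {err_single:.3f}, beamformed MSE: {err_beam:.3f}")
```

In a real far-field device the delays are estimated continuously from the sound field rather than known in advance, and the processing runs per frequency band alongside echo cancellation, but the signal-to-noise gain comes from the same principle.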
“Following the introduction of standalone voice devices such as the Amazon Echo and the new standalone Google Assistant, what we’re seeing now is a rush by a wide range of device manufacturers to integrate voice into their end devices, from washing machines to smoke alarms to alarm clocks,” Robert Thompson, i.MX ecosystem manager at NXP, told EETimes. “This reference platform is designed to short cut a lot of these challenges and get the process moving a lot quicker … you can focus on usage models and what is relevant to the device you’re looking to integrate Alexa into.”
The i.MX7 can be exchanged for other NXP processors depending on the application – the i.MX6, or the upcoming i.MX8 if 3D graphics acceleration is required, for instance; or the i.MX6 UltraLite for basic, very low power devices. The reference design is available free from Amazon, rather than through NXP or its distribution channels.
Meanwhile, Amazon itself is reported to be about to launch an Echo with a built-in screen, hard on the heels of last week’s unveiling of the Echo Look, which has an integrated camera. This indicates a proliferation of different form factors within the Echo portfolio, geared towards different applications and user behaviors. There is an intense race with Google now, especially after the search giant said this month that its Home smart speaker could now recognize multiple users. That will put the pressure on Amazon to up its Echo game again – it is expected to integrate video chat and voice-controlled online shopping into the device this year.
Indeed, the Echo and the Amazon online store experiences are starting to mirror one another. The new Echo Look, for instance, features an app called Style Check, which uses
machine learning and advice from fashion experts to provide style recommendations, while users can compare outfits with friends using the new camera. This is similar to the Outfit Compare feature in the US Amazon shopping app. As with its various stores and apps for shopping and for consuming content, Amazon will tie everything together with common interfaces and the Prime subscription service, aiming to make its software part of every aspect of consumer life and purchasing.
An Echo with a screen (the project is supposedly codenamed Knight) would allow Alexa to respond to more complex queries – presenting a range of options in one view, for instance, rather than forcing users to listen to a whole list of spoken choices. That would enable Echo to emulate some of the functionality of the Alexa app on Amazon’s Fire tablets and Fire TV.
Amazon also has a significant opportunity with carmakers, which are looking for in-vehicle infotainment options which will differentiate their vehicles, but without surrendering too much power to Apple or Google. Siri and Google Now capabilities are already integrated into the companies’ respective in-car platforms, CarPlay and Android Auto, but many car companies are wary of working with Silicon Valley firms which aim to control the connected car experience and even design their own autonomous vehicles.
In the car, Amazon’s initial partner is Ford, which will integrate Alexa into its vehicles from this summer, allowing drivers to start their cars using their Amazon Echo, play Kindle audio books and order items on Amazon while travelling – as well as more generic search and navigation functions. Ford is working on adding Alexa home-to-car integration for vehicles with Sync Connect in the future. Steve Rabuchin, vice president for Amazon Alexa, said: “We believe voice is the future, and this is particularly true in cars.”
The mobile market is more challenging because the incumbents are so strong. Here, after the failure of its own smartphone, Amazon needs to rely on mobile users wanting to download the Alexa app rather than using the default on their handsets, such as Siri. That is tough, as third party browser makers know, but Amazon did achieve it with the Kindle software. Repeating that trick will rely on deals with Android smartphone makers which do not have their own assistants and want to reduce their reliance on Google – like Huawei, though Samsung is a lost opportunity now it has its own Bixby to counteract Google Now.
More importantly, success on the smartphone could be driven by uptake of Alexa in the home and elsewhere. Amazon needs to tie as many customer activities as possible into the Echo and in-car interfaces, with the Prime subscription system being its chief weapon, so that users take the trouble to switch to Alexa on their mobile devices, in order to have the same experience and preferences wherever they go.
Remote control of the smart home is Amazon’s big boast for mobile Alexa. As befits a company which failed in smartphones but has been a big hit in living rooms, it is centering its digital assistant assault on the home, looking to build out the end-to-end experience from there.
Amazon’s other strategy to chip away at the smartphone status quo is to make Alexa a more open platform than its rivals. Here, it is taking a big leaf out of Google’s and Apple’s smartphone books, promising an app store stuffed with services, many of them free, which can be controlled via Alexa’s voice interface. Anyone with Alexa-based hardware – the Echo, Echo Dot, Echo Look or Tap, or compliant smart household appliances or lights – can choose from the 7,000 apps and services which can be triggered by a voice command.
Amazon was smart to open up Alexa to third party developers at an early stage and will now be aiming to become the ‘Google Play’ of smart home apps, and more importantly, of the web services of the future, voice-activated and AI-enabled.
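What third party developers actually build for that store is a handler that maps a recognized spoken intent to a JSON response Alexa reads back. A minimal sketch in the AWS Lambda style used by many skills might look like the following – the skill's intent name and wording here are hypothetical, though the response envelope follows the Alexa Skills Kit request/response format:

```python
def handle_request(event):
    """Map a recognized voice intent to a spoken reply.

    `event` is the JSON request the Alexa service delivers to the skill;
    the return value is the JSON envelope Alexa turns back into speech.
    """
    intent = event.get("request", {}).get("intent", {}).get("name")
    if intent == "GetStyleTipIntent":   # hypothetical custom intent
        speech = "Try the blue jacket with the grey scarf."
    else:
        speech = "Sorry, I didn't catch that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Simulate the request Alexa would send after a matching utterance
sample_event = {"request": {"type": "IntentRequest",
                            "intent": {"name": "GetStyleTipIntent"}}}
reply = handle_request(sample_event)
print(reply["response"]["outputSpeech"]["text"])
```

The heavy lifting – wake word detection, speech recognition and intent matching – all happens in Amazon's cloud, which is what makes the barrier to entry for a third party "skill" so low.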
Essentially, Alexa is being positioned as the more open AI platform. Apple retains an iron grip over Siri, which is only available on its own hardware, with integrations through iOS APIs. Microsoft’s Cortana is in a similar position, though tied to the Windows 10 OS rather than to specific products. Android’s Google Now would appear to provide an open alternative, but developers are then tied to the Android platform and the associated Google licensing terms. The core components of Android are open source (AOSP), but the Google Mobile Services apps that form the core value offering on smartphones come with a bundled licence, which means a device maker cannot pick and choose – to preinstall one Google app, it must support the rest as standard on the device.
That gives Amazon an opportunity to push Alexa as an AI platform for Android without the Google bloatware. It has expanded AVS developer tools, which enable third parties to build hardware that houses the Alexa interface – allowing the device to act as the gateway to Amazon’s cloud and monetization strategy. Now the NXP deal has simplified that process even further with a chip-level reference design.
However, the fight is only getting harder for Amazon as Google and Apple take the digital assistant increasingly seriously, and other companies enter the game. In December, it was revealed that Microsoft was planning to bring its Cortana digital assistant to smart speakers, with the first device to be a third party speaker from Harman Kardon, due later this year. According to Windows Central, the firm may be designing its own speaker too, and it is planning to integrate Cortana into other smart home gadgets, and to announce a fully-fledged home hub later in 2017.
Samsung now has its own Bixby assistant, the product of Viv Labs, a company that Samsung bought in October. Bixby is initially targeted at high end Galaxy models, starting with the S8, and claims three main selling points – it doesn’t need to launch other applications to function; it is more contextually aware than rivals; and it understands natural language.
The eventual goal for Bixby is to be able to do everything on a phone that a user could do themselves, using their fingers. The system has been designed so that it can recognize when it does not have enough input from the user to carry out a task, so that it can request more data instead of failing at the task, as the other digital assistants tend to do. This contextual understanding is a core component, and is intended to allow total control of the phone in multi-step operations.
A Bixby software development kit (SDK) is on the way, which would allow application developers to incorporate the assistant into their own systems (both software and potentially hardware). In October, Samsung’s mobile division CTO, Injong Rhee, said that the Galaxy S8 would be a springboard for expanding Samsung’s AI technology into its other products – which include TVs and domestic appliances, as well as potentially its SmartThings smart home division.
Does the MNO have any role in this digital assistant race?
It is a tenet of the cellular market that mobile operators can differentiate their services from those of over-the-top providers by harnessing the unique knowledge they have about their subscribers’ tastes, preferences and behavior. Few, however, have proved that point with any major successes in areas dominated by web providers, such as messaging or media. So could combining the MNO’s knowledge of its subscriber, and ability to control the quality and resources of its network, really give it an edge this time, enhanced by AI?
Telefonica and Orange seem to think so. The former showed off its own digital assistant, Aura, at this year’s Mobile World Congress. The product of a secretive two-year R&D project, with significant input from Microsoft, Aura is similar to Apple Siri or Amazon Alexa, but is tied specifically into the Telefonica network. Users can check details of their Telefonica services and request help or upgrades, using a voice interface on their smartphone. The Spanish telco calls Aura a “cognitive intelligence system” rather than a digital assistant and claims three advantages – the control of a natural language interface; full “transparency” about Telefonica fees and services; and a personalized way to discover new service options.
This is limited stuff so far, however, since it is primarily a way to interact with Telefonica and its services, and so would be additional – and probably peripheral – to more all-encompassing apps.
Orange is more ambitious with its new Djingo assistant, unveiled last week at the Orange Hello event in Paris. This will be available early next year in France, with other Orange markets to follow. By that time, of course, Amazon, Google and others are likely to have pushed the boundaries of the AI assistant still further, but Djingo and Aura could herald a wave of operator-driven launches. Orange revealed that the technology behind its assistant has largely come from a strategic partnership with Deutsche Telekom, which included cooperation on the intelligence engine, and others could join the party too. “It is open to others on the condition they respect the spirit of the partnership, which is easier with two than many,” said Orange’s European deputy CEO, Gervais Pellissier.
The voice recognition technology behind Djingo comes from Nuance Communications, which has also provided voice activation for Orange IPTV. The telco will provide a home speaker, a smartphone app and a TV remote control, all including the new AI capabilities. Orange also plans a specific digital assistant for its upcoming banking service, powered by IBM Watson.