Amazon Web Services (AWS) used its re:Invent conference as a platform to launch five new machine learning services, and unveil a partnership with Intel to create AWS DeepLens – a wireless video camera with machine learning features.
This comes a week after AWS threw its weight behind the Open Neural Network Exchange, an open source AI project that brings it closer to Facebook and Microsoft.
The event also saw the cloud giant announce the latest element in an increasingly multi-faceted alliance with AT&T: an LTE-M button somewhat like the Dash buttons AWS’s parent firm sells to consumers – but this one is for enterprises, which can order supplies or issue service requests with a single click.
On the machine learning (ML) front, the Intel partnership has resulted in what we believe is the first piece of AWS-branded hardware – a camera running on an unspecified Intel Atom processor with Gen9 graphics, 8GB of RAM and 16GB of storage. Running the Ubuntu 16.04 Linux operating system, the 4-megapixel camera supports 1080p HD video, and can connect by WiFi, USB or Micro HDMI.
Small enough to carry around in one hand, DeepLens has over 100 gigaflops of compute power on the device itself, according to AWS – enough to allow for deep learning predictions on HD video in real time.
But the gigaflops number is not the most important criterion for judging DeepLens. Its integration into the AWS platform will be what makes or breaks the product, since its whole purpose is to make it easier for developers to create machine vision applications – part of the broader AWS mission to make its services accessible to all kinds of businesses and developers.
At $250, DeepLens is cheap enough for developers to buy a couple to tinker with, but it’s a long way from being as accessible as the Raspberry Pi. With pre-built and pre-trained models, it should remove some of the installation headaches. AWS says that even those with no ML experience will be able to run their first ML projects in under 10 minutes, with examples including license plate recognition to trigger remote garage door unlocking.
The business model question that lingers is how many of these developer projects turn into an AWS bill large enough to justify the initial investment, but AWS seems more than capable of affording that outlay. It’s a platform that just keeps growing, with Amazon adding new features to it at great pace.
The latest include Amazon SageMaker, a managed service for building, training, deploying and then managing ML models, which will be tightly interwoven with DeepLens. SageMaker promises to let developers pull data stored in Amazon S3 and transform it using popular ML libraries – with native integrations for TensorFlow and Apache MXNet.
The other four new ML tools are Amazon Transcribe, which converts speech to text; Amazon Translate, which translates text between languages; Amazon Comprehend, for understanding natural language (and all of its rule-breaking); and Amazon Rekognition Video, a video analysis tool for both batch processing and real-time machine vision.
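The pattern these services share is a simple request/response API that developers can chain into pipelines – audio becomes a transcript, the transcript is translated, the translation is analyzed. The sketch below illustrates that composition with pure-Python stand-ins; the function names, toy lookup table and return values are all invented for illustration, not the real AWS APIs, which would be reached through an SDK client:

```python
# Illustrative stand-ins only, not the AWS APIs. In practice each step would
# be a call to the relevant service via an AWS SDK client.

def transcribe(audio: bytes) -> str:
    """Stand-in for a speech-to-text call (Amazon Transcribe's role)."""
    # Pretend the audio decodes to this phrase.
    return "hello world"

def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a text-translation call (Amazon Translate's role)."""
    # A toy lookup table in place of a real translation model.
    table = {("en", "es"): {"hello world": "hola mundo"}}
    return table[(source, target)][text]

def detect_sentiment(text: str) -> str:
    """Stand-in for a natural-language call (Amazon Comprehend's role)."""
    return "NEUTRAL" if text else "UNKNOWN"

# Chaining the services: audio -> transcript -> translation -> analysis.
transcript = transcribe(b"\x00\x01")
translated = translate(transcript, source="en", target="es")
sentiment = detect_sentiment(translated)
print(transcript, "->", translated, "->", sentiment)
```

The point of the sketch is the shape of the workflow, not the internals: each service does one job, and the developer's application is mostly glue between them.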
Besides the swathe of new AWS products and partner integrations, Amazon also announced that it was joining the Open Neural Network Exchange (ONNX) project, which was founded recently by Facebook and Microsoft. AWS’s Python-based ONNX-MXNet package is now available to developers, letting ONNX-format models run on MXNet – a deep learning framework that can be accessed by API calls from a number of programming languages.
Launched back in September, the ONNX project aims to provide a shared model representation for interoperability between AI frameworks, so that AI developers can more easily work across multiple frameworks. Microsoft’s Cognitive Toolkit and Facebook’s Caffe2 supported the format at launch, with PyTorch as the third addition. Now AWS’s ONNX-MXNet joins the list, specifically to let developers run the ONNX format on the Apache MXNet framework – bringing the total to four supported frameworks.
The frameworks enable a developer to build and run the computation graphs that represent a neural network. ONNX aims to let a developer move more easily between frameworks and so match the framework to the task at hand – ONNX notes that one framework might be optimized for mobile devices, and another for low-latency clouds.
In theory, using the ONNX representations should allow for quicker transitions between approaches for developers, and for hardware vendors, it should provide a simplified checklist of features to cater for – which might lead to more cost-effective designs.
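The interchange idea can be shown in miniature: if a model is described as a data structure rather than as framework-specific code, any runtime that understands the format can load and execute it. The graph layout, operator names and JSON encoding below are invented for illustration – real ONNX uses protocol buffers and a far richer operator set:

```python
# A minimal sketch of a shared model representation: a computation graph
# described as plain data, which any "framework" that understands the
# format can load and run. Invented format, for illustration only.
import json

# A tiny graph computing y = relu(x * w + b), described as data, not code.
graph = {
    "inputs": ["x"],
    "params": {"w": 2.0, "b": -1.0},
    "nodes": [
        {"op": "mul", "args": ["x", "w"], "out": "t1"},
        {"op": "add", "args": ["t1", "b"], "out": "t2"},
        {"op": "relu", "args": ["t2"], "out": "y"},
    ],
    "output": "y",
}

# Each runtime supplies its own kernels for the shared operator set.
OPS = {
    "mul": lambda a, b: a * b,
    "add": lambda a, b: a + b,
    "relu": lambda a: max(0.0, a),
}

def run(graph, **inputs):
    """Execute the graph node by node, as any conforming runtime could."""
    env = dict(graph["params"], **inputs)
    for node in graph["nodes"]:
        env[node["out"]] = OPS[node["op"]](*(env[a] for a in node["args"]))
    return env[graph["output"]]

# The portable part: serialize in one framework, reload and run in another.
wire_format = json.dumps(graph)
reloaded = json.loads(wire_format)
print(run(reloaded, x=3.0))  # relu(3 * 2 - 1) = 5.0
```

A mobile-optimized runtime and a cloud runtime would implement the same operator set differently, which is exactly the flexibility – and the simplified hardware checklist – the paragraph above describes.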
AWS and Microsoft have already worked on a project called Gluon. Announced back in October, Gluon is an open source deep learning interface that the pair are aiming at developers who need to build new models. Collectively, more demand for AI compute resources will be good news for the top-two cloud computing providers, which have provided the open source tools that they hope will be used to create demand for their own services.
By contrast with all this neural networking, AWS’s latest deal with AT&T looks rather mundane, but it is part of a growing partnership that could be a blueprint for how telcos and cloud giants will work together in future, carving up the networked cloud services value chain between them rather than battling head-to-head.
Earlier this year, the two companies announced a landmark deal to cooperate in offering enterprise telecoms services, agreeing to integrate their respective networking and cloud capabilities. This goes well beyond existing work to connect devices to the cloud, and optimize those links – they will also cooperate on preconfiguring sensors and devices for efficiency in the Internet of Things (IoT), and working on overall platform security and threat management.
This seems to show AT&T acknowledging a reality most carriers will also have to accept – that they are not in a position to compete with AWS or IBM directly in offering cloud services. But they have highly valuable expertise in device connectivity and management, and in provisioning and monetizing large numbers of gadgets and consumers. So alliances like this one are sure to proliferate, though some telcos will be more successful than others in avoiding a bitpipe role in the cloud, and securing a significant role in the value chain when they join forces with Amazon, Microsoft or vertical market platforms like GE’s Industrial Internet Initiative (in which AT&T is also the primary carrier member).
In that context, the LTE-M Button, as it is branded, is an example of the kind of joint activity which AT&T and AWS hope will enable them to sew up significant shares of the enterprise IoT market. The button runs on AT&T’s new national LTE-M network for low power, machine-to-machine connectivity. The device is pre-configured to make deployment very simple, and it will launch in the first quarter of 2018 at an introductory price of $30 per button for the first 5,000; after that, the price will be in the “mid-$30s”, said AT&T. The battery should last for about three years, and the price includes data usage for the device’s whole life.
The Button is supported by AWS’s 1-Click. This is deliberately named after Amazon’s easy payments option for its ecommerce customers, and is a new initiative aiming to get customers using AWS with as little difficulty and configuration as the original 1-Click service.
AT&T is positioning LTE-M as superior to similar WiFi-connected devices because the button will be simpler to deploy and run, will provide more consistent coverage even in an enterprise’s remote locations, and could save a business from having to send people out to respond to a service call or request.