AWS continues IoT re:Invention, tags Intel for ML camera, joins ONNX

Amazon Web Services has used its re:Invent conference as a platform to launch five new AWS machine-learning services, and unveil a partnership with Intel to create AWS DeepLens – a wireless video camera with machine-learning features. It comes a week after AWS threw its weight behind the Open Neural Network Exchange, an open source AI project that brings it closer to Facebook and Microsoft.

The Intel partnership has resulted in what we believe is the first piece of AWS-branded hardware – a camera that looks a bit like someone has stuck a webcam on top of the iconic MacBook charger. Housing an unspecified Intel Atom CPU with Gen9 graphics, 8GB of RAM, 16GB of storage, and running Ubuntu 16.04, the 4MP camera provides 1080p HD video, and can connect by WiFi, USB, or Micro HDMI.

Small enough to carry in one hand, DeepLens offers over 100 gigaflops (GFLOPS) of compute power on the device itself, according to AWS – enough to run deep-learning predictions on HD video in real time. For context, that's not a huge amount of processing power, but that's not surprising in this form factor. The Cray-2 supercomputer managed 1.9 GFLOPS of peak performance back in 1985, while today's consumer GPUs are easily capable of teraflop performance (1,000 gigaflops) – with the most powerful peaking at over twenty teraflops.

But the FLOPS figure is not the most important criterion for judging DeepLens' potential. Instead, the integrations with the AWS platform will be what makes or breaks DeepLens – as its whole purpose is to make it easier for developers to create machine-vision applications.

At $250, it's cheap enough for developers to buy a couple to tinker with, but it's a long way from being as accessible as the Raspberry Pi. With pre-built and pre-trained models, it should remove some of the installation headaches, and AWS says that even developers with no ML experience will be able to run their first ML project in less than ten minutes – with example applications including recognizing a license plate to unlock a garage door remotely.

The business model question that lingers is how many of these developer projects turn into an AWS bill large enough to justify the initial investment, but AWS seems more than capable of affording that outlay. It’s a platform that just keeps growing, with Amazon adding new features to it at great pace.

The latest batch includes Amazon SageMaker, a managed service for building, training, deploying, and then managing machine-learning models, which is envisioned as being tightly interwoven with DeepLens. SageMaker promises to let developers pull data stored in AWS S3 and transform it using popular machine-learning libraries – with native integrations for TensorFlow and Apache MXNet.
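As a rough sketch of that workflow – with the training script, S3 bucket, IAM role, and instance types stood in as placeholders, and with the caveat that SDK argument names vary between versions – training and deploying an MXNet model via the SageMaker Python SDK looks something like this:

```python
# Hypothetical SageMaker training job: the script name, S3 path, and IAM role
# are placeholders, and argument names may differ between SDK versions.
from sagemaker.mxnet import MXNet

estimator = MXNet(
    entry_point="train.py",            # your training script (assumed)
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    train_instance_count=1,
    train_instance_type="ml.p2.xlarge",
    hyperparameters={"epochs": 10, "learning_rate": 0.01},
)

# Point the job at training data already sitting in S3.
estimator.fit("s3://my-training-bucket/plates/")

# Deploy the trained model behind a managed inference endpoint.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m4.xlarge")
```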

The other four new ML tools do pretty much what their names suggest. Amazon Transcribe converts speech to text, Amazon Translate converts text between languages, Amazon Comprehend is meant for understanding natural language (and all of its rule-breaking), and Amazon Rekognition Video is a video analysis tool for both batch processing and real-time machine vision.
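To give a sense of how these services are called, here is a minimal sketch using the boto3 SDK; the region and example text are illustrative rather than definitive:

```python
import boto3

# Translate a sentence from English to Spanish.
translate = boto3.client("translate", region_name="us-east-1")
result = translate.translate_text(
    Text="The garage door is now open.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])

# Ask Comprehend for the sentiment of a piece of text.
comprehend = boto3.client("comprehend", region_name="us-east-1")
sentiment = comprehend.detect_sentiment(
    Text="DeepLens set-up took less than ten minutes.",
    LanguageCode="en",
)
print(sentiment["Sentiment"], sentiment["SentimentScore"])
```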

Besides the swathe of new AWS products and partner integrations, Amazon also announced that it was joining the Open Neural Network Exchange (ONNX) project, which was founded recently by Facebook and Microsoft. AWS' Python-based ONNX-MXNet package is now available to developers, letting models in the ONNX format run on Apache MXNet – a deep-learning framework that can be accessed via API calls from a number of programming languages.

Launched back in September, the ONNX project aims to provide a shared model representation for interoperability between AI frameworks, so that AI developers can more easily work across multiple frameworks. Microsoft added ONNX support to its Cognitive Toolkit and Facebook did the same with Caffe2 as part of the ONNX launch, with PyTorch as the third addition. Now AWS' ONNX-MXNet joins the list, specifically letting developers run the ONNX format on the Apache MXNet framework – bringing the total to four supported frameworks.
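Based on AWS' announcement, the package exposes an import helper along the following lines. This is a sketch only – the model file, input name, and shapes are placeholders, and the module layout may differ between releases:

```python
# Sketch of loading an ONNX model into MXNet via the ONNX-MXNet package.
# Model file, input name, and shapes are placeholders.
import mxnet as mx
import onnx_mxnet

# Convert the ONNX graph into an MXNet symbol plus trained parameters.
sym, params = onnx_mxnet.import_model("super_resolution.onnx")

# Bind the symbol into an MXNet module ready for inference.
mod = mx.mod.Module(symbol=sym, data_names=["input_0"], label_names=None)
mod.bind(for_training=False, data_shapes=[("input_0", (1, 1, 224, 224))])
mod.set_params(arg_params=params, aux_params=params, allow_missing=True)
```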

The frameworks let a developer build and run the computation graphs that represent a neural network. ONNX aims to make it easier to move those networks between frameworks, so developers can match the framework to the task at hand – with ONNX saying that one framework might be optimized for mobile devices, and another for low-latency clouds.
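To make 'computation graph' concrete, ONNX's Python helpers can assemble a trivial one by hand – the single-ReLU graph below is purely illustrative:

```python
# A minimal, hand-built ONNX computation graph: one ReLU node from input
# 'x' to output 'y'. Purely illustrative of the shared model format.
import onnx
from onnx import helper, TensorProto

node = helper.make_node("Relu", inputs=["x"], outputs=["y"])

graph = helper.make_graph(
    nodes=[node],
    name="tiny-relu",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
)

model = helper.make_model(graph)
onnx.checker.check_model(model)     # validate the graph structure
onnx.save(model, "tiny_relu.onnx")  # this file can now be loaded by any ONNX-aware framework
```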

In theory, the ONNX representation should allow developers to switch between approaches more quickly, and give hardware vendors a simplified checklist of features to cater for – which might lead to more cost-effective designs.

An ONNX graph comprises a series of nodes and the connections between them – the structure that gives rise to the term 'neural network.' Those connections, or synapses, carry weights that strengthen over time as more 'correct' answers are discovered. The flow of data through a neural network – from the inputs, through the nodes and synapses, to the outputs – requires a lot of testing and tweaking, working backwards from the outputs to refine the weights until an input gives the desired output. That process is called backpropagation.
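As a toy illustration of that loop, the NumPy sketch below trains a single linear 'neuron' by computing the output error and nudging the weights against the gradient – the data and learning rate are invented for the example:

```python
# Toy backpropagation on a single linear neuron, y = w*x + b.
# Data and learning rate are made up purely for illustration.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y_true = np.array([1.0, 3.0, 5.0, 7.0])   # target relationship: y = 2x + 1

w, b, lr = 0.0, 0.0, 0.1
for step in range(200):
    y_pred = w * x + b                    # forward pass
    error = y_pred - y_true
    grad_w = 2 * np.mean(error * x)       # backward pass: gradients of the squared error
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w                      # update the weights against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))           # approaches w=2.0, b=1.0
```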

AWS and Microsoft have already collaborated on a project called Gluon. Announced back in October, Gluon is a new open-source deep-learning interface that the pair are aiming at developers who need to build new models – a flavor of which is sketched below. Collectively, more demand for AI compute resources will be good news for the top-two cloud computing providers, which have provided the open-source tools that they hope will create demand for their own services.
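A minimal, illustrative Gluon model definition – layer sizes and input shape invented for the example – might look like this:

```python
# Minimal Gluon model definition, purely illustrative.
import mxnet as mx
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(64, activation="relu"))   # hidden layer
net.add(nn.Dense(10))                      # output layer (e.g. 10 classes)
net.initialize(mx.init.Xavier())

# Run a dummy batch of eight 784-dimensional inputs through the network.
output = net(mx.nd.random.uniform(shape=(8, 784)))
print(output.shape)   # (8, 10)
```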
