Nvidia chases edge with AWS, rival AMD wins huge Google gaming contract

Nvidia is now refocusing on expansion markets in the wake of the cryptocurrency crash that damaged its share price, after demand for its graphics cards was gutted almost overnight. The automotive market is being pursued with vigor, with Nvidia shifting to providing simulation services to automakers looking to test designs. The other major expansion opportunity lies at the network edge, where a new developer kit and a partnership with AWS aim to secure more GPU shipments in edge deployments.

To be clear, Nvidia is still very much focused on data center shipments, with its GPUs looking to power the bulk of new AI and ML workloads on servers crammed with its Tesla cards and DGX systems. However, it has been looking to secure volume demand for its much lower-power Jetson modules, which are far better suited to devices running outside the data center that still require a significant amount of compute. This push is especially timely, as GPU rival AMD has just secured what looks to be a very large deal to supply Google with the custom GPUs that will power Google’s new Stadia game-streaming service.

The new addition to this low-power portfolio is the Jetson Nano, a $129 module that is apparently production-ready, with a $99 developer kit also available to drum up enthusiasm for the modules. Nvidia claims the Nano can deliver 472 GFLOPS of compute performance with a power draw as low as 5W, although it isn’t clear what the power draw is when it is hitting that peak performance figure.

Nvidia says that the Jetson Nano supports all popular machine learning models, which should give it broad appeal in the developer community, as well as high-resolution sensors and the large bandwidth requirements that come with them. It also says that multiple neural networks can be run on each sensor stream, although we will listen out for developer chatter on how feasible this is in practice.
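
As an illustration of that claim, below is a minimal sketch of running two independent models against a single camera stream. It assumes generic ONNX model files and the onnxruntime and OpenCV Python packages; the model names and input shape are placeholders, not anything Nvidia ships.

```python
# Sketch: two neural networks sharing one sensor (camera) stream.
import cv2
import numpy as np
import onnxruntime as ort

# Hypothetical models, stand-ins for whatever a developer deploys.
detector = ort.InferenceSession("object_detector.onnx")
classifier = ort.InferenceSession("scene_classifier.onnx")

cap = cv2.VideoCapture(0)  # a single camera stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Minimal preprocessing: resize, scale, and convert to NCHW float32.
    blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    blob = np.transpose(blob, (2, 0, 1))[np.newaxis, :]

    # Both networks run against the same frame.
    detections = detector.run(None, {detector.get_inputs()[0].name: blob})
    scene = classifier.run(None, {classifier.get_inputs()[0].name: blob})
    # ...act on the combined results locally...
cap.release()
```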

“Jetson Nano makes AI more accessible to everyone — and is supported by the same underlying architecture and software that powers our nation’s supercomputers,” said Deepu Talla, vice president and general manager of Autonomous Machines at Nvidia. “Bringing AI to the maker movement opens up a whole new world of innovation, inspiring people to create the next big thing.”

The Nano seems to do a decent job of rounding out the Jetson family, all of which run the shared JetPack SDK. Prior to the Nano, the Jetson AGX Xavier was launched, aimed at powerful machinery with its 32 TOPS of compute, in 10W, 15W, and 30W module configurations. The module costs $1,400, with the developer kit a smidge cheaper. It uses 512 Volta-architecture GPU cores, with additional Tensor cores.

The Jetson TX2 is a step down from the Xavier, aimed more at devices like drones and machine-vision cameras than at heavy machinery. There are three variants of the TX2, targeting different price points with different memory sizes, and the TX2i has a lower-power package too. The regular module has a 7.5W power draw and costs $480, with the developer kit at $600. It has 256 Pascal-architecture CUDA cores.

So the Nano is aimed at devices like home robots, intelligent gateways with analytics capabilities, and video recorders that might want to make use of object recognition AI tools. Built around a quad-core Arm Cortex-A57, it has 128 Maxwell-architecture CUDA cores.

Notably, Nvidia is using the much older Maxwell architecture here, built on a 28nm process and first launched in February 2014. Its successor, Pascal, is used in the TX2, and was launched in April 2016 on 14nm and 16nm processes. The AGX Xavier uses Volta, a 12nm design launched in December 2017. None of the Jetson family yet uses the newest Turing architecture, which was released in September 2018.

So then, is the Nano a way to wring some cash out of leftover Maxwell chips that weren’t used in the GPU cards? We’ve reached out to Nvidia and are awaiting an answer on that, but it does make a lot of sense to use the older and cheaper process if the main goal is to deliver a low-cost module that can garner wide developer support.

The other big piece of news to come from Nvidia’s GTC conference was that Amazon’s AWS wing is now integrating its AWS IoT Greengrass platform with the Jetson family. Greengrass was launched at the end of 2016, and is essentially a way to package AWS IoT and Lambda functions that normally run in the cloud so that they can run at the network edge.

Greengrass uses the same programming model as AWS in the cloud, and is being pushed as a way to provide local and cloud storage, as well as the messaging and application platform needed for IoT devices that have internet connectivity constraints. Greengrass received a plethora of updates at the end of 2018, better connecting it to third-party applications, and adding new hardware security functions to make it easier for customers to deploy devices securely out in the field.
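
For flavor, here is a minimal sketch of what a Greengrass (V1) Lambda function running on an edge device can look like, using the Greengrass Core Python SDK. The topic name and payload fields are illustrative assumptions, not part of the Nvidia integration.

```python
# Sketch: a Greengrass Lambda function that processes data locally and
# publishes only the result back toward AWS IoT.
import json
import greengrasssdk

# Client for publishing from the edge device to AWS IoT.
client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # 'event' arrives via a local or cloud-side MQTT subscription.
    reading = event.get("sensor_value", 0)
    result = {"sensor_value": reading, "flagged": reading > 42}
    # Publish the processed result rather than the raw stream.
    client.publish(topic="jetson/edge/results", payload=json.dumps(result))
```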

As for the Nvidia tie-up, the new partnership aims to let customers create, train, and optimize AI and ML models in AWS cloud environments, and then deploy them onto edge devices powered by Nvidia’s Jetson modules. These devices should be able to process data at the edge, using ML inference tools to act on that data rather than simply piping it up to the cloud for processing.
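
One way that cloud-to-edge hand-off could look is sketched below, assuming a PyTorch model trained in the cloud and exported to the portable ONNX format for an edge runtime to load. The toy model and file name are placeholders rather than anything the partnership prescribes.

```python
# Sketch: train (or at least build) a model in the cloud, then export it
# in a portable format for deployment onto an edge device.
import torch
import torch.nn as nn

# Toy classifier standing in for a real, cloud-trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
model.eval()

# Export to ONNX so an edge-side runtime can load it.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "edge_model.onnx",
                  input_names=["image"], output_names=["scores"])
```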

Cutting that backhaul bandwidth requirement can equate to very significant cost savings, especially if that data has to be sent wirelessly. There are also cloud-compute savings to consider, as the edge device won’t incur a per-use cost for its processing power. The pair point to agriculture, industrial manufacturing, and retail as ideal use cases, with weed-spotting, automated optical inspection, and customer behavioral analytics name-checked.
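
A quick back-of-the-envelope calculation makes the bandwidth point, using assumed numbers: a 4 Mbps encoded 1080p camera feed versus a 200-byte inference result published once per second.

```python
# Illustrative bandwidth comparison with assumed figures.
raw_stream_bps = 4_000_000   # assumed 1080p encoded video bitrate, bits per second
result_bps = 200 * 8         # assumed 200-byte inference result, once per second

reduction_factor = raw_stream_bps / result_bps
print(f"Backhaul cut by a factor of roughly {reduction_factor:,.0f}")  # ~2,500x
```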
