
Nvidia keeps on trucking with Volta GPU and Toyota car deal

The GPU Technology Conference (GTC) was the stage on which Nvidia delivered a pile of news announcements, including a new GPU architecture (Volta), a GPU cloud computing offering, a robot testing platform, and business partnerships with Toyota and SAP. Following some healthy Q1 results, the company continues its barnstorming progress, while its closest rival, AMD, continues to flounder, and Intel nervously eyes Nvidia’s incursion into the data center.

But the silicon industry is already looking at the next generations of chips that might power increasingly complex compute tasks. Google has its Tensor Processing Unit, a chip heavily optimized for machine-learning workloads, and is plugging away at quantum computing chips too, while Intel continues to pour its immense R&D budget into the problem – dedicated to x86 but potentially open to other approaches. Nvidia looks comfortable in the short term, but the nascent market hasn’t crowned a definitive winner yet.

The Q1 results saw revenue hit $1.94bn, up 48% from the same quarter a year earlier, with earnings per share up 129% over the same period. Revenue was down 11% from the preceding quarter’s $2.17bn, but the stock market wasted no time in pushing the share price up some 17.5% – largely on the strength of the 59.4% gross margin posted for the quarter.

As for the business lines, Nvidia says it tripled its data center GPU computing business compared to a year ago. In its latest 10-K, filed in March, its desktop GPU sales accounted for 72% of its total annual revenue, although both automotive and data center are catching up.

At GTC, Nvidia and SAP, the ERP and software provider, announced a collaboration to combine Nvidia’s AI expertise with SAP’s business software platforms. Essentially, SAP is just using Nvidia’s hardware and supporting software to power new cloud-based enterprise software, some of which was on show at GTC.

The first demo was SAP Brand Impact, a tool that uses machine vision and deep learning to analyze the impact of brand logos and advertising. Audi has been using it to examine the impact of its corporate sponsorship deals, and says it is “strongly considering possibilities to combine it with our media analysis workflow for the upcoming Audi Cup and Audi FC Bayern Munich Summer Tour.”

The second demo was of SAP’s Accounts Payable app, which has been trained using a recurrent neural network to automatically process business invoices, improving cashflow and significantly reducing mistakes. The third was the Service Ticketing application, which uses Nvidia GPUs to analyze unstructured data to more efficiently route customer support tickets to the right staff and departments.
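
For a sense of what that kind of model looks like in code, here is a minimal sketch of a recurrent classifier that routes short snippets of invoice text to expense categories, written in Python with Keras. It is purely illustrative – the vocabulary size, category count, and dummy training data are assumptions, and it is not SAP’s actual implementation.

    # Illustrative only: a small LSTM classifier for invoice line items.
    # Vocabulary size, category count, and the random training data are
    # placeholders, not details of SAP's Accounts Payable model.
    import numpy as np
    from tensorflow.keras import layers, models

    VOCAB_SIZE = 5000    # assumed token vocabulary
    MAX_TOKENS = 40      # assumed tokens per invoice line
    NUM_CLASSES = 12     # assumed number of expense categories

    model = models.Sequential([
        layers.Embedding(VOCAB_SIZE, 64),               # token embeddings
        layers.LSTM(128),                               # the recurrent layer
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Dummy data stands in for tokenized invoice lines and their labels.
    x = np.random.randint(0, VOCAB_SIZE, size=(256, MAX_TOKENS))
    y = np.random.randint(0, NUM_CLASSES, size=(256,))
    model.fit(x, y, epochs=1, batch_size=32)

In production the interesting work is in the data pipeline – tokenizing invoice text and mapping predictions back into the accounting workflow – rather than in the model itself, which is why GPU-accelerated training pays off mainly at enterprise scale.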

While enterprise software isn’t as sexy as self-driving cars or industrial robots, it does have the potential to provide quicker returns on investment than using GPU-based machine-learning in the ‘physical’ world. Administration and payments require an awful lot of man-hours, and should such AI-augmented applications save both time and effort, then enterprises will clamor to adopt them, thanks to the scale of the potential savings.

But self-driving cars were also a part of Nvidia’s GTC presentation, where it announced a partnership with Toyota that sees the automaker commit to using Nvidia’s Drive PX computers in its autonomous vehicles. The sensor-fusion boards, powered by Nvidia’s Xavier processor, will analyze and react to the inputs from the cars’ radar, LiDAR, and video sensors.

It’s another win in the automotive space for Nvidia, which also recently announced a deal with Bosch, a major supplier to automakers, which sees Bosch use Nvidia silicon in its Bosch AI Computer. That deal was the first outing for Xavier, a chip that Nvidia says can process up to 30 trillion deep learning operations each second, in a 30W power package. Nvidia anticipates achieving SAE Level 3 capabilities this year, and Level 4 in 2018.

The headline news for Nvidia at GTC was the launch of Volta, Nvidia’s new GPU architecture, which is being pitched at AI and High Performance Computing (HPC) applications. The first Volta product is the Tesla V100, a GPU built for data center usage, for both AI inferencing and training, as well as HPC acceleration and also graphics tasks.

Nvidia says the V100 houses 21bn transistors and provides a 5x improvement in peak teraflops over Pascal, its previous generation. The most notable change is the addition of 640 dedicated Tensor cores, the horsepower behind what Nvidia claims is 120 teraflops of deep learning performance from the V100 – equivalent to around 100 CPUs, according to the company.
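
That 120 teraflops figure holds up to a quick back-of-the-envelope check: Nvidia’s published spec has each Tensor core performing one 4x4x4 mixed-precision matrix multiply-accumulate per clock. Assuming a boost clock of roughly 1.45GHz (our assumption – actual clocks vary), the arithmetic lands close to the claimed number.

    # Rough sanity check of the 120 TFLOPS deep learning claim.
    tensor_cores = 640
    flops_per_core_per_clock = 4 * 4 * 4 * 2   # 64 fused multiply-adds = 128 FLOPs
    boost_clock_hz = 1.45e9                    # assumed; actual boost clocks vary

    peak_tflops = tensor_cores * flops_per_core_per_clock * boost_clock_hz / 1e12
    print(f"~{peak_tflops:.0f} TFLOPS")        # prints "~119 TFLOPS"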

Also of note is the addition of HBM2 RAM, which the company developed in partnership with Samsung. With a claimed throughput of 900GBps, Nvidia says the V100 has 50% more memory bandwidth than its Pascal GPUs. A new generation of the NVLink interconnect allows the GPUs to be linked to other GPUs and CPUs in clusters. Desktop GPU rival AMD has also recently reported being hamstrung by a shortage of HBM2 memory; make of that what you will.

The V100 will form the basis of Nvidia’s new DGX workstations and rack-mounted computers, which are supported by a revamped software suite. Nvidia claims the new Volta-powered boxes are 3x faster than the old DGX line, and pitches them as providing the AI compute equivalent of 800 CPUs – via eight V100 cards, a figure consistent with the roughly 100-CPU equivalence claimed for each GPU. The smaller DGX Station desktop ‘personal supercomputer’ is apparently equivalent to 400 CPUs, while using 40x less power than those CPUs would.

While Nvidia will be selling these boxes into many of the cloud computing providers’ data centers, it is also launching its own cloud compute program, letting developers lease capacity to run their own workloads without the burden of purchasing those rather expensive boxes.

Called the Nvidia GPU Cloud (NGC), and pitched as “accelerated computing when and where you need it,” the system revolves around a containerized software stack based on the same software running on the DGX boxes. The NGC will launch a public beta in Q3, and Nvidia is hoping the on-demand capacity offering proves more popular than the GRID equivalent it pitched at consumer video gamers – who, unsurprisingly, weren’t too enthusiastic about cloud-rendered video games. It will be interesting to see the reaction of the likes of AWS and Azure, which Nvidia is striding into direct GPU-based competition with.

The last of the product announcements was another software platform, aimed at robotics developers. Called the Isaac Robot Simulator, the system allows developers to test their robots in simulated virtual environments, promising quicker time to market and lower costs thanks to the ease of training the systems. Isaac can transfer its learned practices to real-world robots.

At the conference, some 50 companies showcased robots based on Nvidia’s Jetson platform, itself built around Nvidia’s Tegra K1 SoC. The applications on display ranged from elderly care and search and rescue to more traditional industrial automation tasks. To boost adoption of the Jetson platform, Nvidia says its partners will be releasing open-source reference platforms for drones, submersibles, and wheeled robots, to act as building blocks.
