The MLPerf benchmarking consortium has released its latest batch of results, which put Nvidia and Google at the head of the curve. However, while the improvements have been impressive, the industry has lost some of its luster, and it seems fair to say that the mainstream crowd has been burned out by the hype machine. Still, this week’s Musk-related announcement might rekindle some of that enthusiasm.
In the meantime, v0.6 of MLPerf is out, measuring the capabilities of hardware across six core applications, ostensibly to give the industry a common yardstick against which to measure its collective progress. With everyone running the same set of tests, research projects can be compared on an equal footing.
The actual results from this second iteration of MLPerf are not really worth dwelling on, as they will be horrendously out of date in only a few months – but should you desire the detail, HPCWire has an in-depth discussion. Rather, it is the jump from the previous batch, released last December, to this one that deserves our attention, as it illustrates the tremendous rate of change.
Nvidia’s DGX SuperPod completed the Resnet-50 v1.5 benchmark in 80 seconds. Back in 2017, when the DGX-1 server was released, the same test took 8 hours – meaning the new run takes around 0.28% of the time, a 360-fold speedup. Nvidia would also like you to know that it did best in the hardest benchmarks, particularly the heavyweight object detection needed for autonomous driving, although Google beat Nvidia by 84% in the non-recurrent translation and lightweight object recognition tests – but we are straying into apples-to-oranges territory here.
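The speedup arithmetic is easy to sanity-check; a minimal sketch using only the two figures quoted above:

```python
# Sanity-check the Resnet-50 v1.5 training-time comparison quoted above.
old_time_s = 8 * 60 * 60   # 2017 DGX-1 run: 8 hours, in seconds
new_time_s = 80            # 2019 DGX SuperPod run: 80 seconds

speedup = old_time_s / new_time_s             # 360x quicker
fraction_pct = 100 * new_time_s / old_time_s  # ~0.28% of the original time

print(f"{speedup:.0f}x faster; the new run takes {fraction_pct:.2f}% of the old time")
```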
Google’s TPU v3 Pods were behind what are said to be the first results showing a public cloud provider beating an on-premises system at running large machine-learning training workloads. This is significant: if time is a pressing concern, a developer can now turn to a cloud provider to process their workload, instead of having to buy and run the hardware themselves.
Speaking to ZDNet, Google Cloud’s Zak Stone said: “There’s a revolution in machine learning. All these workloads are performance-critical. They require so much compute, it really matters how fast your system is to train a model. There’s a huge difference between waiting for a month versus a couple of days.”
In terms of industry dynamics, this confirms what we have suspected for some time – that the main adopters of AI workloads are going to use appliances housed in data centers, not a workstation PC with four video cards sitting on a desk. Most AI functions are going to be incorporated into business software anyway, and abstracted away accordingly, such that ‘AI users’ will not even know they fall into that category.
In all manner of cloud-based software applications, this crop of AI hardware can be put to use optimizing delivery times and workflows, or running analytics functions to draw new insights from boring old datasets – all without the customer ever knowing (and therefore caring) that they are using the latest in processing technology. As far as most will be concerned, it will just be the latest new feature in some colossal piece of business software.
To this end, all the fanfare that accompanied early AI silicon announcements has died off, as most in the space come to the realization that there are perhaps a dozen major customers for their hardware, and that every other deployment is more in line with fitting out a single computer lab than a data center.
On that note, Intel announced its new Pohoiki Beach system, which combines 64 of its Loihi chips – processors designed to mimic how the human brain operates. Pohoiki Beach is actually built from Intel’s smaller Nahuku boards, each of which can hold 8 to 32 Loihi chips. It’s an impressive bit of kit, designed for researchers to use in applications like machine vision and robotics, where latency is a priority.
“With the Loihi chip we’ve been able to demonstrate 109 times lower power consumption running a real-time deep learning benchmark compared to a GPU, and 5 times lower power consumption compared to specialized IoT inference hardware,” said Chris Eliasmith, co-CEO of Applied Brain Research and professor at University of Waterloo. “Even better, as we scale the network up by 50 times, Loihi maintains real-time performance results and uses only 30 percent more power, whereas the IoT hardware uses 500 percent more power and is no longer real-time.”
“Loihi allowed us to realize a spiking neural network that imitates the brain’s underlying neural representations and behavior. The SLAM solution emerged as a property of the network’s structure. We benchmarked the Loihi-run network and found it to be equally accurate while consuming 100 times less energy than a widely used CPU-run SLAM method for mobile robots,” said Professor Konstantinos Michmizos of Rutgers University, describing his lab’s work on SLAM (simultaneous localization and mapping), to be presented at the International Conference on Intelligent Robots and Systems (IROS) in November.
In other brain-related news, Neuralink has announced that it has a functional brain-to-machine interface, which is implanted into mammalian brains by a robot that works a bit like a sewing machine, embedding threads of electrodes that detect neuron activity, which can then be used as input for a computer.
Neuralink is backed by Tesla CEO Elon Musk, and was founded in 2017 with $100mn of his money. The technique has been tested on “animal brains”: rats are known to have been used, and a slip of the tongue from Musk suggested monkeys have been too. The company has said that 19 animals have been used, with an 87% success rate. It hopes to experiment on human volunteers in 2020, pending FDA approval.
Such an interface could be transformational, but getting employees to correctly use even wearables and AR headsets has proven difficult – never mind writing a brain-threading requirement into employment contracts. For now, Neuralink is pitching this as a medical procedure for people suffering from paralysis, but there’s a wonderful sci-fi angle to explore here that very swiftly turns dystopian.