Qualcomm hurls hat into data center AI silicon race

Qualcomm’s expansion away from mobile devices continues. Having found not all that much in the way of demand for IoT devices, it is now chasing that other shiny new market – AI-based processing workloads in the data center. With its new Cloud AI 100 accelerator, a 7nm chip that claims 10x the performance per watt of its rivals, Qualcomm is hoping to steal some thunder from Intel, Google, and Nvidia.

Intel has been scrambling to find a way to make up for the shortcomings of its x86 CPU architecture in the new workloads demanded by AI and ML tasks – as these tasks favor the parallel computing capabilities of GPUs and FPGAs, which serial CPUs can’t provide. Nvidia too is now trying to figure out how to adjust to life after the cryptocurrency crash, which gutted the demand for the GPUs that had driven its share price skyward. To this end, it has chosen the data center and automotive as key markets. Intel too is chasing cars, with its monstrous $15.3bn Mobileye purchase all the evidence needed there.

Google, meanwhile, is a bit of an enigma, given that it has its Tensor Processing Unit (TPU) silicon but seems to be planning on using it only inside its Google Cloud Platform (GCP), meaning that the TPU is essentially an internal product. We can’t see Google slinging TPUs to Amazon’s AWS, but speaking of that cloud titan, Amazon has been exploring its own Arm-based CPUs – a major thorn in the side of Intel, as Arm wins could slash demand for x86 chips in conventional data center workloads. Amazon’s Inferentia chip is, as the name suggests, designed for inferencing tasks, and was built by Amazon’s Annapurna Labs subsidiary.

Intel would have considered those safe markets just a few years ago, but its hand has been forced: it dropped $16.7bn on Altera to get into FPGAs (its new 10nm Agilex FPGAs were unveiled last week), acquired Movidius and Nervana to build its own Neural Network Processors (NNPs – now rather delayed), and is now working on its own GPU too. Huawei’s Kunpeng 920 is another Arm-based design that hopes to carve out a chunk of this market, and Facebook’s open source initiatives continued when it released Kings Canyon, an ASIC also designed for inferencing.

But the new entrants are fighting an uphill battle against entrenched incumbent habits, where upgrades are always regarded with great skepticism. This is not such a problem in the data centers themselves, where new appliances and racks are routinely added and trashed as needed, but if the on-premises and network-edge opportunity is as big as the collective marketing departments say, then these vendors are going to start butting heads with IT directors that have a chronic aversion to fancy new gadgetry – they’ve been burned too many times before.

Qualcomm in particular is quite removed from the conversation about data center silicon providers. Getting into that discussion will require some effort, but if its performance claims are true, then it would seem to have a pretty good chance of doing just that. After all, it seemed rather unlikely that Arm-based platforms would be making an appearance in the data center, and now Amazon has its own designs for workhorse processors. With Arm designs cropping up in more of the peripheral processors too, joining the ranks of MIPS cores in things like networking and storage appliances, there are a lot of ‘synergy’ opportunities here. With RISC-V lurking on the horizon too, it does seem that the data center market will look very different in five years or so.

So Qualcomm is banking on being able to win pretty swift acceptance. To this end, it has embraced the accelerator form factor that served Nvidia so well – PCIe cards that can be plugged into conventional server motherboards. This avoids having to replace racks with new designs, and means that existing hardware can be repurposed.

Qualcomm is keen to stress that the Cloud AI 100 is not simply a repurposed mobile processor, and that the design is a completely new signal processor built specifically for AI inferencing workloads. To this end, the PCIe form factor accommodates some very beefy cooling options, and Qualcomm claims peak performance of up to 3x that of the Snapdragon 855 SoC, and about 50x that of the older Snapdragon 820. Specifically, the 10x claim over rivals is aimed at FPGA approaches to inferencing.

Strangely, Qualcomm is being coy with actual performance benchmarks. It says (speaking to VentureBeat) that the chip can provide ‘far greater’ than 100 TOPS (tera [trillion] operations per second), but that doesn’t quite square with the 3x claim over the Snapdragon 855, as that SoC can provide 7 TOPS – which would put the Cloud AI 100 in the 21 TOPS camp.
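The discrepancy is simple arithmetic, taking Qualcomm’s own 7 TOPS figure for the Snapdragon 855 at face value (a back-of-the-envelope check, not an official spec comparison):

```python
# Back-of-the-envelope check of Qualcomm's stated figures.
snapdragon_855_tops = 7   # Snapdragon 855 AI performance, per Qualcomm
claimed_multiple = 3      # Cloud AI 100's claimed peak vs the 855

implied_tops = snapdragon_855_tops * claimed_multiple
print(implied_tops)       # 21 -- well short of 'far greater than 100 TOPS'

# How far short? The gap between the two claims:
print(round(100 / implied_tops, 1))  # 4.8 -- the >100 TOPS claim is ~5x the implied figure
```

One of the two numbers – the 3x multiple or the 100 TOPS floor – has to give.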

With the chip slated for release in 2020, we will have to wait a while to see the data sheet, and Qualcomm isn’t revealing much at all in the way of hardware details. Of course, by then its rivals will likely have released updates that challenge its 10x claim, and pricing is going to be a key criterion. The chip needs to be cheap enough to displace the other approaches, or data center managers are just going to use more of the cheaper, less efficient designs.

Qualcomm also has a track record in the data center – not a happy one. Its Centriq Arm-based server CPUs were a pretty notable failure, perhaps because they were released onto a market that was not yet receptive enough. Still, apparently losing all enthusiasm for Centriq last year (there was talk of activist investor pressure) was odd, as the market’s tide seemed to be turning – although if AWS can spin up its own internal designs, then perhaps banking on the major cloud computing vendors to drive CPU volume is not a particularly sage strategy. Centriq kit is available if you know where to look, but Qualcomm isn’t exactly heralding its existence – and if data center managers were paying attention to Centriq, that could dampen enthusiasm for the new accelerators.

So the final word seems to be that a bloodbath is looming. Intel is a giant, but one that has been flailing somewhat in response to the new challengers. Nvidia seems out in front, Google has its own designs, as does AWS, while Microsoft seems happy to play the field. Qualcomm is a very capable company, and it could certainly pull this off, but this is a market in which competing vendors are going to start kicking lumps and tearing strips out of each other. It’s still in flux.