
14 June 2022

Nvidia’s superchips show a glimpse of 6G processing future 

Developments in the high-end processor industry were of limited interest to RAN engineers before the 5G era. High performance baseband processors were designed as vendor-proprietary ASICs by the network equipment vendors, so merchant chips mainly related to the radio complex itself and to device processors. But the start of virtualization of the RAN is throwing a spotlight on the role of general purpose processors (GPPs), and of specialized accelerators sold by merchant chip providers rather than designed by the equipment vendors.

 

Outside of greenfield networks like those of Dish and Rakuten, progress towards vRANs – which run varying percentages of their digital network processing on cloud platforms – is very slow. In the macro network, fully cloud-based 5G networks are not envisaged by many operators until the late 2020s or later. Many see this migration as a ‘6G’ activity.

 

That approach may be wise, enabling the operator to enhance its 5G performance in relatively conventional ways while waiting for new solutions to mature, and to prepare for a next-decade architecture that will be fully cloud-native. If that approach prevails, it will make high performance processors a key element of the 6G platform as that starts to emerge. Even in 5G, handling the compute-intensive tasks in Layer 1 of the network (such as Massive MIMO beamforming) is immensely challenging for cloud processors. GPPs cannot handle these tasks alone, but adding sophisticated accelerators based on graphics processing units (GPUs) or field programmable gate arrays (FPGAs) can increase cost and power consumption, and still introduces trade-offs compared to dedicated chips.
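
To give a sense of scale, the sketch below is a minimal, illustrative Python calculation of the downlink precoding step at the heart of Massive MIMO beamforming – one of the Layer 1 tasks referred to above. The antenna count, layer count, subcarrier count and slot rate are representative assumptions (roughly a 64-antenna, 100MHz 5G cell), not figures from Nvidia or any operator.

```python
# Illustrative sketch only: a toy version of the downlink precoding step a
# virtualized Layer 1 must run continuously. The figures (64 antennas,
# 16 spatial layers, 3,276 subcarriers, 2,000 slots/s, 14 OFDM symbols per
# slot) are assumed, representative Massive MIMO numbers, not a vendor spec.
import numpy as np

antennas, layers, subcarriers = 64, 16, 3276
symbols_per_slot, slots_per_second = 14, 2000    # 0.5 ms slots

# Precoding weights and one slot of frequency-domain user data (random here)
W = (np.random.randn(subcarriers, antennas, layers)
     + 1j * np.random.randn(subcarriers, antennas, layers))
x = (np.random.randn(subcarriers, layers, symbols_per_slot)
     + 1j * np.random.randn(subcarriers, layers, symbols_per_slot))

# Per-subcarrier matrix multiply: map user layers onto antenna ports
y = W @ x                                        # shape: (3276, 64, 14)

# Rough complex-multiply count per second, for this one cell's downlink alone
cmults = subcarriers * antennas * layers * symbols_per_slot * slots_per_second
print(f"~{cmults / 1e9:.0f} billion complex multiplies per second")
```

Run continuously, per cell, alongside channel estimation, FFTs and forward error correction, that kind of arithmetic is what pushes operators towards GPU or FPGA offload rather than GPP-only designs.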

 

In 6G, the need for these merchant solutions will be fully established (there is virtually no scenario in which 6G is not cloud-native). But the processing demands will be greater still in a network that is expected to use sub-terahertz spectrum with ever higher orders of MIMO and beamforming.

 

All this makes developments in ‘superchips’, targeted at very high performance computing tasks such as scientific supercomputers, of interest to RAN roadmaps too. Nvidia is leading superchip development, and has already talked about adapting these architectures to make them more power-efficient and compact, and so deployable in servers that must handle 5G/6G RAN and other demanding, yet increasingly everyday, use cases such as intensive AI/ML. The convergence of mobile connectivity with AI on an increasingly distributed edge cloud – hinted at in Dish’s architecture, and destined to be fully fledged in 6G – will only add to compute demands. And 5G/6G will generate ever-rising volumes of data, which is already spurring innovation in memory and storage chips, and boosting the fortunes of leaders in that field such as Samsung.

 

Nvidia’s superchip concept was unveiled in March at its GTC conference. Its own superchip models are called Grace Superchip and Grace Hopper, after the processors that power them – the former combines two Grace CPUs, the latter one Grace CPU and one Hopper GPU.

 

Grace Hopper features a 900GB/s NVLink-C2C connection between the two processors, which effectively extends the memory Hopper can address to 600GB. That is important for accelerating demanding tasks in AI and, in time, 6G, and helps enable data rates 15 times faster than Grace alone, according to Nvidia.
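
Some back-of-envelope arithmetic shows why the coherent chip-to-chip link matters. The calculation below uses the 900GB/s and 600GB figures quoted above, plus an assumed ~64GB/s for a conventional PCIe Gen5 x16 CPU-to-GPU attachment – that last number is a typical published figure, not one from Nvidia’s announcement.

```python
# Rough, illustrative arithmetic only; the PCIe figure is an assumption.
nvlink_c2c_gb_s = 900      # GB/s, Grace <-> Hopper coherent link (as quoted)
pcie_gen5_x16_gb_s = 64    # GB/s, assumed conventional CPU-to-GPU attach
memory_pool_gb = 600       # GB of memory the Hopper GPU can address via Grace

print(f"Streaming the full 600GB pool over NVLink-C2C: "
      f"{memory_pool_gb / nvlink_c2c_gb_s:.2f} s")
print(f"Same transfer over a PCIe Gen5 x16 link:       "
      f"{memory_pool_gb / pcie_gen5_x16_gb_s:.1f} s")
print(f"Bandwidth ratio: ~{nvlink_c2c_gb_s / pcie_gen5_x16_gb_s:.0f}x")
```

On those assumptions the chip-to-chip link offers roughly 14 times the bandwidth of a PCIe Gen5 x16 connection, which illustrates why Nvidia treats the coherent interconnect as central to the design.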

 

“The reason it’s interesting for high performance computing is that energy efficiency is a very important figure right now,” Ian Buck, VP of hyperscale and HPC at Nvidia, told EETimes. “Demand for compute isn’t slowing down. You can actually reduce the energy footprint of computing by moving to more performant supercomputing architectures like Grace Hopper.”

 

The Grace superchip features 144 ARM CPU cores with almost 1TB/s of combined memory bandwidth across the two processors. Buck said this design is “taking a standard ARM core and building the best possible chip that can be made to complement our GPUs.”

Each Grace CPU is accompanied by 16 memory chiplets in a customized LPDDR5X form factor, which Nvidia claims consumes less power than standard DDR (double data rate) memory while delivering 500GB/s of memory bandwidth per CPU.
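
The arithmetic behind those figures is straightforward, and the quick check below lays it out; the per-chiplet number is derived from the quoted totals rather than stated by Nvidia.

```python
# Simple, illustrative arithmetic using the figures quoted above.
chiplets_per_cpu = 16
bandwidth_per_cpu_gb_s = 500              # GB/s per Grace CPU, as quoted
cpus_per_superchip = 2

per_chiplet = bandwidth_per_cpu_gb_s / chiplets_per_cpu
combined = cpus_per_superchip * bandwidth_per_cpu_gb_s

print(f"~{per_chiplet:.0f} GB/s per LPDDR5X chiplet")       # ~31 GB/s
print(f"~{combined} GB/s combined, i.e. almost 1 TB/s")     # 1000 GB/s
```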

 

Server makers including Supermicro, Gigabyte, Asus, Foxconn, QCT and Wiwynn have announced plans to make servers with Nvidia superchips. Initially these will be for scientific applications, but Supermicro said it will also market them for a wider range of use cases including 5G RAN, digital twins, AI, cloud graphics and gaming workloads. The first servers should be available in the first half of 2023.