Ericsson, Nvidia and Verizon push GPUs to the heart of the MNO’s platform

The slow shift of the mobile industry towards virtualized RANs (vRANs), in which many baseband functions are run as software on common cloud infrastructure, is mainly perceived as an opportunity for Intel. Finally, networks will run on cloud infrastructure – platforms dominated by Intel processors – rather than dedicated chips designed by the equipment vendors specifically for RAN functions.

This has appeared to be Intel’s game to lose. Some challengers have proposed ARM-based alternatives targeted at vRAN and other high performance cloud applications. Cavium and Huawei both have very credible offerings, though Qualcomm seems to have backed away. At a future stage, when operators might deploy their vRANs in third party clouds such as AWS, there could also be a role for the processors that the webscale giants themselves are designing to power their clouds.

But these challengers have so far made limited impact on early vRAN tests, trials and deployments, with Intel’s FlexRAN reference platform proving a successful stepping stone into the mobile network market, which has always been closed to merchant chip suppliers. Now, however, Nvidia has reared its head in this space. The competition between Nvidia’s graphics processing unit (GPU) platform and Intel’s CPU-centric architecture, as the basis of high performance computing systems, is one of the defining battles of the twenty-first century chip industry. But while a GPU-first strategy has scored highly for Nvidia in AI, XR and supercomputing, it has not, so far, been visible in the vRAN.

Now that is changing. At Mobile World Congress Los Angeles (MWC LA) last week, Verizon and Ericsson both talked of work to support vRAN, as well as integration of edge computing with a distributed, disaggregated RAN architecture, using GPU-led platforms and collaborations with Nvidia.

This is all about finding chip technologies which really can cope with the extremely high demands of vRAN workloads and deliver performance that is equal to, or better than, a dedicated architecture. Doubts that this will be possible using off-the-shelf chips are one reason why operators have been so slow to embrace vRAN, at least outside indoor and small cell networks.

These baseband network functions may run as software on cloud servers built around off-the-shelf chips, but the hardware is scarcely commoditized or simple. To run the functions of a RAN with the speed, reliability and near-real time responsiveness required in 5G, the hardware will have to be extremely high performance, and it is unlikely to be cheap.

Intel has given up arguing that, in any near term timeframe, its Xeon CPUs will be enough, on their own, to support RAN functions to the same standard as proprietary, optimized and dedicated hardware. To close that gap, it has been integrating Xeon with FPGA (field programmable gate array) and ASIC chips to support offload and acceleration of particularly challenging workloads. The more that other demanding technologies, such as mixed reality (XR) and artificial intelligence (AI), are also deployed in operators’ networks, the more sophisticated the cloud hardware will need to be.

Nvidia claims it is applying recent breakthroughs in GPU technology for supercomputing to the vRAN challenge and has worked with Ericsson to build the “world’s first software-defined 5G RAN”. Its new EGX Edge Supercomputing Platform is designed to be flexible, supporting a variety of high performance use cases including 5G, massive IoT and AI. It is cloud-native, and powered by the company’s CUDA Tensor Core GPU, which delivers 15 teraflops of compute and can process up to 140 simultaneous high definition video streams.

“All of this processing basically translates to one single node that is equivalent to hundreds of nodes in a data center,” CEO Jensen Huang said during his keynote speech at MWC LA.

Among the partners announced for EGX, in addition to Ericsson on the vRAN side, are Red Hat, which will help implement Kubernetes container orchestration and deliver a carrier-grade software stack; and Microsoft, which will integrate its Azure cloud platform to handle AI computation and other intensive workloads, distributed from edge to cloud as required.
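
Neither Nvidia nor Red Hat has detailed the integration, but the general pattern for putting GPU workloads under Kubernetes control is well established: the GPU is exposed as a schedulable resource that a pod requests in the same way as CPU or memory. The sketch below, using the Kubernetes Python client and the standard nvidia.com/gpu resource name, is purely illustrative; the image name and namespace are placeholders, not part of the EGX stack.

```python
# Illustrative only: how a GPU workload is typically requested from Kubernetes.
# The image name and namespace are hypothetical; nvidia.com/gpu is the resource
# name exposed by Nvidia's standard Kubernetes device plugin.
from kubernetes import client, config

def launch_gpu_pod():
    config.load_kube_config()                     # use local kubeconfig credentials
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="edge-inference-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="worker",
                image="example.com/edge-ai-worker:latest",   # placeholder image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"},           # ask for one GPU
                ),
            )],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_gpu_pod()
```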

Nvidia also announced a lengthened list of partnerships with data center server and software vendors, including Atos, Dell, Fujitsu, Mellanox, Lenovo, QCT, Super Micro, Cisco and VMware.

Huang described how Nvidia sees 5G and edge computing as being inextricable – twin enablers of many applications that need to process huge amounts of data, often with very quick response times, including AI analytics, cloud gaming, mixed reality, the IoT and software-defined networking (SDN).

“The fundamental problem, of course, is that what used to be a phone, the cloud, and the telcos essentially being a pipe has to change. The edge can no longer just be a pipe. The second fundamental problem is the amount of sensor data that is going to be streaming across this pipe is going to grow incredibly. 5G is going to make it possible for us to connect up to 1,000 times more things than we currently do with 4G. Those things are going to be streaming sensor information that are going to be high density, high data rate, as well as continuous.”

Telcos, cloud providers and others have to take a risk on the new 5G/edge platforms in order to stimulate uptake of these emerging technologies, most of which are not widely commercialized as yet. “My attitude about most of these things — because we’re in the computer industry and we build computing platforms — is if you build it, they might not show up. But if you don’t build it, they can’t show up,” he said.

“The future is software-defined, and these low latency applications that have to be delivered at the edge can now be provisioned at the edge,” he went on. “That future will become software-defined high performance computing.”

While Nvidia was focusing on use cases, such as factory robotics, which require a combination of 5G connectivity, edge computing and AI, Ericsson was emphasizing how a GPU architecture could make RAN functions themselves far more efficient and scalable. Despite the talk about their collaboration on vRAN, both firms said they were not yet ready to share any details of their approach or progress.

Thomas Noren, Ericsson’s head of 5G commercialization, used dynamic spectrum sharing (DSS) as an example of an intensive, time-sensitive RAN workload which would require very high performance chips before it could be virtualized on cloud infrastructure. DSS (see separate item about Verizon’s network) allows an operator to run 4G and 5G flexibly on the same band and dynamically allocate spectrum between the two. DSS reschedules spectrum every millisecond, which makes it very challenging for the hardware to handle all the time and frequency variables.
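
Ericsson has not disclosed its DSS scheduler, but a toy sketch gives a feel for the cadence involved: every millisecond, the shared carrier’s physical resource blocks (PRBs) are re-divided between 4G and 5G according to instantaneous demand. The PRB count, demand model and proportional split below are invented for illustration only.

```python
# Minimal illustration of dynamic spectrum sharing (DSS) style scheduling.
# NOT Ericsson's algorithm: the PRB count, demand model and allocation rule
# are invented to illustrate the 1 ms rescheduling cadence.
import random

NUM_PRBS = 100          # physical resource blocks in the shared carrier (assumed)
TTI_MS = 1              # DSS re-allocates spectrum every transmission interval (1 ms)

def demand_snapshot():
    """Pretend per-TTI buffer occupancy for 4G and 5G users (random stand-in)."""
    return {"lte": random.randint(0, 150), "nr": random.randint(0, 150)}

def allocate_prbs(demand):
    """Split PRBs proportionally to instantaneous demand, 1 ms at a time."""
    total = demand["lte"] + demand["nr"]
    if total == 0:
        return {"lte": 0, "nr": 0}
    lte_prbs = round(NUM_PRBS * demand["lte"] / total)
    return {"lte": lte_prbs, "nr": NUM_PRBS - lte_prbs}

if __name__ == "__main__":
    for tti in range(5):                      # five 1 ms intervals
        demand = demand_snapshot()
        print(f"TTI {tti}: demand={demand} -> allocation={allocate_prbs(demand)}")
```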

So far, such workloads require special-purpose, fully optimized processors, argued Noren. He said: “We have a multicore baseband architecture that allows us to do parallel processing; we have the capacity to introduce 5G and DSS on the same baseband. Our purpose-built baseband unit is the fastest, most power-efficient unit in the industry. That’s why we can do DSS.”

So it is a huge concession by Ericsson to say it believes a vRAN, and sophisticated tasks like DSS, could in future run on an off-the-shelf chip.

The same goes for the distributed unit (DU) within a disaggregated vRAN. Most vRAN architectures will run some network functions on a centralized server, while others will be distributed closer to the cell site on a distributed unit, often because they require lower latency.

Ericsson already has roadmaps to virtualize the centralized unit (CU) on off-the-shelf hardware, and has worked on this with Intel. But while the CU requires a great deal of horsepower, the DU brings different challenges because many units will be deployed, so cost and power consumption must be minimized, even while supporting very low latency response and potentially high bandwidth.
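
One way to picture the CU/DU split is as a placement problem: functions whose latency budgets are tighter than the delay back to the central cloud must stay at the DU near the radio, while the rest can be centralized. The function list, budgets and midhaul delay in this sketch are simplified assumptions, not Ericsson’s design.

```python
# Toy illustration of the CU/DU split: place each baseband function at the
# distributed unit (DU) when its latency budget is tighter than the midhaul
# delay to the centralized unit (CU). All values below are rough examples.
FUNCTIONS = {
    # function name: latency budget in milliseconds (illustrative values)
    "scheduling/HARQ": 0.5,
    "beamforming": 0.5,
    "RLC/PDCP": 5.0,
    "RRC/control": 10.0,
}
MIDHAUL_DELAY_MS = 2.0   # assumed one-way delay from cell site to central cloud

def place(functions, midhaul_delay_ms):
    placement = {}
    for name, budget_ms in functions.items():
        placement[name] = "DU (cell site)" if budget_ms < midhaul_delay_ms else "CU (central cloud)"
    return placement

if __name__ == "__main__":
    for fn, where in place(FUNCTIONS, MIDHAUL_DELAY_MS).items():
        print(f"{fn:>16} -> {where}")
```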

Noren said: “If you look at all the available technologies, they are too expensive, too power hungry and too big to be effective compared to our purpose-built hardware for DU. Nvidia has a platform and development framework that we can potentially use.”

He was clear that the co-development of a GPU-based vRAN was only in the experimental stage, and remained cautious about success – understandably, given the decades of R&D that have gone into creating specialized platforms. He pointed out that Ericsson’s existing baseband, running on specialized silicon, could already handle the “incredibly computer intensive” software to support a workload like DSS, while in the case of the Nvidia platform, he was merely “open to the idea”. He said: “We will explore if we can develop a distributed baseband product (DU) with Nvidia GPU. We don’t have any committed product plans, but we think this is a very interesting idea.”

Verizon has also been working hard on ways in which GPUs might help support very high performance 5G/edge networks and applications, either working as accelerators or, through parallel processing, supporting a high end cloud platform in their own right. A group of engineers at Verizon has been working for the past two years on ways to orchestrate and load-balance data over a 5G network onto an edge processing unit based on one or more GPUs.
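
Verizon has not published the internals of that work, but the basic shape of such an orchestrator can be sketched as a least-loaded balancer that assigns each incoming session to the edge GPU node with the most free capacity. The node capacities and per-session costs below are invented for illustration.

```python
# Illustrative least-loaded balancer for sessions across edge GPU nodes.
# Node capacities and per-session GPU costs are made up; this is not Verizon's
# orchestration logic, just a sketch of the general load-balancing idea.
from dataclasses import dataclass, field

@dataclass
class GpuNode:
    name: str
    capacity: float            # abstract GPU units available
    used: float = 0.0
    sessions: list = field(default_factory=list)

    def headroom(self):
        return self.capacity - self.used

def assign(session_id, cost, nodes):
    """Place the session on the node with the most free GPU capacity."""
    best = max(nodes, key=lambda n: n.headroom())
    if best.headroom() < cost:
        raise RuntimeError("no edge GPU capacity left for this session")
    best.used += cost
    best.sessions.append(session_id)
    return best.name

if __name__ == "__main__":
    nodes = [GpuNode("edge-a", capacity=10.0), GpuNode("edge-b", capacity=10.0)]
    for i in range(6):
        print(f"session {i} -> {assign(i, cost=2.5, nodes=nodes)}")
```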

The effort was initially focused on XR and AI workloads, under the leadership of TJ Vitolo, head of the telco’s XR Lab. They were looking to reduce power consumption in the smartphone or other device by removing its GPU, and moving those workloads to the edge cloud. The low latency of edge-plus-5G would enable the same experience for the user as they currently get from a device-based GPU, and the relaxation of power constraints would support more advanced applications based on XR, AI or massive IoT.

This led to the development of a GPU-based orchestration system which has recently gone into the test phase, amid promises that it will be able to revolutionize XR markets. In fact, as Wireless Watch’s sister service, Faultline (which provides a weekly analysis of digital video developments) points out, the XR market itself is a turbulent one, with more than its share of flops (BBC VR Hub, Nokia Ozo, Magic Leap and so on).

Of course, Verizon’s breakthrough could orchestrate workloads for other applications if the XR market proves disappointing, but could its work help to improve the AR/VR experience, and the economics for service providers, and help to revitalize the sector?

Ultimately, the plan is to empower app developers. Verizon wants to give developers a ride on its Intelligent Edge Network, a project launched some two and a half years ago and designed to support Verizon’s 5G small cells deployment and existing 4G LTE network. Start with the developer community and the rest will follow, as telcos often say (but rarely succeed in executing).

Verizon’s 5G Lab has created eight specific services designed to equip developers with tools to create applications for use on its 5G edge technology. These are 2D computer vision, XR lighting, split rendering, real-time ray tracing, spatial audio, 3D computer vision, real-time transcoding, and asset caching.

What Verizon has set out to solve is the heavy imaging and graphics processing requirements of XR applications, which would benefit from the eight services listed above being shifted to the network edge. It says these applications would run significantly better on a GPU, but are hindered by GPUs’ limited resource management capabilities. Verizon therefore developed a prototype using GPU slicing and virtualization management, designed to support any GPU-based service.
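
The prototype’s mechanics have not been made public. As a crude illustration of the GPU slicing idea, one physical GPU’s memory can be carved into fixed-size slices and services admitted only while slices remain; the slice size, memory total and service names below are assumptions.

```python
# Crude sketch of GPU slicing: partition one GPU's memory into fixed-size
# slices and admit services only while slices remain. Slice size, GPU memory
# and service names are assumptions, not Verizon's prototype.
class SlicedGpu:
    def __init__(self, total_mem_gb=16, slice_gb=2):
        self.slice_gb = slice_gb
        self.free_slices = total_mem_gb // slice_gb
        self.tenants = {}

    def admit(self, service, mem_gb):
        """Grant the service enough whole slices to cover its memory need."""
        needed = -(-mem_gb // self.slice_gb)        # ceiling division
        if needed > self.free_slices:
            return False
        self.free_slices -= needed
        self.tenants[service] = needed
        return True

if __name__ == "__main__":
    gpu = SlicedGpu()
    for svc, mem in [("split-rendering", 6), ("computer-vision", 4), ("ray-tracing", 8)]:
        print(svc, "admitted" if gpu.admit(svc, mem) else "rejected (no free slices)")
```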

Live network trials using the newly developed GPU orchestration technology in combination with edge services have proven successful. “In one test for computer vision as a service, this new orchestration allowed for eight times the number of concurrent users, and using a graphics gaming service, it allowed for over 80 times the number of concurrent users,” states the announcement.

Of the eight services created, a couple stand out. Real-time transcoding is a technique for saving time and optimizing workflows, while real-time ray tracing is a less well-known tool. The latter takes traditional 3D scenes or objects, which are designed using complex custom algorithms, and calculates how light will fall on them and how the resulting colors will appear. Verizon says that with real-time ray tracing capabilities, each pixel can receive accurate light coloring, greatly advancing the realism of 3D images.
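
To see why ray tracing gives each pixel its own lighting, a textbook sketch helps: shoot one ray per pixel, find where it hits a surface, and shade that point against the light direction. This is a minimal classroom example, not Verizon’s real-time ray tracing service; the scene, camera and light are all invented.

```python
# Minimal ray tracing sketch: shoot one ray per pixel at a sphere and shade it
# with simple diffuse (Lambert) lighting, so each pixel gets its own light value.
# A textbook illustration only, not Verizon's service.
import math

WIDTH, HEIGHT = 24, 12
SPHERE_C, SPHERE_R = (0.0, 0.0, 3.0), 1.0
LIGHT_DIR = (0.577, 0.577, -0.577)          # normalized direction toward the light

def hit_sphere(dx, dy, dz):
    """Nearest intersection distance of a camera ray (origin 0,0,0) with the sphere, or None."""
    cx, cy, cz = SPHERE_C
    b = 2 * (-dx * cx - dy * cy - dz * cz)
    c = cx * cx + cy * cy + cz * cz - SPHERE_R ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Camera at the origin looking down +z; map the pixel to a ray direction.
        dx, dy, dz = (i / WIDTH - 0.5) * 2, (0.5 - j / HEIGHT) * 2, 1.0
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        dx, dy, dz = dx / norm, dy / norm, dz / norm
        t = hit_sphere(dx, dy, dz)
        if t is None:
            row += " "
            continue
        # Surface normal at the hit point drives the per-pixel brightness.
        nx = (t * dx - SPHERE_C[0]) / SPHERE_R
        ny = (t * dy - SPHERE_C[1]) / SPHERE_R
        nz = (t * dz - SPHERE_C[2]) / SPHERE_R
        brightness = max(0.0, nx * LIGHT_DIR[0] + ny * LIGHT_DIR[1] + nz * LIGHT_DIR[2])
        row += " .:-=+*#%@"[min(9, int(brightness * 10))]
    print(row)
```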

Split rendering is another neat technique. It enables the delivery of PC/console level graphics to any mobile device by breaking up the graphics processing workload of games, 3D models or other complex graphics, pushing the most difficult calculations onto the server and allowing the lighter calculations to remain on the device.
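
A rough sketch of the split-rendering decision, with invented pass names, costs and a device budget (not Verizon’s actual service), might simply route each rendering pass to the server or the device according to its cost:

```python
# Rough sketch of split rendering: heavy passes run on the edge server, light
# passes stay on the device. Pass names and costs are invented for illustration.
RENDER_PASSES = {
    # pass name: relative GPU cost (illustrative)
    "geometry": 3,
    "global illumination": 9,
    "shadows": 6,
    "UI overlay": 1,
    "final composite": 2,
}
DEVICE_BUDGET = 4   # assumed ceiling for what a phone-class GPU handles per frame

def split(passes, device_budget):
    server, device = [], []
    for name, cost in passes.items():
        (device if cost <= device_budget else server).append(name)
    return server, device

if __name__ == "__main__":
    server, device = split(RENDER_PASSES, DEVICE_BUDGET)
    print("run on edge server:", server)
    print("run on device:     ", device)
```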

On top of the obvious network technology clout, Verizon recently acquired a bunch of VR assets – picking up the remains of Jaunt XR earlier this month, a year after the start-up gave up the ghost. Those assets mostly include AR software, but Jaunt probably held onto some VR assets too, after shifting focus from VR to AR – perhaps some of its studio-grade cameras, along with a suite of VR software and a distribution platform.

This deal was designed to enhance Verizon’s existing XR arm, called Envrmnt. The division forms part of the 5G Lab and is focused on mobile XR applications for autonomous vehicles, smart cities and volumetric VR.

Critical to Verizon’s 5G-focused Intelligent Edge Network is the fiber network backbone. Verizon has laid thousands of miles of fiber this year, including 4,200 miles across 60 cities during the second quarter alone – an increase from the 3,000 miles of fiber achieved in Q1.

“Creating a scalable, low cost, edge-based GPU compute capability is the gateway to the production of inexpensive, highly powerful mobile devices,” said Nicola Palmer, SVP of technology and product development at Verizon. “This innovation will lead to more cost effective and user friendly mobile mixed reality devices and open up a new world of possibilities for developers, consumers and enterprises.”

Only once XR applications can be delivered efficiently over networks will the industry stand a chance of breaking into the mainstream. Verizon’s progress is a rare positive, and the onus is now on developers to get the ball rolling.
