20 November 2024

Fujitsu’s role in SoftBank and Nvidia’s tantalizing AI-RAN offering

AI wunderkind Nvidia and SoftBank are promising the telco community sumptuous rewards from the deployment of AI-RAN tech. The pair have led a trial, alongside Fujitsu and Red Hat, which they say promises returns of 219% for every AI-RAN server installed. The veracity of this claim remains to be tested. Wireless Watch spoke to Rob Hughes, the Head of Wireless Marketing at Fujitsu, for details of Fujitsu’s work to maximize RAN performance using AI.

Fujitsu has been working with Nvidia for around four years developing vRAN software using Nvidia GPUs – the chip designer’s secret sauce for running AI applications. The research has been focused on how to monetize capacity, given the high costs involved in AI hardware.

In a sign of its commitment to this field, Fujitsu has also recently joined the AI-RAN Alliance – an initiative which has SoftBank and Nvidia as founding members. News of Fujitsu joining the AI-RAN Alliance has not yet been publicly announced.

Fujitsu has worked with Nvidia on research using the A100 Tensor Core GPU and the GH200 Grace Hopper Superchip. The research with Nvidia has focused on three primary concepts, according to Hughes, the first of which was to develop a RAN platform that can run AI applications.

“Previously our hardware and software were designed specifically for the RAN. Now with virtual RAN, our software is just another workload that can run on any platform,” said Hughes. “So why not put that on a platform that can also run AI? If this is just a workload, then you can run your regular AI business using those same resources. It’s like a restaurant in downtown New York; you’ve got to keep your tables constantly occupied to pay for that real estate. It’s the same kind of model for AI-RAN.”

The second focus of the research has been to use AI to improve RAN performance, for example with energy efficiency or uplink performance improvements. The financial benefits here are a lower total cost of ownership (TCO) and the ability to upsell an improved customer experience.

The third area of focus has been the AI-related services which operators can offer enterprises using their AI-RAN networks, such as autonomous driving capabilities.

Looking specifically at the PoC led by Nvidia and SoftBank that was announced last week, the project is called AITRAS. It uses Nvidia’s recently launched AI Aerial accelerated computing platform for the RAN Layer 1 software, Nvidia’s AI Enterprise software for the edge AI, and its GH200 Grace Hopper Superchip. Fujitsu provided the radio unit and Layer 2 and 3 software (with input from SoftBank), and Red Hat provided the virtualization platform.

AITRAS allows AI and RAN computations to run on the same network infrastructure so that the operator, SoftBank in this case, can develop and deploy AI applications on edge AI servers for enterprise applications.

SoftBank has developed three AI applications using Nvidia AI Enterprise on AITRAS. The first is a robot managed by a large language model (LLM); the application uses SoftBank’s edge AI servers for low-latency input of sensor data from the robot and output of control information from the LLM. The second uses retrieval-augmented generation (RAG) over corporate data to create company-specific LLM applications. The third is an AI foundation model for autonomous driving.

The AITRAS trial began in October and uses Fujitsu’s 4.9 GHz radios with 100 MHz of bandwidth. The PoC can operate 20 cells simultaneously, achieving speeds of up to 1.3 Gbps.

The proposition is that telcos like SoftBank can offer AI applications to their enterprise customers via their network and in doing so, monetize unused capacity in the RAN for these AI services. This is the restaurant table analogy that Fujitsu’s Hughes refers to.

The context is that radio access networks typically operate at only about one-third of capacity, except during the occasional moments when peak capacity is needed. “With the common computing capability provided by AI-RAN, it is expected that telcos now have the opportunity to monetize the remaining two-thirds capacity for AI inference services,” said Nvidia.

The other element is that Nvidia’s GPUs are expensive, so you would be unlikely to deploy them solely for use in the RAN. There needs to be a business imperative.

This message was heard from operators at Ericsson’s OSS/BSS summit in Paris last month. Now that the furor around AI has calmed down, MNOs are unwilling to pay for AI-compute equipment unless there is a clear commercial benefit.

“There is a lot of excess compute capacity sitting idle in base stations around the world and collectively the industry has been looking at ways to monetize those resources. The goal is to focus on new services, and if you are doing that anyway, what else can you do to improve RAN performance?” said Hughes.

SoftBank aims to sell an AI-RAN product based on the AITRAS PoC to operators from 2026. And the tech firm claims it presents a clear commercial opportunity.

SoftBank and Nvidia say that operators can earn around $5 in AI inference revenue from every $1 of capex invested over a five-year period. In addition, SoftBank said that when opex savings are factored in, operators will see a return of up to 219% for every AI-RAN server they add to their networks.
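The claimed ratios can be put in concrete terms with some back-of-envelope arithmetic. A minimal sketch follows; all inputs are illustrative assumptions, since neither company has published the underlying cost model or the exact definition of the 219% figure:

```python
# Back-of-envelope check of the SoftBank/Nvidia AI-RAN figures.
# Only the $5-per-$1 revenue ratio and the 219% per-server return are
# stated publicly; everything else here is an illustrative assumption.

def inference_revenue(capex: float, revenue_per_capex_dollar: float = 5.0) -> float:
    """AI inference revenue over five years, per the claimed $5:$1 ratio."""
    return capex * revenue_per_capex_dollar

def simple_return_pct(revenue: float, opex_savings: float, cost: float) -> float:
    """Simple return on investment: net gain over cost, as a percentage."""
    return (revenue + opex_savings - cost) / cost * 100

capex = 1.0  # normalize to $1 of capex per AI-RAN server
revenue = inference_revenue(capex)  # $5 over five years at the claimed ratio

# On revenue alone, the simple return works out to (5 - 1) / 1 = 400%,
# which suggests the 219% figure nets off costs (power, opex, operations)
# that the announcement does not itemize.
print(simple_return_pct(revenue, opex_savings=0.0, cost=capex))  # 400.0
```

The gap between the naive 400% and the quoted 219% is a reminder that the headline number depends heavily on unpublished cost assumptions.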

While the concept of monetizing excess compute capacity makes excellent sense, it runs counter to the broader movement for energy efficiency in the RAN. Instead of continuing the effort to reduce energy consumption at the RAN level, this project will drive substantial growth in consumption, and may not be the most efficient way to run AI inferencing anyway. Operators tempted by the Nvidia-SoftBank sales pitch may nonetheless be keen to follow this model.

But will buyers be tempted to buy edge compute services from MNOs? Ronnie Vasishta, SVP of Telecom at Nvidia, pointed out to Wireless Watch this week that there are as-yet-unannounced examples of data center partnerships with MNOs to provide edge compute services. The Nvidia sales pitch will rely on more of these types of deals.

In addition to the AITRAS project, Fujitsu has agreed to run two R&D projects with SoftBank. The first is a verification lab in Dallas, Texas, where Fujitsu’s mobility operations are based. The lab will be used to validate AI-RAN hardware, software, and applications. The second strand of the agreement will follow from the AITRAS PoC.

Fujitsu and SoftBank will develop software using AI to maximize RAN performance, “to advance the commercialization of AI-RAN,” the pair said in a statement. This research will focus on the Layer 2 and 3 functions of the network.

“The companies aim not only to significantly improve investment efficiency on mobile infrastructure by enhancing and refining RAN features and performance through AI, but also drive new services and bring about various innovations in society and industry by improving communication quality for users even during times of mobility and congestion, and enable real-time response and analysis,” said SoftBank and Fujitsu in a statement.