20 October 2020

ARM extends infrastructure vision from supercomputer to edge

It’s always difficult for a company to set out its roadmap while under the shadow of a potential acquisition, and especially one that could entirely change the firm’s model and proposition. The prospect that Nvidia might succeed in its $40bn purchase of ARM clearly hovered over discussions at ARM’s annual developer summit this month, but executives still succeeded in providing a clear vision of core strategies, particularly to enable processors for the end-to-end cloud and 5G network.

Powering the cloud from edge to supercomputer, including telco infrastructure, is clearly also Nvidia’s objective, though its starting point is its own processors, while ARM’s is an IP licensing and developer ecosystem model open to all customers. Nvidia wants to deepen its penetration of the value chain by licensing IP in both its own and ARM’s markets, though its main proof point to date has been its CUDA programming model, and it is hard to see how its ownership would not compromise the traditional ARM approach and its huge customer base.

But ARM and its potential future parent have one important factor in common – they need to push back against Intel’s incumbent advantage if they are to dominate cloud-based networks. Another proposed acquisition – AMD’s of Xilinx (see separate item) – may help them by shaking up the cloud processor sector some more and strengthening an Intel rival that is already resurgent against the larger firm. But that combination will also put pressure on Nvidia by enhancing AMD’s competitive position in key 5G and cloud segments.

The progress of ARM’s architecture in infrastructure segments is critical to its future, with or without Nvidia. Its greatest success has been built on device processor designs, particularly for smartphones, where it has never been successfully challenged. But as that market’s growth slows – even with the win of Apple’s business for future Mac systems-on-chip – ARM has been evolving its CPU, GPU (graphics processing unit) and microcontroller core IP to target areas with greater growth potential, but tougher competition. Most important are the next generation of connected devices, especially for the IoT; and the server infrastructure that will support the cloud from giant data centers to the most distributed edge, often supporting virtualized 5G networks in the process.

Despite a clear interest from cloud processor designers and buyers in diversifying their ecosystem beyond Intel x86, progress in the infrastructure market has been gradual for ARM processors and GPUs. Early efforts based on 32-bit designs, led by start-ups like Smooth-Stone (later Calxeda), had limited impact; and some promising 64-bit developments by major suppliers, such as Qualcomm’s, never got off the ground. But several developments have made the ARM-based infrastructure platform credible as a viable challenger to Intel, and the new confidence was on display in the DevSummit presentations.

First, larger chip suppliers and server vendors started to support an alternative to Intel. Marvell has had the highest impact in the former group, while HPE and others have added ARM-based products to their portfolios. Last year’s announcement by HiSilicon, Huawei’s semiconductor arm, of an ARM-based cloud server processor was another huge endorsement, though the relationship between ARM, its Chinese affiliate ARM China, and Huawei – in the midst of US sanctions – is uncertain, especially if ARM is acquired by a US vendor.

Geopolitics is a confusing element for everyone involved in the 5G semiconductor value chain. It will put new pressure on Intel by, potentially, empowering new Chinese challengers and limiting the US giant’s access to Chinese markets. But the same applies to all US-aligned suppliers, which are, amidst these unknowables, more targeted than ever on the biggest chip buyers of all, the US webscalers.

Here, AWS’s support for ARM server architectures has been a highly significant development. The Amazon-owned webscaler acquired Annapurna Labs in 2015 and used that expertise to develop custom ARM silicon to power some of its EC2 cloud computing workloads and its SmartNICs (programmable network interface cards that offload and accelerate networking functions). The resulting technology is now the heart of AWS’s Graviton2 processor.

Other milestones include:

  • Oracle recently announced plans to use the Altra, the first 80-core server processor based on ARM designs, in its Oracle Cloud from next year. Altra was designed by start-up Ampere.
  • The world’s fastest supercomputer, Japan’s Fugaku, is powered by Fujitsu’s A64FX processors, based on 48 custom ARM v8.2 cores with SVE support. Marvell’s ThunderX2 ARM server processor has been used by Microsoft Azure.
  • Last year, Red Hat said it believed ARM was now on a par with other architectures to support its Linux and cloud software. Jon Masters, chief ARM architect at the company, said it had been a 10-year journey to make ARM “a first-class architecture along with x86, IBM Power, and IBM Z in Red Hat Linux”.

Last year, ARM announced its Neoverse N1, proclaimed as “the first Arm platform specifically designed from the ground up for infrastructure, on a roadmap committed to delivering more than 30% higher performance per generation”. And this year, it mapped out a cloud-to-edge roadmap for Neoverse, which splits the N1 product into two families of designs.

Last year’s N1 was targeted at very high compute capabilities as required in a 5G base station or a cloud server. Now it has two successors, the V1 for very compute-intensive scenarios and the N2, which will reach out to the edge of the network.

For single-threaded 64-bit applications, the V1 claims 50% higher performance than the N1, and the addition of the Scalable Vector Extension (SVE) aims to accelerate deep learning, high performance computing, dynamic spectrum sharing and other use cases that involve matrix math operations.

The N2 is targeted at a range of telco and cloud provider use cases, such as mobile edge computing, SmartNICs and scale-out servers. It targets a 30-40% performance improvement over the N1.

At the Summit, ARM committed that, from 2022, all future ‘big’ cores will support only 64-bit architectures. These are the high performance cores in its big.LITTLE architecture, which pairs high-end processing power with some very low power cores for more basic tasks, increasing overall system power efficiency.

The original Neoverse was based on the 16-nanometer Cosmos design, but ARM will be pushing forward with a new generation each year. Last year’s designs, which were codenamed Ares, were on 7nm, and now it has moved to Zeus (7nm+), and then Poseidon (5nm) next year.

ARM is still offering, and enhancing, the Neoverse E1, which is optimized for throughput performance. This is aimed particularly at telco networks and at smoothing the transition from 4G to high throughput, fully virtualized 5G infrastructure. It targets equipment ranging from 16-core SoCs running at 15W for gateways, or 35W for 5G base stations, up to 32-core versions that could run the data plane for routers with multiple 100Gbps Ethernet ports. In networking, embedded processor vendors such as NXP and Texas Instruments have largely migrated from proprietary cores to ARM.

Neoverse gives chipmakers a range of accelerators and tools so they can build diverse products by adding functions from the toolbox, or by using their own on-chip custom silicon. This is a more flexible approach than ARM takes with its device processor IP (or than Intel offers).

Rather than partners choosing between standard silicon and the expense of an architectural licence that allows them to customize the designs, the strategy for Neoverse is more nuanced, reflecting the complexity of the market and the huge variety of feature combinations to be considered. But there is also a need to incorporate a wide array of functions in the core design to meet the hefty demands of the cloud computing platform, so Neoverse comes with virtualization, performance management, reliability and serviceability support integrated.

This balance between integrating a broad range of functionality and supporting a high degree of customization is a hallmark of processor design in the 5G era. The latter has driven new interest in programmable technologies like FPGAs and flexible ASICs (Intel has made acquisitions in both, with Altera and eASIC). No longer are semiconductor providers wedded to one particular model – companies like Marvell and Intel offer every level of customization, with accompanying trade-offs of cost and flexibility, and the sweet spot increasingly sits somewhere between the dedicated but rigid and expensive ASIC and the generic processor.

“Heterogeneous computing is really where the industry is moving,” Drew Henry, ARM’s general manager for infrastructure, has said. “In this post-Moore’s Law world you need to be able to provide a very flexible design that allows our customers to really build what they want to build, and that’s why we are seeing such remarkable things happening in the silicon ecosystem.”

At DevSummit, ARM emphasized its three-pronged approach to the infrastructure market:

  • increasing compute performance of the system
  • better developer access to that performance through its software and tools
  • security protection throughout the ecosystem.

To these ends, it has added chip-level interfaces such as CCIX and CXL to ease design and integration, as well as new developer tools supporting common operating systems and container management systems such as Kubernetes.

Chris Bergey, SVP of infrastructure products, said: “In infrastructure, we’re seeing a quiet revolution in how we deliver and distribute compute. It can be distilled down to a single word: choice. The days of homogeneous, one-size-fits-all server farms powered by a single, legacy, general-purpose compute architecture are being displaced by solutions that allow greater vendor choice and flexibility in how to distribute processing at optimal points along the compute spectrum from cloud to edge. In this way, the right resources can be layered in at the right points along the spectrum.”

ARM also provided updates on its Project Cassini, an open initiative to support a common cloud-native experience on ARM-based edge clouds. The project is based on three components:

  • Standards for software development, based on a new ARM program, announced at DevSummit, called SystemReady. This is an extension of the existing ServerReady program and provides compliance certification to ensure that software can work seamlessly across a diverse range of hardware.
  • Certification and APIs (application programming interfaces) that support security requirements. For instance, Cassini supports PSA-Certified, a framework that gives an objective assessment of the quality of implementation for the device root-of-trust, and PARSEC, an open source project that provides secure root-of-trust abstraction and common runtime security services.
  • Reference solutions, developed in partnership with the ecosystem and targeting a wide variety of use cases, that support cloud-native stacks at the edge.

Of course, storage is another essential foundation of cloud/network convergence. ARM says about 85% of hard disk drive controllers and solid-state drive controllers are based on its cores, and announced its first 64-bit design in the Cortex-R family, the R82. This is designed to accelerate the development and deployment of next generation computational storage solutions, and to enable edge-focused and low latency applications. Cortex-R82 promises up to twice the performance of previous Cortex-R designs to support lower latency applications and access to up to 1TB of DRAM.