ARM’s server processor push gets boost from Amazon’s Graviton 1

The webscale businesses have, in recent years, become increasingly engaged in developing their own hardware – first their own Intel-based server designs to drive down their costs, as in Facebook’s Open Compute Project; then starting to experiment with their own processors to underpin those servers. The threat of some of these processors being ARM-based rather than x86-based has hung over Intel, which dominates the data center chip space, but which is heavily reliant on the cloud giants to keep that business growing.

Google was said to have been in trials with Qualcomm for a customized ARM-based server processor – though the apparent relegation of that project to Qualcomm’s back burner suggests Google decided not to proceed. Now, Amazon AWS has unveiled its A1 Graviton processors, which it will use in its own data centers and also offer to customers as an alternative to Intel Xeon solutions.

Amazon bought Annapurna Labs, an IoT-focused ARM system-on-chip designer, back in 2015. In time, Annapurna designed the Nitro SoC, which AWS used for networking and storage functions in its EC2 cloud systems – offloading work from the x86 chips in the servers. Now it seems that Annapurna’s designs have evolved enough to replace the x86 chips entirely.

This is bad news for Intel, especially after Microsoft said that it would eventually like 50% of its Azure servers to be ARM-powered. Marvell recently acquired Cavium, another prominent ARM data center designer, and will be pushing the Cavium ThunderX2 designs as well.

Intel’s x86 frenemy, AMD, is also pushing forward with a new generation of server chips called Epyc. AWS does offer Epyc configurations in EC2, although it is not clear how the A1 chips compare to the AMD parts on performance. Apparently, AMD was in talks with AWS to build the ARM-based Graviton line, but that partnership fell apart.

Currently, A1 Graviton instances are priced more cheaply at AWS than comparable x86 ones, which could help AWS negotiate lower prices from Intel.

For now, the A1 is available in AWS’ EC2 instances, in configurations ranging from 1 virtual CPU (vCPU) with 2GB of RAM all the way up to 16 vCPUs with 32GB of RAM. Pricing ranges from $0.0255 per hour for the smallest configuration, called ‘a1.medium,’ to $0.408 per hour for the big ‘a1.4xlarge’ setup – which seems to be four quad-core chips in a single instance rather than a single 16-core chip. At those prices, AWS says the instances are up to 45% cheaper than comparable x86 virtual machines.
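As a quick sanity check on those figures, the two published price points scale exactly linearly with vCPU count, which makes per-vCPU comparisons straightforward. The short Python sketch below works that out from the numbers quoted above; the 730-hour month is a common back-of-envelope approximation, not an AWS figure.

```python
# Sanity check on the A1 price points quoted above (a sketch, not official
# AWS pricing data - always confirm against the current EC2 price list).

HOURS_PER_MONTH = 730  # rough average used for monthly estimates (assumption)

a1_instances = {
    # name: (vCPUs, RAM in GB, on-demand USD per hour, per the article)
    "a1.medium":  (1,  2,  0.0255),
    "a1.4xlarge": (16, 32, 0.408),
}

for name, (vcpus, ram_gb, hourly) in a1_instances.items():
    per_vcpu = hourly / vcpus
    monthly = hourly * HOURS_PER_MONTH
    print(f"{name:11s} {vcpus:2d} vCPU / {ram_gb:2d}GB  "
          f"${hourly:.4f}/hr  ${per_vcpu:.4f}/vCPU-hr  ~${monthly:.2f}/month")

# Both tiers work out to $0.0255 per vCPU-hour, i.e. the a1.4xlarge is priced
# as exactly 16x the a1.medium.
```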

Of course, actual costs depend on the workload: if the software runs faster on x86, the total bill may still be lower than on the cheaper-per-hour ARM instances, as the break-even sketch below illustrates. That gives the Intel line an early advantage, but if software companies begin optimizing for ARM, it could be eroded quickly.
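The sketch below makes that trade-off concrete. The x86 hourly price is simply back-calculated from the "45% cheaper" claim, and the runtimes and slowdown factors are illustrative assumptions, not measured figures.

```python
# Illustrative break-even sketch: the x86 hourly price and the slowdown
# factors below are assumptions for the sake of the example, not published
# or benchmarked figures.

ARM_HOURLY = 0.408          # a1.4xlarge, 16 vCPUs (from the article)
X86_HOURLY = 0.408 / 0.55   # hypothetical x86 price if the A1 is "45% cheaper"

def job_cost(hourly_rate, runtime_hours):
    """Total bill for a batch job: hours consumed times the hourly rate."""
    return hourly_rate * runtime_hours

x86_runtime_hours = 10.0  # assumed runtime of the job on the x86 instance

for slowdown in (1.0, 1.3, 1.8, 2.0):
    arm_cost = job_cost(ARM_HOURLY, x86_runtime_hours * slowdown)
    x86_cost = job_cost(X86_HOURLY, x86_runtime_hours)
    winner = "ARM" if arm_cost < x86_cost else "x86"
    print(f"ARM runtime {slowdown:.1f}x x86: "
          f"ARM ${arm_cost:.2f} vs x86 ${x86_cost:.2f} -> {winner}")

# With a 45% per-hour discount, the ARM instance stops being cheaper once the
# workload runs more than about 1.82x slower than on x86 (1 / 0.55).
```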

One of Intel’s saving graces is the difficulty that many companies run into when emulating x86 and Windows on ARM chips. However, modern code should handle the shift to ARM well enough. Amazon is offering the A1 with Amazon Linux 2, Ubuntu, and Red Hat Enterprise Linux 7.6, though for firms that need a Windows environment, things might not be so smooth.

The A1 chips appear to be based on the Cortex-A72 core. They implement the ARMv8-A architecture, running the AArch64 instruction set. That should mean most modern programs can be ported, and there is also support for legacy 32-bit applications via AArch32.
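For code that needs to know which architecture it has landed on – build scripts choosing a binary or package, for instance – a check like the minimal Python sketch below can help, assuming a standard Linux environment where 64-bit ARM identifies itself as "aarch64".

```python
import platform

# Report the CPU architecture the interpreter is running on. On an A1 instance
# running one of the supported Linux distributions this is expected to return
# "aarch64"; on a Xeon- or Epyc-based instance it returns "x86_64".
machine = platform.machine()

if machine == "aarch64":
    print("Running on 64-bit ARM (AArch64) - use ARM-native packages/binaries.")
elif machine == "x86_64":
    print("Running on x86-64 - use the usual Intel/AMD builds.")
else:
    print(f"Unrecognised architecture: {machine}")
```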

One thing to bear in mind is that the physical chip currently appears to be just a quad-core design. Unless there is something pretty revolutionary happening in the motherboards, that limits the Graviton line to four physical cores per socket. By comparison, Intel and AMD both have options for 16 physical cores, offering 32 processing threads thanks to hyperthreading (simultaneous multithreading, in AMD’s case), and the more extreme designs have 32 physical cores for 64 threads (32C/64T).
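Put side by side, the arithmetic is simply physical cores multiplied by hardware threads per core, as the short sketch below tabulates. The one-thread-per-core figure for the A1 row is an assumption based on the Cortex-A72, which does not support SMT.

```python
# Threads per socket = physical cores x hardware threads per core. The A1 row
# assumes one thread per core (Cortex-A72 has no SMT); the x86 rows use the
# 16- and 32-core configurations cited above with 2-way SMT.
configs = [
    ("AWS A1 (Cortex-A72, assumed)", 4,  1),
    ("x86 mid-range (16C)",          16, 2),
    ("x86 high-end (32C)",           32, 2),
]

for name, cores, threads_per_core in configs:
    print(f"{name:32s} {cores:2d} cores x {threads_per_core} = "
          f"{cores * threads_per_core:2d} threads per socket")
```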

There are potential benefits for networking resources when it comes to linking elements to PCIe lanes on the motherboards. The ARM designs appear to enable far more PCIe options, so more peripherals could fit onto a board.

As for ARM itself, the AWS announcement comes not long after the SoftBank-owned processor IP designer unveiled Neoverse – the new brand for its data center offerings, segmented out to prevent confusion with the Cortex CPU line. Neoverse is currently on the 16nm Cosmos design, but ARM says it will be pushing forward with a new generation each year, moving to Ares (7nm), Zeus (7nm+), and then Poseidon (5nm).

ARM says that a million Neoverse servers will ship in 2018, and that they are proving popular in “a new class of cloud servers that manage the networking, storage, and security workloads” – in other words, what the AWS Nitro chips were doing. Now that the A1 Graviton is on the scene, the Neoverse platform can begin to pursue the application processor workloads too.

Currently, the AWS EC2 A1 instances are being targeted at scale-out workloads, which include containerized microservices, as well as web servers, development environments, and caching. In time, we expect to see IoT workloads cropping up as standalone elements too.

In the meantime, it is going to be interesting to see how the AWS announcement alters the cloud computing marketplace. Intel and ARM are partners, working together on ARM’s Pelion IoT platform, and while Intel might be playing nicely, the competition between the two could get very bitter very quickly.