ARM hurls DynamIQ multicore design at AI and automotive

ARM has unveiled a major upgrade to its multicore processor IP family with the launch of DynamIQ, a new way of combining up to eight cores in flexible configurations tailored to the needs of a particular application. Pitching it at artificial intelligence and automotive developers, ARM is claiming a monumental shift in multicore micro-architecture, going well beyond its current core-combining approach, big.LITTLE.

Although many licensees use big.LITTLE to balance power consumption and performance by combining different types of core, some architectural licensees, such as Qualcomm, have devised their own approaches and claimed ARM’s scheme is too limited. The SoftBank-owned company will hope to silence such criticisms and incorporate DynamIQ in a large proportion of the 100bn chips it hopes to enable in the mobile, embedded, IoT and AI processor markets by 2021.

DynamIQ is essentially an evolution of big.LITTLE, in which the high-performance cores spool up only when heavy lifting is needed and the low-power cores handle ongoing activity. But the new micro-architecture opens the door to combinations that weren’t possible in big.LITTLE, such as 1+3 and 1+7 – clusters of up to eight cores, each with different power and performance characteristics.
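To illustrate the extra flexibility, here is a toy model of cluster shapes. The eight-core limit and the 1+3 and 1+7 examples come from the announcement; the validation function and core-count pairs are a hypothetical sketch, not ARM's actual configuration rules.

```python
# Toy model of DynamIQ cluster configurations (illustrative sketch only;
# the 8-core ceiling is from ARM's announcement, the validation logic
# is an assumption for demonstration, not ARM's real design rules).

MAX_CORES = 8  # a DynamIQ cluster combines up to eight cores

def valid_cluster(big: int, little: int) -> bool:
    """A cluster needs at least one core and at most eight in total."""
    return big >= 0 and little >= 0 and 0 < big + little <= MAX_CORES

# Shapes highlighted in the launch: one big core paired with three or
# seven little cores -- asymmetric mixes big.LITTLE did not allow.
for big, little in [(1, 3), (1, 7), (4, 4), (5, 4)]:
    status = "ok" if valid_cluster(big, little) else "invalid"
    print(f"{big}+{little}: {status}")
```

The point of the asymmetry is that a single big core can serve bursty workloads while many little cores soak up background work, rather than forcing matched big/little pairs.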

ARM says this enables a wider variety of designs, making the ‘right processor for the right task’ approach, kicked off with big.LITTLE, more granular and targeted. The company claims the innovation “boosts innovation in SoCs designed with right-sized compute, with heterogeneous processing that delivers meaningful AI performance at the device itself.”

The big pitch for ARM is addressing the demand for AI processing power, and to this end it has added new AI and machine-learning (ML) processor instructions that it claims will deliver a 50-fold boost in AI performance over the next 3-5 years, compared with today’s Cortex-A73 systems, and up to 10 times faster responses between the CPU and the rest of the SoC, thanks to new accelerator hardware.
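For a sense of scale, a 50-fold gain over three to five years implies a steep compound annual improvement. A quick back-of-the-envelope calculation (the 50x claim is ARM's; the per-year breakdown is our arithmetic):

```python
# Implied compound annual improvement for a 50x cumulative gain.
# ARM claims 50x over 3-5 years; the annualized figures below are
# simply the nth root of 50 for each horizon.
TARGET = 50.0

for years in (3, 4, 5):
    annual = TARGET ** (1 / years)
    print(f"over {years} years: ~{annual:.2f}x per year")
```

Even on the generous five-year horizon, that is more than a doubling of AI throughput every year – well beyond what process shrinks alone deliver, which is why the claim leans on new instructions and accelerators rather than raw clock gains.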

The accelerators will provide hardware-based AI and ML functions, which run far more quickly in silicon than they would in software on a general-purpose processor. A redesigned memory subsystem may also help here, although further details haven’t been released yet. ARM is making a big deal of how easily the accelerators can be integrated into its designs.

The other target for the push is autonomous systems, with a particular focus on ADAS (Advanced Driver Assistance Systems) and safety monitoring. Judging by the announcement, ARM is leaning on the accelerator hardware capabilities rather than the AI features here, but fail-safe features are also prominent in the launch materials.

As for design upgrades, ARM says the new features make it easier for devices to stay within the thermal constraints imposed by their form factor – running fewer cores at high speeds should generate less waste heat from the processor. This is good news for smartphones, wearables and VR/AR devices, which all rely on passive cooling to dissipate the heat from their cores, and have to throttle their processors when that thermal envelope is exceeded.

“Today we are giving you a glimpse of what’s ahead for the Cortex-A processors that will help to power the next 100bn ARM-based chips. ARM is uniquely positioned to transform and accelerate compute solutions such as AI wherever compute happens, as it is the only common compute architecture from sensor to server,” said Nandan Nayampally, GM of ARM’s Compute Products Group.

ARM is betting big on this ‘sensor to server’ approach, where its only real competitor all along the chain is Intel. In smartphones, it’s hard to see another architecture becoming top dog; and ARM has advantages in the very low power end of things, although there are also more efficient architectures optimized for even smaller, more power-constrained devices.

At the high end, though, while there are ARM servers, x86 (and specifically Intel) is still absolutely dominant in the data center – in both private and public cloud. In the latter, Google, Amazon and Facebook may anoint alternatives to Intel, but those new power-efficient webscale architectures are not guaranteed to be ARM-based. Some cloud server players are pro-ARM, notably Microsoft and Canonical, but in the short term, the market for network edge computing, gateways and fog/MEC appliances should be a more natural prospect for ARM’s licensees.

Gateways, and micro-servers to support the distributed and mobile cloud, are greenfield opportunities for ARM-based chips, sorting the data collected by IoT sensor networks or connected machinery before sending only the necessary data off to cloud applications, to improve responsiveness and reduce latency.

However, some question the need for a common processor architecture from end to end, though it does bring benefits such as a common code base. Numerous IoT initiatives are working on streamlining the transmission of data, and for internet applications, once data is in an IP packet, the silicon architecture underneath is something of a moot point. Most IoT companies make a big deal of being hardware agnostic, so ARM’s focus on this as a selling point seems a little misaligned.

ARM notes that, as it passed the 100bn-unit milestone this year, 50bn of those shipments came between 2013 and 2017, “demonstrating the industry’s insatiable demand for more compute,” according to Nayampally. ARM expects to hit the 200bn milestone by 2021.

The breakdown of these shipments shows how demand has changed since 2013. Before then, 43bn Classic ARM chips had shipped, but since 2013 that family’s sales stand at just 17bn. Cortex-A (high-compute requirements) shipped 3bn pre-2013 but has surged to 9bn since. Cortex-R (real-time processors) has shifted from 1bn to 3bn, but the biggest change comes from Cortex-M (ARM’s microcontrollers), which rocketed from 3bn pre-2013 to 21bn since.
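Those figures are internally consistent with the 100bn milestone: each half of the timeline sums to 50bn. A quick check, using the numbers as reported:

```python
# Shipment figures as reported, in billions of units: (pre-2013, since 2013).
shipments = {
    "Classic ARM": (43, 17),
    "Cortex-A":    (3, 9),
    "Cortex-R":    (1, 3),
    "Cortex-M":    (3, 21),
}

pre_2013 = sum(before for before, _ in shipments.values())
since_2013 = sum(after for _, after in shipments.values())

# Both halves come to 50bn, matching the 100bn cumulative milestone
# and the 50bn shipped between 2013 and 2017.
print(f"pre-2013: {pre_2013}bn, since 2013: {since_2013}bn, "
      f"total: {pre_2013 + since_2013}bn")
```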

By 2035, ARM expects there to be 275bn IoT devices, with just 13.5bn mobile devices and a little over 2bn PCs and tablets.