The push for open, and even open source, platforms is extending right down to the semiconductor layer, not just in data center infrastructure, but in the telecoms network.
The established licensable IP providers, such as ARM and MIPS for processors and CEVA for digital signal processors (DSPs), are seeking to move into Intel's territory in cloud infrastructure while extending their influence in the 5G network and devices.
In December, Wave Computing announced that it would offer the MIPS architecture, which it now owns following the break-up of former owner Imagination Technologies, as open source code with no royalties or proprietary licensing. Other open source chip projects – admittedly further from mainstream commercial products – include the FOSSi (Free and Open Source Silicon) Foundation, LibreCores and OpenCores.
RISC-V, an open source instruction set architecture established in 2010, targets use cases from ultra-low-power real-time chips to communications infrastructure. More recently, the Linux Foundation, an increasingly powerful force in telecoms, set up the CHIPS Alliance, with Google as one of the founders. This is designed to accelerate and encourage the adoption of open platforms – particularly RISC-V – by creating open building blocks for different target use cases. These include processors and system-on-chip (SoC) designs suited to infrastructure elements such as baseband units, switches and edge nodes.
Hard on the heels of that Google-influenced development (see Wireless Watch March 20 2019) comes another with the hand of Facebook behind it. Both web giants have a keen interest in driving down the cost of cloud infrastructure by pushing open ecosystems, but their efforts are now extending into the network, notably with Facebook's Telecom Infra Project (TIP). The new grouping, however, sits under Facebook's older, cloud-focused initiative, the Open Compute Project (OCP), though it has clear implications for telcos moving towards disaggregated networks and cloud-deployed basebands.
The new group is called Open Domain-Specific Architecture (ODSA) and claims 53 members, working under the auspices of the OCP. It was set up last October by Netronome and seven other chip companies, and is now going public with its roadmap and its first white paper. The other founders are Global Foundries, FPGA maker Achronix, Dutch chip giant NXP, packaging and test specialist Sarcina, plus start-ups Kandou, zGlue and SiFive.
It released its initial white paper and held its first workshop early this year, and is now sharing its next steps as well as its alliance with the OCP, which should give it significantly more heft in the market. It will initially focus mainly on the data center, hence the OCP tie-up, but it will also start targeting mobile network/5G and edge computing use cases later this year.
It is focused on the fashionable technology of 'chiplets', which disaggregates the functions of an SoC so that each component targets a specific task, in a bid to reduce the cost of accelerator chips for complex workloads and counteract the effects of a slowdown in Moore's Law. The latter is urgent: growth in server performance is slowing just as data volumes and workload complexity rise, with the adoption of technologies like virtualized RAN, machine learning and robotics.
The main way to mitigate the challenges that server processors, such as Intel's Xeon, face in supporting very demanding applications like AI and vRAN is to offload the more intensive processes to accelerators, which may be based on a variety of chips including FPGAs (field programmable gate arrays), DSPs and GPUs (graphics processing units). Even more specialized silicon is emerging, such as Google's TPU (tensor processing unit) for high end cloud workloads such as AI/ML. A common accelerator in telecoms infrastructure is the smartNIC (smart network interface card), which offloads network traffic processing from the central processor.
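The offload pattern described above can be sketched in miniature. The following is an illustrative example only, not any vendor's actual software: a host dispatcher hands an intensive job (here, a toy IP-style packet checksum, the kind of task a smartNIC handles in hardware) to an accelerator backend when one is present, falling back to the CPU otherwise. All class and function names are hypothetical.

```python
# Sketch of accelerator offload, assuming hypothetical backend classes.
def ones_complement_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, as used in IP/TCP checksums."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length payloads
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

class CpuBackend:
    name = "cpu"
    def checksum(self, data: bytes) -> int:
        return ones_complement_checksum(data)

class SmartNicBackend:
    """Stand-in for hardware offload; real smartNICs do this in silicon."""
    name = "smartnic"
    def checksum(self, data: bytes) -> int:
        return ones_complement_checksum(data)  # simulated in software here

def dispatch(backends, data: bytes):
    # Prefer an accelerator when one is present, else stay on the CPU.
    backend = next((b for b in backends if b.name != "cpu"), backends[0])
    return backend.name, backend.checksum(data)

if __name__ == "__main__":
    where, csum = dispatch([CpuBackend(), SmartNicBackend()], b"hello world")
    print(where, hex(csum))
```

The point of the pattern is that the host code is unchanged whether the work runs on the CPU or an accelerator; only the dispatch decision differs.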
Netronome, an early mover in the smartNIC space, said the formation of the ODSA was sparked by a rising number of customer requests for domain-specific accelerators (DSAs) – programmable chips which can be optimized for highly specialized workloads. The company's director of silicon architecture program management, Bapi Vinnakota, said the interest in DSAs has been partly driven by Google's work on the TPU, a chip specifically designed to accelerate machine learning and neural network workloads.
But all these accelerators push up the cost and complexity of the platform, which in a market like vRAN will make it tough for operators to achieve one of the key objectives of disaggregating their networks – reduced cost of ownership. The ODSA believes the answer is to push disaggregation right down to the silicon and break the chip into multiple chiplets, each with a subset of capabilities, as required by a specific task.
The ODSA hopes an open architecture for chiplets will accelerate adoption of this approach across the cloud and networking industries, boosting innovation and price competition. The overarching aim is to improve the economics of developing accelerator chips, especially for smaller companies, by creating open interfaces that will connect chiplets from different vendors, Lego-style. The ODSA also plans to develop technology to connect its DSA chiplets to other ASICs and programmable chips, to avoid having to develop every part of the chip from scratch.
“Today all multichip interfaces are proprietary. What we want as a group is to make an open interface, so you can assemble a best-of-breed chip,” said Vinnakota. “The challenge is how to make them talk to one another almost as efficiently. You need these chiplets to share an architectural interface and to believe they’re all part of the same chip. Until now that’s been entirely proprietary. We’re going to make that architectural interface completely open. If you can make it open, you can assemble chips from multiple vendors.”
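The "best-of-breed" assembly Vinnakota describes can be illustrated with a software analogy. This is a hedged sketch, not the ODSA specification: the class and vendor names are invented to show the principle that once every chiplet implements one shared architectural interface, a package can be composed from parts supplied by different vendors, with work routed by capability.

```python
# Hypothetical chiplet-interface sketch (not ODSA's actual design).
from abc import ABC, abstractmethod

class Chiplet(ABC):
    """The shared interface every vendor's chiplet must implement."""
    @abstractmethod
    def capabilities(self) -> set: ...
    @abstractmethod
    def process(self, workload: str, payload: str) -> str: ...

class VendorA_MLChiplet(Chiplet):
    def capabilities(self):
        return {"ml-inference"}
    def process(self, workload, payload):
        return f"A:{workload}:{payload}"

class VendorB_PacketChiplet(Chiplet):
    def capabilities(self):
        return {"packet-processing"}
    def process(self, workload, payload):
        return f"B:{workload}:{payload}"

class Package:
    """Assembles chiplets behind one interface; routes work by capability,
    as an on-package interconnect fabric would route transactions."""
    def __init__(self, chiplets):
        self.chiplets = chiplets
    def run(self, workload, payload):
        for c in self.chiplets:
            if workload in c.capabilities():
                return c.process(workload, payload)
        raise ValueError(f"no chiplet offers {workload!r}")

pkg = Package([VendorA_MLChiplet(), VendorB_PacketChiplet()])
print(pkg.run("packet-processing", "frame-42"))
```

The design choice mirrors the group's stated goal: the `Package` never cares which vendor supplied a chiplet, only that it speaks the common interface.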
DSAs are currently costly to develop and only target a narrow market, compared to a CPU, so open designs are important to broaden the ecosystem; otherwise the market is likely to be left to the giants which are already developing chiplets, such as Intel, Marvell and Xilinx. Intel is trying to drive an ecosystem for chiplets based around its own architecture, EMIB (Embedded Multi-die Interconnect Bridge), as is Marvell with its MoChi. Intel released the AIB (Advanced Interface Bus) protocol for EMIB into open source last summer as part of a chiplets project for the US defense research agency DARPA.
Each founding ODSA member will contribute different expertise but Netronome is pivotal, since it is donating the intellectual property from its Open Network Flow Processor (NFP) architecture as the foundation of the platform the group promises to develop this year – in particular the 800Gbps fabric used in its multicore network processors. Achronix is taking the lead on the first proof-of-concept and On Semiconductor is providing expertise on power and thermal issues.
At the workshop, members proposed "starting simply" with a physical layer interface or a "bunch of wires", which would be followed later by other interfaces. These might include CCIX (Cache Coherent Interconnect for Accelerators), the open specification which extends the PCIe standard and simplifies communication between a central processor and accelerators. Another option could be TileLink, from the RISC-V ecosystem.
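The "start simple, layer up" approach can be pictured as a protocol stack. The sketch below is purely illustrative, with hypothetical class names rather than ODSA's actual design: a bare "bunch of wires" physical layer at the bottom, a framing layer above it, and a CCIX- or TileLink-style transaction layer on top, each wrapping the layer beneath.

```python
# Illustrative protocol layering (hypothetical classes, not the ODSA stack).
class BunchOfWires:
    """Bare physical layer: moves raw bytes, nothing more."""
    def send(self, data: bytes) -> bytes:
        return data  # on real silicon this would drive the parallel wires

class LinkLayer:
    """Adds framing on top of the raw PHY."""
    def __init__(self, phy):
        self.phy = phy
    def send(self, data: bytes) -> bytes:
        framed = b"\x7e" + data + b"\x7e"  # toy frame delimiters
        return self.phy.send(framed)

class TransactionLayer:
    """Read/write transactions over the link, in the spirit of CCIX/TileLink."""
    def __init__(self, link):
        self.link = link
    def write(self, addr: int, value: int) -> bytes:
        payload = addr.to_bytes(4, "big") + value.to_bytes(4, "big")
        return self.link.send(payload)

stack = TransactionLayer(LinkLayer(BunchOfWires()))
print(stack.write(0x1000, 42))
```

Each layer can be swapped independently, which is the practical appeal of starting with the simplest possible PHY and adding richer interfaces later.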
ODSA plans to create a proof-of-concept based on PCIe before the end of the year and by then, it will also have defined its PHY, protocol and other specifications, with a view to seeing commercial implementations next year.
The group will also work on business models and value propositions for different industry sectors, and define a test and certification program.