When Intel acquired Altera, one of the two leading providers of FPGAs, it signalled the new prominence of this technology to support heavy duty processing tasks in the data center, cloud and telco network. But in the carrier world, it has been Altera’s arch-rival Xilinx that has been more agile, working closely with Qualcomm on its server offering, and with Amazon AWS on an FPGA cloud platform which could support telcos in future.
The FPGA (field programmable gate array) can be customized to perform specific functions very well. In the mobile world, it is used by Intel, Qualcomm and others to take on specialized or high-performance tasks in many platforms from the 5G modem to the Cloud-RAN server.
This is part of an acknowledgement that the general purpose CPU, however powerful, cannot cope optimally with all the diverse and compute-intensive tasks which go with telco networks and the migration to 5G levels of performance.
As the demanding functions of a mobile network move to run as virtual machines on servers, and as new tasks such as AI-enabled analytics enter the operator’s toolkit, those virtualized networks will need to run on a combination of CPUs, GPUs (graphical processing units), FPGAs, digital signal processors and emerging specialized chips for tasks like neural networking (such as Google’s Tensor Processing Unit, which Google says saved it from building a dozen new data centers to support machine learning at scale).
Xilinx is the key FPGA partner for Qualcomm (and others, now that Altera is no longer independent). It recently announced a partnership with Amazon AWS which could provide FPGA-enabled solutions on an as-a-service basis for data centers and operators. Many companies struggle to adopt FPGAs inhouse because of the shortage of skills to program them. Now AWS Marketplace will add Amazon FPGA Images (AFIs), developed with Xilinx FPGAs and tools, to its Amazon Machine Images (AMIs), enabling developers to offer FPGA-accelerated platforms as a cloud service. Customers no longer need to invest in specialized hardware and skills, but can instead configure and pay for an Amazon F1 instance.
The first three services which will be available in this way are accelerated video encoding and AR/VR processing for cloud video services, designed by NGCODEC; cloud-based advanced query services from RYFT; and a version of the genomic analytics platform from Edico Genome. All of these workloads are better suited to FPGA acceleration than to GPUs.
While the initial targets are cloud service developers which want to accelerate their services without investing in FPGA technology, it is not hard to see how this system could also support hosted services for smaller operators, daunted by the infrastructure and skills required to implement Cloud-RAN or an AI-enabled network.
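The consumption model is the same as for any other EC2 server. As a rough sketch (the AMI id and key pair below are placeholders, and boto3 is assumed as the client library), the launch request for an FPGA-backed instance differs only in its instance type:

```python
# Sketch: building launch parameters for an FPGA-backed EC2 instance.
# The AMI id and key pair are placeholders, not real resources.

def f1_launch_params(ami_id: str, key_name: str) -> dict:
    """Parameters for EC2 run_instances targeting the F1 family."""
    return {
        "ImageId": ami_id,             # an AFI-backed Amazon Machine Image
        "InstanceType": "f1.2xlarge",  # smallest F1 size, one FPGA
        "MinCount": 1,
        "MaxCount": 1,
        "KeyName": key_name,
    }

params = f1_launch_params("ami-0123456789abcdef0", "my-keypair")
# With credentials configured, this dict would be passed to
# boto3.client("ec2").run_instances(**params).
```

The point of the model is that the FPGA expertise lives in the AFI, supplied by the developer; the customer only chooses an instance size and pays by the hour.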
As for Intel, it recently elaborated on its strategy for incorporating Altera’s FPGAs into its portfolio, two years after it acquired the firm for $16.7bn. It will effectively offer two options for massive computation, both as complements to its Xeon CPUs and in standalone mode. These are the FPGAs, and Intel’s Xeon Phi add-in cards, which compete with GPUs from companies like Nvidia and AMD. Intel has far weaker GPU capabilities than Nvidia, which has put it at a disadvantage in some markets, especially AI/neural networking platforms. It is now hitting back with an alternative approach built around Altera, so while Xeon Phi can be seen as a defensive offering, the FPGAs are far more aggressive and strategic in sectors like AI and Cloud-RAN. Intel is already arguing that they make more versatile, optimized coprocessors than Xeon Phi.
The Intel FPGAs will be used either inline (the data bypasses the CPU altogether, with the FPGA sitting directly in the data path) or offload (the data is moved from the CPU to the FPGA for processing).
Bernhard Friebe, senior director of software solutions in the Intel Programmable Solutions Group, said: “The advantage for FPGA is GPUs play in some areas but not all, and if you look at the use model of inline vs. offload, they are limited to offload mostly. So, there’s a broader application space you can cover with FPGA.”
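The inline/offload distinction Friebe draws can be illustrated with a toy model (plain Python, not Intel’s API; accelerate() stands in for whatever function the FPGA implements, here simply doubling values):

```python
# Toy model of the two FPGA usage patterns. accelerate() stands in
# for the function the FPGA implements (here: doubling values).

def accelerate(buf):
    return [x * 2 for x in buf]

def offload_path(buf):
    # Offload: the CPU owns the data, ships a buffer to the
    # accelerator, and waits for the result (the GPU-style model).
    staged = list(buf)          # CPU-side staging copy
    return accelerate(staged)   # round trip to the accelerator

def inline_path(stream):
    # Inline: the accelerator sits in the data path (e.g. an FPGA
    # on the NIC); the CPU only ever sees already-processed data.
    for packet in stream:
        yield from accelerate([packet])

assert offload_path([1, 2, 3]) == [2, 4, 6]
assert list(inline_path(iter([1, 2, 3]))) == [2, 4, 6]
```

The results are identical, but only the inline path avoids the CPU round trip, which is the “broader application space” Friebe is pointing at.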
Intel will offer an integrated solution with tight coupling between CPU and FPGA, targeting both ultra-low latency and high bandwidth applications. In some cases this will be based on a hybrid processor with the Altera Arria 10 FPGA integrated in the same package as a Skylake generation Xeon CPU. The two will be linked by Intel’s new UltraPath Interconnect (UPI) technology, with claimed data transfer rates of up to 10.4GT/s (gigatransfers per second).
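To put the UPI figure in context, a back-of-envelope conversion (assuming two bytes of payload per transfer per direction, a common working figure rather than an Intel specification) puts the link at roughly 20.8GB/s each way:

```python
# Back-of-envelope UPI bandwidth: transfers/s x bytes per transfer.
# The 2-byte payload width is an assumption for illustration.
transfers_per_s = 10.4e9    # 10.4 GT/s
bytes_per_transfer = 2      # assumed effective payload width
bandwidth_gb_s = transfers_per_s * bytes_per_transfer / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s per direction")
```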
It will also offer discrete FPGA devices on a PCI Express card. Intel is providing a developer toolset and APIs for both integrated and discrete products using the same tools, accelerators and libraries, all written in OpenCL.
These FPGA and hybrid offerings are among Intel’s most powerful weapons for penetrating the mobile network market. They will help to reassure operators who do not believe a Cloud-RAN is viable because its functions cannot be fully supported by general purpose processors. And over time, they could enable Intel to support massive platforms which will host RANs in the cloud, as Amazon AWS seems poised to do in future – but also, potentially, Microsoft Azure, which uses Intel FPGAs in its Brainwave platform for neural networking.
As well as servers, FPGAs are an important element of base stations, and of the virtualized version, Cloud-RAN. Intel has a base station platform built around its Xeon processor with accelerators optimized for signal processing. Prototypes based on FPGA chips were used in China Mobile’s huge C-RAN market trial, for instance. Altera signed a strategic collaboration with the China Mobile Research Institute (CMRI) in 2014, focused on the future needs of 5G with regards to virtualization and FPGAs.
Intel has also talked about producing a full base station platform, though to date it has mainly worked with partners to get x86 processors into cell site equipment, as seen in its alliance with NSN to create the RACS/Liquid Apps offering.
Altera should help stave off the ARM-based challenge in cloud servers and carrier infrastructure. That challenge looks more convincing the closer it gets to the network itself – ARM designs already have a significant installed base in base stations and other infrastructure. Cavium is the leading light in heavy duty ARM-based processors for network and cloud infrastructure, though Qualcomm’s first server platform will aim to eclipse it.
The key to Cavium’s progress is to ignore any sense that all cloud infrastructure processors are the same, or even ‘COTS’ in the real sense. Instead, this is about using a uniform platform with standardized interfaces to improve the economics, and then adding optimized elements to address very specific workloads and drive maximum performance. This view will be essential for any company targeting the carrier network of the future.
Xilinx and Samsung invest in Efinix:
Xilinx and Samsung Ventures are among the participants in a $9.5m funding round for Efinix, a developer of programmable silicon based in Silicon Valley. The company, founded in 2012, has raised $16m to date to support its Quantum technology, which it says can deliver four times better performance than traditional FPGAs.
Quantum is based on what Efinix calls an XLR (exchangeable logic and routing) cell, which can function as either a look-up table (LUT)-based logic cell or a routing switch. Efinix claims this improves active area utilization fourfold compared with traditional FPGAs, resulting in up to four times greater area efficiency and half the power consumption.
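The fourfold claim follows directly from the utilization figure. As a rough illustration (the 25% baseline is an assumption for the sake of the arithmetic, not an Efinix number): if a traditional FPGA devotes only about a quarter of its fabric to active logic, with the rest consumed by fixed routing, then cells that can serve as either logic or routing push utilization toward 100% of the same die area:

```python
# Illustrative arithmetic behind the "4x area efficiency" claim.
# The 25% baseline utilization is an assumed figure, not Efinix data.
traditional_utilization = 0.25  # share of fabric doing active logic
quantum_utilization = 1.00      # XLR cells usable as logic or routing

improvement = quantum_utilization / traditional_utilization
print(improvement)  # 4.0
```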
Efinix is currently developing silicon products based on Quantum and expects to begin sampling in December.
The funding round was led by Xilinx and Hong Kong X Technology Fund, an investment firm supported by Sequoia Capital China and focused on fast-growing tech firms. Samsung Ventures, Hong Kong Inno Capital and Brizan Investments also participated.
Sammy Cheung, co-founder and CEO of the start-up, said: “High volume applications and markets are prime targets for our Quantum-accelerated products.” He believes the technology can move into applications which are not currently addressed by FPGAs because they demand lower cost and greater power efficiency.