One of the features of the new open ecosystem for telco networks is the rising use of merchant silicon rather than custom ASICs, to drive the new economics of white box servers, switches and routers (see separate item). However, off-the-shelf chips come with trade-offs, and even Intel has backed away from the idea that a general purpose processor, even a powerful beast like a top end Xeon, can support all the specialized functions and demanding requirements of a carrier network by itself.
Enter the FPGA (field programmable gate array), which can be programmed for particular functions, without going the full custom route. In the telco environment, FPGAs are increasingly used to support specialized coprocessors that can offload the most demanding tasks from the CPU. Since much of the work is still done by a standard processor powering a white box, the operator can get closer to the economics of COTS (commercial off the shelf) hardware than with proprietary boxes based on ASICs, but without sacrificing the required performance.
This trend has made FPGAs very strategic to chip vendors targeting the high performance computing, telecoms network and webscale sectors. Intel acquired the market leader Altera, and Qualcomm has been repeatedly linked to its largest rival, Xilinx, which partners with the mobile chip giant on its new server system-on-chip. The combination of Snapdragon processors and Xilinx FPGAs aims to convince OEMs and operators that a chip vendor from a device heritage can offer a platform capable of running a demanding application like a Cloud-RAN, and take on the silicon suppliers from the traditional server or switch-chip spaces, like Cavium, Broadcom and, of course, Intel.
With its eye on the expanding market for FPGAs in demanding compute and communications markets, Xilinx has unveiled its latest offering, Everest, which it claims cost $1bn to develop as it bids to close the gap with Intel/Altera, playing on a big boost in performance as well as its still-independent credentials. The headline claim is an increase in performance per watt of between 10 and 100 times compared to a conventional CPU, with more adaptability than a GPU (graphics processing unit) or an ASIC.
Everest is part of the new Adaptive Compute Acceleration Platform (ACAP) portfolio, Xilinx’s attempt to bring the same level of adaptability to hardware as software. Key to this is the silicon package that combines the new FPGA with real time processors and application processors, as well as all the required I/O – all in an optimized layout that frees up space for more programmable silicon in the Everest footprint.
This package shows that ACAP is not a pure FPGA offering, as it includes multiple silicon components that might normally be dedicated chips themselves. The design is an evolution of Xilinx’s previous strategy of adding more dedicated functions to the FPGA design, such as HBM memory controllers and ARM cores for running specific applications.
Xilinx is hoping to attract more software developers to take on the complexity of programming for FPGAs, and so expand beyond its core hardware developer base into new use cases. New software libraries are being offered to ease the path, with Xilinx hoping to make it as easy as possible to get a TensorFlow developer on board, and so make its FPGAs as appealing as products like Google’s Tensor Processing Unit in high growth sectors like AI. If the ease-of-development issues can be addressed, FPGAs can score over dedicated chips because they can be molded to fit any AI task.
Key to this change is adoption by software engineers. Most don’t have the skill set to program for today’s FPGAs, and so there will be an arms race between Intel and Xilinx to create a friendly development environment. Xilinx says it wants to get FPGAs to the point where they can be viewed as “just another PCIe co-processor, like a GPU”, instead of requiring a hardware engineer to get anything out of them. That means providing a software library to allow a developer to configure the FPGA, without having to learn an entirely new Hardware Description Language (HDL) first.
In an interview with Anandtech, new CEO Victor Peng said that he has team members who believe Xilinx’s software libraries and APIs are easier to program than Nvidia’s CUDA, and added that Xilinx has enabled Python as a programming language – prioritizing its availability over C or C++ (which are both now also supported) because younger programmers are more familiar with Python. He added that Xilinx would stick with ARM cores for now, rather than open source RISC designs, because the ARM architecture has most support.
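The library-first approach Peng describes can be illustrated with a short sketch. The `FpgaAccelerator` class, its `load_bitstream` method and the software stand-in below are all hypothetical names invented for illustration – this is not Xilinx’s actual API (its real Python entry point is the PYNQ framework) – but it shows the idea of treating the FPGA as just another co-processor behind a Python interface, with the HDL hidden inside a prebuilt bitstream.

```python
# Hypothetical sketch: an FPGA exposed as "just another co-processor"
# behind a Python library. All names here are invented for illustration;
# this is not Xilinx's actual API.

class FpgaAccelerator:
    """Models a reconfigurable device: load a prebuilt bitstream
    (the compiled HDL), then call the offloaded kernel like a function."""

    def __init__(self):
        self._kernel = None

    def load_bitstream(self, name):
        # In a real library this call would program the FPGA fabric;
        # here we just select a software model of the offloaded kernel.
        kernels = {"vector_add": lambda a, b: [x + y for x, y in zip(a, b)]}
        self._kernel = kernels[name]

    def run(self, *args):
        if self._kernel is None:
            raise RuntimeError("no bitstream loaded")
        return self._kernel(*args)

accel = FpgaAccelerator()
accel.load_bitstream("vector_add")         # reconfigure for this workload
print(accel.run([1, 2, 3], [10, 20, 30]))  # -> [11, 22, 33]
```

The point of the abstraction is that the developer never touches an HDL: swapping workloads is a `load_bitstream` call, which is what makes the real-time reprogramming story credible to software teams.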
The promise of real time reprogramming has caught the eye of both AI developers and 5G networking vendors. In the launch, Xilinx highlighted six prime use cases for Everest, most of which will increasingly be integrated into 5G networks as operators look to incorporate cloud services infrastructure into their RANs, as they are already doing with content delivery engines. The six use cases were video live streaming, IoT sensor analytics, AI voice services, social network video screening, financial modeling and personalized healthcare.
The main draw for these applications is that their demands can change quite quickly, and so a cloud computing provider can reconfigure the FPGAs to address those changes efficiently – rather than have banks of surplus GPUs or ASICs lying around and not earning money. Both Amazon’s AWS and Microsoft’s Azure cloud platforms have begun offering FPGA services, with Azure now putting an Intel FPGA into every new server it brings online.
By contrast, AWS is a Xilinx customer, using its current 16nm products. Last autumn, Xilinx announced a partnership with AWS which could provide FPGA-enabled solutions on an as-a-service basis for data centers and operators – another way to address that shortage of skills in FPGA programming. AWS Marketplace will add Amazon FPGA Images (AFIs), created by Xilinx, to its Amazon Machine Images (AMIs), enabling AWS to offer FPGA-accelerated platforms as a cloud service. Customers no longer need to invest in specialized hardware and skills, but can instead configure and pay for an Amazon F1 instance.
The first three services available in this way are accelerated video encoding and AR/VR processing for cloud video services, designed by NGCODEC; cloud-based advanced query services from RYFT; and a version of the genomic analytics platform from Edico Genome. All are workloads that are better accelerated by FPGAs than by GPUs.
While the initial targets are cloud service developers which want to accelerate their services without investing in FPGA technology, it is not hard to see how this system could also support hosted services for smaller operators, daunted by the infrastructure and skills required to implement Cloud-RAN or an AI-enabled network.
As well as the six use cases for cloud services, Xilinx has a second strategic focus, on its core markets, which include telecoms, automotive, broadcast, aerospace, infrastructure and industrial. These are now all looking for high performance embedded platforms which require limited hardware customization and are heavily software-defined.
The new design will be available next year. It will be built by TSMC, on a 7nm geometry, with around 50bn transistors per unit. Currently, Intel’s latest Altera Stratix FPGAs use a 14nm process, and house 30bn transistors, although Intel has hinted at a next generation design to be unveiled this year. That will reportedly use new NoC (network-on-chip) and CCIX (cache coherent interconnect for accelerators) technologies, which are not yet used in any Xilinx product.