Xilinx has taken the wraps off of its new Everest Field Programmable Gate Array (FPGA) chip, claiming a huge step forward in performance – more than enough to bother the other main player in the FPGA game, Intel’s Altera. With $1bn invested in Everest, Xilinx is hoping to turn the screw on Intel, which paid $16.7bn to acquire Altera.
The new Everest design is expected to ship in 2019. Xilinx is hoping to attract more software developers, expanding from its core hardware developer audience who have already grappled with the complexity of programming for FPGAs. New development software libraries are being offered in this vein, with Xilinx hoping to make it as easy as possible to get a TensorFlow developer on board.
Everest is part of the new Adaptive Compute Acceleration Platform (ACAP) product offering, Xilinx’s attempt to bring hardware’s adaptability up to the sort of level that software enjoys. Key to this is the silicon package that combines Xilinx’s new FPGA chip with both real-time processors and application processors, as well as all the required I/O – all in an optimized layout that frees up space for more programmable silicon in the Everest footprint.
Notably, ACAP is not a pure FPGA offering, as it includes multiple silicon components that might normally be dedicated chips themselves. The design is an evolution of Xilinx’s previous strategy of adding more dedicated functions to the FPGA design, such as HBM memory controllers and ARM cores for running specific applications.
The new design will be built by TSMC, on a 7nm manufacturing process, with around 50bn transistors per unit. Currently, Intel’s latest Altera Stratix FPGAs use a 14nm process, and house 30bn transistors, although Intel does seem to have a next-gen chip up its sleeve. The new design will use new Network-on-Chip (NoC) and Cache Coherent Interconnect for Accelerators (CCIX) technologies, which are not yet used in any Xilinx product.
There are lots more details to be published over the coming months, but the promise of real-time reprogramming has caught the eye of both AI developers and 5G networking vendors. The headline performance claim is a 10x-100x increase in performance per watt compared to a conventional CPU, with more adaptability than a GPU or an ASIC.
In the launch, Xilinx highlighted six prime use cases for Everest – video live streaming, IoT sensor analytics, AI voice services, social network video screening, financial modeling, and personalized healthcare. The main draw for these applications is that their demands can change quite quickly, and so a cloud computing provider can reconfigure the FPGAs to address those changes efficiently – rather than have banks of surplus GPUs or ASICs lying around and not earning money.
Both Amazon’s AWS and Microsoft’s Azure cloud platforms have begun offering FPGA services, with Azure now apparently putting an Intel FPGA in every new server it brings online. Xilinx says AWS is a customer, using its 16nm products.
FPGAs are proving popular in AI workloads, as they can be configured to provide better performance than a CPU. They also might curry more favor with developers than something like Google’s Tensor Processing Unit (TPU), as the FPGA (in theory) can be molded to fit any AI task and should be more flexible than a dedicated AI framework chip. However, the core question for scale will always be price-to-performance, and it is not clear how that market will shake out.
Key to this change is adoption from software engineers. Most don’t have the skillset to program for today’s FPGAs, and so there will be an arms race between Intel and Xilinx to create a friendly development environment. Xilinx says it wants to get FPGAs to the point where they can be viewed as ‘just another PCIe co-processor, like a GPU,’ instead of the current assumption that you need to be a hardware engineer to get anything out of them – a model in which a provided software library lets a developer configure the FPGA without first having to learn an entirely new Hardware Description Language (HDL).
Recently appointed CEO Victor Peng has been doing the rounds, arguing Xilinx’s case for Everest to many outlets. Speaking to the venerable AnandTech, Peng said that Xilinx was focused on being a ‘data center first’ company, after identifying the sector as its biggest growth opportunity.
Peng said that enterprise Xilinx customers had seen huge boosts in their productivity, with AI inference up 40x, analytics tasks up 90x, and genomics up 100x (able to provide a genomic analysis in around 20 minutes, not 24 hours). In 5G remote radio heads, the new Everest design claims 4x the bandwidth of Xilinx’s 16nm Virtex VU9P FPGAs.
The second strategic focus for Xilinx is in its core markets, which include automotive, broadcast, aerospace, infrastructure, and industrial. These are now, broadly, all looking for embedded platforms, according to Peng, which is a change from their older designs that had high levels of customization. The increasing adoption of software-defined systems has also driven demand for flexible hardware, like FPGAs.
In that interview (which is well worth a read), Peng said that he has team members who believe Xilinx’s software libraries and APIs are easier to program than Nvidia’s CUDA, and that Xilinx has enabled Python as a programming language – prioritizing its availability over C or C++ (which are both now supported) because younger programmers are apparently more familiar with Python. He added that Xilinx would stick with ARM cores for now, rather than open source RISC designs, because the ARM architecture has the most support.