Xilinx feels the 5G boost already, while Lattice targets low power FPGA markets

Network vendors may be concerned that 5G will not deliver the capex boost of previous mobile generations, but for companies focused on the wireless chipsets, the prospects are good. The burgeoning number of spectrum bands and device types which 5G will involve is happy news for companies delivering radios or basebands, and Xilinx is one of the chip companies already claiming to see growth driven by 5G.

Xilinx recently reported results for its third fiscal quarter of 2019, which ended on December 31 2018. It turned in revenues of $800m, up 34% year-on-year, and its wireless communications business was its strongest, with 41% year-on-year growth. Part of this was down to 5G, the company said – it is present in large numbers of pre-commercial proof of concept initiatives, and in the beginnings of commercial roll-out in China and South Korea. The company expects to exceed $3bn in revenue this fiscal year, for the first time in its history.

CEO Victor Peng told the earnings call that he thought 5G was going to be “a larger deployment overall, compared to the past generations …. In 5G there’s going to be more radio.” Though the radio is the main base for the company’s FPGAs (field programmable gate arrays) in wireless, the chips are also present in baseband processors for 5G, said Peng.

Xilinx’s core business remains the data center, where its FPGAs are enjoying growth driven by webscale expansion, and where it is extending its offerings across compute, storage and networking segments. Other growth drivers include “big data analytics acceleration, machine learning inference, video transcoding, network acceleration, as well as storage controllers,” Peng said.

Some of its upbeat outlook comes from the widening variety of applications which require the performance and flexibility of an FPGA. An FPGA is an integrated circuit that can be reconfigured after manufacture, giving it a degree of programmability, and it is typically used to offload very processing-intensive functions from the central processor. As 5G, artificial intelligence, IoT and webscale applications move to the mainstream, these functions multiply.

It also benefits from being the main independent FPGA maker, since its arch-rival Altera was acquired by Intel. That sends some customers, which are Intel rivals or do not want to put all their eggs in the Intel basket, to Xilinx.

The Intel tie-up also presents some risk, since Intel can offer an integrated processor/FPGA solution for use cases which benefit from that combination, which might include Cloud-RAN, for instance. So far, progress on a fully unified platform from Intel has been slow – predictably, given the company’s track record on assimilating its strategic acquisitions.

But Intel has started to sell FPGA/CPU combinations for demanding tasks, and it published a roadmap for unified solutions back in autumn 2017. It will offer an integrated solution with tight coupling between CPU and FPGA, targeting both ultra-low latency and high bandwidth applications. In some cases this will be based on a hybrid processor with the Altera Arria 10 FPGA integrated in the same package as a Skylake generation Xeon CPU. They will be linked by Intel’s UltraPath Interconnect (UPI) technology, with claimed data transfer rates of up to 10.4GT/s (gigatransfers per second).

These FPGA and hybrid offerings are among Intel’s most powerful weapons for penetrating the mobile network market. They will help to reassure operators who do not believe a Cloud-RAN is viable because its functions cannot be fully supported by general purpose processors. That will be important if Intel is to translate its FlexRAN reference platform into commercial products. FlexRAN provides a reference architecture for 5G NR and Cloud-RAN, and is being used for a large number of 5G testing activities with partners like Aricent and National Instruments, and with operators.

Intel also has a base station platform built around its Xeon processor with accelerators optimized for signal processing. Prototypes based on FPGA chips were used in China Mobile’s huge C-RAN market trial, for instance.

Partly to respond to this competitive challenge, Xilinx also wants to look beyond the FPGA. Last October it proclaimed that it was “no longer an FPGA company” when it launched its Versal (short for ‘versatile universal’) family of products. The focus of these is firmly on “whole application acceleration”, and particularly on challenging tasks like AI processing, 5G beamforming and Cloud-RAN baseband processing.

Versal is the brand name for the products based on Xilinx’s ACAP (adaptive compute acceleration platform) architecture, which it has been gradually unveiling through most of the past year. The initial offerings consist of six families of devices. While they all contain Xilinx’s FPGA array fabric, which it now calls the ‘adaptable engine’, they also include several other elements – hence the company’s defocusing on the FPGA term which has defined its history.

The other main elements are:

  • Scalar Engines based on Arm Cortex-A72 and Cortex-R5 processors.
  • Massively parallel Intelligent Engines for AI and digital signal processing (DSP), based on hundreds of purpose-built, networked and programmable processors.
  • DSP Engines (formerly known as DSP slices), now enhanced with hardened, floating-point extensions.
  • Various hardened protocol blocks for standard interface and memory protocols including Ethernet, PCIe, CCIX and SDRAM controllers.
  • High speed SerDes ports from 32Gbps to 112Gbps, and programmable I/O.
  • On-chip memory distributed all over the device as local RAM for the scalar and AI engines, and embedded in the FPGA fabric.
  • A pervasive network-on-chip (NoC) to tie all the engines to the large numbers of on-chip memories.
  • A software-controlled Platform Management Controller (PMC) to manage booting, configuration, dynamic reconfiguration, encryption, authentication, power management, and system monitoring for the entire device. Xilinx claims the PMC speeds up dynamic device reconfiguration eightfold compared to its previous generation of devices.

The products are designed to be programmable in high-level languages like C and Python, further moving away from the notion that FPGAs are extremely difficult to program.
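Xilinx’s existing PYNQ framework, which exposes its programmable logic to Python, gives a flavour of what that looks like in practice. The sketch below is illustrative only: it assumes a board already running PYNQ, a hypothetical overlay bitstream called filter.bit, and a DMA block named axi_dma_0 in that design, none of which are specified in this article.

    import numpy as np
    from pynq import Overlay, allocate  # Xilinx's Python framework for its programmable logic

    # Load a (hypothetical) hardware design containing an accelerator fed by a DMA engine
    overlay = Overlay("filter.bit")
    dma = overlay.axi_dma_0  # name depends on the block design; 'axi_dma_0' is assumed here

    # Buffers in memory that the programmable logic can reach directly
    in_buf = allocate(shape=(1024,), dtype=np.int32)
    out_buf = allocate(shape=(1024,), dtype=np.int32)
    in_buf[:] = np.arange(1024, dtype=np.int32)

    # Stream data through the accelerator and wait for the result
    dma.sendchannel.transfer(in_buf)
    dma.recvchannel.transfer(out_buf)
    dma.sendchannel.wait()
    dma.recvchannel.wait()

    print(out_buf[:8])  # processed samples come back without the CPU doing the math

The point is less the specific calls than the division of labour: the Python code only loads the design, moves buffers and waits, while the actual processing happens in the fabric.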

Of the six Versal product families, three are targeted specifically at AI processing in many markets, including mobile operators, which are increasingly planning to use machine learning for advanced network optimization as well as extreme personalization of user experiences. The three AI products are the Versal AI Edge, AI Core and AI RF. Then there are three non-AI families, which include all the components listed above except the AI Engines. These are the Versal Prime, Premium and HBM.

Among the key tasks which Versal is designed to accelerate, according to Xilinx, are beamforming for 5G wireless communications, network packet communications, and smart controllers for large solid-state storage systems in data centers. These demanding processes are the reason to create a platform with so many different engines, since each engine will address a different type of requirement, from raw performance to power efficiency.
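To give a sense of why beamforming in particular is compute-hungry, the sketch below (a generic numpy illustration assuming a 64-element uniform linear array, not Xilinx code) shows the core arithmetic of a narrowband receive beamformer: a complex multiply-accumulate across every antenna for every sample and every beam, exactly the kind of dense, parallel workload the DSP and AI engines are built to absorb.

    import numpy as np

    def steering_weights(n_elements, spacing_wavelengths, angle_deg):
        # Complex weights that point a uniform linear array towards angle_deg
        angle = np.deg2rad(angle_deg)
        phase = 2 * np.pi * spacing_wavelengths * np.arange(n_elements) * np.sin(angle)
        return np.exp(1j * phase) / n_elements

    # 64 antennas at half-wavelength spacing, beam steered to 20 degrees
    weights = steering_weights(64, 0.5, 20.0)

    # Simulated per-antenna baseband samples: shape (antennas, samples)
    rng = np.random.default_rng(0)
    samples = rng.standard_normal((64, 4096)) + 1j * rng.standard_normal((64, 4096))

    # One weighted complex multiply-accumulate per antenna, per sample, per beam;
    # at 5G sample rates and with many simultaneous beams the load adds up quickly
    beamformed = weights.conj() @ samples
    print(beamformed.shape)  # (4096,) combined samples for this single beam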

Other companies are seeking growth in the FPGA market by working around the data center and 5G network markets targeted by Xilinx and Intel, and riding on growth in other areas like the IoT. One of these is Lattice Semiconductor, which has focused on power efficiency and has turned its attention to edge computing, AI and IoT as its main sources of growth.

Lattice’s Deepak Boppana, senior director of segment and solutions marketing, sees the market for FPGAs in AI applications as split between the cloud and the edge, and Lattice is more focused on the latter. There is somewhat less pressure, for now at least, from Nvidia and Google, which in the centralized data center are encroaching on the FPGA with their GPUs and custom ASICs, respectively. In response, Boppana believes that Lattice’s competitors are hardening parts of their FPGA accelerator designs, effectively removing some of the reprogrammability in order to make them perform a single function more quickly. They are trying to match the speeds that task-dedicated ASICs (application specific ICs) can achieve, and to this end, Intel is positioning its Nervana ASICs as an option for these workloads, which could eat into its Altera sales. They will also be conscious of the challenge posed by some large customers starting to design their own silicon, such as Google’s Tensor Processing Unit (TPU) for AI workloads and AWS’s Graviton.

At the network edge, there is a plethora of applications that could integrate FPGA-powered, AI-based functions. The flexibility that FPGA silicon provides is important in these early days of commercial AI, because the architectures are still evolving, as are the types of data to be collected by sensors and analyzed. In time, dedicated ASIC designs might emerge, but for now, the FPGA looks the more enticing proposition.

Lattice is playing in the milliwatt to watt (1mW to 1W) power range – orders of magnitude below Xilinx and Altera, and a lot less power-hungry than the low power FPGAs from Microsemi, which are stronger in the aerospace and defense markets. Boppana claims Lattice has no direct rivals for milliwatt-scale FPGAs, and that the start-ups venturing into this space have to work against the incumbent’s range of customer and partner relationships.

Over a two-year timeframe, he sees smart home functions like voice and video analysis as the strongest applications driving AI-related FPGAs, and within three years he sees smart city applications, such as parking monitoring, automated toll collection, smart ATMs and vending machines. Towards 2024, the slower-moving industrial and automotive industries should be in play, where FPGAs can support predictive analytics or maintenance functions, or sensor data processing.

The consumer opportunities may emerge sooner, but they will be more price sensitive, whereas the mission-critical applications in industrial and automotive will carry far better margins. Boppana said that there is overlap between the different use cases in the underlying functionality of the silicon, and that the major difference lies in the training data that feeds the machine learning model, which is then pushed to the end devices powered by the FPGAs. The ability to carry out in-field, on-device training makes the FPGA an obvious selling point, since a user will need to be able to tweak the silicon that powers the models.

Even as ASICs emerge as requirements settle down, FPGAs are not an all-or-nothing proposition, and can complement dedicated chips inside a device. For example, a low power FPGA in a camera might be tasked with scanning for possible humans, before an ASIC is fired up to analyze the person’s face in detail.
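That split of duties is easy to express in software terms. The sketch below shows the gating pattern in Python, with placeholder functions standing in for the FPGA and ASIC pipelines; the names and the brightness heuristic are purely illustrative, not any vendor’s API.

    import numpy as np

    def fpga_presence_score(frame: np.ndarray) -> float:
        # Placeholder for the low-power FPGA model; here just a brightness heuristic
        return float(frame.mean() / 255.0)

    def asic_face_analysis(frame: np.ndarray) -> dict:
        # Placeholder for the detailed (and power-hungry) ASIC face pipeline
        return {"faces": 1, "confidence": 0.9}  # dummy result for illustration

    PRESENCE_THRESHOLD = 0.5  # tuned so the ASIC wakes only on likely detections

    def process_frame(frame: np.ndarray):
        # Stage 1: always-on, milliwatt-class screening on the FPGA
        if fpga_presence_score(frame) < PRESENCE_THRESHOLD:
            return None  # nothing of interest; the ASIC stays powered down
        # Stage 2: wake the ASIC only for frames that pass the cheap filter
        return asic_face_analysis(frame)

    # Example: a synthetic bright frame passes the gate, a dark one does not
    bright = np.full((240, 320), 200, dtype=np.uint8)
    dark = np.full((240, 320), 10, dtype=np.uint8)
    print(process_frame(bright))  # -> {'faces': 1, 'confidence': 0.9}
    print(process_frame(dark))    # -> None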

On the developer side, a lot of work in the AI world is being done in frameworks such as Caffe, which are not part of the FPGA stacks. Lattice has developed neural network compilers, as part of its sensAI portfolio, which will translate the model from the framework onto the FPGA, so that a user does not need to be an expert in FPGA design.
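The article does not detail how these compilers work internally, but one core step any such tool performs can be sketched: converting framework-trained floating-point weights into the narrow fixed-point formats that small FPGA fabrics handle efficiently. The scheme below is a generic, illustrative symmetric int8 quantization in numpy, not a description of sensAI itself.

    import numpy as np

    def quantize_to_int8(weights: np.ndarray):
        # Map float weights onto signed 8-bit integers plus a single scale factor
        scale = np.max(np.abs(weights)) / 127.0
        q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
        return q, scale

    # A trained layer exported from a framework such as Caffe arrives as float32
    float_weights = np.random.default_rng(1).standard_normal((16, 16)).astype(np.float32)
    int8_weights, scale = quantize_to_int8(float_weights)

    # At inference time the FPGA works in integers and rescales at the end
    reconstructed = int8_weights.astype(np.float32) * scale
    print("max quantization error:", np.max(np.abs(reconstructed - float_weights)))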
