Xilinx dethrones Intel exclusivity in Microsoft FPGA deal

Microsoft has committed to sourcing half its FPGA needs from Xilinx, the chief rival to Intel’s Altera division. It’s a blow to Altera, but not surprising, given Xilinx’s recent design wins and launches. Altera will undoubtedly release an updated design soon, but the deal ends a cushy relationship that has seen Microsoft install an Altera FPGA in all new Azure servers.

The initial report came from Bloomberg, but third parties appear to have confirmed its veracity. It is not great news for Intel, especially as it is still looking for a return on its $16.7bn acquisition of Altera in 2015, and data center FPGAs are vital to that return. Microsoft is diversifying its supply chain with the move, hoping to get the best out of the two rivals by encouraging some level of competition between them.

However, Intel’s Programmable Solutions Group reported Q3 revenue of $496mn, and with Microsoft being one of Intel’s largest customers, the loss of exclusivity does not bode well for the group’s next year. Xilinx reported quarterly revenue of $673mn in its April results presentation, with annual revenue hitting $2.54bn for fiscal 2018 and net income of $512mn. It is worth noting that the Communications and Data Center segment accounted for 34% of those Q4 revenues, making that business smaller than Altera’s, but Xilinx has stated that its new strategy focuses on data center customers.

Microsoft has said that it will continue to use Intel chips, so at least Altera isn’t getting the boot completely, and according to a source cited by Bloomberg, the Xilinx chips will have to prove their worth before Microsoft decides at what scale to deploy them. It seems unlikely that Xilinx will fumble this introductory period, but stranger things have happened. Microsoft recently announced Project Brainwave, an AI acceleration project built around Intel FPGAs, and so the Xilinx news, however inevitable it may have been, will have stung Intel.

Intel was somewhat forced to buy into the FPGA space to protect its network interface controller/card (NIC) business, where FPGA-based designs were encroaching on its 60% market share. Buying one half of the duopoly, with Xilinx being the other, allowed Intel to head off much of that concern, as companies like Napatech were showing the sort of network optimization performance that could be had from server expansion cards housing FPGAs.

To this end, Intel scored a win with Microsoft, where Altera FPGAs would be installed in every new Azure server. Not long after that announcement, Amazon’s AWS said much the same thing, but didn’t specify which vendor or vendors it would be using. Google is another major public cloud provider that Intel and Xilinx will be targeting, but there are plenty of other providers that the pair can try to carve up between them.

If Microsoft and AWS are right that every new cloud computing server needs an FPGA, in order to offload certain AI-based functions to a specialist co-processor, then there is a lot of headroom for both Intel and Xilinx here. Of course, there is a risk that their designs are eventually sidelined by dedicated ASIC chips, purpose-built to do one specific task very efficiently. But while the industry is still so fractured, and far from settled on the stacks that would facilitate such ASIC development, the need for FPGAs will prevail.

On-the-fly reprogramming gives developers a lot of flexibility: they can tweak the silicon to match the task at hand while they explore the best way to solve a particular computing problem. Once that solution has been found, however, it makes a lot of sense to produce custom ASICs at scale, which should drive down the per-unit cost.
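The trade-off is ultimately a break-even calculation: an FPGA costs more per deployed unit, while an ASIC carries a large one-off engineering (NRE) bill but a much lower marginal cost. The Python sketch below illustrates that arithmetic with purely hypothetical figures, not vendor pricing; the point is simply that the ASIC only wins once the workload is stable enough, and the volume high enough, to pay back the NRE.

```python
# Illustrative FPGA-vs-ASIC break-even sketch.
# All figures are hypothetical placeholders, not real vendor pricing.

FPGA_UNIT_COST = 2_000        # assumed cost per deployed FPGA card
ASIC_UNIT_COST = 200          # assumed marginal cost per ASIC in volume
ASIC_NRE_COST = 15_000_000    # assumed one-off design, mask and tape-out cost


def total_cost_fpga(units: int) -> int:
    """Total spend if the function stays on FPGAs."""
    return FPGA_UNIT_COST * units


def total_cost_asic(units: int) -> int:
    """Total spend if the function is hardened into an ASIC."""
    return ASIC_NRE_COST + ASIC_UNIT_COST * units


def break_even_units() -> int:
    """Volume at which the ASIC's lower unit cost has repaid its NRE."""
    return ASIC_NRE_COST // (FPGA_UNIT_COST - ASIC_UNIT_COST)


if __name__ == "__main__":
    n = break_even_units()
    print(f"ASIC becomes cheaper after roughly {n:,} units")
    for units in (1_000, 10_000, n, 100_000):
        print(units, total_cost_fpga(units), total_cost_asic(units))
```

With these assumed numbers the crossover lands around 8,300 units; in a cloud estate that churns workloads frequently, no single design may ever reach that volume, which is the FPGA’s opening.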

As such, FPGAs have long made sense in labs, where developers could recreate smaller chips and prototype designs in the early phases, before moving on to small-batch runs of the initial designs. In the cloud computing realm, FPGAs are useful because a provider might need to wind down a collection of servers for one customer and then spin them up for a new customer’s completely different workload.

In that system, the FPGA lets the cloud provider avoid binning hundreds or thousands of customized ASICs every time workloads change. It is an easy way to provide value-add services to customers, and to compete against the specialist AI silicon providers that would like to convince those customers that a private cloud or on-premises server filled with their special-sauce silicon is a better investment. This market is far from being shaken out.

Back in March, Xilinx unveiled its Everest FPGA chip architecture, which has now become known as Versal (a portmanteau of versatile and universal). Versal is part of the Adaptive Compute Acceleration Platform (ACAP) family. Versal was revealed at Xilinx’s developer conference, which also saw Xilinx partner with ARM to offer royalty-free ARM Cortex-M CPU designs that Xilinx customers can use as they create their own FPGA designs.

The two new Xilinx designs, Versal Prime and Versal AI, are based on TSMC’s 7nm FinFET production process. There are five different AI designs and some nine Prime variants. The Versal SoCs have effectively shrunk the relative size of the FPGA inside the package, dedicating more silicon to specialist supporting functions that allow the SoC to act as a more comprehensive processor. The Versal chips will be available in 2H 2019, and Xilinx’s ACAP family is going to increase the pressure on Intel.
