Broadcom/AT&T alliance highlights the new network hardware value chain

AT&T has announced the latest addition to the inner circle of partners that will help it adopt SDN (software-defined networking) throughout its organization over time. The telco is working with Broadcom in an alliance focused on software-driven innovation at the silicon level. Initially the initiative will center on the wireline network, but AT&T has aggressive plans to extend SDN and virtualization right out to the RAN. In that context, the Broadcom partnership is highly strategic, and it also sounds warning bells for Intel, which is positioning itself as the natural chip platform for the new virtualized networks.

Susan Johnson, SVP of global supply chain at AT&T, told Light Reading: “Working with chipset providers allows us to guide innovation at the chipset level. That way, we can increase the speed of innovation while also improving cost structures.” AT&T already has a similar partnership with Intel, announced last year.

The Broadcom deal is part of a realignment of carriers’ supply chains as they shift their focus towards software and virtual networks, a process which is almost inevitably driving them towards an open source environment that has previously been alien to them.

It’s a long, painful process for mobile operators and their suppliers to adjust to a world where the network will inevitably be software-driven and largely open source. The open source processes which – after a similarly tortuous multiyear adaptation – now dominate the enterprise are coming to network software, and even hardware.

This has the potential to transform a mobile operator’s cost base, but also – along with shared and unlicensed spectrum – to lower barriers to entry for non-traditional service providers. And whether an MNO is galloping towards SDN and open source, or clinging to the past, there will be a necessary renegotiation of its relationships with suppliers throughout the value chain.

One effect will be a closer direct relationship between operators and major chip providers, without the OEM necessarily being the primary point of contact and purchasing. Operators can see the economic advantages of white label boxes, based on off-the-shelf processors and running network functions as virtual machines. These can replace the expensive, proprietary equipment of the physical network, which often relied on silicon which was designed or heavily customized by the OEM.

But the operators need to be sure that these standardized boxes will be able to cope with the stringent demands of a carrier network, and this is especially challenging when it comes to the RAN. Hence leading operators are increasingly heavily involved with the roadmaps of silicon majors, to ensure they are developing platforms with the firepower to handle the network at its most demanding, and which are fully optimized for the telco’s preferred SDN/NFV systems.

This has given Intel a significant opportunity to raise its profile in the mobile world (see separate item) by working directly with cutting-edge operators like China Mobile as they develop Cloud-RAN and other particularly challenging virtualized systems. Intel has been busily enhancing its capabilities for carrier networks, in particular by adding powerful coprocessors to its high end CPUs, to which specific and performance-heavy tasks can be offloaded; and by acquiring programmable chip capabilities with the purchase of FPGA maker Altera.

It is not just Intel, however, despite its dominance of the server processor market and therefore of enterprise SDN. From the ARM-based community, Cavium has particularly impressed carriers with its Cloud-RAN platform, and of course Qualcomm announced its first server and cloud infrastructure chips last fall. These are not commercial yet, but Qualcomm has the significant advantage of existing close relationships with many mobile operators, its agenda having always been heavily intertwined with that of key MNO customers like Verizon.

Now Broadcom is eyeing its own chance to forge closer bonds with carriers and work directly with them to make its chips fully telco-friendly. It is embedded in broadband providers’ networks from high end switch chips to CPE processors, but has never been a major player in wireless networks outside of WiFi. With SDN and 5G, it will have a significant opportunity to change that, as wireline and wireless networks converge in the cloud and the mobile operators become ever more reliant on a high speed transport infrastructure.

“We continue to form relationships with disruptive suppliers as we build a software-centric network and drive the industry to SDN,” said John Donovan, AT&T’s chief strategy officer, in a statement. Earlier this month, he updated AT&T’s virtualization goals, saying the target was to virtualize 55% of the network by the end of 2017 (with Cloud-RAN efforts starting from 2018). The company has already converted 34% of its network, surpassing its 2016 target of 30%, and it aims to reach 75% by 2020.

In pursuit of these goals, it is working very differently with its supply chain, as well as introducing new partners to its Domain 2.0 ecosystem, of which Broadcom is the latest example. Its activities in cloud and SDN platforms are making AT&T behave – and procure systems – like an IT heavyweight as much as a telco. Indeed, Intel says AT&T is the first telco to join the ‘Super 7’ group of internet giants which spend so much that they get early access to new chip technology. The seven members are Amazon, Facebook, Google, Microsoft, Baidu, Alibaba and Tencent.

In the first phase of the Broadcom work, the cooperation will be geared to boosting video and broadband speeds while reducing the cost of delivery. Johnson said the focus was “to accelerate the development of features and functions and accelerate the speed to market”. She is looking for chip-level innovations in routing, switching and access technologies, to boost capacity significantly while improving the cost structure.

This is a clear indicator of a US telco – like some of the east Asian operators – asserting control over the direction of hardware evolution, not just software, in order to accelerate innovation and tie it directly to carrier requirements. This goes to the heart of a key debate over telco virtualization: whether technologies that have essentially been repurposed from the enterprise world to suit carriers – from servers to software to open source platforms like OpenStack – can really meet the most stringent demands and unique requirements of a telco network. Using existing platforms may reduce cost and time to market, and introduce operators to proven technology and a broad ecosystem, but many fear they will be accepting a solution which has too many compromises in terms of current and future performance.

“We’re not implying that COTS is insufficient,” Johnson insisted. “However, we don’t want to wait for the industry to define the quality, features, and functions our customers need. It’s critical to combine our understanding of customers’ needs with the technology roadmaps of chipset suppliers to deliver the best entertainment and communication experiences.”

Steering the hardware and software agenda to ensure it is optimal for telcos is one good reason for large operators like AT&T to take a proactive role in redefining their supplier relationships. There are two other important and related ones – to insert themselves into the unfamiliar open source process so that their influence extends beyond their familiar vendors and standards; and in so doing, to ‘own’ open source, rather than allowing Facebook, Google and Amazon free rein to use the open approach to disrupt the whole value chain, to the disadvantage of the telcos.

On one hand, open source projects such as Facebook’s OpenCellular and the Telecom Infra Project (TIP) hold out high hopes for operators, to slash their costs and end vendor lock-in, moving to commoditized, interoperable equipment on which they can differentiate themselves through service quality and flexible resource allocation. But they also carry the threat that organizations such as Facebook will drive the agenda for the whole industry, pushing the operators to compete in a completely new way, and one in which they are at a disadvantage to the cloud/web giants.

So TIP is a double-edged sword for those operators which have been accustomed to tight control of their circle of suppliers, since in return for the promise of lower costs, they hand the initiative in driving network infrastructure to Facebook – a company which, on the services side, is eating their lunch with over-the-top services such as Messenger and WhatsApp. After all, Facebook is not just aiming to disrupt the cost of mobile service delivery on the infrastructure side, but also on the consumer end, with its Internet.org initiative, which includes offerings like Free Basics, a package of services which does not count towards the user’s data allowance.

As with Google Android on the open software side, most of the technologies in TIP are, so far at least, coming from Facebook itself. That gives it a large degree of control over how the platforms evolve, but also leaves it with the bulk of the cost. Its most recent contribution was Voyager, unveiled in November, which Facebook says is the first white box transponder/router, and which supports Open Packet DWDM (dense wavelength division multiplexing). This design will be available to the TIP operators and vendors via one of its sub-projects, called Backhaul: Open Optical Packet Transport.

All this is not just about cost, but about keeping up the rate of evolution in important technologies such as DWDM, which enabled a step increase in fiber network capacity when it was first commercialized. In a blog post, three Facebook engineers, Ilya Lyubomirsky, Brian Taylor and Hans-Juergen Schmidtke, wrote: “The pace of innovation has slowed over the past 10 years as we approach the limits of spectral efficiency”.

They argue in the post that an open approach drives greater efficiencies into DWDM transport. “By unbundling the hardware and software in existing ‘black box’ systems, which include transponders, filters, line systems, and control and management software, we can advance each component independently and deliver even more bandwidth with greater cost efficiency,” they wrote. The same thinking extends to the RAN, and lies behind projects like the OpenCellular commodity small cell base station and Project Aries antenna array, which will also be offered to vendors and operators via TIP.

The TIP and OpenCellular projects are in their very early days, and despite strong early momentum, it is by no means certain they will achieve the transformations they are targeting in the telecoms industry, rather than becoming bogged down by processes and by the inherent conflicts of interest between many of the members.

But MNOs need to be wary. While they can clearly benefit from low cost, commoditized and open hardware platforms, these also lower the barriers to new entrants, especially as more unlicensed and shared spectrum comes into play, along with radical approaches to wholesale capacity such as bandwidth-on-demand and network slicing.

As the network becomes an IT service platform, most of it should be indistinguishable from IT infrastructure, goes the thinking. Cloud computing, and to some extent WiFi, have shown the way to this open infrastructure, not the mobile industry, and the new ecosystems often go hand-in-hand with open source software and, increasingly, hardware – a movement in which the mobile players have remarkably little influence.

A research leader from Huawei – hardly a subversive company – posed the question last year: “Could 5G be more than 90% code from open source origins?” Peter Ashwood-Smith, head of IP research at Huawei Canada and chair of the ITU’s fixed line 5G focus group, described an entirely open source 4G network which could be deployed today, with a Linux-based eNodeB and evolved packet core, Android devices, GNU Radio-based software radios and an open source HSS.
