AT&T has embraced the open, virtualized networking idea so wholeheartedly that it has been contributing many of its internal developments into open source efforts, in a bid to drive wide-scale operator support for commoditized white box hardware running network functions in software. The aim is to ensure that the new economics of these open networks do not just deliver savings and agility for one operator, but are extended through the entire supply chain.
Among many open initiatives, AT&T has been an originator of ONAP (Open Network Automation Platform), Open RAN, and the Akraino Edge Stack, all housed by the Linux Foundation. It has also contributed developments to the Facebook-founded Open Compute Project (OCP), which aims to establish commodity economics in the server space – and it may only be a matter of time before AT&T gets more active in the OCP’s telco-focused stablemate, the Telecom Infra Project (TIP), which so far has been more heavily influenced by the large European operators.
The latest OCP move by AT&T is to submit its specifications for a distributed disaggregated chassis (DDC), based on Broadcom’s Jericho2 processors. As telecom networks move to run on cloud infrastructure, the OCP will become increasingly relevant to operators, especially those which, like AT&T, have been aggressive in moving towards white box servers and switches to reduce their total cost of ownership.
In its press release, AT&T said it “plans to apply the Jericho2 DDC design to the provider edge (PE) and core routers that comprise our global IP Common Backbone (CBB)” – the core network that carries all the telco’s IP traffic. For this platform, Broadcom has optimized Jericho2 for 400Gbps interfaces, which will be critical as AT&T starts to upgrade its network to support 400G, in order to handle 5G traffic.
“The release of our DDC specifications to the OCP takes our white box strategy to the next level,” said Chris Rice, SVP of network infrastructure and cloud at AT&T. “We’re entering an era where 100G simply can’t handle all of the new demands on our network. Designing a class of routers that can operate at 400G is critical to supporting the massive bandwidth demands that will come with 5G and fiber-based broadband services. We’re confident these specifications will set an industry standard for DDC white box architecture that other service providers will adopt and embrace.”
Traditional high capacity routers start from an empty chassis into which vendor-specific devices, such as power supplies, fans and controllers, are plugged, with new line cards added to increase capacity. In the DDC, each line card and fabric card is a standalone white box with its own power supplies, fans and controllers, and the backplane connectivity is replaced with external cabling. This means scaling up the system is no longer constrained by the dimensions of the chassis or by the electrical conductance of the backplane.
Typical DDC configurations could range from a single line card system supporting 4Tbps of capacity, up to a large cluster of 13 fabric systems and as many as 48 line card systems, delivering 192Tbps of capacity. The links between the line card systems and the fabric systems operate at 400G and use a cell-based protocol that distributes packets across many links to support redundancy.
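The capacity figures above scale linearly with the number of line card systems. A minimal sketch of that arithmetic follows; the constant and function names are ours for illustration, not part of AT&T's OCP submission.

```python
# Illustrative model of DDC cluster capacity, from the figures quoted above.
LINE_CARD_CAPACITY_TBPS = 4   # one line card system: 40 x 100G (or 10 x 400G)
MAX_LINE_CARD_SYSTEMS = 48    # largest cluster described

def cluster_capacity_tbps(line_card_systems: int) -> int:
    """Client-facing capacity scales linearly with line card system count."""
    if not 1 <= line_card_systems <= MAX_LINE_CARD_SYSTEMS:
        raise ValueError("configuration outside the range described")
    return line_card_systems * LINE_CARD_CAPACITY_TBPS

print(cluster_capacity_tbps(1))    # 4   (single line card system)
print(cluster_capacity_tbps(48))   # 192 (largest cluster)
```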
AT&T’s DDC white box design has three key elements:
- A line card system that supports 40 x 100G client ports, plus 13 400G fabric-facing ports.
- A line card system that supports 10 x 400G client ports, plus 13 400G fabric-facing ports.
- A fabric system that supports 48 x 400G ports. A smaller 24 x 400G fabric system is also included.
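The port counts in these three elements are internally consistent: 48 line card systems, each with 13 fabric-facing 400G links, present exactly as many links as 13 large fabric systems can terminate. A short sketch checks that, under our own variable names (these labels are illustrative, not drawn from the spec):

```python
# Consistency check on the DDC port counts listed above.
FABRIC_PORTS_PER_LINE_CARD = 13   # 400G fabric-facing ports per line card system
PORTS_PER_FABRIC_SYSTEM = 48      # 400G ports on the large fabric system
LINE_CARD_SYSTEMS = 48            # maximum cluster size
FABRIC_SYSTEMS = 13

links_from_line_cards = LINE_CARD_SYSTEMS * FABRIC_PORTS_PER_LINE_CARD
ports_on_fabric_side = FABRIC_SYSTEMS * PORTS_PER_FABRIC_SYSTEM
assert links_from_line_cards == ports_on_fabric_side == 624

# Client side of the 40 x 100G line card works out to the 4Tbps quoted earlier.
client_tbps = 40 * 100 / 1000
print(client_tbps)  # 4.0
```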
One of the results of open specifications like DDC is to lower the barriers to entry for new or smaller hardware vendors, introducing new levels of innovation and price competition to the supply chain. AT&T has had mixed success in fostering small suppliers – it included tail-f in its first set of ‘Domain 2.0’ partners, to support its software-defined networking (SDN) strategy, only to see the company snapped up by Cisco.
This time around, router start-up DriveNets has committed support for the DDC, saying its Network Cloud routing software stack would be the first commercial solution to support the model submitted to the OCP. It claims the stack can support 400G-per-port routing and scale up to 768Tbps. On the strength of such promises, it will hope to break into the tight circle of core router suppliers led by Cisco, Juniper and Nokia.
This shows that AT&T – as with its white box switch developments, in which it partnered with Broadcom as well as Intel’s newly acquired Barefoot – wants to influence the future specs for network hardware, not just the software platforms managed by the Linux Foundation efforts. Driving openness into both sides of the disaggregated network will be important to prevent a repeat of a common pattern in telecoms – large vendors implementing open software specs slightly differently, on their own optimized hardware, in order to prolong their lock-ins.
With the DDC, the telco is turning its attention to the wide area network and carrier routers which will handle all the traffic that 5G video, cloud and AI services should generate.
DriveNets is a well-established partner in AT&T trials and is believed to be deployed in some commercial network elements too. However, these trials show that multivendor openness has not quite reached the level of the chip yet – DriveNets has developed its Network Cloud specifically for Jericho2. In switch-chips, Broadcom’s dominance of the merchant market will be hard to weaken as vendors start to move away from their own customized designs, though in acquiring Barefoot – inventor of an open switch-chip – Intel clearly aims to have a go.
But to drive cloud network economics to their full extent, it will be necessary to have software functions which can run on any chip, not just any box.
This is not AT&T’s first contribution to the OCP. Last September it submitted its white box router specs to the project. The telco has already said it will install the routers, running its own network operating system, dNOS, in 60,000 locations over the next few years, replacing all its current cell site routers (AT&T has 60,000 towers and 5,000 central offices). Any supplier wanting to be part of that roll-out will have to conform to the specs, which will enable AT&T to source boxes from multiple suppliers, choosing the cheapest or most innovative, while ensuring interoperability.
If the design is taken up by other OCP members, the scale of the ecosystem, and the consequent price competition, could be very significant. AT&T’s reference design can be used as a guideline by any hardware vendor, though as with the core router, it has to be based on a specific chip (the Broadcom Qumran-AX switch-chip). Submitting it to OCP should encourage more suppliers to rise to that challenge (and other chips might follow in future). The gateway router design is supposed to support current and future cellular backhaul systems, being future-proof to some extent, by embracing a wide range of speeds on the client side, including 5G baseband units operating at 10G/25G and backhaul speeds up to 100Gbps.