AT&T has become far more than an advanced deployer of new network technologies – increasingly it is defining standards for the whole industry in a way that used to be the preserve of vendors or a few Asian operators. In recent weeks, it has made several announcements which may prove significant for the wider industry, not just AT&T. Its expanded alliance with China Telecom could have a significant impact on flexible business services; its Vyatta acquisition will bolster its network operating system and white box routers; and it is about to embark on its first tests with Cloud-RAN, a technology it has acknowledged will be the most challenging element of the network to virtualize.
The operator has already contributed its inhouse development, ECOMP, to the open source process, and under the auspices of the new ONAP (Open Network Automation Platform) initiative, that technology is a candidate to become a standard for management and orchestration of virtualized telco networks.
In its quest for global adoption, AT&T not only went down the open source route, bypassing the more traditional processes of ETSI or 3GPP, but agreed to combine its offering with China Mobile's Open-O. In fact, ECOMP was far more mature than Open-O, and ONAP is likely to remain very close to the AT&T inhouse platform, but a joint development brings the Chinese market on-side and counters accusations that a US carrier is trying to impose its will on the industry.
The same US-Chinese balance of cooperation can be seen in the area of software-defined networking (SDN), where AT&T has announced an expanded agreement with China Telecom, which could have an influence comparable to that of ONAP. The two companies have renewed their joint venture, Shanghai Symphony Telecommunications, which was established in 2000 with a lifespan of 20 years. In doing so, they have expanded its remit to several technologies which both operators see as highly strategic, including SDN, cloud-based big data, VoLTE roaming and Internet of Things (IoT) for multinational enterprises.
The primary aim of the JV is to develop network services which will “help multinational customers use highly secure global communications to fuel business growth in China and around the world”. However, while that would boost revenue potential for both operators, they have another agenda, which is to set standards, in this case for telco SDN.
The firms said they had agreed to “help establish and accelerate SDN industry standards”.
AT&T is one of the most advanced operators in deploying SDN and virtualization in commercial networks. It aims to have 55% of its network functions virtualized this year, and at least 75% of its traffic on an SDN by 2020.
But its activity goes beyond this. One, it is acquiring technology to accelerate its NFV/SDN quest. Two, it is pushing forward with white box hardware – commoditized, standardized boxes on which the hugely programmable, virtualized network functions (VNFs) run. And three, it is putting many of its developments, including ECOMP, into open source programs.
One of its interesting acquisitions was the purchase of the Vyatta software assets from Brocade last month. This brought AT&T a network operating system, a distributed services platform, the vRouter product line, and other items, including some unspecified software “under development” and some intellectual property.
This could prove a very strategic deal for the US operator. Vyatta had been part of Brocade’s ambitious program to increase its business with telcos via virtualization, but that effort came to a halt when Brocade was itself acquired by Broadcom. Before that transaction is finalized, Brocade is divesting many assets in areas like wireless (Ruckus) and virtualization.
Brocade said the Vyatta Network OS was “built from the ground up to deliver robust network functionality that can be deployed virtually or as an appliance, and in concert with solutions from a large ecosystem of vendors, to address various SDN and NFV use cases.”
If AT&T builds on its new assets wisely, it will expand the range of very strategic software elements which it controls and develops itself, rather than sourcing them from large vendors. As announcements from Cisco and others show, a modern network OS and a service delivery platform – both optimized for the virtual or hybrid worlds – are critical to control of the end-to-end network. AT&T seems to be seeking that control itself, and potentially then extending its power to the wider industry by sharing its systems with other operators.
In particular, AT&T said that the acquisition would “bolster our ability to deliver cloud or premises-based VNFs”, which in turn should enable the operator to create and deploy new services across its network quickly and cost-effectively via virtualized platforms. To drive cost down further, it is working on white box hardware solutions and has already trialled two white box switch designs, one of them created inhouse in partnership with two start-ups, Barefoot and SnapRoute.
This program should be bolstered by the purchase of vRouter, which supports 80Gbps of virtual networking throughput in a software-based router, though there may also be some conflicts. While vRouter runs the Vyatta network OS, AT&T’s switch uses the Flex Switch OS from SnapRoute (in which AT&T is an investor), combined with a chip from Barefoot.
There may be rationalization of technologies ahead, but controlling the key technology enablers of the new virtualized, flexible networks will keep AT&T in charge of its own destiny and pace of change, and break its reliance on a few external vendors. It has already shaken up its supply chain with its Domain 2.0 SDN-oriented procurement program, which has introduced smaller vendors alongside the giants, and is introducing more flexible contract terms. Andre Fuetsch, CTO and president of AT&T Labs, said in a statement when Vyatta was acquired: “Being able to design and build the tools we need to enable that transformation is a win for us and for our customers.”
And discussing the Barefoot switch in June, he highlighted the use of the open source P4 programming language on the merchant silicon powering the white box device. Using P4, and tying the white box to ONAP, enables visibility down to a packet level, which could drive new types of services and quality mechanisms. “This is more than just about lowering cost and achieving higher performance,” he said in his keynote. “Frankly that’s table stakes. This is really about removing barriers, removing layers, removing all that internal proprietary API stack that we’ve lived with these legacy IT systems, now we can bypass all of that and go straight to ONAP” to achieve fine-grained per-packet visibility.
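P4 programs a switch chip's match-action pipeline: each packet is matched against table entries and an action is applied, which is what makes per-packet counters and telemetry possible. The following is a hypothetical Python sketch of that match-action model only; real P4 compiles to silicon, and none of these names come from AT&T's actual pipeline.

```python
# Hypothetical model of the match-action abstraction that P4 expresses.
# Illustrative only: real P4 programs compile to switch silicon.

from collections import Counter

class MatchActionTable:
    """One pipeline stage: (match key -> action) entries plus per-entry counters."""
    def __init__(self, default_action):
        self.entries = {}             # exact-match key -> action name
        self.default_action = default_action
        self.hit_counts = Counter()   # per-key packet counters (telemetry)

    def add_entry(self, key, action):
        self.entries[key] = action

    def apply(self, packet):
        key = packet["dst_ip"]        # match on destination IP (illustrative)
        self.hit_counts[key] += 1     # visibility down to the packet level
        return self.entries.get(key, self.default_action)

table = MatchActionTable(default_action="drop")
table.add_entry("10.0.0.1", "forward_port_1")

actions = [table.apply({"dst_ip": ip}) for ip in ["10.0.0.1", "10.0.0.2"]]
print(actions)  # ['forward_port_1', 'drop']
```

Because every packet passes through the table, the counters give exactly the kind of fine-grained visibility an orchestrator such as ONAP could consume, without a proprietary API layer in between.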
Fuetsch added that the project “took this very novel approach of building a hardware abstraction layer and running open source networking modules like BGP and OSPF on top”. This layer can then operate independently of the silicon, and AT&T said it has other white boxes, based on different chips, in the works, with the same network OS. The second design to be moving into field trials is a white box from Delta Electronics, running on a Broadcom switch-chip.
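The value of the hardware abstraction layer Fuetsch describes is that protocol modules like BGP and OSPF are written once against an abstract interface, while only the thin driver beneath it changes per chip. A minimal Python sketch of that pattern follows; the class and method names are illustrative, not AT&T's or SnapRoute's actual APIs.

```python
# Hypothetical sketch of a hardware abstraction layer (HAL): routing
# modules run unchanged on any silicon, and each chip supplies a driver.

from abc import ABC, abstractmethod

class SwitchChipDriver(ABC):
    """Per-silicon driver: the only layer that differs between white boxes."""
    @abstractmethod
    def program_route(self, prefix: str, next_hop: str) -> str: ...

class BarefootDriver(SwitchChipDriver):
    def program_route(self, prefix, next_hop):
        return f"barefoot: {prefix} -> {next_hop}"

class BroadcomDriver(SwitchChipDriver):
    def program_route(self, prefix, next_hop):
        return f"broadcom: {prefix} -> {next_hop}"

class RoutingModule:
    """A protocol module (think BGP or OSPF) written against the HAL only."""
    def __init__(self, driver: SwitchChipDriver):
        self.driver = driver

    def install(self, prefix, next_hop):
        return self.driver.program_route(prefix, next_hop)

# The same module drives two different chips with no code changes.
print(RoutingModule(BarefootDriver()).install("10.0.0.0/24", "192.168.1.1"))
print(RoutingModule(BroadcomDriver()).install("10.0.0.0/24", "192.168.1.1"))
```

This is why AT&T can field a second white box on a Broadcom switch-chip while keeping the same network OS: only the driver layer is swapped.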
In fiber too, AT&T is turning to open source technology and virtualization to transform its own economics, and potentially share with others in future. It is preparing for a trial of 10Gig symmetric passive optical network technology (XGS-PON), in which it aims to combine next generation fiber with virtualized access functions, in order to reduce cost and allow multiple services (including broadband and backhaul) to be merged on a single network. And it is submitting an open design for a white box XGS optical line terminal (OLT) to Facebook’s Open Compute Project.
Eddy Barker, assistant VP for access architecture and design at AT&T, said: “In working on next generation PON, we have focused on trying to get the economics to where we are with GPON. A big aspect is just the equipment costs and more significantly the silicon and optics costs.”
AT&T also aims to hide the lower level details of the silicon through open software which would then run across chips from any compliant vendor. It has been working with ON.Lab, for instance, to deploy its ONOS (Open Network OS) and Virtual OLT Hardware Abstraction (VOLTHA). Some of that work has also incorporated technology from the CORD (Central Office Re-Architected as a Data Center) initiative, which sits alongside ONOS in the ON.Lab project.
Barker said: “We have been trying to bundle up the access components of CORD. It’s not that we plan to do it in a turnkey manner as in ON.Lab, but so we can disaggregate it and use parts with what we have already done within AT&T independently of CORD.” For instance, AT&T will use ONAP – based on its own ECOMP technology – rather than the CORD XOS operating system, which Barker told LightReading was “missing the emphasis on SDN control and virtualization in the access piece”.
This shows how operators are not just being converted to using open source technologies to reduce cost and time to market, but are taking an active role in contributing to these platforms, while also customizing them heavily when they deploy them in commercial systems. There is a myth that open source is easy to deploy because of the broad range of developers which work on the technologies. In fact, operators which have committed to platforms based around OpenStack, like ONAP, say that significant inhouse expertise and efforts are required to optimize the technologies for carrier-class performance.
And then there is the RAN. AT&T said it plans to “experiment with new virtualized RAN core network capabilities later this year”. The operator has always said that it would focus first on the packet core, IMS, data center and other systems, and would move on to the RAN at the tail end of its virtualization program. The challenges to virtualizing a wide area mobile network remain daunting, from cost and performance of fiber in long fronthaul links, to a lack of standardized interfaces, to issues of orchestrating functions across the RAN, core and transport and across wireless and wireline links.
Yet an over-arching SDN/NFV strategy like AT&T’s will only deliver its full potential if the RAN is eventually included. That will allow true end-to-end network slicing, to deliver on-demand virtual slices of capacity, optimized for a particular service or customer. And it will allow network resources to be allocated flexibly across every part of the network.
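The core idea of end-to-end slicing is that a virtual slice only exists if every domain it crosses (RAN, transport, core) can commit the requested capacity. The toy Python sketch below illustrates that admission logic; the domains, units and names are illustrative assumptions, not any operator's actual orchestrator.

```python
# Hypothetical sketch of end-to-end network slicing: a slice is admitted
# only if every network domain it spans can supply the requested capacity.
# Domains and capacity units are illustrative only.

class SliceManager:
    def __init__(self, capacity):
        self.capacity = dict(capacity)   # remaining units per domain
        self.slices = {}

    def create_slice(self, name, demand):
        # End-to-end check: reject unless every domain has headroom.
        if any(self.capacity[d] < demand[d] for d in demand):
            return False
        for d, units in demand.items():
            self.capacity[d] -= units
        self.slices[name] = demand
        return True

mgr = SliceManager({"ran": 100, "transport": 100, "core": 100})
ok1 = mgr.create_slice("iot-slice", {"ran": 30, "transport": 10, "core": 20})
ok2 = mgr.create_slice("video-slice", {"ran": 80, "transport": 5, "core": 5})
print(ok1, ok2, mgr.capacity)  # True False {'ran': 70, 'transport': 90, 'core': 80}
```

The second request fails because the RAN domain cannot cover it, which is precisely why a strategy that excludes the RAN cannot deliver true end-to-end slices.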
AT&T said last month that it was planning to start virtualized RAN work this year, following on the heels of Verizon, which has also said it would start testing vRAN technologies this year, working with Ericsson and others.
Both US giants have already tested centralized RAN technology – which allows several cell sites to share baseband resources on one controller, but does not implement these in the cloud or with NFV, or even completely separate hardware and software functionality. Verizon has tested this approach in San Francisco and discussed plans to deploy it in Boston this year, but vRAN would be a more significant step, both in architectural change and in potential upside.
There is one common feature to all these disparate efforts to disaggregate the mobile network, virtualizing its functions on white box hardware and creating orchestrators to mastermind the whole machine. That is the threat to the margins, power and even survival of the traditional big-box network vendors, as they see the walls they carefully erected around their platforms being stormed.
In an interview, Fuetsch laid down the gauntlet to the major vendors, saying: “Here’s the big message for the OEMs. It is really a call for them to open up their architectures, open up their software and their own hardware so they can participate. They are going to have to make a choice here – do you want to be at the table or on the plate?”
AT&T in second fixed 5G trial:
AT&T has announced its second trial of fixed wireless technology based on the preliminary specs for the first 5G standard, 5G NR Non-Standalone. It is testing the system in 39 GHz millimeter wave spectrum in Austin, Texas, along with its TV subsidiary DirecTV.
Ericsson is providing the infrastructure and Intel the Mobile Trial Platform for the latest test. Participating customers will be able to stream live television via the DirecTV Now service. AT&T said the trial will last several months, and it will feed results back into the 3GPP standard process.
“By conducting the trial with a variety of audiences – residential, small business, and enterprise customers – using DirecTV Now and other applications, we expect to gain new insights into mmWave performance characteristics needed for industry standards development,” AT&T said in a statement.