With ECOMP, AT&T and Amdocs have scored significant points in the race to drive standards for telco virtualization. The software – for management and orchestration (MANO) of virtualized networks – has been placed into open source, as promised last year, with Bell Canada and Orange as its other service provider backers.
But leadership in the standards for this enormously strategic piece of software – likened by AT&T SVP Chris Rice to the significance of Linux in computer operating systems – will not be won unchallenged. AT&T has played a strong hand by opening up the platform it developed with Amdocs at such an early stage. However, there are two other major open source projects in the MANO field, the OpenStack-based Open-O and ETSI’s OSM (Open Source MANO). Both of these also had their roots in inhouse developments by operators (China Mobile and Telefonica respectively).
So Rice may like to compare ECOMP (Enhanced Control Orchestration, Management and Policy) with Linux, but that is to gloss over how long the computer industry took to unify around an open operating system. There were years of Unix wars, with two major camps led by AT&T and the Open Software Foundation, and many semi-proprietary vendor implementations which were not fully compatible. Even Linux itself is fragmented in some areas, as Android has demonstrated.
There is no reason to believe that harmony will be any easier to achieve in NFV (Network Functions Virtualization), even though it is critical to interoperability and large-scale uptake.
Rice told LightReading: “In my mind, [ECOMP] is analogous to computer operating systems. This is the network operating system for the software-defined network going forward, and it is as important in that space as Linux was in the computer space.”
The software to control how virtual network functions (VNFs) are provisioned and orchestrated is, indeed, an operating system of sorts, and just as crucial if NFV is to be widely adopted and so have its maximum transformational effect on the behavior and economics of telco systems.
But that is precisely why large mobile and open source players will battle to seize the steering wheel. This will not just be a battle between different companies in their bid for power and influence, but between different philosophies of software. ETSI, originator of NFV itself, has moved into the MANO area with its OSM initiative, and represents an established way of setting standards in the telecoms market. By contrast, Open-O and now ECOMP, though devised by very established operators, represent the new influence of open source in telecoms, which is moving beyond devices and into the network itself.
While the conventional approach results in well-supported, universal standards, the process can be slow and detached from real world requirements. Meanwhile, open source bodies move quickly and tap into a very wide base of innovation and commercial requirements, but their platforms can be subject to fragmentation.
OpenStack – central to both Open-O and ECOMP – epitomizes the dilemma over open source. The platform provides a simpler approach to MANO than ETSI, and should enable operators to embark on virtualization with lower cost and faster time to market. However, it is less carrier-grade and some operators believe it is too flimsy for the heavy demands of a telco network. But the former arguments are winning out, at least for the first wave of implementations. Carriers may want to add more functionality later, but the interest in OpenStack reflects the overall drive to accelerate progress. This brings with it a new attitude to standards organizations, with OpenStack, ONF and OpenDaylight rising in their influence over carriers, while the power of traditional bodies like ATIS, TIA, ITU-T and the TM Forum is waning.
ECOMP will only shift that balance further, and even more so if it can converge with Open-O down the line. As well as the three operators and Amdocs, there are six other initial project members – Ericsson, Huawei and Intel (all three also members of Open-O), plus Brocade, IBM and Metaswitch. Like Open-O, ECOMP will be hosted by the Linux Foundation, and these crossovers inevitably raise speculation that the two efforts will merge, to form a united front against ETSI OSM. Indeed, Jim Zemlin, executive director of the Foundation, spoke about potential “harmonization” of the two projects in future.
Not that AT&T wants to give the impression it is participating in a battle. Rice argues that ECOMP has gone beyond what either ETSI or Open-O offer, with its model-based approach that can be adapted to any set of capabilities according to the operator’s need. But those advances could be fed back into ETSI, especially if that body becomes convinced of the need for a standard model-based approach too, he said.
That approach is designed to simplify the process of virtualization and orchestration. Network engineers design services and set policies – using tools which AT&T has also open sourced, in the Service Design and Create portion of ECOMP – and then those services and policies are attached to the model so that operations can be automated.
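The model-based workflow described above – design a service, attach policies to its model, then let operations run automatically – can be sketched in outline. The following is a purely illustrative example: the class names, policy structure and telemetry fields are invented for this sketch and are not ECOMP's actual APIs.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a model-driven orchestration loop.
# All names and structures here are illustrative, not ECOMP's real interfaces.

@dataclass
class Policy:
    name: str
    condition: Callable[[Dict], bool]   # evaluated against live telemetry
    action: str                         # operation triggered when condition holds

@dataclass
class ServiceModel:
    name: str
    policies: List[Policy] = field(default_factory=list)

    def attach(self, policy: Policy) -> None:
        """Attach a policy to the service model at design time."""
        self.policies.append(policy)

def evaluate(model: ServiceModel, telemetry: Dict) -> List[str]:
    """Return the automated actions triggered by the current telemetry."""
    return [p.action for p in model.policies if p.condition(telemetry)]

# A designer defines the service once and attaches policies to the model;
# the runtime then automates operations without per-service custom code.
vcpe = ServiceModel("vCPE")
vcpe.attach(Policy("scale-out", lambda t: t["cpu"] > 0.8, "add_vnf_instance"))
vcpe.attach(Policy("heal", lambda t: not t["healthy"], "restart_vnf"))

print(evaluate(vcpe, {"cpu": 0.9, "healthy": True}))   # ['add_vnf_instance']
```

The point of the pattern is that operational behavior lives in the model, not in per-service code, so the same runtime can automate any service the designer describes.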
This philosophy is important to ECOMP’s supporters, as is its relative maturity. They claim that it addresses many of the reservations around OpenStack-based platforms and their readiness for real world telco challenges. ECOMP was developed with Amdocs as part of AT&T’s ambitious NFV and SDN program, which has seen one-third of the carrier’s network functions already virtualized. That means the software is production code in a very serious telco deployment, giving it a weight that Open-O and OSM lack.
This maturity will attract other operators which want to embark on NFV/SDN but without taking the cost and risk that AT&T has done. AT&T says that there is a pipeline of operators waiting to announce their support, now that it is formally in the open source process.
The commercial reality of ECOMP was certainly a factor for the first operator to sign up, Orange, whose VP of APIs and digital ecosystem, Laurent Leboucher, said the company wanted a solution which was already commercially proven before it embarked on NFV/SDN – not easy in such a young market. He said: “It is not just about orchestration, it is the full design to orchestrate, operate and automate, the full execution. ECOMP was by far the most mature solution we had seen on the market.”
He agrees that some of these advances should be applied in ETSI too, in order to accelerate the production of full standards. “It isn’t open source versus standards,” he said.
Another advantage of ECOMP, according to Anthony Goonetilleke, president of the Amdocs product business group, is that it was designed to be deployable by smaller operators or larger players’ smaller subsidiaries, not just companies on the scale of AT&T. He told LightReading: “When you look at the Vodafones and Oranges of the world, where they are globally is at similar levels or higher than AT&T or Verizon, but if you drill down, their makeup is broken into different operators across Europe.” Amdocs is acting as integrator for Orange’s first ECOMP trial, in its Polish operating unit.
Orange Polska’s initial applications will be based on a virtual CPE for home customers. The operator plans to deploy a large proportion of the vCPE in the cloud, managed by ECOMP, in order to support efficiencies and new consumer services. That trial will be used to validate other use cases.
The France-based operator said it intends to follow the trial with a roll-out across its global footprint in 28 countries.
“In the future, these cutting edge technologies will give customers completely new possibilities, such as the ability to self-activate and deactivate services, or to enjoy flexible rating, based on the time they consumed the service,” said Piotr Muszyński, VP of strategy and transformation. “The operator, on the other hand, will receive tools that allow real time adaptation to meet the customer needs.”
Although AT&T will clearly lose full control of ECOMP now it will be run by an open source community, the carrier is likely still to drive much of its evolution and to retain a position of influence and technology leadership from the platform (as open source initiators often do, Google Android being a particularly powerful example). And of course, its intimate knowledge of the framework will help it to develop strong services within it. While ECOMP, released under an Apache 2 licence, is a framework for a catalog of services, each operator will devise their own services – AT&T’s are not included in the open source platform.
This tactic – developing a major product for inhouse use and then sharing it, in a bid for wider influence and market advantage – is common among IT majors, but is new to telecoms operators, whose instincts have always been to keep their inventions to themselves for differentiation. But this is part of the “very different AT&T” which chief strategy officer John Donovan promised while discussing a blizzard of news announcements last week.
This involves greater openness with customers, partners and suppliers, and new ways to work with them, enabled by mechanisms like AT&T’s Network 3.0 Indigo data sharing framework (see inset), and by virtualization (which should apply to 55% of network functions by the end of this year). He believes AT&T is uniquely positioned where the three most important trends in telecoms – SDN, 5G and ‘giant data’ – intersect.
“When you take all three of those and you look at the intersection of them, there is no one else who can come talk to you today,” Donovan said. “At the intersection, the platform that is born out of that, that’s what we refer to as Indigo.”
He added: “When you look at Indigo, the platform will have a lot of tenets over time. As you think about our evolution as a company, this is the beginning of that dialogue where we are going to try to build an environment where customers, our partners and our peer companies can innovate and move networking into the natural next generation.”
These activities in cloud and SDN platforms are making AT&T behave, and procure systems, like an IT heavyweight as much as a telco. Indeed, Intel says AT&T is the first telco to join the ‘Super 7’ group of Internet giants who spend so much that they get early access to new chip technology. The seven members are Amazon, Facebook, Google, Microsoft, Baidu, Alibaba and Tencent.
AT&T can congratulate itself that it has been sufficiently visionary to embrace telecoms/IT/web convergence, rather than just talking about it like most of its peers, and so carve out a new identity for itself which should take it well beyond the specifics of US market rivalries and regulatory structures. However, it must also be careful not to push so far out of its comfort zone that its actions misfire because it lacks the right skills and contacts. There is a difficult line for operators to draw when they start to become cloud providers, and though AT&T may be behaving like a web giant, with its open source projects and infrastructure investments, its recent alliance with Amazon indicated that it knows it has limitations too.
The two companies will integrate their respective networking and cloud capabilities, going well beyond their existing work to connect devices to the cloud – also optimizing those links, preconfiguring sensors and devices for efficiency in the Internet of Things (IoT), and working on overall platform security and threat management.
This seems to show AT&T acknowledging a reality which most carriers will have to accept too – that they are not in a position to compete with Amazon AWS or IBM directly in offering cloud services. But they have highly valuable expertise in device connectivity and management, and in provisioning and monetizing large numbers of gadgets and consumers. So alliances like this one are sure to proliferate, though some telcos will be more successful than others in avoiding a bitpipe role in the cloud, and securing a significant role in the value chain when they join forces with Amazon, Microsoft or vertical market platforms like GE’s Industrial Internet Initiative (in which AT&T is also the primary carrier member).
All but a few operators – the exceptions mostly in Asia – will pull back on their homegrown clouds and turn to third parties, which may see a large number of cloud assets coming to market. AT&T and Verizon have both been looking to sell at least some of these activities. Verizon is on the point of offloading its data center portfolio in a $3.6bn deal with Equinix and is now reported by LightReading to be planning to divest its wider enterprise cloud services business too, to an unknown buyer.
The moves come only five years after the carrier acquired cloud services player Terremark, which could have signalled a focus on enterprise cloud – Verizon having lost out in the public cloud market to Amazon and others. However, progress has been slow, and Verizon is now said to be planning to exit the enterprise cloud sector. Other US carriers, such as CenturyLink and Windstream, are also selling off physical data centers and focusing on virtualization to support modern enterprise services, with AT&T as their role model. But Verizon shows signs of shifting away from the segment – as LightReading points out, it “never used its cloud capabilities to remake its own internal network” and so its cloud activities can be more easily hived off, leaving Verizon free to pursue its current investment priorities in content and 5G.
Telcos used to dismiss the huge webscale businesses and argue that private or hybrid cloud providers would deliver greater value and optimized experience. But the economics of the giants are too great to resist – their costs, but also their ability to invest heavily in essential enablers like security. However, the public cloud majors still lack the ability to look deeply into the networks, so alliances with operators will hold significant value for them, especially in the IoT with its vast numbers of connections and sensors.
AT&T is drawing a clear line between what it believes to be the natural territory of the new virtualized, software-driven telco – of which it is one of the world’s most advanced examples – and what must be entrusted to partners. Its hope, no doubt, is that the actual cloud platform will become the commodity, not the newly flexible and programmable telco network – though of course, both will be just conduits for high value applications, services and VNFs.
The trick for telcos will be to ensure they deliver enough of those high value elements to justify their huge investments in infrastructure, a process in which they will sometimes be competing with their own IT and cloud partners. But at least they can support huge scalability and remove at least one area of high cost from their plans by working with cloud giants. They can then concentrate on optimizing performance in their networks, and on deploying services and connectivity close to the user or the IoT device, via the localized IT platforms and low latency connections of Multi-access Edge Computing (MEC).
It remains to be seen which operators strike the right balance between cloud and edge, and between inhouse and partner infrastructure and services. With its program to virtualize 75% of its network elements, and its early moves to offer commercial SDN-enabled network-on-demand services to enterprises, AT&T is looking stronger than any other western operator and may, unusually for a telco, hold the stronger hand in its relationship with Amazon.
AT&T CEO Randall Stephenson insists that the primary asset in the cloud value chain is the network. “We said we are not going to compete in this commoditized cloud. Actually, the cloud is what’s being commoditized; it’s not the network,” he told a conference run by Goldman Sachs last fall.
AT&T’s Network 3.0 Indigo:
In a world in which everything is connected to everything else, and the cloud, there will be a need to tap into information coming from many partners and even competitors, or big data analytics will risk being based on too narrow a set of internal assumptions. Yet new levels of trust and security, and new working practices, will be needed to make this open, cooperative world a reality while still protecting the interests of commercial bodies and of individual citizens.
AT&T’s approach is its recently unveiled data sharing network, Network 3.0 Indigo, which will support its own analytics processes, but also allow it to provide services to customers. There are few details of how it will work in commercial practice as yet, but it is based on several areas of technology in which the US telco has been very active in recent years – SDN, blockchain and ECOMP.
The end result, it says, will be a secure, trusted environment in which organizations can share data, and break down the siloes which have been put in place for privacy or competitive intelligence reasons. The resulting combinations of inputs from many sources and systems will enable new applications and services.
Victor Nilson, SVP of big data at AT&T, wrote in a blog post that Indigo could support many services which rely on fusing data from different organizations, such as a telemedicine platform in which doctors, hospitals, pharmacies and insurers could share patient data securely.
This is achieved by using SDN, identity management, access management and authentication, blockchain for auditing, plus machine learning and artificial intelligence for analytics. Nilson describes the results as “a manageable information sharing environment that provides the protection, privacy, security, compliance management of appropriate data usage, while at the same time unlocking some level of data that can be managed and mined across entities, within a company or across companies.”
Fundamental to these goals is the ability to share data without worrying about privacy and security. Currently, if data is aggregated, summarized or anonymized, in order to protect privacy, it loses much of its usefulness. Indigo will move away from a simple trusted/not-trusted security judgement and assess entities, such as endpoints, according to a more complex and nuanced set of criteria including reputation. And it will provide mechanisms to allow meaningful sharing of data even where the specific details of individual or location have been removed.
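The shift from a binary trusted/not-trusted judgement to a graded, multi-criteria assessment can be illustrated with a small sketch. The criteria, weights and thresholds below are invented for illustration and do not reflect Indigo's actual design.

```python
from dataclasses import dataclass

# Hypothetical illustration of graded trust assessment, as opposed to a
# binary trusted/untrusted gate. Weights and tiers are invented for this
# sketch, not taken from Indigo.

@dataclass
class Endpoint:
    reputation: float      # 0.0-1.0, built from observed past behaviour
    authenticated: bool    # passed identity/access checks
    patch_level: float     # 0.0-1.0, how current its software is

def trust_score(e: Endpoint) -> float:
    """Combine several weighted criteria into a single graded score."""
    return (0.5 * e.reputation
            + 0.3 * (1.0 if e.authenticated else 0.0)
            + 0.2 * e.patch_level)

def sharing_level(e: Endpoint) -> str:
    """Map the graded score to a data-sharing tier rather than a yes/no gate."""
    s = trust_score(e)
    if s >= 0.8:
        return "full"          # raw detail may be shared
    if s >= 0.5:
        return "anonymized"    # only de-identified data is shared
    return "none"

print(sharing_level(Endpoint(reputation=0.9, authenticated=True, patch_level=0.7)))
```

The design point is that an entity with a middling score is not shut out entirely; it is offered a reduced level of sharing, which matches the article's description of meaningful sharing even when individual or location detail has been removed.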
“Today there’s not a good way to do that,” Nilson said. “There’s not a good abstraction mechanism, there’s not a good sharing mechanism, there’s not a good security mechanism, there’s not a good authentication mechanism, that can pull that up to a higher level of data sharing.”
Indigo is not at the commercial stage yet but is likely to be so this year. Over the coming month, white papers will be issued to provide more detail, and specific use cases will be shared later in 2017. As with ECOMP, AT&T hopes Indigo will be adopted beyond its own enterprise and service provider base. It has not said whether it will open source elements of the network sharing framework, as it has with its orchestration technology, but it does want other operators to adopt the platform.