SK Telecom joins the battle to drive NFV interoperability and control

For the first time, the radio is not the heart and soul of the mobile network. Once, it was the radio where interoperability was essential, where vendors gained their competitive edge and customer lock-in, where operators differentiated their services. Now, as telco networks become increasingly software-driven, that center of control and industry power has shifted to the software which manages and coordinates all the applications and virtual network functions (VNFs).

SK Telecom of South Korea is the latest in a series of operators seeking to establish their influence over this vital element, by creating inhouse technologies to ensure multivendor interoperability and future-proofing, and by extending their overall ecosystem power by opening their developments to other carriers. The company’s T-MANO joins AT&T’s ECOMP and China Mobile’s OPEN-O (now merged to form the Open Network Automation Platform or ONAP), and developments from Telefonica, NTT Docomo and others, as a potential de facto standard.

MANO (management and orchestration) is the brain of the virtualized network, and operators are battling to seize leadership in this area away from the major vendors, and even to establish themselves as standards setters for the entire industry. For the full vision of 5G to be realized, it will be essential for individual operators’ virtualized networks to interoperate and to be able to be orchestrated in a unified way. Network slicing will only deliver on its potential to drive new economics and services if those slices can be cut from a huge, flexible, ever-shifting pool of capacity. If it is implemented only within individual operators’ networks, its impact will be far lower. An open, standard orchestrator is essential to make the vision into a reality.

That orchestrator may evolve from a conventional standards effort like ETSI’s Open Source MANO (OSM) or an open source initiative like the Linux Foundation’s ONAP. The two approaches may converge, and will certainly feed into one another, but in political terms, they are at loggerheads. The type of organization which ends up in the leadership role for this vital piece of software will help decide whether 5G will be largely an open source platform or a familiar standards body-driven one.

The economics of that matter to operators. Open source can drive down capex, accelerate adoption and broaden innovation, but it will almost certainly drive up opex, as MNOs will have to devote considerable inhouse effort to developing and deploying optimized solutions based on the raw open source foundation – or outsource that effort to vendors, old or new. A standards body solution comes with greater harmonization but will tend to be supplied and deployed more heavily by established vendors, swinging the pendulum towards capex and threatening vendor lock-in again.

That lock-in issue is key to the economics of 5G. Operators are keen to reassert their own control of their networks by ensuring that they define the vital underpinnings of virtual platforms – such as MANO, or the interface between physical cells and virtual basebands – rather than the vendors. That will enable them to swap kit and software from multiple vendors in and out of their networks simply, and to use open source or start-up offerings without the risk that usually works against those solutions. The investments by Orange, BT, AT&T and others, in start-ups which could challenge the power of the major equipment providers, show how keen operators are to have a broader, more competitive supply chain. In many cases – as epitomized by AT&T’s Domain 2.0 effort to build a new supply chain around software-defined networking (SDN) – virtualization and SDN will be the triggers for this.

It will also help avoid the situation in which the industry found itself with CPRI – officially a standard interface, but one that was driven by vendors. Each supplier has implemented the CPRI specifications slightly differently, so that the basebands and remote radio heads which the interface connects cannot be mixed and matched easily. But will operator-driven interfaces be equally prone to fragmentation?

Initially, then, SK Telecom’s T-MANO was an inhouse effort to establish its own APIs (application programming interfaces) so that it could use a common MANO approach for all its virtualized network elements, and introduce multiple vendors to the mix. This echoes what NTT Docomo described back in March 2016 when it announced the first commercial deployment of a multivendor, interoperable virtualized EPC (evolved packet core).

The operator’s CTO, Seizo Onoe, had said just months before that multivendor NFV technology was “regarded as pie in the sky”. So it was seen as a major breakthrough when he provided details of Docomo’s multivendor NFV plans and initial suppliers.

“Many NFV technologies already deployed still rely on single vendor, so we expect this truly multivendor NFV technology will be a long-awaited game changer in the mobile industry’s ecosystem,” Onoe said at the time. Docomo’s approach is to choose vendors in different areas on the condition that they ensure interoperability with their rivals’ systems – an approach, now emulated by SKT, which could hasten the development of interfaces and models that might be replicated round the world or included in future standards.

In Docomo’s deployment, Ericsson’s Cloud Execution Environment (CEE) – based on the OpenStack open source technology for orchestrating virtual functions – is the integration and cloud management platform for all the NFV functions from Ericsson and other vendors. The Swedish vendor claims it can interwork with any carrier-class virtualized network function and SDN on the market. The other vendors involved in the first stages were Cisco and NEC, the former supporting SDN-based automation of the VNFs with its Application Centric Infrastructure (ACI); the latter providing the first system to be virtualized, the vEPC, along with the VNF Manager (from its Netcracker subsidiary).

Docomo is initially deploying a vEPC in order to be able to increase capacity and maintain connectivity during spikes in activity, or in the event of disasters or outages, though other use cases, and a complete transition to a virtualized approach, will follow in future. It is virtualizing LTE EPC functions including the Mobility Management Entity (MME), the Serving Gateway (S-GW), and the Packet Data Network-Gateway (P-GW). Some vEPC providers, like Affirmed Networks, can disaggregate their VNFs so operators can select different components of a vEPC from different suppliers. In Docomo’s case, NEC will provide the whole platform.
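The difference between an integrated vEPC like NEC’s and a disaggregated one like Affirmed’s can be sketched as data. The following Python fragment is purely illustrative – the descriptor fields and function names are assumptions for this sketch, not any vendor’s or ETSI’s actual schema:

```python
# Hypothetical sketch: an ETSI NFV-style network service descriptor for a
# vEPC, modelled as plain Python data. Field names are illustrative only.

VEPC_NSD = {
    "ns_id": "vepc-demo",
    "vnfs": [
        # In Docomo's case all three core functions come from one supplier;
        # a disaggregated vEPC would mix vendors per component.
        {"name": "vMME", "vendor": "nec", "vcpus": 8},
        {"name": "vSGW", "vendor": "nec", "vcpus": 16},
        {"name": "vPGW", "vendor": "nec", "vcpus": 16},
    ],
    # Virtual links between VNFs (S11 and S5 reference points in LTE terms)
    "virtual_links": [("vMME", "vSGW", "S11"), ("vSGW", "vPGW", "S5")],
}

def vendors_in_service(nsd):
    """Return the set of suppliers in a service; a disaggregated vEPC
    would return more than one."""
    return {vnf["vendor"] for vnf in nsd["vnfs"]}

print(vendors_in_service(VEPC_NSD))  # -> {'nec'}
```

A multivendor orchestrator’s job, in these terms, is to treat every entry in `vnfs` identically regardless of the `vendor` field.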

Like Docomo, SKT had struggled to make a multivendor environment viable. It said that, before it developed T-MANO, it had to build and operate a separate management and orchestration platform for each vendor of NFV equipment. Despite vendor assurances that they would support open interfaces, in practice these have been immature or poorly specified, and each supplier’s specs have differed, forcing operators to incur additional cost and complexity by supporting multiple MANO systems.

Clearly, this cancels out some of the biggest supposed benefits of virtualization – flexibility, multivendor interoperability, dramatically lower operating costs. SKT, like NTT Docomo and others, refuses to see the potential of the NFV technologies – which it has helped to pioneer – squandered so early in the game.

Now, its suppliers will have to support its APIs if they want to be included in its aggressive deployments of virtualized and 5G networks over the coming few years.

SK Telecom said it will apply T-MANO first to its virtualized VoLTE (voice over LTE) routers and then expand to the virtualized LTE EPC and MMS server. From 2019, it will only deploy virtualized EPC, and, like NTT Docomo, will insist that any supplier supports its APIs. It is also extending its NFV efforts to other areas of the network, and presumably T-MANO will follow over time. Unlike most operators, including AT&T, SKT does not see the RAN as the last and most challenging network function to be virtualized. It started introducing NFV to some base stations in 2016 as part of its Cloud-RAN project.

“With the commercialization of T-MANO, SK Telecom secures the basis for accelerating the application of NFV technologies to provide better services for customers,” said Choi Seung-won, head of the operator’s Infrastructure Strategy Office, in a statement. “We will continue to develop NFV technologies and accumulate operational knowhow for virtualized networks to thoroughly prepare for the upcoming era of 5G.”

Now the Korean company is going a step further and opening up its APIs for other operators to use, a move which echoes AT&T’s open-sourcing of ECOMP. SK has not yet said anything about a potential standards effort, but it did emphasize that T-MANO is based on the specifications set by ETSI, so it seems closer to the OSM approach than to the Linux Foundation and ONAP.

Other operators, notably Telefonica, already have their technology in the heart of OSM, but the commercial readiness of T-MANO will count for a lot. It was the fact that AT&T had deployed ECOMP itself, with measurable results, that ensured it the leading role in ONAP, while China Mobile’s less proven OPEN-O took a subordinate role. The same could happen if T-MANO were embraced by ETSI OSM, reducing the influence of Telefonica.

Already, two elements of the technology are included in the ETSI specs, said SKT. The operator had already commercialized an NFV system orchestrator based on ETSI standards, named T-OVEN, back in 2015.

This is where the fragmentation risk is most obvious – if a converged approach cannot be found between the very different approaches and industry cultures of ETSI and ONAP (not to mention other vendor-specific or open source initiatives now under development in MANO).

ONAP has an impressive list of MNO supporters already – Orange is the most advanced deployer, and there is also support from Bell Canada, China Mobile, China Telecom, China Unicom, Veon and Reliance Jio. Amdocs, which helped develop ECOMP, hopes to be in pole position to help such operators with the challenges of deploying solutions based on open source technology, offering a range of ECOMP-related services.

On the ETSI OSM side, operator backers include SKT itself, Telefonica, BT, Telenor and Sprint.

Other major carriers are still on the fence. Verizon’s VP of global technology and supplier strategy, Srinivasa Kalapala, told Light Reading in March that his firm was doing due diligence on both MANO options, but had concerns about each. On the OSM side, he questioned whether the technology is more than a VNF orchestrator, rather than a full service management platform; while “the concern we have with ONAP is whether it is truly open. How many groups are contributing?”.

OpenStack, which is central to ONAP, is at the heart of the telco dilemma over open source. The platform provides a simpler approach to MANO than ETSI, and should enable operators to embark on virtualization with lower cost and faster time to market. However, some operators believe it is too flimsy for the heavy demands of a telco network and requires too much inhouse development.

But the former arguments are winning out, at least for the first wave of implementations. Carriers may want to add more functionality later, but the interest in OpenStack reflects the overall drive to accelerate progress. This brings with it a new attitude to standards organizations, with OpenStack, ONF and OpenDaylight gaining influence over carriers, while the power of traditional bodies like ATIS, TIA, ITU-T and the TM Forum is waning.

AT&T argues that ECOMP has gone beyond what ETSI offers, with its model-based approach that can be adapted to any set of capabilities according to the operator’s need. But those advances could be fed back into ETSI, especially if that body becomes convinced of the need for a standard model-based approach too, something it has resisted so far. The model approach is designed to simplify the process of virtualization and orchestration. Network engineers design services and set policies – using tools which AT&T has also open sourced, in the Service Design and Create portion of ECOMP – and then those services and policies are attached to the model so that operations can be automated.
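The model-driven idea – services and policies authored as data, executed by a generic runtime – can be illustrated with a minimal sketch. Everything below is an assumption for illustration (the model fields, the `evaluate` function, the metric names), not AT&T’s actual ECOMP schema or APIs:

```python
# Hypothetical sketch of model-driven automation: engineers define a service
# model and its policies as data; a generic engine evaluates telemetry
# against the policies, so no per-service orchestration code is needed.

SERVICE_MODEL = {
    "service": "vFirewall",
    "vnfs": ["vFW", "vPacketGen"],
    "policies": [
        # If a metric crosses its threshold, take the named action.
        {"metric": "packets_per_sec", "threshold": 10_000, "action": "scale_out"},
        {"metric": "health", "threshold": 0, "compare": "eq", "action": "restart"},
    ],
}

def evaluate(model, telemetry):
    """Return the automated actions the model's policies call for."""
    actions = []
    for policy in model["policies"]:
        value = telemetry.get(policy["metric"], 0)
        if policy.get("compare") == "eq":
            triggered = value == policy["threshold"]
        else:
            triggered = value > policy["threshold"]
        if triggered:
            actions.append(policy["action"])
    return actions

print(evaluate(SERVICE_MODEL, {"packets_per_sec": 15_000, "health": 1}))
# -> ['scale_out']
```

The point of the pattern is that adding a new service or policy means editing the model, not writing new orchestration code – which is what makes it adaptable “to any set of capabilities according to the operator’s need”.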

SK Telecom drives new architectures

SK Telecom has been pushing the envelope in many areas of the telecoms network in order to maximize the efficiencies and flexibility of its 4G and 5G systems, and transform its business model. It has not only deployed the first commercial macro layer Cloud-RAN with open interfaces but sees that as a step in its broader program to implement software-defined infrastructure at every layer.

It also has a strategic alliance with Deutsche Telekom, with a plan to deploy a transcontinental pre-5G network incorporating many of the key technologies of the new networks (4G or 5G), such as NFV, software-defined infrastructure (SDI), distributed cloud and network slicing.

Last autumn, it said it was ushering in the era of “All-IT”, to take the “All-IP” network towards 5G. This moved the terms of the Cloud-RAN debate forward. There have been several major trials and even deployments of virtualized base stations, enabling a cluster of cell sites to be managed by a server running the baseband functions in software – Telefonica, NTT Docomo and KT are among the pioneers, while SKT itself demonstrated vRAN functions, with Nokia, back in 2013.

But its September 2016 test, conducted on a commercial LTE network, got close to the fully fledged software-defined RAN vision originally publicized by China Mobile. This goes further than virtualizing and centralizing base station functions. SKT also pointed to one of the key visions for 5G, to implement a software-defined network with open interfaces, so that third parties – specifically small and medium enterprises (SMEs), in SK’s thinking – can develop their own functions for the RAN, especially at the edge of the network.

SK Telecom refers to its lab network as a software-defined RAN (SDRAN), which – while adding yet another acronym to the overloaded area – is more descriptive than C-RAN. This is not just about running base station functions as virtual machines in the cloud – it is really about introducing open IT architectures that enable entirely new economics and business models, with elements such as an Ethernet fronthaul link.

The operator explained its definition of SDRAN as enabling “traditional base station functions to be implemented on a general purpose IT server, and its distinctive features include functional split between real time and non-real time processing functions, Ethernet-based interface, and intelligent operation”.

The Korean operator says that, by applying general IT technologies to the interface, not just to the actual base station processing, it is opening up the architecture and making the RAN into an IT environment. Data center techniques such as intelligent operations can be applied to the mobile network – for instance, a base station can “self-detect systemic errors and automatically restore the virtual machine”.
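The “self-detect and restore” behaviour SKT describes is a standard data center supervision pattern. A minimal sketch, with entirely hypothetical class and function names (nothing here reflects SKT’s actual implementation):

```python
# Hypothetical sketch of the self-healing pattern: a supervisor polls
# virtualized base station instances and respawns any that report a
# systemic error. All names are illustrative only.

class VirtualBaseband:
    def __init__(self, cell_id):
        self.cell_id = cell_id
        self.healthy = True
        self.restarts = 0

    def restore(self):
        """Re-instantiate the virtual machine (stubbed out here)."""
        self.healthy = True
        self.restarts += 1

def supervise(instances):
    """One pass of the health loop: restore any failed instance and
    report which cells were recovered."""
    restored = []
    for vm in instances:
        if not vm.healthy:
            vm.restore()
            restored.append(vm.cell_id)
    return restored

cells = [VirtualBaseband("cell-1"), VirtualBaseband("cell-2")]
cells[1].healthy = False           # simulate a systemic error
print(supervise(cells))            # -> ['cell-2']
```

The attraction for an operator is that recovery becomes a software operation on a commodity server, rather than a truck roll to a cell site.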

Working with Nokia, the operator has completed a field test of SDRAN and started to apply the technology to commercial networks. SKT is also feeding its virtualized base station work into ETSI’s NFV initiative.