Telefonica could join ONAP, averting disaster of fragmented orchestration

It is clear that network resources, whether physical or virtual, will need to be highly distributed to support low latency, sometimes high bandwidth applications such as emergency communications, connected vehicles and next generation mobile video or virtual reality. Of course, the mobile network is already very distributed and can support large numbers of devices, but only from the point of view of connectivity and data exchange.

Meanwhile, cloud computing and storage platforms have also evolved to interconnect many remote physical or virtual end points, but there has been little integration with transport networks. This gap is the subject of many R&D projects, including two EC H2020 initiatives, 5G-Crosshaul and 5G-Xhaul, as operators look to manage resources of all kinds from end-to-end to support automation and new services. This, in turn, is prompting a rethink of IP itself and how it is deployed in the new architectures. ETSI has a group looking at Next Generation Protocols (NGP), while others are looking to enhance IP for the new platforms.

This was one focus of the InterDigital MEC demonstration in Bristol, UK, which showed off an efficient way to distribute video content – a use case driver for edge computing which is always at the front of operators’ minds as they try to support video-obsessed consumers efficiently and cost-effectively. The demo used InterDigital’s implementation of Flexible IP Services (FLIPS), which was approved as a proof of concept by ETSI last year and is expected to be part of the 5G architecture. FLIPS was run in software on off-the-shelf computing hardware and commercial SDN switches, to stream video over WiFi to access points supported by the Bristol is Open smart city project. This supported a treasure hunt game in which any Android user could participate.

In the trial, latency was cut from tens of milliseconds to under 10ms compared to standard IP technology, while video distribution was made six times more efficient, said InterDigital.

“Latency reduction, higher bandwidth utilization, and the ability to deploy such services very close to end users rather than in some distant cloud are crucial to the success of MEC services,” said Dirk Trossen, senior principal engineer with the vendor. Other partners included CTVC, which provided the content. “Deploying services so close to end users is crucial to enable new services at the network edge but even more important is doing so without the need for deploying own infrastructure in operators’ networks”, said Stuart Porter of CTVC.

InterDigital said the trial also highlighted the advantages of a new approach to IP, to underpin FLIPS and MEC. This development, another one under Europe’s H2020 program, is clumsily called POINT (iP Over Information centric Networks – the beTter ip), and is focused on reducing latency in distributed and edge-based topologies. As well as POINT, InterDigital is a member of another H2020 initiative, FLAME, which is targeting new efficient platforms for media delivery supporting personalized, interactive, mobile and localized (PIML) workflows.

Such trials make it clear that, in 5G, many of the services which will drive revenue growth for operators, new and old, will rely on the joint management and allocation of all kinds of resources – network, computing and storage. All these remote elements need to be orchestrated as a single pool in order to automate the provisioning of optimized resources for each service and user, and so transform the cost and flexibility of the network.

So a heterogeneous, open orchestrator is one of the critical components of the new network, and will be essential to support network slicing in a multi-operator environment – allowing optimized bandwidth, storage and compute power to be allocated dynamically to a service or user, tapping into the resource pools of several operators if necessary (for a multinational customer, for instance, or a service requiring ubiquitous coverage).

Without this orchestration, the multi-access edge will not become the open, programmable, multi-operator and multivendor arena which is envisaged by early movers and shakers such as Quortus and InterDigital. Some operators are looking to drive progress and assert control, notably AT&T, whose ECOMP management and orchestration (MANO) software is the foundation of the open source ONAP (Open Network Automation Platform) initiative; or Telefonica, which has contributed significantly to ETSI’s OSM (Open Source MANO) rival.

These competitive efforts could start to converge if reports that Telefonica is to join ONAP prove correct. If it does, it will reflect the fact that ONAP has a broader scope than OSM, and has been deployed commercially by AT&T, which gives it a level of maturity that has attracted several operators to trial it, including Orange. ONAP was formed from the merger of ECOMP with the China Mobile-inspired OPEN-O and is now hosted by the Linux Foundation.

Javier Gavilan, Telefonica’s planning and technology director, sparked the speculation when he said in an interview: “OSM cannot be compared with ONAP because the scope of ONAP is bigger and OSM is only a small part of the Telefonica transformation project. We are transforming our full stack and this is something we are doing and it could be a part of ONAP.” Verizon also said, earlier this year, that it sees ONAP as being broader, providing a full service management platform, while OSM is mainly an orchestrator for virtual network functions (VNFs). (However, the US carrier has yet to make its promised choice between the two systems.)

Telefonica belonging to both groups could be a catalyst for convergence, which would help address one of the many sources of fragmentation in the immature world of carrier SDN/NFV, though it would not be easy to marry two platforms with such different approaches. However, Gavilan said earlier this month that he had been in contact with AT&T about aligning ONAP and OSM efforts to develop common information models and processes. He told LightReading: “The industry has two different initiatives running in parallel with a lot of common points and the idea is to align them as much as possible.”

Just because software is open source does not make it easy to implement – indeed, many early movers in NFV/SDN have pointed out that they have had to invest heavily in skills and tools to make open source MANO or other systems capable of supporting their full requirements and integrating with their network architectures. Gavilan acknowledged that the integration of OSM into Telefonica’s hugely ambitious virtualization program, Unica, was not complete. He said: “A couple of months ago we had the first OSM release that was product-ready and we have launched an RFI just to understand what are the different proposals for integrating OSM with OpenStack architecture.”

Though many efforts, including ONAP, are converging around OpenStack, many telcos, including BT, still complain of its shortcomings in supporting the very specific and demanding requirements of an operator network. In a recent report, analysts at Analysys Mason said that Telefonica had faced “notable problems with OpenStack implementations” because the open cloud technologies “do not support the specific performance and distribution requirements of network functions”. The report authors advised the operator to “continue to exert pressure on Unica vendors to live up to their promises and collaborate with one another, and be prepared to bypass them if necessary and build new capabilities itself (as it is doing with OSM) to ensure it can realize its vision”.

Unica has suffered various setbacks, perhaps inevitably for a program which spans most of Telefonica’s network elements, across all its European and Latin American territories. Last year, the operator replaced HPE with Ericsson as the lead integrator on the project and it has stopped giving any firm timelines to complete aspects of the work, in contrast to AT&T, with its regular progress updates on its Domain 2.0 SDN initiative. However, Telefonica has recently given some updates, saying it will open new data centers – in Mexico this year and then Chile, Spain and the UK, to complement the Unica points of presence in Argentina, Colombia, Germany and Peru. The operator is virtualizing various functions and has demonstrated vEPC and vCPE, among others. It is currently relying on four different suppliers of VNFs to ensure a multivendor system. Huawei is providing the vEPC in Argentina and Peru, while ZTE is contributing a virtualized IMS in Peru; Ericsson offers the same VNF in Colombia; and Nokia is supporting a virtualized service router in several markets.

Over 50 VNFs from over 30 different vendors are currently being validated, says Telefonica, and deployment is likely to accelerate from 2018. The last step is likely to be the RAN. “We don’t have a solution for Cloud-RAN yet,” said Gavilan. “There are some good solutions but they are not mature enough”, and there will be the need to adapt the underlying transport architecture.

Nevertheless, Telefonica aims to have VNFs handling 20% of mobile user plane traffic and 50% of control plane traffic in the “short term”, starting by virtualizing data centers in central offices and then “moving step by step toward the edge of the network”. That, of course, is where an open orchestrator becomes essential. Only with that in place can Telefonica heed Analysys Mason’s warning that it cannot “reap the full benefits of Unica until it becomes the business-as-usual architecture for all of Telefonica’s communications services”. As a step in that direction, Telefonica is setting up a Center of Excellence whose role will be to define systems, processes and technologies that can be used in all 16 of the operator’s markets.

Even without Telefonica, the ONAP effort is gaining momentum: it has signed its first US cableco supporter, Comcast, whose arrival was announced along with those of Fujitsu, Infosys, Netcracker and Samsung, taking the membership to 50 in all. ONAP also announced that it had adopted ICE, software developed by AT&T to incubate and validate VNFs before onboarding them, the aim being to allow VNF developers to do on-boarding very quickly (in a few days) using a self-service model. ICE will now be known as the VNF Validation Program (ICE) Project.

ONAP is an example of a project which is building rich, telco-specific functionality around open source platforms, but there needs to be more work on building bridges between the telco and IT/cloud worlds, and between open source and standards bodies.

Despite a recent lull in growth, ETSI NFV (Network Functions Virtualization) has become a well-accepted starting point for virtualizing the network in an open, multivendor way which also combines cloud/IT and network orchestration, and centralized and edge-based elements. But there needs to be more work to build relationships – in software and in human processes – between NFV’s architecture and those emerging in SDN (software-defined networking) and various open source projects.

SDN, in the broad sense, can provision connectivity between virtual machines, but it still needs to integrate with transport networks in a more meaningful way than just Layer 2 overlays over IP (Layer 3). OpenFlow supplies some of this additional functionality: it defines a basic common hardware model for a packet switch, together with a flexible way to automate networking decisions, so that as well as forwarding packets, the network can automate the management and configuration of advanced rules and policies such as security groups, or the dynamic instantiation of functions. That, in turn, allows multiple tenants or network slices to be provisioned automatically and on-demand, enabling many new business models.
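The match-action idea at the heart of OpenFlow can be sketched in a few lines of Python – a toy illustration, not a real controller, with field names and actions invented for the example: each flow entry carries a priority, a match pattern and an action, and the highest-priority matching entry decides what happens to a packet, which is how per-tenant policies such as security groups get enforced automatically.

```python
# Toy sketch of an OpenFlow-style match-action flow table.
# Illustrative only: field names and actions are hypothetical,
# not taken from the OpenFlow specification.

class FlowTable:
    def __init__(self):
        self.entries = []  # list of (priority, match_dict, action)

    def add_flow(self, priority, match, action):
        self.entries.append((priority, match, action))
        # Keep highest priority first, as a switch orders its lookups
        self.entries.sort(key=lambda e: e[0], reverse=True)

    def lookup(self, packet):
        for priority, match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # table-miss behaviour chosen for this sketch

table = FlowTable()
# Tenant isolation rule (a "security group"): block cross-tenant traffic
table.add_flow(200, {"src_tenant": "A", "dst_tenant": "B"}, "drop")
# Default forwarding rule for tenant A
table.add_flow(100, {"src_tenant": "A"}, "forward:port2")

print(table.lookup({"src_tenant": "A", "dst_tenant": "A"}))  # forward:port2
print(table.lookup({"src_tenant": "A", "dst_tenant": "B"}))  # drop
```

In a real deployment an SDN controller would install such entries on many switches at once, which is what makes on-demand provisioning of tenants or slices possible.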

Other approaches are to use a single SDN controller for multiple domains; or, more robustly, to implement a parent controller which orchestrates a number of domain-specific controllers. The latter provision resources in their own domains while the parent pulls the whole thing together. The Open Networking Foundation (ONF), which oversees OpenFlow, has been working on specifying a transport API (T-API), and two new interfaces – A-CPI (a unified interface between the control plane and the application plane, and between the orchestrator and network controllers) and D-CPI (the interface to the data plane). Multiple SDN controllers and orchestrators can also be deployed in a peer-to-peer structure or mesh, synchronized using east-west interfaces and standard protocols such as GMPLS.
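The parent/domain-controller split can be sketched as follows – a hypothetical skeleton with invented class and method names, not any real controller's API – to show the division of labour: each domain controller provisions resources only in its own domain, while the parent stitches a cross-domain slice together.

```python
# Hypothetical sketch of hierarchical SDN orchestration: a parent
# controller delegates to per-domain controllers. All names are
# invented for illustration; real systems would expose these calls
# via interfaces such as T-API or RESTCONF.

class DomainController:
    def __init__(self, domain):
        self.domain = domain

    def provision(self, slice_id, bandwidth_mbps):
        # In reality this would push flow rules or transport paths
        return {"domain": self.domain, "slice": slice_id,
                "bandwidth_mbps": bandwidth_mbps, "status": "up"}

class ParentController:
    def __init__(self, domain_controllers):
        self.domains = domain_controllers

    def provision_slice(self, slice_id, bandwidth_mbps, path):
        # Stitch an end-to-end slice across each domain on the path
        return [self.domains[d].provision(slice_id, bandwidth_mbps)
                for d in path]

parent = ParentController({
    "access": DomainController("access"),
    "transport": DomainController("transport"),
    "core": DomainController("core"),
})
segments = parent.provision_slice("slice-42", 100, ["access", "transport", "core"])
for s in segments:
    print(s["domain"], s["status"])
```

The design point is that the parent never touches domain internals; it only composes the abstracted services each domain controller offers, which is what keeps multi-domain (and potentially multi-operator) slicing tractable.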

The choice of protocol architecture splits between efficient, but inflexible, low level protocols with binary encodings, and high level protocols which support ease of development. The European Commission-funded 5G-Crosshaul project, part of the wider H2020 initiative, recommends: “A rough guideline is that low level efficient protocols are adapted to fast changing conditions while higher orchestration layers can use high level interfaces based on REST/RESTConf architectures with the required protocol stacks (e.g. HTTP) and text encodings (e.g., JSON, XML).”
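The efficiency gap behind that guideline is easy to see by encoding the same message both ways – here a toy flow-update record, with an invented field layout, packed as fixed binary fields versus a JSON text body of the kind a REST/RESTCONF interface would carry:

```python
import json
import struct

# Toy flow-update message: (flow_id, port, bandwidth_mbps).
# The field layout is invented for illustration.
flow_id, port, bandwidth = 42, 7, 1000

# Low-level binary encoding: three unsigned 32-bit integers = 12 bytes
binary_msg = struct.pack("!III", flow_id, port, bandwidth)

# High-level text encoding, as a RESTCONF-style body might carry it
json_msg = json.dumps(
    {"flow_id": flow_id, "port": port, "bandwidth_mbps": bandwidth}
).encode()

print(len(binary_msg))  # 12 bytes
print(len(json_msg))    # several times larger, but self-describing
```

The binary form is compact and cheap to parse, suiting fast-changing data-plane conditions; the JSON form is larger but self-describing and easy to evolve, which is why it fits the slower-moving orchestration layers.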

The IETF standard YANG is rapidly being accepted as the data modelling language used to describe and provision services and network devices, while the Control Orchestration Protocol (COP) is a pre-standard implementation of the Transport API concept, abstracting the control plane technology of a transport domain and relying on YANG and RESTCONF.
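To make the YANG role concrete, a minimal module for a point-to-point connectivity service might look like the sketch below – a hypothetical fragment with invented node names, not taken from any IETF or ONF model:

```yang
// Hypothetical minimal YANG module for a point-to-point
// connectivity service; all node names are invented for
// illustration, not drawn from a standard model.
module example-connectivity {
  namespace "urn:example:connectivity";
  prefix conn;

  list service {
    key "service-id";
    leaf service-id     { type string; }
    leaf src-endpoint   { type string; }
    leaf dst-endpoint   { type string; }
    leaf bandwidth-mbps { type uint32; }
  }
}
```

A controller exposing such a model over RESTCONF would then accept JSON bodies whose structure mirrors the module – one reason the YANG-plus-RESTCONF pairing is gaining ground as the common language between orchestrators and domain controllers.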