Telcos urge a bolder, less fragmented approach to NFV

For most of this year, NFV (network functions virtualization) has been suffering from the almost inevitable backlash against a highly complex and immature technology which has been hyped to the skies. More urgent moves are needed to avoid fragmentation of the platform, and to deploy NFV in a flexible way, said speakers at last week’s NFV/SDN Congress in The Hague. But in addition to those familiar complaints, some CTOs also think operators have been too timid and need to set far bigger goals for their virtualization programs.

Arash Ashouriha, Deutsche Telekom’s deputy CTO, told the event: “From day one the aspiration was to learn from the hyperscale companies and the reality is that we are far away from the pure cloud-native vision.”

But, as many delegates remarked, by taking baby steps rather than embracing cloud-native fully, telcos are often introducing unwanted complexity which will compromise the end results, often because there are no unified frameworks or processes.

Some of the various initiatives surrounding management and orchestration (MANO) of NFV systems need to come together to provide a single framework, which would then be a better basis to move to full automation (see article above).

And according to Ashouriha, operators need to be bolder about fully decoupling the infrastructure from the services. This is not just about separating hardware and software (running most functions as virtual machines on commodity boxes); nor decoupling the user plane and control plane in a software-defined networking (SDN) environment. Both these changes are at the heart of most operators’ virtualization strategies. It is also about MANO and lifecycles. The DT executive explained: “We have the same lifecycle for VNFs as for infrastructure and our experience has shown this is a big mistake. We need to decouple these things and treat them separately.”

That may mean prioritizing continuity over the latest updates. For instance, DT “will not always put in the latest OpenStack release. You need a level of stability and continuity”. OpenStack, the open source cloud platform, is the basis of virtualization programs at several large operators such as AT&T, which has placed its own OpenStack-based MANO development into another open source initiative, ONAP (Open Network Automation Platform).

But Ashouriha claims ONAP is only half an answer to the challenges because it does not have universal support and “the first release is not fully usable”. He also hinted that it might be too dominated by one or two operators, saying: “Lots of people are moving to ONAP but who is contributing in that community?” Orange and Bell Canada are testing AT&T’s ECOMP, on which ONAP is based (along with some code from China Mobile’s Open-O development).

The other concern around ONAP is that it is splitting the NFV sector by providing an alternative to ETSI’s Open Source MANO (OSM) project, which put out the third release of its software last week. There is rising pressure for OSM and ONAP to adopt the same information models to allow for common frameworks, interoperability among VNFs, and a broad developer ecosystem.

But there are politics involved too – much of the OSM code was contributed by Telefonica, another early mover in NFV trials, which will have the same desire as AT&T to steer this vital platform in its preferred direction and increase its overall influence.

Telefonica did hint, in August, that it might be open to joining ONAP, which would provide a powerful bridge between the two groups and might make it easier to cooperate on common models. Javier Gavilan, Telefonica’s planning and technology director, sparked the speculation when he said in an interview: “OSM cannot be compared with ONAP because the scope of ONAP is bigger and OSM is only a small part of the Telefonica transformation project. We are transforming our full stack and this is something we are doing and it could be a part of ONAP.”

Verizon also said, earlier this year, that it sees ONAP as broader, providing a full service management platform, while OSM is mainly an orchestrator for virtual network functions (VNFs). (However, the US carrier has yet to make its promised choice between the two systems.)

Telefonica belonging to both groups could be a catalyst for convergence, which would help address one of the many sources of fragmentation in the immature world of carrier SDN/NFV, though it would not be easy to marry two platforms with such different approaches. However, Gavilan said earlier this month that he had been in contact with AT&T about aligning ONAP and OSM efforts to develop common information models and processes. He told LightReading: “The industry has two different initiatives running in parallel with a lot of common points and the idea is to align them as much as possible.”

And last week, DT senior program manager Klaus Martiny, the leader of the new ETSI zero-touch group (see earlier article), was also hinting at ETSI-ONAP convergence, saying in an interview: “ONAP and zero touch could be run in parallel, learning from each other and exchanging information. Maybe we can merge them or kill one. That could happen maybe in 2019 or 2020. We have to deliver something that is useful.”

While these signs of convergence remain tentative, there are reasons to feel more confident about NFV – a few cheerleaders, at least, believe the platform is sufficiently mature to support full-scale roll-out. AT&T is the most famous example, but Orange told the Dutch event that it has finished defining its “target architecture” and has been offering on-demand services to some enterprises based on its virtualized network trial architecture. Now it is ready to start work on “industrial mode” roll-out, starting in Spain and extending across its operations throughout Europe, the Middle East and Africa.

“The first step was to define. The next is to deploy across all our affiliates, starting in Europe but going soon also into the Middle East and Africa zone,” said Emmanuel Bidet, VP of convergent networks control. The operator has chosen its key vendors – Red Hat for OpenStack deployment, Juniper’s Contrail for the SDN controller, and hardware from HPE and Dell.

Bidet says this will provide a generic infrastructure onto which multivendor VNFs can be onboarded. Orange is now working to source those VNFs and will choose the vendor for its virtual evolved packet core (vEPC) before the end of the year. Many operators are choosing the EPC as the first function to virtualize – in Orange’s case this is partly to support rising traffic demands, but also to meet specific requirements in particular hotspots or new applications.

Operators are seeing reasons for greater confidence in NFV, especially the appearance of second generation VNFs. As AT&T’s Tom Anschutz, a distinguished member of technical staff, told the Congress, early VNFs were just virtualized versions of the integrated hardware-software products that preceded them. These might save some money by shifting to COTS hardware, but they did not achieve dramatic operational efficiencies, support new services, or free operators from lock-in to the supplier of a particular network function.

Anschutz said: “Just doing the virtualization without re-architecting the software is not helping all that much. We are doing the initial investment and we are looking for a pay-out that is going to allow us to get simpler, more easy-to-compose software but in this first step we’ve added the complexity of managing virtual infrastructure and we haven’t pulled out the complexity of those vertically integrated components and systems.”

Now there is a shift towards a cloud-native environment, in which VNFs are designed specifically for the cloud, and in which functions are decomposed into microservices, or other tiny building blocks, which can then be mixed and matched flexibly to create new functions, and which promise to be more manageable, resilient and scalable.
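As a purely illustrative sketch of that decomposition idea – not any vendor’s actual design – a network function can be modeled as a chain of small, independently replaceable stages that are assembled into different services (all names here are invented for the example):

```python
from typing import Callable, Dict, List

# Hypothetical packet record; real VNFs of course operate on live traffic.
Packet = Dict[str, object]

# Each "microservice" is a small, self-contained stage that can be
# mixed and matched into different service chains.
def classify(pkt: Packet) -> Packet:
    pkt["class"] = "video" if pkt.get("port") == 443 else "bulk"
    return pkt

def meter(pkt: Packet) -> Packet:
    pkt["metered"] = True
    return pkt

def forward(pkt: Packet) -> Packet:
    pkt["next_hop"] = "gw-1" if pkt["class"] == "video" else "gw-2"
    return pkt

def compose(stages: List[Callable[[Packet], Packet]]) -> Callable[[Packet], Packet]:
    """Build a service chain from independently deployable stages."""
    def chain(pkt: Packet) -> Packet:
        for stage in stages:
            pkt = stage(pkt)
        return pkt
    return chain

# Two different functions assembled from the same building blocks.
video_chain = compose([classify, meter, forward])
simple_chain = compose([classify, forward])

print(video_chain({"port": 443}))
```

The point of the sketch is that upgrading or replacing one stage does not require touching the others – the property that makes decomposed functions more manageable and resilient than a monolith.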

Anschutz said cloud-native will make the network functions “elastic – they are not going to have just one chunk on a server with one set of performance characteristics but rather the function is going to understand the load it is trying to provide and it will either consume more resources or curtail, and consume less resources, depending on the current load.” He said that, with second generation NFV, AT&T is coming closer to the opex savings it had targeted from virtualization.
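The elasticity Anschutz describes can be sketched, in toy form, as a control loop that sizes capacity to the offered load – the capacity figure and bounds below are invented for the example, not AT&T parameters:

```python
import math

INSTANCE_CAPACITY = 100.0   # assumed sessions one instance can serve
MIN_REPLICAS, MAX_REPLICAS = 1, 20

def desired_replicas(current_load: float) -> int:
    """Scale the function out under load and curtail it when idle."""
    needed = math.ceil(current_load / INSTANCE_CAPACITY)
    return max(MIN_REPLICAS, min(MAX_REPLICAS, needed))

# The function consumes more resources as load rises and releases them
# again as load falls, rather than sitting in "one chunk on a server".
for load in (40, 450, 2500):
    print(load, desired_replicas(load))
```

In practice this logic lives in the cloud platform’s autoscaler rather than in the VNF itself, but the principle – resources tracking load – is the same.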

Peter Konings of Verizon was particularly vocal, in The Hague, in calling on vendors to accelerate support for microservices. He said: “Today when we are delivering a Riverbed or Palo Alto service, we are providing a full-blown software package to customers. It is like selling a car with all of its options in only one version. We need more flexibility. We need to move to microservices so we can deliver exactly what customers need.”

ETSI NFV Release 2 provides much-needed unified APIs:

ETSI NFV has published a unified API definition which underpins six new NFV specs, covering VNF package structure, dynamic optimization of packet flow routing, accelerated resource management and hypervisor domain requirements. Telcos will be able to access the API specifications by the end of 2017. In addition, the ETSI ISG says 18 different work projects have been approved.

According to Diego Lopez, chairman of ETSI NFV, the unified REST APIs will help a broader base of companies to get involved in pushing NFV forward. “With the ground-breaking work that is now being done we are getting closer to a stage when universal integration is finally achievable and vendors’ VNF solutions can be executed and managed via any orchestrator and management solution without integration problems arising,” he said. “Furthermore, all of the component parts of such management/orchestration systems will be completely interoperable with one another.”
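The interoperability Lopez describes rests on any orchestrator being able to drive any vendor’s VNF manager through the same REST calls. A hypothetical sketch of what such a call might look like – the endpoint path and payload fields here are invented for illustration, not taken from the ETSI specifications:

```python
import json

# Hypothetical request an orchestrator might assemble to instantiate a VNF
# through a standardized REST interface; all field names are illustrative.
def build_instantiate_request(vnfd_id: str, flavour: str) -> dict:
    return {
        "method": "POST",
        "path": "/vnf_instances",   # invented path for the sketch
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "vnfdId": vnfd_id,      # identifier of the VNF package descriptor
            "flavourId": flavour,   # requested deployment size
        }),
    }

req = build_instantiate_request("vnfd-epc-001", "small")
print(req["path"], req["body"])
```

Because every compliant VNF manager would accept the same request shape, the orchestrator needs no per-vendor integration code – which is the promise of the unified API work.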

The ISG is scheduling NFV Release 3, which will provide specs and guidance for operationalizing NFV. It will also perform in-depth studies on forward-looking topics, such as enhanced security and applying NFV to network slicing for 5G.