ONS 2019: the balance is shifting from telco thinking to open source

If 2020 is likely to be the year when 5G moves into the mainstream, history may remember it as an even more important turning point for the mobile industry – the year when open source initiatives seized the balance of power from conventional standards organizations and traditional telco thinking. Not in the radio standards, of course – it will be another generation or two before 3GPP’s position is usurped. But when it comes to ETSI, projects inaugurated by that body are increasingly being taken up, perhaps hijacked, by open source groups hosted by the Linux Foundation (LF).

In two key areas, edge computing and the management and orchestration (MANO) of virtualized networks, the industry impetus has shifted from ETSI’s efforts – Multi-access Edge Computing (MEC) and Open Source MANO (OSM) – to projects under two LF umbrellas, LF Edge and the LF Networking Fund. Several of those projects were kicked off by AT&T, including the Akraino edge framework and, in MANO, the ONAP group (Open Network Automation Platform).

And that highlights one of the important reasons why open source is finally taking a major role in telecoms networks, so many years after it moved into the data center and the mobile device. Large operators such as AT&T, facing very challenging economics if they deploy 5G in the same way as 3G and 4G, are determined to wrench control back from the large vendors and break their lock-ins, in order to accelerate the creation of an open, software-based, multivendor platform that should transform cost of ownership and commercial agility. Throwing their weight behind open initiatives is one way for the major telcos to assert control of their ecosystem and speed up the pace of transformation.

The edge and MANO projects were high profile at last week’s Open Networking Summit (ONS) in San Jose, California. But the open source community was not content with stealing ETSI’s thunder in those areas: the Summit also saw a proposal to address one of the standards body’s core recent contributions to the world, NFV (Network Functions Virtualization). NFV is too cumbersome and progress has stalled, argued conference participants, as they welcomed a new initiative from the University of California at Berkeley, called ‘Lean NFV’, designed to make the foundations of virtualized networks simpler and quicker to deploy.

Most of the LF-hosted initiatives are still in their early stages, but 2020 is likely to be the year when we see the first at-scale deployments of ONAP, Akraino and other platforms such as ORAN (another AT&T-initiated effort, focused on the disaggregated RAN). In its impact on the supply chain, the cost of running networks and the services they can support, that may well prove to be a milestone more significant than the launch of 5G.

Lean NFV aims to rescue telco virtualization from a “sea of complexity”

When ETSI set up its NFV (Network Functions Virtualization) group in 2012, its work proved a genuine catalyst for the development of virtualized architectures that were optimized for telecoms networks, rather than just reworks of platforms created for data centers. But now progress has largely stalled, which a new group, supporting a streamlined, open approach to the technology, blames on NFV “drowning in a sea of complexity”.

For a while, progress on NFV was surprisingly rapid for an initiative driven by a traditional standards body, but in the past couple of years, the NFV bubble has burst. At-scale commercial deployments have been few and far between as the downside of setting the early pace became clear – NFV’s monolithic approach, based on virtual machines (VMs), had been overtaken by cloud-native approaches, pioneered in the webscale world.

There is now considerable momentum behind a shift to cloud-native approaches, which support virtual network functions (VNFs) that are developed specifically for the cloud, rather than converted from physical functions. They also allow for fully disaggregated systems, in which components can be containerized, and continually combined and recombined according to the task at hand.

Last autumn, Vodafone’s Fran Heeran (now returning to Nokia) said that operators should move from NFV to cloud-native to gain a wider range of benefits and far greater agility. Although there are major NFV projects under way, notably Telefónica’s Unica, many operators are holding fire until cloud-native tools and platforms are more mature and better understood in the carrier environment. This is resulting in a lack of alignment between the progress of 5G – whose economic promises rest, in part, on virtualization from end to end – and the adoption of virtualized RAN and cloud-native core. The first wave of 5G deployments still centers on physical RAN baseband and on the LTE core, which is sometimes still physical too, or virtualized in the old-fashioned way.

There is now a sense of urgency in the NFV community – the work to bring the platform up to date with the cloud-native era must be done more quickly, or the specifications risk being superseded, and the effort invested in them by operators and vendors wasted.

Yet the original ETSI group has responded by saying that, while the changes have indeed taken longer than initially thought, this was only to be expected given the scope and complexity of NFV. It estimated it would take a decade for the platform to be fully developed, which would give an end date of 2022 – too late for companies which see that as the date for a second 5G push, one which will rely on mature virtualized systems being available.

Here is a prime example of a conventional standards body losing the initiative to a more agile, open source approach. A white paper heralds the arrival of a new group pushing ‘Lean NFV’, a framework which promises easier integration and more rapid deployment of NFV in a cloud-native world – and far sooner than ETSI would deliver equivalent changes.

The aim is to kickstart NFV again, and rescue it from the stasis that has set in as its community grapples with cloud-native, containers, open source MANO (management and orchestration) and full disaggregation (the latter essential for the closely related goal of ending vendor lock-in).

Lean NFV made its debut at the Open Networking Summit in San Jose, California last week, amid a major love-in for open source activities in the telecoms world. Its initiators, at the University of California at Berkeley, are keen to stress that they are not starting from scratch, but want to restructure NFV so that its elements can be more easily deployed and integrated. And their efforts are supported by individuals from a variety of companies including Comcast, Intel and VMware.

The new Lean NFV website states that “the community has waited patiently for NFV solutions to mature. However, the foundational NFV white paper is now over six years old, yet the promise of NFV remains largely unfulfilled, so it is time for a frank reassessment of our past efforts.”

The white paper argues that NFV’s main problem lies in the difficulty of orchestrating hundreds of components. There are two main MANO approaches for NFV – ETSI’s Open Source MANO (OSM) and the Linux Foundation’s ONAP (Open Network Automation Platform – see article below). But these have introduced new complexity of their own – when AT&T put its ECOMP MANO technology into open source, resulting in ONAP, it contributed 8m lines of code.

Even below those MANO layers, Lean NFV’s proponents say the current NFV architecture involves too much tight coupling of elements, which leads to rigidity and makes automation difficult.

The Lean NFV approach is to identify four critical components in the platform and to focus exclusively on making these simple to deploy. All other aspects of NFV designs would then be left open for innovation and individual design choices.

Three of the components are part of the existing NFV architecture. They are:

  • The NFV manager, which handles common lifecycle management tasks for individual VNFs and for end-to-end NFV service chains.
  • The computational infrastructure, which includes the compute resources (bare metal or virtualized) and the connectivity between them (a physical or virtual fabric). The former is managed by a compute controller such as OpenStack, and the latter by an SDN controller.
  • VNFs – both data plane and control plane components. The VNFs can also have an element management system (EMS) to handle VNF-specific configurations.

Lean NFV also wants to add a fourth core component, which would help to reduce complexity. This would be a key value (KV) store, to function as a universal point of integration.

“We believe that, rather than standardizing on all-encompassing architectures, or adopting large and complicated code bases, the NFV movement should focus exclusively on simplifying these three points of integration, leaving all other aspects of NFV designs open for innovation,” says the white paper. “To this end, we advocate adding a fourth element to the NFV solution, a key value (KV) store, that serves as a universal point of integration.”
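
To make the pattern concrete, here is a minimal sketch of the KV store as a universal point of integration, assuming an in-memory stand-in for what would, in practice, be a distributed store such as etcd. The key layout and component stubs are our own invention for illustration – the Lean NFV white paper does not prescribe a schema.

```python
# Illustrative sketch of Lean NFV's key value (KV) store as the universal
# point of integration. Key names and component stubs are hypothetical;
# a real deployment would use a distributed store (e.g. etcd) rather than
# this in-memory stand-in.
from collections import defaultdict
from typing import Callable

class KVStore:
    """Minimal KV store with watch callbacks, standing in for etcd."""
    def __init__(self):
        self._data = {}
        self._watchers = defaultdict(list)

    def put(self, key: str, value):
        self._data[key] = value
        for cb in self._watchers[key]:
            cb(key, value)                # notify components watching this key

    def get(self, key: str, default=None):
        return self._data.get(key, default)

    def watch(self, key: str, callback: Callable):
        self._watchers[key].append(callback)

kv = KVStore()

# The NFV manager records desired state in the store; it never calls the
# infrastructure directly, so no pairwise manager/controller API exists.
def nfv_manager_request_vnf(name: str, replicas: int):
    kv.put(f"/vnf/{name}/desired", {"replicas": replicas})

# A compute-controller shim (OpenStack, Kubernetes, ...) watches the same
# keys and reports actual state back through the store.
def compute_controller(key: str, desired):
    vnf = key.split("/")[2]
    print(f"compute: launching {desired['replicas']} instance(s) of {vnf}")
    kv.put(f"/vnf/{vnf}/actual", {"replicas": desired["replicas"]})

kv.watch("/vnf/firewall/desired", compute_controller)
nfv_manager_request_vnf("firewall", 2)
print(kv.get("/vnf/firewall/actual"))     # -> {'replicas': 2}
```

The point of the sketch is what is absent: the manager and the controller never call each other, they only read and write shared keys, so either side can be replaced without touching the other.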

Scott Shenker, professor of computer science at University of California at Berkeley, and one of the founders of Lean NFV, told FierceTelecom: “So, why has NFV been so hard for the community? We realized that the problem was that they were focusing on the components, which weren’t all that difficult, but it connected them in very complicated ways.”

He said there were two crucial mistakes made by NFV. “One is that they embedded the NFV logic into the infrastructure manager. The second is they defined all these pairwise APIs between the components. If you want to introduce a new feature, you have to go and muck around with those pairwise APIs, and that makes it hard to deploy because you have to change the infrastructure. It makes it very hard to onboard VNFs because how do you build a VNF that can integrate with this? And, they made it almost impossible to innovate because these components were now very tightly tied together. It was very hard to pull one out and stick something else in.”

Shenker, who was a co-founder of Nicira before it was acquired by VMware, also highlighted the importance of the KV store – developed with UCB colleague Sylvia Ratnasamy – for integration. He said there are three steps that make integration of components simpler. “One is use universal integration and then compose a key value store, so then the way you communicate is through talking to a common key value store. Two, you don’t start by requiring the infrastructure to know anything about NFV, so you don’t have to change that. Third, is having the wisdom to know you can stop.”

Enlarging on this during a presentation in San Jose, Ratnasamy said: “It’s lean in the sense that the only thing it specifies is where you go to the discovery stage. Everything else in the system is open to integration.” The NFV manager handles launching, configuration, chaining, monitoring, scaling and healing. Each of those capabilities is broken into components, or microcontrollers, with each microcontroller integrated with the KVS. Each microcontroller could come from a different vendor, according to Ratnasamy. “Key value and integration allows vendors to evolve in a way that’s incremental and independent of each other,” she said.
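
A rough sketch of that microcontroller decomposition, with invented key names and thresholds, shows why the decoupling matters: each capability reads and writes only the shared store, so a monitoring microcontroller from one vendor and a scaling microcontroller from another never integrate with each other directly.

```python
# Illustrative microcontroller pattern: each lifecycle capability (monitor,
# scale, heal, ...) is an independent loop coupled only to the KV store.
# The key names and thresholds below are invented for the example.

store = {                                   # toy stand-in for the shared KV store
    "/vnf/firewall/metrics/cpu": 0.92,      # written by a monitoring microcontroller
    "/vnf/firewall/desired/replicas": 2,    # read by the infrastructure layer
}

def scaling_microcontroller(store: dict) -> None:
    """One reconcile pass: read metrics, adjust the desired replica count."""
    cpu = store["/vnf/firewall/metrics/cpu"]
    replicas = store["/vnf/firewall/desired/replicas"]
    if cpu > 0.8:                           # scale out when running hot
        store["/vnf/firewall/desired/replicas"] = replicas + 1
    elif cpu < 0.2 and replicas > 1:        # scale in when idle
        store["/vnf/firewall/desired/replicas"] = replicas - 1

scaling_microcontroller(store)
print(store["/vnf/firewall/desired/replicas"])   # -> 3
```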

This would certainly support a key objective of many operators: to introduce multivendor VNFs to their virtualized networks. Lean NFV is meant to be an open, streamlined and extensible architecture that can support a multivendor ecosystem for NFV.

To date, the few MNOs that have succeeded in multivendor NFV – NTT Docomo and SK Telecom, with their multivendor virtualized packet cores, for instance – have done so with considerable effort and investment, either with inhouse teams or systems integrators. Though the Lean NFV approach appears to try to reduce this complexity, it is still unclear how easy and automated it can become. With most of the innovation now being left to suppliers or inhouse teams, there may still be plentiful work for the integrators which have their beady eyes on the task of bringing virtualized, cloud-native platforms up to commercial grade for telcos. Companies as diverse as Amdocs (key deployer of ONAP), Radisys (a prime mover in the CORD open platform) and Tech Mahindra (an integrator for the new Rakuten cloud-native network in Japan) are developing NFV integration services, as are the major OEMs.

However, Lean NFV shows how this effort might be streamlined in future. “The main technical points are simple,” Shenker said. “Use the key value store as a universal point of integration and remove a need for specialized VIMs. That’s it.”

Constantine Polychronopoulos, CTO of VMware’s NFV/telco unit, supported the new approach at the ONS event. He said it would be particularly valuable for making it more practical to deploy edge computing.

Other signatories to the Lean NFV white paper include Ravi Srivatsav, formerly with NTT; James Feger, currently with F5 Networks and formerly with CenturyLink; Nagesh Nandiraju of Comcast; Krish Prabhu, formerly with AT&T; Uri Elzur of Intel; Mirko Voltolini of Colt Technology Services; and Christian Martin of Arista.

The next stage will be to run lab tests, proofs-of-concept and live trials for Lean NFV, which will take time, of course, though its supporters say it will not need as long as the three years envisaged by ETSI. It is worth noting, however, that the UCB team has been working on the integration technology for six years, almost as long as NFV has been in existence – none of this is quick or simple.

“Nobody pretends we’ve solved all the problems,” Polychronopoulos said. “What we have now is a concept” – a starting point – and the aim is to build a community around Lean NFV to accelerate the process of bringing it all to commercial reality.

 

LF Edge announces new members and more Akraino blueprints

In recent years, the Linux Foundation has started to group and coordinate its projects in categories, to avoid duplication of effort, accelerate progress and encourage maximum participation. The first of these umbrella groups was the Cloud Native Computing Foundation, which is now three years old. A year ago, it was followed by the LF Networking Fund, and later by the Deep Learning Foundation and LF Edge, the last of these established just two months ago.

Each umbrella provides a common administrative structure to help coordinate and rationalize projects in a similar area (though it is always voluntary for any project to join one of the groups). LF Edge, for instance, is described as “an umbrella organization to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system”.

At last week’s Open Networking Summit (ONS), the Linux Foundation cast the spotlight on the edge and networking groups. The former announced four new members, including Marvell, one of the chip providers which is extending its reach to create an end-to-end platform for the telecoms network. In that context, it will be important for the company to push its ARM-based processors into the emerging edge market as quickly as possible, to fend off Intel.

The other new members are Alef Mobitech, HarmonyCloud and Section. They join over 70 other organizations which signed up in February, when LF Edge was formed to bring together three initially standalone projects – the AT&T-initiated Akraino Edge Stack, EdgeX Foundry (originated by Dell) and the Open Glossary of Edge Computing – plus two new ones, Home Edge and Project EVE.

The ONS was the first opportunity for members to join face-to-face technical steering committee and board meetings, with the aim of aligning mission and project goals, setting up working groups and identifying the most immediate opportunities.

Akraino Edge Stack unveiled some new blueprints, bringing its total to eight, with another 19 in development. These blueprints will provide stacks for various use cases, each a subset of the core unified stack at the heart of Akraino and LF Edge. Each will include infrastructure specifications, middleware, APIs and software development kits, and will cover not just the telco edge, but also that of the cloud and the private enterprise.

So the blueprints split into two categories – those which sit mainly on the operator’s own edge, focused on base stations and central offices; and those which have lower latency or higher localization requirements, and so rely on customer premises locations.

The first blueprint, released earlier this year, provided the code for a full stack for a central office-based, telco-hosted edge node based on OpenStack. It was followed by a lighter stack for more remote telco sites such as base stations. Among the new bunch, others are starting to address non-telco edges, some with disaggregated hardware or very lightweight stacks for IoT applications.
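
For a sense of what such a blueprint captures, the hypothetical structure below mirrors those published descriptions – a use case bound to an infrastructure specification, middleware and constraints. It is not Akraino’s actual blueprint schema; every field name here is invented for illustration.

```python
# Hypothetical shape of an edge blueprint, loosely modeled on Akraino's
# published descriptions (use case + infrastructure + stack). This is NOT
# the project's real schema; all field names are illustrative only.
central_office_blueprint = {
    "name": "central-office-edge",
    "use_case": "telco-hosted edge node in a central office",
    "infrastructure": {"hardware": "commodity x86 servers", "vim": "OpenStack"},
    "middleware": ["local image registry", "telemetry agent"],
    "apis": ["lifecycle", "monitoring"],
    "constraints": {"footprint": "full stack", "max_latency_ms": 20},
}

remote_site_blueprint = {
    "name": "remote-base-station-edge",
    "use_case": "lightweight stack for remote telco sites",
    "infrastructure": {"hardware": "single ruggedized server", "vim": "containerized"},
    "middleware": ["telemetry agent"],
    "apis": ["lifecycle"],
    "constraints": {"footprint": "lightweight", "max_latency_ms": 5},
}

for bp in (central_office_blueprint, remote_site_blueprint):
    print(f"{bp['name']}: {bp['constraints']}")
```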

The first release of Akraino is scheduled for later this quarter, and will include several validated blueprints.

The LF Edge projects, and especially Akraino, have their roots firmly in the telco industry, but they have recognized the danger of that more clearly than ETSI MEC did. The original MEC architecture was tightly wedded to telco sites and assumed these would be sufficient for most edge requirements, thus giving operators a golden opportunity to monetize their locations. But of course, the organizations which may end up dominating the edge computing value chain could well be non-telcos – cloud giants or industrial service providers, in particular.

The LF Edge view is that each of these has certain advantages in enabling edge-centric services – telcos are best placed to deliver highly mobile applications, and may argue that their control of 5G will also put them in pole position for many low latency services. However, the best location for a given app may often be within a company data center itself, so it is important that there is a common framework which can span edges that are owned and deployed by a range of organizations, including webscalers, operators and enterprises themselves.

LF Edge aims to tie these edges together to create greater harmonization across industries. There are hundreds of IoT services which could be enhanced by moving data and analytics closer to the device, and connecting them by high quality mobile links. But if each one has its own software stack and ecosystem, there will be very little scale and little attraction for developers, hardware makers or service providers. In particular, it is urgent to have a common template for identifying and guarding against security and privacy risks in the IoT.

“The market opportunity for LF Edge spans industrial, enterprise and consumer use cases in complex environments that cut across multiple edges and domains,” said Arpit Joshipura, general manager of networking, automation, edge and IoT at the Linux Foundation.

ETSI MEC has been forced to adapt its positioning, pushing its APIs rather than the full architecture, and creating more links to cloud and enterprise worlds. But it has not pivoted as much as it needs to do, unlike Akraino, which is increasingly working in conjunction with the OpenFog Consortium, whose framework has been adopted as the basis of IEEE edge compute standards. That will facilitate the task of developing more blueprints for non-telco verticals like automotive, keeping them all interoperable with Akraino so multiple industries can cooperate using common interfaces – and Akraino can flourish at the edge even where telcos do not.

The most active of the cloud giants in this area has been Amazon AWS, with its Greengrass IoT-focused edge developer platform and its Snowball Edge box. Now Microsoft has come out with Azure Data Box Edge, which will be a head-to-head competitor with Snowball Edge, providing on-board storage and processing, and allowing users to transfer data between the edge and the public cloud. Like other Azure services, Microsoft’s box will be offered on a pay-as-you-go model, and as well as compute, storage and connectivity to the Azure cloud, it features an Intel Arria FPGA (field programmable gate array) to support machine learning.

Dean Paron, general manager of Azure Data Box, wrote in a blog post: “Data Box Edge can be racked alongside your existing enterprise hardware or live in non-traditional environments from factory floors to retail aisles.” One early adopter, geographical information systems supplier Esri, will use the device to help emergency responders in disconnected environments such as hurricane sites, allowing teams to “collect imagery from the air or ground and turn it into actionable information that provides updated maps … to coordinate response efforts even when completely disconnected from the command center,” Paron explained.

AT&T tends to steal the edge thunder because of Akraino, but Verizon is also working hard on this technology. The carrier recently tested edge systems on a live 5G network in Houston, Texas, a trial which cut latency in half for user sessions, according to Adam Koeppe, SVP of network planning. In 2019, mobile 5G and edge computing efforts will run in parallel, with edge infrastructure being installed at locations, such as central offices, which Verizon already owns.

The new LF Edge projects:

On the smart home front, the Home Edge Project, unveiled in February, is based on seed code contributed by Samsung Electronics, which clearly has a keen interest in boosting its position in this value chain. It is a major supplier of connected home appliances and media equipment, as well as smartphones, but has been outpaced by Amazon and Apple when it comes to intelligent home applications, especially those controlled by smart speakers. By defining the rules for a new AI-assisted home edge, Samsung may hope to improve its influence – its edge work has been geared to low latency applications like VR gaming, to high security in the smart home, and to responsive AI apps from home robots to intelligent content management.

The other new LF Edge initiative, also set up in February, is called Project EVE (Edge Virtualization Engine) and its starting point is code contributed by cloud developer Zededa. The aim is to create an open and unified edge architecture that can support all kinds of hardware, connectivity and software, whether on-premise or in the cloud. This agnostic approach, say EVE members, effectively removes the rigid perimeter and provides a more flexible edge with multiple security layers.

AT&T consolidates its LF power with new ORAN initiatives

The most powerful operator in the emerging open source telecoms world is AT&T, which has seeded a number of Linux Foundation projects in its determination to break OEM lock-ins and create an open, software-driven network in its own image.

Several of the projects it kicked off, including ONAP (Open Network Automation Platform), the ORAN Alliance and the Akraino Edge Stack, were highly visible at last week’s Open Networking Summit (ONS), and there are others too, such as the Acumos AI initiative.

Indeed, the ORAN (Open RAN) Alliance’s move to the Linux Foundation was made official at the event. The group was formed from the merger of the China-oriented C-RAN Alliance and the xRAN Forum, which was seeded with AT&T code and has published specifications for open interfaces between basebands and radio heads in a disaggregated, virtualized RAN.

In San Jose, the ORAN Alliance and Linux Foundation announced the creation of the ORAN Software Community. This will provide an application layer for RANs, aligned with the ORAN architecture, and will encourage the development of an open source infrastructure platform for 5G, wrote Arpit Joshipura, general manager of networking, automation, edge and IoT at the Foundation, in a blog post.

AT&T also contributed the initial seed code for a 5G RAN Intelligent Controller, which it co-developed with Nokia, to the ORAN Alliance. “This is really the first step here in terms of opening up the RAN,” said Andre Fuetsch, CTO and president of AT&T Labs. “We want to be able to expose more of the controller so we can drive more visibility and more control. And when you have more visibility and control, then you have programmability,” he told SDxCentral.

He was very clear about the end goal for all this sharing of code, saying: “It is going to break the vendor and technology lock-in … and open up a whole new level of interoperability. … Traditional areas of the network that have been controlled by a few are now being opened up to many. I think it’s fair to say that all the ORAN operators are highly motivated to drive more innovation and interoperability, and thus competition into their radio access networks.”

Although AT&T’s first 5G deployments do not include ORAN technology, they are making use of virtualized cores and white box switches. However, the company has said the vRAN is the most challenging aspect of a software-based network and will be implemented towards the end of its multiyear Domain 2.0 program to push software-defined networking (SDN) throughout its platforms and its supply chain.

The operator has said it aims to virtualize 75% of its core network functions by 2020, placing them under SDN control, and the RAN will follow after that. It already has white box switches carrying live 5G traffic, and at the ONS event, it demonstrated a white box cell site gateway router that promises peak speeds up to 100Gbps to support 5G backhaul.

“These white boxes and open source routing software that we’re deploying, the cell site router initiative that we’re putting in is going to 65,000 cell sites over the coming years,” Fuetsch said. The white box routers have also been installed in Toronto and London for business customers with plans to expand to 76 countries by the end of this year, according to AT&T.

The white box routers run AT&T’s own dNOS network operating system and software framework, which the telco open sourced in March under the auspices of the Linux Foundation and its Disaggregated Network Operating System (DANOS) project.

In October, it released its white box router specifications to the Facebook-led Open Compute Project, which aims to drive commoditized hardware into cloud infrastructure platforms. If the design is taken up by other OCP members, the scale of the ecosystem, and the consequent price competition, could be very significant. AT&T’s reference design can be used as a guideline by any hardware vendor, though it has to be based on a specific chip (the Broadcom Qumran-AX switch-chip). Submitting it to OCP should encourage more suppliers to rise to that challenge (and other chips might follow in future).

“With DANOS coming shortly you will have the puzzle pieces that everyone can take and employ, whether you are a large operator or small operator,” Fuetsch told an event in December. “You can take this and make it happen. We think this is significant and a really critical component here.”

He added: “Next year we’ll begin to deploy more of these routers. We expect to have several thousand cell sites deployed with these boxes and this is what is powering our 5G network that we’re building right now. Our intent is to make this ubiquitous across our network over the next few years.”

White boxes are also going into other parts of the AT&T network, including top-of-rack switches. “We’re now using those in the distributed core that we’re now building for our network cloud, which will basically become 5G as it gets rolled out from the core,” Fuetsch said. “As you can see, all of these implementations are beginning to take root in all of our locations big and small.”

The CTO also played down one of the common criticisms of open source software in telecoms networks – the potential security vulnerabilities. “Open source is inherently more secure because you have more eyeballs on it,” he said.

Another open group developing a cell site gateway router is the Telecom Infra Project (TIP), initiated by Facebook. Its agenda mirrors many of the Linux Foundation’s telecoms efforts, though it does not have a pure open source model – it will also support licensing schemes. At its own Summit in October, it announced its Edge Application Developer Project, and a group called CANDI, which will develop the specs for disaggregated cell site gateways, an essential component of future 5G deployments.

The edge project will be led by Intel and Deutsche Telekom – the latter is rapidly assuming the same kind of dominance of the TIP working groups that AT&T enjoys in the LFN. The new group will develop open source APIs to ease the task of creating software to run on edge compute assets that are located within the mobile operator’s network, and will draw on the work of DT spin-off MobiledgeX.

Meanwhile, CANDI (Converged Architecture for Network Disaggregation and Integration) will be led by another of TIP’s most active operators, Telefónica, plus Japan’s NTT. This is a sub-group within the existing, and highly active, Open Optical Packet Transport (OOPT) initiative, which was established near the start of the TIP adventure. The founding product of the OOPT was the Facebook-designed Voyager, a DWDM optical transponder whose reference design has been adopted by several companies such as ADVA.

The new sub-group will focus on building an end-to-end reference design for a converged IP/optical network architecture that enables disaggregation, as well as evaluating the best integration points for the disaggregated components. Its starting point will be to identify real world, end-to-end use cases and deliver solutions using existing or new open software and technologies.

Open groups grow up, wooing telcos with testing/compliance programs

Another of the Linux Foundation’s umbrella groups, LF Networking (LFN), celebrated its first anniversary at the Open Networking Summit (ONS). Among the milestones it has passed during that time, it said, were deepened collaboration with conventional standards bodies and adjacent open communities, and the creation of a compliance program.

Formed in early 2018, LFN is focused on integration, efficiencies and member engagement across six projects:

  • FD.io (Fast Data – input/output)
  • ONAP (Open Network Automation Platform)
  • OpenDaylight
  • OPNFV (Open Platform for Network Functions Virtualization)
  • PNDA (Platform for Network Data Analytics)
  • SNAS (Streaming Network Analytics System)

“We are thrilled to have recently celebrated our first year as an umbrella organization, bringing continued growth across the ecosystem supporting the end-to-end open source networking stack,” said Arpit Joshipura, general manager of networking, automation, edge and IoT at the Linux Foundation.

He announced two developments in the area of testing and compliance – still fairly rare activities for open source groups, which is one reason why they are sometimes regarded with mistrust by some telcos. First, LFN has created a compliance and testing tool that enables operators to automate validation against requirements developed within ONAP.

And second, the existing OPNFV Verification Program (OVP) has expanded to include VNF (virtual network function) compliance testing based on ONAP requirements. The program is now interoperable with ONAP onboarding requirements for Heat and TOSCA packages, and combines testing for multiple parts of the NFV stack.
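
For a flavor of what package-level compliance checking involves, the sketch below verifies only the basic structure the TOSCA specification requires of a CSAR archive – a zip whose TOSCA-Metadata/TOSCA.meta file must name an entry definitions template. OVP’s real test suites go far beyond this; treat it as a simplified illustration of one early step.

```python
# Simplified structural check of a TOSCA CSAR VNF package. Real OVP/ONAP
# compliance testing covers far more (manifests, security, data model
# conformance); this only verifies CSAR basics from the TOSCA spec.
import zipfile

def check_csar(path: str) -> list[str]:
    """Return a list of structural problems found (an empty list means pass)."""
    problems = []
    with zipfile.ZipFile(path) as csar:
        names = set(csar.namelist())
        meta_path = "TOSCA-Metadata/TOSCA.meta"
        if meta_path not in names:
            return [f"missing {meta_path}"]
        meta = csar.read(meta_path).decode("utf-8")
        # TOSCA.meta is a block of "Key: value" lines per the TOSCA spec.
        fields = {
            k.strip(): v.strip()
            for k, v in (line.split(":", 1) for line in meta.splitlines() if ":" in line)
        }
        for required in ("TOSCA-Meta-File-Version", "CSAR-Version", "Entry-Definitions"):
            if required not in fields:
                problems.append(f"TOSCA.meta missing {required}")
        entry = fields.get("Entry-Definitions")
        if entry and entry not in names:
            problems.append(f"Entry-Definitions points at a missing file: {entry}")
    return problems

# Usage: check_csar("my_vnf.csar") -> [] when the basic package layout is sound.
```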

OVP also introduced its Verified Labs program, announcing the University of New Hampshire Interoperability Lab (UNH-IOL) as the first member.

The changes show close cooperation between ONAP and OPNFV, an example of what the Linux Foundation has been trying to achieve with its umbrella groups: a platform for collaboration and results sharing between projects.

Heather Kirksey, director of OPNFV at LFN, told SDxCentral: “Things the telecom industry is really used to, like performance, verification, and validation – we’re working to figure out how to marry those with open source fundamentally. If the idea is that open source enables you to build a best-of-breed stack and have more flexibility in your vendor model, as opposed to a monolithic system from one vendor,” then interoperability is critical, she added.

The goal of OVP is to help drive adoption of unified deployment models and improve interoperability between various software and hardware elements. Kirksey believes 5G will be a catalyst, commenting: “5G is one of the first big things we’re seeing deployed with this technology.” In particular, she thinks 5G will push more operators to get comfortable with the DevOps and CI/CD (continuous integration/continuous delivery) approach that lies at the heart of OVP, and more broadly of the transformation of the network into a cloud-based, software-defined platform.

“That really is the more difficult challenge, to have your [network operations center] people get comfortable with the idea that you’re going to do small updates to the network infrastructure all the time, the way hyperscale companies treat all their infrastructure,” Kirksey said.

Some operators are well down the path already. Orange and China Mobile have both used OPNFV’s continuous integration (CI) pipeline and testing projects to create an NFV onboarding framework. Orange uses OPNFV for NFVi and VIM validation, VNF onboarding and validation, and network service onboarding. China Mobile uses OPNFV for its Telecom Integrated Cloud (TIC) to continuously integrate, onboard and test NFVi, VIM and VNFs.
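
Both onboarding frameworks amount, in essence, to a gated pipeline: each artifact must pass its validation stage before the next stage is attempted. The miniature sketch below shows that control flow; the stage names and no-op checks are our own illustration, not OPNFV’s actual project structure.

```python
# Miniature gated CI pipeline in the spirit of the OPNFV-based onboarding
# frameworks described above. Stage names and checks are illustrative only;
# real pipelines plug in suites such as hardware conformance and functional
# tests at each gate.
from typing import Callable

def validate_nfvi() -> bool:
    return True     # e.g. hardware/OS conformance checks would run here

def validate_vim() -> bool:
    return True     # e.g. VIM API health and quota checks

def onboard_vnf() -> bool:
    return True     # e.g. package checks like the CSAR sketch earlier

def test_network_service() -> bool:
    return True     # e.g. end-to-end functional and performance tests

PIPELINE: list[tuple[str, Callable[[], bool]]] = [
    ("validate NFVI", validate_nfvi),
    ("validate VIM", validate_vim),
    ("onboard VNF", onboard_vnf),
    ("test network service", test_network_service),
]

def run_pipeline() -> bool:
    """Run stages in order; stop at the first failure so nothing unvalidated ships."""
    for name, stage in PIPELINE:
        if not stage():
            print(f"FAIL at stage: {name} (later stages not attempted)")
            return False
        print(f"PASS: {name}")
    return True

run_pipeline()
```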

As open groups like OPNFV start to behave more like standards bodies in certain ways, such as establishing formal testing programs, there may be more opportunities to work with the older entities.

For instance, OPNFV and ETSI now co-locate their NFV testing events, with the aim of increasing collaboration between the two processes. Last year, at an event held at ETSI’s base in southern France, there was work on the interoperability of the OPNFV platform in deployment, network integration and VNF applications, to support ETSI use cases. A virtual central office (VCO) demonstration was the centerpiece, covering residential services and a virtualized mobile network use case, including virtualized RAN and packet core for LTE.

Last summer, Luis Jorge Romero, director general of ETSI, acknowledged that the body was having many discussions about how to work with, or learn from open source, both internally and with open source communities. “We need to entertain this discussion because I agree with you, this has not happened,” he said, in answer to an audience question. “We’re trying to improve this communication, this relationship.”

ETSI’s flagship supporter, Telefónica, has radical NFV plans for 2019

Of course, we must not underestimate the contribution ETSI has made to bringing virtualization to the telecoms network, nor its remaining influence. In the case of its Open Source MANO (OSM) platform, it has a heavyweight supporter in the shape of Telefónica, whose Unica is one of the world’s most advanced NFV programs, and one of the closest to commercial reality at scale.

Last year, Telefónica was reported to be close to joining the Linux Foundation-hosted ONAP (Open Network Automation Platform) project, which would have created a valuable bridge between that approach to NFV MANO and the OSM group’s. That step has not yet been taken, however, and the Spanish telco remains fully committed to OSM.

At the recent Zero Touch Automation Congress in Madrid, its global CTIO, Enrique Blanco, offered some updates about OSM’s progress, stressing that virtualization is not, for Telefónica, just an efficiency drive, but a means of transforming customer experience. He told the conference: “We are not discussing in Telefónica how efficient we can get, and we are not discussing how we can reduce our capabilities in terms of capex or opex. We are discussing how we can offer our customers a real digital experience. And this is key for us.”

Unica was initiated in 2014, about 18 months after ETSI created the NFV project. Over the past five years, Telefónica has been steadily virtualizing a range of functions, starting with the packet core and CPE, and eventually moving towards virtualized RAN. Few are yet in at-scale commercial use, but many of the key enablers are coming into place.

As a step on the way to a disaggregated, automated 5G RAN, for instance, the operator is investing heavily in self-optimizing network (SON) technology. SON, rather like NFV itself, is a technology which once seemed to offer huge rewards, but has not been deployed to nearly the extent, or with the transformative effect, once expected. However, a new wave of SON technologies is evolving to meet 5G’s more demanding automation requirements. Blanco said SON is being used to automate new domains created by network slicing, as well as to automate emerging 5G backhaul and Massive MIMO systems. The telco is working with SON provider Cellwize and with Nokia’s EdenNet, the latter to add machine learning to support future automation and optimization.

This is just one aspect of the automation program, along with implementing an architecture based on OSM and on another set of ETSI specifications, the zero touch service management (ZSM) model. Blanco said Telefónica is also drawing on the work of the TMForum on zero touch, as well as supporting open APIs. It is working with PI Works to plan future integration of OSM, OpenRAN and the cloud-native 5G core, with a set of AI-powered automated use cases.

Telefónica is using OSM Release Five to automate network-as-a-service processes that span different domains, network functions and NFV infrastructures. The big breakthrough in Release Five is end-to-end support, and in tandem with that, Telefónica will evolve its Fusion SDN (software-defined networking) project to iFusion, which will replace domain-specific SDN controllers with a single end-to-end open source SDN controller.
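
The end-to-end intent behind such a service is ultimately expressed declaratively and handed to the orchestrator. The toy descriptor below conveys the idea of a service spanning two infrastructure domains; it is a deliberate simplification, not OSM’s actual NSD information model, and all names are invented.

```python
# Toy multi-domain network service descriptor, illustrating the kind of
# end-to-end intent OSM Release Five can now orchestrate. This is NOT the
# real OSM NSD model; fields and names are simplified for illustration.
network_service = {
    "name": "enterprise-vpn",
    "vnfs": [
        {"name": "vfirewall", "vendor": "vendor-a", "domain": "edge-pop"},
        {"name": "vrouter",   "vendor": "vendor-b", "domain": "core-dc"},
    ],
    "virtual_links": [
        {"from": "vfirewall", "to": "vrouter", "bandwidth_mbps": 500},
    ],
}

def placement_plan(ns: dict) -> dict:
    """Group constituent VNFs by the infrastructure domain that must host them."""
    plan: dict[str, list[str]] = {}
    for vnf in ns["vnfs"]:
        plan.setdefault(vnf["domain"], []).append(vnf["name"])
    return plan

print(placement_plan(network_service))
# -> {'edge-pop': ['vfirewall'], 'core-dc': ['vrouter']}
```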

This year will be one of expanding Unica, Blanco explained – into more network domains and points of presence; to more vendors (a total of 18, up from 10 last year); and to more VNFs (58 are planned). Unica will be extended from its current home in data centers into central offices, and eventually base stations and the network edge. By 2021, Telefónica aims to reach what it calls the ‘Ultra Edge’, which will include smaller cell sites and enterprise premises.

Antonio Elizondo, head of network technology and innovation strategy at Telefónica, told TelecomTV: “Typically in our industry, we are used to very large and costly integrations; we need to avoid that.” While this seemed to echo the opinions of the new Lean NFV group (see lead article), the Spanish telco’s remedy is to adopt a common information model for OSM, which will help make products automated and ready to deploy from day one.

This will “provide the opportunity to every innovator in the industry to be a supplier for us”, and encourage more small vendors to be part of the Unica platform, Elizondo added.
