
Dilemmas mount up in the new MNO architectures

The architecture choice for many telcos is no longer ‘to virtualize or not’. The big decisions are which network elements to virtualize first, and when; and whether to convert physical systems to virtual, or start from scratch with a cloud-native strategy. And having decided that, what approach to take – one with its roots in the traditional enterprise market, like VMware; one coming from the telco world, probably under the auspices of ETSI; or a ‘pure’ open source option.

These three choices are starting to drive faultlines through the nascent telco virtualization and SDN (software-defined networking) market. At the edge of the network, which will become increasingly important as IoT devices proliferate, the three camps are increasingly visible. A new open source initiative, with Dell behind it, is a potential alternative to the work of ETSI MEC (Multi-Access Edge Computing) or Cisco’s IT-rooted OpenFog Consortium.

In recent weeks, the split over the best approach to management and orchestration (MANO) of virtual network functions has deepened. The ETSI approach, based around its Open Source MANO (OSM), has upped its game with Release Two. The main alternative, based around OpenStack and ‘pure’ open source, is the Open Network Automation Platform (ONAP), but some telcos remain cautious about how far full openness can support all their network requirements. That is leading some to turn to more tried-and-tested platforms from the data center, like VMware.

With three routes to MANO, telcos need to see some hybrid solutions emerging

As Wireless Watch has been tracking over the past couple of years, one of the deepest sources of conflict in telco virtualization is the approach to management and orchestration (MANO). Managing NFV infrastructure and coordinating all the virtualized network functions (VNFs) are complex tasks, and doing them efficiently is central to getting optimal results from turning the telco network over to software.
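The coordination task described above — tracking infrastructure resources while driving VNF lifecycles — can be sketched in miniature. Everything here is hypothetical and purely illustrative: the class and method names do not mirror any real OSM or ONAP API.

```python
from dataclasses import dataclass, field

@dataclass
class VNF:
    """A virtualized network function and the resources it consumes."""
    name: str
    vcpus: int
    state: str = "stopped"

@dataclass
class Orchestrator:
    """Toy MANO loop: admit VNFs only while NFV infrastructure capacity lasts."""
    capacity_vcpus: int
    running: list = field(default_factory=list)

    def instantiate(self, vnf: VNF) -> bool:
        used = sum(v.vcpus for v in self.running)
        if used + vnf.vcpus > self.capacity_vcpus:
            return False          # infrastructure exhausted: reject or scale out
        vnf.state = "running"
        self.running.append(vnf)
        return True

    def terminate(self, name: str) -> None:
        self.running = [v for v in self.running if v.name != name]

mano = Orchestrator(capacity_vcpus=8)
print(mano.instantiate(VNF("vFirewall", 4)))   # True: capacity available
print(mano.instantiate(VNF("vEPC", 6)))        # False: only 4 vCPUs remain
```

Even this toy version shows why the job is hard at scale: a real orchestrator must make these admission, scaling and healing decisions across thousands of VNFs, multiple data centers and several infrastructure managers at once.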

There are two main open source candidates to standardize the underlying processes: ETSI’s Open Source MANO (OSM), which has just unveiled its second release; and the most influential OpenStack-based platform, ONAP (Open Network Automation Platform). ONAP was formed in February from the merger of two Linux Foundation-hosted efforts which originated in operator inhouse developments – AT&T’s ECOMP and China Mobile’s OPEN-O.

But this is not just about the high profile split between an OpenStack approach (relatively quick and simple, but lacking some telco-grade functionality) and a more classical telecoms process from ETSI. It is also about the debate between open source and closed. The Mobile Network (TMN) recently pointed to the difficulties some operators have had in adopting a ‘pure’ open source MANO approach, especially as many of their personnel with data center or virtualization experience were used to working with enterprise suppliers like VMware. They found it culturally and technically difficult to switch to an open source, entirely inhouse project.

TMN was told that the upcoming departure of David Amzallag, the leader of Vodafone’s NFV/SDN-based Project Ocean, “was a direct result of this gap between vision and deliverability. Now we have made no attempt to confirm that so it must be treated as hearsay for now, although the fact the opinion is out there is evidence that there is a conflict in play at the moment.”

This is really another facet of the same division – between openness and telco-specific functionality; between quick and affordable, and fully optimized for the demands of a carrier network. The main attractions of the open source approach are the relatively low investment, the strong innovation base, and in many cases, easier deployability than ETSI approaches – though most carriers agree that this comes at the expense of some telco-specific functionality. A hybrid approach will appeal to many telcos in the second wave of SDN/NFV, but for now, many are looking to open source to give them a relatively simple springboard.

Margaret Chiosi, formerly at AT&T and now VP of open ecosystems at Huawei US, looked ahead to a hybrid future in a recent speech at the Open Networking Summit. “You have all these open source pieces. They are great initial pieces, but you can’t just clean it up and run it, because it’s not complete,” she said. She believes the situation will resolve itself, with vendors providing commercial products within open source frameworks and using open APIs so that initially closed platforms will “evolve from closed to partially open to completely open, if possible – and then operators can start picking best of breeds and mix and match, using common APIs.”

This ‘best of both’ dream will not just involve hybrids of ETSI and OpenStack – something which is being driven by some vendors and operators, and also by the CORD (Central Office Re-architected as a Datacenter) initiative. There must also be work to combine the best of both worlds across open and closed environments. For instance, VMware has started to include support for OpenStack and ETSI OSM within its vCloud NFV platform. And the latest release of ETSI OSM software (see below) includes better interoperability with VMware via connector enhancements to VMware vCloud Director.

On the ONAP side, the deployability and richness of the OpenStack-based platform is being increased through work with other industry groups in other parts of the stack. The OPNFV (Open Platform for NFV) initiative, for instance, aims to create an integrated stack that makes NFV easy to deploy and run. The latest release of its software, Danube, supports integration between the NFV Infrastructure/Virtual Infrastructure Manager (NFVi/VIM) and ONAP.

And CORD, with its mobile strand M-CORD, is also influential. Timon Sloane, VP of standards and membership at the Open Networking Foundation (ONF), which includes CORD in its projects, said that 70% of operators are now planning to support the solution. He said: “CORD is the total stack: OpenStack plus extensions to make it work in the operator environment; a full DevOps that everyone is after in the cloud world; networking is in there as a critical component, along with others; and service creation, which is where XOS fits in … it is what provides the microservice capabilities. And then a whole pile of infrastructure stuff. You can leverage all of that and get started right away.”

So MANO must be evolved and deployed within a broader environment, and that is reflected in the capabilities introduced with Release Two of ETSI OSM, announced last week. This claims to bring improvements in interoperability, performance, stability, security and resource footprint.

Among the new features are:
SDN assistance to manage and interconnect traffic-intensive VNFs with on-demand underlay networks.
Support for deployments in hybrid clouds through a new plug-in for Amazon Web Services.
Plug-in support for the ONOS SDN controller, which joins OpenDaylight and Floodlight on the list of supported controllers.
Dynamic network services to scale resources on demand.
Multiple installer options to ease OSM installation in different environments.
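The connector-based design implied by that feature list – one plug-in per cloud or SDN controller behind a common interface, so the orchestration core stays infrastructure-agnostic – can be sketched as follows. The class and function names are invented for illustration and do not reflect OSM’s actual plug-in API.

```python
from abc import ABC, abstractmethod

class VIMConnector(ABC):
    """Common interface that every infrastructure plug-in must implement."""
    @abstractmethod
    def deploy(self, vnf_name: str) -> str: ...

class OpenStackConnector(VIMConnector):
    def deploy(self, vnf_name: str) -> str:
        return f"nova boot {vnf_name}"          # placeholder for real API calls

class AWSConnector(VIMConnector):
    def deploy(self, vnf_name: str) -> str:
        return f"run-instances for {vnf_name}"  # placeholder for real API calls

# The orchestration core only ever sees the abstract interface, so adding
# a new VIM (as Release Two does with AWS) means adding one registry entry.
registry = {"openstack": OpenStackConnector(), "aws": AWSConnector()}

def deploy_to(vim: str, vnf_name: str) -> str:
    return registry[vim].deploy(vnf_name)

print(deploy_to("aws", "vRouter"))
```

The same registry pattern applies to the SDN controller plug-ins: ONOS, OpenDaylight and Floodlight each sit behind one shared interface.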

Further details are in a new white paper from the OSM Community.

“The SDN assistance in Release Two is a huge leap forward to enable full automation of key operators’ use cases with the most popular VIMs on the market”, said Francisco-Javier Ramón, chairman of the ETSI OSM group.

The group also announced the organizations which have signed up since Release One – ATOS, CableLabs, Comarch, DataArt Solutions, Dialogic, Keynetic, Netscout, Netrounds, PacketFront Software, Radcom, Spirent, TNO, Verizon, Wind River, Yotta Communications, CNIT, ZTE, Paderborn University and Seven Principles.

Halium and EdgeX: Mobile and IoT harmony, or more fragmentation?

Linux has revolutionized many aspects of IT by bringing open source right to the heart of the operating system, but its downside has always been fragmentation. The openness which has enabled it to eat away at Microsoft and Apple, and to enable many embedded platforms, has also made it hard to arrive at a unified applications base, especially in mobile.

But now there are two new efforts aimed at creating harmonized Linux platforms for mobile devices and for the Internet of Things. Both have a democratic aim in common – to allow developers, enterprises and users to choose from a range of operating systems and wireless protocols while still working within a unified environment.

But both face significant obstacles because they are challenging powerful entrenched interests. The first, Project Halium, will be hitting out at Google’s ongoing quest for homogeneous Android in mobile by allowing other Linux distributions to run on hardware built for Android.

The second, EdgeX Foundry, aims to enable interoperability at the edge of the network, but it will be treading on the toes of efforts by two industry groups with influential members, ETSI’s Multi-Access Edge Computing (see separate item for its latest developments), and the OpenFog Consortium. It is hosted by the Linux Foundation, and highlights the way in which that body, and the Linux community overall, is moving rapidly up the stack from operating systems to management, security and interoperability frameworks.

On the mobile front, a group of developers, under the name Project Halium, is aiming to breathe new life into Linux-based alternatives to Android, by creating a common base that will unite distributions like Ubuntu Touch, Sailfish, Plasma Mobile and others, and make it easier for them to be ported to hardware which was designed for Android.

The initiative has come out of Canonical, which controls Ubuntu – one of the most powerful Linux variants in the enterprise, but which has failed in repeated attempts to seize mobile share from Google. Project Halium’s first tool was released by a lead software engineer at Canonical. It allows Android apps to run on Linux desktops without using emulators.

Instead, Halium uses Hybris, a compatibility layer which enables Android driver support. The technology was originally developed by a Mer developer and has been adopted by all the Linux variants involved in the new project, but each has its own somewhat incompatible implementation of Hybris.

To end this fragmentation, Project Halium proposes a common base which incorporates the Linux kernel, the Android Hardware Abstraction Layer (HAL), and Hybris. This does not replace the device’s original Linux version, but allows all the versions to operate as a common platform.
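The role Hybris plays in that common base is essentially that of an adapter: it presents the interface a conventional Linux distribution expects, while delegating to Android’s vendor drivers underneath. The real Hybris is C code bridging two incompatible libc worlds; the Python sketch below, with entirely invented names, only shows the shape of the idea.

```python
class AndroidCameraHAL:
    """Stands in for a vendor binary driver exposing an Android-style interface."""
    def open_device(self, dev_id: int) -> str:
        return f"android-camera-{dev_id}"

class HybrisLikeShim:
    """Adapter: offers the API the Linux distribution expects,
    but fulfils calls through the Android HAL underneath."""
    def __init__(self, hal: AndroidCameraHAL):
        self._hal = hal

    def open_camera(self) -> str:
        # Translate the distro-side call into the Android-side one.
        return self._hal.open_device(0)

shim = HybrisLikeShim(AndroidCameraHAL())
print(shim.open_camera())   # the distro reaches the camera via Android drivers
```

If every distribution shares one such shim, a device ported once to the common base can boot Ubuntu Touch, Sailfish or Plasma Mobile without per-distro driver work – which is exactly the fragmentation Halium wants to end.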

“Project Halium also aims to standardize the middleware used to interact with the hardware of the device. By having these parts shared, we believe that it will reduce the fragmentation we have currently,” wrote the team.

The project is currently at the draft stage and the next step will be to work on a proof of concept, using the Google Nexus 5 and 5X as the reference devices.

The hurdle for Halium, as for individual Linux distributions outside the Android fold, is that it really needs the support of device makers to achieve scale outside a small section of users with technical knowhow. James Noori, community manager at Jolla, the company behind Sailfish OS, said: “We need to remember here one important thing, what works with the ODMs? It does not really matter what we think is the best if it doesn’t work with the ODMs.”

As quoted by ZDNet, he also conjured up memories of previous attempts to create a unified Linux-based mobile platform as a counterweight to Android. Nokia and Intel worked on MeeGo, but Nokia abandoned it in favor of Windows, while the open source Tizen project, despite Samsung backing, has made little commercial headway. Noori said: “Merging to the same code base things like kernel or drivers, or using same CAF tag, is basically going back to MeeGo times, and the issues that were already existing there that we have been working on.”

Google may not be losing much sleep over the smartphone OS race, but in the IoT, there is a far more open playing field. The immaturity of the space, and the diversity of applications and devices within it, means Android is not assured of the same dominance it had in smartphones. Google is developing other software frameworks, like Brillo and Weave, which are more specific to the IoT, and to very low power embedded devices which do not require a full OS. But it is up against a host of alternatives, some backed by major vendors, others coming from the specialized real time OS (RTOS) community, or from open source.

The latest attempt to reduce fragmentation comes from a Linux Foundation-hosted project called EdgeX Foundry. This is developing an open source software platform to support interoperability between devices, sensors, apps and services, regardless of their OS, at the edge of the network.

The presiding genius behind this is Dell, which is contributing its two-year old Project FUSE sourcecode under Apache 2.0. This will seed EdgeX with over 125,000 lines of code and about 15 microservices. The idea is to enable developers to build their own products and services on top of an open foundation, creating an open framework for IoT-oriented edge computing.
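A microservices framework of this kind typically works by having small device services translate protocol-specific readings into one common event format that any upstream application can consume. The sketch below illustrates that pattern only; the function and field names are invented and do not correspond to EdgeX’s real (Go-based) APIs.

```python
import json
import time

def normalize(source: str, protocol: str, raw: dict) -> str:
    """Device-service step: wrap a protocol-specific reading in a
    common event envelope that any upstream service can parse."""
    event = {
        "device": source,
        "protocol": protocol,
        "readings": raw,
        "ts": int(time.time()),
    }
    return json.dumps(event)

# Two devices on different protocols, one common format out.
print(normalize("thermostat-1", "modbus", {"temp_c": 21.5}))
print(normalize("lamp-7", "ble", {"lux": 430}))
```

Once everything at the edge speaks this shared envelope, analytics, security and management services can be layered on top without caring whether the original sensor spoke Modbus, BLE or anything else – the interoperability goal EdgeX is chasing.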

Despite the influence of Dell, and about 50 founding members, there is already a group which is addressing the same issue as part of its remit – the OpenFog Consortium. This Cisco-inspired group recently unveiled its new Reference Architecture document, which is essentially a framework for managing emerging networks that blend centralized cloud processing with distributed network edge compute functions.

There is a great deal of common ground between this aspect of OpenFog, and EdgeX Foundry, but very little overlap between the two groups’ memberships – with only Dell, FogHorn Systems, and relayr appearing in both lists. That points to a division on political/commercial grounds rather than technology standpoint, and that’s without considering some of ETSI’s MEC work on multi-protocol edge frameworks. ETSI and OpenFog already come from contrasting environments, with MEC largely representing the telecoms agenda and OpenFog driven by the IT/cloud perspective. Now EdgeX adds another complication.

OpenFog has some serious IoT clout, with the likes of ARM, Cisco, Dell, GE, Intel and Microsoft in its ranks, but apart from Dell, none of these has yet joined EdgeX. Unless this starts to happen, there may be limited prospects for the two groups to converge their efforts, despite the logic of that.

However, lower down the stack, there has already been consolidation between overlapping standards efforts. The Open Connectivity Foundation (OCF) and the AllSeen Alliance were both addressing some of the same issues as EdgeX, though at a lower level, rather than as part of a broader environment like OpenFog. They both wanted to establish common ways for devices to discover one another and establish communications, regardless of their wireless protocol. AllSeen and OCF have now merged and there could be further room for convergence with EdgeX.

Notably, two EdgeX members, Two Bulls and Beechwoods Software, had been working on the IoTX project, which was based on the AllSeen work and is now being rolled into EdgeX.

And the executive director of EdgeX Foundry, Philip DesAutels, was previously senior director of IoT for the AllSeen Alliance (and holds that same position in the Linux Foundation). He believes the goalposts have changed for those pursuing IoT interoperability. For instance, he does not believe that sensors with one purpose, such as those in street lights, need to communicate to sensors in other devices (a goal of AllSeen). The important thing is that they can interact, and information from all of them can be viewed holistically, through common middleware and analytics platforms.

He told FierceWirelessTech:  “I don’t think there’s going to be one protocol at the edge. I think we’re going to see more and more of them. In the consumer space, that’s really a problem because things don’t work together well as a result. In industry, smart cities, it’s less of a problem. It really doesn’t matter that my street lights can’t talk to my sewer monitoring sensors directly, as long as the ERP system that glues them together can talk to them.”

The EdgeX project is “solving a layer above that” and trying to create a standard framework, with APIs (application programming interfaces) so third parties can add services on top. Such frameworks need “a standard security model and a standard management model because that’s what people who build enterprise systems have”, and “when we have that standard higher level services model on top, we should be able to plug in lots of services depending on what you want to do.”

“Success in IoT is dependent on having a healthy ecosystem that can deliver interoperability and drive digital transformation,” said Jim Zemlin, executive director of the Linux Foundation. “EdgeX Foundry is aligning market leaders around a common framework, which will drive IoT adoption and enable businesses to focus on developing innovative use cases that impact the bottom line.”

The EdgeX founding members are AMD, Alleantia, Analog Devices, Bayshore Networks, Beechwoods Software, Canonical, ClearBlade, CloudPlugs, Cloud of Things, Cumulocity, Davra Networks, Dell, Device Authority, Eigen Innovations, EpiSensor, FogHorn Systems, ForgeRock, Great Bay Software, IMS Evolve, IOTech, IoTium, KMC Controls, Kodaro, Linaro, MachineShop, Mobiliya, Mocana, Modius, NetFoundry, Neustar, Opto 22, relayr, RevTwo, RFMicron, Sight Machine, SoloInsight, Striim, Switch Automation, Two Bulls, V5 Systems, Vantiq, VMware and ZingBox.

The EdgeX industry affiliate members include Cloud Foundry Foundation, EnOcean Alliance, Mainflux, Object Management Group, Project Haystack and ULE Alliance.

All three Chinese MNOs trial MEC solutions with ZTE

The influence of China on the new platforms for virtualization, SDN and edge computing is profound. The three Chinese MNOs, as well as major vendors and research institutes, are contributing technology to conventional standards bodies and open source initiatives as they look to secure a far stronger position in 5G technology and IPR than they have had in the past.

For instance, all three Chinese mobile operators – China Telecom, China Mobile and China Unicom – are conducting trials of ETSI MEC (Multi-Access Edge Computing), with commercial launches expected in 2018.

The three operators are working with local vendor ZTE on pilots and technical verification, with a particular focus on traffic and smart city applications, or what ZTE likes to call the ‘Internet of Vehicles’.

It has worked with Ningbo Telecom, part of China Telecom, on a campus network to support local traffic offloading; and on smart parking projects based on MEC and NB-IoT. With China Mobile and Zhuhai, it has been running a project focused on precise indoor positioning, operating in Beijing since 2016; while ZTE and Unicom demonstrated a MEC-based virtual reality service in Shanghai.

The vendor is making a major bet on MEC and says it already has core technologies and patents covering several key enablers – virtualization, containers, high precision positioning, close-to-user content delivery networks and network slicing.

It is basing its efforts around selected use cases which it believes are both appropriate to an edge-based approach and likely to be commercially in-demand in the short to medium term, and its pilots will support 4G or 5G radio links, or a combination. The use cases include service localization, local caching, some IoT and smart city applications, and the Internet of Vehicles (which combines V2X and V2V with MEC and NB-IoT.)

Last month, ETSI officially changed the name of its MEC ISG from Mobile to Multi-access Edge Computing, as it had promised to do the previous autumn. As it embarked on phase two of the MEC ISG’s work, it significantly broadened the scope of the technology, which is becoming highly strategic to many organizations in the mobile, IoT and cloud worlds.

Alex Reznik of HPE, the new ISG chair, said: “In phase two, we are expanding our horizons and addressing challenges associated with the multiplicity of hosts and stakeholders. The goal is to enable a complete multi-access edge computing system able to address the wide range of use cases which require edge computing, including IoT.”

The expanded scope reflects a rapid growth in MEC’s relevance to many sectors – and perhaps a desire, by the telecoms industry, to stay ahead of the OpenFog Consortium. While the two organizations are officially complementary, their contrasting roots in telecoms and IT also bring the baggage of political tensions between the two worlds.

To steal a march on fog computing as the dominant approach to edge computing, MEC needed to become less focused on mobile operators. The initial use cases focused on moving high bandwidth services closer to the subscriber, offering a standard process for techniques like video caching, in order to use capacity more efficiently and reduce latency.
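The caching use case is simple to picture: an edge node keeps popular content locally and only reaches back to the origin on a miss, saving backhaul capacity and shaving latency for everything it can serve on the spot. A minimal sketch, with invented names and a toy LRU eviction policy standing in for a real cache:

```python
from collections import OrderedDict

class EdgeCache:
    """Tiny LRU cache standing in for a video cache at an edge site:
    popular content is served locally instead of traversing the core."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()
        self.hits = self.misses = 0

    def get(self, url: str, fetch_origin) -> bytes:
        if url in self.store:
            self.hits += 1
            self.store.move_to_end(url)       # mark as recently used
            return self.store[url]
        self.misses += 1
        data = fetch_origin(url)              # expensive trip back to the origin
        self.store[url] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used item
        return data

cache = EdgeCache(capacity=2)
origin = lambda url: f"payload:{url}".encode()
cache.get("/clip1", origin)
cache.get("/clip1", origin)
print(cache.hits, cache.misses)   # 1 1: second request never left the edge
```

Every hit is traffic that never crossed the core network – which is exactly the capacity and latency argument that drove MEC’s first wave of use cases.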

But the remit quickly broadened – not all devices and services requiring low latency are mobile, especially in the IoT; optimally efficient use of network resources can be boosted by supporting seamless access across fixed, cellular and WiFi; operators want MEC to enable new services and revenue streams, not just enhance the performance of old ones. As many telcos try to turn themselves into cloud providers, it becomes increasingly important to know how and where to distribute the cloud resources and services, looking at the network as a whole, not just the cellular element.

So, multi-access edge computing was born and the second phase of MEC ISG’s project will not just address various access mechanisms, but also multiple MEC hosts being deployed in different networks, owned by different operators and running edge applications in a collaborative manner.

ETSI said in a statement: “Future work will take into account heterogeneous networks using LTE, 5G, fixed and WiFi technologies. Additional features of the current work include developer friendly and standard APIs, standards-based interfaces among multi-access hosts and an alignment with NFV architecture.”

It stressed the links to a broader virtualized network and cloud architecture – an area where MEC will be able to draw heavily on another key ETSI ISG, for NFV (Network Functions Virtualization).
