
Special Report: 5G is not just a network

To achieve 5G success, operators must stop talking about the network

For several years now, the new use cases for the 5G network have been discussed and hyped. Low latency, high-availability connectivity, combined with artificial intelligence, network slicing and a host of other buzz technologies, will deliver an entirely new business case for operators and enable transformation for industries.

Yet, with 5G in its early commercial stages, we see a rising number of flagship events that tick all the boxes in terms of next generation enterprise applications – autonomous vehicles and robots enabled by low latency connections, edge computing, AI analytics and so on – but rarely mention 5G. Last month, two hi-tech manufacturing initiatives were launched – the Open Industry 4.0 Alliance and the Open Manufacturing Platform. Though the smart factory is often touted as one of the most important markets that will drive new 5G revenues, mobile connectivity was scarcely mentioned at either launch.

Another darling of 5G pundits is the self-driving vehicle, and unlike most factory robots, cars clearly do have to be highly mobile. Yet when Tesla held its Autonomy Day for investors last week, with founder Elon Musk promising Level 5 autonomous driving next year, there was no mention of 5G.

This is not to say that 5G won’t be valuable for autonomous vehicles. The 5G Automotive Association, and trials like the UK’s AutoAIR testbed, demonstrate the potential to use 5G for vehicle connectivity which could deliver more than current wireless technologies such as C-V2X and DSRC. Neither of those was mentioned by Tesla either – and that highlights the issue for the 5G business case. 5G may well be very useful, but it is basically a piece of plumbing, as far as most industrial developers are concerned. The real transformation will come from new digital platforms, harnessing cloud-native technology and containers, edge computing, network slicing and so on.

To secure a place in the industrial and IoT value chains – one more valuable than just being a connectivity pipe – operators will need to implement advanced cloud and software technologies in their networks and IT systems. They will also have to harness those technologies more effectively than the cloud and data center service providers, which will also be aiming for the pivotal role in digitalizing vertical sectors like manufacturing and pushing them forward towards the Internet of Things.

Operators have the advantage of controlling connectivity, but they could lose even some of that if private network operators, and giants like AWS, start to deploy their own networks in shared or industrial spectrum. The MNOs should also be in pole position to leverage network slicing and edge computing, since these can be built on their existing infrastructure and platforms.

However, they need to stop obsessing over network capabilities such as ultra-low latency (the requirement for which is questionable in most use cases anyway), and turn their attention to accelerating their work on cloud-native cores, slicing engines, efficient orchestration, AI-enabled automation and open digital platforms – in other words, the technologies which the industries themselves, and the cloud providers, are focused on. It will be tough for operators to shine in these areas, and most will need partners (possibly AWS and Microsoft themselves). But if they remain focused on their comfort zone of the connectivity layer, they will find themselves providing just a pipe, and 5G will continue to be sidelined in the announcements of next generation industrial deployments.

AT&T’s Airship will keep OpenStack central to 5G migration

There have been ongoing debates about how central OpenStack will be to telcos, and the community was clearly staking its claim to a key role in 5G at its flagship event, this week’s Open Infrastructure Summit in Denver, Colorado. While some operators, such as AT&T, have been strong supporters of OpenStack as the infrastructure software platform for their virtualized networks, others have criticized the complexity of implementing these systems. Over the past year or more, the OpenStack project has progressively sought to answer its critics and introduce refinements that make the platform better suited to the key cloud trends of the day – edge computing, cloud-native 5G, and the move from virtual machines (VMs) towards containers, particularly Kubernetes.

The latest, and nineteenth, release of the OpenStack Foundation’s software is codenamed Stein. It is focused heavily on Kubernetes, and the Foundation’s executive director, Jonathan Bryce, noted that a survey of the membership conducted a year ago found that 61% of planned deployments included integration of OpenStack with Kubernetes containers. This may not have been led by operators, but it will be important to them, as they start – rather too slowly, in many respects – to plan a migration from virtualized networks based on VMs, which tend to be software versions of old-fashioned functions, to services based on containers, which are designed specifically for the cloud (cloud-native) and are far more agile and reconfigurable.

The Stein release includes the OpenStack Magnum Kubernetes installer, which claims to halve launch times for Kubernetes clusters to about five minutes even with large numbers of nodes. A Kubernetes cluster, through integration with OpenStack cloud, can also take advantage of the Manila storage control plane, Cinder block storage service and Keystone identity service.
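
For a sense of what that integration looks like in practice, here is a minimal sketch of requesting a Kubernetes cluster through Magnum using the openstacksdk cloud layer. The cloud name, template name and node counts are illustrative placeholders, and the helper names assume a recent openstacksdk release with COE (Magnum) support.

# Minimal sketch (assumptions above): ask Magnum for a Kubernetes cluster via
# the openstacksdk cloud layer. "mycloud" must exist in clouds.yaml and
# "k8s-template" is a pre-created Magnum cluster template.
import openstack

conn = openstack.connect(cloud="mycloud")

template = conn.get_coe_cluster_template("k8s-template")

cluster = conn.create_coe_cluster(
    name="edge-k8s",
    cluster_template_id=template["uuid"],
    master_count=1,
    node_count=3,
)
print("Cluster build requested:", cluster)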

Tighter integration between the two environments is crucial for OpenStack supporters, since it positions them as complementary rather than competitors. It is also important to operators, since it removes some of the risk of having to make an either/or choice between environments, and the fear of wasting OpenStack investments when moving to containers.

At last year’s summit in Vancouver, one of the most important developments in OpenStack/Kubernetes coexistence came with the launch of one of the many AT&T-driven open source initiatives in telecoms infrastructure. This was Airship, which was also backed by SK Telecom and Intel. Its objective was to make it easier to build and manage a telco cloud, by simplifying cloud lifecycle automation via bare metal containers. To this end, it created an open declarative platform to implement OpenStack on Kubernetes (OOK) and manage the full lifecycle of the resulting infrastructure, starting from bare metal, to deliver a production-grade Kubernetes cluster.
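
The end state of that declarative approach is an OpenStack control plane running as ordinary pods under Kubernetes, which means standard Kubernetes tooling can observe and manage it. A minimal sketch follows, assuming a reachable kubeconfig and the conventional "openstack" namespace used by OpenStack-Helm-style deployments; both are placeholders rather than details from the article.

# Minimal sketch: inspect a containerized OpenStack control plane with the
# official Kubernetes Python client. The namespace name is a convention and
# is configurable per deployment.
from kubernetes import client, config

config.load_kube_config()           # or config.load_incluster_config() in a pod
core = client.CoreV1Api()

for pod in core.list_namespaced_pod(namespace="openstack").items:
    statuses = pod.status.container_statuses or []
    ready = all(cs.ready for cs in statuses)
    print(f"{pod.metadata.name}  ready={ready}")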

The first official release of Airship made its debut in Denver. Airship 1.0 boasts improved security, resiliency and continuous integration, and brings together several loosely coupled, interoperable open source tools for automated cloud provisioning. Those tools include OpenStack for virtual machines, Kubernetes for container orchestration, and metal-as-a-service for bare metal provisioning. There are also plans to add support soon for OpenStack Ironic, a project that provisions bare metal machines instead of virtual machines.

The project will be part of a containerized OpenStack that is scheduled for release this summer.

It has its eyes firmly on simplifying 5G infrastructure, and therefore on moving to the center of operators’ plans. It is targeting use cases including delivery of software-defined networking (SDN) in the 5G network, and of virtual network functions (VNFs) such as the virtualized evolved packet core (vEPC), virtualized RAN (vRAN), backhaul and traffic shaping services. Ericsson, which has been a strong supporter of OpenStack in recent years, said it would be demonstrating a vRAN on an Airship-based, containerized OpenStack cloud.

AT&T is already using the technology in anger, having announced a three-year deal with cloud platform provider Mirantis in February to build out the next generation of the telco’s Network Cloud, this version focused on 5G. AT&T has been building the Network Cloud using Airship, and is using the software to make it practicable to roll out a large number of data centers and manage them on a single lifecycle – the Network Cloud spans more than 100 data centers so far. Now it plans, with Mirantis’s help, to refresh the cloud infrastructure to align Network Cloud better with its impending 5G upgrade and the push to a cloud-native core, and even RAN.

The Network Cloud, formerly called AIC, was built using OpenStack cloud infrastructure software and will now use Mirantis’s commercial OpenStack distribution. The big difference is that Network Cloud for 5G will be built on OpenStack using containers, managed by the Kubernetes container orchestration system, rather than virtual machines (VMs) as before.

Of course, many workloads will still be running on VMs in the legacy platform for years to come, and the first container-based workloads will be specifically related to 5G. But AT&T’s pace of change is rapid. According to its associate VP for network cloud software engineering, Ryan Van Wyk, the telco has deployed OpenStack on Kubernetes in more than 20 regions to date.

The platform is also supporting AT&T’s FirstNet public safety network, which is running as a workload on Network Cloud, demonstrating security advances made by the Kubernetes community.

Among the 5G-oriented virtual network functions (VNFs) to be deployed on the AT&T cloud are the virtualized evolved packet core (vEPC), RAN backhaul, traffic shaping services, customer usage tracking, smart voicemail, video streaming and many consumer facing services. However, even AT&T is currently cautious about putting a timescale on virtualizing its RAN, at any scale, in the cloud. And Amy Wheelus, AT&T’s VP of network cloud, pointed out: “The mobility 5G packet core is not containerized. The Network Cloud control plane is containerized.” She added: “It’s an undercloud platform, a lifecycle management platform.”

Mirantis CEO and co-founder Adrian Ionel explained that the company’s platform allows Kubernetes to be run on-premises, on bare metal, or in the cloud. For AT&T, that Kubernetes base will support OpenStack as a workload on top of the container orchestrator. OpenStack is needed to support NFV and to orchestrate VNFs from different vendors on one cloud.

Mirantis is also working with India’s Reliance Jio to run OpenStack on top of Kubernetes, but companies can deploy different combinations too – for carmaker Volkswagen, it is running Kubernetes on OpenStack in an on-premises environment.

In Denver, AT&T and Mirantis provided updates on the Airship and Network Cloud project. Mirantis expects the platform to run a few thousand nodes this year, and then scale to 10,000 nodes over the next three years, and more than 20,000 nodes “in the years to come”. Ionel told SDX Central: “This is really about Kubernetes taking a prime role in the future infrastructure of a gigantic carrier. The scale of this is really staggering.”

There are other projects which see OpenStack working more closely with Kubernetes, such as LOCI (Lightweight Open Container Initiative) and OpenStack-Helm, which package OpenStack components into container images that can be managed with Kubernetes or other container orchestration tools.

There are also some specific enhancements targeted at a future highly distributed and disaggregated virtualized network, based on 5G and edge computing. Many of these aim to ease the management of these distributed cloud systems in the 5G environment. For instance, OpenStack has added a network segment range management feature to its Neutron networking project, providing cloud administrators with an API extension to manage segment type ranges dynamically without having to edit configuration files.

Stein also allows for application scheduling based on minimum bandwidth requirements, using the OpenStack Nova compute service. And the increasing need, in 5G, to tailor connectivity to precise users or applications – eventually leading to slicing – also drives another API improvement, which supports “aliases to quality of service (QoS) policy rules”.
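
As an illustration of how that bandwidth-aware behavior is expressed, the sketch below creates a QoS policy with a guaranteed-minimum-bandwidth rule through openstacksdk’s Neutron proxy. The cloud name, network name and bandwidth figure are placeholders; instances whose ports carry such a policy can then be scheduled only onto hosts able to honor the guarantee.

# Minimal sketch: a Neutron QoS policy with a minimum-bandwidth rule, created
# through openstacksdk. All values are illustrative, not from the article.
import openstack

conn = openstack.connect(cloud="mycloud")

policy = conn.network.create_qos_policy(name="slice-gold-min-bw")

conn.network.create_qos_minimum_bandwidth_rule(
    policy,
    min_kbps=100000,      # guarantee roughly 100 Mbps
    direction="egress",
)

# Ports created with this policy inherit the guarantee; the scheduler takes
# the associated bandwidth resources into account for attached instances.
network = conn.network.find_network("provider-net")
port = conn.network.create_port(network_id=network.id, qos_policy_id=policy.id)
print("Created port", port.id)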

The Foundation has also continued the edge-focused work it kicked off two years ago, packaging subsets of components to suit small footprint infrastructure. This is a feature of both Airship and another project, StarlingX, which are both aimed at making it easier to deploy and manage cloud infrastructure.

 

Ahead of IBM marriage, Red Hat extends OpenStack to the 5G edge

Many of the discussions at the OpenStack Foundation’s Open Infrastructure Summit in Denver this week revolved around edge computing. This is an area where the OpenStack community, and telcos, have something in common – they had initially planned their next generation platforms with a centralized cloud and network in mind, but are now having to ensure they can carve out a high value role in a distributed, edge cloud world.

For the past couple of years, the OpenStack Foundation has worked on creating easily packaged, mix-and-match subsets of its code to be deployed on small footprint infrastructure to suit a variety of distributed cloud scenarios. In autumn 2017, the Foundation’s executive director, Jonathan Bryce, said edge computing had emerged as a “surprise” use case for the open source platform.

In the first quarter of 2018, the Foundation’s Edge Computing Group released a white paper, ‘Cloud Edge Computing: Beyond the Data Center’, which looked at the specific requirements of edge computing and what they imply for OpenStack. There was also a clear focus on edge in last year’s release of OpenStack, called Queens, which was intensified in this year’s Stein release (see previous item).

Queens included the work of several edge-focused projects, such as OpenStack LOCI (Lightweight Open Container Initiative) and OpenStack-Helm. Both these simplify the deployment of OpenStack services at the edge. The former provides LOCI-compatible images of OpenStack services, allowing them to be deployed by a container orchestration tool such as Kubernetes.

The latter is the result of a project led by AT&T, SK Telecom and SAP. It uses the Helm packaging system, which is a way to map a complex Kubernetes application, and define the interaction of containers. That enables a small footprint OpenStack deployment, with Kubernetes controlling it in an automated way. The project had its first full release in February 2018 and AT&T has been trialing it.
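
To give a flavor of what a Helm-driven OpenStack deployment involves, the snippet below shells out to Helm from Python to install a single OpenStack-Helm chart. The chart path, release name, namespace and values file are illustrative placeholders; a real deployment installs many charts in dependency order with site-specific overrides.

# Minimal sketch: install the Keystone chart from a local openstack-helm
# checkout into the "openstack" namespace. Paths and names are placeholders.
import subprocess

def helm(*args: str) -> None:
    """Run a helm command, raising if it exits non-zero."""
    subprocess.run(["helm", *args], check=True)

helm("upgrade", "--install", "keystone", "./openstack-helm/keystone",
     "--namespace", "openstack",
     "--values", "overrides/keystone-edge.yaml")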

OpenStack has also been trying to work with established edge groups such as ETSI Multi-access Edge Computing (MEC) and the OpenFog Consortium. For instance, key OpenStack supporter Red Hat joined the ETSI MEC group in 2016 to help define use cases and architectures and ensure these remained complementary with open cloud platforms.

Red Hat is in the process of being acquired by IBM, a deal that will enhance the latter’s position in open source cloud platforms and in 5G. Red Hat believes OpenStack is the natural platform to provide one of ETSI MEC’s three management layers – hosting infrastructure. The others are the applications platform (where Red Hat pushes its JBoss offering) and the actual services, and much work has gone into defining open northbound APIs to interface with the individual apps and services. In MEC, the hosting infrastructure and apps platform are distributed across servers which are integrated with the RAN, at the base station, radio network controller or cell aggregation site.

This year, Red Hat has delivered distributed compute node (DCN) capabilities within its OpenStack Platform to make it easier for organizations to build an open edge architecture with reduced operational overheads. The company’s OpenStack Platform 13 supports central management of edge deployments with the same tools that manage core OpenStack deployments. It also supports a smaller OpenStack footprint at resource-constrained edge environments, including a configuration for just a single node at an edge location, and with support for low latency 5G.

Sandro Mazziotta, director of NFV product management at Red Hat, sees 5G as a key target. “If you asked me six months ago, I would have said it was a 2020 agenda item,” Mazziotta said at the event. “Now we are being asked by our customers to be ready for the second half of this year.” He questions whether Kubernetes will be sufficiently mature to support large telco 5G environments in that short a timeframe, especially for the telcos which are aiming to package their 5G workloads as cloud-native right from the start (admittedly, this is only a handful of operators – for most, cloud-native is a phase 2 activity, for the early 2020s or later).

“Using OpenStack and running containers in [virtual machines] is not a beautiful solution,” Mazziotta said. “But it’s pragmatic at this point to get to market.” He told SDX Central that most of Red Hat’s large operator customers are looking at major 5G roll-outs from next year that will run on top of container platforms. Red Hat currently offers its OpenStack platform alongside its Kubernetes-based OpenShift platform.

Red Hat also aims to make it easier for telcos, and other customers, to progress to full cloud-native operations. Red Hat Virtualization 4.3 is the latest version of the company’s Kernel-based Virtual Machine (KVM) platform, and the aim is to improve the performance of VMs while providing a path to containers as those become sufficiently mature and trusted for high performance environments like 5G. Red Hat’s aim is to encourage organizations to virtualize their Windows and Linux applications using VMs as a quick step towards a modern digital platform, and then set out a clear path to cloud-native.

“Red Hat Virtualization allows customers to more easily virtualize traditional workloads, while building a foundation to power future cloud-native and containerized workloads,” said the firm. “Many companies have benefited from a lower total cost of ownership (TCO) after implementing Red Hat Virtualization.”

Docker and ARM target the edge cloud

The two big names in containers are Kubernetes and Docker, and both have existed mainly in an Intel x86 world. But ARM-based processors are pushing their way into servers, and could be expected to be particularly strong in edge computing infrastructure. A new agreement between Docker and ARM could, therefore, be important to open up the distributed cloud more easily to ARM.

The two organizations have formed an alliance to help make containerized applications run more easily on ARM-based hardware, and to make it easy for developers to target both x86 and ARM platforms. Initially, they will create a cloud-based development environment focused on cloud, edge and IoT applications.

The first result will be a version of Docker Enterprise Engine tuned for Amazon’s ARM-based cloud infrastructure, AWS EC2 A1 instances, which are based on Amazon’s own Graviton processors and their 64-bit ARM Neoverse cores. ARM capabilities will be integrated into Docker Desktop and there will be a new extension to the Neoverse platform.

David Messina, EVP of strategic alliances for Docker, said: “For developers, what this means for them is an opportunity to do what they’ve done with x86 now seamlessly with ARM. And then for enterprises, many of our customers see the edge as an important part of their digital transformation strategies.” He added: “Even on an x86 laptop, all the commands they know about Docker, in terms of building containers, shipping them and running them, are now enabled for ARM.”
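
In practice, that means a developer can ask the Docker engine for ARM builds from existing tooling. Below is a minimal sketch with the Docker SDK for Python, in which the build path, image tag and platform string are illustrative placeholders; cross-building on an x86 host also relies on the qemu/binfmt emulation support bundled with Docker Desktop.

# Minimal sketch: build a local project for 64-bit ARM and run an ARM image,
# using the Docker SDK for Python. Names and paths are placeholders.
import docker

client = docker.from_env()

# Build ./app for linux/arm64 instead of the host architecture.
image, _build_logs = client.images.build(
    path="./app",
    tag="example/edge-service:arm64",
    platform="linux/arm64",
)

# Run an ARM-only image; on an x86 host this works through qemu emulation.
output = client.containers.run("arm64v8/alpine", ["uname", "-m"], remove=True)
print(output.decode().strip())   # expected: aarch64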

Next, Docker plans to address product lifecycle management, making its development environment applicable to other compute environments and consolidating edge workloads.

ONF takes control of P4 – next step, convergence of white box platforms?

The Open Networking Foundation (ONF) has been assembling a powerful group of projects geared to the future open, software-based telecoms network, offering a telco-centric approach to the cloud. Now it has taken full control of P4, the programming language for white box switches and routers, which will be an important component of next generation telecoms networks and help deliver the new economics desired for 5G.

The ONF had already largely replaced its own OpenFlow protocol with the more powerful P4 in its core projects, and will now officially add the technology to its portfolio, allowing for deeper integration with other major activities such as CORD (Central Office Re-architected as a Datacenter) and ONOS.

P4 will now become an open source project hosted by ONF, which should boost innovation and extend the reach of the technology. “It’s time for P4.org to be part of a larger, more established organization that can keep it open, independent and steadily growing for many more years to come,” said Professor Nick McKeown, co-founder and board member of both the ONF and P4.org.

That could create conflict with another open white box OS initiative, the Linux Foundation’s Danos, which is based on AT&T’s in-house dNOS technology. However, AT&T is also heavily involved in the ONF, and its CTO Andre Fuetsch chairs the Foundation as well as leading the telco’s own ambitious white box roll-out – in which P4 is a key element.

The involvement of AT&T in both projects will not reassure those who think one operator is taking too much control of the whole next generation platform for it to remain fully open. But it does raise hopes for increasing collaboration between two projects which should be complementary and could, together, deliver a richer white box environment for demanding environments like 5G.

The ONF itself said it will “strategically align P4 activities with the Linux Foundation to advance our shared mission of promoting open source as a tool for transformation and innovation”.

An open OS is important to control and program the virtual network functions (VNFs) for switching and routing, which will be running on commoditized white box hardware in future. The P4 programming language describes how switches, routers and network interface cards (NICs) process packets across white box hardware.
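
P4 itself is a domain-specific language, but the abstraction it exposes – match-action tables applied to packet headers, with entries installed by a controller at runtime – can be sketched conceptually in a few lines of Python. This is purely an analogy to show what “programming the pipeline” means; it is not P4 code or any real switch API.

# Conceptual sketch only: the match-action model that P4 programs describe.
# A table matches on header fields and applies an action; the control plane
# installs entries and a default (miss) action at runtime.
from dataclasses import dataclass

@dataclass
class Packet:
    dst_ip: str
    out_port: int = 0
    dropped: bool = False

class MatchActionTable:
    def __init__(self, default_action):
        self.entries = {}
        self.default_action = default_action

    def add_entry(self, key, action):
        self.entries[key] = action          # installed by the control plane

    def apply(self, pkt):
        self.entries.get(pkt.dst_ip, self.default_action)(pkt)

# "Program" the pipeline: forward a known destination, drop everything else.
ipv4_table = MatchActionTable(default_action=lambda p: setattr(p, "dropped", True))
ipv4_table.add_entry("10.0.0.1", lambda p: setattr(p, "out_port", 3))

pkt = Packet(dst_ip="10.0.0.1")
ipv4_table.apply(pkt)
print(pkt)    # Packet(dst_ip='10.0.0.1', out_port=3, dropped=False)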

That disaggregation should reduce total cost of ownership significantly over time, especially where operators are looking to densify their networks and their cloud platforms to support 5G and edge computing, involving huge numbers of elements. Those elements not only need to be cheap and simple to configure and deploy, but they need to be easily reconfigured and redeployed to support changing network demands.

A year ago, Fuetsch demonstrated AT&T’s use of P4 on merchant silicon for a white box switch, saying in a conference keynote: “This is more than just about lowering cost and achieving higher performance. Frankly that’s table stakes. This is really about removing barriers, removing layers, removing all that internal proprietary API stack that we’ve lived with these legacy IT systems, now we can bypass all of that and go straight to ONAP” to achieve fine-grained per-packet visibility.

“We see great synergy between all the ONF projects and P4, and our Stratum and COMAC projects are already making use of P4 in innovative ways,” said Guru Parulkar, executive director of the ONF. “More closely aligning our activities will be of benefit to both communities, and we expect P4 to play a pivotal role as we continue to pursue the broader next generation SDN agenda.”

Stratum is an open source switch OS for SDN architectures using white box switches, and its aim is to end the data plane vendor lock-in that arises from current proprietary silicon interfaces and closed APIs. COMAC (converged multi-access and core) is a reference design to support convergence of a telco’s mobile and broadband networks, and derives from CORD.

Google was an original code contributor to Stratum, along with switch-chip vendors Barefoot Networks, Broadcom, Cavium and Mellanox, plus China Unicom and Tencent of China, and Dell EMC. Google said it would “help grease the market” by introducing the software into its production systems in 2018, even before the full code release. Stratum will use the P4 language and runtime, and three open source protocols which Google helped develop – gNMI, OpenConfig and gNOI.

“We’re at an interesting inflection point,” Timon Sloane, VP of marketing and ecosystems for ONF, told EETimes. “We learned a ton from OpenFlow, but it has limitations, so the community strategically shifted to P4 and the P4 runtime to solve problems in a more comprehensive way.”

He acknowledged that ONF is no longer actively developing OpenFlow, which has been used by Google and telcos to access the data forwarding pipeline of network ASICs – but cannot access all the functions, and does not allow the pipeline to be programmed, unlike P4. Critically, OpenFlow did not support full multivendor interoperability because adjustments were needed for each ASIC.

The inclusion of Broadcom in the Stratum group was a feather in the cap for the ONF, since the chipmaker had previously been scornful of P4, and dismissed the idea that customers would want to program a network chip’s pipeline. If they did, it would offer its own C++ tools, it said when asked about the issue just a week earlier, at the launch of Jericho2 (see separate item). However, Broadcom is not expected to run P4 programs natively on its chips, but to write translation layers.

The same would go for network OEMs which still rely on ASICs rather than merchant silicon in their equipment. The biggest of those, Cisco, has not expressed interest in P4 to date.

The ONF, like other open platforms, has been moving towards the edge and put this, and 5G, at the heart of a strategic plan it published a year ago. It also acknowledges that the telco network and cloud are becoming so complex and broad in scope that cooperation between different standards and open source bodies is essential. ONF chair Fuetsch, for instance, hinted last year at the ONF becoming an open source arm for the Linux Foundation-hosted ORAN (Open RAN) Alliance (which also has Fuetsch as its chair). ORAN was formed from the merger of two bodies, the Cloud-RAN Alliance and the xRAN Consortium – the ONF had been partnering with the latter to develop an open source xRAN controller, based on ONOS and integrated with CORD.

The Foundation’s VP of marketing and ecosystem, Timon Sloane, told FierceWireless last year: “ORAN is not an open source group. It’s more around standardization and architectural definitions and how to build a next generation cloud native RAN. The ONF is really an open source group. We believe that the ONF is well-suited to be the open source arm for ORAN. Given the common operators that we both have and the xRAN historical relationship, we see a really bright future for that type of partnership.”

Fuetsch is also suggesting that the mobile variant of CORD (Central Office Re-architected as a Datacenter) should be transitioned into the ONF’s reference design initiative to provide a more uniform mobile platform. The first big step in convergence of telco open initiatives came in 2016 when the ONF merged with ON.Lab. That brought together the former’s major technology, the OpenFlow protocol (now being largely replaced by the more powerful P4), with ON.Lab’s CORD and ONOS projects.

Fuetsch wrote in the newsletter that ONF’s operator-driven board of directors – which includes representatives from AT&T, China Unicom, Comcast, Google, Deutsche Telekom, Telefónica, NTT and Turk Telekom – has officially asked the Foundation to pursue edge cloud more formally through reference designs and CORD. The goal is to create an edge cloud that combines access technologies too.

Like ETSI with its Multi-access Edge Compute (MEC) initiative, the ONF will seek to push a telco-centric view of the edge cloud. Fuetsch explained:

“The board’s direction for the ONF is to
(1) identify and focus on a few compelling use cases or applications especially in enterprise IoT space
(2) identify a high level architecture aligned with our reference designs that can implement those use cases
(3) investigate how CORD components can serve in the edge architecture, including how the architecture and CORD components relate to other edge initiatives and where the ONF architecture and CORD components offer a unique value proposition.”

The ONF chair also wrote that there has been strong operator commitment to mobile CORD and a proposal has been made to move this to be another ONF reference design, with its own operator support group.

Last month, the Foundation announced four new reference designs to function as templates to create edge cloud use cases. They are SDN Enabled Broadband Access (SEBA); NFV Fabric; Unified, Programmable and Automated Network; and Open Disaggregated Network. Assembling the components of the reference designs enables proof-of-concept trials and tests, which ONF calls ‘exemplar platforms’.

 

ETSI and Linux Foundation get closer, hope to avoid standards split

Much of the narrative around the shift of telecoms networks to open, and even open source, platforms has related to two very different perspectives on that migration – that of the official standards body, notably ETSI, which has been adopting open processes for some key projects; and that of the fully open source organization, most prominently the Linux Foundation (LF), which has been assembling a major collection of telecom network projects.

Often, these two organizations have been working on similar areas of the platform, though taking different and potentially conflicting approaches. For instance, ETSI’s Open Source MANO (OSM) and the LF-hosted Open Network Automation Platform (ONAP) both address the management and orchestration of virtualized networks, and have attracted different sets of operator supporters – led by AT&T, the initiator of ONAP, on one hand, and Telefónica, which provided much of the seed code for OSM, on the other.

Despite talk that these two efforts could converge – sparked last year by rumors that Telefónica would join ONAP as well – nothing has happened yet, but hopes will be raised again by the news that ETSI has signed a memorandum of understanding (MoU) with LF to collaborate more closely.

They said in a joint statement that they plan to “bring open source and standards closer and foster synergies between them”. The aim is to enable faster information sharing between the two groups, leading to quicker deployment of open networking technologies. Some participants hope these rather general objectives will lead directly to something more concrete, such as collaborations on specific projects, and potential common efforts related to interoperability and conformance testing.

There are many areas, not just MANO, where ETSI and LF projects could either be complementary, or could create divisions and the risk of fragmentation and a reduction in operator confidence. Key areas of mutual focus include NFV, edge computing and artificial intelligence (AI). In the case of NFV, the LF-hosted Open Platform for NFV (OPNFV) is clearly an extension of the core specifications established by ETSI NFV (though a joint effort would be valuable to help keep NFV relevant at all, as it comes under fire for being already too old-fashioned in an era of containerization and cloud-native).

Many of the LF projects have been AT&T-driven, such as Akraino for the edge stack and Acumos in AI, not to mention the ORAN Alliance for the disaggregated RAN. ETSI, as in NFV, was the early mover in edge computing, with Multi-access Edge Computing (MEC), but has seen its efforts challenged by other initiatives such as Akraino and the OpenFog Consortium.

Arpit Joshipura, LF’s general manager for networking, edge and IoT, said: “It’s encouraging to see how far the industry has come in such a short time. This agreement with ETSI signals it’s possible to reach a harmonization of collaborative activities across open source and standards for the networking industry. Working together results in less fragmentation, faster deployments, and more streamlined innovation.”

Luis Jorge Romero, Director General, ETSI, said: “We are eager to deepen our work with the open source communities at the Linux Foundation. Open source has been part of our working methods and our technical groups, Open Source MANO being an example, for several years now. Further collaboration provides the standards community with a quick feedback loop on how our specifications are being implemented.”

The apparent stand-offs between standards bodies and open source organizations have often been driven by rivalries between vendors and operators on both sides, and the underlying objectives, and even technologies, may not be as far apart as those political maneuverings imply. And there are clearly pros and cons to both approaches, which might ideally be combined with a process to support full cooperation. Open source brings a broad innovation base, rapid evolution and agility, but often results in fragmentation and in solutions that may not be trusted for commercial carrier-grade deployments. Standards processes can result in fully unified specifications with robust testing and certification, but move very slowly and in a way that is beset by commercial politics.

Last month, speaking at the ONS North America event, Axel Clauberg (of Deutsche Telekom, and chair of another impactful open group, Telecom Infra Project) said open processes were vital to accelerate the pace of innovation as operators move towards open digital architectures.

“As part of the acceleration, what you want to do is avoid reinventing the wheel. And you have to work with the parties that invented the wheel, you have to collaborate,” he said. “So this is no longer a classical technical waterfall. It’s true agile work. And that requires a level of openness and collaboration. And it’s a different way of working also with the standards organization.”

In some ways, though, open groups are starting to behave more like standards bodies – for instance in establishing formal testing programs – and that may help boost cooperation with the older entities.

For instance, OPNFV and ETSI now co-locate their NFV testing events, with the aim of increasing collaboration between the two processes. Last year, at an event held at ETSI’s base in southern France, there was work on interoperability of the OPNFV platform in deployment, network integration and VNF applications, to support ETSI use cases. A virtual central office (VCO) demonstration was the centerpiece, covering residential services and a virtualized mobile network use case, including virtualized RAN and packet core for LTE.

Last summer, Luis Jorge Romero, director general of ETSI, acknowledged that the body was having many discussions about how to work with, or learn from open source, both internally and with open source communities. “We need to entertain this discussion because I agree with you, this has not happened,” he said, in answer to an audience question. “We’re trying to improve this communication, this relationship.”

Heather Kirksey, director of OPNFV at LFN, told SDxCentral recently: “Things the telecom industry is really used to like performance, verification, and validation, we’re working to figure out how to marry those with open source fundamentally. If the idea is that open source enables you to build a best-of-breed stack and have more flexibility in your vendor model, as opposed to a monolithic system from one vendor” interoperability is critical, she added.

 

Mozilla pushes W3C-based standard to bridge digital and physical IoT

The great challenge for the Internet of Things is finding an elegant way to bridge the divide between the digital and the physical worlds, including the historically closed technologies of 5G connections and devices. A significant step towards delivering such a bridge is a new development from browser firm Mozilla, with the launch of WebThings.

Based on the World Wide Web Consortium’s (W3C) Web of Things standard, WebThings is a graduation from Mozilla’s early-stage development, Project Things. It hopes to be a prime choice for anyone interested in using Web of Things for controlling connected devices out in the field – thanks to its capabilities and open source availability.

So far, the new would-be standard has not gained much attention, reflecting the worrying fragmentation in the IoT, which rarely sees a broad consensus developing around any one set of specifications. It may also reflect scepticism about Mozilla, which has a track record of dumping IoT-related projects. For instance, after deciding that its Firefox OS wasn’t going to power smartphones, as initially intended, Mozilla refocused on smart homes and sensor networks back in March 2016. But less than a year later, it killed off this connected devices initiative.

Shortly after that, in July 2017, it unveiled the Project Things stack for the Raspberry Pi, following it up with the release of the gateway code for the W3C project – which would give any device connected to that gateway the equivalent of a URL, making it addressable in IoT applications.

Mozilla’s WebThings has two main components: WebThings Gateway v0.8, which runs on the relevant gateway devices, and WebThings Framework, a library of software components to be used in applications, along with the supporting API that lets other applications interact with these gateways and connected devices. It’s all on GitHub, and Mozilla’s blog has a pretty detailed guide too.
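
A brief sketch of what interacting with that API might look like from an external application is below. The gateway address, access token, thing ID and property name are placeholders, and the JSON shapes follow the Web Thing API convention of objects keyed by property name.

# Minimal sketch: list things on a WebThings Gateway and set a property over
# its REST API. URL, token, thing ID and property name are placeholders.
import requests

GATEWAY = "https://gateway.local"
HEADERS = {
    "Authorization": "Bearer <token-from-gateway-settings>",
    "Accept": "application/json",
    "Content-Type": "application/json",
}

# Enumerate the devices the gateway exposes as web things.
for thing in requests.get(f"{GATEWAY}/things", headers=HEADERS, verify=False).json():
    print(thing.get("title") or thing.get("name"))

# Switch a hypothetical smart plug on by writing its "on" property.
requests.put(
    f"{GATEWAY}/things/smart-plug/properties/on",
    headers=HEADERS,
    json={"on": True},
    verify=False,    # local gateways commonly use self-signed certificates
)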

The newest additions to the Gateway code have added data logging and better visualization tools, and Mozilla is stressing the privacy and security angles of keeping the user’s data on the gateway, and not transporting it to the cloud. With the API, these gateways can be linked to other W3C Web of Things environments, although there’s not been much progress on this ecosystem front.

A version of Gateway based on OpenWrt is on the way, aimed at off-the-shelf WiFi gateways that could then be used for IoT applications too – opening it up to far more than just the Raspberry Pi hobbyists and developer community. OpenWrt is often found in commercial deployments, and using it as the basis means that the Gateway code no longer has to run on a separate box from the WiFi access point, as is currently the case.

Mozilla thinks its gateway code can be used commercially, but if it’s truly open source, this means that the firm will need to monetize it via support contracts, in the same way that Red Hat and Canonical do with RHEL and Ubuntu software platforms.

“The Mozilla IoT team’s mission is to create a Web of Things implementation which embodies those values and helps drive IoT standards for security, privacy and interoperability,” wrote software engineer Ben Francis in a blog post. “We look forward to a future in which Mozilla WebThings software is installed on commercial products that can provide consumers with a trusted agent for their ‘smart,’ connected home.”

Another company which has been heavily involved in the W3C work is Evrythng, whose CTO Dom Guinard explained that key to the project is network and transport protocol independence, meaning that the standard would be PHY-agnostic, and therefore support all the usual suspects in the IoT. The second main thrust of the new proposal is the use of web technologies, including HTTP, WebSockets and JSON, in order to achieve that transport layer agnosticism.

As the approach focuses on the application layer, Guinard likens it to an open version of Apple’s HomeKit or Nest’s Weave – application frameworks that support multiple physical layers, and ensure interoperability by defining the application layer interactions. He added that the W3C has no interest in defining another PHY.

Guinard explained: “The architecture of the IoT is currently a complex labyrinth of fragmented standards and alliances. This makes neutral institutions such as the W3C vital to ensure that no single, commercial interest dominates the space. Universal standards are essential; not only to provide a fair marketplace, but also to help the IoT achieve its full potential. By operating in silos, devices are making it much harder to work together to deliver a coherent, valuable user experience. This is why Evrythng has proposed an entirely web standards-based approach.”
