Edge compute integration opens exciting business opportunities for MNOs

In Wireless Watch, we have often analysed how operators will need to deploy more resources at the edge of the network, and combine connectivity with processing and storage to support distributed cloud and low latency services. This means the 5G network, and the broader platform that surrounds it, will involve some unprecedented decisions about infrastructure, and the ownership of it – the kind which MNOs have not previously had to consider.

The first moves, being made now, towards densification of the RAN, have highlighted the challenges around finding the right sites and wireline transport to support a more distributed approach to delivering mobile services. But for MNOs which want to integrate edge compute capabilities – storage, processing and specific applications like content delivery networks – into their RANs, there are even more complex choices.

For instance, how many edge compute nodes are needed? For content delivery, a dozen might suffice for a smaller geography like the UK. For true low latency applications, or extremely personalized ones, the operator would be thinking of smaller nodes far closer to the end user – integrated with a small cell perhaps, or with indoor broadband equipment.

And having decided on the architecture, and the business case, operators will then have to choose whether to deploy their own cloud infrastructure or rely mainly on a third party; and how to integrate the transport links.

For those which make clever choices now, there may be new business opportunities. Mobile operators are inherently distributed and many are looking to convert their central offices, and even their cell sites, into locations for edge compute nodes – which could then be opened up to third party service providers and enterprises to host edge cloud applications.

A handful of operators are embarking on projects which others around the world will watch for clues. One is China Unicom, which said late last year that it would build out “thousands” of edge data centers in tandem with its 5G deployment. This is part of a massive densification and 5G program by all three Chinese operators. Even allowing for their habitual optimism about roll-out timelines for new architectures, the scale is daunting. China Telecom, for instance, thinks it will have to deploy between 2m and 2.5m 5G base stations, up from 1.16m for LTE, many of them integrated with compute capability.

While Chinese operators have a great deal of greenfield deployment, many operators are more concerned with how to repurpose sites and infrastructure they already have. In Australia, Telstra said at last month’s Mobile World Congress that it plans to close half of its local exchanges and convert the rest – about 2,500 – into local data centers to support 5G services.

Mike Wright, group managing director of networks, said: “We’re moving from a static to a dynamic network. All applications will come up in the cloud.” He added: “The real scale will come with IoT and edge computing. We will have a network that, with network slicing, will be able to support different services that have different requirements, whether low latency or greater capacity.”

An architecture like that will make it more effective to support 5G services with virtualization, as Telstra is planning across all its use cases. It has shown the way with its LTE network, in which it has deployed VoLTE as a virtual network function (VNF), while implementing software-defined networking (SDN) in its transport network.

Between those local edge data centers and the big centralized ones, there will be some new regional facilities, reflecting that many telcos are considering a multi-tiered, hierarchical architecture. This applies to cloud data centers and the RAN too, creating a balance between the scalability and efficiencies of a centralized approach, and the new services that a highly distributed structure could enable. This is particularly important in the Internet of Things, in which a large proportion of data and signal processing does not need to go further than the local node or small cell, but some will have to be aggregated to the cloud and the core.
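As a rough illustration of that division of labour, the sketch below shows a hypothetical edge node handling IoT readings locally, taking latency-sensitive decisions on the spot and forwarding only compact summaries up to the regional or central tier; the class names, threshold and summary format are illustrative assumptions rather than any operator’s design.

```python
# Illustrative only: a hypothetical edge node that processes IoT readings
# locally and forwards only periodic summaries to a higher tier.
from dataclasses import dataclass
from statistics import mean
from typing import List


@dataclass
class Reading:
    sensor_id: str
    value: float


class EdgeNode:
    """Local tier: handles raw data; only aggregates travel upstream."""

    def __init__(self, alarm_threshold: float):
        self.alarm_threshold = alarm_threshold
        self.buffer: List[Reading] = []

    def ingest(self, reading: Reading) -> None:
        # Low-latency decisions are taken at the edge, without a round trip
        # to the core: here, an immediate local alarm on out-of-range values.
        if reading.value > self.alarm_threshold:
            self.raise_local_alarm(reading)
        self.buffer.append(reading)

    def raise_local_alarm(self, reading: Reading) -> None:
        print(f"local alarm: {reading.sensor_id} = {reading.value}")

    def flush_to_cloud(self) -> dict:
        # Only a compact summary is aggregated up to the regional/central tier;
        # in practice this would travel over the midhaul/backhaul.
        summary = {
            "count": len(self.buffer),
            "mean": mean(r.value for r in self.buffer) if self.buffer else None,
        }
        self.buffer.clear()
        return summary


if __name__ == "__main__":
    node = EdgeNode(alarm_threshold=75.0)
    for v in (61.2, 80.5, 58.9):
        node.ingest(Reading("sensor-1", v))
    print("upstream summary:", node.flush_to_cloud())
```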

The multi-level approach also applies to the radio network and its backhaul/fronthaul. China Telecom expects to require 25Gbps transport links to each of its 5G base stations, which will act as fronthaul connections to an intermediate aggregation point, which in turn will connect to the core. Those midhaul and long range backhaul links will be up to 100Gbps in capacity, it estimates.
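Those figures can be related with a simple back-of-envelope calculation; the number of sites per aggregation point and the statistical multiplexing factor below are purely illustrative assumptions, not China Telecom planning parameters.

```python
# Back-of-envelope only: dimensioning a midhaul link that aggregates several
# 25Gbps fronthaul connections. Site count and multiplexing factor are
# illustrative assumptions, not figures from China Telecom.
FRONTHAUL_GBPS = 25          # per 5G base station, as cited above
SITES_PER_AGGREGATION = 8    # hypothetical number of sites per aggregation point
MUX_FACTOR = 0.5             # hypothetical statistical multiplexing gain

peak_demand = FRONTHAUL_GBPS * SITES_PER_AGGREGATION   # 200 Gbps if all sites peak together
dimensioned = peak_demand * MUX_FACTOR                  # ~100 Gbps after multiplexing
print(f"peak {peak_demand} Gbps, dimensioned midhaul ~{dimensioned:.0f} Gbps")
```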

To support this, China Telecom is currently planning, and starting to deploy, a new optical transport network, which will be designed in tandem with its edge compute and 5G roll-outs. The optical deployment will fall into three categories – backbone, metro and access – and will set ambitious targets for capacity, low latency and ultra-precise synchronization, to enable coordination across multiple base stations.

This will require a new version of some key technologies, defined with 5G specifically in mind, says China Telecom. The telco has proposed a specification called M-OTN to the ITU-T transport group, seeking to kickstart such a standard, which would enable small-sized OTN devices and 1-millisecond latency.

As in the access network, there is a range of work taking place across traditional standards bodies and new, more open initiatives, such as the Next Generation Optical Transport Network Forum (NGOF), which was formed in China in late 2017 and includes China Telecom, China Unicom, Broadcom, Huawei, Lumentum, Nokia and ZTE, among others.

China Telecom is also pushing for enhancements to ROADM (reconfigurable optical add-drop multiplexer) to double the degrees of switching from 20 to 40, in order to support large-scale, fully meshed backbone networks.

On the other side of the world, AT&T has also been leading the thinking on the new IT and fiber infrastructure requirements of an edge-oriented 5G strategy. “We’re moving network access to cloud computation, but we’re keeping it physically close to our users. Rather than travel over wireless connections to data centers hundreds or thousands of miles away, we’ll propel this data across super-responsive 5G networks to computers just a few miles away,” Melissa Arnoldi, president of technology development at AT&T, wrote in a recent blog post.

Some of this work is being incorporated into an open source project called Akraino, hosted by the Linux Foundation – like other AT&T ventures into the open source world, this aims to accelerate the innovations that telcos require while ensuring the next generation of technologies is operator-driven, and specifically, driven by AT&T’s needs.

Akraino aims to define an open source software stack to support carrier-class, high availability cloud services optimized for edge computing. Its first code will be released in the second quarter this year. “Akraino Edge Stack, coupled with ONAP and OpenStack, will help to accelerate progress towards development of next generation, network-based edge services, fueling a new ecosystem of applications for 5G and IoT,” said Mazin Gilbert, VP of advanced technology at AT&T Labs. The affordability of white box routers will also make edge compute deployments more flexible and practical, says the operator (see separate item).

AT&T’s first application priority for the edge is to improve augmented reality and virtual reality services, and enable consumers to use them without having to invest in their own bulky equipment. It has a project in its edge compute test zone in Palo Alto, together with GridRaster, to test mobile AR/VR experiences.

The AT&T Foundry in Palo Alto hosts a test zone for developers of emerging applications, such as AR/VR and self-driving cars, which could help justify the operator’s 5G/edge computing platform.

One of the key discussion points in the mobile edge is the balance of power between the mobile operators with their connectivity and distributed sites, and the cloud providers with their scalable infrastructure and software. The best hope is for an accommodation between the two, with each drawing on the other’s strengths, and Amazon AWS has shown itself willing to form such an alliance with its Greengrass technology. Greengrass is an edge cloud development platform particularly focused on the Internet of Things, and last year it was involved in several trials and demonstrations in conjunction with ETSI MEC (Multi-access Edge Computing) technology, courtesy of Nokia and Saguna.

That highlighted the potential for cloud giants’ developer ecosystems, microservices platforms and infrastructure to be combined with MNOs’ networks to support new services and business models. Now Telefónica is showing the way forward, discussing its talks with Amazon about a possible edge compute service powered by the telco’s central offices (COs).

Such an alliance, the operator told last week’s Zero Touch & Carrier Automation Congress in Madrid, could see Telefónica facilities enabling the Greengrass edge compute platform, allowing customers to run cloud computing applications on devices instead of data centers.
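To give a flavour of what ‘running cloud computing applications on devices’ means in Greengrass terms, the minimal sketch below shows a Lambda function deployed to a Greengrass core, processing a message locally and publishing the result over AWS IoT; the topic name and the anomaly check are hypothetical, and nothing here reflects the specifics of the Telefónica discussions.

```python
# A minimal sketch of an AWS IoT Greengrass (v1) Lambda function running at
# the edge. The topic name and processing logic are hypothetical examples.
import json

import greengrasssdk

# Client for publishing to AWS IoT topics from the Greengrass core device.
iot_client = greengrasssdk.client("iot-data")


def function_handler(event, context):
    # 'event' is the payload delivered by a local subscription, for example
    # from a sensor gateway attached to the Greengrass core.
    reading = event.get("value", 0)

    # Process locally, so the decision does not depend on a round trip to a
    # distant data center.
    result = {"value": reading, "anomaly": reading > 100}

    # Publish the outcome; subscriptions decide whether it stays local or is
    # relayed to the cloud.
    iot_client.publish(
        topic="edge/demo/results",  # hypothetical topic
        payload=json.dumps(result),
    )
    return result
```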

Such a service could be boosted by Telefónica’s OnLife program to upgrade its COs to make them more independent of its transport networks and data centers, so expanding their usefulness for a range of edge, cloud and IoT services, while also offloading processing from the central data facilities as the volume of network data and the number of devices rise. The telco operates about 10,000 COs in Spain, and sees great potential to open them up to third parties.

“If we allow third party players to put their solutions at the edge and guarantee latency, the growth is going to be exponential in the services we provide,” Alfonso Carillo Aspiazu, the chief architect of OnLife, told the conference. OnLife could help Telefónica evolve into a ‘platform-as-a-service’ provider, taking advantage of its investment in upgrading and automating the infrastructure it requires anyway for its new fixed and mobile networks.

Telefónica has been using open source blueprints developed by the Open Networking Foundation (ONF) under the CORD (Central Office Re-architected as a Datacenter) program to design its updated COs. This involves replacing proprietary equipment with white boxes for switching, routing and IT, and investing in software. It is also using a cloud management platform called OpenNebula.

Juan Carlos Garcia, Telefónica’s director of technology and architecture, revealed that the operator has removed about 400,000 network elements from its Spanish COs since the start of 2014.

Other operators have similar programs to replace old-style COs. Telstra is selling off half its COs and converting the rest into edge data centers. Deutsche Telekom’s Access 4.0 project will replace its COs with CORD-based centers. But most are focused primarily on cost reduction, while Telefónica is pointing the way to a strategy which could, additionally, support new service revenues, especially when combined with 5G and fiber connectivity, and opened up to powerful partners like AWS.

As CORD indicates, operator-initiated open projects will be important in broadening the ecosystem and speeding progress on telco edge platforms, but there is a high risk that they will not result in fully commercially robust and deployable technologies. For that, most operators expect to continue to work within conventional standards organizations, in parallel with the newer approaches.

One of those established bodies, of course, is ETSI, and its MEC group recently produced its latest white papers – but also acknowledged the need to work with other groups, including open source.

One of the most important open source bodies is the ONF, which hosts several operator-driven, edge-focused activities, such as CORD. Earlier this month, eight operators announced an initiative under ONF auspices to kickstart commercial SDN systems at the edge.

As cloud and network resources are distributed to the edge, SDN architectures will need to adapt to work optimally in these topologies. All the operators – AT&T, China Unicom, Comcast, Deutsche Telekom, Google, NTT, Telefónica and Türk Telekom – are leading members of the ONF. Under the ONF’s auspices, they aim to create common reference designs to help integrate SDN elements from multiple suppliers when creating an edge-based system. As with other operator-driven initiatives, the members will put pressure on the vendors to support standard designs – and therefore encourage competition and lower prices – by insisting they will only buy equipment which conforms.

“I think it is probably one of the first times that network operators have come together and shown resolve, as well as capacity, to invest in a technology beyond writing papers and doing specifications,” Patrick Lopez, VP of networks innovation for Telefónica, told LightReading.

The group has set up an Open Source Supply Chain, which will include a select set of partners in specific categories to help develop the reference designs. These could include OEMs, systems integrators, VNF vendors, platform software providers, ODMs and chip suppliers. Four OEMs are already in place, and they seem rather traditional amidst all the talk of a brand new vendor ecosystem – Fujitsu, Huawei and Samsung, plus an unnamed fourth. Also, Radware has signed up as an integrator partner, Ciena as a platform software partner and Intel in the chip category.

The reference designs will be instantiated in Exemplar Platforms, which will enable them to be further developed and prepared for proofs of concept and trials. The idea is to speed time to deployment by reducing the wide variety of specifications coming from the supply chain, by being more prescriptive upfront.

There is particular focus on the access network because it is the most complex and expensive, and the one which is most challenging when it comes to introducing radical new virtualized and cloud-like architectures.

To support its new strategy, the ONF is setting up a new governance model that will include a Technical Leadership Team, Reference Design Teams and a Supplier Advisory Team to augment its existing structure.

Lopez said: “Open source development is not a spectator sport. Reading documentation and reading our blueprints and our specifications is not going to be enough. Vendors that want to participate in this new value chain and want to be part of our procurement process will need to engage in developing solutions and be part of this network lifecycle.”

Many operators – and suppliers – believe these open initiatives are putting the MNOs back in the driving seat when it comes to mobile and cloud networks. Traditional standards work, especially in 3GPP, has been mainly dominated by engineers from supplier companies, but in the ONF, Telecom Infra Project and Linux Foundation, operators lead the way. That does not mean they can turn their backs on the conventional processes though. Open source boosts creativity, innovation and speed of evolution, but risks fragmentation. A base of entirely uniform and agreed standards remains essential, and operators also know that open source technologies, to be deployed at full telco-grade levels of performance, require considerable investment in skills or consulting – as many early OpenStack deployments have shown.

To continue to assert its place in the mobile edge value chain – alongside open initiatives – ETSI is focusing on the technical and use case implications of MEC’s intersection with 5G. This is a powerful one, since both technologies are at their most valuable when engaged in low latency services. At Mobile World Congress last month, ETSI MEC published two white papers to reinforce this argument, called ‘Cloud RAN and MEC: a perfect pairing’ and ‘MEC deployments in 4G and evolution towards 5G’.

These argue that MEC and virtualization will play a vital role in a smooth transition from 4G to a 5G platform with a dense edge, and position the MNO for a pivotal role in the value chain, courtesy of its sites and its potential to integrate telecoms and cloud infrastructure.

“The Cloud-RAN and MEC white paper addresses the benefits of, and challenges met by, a colocation between cloud radio access networks and multi-access edge computing,” said ETSI. The MEC ISG’s chair, Alex Reznik, added: “Increasingly, the industry is looking for guidance on how to put the overall solution together. As the first Standards Developing Organization to address the challenges of MEC, ETSI brings the world’s leading experts on MEC to the table. The ETSI ISG MEC can make a significant impact on the effort to make 5G a reality and we invite the industry to take advantage of everything we have to offer.”

Some MEC announcements at MWC 2018:

  • Quanta Cloud Technology demonstrated its ‘Central Office 2.0’ concept, based on its NFVi, and is developing solutions with Intel and Red Hat focused on low latency services.
  • 5G Berlin is a city innovation cluster whose first project will be to create a 5G testbed using 5G New Radio and MEC to manage the city’s street lamps on mmWave spectrum. Virtualized packet core specialist Core Network Dynamics will contribute the 4G/5G core and adapt it to advanced use cases such as multi-access connectivity with non-3GPP radios such as LiFi (Light Fidelity).
  • ADVA coordinated a demonstration with multiple partners to show how virtualized packet core and RAN components can be hosted at the network edge to support low latency applications and network slicing. Partners were BT, 6WIND, Accelleran, Athonet, Lumina Networks, Mavenir and Spirent.
  • Another cooperation between MEC and AWS Greengrass was seen in a proof of concept demonstrated with Vodafone and Saguna. This enabled driver monitoring, with a live camera feed streamed over a Vodafone LTE network and, within the RAN, Saguna’s MEC platform hosting the AWS Greengrass AI video analytics application.
  • Qwilt and start-up Athonet demonstrated live mobile edge video offload using Qwilt’s Open Edge Cloud content delivery network (CDN), running within the Athonet MEC Local Breakout environment. As well as video streaming for live and on-demand OTT services, there was also native 3GPP-compliant support for lawful intercept, security, billing and other functions.
  • Taiwanese hi-tech lab ITRI showed the results of a MEC PoC, running on Advantech’s Packetarium XLc micro data center and Wind River’s Titanium Cloud.
  • Virtuosys showed off its Edge Application platform, which allows management of services at the edge with a white label app store coupled with an open development platform. The distributed compute platform is meshed, so that coverage and compute power increase for the whole system each time an Edge Server is added.