MWC19: Cloud-native core and vRAN: the industry waits for these catalysts of change

The biggest challenge for the 5G business case in the early years of the technology is that the first wave of standards are out of step with what many operators want to achieve. If 5G really is to become a super-flexible, super-economical platform supporting a huge array of industries and use cases, two of the critical success factors will be cloud-native technologies and shared spectrum. Neither of these is currently fully available for commercial deployment.

As we have argued before, this means a major dilemma for operators which feel pressurized – by their governments, customers, competitors or their own business plans – to launch 5G services early (2019-2021). They are having to deploy 5G conventionally, and so missing out on its cost and revenue potential in the first few years – plus, they will face the expense and disruption of migrating to a more modern platform in future, at a stage when their rivals may be coming up behind and even leapfrogging them by leveraging the newest, software-driven systems.

There are always these trade-offs between seizing early mover advantage and waiting for fully mature commercial platforms, but it is far more acute in the 5G era because so many operators had assumed that cloud-native cores, virtualized RANs and flexible spectrum would be available along with the first 5G networks. Instead, early movers are largely deploying physical RANs and virtualized, but not cloud-native cores. The latter are designed from scratch for the cloud, supporting modern techniques like containers, rather than just virtualizing existing physical functions as virtual machines, and so promise far greater performance, flexibility and programmability.

A few companies are adopting vRAN and cloud-native core from the get-go, but these are mainly greenfield providers like Japan’s Rakuten, or traditional operators deploying a 5G system for a specific, and new, use case, like Verizon’s fixed wireless.

Also, operators are having to spend large sums on exclusive spectrum licences again. Most of them support this system, because it keeps new competitors from getting their own airwaves (see item below about Vodafone), but their enthusiasm is starting to wane as their debts mount and high spectrum costs just add to the pain. From India to Italy, pressurized and indebted telcos are looking to use more shared spectrum to reduce their reliance on over-priced licences.

For 5G to support a wider ecosystem of industries and deployers – as it must do, to justify its existence – there must be more attention to alternative spectrum options. Shared spectrum can benefit existing and new deployers and encourage the rising trend of enterprise ‘sub-nets’ – private or semi-private cellular networks built and managed locally, often with their own core and edge node, and optimized for the particular enterprise’s requirements.

But this trend is taking place in 4G (most notably in the USA’s CBRS general access spectrum), because shared and unlicensed 5G radios will not be standardized until Release 16, due to be frozen late this year, and so will not be in commercial systems until the early 2020s. There is some movement to millimeter wave bands, which are supported in Release 15, and should improve the availability of spectrum capacity, and therefore reduce the cost of all airwaves (except, perhaps, the sub-1 GHz bands which are essential to wide coverage). But most operators, and many regulators, regard mmWave as experimental for now, and expect to wait well into the 2020s, before they deploy mainstream systems in these high frequencies.

Some of these decision points were covered in the first part of our two-issue special edition for Mobile World Congress (MWC) 2019. Others are picked up in today’s special report, which argues that the 5G technologies with the greatest potential to revolutionize the cellular model are evolving rapidly now, but still require a couple of years to be trusted, affordable and deployable by the majority of MNOs. In the meantime, there will be a difficult hiatus for vendors and some operators, as they wait to be able to roll out ‘true 5G’.

Virtualization has made more progress in the mobile core, mainly the 4G evolved packet core (EPC), than in most other areas of the network. However, there are drawbacks to initial virtualized EPC deployments, which mean some operators believe they should have waited until technologies were more advanced before taking the plunge.

Those drawbacks, according to one European operator interviewed at Mobile World Congress, are:

  • The early vEPCs are virtualized but not cloud-native, which means the physical functions have been placed into virtual machines, but often without being significantly reworked to suit the needs of digital services and future 5G.
  • That means another wave of migration will be required to move to cloud-native virtual network functions (VNFs), devised specifically for the cloud environment and to support new technologies like containers.
  • Many vEPCs have been adopted for a specific, discrete application, or only certain elements, like the MME, have been virtualized, so there is still plenty of work to be done to create a full platform.
  • The EPC has often been virtualized in isolation, whereas operators are now starting to think about end-to-end platforms with commonality between the RAN, core and transport.
  • Most operators have embarked on virtualization of their mobile and fixed cores separately, whereas many of the biggest benefits of a new cloud-native core network will be to support convergence for fixed/mobile operators.
  • Some MNOs are keen to use open source platforms to drive cost-efficiency and innovation into their virtualized networks. But open source solutions are only just maturing, within initiatives such as the Open Networking Foundation (ONF) and Telecom Infra Project (TIP).

“The enablers have not been there for a truly modern virtualized core,” said the operator. “They are just emerging now, so 2019 or 2020 will be a far better time to start work than 2017. The vEPC projects to date have largely been dress rehearsals.”

Two open source projects set the stage for converged 5G cores

The ONF – which is hosted by the Linux Foundation – announced two projects which should push the telco core market closer to the situation envisaged by our off-the-record interviewee (whose comments were echoed by many others at MWC). These are:

  • the Open Mobile Evolved Core (OMEC), a commercial, open source implementation of a 4G EPC, with migration roadmaps for 5G
  • the Converged Multi-access and Core (COMAC) initiative, which addresses the convergence of 5G and fixed broadband networks, and the ability to deliver services over wireline and wireless connections, regardless of access technology.

The OMEC is the latest project under the broadening umbrella of CORD (Central Office Re-architected as a Datacenter), one of the most significant of the ONF’s activities for the converged network future, since it addresses the virtualization of the whole network, from CPE to data center and across home, enterprise and mobile scenarios.

OMEC aims to deliver the specs for an open source mobile core which will have the same performance levels and scalability as a closed platform, whether virtualized or not. The project has been initiated by Sprint, which in the past year has been taking a more prominent role in open source projects, often focusing on groups like CORD and Telecom Infra Project (TIP), in which AT&T is less prominent than in the Linux Foundation projects it founded itself, such as ONAP and ORAN. Sprint did join the ORAN Alliance and the Linux Foundation Networking Fund (which houses several of the AT&T initiatives) last May, but is being more proactive in other groupings. It is a co-founder and co-chair, with Vodafone, of the new TIP 5G NR working group, and now is spearheading the CORD OMEC project.

The overall purpose of CORD is to define open ways to harness NFV and SDN to deliver cloud economics to the telco’s traditional networks and locations (including the central offices and, eventually, the cell sites). OMEC will be based on an NFV architecture that is optimized for general purpose hardware, initially based on Intel processors (though, thanks to Huawei, Marvell and others, ARM-based solutions are likely to play a role in future COTS platforms too).

Sprint says the solution has already been tested at scale, with a data plane based on the Linux Foundation’s DPDK (Data Plane Development Kit), which is designed to support large subscriber numbers. OMEC conforms to Release 13 of the 3GPP LTE standards and initially supports connectivity, billing and charging capabilities, as well as lightweight implementations for IoT or edge applications.

The OMEC project is a logical result of Sprint’s multiyear inhouse project to develop a virtualized EPC. It placed this into open source in mid-2017 and will now contribute its work – entitled ‘Clean CUPS Core for Packet Optimization’ (C3PO) – as the seed code for OMEC. Sprint’s platform, codeveloped with Mavenir and Metaswitch, implements the control/user plane separation (CUPS) which will be an essential element of the 5G core, but is also included in the final set of LTE standards, Release 14.

The result of a four-year collaboration with Intel, C3PO is designed to ease bottlenecks in mobile core packet performance by independently scaling the data plane and control plane. It collapses multiple EPC and SGi LAN elements into a single data plane instance. Intel Labs built the core control plane and data plane virtualized EPC applications, and Sprint developed the SDN controller enhancements.
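
The benefit of the CUPS split can be illustrated with a toy model: because signalling load and user traffic grow on different axes, each plane resizes its own worker pool independently. This is a minimal Python sketch, not C3PO code; the pool names, units and capacity figures are all invented for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class PlanePool:
    """A pool of identical worker instances for one network plane."""
    name: str
    instances: int
    capacity_per_instance: float  # load units one instance can absorb

    def scale_for(self, load: float) -> int:
        """Resize the pool so total capacity covers the offered load."""
        self.instances = max(1, math.ceil(load / self.capacity_per_instance))
        return self.instances

# With CUPS, the control plane (attach and bearer signalling) and the
# user plane (packet forwarding) are sized on independent axes.
control = PlanePool("control-plane", 1, capacity_per_instance=10_000)  # msgs/s
user = PlanePool("user-plane", 1, capacity_per_instance=40.0)          # Gbps

# An IoT-heavy site: heavy signalling, little data.
control.scale_for(55_000)  # grows to 6 instances
user.scale_for(20)         # stays at 1 instance

# A video-heavy site shows the opposite profile.
control.scale_for(8_000)   # back to 1 instance
user.scale_for(200)        # grows to 5 instances
```

In a coupled, monolithic EPC, the IoT-heavy site would force the operator to scale the whole gateway for signalling it barely uses; the split lets each dimension be paid for separately.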

Ron Marquardt, VP of technology at Sprint, said that the operator planned to run field trials of OMEC for edge applications this year, while working with the ONF to expand the system and build a broad community.

This is yet another example of an operator abandoning the secretive approach of the past, which saw MNOs investing in inhouse technology to gain a performance edge over rivals. Instead, some of the R&D-heavy operators are keen to go down the open source route, to increase their ability to drive the technology agenda; to reduce their own cost and time to market once a base solution has been created; to harness cloud techniques more easily; and to set de facto standards by attracting a broad base of developers and adopters, even among rivals.

Sprint has been working for at least two years on replacing single-purpose, bare metal platforms with a unified NFV infrastructure (NFVi), initially running its packet core and IMS capabilities as virtual network functions (VNFs). It has also been deploying common infrastructure data centers across its network and has a roadmap to expand them, following a ‘cap and grow’ approach to commercializing NFV. This means capping any expansion of legacy core network hardware while adding new functionality and capacity on the virtualized platform. The first use cases were virtualized SMS and MMS messaging traffic.

Like AT&T, Sprint sees virtualization as a way to broaden its supplier ecosystem. For its virtualized EPC, Metaswitch supplied the SBC (session border controller), the CSCF (call session control function) and the BGCF (breakout gateway control function). Mavenir is providing the TAS (telephony application server), MRF (media resource function) and pDRA (policy Diameter routing agent).

Intel, whose leading position in cloud and data center processors means it can only benefit from the virtualization of networks, has also contributed code to OMEC. Richard Uhlig, director of Intel Labs, said: “OMEC provides the open source community with a next generation disaggregated, scalable and virtualized mobile core, optimized using DPDK on Intel Xeon processors. This will help accelerate the transformation of networks to be ready for the exciting transition to 5G.”

“OMEC includes seven co-repositories for the comprehensive set of functionality called for in the 3GPP standards for EPC functionality,” said the ONF’s VP of marketing and ecosystem, Timon Sloane. “It’s optimized for IoT and 5G. It’s virtualized, disaggregated and then distributable, so you can place the pieces wherever they make the most sense. Very explicitly we’ve disaggregated the user plane and the control plane. This then allows the components to be placed where best suited anywhere in the network all the way from the access to perhaps even a public cloud or telco cloud or anywhere in between.”

The OMEC work will be one project feeding into the more specifically 5G-oriented COMAC, which aims to support seamless user experience and subscriber management regardless of access technology, and even as users move between different connections. Treating all access variants the same is essential to network slicing – for many operators, the main way that advanced 5G will be commercially justified. To address the widest variety of connectivity needs in the most optimized and granular way – across different industries, use cases and geographies – slices will need to aggregate different types of access, core and transport resources, as available or optimal for a given application, in a dynamic way.
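
That dynamic aggregation of access and core resources into a slice can be sketched as a simple constraint match. The resource names, latency figures and selection logic below are hypothetical, intended only to illustrate composing a slice from whatever resources meet an application’s requirements, regardless of access technology.

```python
# A toy model of composing a network slice from pooled resources.
# All resource names and latency attributes are invented for illustration.

RESOURCES = {
    "access": [
        {"id": "macro-4g", "latency_ms": 30},
        {"id": "small-cell-5g", "latency_ms": 8},
        {"id": "fixed-pon", "latency_ms": 5},
    ],
    "core": [
        {"id": "central-core", "latency_ms": 20},
        {"id": "edge-core", "latency_ms": 4},
    ],
}

def compose_slice(max_latency_ms: float) -> dict:
    """Pick the first access + core pair whose summed latency fits the SLA."""
    for access in RESOURCES["access"]:
        for core in RESOURCES["core"]:
            if access["latency_ms"] + core["latency_ms"] <= max_latency_ms:
                return {"access": access["id"], "core": core["id"]}
    raise ValueError("no resource combination meets the latency target")

# A low-latency industrial slice is forced onto 5G access plus the edge
# core; a best-effort broadband slice accepts the first (macro) pairing.
compose_slice(15)  # {"access": "small-cell-5g", "core": "edge-core"}
compose_slice(60)  # {"access": "macro-4g", "core": "central-core"}
```

The point of the sketch is that the slice, not the subscriber, is bound to an access type: the same request logic serves fixed, 4G or 5G resources interchangeably.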

The first reference design was launched just before MWC, supported by the ubiquitous AT&T, plus China Unicom, Deutsche Telekom, Google and Türk Telekom. The initial vendor supporters are Adtran, Intel, Radisys, gslab and HCL.

COMAC will leverage software-defined networking (SDN) and cloud to create converged access and converged core capabilities on a single platform.

The access architecture will be based on disaggregated RAN, core and Broadband Network Gateway (BNG) components.

It will draw on the work of other ONF projects (COMAC is its fifth reference design). These include OMEC, and fellow Linux Foundation project the ORAN Alliance, for the disaggregated RAN. And it uses SDN from the ONF’s SDN-Enabled Broadband Access reference design as well as its Virtual Optical Line Termination Hardware Abstraction.

Elements of the RAN, core and gateway will be redistributed and aggregated into a unified access layer, creating a control plane powered by SDN, and a user plane powered by P4, the open language which allows for the programming of packet forwarding planes, to support flexible and customized services.

The resulting stack will manage high speed subscriber traffic regardless of a user’s access link. COMAC will be configurable for mobile 4G and 5G as well as PON, WiFi, Docsis cable and fixed wireless.

The converged core will support unified subscriber management, combining the functions most commonly used in mobile or broadband networks – 4G’s Mobility Management Entity (MME), the mobile Home Subscriber Server (HSS), and broadband’s BNG-Authentication.

By disaggregating all the elements and running them as microservices, the ONF says operators will be able to place those elements dynamically where they are most needed, on access, edge, core or public clouds.

“5G technology is a profound technology shift taking place in parallel with a massive upgrade in broadband networks,” said Oguz Sunay, chief architect for mobile at the ONF. “COMAC is on a path to become a pivotal piece of the edge network for operators, playing a very important role in the realization of next generation infrastructure where enhanced mobile connectivity must be paired with broadband to support new use cases and the development of next generation services.”

“Connectivity for users must evolve to keep pace with the rapid rate of cloud innovation,” added Ankur Jain, distinguished engineer at Google. “COMAC will enable microservices to run where they are best suited, whether in the public cloud or in close proximity to users.”

“COMAC is really about taking a fresh approach to building a unified platform,” Sloane told FierceTelecom. “To do that, first we disaggregated everything. We disaggregated mobile and broadband components. Then we recombined them in new and innovative ways to create common elements where they can serve both mobile and broadband. Things like user authentication, charging, billing, and security are all opportunities, the low hanging fruit, for being able to do better levels of integration.”

“With all of this, we’re creating a cloud-native platform. A platform for the access and edge cloud of the future,” he concluded. “Today mobile and broadband are separate networks. In this architecture, they both come into a common converged user plane that can handle traffic from either space. Then they are being controlled by common open source infrastructure. That includes RAN control. You’ll be able to do precise control for the whole spectrum and RAN and radio usage.”

The ONF was not offering a timeline for commercial deployment of COMAC but it has announced an ‘exemplar platform’, designed to make it easy to download, modify, trial and deploy components of the reference designs in order to speed up adoption and deployment.

The UK’s operators push ahead to cloud-native cores

Cloud-native technologies and full virtualization have been pushed most aggressively in Japan, China, South Korea and the USA. But the UK is emerging as a frontrunner in the cloud-native core too, with 3UK starting to trial the technology, and BT kicking off the procurement process for its own platform.

The Hutchison-owned fourth MNO says it is targeting security, scalability and cost benefits, above all, from its cloud-native core, which it has started to test with supplier Nokia. It is using the 5G-ready core to support a trial network for staff and hopes to extend the tests to selected consumers later this year, with a view to commercial deployment in 2020.

“In order to use the core network, we have to ensure that all of our mobile sites are connected to our new core. We achieved this milestone in December 2018.  This means all of our customers will be able to enjoy the benefits of the new core network when it goes live,” said a spokesperson.

Meanwhile, the move to a next generation core may be hastened at the UK’s largest MNO, EE, because of a policy of its parent, BT, not to have Huawei equipment in its network cores. Although BT has been a vocal supporter of Huawei kit in the 5G RAN, it said last year that it would replace EE’s Huawei core, denying this was a result of government pressure amid the recent US-inspired suspicions about the Chinese vendor’s potential spyware. It said that the change at EE was part of overall group policy.

It has drawn up blueprints for a next generation, cloud-native, 5G-ready core which will comply with forthcoming 3GPP Release 16 standards including support for network slicing.

EE’s director of mobility and analytics, Dave Salam, said the operator had started its procurement process for the new core. But he does not regret delaying implementation until at least 2020, and until Release 16 is available, since there will be a wider array of functionality that is commercially available and mature, and the integration costs of a very early, very heavily customized solution will be avoided. “AT&T has done this quite early but I’m not sure anyone has saved money out of that approach,” he said in an interview with LightReading. “We chose to delay for Release 16 to get the right functionality and maturity and have the technology in the right way to stitch elements together.”

Salam hopes to build a fully open and programmable network cloud using container and microservices technologies, eventually to replace an older-style virtualization platform based on VMware systems. This first generation core will support the first 5G services launch next year, using 5G NR Non-Standalone (which runs with a 4G core).

“We are using commercial hardware from standard vendors,” Salam said. “It is still virtualized but not full cloud-native and not Kubernetes or containerized.”

Then the full cloud-native core will support 5G standalone services and a far richer array of services, including the potential for end-to-end slicing – which BT has trialled heavily – at a later stage. That more advanced Release 16 network will deliver many of the promises of 5G, which are not fully supported in Release 15 – very low latency, and tight integration with edge computing, are among these.

Tom Bennett, EE’s director of network products and services, has said that BT would need 1,000 edge computing centers to cover the UK, for the type of services the telco envisages, such as enhanced gaming and virtual reality. BT/EE has reportedly ruled Huawei out of its edge plans, and is now conducting trials with Nokia and vEPC specialist Mavenir.

Vodafone and Orange seek to drive a common platform for NFVi

After a storming start, NFV (Network Functions Virtualization) has seen a far slower pace of commercial deployment than was expected a couple of years ago. There are various reasons for this, a major one being the fragmentation of NFVi (infrastructure) environments that exist, which means common deployment and testing processes for virtual network functions (VNFs) have not emerged, despite the efforts of ETSI and open source groups.

As in other areas of the virtualized network, such as orchestration, large operators are seizing the initiative in trying to drive common platforms. Orange and Vodafone are the latest examples, announcing at Mobile World Congress that they are putting the final touches on plans for a common approach to NFVi. That will help to ease VNF testing across different environments, and the two instigators would hope to get support from other operators, and so drive a broad industry consensus.

They stress that they are not trying to create a new standard for NFVi, but rather to drive consensus among operators on a common approach to virtualized architecture. So the two operators’ blueprint does not provide detailed specifications, but rather three outline configurations for three types of VNFs – network-intensive applications; compute-intensive applications; and ‘nominal cases’ (such as general IT workloads).

Each of the three categories will require different elements in the NFVi, such as accelerators and Smart NICs for network-intensive apps; and specialized chips such as FPGAs and graphical processor units for compute-intensive tasks. It is important that VNFs can access such resources in a common way.
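
That common access model can be sketched as a capability match: each VNF category declares the hardware features it needs, and only hosts exposing those features are candidates for placement. The three categories are the ones in the Orange/Vodafone blueprint; the feature and host names below are invented for illustration.

```python
# Matching the three VNF categories from the Orange/Vodafone blueprint
# to NFVi host capabilities. Feature and host names are illustrative.

CATEGORY_NEEDS = {
    "network-intensive": {"smart-nic"},
    "compute-intensive": {"fpga", "gpu"},
    "nominal": set(),  # general IT workloads need no special hardware
}

def hosts_for(category, hosts):
    """Return hosts exposing every feature the VNF category requires."""
    needed = CATEGORY_NEEDS[category]
    return [name for name, features in hosts.items() if needed <= features]

hosts = {
    "pod-a": {"smart-nic"},
    "pod-b": {"fpga", "gpu", "smart-nic"},
    "pod-c": set(),
}

hosts_for("network-intensive", hosts)  # ["pod-a", "pod-b"]
hosts_for("compute-intensive", hosts)  # ["pod-b"]
hosts_for("nominal", hosts)            # all three pods qualify
```

A VNF written against the category abstraction, rather than one operator’s bespoke NFVi, could in principle be placed on any conformant host without retesting.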

The broad issue that the two operators are trying to address is that each telco has designed its NFVi slightly differently. They have put together the infrastructure required to deploy VNFs – compute, storage and networking resources plus a hypervisor – in their own ways. In some cases, including Orange, different divisions within one operator have deployed different configurations. And while some telcos have stuck closely to the ETSI NFV architecture, others have drifted further from it – Orange and Vodafone include a virtual infrastructure manager (VIM), based on OpenStack, in their NFVi definition, while ETSI considers the VIM to be separate, and to be part of the MANO (management and orchestration) layer.

The result of all this fragmentation is that VNF providers have to test their software against each individual operator’s NFVi, so VNFs are not truly portable or interoperable across different platforms. Development and test cycles are hugely lengthened because changes and fixes are operator-specific in many cases.
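
The cost of that fragmentation is easy to quantify in rough terms: the number of validation campaigns grows with the product of VNFs and NFVi variants, while a common platform collapses it to one run per VNF. The figures below are purely illustrative, not industry data.

```python
def validation_runs(vnfs: int, nfvi_configs: int) -> int:
    """Each VNF product must be validated against each NFVi variant."""
    return vnfs * nfvi_configs

# Illustrative: 50 VNF products against 60 operator-specific NFVi
# configurations, versus a single agreed reference configuration.
fragmented = validation_runs(50, 60)  # 3000 test campaigns
common = validation_runs(50, 1)       # 50 test campaigns
```
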

Markus Wuepping, head of Vodafone Group’s Cloud Center of Excellence, told LightReading: “There is a big risk of having a different NFVi configuration for every VNF deployment. You’d end up with a multi-silo deployment in our networks, which defeats the purpose of moving to cloud and leveraging the same shared infrastructure for multiple VNFs.”

At last fall’s Open Networking Summit, the SVP of Orange Labs Networks, Emmanuel Delpon, told the audience: “The issue is that there are today too many configurations and parameters proposed by the ecosystem of vendors. There is no one NFVi standard, there is no one standard for orchestration. The observation is that the disaggregation is leading to a fragmented market, which is bad for all.”

He summed up: “If we fail collectively to deliver one standard, I believe we will have an industry disaster. Some actors will arrive at a kind of de facto standardization; they will try and bring their standards, and I do not believe all the industry will benefit.”

As Orange and Vodafone split the VIM from the orchestrator, this means that close interworking between common platforms for MANO and for NFVi will be important. Orange has been a major cheerleader for ONAP to be the universal MANO (rather than a rival system, ETSI’s Open Source MANO). And it also wants the industry to get behind another open source initiative hosted by the Linux Foundation, Open Platform for NFV (OPNFV), which addresses NFVi. But currently, OPNFV has about 60 different NFVi configurations, according to Orange, to reflect inputs from many vendors and operators, and in theory, a VNF supplier might have to test against every one.

The next major release in ONAP’s twice-yearly cycle will be Dublin, which will include the results of closer cooperation with OPNFV. The two organizations are doing joint verification and compliance, bringing ONAP under the OPNFV Verification Program (OVP) to enable easier testing for a full range of virtualization elements from VNFs to the NFVi.

OPNFV’s latest software release, Gambia, came out late last year. It focuses on continuous delivery, cloud-native network functions (CNFs), testing, carrier-grade NFVi features and upstream project integration.

According to LightReading, the GSMA is preparing an NFVi initiative based on MNO requirements, though Vincent Danno, a director in Orange’s Strategy, Architecture and Standardization unit, commented: “The GSMA is a good place to discuss with other service providers that share the same constraints as us, but ultimately the topic of reference NFVi implementations will need to operate within OPNFV or another Linux Foundation Networking project.”

This is just the latest example of how the initiative is passing from ETSI’s NFV working groups, which laid out the original architectures, to open source activities, particularly those hosted by the Linux Foundation.

Italy proves a breakthrough market for vRAN pioneers

In Barcelona, there was plenty of excitement about Altiostar, the virtualized RAN start-up which has scored a big win as part of Rakuten’s high profile contract for a cloud-native 4G network in Japan. But there are other vRAN specialists emerging to shake up the traditional mobile network ecosystem, and a particularly interesting one to us was JMA Wireless. JMA has initially focused on stadium and large venue networks as being one of the sectors which is already ripe for a virtualized, dense network that can be shared by multiple operators if required. Having acquired millimeter wave specialist Phazr at the end of last year, it is broadening its horizons and looking towards city and other markets and the start of 5G.

Such start-ups are important to the success of vRAN – which remains highly challenging for operators, and to a large extent a missing piece of the 5G jigsaw. By proving it can be done, specialist providers push the established network vendors into action (there are obvious conflicts of interest for the big four, in replacing expensive proprietary boxes with virtual network functions running on high performance, but non-proprietary, servers).

Some major operators are incubating small companies in this field, in a bid to kick their normal vendors into action and to facilitate a more open RAN ecosystem around the future, disaggregated platform. Orange’s incubation of Amarisoft, within the Facebook Telecom Infra Project, is one example, and the French telco has said publicly that such companies could “replace Ericsson”.

Of course, it is more likely that Ericsson, or another major OEM, will acquire these start-ups, but even that outcome would help accelerate the path to a fully virtualized and decentralized RAN.

JMA Wireless has recently taken its own steps towards that market reality, working with two operators in Italy, TIM and Wind Tre. Italy has been one of the world’s most active markets in terms of early vRAN trials and developments, especially by incumbent TIM, which has worked with Ericsson, Huawei, Altiostar and now JMA in this area. The operator launched LTE-Advanced stadium networks in two venues – the Olympic Stadium in Rome and the Dacia Arena in Udine – using JMA’s XRAN virtualized RAN platform.

This localized network, made up of distributed radio/antenna units controlled by a virtualized baseband controller, is integrated into TIM’s live macro network and claims to double mobile capacity in the stadiums. It also, says TIM, has a lower footprint and power consumption than alternative approaches like DAS (distributed antenna system) or multiple integrated small cells, and so reduces installation and operating costs.

TIM says it will use the improved in-venue capacity and reliability to enable new services such as 360-degree HD video, supporting interactive and virtual reality applications in real time for attendees; as well as social media apps in which thousands of spectators can communicate simultaneously.

Domenico Angelicone, TIM’s head of access network technology, said: “This is another important milestone in the process of transforming our evolution toward 5G through virtualized software solutions. The ability to manage and reduce complexity improves customer value and network quality in an economically efficient way, paving the way for the new 5G digital services, such as immersive and virtual reality services.”

Meanwhile, the country’s third MNO, Wind Tre, is deploying an LTE-Advanced XRAN in the center of the city of Bologna. XRAN was integrated into an existing neutral host city system as a proof of concept (PoC), working with JMA’s Bologna Technology Centre. XRAN is already being used to support a multi-operator core network for Italy’s new entrant MNO, Iliad; and TIM and JMA have also activated a live XRAN network in the same city, deployed on a multi-operator Teko DAS platform.

Another milestone for TIM’s virtualization program came last year when it deployed a vRAN, with Ericsson, in Turin. “Our vRAN partnership with TIM shows that the evolution towards 5G through the virtualization, automation and digitalization of radio access networks is not just a talking point but an action point, as seen by the successful deployment in Turin. We are working closely with TIM to turn this city deployment into a nationwide deployment,” said Fredrik Jejdling, EVP at Ericsson, at the time.

TIM has also been trialling a vRAN based on technology from three specialist vendors, with integration by Tech Mahindra. These trials involve Baicells on the base station side, plus software start-up Phluido, and virtualized core and RAN stacks from Radisys, a leading light in the open source CORD (Central Office Re-architected as a Datacenter) group, and recently acquired by disruptive Indian operator Reliance Jio.

“We are looking to a deeper virtualization that will increase the cost attention we are looking for,” said Lucy Lombardi, SVP of technology ecosystem innovation at TIM. “The scope is to test fronthaul in existing networks, including microwave and fiber, as well as interoperability between different vendors. We are in the process of integrating Radisys and Phluido on one side and producing a radio resource unit with Baicells.”

Phluido is a four-year-old start-up which has developed a fronthaul technology that compresses data so that it can be transported over non-fiber connections such as Ethernet or wireless. It is also a leader of the TIP vRAN workgroup. It calls its approach to vRAN ‘radio as a service’ and claims its approach could make it as viable to virtualize the RAN in the cloud as the packet core. Founded by former Qualcomm engineers Dario Fertonani and Alan Barbieri (now CEO), it has developed a technique that allows the network to pre-process and compress the frequency data at the cell site, so that it can be realistically transmitted to the data center via a lower performance connection such as an Ethernet cable or microwave. Once there, the baseband processing would take place, running as an application on centralized servers.
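
A rough back-of-envelope calculation shows why compression matters for fronthaul. A conventional CPRI-style split carries raw time-domain I/Q samples, whose rate depends only on carrier bandwidth, sample width and antenna count. The Python sketch below uses standard LTE figures; the 10:1 reduction assumed for a compressed frequency-domain split is an illustrative ratio, not one published by Phluido.

```python
def cpri_rate_gbps(sample_rate_msps: float, bits_per_sample: int,
                   antennas: int, overhead: float = 1.33) -> float:
    """Raw time-domain I/Q fronthaul rate for a CPRI-style split.

    The 2x factor covers the I and Q components of each complex sample;
    the overhead factor approximates control words and line coding.
    """
    bps = sample_rate_msps * 1e6 * 2 * bits_per_sample * antennas * overhead
    return bps / 1e9

# 20 MHz LTE carrier: 30.72 Msps, 15-bit samples, 2 antennas.
raw = cpri_rate_gbps(30.72, 15, 2)  # roughly 2.45 Gbps, needing dark fiber

# Compressing frequency-domain data at the cell site (assumed 10:1 here)
# brings the link within reach of gigabit Ethernet or microwave.
compressed = raw / 10  # roughly 0.25 Gbps
```

The raw figure explains why classic fronthaul has demanded dedicated fiber per cell; even a modest compression ratio changes which transport media are viable.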

Since Phluido first mooted this idea, similar approaches have been developed in different ways. Companies like Altiostar have advanced the viability of Ethernet for fronthaul, which is now the topic of a 3GPP standards effort. In many applications, latency can be improved by processing much of the data and signalling at the edge of the network, in effect distributing the cloud, and the baseband, across many edge nodes.

But Phluido has its own particular solution to the fronthaul issue, and it may prove an alternative to the routes favored by ETSI and 3GPP, bolstering TIP, its supporters and the push for an open platform. Its unique offering is the 'as a service' element, which fits with moves towards network slicing, supporting large numbers of service providers from a pool of baseband capacity. This could be run on a wholesale basis, or by a group of participating operators, which would build out slimline radio/antenna units at macro or small cell sites and then tap into baseband capacity in the cloud.

As for JMA, it is closely aligned with the O-RAN Alliance, which, like TIP, is working on open specifications for Ethernet fronthaul as well as broader disaggregated RAN architectures. The company expanded its platform at the end of last year by acquiring Phazr, a mmWave specialist which hopes to ride the wave of interest in high frequency spectrum for 5G.

JMA says the deal creates the industry's first fully integrated architecture that can address 5G New Radio – both Non-Standalone and Standalone – for in-building coverage, dense capacity in large venues, and outdoor deployments. It brings together the acquirer's virtualized RAN and small cell solutions with Phazr's mmWave radios. The combined platform can also support LTE and help operators migrate towards virtualized RAN and edge compute, claims JMA.

Phazr is still in start-up mode, and was originally backed by Fibertower, which is now owned by AT&T. CEO and founder Farooq Khan said JMA was a good fit because it is a 100% US–based company, and “our goal from the beginning has been to create a US-based alternative to the three big guys” in American 5G roll-outs (Ericsson, Nokia and Samsung).

Phazr has gained FCC and European CE certification for its mmWave radios, and has demonstrated its RAN conducting 5G data calls with Cisco's packet core. The start-up says its 5G New Radio vRAN is being trialed today by three major US operators. The FCC and CE permits cover equipment operating in the 27.5-28.35 GHz and 31.8-33.4 GHz bands, though the Phazr radios can support other bands.

Phazr uses what it calls 'cliff computing', in which massive MIMO arrays, RF and baseband functions are integrated into a single physical unit to reduce latency and fronthaul requirements. The idea is not to revert to old-style all-in-one base stations, but to move compute resources close to the antennas to enable fully digital, hyperdense beamforming and multiuser MIMO support across a 120-degree sector.

Last year, Phazr petitioned the FCC for an experimental licence to test its Quadplex technology, which pairs mmWave frequencies on the downlink with sub-6 GHz spectrum for the uplink, something that is supported in the 5G NR standards. The company plans to use the 3.5 GHz, 24 GHz, 28 GHz and 39 GHz bands for tests at its facilities in Allen, Texas, over the course of the next two years.

Phazr says it is seeing increasing interest in this hybrid approach. Khan says that implementing the uplink on mmWave is cost-prohibitive because the cell sites need to be far closer together – about 10 times more numerous. Quadplex also avoids the need for a power-hungry mmWave transmitter in 5G devices, reducing their cost and power consumption.

Phazr also believes its fully digital approach to beamforming is required to make the most of 5G NR and of mmWave. The large vendors are mainly using hybrid beamforming, which combines analog and digital, but Khan argues this is limited by an inability to frequency-multiplex users or to apply power-amplifier efficiency techniques. It also behaves poorly in non-line-of-sight environments – hence Phazr's move to 'hyperdense beamforming'.

Both Verizon and C Spire have previously applied for temporary permits to test the Phazr technology, which has also been in field trials with the cable industry's R&D unit, CableLabs. Most of these tests have used the start-up's low cost fixed wireless platform, which is based on a WiFi ASIC and incorporates 384 antennas in the base station and 64 in the CPE. It promises data rates of up to 30 Gbps over one kilometer with line of sight, or 200-800 meters without.

The company has patents pending for its beamforming technologies and for a user-installable router, called Gazer, which combines the mmWave modem with 802.11ac WiFi, allowing its '5G' Radio Backbone (RABACK) nodes to backhaul gigabit access for standard WiFi devices.