The mobile operator no longer has the luxury of dealing with a relatively closed and well-defined set of technologies and partners. The mobile network is increasingly intertwined with fixed line connections, and also with broad virtualized, programmable platforms, which will be essential to enable new business models and justify the investment in 5G.
That sees operators getting deeply involved in a host of new technologies and standards, and increasingly emerging from the secrecy of in-house labs to work through open source projects. Two important areas of effort are edge computing and machine learning (ML). Both are the focus of several open initiatives, in which certain operators, notably AT&T, are prominent. Both are beginning to be deployed, often in the wireline network first, but with the advent of 5G, attention will turn rapidly to mobile.
Both topics are central to the broader 5G and converged service platform, offering the potential to transform an operator’s cost base while extending a connectivity network into a fully distributed cloud platform. Applying AI/ML to network planning, optimization and healing could significantly advance the goal of fully automated operations, in which resources are allocated dynamically to applications and users on demand, tailored to their requirements.
This relates to edge compute because the edge, if it is to be telco-deployed (which is far from certain), will be part and parcel of the move to densify the network with additional cell sites, to support low latency, high capacity hotzones and full coverage. Combining cloud resources with those sites greatly expands the types of services which can be supported, enhancing the business case. But with so many moving parts in the network, automation will be essential to achieve consistent response times – and both edge nodes and AI-enabled automated operations will become even more essential once operators get to network slicing.
By weaving edge compute nodes into the fabric of the network, telcos can support low latency services, but they also argue they can expand into cloud services by harnessing the distributed nature of their systems – an advantage over the webscale providers, whose network edge currently sits much farther from the end user than the operator’s. That, in turn, would allow them to provide a richer end-to-end slice in which they include, and charge for, cloud resources as well as connectivity.
However, many vertical industries are pursuing their own approach to edge compute, and getting far closer to the user than the operator’s central office or cell site. The idea of integrated connectivity and compute/storage resource being deployed on industrial premises such as factories, stores and offices is growing, adding value to the case for in-building small cells. But with the potential to use shared spectrum and neutral host platforms, those deploying and monetizing these edge nodes may not be the MNOs.
This is why open initiatives which go beyond the telecoms vertical are essential to ensure the mobile edge is suited to many types of industries, not just classic MNO applications like video caching; and that, in turn, the telcos end up with standards which enable them to make their edge network useful to a wide selection of customers.
In this respect, activities like OpenFog and the AT&T-initiated Akraino will prove more valuable to the telcos’ future evolution than ETSI’s Multi-access Edge Computing (MEC), even though the latter has a far more operator-specific starting point. MEC’s initial premise (that mobile sites would map neatly to edge compute nodes, giving the operator the best hand in the value chain) is over-simplistic, and the group appears to be evolving towards defining application programming interfaces (APIs) which will work with less telco-specific platforms like OpenFog, rather than trying to define the whole system itself.
At the Akraino Summit earlier this month, hosted by AT&T in New Jersey, the attendees were discussing the newly released seed code, which aims to provide a blueprint for telco-based edge compute. Much related definition work is taking place under the auspices of the OpenFog Consortium, whose framework has been adopted as the basis of IEEE edge compute standards, and other blueprints may be developed for non-telco verticals such as automotive. These would be interoperable with Akraino, allowing industries to cooperate through common interfaces.
The Akraino Edge Stack also officially moved into the execution stage within the Linux Foundation, announcing a full governance structure and project scope and kicking off a technical committee.
According to Arpit Joshipura, general manager of networking and orchestration at the Linux Foundation, there are 12 vendors plus AT&T in the founding membership, but he insists other operators are already involved in technical work on Akraino, and the project will clearly hope they soon join as full members. To date, several operators are active in edge compute platforms, but their work is not coordinated – for instance, Deutsche Telekom has placed its own edge efforts into an autonomous unit called MobiledgeX, and has signed up the support of Korea’s SK Telecom. (The vendor members of Akraino are ARM, Dell EMC, Ericsson, Huawei, Intel, inwinSTACK, Juniper, Nokia, Qualcomm, Radisys, Red Hat and Wind River.)
Akraino has moved the definition of the edge on from focusing only on cell sites and central offices. These are adequate for MNOs’ core consumer use cases like enhanced video delivery, but not for many industrial and IoT applications. So the project scope now extends to two types of edge network. One is the traditional one, where up to 20ms of latency and a considerable distance from the user is acceptable. The other is on customer premises, supporting shorter distances and latency of less than 5ms.
The Akraino community has started work on developing the middleware, software development kits (SDKs) and APIs, along with blueprints to address individual use cases in either of these scenarios. The first one, for which AT&T released the seed code last week, is conventional – a telco-hosted edge compute node located in the central office, with a full stack based on OpenStack. The full release is expected by the end of the year.
The next blueprint will be a more lightweight stack designed for deployment on smaller, more remote sites where there will be less space for equipment. Third in line is a non-telco blueprint which could work, according to Joshipura, “with disaggregated hardware or really lightweight, low-latency stacks”.
Akraino also has the ambition to unify many of the other edge-focused efforts underway in the telco and other verticals. It positions itself as part of the broader Linux Foundation platform for the open source edge, which also includes two other AT&T-initiated projects, ONAP (Open Network Automation Platform) and Acumos AI, plus OpenStack and other cloud elements. Akraino says it is engaging with other groups as well, both inside and outside the Foundation, including ETSI MEC, CORD, EdgeX Foundry and OpenEdge Computing.
“Service providers are already participating in the technical committees, because they are open to anyone, you don’t have to join,” Joshipura told LightReading. “Then there’s a lot of what I would call enterprise and IIoT companies participating. If you look at the developer summit agenda, we have a whole set of what I’d call adjacent communities and projects participating. We’re working to see how to collaborate.”
“We fully embrace disaggregation as a means of driving innovation in CSP networks,” said Kevin Shatzkamer, VP of service provider solutions at Dell EMC, which was also an initiator of the EdgeX Foundry effort. “We see edge as the real enabler and on-ramp to the cloud and joined the Akraino Edge Stack project to help lead this important effort for customers to quickly scale edge cloud services.”
Meanwhile, elsewhere in the Linux Foundation’s growing open telecoms ecosystem, there is another example of telco-specific work being joined by related efforts focused on other industries. In AI, the Foundation already hosts Acumos AI, but now it has added two new projects to its Deep Learning Foundation. These have come from the webscale world, submitted by Baidu and Tencent, which indicates the rising influence of the Chinese majors in open source. After years in which both the telecoms industry and China found it hard to get their heads around open source, the barriers have finally come down. That is boosting innovation, but calling into question many of the traditional standards processes and patent licensing schemes which have defined the mobile platform for so long.
As in edge, there are traditional ETSI and telco-specific projects addressing AI/ML, but operators and vendors are starting to look more keenly at open ecosystems like the Linux Foundation’s, in order to access a more cost-efficient, competitive platform; and to tap into projects which link the telco world, via open interfaces, to other verticals and the cloud market. These links will be essential if the 5G network is to be more than just a 4G upgrade delivering mobile broadband connectivity, and is to fulfil its bigger vision of a fully automated, programmable network capable of supporting a host of specialized use cases and industries.
So the Acumos participants will gain from sharing developments with the new projects, and eventually from defined interfaces between them. Both new initiatives fall within the Deep Learning Foundation, one of several umbrella frameworks established by the Linux Foundation earlier this year. Another is the LF Networking Fund, which includes ONAP (see inset). The aim of these frameworks is to coordinate the activities of related projects, reducing overlap and accelerating the creation of a broad architecture.
The LF Deep Learning Foundation is to focus on assembling a common core AI infrastructure stack. Its foundational project is Acumos AI, which will design an open source framework for building, sharing and deploying AI apps. This will run over a standardized infrastructure stack with all the components needed to set up and run a general AI environment out of the box.
It has initially been built on code contributed by AT&T and its partner Tech Mahindra. They were also founding members of the foundation, along with Amdocs, Huawei, Nokia, ZTE, Tencent and Baidu.
The two Chinese web companies have now contributed the new projects.
Baidu’s project is an Elastic Deep Learning (EDL) framework to help deep learning cloud service providers build services using open systems. It will contribute AI/ML code optimized to exploit Kubernetes’ elastic scheduling through Baidu’s own PArallel Distributed Deep Learning (PaddlePaddle) software. Kubernetes, the platform originally developed by Google for automating deployment of containerized software, is particularly suitable for AI components acting on data.
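The core idea behind elastic deep learning is that a scheduler such as Kubernetes may add or remove workers mid-job, and training simply re-shards the data and carries on rather than failing. The toy sketch below illustrates that pattern in plain Python; it is not EDL or PaddlePaddle code, and the function names and the simple linear model are invented for illustration.

```python
# Toy sketch of elastic training: the worker count changes between
# steps (as a Kubernetes-style scheduler scales the job), the data is
# re-sharded, and gradient averaging continues uninterrupted.

def shard(data, n_workers):
    """Split data into n_workers roughly equal shards."""
    return [data[i::n_workers] for i in range(n_workers)]

def local_gradient(w, samples):
    # Least-squares gradient for the model y = w * x on one shard.
    return sum(2 * (w * x - y) * x for x, y in samples) / len(samples)

def train(data, worker_schedule, lr=0.01):
    """worker_schedule[t] = number of workers alive at step t."""
    w = 0.0
    for n_workers in worker_schedule:
        shards = shard(data, n_workers)        # re-balance on every resize
        grads = [local_gradient(w, s) for s in shards if s]
        w -= lr * sum(grads) / len(grads)      # average, like an all-reduce
    return w

data = [(x, 2 * x) for x in range(1, 9)]
# Scheduler scales the job from 4 workers down to 2, then up to 3;
# training still converges towards the true slope of 2.0.
w = train(data, worker_schedule=[4] * 30 + [2] * 30 + [3] * 30)
```

The point of the sketch is that no step depends on a fixed worker count, which is what lets an elastic scheduler reclaim or grant resources without restarting the job.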
PaddlePaddle has made a splash by improving performance, scalability, hardware utilization and robustness through a distributed approach to ML training. Baidu, incidentally, operates the world’s second most heavily used search engine, with 76% of the market in China, where Google is blocked – which helps place it fourth in the Alexa Internet traffic rankings.
Tencent – best known for WeChat – is the initiator of the Angel Project, aiming to develop a distributed ML platform based on the Parameter Server – co-developed with Peking University – with ‘out of the box’ algorithms optimized for handling higher dimension models with billions of parameters. This will be incorporated into Acumos over time.
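The parameter-server architecture that Angel builds on splits the job in two: a server holds the model, while workers pull only the parameter slices their minibatch touches, compute gradients locally, and push updates back – which is what makes models with billions of parameters tractable. The following is a minimal single-process sketch of that pattern, not Angel’s actual API; the class and the sparse linear model are invented for illustration.

```python
# Minimal parameter-server pattern: sparse storage on the server,
# pull / compute / push on the workers.

class ParameterServer:
    def __init__(self, lr=0.1):
        self.params = {}          # sparse storage: feature key -> weight
        self.lr = lr

    def pull(self, keys):
        # Workers fetch only the parameters their batch touches, so a
        # billion-parameter model never moves in full.
        return {k: self.params.get(k, 0.0) for k in keys}

    def push(self, grads):
        # Apply workers' gradient contributions to the shared model.
        for k, g in grads.items():
            self.params[k] = self.params.get(k, 0.0) - self.lr * g

def worker_step(server, batch):
    """One worker iteration: pull, compute local gradients, push.
    Model: linear score over sparse features, squared loss."""
    keys = {k for feats, _ in batch for k in feats}
    w = server.pull(keys)
    grads = {k: 0.0 for k in keys}
    for feats, y in batch:
        err = sum(w[k] for k in feats) - y
        for k in feats:
            grads[k] += 2 * err / len(batch)
    server.push(grads)

ps = ParameterServer()
# Two "workers" sharing one server; true weights are f0=1.0, f1=0.5.
batch_a = [(["f0"], 1.0), (["f0", "f1"], 1.5)]
batch_b = [(["f1"], 0.5)]
for _ in range(200):
    worker_step(ps, batch_a)
    worker_step(ps, batch_b)
```

In a real system like Angel, the server itself is sharded across machines and the pushes arrive asynchronously, but the pull/compute/push contract is the same.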
“Angel shares a common goal with the LF Deep Learning Foundation: to make deep learning easier to use,” said Xiaolong Zhu, senior AI researcher at Tencent.
Parent company Tencent Holdings is the world’s biggest investment corporation and is investing hugely across the whole spectrum of AI from natural language processing to autonomous driving.
Both new projects already have almost 1,000 commits and use the Apache-2.0 open source licence.
A key goal of the LF Deep Learning Foundation is to build its framework for creating and deploying AI applications on existing, proven standards wherever possible, working with established groups and vendors. What is less clear is how it will relate to other AI standards groups already in the field, although it is unique in addressing solely the infrastructure stack.
There is also the IEEE’s P2755 Working Group, but that has a higher level focus on application areas, defining Robotic Process Automation, autonomics and cognitive processing, as well as ML and AI themselves. There is a danger here of getting hung up on names: cognitive computing is little more than IBM’s term for AI, coined to distinguish its work from the original AI concept defined by Alan Turing – simulating human intelligence, rather than simply performing useful high-level tasks without explicit programming.
There are also groups developing standards or specifications for AI or ML in specific domains, such as the ITU-T Focus Group on Machine Learning for Future Networks including 5G. This was established by ITU-T Study Group 13 at its meeting in Geneva in November 2017, to develop technical reports and specifications for ML in emerging mobile networks, including interfaces, network architectures, protocols, algorithms and data formats. This will overlap with the LF Deep Learning Foundation, given that AT&T for example has an interest in both. AT&T is a major contributor to 5G development and has also, alongside Indian IT outsourcing firm Tech Mahindra, written the initial code for the LF Foundation’s Acumos AI project.
What the likes of Tencent and Baidu, as well as other founder members Amdocs, AT&T, B.Yond, Huawei, Nokia, Tech Mahindra, Univa and ZTE, hope to get out of the foundation is a roadmap for AI deployments and some convergence around a common infrastructure stack, with agreed toolsets to reduce the cost of implementation and deployment. The focus is very much on the technology itself: what will not be covered, at least under current plans, are higher level considerations relating to the risks and societal aspects of AI, which are attracting increasing interest. That will be the remit of other initiatives, such as the Partnership on AI to Benefit People and Society, founded in 2016 by Amazon, Google/DeepMind, Facebook, IBM and Microsoft, with Apple joining soon afterwards.