The critical architecture decision that every MNO needs to make in the next few months or years is how to balance resources between the center and the edge of the network (see Wireless Watch August 4 2017 for our analysis of this). The adoption of artificial intelligence (AI) to help optimize the network more intelligently and dynamically should help with this, but it may also become part of the dilemma.
In an interview with EETimes last week, Alex Tepper, founder and head of corporate and business development at Avitas Systems (a GE venture to apply AI in the industrial sector), said that the key limitation of AI today is its ability to compute on the edge device itself – in a drone, for instance, enabling it to change its behavior in-flight without the delay inherent in receiving instructions from the cloud. Qualcomm, Intel and others are working to bring AI to devices as small as handsets, but this remains a work in progress – so will the challenge of miniaturizing AI become yet another delaying factor as MNOs work towards the new-look software-driven network architectures which will help get the best results from 4G and 5G?
Once networks are virtualized, the potential to transform their efficiency and responsiveness should be limitless – but only if the planning of the physical, as well as the virtual, resources is fit for purpose. Depending on the use case and business model, there will be an optimal allocation of network and compute resources to different points along the chain, from cloud to switch to local gateway to the device itself. Ideally this allocation should be capable of adjusting to changing circumstances in terms of traffic levels and traffic type.
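As a purely illustrative sketch of the trade-off involved – not any operator's actual algorithm, and with invented latency and cost figures – the placement decision can be thought of as picking the cheapest tier in the chain that still meets a workload's latency budget:

```python
# Toy sketch: choose where along the cloud-to-device chain to place a
# workload, trading off latency against compute cost. All numbers are
# illustrative assumptions, not real network figures.

TIERS = [
    # (name, round-trip latency in ms, relative compute cost per unit)
    ("device",  1,  10.0),
    ("gateway", 5,   4.0),
    ("switch",  15,  2.0),
    ("cloud",   50,  1.0),
]

def place_workload(max_latency_ms, compute_units):
    """Pick the cheapest tier that still meets the latency budget."""
    feasible = [(name, cost * compute_units)
                for name, latency, cost in TIERS
                if latency <= max_latency_ms]
    if not feasible:
        return None  # no tier can meet the requirement
    return min(feasible, key=lambda t: t[1])[0]

# A latency-tolerant analytics job lands in the cloud...
print(place_workload(max_latency_ms=100, compute_units=8))  # cloud
# ...while a tight control loop must run on the device itself.
print(place_workload(max_latency_ms=2, compute_units=1))    # device
```

A real system would adjust these inputs continuously as traffic levels and types change, which is exactly where AI-driven analysis comes in.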
Of course, the ability of AI to help make these decisions is key – building on deep machine-based analysis of traffic patterns, user behaviour and history, and many other factors. It is one of three main areas driving telcos' rising interest in AI:
One is more intelligent optimization of network resources, to improve efficiency and customer experience.
The second, less telco-specific, is to build a new customer relationship through chatbots and other new interactions – which in turn draw on a holistic view of the user – and through hyper-personalized services.
The third is to harness the significant stores of valuable data which telcos hold on their subscribers and networks, for business-to-business revenue streams based on big data analytics and deep context awareness.
This interest is translating into real projects. A study of 48 mobile operators worldwide by Rethink Technology Research, to be published in October, found that 58% were engaged in AI tests, trials or real world deployments, with machine learning being the most commonly used technology.
A newly published survey of the broader telco space by Capgemini concluded that, while telcos used to be AI laggards, they are now leading large-scale deployments, with 49% deploying the technology, ahead of an average of 36% across all industries. In the majority of cases these deployments are being led by customer service applications, but a full 93% of telco adopters said they expected AI to increase efficiency and effectiveness, while 79% claimed to have seen a 10% boost in sales thanks to AI.
Capgemini said in its report that the “low hanging fruit” use cases for telco AI are forecasting, managing risk and tracking customer and transaction histories. “Organizations need to have a clear view of where AI can create the most enduring advantage for them and their customers,” said the report, adding that, across many sectors, companies are failing to align AI investments with business opportunities. Interestingly, despite the massive AI R&D projects in the US and China, it is India, with its well-established base of software engineers, where the largest percentage of companies (across all sectors) are using AI commercially, at 58%.
Ron Tolido, CTO for Capgemini's Insights & Data practice, said that many organizations are focusing their effort on complex AI projects and missing out on simpler deployments which could drive quicker returns.
This may be why many operators are looking first to chatbots and digital assistants, which are becoming well understood by the commercial side of the telco business and are very visible to customers.
For instance, in March, Amdocs announced aia, a digital intelligence platform which combines AI and machine learning capabilities – including cognitive computing services from IBM’s Watson – to make predictions, automate decisions and directly manage conversations with customers. This week, it jointly launched an AI-driven chatbot with Microsoft, specifically for MNOs, powered by aia and Microsoft Cognitive Services, the basis of what the Windows giant calls the new ‘conversational’ interface with the digital world.
However, more transformative effects could come from optimizing the network on a fully dynamic basis, linked to customer experience (proactively shifting a high value customer, who has wandered into a spot where a small cell is about to fail, to another connection, for instance). It will be easier for MNOs to improve their overall customer engagement once they are delivering an excellent network experience (not to mention users having to spend less time interacting with customer care in the first place). If that is not in place, Apple, Facebook and the others will continue to be the primary digital interface for most users.
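The proactive-handover example above can be sketched as a simple decision rule – a hypothetical illustration with invented thresholds and field names, not any MNO's actual logic: if a cell's predicted failure risk is high enough, move high-value users pre-emptively to the best available neighbour.

```python
# Hypothetical sketch of proactive, customer-aware handover. Thresholds,
# field names and numbers are all invented for illustration.

def pick_handover_target(neighbours):
    """Choose the neighbour cell with the best signal among those with spare capacity."""
    candidates = [n for n in neighbours if n["load"] < 0.8]
    if not candidates:
        return None
    return max(candidates, key=lambda n: n["signal_dbm"])

def should_preempt(failure_prob, customer_value, threshold=0.5):
    # Act earlier for higher-value customers: scale the risk threshold down.
    return failure_prob >= threshold / max(customer_value, 1)

neighbours = [
    {"cell": "B", "signal_dbm": -85, "load": 0.6},
    {"cell": "C", "signal_dbm": -78, "load": 0.9},  # too heavily loaded
    {"cell": "D", "signal_dbm": -80, "load": 0.5},
]

# A high-value customer on a cell with a 30% predicted failure risk:
if should_preempt(failure_prob=0.3, customer_value=3):
    target = pick_handover_target(neighbours)
    print(f"handover to cell {target['cell']}")  # handover to cell D
```

In practice the failure probability and customer value would themselves come from machine-learned models over traffic and behaviour history, which is precisely the coupling of network optimization and customer experience described above.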
Vendors are starting to support the optimization aim. Nokia has made several announcements in recent months, including its Autonomous Care offerings, unveiled in May. Earlier this month, ZTE said MNOs needed to accelerate their network AI efforts and announced a platform (as-yet somewhat ill-defined) to help. This will incorporate self-optimizing network (SON) capabilities as well as algorithms to support new interfaces based on natural language processing and facial recognition. The Chinese vendor aims to offer an end-to-end platform which covers a wide range of telco-specific use cases from intelligent automated networks to new consumer services, and which incorporates the algorithms along with the chips and terminal hardware. The elements promised include ‘self-researching AI chips’, robot modules and intelligent terminals such as smartphones and smart home controllers.
“Complemented with high computing power, precision algorithm and data analytics capability, AI technology will lead to the evolution of highly intelligent autonomous, automatic, self-optimizing and self-healing networks,” ZTE said in its release. “At this stage, operators and vendors are still proactively exploring and seeking more efficient, stable and accurate AI algorithms and solutions to reduce the operation labor cost and effectively improve operating income. The platform can help operators introduce new technologies and build next generation intelligent network more conveniently amidst the ongoing advancement of AI technologies.”
ZTE’s inclusion of smartphone and controller devices in its AI portfolio indicates that algorithms which, before the days of cheap mass storage and compute power, required a supercomputer to run can now be applied to a mobile gadget. Intel and Qualcomm have both recently demonstrated neural processing engines running on chips targeted at gateways or mobile devices. These effectively take snapshots of broader machine learning models which are created and modified in the cloud, and run them locally to reduce latency and improve context awareness (see Wireless Watch September 4 2017).
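The "snapshot" idea can be illustrated with a minimal sketch – here using naive 8-bit weight quantization as a stand-in for whatever compression a real neural engine applies; the model and numbers are invented:

```python
# Sketch of the "snapshot" idea: a model trained in the cloud is
# compressed (here, naive 8-bit quantization of its weights) into a
# small artefact an edge device can run locally. Purely illustrative.

def quantize(weights):
    """Map float weights to int8 values plus a scale factor (the 'snapshot')."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def edge_predict(snapshot, features):
    """Run the compressed linear model locally, with no cloud round-trip."""
    q_weights, scale = snapshot
    return sum(q * scale * x for q, x in zip(q_weights, features))

cloud_weights = [0.52, -1.27, 0.08]   # learned centrally in the cloud
snapshot = quantize(cloud_weights)    # the small artefact shipped to the device
print(edge_predict(snapshot, [1.0, 0.5, 2.0]))
```

The point of the pattern is that only the compact snapshot travels to the device; the heavy training and retraining stay in the cloud, and a fresh snapshot is pushed out when the central model changes.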
This helps, but does not solve, the issue of how much intelligence and processing should be placed at the edge as opposed to the central engine. Edge-based AI improves responsiveness but an efficient way of updating the central platform is essential to avoid fragmentation. There are daunting issues of supporting smooth roaming for users who move from one AI-optimized, context aware cell to another with no such user experience. For challenges such as load balancing across different locations and times of day, a common view of the whole network is essential.
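One well-known pattern for keeping edge-trained models from fragmenting is federated averaging: each edge node tunes the model locally, and only the per-node weights flow back to the centre, which merges them into a single common model. A minimal sketch with invented numbers – not any vendor's implementation – looks like this:

```python
# Minimal federated-averaging sketch: the centre merges per-node
# parameter lists into one common model, weighting busier nodes more.
# All numbers are invented for illustration.

def federated_average(edge_models, node_weights=None):
    """Merge per-node parameter lists into one central parameter list."""
    if node_weights is None:
        node_weights = [1.0] * len(edge_models)
    total = sum(node_weights)
    n_params = len(edge_models[0])
    return [
        sum(model[i] * w for model, w in zip(edge_models, node_weights)) / total
        for i in range(n_params)
    ]

# Three cells report locally tuned parameters; the busiest cell counts double.
merged = federated_average(
    [[0.2, 1.0], [0.4, 0.8], [0.3, 0.9]],
    node_weights=[1.0, 1.0, 2.0],
)
print([round(p, 3) for p in merged])  # [0.3, 0.9]
```

Crucially, the merged model preserves the common network-wide view needed for tasks like load balancing, while each edge node still benefits from low-latency local inference.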
So the ability to do more AI at the edge does not answer all the questions of how to harness resources most efficiently, as Tepper of Avitas makes clear. Avitas, being backed by GE – one of the flagwavers for using AI to make the Industrial Internet of Things a reality – is at the coalface of this movement. It recently announced an alliance with Nvidia to work on enabling AI in inspection services for the oil, gas and transportation industries. Such deals will be important for Nvidia, whose GPUs (graphics processing units) have been key enablers of affordable AI engines in recent years, but which are now being challenged by more specialized processors such as Google's TPU (Tensor Processing Unit), and by FPGA-based approaches like Intel's.
Nvidia wrote in a recent blog post: “How do you send a human being to inspect a petroleum refinery flare stack — one that operates at hundreds of degrees and requires negotiating a high risk vertical climb? The answer is you don’t.” While climbing a cell tower does not carry this level of risk, MNOs such as AT&T and T-Mobile USA have already experimented with drones to inspect and even install equipment, to save cost and liability. However virtualized, there will always be physical elements to a mobile network, and civil works can be the most expensive aspect of a roll-out, especially when it comes to large numbers of small cells to support urban densification.
Tepper points out that AI can create 3D models of an asset such as a cell tower, then layer “points of interest” on top of that to enable drones to spot problems and automate defect detection. Avitas and Nvidia are currently using truck-based AI engines to get closer to towers and industrial sites, but Tepper wants to get that intelligence into the drone itself. This is a far more complex, resource-hungry and mission critical task than supporting consumer applications based on vision processing, for instance, on a handset.
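The points-of-interest idea can be pictured with a toy sketch – all data and thresholds invented, and vastly simpler than the vision models a real drone would run: expected readings are attached to known locations on the 3D asset model, and deviations beyond a tolerance are flagged as candidate defects.

```python
# Illustrative sketch of "points of interest" layered on a 3D tower
# model: compare a drone's measurements at known locations against
# expected values and flag large deviations as candidate defects.
# All coordinates, readings and tolerances are invented.

POINTS_OF_INTEREST = {
    # point id: (x, y, z in metres, expected temperature in deg C)
    "antenna_mount": (0.0, 0.0, 30.0, 25.0),
    "feeder_joint":  (0.5, 0.0, 20.0, 25.0),
}

def flag_defects(readings, temp_tolerance=10.0):
    """Return ids of points whose measured temperature deviates too far."""
    defects = []
    for poi_id, (x, y, z, expected_temp) in POINTS_OF_INTEREST.items():
        measured = readings.get(poi_id)
        if measured is not None and abs(measured - expected_temp) > temp_tolerance:
            defects.append(poi_id)
    return defects

# The feeder joint runs 23.5 deg C hotter than expected, so it is flagged.
print(flag_defects({"antenna_mount": 26.0, "feeder_joint": 48.5}))
```

Running this kind of check on the drone itself, rather than waiting for a cloud round-trip, is exactly the in-flight autonomy Tepper is describing.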
AT&T is also working on an edge computing model with AI elements to boost automation, revealed Marachel Knight, SVP of wireless network architecture, at the Mobile Future Forward conference this month. It aims to design its 5G RANs so that network computing components are geographically close to a tower or small cell to lower latency. It has already said that it plans to fit its edge computing platforms with high end GPUs and CPUs, and coordinate and manage all these elements with its software-defined network (SDN) controllers.
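The placement principle AT&T describes – keeping compute geographically close to the radio site to cut latency – reduces, in its simplest form, to assigning each cell site to its nearest edge data centre. A toy sketch, with invented site names and coordinates:

```python
# Toy sketch of latency-motivated edge placement: assign each cell
# site's compute to the geographically nearest edge data centre.
# Site names and coordinates are invented for illustration.

import math

EDGE_SITES = {
    "edge-A": (0.0, 0.0),
    "edge-B": (10.0, 10.0),
}

def nearest_edge(tower_xy):
    """Return the edge site closest to the given tower coordinates."""
    return min(
        EDGE_SITES,
        key=lambda site: math.dist(EDGE_SITES[site], tower_xy),
    )

print(nearest_edge((2.0, 1.0)))   # edge-A
print(nearest_edge((9.0, 8.0)))   # edge-B
```

A production system would of course weigh load, fibre routes and cost as well as distance, with the SDN controller re-evaluating assignments as conditions change.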
The goal is the same for AT&T and for GE (and many others) – to make AI highly personal and context aware, in order to go beyond automation and improved decision support, and enable new ways of working. On that journey, the right decisions, about how much to distribute or centralize, will help decide where the MNO fits into the complex AI value chain.