AT&T-driven Acumos AI initiative announces first open software release

The Linux Foundation’s Deep Learning Foundation – an umbrella group for several projects in this area – has announced Athena, the first release from the Acumos AI initiative, which is based on code originally submitted by AT&T. The US operator kicked off the program with a view to lowering the barriers to entry to AI, which it intends to use aggressively in its own operations, including planning and optimizing its networks.

AT&T has donated code in order to kickstart several open source projects, and as these start to release their first software one by one, it is becoming easier to join the dots between them and discern a broad telecoms software platform being driven heavily by this one operator. For instance, Orange is already talking about the potential to integrate Acumos AI with ONAP (Open Network Automation Platform), another Linux Foundation-hosted project based originally on AT&T code, this one focused on automated management and orchestration (MANO) for virtualized networks.

If enough operators follow Orange’s lead and support these AT&T-inspired projects, the US telco has a dual opportunity – to exert unparalleled influence over the development of virtualized, next generation networks; and to reduce its own costs by attracting a broad open ecosystem to its technologies.

Athena allows AI models and applications to be deployed in public or private cloud infrastructure, or in a Kubernetes container environment on a company’s own servers or on-premises virtual machines. This is an important first step for Acumos AI, which aims to establish an open source platform and framework that makes it easier to build, share and deploy AI applications: because these run on a common infrastructure stack and a generic set of components, the result is an ‘out-of-the-box’ AI environment.

Among the capabilities supported in Athena are:

  • one-click deployment of the platform using Docker or Kubernetes containers
  • a design studio for chaining together multiple models to create an end-to-end solution
  • use of security tokens to allow simple on-boarding of models from an external toolkit
  • decoupling of microservices generation so that models can be easily repurposed for different environments and hardware.

“The Acumos Athena release represents a significant step forward in making AI models more accessible for builders of AI applications and models, along with users and trainers of those models and applications,” said Scott Nicholas, senior director of strategic planning at The Linux Foundation. “This furthers the goal of LF Deep Learning and the Acumos project of accelerating overall AI innovation.”

As well as AT&T, contributors included Amdocs, Orange, Tech Mahindra and others. Acumos AI also features a marketplace – a catalog of community-contributed AI models with common APIs that can be shared securely across multiple systems.

According to Mazin Gilbert, VP of advanced technology & systems at AT&T Labs, the Acumos project now has 15 members and nearly 1,000 registered users. The next release will be ready in mid-2019 and will introduce model training and data extraction pipelines, as well as updates to support closed-source model developers.

François Jezequel, head of ITsation, procurement and operators at Orange, pointed out the potential to integrate Acumos with ONAP, which is housed in another Linux Foundation umbrella group, this one focused on networking initiatives.

AT&T will be exploring similar connections, undoubtedly. Gilbert himself is a common link between several key projects, heading up AT&T’s efforts in Acumos and in yet another LF project, Akraino, which focuses on edge compute. He is also chair of the governing board of the LF Deep Learning Foundation.

He sees an immediate, and urgent, need for AI in the network itself, as this densifies amid 4G expansion and 5G introduction.

“Today, we have 35,000 microcells. We are going to 100,000 microcells building out 5G and more,” he said in an interview with LightReading earlier this year. “Where do you put those? What building? What pole? It takes a year to put one of those out today. That cannot scale. The question is how do you make this mainstream, reduce the time cycle, and take into account traffic changes?”

The answer is AI, he argues. AI and machine learning (ML) can “redo completely the network planning process”, enabling operators to understand “on the spot” where a small cell would be best placed and how it will interact with others nearby, and with the macrocell. “We can’t send an army of people every time we want to build one, it’s not possible to scale 5G without it,” he added.

The beginning of densification at AT&T was one reason why the telco took its AI efforts into the open source world at a far earlier stage than it did with other developments such as ECOMP (now the basis of ONAP) or xRAN (soon to be folded into the O-RAN Alliance). It is becoming urgent to scale up the deployment of small cells, white box switches and routers, and edge compute nodes to support the new-look 5G infrastructure, which in turn will enable 5G to support many new use cases requiring dense capacity and low latency.

In a study Rethink published earlier this year – ‘AI, SON and the Self-driving Cellular Network’ – network planning and optimization were found to be the main reasons why operators expected to invest in AI/ML in the next 3-4 years. The survey of almost 100 telco executives found that between 60% and 74% of MNOs plan to deploy AI/ML between 2018 and 2022 to support SON in various areas of automated network planning, management and optimization.

The results show that 74% of MNOs plan to do this for RAN optimization by 2022 and about 70% for planning and to support RAN-based customer experience management. And 68% plan to do this for network maintenance, especially by harnessing predictive maintenance tools, while 60% are expecting to combine AI and SON in the management and orchestration of virtualized networks.

AT&T is already trialling the use of AI-driven drones to monitor infrastructure, including about 8m poles. Gilbert continued in the interview: “I can send a drone with video capabilities and machine learning that can tell me what is wrong and diagnose the problem. And in the future, that drone will have a robot that can fix the problem that doesn’t jeopardize someone’s safety.”

But Acumos is not confined to telcos. As with Akraino, the idea is to build solutions for different verticals, but with “generic foundations that will suit many industries, and specific tools, applications and marketplaces on top of those, to support each vertical’s specific needs”, as Gilbert put it. “AT&T and a bunch of other operators, we are interested in the telecom sector, so how does the marketplace for AI fit into the LFN and into ONAP and into 5G? That is where we are coming in. But other companies who joined are interested in different aspects of AI, more to do with green energy, more to do with engineering and infrastructure and healthcare, so that the specialization happens at the solution level not at the foundation level.”

Background to Acumos AI:

AT&T announced in November 2017 that it was working with Indian integrator Tech Mahindra to build Acumos, with the aim of making it cheaper and simpler for operators to deploy and share AI applications via a marketplace system. That could accelerate the uptake of AI-driven telco processes, from network planning and optimization to consumer services, and could also reduce the power of the major AI platform providers.

In January, Amdocs signed up, saying it would contribute knowledge of AI data, mapping and data tools from a customer experience, network and media standpoint.

Acumos is an extensible framework for AI and machine learning (ML) solutions, built on open source technologies and running on AT&T’s Indigo data sharing and collaboration platform. It can federate across various AI tools available today (for instance, connecting two microservices derived from Google and Amazon). It allows those AI microservices to be edited, integrated, packaged, trained and deployed; they can then be accessed from the marketplace and chained to create more complex services.

To this end, Acumos will package established toolkits such as TensorFlow and scikit-learn. The latter is a set of off-the-shelf algorithms for recognizing patterns, such as Random Forests and Logistic Regression, while TensorFlow is a lower-level library providing the building blocks for machine learning algorithms, and so offers greater flexibility and scope while requiring more effort.
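The contrast between the two styles of toolkit is easy to see in code. A generic illustration, not Acumos-specific: with scikit-learn, the two algorithms named above work almost out of the box on a standard dataset.

```python
# Generic scikit-learn illustration: off-the-shelf algorithms need
# almost no configuration (this is not Acumos platform code).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A Random Forest, ready-made: fit and score in two calls.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Logistic Regression, equally turnkey.
logreg = LogisticRegression(max_iter=200).fit(X_train, y_train)

print(forest.score(X_test, y_test), logreg.score(X_test, y_test))
```

Achieving the same with raw TensorFlow would mean defining the model, loss and training loop by hand, which is precisely the flexibility-versus-effort trade-off described above.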

Whichever is used, the framework provides an API, enabling developers to connect the algorithms together as if they came from the same development team. This frees data scientists to concentrate on tuning their data sets for the problem, while the model trainers can focus on the application without worrying about the underlying AI platform.
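The chaining idea can be sketched in a few lines. This is a hypothetical illustration of the concept, with names of our own invention, not the real Acumos API: once every model sits behind the same predict() signature, a pipeline can feed each model's output into the next regardless of which toolkit produced it.

```python
# Hypothetical sketch of a common-API model chain (not the real Acumos API).
from typing import Any, Callable, List

class ModelStep:
    """Wraps any model behind a single predict() signature."""
    def __init__(self, name: str, predict_fn: Callable[[Any], Any]):
        self.name = name
        self.predict_fn = predict_fn

    def predict(self, data: Any) -> Any:
        return self.predict_fn(data)

def chain(steps: List[ModelStep], data: Any) -> Any:
    # Feed each step's output into the next, as a design-studio chain would.
    for step in steps:
        data = step.predict(data)
    return data

# Two toy "models" standing in for, say, a feature scaler and a classifier.
normalize = ModelStep("normalize", lambda xs: [x / max(xs) for x in xs])
threshold = ModelStep("threshold", lambda xs: [1 if x > 0.5 else 0 for x in xs])

print(chain([normalize, threshold], [2, 5, 10]))  # → [0, 0, 1]
```

Because both steps expose the same interface, neither needs to know which framework trained the other, which is the point of the common API.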

The framework also supports relevant non-AI tools such as microservices and containers, packaging and exporting production-ready AI applications as Docker files. These help developers and system administrators port applications, along with all their dependencies, so that they run across target machines without further programming effort. Docker achieves this by running applications in isolated Linux environments called containers.
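To make the microservice half of that concrete, here is a hypothetical sketch, using only the Python standard library and names of our own choosing, of what a model microservice looks like before it is containerized: a trained model exposed over HTTP, which a Docker image would then bundle with its dependencies.

```python
# Hypothetical sketch (our own names, not real Acumos packaging code):
# a model wrapped as a tiny HTTP microservice, the kind of unit that
# would then be packaged into a Docker container.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Stand-in "model": classify by whether the feature sum passes a threshold.
    return {"label": int(sum(features) > 1.0)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON list of features, return a JSON prediction.
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))
        body = json.dumps(predict(features)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Bind to an ephemeral port; call server.serve_forever() to handle requests.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
```

A Dockerfile wrapping this script would only need to copy it in and declare the port, which is what lets the same service run unchanged on any target machine.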

Tech Mahindra will work with enterprises to help them apply the AI services and tap into intelligent telecoms connectivity to enable new use cases. The firm’s SVP and strategic business unit head, Raman Abrol, said: “Our ultimate goal with the Acumos Project is to accelerate and industrialize the deployment of AI at enterprises and get developers and businesses to collaborate effectively in order to improve how we all live, work and play.”