
Linux Foundation launches Deep Learning group, in AI unity push

AI and machine learning (ML) could be held back by a lack of skills in building applications and applying the technology effectively if the rate of deployment comes anywhere close to matching the rampant hype currently sweeping the field. There have already been reports of skills shortages and a drain of the best developers and data scientists away from academia and smaller emerging AI specialists towards the big players such as Google, Amazon and Facebook, as well as the Chinese giants, which are investing huge sums in their programs. This threatens to segregate the field into islands of development peddling different technology stacks and combinations of standards, rather than the open, innovative model that has stimulated other movements in the IT realm such as DevOps and microservices.

The recent launch of the LF Deep Learning Foundation by the Linux Foundation was therefore timely, providing a focal point for convergence of relevant AI standards and technologies. The name is unfortunate, insofar as the group should be open to all aspects of ML rather than just deep learning, which has become even more inflated by hype than other AI technologies. The result is that the field known as deep learning, based primarily around neural networks, has become subject to totally unrealistic expectations and come to be seen as a panacea for solving many of the challenges associated with AI in general.

Deep learning arranges models as a cascade of layers, where each takes input from the one below and passes output to the one above, allowing inferences to be made and continually tested against a data set as the weights of the inter-layer links are adjusted. Yet while it is a powerful method with great potential, it has limitations that call for it to be combined with other techniques that pre-process the data into a form that will yield the best results.
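As a rough illustration of that cascade, a minimal sketch in Python (illustrative only, with made-up layer sizes, and not code from any of the projects discussed here) shows each layer consuming the output of the one below through a set of weighted links:

```python
# Minimal sketch of a layer cascade: each layer applies weighted links to the
# output of the layer below and passes the result upwards. Layer sizes are
# hypothetical; training would adjust the weights against a data set.
import numpy as np

rng = np.random.default_rng(0)

# Three stacked layers: 4 inputs -> 8 hidden -> 8 hidden -> 1 output.
weights = [rng.normal(size=(4, 8)),
           rng.normal(size=(8, 8)),
           rng.normal(size=(8, 1))]

def forward(x):
    for w in weights:            # cascade: each layer feeds the one above
        x = np.tanh(x @ w)       # weighted links plus a non-linearity
    return x

prediction = forward(rng.normal(size=(1, 4)))   # inference on one sample
print(prediction)
```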

There is also no independent proof we have seen that deep learning performs better than established statistical regression techniques for identifying patterns in data sets, beyond perhaps the two established use cases of image recognition and natural language processing. It should be seen as just a more sophisticated form of pattern recognition, admittedly with great potential for refinement and tailoring, in principle, to any application where large data sets are involved. It is just a tool within the AI hierarchy, one that needs complementing with higher-level reasoning and abstraction to deliver real value rather than relying purely on statistical power across massive data sets. We should note that it would take today's deep learning systems many fatal accidents before they learned how to cross a road safely, while a human only needs to be told once, because it is immediately obvious there are no second chances. This is the kind of pragmatism that is often absent from AI discussions.

This, though, is beyond the immediate remit of the LF Deep Learning Foundation, which is to focus on assembling a common core AI infrastructure stack. To this end it has launched the Acumos AI Project to design an open-source framework for building, sharing and deploying AI apps. This will run over a standardized infrastructure stack with all the components needed to set up and run a general AI environment out of the box.

Acumos will package established toolkits such as TensorFlow and scikit-learn. The latter is a set of off-the-shelf algorithms for recognizing patterns, such as random forests and logistic regression, while TensorFlow is more of a low-level library providing the building blocks for constructing machine learning algorithms, and so offers greater flexibility and scope while requiring more effort.
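The difference in working level can be sketched roughly as follows (a hedged illustration with made-up data, not Acumos code): scikit-learn supplies ready-made estimators that are trained in a couple of calls, while TensorFlow, in the 1.x graph style current at the time of writing, has the developer assemble the model operation by operation.

```python
# scikit-learn: an off-the-shelf algorithm, trained and applied in a few calls.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X_train = np.random.rand(100, 10)                 # hypothetical training data
y_train = np.random.randint(0, 2, size=100)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
print(clf.predict(np.random.rand(5, 10)))

# TensorFlow (1.x graph style): the same kind of model is built from low-level
# operations, giving more flexibility at the cost of more effort.
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 10])        # input features
w = tf.Variable(tf.zeros([10, 1]))                # weights to be learned
b = tf.Variable(tf.zeros([1]))                    # bias term
logits = tf.matmul(x, w) + b                      # assembled operation by operation
```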

Whichever is used, the framework provides an API that enables developers to connect the algorithms together as if they came from the same development team. This frees data scientists to concentrate on tuning their data sets for the problem, while the model trainers can focus on the application without worrying about the underlying AI platform. The framework also draws on relevant non-AI tools such as microservices and containers to package and export production-ready AI applications as Docker images, which help developers and system administrators port applications, including all their dependencies, so that they can run across target machines without further programming effort. Docker achieves this by running each application in an isolated Linux environment called a container.
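The pattern can be illustrated with a generic sketch (the Flask wrapper, the /predict endpoint and the model.pkl path are hypothetical, and this is not Acumos's own packaging format): a trained model is wrapped in a small HTTP microservice, which can then be containerized with Docker together with all of its dependencies.

```python
# Generic sketch of the microservice pattern described above (not Acumos's own
# packaging): a trained, serialized model exposed behind a small HTTP endpoint,
# which can then be baked into a Docker image along with its dependencies.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical path: a model trained and serialized elsewhere.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]   # e.g. {"features": [[0.1, 0.2, ...]]}
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)   # listen on all interfaces inside the container
```

Once built into an image, a service like this runs unchanged on any machine with a container engine, which is precisely the portability the framework is after.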

This highlights how the LF Deep Learning Foundation is, wherever possible, building its framework for developing and deploying AI applications on existing, proven standards, working with established groups and vendors. What is less clear is how it will relate to other AI standards groups already in the field, although it is unique in addressing solely the infrastructure stack.

There is also the IEEE’s P2755 Working Group, but that has a higher-level focus on application areas, defining Robotic Process Automation, autonomics and cognitive processing, as well as ML and AI themselves. There is again a danger here of getting hung up on names, given that cognitive computing is little more than IBM’s term for AI, coined to distinguish its work from the original concept, defined by Alan Turing, of simulating human intelligence, rather than simply performing useful high-level tasks without requiring explicit programming.

There are also groups developing standards or specifications for AI or ML in specific domains, such as the ITU-T Focus Group on Machine Learning for Future Networks including 5G. This was established by ITU-T Study Group 13 at its meeting in Geneva in November 2017, to develop technical reports and specifications for ML in emerging mobile networks, including interfaces, network architectures, protocols, algorithms and data formats. This will overlap with the LF Deep Learning Foundation, given that AT&T for example has an interest in both. AT&T is a major contributor to 5G development and has also, alongside Indian IT outsourcing firm Tech Mahindra, written the initial code for the LF Foundation’s Acumos AI project.

A notable aspect of the project is the involvement of two of the biggest Chinese technology companies, whose strength and commitment mean that the country will vie with the US for leadership in setting the agenda for AI development. One is Beijing-based Baidu, which is already a powerhouse in AI development and will contribute AI/ML code to the LF Foundation project, optimized to exploit Kubernetes elastic scheduling through its own PArallel Distributed Deep LEarning (PaddlePaddle) software. Kubernetes is the platform originally developed by Google for automating deployment of containerized software, which makes it particularly suitable for AI components acting on data.

PaddlePaddle has made a splash here by improving performance, scalability, hardware utilization and robustness through a distributed approach to ML training. Baidu, incidentally, provides the world’s second most heavily used search engine, by dint of holding 76% of the market in China, where Google is blocked, and partly as a result ranks fourth on the Alexa Internet traffic rankings.

Then there is the rather inaptly named Tencent Holdings, based in Shenzhen, the world’s biggest investment corporation and Asia’s most valuable company with a $580 billion market value, best known among western consumers perhaps for its WeChat mobile chat service. It is also investing hugely across the whole spectrum of AI, from natural language processing to autonomous driving, and in November 2017 highlighted its ambitions and growing presence outside China by poaching Microsoft’s speech-processing pioneer Dong Yu, after 20 years of service there, to work in its own new AI branch nearby in Seattle. It also made waves by publishing its “2017 Global AI Talent White Paper” in December 2017, calling for huge global investment in AI education. It argued there were only 300,000 AI researchers worldwide against demand for well over a million, although admittedly without providing clear evidence for either figure, both of which appear somewhat plucked out of a hat.

Tencent is also supporting the LF Deep Learning Foundation by donating its Angel project, a high-performance distributed ML platform for big-data models that it developed jointly with Peking University and that will be incorporated into Acumos.

What the likes of Tencent and Baidu, as well as the other founder members Amdocs, AT&T, B.Yond, Huawei, Nokia, Tech Mahindra, Univa and ZTE, hope to get out of the foundation is a roadmap for AI deployments and some convergence around a common infrastructure stack, with agreed toolsets to reduce the cost of implementation and deployment. The focus is very much on the technology itself; what will not be covered, at least under current plans, are higher-level considerations relating to the risks and societal aspects of AI, which are attracting increasing interest. That will be the remit of other initiatives, such as the Partnership on AI to Benefit People and Society, founded in 2016 by Amazon, Google/DeepMind, Facebook, IBM and Microsoft and joined soon afterwards by Apple.
