Google has responded to criticism that machine learning and AI still fall well short of human cognition despite huge progress in specific domains such as board games, vision and natural language processing. It has published an academic paper outlining ideas for the next generation of machine learning algorithms, ones capable of more general reasoning outside the box in which they were trained. This sounds like an expansion of current unsupervised learning techniques, and in a sense it is, but it incorporates a new graph-based foundation to support inductive reasoning.
The paper, then, is a response to the criticism that AI is too domain specific, rather than to concerns over ethics or data bias. Bias is in fact discussed in the paper, but in a different context, as a positive force. This is inductive, or learning, bias: the set of assumptions a system uses to predict outputs for inputs it has not seen before. This takes it closer to the realm of human learning, although it is no more than an extrapolation of what current machine learning can achieve. Even supposedly unsupervised learning systems are limited in that, while they can to an extent move between domains, they still need guidance over assumptions as well as over the relevance of particular conclusions.
Current systems are very good at detecting signatures of patterns in complex data sets, and at intelligently searching domains to much greater depth than humans are capable of, as in games such as Go or chess. The pattern recognition capability can be transported from, say, spotting patterns indicative of an impending cyberattack to recognizing symptoms of a disease from a scanned image of a diagnostic test. At a mathematical level, both problems are the same: pattern or signature recognition.
But current systems are not capable of inductive reasoning except in very simple artificial cases. Inductive reasoning involves generalizing from a small set of examples to an expanded domain that may be only partially related. Unlike deductive reasoning, where conclusions are reached with certainty, inductive reasoning involves approximations at hopefully increasing levels of probability, refined in the light of feedback or accumulating evidence. This often makes use of Bayesian inference, the application of Bayes' theorem of conditional probability, which determines how likely it is that a given set of assumptions led to an observed outcome.
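As a minimal sketch of the Bayesian updating described here, the snippet below applies Bayes' theorem to a single hypothesis and a single piece of evidence. The numbers are illustrative only, not drawn from the paper.

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where P(E) is expanded
# by the law of total probability over H and not-H.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Illustrative values: prior belief 0.3, P(E|H) = 0.9, P(E|not H) = 0.2.
p = posterior(0.3, 0.9, 0.2)
print(round(p, 3))  # 0.659 - observing E raises belief from 0.3 to ~0.66
```

Feeding the posterior back in as the next prior is what makes the probability "increase in the light of accumulating evidence".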
The inference part involves juggling different assumptions until the one that is most likely and best fits the observed outcome is found. The idea of induction has roots in mathematics, where it is applied to a broad class of problems in which the aim is to work out, from a small set of examples, a general rule applicable to all cases. There it generally leads to a precise rule with 100% certainty of being right, as in determining the formula for the sum of a sequence of numbers by considering just a few simple cases. In machine learning, the principle has to be applied more loosely, in a fuzzy context, and the challenge lies in establishing a framework that is flexible enough to address a wide variety of domains yet structured enough to facilitate the necessary inference calculations.
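The sum-of-a-sequence example can be sketched in a few lines: guess the closed form for 1 + 2 + … + n from a few small cases, then check the conjecture against further cases.

```python
# Direct computation of 1 + 2 + ... + n.
def sum_sequence(n):
    return sum(range(1, n + 1))

# Closed form conjectured from the small cases n = 1, 2, 3
# (sums 1, 3, 6), which fit n * (n + 1) / 2.
def conjectured_formula(n):
    return n * (n + 1) // 2

# The conjecture holds for every case we test, and a proof by
# induction would establish it with certainty for all n.
for n in range(1, 20):
    assert sum_sequence(n) == conjectured_formula(n)
print("conjecture verified for n = 1..19")
```

In machine learning the analogous step, generalizing a rule from a few examples, comes with probability attached rather than certainty.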
The Google paper, titled ‘Relational inductive biases, deep learning, and graph networks,’ does not discuss methods of inference such as Bayes but concentrates on the model in which they operate, advocating the overlaying of so-called Graph Networks (GNs), comprising nodes and the lines, or edges, joining them, which can be directed and associated with values along three dimensions: entities, relations and rules. The entity defines what a node or element represents, such as a physical object and its size or mass.
The relation then associates these objects, perhaps on the basis of whether one is larger or heavier than another by some factor. Finally, the rule defines functions that map entities and relations within, or even between, domains, perhaps sifting them on the basis of size in this case. This forms the basis of inductive reasoning, with the structure to move forward, potentially into new domains, but also the flexibility to iterate and come back to adjust rules, relations and even entities. In this way starting assumptions can be modified until they fit better across a wider set of domains.
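The entities/relations/rules triple described above can be sketched as a minimal data structure. This is a hypothetical illustration of the idea, not code from the paper; the names and the mass attribute are invented for the example.

```python
from dataclasses import dataclass

# Entity: what a node represents, here a physical object with a
# single illustrative attribute (mass).
@dataclass
class Entity:
    name: str
    mass: float

# Relation: a directed edge associating two entities.
@dataclass
class Relation:
    sender: Entity
    receiver: Entity

# Rule: a function mapping entities and relations to new values,
# here comparing the two ends of an edge by mass.
def heavier_than(rel: Relation) -> bool:
    return rel.sender.mass > rel.receiver.mass

a, b = Entity("a", 5.0), Entity("b", 2.0)
print(heavier_than(Relation(a, b)))  # True
```

In a full graph network these rule functions are learned and applied repeatedly across all edges and nodes, which is what allows the same structure to be reused, and adjusted, across domains.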
Google considers that GNs are a better model for simulating human cognition because we assume that the world is composed of objects and relations between them. The entities and relations that GNs operate on can correspond more closely to things humans understand, such as physical objects, providing a basis for interpretation, analysis, visualization and extrapolation.
One notable point made in the paper is that the current obsession with neural networks is misplaced, and that they can be dispensed with in machine learning where they do not suit a particular use case or situation. Instead, relationships between objects can be modelled using a variety of machine learning approaches, including not just those in vogue, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs) and long short-term memory (LSTM) systems, but others too.
Another candidate is mathematical set theory, involving coherent sets of objects defined by some category or relation. This has already been investigated in the context of machine learning, with several papers published going back at least to 2008, when an IEEE paper discussed the application of rough set theory. The idea here is to draw conclusions or inferences from incomplete or uncertain data, applying methods such as Bayesian inference.
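A small sketch shows how rough set theory represents uncertainty. Given a partition of a universe into indistinguishable blocks and a target set X, the lower approximation contains elements certainly in X and the upper approximation those possibly in X; the gap between them is the region the data cannot resolve. The sets below are illustrative, not from the 2008 paper.

```python
# Rough set approximations over a partition of a universe.
def approximations(partition, target):
    lower, upper = set(), set()
    for block in partition:
        if block <= target:   # block lies entirely inside X: certain
            lower |= block
        if block & target:    # block overlaps X: possible
            upper |= block
    return lower, upper

# Universe {1..5} partitioned into indistinguishable blocks.
partition = [{1, 2}, {3, 4}, {5}]
target = {1, 2, 3}

lo, up = approximations(partition, target)
print(lo, up)  # {1, 2} {1, 2, 3, 4}
```

Element 3 falls in the boundary region: its block {3, 4} overlaps the target but is not contained in it, so the data cannot say for certain whether it belongs.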
But research on rough sets has been eclipsed by neural networks over the last few years, and Google argues that such methods should be embraced more fully in the mix.
This should help bring AI closer to the fundamental mark of human intelligence, defined as the ability to make “infinite use of finite means”, whereby a small set of elements such as words can be composed productively in limitless ways, as in new sentences and ultimately, say, works of literature. The underlying principle, which Google believes GNs come closest to realizing, is combinatorial generalization: constructing new inferences, predictions and behaviors from known building blocks.
Needless to say, though, GNs are not a panacea either, and other foundational approaches will still be needed for some problem domains.