Drive for explainable AI risks being just grandstanding

The drive for AI and machine learning models that can explain their reasoning and show some awareness of their own behavior or performance has gained a lot of momentum this year, with the USA’s Darpa (Defense Advanced Research Projects Agency) the latest to add its considerable weight to the movement. This is based on the fear that AI progress could escalate and get out of hand, to the point where decisions and actions are no longer accountable, not least because no human could understand the reasoning behind them. There is also concern over various forms of bias that can be enshrined in AI systems without any conscious intent by the developers or data scientists involved.

On the face of it, there is little not to like about projects such as Darpa’s new Artificial Intelligence Exploration program, which the agency says will invest in new AI concepts, including what it calls “third wave” AI capable of contextual adaptation and of explaining its decisions in ways that make sense.

Presumably, the first wave includes expert systems that essentially just capture human expertise as static rules, while the second wave introduced machine learning and adaptation but without any internal feedback on the processes followed or the conclusions reached. Darpa used the example of a machine vision system identifying a cat, explaining its decision by presenting labelled pictures of fur, paws and whiskers that between them indicate, beyond reasonable doubt, that the whole object is a small feline.
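
As a loose illustration of that style of explanation, the sketch below combines hypothetical part-detector scores into a plain-language justification; the parts, confidence values and threshold are invented for the example and do not come from Darpa’s work.

    # Illustrative sketch: turn hypothetical part-detector confidences into a
    # human-readable explanation of a "cat" classification.
    PART_SCORES = {"fur": 0.92, "paws": 0.88, "whiskers": 0.95}  # made-up confidences
    THRESHOLD = 0.8  # arbitrary cut-off for citing a part as evidence

    def explain_cat(part_scores, threshold=THRESHOLD):
        evidence = [part for part, score in part_scores.items() if score >= threshold]
        overall = sum(part_scores.values()) / len(part_scores)
        return (f"Identified as a cat (confidence {overall:.2f}) "
                f"because the image contains: {', '.join(evidence)}.")

    print(explain_cat(PART_SCORES))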

In that simple context the ability to explain why the system has identified an object makes sense, and the project is worthwhile more widely, provided its context and limitations are recognized. Explanation is not relevant for all domains of AI, however, including some applications of robotics where what matters is that the machine performs the right actions safely, not whether it can justify them.

Such justification is also inappropriate for a large class of applications involving the identification of patterns in data sets, including diagnostics in healthcare. It might seem obvious that a diagnostic system, like a human doctor, should be able to explain why it has reached a certain decision. But the advantage of machine learning in this context lies in its potential to identify signatures of underlying diseases from large, complex sets of metabolic variables derived from blood tests and other sources, including scans.

In such cases, the reasoning might be confined to the observation that a given set of diagnostic data correlated either weakly or strongly with known signatures of type 2 diabetes. It would then be up to human physicians to decide whether to accept that diagnosis and act on it, with no absolute proof. The advantage of machine learning would lie in providing earlier warning of impending conditions, as well as confirmatory evidence, rather than in reasoning.
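
As a rough sketch of what such a weak-or-strong observation could amount to, the example below correlates a patient’s standardized metabolic profile with a reference signature; the variables, values and cut-offs are entirely hypothetical and chosen only to illustrate the idea.

    import numpy as np

    # Hypothetical metabolic variables, in the same order in both vectors:
    # fasting glucose, HbA1c, triglycerides, HDL, insulin (all z-scored).
    T2D_SIGNATURE = np.array([1.8, 2.1, 1.2, -0.9, 1.5])  # made-up reference signature

    def correlate_with_signature(patient_profile, signature=T2D_SIGNATURE):
        """Return the Pearson correlation with the reference signature and a
        crude weak/strong label for human physicians to weigh up."""
        r = np.corrcoef(patient_profile, signature)[0, 1]
        strength = "strong" if r > 0.7 else "weak" if r > 0.3 else "no clear"
        return r, f"{strength} correlation with the type 2 diabetes signature"

    profile = np.array([1.6, 1.9, 0.8, -0.7, 1.1])  # illustrative patient values
    print(correlate_with_signature(profile))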

This caveat would apply to recent findings from a number of laboratories that machine learning can be applied to the analysis of compounds contained in human breath as a new way of identifying signatures of disease. Smell detection has been part of Chinese medicine for centuries, but given the limitations of the human olfactory system and its susceptibility to random factors, results have been mixed.

Machine learning can take advantage of instruments such as gas chromatography-mass spectrometers capable of identifying and quantifying thousands of volatile organic compounds in exhaled air, then matching the resulting signatures against patterns known to be associated with various underlying conditions. While this particular technique is unlikely to reach the clinic for some years and might prove controversial, it could potentially provide rapid preliminary diagnoses suggesting further tests, and again justification of the indications would be irrelevant.
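
A minimal sketch of the matching step might look like the following, assuming the instrument’s output has already been reduced to a vector of compound abundances; the compounds and reference patterns are invented for illustration.

    import numpy as np

    # Invented abundance patterns for a few conditions (arbitrary units),
    # each covering the same four volatile organic compounds.
    REFERENCE_PATTERNS = {
        "condition_A": np.array([0.9, 0.1, 0.4, 0.7]),
        "condition_B": np.array([0.2, 0.8, 0.6, 0.1]),
        "healthy":     np.array([0.3, 0.3, 0.3, 0.3]),
    }

    def rank_conditions(breath_sample):
        """Rank reference patterns by cosine similarity to the measured sample."""
        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        scores = {name: cosine(breath_sample, ref)
                  for name, ref in REFERENCE_PATTERNS.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    sample = np.array([0.85, 0.15, 0.5, 0.6])  # illustrative measurement
    for condition, score in rank_conditions(sample):
        print(f"{condition}: similarity {score:.2f}")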

However, there are situations, such as credit scoring and many others in finance, where justification would be desirable. Darpa says its goal is to develop machine learning models that at least maintain their prediction accuracy while being able to lay out the steps taken during the training process. The ultimate aim is that humans should be able to trust, and then manage, emerging AI partners in decision making.

Darpa will pursue several lines of research in parallel, with a focus on achieving various milestones quickly. To do this, it will put projects out to tender with a requirement that feasibility be established within 18 months. Otherwise there is a risk of projects just fizzling out, as has happened so often in the history of AI.

It is fair to say that the tech giants involved in AI, such as Google, IBM and Amazon, now have similar projects of their own, but they need to be more transparent about the possibilities and limitations. Otherwise all they are doing is virtue signaling, a widespread affliction. The whole point of machine learning in many contexts is that it can analyze very large, complex data sets and find patterns far beyond human capacity to understand. The true challenges lie in the bias enshrined in the data, not in the logic employed to analyze it.
