The drive for AI and machine learning models that can explain their reasoning and account for their behavior or performance has gained considerable momentum this year, with the USA's Darpa (Defense Advanced Research Projects Agency) the latest to add its considerable weight to the movement. The push is rooted in the fear that AI progress could escalate and get out of hand, to the point where decisions and actions are no longer accountable, not least because no human could understand the logic behind the reasoning. There is also concern over the various forms of bias that can become enshrined in AI systems without any conscious intent on the part of the developers or data scientists involved. So it might seem hard to…