Machine-learning decision making in question as law begins adoption

A series of announcements from law firms embracing AI-augmented systems has raised questions about accountability and adoption, spurred in part by the opaque nature of machine-learning operations. With the EU considering a law that would require companies to provide explanations for their machines’ autonomous decisions, AI developers are entering unknown territory.

There’s a distinct irony in an AI-assisted lawyer being undone by a law designed to hold it accountable, but what the EU is debating is a means of examining the decision-making process used by these neural networks. However, the complexity of these systems makes such a requirement potentially impossible to meet, as their machine-learning functions have essentially programmed themselves without direct human intervention.

Of course, the developers begin with a process, and can tweak the system if it starts coming up with incorrect answers, but broadly speaking, once these systems begin generating consistently correct answers, they become entities unto themselves.
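
To make that loop concrete, here is a minimal sketch in Python (using scikit-learn) of the train-evaluate-tweak cycle described above. The dataset, network sizes and acceptance threshold are hypothetical stand-ins, not any real legal-AI system.

# Illustrative only: a toy train-evaluate-tweak loop on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for hidden in (8, 16, 32, 64):                 # the developer's "tweaks" between runs
    model = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000,
                          random_state=0)
    model.fit(X_train, y_train)                # the network fits its own weights
    accuracy = model.score(X_test, y_test)     # the developer only sees the output
    if accuracy >= 0.85:                       # consistently correct answers?
        break

print(f"accepted a model scoring {accuracy:.2f}")
# The accepted behaviour lives in learned weights that the developer only
# influenced indirectly, by changing knobs between runs.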

While law-tech has already drawn some ire, including a couple of spoof press releases, the legal profession is just one of many that are well positioned to be highly automated – and Riot has recently published a paper on the kinds of technology that will power this transition.

However, law is perhaps the most influential sector poised for high levels of automation – with its heavy bureaucracy and document handling on the one hand, and the life-changing consequences of both its successes and its mistakes on the other. What assurance does the citizenry have that an AI-assisted judge and jury can make the right decisions?

This is where AI and law clash – while AI and ML are ways of performing a process more efficiently, both appear impenetrable to the common man. One wonders what checks and balances could be implemented to ensure that a legal system greatly augmented by AI-based processes would still interpret the law correctly, especially if it is not possible to understand how an AI-based system arrives at its conclusions.

As the MIT article notes (an excellent read), the researchers are essentially responding to an output, correcting the code or adjusting an algorithm to achieve the desired outcome. However, while they can influence these components, the inner workings of the neural networks used in their machine-learning applications are hidden from them.
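
To illustrate why those inner workings stay hidden, the toy sketch below (Python with scikit-learn, on an invented task) trains a small network and prints a slice of its learned weights: the numbers fully determine its predictions, yet reading them offers nothing like a human explanation.

# Illustrative only: the task and network are invented stand-ins.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(10, 10), max_iter=2000, random_state=0)
model.fit(X, y)

# The trained network's entire "reasoning" is held in these weight matrices...
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {w.shape}:")
    print(np.round(w[:2, :5], 2))    # ...just unlabelled floating-point numbers

# They determine every prediction, but nothing here maps back to a
# human-readable reason for any single decision.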

When discussing an Nvidia-powered self-driving car, MIT says that “the system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.”

The article asks whether AI-based deep learning should wait until ways are found to make these technologies more understandable to their creators and more accountable to their users – and in the context of law, that’s a very sensible line to take. Another talking point we have encountered in our research is that of military AI adoption, and the issue of how to hold a killer robot accountable for breaking the rules of engagement.

Using the example of the Mount Sinai Hospital’s Deep Patient project in New York, which began crunching 700,000 patient records, the article noted that Deep Patient’s creators couldn’t explain how it had begun to anticipate the onset of psychiatric disorders like schizophrenia – conditions that are “notoriously difficult for physicians to predict.” As its team leader Joel Dudley put it, “we can build these models, but we don’t know how they work.”

Which then raises the issue of accountability, especially when you consider how a hospital might defend itself against a lawsuit brought by a family that lost a child to a misdiagnosis by an AI-based system. The question is to whom you assign accountability – the doctor who trusted the AI system, the developer who designed it, or the system itself?

Which brings us to the recent spate of law-focused AI-based systems. The most prominent is DoNotPay, a free service that uses an AI-based chatbot interface to help people challenge and overturn parking tickets. Developed by Joshua Browder, a second-year Stanford student, DoNotPay received a lot of coverage last year after successfully challenging 160,000 of the 250,000 tickets that users submitted.

The app asks questions to determine whether an appeal is possible, then automatically carries out the appeal process if it judges the appeal likely to succeed. With a 64% success rate, DoNotPay has appealed over $4m of parking tickets, and Browder plans to expand it to work in Seattle, as well as moving into flight-delay compensation schemes and refugee and asylum claims.
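
DoNotPay’s actual code is not public, but the pattern described above – ask questions, decide whether an appeal is worth filing, then draft it – can be sketched as a simple rules-based flow. The questions, grounds and letter template below are invented purely for illustration.

# Hypothetical sketch of a question-driven ticket-appeal triage flow.
# None of this reflects DoNotPay's real questions, rules or wording.
from typing import Optional

GROUNDS = {
    "signs_unclear": "the signage at the location was missing or unclear",
    "meter_broken": "the parking meter was out of order",
    "emergency": "the driver was responding to a medical emergency",
}

def triage(answers: dict) -> Optional[str]:
    """Return an applicable appeal ground, or None if no strong case exists."""
    for ground, applies in answers.items():
        if applies and ground in GROUNDS:
            return ground
    return None

def draft_appeal(ticket_id: str, ground: str) -> str:
    return (f"Re: ticket {ticket_id}\n"
            f"I wish to appeal this penalty because {GROUNDS[ground]}. "
            f"I request that the ticket be cancelled.")

answers = {"signs_unclear": True, "meter_broken": False, "emergency": False}
ground = triage(answers)
if ground:                                   # only file appeals likely to succeed
    print(draft_appeal("PCN-12345", ground))
else:
    print("No strong ground for appeal found.")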

Facebook has also experimented with chatbots, with a long-term view of monetizing its Messenger service by letting businesses use it as a platform to interact with customers – for things like customer service, bill paying, or even sales. However, its first forays were poorly received, with chatbots for CNN and the WSJ prompting online backlash for spamming users and not being particularly intelligent.

For Facebook, its huge user base represents ample opportunity for additional revenue streams on the back of these chatbots, but it appears to have some way to go before users find them sufficiently useful.

Other law-focused bots include LawDroid, which facilitates the registration and incorporation of companies via Facebook Messenger. The system was built by Foresight Legal, a virtual law firm that operates in a distributed manner and had previously offered prenuptial contracts via LegalZoom’s portal and marketing. It uses Python to handle the document-authoring functions that Messenger can’t provide, but it is rules-based and doesn’t use machine learning – although v2.0 is scheduled to adopt that kind of feedback mechanism.
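
As a rough illustration of the document-authoring step that such a bot hands off to Python, the sketch below validates the answers a chatbot has collected and fills a template – the fields and wording are invented, and do not reflect LawDroid’s actual documents or code.

# Hypothetical, rules-based document assembly: collect answers, validate, fill.
from string import Template

INCORPORATION_TEMPLATE = Template(
    "ARTICLES OF INCORPORATION\n"
    "The undersigned hereby forms a corporation named $company_name, "
    "with its principal office at $address, authorised to issue "
    "$shares shares, incorporated by $incorporator."
)

REQUIRED_FIELDS = ["company_name", "address", "shares", "incorporator"]

def build_document(answers: dict) -> str:
    """Validate the chatbot's collected answers, then fill the template."""
    missing = [field for field in REQUIRED_FIELDS if not answers.get(field)]
    if missing:
        raise ValueError(f"still need answers for: {', '.join(missing)}")
    return INCORPORATION_TEMPLATE.substitute(answers)

answers = {"company_name": "Example Widgets Inc.", "address": "1 Main St",
           "shares": "1000", "incorporator": "J. Smith"}
print(build_document(answers))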

Elsewhere, BakerHostetler (BH) made the news for licensing Ross, an AI system built on IBM’s Watson, to support its bankruptcy practice. Ross Intelligence CEO Andrew Arruda said that BH wasn’t the first law firm to license Ross – billed as “the world’s first artificially intelligent attorney” – and that other announcements are due soon.

Learning from the queries it is asked, Ross is primarily used as a research tool – making the trawl through legal records, rulings, and precedents a much more time-efficient process. The obvious advantage is that more staff time is freed up to work on each project, and for the firm using Ross, the tool should become more useful and knowledgeable as time goes by – better anticipating and answering questions.
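
Ross’s Watson-based pipeline is proprietary and far more sophisticated, but the general pattern of a research tool that learns from its queries – rank documents against a question and let lawyer feedback nudge future rankings – can be sketched as below; the corpus and the feedback rule are invented for illustration.

# Illustrative only: TF-IDF retrieval with a crude relevance-feedback boost.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Chapter 11 bankruptcy filing procedures and creditor committees",
    "Precedent on automatic stay exceptions in bankruptcy proceedings",
    "Employment contract dispute over non-compete clauses",
]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(corpus)
boost = [0.0] * len(corpus)            # accumulated feedback per document

def search(query: str):
    """Rank the corpus against a query, folding in past feedback."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    ranked = sorted(range(len(corpus)),
                    key=lambda i: scores[i] + boost[i], reverse=True)
    return [(corpus[i], round(scores[i] + boost[i], 3)) for i in ranked]

def record_feedback(doc_index: int, helpful: bool) -> None:
    """Crude 'learning from queries': boost documents lawyers found useful."""
    boost[doc_index] += 0.1 if helpful else -0.1

print(search("automatic stay in bankruptcy")[0])
record_feedback(1, helpful=True)       # over time, useful precedents rank higher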
