Police forces need a stronger case for AI, waiting on progress

Like other sectors, policing has been seduced by the promise of AI, with mixed results so far, and now faces mounting pressure not just on the ethical front but also to justify investment in the technology at a time of budgetary constraint. Such concerns have been articulated in a UK report, published by the Royal United Services Institute for Defence and Security Studies (RUSI) and the University of Winchester, which calls "as a matter of urgency" for much clearer guidance and codes of practice setting out the constraints under which police forces should trial predictive algorithmic tools. This mirrors similar concerns expressed elsewhere, especially in the US, where the most advanced deployments are.

The report focuses specifically on machine learning, which is welcome in the sense that the term AI has become so overused, diffuse and all-embracing as to be semantically meaningless. The report argues that a formal body is needed to scrutinize the introduction of ML in policing, ensuring the technologies comply with data protection legislation and respect human rights. But that much is well understood; the more interesting aspects are the call for a much stronger evidence base and the criticism of police forces for adopting a scattergun approach, assuming that ML algorithms as they stand will almost automatically deliver widespread benefits across all departments, from crime prevention to reduced bureaucracy.

This is despite negative feedback from some of the few trials that have been conducted, such as the Welsh police CCTV-based facial recognition system, where 2,279 of 2,470 potential matches, roughly 92%, were incorrect. It is true that the Chinese have reported far better results and that the Welsh system has since improved, but here is the rub: ML holds great promise for policing in many areas yet is often not fit for purpose just yet, and should be assessed in rigorous trials in the same way that new technologies have to be in healthcare.

This comes at a time when law enforcement is attracting huge investment from the ML community and a slew of start-ups. Broadly these efforts encompass six areas: crime prediction, crime prevention, crime detection, crime investigation, incident reporting and back-office paperwork. On the front line, facial recognition has attracted the most interest, with applications in prevention, detection and investigation, and the static CCTV network is being complemented by cameras worn by police officers as they roam, courting further controversy.

The Chinese trials found that CCTV-based recognition suffered from highly variable accuracy depending on the distance and orientation of individuals, as well as the weather and time of day. This led to interest in sunglasses incorporating facial recognition, tested in Zhengzhou and built by Beijing-based LLVision Technology. These offer greater flexibility and accuracy because the wearer can focus on a suspect and gain a closer, more frontal view, but even this system remains susceptible to variable accuracy in the field.

In a trial it was capable of recognizing individuals from a pre-loaded database of 10,000 faces in 100 milliseconds, with accuracy approaching 99%, but levels in practice have been considerably lower, especially in rain or poor lighting. Even so, its use is being extended to other cities, including parts of Beijing. The glasses feed data back to local mobile units and from there to centralized databases.
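As a rough illustration of what such gallery matching involves (a generic sketch, not the LLVision system, whose internals are not public), the snippet below compares a probe face embedding against a pre-loaded watchlist and only reports a hit above a similarity threshold. The embedding size, gallery size and threshold are all assumptions; degraded images in rain or poor light simply push similarities below the threshold, which is one way field accuracy falls off.

```python
# Hypothetical sketch of watchlist matching: compare a probe face embedding
# against a pre-loaded gallery and report a match only above a threshold.
# Embedding dimension, gallery size and threshold are illustrative only.
import numpy as np

EMBEDDING_DIM = 512          # typical size for modern face embedding models
GALLERY_SIZE = 10_000        # the pre-loaded watchlist mentioned in the trial
MATCH_THRESHOLD = 0.6        # below this cosine similarity, report "no match"

rng = np.random.default_rng(0)

# Stand-in gallery: in a real system these vectors would come from a face
# embedding network applied to enrolment photos, L2-normalized here so that
# a dot product equals cosine similarity.
gallery = rng.normal(size=(GALLERY_SIZE, EMBEDDING_DIM))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def match(probe_embedding: np.ndarray) -> tuple[int, float] | None:
    """Return (gallery_index, similarity) for the best match, or None."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    similarities = gallery @ probe            # one dot product per identity
    best = int(np.argmax(similarities))
    if similarities[best] < MATCH_THRESHOLD:  # blurred or off-angle faces land here
        return None
    return best, float(similarities[best])

probe = rng.normal(size=EMBEDDING_DIM)
print(match(probe))   # most random probes fall below the threshold
```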

Mobile camera technology has also been widely deployed in the US, although not so much for immediate recognition of potential suspects. There, body cameras made by Axon, best known for its Taser stun gun (often described as non-lethal, although less lethal is the more accurate term), are being used by various police forces, and the company's AI division has been focusing on effective consolidation of the resulting data in cloud storage.

Axon manages well over 5 petabytes of video captured by cameras deployed by more than half of US police forces, with the stated objective as much to hold officers to account, since their activities are being monitored, as to combat crime. ML algorithms are being employed to catalogue the footage and make it searchable, and that in itself has prompted fears over privacy and data protection.
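One way to picture that cataloguing step, purely as a generic sketch rather than Axon's actual pipeline, is as ML-generated tags (transcribed keywords, detected objects, times) attached to each clip and held in an inverted index so footage can be searched. The clip IDs and tags below are invented for illustration.

```python
# Generic sketch of making video metadata searchable: each clip gets a set of
# ML-generated tags, stored in an inverted index mapping tag -> clip IDs.
# Not Axon's pipeline; clip IDs and tags are invented.
from collections import defaultdict

index: dict[str, set[str]] = defaultdict(set)

def catalogue(clip_id: str, tags: list[str]) -> None:
    """Record ML-generated tags for a clip so it can be found later."""
    for tag in tags:
        index[tag.lower()].add(clip_id)

def search(*tags: str) -> set[str]:
    """Return clips carrying all of the requested tags."""
    sets = [index[t.lower()] for t in tags]
    return set.intersection(*sets) if sets else set()

catalogue("cam07_2018-04-02T2114", ["traffic stop", "night", "firearm"])
catalogue("cam12_2018-04-03T0930", ["foot pursuit", "daytime"])
print(search("traffic stop", "firearm"))   # {'cam07_2018-04-02T2114'}
```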

To counter this, Axon, which changed its name from Taser to avoid being associated exclusively with the stun gun, launched an AI ethics board in April 2018 to guide the use of its products, in particular addressing how its facial recognition technology might affect community policing.

There are less contentious areas where ML is being introduced and showing potential to improve police performance and efficiency. One is incident reporting, where a speech recognition system from Nuance Communications called Dragon Law Enforcement has been deployed in the US so that officers can dictate details of incidents instead of writing them down or keying them in.

The system has roots going back 40 years to speech recognition pioneer Ray Kurzweil, who was among the first to develop speech processing algorithms under the banner of AI. Improvement was slow for years but has accelerated over the last decade as processing power has become sufficient to run deep ML models and execute the relevant algorithms in real time.

US police forces have reported a threefold increase in reporting speed compared with typing, along with 99% recognition accuracy. Although such accuracy has been achievable for some years, it used to require clear enunciation and deliberate gaps between words; ML algorithms have enabled the system to cope with more careless, continuous everyday speech.

Crime prediction is another area where ML is being applied and where further evidence of efficacy is needed. One tool in this area, CrimeScan, was developed at Carnegie Mellon University in the US on the premise that violent crime behaves like a communicable disease, tending to break out in geographic clusters. Furthermore, it can develop from lesser crimes, for example through rising tensions between rival gangs or different ethnic groups living in close proximity.

The software runs off an expanding database of reports of lesser crimes, such as simple assault, vandalism and disorderly conduct, together with emergency calls about incidents such as shots fired. It also incorporates trends in both minor and serious crime, including seasonal and diurnal variations. Its developers claim it can predict both the progression of violent crime epidemics and their outbreak from more minor precursors, but again larger scale evidence is needed.
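Purely to illustrate the general idea, and not CrimeScan's actual model, the sketch below scores each geographic cell by how far its recent count of precursor incidents exceeds a seasonal baseline; the cell IDs, counts and alert threshold are all invented.

```python
# Illustrative sketch (not CrimeScan's actual model): score each geographic
# cell by how far its recent count of precursor incidents (minor assaults,
# disorderly conduct, shots-fired calls) exceeds its seasonal baseline.
from collections import Counter

# (cell_id, week) -> number of precursor incidents reported (invented data)
recent = Counter({("cell_14", "w06"): 11, ("cell_14", "w07"): 17,
                  ("cell_09", "w06"): 3,  ("cell_09", "w07"): 4})

# Seasonal per-week baseline for each cell at this time of year (assumed)
baseline = {"cell_14": 6.0, "cell_09": 3.5}

def risk_score(cell: str, week: str) -> float:
    """Ratio of recent precursor activity to the cell's seasonal norm."""
    return recent[(cell, week)] / baseline[cell]

# Cells whose precursor activity is well above their norm are flagged as
# candidates for an emerging cluster of violence.
alerts = {c for c in baseline if risk_score(c, "w07") > 2.0}
print(alerts)   # {'cell_14'}
```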

There are other cases where AI-based systems perform worse than, or no better than, humans, even though they have been deployed in the conviction that they would be more accurate. One example is Compas, a system developed by Equivant in the US for assessing the risk of convicted criminals re-offending. One study found the technology no better at predicting an individual's risk of recidivism than people selected at random.

Furthermore, it appeared biased along racial lines, with African Americans almost twice as likely as white defendants to be labeled higher risk, even among those who did not go on to re-offend, while whites were much more likely to be mistakenly identified as lower risk. This is an example of false generalization, one cause of bias in ML, and it was identified in the UK report as an issue that needs addressing in police use of AI.
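The disparity reported in such studies is usually expressed as differing error rates across groups. The sketch below, using invented records rather than real Compas data, shows how a false positive rate (labeled high risk yet no re-offence) would be computed per group to surface that kind of bias.

```python
# Minimal sketch of the disparity at issue: compare false positive rates
# (labeled high risk but did not re-offend) across groups. The records are
# invented; the point is the metric, not the data.
from dataclasses import dataclass

@dataclass
class Record:
    group: str            # demographic group
    predicted_high: bool  # tool labeled the person high risk
    reoffended: bool      # observed outcome over the follow-up period

records = [
    Record("A", True, False), Record("A", True, True),   Record("A", False, False),
    Record("B", False, True), Record("B", False, False), Record("B", True, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in the group who were labeled high risk."""
    non_reoffenders = [r for r in records if r.group == group and not r.reoffended]
    flagged = [r for r in non_reoffenders if r.predicted_high]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(g, round(false_positive_rate(g), 2))   # unequal rates indicate bias
```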
