
Met Police’s Robocop desires lead to barbed AI criticism, looming ban?

The UK’s police forces seem pretty keen to jump on board with new technologies, apparently without properly benchmarking them. That is the only reasonable conclusion to draw if the findings from the University of Essex Human Rights Centre are to be trusted, as the researchers found that London’s Metropolitan Police Service’s shiny new facial recognition system is right only about 19% of the time – whereas the Met reckons it’s more like 99.9%.

The new criticism comes only a couple of months after Big Brother Watch started filing Freedom of Information requests and found that the Met’s system was generating false positives 98% of the time – correctly identifying only two people, neither of whom was actually a criminal. Elsewhere, the UK’s Information Commissioner has said that facial recognition lacks a sufficient legal framework, after a citizen challenged South Wales Police’s use of facial recognition. A judicial review is underway in that case.

This new criticism comes as the Met announced it was going to be the first UK police force to use drones to monitor cars on the road. Ostensibly, the drones will be looking for dangerous drivers, but some degree of scope-creep seems all but inevitable. The first trial will consist of one drone, but given the success enjoyed by French police in Bordeaux, who have had such a system since 2017, we anticipate a lot more Met drones being deployed in the coming years.

The police’s desire to embrace technology to tackle crime will see AI and drones applied to other uses too, with surveillance being one of the most immediate applications. However, there is rising opposition to the surveillance capitalism that the web giants have monetized so successfully, and it seems logical that some of this sentiment will start being directed at police forces.

We saw how San Francisco moved to nip facial recognition in the bud, banning city police and other municipal agencies from using the technology on its citizens. That discussion touched on the Amazon Rekognition system that managed to falsely match 28 members of Congress to criminal mugshots, but was also apparently good enough to be used by a number of police forces in the US.

So then, it seems almost inevitable that, if left to their own devices, the police are shortly going to start strapping these live or automatic facial recognition (LFR/AFR) systems to drones, cars, horses, or perhaps even the body-worn cameras that are becoming increasingly popular. It will start with one specific application, but if left unchecked, that dragnet is going to get wider and wider.

Taser, one of the premier suppliers of body-worn camera systems, is looking to create a database of all video evidence that could be retroactively scanned for suspects – using machine learning to make the files more easily searchable. China’s enthusiasm for LFR-enabled wearable glasses has been covered with glee by Western media outlets, but the Taser system could prove just as intrusive.

So, has San Francisco started what could become a global trend? Will London follow suit, setting the tone for the rest of the UK and perhaps Europe? Would it be an outright ban, or a more accommodating approval process that mandates oversight and some minimum level of competence?

At this stage, all those answers are unclear, but there is a palpable level of public hostility to these sorts of technologies – a sense that they are overstepping the divide between the internet and the tangible world, and that because of this, the public has no way to meter their use. As in Toronto, so in London – one can’t simply opt out of city life once the smart city technology has become too authoritarian.

Returning to the latest findings, the University of Essex was granted access to six of the live trials being run by the Met, in which NEC’s NeoFace technology was being used to find persons of interest. Notably, NEC also sells the system to marketers, advertisers, casinos, and retailers, who use it to track people as they interact with their surroundings, in order to sell to them more effectively and to spot potential troublemakers on the premises.

The researchers had access from June 2018 to February 2019, and said that the Met officers were too keen to stop potential suspects before the NeoFace results could be properly checked. Given the Met’s ongoing trouble with ‘stop and search’ powers, which have consistently been found to carry controversial racial biases, perhaps a tool like NeoFace could help alleviate those concerns. However, there’s a sizable body of evidence showing that AI-based tools often have their own racial and gender biases, which would rather complicate matters.

Amazingly, the watchlists that the trials were using were often out of date, and the researchers said it was highly possible that the Met’s usage would be challenged and ruled unlawful, should it ever make its way to court, because there is no explicit legal authorization for the use of LFR in domestic law. The lack of public guidance means that it likely won’t satisfy the ‘in accordance with the law’ requirement, if challenged on human rights grounds.

Of the 42 people identified in the trials, 22 were stopped, but only 8 were actually being sought by the police. What’s worse, some of the 8 were wanted for crimes that had already been dealt with, and yet were still arrested for an offence that the researchers said would not be considered serious enough to tackle using facial recognition. Based on those figures, there’s no way to reach the Met’s 1 in 1000 claim, and the force does not want to disclose how it arrives at that number.
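The gulf between the two figures comes down to what you count. The researchers’ measure is straightforward: of the 42 matches the system generated, only 8 were verified as correct, which works out at roughly 19% (8 ÷ 42 ≈ 0.19). To get anywhere near 99.9%, the Met would presumably need a far larger denominator – our assumption would be something like dividing false alerts by every face the cameras scanned, rather than by the number of alerts actually generated – but since the force won’t disclose its method, that remains a guess.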

As the researchers put it, “the mixing of trials with operational deployments raises a number of issues regarding consent, public legitimacy and trust – particularly when considering differences between an individual’s consent to participate in research and their consent to the use of technology for police operations. For example, from the perspective of research ethics, someone avoiding the cameras is an indication that they are exercising their entitlement not to be part of a particular trial. From a policing perspective, this same behavior may acquire a different meaning and serve as an indicator of suspicion.”

As for the Met’s response, Deputy Assistant Commissioner Duncan Ball told The Register “we are extremely disappointed with the negative and unbalanced tone of this report. The MPS maintains we have a legal basis for this pilot period and have taken legal advice throughout. We will again review this once we have the outcome of the South Wales judicial review. This is new technology, and we’re testing it within a policing context. The Met’s approach has developed throughout the pilot period, and the deployments have been successful in identifying wanted offenders. We believe the public would absolutely expect us to try innovative methods of crime fighting in order to make London safer. We fully expect the use of this technology to be rigorously scrutinized and want to ensure the public have complete confidence in the way we police London.”
