
AI could be used to detect rogue officers, says academic

Artificial intelligence could be used to “track down rogue” officers but the technology is going unused due to a lack of “political will”, an academic has said.

Professor Karen Yeung, who researches AI at Birmingham Law School, told the Justice and Home Affairs Committee the technology could be used to look at police complaints data to identify “rogue” officers.

“Why are we not collecting data, which is perfectly possible now, about individual police behaviour?” she said.

“We might have tracked down rogue individuals who were prone to committing violence against women. We have the technology.”

She said that "whole swathes of society" - including those in public authority such as the police - were untouched by AI tools which were mainly used as prediction tools "against poor people."

She said that AI firms were not building criminal risk assessment tools to identify insider trading or to predict who is going to commit the next corporate fraud, because the industry does not look for those kinds of crime and there is no high-volume data on them.

“We have datasets about how welfare recipients behave, and how people who come into contact with police have certain kinds of encounters, but we don't have large datasets about white collar criminals, for example, or financial. And so we're not building prediction tools in relation to affluent individuals; we don't collect data about them,” she told Police Oracle.

She said she was talking not just about the police, but about public authority data more broadly. “The public authority data is much more extensive for people who have encounters with public authorities. So for social welfare recipients, there's going to be a much bigger dossier on a social welfare recipient than on someone who doesn't get routine social welfare. And because there's more data, you can use that data to create predictive tools.

"There's no reason why we can't collect complaints data and look for outliers. That's what we do, and all sorts of other contexts. We do in relation to potential welfare fraudsters, for example.

“So we could do that about any kind of behavioural data you'd like if we collected it. It's not rocket science.”

Professor Yeung added: “If we had complaints data at the level of individuals then we could identify outliers and we wouldn't actually need terribly sophisticated algorithms to do that.”
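By way of illustration, the sketch below applies the kind of unsophisticated outlier check Professor Yeung describes to a toy table of complaint counts. The officer identifiers, the figures and the two-standard-deviation threshold are all assumptions invented for the example; nothing here reflects a real police dataset or any system in use.

```python
from statistics import mean, stdev

# Hypothetical complaint counts per officer (identifier -> number of complaints).
# All figures are made up purely to illustrate the approach.
complaints = {
    "officer_001": 1,
    "officer_002": 0,
    "officer_003": 2,
    "officer_004": 14,  # unusually high
    "officer_005": 1,
    "officer_006": 3,
    "officer_007": 0,
}

counts = list(complaints.values())
mu, sigma = mean(counts), stdev(counts)

# Flag anyone whose complaint count sits more than two standard deviations
# above the mean -- a deliberately simple outlier rule.
outliers = {
    officer: n
    for officer, n in complaints.items()
    if sigma > 0 and (n - mu) / sigma > 2
}

print(outliers)  # {'officer_004': 14}
```

A real analysis would need to account for exposure, such as length of service, role and volume of public contact, but the basic mechanics are no more elaborate than this.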

The committee also heard from Silkie Carlo, director of Big Brother Watch, who said the Metropolitan Police had achieved just 11 true positive matches over “four or five years” of testing on the streets of London, along with “an awful lot of false positive matches”, after capturing tens if not hundreds of thousands of people’s faces.

She went on to say that even some of the positive matches were of people who were not wanted in connection with any crime but who appeared on databases of people with mental health problems or of protesters.

She said: “Their current rate over the entirety of their employment is 93 per cent false positive matches, so I struggle to see a world in which that could be considered proportionate.”
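For a sense of what those figures imply when taken together, the rough arithmetic below assumes the 93 per cent figure is the share of all alerts that were false positives and pairs it with the 11 true positive matches mentioned earlier; that pairing is an assumption made for illustration, not a calculation presented to the committee.

```python
# Back-of-the-envelope estimate, assuming 93% of all matches were false
# positives and 11 matches were true positives (the combination of the two
# figures is an illustrative assumption).
true_positives = 11
false_positive_share = 0.93

total_matches = true_positives / (1 - false_positive_share)  # roughly 157
false_positives = total_matches - true_positives             # roughly 146

print(f"Implied total matches: {total_matches:.0f}")
print(f"Implied false positive matches: {false_positives:.0f}")
```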

Prof Yeung added that the police did not know how many false negatives the technology had returned because it had only been used in live tests rather than controlled, scientific conditions.

The Met claim they use facial recognition in a lawful and proportionate way.
