
UK’s House of Lords Aims to Prevent AI in Policing from Becoming a ‘New Wild West’

“What would it be like to be convicted and imprisoned on the basis of AI which you don’t understand and which you can’t challenge?” asked Baroness Hamwee, chair of the Justice and Home Affairs committee.

Speaking as the Lords’ Justice and Home Affairs Committee delivered its report, ‘Technology rules? The advent of new technologies in the justice system’, Hamwee warned that appropriate safeguards must be put in place around the use of AI to ensure that this Kafkaesque scenario never happens.

The report, which follows a ten-month inquiry that heard from experts in AI and law enforcement as well as MPs, broadly acknowledged the potential of these tools to support policing.

However, it also warned that the pace of their deployment and the absence of appropriate safeguards had serious implications for people’s human rights and civil liberties.

The House of Lords report expresses surprise at the “proliferation of AI tools” being used without proper regulation or oversight – particularly by police forces across the country.

Rather than finding the “scrutiny” it considered “essential”, the committee found “a landscape in which new technologies were developing at a pace that public awareness, government and legislation have not kept up with – a new Wild West.”

Hamwee made the point that “computers were not always right”, referencing the Post Office Horizon scandal, which led to the wrongful prosecution of hundreds of subpostmasters for false accounting and fraud.

“We had a strong impression that these new tools are being used without questioning whether they always produce a justified outcome. Is ‘the computer’ always right? It was different technology, but look at what happened to hundreds of Post Office managers,” she said.

To “facilitate scrutiny”, the committee urged the government to create a “mandatory register of algorithms” used by the police and justice system, and to introduce a “duty of candour” on the police.

It also recommended that a national body be established to set “strict scientific validity and quality standards” that new technologies would need to meet.

“The Government must take control. Legislation to establish clear principles would provide a basis for more detailed regulation. A ‘kitemark’ to certify quality and a register of algorithms used in relevant tools would give confidence to everyone – users and citizens,” said Hamwee.

The report also called for mandatory training of technology users and the development of local ethics committees within police forces.

These measures, it said, would ensure full transparency over the police’s use of AI “given its potential impact on people’s lives, particularly those in marginalized communities.”

“We welcome the advantages AI can bring to our justice system, but not if there is no adequate oversight. Humans must be the ultimate decision makers, knowing how to question the tools they are using and how to challenge their outcome,” Hamwee concluded.

Written by Ann-Marie Corvin and republished with permission from TechInformed.
