Rise in false digital arrests undermines trust in the rule of law

Image: @a_chosensoul via Unsplash

A growing body of research suggests artificial intelligence is increasingly producing miscarriages of justice and false arrests as biometric and surveillance tools become embedded in law enforcement, resulting in over-reliance on algorithmic results that are treated as facts rather than probabilities. Researchers have also raised red flags about the lack of accountability and auditability in AI-assisted law enforcement and criminal prosecution, and have questioned whether automated decision-making could compromise due process rights and erode public trust in the criminal justice system.

The most visible example of this phenomenon is false facial recognition matches, with civil rights groups warning that without proper oversight the technology represents a threat to civil liberties and freedom of expression. This month Wired reported that shortly after President Trump’s second term began, policies and directives outlining oversight checks and balances, including mandated privacy protections and a requirement that facial recognition not be used as the sole basis for law or civil enforcement actions, disappeared from public view, suggesting the scrutiny that historically governed the technology’s use no longer exists.

A Washington Post investigation published last year looked at eight false digital arrest cases in the US. In one of them, county transit police detective Matthew Shute built a case against 29-year-old Christopher Gatlin for assaulting a security guard at a desolate train platform. The sole item of “evidence” relied on to target Gatlin was produced by a machine. Detective Shute took a grainy still from a blurry video of the incident, which showed a hooded attacker whose face was partially obscured by a surgical mask, and fed it into a facial recognition program that uses AI to analyse images and compare them to a mugshot database. It spat out Gatlin, who had previously been arrested for traffic violations and on a burglary charge that was later dropped for lack of evidence. Despite having no ties to the crime scene, he was arrested and spent 17 months in jail for a crime he didn’t commit.
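
To make concrete why such a result is a lead rather than proof, here is a minimal, purely illustrative Python sketch of how this kind of search typically works: the probe image is reduced to a numeric embedding and compared against embeddings of every mugshot, and the output is a ranked list of candidates with similarity scores, not a positive identification. The names, vector sizes and numbers below are invented for illustration and do not describe any specific vendor's system.

```python
# Illustrative sketch of a facial recognition search over a mugshot database.
# All data here is randomly generated; the point is that the output is a
# ranked list of scored candidates, not a definitive match.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Pretend embeddings: one 128-dimensional vector per mugshot in the database.
mugshot_db = {f"record_{i}": rng.normal(size=128) for i in range(10_000)}
probe = rng.normal(size=128)  # embedding of the grainy surveillance still

ranked = sorted(
    ((cosine_similarity(probe, emb), name) for name, emb in mugshot_db.items()),
    reverse=True,
)
for score, name in ranked[:5]:
    print(f"{name}: similarity {score:.3f}")  # candidates, not culprits
```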

In a review of 23 police departments across the country, the Washington Post found that 15, spanning 12 states, were using AI facial recognition tools as a shortcut to locating and apprehending suspects without any independent evidence connecting them to crime scenes. They did so even though official policy in most cases states that the results should be regarded as “unscientific” and ought not to be used as the sole basis for any decision.

In all eight cases, which were eventually dismissed, police failed to take basic investigative steps, such as checking alibis or using DNA and fingerprint evidence already in their possession. The Post investigation likely captured only a small snapshot of a much larger problem, because police rarely disclose whether they have used AI matches to build cases, and most states don’t require them to. One leading facial recognition software company, Clearview AI, reportedly boasted in an investor pitch that 3,100 police departments use its tools.

False digital arrests are not confined to the US. Last year the Metropolitan Police in the UK detained and questioned Shaun Thompson after live facial recognition technology wrongly identified him as a criminal suspect outside a tube station in London. He found the experience intimidating and traumatising, and has brought a landmark High Court challenge arguing that the new surveillance technology violates the rights to privacy, freedom of expression and assembly.

Statistical illiteracy among law enforcement officials and legal practitioners may be partly to blame. An article published in Harvard Political Review this month argues that statistical complexity, combined with the proprietary nature of AI systems, shields their potentially suspect methodologies from interrogation and makes it difficult for courts to evaluate AI-generated evidence. One proposed solution is ongoing technical training for judges, along with access to advisory boards of statistical experts capable of reviewing algorithmic evidence, so that courts can distinguish solid evidence from flawed machine-generated claims.
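
The underlying statistical pitfall is the base-rate problem: even an accurate system produces mostly false alarms when the people it flags are drawn from a very large pool. The short Python sketch below works through Bayes' rule with entirely hypothetical numbers, chosen only to illustrate the reasoning; it does not reflect the error rates of any real product.

```python
# Illustrative sketch of why a facial-recognition "match" is a probability,
# not a fact. All numbers are hypothetical assumptions for the example.

def posterior_match_probability(prior, false_match_rate, true_match_rate=0.99):
    """Bayes' rule: P(person is the true suspect | system reports a match)."""
    p_match = true_match_rate * prior + false_match_rate * (1 - prior)
    return (true_match_rate * prior) / p_match

# Assume (hypothetically) that any given person in a large mugshot database
# has a 1-in-10,000 chance of being the real perpetrator, and that the system
# falsely matches an unrelated face 0.1% of the time.
prior = 1 / 10_000
false_match_rate = 0.001

p = posterior_match_probability(prior, false_match_rate)
print(f"Chance the flagged person is actually the perpetrator: {p:.1%}")
# With these assumed numbers, a reported "match" is right only about 9% of
# the time -- weak evidence on its own, which is why policies say it should
# not be the sole basis for an arrest.
```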
