Eight wrongful arrests highlight flaws in facial recognition tech
Facial recognition software has misidentified at least eight Americans, leading to their wrongful arrest, the "Washington Post" reports.

Errors of AI: Eight people wrongly arrested
Image source: © Getty Images | 2023 John Keeble
Kamila Gurgul

In the United States, at least eight individuals have been wrongfully arrested after being incorrectly identified by facial recognition software. According to the "Washington Post", police in the USA utilise this artificial intelligence technology to detain suspects, often without corroborating evidence.

Problems with identification

The newspaper analysed data from police reports, court records, and interviews with officers, prosecutors, and lawyers. The findings suggest the problem could be significantly larger: prosecutors rarely disclose the use of AI, and only seven states legally require them to do so. The total number of wrongful arrests caused by AI errors remains unknown.

In the eight identified cases, the police did not undertake basic investigative actions, such as checking alibis, comparing distinguishing features, or analysing DNA and fingerprint evidence. In six cases, they ignored the suspects' alibis, and in two, they overlooked evidence contradicting their assumptions.

In five instances, key evidence was not collected. The "Washington Post" cites the example of a person arrested for attempting to cash a forged cheque, where the police did not even check the suspect's bank accounts. In three cases, physical characteristics of the suspects that contradicted the AI match were disregarded, as with a heavily pregnant woman accused of car theft.

In six cases, witness statements were not verified. In one such situation, a security guard confirmed the identity of a suspect accused of stealing a watch despite not having been present during the incident.

Concerns about the technology

Facial recognition software performs almost perfectly under laboratory conditions, but its effectiveness in real-world scenarios remains questionable. Katie Kinsey of NYU notes the lack of independent tests verifying the technology's accuracy on low-quality surveillance images. Research by neurologists at University College London suggests that users may blindly trust AI decisions, leading to erroneous judgments.

The "Washington Post" emphasises that over-reliance on AI systems can obstruct accurate assessment of a situation, which is particularly perilous in the context of the justice system.

© Daily Wrap
