Los Angeles Police Department bans the use of third-party facial recognition technology

Polycount

(Lack of) privacy: Facial recognition technology has long been a source of debate over privacy in the digital era. While many companies and governments have embraced the tech for purposes like law enforcement or machine learning, some have begun to push back against it. Multiple cities have banned police departments from using facial recognition tech, and now the city of Los Angeles is following suit -- sort of.

The law enforcement organization has allegedly been using software from the highly controversial facial recognition company Clearview AI to track down criminals. According to a report from BuzzFeed News, more than 25 LAPD employees had performed nearly 475 searches as of "earlier this year," so officers have certainly gotten a decent amount of use out of the tech.

The trouble with Clearview's software, however, is that it builds its database of faces from images and content scraped from social media websites, which its clients can then search. That's where the controversy comes in -- if given the choice to consent to this sort of scraping, we imagine most people would opt out.
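To make the mechanics concrete, here's a minimal sketch of how any scraped-image face database works in principle: every collected photo is reduced to a numeric embedding, and a probe face is identified by nearest-neighbor search over those embeddings. The `embed_face` stub and the 0.6 threshold below are illustrative assumptions, not Clearview's actual model or pipeline.

```python
import numpy as np

# Hypothetical stand-in for a real face-embedding model: production systems
# run a neural network that maps a face crop to a fixed-length vector. Here
# we just derive a deterministic pseudo-random unit vector from the bytes.
def embed_face(image: np.ndarray) -> np.ndarray:
    seed = int.from_bytes(image.tobytes()[:8].ljust(8, b"\0"), "little")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

class FaceDatabase:
    """Toy gallery: stores (name, embedding) pairs scraped from anywhere."""

    def __init__(self):
        self.names = []
        self.vectors = []

    def enroll(self, name: str, image: np.ndarray) -> None:
        # Nothing in this step asks the pictured person for consent.
        self.names.append(name)
        self.vectors.append(embed_face(image))

    def search(self, probe: np.ndarray, threshold: float = 0.6):
        # Cosine similarity of the probe against every enrolled face
        # (embeddings are unit-length, so a dot product is the cosine).
        q = embed_face(probe)
        sims = np.stack(self.vectors) @ q
        best = int(np.argmax(sims))
        if sims[best] < threshold:
            return None  # no confident match
        return self.names[best], float(sims[best])

# Usage: enroll a "scraped" photo, then identify a probe image.
db = FaceDatabase()
db.enroll("alice", np.zeros((8, 8)))   # e.g., a scraped profile photo
print(db.search(np.zeros((8, 8))))     # ('alice', 1.0) -- identical image
```

The point of the sketch is the asymmetry the article describes: enrollment requires nothing from the person in the photo, while matching happens entirely on the client's side.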

Being added to a database that can, and often will, be used by law enforcement without any warning is a frightening prospect. Artificial intelligence is capable of making mistakes, after all, as we saw recently when an AI-powered sports camera operator mistook a linesman's bald head for a soccer ball.

Asserting its First Amendment rights, Clearview has stated in the past that it's completely free to perform this sort of scraping and that social media platforms have no legal grounds to stop it.

Legal or not, though, the LAPD seems to have had a change of heart: moving forward, it is imposing an indefinite "moratorium" on the use of all commercial facial recognition software.

However, here's the catch: the police department can still use facial recognition technology -- it just has to be in-house. As BuzzFeed News reports, a new policy proposal will allow the LAPD to use a "Los Angeles County system that relies on suspect booking images." That's still not an ideal situation for privacy proponents, we're certain, but it's definitely a step in the right direction.

Image credit: Metamorworks, Alice Photo


 
I'm much more interested in _how_ the technology is used, vs. who created it.

I'd be fine, for example, with a warrant for a facial recognition search being approved by a judge for a specific set of images linked to a specific crime, especially if the rules of evidence that would later apply at trial accounted for the limitations of the results (i.e., a facial recognition match alone could never be considered sufficient). I'd actually like to live in a world where this limited, reasonable use existed.

I'd be much less fine with the exact same technology being used to auto-tag "matches" en masse and bulk-mail citations based on them, with the burden of proof essentially shifted to the recipient to prove it wasn't them and/or that the image wasn't depicting a crime.
 
Now, without any legal hurdles, a private business can destroy the life and career of a customer without a hint of due process. With only a two-day seminar, Genetec arms any employee with the ability to "enroll" anyone they would like to falsely accuse. The unsuspecting customer is added to FaceFirst's "massive, centrally managed database" and watchlisted without due process, and without being told. Then, as employers continue to use the FaceFirst watchlist to violate the FCRA, the unsuspecting victim is stalked and harassed relentlessly until all regard for constitutional rights has disappeared. The system works.
 