Twitter wants Clearview AI to remove photos of people it has scraped from its service



Twitter recently demanded that Clearview AI remove the data it collected from the public profiles of its users, after news broke that the latter had been scraping photos from the service for years and selling facial recognition software to law enforcement.

Clearview AI had been flying under the radar in a sea of apps and services, but an investigation by The New York Times revealed that the company has gathered more than 3 billion photos from major online platforms like Facebook, Twitter, and even Venmo. Its facial recognition software has been used by the FBI, the Department of Homeland Security, and 600 other agencies around the world as a means to quickly identify suspects.

The New York Times says in its report that Clearview AI could "end privacy as we know it," and it may not be mistaken.

The way the service works is that you upload a photo of a person you're looking for, and the app uses AI to scour its huge photo database for a match. Since the photos were scraped from many online profiles, a search has a chance of yielding a full name, an email address, and other details.

It's worth noting that at least for now, the service isn't available for the general public, but there's plenty of speculation among investors that it may well be in the near future.

Services like Facebook offer an option that prevents your profile from appearing in search engine results, which means it won't be scraped by Clearview AI. Still, if someone took a picture that includes you and uploaded it somewhere online, that photo has a chance of being scraped as well. A recent court fight between hiQ Labs and LinkedIn set a precedent that scraping publicly available online data doesn't violate any law.

Interestingly, Clearview AI has updated its privacy policy to allow anyone to request the removal of content collected from their online profiles. The catch is that in order to do so, you have to provide a "name, a headshot and a photo of a government-issued ID to facilitate the processing of your request." Furthermore, there's no guarantee that the company will honor your request, which is even more worrying.

Between Chinese tech companies that are apparently shaping UN facial recognition standards and companies like Clearview AI, law enforcement sees big potential in these tools to fight crime, but regulators are worried they open up a new avenue of abuse. Senator Ed Markey recently sent a letter to Clearview AI CEO, Hoan Ton-That, where he noted that "any technology with the ability to collect and analyze individuals’ biometric information has alarming potential to impinge on the public’s civil liberties and privacy."

The senator is concerned that the service "is capable of fundamentally dismantling Americans' expectation that they can move, assemble, or simply appear in public without being identified." He asked the company to list its customers and explain whether its employees have access to the images uploaded to the database, as well as how it complies with the Children's Online Privacy Protection Act (COPPA).




Closer and closer to the Black Mirror episode where you can censor people out of everyone's lives.


A mere extension of the Google Street View era. When you define 'public' as any space not directly inside a private home with the windows closed, you get this.

When it goes public, expect the insurance and finance industries to buy in and scan 'action' photos for high-risk individuals.

Squid Surprise

Privacy died the day the internet began... many of us just haven't fully come to that realization yet...

In another generation or 2, people will be genuinely baffled as to what the term really means...