Intel detection tool uses blood flow to identify deepfakes with 96% accuracy

midian182

Posts: 8,482   +104
Staff member
In brief: Deepfakes are one of those technologies that, while impressive, are often used for nefarious purposes, and their popularity is growing. Companies have been working on ways of distinguishing a real video from an altered one for years now, but Intel's new solution looks to be one of the most effective and innovative.

Deepfakes, which usually involve superimposing someone's face and voice onto another person, started gaining attention a few years ago when adult websites began banning videos where the technique was used to add famous actresses' faces to porn stars' bodies.

Deepfake videos have become increasingly advanced since then. There are plenty of apps that let users insert friends' faces into movies, and we've seen the AI-powered process used to bring old photos back to life and put young versions of actors onto the screen once again.

But there's also an unpleasant side to the technology. In addition to being used to create fake revenge porn, it's been utilized by scammers applying for remote jobs. There was also an app designed to remove women's clothes digitally. But the biggest concern is how deepfakes have led to the spread of misinformation—a fake video of Ukrainian president Volodymyr Zelensky surrendering was spread on social media earlier this year.

Organizations including Facebook, the Defense Department, Adobe, and Google have created tools designed to identify deepfakes. Intel and Intel Labs' version, aptly called FakeCatcher, takes a unique approach: analyzing blood flow.

Rather than going with a method that examines a video's file for tell-tale signs, Intel's platform uses deep learning to analyze the subtle color changes in faces caused by the blood flowing in veins, a process called photoplethysmography, or PPG.

FakeCatcher looks at the blood flow in the pixels of an image, something that deepfakes have yet to replicate, and collects those signals across multiple frames. It then runs the resulting signatures through a classifier, which determines whether the video in question is real or fake.
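The pipeline described above — per-frame color signals from the face region, aggregated into a signature, then handed to a classifier — can be roughly sketched as follows. This is an illustrative assumption of how such a system might work, not Intel's actual implementation; the function names, the use of the green channel, and the synthetic "video" are all hypothetical.

```python
import numpy as np

def ppg_signal(frames, face_box):
    """Crude PPG-like signal: mean green-channel intensity of the face
    region in each frame. (The green channel is commonly used in remote
    PPG research because it is most sensitive to blood-volume changes.)"""
    x0, y0, x1, y1 = face_box
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def ppg_signature(signal):
    """Summarize the signal as simple features a classifier could consume:
    overall variability plus the low-frequency spectrum."""
    detrended = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    return np.concatenate([[detrended.std()], spectrum[:8]])

# Synthetic 30-frame "video": 64x64 RGB frames with a faint periodic
# green-channel pulse, standing in for real footage of a face.
rng = np.random.default_rng(0)
frames = [
    (rng.random((64, 64, 3)) * 0.1
     + np.array([0.5, 0.5 + 0.01 * np.sin(2 * np.pi * t / 15), 0.5])).clip(0, 1)
    for t in range(30)
]
sig = ppg_signature(ppg_signal(frames, (8, 8, 56, 56)))
print(sig.shape)  # this feature vector would feed the real/fake classifier
```

In a real detector, the per-region signals would be far richer (spatial PPG maps, temporal consistency checks), but the shape of the pipeline — signal extraction, signature building, classification — is the same.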

Intel says that combined with eye gaze-based detection, the technique can determine if a video is real within milliseconds and with a 96% accuracy rate. The company added that the platform uses 3rd-Gen Xeon Scalable processors with up to 72 concurrent detection streams and operates through a web interface.

A real-time solution with such a high accuracy rate could make a massive difference in the online war against disinformation. On the flip side, it could also result in deepfakes becoming even more realistic as creators try to fool the system.


 

madboyv1

Posts: 1,826   +763
The lip movement (or lack thereof) at the beginning of the video threw it straight into uncanny valley territory and kinda made it obvious (to me) that it was a deep fake, but this is still pretty interesting tech.
 

human7

Posts: 152   +131
At the end of the day, the deepfake systems will ultimately win; it's just a question of time. The perfect system may not be deep-learning based, but from an information-theoretic point of view, there's no reason to believe a perfect faker is impossible. It's just a problem of both of these:
1) copying information that is already out there (such as a face), which is possible since we aren't talking about quantum mechanics, meaning we don't have to worry about the no-cloning theorem, and
2) generating information that is not already out there (the content being faked, whether a subset of the image or the whole image) that is indistinguishable from what the real thing could be or would be.
 

kiwigraeme

Posts: 1,399   +1,038
Unless you fall into the 4% and Intel says you're fake.

There's a movie in that somewhere

The statistics for positives, false positives, false negatives, etc. are interesting. Lots of people have been found guilty on misunderstood or misrepresented numbers. The same goes for medical analysis of treatments: a test with a 4% error rate may be no good if only a small percentage of people actually have the condition and the cure is horrible - i.e., you end up with more healthy people taking a horrible cure than sick people who are happy with the risk.

96% may be OK for Intel - but even gold standards like fingerprints, without enough matching points, could match 10,000 people in the world. That's your day messed up if the police or immigration forget to do additional checks.
Examples abound of officials stopping people going about their business just because they share a name with someone else - despite a 30-year difference in age - but you have the name!

In court you must also ask what the chance is that the person is innocent.
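The base-rate concern above can be made concrete with a quick Bayes' rule calculation. Assuming (hypothetically) that the 96% figure means both 96% sensitivity and 96% specificity, the chance a flagged video really is fake depends heavily on how common fakes are in the first place:

```python
def p_fake_given_flag(prevalence, sensitivity=0.96, specificity=0.96):
    """Bayes' rule: P(fake | flagged), given the share of videos that are fake."""
    true_pos = sensitivity * prevalence           # fakes correctly flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # real videos wrongly flagged
    return true_pos / (true_pos + false_pos)

# If only 1 in 100 videos is a deepfake, most flags are false alarms:
print(round(p_fake_given_flag(0.01), 3))  # 0.195
# If half of all videos are fake, a flag is much more trustworthy:
print(round(p_fake_given_flag(0.50), 3))  # 0.96
```

So at a 1% prevalence, roughly four out of five flagged videos would actually be real - exactly the kind of mismatch between headline accuracy and practical reliability described above.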
 

m3tavision

Posts: 1,109   +940
Whether a deepfake works is based on WHO is watching and whether it fakes them out. If it's a collective of people at the movies with no clue or suspicion, they could be fooled.

If it's an individual with a good eye and knowledge, perhaps not.