Samsung's latest deepfake technique requires just a single source image

Shawn Knight

In brief: Fabricated video footage has existed for decades, but producing a convincing fake historically demanded a great deal of time, money and skill. Today's deepfake systems make it trivially easy to produce stunningly accurate doctored footage, and thanks to Samsung's latest research, even less effort is now required.

Modern deepfakes traditionally require a large amount of source imagery – training data – to work their magic. Samsung's new approach, dubbed few- and one-shot learning, can train a model on a single still image. Accuracy and realism improve as the number of source images increases.

For example, a model trained on 32 photographs will be more convincing than a version trained with a single image – but still, the results are stunning.
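The intuition behind few-shot adaptation can be illustrated with a deliberately simple toy model. This sketch is not Samsung's actual method (their system is a large neural talking-head generator); it only demonstrates the general principle that a well-pretrained initialization can be fine-tuned on one sample, and that more samples (here, 32 vs. 1) yield a closer match to the target. All names and numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for few-shot adaptation: a "pretrained" linear model
# (playing the role of a meta-learned initialization) is fine-tuned
# on very few examples of a new target.
d = 8
true_w = rng.normal(size=d)                              # the new "identity" to imitate
pretrained_w = true_w + rng.normal(scale=0.5, size=d)    # decent starting point from pretraining

def finetune(w, X, y, steps=100, lr=0.1):
    """A few gradient steps of least-squares fine-tuning from init w."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def error(w):
    """Distance from the target model: lower means a more faithful fake."""
    return float(np.linalg.norm(w - true_w))

# One-shot: a single example. Few-shot: 32 examples.
X1 = rng.normal(size=(1, d));   y1 = X1 @ true_w
X32 = rng.normal(size=(32, d)); y32 = X32 @ true_w

w_one = finetune(pretrained_w, X1, y1)
w_few = finetune(pretrained_w, X32, y32)

print(error(pretrained_w), error(w_one), error(w_few))
```

Even one example improves on the raw pretrained model, because fine-tuning corrects the error along the direction that example constrains; 32 examples pin down the whole target, mirroring the article's point that 32 photographs beat a single image while one image still works surprisingly well.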

As exhibited in a video accompanying the researchers’ paper, the technique can even be applied to paintings. Seeing the Mona Lisa, a work that exists solely as a single still image, come to life is quite fascinating and fun.

The downside, as Dartmouth researcher Hany Farid highlights, is that advances in these sorts of techniques are bound to increase the risk of misinformation, fraud and election tampering. “[T]hese results are another step in the evolution of techniques ... leading to the creation of multimedia content that will eventually be indistinguishable from the real thing,” Farid said.

That’s great if you’re watching a fictional movie but not so much when tuning in to the evening news.

Image credit: Face Recognition of a woman by Mihai Surdu


 
And the reason Samsung and other companies are building this tech? To sell to those with the deepest pockets. Infowar is the new gun running and the end results will be quite similar.
 
Who the **** cares? No one is going to be stealing your phone unless you are a celebrity, and if that's the case, learn not to take nudes and sex videos. Hide them in a vault only you know how to get into...
 
Who the **** cares? No one is going to be stealing your phone unless you are a celebrity, and if that's the case, learn not to take nudes and sex videos. Hide them in a vault only you know how to get into...
What makes you think the only photos of you exist on your own phone? Besides, I doubt anyone will be making deepfakes of you. They will be of celebrities and officials.
When videos come out showing the Chinese president saying that all Americans should be shot, will that be a deepfake or the real deal? Considering how gullible the average internet user already is, these deepfakes are going to cause a lot of trouble.
 
Okay, bringing history to life is one of the first actual uses of this research that makes sense. Other than that, I can't think of a single good thing that could come out of this type of machine learning.
 
Who the **** cares? No one is going to be stealing your phone unless you are a celebrity, and if that's the case, learn not to take nudes and sex videos. Hide them in a vault only you know how to get into...

That's not true. Kids on your street could use your video to make a deepfake of you having sex with a goat, and it would be so convincing that your dad would think you've gone back to your roots.
 
Who the **** cares? No one is going to be stealing your phone unless you are a celebrity, and if that's the case, learn not to take nudes and sex videos. Hide them in a vault only you know how to get into...
That's a very shallow take on a technology that could be used to cause incredible harm. Creating something fake that is practically indistinguishable from the real thing has unlimited potential, good or bad.
 