State University of New York researchers have trained an AI to detect 'Deepfakes'

By Polycount · 8 replies
Jun 15, 2018
  1. There have always been ethical concerns associated with the advancement of AI and machine learning technology, but they've only become more pronounced in recent years.

    The use of AI for military purposes is obviously one concern, but as we've seen recently with the rise of "Deepfakes," neural networks and AI can be used for far more subtle, and not immediately harmful purposes.

    The term Deepfake, for the unaware, usually refers to AI-created fake pornography, in which a popular actor or actress' face is superimposed onto a porn performer's body.

    Communities focused on the creation and distribution of this sort of content have cropped up around the internet. As a result, services like Twitter, Discord, and Pornhub have deemed this pornography "nonconsensual," opting to remove it from their platforms.

    However, pornography isn't the only form of video Deepfake tech could spread to. With enough time and technological advancement, political speeches could be falsified, and "fake news" could become a genuine threat.

    That said, researchers from the State University of New York may have found a way to fight fire with fire: they taught an AI to detect Deepfakes by closely monitoring eye blinking, something Deepfake clips don't reliably replicate because the underlying models are typically trained on still images rather than video.
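    The researchers' actual detector is a neural network, but the underlying signal is easy to illustrate. Below is a minimal sketch, assuming per-frame eye landmarks are already available (e.g. from a facial landmark detector); it uses the common eye-aspect-ratio heuristic, not the paper's method, and the threshold values are illustrative:

    ```python
    import numpy as np

    def eye_aspect_ratio(eye):
        """Eye aspect ratio (EAR): vertical eye-landmark distances over the
        horizontal one. The value drops sharply when the eye closes.
        `eye` is a (6, 2) array of landmark coordinates, ordered as in the
        widely used 68-point facial landmark scheme."""
        v1 = np.linalg.norm(eye[1] - eye[5])
        v2 = np.linalg.norm(eye[2] - eye[4])
        h = np.linalg.norm(eye[0] - eye[3])
        return (v1 + v2) / (2.0 * h)

    def count_blinks(ear_series, threshold=0.2, min_frames=2):
        """Count blinks in a per-frame EAR series: each run of at least
        `min_frames` consecutive frames below `threshold` is one blink."""
        blinks, run = 0, 0
        for ear in ear_series:
            if ear < threshold:
                run += 1
            else:
                if run >= min_frames:
                    blinks += 1
                run = 0
        if run >= min_frames:
            blinks += 1
        return blinks
    ```

    A clip whose blink count is implausibly low for its duration (people blink roughly every few seconds) would then be flagged as a likely fake.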

    This is certainly a positive development, but it remains to be seen whether or not researchers will be able to keep up with the rapid advancement of Deepfakes in the future.

    Image courtesy IndieWire

  2. koblongata

    koblongata TS Booster Posts: 107   +26

    AND THEN there is going to be another deeper fake AI trying to make it undetectable, some computing power war future for real...
  3. davislane1

    davislane1 TS Grand Inquisitor Posts: 5,201   +4,311

    Step 1: Play a Deus Ex game

    Step 2: Realize that is the future

    Step 3: Farm credits

    Step 4: ????
  4. Evernessince

    Evernessince TS Evangelist Posts: 2,911   +2,070

    Computers and AIs work based on logic and algorithms. When you create an AI to fake something, it's going to do it in a manner that is characteristic of that algorithm. That's exactly its weak point as well, because then you simply need to program another AI to find patterns in images (or series of images, for video) that exemplify the handiwork of the aforementioned algorithm. For computers, which produce nearly everything based on math, it's as simple as applying an answer check for any given Deepfake algorithm to find out whether it was produced by an AI or not.
  5. Uncle Al

    Uncle Al TS Evangelist Posts: 4,164   +2,637

    The time of video fakery is already here and has been demonstrated in more than a few forums. It's so bad and so convincing that I expect to see the courts turn away video evidence in the next 24-36 months because of questionable reliability. It's been done with audio for several decades, and even today, identifying the real from the fake can be nearly impossible...
  6. Theinsanegamer

    Theinsanegamer TS Evangelist Posts: 1,227   +1,322

    Isn't modern internet traffic already like 70% bots? At this rate, all internet traffic will be bots trying to outdo each other until the end of time.
  7. Bubbajim

    Bubbajim TS Evangelist Posts: 431   +383

    Late to the thread, but haven't you basically also just described the malware/anti-virus war too? That's hardly a thing of the past.
  8. Evernessince

    Evernessince TS Evangelist Posts: 2,911   +2,070

    I've described reverse engineering software in general.
  9. Bubbajim

    Bubbajim TS Evangelist Posts: 431   +383

    Well yes, quite, which is why it seems odd that your initial comment sounded like you were saying that the issue of "deep fakery" is one that would be simple to overcome, technologically speaking. The simple solution doesn't really factor in the inevitable arms race with these kinds of things.
