Why it matters: The deepfake phenomenon - where images and video are manipulated using machine learning - is still in its early days, and so far people have mostly produced benign creations we can all laugh about. However, as with any new technology, there's always the risk it will create more problems than it solves. Facebook is looking to wash away some of its recent bad publicity with a new effort to prepare for situations where highly realistic deepfakes could have a negative impact, like fueling disinformation campaigns.

Deepfakes are getting so convincing that it's becoming harder to tell the difference between original photos and videos and content fabricated using algorithms. Facebook knows its platform is ideal for their proliferation, so it's teaming up with Microsoft, the Partnership on AI, and several US universities to encourage and propel the development of software that can distinguish fake from authentic content with a high degree of accuracy - and, ideally, at scale.

To that end, the social giant is launching a competition called the Deepfake Detection Challenge, committing over $10 million in the hopes of getting researchers to come up with open source technology that can detect when AI has been used to modify a video.

It's worth noting that Facebook will be using a starting data set created with paid actors who consented to contribute to the challenge. This matters because the company is the target of a class action lawsuit for creating facial recognition models of its users without asking permission.

People have a lot of fun with apps that can create deepfakes, at least until they read the fine print in the privacy policy. And while deepfakes can be quite realistic, close inspection often reveals telltale signs like inconsistent shadows and doubled eyebrows. Facebook worries that deepfakes of celebrities and politicians could be used to damage their image and spread misinformation, so it believes a well-trained detection algorithm is needed before the problem grows on its platform.

Facebook CTO Mike Schroepfer says the company is also focused on making the right policy changes to address deepfakes. The most difficult decision will be determining whether all deepfakes should be immediately flagged and removed, or only those that constitute misinformation. In any case, he believes developing better detection tools should make deepfakes harder and more expensive to create.