Why it matters: Deepfakes have become a hot topic lately, whether for their potential to fuel sophisticated misinformation campaigns or for how cheap and readily available the tools to make them are online. As these tools improve, Twitter wants a policy in place that lets people have fun with the technology while removing manipulated content that could be harmful.

As is often the case with Twitter, the company wants to craft a better policy but isn't quite sure how, so it's asking for user feedback. A deepfake pioneer recently warned that "perfectly real" videos are less than a year away, so the company is trying to put new features in place that could help it navigate those dangerous waters.

The social giant wants users to help define what makes a video, picture, or audio recording "synthetic" or "manipulated." Del Harvey, vice president of trust and safety, believes "deliberate attempts to mislead or confuse people through manipulated media undermine the integrity of the conversation," so she's looking for everyone's perspective on the matter.

Specifically, Twitter is considering at least three ways to approach the problem. One is to label tweets containing deepfakes as such, to keep anyone from taking them too seriously. Another is to warn people before they tweet or retweet a deepfake. A third is to add a link to an article or Twitter Moment that educates users on how tweets can contain manipulated media.

The company is also considering removing tweets that "could threaten someone's physical safety or lead to serious harm," though it hasn't detailed what exactly would qualify. You can provide feedback through the company's survey until November 27, and Twitter has also called for partners to help develop efficient tools for detecting deepfake content.

In the meantime, people are having a lot of fun with deepfake apps like Zao - at least until they discover the privacy implications. Regulators are already crafting laws that push back against political and pornographic deepfakes, and companies like Microsoft and Facebook are scrambling to develop the most effective detection tools.