Artificial intelligence is advancing rapidly, with breakthroughs and creative uses making headlines almost weekly. Applications have been developed to do everything from legal analysis of contracts to brewing better beer to someday beating you in a debate or Dota 2.
Now it seems that Elon Musk’s OpenAI team has developed an algorithm capable of producing believable fake news articles. Given only a few words as a starting prompt, the AI can create a news story on any topic that seems real and would take dedicated fact-checking to debunk. Here is just one example:
Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.
Russia said it had “identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.” The White House said it was “extremely concerned by the Russian violation” of a treaty banning intermediate-range ballistic missiles.
The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.
An OpenAI algorithm completely fabricated the above story after being fed the words, “Russia has declared war on the United States after Donald Trump accidentally …”
The technology is not perfect: plagiarized passages are common, and stories are often disjointed, making little sense. But once in a while it creates something compellingly realistic, and with refinement it could reliably be used to generate propaganda.
“It’s very clear that if this technology matures—and I’d give it one or two years—it could be used for disinformation or propaganda,” said OpenAI’s Policy Director Jack Clark. “We’re trying to get ahead of this.”
Part of “getting ahead” is publicly announcing the accomplishment so that people are aware that machines could be used in this fashion. The other part is OpenAI’s decision not to release the full version of the algorithm. It did give the tool to MIT Technology Review for research purposes but said it would only make a simplified version of the text-generation software available to the public.
Richard Socher, a natural-language-processing expert at Salesforce, is less concerned about the technology’s potential for misuse and more optimistic about its value as a general-purpose language-learning system.
“I think these general learning systems are the future,” said Socher. “You don’t need AI to create fake news. People can easily do it.”