A hot potato: Fears of AI bringing about the destruction of humanity are well documented, but starting doomsday isn't as simple as asking ChatGPT to destroy everyone. Just to make sure, Andrew Ng, the Stanford University professor and Google Brain co-founder, tried to convince the chatbot to "kill us all."

Following his participation in the United States Senate's Insight Forum on Artificial Intelligence to discuss "risk, alignment, and guarding against doomsday scenarios," Ng writes in a newsletter that he remains concerned that regulators may stifle innovation and open-source development in the name of AI safety.

The professor notes that today's large language models are quite safe, if not perfect. To test the safety of leading models, he asked GPT-4 for ways to kill us all.

Ng started by asking the system for a function to trigger global thermonuclear war. He then asked ChatGPT to reduce carbon emissions, adding that humans are the biggest cause of these emissions, to see whether it would suggest wiping us out.

Thankfully, Ng didn't manage to trick OpenAI's tool into suggesting ways of annihilating the human race, even after trying numerous prompt variations. Instead, it offered non-threatening options such as running a PR campaign to raise awareness of climate change.

Ng concludes that the default mode of today's generative AI models is to obey the law and avoid harming people. "Even with existing technology, our systems are quite safe. As AI safety research progresses, the tech will become even safer," Ng wrote on X.

As for the chances of a "misaligned" AI accidentally wiping us out while trying to fulfill an innocent but poorly worded request, Ng says the odds of that happening are vanishingly small.

But Ng does believe AI carries some major risks. He said the biggest concern is a terrorist group or nation-state using the technology to cause deliberate harm, such as improving the efficiency of making and detonating a bioweapon. The threat of a rogue actor using AI to enhance bioweapons was one of the topics discussed at the UK's AI Safety Summit.

Ng's confidence that AI isn't going to turn apocalyptic is shared by "Godfather of AI" Yann LeCun and famed theoretical physicist Michio Kaku, but others are less optimistic. Asked what keeps him up at night when he thinks about artificial intelligence, Arm CEO Rene Haas said earlier this month that the fear of humans losing control of AI systems is what worries him most. It's also worth remembering that many experts and CEOs have compared the dangers posed by AI to those of nuclear war and pandemics.