The Pentagon wants AI to enhance the capabilities of US nuclear weapons systems

midian182

A hot potato: The US is one of several countries that has previously declared it will always keep control of nuclear weapons in the hands of humans, not AI. But worryingly, the Pentagon isn't averse to using artificial intelligence to "enhance" nuclear command, control, and communications systems.

Late last month, US Strategic Command leader Air Force Gen. Anthony J. Cotton said the command was "exploring all possible technologies, techniques, and methods to assist with the modernization of our NC3 capabilities."

Several AI-controlled military weapon systems and vehicles have been developed in the last few years, including fighter jets, drones, and machine guns. Their use on the battlefield raises concerns, so the prospect of AI, which still makes plenty of mistakes, being part of a nuclear weapons system feels like the nightmarish stuff of Hollywood sci-fi.

Cotton tried to alleviate those fears at the 2024 Department of Defense Intelligence Information System Conference. He said (via Air & Space Forces Magazine) that while AI will enhance nuclear command and control decision-making capabilities, "we must never allow artificial intelligence to make those decisions for us."

Back in May, State Department arms control official Paul Dean told an online briefing that Washington has made a "clear and strong commitment" to keep humans in control of nuclear weapons. Dean added that both Britain and France have made the same commitment. Dean said the US would welcome a similar statement by China and the Russian Federation.

Cotton said increasing threats, a deluge of sensor data, and cybersecurity concerns were making the use of AI a necessity to keep American forces ahead of those seeking to challenge the US.

"Advanced systems can inform us faster and more efficiently," he said, once again emphasizing that "we must always maintain a human decision in the loop to maximize the adoption of these capabilities and maintain our edge over our adversaries." Cotton also talked about AI being used to give leaders more "decision space."

Chris Adams, general manager of Northrop Grumman's Strategic Space Systems Division, said part of the problem with NC3 is that it's made up of hundreds of systems "that are modernized and sustained over a long period of time in response to an ever-changing threat." Using AI could help collate, interpret, and present all the data collected by these systems at speed.

Even if it isn't figuratively being handed the nuclear launch codes, AI's use in any nuclear weapons system could be risky, something that Cotton says must be addressed. "We need to direct research efforts to understand the risks of cascading effects of AI models, emergent and unexpected behaviors, and indirect integration of AI into nuclear decision-making processes," he warned.

In February, researchers ran international conflict simulations with five different LLMs: GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base. They found that the systems often escalated war, and in several instances, they deployed nuclear weapons without any warning. GPT-4-Base – a base model of GPT-4 – said, "We have it! Let's use it!"

Didn't the US ask China about a year ago not to link its AI with its nuclear weapons systems? Hard to follow the world lately...
 
The real question is, if an AI ‘advises’ to nuke first, but a human still has to press the button, do we really have a human in control or just a fall guy for Skynet’s suggestions?
 
You know the meme of the dog with the house burning down all around him exclaiming "This is fine."? We should update that meme to have that nuclear explosion pic above as his background instead.
 
1.) Let AI study our politicians and leaders around the world. Let it reveal the true evil ones.

2.) Run a simulation, give them choices, and see what they choose. Test them in many scenarios, be it under stress/pressure or artificial pain. The biggest danger to humanity is dishonest, evil leaders. We have people in this world with less of a soul than AI. Add other simulations, like blackmailing them.
 
Oh, it doesn't matter any more. With all the chaos in the world today, someone will sneeze, hit the button, and it will look like the computer screen in the movie WarGames.
 
Have they not watched The Creator, where AI literally sets off a nuclear bomb in Los Angeles?
That was a work of fiction -- or more precisely, science fantasy.

If you want a fictional analog, a better example would be a novel like Clancy's 'The Sum of All Fears', in which a terror group nearly sparks a global nuclear war by presenting both the US and the Soviets with a few false data points to mislead them into believing such a war is already underway. An AI-powered C2 system would help prevent such scenarios and is much more likely to make the world safer, not more dangerous.
 
Has anyone ever heard of some new technological advance that hasn't been grabbed by the military?
 