The US Department of Defense is using ML algorithms to find airstrike targets

Alfonso Maruccia

Chatbots Kill: In 2017, the Pentagon established Project Maven to apply machine learning (ML) technology to identify targets in real-time combat situations. The program has now seemingly been turned into a proper war tool, though members of the US military vow there's still a human pulling the final trigger.

Since early February, the US Department of Defense has employed ML algorithms to identify targets for over 85 airstrikes in Iraq and Syria. According to Schuyler Moore, CTO for the United States Central Command (CentCom), the Pentagon began using AI technology in actual battle situations after Hamas terrorists attacked Israel on October 7, 2023.

The terrorists' surprise attack changed everything, Moore told Bloomberg, as the DoD finally decided to deploy the AI algorithms developed by Project Maven. The US military immediately began doing things it had never done with AI warfare technology.

"October 7th everything changed," Moore said. "We immediately shifted into high gear and a much higher operational tempo than we had previously."

Developers designed Project Maven's algorithms to work from video footage captured by US drones, helping detect soldiers or other potential airstrike targets. Since February 2, CentCom has identified and destroyed enemy rockets, missiles, drones, and militia facilities with Maven AI.

Moore tried to demystify the new object recognition algorithms' alleged "killing" capabilities, claiming that every step involving AI ends with human validation. CentCom also tried an AI recommendation engine that suggested attack plans and the best weapons to use during operations, but its results weren't up to human standards.

Project Maven has proven to be a very controversial topic in recent years. Google exited the program after facing significant employee backlash, but other companies were more than happy to keep working on AI warfare with Pentagon officials.

As the AI-infused airstrikes revealed by Moore confirm, the DoD is now willing to push forward with deploying "intelligent" technology on the battlefield. The Pentagon is seemingly already working on integrating large language models (LLMs) into actual combat decisions.

Craig Martell, the DoD Chief Digital and AI Officer, recently said that the US could fall behind adversaries if it doesn't adopt generative AI models in warfare operations. Of course, the US government must devise proper "protective measures" and mitigations for national security risks, preventing and dealing with issues that could arise from poorly managed training data.


 
As long as people are always reviewing and making the final decisions, and as long as the error rates the algorithm suffers are tightly monitored and controlled, I have no problem with ML being a tool in the toolbox. What that toolbox is being used for (killing, war, security, etc) is a whole different moral can of worms, but the morality doesn't really change now that there is an algorithm in the mix. Again, so long as there is human oversight. Every action by government needs to be accountable to someone, whether that action is police, justice, or military in nature.
 
Let me guess how that ML model was trained...

Recognize a person carrying a gun and a prayer rug, and you can't miss.
 
Let me guess how that ML model was trained...

Recognize a person carrying a gun and a prayer rug, and you can't miss.
Also learn to recognize men with long beards, no moustaches, and white skullcaps or checkered turbans... then we have a problem.
 
This is the exact same technology that is now ubiquitous for detecting objects such as people, cats, chairs, etc. in images, including on your smartphone. In 2017 it was a big deal, but not so much anymore (from a technology perspective). The AI just finds objects of interest, and then a person decides whether each one is a target or not. It likely still requires a human to validate false alarms and missed objects. It was an inevitable step given the technology.

The step from object detection/classification to automated prosecution of a target is still a human job, as required by DoD policy. Will this change someday? Maybe. Hopefully not.

I would be more worried about some bad actor using open source technology to do this, because it is very doable. The technology is there; it's the training data that is key/difficult.
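
The detect-then-human-review pipeline described above can be sketched in a few lines. This is a minimal, illustrative example of the standard post-processing behind object detectors (confidence thresholding plus non-maximum suppression); the box format, thresholds, and the "review queue" framing are assumptions for illustration, not anything specific to Project Maven.

```python
# Minimal sketch: filter raw detector output before a human looks at it.
# Each detection is a hypothetical (x1, y1, x2, y2, score) tuple.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def filter_detections(dets, score_thresh=0.5, iou_thresh=0.5):
    """Confidence filter + non-maximum suppression.

    Survivors are candidates for human review -- the algorithm only
    proposes; it never decides.
    """
    kept = []
    for det in sorted(dets, key=lambda d: d[4], reverse=True):
        if det[4] < score_thresh:
            continue  # too uncertain, drop
        if all(iou(det[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(det)  # not a duplicate of a stronger detection
    return kept
```

Everything past this filter is exactly where the human-in-the-loop policy mentioned above kicks in: the model proposes candidate boxes, a person validates them.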
 
...and here we have it, war without reason;
if putting our men on the ground is not worth the loss, then why incite battle in the first place on these unfair terms we've engineered?
Imagine losing your relatives and land to a (semi) automated drone rather than a human being you can fairly run or retreat from, fight or defend against. Whom do you blame, and what will it take to find closure?
This inhumane means of combat will only further erode morale, which drives extremism. Traditional war is dead, and in its place stands the automaton.
 
And don't bother to ask why they will do it. The answer: Because they can, and will make a ton of money selling the drones.
 
In a full-scale war with a 100-million-strong army it will.

I saw somewhere that the US Army is already researching anti-drone-swarm strategies... Seems like using AI to help identify targets in a crowded situation like this would be useful.
The US Army itself is also researching offensive drone swarm strategies/tech:

https://www.bloomberg.com/opinion/a...r-drone-swarms-for-possible-war-against-china

Not quite the million scale, but who knows if it scales way up using smaller drones in future wars...

- ChatGPT, there's a mosquito in my kitchen. How should I approach it?
- Use a bazooka
 