Pentagon will pay $2 billion for AI-infused weapons

mongeese

Why it matters: AI on the battlefield could enable faster and more coordinated military strikes and responses. In the future, it could also be used to anticipate the enemy’s strategy far more accurately and effectively. But imagine if you were trying to live peacefully in a war-torn country with drones flying overhead – would you be able to trust computers not to make a mistake?

I couldn’t. I don’t want AI to be able to decide if my life should be ended, so I don’t think it should be given that power over anyone. It seems that military commanders share my opinion and refuse to trust AI without extensive human oversight, but the Pentagon and DARPA (Defense Advanced Research Projects Agency) have initiated a new AI program to change that.

At a Washington conference celebrating DARPA’s 60th anniversary, the agency revealed that it has allocated $2 billion to developing military AI over the next five years. While this isn’t much by DARPA standards, it’s a lot for AI and should bring about some serious improvements. It’s the most the agency has spent on AI yet, but it’s only one of over 25 concurrent AI programs DARPA is running. In July, it revealed that defense contractor Booz Allen Hamilton had received $855 million for an unspecified five-year AI-related project, and that DARPA would grant up to $1 million to each research group that can improve complex environment recognition in AI.

The main goal is to create an artificial intelligence that can explain its choices to an overseer, proving that it is using common sense and has a digital ‘moral compass’. DARPA also aims to tackle concerns that AI is unpredictable and to prove that it can account for unexpected variables the way a human can. Currently, the AI systems in use can only give a “confidence rating” expressed as an error percentage, and they are not permitted to fire without a human signing off.
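As a rough illustration of that arrangement, here is a minimal Python sketch of a confidence rating gated by human sign-off. Everything in it – the names, the threshold, the flow – is an assumption made for illustration, not a description of any real system.

```python
# Hypothetical sketch: a model reports only a confidence rating, and
# no action proceeds without explicit human approval. Illustrative only.

def request_engagement(confidence: float, threshold: float = 0.95) -> bool:
    """Report model confidence, then require a human operator's sign-off."""
    print(f"Model confidence: {confidence:.1%} "
          f"(error estimate: {1 - confidence:.1%})")
    if confidence < threshold:
        print("Confidence below threshold; no engagement offered.")
        return False
    # The gate: nothing happens unless a human explicitly approves.
    answer = input("Operator, approve engagement? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    print("Approved:", request_engagement(0.97))
```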

This move could be a successor to Project Maven – an initiative by the Pentagon to improve object and environment recognition in military situations. Google was the Pentagon’s primary partner, but after an outcry from Google employees afraid they were creating software that could one day be used to kill people, Google executives decided against renewing the contract.

The RAND Corporation, another Pentagon contractor, has also voiced its concerns, particularly emphasizing the impact of AI on nuclear warfare and highlighting that AI could circumvent certain foundations of the theory of mutually assured destruction. Most notably, they claim that in the future AI could be used to predict the exact locations of mobile intercontinental ballistic missiles and use conventional weapons to destroy a foreign power’s nuclear arsenal. The only solution for the foreign power would be an immediate attack, thus initiating nuclear war.

So far, the government has not responded to these concerns. Instead, the Trump administration has begun establishing a Joint Artificial Intelligence Center to coordinate AI research across the Department of Defense.

Ron Brachman, who previously led DARPA’s AI research, said during the conference that “we probably need some gigantic Manhattan Project to create an AI system that has the competence of a three-year-old.” We’ll simply have to wait and see whether Trump wants to invest that much money, time and energy into AI – or whether Putin will do it first.


 
The main goal is to create an artificial intelligence that can explain its choices to an overseer, proving that it is using common sense and has a digital ‘moral compass’

A key aim of AI development is for it to (eventually) reason past human capacity, which is in direct contradiction with what these people expect from the AI: to be as dumb as they are.

A moral compass in AI is not supposed to include an emotional compass, which will always make it different from a human’s, though for the better.

In the end, our world is held together by political compromises, not by military strategies, and we are far away from creating an AI that could fully comprehend that.
 
That's a lot of ****ing money being spent on what are essentially game scenarios that have little probability of coming to pass (unless they are deliberately made to, just because the capability exists). I just get more and more tired of the huge, obscene waste that the military, for the most part, is.
 
The biggest thing the military wants from AI is actually on the weapons themselves. Look at how the LRASM operates once it's fired: the system is designed to designate targets with no data link to the launcher or a friendly controller, to attack the most vital or weakest target (set by the shooter), and to coordinate this with the other missiles in the wave if necessary and possible. All of that is done by the missile after being shot; they are designed to operate in areas where the shooter's battle network is being actively degraded. The military wants that, but even more capable: the weapons launch, each has a list of targets, and the wave of missiles decides amongst itself which targets will be hit by which missiles, and the best approaches, based on what they are detecting around them and what other assets are finding in real time (a toy sketch of that kind of coordination follows below). The safety net in all of this, at this point, is that a person still decides when the weapon is released and has preset which targets it can and can't hit.
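To make that coordination idea concrete, here's a toy Python sketch of a deterministic greedy target assignment: if every missile in the wave applies the same rule to the same shared picture, they all reach the same pairing with no central controller. The names, values, and scoring rule are all invented for illustration; real LRASM logic is not public.

```python
# Toy sketch of decentralized target assignment: each unit runs this
# same deterministic rule on shared data, so all agree on the result.
from itertools import product

def assign(missiles, targets, value, distance):
    """Greedily pair missiles to targets by value-per-distance score."""
    pairs = sorted(
        product(missiles, targets),
        key=lambda mt: value[mt[1]] / (1 + distance[mt]),
        reverse=True,
    )
    assignment, used_m, used_t = {}, set(), set()
    for m, t in pairs:
        if m not in used_m and t not in used_t:
            assignment[m] = t
            used_m.add(m)
            used_t.add(t)
    return assignment

missiles = ["m1", "m2"]
targets = ["radar", "destroyer"]
value = {"radar": 3.0, "destroyer": 5.0}           # priorities set by the shooter
distance = {("m1", "radar"): 10, ("m1", "destroyer"): 40,
            ("m2", "radar"): 30, ("m2", "destroyer"): 15}
print(assign(missiles, targets, value, distance))  # {'m2': 'destroyer', 'm1': 'radar'}
```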
 
If we can't get a grip on autonomous driving, this will never safely happen. It would be stupid to put an AI behind a missile when the same AI can't safely drive a car. A car requires way less computing in the same amount of time.
 
If we can't get a grip on autonomous driving, this will never safely happen. It would be stupid to put an AI behind a missile when the same AI can't safely drive a car. A car requires way less computing in the same amount of time.
Yes, but unlike a car, you want the missile to crash into something :D
 
What could go wrong? Do we really think that computers will make better decisions than we do? Man is fallible. Anything he builds is equally fallible, but also arbitrary and unable to adjust.
 
"Most notably, they claim that in the future AI could be used to predict the exact location of mobile Inter-Continental Ballistic Missiles and use conventional weapons to destroy a foreign power’s nuclear arsenal. The only solution for the foreign power is an immediate attack, thus initiating nuclear war."

Or they develop their own AI, which also predicts mobile launch sites, and you hold off on your first strike because you know they can predict where you'll launch from. Or both sides develop AIs designed to predict the other side's predictions, and make moves into the lower-probability locations.
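For what it's worth, "move into the lower-probability locations" is easy to sketch: weight each candidate site by one minus the probability the adversary's model is assumed to assign it. The sites and numbers below are made up.

```python
# Illustrative only: favor the sites an adversary's model rates least likely.
import random

def pick_site(predicted: dict) -> str:
    """Choose a site with probability proportional to (1 - predicted[site])."""
    sites = list(predicted)
    weights = [1.0 - predicted[s] for s in sites]
    return random.choices(sites, weights=weights, k=1)[0]

predicted = {"site_a": 0.7, "site_b": 0.2, "site_c": 0.1}  # adversary's guesses
print(pick_site(predicted))
```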

AI is just going to be another arms race, like nukes were/are.
 
"Most notably, they claim that in the future AI could be used to predict the exact location of mobile Inter-Continental Ballistic Missiles and use conventional weapons to destroy a foreign power’s nuclear arsenal. The only solution for the foreign power is an immediate attack, thus initiating nuclear war."

Or they develop their own AI, which also predicts mobile launch sites, and you hold off on your first strike because you know they can predict where you'll launch from. Or both sides develop AIs designed to predict the other side's predictions, and make moves into the lower-probability locations.

AI is just going to be another arms race, like nukes were/are.
Or they could all sit down and negotiate an honorable peace and stick to their commitments.

Oh, silly me. No world governments would ever do that at this time. There's too much gold in them thar hills - even if it becomes radioactive and thus worthless.
 
It's a sad world we live in, where one group is constantly out to conquer and subdue another, and the only real result is death and decay, with no real winners. All the money spent on military technology could be better spent improving lives and the health of the planet. I don't know, conservation? It used to be a common term and goal.
 
I mean, all this sci-fi talk is just media hype. They haven't developed even ONE AI yet in the entire world, only a bunch of machine-learning Artificial Stupids. It's a pipe dream with current tech, and purely fictional.
 
I mean, all this sci-fi talk is just media hype. They haven't developed even ONE AI yet in the entire world, only a bunch of machine-learning Artificial Stupids. It's a pipe dream with current tech, and purely fictional.
Right on point, IMO. Artificial stupids to say the least - and that is what makes this so scary, IMO!
 
"Most notably, they claim that in the future AI could be used to predict the exact location of mobile Inter-Continental Ballistic Missiles and use conventional weapons to destroy a foreign power’s nuclear arsenal. The only solution for the foreign power is an immediate attack, thus initiating nuclear war."

Or they develop their own AI, which also predicts mobile launch sites, and you hold off on your first strike because you know they can predict where you'll launch from. Or both sides develop AIs designed to predict the other side's predictions, and make moves into the lower-probability locations.

AI is just going to be another arms race, like nukes were/are.
Or they could all sit down and negotiate an honorable peace and stick to their commitments.

Oh, silly me. No world governments would ever do that at this time. There's too much gold in them thar hills - even if it becomes radioactive and thus worthless.

What is your alternative? I see articles about the Google employees whining that they don't want to make evil AI, etc., but they never give their alternatives. Do they really believe that China and Russia are not developing military-focused AI? I mean, it's one thing for Americans to get complacent about such things; no one has ever invaded the US, and it's not likely to happen anytime soon simply because of geography. But Europe has a long, long history of violence and upheaval. Yet I talk to plenty of Europeans who seem to think war or anything similar is basically impossible in today's world.

Developing military applications for these tools isn't optional. If you don't, someone else will.

Also, "the only solution is immediate nuclear attack". Again, what is his alternative or fix? Because China sure isn't going to decide that AI just makes them too powerful so they won't research it. The same goes for Russia. Its a moronic mindset. Plus this reads like general AI development won't naturally bleed over into military applications. It will. Even if no one in the world spent any money developing military AI it would still eventually happen simply from bleed over from peaceful uses of AI.
 
From a pragmatic standpoint, we know any bit of technology that can be leveraged for military use will be. It is unfortunate, because with every generation it becomes capable of even more destruction. It's hard to predict whether someone will get crazy or brave enough to use it. The indefinite build-up of military capacity has to be globally questioned at some point, because it crowds out our capacity for human improvement. I say this even though I'm fascinated by the capability, and by its possible necessity, and yet I never want to see it used.
 
Military AI won't even be as bad as the AI developed by spy agencies. Those agencies (the CIA, Mossad, FSB, MI6, etc.) are already killing scientists and other civilians for political and financial reasons.

They usually kill those who are good for society and finance those who are monstrous. In movies and TV series they portray themselves as saviors of the world, while in reality they are financing terrorism or committing it themselves.

Now those bastards will be even more powerful with the help of AI. They'll be killing the best people in society in even less detectable ways. Eventually those bastards will be killed by AI too (either enemy AI or their own), but that's too good, too mild a punishment for those MFs.
 
From a pragmatic standpoint, we know any bit of technology that can be leveraged for military use will be. It is unfortunate, because with every generation it becomes capable of even more destruction. It's hard to predict whether someone will get crazy or brave enough to use it. The indefinite build-up of military capacity has to be globally questioned at some point, because it crowds out our capacity for human improvement. I say this even though I'm fascinated by the capability, and by its possible necessity, and yet I never want to see it used.

Except if we look back over the past 50+ years, we see some interesting statistics. Even though we have developed the most powerful weapons mankind has ever seen, mankind as a whole has suffered *less* war, *less* death and *less* conflict over that period of time than at any other time in human history. We also have lower poverty levels and lower levels of hunger than at any other time in human history.

I'd like to say we as humans have somehow just gotten better, but I believe the reality is less rosy and more self-serving. The richer and fatter we get, the less interested we become in losing that. So as a whole, first-world nations are less interested in dangerous conflicts or things that might risk their comfort. As for conflicts in general, that's also down to money and comfort. It's one of the reasons I'm not a huge fan of sanctions as a political tool. As long as war is seen as a losing proposition and non-direct conflict options look like the better choice, most countries are going to choose that option. With the world so globally connected, financially and for producing advanced products, I believe we likely won't see any more major, direct conflicts. They're simply too costly, with very little upside. At least for the foreseeable future, anyway. It's part of why capitalism is such a wonderful system: it doesn't reward wasteful violence or a conqueror's mindset. As long as invading another major power is a financial dead end, we should be reasonably safe from crazy dictators or other fools giving it a shot.

I hope I'm right.
 
Except if we look back over the past 50+ years, we see some interesting statistics. Even though we have developed the most powerful weapons mankind has ever seen, mankind as a whole has suffered *less* war, *less* death and *less* conflict over that period of time than at any other time in human history. We also have lower poverty levels and lower levels of hunger than at any other time in human history.

I'd like to say we as humans have somehow just gotten better, but I believe the reality is less rosy and more self-serving. The richer and fatter we get, the less interested we become in losing that. So as a whole, first-world nations are less interested in dangerous conflicts or things that might risk their comfort. As for conflicts in general, that's also down to money and comfort. It's one of the reasons I'm not a huge fan of sanctions as a political tool. As long as war is seen as a losing proposition and non-direct conflict options look like the better choice, most countries are going to choose that option. With the world so globally connected, financially and for producing advanced products, I believe we likely won't see any more major, direct conflicts. They're simply too costly, with very little upside. At least for the foreseeable future, anyway. It's part of why capitalism is such a wonderful system: it doesn't reward wasteful violence or a conqueror's mindset. As long as invading another major power is a financial dead end, we should be reasonably safe from crazy dictators or other fools giving it a shot.

I hope I'm right.

That still assumes that the decision to commit violence against your neighbor is based on a rational calculation of resources required and resources available. Europe, Asia, Africa, South America, even North America (with its relatively short history) all have ethnic conflicts that are always simmering just beneath the surface, conflicts that are based on whose grandfather murdered whom and have nothing to do with haves and have-nots. The kind of raw emotional conflict that would happily give up access to a neighbor's products if it meant finally getting to make things "even".
 
What is your alternative? I see articles about the Google employees whining that they don't want to make evil AI, etc., but they never give their alternatives. Do they really believe that China and Russia are not developing military-focused AI? I mean, it's one thing for Americans to get complacent about such things; no one has ever invaded the US, and it's not likely to happen anytime soon simply because of geography. But Europe has a long, long history of violence and upheaval. Yet I talk to plenty of Europeans who seem to think war or anything similar is basically impossible in today's world.

Developing military applications for these tools isn't optional. If you don't, someone else will.

Also, "the only solution is immediate nuclear attack". Again, what is his alternative or fix? Because China sure isn't going to decide that AI just makes them too powerful so they won't research it. The same goes for Russia. Its a moronic mindset. Plus this reads like general AI development won't naturally bleed over into military applications. It will. Even if no one in the world spent any money developing military AI it would still eventually happen simply from bleed over from peaceful uses of AI.
Sit down and figure out a way to come to peace between nations. No, it is not an easy task, and it would almost certainly mean changing the basis of the world's economic system; however, with the alternative being annihilation, it is the far better option. Any nation that thinks a nuclear war is winnable is insane, IMO. Giving control of weapons to AI is even more insane, IMO.
 
Of course it's imminent! Who can be punished for a faulty bit of hardware? "Oh dear, we'd better take it out of operation. Let's try the next-gen model we've been working on and see how we go."
 