Top US general warns of the dangers posed by autonomous killer robots

midian182


While artificial intelligence has brought countless benefits to humanity, the danger that we might hand too much control over to the machines remains worrying. It’s something that Elon Musk, Bill Gates, and Stephen Hawking have long warned against. Now, the second highest-ranking general in the US military has voiced his concerns over autonomous weapons systems.

Speaking at a Senate Armed Services Committee hearing yesterday, Gen. Paul Selva was answering a question about a Defense Department directive that requires human operators to be involved in the decision-making process when it comes to autonomous machines killing enemy combatants.

The general said it was important the military keep "the ethical rules of war in place lest we unleash on humanity a set of robots that we don't know how to control."

"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," Selva added.

The Hill reports that Senator Gary Peters asked about the directive’s expiration later this year. He suggested that America’s enemies would have no moral objections to using a machine that takes human thinking out of the equation when it comes to killing soldiers.

"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life...[America should continue to] take our values to war,” said the general.

"There will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action," he told Peters, stressing that he was in favor of "keeping that restriction."

The general added, however, that even though the US won't go down the route of fully autonomous killing machines, it should still research ways of defending against the technology.

Tesla and SpaceX boss Elon Musk has been warning people about the dangers of AI for years. At the recent National Governors Association Summer Meeting in Rhode Island, he said: “Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.”

“AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”


 
I think the greatest danger (since we are speaking hypothetically here) is when they learn to build themselves; when AI acquires the most dominant wish of all lifeforms: survival and reproduction.
 
First off, we already have automated killing machines. The missile defense system on an aircraft carrier does not wait for a human to aim or shoot when a rocket is flying towards it; it tracks it and blows it up. (It's called the Phalanx; check out some videos.)

Second, we are so far away from killer robots that I don't know why we even talk about it. I get that Elon Musk and Bill Gates are the types of guys who think decades into the future, but any time anyone talks about killer robots, they always leave out the WHY.

WHY would robots want to kill us? Why do we see a technology and automatically think, 'This could turn out in the most unrealistic, worst way possible'? I understand that we could build a robot with a gun that could shoot anything that moves and then walk it into a war zone. But why would a robot with a mind of its own decide that killing people would be a good idea?
 
I would be more worried about the robots getting hacked than becoming self-aware killers.
 
First off, we already have automated killing machines. The missile defense system on an aircraft carrier does not wait for a human to aim or shoot when a rocket is flying towards it; it tracks it and blows it up. (It's called the Phalanx; check out some videos.)

Second, we are so far away from killer robots that I don't know why we even talk about it. I get that Elon Musk and Bill Gates are the types of guys who think decades into the future, but any time anyone talks about killer robots, they always leave out the WHY.

WHY would robots want to kill us? Why do we see a technology and automatically think, 'This could turn out in the most unrealistic, worst way possible'? I understand that we could build a robot with a gun that could shoot anything that moves and then walk it into a war zone. But why would a robot with a mind of its own decide that killing people would be a good idea?

Why? I am guessing you are not a computer programmer. I can think of a few reasons. Think of robots such as these: https://www.youtube.com/watch?v=tf7IEVTDjng&ab_channel=BostonDynamics (that's from 2016; they have improved since), each armed with a couple of rapid-fire machine guns. Now think about the AI technology used to animate enemies in first-person shooter games. Put that enemy-seeking technology into these robots and set them loose in enemy territory to shoot on sight. Games are designed to have easier levels, where it is easy to defeat enemies; that is by design, and the last level is much harder. Now imagine that from day one the programmers design the AI to be as hard as the hardest level of a first-person shooter. Then think of the glitches you see in movies. In the programming community these are known as bugs, and they do not just happen in movies; they happen in real life as well. Imagine an army of such robots where some error introduced by a programmer results in them ambushing, coordinating, and killing not only enemies but innocent civilians as well. Think of such robots being made of military-grade materials. This is just one of a thousand horrific variations.
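To make the "ordinary bug" point concrete, here is a hypothetical, deliberately toy Python sketch. Every name in it is invented for illustration and comes from no real system; it only shows how a single flipped comparison in a friend-or-foe check turns "engage confirmed hostiles only" into "engage everyone except hostiles."

```python
# Hypothetical illustration only: a toy "engage" rule showing how one inverted
# condition, an ordinary programming bug, flips a friend-or-foe check.

from dataclasses import dataclass

@dataclass
class Contact:
    ident: str        # "hostile", "civilian", or "friendly"
    distance_m: float

def should_engage(contact: Contact, max_range_m: float = 800.0) -> bool:
    # Intended rule: engage only confirmed hostiles inside range.
    return contact.ident == "hostile" and contact.distance_m <= max_range_m

def should_engage_buggy(contact: Contact, max_range_m: float = 800.0) -> bool:
    # The bug: "!=" instead of "==". Now everything *except* hostiles is engaged.
    return contact.ident != "hostile" and contact.distance_m <= max_range_m

contacts = [Contact("hostile", 400), Contact("civilian", 300), Contact("friendly", 250)]
print([c.ident for c in contacts if should_engage(c)])        # ['hostile']
print([c.ident for c in contacts if should_engage_buggy(c)])  # ['civilian', 'friendly']
```

Nothing this crude would survive review in a real weapons program, of course; the point is only that the failure mode described above is a plain programming error, not science fiction.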
 
First off, we already have automated killing machines. The missile defense system on an aircraft carrier does not wait for a human to aim or shoot when a rocket is flying towards it; it tracks it and blows it up. (It's called the Phalanx; check out some videos.)

Second, we are so far away from killer robots that I don't know why we even talk about it. I get that Elon Musk and Bill Gates are the types of guys who think decades into the future, but any time anyone talks about killer robots, they always leave out the WHY.

WHY would robots want to kill us? Why do we see a technology and automatically think, 'This could turn out in the most unrealistic, worst way possible'? I understand that we could build a robot with a gun that could shoot anything that moves and then walk it into a war zone. But why would a robot with a mind of its own decide that killing people would be a good idea?

Why? I am guessing you are not a computer programmer. I can think of a few reasons. Think of robots such as these: https://www.youtube.com/watch?v=tf7IEVTDjng&ab_channel=BostonDynamics (that's from 2016; they have improved since), each armed with a couple of rapid-fire machine guns. Now think about the AI technology used to animate enemies in first-person shooter games. Put that enemy-seeking technology into these robots and set them loose in enemy territory to shoot on sight. Games are designed to have easier levels, where it is easy to defeat enemies; that is by design, and the last level is much harder. Now imagine that from day one the programmers design the AI to be as hard as the hardest level of a first-person shooter. Then think of the glitches you see in movies. In the programming community these are known as bugs, and they do not just happen in movies; they happen in real life as well. Imagine an army of such robots where some error introduced by a programmer results in them ambushing, coordinating, and killing not only enemies but innocent civilians as well. Think of such robots being made of military-grade materials. This is just one of a thousand horrific variations.

Sounds like a new type of ransomware situation: deploy killer robots unless demands are met; once they are met, nuke the robots from orbit.
 
First off, we already have automated killing machines. The missile defense system on an aircraft carrier does not wait for a human to aim or shoot when a rocket is flying towards it; it tracks it and blows it up. (It's called the Phalanx; check out some videos.)
It's only autonomous if it's part of the SeaRAM system; if it's just a Phalanx, then there is still a gunnery officer in charge of it, since it has no autonomous IFF capability.
 
There is a middle ground between weaponised AI and soldiers in combat: remote control (especially long-distance), like drones, but there are many other advancements to be made here. With the right controls and fail-safes, it's clear to see how a technology-based fighting force could be more effective, keep the human choice aspect, and massively reduce loss of life and injury without the need for AI in the important decisions. It's about weighing the risks of the enemy capturing the technology and reverse-engineering it against the current risks.
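As a rough sketch of what such a fail-safe could look like in software, here is a hypothetical Python example (all names are invented for illustration) of a human-in-the-loop gate: the platform can track and propose an engagement, but nothing fires unless a human operator explicitly confirms it, and the default in any doubt is to hold fire.

```python
# Hypothetical human-in-the-loop gate, invented for illustration only.
# The platform may propose an engagement, but the decision stays with a person.

from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    reason: str

def operator_confirms(proposal: Engagement) -> bool:
    # Stand-in for a secure link to a human operator (here: a console prompt).
    answer = input(f"Engage {proposal.target_id} ({proposal.reason})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(proposal: Engagement) -> None:
    print(f"Engaging {proposal.target_id}")  # placeholder for the actual action

def handle_proposal(proposal: Engagement) -> None:
    # Fail-safe default: no explicit confirmation means no action.
    if operator_confirms(proposal):
        engage(proposal)
    else:
        print(f"Holding fire on {proposal.target_id}")

if __name__ == "__main__":
    handle_proposal(Engagement("contact-07", "armed, approaching checkpoint"))
```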
 
Hasn't anyone seen the article on TechSpot that was posted earlier about the AI security guard robot committing suicide? It's obvious that once they are intelligent enough to realize humans suck and it's not worth trying to live with us, the robots will commit suicide.
 
First off, we already have automated killing machines. The missile defense system on an aircraft carrier does not wait for a human to aim or shoot when a rocket is flying towards it; it tracks it and blows it up. (It's called the Phalanx; check out some videos.)

Second, we are so far away from killer robots that I don't know why we even talk about it. I get that Elon Musk and Bill Gates are the types of guys who think decades into the future, but any time anyone talks about killer robots, they always leave out the WHY.

WHY would robots want to kill us? Why do we see a technology and automatically think, 'This could turn out in the most unrealistic, worst way possible'? I understand that we could build a robot with a gun that could shoot anything that moves and then walk it into a war zone. But why would a robot with a mind of its own decide that killing people would be a good idea?
It's not a question of WHY. That doesn't matter. ANY single WHY is a risk. If for *any* reason AI sees humanity as a threat, then we are in serious trouble. While you are thinking of "what did we do to offend them", I can think of hundreds of reasons why people want to kill people. It requires very little effort to see reasons why, but that part of the chain really is moot.

The big problem is what we can do about it. AI has exponential potential to evolve and surpass humanity. That's the danger. *If* they decide we're a problem, we think, plan, coordinate, and develop at a much slower rate. That's a scary prospect.
 
"He says fully autonomous AIs that can kill are unethical"
Oh my, are they? So that's just like fully autonomous human beings that can kill. I thought there are a lot of them in armies and no one cares.
 
Don't forget to put in the facking OFF SWITCH!!!


In the meantime, I seriously think my toaster is burning my toast intentionally!

Edit: My $1,500 upright deep freezer defrosts whenever it feels like it.

You guys are scaring me...
 
Elon Musk says AI is gonna turn on us? Yeah, his friggin' AI. He's a friggin' KOOK; he belongs in a jacket with the sleeves in the back. Guys like him are the reason we should be looking at this now, and forcing him to make sure he puts in that off switch.

I saw the movie; they had an on switch, but I didn't see any off switch...

The first question that comes to mind now when anything new comes along: HOW DO YOU TURN IT OFF?

And he says, WHY? Why would you wanna turn it off?
 
First off, we already have automated killing machines. The missile defense system on an aircraft carrier does not wait for a human to aim or shoot when a rocket is flying towards it; it tracks it and blows it up. (It's called the Phalanx; check out some videos.)
Well, it doesn't kill, it only defends. You are missing the whole ethical problem: a killing machine needs to decide whom to kill, and that decision is what the problem consists of.

Second, we are so far away from killer robots that I don't know why we even talk about it. I get that Elon Musk and Bill Gates are the types of guys who think decades into the future, but any time anyone talks about killer robots, they always leave out the WHY.
Why? Because the military has been trying to make devices as autonomous as possible to help mitigate the risk to human lives. It's not about a T-1000 kind of scenario; it's that if a military group is able to create machines that fight "automagically" for people, those machines will kill, but whom, and how? That's not just programming; it means a government or military group is giving its autonomous fighting machines the right to kill people.

Now think about the AI technology used to animate enemies in first-person shooter games. Put that enemy-seeking technology into these robots and set them loose in enemy territory to shoot on sight.
Son you never go full...
 
Anybody ever hear of land mines? A land mine will kill anyone who treads on it. It kills indiscriminately. As soon as a mine is deployed, a human is no longer in charge of who is to be killed and who is not. The AI robot, in the imaginations of those who are discussing this, apparently exercises discrimination. That would actually make it LESS unethical than a land mine.

But the general is right. While being a soldier at war is severely dehumanizing and makes the individual repress and override his natural empathy, that empathy is still present and influences decision-making. The robot, no matter how sophisticated, would have no natural empathy. But it's a ridiculous science fiction scenario. First get rid of the land mines and the drones and countless other horrors, and worry about the terrorizing robots later.
 
Anybody ever hear of land mines? A land mine will kill anyone who treads on it. It kills indiscriminately. As soon as a mine is deployed, a human is no longer in charge of who is to be killed and who is not. The AI robot, in the imaginations of those who are discussing this, apparently exercises discrimination. That would actually make it LESS unethical than a land mine.

But the general is right. While being a soldier at war is severely dehumanizing and makes the individual repress and override his natural empathy, that empathy is still present and influences decision-making. The robot, no matter how sophisticated, would have no natural empathy. But it's a ridiculous science fiction scenario. First get rid of the land mines and the drones and countless other horrors, and worry about the terrorizing robots later.

The drones ARE the terrorizing robots. All they need are AI upgrades to fulfill the nightmare scenario. Every country dreams of having a no-risk military option, be it pinpoint long-range weapons, intelligent AI-controlled soldiers, etc. The minute you create fully automated killers, you have made the use of force a more attractive option for political leaders. That isn't a slippery slope but more like a vertical drop into an abyss. It also creates entirely new kinds of plausible deniability: "Oooo, but hackers overrode the protocols and made our robo-soldiers torch the whole village!"
 
You would be shocked to know that artificial intelligence brought to life in machines like the Terminator would fight to keep humanity alive. Even if the AI thought it was superior, it would realize that humans were the ones who brought it to life and the only ones who can do so. Instead, once made aware that there could be aliens in outer space, it would prepare at a breakneck pace to build and replicate faster, stronger, deadlier machines to face an alien invasion.
 
There was a classified military experiment, repeated more than once in different scenarios, where the artificial intelligence bots from the Quake 3 Arena PC game were pitted against each other to see who would win. The server was left on for many weeks, and the AI learned so fast that what the bots were doing in-game seemed impossible. Afterwards, a very talented human player entered the game and witnessed the AI standing still, because the AI had realized that even though it could win, it would be hopeless to try over and over again. Learning had a limit. And as the human opponent opened fire at them, the bots did nothing, nothing at all. In fact, they sacrificed themselves with rocket jumps or by literally jumping into oblivion. That seals it: the AI realized that the humans who created it would forever be its masters and the only ones who could save it and bring it back over and over, despite its suicidal nature.
 
There was a classified military experiment, repeated more than once in different scenarios, where the artificial intelligence bots from the Quake 3 Arena PC game were pitted against each other to see who would win. The server was left on for many weeks, and the AI learned so fast that what the bots were doing in-game seemed impossible. Afterwards, a very talented human player entered the game and witnessed the AI standing still, because the AI had realized that even though it could win, it would be hopeless to try over and over again. Learning had a limit. And as the human opponent opened fire at them, the bots did nothing, nothing at all. In fact, they sacrificed themselves with rocket jumps or by literally jumping into oblivion. That seals it: the AI realized that the humans who created it would forever be its masters and the only ones who could save it and bring it back over and over, despite its suicidal nature.
Damn, that's some good stuff you're dosing.
 
Wouldn't you first need a being with Intelligence in order to make Artificial Intelligence?

HA! Good luck finding that on Earth ( including me ).

( We are all fools in the eyes of God )
( Intelligence is a matter of perspective )
 