US Army addresses controversial killer robot program

midian182

A hot potato: The use of autonomous weapons systems, aka killer robots, is a contentious subject—for obvious reasons. It’s proved particularly controversial for the US Army, which plans to use AI to help identify and engage targets. But the DoD says humans will always have the final say on whether the robots open fire.

The Defense Department is holding an industry day this week to give industry and academia an overview of the Advanced Targeting and Lethality Automated System (Atlas), which is designed for ground combat vehicles, and an opportunity to help develop it.

The Army says it wants to use recent advances in AI and machine learning to develop “autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process.”
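Read literally, that describes a pipeline that automates everything up to, but not including, the decision to fire. Here is a rough sketch of what such a loop could look like, with entirely hypothetical names and stubbed stages — none of it is drawn from the Atlas program itself:

```python
from dataclasses import dataclass

@dataclass
class Target:
    track_id: int
    classification: str  # e.g. "armored vehicle", "unknown"
    confidence: float    # classifier confidence in [0.0, 1.0]

def acquire(sensor_frame) -> list[Target]:
    """Stage 1 (stubbed): detect and track candidate objects in sensor data."""
    # A real system would run detection/tracking models here; this just
    # returns canned examples so the sketch is runnable.
    return [Target(1, "armored vehicle", 0.97), Target(2, "unknown", 0.41)]

def identify(candidates: list[Target], threshold: float = 0.9) -> list[Target]:
    """Stage 2: keep only candidates with a high-confidence classification."""
    return [t for t in candidates if t.confidence >= threshold]

def engage(target: Target) -> None:
    """Stage 3: slew the fire-control system onto the target.
    This aims the weapon; it does not fire it."""
    print(f"Fire control locked on track {target.track_id} ({target.classification})")

def targeting_loop(sensor_frame, operator_confirms) -> None:
    # Acquisition, identification, and aiming run at machine speed --
    # the claimed 3x speedup. The trigger pull stays a human action.
    for target in identify(acquire(sensor_frame)):
        engage(target)
        if operator_confirms(target):  # the human decision point
            print(f"Operator authorized engagement of track {target.track_id}")

if __name__ == "__main__":
    targeting_loop(sensor_frame=None, operator_confirms=lambda t: False)
```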

Back in 2017, Elon Musk was one of 116 experts calling for a ban on killer robots—the second letter of its kind. It arrived not long after a US general warned of the dangers posed by these machines.

The controversy led to the industry day document being updated last week to emphasize that the use of autonomous weapons is still subject to the guidelines set out in Department of Defense (DoD) Directive 3000.09. The directive states that “Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.”
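That clause boils down to a simple invariant: with the operator link degraded or lost, the system may act only on authorizations it already holds, and it can never grant itself new ones. A minimal sketch of that rule as a fire-control gate, again using illustrative names rather than anything from a real system:

```python
class FireControlGate:
    """Illustrative sketch of the Directive 3000.09 clause quoted above:
    under degraded or lost communications, the system must not select and
    engage targets a human operator has not previously authorized."""

    def __init__(self) -> None:
        self.comms_up = True
        self.preauthorized: set[int] = set()  # track IDs a human has cleared

    def authorize(self, track_id: int) -> None:
        """A human operator selects a specific target, over a live link."""
        if not self.comms_up:
            # No operator link means no new authorizations can be created.
            raise RuntimeError("cannot accept authorizations without comms")
        self.preauthorized.add(track_id)

    def may_engage(self, track_id: int) -> bool:
        # Whether comms are up or down, the machine never grants itself a
        # target: engagement requires a standing human authorization.
        return track_id in self.preauthorized


gate = FireControlGate()
gate.authorize(42)              # operator clears track 42
gate.comms_up = False           # the link drops
assert gate.may_engage(42)      # previously selected: still engageable
assert not gate.may_engage(99)  # never authorized: must not be engaged
```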

Speaking to Defense One, an Army official said any upgrades to Atlas didn’t mean “we’re putting the machine in a position to kill anybody.”

While many are calling for an outright ban on killer robots, Russia has suggested it will not adhere to any international restrictions on autonomous weapons systems.


 
We'll spend billions of dollars on killer robots and fighter jets with no comparably expensive enemies to fight.

The enemy will spend a couple hundred dollars building a bomb and detonate it in a crowded mall.

OR...they'll grab a truck and run down 90 people in a market.

Or...they'll kidnap and kill one person at a time over a long period of time.
 
"The final decision to engage a target rests with humans"

After seeing a documentary about drone “pilots”, I'm not sure if I would call them “human”. They appeared to be f'd-up pretty badly...
 
"The final decision to engage a target rests with humans"

After seeing a documentary about drone “pilots”, I'm not sure if I would call them “human”. They appeared to be f'd-up pretty badly...

They don't feel anything after obliterating buildings even if there is collateral damage. It's almost like playing a video game to some.

Playing moderator for the entire world is pretty lucrative for US defense/tech companies, so tech like this will never not be developed.
 
Plus you get to blame the human operator for any "accidents" while the robot company remains unscathed. Even better, after a few accidents caused by human operators, it creates a case for replacing them with a more accurate AI.
 
I always thought they should use motion-sensor turrets to guard heavily fortified areas in times of attack. Not too different from a minefield: a landmine does not care who it kills, and humans do not have the final decision.
 
There is information missing from the article: how much ammo can it hold? What types of ammo? What caliber?
 
This sure does seem like a long-winded way of spelling "cowardice".

If you're afraid to face your opponent, you are a coward. Pretty simple.
 
So it's just like an 'aimbot' for Call of Duty, without the auto-fire enabled.

The aiming is done for you, and you pull the trigger if you agree with the 'target'.

What is so wrong with that? It is still humans pulling the trigger...
 