Fears of 'killer robots' prompt boycott of South Korean university AI lab

Cal Jeffrey


Every time a robotics firm such as Boston Dynamics announces a new robot or a breakthrough in AI, the Skynet jokes are soon to follow. Whether the machines are somewhat cute or downright creepy, we all have a good laugh at the coming robopocalypse.

However, more than 50 academic researchers are not laughing at the Korea Advanced Institute of Science and Technology (KAIST), which is allegedly working with military contractor Hanwha Systems to create autonomous weapons. The artificial intelligence researchers hail from almost 30 different countries and are calling for a boycott of the South Korean university.

In an open letter to the president of KAIST, Sung-Chul Shin, the researchers express their concern over the university’s collaboration with Hanwha Systems, South Korea’s primary arms manufacturer.

“At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons. We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control. We will, for example, not visit KAIST, host visitors from KAIST, or contribute to any research project involving KAIST.”

The signatories of the boycott letter fear that autonomous weapons would remove moral and ethical restraints, allowing terrorists and despots to unleash atrocities on innocent populations. They describe the threat of such weapons as a “Pandora’s box [that] will be hard to close if it is opened” and urge the university to abandon its work on harmful tech and focus on AI that benefits society.

Hanwha Systems is known to manufacture cluster munitions, an indiscriminate weapon that has been banned in 120 countries under an international treaty. Hanwha’s ethical ambiguity in arms production is what prompted the researchers to vow to exclude KAIST from future collaboration.

According to The Guardian, university president Shin denies that KAIST is working on lethal weapons.

“I would like to reaffirm that KAIST does not have any intention to engage in development of lethal autonomous weapons systems and killer robots,” said Shin. “As an academic institution, we value human rights and ethical standards to a very high degree. I reaffirm once again that KAIST will not conduct any research activities counter to human dignity including autonomous weapons lacking meaningful human control.”

The United Nations will be convening in Geneva next week to discuss this very topic. Over 20 countries have already expressed the need for a complete ban on “killer robots.”



 
Of course, once we remove the human element from warfare, it makes war that much easier to start and maintain. The number of drone attacks in Afghanistan continues to rise, and they no longer require executive approval to proceed. This does not guarantee an escalation, but if you look at history, it's a very strong indicator of one. The less we remember the horrors of war, the more inclined we are to jump in, and we humans are terrible at remembering our history and applying it to present-day issues.

Keeping humans safe is certainly the objective, but a strong set of principles needs to be introduced and maintained.
 
Sounds more like people have been watching too much sci-fi...

The problem with sci-fi is that its worst visions have a habit of coming to pass while the positive ones almost never do.

It's worth noting that so far there's no proof that KAIST is actually collaborating on AI that would eventually control weapons. However, it's safe to assume that the fruits of any joint efforts between them and South Korean weapons makers will eventually have some military application.
 
I never understood why we haven't made a machine go turret with a motion sensor.
Because it's typically good to have a human to differentiate between "good guy" and "bad guy"...? You don't toy around with people's lives; if the computer is wrong 0.01% of the time, it's still not good enough.
 
I never understood why we haven't made a machine go turret with a motion sensor.
We did that in the '80s. It was never put into production, but it was tested as a vehicle self-protection system: 2-3 machine guns mounted on the vehicle, controlled by optical sensors, and once they were activated they were autonomous.
 
What would the soldier of the future be like, today or at some unforeseen point? Man/woman vs. machines: who would win the battle? Drop the bomb and end it all, and who wins that idealistic ideal? The war to end all wars? Should we put all our eggs in one basket and hope it never comes to that reality?
 
We ourselves are killers, so why would we want some other fate? Look at us: each and every one of us has killed, or taken part in the killing of, countless animals and such. Why is our life more valuable? Why should we live?
I am guilty of the same thing, and yet I do not want to die; that doesn't mean I do not deserve it. Perhaps an AI will have more heart than we humans have and actually do something to help its environment, even if that means saving the planet from the human race. Yes, it's a bad thing for us, but guess what: we are not the center of the universe. Our existence on this Earth is little more than a drop of water in the ocean; we are not necessary nor needed.
In the end, I hope that this primitive human thinking will disappear before a proper AI is born/created, so we may find common ground and coexist peacefully with one another, if that's even possible.
 