Elon Musk has long been a proponent of regulating artificial intelligence, calling it humanity's "biggest existential threat" and "more dangerous than North Korea." Now, the Tesla boss has joined Google DeepMind co-founder Mustafa Suleyman and a group of 116 experts in signing a letter to the UN that expresses their concerns over AI. It calls for a ban on the development and use of all autonomous weapons, aka killer robots.
The open letter's signatories, who include founders of AI and robotics companies, warn that the technology could follow gunpowder and nuclear arms to become the "third revolution in warfare."
"Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways," explains the letter.
"We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
The letter goes on to call lethal autonomous weapons systems "morally wrong," and recommends they join the list of weapons banned under the UN's Convention on Certain Conventional Weapons (CCW), which bans or restricts the likes of incendiary weapons and blinding lasers.
A CCW Review Conference was scheduled to begin formal discussions on autonomous weapons today, but the meeting has been postponed until November.
A similar open letter calling for a ban on “offensive autonomous weapons” was also signed by Musk back in 2015. Professor Stephen Hawking, Noam Chomsky, and Steve Wozniak added their names to the document, which prompted the UN to begin talks on the subject.
Lethal autonomous weapons able to identify and engage targets without human intervention are currently being developed by a number of countries, with some already being used in the field. Samsung's SGR-A1 sentry gun, which is deployed along the Korean Demilitarized Zone, reportedly has autonomous firing capabilities.
While speaking at a Senate Armed Services Committee hearing last month, the second highest-ranking officer in the US military – Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff – said: "I don't think it's reasonable for us to put robots in charge of whether or not we take a human life."