In context: In 1942, Isaac Asimov coined the "Three Laws of Robotics" in his short story "Runaround." The core of the rules was that robots should never harm humans, whether by autonomous action (or inaction) or by following another human's commands (murder by proxy). As we advance into the twenty-first century, AI is reaching a point where it could cause problems if left unchecked or used unethically.

On Monday, the European Commission published guidelines for the ethical development and application of artificial intelligence.

With AI and machine learning growing at a rapid rate, researchers and lawmakers alike are concerned about potential pitfalls that could come with the creation and deployment of powerful AI algorithms.

Deep fakes, news generators, and medical imaging malware have been flagged for potential misuse. Privacy and data use have also been at the forefront as of late. It was only a matter of time before regulators became interested in tempering the technology with rules or laws.

The European Commission stopped short of enacting or proposing legislation regarding artificial intelligence, but did come up with a set of guidelines for creating “trustworthy AI.”

Isaac Asimov's "Three Laws of Robotics"

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The general rules look to ensure that AI employed in various applications maintains transparency (think Google Duplex identifying itself) as well as a fair amount of human oversight. The commission wants to be sure that systems are secure and can handle errors or tampering well enough that they cannot cause harm.

The EU also wants to be sure that citizens have control over any data that an AI may collect and that it complies with the General Data Protection Regulation (GDPR), which took effect last year.

Machine learning and AI systems should never discriminate based on unchangeable traits (race, disabilities, etc.) and should “ensure accessibility.” The commission believes systems should be used for the betterment of society, but perhaps most important of all, AI should have mechanisms in place to ensure accountability should something go wrong.

The commission is launching a pilot program this summer to involve stakeholders in evaluating the guidelines and coming up with recommendations on how to implement the rules. In early 2020, it hopes to begin incorporating feedback from the pilot into a cohesive set of regulations.

A full description of the guidelines can be found on the EC's website.