EU publishes guidelines urging ethical AI development

Cal Jeffrey

In context: In 1942, Isaac Asimov introduced the "Three Laws of Robotics" in his short story "Runaround." The core of the rules was that robots should never harm humans, whether through autonomous action (or inaction) or by following the commands of another human (murder by proxy). As we advance into the twenty-first century, AI is reaching the point where it could cause problems if left unchecked or used unethically.

On Monday, the European Commission published guidelines for the ethical development and application of artificial intelligence.

With AI and machine learning growing at a rapid rate, researchers and lawmakers alike are concerned about potential pitfalls that could come with the creation and deployment of powerful AI algorithms.

Deep fakes, news generators, and medical imaging malware have been flagged for potential misuse. Privacy and data use have also been at the forefront of late. It was only a matter of time before regulators became interested in tempering the technology with rules or laws.

The European Commission stopped short of enacting or proposing legislation regarding artificial intelligence, but did come up with a set of guidelines for creating “trustworthy AI.”

Isaac Asimov's "Three Laws of Robotics"

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The general rules look to ensure that AI employed in various applications maintains transparency (think Google Duplex identifying itself) as well as a fair amount of human oversight. The commission wants to be sure that systems are secure and can handle errors or tampering well enough that they cannot cause harm.

The EU also wants to be sure that citizens have control over any data that an AI may collect and that it complies with the General Data Protection Regulation (GDPR), which took effect last year.

Machine learning and AI systems should never discriminate based on unchangeable traits (race, disabilities, etc.) and should “ensure accessibility.” The commission believes systems should be used for the betterment of society, but perhaps most important of all, AI should have mechanisms in place to ensure accountability should something go wrong.

The commission is launching a pilot program this summer to involve stakeholders in evaluating the guidelines and coming up with recommendations on how to implement them. In early 2020, the commission hopes to begin incorporating feedback from the pilot into a cohesive set of regulations.

A full description of the guidelines can be found on the EC's website.


 
Publishing guidelines urging ethical AI development is about as pointless as asking people to abide by the Ten Commandments or not to cheat on their taxes. Nefarious people will find nefarious uses for tech, and it's been this way since the dawn of time. But I suppose someone was paid a lot of grant money to write said guidelines...
 
The term 'discriminate' is super loaded. There is software right now that can determine with a fair degree of accuracy whether someone is gay, just from a couple of photos. That means transgender people can be spotted from a facial photo. It can also determine whether someone has a Jewish, Black, Indian, Asian, or other ethnic background, just from a photo. Now add in voice recognition, body type, posture, and walking patterns. All of these can be data mined to determine a person's nature.

Basically, if a human can spot identifying features, then a computer AI can be trained to discern those same features with similar or greater accuracy. Imagine if you could download an app that could tell you, with some degree of accuracy, what someone's sexual preferences are. That would be extremely popular. How about whether a person has a hidden weakness for alcohol or other drugs? You can bet that state security services and gangster organizations would want that technology; it would make spotting blackmail victims much easier.

Normally, society would tackle this head-on. But today's climate says that any kind of discrimination is sexist or racist. There is a compelling reason to hide the capability and deny that it exists. What if it shows that high IQ is primarily found in certain groups? Or violent characteristics in another? Heaven forbid. It's a 1984 scenario.

That's why the 'powers that be' are trying to corral this capability now, before the public becomes fully aware that it exists. They want AI to self-censor so the mirage of equality can be maintained, and to keep the public from understanding the scope.
 

I think you're deluding yourself if you believe "the powers that be" are trying to corral this capability. They've embraced it and only want you to think it isn't or won't be used. The intelligence community has used AI for years, as have Google, Facebook, Amazon, the Chinese government with its social credit system, and others. The "powers that be" are using it to keep US corralled and to put forth the illusion that they're working in our best interest to keep AI from being used against us.
 