Proposed EU regulations aim to restrict AI use based on its risk to public safety or liberty

Cal Jeffrey

In context: As artificial intelligence systems become more ubiquitous, the need to regulate their use grows more apparent. We have already seen how systems like facial recognition can be unreliable at best and biased at worst, and how governments can misuse AI to impinge on individual rights. The European Union is now considering formal regulation of AI use.

On Wednesday, the European Commission proposed regulations that would restrict and guide how companies, organizations, and government agencies use artificial intelligence systems. If approved, it would be the first formal legislation governing AI use. The EC says the rules are necessary to safeguard "the fundamental rights of people and businesses." The legal framework would consist of four levels of regulation.

The first tier would be AI systems deemed "unacceptable risk." These would be algorithms considered a "clear threat to safety, livelihoods, and rights of people." The law would outright ban applications like China's social scoring system or any others designed to modify human behavior.

The second tier consists of AI technology considered "high risk." The EC's definition of high-risk applications is broad, covering a wide range of software, some of which is already in use. AI-based law enforcement software that may interfere with fundamental rights would be strictly controlled. Facial recognition is one example; in fact, all remote biometric identification systems fall into this category.

These systems would be highly regulated, requiring high-quality datasets for training, activity logs to trace back results, detailed documentation, and "appropriate human oversight," among other things. The European Union would forbid the use of most of these applications in public areas, though the rules would include concessions for matters of national security.
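
To make the logging requirement concrete, here is a minimal, purely hypothetical sketch of how an operator of a high-risk system might keep an activity log that can trace a result back to the model and data that produced it. The log_prediction function, its record fields, and the example values are illustrative assumptions, not anything specified in the Commission's proposal.

```python
import json
import time
import uuid

def log_prediction(model_version, inputs, output, reviewer=None, path="audit_log.jsonl"):
    """Append one traceable record per automated decision (hypothetical schema)."""
    record = {
        "id": str(uuid.uuid4()),         # unique reference for later audits
        "timestamp": time.time(),        # when the automated decision was made
        "model_version": model_version,  # which model produced the result
        "inputs": inputs,                # data the decision was based on
        "output": output,                # the automated result
        "human_reviewer": reviewer,      # who signed off, if anyone ("human oversight")
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Example: a biometric match recorded before any human acts on it.
log_prediction("face-match-v2.3", {"camera": "gate-4"}, {"match": True, "score": 0.91})
```

Appending one self-contained record per decision is one simple way to satisfy the spirit of "trace back results": an auditor can later tie any outcome to a model version, its inputs, and the human who reviewed it.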

The third level is "limited risk" AI. This tier mainly covers chatbots and personal assistants such as Google's Duplex. These systems must be transparent enough to be identifiable as non-human, and the end user must be allowed to decide whether or not to interact with the AI.

Finally, there are programs considered "minimal risk": AI software that poses little to no threat to human safety or freedoms. Email filtering algorithms and AI used in video games, for example, would be exempt from regulation.
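
Taken together, the four tiers amount to a classification scheme, which can be summarized as a simple lookup. The sketch below is hypothetical: the tier names follow the proposal, but the RiskTier enum, the example systems, and where they are placed are assumptions made only for illustration.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: quality datasets, activity logs, documentation, human oversight"
    LIMITED = "transparency: must be identifiable as non-human"
    MINIMAL = "no new obligations"

# Illustrative placements only; the real legal classification is far more detailed.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```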

Enforcement measures would consist of fines of up to six percent of a company's global annual revenue. However, it could take years for anything to go into effect as European member states debate and hammer out the details.


 
It's good that governments are starting to grapple with these issues. It will be a long and ongoing process. Many of the issues will not have a clear right answer and instead involve tough judgment calls. Think for example of the centuries it took for our legal systems to evolve, trying to balance the needs to protect all citizens from crime while also not creating a police state. AI regulation will be even harder.

Edit: One immediate fringe benefit: maybe companies will stop slapping dubious "AI" labels on routine algorithms that involve no actual machine judgment beyond the fixed rules the programmer installed in the first place, if they know doing so opens them up to new regulatory processes.
 
But in the end:
'AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned'
Oh yeah, because China's social scoring system enhances the safety, livelihoods, and rights of the people... /s
Get real. If you want your every step to be controlled, then please don't live in the EU.
 
I agree: the EU is an overbearing, over-regulated, over-taxed joke. I can't blame the UK for leaving this giant nanny state. On many levels, they are no better than China.
 
Great, and while they're at it they can get rid of electronic voting, which has repeatedly been shown to be easily used to commit fraud.
 
Poor attempt. As you said, it's time to get real.

Instead of giving us your ever-expanding, fantasy-ridden liberty list, go live in the jungle. No one will bother to collect your data. Plain and simple.
As for the EU, you can take whatever is left of them after they pay for what they have done. I don't like the fact that you are enjoying your "rights" while sitting on my money and being "concerned" about China.
I don't have to live in the jungle to protect my data; living in the EU is much nicer anyway. And no worries, I don't sit on your money, and I'm not concerned about China. China is simply a scum government with a lot of good people, and my only concern was the China-EU treaty, but it seems like it won't go through, so I'm fine with that.
Maybe you need to feel a whip on your back and Big Brother cameras all around to feel safe, but it is much easier to keep people properly educated and allow them to use that knowledge well. Obviously, it seems like China and the US took another approach.
 
If you want your every step to be controlled, then please don't live in the EU.

I wouldn’t go that far. The EU is simply ensuring that excessive control and micromanagement of day-to-day life remains the exclusive purview of the European governments and doesn’t extend into AI-enabled business enterprises (who would likely be more effective at it than the Eurocrats themselves).
 
This is one of the few topics where I have no opinion. While I definitely support the development of artificial intelligence, I understand those who say this could be a risk for our liberty.
 