What just happened? European Union lawmakers on Wednesday voted overwhelmingly to enact a landmark law governing artificial intelligence. Called the AI Act, the new regulation was approved by 523 votes to 46 and is expected to change the way AI is used by government agencies and businesses in the EU.

In a press release following the vote, the European Parliament said the new law is the first-ever legal framework on AI and aims to balance addressing the risks of the emerging technology with positioning Europe to play a leading role in its development globally. The lawmakers also said the Act is designed to safeguard the general public from harmful uses of artificial intelligence while giving AI developers and organizations binding guidelines on how to deploy it in a non-intrusive manner.

According to the EU, the AI Act will safeguard the fundamental rights of European citizens, as well as protect democratic values and environmental sustainability from "high-risk AI." As part of the plan, the new law bans certain AI applications that the lawmakers believe threaten citizens' rights. These include "biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases."

Other applications on the "banned" list include "emotion recognition" in workplaces and schools, as well as social scoring. To head off the kind of dystopian future depicted in movies like Minority Report, the law also bans predictive policing when it is based solely on profiling a person or assessing their characteristics. Finally, AI that "manipulates human behavior or exploits people's vulnerabilities" is barred under the new law.

However, as with most civil rights and privacy laws, there are some exemptions for law enforcement. While the use of AI for policing is largely forbidden under the new law, officials are permitted to use real-time biometric identification systems in some situations if "strict safeguards" are met. For example, police in the EU can use AI in targeted searches for missing people or to prevent a terrorist attack, provided they obtain prior judicial or administrative authorization.

As for general-purpose AI systems, the law stipulates that they must meet certain transparency requirements, including "compliance with EU copyright law and publishing detailed summaries of the content used for training." The Act also specifies that AI-generated images and manipulated audio or video content, otherwise known as "deepfakes," must be clearly labeled as such.