European Parliament Passes AI Act to Safeguard Rights and Foster Innovation


In a landmark move on Wednesday, the European Parliament adopted the Artificial Intelligence Act, a groundbreaking piece of legislation designed to safeguard fundamental rights and safety while fostering innovation in the development and use of AI technologies.

This regulation, the culmination of negotiations with member states concluded in December 2023, garnered overwhelming support from Members of the European Parliament (MEPs), passing with 523 votes in favor, 46 against, and 49 abstentions.

The Act prioritizes the protection of fundamental rights, democracy, the rule of law, and environmental sustainability from the potential risks posed by high-risk AI applications. It aims to position Europe as a global leader in the responsible development of AI by encouraging innovation and establishing clear obligations for developers based on the potential risks and impact level of their AI systems.

Banned Applications for Citizen Protection

The regulation prohibits certain AI applications deemed threats to citizens’ rights. These include:

  • Biometric categorisation systems based on sensitive characteristics.
  • Untargeted scraping of facial data for facial recognition databases.
  • Emotion recognition in workplaces and schools.
  • Social scoring systems.
  • Predictive policing based solely on profiling.
  • AI manipulating human behavior or exploiting vulnerabilities.

Law Enforcement Exceptions with Safeguards

Real-time biometric identification by law enforcement is generally prohibited, with exceptions allowed only in specific, narrowly defined situations, such as searching for a missing person or preventing a terrorist attack. These exceptions require strict safeguards, including limits on time and location and prior judicial or administrative authorization.

High-Risk AI Systems: Clear Obligations

The Act sets clear obligations for developers of high-risk AI systems used in critical infrastructure, education, employment, essential services (healthcare, banking), law enforcement, and democratic processes. These developers must:

  • Assess and mitigate risks.
  • Maintain transparency and accuracy.
  • Ensure human oversight.
  • Provide channels for complaints and explanations of decisions that affect people’s rights.

Transparency and Support for Innovation

General-purpose AI (GPAI) systems must adhere to transparency requirements, including complying with EU copyright law and publishing detailed summaries of training content. More powerful GPAI models face additional requirements to assess and mitigate systemic risks, and report incidents. Artificial or manipulated content (“deepfakes”) must be clearly labeled.

To foster innovation, the Act mandates the establishment of regulatory sandboxes and real-world testing at the national level, accessible to SMEs and startups, to develop and train innovative AI before market placement.

Legislators’ Views

Brando Benifei (S&D, Italy) highlighted the significance of the world’s first legally binding AI law, aimed at reducing risks and combating discrimination while ensuring transparency. Dragos Tudorache (Renew, Romania) emphasized the link between AI and fundamental societal values, noting the Act as a starting point for new technology-centered governance models.

Next Steps and Background

The regulation awaits a final legal-linguistic check before formal adoption and endorsement. It will enter into force twenty days after publication in the Official Journal and become fully applicable 24 months later, with certain provisions applying earlier.

The Act responds to citizen proposals from the Conference on the Future of Europe, focusing on enhancing EU competitiveness, ensuring a safe and trustworthy society, promoting responsible digital innovation, and using AI and digital tools to improve access to information for all. This legislation represents a significant step in managing the societal impacts of AI while harnessing its potential for good.