The EU AI Act: Strong Regulation with GDPR-Like Impact

The EU AI Act sets new standards that need to be taken seriously, and it has teeth

Willy Leichter

August 28, 2024


As generative AI races forward, more organizations are becoming concerned about potential regulatory minefields. Governments globally are drafting or have enacted hundreds of regulations around AI, but the most influential so far is the EU Artificial Intelligence Act. Proposed by the European Commission in April 2021, the EU AI Act seeks to establish a legal framework for the safe and trustworthy use of AI technologies within the European Union. 

But as we saw with GDPR, and HIPAA before it, strong legislation in a major commercial region can become a de facto requirement for all global enterprises. In fact, many organizations welcome specific legal guidance for new technology, so they can have confidence in their investments. The AI space will continue to evolve quickly, and not all legal pitfalls are fully understood, but the EU AI Act sets new standards that need to be taken seriously, and it has teeth: fines can reach €35 million or 7% of an organization's global annual turnover, whichever is higher. What follows is a summary of the Act and its implications.

Objectives of the EU AI Act

The primary goal of the EU AI Act is to create a harmonized legal framework that governs AI technologies across the EU. The legislation aims to ensure that AI systems used in the EU are safe, transparent, and respect fundamental rights, including privacy, non-discrimination, and human dignity. It also seeks to foster innovation and investment in AI by providing legal certainty and promoting a trustworthy AI ecosystem.

The Act is grounded in the belief that while AI has the potential to drive economic growth and address global challenges, it also poses risks that must be managed. These risks include bias, discrimination, loss of privacy, and the potential for AI systems to be used in ways that are harmful to individuals or society.

Risk-Based Approach to Regulation

The EU AI Act adopts a risk-based approach to regulation, categorizing AI systems into different levels of risk, with corresponding regulatory obligations:

  • Prohibited AI Systems: These are systems that pose an unacceptable risk to fundamental rights and are therefore banned outright. Examples include AI systems that manipulate human behavior to the detriment of users, such as those exploiting vulnerabilities of specific groups, and systems used by governments for social scoring, like the systems used in China.
  • High-Risk AI Systems: These are AI systems that pose significant risks to health, safety, or fundamental rights. High-risk AI systems include those used in critical sectors such as healthcare, transportation, education, law enforcement, and employment. These systems are subject to stringent requirements, including rigorous data governance, transparency, human oversight, and compliance with specific technical standards. Before deployment, high-risk AI systems must undergo a conformity assessment to ensure they meet these requirements.
  • Limited-Risk AI Systems: These systems pose a lower level of risk but are still subject to certain transparency obligations. For example, AI systems that interact with humans (such as chatbots) must inform users that they are interacting with an AI system. Additionally, AI systems that generate or manipulate content (like deepfakes) must disclose that the content is AI-generated.
  • Minimal-Risk AI Systems: These are AI systems that pose little to no risk to users. The vast majority of AI applications, including most consumer-facing AI products, fall into this category. These systems are not subject to any specific regulatory requirements under the Act, though developers are encouraged to adhere to voluntary codes of conduct and best practices.
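The tiered structure above lends itself to a first-pass triage exercise. The sketch below is purely illustrative and non-authoritative: the tier names come from the Act, but the keyword-based `triage` function and the `HIGH_RISK_DOMAINS` set are simplifications invented here; real classification depends on the system's specific use case and requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (e.g., government social scoring)
    HIGH = "high"              # conformity assessment, oversight, documentation
    LIMITED = "limited"        # transparency duties (e.g., chatbot disclosure)
    MINIMAL = "minimal"        # no specific obligations; voluntary codes

# Hypothetical shortlist drawn from the sectors named in the Act's summary above.
HIGH_RISK_DOMAINS = {"healthcare", "transportation", "education",
                     "law enforcement", "employment"}

def triage(domain: str, interacts_with_humans: bool) -> RiskTier:
    """Very rough first-pass tiering of an AI system by application domain."""
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_humans:
        # Limited-risk transparency duty: the user must be told it's an AI.
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("healthcare", False).value)  # high
print(triage("gaming", True).value)       # limited
```

Note that the tiers are not cumulative buckets of severity alone: a chatbot in an employment context would be high-risk despite "only" chatting, which is why domain is checked before the interaction flag.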

Requirements for High-Risk AI Systems

High-risk AI systems are at the core of the EU AI Act's regulatory focus. The Act imposes several key requirements on these systems to ensure their safety and reliability:

  • Data Governance: High-risk AI systems must be trained on high-quality datasets that are representative, free of bias, and relevant to the intended application. This is to prevent discriminatory outcomes and ensure that the AI system performs reliably across different contexts and populations.
  • Transparency and Documentation: Developers of high-risk AI systems must maintain detailed documentation of the system’s design, development, and performance. This documentation must be sufficient to demonstrate compliance with the Act’s requirements and must be made available to regulatory authorities upon request. Additionally, high-risk AI systems must be designed to provide clear and understandable information to users about how they work, the risks involved, and the safeguards in place.
  • Human Oversight: The Act mandates that high-risk AI systems must be subject to effective human oversight to mitigate risks. This could involve the ability for a human operator to intervene or override the AI system’s decisions when necessary. The level of human oversight required will depend on the specific application and the risks involved.
  • Robustness and Accuracy: High-risk AI systems must be designed to achieve a high level of accuracy, robustness, and security. They should perform consistently and reliably across various conditions, and safeguards must be in place to prevent and mitigate errors or failures.
  • Conformity Assessment: Before a high-risk AI system can be deployed, it must undergo a conformity assessment to verify that it meets the Act’s requirements. This assessment can be conducted either by the AI provider or by a third-party assessment body, depending on the specific circumstances.
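For teams tracking readiness against these five requirements, the obligations can be mirrored as a simple internal checklist. This is a hypothetical sketch, not a compliance tool prescribed by the Act; the field names map one-to-one onto the bullet points above, and what counts as "reviewed" or "validated" must be determined against the Act's actual technical standards.

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Illustrative pre-deployment checklist mirroring the Act's
    high-risk requirements summarized above (not an official artifact)."""
    data_governance_reviewed: bool    # representative, bias-checked training data
    documentation_complete: bool      # design, development, performance records
    human_oversight_in_place: bool    # operator can intervene or override
    robustness_validated: bool        # accuracy and security tested across conditions
    conformity_assessment_done: bool  # self- or third-party assessment passed

    def ready_to_deploy(self) -> bool:
        # Deployment requires every obligation to be satisfied, not a subset.
        return all(vars(self).values())

checklist = HighRiskChecklist(True, True, True, True, False)
print(checklist.ready_to_deploy())  # False until the conformity assessment is done
```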

Prohibited AI Practices

The EU AI Act explicitly prohibits certain AI practices that are deemed to pose unacceptable risks to individuals and society. These include:

  • Manipulative AI Systems: AI systems that exploit vulnerabilities of specific groups, such as children or people with disabilities, to materially distort their behavior in a way that causes harm are prohibited. This includes AI systems that use subliminal techniques to influence individuals' decisions without their awareness.
  • Social Scoring by Governments: The Act bans AI systems used by public authorities for social scoring, where individuals are rated based on their behavior, social status, or other personal characteristics. This is to prevent discriminatory practices and the erosion of individual freedoms.
  • Real-Time Biometric Identification in Public Spaces: The use of AI systems for real-time biometric identification (e.g., facial recognition) in public spaces by law enforcement is prohibited, except in narrowly defined circumstances, such as for the prevention of serious crimes or during public emergencies. Even in these cases, the use of such systems must be strictly necessary, proportionate, and subject to adequate safeguards.

Governance and Enforcement

The EU AI Act establishes a comprehensive governance framework to ensure effective implementation and enforcement of the regulations:

  • European Artificial Intelligence Board (EAIB): The Act establishes the European Artificial Intelligence Board, which will be responsible for overseeing the implementation of the Act, coordinating the activities of national authorities, and providing guidance on AI-related issues.
  • National Competent Authorities: Each EU member state is required to designate national authorities responsible for enforcing the AI Act. These authorities will have the power to investigate non-compliance, impose fines, and order the withdrawal of non-compliant AI systems from the market.
  • Penalties for Non-Compliance: The Act includes significant penalties for non-compliance, with fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher, for the most serious violations, such as the use of prohibited AI practices or failure to comply with the requirements for high-risk AI systems.

Support for Innovation

While the EU AI Act is primarily focused on risk management, it also includes provisions to support innovation and the development of AI technologies in the EU:

  • Regulatory Sandboxes: The Act encourages the creation of regulatory sandboxes, where AI developers can test their systems in a controlled environment under regulatory supervision. These sandboxes are designed to facilitate innovation by allowing developers to experiment with new AI technologies while ensuring compliance with the Act’s requirements.
  • Support for SMEs and Startups: The Act recognizes the challenges faced by small and medium-sized enterprises (SMEs) and startups in navigating complex regulatory requirements. To support these businesses, the Act includes measures to reduce administrative burdens, provide guidance on compliance, and facilitate access to the necessary expertise and resources.

Global Impact

The EU AI Act is expected to have a significant global impact, influencing AI regulation beyond the borders of the EU. As one of the first comprehensive regulatory frameworks for AI, the Act is likely to set a global standard for AI governance, similar to how the EU’s General Data Protection Regulation (GDPR) has shaped global data protection practices.

Non-EU companies that wish to operate in the EU market will need to comply with the Act, which may lead to the adoption of similar regulatory practices in other jurisdictions. Moreover, the Act’s emphasis on ethical AI and the protection of fundamental rights aligns with broader global trends towards more responsible and transparent AI development.

Conclusion

The EU AI Act represents a bold and ambitious effort to regulate artificial intelligence in a way that balances the need for innovation with the imperative to protect fundamental rights and public safety. By adopting a risk-based approach, the Act seeks to address the specific risks posed by different types of AI systems while fostering a vibrant and competitive AI ecosystem in Europe. Its impact is likely to extend far beyond the EU, shaping the global conversation on AI governance and setting a precedent for other countries to follow.