What is the European Union Artificial Intelligence Act (EU AI Act)?

Published: 8 April 2024
Contributors: Matt Kosinski

What is the EU AI Act?

The EU Artificial Intelligence Act, or EU AI Act, is a law that governs the development and use of artificial intelligence in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the threats they pose to human health, safety, and rights.

Considered the world's first comprehensive regulatory framework for AI applications, the EU AI Act bans some AI uses outright and implements strict safety and transparency standards for others.

The act also creates targeted rules for designing, training, and deploying general-purpose artificial intelligence models, such as the foundation models that power ChatGPT and Google Gemini.

Penalties for noncompliance can reach EUR 35,000,000 or 7% of a company's annual worldwide revenue, whichever is higher.

In the same way that the EU’s General Data Protection Regulation (GDPR) inspired other nations to adopt data privacy laws, experts anticipate the EU AI Act will spur the development of stronger AI governance and ethics standards worldwide.

Who does the EU AI Act apply to?

The EU AI Act applies to providers, deployers, importers, and distributors of AI systems and models in the EU.

The act defines AI systems as systems that can, with some level of autonomy, process inputs to generate outputs that influence people and environments. These influential outputs include things like predictions, decisions, and content. 

In the language of the act, AI model mainly refers to general-purpose AIs (GPAIs) that can be adapted to build various AI systems. For example, the GPT-4 large language model is an AI model. The ChatGPT chatbot built on GPT-4 is an AI system.

Other important terms in the act:

  • Providers are the people and organizations that create AI systems and models.

  • Deployers are people and organizations that use AI tools. For example, an organization that buys and uses an AI chatbot to handle customer service inquiries would be a deployer.

  • Importers are people and organizations that bring AI systems and models from outside Europe to the EU market.

  • Distributors are people and organizations in the supply chain, other than providers and importers, that make AI systems and models available on the EU market.

Applications outside the EU

The EU AI Act applies to people and organizations outside of Europe if their AI tools, or the outputs of those tools, are used in the EU. 

For example, say a company in the EU sends customer data to a third party outside the EU, and that third party uses AI to process the customer data and sends the results back to the company. Since the company uses the output of the third party's AI system within the EU, the third party is bound by the EU AI Act. 

Providers outside the EU that offer AI services in the EU must designate authorized representatives in the EU to coordinate compliance efforts on their behalf.

Exceptions

While the act has a broad reach, some uses of AI are exempt. These include:

  • Purely personal uses

  • Models and systems developed solely for military and national defense

  • Models and systems used only for research and development

  • Free, open-source, low-risk AI models that publicly share their parameters and architecture. These models are exempt from most of the act's rules, but not all. (See "Rules for general purpose AI models (GPAI)" below for more information.)
What requirements does the EU AI Act impose?

The EU AI Act contains a number of rules meant to support the responsible use and development of AI. Some of the most important provisions include bans on dangerous AI, standards for developing and deploying high-risk AI, transparency obligations, and rules for general-purpose models. 

It is worth noting that many of the EU AI Act's finer details surrounding implementation are still being ironed out. For example, the act notes that the European Commission will release further guidance on requirements like post-market monitoring plans and training data summaries. 

Risk-based AI regulations

The EU AI Act sorts AI systems into different categories based on risk level. Risk here refers to the likelihood and severity of the potential harm that an AI could cause to health, safety, or human rights. 

Broadly, the act addresses four categories of AI risk:

  • Unacceptable risk

  • High risk

  • Limited risk

  • Minimal risk

Unacceptable risk

AI applications that pose an unacceptable level of risk are banned. The EU AI Act explicitly lists all prohibited AI practices, which include:

  • Systems that intentionally manipulate people into making harmful choices they otherwise wouldn't.

  • Systems that exploit a person's age, disability, or social or economic status to materially influence their behavior. 

  • Biometric categorization systems that use biometric data to infer sensitive personal information, such as race, sexual orientation, or political opinions.

  • Social scoring systems that use irrelevant or inconsequential behavior and characteristics to promote detrimental treatment of people.

  • Real-time, remote biometric identification systems used in public places for law enforcement purposes. There are some narrow exceptions here, such as using these tools in targeted searches for victims of certain serious crimes.

  • Predictive policing systems that profile people to evaluate their likelihood of committing a crime. 

  • Facial recognition databases that perform untargeted scraping of internet or CCTV images.

  • Emotion recognition tools used in schools or workplaces, except when these tools are used for medical or safety purposes.

The European Commission reserves the right to revisit and amend this list, so it is possible that more AI uses will be banned in the future.

High risk

The bulk of the act deals with high-risk AI systems. There are two ways for a system to be considered high-risk under the EU AI Act: it is used as or within a regulated product, or it is explicitly named as high-risk in the act.

Products in some sectors, like toys, radio equipment, and medical devices, are already regulated by preexisting EU laws. Any AI systems that serve as the safety components of these regulated products, or that act as regulated products themselves, are automatically considered high-risk. 

The act also lists specific AI uses that always count as high-risk. These include:

  • Any biometric systems not expressly banned by the EU AI Act or other EU or member state laws, except for systems that verify a person's identity (for example, using a fingerprint scanner to grant someone access to a banking app).

  • Systems used as safety components for critical infrastructure, such as water, gas, and electricity supplies.

  • Systems used in educational and vocational training, including systems that monitor student performance, detect cheating, and direct admissions.

  • Systems used in employment environments, such as those used to recruit candidates, evaluate applicants, and make promotion decisions.

  • Systems used to determine access to essential private or public services, including systems that assess eligibility for public benefits and evaluate credit scores. This does not include systems used to detect financial fraud.

  • Systems used for law enforcement, such as AI-enabled polygraphs and evidence analysis.

  • Systems used for migration and border control, such as systems that process visa applications. This does not include systems that verify travel documents.

  • Systems used in judicial and democratic processes, such as systems that directly influence the outcomes of elections.

  • Profiling, meaning the automated processing of personal data to evaluate or predict some aspect of a person's life, such as their product preferences.

As with the list of banned AI, the European Commission may update this list in the future.

Providers of high-risk systems must follow these rules:

  • Implement a continuous risk management system to monitor the AI and ensure compliance throughout its lifecycle. Providers are expected to mitigate risks posed by both the intended use and foreseeable misuse of their systems.

  • Adopt rigorous data governance standards to ensure that training and testing data are properly collected, handled, and protected. Data should also be high quality, relevant to the system's purpose, and reasonably free of biases.

  • Maintain comprehensive technical documentation of system design specifications, capabilities, limitations, and regulatory compliance efforts.

  • Implement automated event logs in AI tools to help track system operations, trace outcomes, and identify risks and serious incidents (a minimal logging sketch follows this list).

  • Provide deployers of AI systems with the information they need to comply with regulations, including clear instructions on how to use the system, interpret outputs, and mitigate risks.

  • Design systems to support and enable human oversight, such as by supplying interfaces that allow users to monitor, override, and intervene on system operations.

  • Ensure that AI systems are reasonably accurate, robust, and secure. This can include creating backup systems, designing algorithms to avoid bias, and implementing appropriate cybersecurity controls.
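
To illustrate the logging requirement above, here is a minimal sketch of automated event logging, assuming a hypothetical Python service. The event fields, function names, and the credit-scoring example are illustrative assumptions; the act requires logging but does not prescribe a format or API.

    # Minimal sketch of automated event logging for an AI system (illustrative only).
    import json
    import logging
    from datetime import datetime, timezone

    logger = logging.getLogger("ai_system_events")
    logging.basicConfig(filename="ai_events.log", level=logging.INFO)

    def log_ai_event(system_id: str, input_summary: str, output_summary: str,
                     risk_flags: list[str] | None = None) -> None:
        """Record one system operation so outcomes can be traced and incidents identified."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "input_summary": input_summary,
            "output_summary": output_summary,
            "risk_flags": risk_flags or [],
        }
        logger.info(json.dumps(event))

    # Example: log a single inference from a hypothetical credit-scoring system.
    log_ai_event("credit-scoring-v2", "applicant features (anonymized)",
                 "score=0.42, decision=refer_to_human", risk_flags=["low_confidence"])

Records like these also help deployers meet their own log retention obligations (see "Obligations for deployers" below).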

If an AI system falls into one of the high-risk categories but does not pose a significant threat to health, safety, or rights, providers can waive these requirements. The provider must document its assessment showing that the system does not pose a significant risk, and regulators can penalize organizations for misclassifying systems.

Limited risk

Limited risk AI systems are those subject to specific transparency obligations, which are rules that certain types of AI must follow regardless of their risk level. These rules include:

  • AI systems should clearly inform users when they are interacting with artificial intelligence. For example, a chatbot should tell people it is a chatbot.

  • Organizations must inform people whenever they use emotion recognition or biometric categorization systems. Any personal data collected through these systems must be handled in accordance with the GDPR. 

  • Generative AI systems that create text, images, or other content must use watermarks or other machine-readable signals to mark that content as AI-generated.

  • Deployers must clearly label deepfakes and disclose to audiences that the content is artificially generated or manipulated.

  • Deployers that use AI to produce text on matters of public interest, such as news articles, must label the text as AI-generated unless a human editor reviews and takes responsibility for it.
Minimal risk

The minimal risk category (sometimes called the "minimal or no risk" category) includes AI tools that don't directly interact with people or that have very little material impact when they do. Examples include email spam filters and AI in video games. Many common AI uses today fall into this category.

Most of the AI Act's rules do not apply to minimal-risk AI (although some may need to meet the transparency obligations listed above).

Rules for general purpose AI models (GPAI)

Because GPAI models are so adaptable, it can be difficult to categorize them according to risk level. For this reason, the EU AI Act creates a separate set of rules explicitly for GPAI.

All providers of GPAI models must:

  • Maintain updated technical documentation describing, among other things, the model's design, testing, and training processes.

  • Provide deployers of their models, like organizations building AI systems on top of them, with the information they need to use the model responsibly. This information includes the model's capabilities, limitations, and intended purpose.

  • Establish policies to follow EU copyright laws.

  • Write and make publicly available detailed summaries of training data sets.

Most free, open-source GPAI models are exempt from the first two requirements. They only need to follow copyright laws and share training data summaries.

Rules for GPAI models that pose a systemic risk

The EU AI Act considers some GPAI models to pose a systemic risk. Systemic risk is the potential for a model to cause serious, far-reaching damage to public health, safety, or fundamental rights. 

Under the act, a model is said to pose a systemic risk if it has "high-impact capabilities." Essentially, this means the model's capabilities match or exceed those of the most advanced GPAI available at the time. 

The act uses training resources as a key criterion for identifying systemic risk. If the cumulative amount of computing power used to train a model is greater than 10²⁵ floating point operations (FLOPs), the model is considered to have high-impact capabilities and pose a systemic risk.
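
To make the threshold concrete, here is a minimal sketch of how a provider might check a planned training run against it. The 6 × parameters × training tokens estimate is a common rule of thumb assumed for this example, not something specified in the act, and the model size shown is hypothetical.

    # Rough check of a planned training run against the EU AI Act's 10^25 FLOP threshold.
    # The 6 * params * tokens estimate is a common rule of thumb, not part of the act.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
        """Approximate total training compute for a dense transformer."""
        return 6 * num_parameters * num_training_tokens

    # Example: a hypothetical 500-billion-parameter model trained on 10 trillion tokens.
    flops = estimated_training_flops(5e11, 1e13)  # 3e25 FLOPs
    print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")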

The European Commission can also classify a model as posing a systemic risk if it determines that the model has an impact equivalent to those high-impact capabilities, even if it does not meet the FLOPs threshold.

GPAI models that pose a systemic risk—including free, open-source models—must meet all of the preceding requirements plus some additional obligations:

  • Perform standardized model evaluations, including adversarial testing, to identify and mitigate systemic risks.

  • Document and report serious incidents to the EU AI Office and relevant state-level regulators.

  • Implement adequate security controls to protect the model and its physical infrastructure.

Providers of GPAI models can demonstrate compliance by adopting voluntary codes of practice, which the EU AI Office is currently drawing up. The codes are expected to be completed within nine months of the act taking effect. Providers that don't adopt these codes must prove their compliance in other ways.

Additional requirements

Providers, deployers, importers, and distributors are generally responsible for ensuring the AI products they make, use, or circulate are compliant. They must document evidence of their compliance and share it with authorities upon request. They must also share information and cooperate with one another to ensure that every organization in the AI supply chain can comply with the EU AI Act.

Providers and deployers must also ensure that staff members or other parties working with AI on the organization's behalf have the necessary AI literacy to handle AI responsibly.

Beyond these broad requirements, each party has its own specific obligations.

Obligations for providers
  • Design AI systems and models to comply with relevant requirements.

  • Submit new high-risk AI products to the appropriate authorities for conformity assessments before putting them on the market. Conformity assessments are third-party evaluations of a product's compliance with the EU AI Act. 

  • If a provider makes a substantial change to an AI product that alters its purpose or affects its compliance status, the provider must resubmit the product for assessment.

  • Register high-risk AI products with the EU-level database.

  • Implement post-market monitoring plans to track AI performance and ensure continued compliance over the system's lifecycle.

  • Report serious AI incidents—such as deaths, critical infrastructure disruptions, and breaches of fundamental rights—to member state authorities and take corrective action as necessary. 
Obligations for deployers
  • Use AI systems for their intended purpose and as instructed by providers.

  • Ensure that high-risk systems have appropriate human oversight.

  • Inform providers, distributors, authorities, and other relevant parties of serious AI incidents.

  • Maintain AI system logs for at least six months, or longer where member state legislation requires it.

  • Deployers that use high-risk AI systems to provide essential services, such as financial institutions, government bodies, and law enforcement agencies, must conduct fundamental rights impact assessments before using an AI system for the first time.
Obligations for importers and distributors

Importers and distributors must ensure that the AI systems and models they circulate comply with the EU AI Act. 

An importer or distributor is considered an AI's provider if it puts its own name or trademark on a product or makes a substantial change to the product. In this case, the importer or distributor must assume all the provider responsibilities outlined in the act.

How will the EU AI Act be enforced?

Enforcement of the act will be split between a few different bodies.

At the EU level, the European Commission has created the AI Office to help coordinate the consistent application of the act across member states. The AI Office will also directly enforce GPAI rules, with the ability to fine organizations and compel corrective action. 

Individual member states will designate national competent authorities to enforce all non-GPAI regulations. The act requires each state to establish two different authorities: a market surveillance authority and a notifying authority.

Market surveillance authorities ensure that organizations comply with the EU AI Act. They can hear complaints from consumers, investigate violations, and fine organizations.

Notifying authorities oversee the third parties that conduct conformity assessments for new high-risk AI products.

EU AI Act penalties

For using banned AI practices, organizations can be fined up to EUR 35,000,000 or 7% of worldwide turnover, whichever is higher.

For other violations, including violations of GPAI rules, organizations can be fined up to EUR 15,000,000 or 3% of worldwide turnover, whichever is higher.

For giving incorrect or misleading information to authorities, organizations can be fined up to EUR 7,500,000 or 1% of turnover, whichever is higher.

Notably, the EU AI Act has different rules for fining startups and other small organizations. For these businesses, the fine is the lower of the two possible amounts. This aligns with the act's general effort to ensure that requirements are not so onerous as to lock smaller businesses out of the AI market.
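
As a worked illustration of how these caps interact with the small-business rule, here is a minimal sketch. The fixed amounts and percentages come from the figures above; the turnover values and the simple large/small split are assumptions for the example.

    # Illustrative calculation of the maximum fine cap under the EU AI Act.
    # Caps: banned practices EUR 35M / 7%; other violations EUR 15M / 3%;
    # misleading information EUR 7.5M / 1%.

    FINE_CAPS = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_violation": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }

    def max_fine(violation: str, annual_worldwide_turnover: float, is_small_org: bool) -> float:
        fixed_amount, turnover_share = FINE_CAPS[violation]
        percentage_amount = turnover_share * annual_worldwide_turnover
        # Startups and small organizations face the lower of the two amounts; others the higher.
        return min(fixed_amount, percentage_amount) if is_small_org else max(fixed_amount, percentage_amount)

    # A large firm with EUR 2 billion turnover using a banned practice: 7% = EUR 140M > EUR 35M.
    print(max_fine("prohibited_practice", 2_000_000_000, is_small_org=False))  # 140000000.0
    # A small firm with EUR 10 million turnover: 7% of turnover (EUR 0.7M) is the lower amount.
    print(max_fine("prohibited_practice", 10_000_000, is_small_org=True))      # 700000.0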

When does the EU AI Act take effect?

The European Parliament approved the EU AI Act on 13 March 2024. The Council of the European Union will complete a final round of checks, and the law will go into effect 20 days after its publication in the Official Journal of the European Union. Observers expect this to happen in May 2024.

The full extent of the law won't take effect until 2026, with different provisions phasing in over time:

  • At six months, the prohibitions on unacceptably risky systems will take effect.

  • At 12 months, the rules for general-purpose AI will take effect for new GPAIs. GPAI models already on the market will have 24 months to comply.

  • At 24 months, the rules for high-risk AI systems will take effect.