What is the Artificial Intelligence Act of the European Union (EU AI Act)?

Updated: 20 June 2024
Contributors: Matt Kosinski and Mark Scapicchio

What is the EU AI Act?

The Artificial Intelligence Act of the European Union, also known as the EU AI Act or the AI Act, is a law that will govern the development and use of artificial intelligence (AI) in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose.

Considered the world's first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.

The act also creates rules for general-purpose artificial intelligence models, such as IBM’s Granite and Meta’s Llama 3 open-source foundation models.

Penalties can range from EUR 7.5 million or 1% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance.

Much as the EU’s General Data Protection Regulation (GDPR) inspired other nations to adopt data privacy laws, experts anticipate that the EU AI Act will spur the development of AI governance and ethics standards worldwide.

Who does the EU AI Act apply to?

The EU AI Act applies to multiple operators in the AI value chain, including providers, deployers, importers, distributors, product manufacturers and authorized representatives. The definitions of providers, deployers and importers are particularly worth noting.

Providers

Providers are people or organizations that develop an AI system or general-purpose AI (GPAI) model, or have it developed on their behalf, and who place it on the market or put the AI system into service under their name or trademark.

The act broadly defines an AI system as a system that can, with some level of autonomy, process inputs to infer how to generate outputs (such as predictions, recommendations, decisions or content) that can influence physical or virtual environments. It defines GPAI models as AI models that display significant generality, are capable of competently performing a wide range of distinct tasks and can be integrated into a variety of downstream AI systems or applications. For example, a foundation model is a GPAI model; a chatbot or generative AI tool built on that model would be an AI system.

Deployers

Deployers are people or organizations that use AI systems. For example, an organization that uses a third-party AI chatbot to handle customer service inquiries would be a deployer.

Importers

Importers are people and organizations located or established in the EU that bring to the EU market AI systems of a person or company established outside the EU.

Application outside the EU

The EU AI Act also applies to providers and deployers outside of the EU if their AI systems, or the outputs of those systems, are used in the EU.

For example, suppose a company in the EU sends data to an AI provider outside the EU, who uses AI to process the data, and then sends the output back to the company in the EU for use. Because the output of the provider’s AI system is used in the EU, the provider is bound by the EU AI Act.

Providers outside the EU that offer AI services in the EU must designate authorized representatives in the EU to coordinate compliance efforts on their behalf.

Exceptions

While the act has a broad reach, some uses of AI are exempt. Purely personal uses of AI, and AI models and systems used only for scientific research and development, are examples of exempt uses of AI.

What requirements does the EU AI Act impose?

The EU AI Act regulates AI systems based on risk level. Risk here refers to the likelihood and severity of the potential harm. Some of the most important provisions include:

  • a prohibition on certain AI practices that are deemed to pose unacceptable risk,

  • standards for developing and deploying certain high-risk AI systems,

  • rules for general-purpose AI (GPAI) models.

AI systems that do not fall within one of the risk categories in the EU AI Act are not subject to requirements under the act (these are often dubbed the ‘minimal risk’ category), although some may need to meet transparency obligations, and all must comply with other existing laws. Examples include email spam filters and video games. Many common AI uses today fall into this category.
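To make the tiered structure concrete, here is a minimal sketch that models the four commonly described tiers as a simple lookup. The tier labels and example mappings are illustrative simplifications for this article, not classifications from the act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers reflecting the act's risk-based approach."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk system requirements apply"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no requirements under the act"

# Hypothetical example mappings; real classification requires legal
# analysis of the act's definitions, annexes and exceptions.
EXAMPLE_TIERS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up an example use case; unlisted uses default to MINIMAL here,
    though in practice that conclusion would itself need legal analysis."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

for case, tier in EXAMPLE_TIERS.items():
    print(f"{case}: {tier.value}")
```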

It is worth noting that many of the EU AI Act's finer details surrounding implementation are still being ironed out. For example, the act notes that the EU Commission will release further guidance on requirements like post-market monitoring plans and training data summaries.

Prohibited AI practices

The EU AI Act explicitly lists certain prohibited AI practices that are deemed to pose an unacceptable level of risk. For example, developing or using an AI system that intentionally manipulates people into making harmful choices they otherwise wouldn't make is deemed by the act to pose unacceptable risk to users, and is a prohibited AI practice.

The EU Commission can amend the list of prohibited practices in the act, so it is possible that more AI practices may be prohibited in the future.

A partial list of prohibited AI practices at the time this article was published includes:

  • Social scoring systems—systems that evaluate or classify individuals based on their social behavior—that lead to detrimental or unfavorable treatment in social contexts unrelated to the original data collection, or treatment that is unjustified or disproportionate to the gravity of the behavior

  • Emotion recognition systems at work and in educational institutions, except where these tools are used for medical or safety purposes

  • AI used to exploit people’s vulnerabilities (for example, vulnerabilities due to age or disability)

  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases

  • Biometric identification systems that identify individuals based on sensitive characteristics

  • Specific predictive policing applications

  • Law enforcement use of real-time remote biometric identification systems in public (unless an exception applies, and pre-authorization by a judicial or independent administrative authority is generally required).
Standards for high-risk AI

AI systems are considered high-risk under the EU AI Act if they are a product, or safety component of a product, regulated under specific EU laws referenced by the act, such as toy safety and in vitro diagnostic medical device laws.

The act also lists specific uses that are generally considered high-risk, including AI systems used:

  • in employment contexts, such as those used to recruit candidates, evaluate applicants, and make promotion decisions

  • in certain medical devices

  • in certain education and vocational training contexts

  • in judicial and democratic processes, such as systems intended to influence the outcome of elections

  • to determine access to essential private or public services, including systems that assess eligibility for public benefits and evaluate credit scores

  • in critical infrastructure management (for example, water, gas and electricity supplies)

  • in any biometric identification system that is not prohibited, except for systems whose sole purpose is to verify a person’s identity (for example, using a fingerprint scanner to grant someone access to a banking app).

For systems in this list, an exception may be available if the AI system does not pose a significant threat to the health, safety or rights of individuals. The act specifies criteria, one or more of which must be fulfilled before the exception can be triggered (for example, where the AI system is intended to perform a narrow procedural task). If relying on this exception, the provider must document its assessment that the system is not high-risk, and regulators can request to see that assessment. The exception is not available for AI systems that automatically process personal data to evaluate or predict some aspect of a person’s life, such as their product preferences (profiling); these are always considered high-risk.

As with the list of prohibited AI practices, the EU Commission may update the list of high-risk AI systems in the future.

Requirements for high-risk AI systems


High-risk AI systems must comply with specific requirements. Some examples include:

  • Implementing a continuous risk management system to monitor the AI throughout its lifecycle. For example, providers are expected to mitigate reasonably foreseeable risks posed by the intended use of their systems.

  • Adopting rigorous data governance practices to ensure that training, validation and testing data meet specific quality criteria. For example, governance measures must cover the data collection process and the origin of the data, and measures to prevent and mitigate biases must be in place.

  • Maintaining comprehensive technical documentation with specific information including system design specifications, capabilities, limitations, and regulatory compliance efforts.

There are additional transparency obligations for specific types of AI. For example:

  • AI systems intended to directly interact with individuals should be designed to inform users that they are interacting with an AI system, unless this is obvious to the individual from the context. A chatbot, for example, should be designed to notify users that it is a chatbot.

  • AI systems that generate text, images or certain other content must use machine-readable formats to mark outputs as AI generated or manipulated. This includes, for example, AI that generates deepfakes—images or video altered to show someone doing or saying something they didn’t do or say.
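As one illustration of machine-readable marking, the sketch below emits a JSON provenance record for a piece of generated content. The schema and field names are hypothetical; the act requires machine-readable marking but does not prescribe this format, and emerging industry standards (such as C2PA content credentials) are one direction implementers are exploring.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model_name: str) -> str:
    """Build a JSON record marking content as AI generated.

    Hypothetical schema for illustration: the EU AI Act mandates
    machine-readable marking, not this particular format.
    """
    record = {
        "ai_generated": True,
        "model": model_name,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Example: label generated image bytes before distribution.
print(provenance_record(b"<image bytes>", model_name="example-model-v1"))
```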


Obligations on operators of high-risk AI systems

We highlight some obligations on key operators of high-risk AI systems in the AI value chain—providers and deployers—below.

Obligations on providers of high-risk AI systems

Providers of high-risk AI systems must comply with requirements including:

  • Ensuring high-risk AI systems comply with the requirements for high-risk AI systems outlined in the act. For example, implementing a continuous risk management system

  • Having a quality management system in place

  • Implementing post-market monitoring plans to monitor the performance of the AI system and evaluate its continued compliance over the system's lifecycle.
Obligations on deployers of high-risk AI systems

Deployers of high-risk AI systems will have obligations including:

  • Taking appropriate technical and organizational measures to ensure they use such systems in accordance with their instructions for use

  • Maintaining automatically generated AI system logs, to the extent such logs are under their control, for a specified period (a minimal retention sketch follows this list)

  • For deployers using high-risk AI systems to provide certain essential services—such as government bodies or private organizations providing public services—conducting fundamental rights impact assessments before using certain high-risk AI systems for the first time.
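The retention sketch referenced above shows one way a deployer might enforce a log retention window with simple housekeeping. The roughly six-month window is an assumption used for illustration; the act ties the actual period to the system's intended purpose and to other applicable law.

```python
import time
from pathlib import Path

RETENTION_DAYS = 183  # assumed ~6-month window, for illustration only

def prune_old_logs(log_dir: str, retention_days: int = RETENTION_DAYS) -> list[Path]:
    """Delete .log files older than the retention window and report them.

    Illustrative housekeeping: logs inside the window are left untouched,
    satisfying a keep-for-at-least-N-days policy.
    """
    cutoff = time.time() - retention_days * 86_400
    removed = []
    for path in Path(log_dir).glob("*.log"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path)
    return removed

# Example: prune a hypothetical log directory.
# print(prune_old_logs("/var/log/ai-system"))
```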
Rules for general-purpose AI (GPAI) models

The EU AI Act creates separate rules for general-purpose AI (GPAI) models. Providers of GPAI models will have obligations including the following:

  • Establishing policies to respect EU copyright laws

  • Writing and making publicly available detailed summaries of training data sets.

If a GPAI model is classified as posing a systemic risk, providers will have additional obligations. Systemic risk is a risk specific to the high-impact capabilities of GPAI models that have a significant impact on the EU market because of their reach, or because of actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole, that can be propagated at scale across the value chain. The act uses training resources as one criterion for identifying systemic risk: if the cumulative amount of computing power used to train a model exceeds 10^25 floating point operations (FLOPs), the model is presumed to have high-impact capabilities and to pose a systemic risk. The EU Commission can also classify a model as posing a systemic risk.
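To see what the 10^25 FLOPs presumption implies, the sketch below applies the common "~6 x parameters x training tokens" rule of thumb for estimating training compute. That heuristic comes from the ML community, not from the act, and the model figures are invented for illustration.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough dense-transformer training-compute estimate (~6 * N * D).

    A community rule of thumb, not a calculation method defined by the act.
    """
    return 6.0 * parameters * training_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs; systemic risk presumed: "
      f"{flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```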

Providers of GPAI models that pose a systemic risk, including free, open-source models, must meet some additional obligations, for example:

  • Documenting and reporting serious incidents to the EU AI Office and relevant national regulators

  • Implementing adequate cybersecurity to protect the model and its physical infrastructure.
EU AI Act fines

For noncompliance with the prohibited AI practices, organizations can be fined up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.

For most other violations, including noncompliance with the requirements for high-risk AI systems, organizations can be fined up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher.

The supply of incorrect, incomplete or misleading information to authorities can result in organizations being fined up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher.

Notably, the EU AI Act has different rules for fining start-ups and other small and medium-sized organizations. For these businesses, the fine is the lower of the two possible amounts specified above.
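The "whichever is higher" logic (and its inversion for smaller businesses) is simple enough to express directly. The sketch below encodes the three penalty bands described above; it illustrates the arithmetic only and is not legal guidance.

```python
# Penalty bands: (fixed cap in EUR, share of worldwide annual turnover)
PENALTY_BANDS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine for a violation type.

    Most organizations face the higher of the two amounts; start-ups and
    SMEs face the lower. Illustrative arithmetic, not legal advice.
    """
    fixed_cap, turnover_share = PENALTY_BANDS[violation]
    turnover_cap = turnover_share * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A firm with EUR 1 billion turnover: 7% (EUR 70M) exceeds the EUR 35M figure.
print(max_fine("prohibited_practice", 1_000_000_000))               # 70000000.0
print(max_fine("prohibited_practice", 1_000_000_000, is_sme=True))  # 35000000.0
```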

When does the EU AI Act take effect?

Initially proposed by the European Commission in April 2021, the EU AI Act was approved by the European Parliament on 22 April 2024 and by the EU Member States on 21 May 2024. The law enters into force 20 days after its publication in the Official Journal of the EU, and its provisions take effect in stages from that date. Some of the most notable dates include:

  • At six months, the prohibitions on unacceptable-risk AI practices will take effect.

  • At 12 months, the rules for general-purpose AI will take effect for new GPAI models. Providers of GPAI models that are already on the market 12 months before the act enters into force will have 36 months from the date of entry into force to comply.

  • At 24 months, the rules for high-risk AI systems will take effect.

  • At 36 months, the rules for AI systems that are products or safety components of products regulated under specific EU laws will apply.
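Because each stage is defined as a number of months after entry into force, the milestone dates fall out of simple date arithmetic. The sketch below uses an assumed entry-into-force date as a placeholder; the real date depends on publication in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day 1 avoids clamping)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, d.day)

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed placeholder, for illustration

MILESTONES_MONTHS = {
    "prohibitions on unacceptable-risk practices": 6,
    "GPAI rules (new models)": 12,
    "high-risk AI system rules": 24,
    "rules for products under specific EU laws": 36,
}

for name, months in MILESTONES_MONTHS.items():
    print(f"{name}: applies from {add_months(ENTRY_INTO_FORCE, months)}")
```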

The client is responsible for ensuring compliance with all applicable laws and regulations. IBM does not provide legal advice nor represent or warrant that its services or products will ensure that the client is compliant with any law or regulation.