What is the Artificial Intelligence Act of the European Union (EU AI Act)?

20 September 2024

Authors

Matt Kosinski

Writer

Mark Scapicchio

Content Director

What is the EU AI Act?

The Artificial Intelligence Act of the European Union, also known as the EU AI Act or the AI Act, is a law that governs the development and use of artificial intelligence (AI) in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the level of risk they pose.

Considered the world's first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.

The act also creates rules for general-purpose artificial intelligence models, such as IBM’s Granite models and Meta’s Llama 3 open-source foundation models.

Penalties can range from EUR 7.5 million or 1% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance.

In the same way that the EU’s General Data Protection Regulation (GDPR) inspired other nations to adopt data privacy laws, experts anticipate that the EU AI Act will spur the development of AI governance and ethics standards worldwide.

Who does the EU AI Act apply to?

The EU AI Act applies to multiple operators in the AI value chain, such as providers, deployers, importers, distributors, product manufacturers and authorized representatives. The definitions of providers, deployers and importers under the EU AI Act are particularly worth noting.

Providers

Providers are people or organizations that develop an AI system or general-purpose AI (GPAI) model, or have it developed on their behalf, and who place it on the market or put the AI system into service under their name or trademark.

The act broadly defines an AI system as a system that can, with some level of autonomy, process inputs to infer how to generate outputs (for example, predictions, recommendations, decisions or content) that can influence physical or virtual environments. It defines GPAI models as AI models that display significant generality, are capable of competently performing a wide range of distinct tasks, and that can be integrated into a variety of downstream AI systems or applications. For example, a foundation model is a GPAI model; a chatbot or generative AI tool built on that model would be an AI system.

Deployers

Deployers are people or organizations that use AI systems. For example, an organization that uses a third-party AI chatbot to handle customer service inquiries would be a deployer.

Importers

Importers are people or organizations located or established in the EU that bring to the EU market AI systems from a person or company established outside the EU.

Application outside the EU

The EU AI Act also applies to providers and deployers outside of the EU if their AI, or the outputs of the AI, are used in the EU.

For example, suppose a company in the EU sends data to an AI provider outside the EU, who uses AI to process the data, and then sends the output back to the company in the EU for use. Because the output of the provider’s AI system is used in the EU, the provider is bound by the EU AI Act.

Providers outside the EU that offer AI services in the EU must designate authorized representatives in the EU to coordinate compliance efforts on their behalf.

Exceptions

While the act has a broad reach, some uses of AI are exempt. Purely personal uses of AI, and AI models and systems used only for scientific research and development, are examples of exempt uses of AI.

What requirements does the EU AI Act impose?

The EU AI Act regulates AI systems based on risk level. Risk here refers to the likelihood and severity of the potential harm. Some of the most important provisions include:

  • a prohibition on certain AI practices that are deemed to pose unacceptable risk,

  • standards for developing and deploying certain high-risk AI systems,

  • rules for general-purpose AI (GPAI) models.

AI systems that do not fall within one of the risk categories in the EU AI Act (often dubbed the 'minimal risk' category) are not subject to requirements under the act, although some may need to meet transparency obligations and all must comply with other existing laws. Examples include email spam filters and video games. Many common AI uses today fall into this category.

It is worth noting that many of the EU AI Act's finer implementation details are still being ironed out. For example, the act notes that the EU Commission will release further guidance on requirements such as post-market monitoring plans and training data summaries.

Prohibited AI practices

The EU AI Act explicitly lists certain prohibited AI practices that are deemed to pose an unacceptable level of risk. For example, developing or using an AI system that intentionally manipulates people into making harmful choices they otherwise wouldn't make is deemed by the act to pose unacceptable risk to users, and is a prohibited AI practice.

The EU Commission can amend the list of prohibited practices in the act, so it is possible that more AI practices may be prohibited in the future.

A partial list of prohibited AI practices at the time this article was published includes:

  • Social scoring systems—systems that evaluate or classify individuals based on their social behavior—leading to detrimental or unfavorable treatment in social contexts unrelated to the original data collection and unjustified or disproportionate to the gravity of the behavior

  • Emotion recognition systems at work and in educational institutions, except where these tools are used for medical or safety purposes

  • AI used to exploit people's vulnerabilities (for example vulnerabilities due to age or disability)

  • Untargeted scraping of facial images from the internet or closed-circuit television (CCTV) for facial recognition databases

  • Biometric identification systems that identify individuals based on sensitive characteristics

  • Specific predictive policing applications

  • Law enforcement use of real-time remote biometric identification systems in public (unless an exception applies, and pre-authorization by a judicial or independent administrative authority is generally required).

Standards for high-risk AI

AI systems are considered high-risk under the EU AI Act if they are a product, or safety component of a product, regulated under specific EU laws referenced by the act, such as toy safety and in vitro diagnostic medical device laws.

The act also lists specific uses that are generally considered high-risk, including AI systems used:

  • in employment contexts, such as those used to recruit candidates, evaluate applicants and make promotion decisions

  • in certain medical devices

  • in certain education and vocational training contexts

  • in judicial and democratic processes, such as systems that are intended to influence the outcome of elections

  • to determine access to essential private or public services, including systems that assess eligibility for public benefits and evaluate credit scores.

  • in critical infrastructure management (for example, water, gas and electricity supplies)

  • in any biometric identification systems that are not prohibited, except for systems whose sole purpose is to verify a person's identity (for example, using a fingerprint scanner to grant someone access to a banking app).

For systems included in this list, an exception may be available if the AI system does not pose a significant threat to health, safety, or rights of individuals. The act specifies criteria, one or more of which must be fulfilled, before an exception can be triggered (for example, where the AI system is intended to perform a narrow procedural task). If relying on this exception, the provider must document its assessment that the system is not high-risk, and regulators can request to see that assessment. The exception is not available for AI systems that automatically process personal data to evaluate or predict some aspect of a person's life, such as their product preferences (profiling), which are always considered high-risk.

As with the list of prohibited AI practices, the EU Commission may update the list of high-risk AI systems in the future.

Requirements for high-risk AI systems

High-risk AI systems must comply with specific requirements. Some examples include:

  • Implementing a continuous risk management system to monitor the AI throughout its lifecycle. For example, providers are expected to mitigate reasonably foreseeable risks posed by the intended use of their systems.

  • Adopting rigorous data governance practices to ensure that training, validation and testing data meet specific quality criteria. For example, there must be governance around the data collection process and the origin of data, as well as measures to prevent and mitigate biases.

  • Maintaining comprehensive technical documentation with specific information including system design specifications, capabilities, limitations, and regulatory compliance efforts.

There are additional transparency obligations for specific types of AI. For example:

  • AI systems intended to directly interact with individuals should be designed to inform users that they are interacting with an AI system, unless this is obvious to the individual from the context. A chatbot, for example, should be designed to notify users that it is a chatbot.

  • AI systems that generate text, images or certain other content must use machine-readable formats to mark outputs as AI generated or manipulated. This includes, for example, AI that generates deepfakes—images or video altered to show someone doing or saying something they didn't do or say.
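To make the machine-readable marking requirement concrete, here is a minimal, hypothetical Python sketch that wraps generated content in a provenance record. The mark_as_ai_generated function and its schema are illustrative assumptions, not a format prescribed by the act; real deployments would more likely rely on an established provenance standard such as C2PA-style content credentials.

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, model_name: str) -> dict:
    """Wrap generated content with a machine-readable provenance record.

    Hypothetical, minimal schema for illustration only; the EU AI Act does not
    prescribe this format.
    """
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,                # machine-readable flag
            "generator": model_name,             # which model produced the output
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = mark_as_ai_generated("Example model output.", "example-model-v1")
print(json.dumps(record, indent=2))
```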

Obligations on operators of high-risk AI systems

We highlight some obligations on key operators of high-risk AI systems in the AI value chain—providers and deployers—below.

Obligations on providers of high-risk AI systems

Providers of high-risk AI systems must comply with requirements including:

  • Ensuring their high-risk AI systems comply with the requirements for high-risk AI systems outlined in the act, for example by implementing a continuous risk management system.

  • Having a quality management system in place.

  • Implementing post-market monitoring plans to monitor the performance of the AI system and evaluate its continued compliance over the system's lifecycle.

Obligations on deployers of high-risk AI systems

Deployers of high-risk AI systems will have obligations including:

  • Taking appropriate technical and organizational measures to ensure they use such systems in accordance with the systems' instructions for use.

  • Maintaining automatically generated AI system logs, to the extent such logs are under their control, for a specified period (a minimal logging sketch follows this list).

  • For deployers using high-risk AI systems to provide certain essential services—such as government bodies or private organizations providing public services—conducting fundamental rights impact assessments before using certain high-risk AI systems for the first time.
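As a rough illustration of the log-retention obligation, the sketch below appends automatically generated events to daily JSON Lines files and prunes files older than a retention window. The file layout, the write_event and prune_expired_logs helpers and the 180-day window are all hypothetical choices for illustration; the act does not prescribe a storage format, and the required retention period should be confirmed against the act and other applicable law.

```python
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Hypothetical retention window for illustration only; confirm the required
# minimum retention period against the act and other applicable law.
RETENTION = timedelta(days=180)
LOG_DIR = Path("ai_system_logs")

def write_event(event: dict) -> None:
    """Append one automatically generated AI system event as a JSON line."""
    LOG_DIR.mkdir(exist_ok=True)
    record = {"timestamp": datetime.now(timezone.utc).isoformat(), **event}
    day_file = LOG_DIR / f"{datetime.now(timezone.utc):%Y-%m-%d}.jsonl"
    with day_file.open("a") as f:
        f.write(json.dumps(record) + "\n")

def prune_expired_logs() -> None:
    """Delete daily log files that have aged out of the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    for path in LOG_DIR.glob("*.jsonl"):
        file_date = datetime.strptime(path.stem, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        if file_date < cutoff:
            path.unlink()

# Example: log a decision made by a hypothetical high-risk system, then prune.
write_event({"system": "credit-scoring-assistant", "decision": "refer_to_human"})
prune_expired_logs()
```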

Rules for general-purpose AI (GPAI) models

The EU AI Act creates separate rules for general-purpose AI (GPAI) models. Providers of GPAI models will have obligations including the following:

  • Establishing policies to respect EU copyright laws.

  • Writing and making publicly available detailed summaries of training data sets.

If a GPAI model is classified as posing a systemic risk, providers will have additional obligations. Systemic risk is a risk specific to the high-impact capabilities of GPAI models: models that have a significant impact on the EU market because of their reach, or because of actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole, effects that can be propagated at scale across the value chain. The act uses training resources as one criterion for identifying systemic risk: if the cumulative amount of computing power used to train a model is greater than 10^25 floating point operations (FLOPs), the model is presumed to have high-impact capabilities and pose a systemic risk. The EU Commission can also classify a model as posing a systemic risk.
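As a back-of-the-envelope illustration of that threshold, the sketch below estimates training compute with the common 6 × parameters × training tokens heuristic for dense transformers and compares it to 10^25 FLOPs. The heuristic, the function names and the example model size are assumptions for illustration only; they are not part of the act, which simply states the cumulative-compute threshold.

```python
# Illustrative check against the act's 10^25 FLOP presumption for systemic risk.
# The 6 * parameters * tokens estimate is a common rule of thumb for dense
# transformer training compute; it is not a method defined by the EU AI Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute stated in the act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute for a dense transformer (heuristic only)."""
    return 6 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute exceeds the act's presumption threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")          # ~6.30e+24
print("Presumed to pose systemic risk:", presumed_high_impact(70e9, 15e12))  # False
```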

Providers of GPAI models that pose a systemic risk, including free, open-source models, must meet some additional obligations, for example:

  • Documenting and reporting serious incidents to the EU AI Office and relevant national regulators.

  • Implementing adequate cybersecurity to protect the model and its physical infrastructure.

EU AI Act fines

For noncompliance with the prohibited AI practices, organizations can be fined up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.

For most other violations, including noncompliance with the requirements for high-risk AI systems, organizations can be fined up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher.

The supply of incorrect, incomplete or misleading information to authorities can result in organizations being fined up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher.

Notably, the EU AI Act has different rules for fining start-ups and other small and medium-size organizations. For these businesses, the fine is the lower of the two possible amounts specified above.
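A simplified sketch of how these caps combine is shown below: for most organizations the maximum fine is the higher of the fixed amount and the turnover percentage, while for start-ups and SMEs it is the lower of the two. The helper function and the example turnover figures are hypothetical, and the calculation ignores the case-by-case factors regulators weigh when setting actual fines.

```python
def maximum_fine_eur(fixed_cap_eur: float, annual_turnover_eur: float,
                     turnover_fraction: float, is_sme: bool = False) -> float:
    """Illustrative ceiling on a fine under the EU AI Act's tiered caps.

    Most organizations face the HIGHER of the fixed cap and the turnover-based
    cap; start-ups and SMEs face the LOWER of the two. Not legal guidance.
    """
    turnover_cap = annual_turnover_eur * turnover_fraction
    return min(fixed_cap_eur, turnover_cap) if is_sme else max(fixed_cap_eur, turnover_cap)

# Prohibited-practice tier (EUR 35 million or 7% of worldwide annual turnover):
print(maximum_fine_eur(35_000_000, 2_000_000_000, 0.07))            # 140000000.0 for a large enterprise
print(maximum_fine_eur(35_000_000, 20_000_000, 0.07, is_sme=True))  # 1400000.0 for a hypothetical start-up
```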

When does the EU AI Act take effect?

The law entered into force on 1 August 2024, with different provisions of the law going into effect in stages. Some of the most notable dates include:

  • From 2 February 2025, the prohibitions on AI practices deemed to pose unacceptable risk will take effect.

  • From 2 August 2025, the rules for general-purpose AI will take effect for new GPAI models. Providers of GPAI models that were placed on the market before 2 August 2025 will have until 2 August 2027 to comply.

  • From 2 August 2026, the rules for high-risk AI systems will take effect.

  • From 2 August 2027, the rules for AI systems that are products or safety components of products regulated under specific EU laws will apply.
Footnotes

The client is responsible for ensuring compliance with all applicable laws and regulations. IBM does not provide legal advice nor represent or warrant that its services or products will ensure that the client is compliant with any law or regulation.