The Artificial Intelligence Act of the European Union, also known as the EU AI Act or the AI Act, is a law that governs the development and use of artificial intelligence (AI) in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose.
Considered the world's first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.
The act also creates rules for general-purpose artificial intelligence models, such as IBM’s Granite and Meta’s Llama 3 open-source foundation models.
Penalties can range from EUR 7.5 million or 1.5% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance.
In the same way that the EU’s General Data Protection Regulation (GDPR) has inspired data privacy laws in other nations, experts anticipate that the EU AI Act will spur the development of AI governance and ethics standards worldwide.
The EU AI Act applies to multiple operators in the AI value chain, such as providers, deployers, importers, distributors, product manufacturers and authorized representatives. The definitions of providers, deployers and importers under the act are particularly worth noting.
Providers are people or organizations that develop an AI system or general-purpose AI (GPAI) model, or have one developed on their behalf, and place it on the market or put it into service under their own name or trademark.
The act broadly defines an AI system as a system that can, with some level of autonomy, process inputs to infer how to generate outputs (for example, predictions, recommendations, decisions, content) that can influence physical or virtual environments. It defines GPAI models as AI models that display significant generality, are capable of competently performing a wide range of distinct tasks and can be integrated into a variety of downstream AI systems or applications. For example, a foundation model is a GPAI model; a chatbot or generative AI tool built on that model would be an AI system.
Deployers are people or organizations that use AI systems. For example, an organization that uses a third-party AI chatbot to handle customer service inquiries would be a deployer.
Importers are people or organizations located or established in the EU that bring the AI systems of a person or company established outside the EU to the EU market.
The EU AI Act also applies to providers and deployers outside of the EU if their AI, or the outputs of the AI, are used in the EU.
For example, suppose a company in the EU sends data to an AI provider outside the EU, who uses AI to process the data, and then sends the output back to the company in the EU for use. Because the output of the provider’s AI system is used in the EU, the provider is bound by the EU AI Act.
Providers outside the EU that offer AI services in the EU must designate authorized representatives in the EU to coordinate compliance efforts on their behalf.
While the act has a broad reach, some uses of AI are exempt. Purely personal uses of AI, and AI models and systems used only for scientific research and development, are examples of exempt uses of AI.
The EU AI Act regulates AI systems based on risk level, where risk refers to the likelihood and severity of the potential harm. Some of the most important provisions are outlined below.
AI systems that do not fall within one of the risk categories in the EU AI Act are not subject to requirements under the act (these are often described as 'minimal risk'), although some may need to meet transparency obligations and all must comply with other existing laws. Examples include email spam filters and video games. Many common AI uses today fall into this category.
It is worth noting that many of the EU AI Act's finer details surrounding implementation are still being ironed out. For example, the act notes that the EU Commission will release further guidance on requirements such as postmarket monitoring plans and training data summaries.
The EU AI Act explicitly lists certain prohibited AI practices that are deemed to pose an unacceptable level of risk. For example, developing or using an AI system that intentionally manipulates people into making harmful choices they otherwise wouldn't make is deemed by the act to pose unacceptable risk to users, and is a prohibited AI practice.
The EU Commission can amend the list of prohibited practices in the act, so it is possible that more AI practices may be prohibited in the future.
A partial list of prohibited AI practices at the time this article was published includes:
AI systems are considered high-risk under the EU AI Act if they are a product, or safety component of a product, regulated under specific EU laws referenced by the act, such as toy safety and in vitro diagnostic medical device laws.
The act also lists specific uses that are generally considered high-risk, including AI systems used:
For systems included in this list, an exception may be available if the AI system does not pose a significant threat to health, safety, or rights of individuals. The act specifies criteria, one or more of which must be fulfilled, before an exception can be triggered (for example, where the AI system is intended to perform a narrow procedural task). If relying on this exception, the provider must document its assessment that the system is not high-risk, and regulators can request to see that assessment. The exception is not available for AI systems that automatically process personal data to evaluate or predict some aspect of a person's life, such as their product preferences (profiling), which are always considered high-risk.
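To make the conditional structure of this exception concrete, here is a hedged sketch that models it as boolean logic. All names (HighRiskAssessment, is_high_risk, the criterion string) are hypothetical illustrations, not terms from the act or any official tooling, and the real assessment depends on the legal criteria described above.

```python
# Illustrative sketch only: a listed system may escape the high-risk classification
# only if it meets at least one of the narrow-exception criteria AND does not perform
# profiling; the provider must also document the assessment (regulators can request it).

from dataclasses import dataclass, field


@dataclass
class HighRiskAssessment:
    on_high_risk_list: bool            # system falls under a listed high-risk use
    performs_profiling: bool           # automated evaluation of aspects of a person's life
    exception_criteria_met: list[str] = field(default_factory=list)


def is_high_risk(assessment: HighRiskAssessment) -> bool:
    """Rough approximation of the classification logic described in the article."""
    if not assessment.on_high_risk_list:
        return False
    # Profiling systems are always high-risk; the exception is unavailable.
    if assessment.performs_profiling:
        return True
    # Otherwise, at least one exception criterion must be fulfilled.
    return len(assessment.exception_criteria_met) == 0


# Example: a listed system that only performs a narrow procedural task
example = HighRiskAssessment(
    on_high_risk_list=True,
    performs_profiling=False,
    exception_criteria_met=["narrow procedural task"],
)
print(is_high_risk(example))  # False - may qualify for the exception, if documented
```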
As with the list of prohibited AI practices, the EU Commission may update the list of high-risk AI systems in the future.
High-risk AI systems must comply with specific requirements. Some examples include:
There are additional transparency obligations for specific types of AI. For example:
We highlight some obligations on key operators of high-risk AI systems in the AI value chain—providers and deployers—below.
Providers of high-risk AI systems must comply with requirements including:
Deployers of high-risk AI systems will have obligations including:
The EU AI Act creates separate rules for general-purpose AI (GPAI) models. Providers of GPAI models will have obligations including the following:
If a GPAI model is classified as posing a systemic risk, providers will have additional obligations. Systemic risk refers to the high-impact capabilities of GPAI models that could have a significant effect on the EU market because of their reach, or because of actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights or society as a whole, effects that can be propagated at scale across the value chain. The act uses training resources as one criterion for identifying systemic risk: if the cumulative amount of computing power used to train a model is greater than 10^25 floating point operations (FLOPs), the model is presumed to have high-impact capabilities and pose a systemic risk. The EU Commission can also classify a model as posing a systemic risk.
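As a quick illustration of the compute-based presumption, the hedged sketch below encodes the 10^25 FLOP threshold as a simple check. The function name and structure are assumptions for illustration, not anything defined by the act.

```python
# Illustrative sketch only: the 1e25 FLOP figure is the act's presumption threshold
# described above; the function and constant names are hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in floating point operations


def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """Return True if a GPAI model is presumed to have high-impact capabilities
    (and therefore to pose systemic risk) based on training compute alone.

    Note: the EU Commission can also designate a model as posing systemic risk
    on other grounds, so False here is not a definitive answer.
    """
    return training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


print(presumed_systemic_risk(5e24))  # False - below the presumption threshold
print(presumed_systemic_risk(3e25))  # True  - presumed to pose systemic risk
```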
Providers of GPAI models that pose a systemic risk, including free, open-source models, must meet some additional obligations, for example:
For noncompliance with the prohibited AI practices, organizations can be fined up to EUR 35,000,000 or 7% of worldwide annual turnover, whichever is higher.
For most other violations, including noncompliance with the requirements for high-risk AI systems, organizations can be fined up to EUR 15,000,000 or 3% of worldwide annual turnover, whichever is higher.
The supply of incorrect, incomplete or misleading information to authorities can result in organizations being fined up to EUR 7,500,000 or 1% of worldwide annual turnover, whichever is higher.
Notably, the EU AI Act has different rules for fining start-ups and other small and medium-size organizations. For these businesses, the fine is the lower of the two possible amounts specified above.
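To make the "whichever is higher" and small-business rules concrete, here is a hedged sketch of the penalty arithmetic. The tier amounts and percentages come from the figures above; the tier labels and function are hypothetical illustrations, not an official calculation method.

```python
# Illustrative sketch of the fine ceilings described above.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # EUR cap, share of worldwide annual turnover
    "most_other_violations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}


def max_fine(tier: str, worldwide_annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum possible fine for a violation tier.

    Large organizations face the higher of the fixed cap and the turnover-based cap;
    start-ups and other small and medium-size organizations face the lower of the two.
    """
    fixed_cap, turnover_share = FINE_TIERS[tier]
    turnover_cap = turnover_share * worldwide_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


# Example: a large company with EUR 1 billion turnover vs. an SME with EUR 20 million
print(max_fine("prohibited_practices", 1_000_000_000))            # 70,000,000.0
print(max_fine("prohibited_practices", 20_000_000, is_sme=True))  # 1,400,000.0
```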
The law entered into force on 1 August 2024, with different provisions of the law going into effect in stages. Some of the most notable dates include:
The client is responsible for ensuring compliance with all applicable laws and regulations. IBM does not provide legal advice nor represent or warrant that its services or products will ensure that the client is compliant with any law or regulation.