What is AI governance?


Artificial intelligence (AI) governance refers to the processes, standards and guardrails that help ensure that AI systems are safe and ethical. AI governance frameworks direct AI research, development and application to help ensure safety, fairness and respect for human rights. Such frameworks additionally help organizations maintain regulatory compliance and secure sensitive data with respect to AI-powered technologies. 

AI governance encompasses a wide range of practices, protocols, safeguards, systems and tools. AI professionals at the forefront of governance are establishing methods and policies for developing and training new AI models, operating guidelines for applying these models, as well as software and other technologies to create layered safety measures with appropriate human regulatory oversight.  


AI models, along with machine learning (ML) and deep learning (DL) algorithms, are ultimately trained on human data. As a result, AI systems are susceptible to human biases. Left unchecked, AI systems influenced by erroneous bias can lead to serious harm to individuals and entire populations. AI governance policies aim to correct these types of potentially discriminatory or otherwise dangerous errors. 

Effective AI governance oversight mechanisms address risks such as bias, privacy infringement and misuse while still fostering innovation and building trust. An ethical, AI-centered approach to governance requires human oversight and input from a wide range of stakeholders, including developers, users, policymakers and ethicists. Implementing a modern, robust AI governance policy helps AI systems adhere to social and ethical values while mitigating vulnerabilities and reducing risk across a broad range of AI applications.

While AI systems continue to advance, AI governance processes are evolving to set and meet global standards. Although meeting regulatory requirements is a major business incentive behind governance program investments, the benefits of these solutions extend beyond compliance: they proactively reduce liability risk by bolstering data privacy, data security and data access controls.

AI governance provides a structured approach to mitigating the potential risks associated with AI systems. Through protocols and practices such as data governance and continuous monitoring, AI governance policies can effectively evaluate and guide AI tools. When implemented correctly, AI governance helps prevent flawed or harmful decisions throughout the entire AI pipeline, from the datasets models are trained on to the execution of AI-derived solutions.

Simply put, AI governance serves to establish the necessary oversight methods and methodologies required to align AI behaviors with ethical standards and societal expectations, thus safeguarding users, providers and any incidentally involved third parties against inadvertent negative, adverse or inequitable impacts.

Why is AI governance important?

AI governance is essential for reaching a state of compliance, trust and efficiency in developing and applying AI technologies. With AI’s increasing integration into organizational and governmental operations, its potential for negative impact has become more visible. In fact, research from the IBM Institute for Business Value found that 80% of business leaders see AI explainability, ethics, bias or trust as a major roadblock to generative AI adoption. AI governance seeks to address these challenges by inspiring trust and preventing any potentially adverse impact from the use of AI technology. 

Certain high-profile incidents have made headlines highlighting the dangers of ungoverned AI systems. In one such incident, Microsoft’s Tay chatbot began parroting toxic behavior learned from social media posts after being trained on unmoderated data. Flaws in the COMPAS software, used to inform criminal sentencing, proved even more damaging: inherent bias in the AI model led to unjust sentencing outcomes and further underscored how important AI governance is to building and maintaining public trust in AI systems.

These instances show that AI can cause significant social and ethical harm without proper oversight, emphasizing the importance of governance in managing the risks associated with advanced AI. By providing guidelines and frameworks, AI governance aims to balance technological innovation with safety, helping to ensure that AI systems do not violate human dignity or rights.

Understanding how AI systems make decisions, and holding them accountable for their conclusions, is an essential part of AI governance and helps ensure that these programs make fair and ethical choices. From deciding which ads to show which users to determining loan eligibility, AI systems make decisions all the time, from the trivial to the critical. When relying on these systems, transparent decision-making and explainability could not be more important for using AI responsibly and building trust.

Moreover, AI governance is not just about helping to ensure one-time compliance; it’s also about sustaining ethical standards over time. AI models can drift, leading to changes in output quality and reliability. Current trends in AI governance are moving beyond mere legal compliance: increasingly, value is placed on socially responsible AI development and applications that safeguard against financial, legal and reputational damage, while still promoting the ethical growth and advancement of this exciting new technology.


Examples of AI governance

Examples of AI governance include a range of policies, frameworks and practices that organizations, businesses, ruling bodies and governments are implementing with the common goal of promoting the responsible use of AI technologies. These examples demonstrate how AI governance happens in different contexts:

  • The General Data Protection Regulation (GDPR): The GDPR is an example of AI governance, particularly in the context of personal data protection and privacy. While the GDPR is not exclusively focused on AI, many of its provisions are highly relevant to AI systems, especially those that process the personal data of individuals within the European Union.

  • The OECD AI Principles: Adopted by over 40 countries, the Organization for Economic Co-operation and Development’s AI Principles emphasize responsible stewardship of trustworthy AI, including guidelines promoting transparency, fairness and accountability in AI systems.

  • AI ethics boards: Many companies have established dedicated AI ethics boards or committees to oversee AI initiatives and ensure they align with ethical standards and societal values. These boards often include cross-functional teams from legal, technical and policy backgrounds. Since 2019, IBM’s own AI ethics board has reviewed new AI products and services to ensure that they align with IBM’s responsible AI principles.

Who oversees responsible AI governance?

According to a report from the IBM Institute for Business Value, 80% of organizations have a separate part of their risk function dedicated to risks associated with the use of AI or generative AI. 

At the enterprise level, the CEO and senior leadership are ultimately responsible for implementing AI governance throughout the AI lifecycle, typically delegating practical policy tasks to relevant stakeholders such as the CTO and their teams. Because AI governance carries extensive regulatory implications, legal departments and general counsel are critical stakeholders when assessing and mitigating risk, often tasked with ensuring that AI applications comply with relevant laws and compliance requirements.

Depending on the size of a given organization, a dedicated audit team may be responsible for validating the data integrity of the AI systems in use, confirming that they operate as intended without introducing errors or biases. While smaller operations may not have dedicated audit teams, these teams and roles are increasingly becoming a priority as AI technology continues to draw more resources.

Where AI governance impacts financial matters, the CFO and their team are most commonly responsible for managing costs associated with AI initiatives, including any governance measures instituted to mitigate financial risks to both the business itself and its clients.

Ultimately, however, responsibility for AI governance does not rest with any single individual or department; it is a collective responsibility that requires every leader, stakeholder and team member to prioritize accountability and help ensure that AI systems are used responsibly and ethically across the organization.

By prioritizing accountable AI governance, investing in rigorous staff training, and fostering a transparent culture of open communication and collaboration, managers and leadership can send a clear message to all employees that everyone must use AI responsibly and ethically. 

While the CEO and senior leadership set the overall tone and culture of an organization, governance demands continuous monitoring across all AI pipelines, and the serious risks associated with AI require all stakeholders to do their part in ensuring the appropriate application of this powerful technology.

Principles and standards of responsible AI governance

The transformative power of AI technology across countless disparate industries and use cases is still coming into focus. The introduction of modern generative AI models such as ChatGPT, Claude and Midjourney, capable of generating new content including text, images, audio, video and code, has ignited the public imagination with dazzling possibilities.

However, while many artists find inspiration in these creative tools, others see them as a threat. Because they draw from existing creative works, early forms of these models were accused of plagiarism, as their output appeared to reinterpret copyrighted intellectual property without attribution.

In response to these concerns, current iterations of these models have adopted stricter policies for vetting training data, helping to ensure that works used to train models are appropriately licensed. AI governance efforts continue to help these valuable creative tools remain compliant and ethical.

Going beyond media creation, other exciting types of AI, such as agentic AI models, are being used to automate tedious tasks and advance productivity and research in fields as far apart as law and pharmacology. From coding and development to astronomy and physics, AI is transforming the way industries operate. However, with its broad applicability comes an even greater need for thorough AI governance.


The principles of responsible AI governance are essential for organizations to safeguard themselves and their customers. Guiding organizations in the ethical development and application of AI technologies, these principles include:

  • Empathy: Organizations should understand the societal implications of AI, not just the technological and financial aspects. They need to anticipate and address the impact of AI on all stakeholders with an empathetic approach that addresses the concerns of not only providers, but clients, customers, and everyone affected by the use and outcomes of AI engines. 

  • Bias control: It is essential to rigorously examine training data to prevent embedding real-world biases into AI algorithms, helping to ensure fair and unbiased decision-making processes. Bias control can be especially important in situations where AI may be making decisions that can unfairly impact certain marginalized communities, such as criminal sentencing or medical diagnostics.  

  • Transparency: There must be clarity and openness in how AI algorithms operate and make decisions, with organizations ready to explain the logic and reasoning behind AI-driven outcomes. Should an issue arise from a decision made by an AI, it’s critical that methods be made available to triage any potentially flawed decision-making processes to understand how and why AI models come to certain conclusions. 

  • Accountability: Organizations should proactively set and adhere to high standards to manage the significant changes AI can bring, maintaining responsibility for AI’s impacts. For example, when a doctor assesses a patient’s test results, such as an x-ray, they are clearly accountable for conclusions and actions resulting from their interpretation of the data. As AI systems begin to analyze medical information, often providing faster and more accurate diagnosis, there is a vital need to establish accountability through the AI pipeline, delineating where the doctor’s accountability may end and the AI model’s accountability begins. 
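The bias-control principle above can be made concrete with a simple fairness check. The sketch below computes the demographic parity difference, the gap in positive-outcome rates between two groups, on hypothetical loan-approval data; the data and the 0.1 threshold are illustrative choices for this example, not a regulatory standard.

```python
# Minimal fairness check: demographic parity difference between two groups.
# Data and the 0.1 threshold are hypothetical, for illustration only.

def positive_rate(outcomes):
    """Fraction of positive (approved = 1) decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical loan-approval decisions (1 = approved, 0 = denied)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")
if gap > 0.1:  # illustrative threshold
    print("Potential disparate impact: flag model for bias review")
```

A real governance program would apply checks like this (and more robust fairness metrics) continuously, on production data, rather than as a one-time audit.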

While regulations and market forces standardize many governance metrics, individual organizations must still determine how best to define relevant measurements for their unique business. Because considerations such as data quality, model security, cost-value analysis, bias monitoring, individual accountability, continuous auditing and adaptability all depend on an organization’s domain, AI governance can never be a one-size-fits-all solution; each organization must decide which parameters to prioritize.

Levels of AI governance

AI governance doesn’t have universally standardized “levels” in the way that, for example, cybersecurity might have defined levels of threat response. Instead, AI governance has structured approaches and frameworks developed by various entities that organizations can adopt or adapt to their specific needs.

Organizations can use several frameworks and guidelines to develop their governance practices. Some of the most widely used frameworks include the NIST AI Risk Management Framework, the OECD Principles on Artificial Intelligence and the European Commission’s Ethics Guidelines for Trustworthy AI. These frameworks provide guidance for a range of factors, including transparency, accountability, fairness, privacy, security and safety.

Levels of AI governance can vary depending on any given organization’s size, the complexity of the AI systems in use and the regulatory environment in which the organization operates.

While not exhaustive, a general overview of AI governance approaches includes:

Informal governance

Informal governance is the least intensive approach, based on the values and principles of the organization. There might be some informal processes, such as ethical review boards or internal committees, but there is no formal structure or framework for AI governance.

Ad hoc governance

A step up from informal governance, ad hoc governance describes the development of specific policies and procedures for AI development and use only as needed. This type of governance is often developed in response to specific challenges or risks and might not be comprehensive or systematic.

Formal governance

The highest level of governance, formal governance involves the development of a comprehensive AI governance framework. This framework reflects the organization’s values and principles and aligns with relevant laws and regulations. Formal governance frameworks typically include risk assessment, ethical review and oversight processes.

How organizations are deploying AI governance

The governance of AI involves establishing robust control structures containing policies, guidelines and frameworks to address various and specific challenges. It involves setting up mechanisms to continuously monitor and evaluate AI systems, ensuring they comply with established ethical norms and legal regulations.

The concept of AI governance becomes increasingly vital as automation, driven by AI, becomes prevalent in sectors ranging from healthcare and finance to transportation and public services. The automation capabilities of AI can significantly enhance efficiency, decision-making and innovation, but they also introduce challenges related to accountability, transparency and ethical considerations.

Effective governance structures in AI are multidisciplinary, involving stakeholders from various fields, including technology, law, ethics and business. As AI systems become more sophisticated and integrated into critical aspects of society, the role of AI governance in guiding and shaping the trajectory of AI development and its societal impact becomes ever more crucial.

Moving beyond mere compliance, AI governance best practices require a strategic approach to develop a robust system capable of monitoring and managing AI applications. For enterprise-level businesses, any AI governance solution should enable broad oversight and control over AI systems.

A general AI governance roadmap might include:

1.    Visual dashboards: Dashboards provide real-time updates on the health and status of AI systems, offering a clear overview for quick assessments.

2.    Health score metrics: Overall health scores for AI models aggregate multiple data points into an intuitive and easy-to-understand unified metric to simplify at-a-glance monitoring.  

3.    Automated monitoring: Automatic detection systems for bias, drift, performance and anomalies help ensure models function correctly and ethically without the need for round-the-clock human monitoring.  

4.    Performance alerts: Automated performance alerts for when a model deviates from its predefined performance parameters enable rapid responses to any issues that may arise.

5.    Custom metrics: Custom metrics that align with an organization’s key performance indicators and thresholds help ensure AI outcomes contribute to customized business objectives.  

6.    Audit trails: Easily accessible logs and audit trails support accountability and facilitate reviews of the decisions and behaviors of AI systems.

7.    Open source compatibility: Open source tools promote transparency by exposing their code to public scrutiny and, as a result, typically reflect the most current security risk mitigations. Maintaining compatibility with the latest open source machine learning development platforms allows AI governance frameworks to benefit from the flexibility and community support of open source software.

8.    Seamless integration: The most effective AI governance platforms are designed to integrate seamlessly with existing infrastructure, including databases and software ecosystems, to avoid silos and enable efficient workflows.
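Several of the roadmap items above (automated monitoring, performance alerts and health score metrics) can be sketched in a few lines. The snippet below is a minimal illustration, not any particular platform's API; the metric names, weights and acceptable ranges are hypothetical examples. It flags metrics that leave their predefined range and aggregates the results into a single 0-100 health score.

```python
# Minimal sketch of automated monitoring with alerts and a health score.
# Metric names, weights and thresholds are hypothetical examples.

THRESHOLDS = {
    "accuracy": (0.85, 1.00),    # acceptable range (min, max)
    "drift_score": (0.00, 0.20),
    "bias_gap": (0.00, 0.10),
}
WEIGHTS = {"accuracy": 0.5, "drift_score": 0.3, "bias_gap": 0.2}

def check_alerts(metrics):
    """Return the names of metrics outside their predefined range."""
    return [
        name for name, value in metrics.items()
        if not (THRESHOLDS[name][0] <= value <= THRESHOLDS[name][1])
    ]

def health_score(metrics):
    """Aggregate per-metric health into one 0-100 score for dashboards."""
    alerts = set(check_alerts(metrics))
    return round(100 * sum(w for name, w in WEIGHTS.items() if name not in alerts))

metrics = {"accuracy": 0.91, "drift_score": 0.27, "bias_gap": 0.04}
print("Alerts:", check_alerts(metrics))        # drift_score is out of range
print("Health score:", health_score(metrics))  # only accuracy and bias_gap count
```

In practice, a loop like this would run on a schedule against production telemetry, feed the visual dashboard, and write each evaluation to the audit trail.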

By adhering to these practices, organizations can establish a robust AI governance framework that supports responsible AI development, deployment and management, helping to ensure that AI systems are compliant and aligned with ethical standards and organizational goals.

What regulations require AI governance?

Several countries have adopted AI governance practices and AI regulations to prevent bias and discrimination. It’s important to keep in mind that regulation is always in flux, and organizations that manage complex AI systems need to keep a close eye on evolving regional legal frameworks.

The EU AI Act

The Artificial Intelligence Act of the European Union, also known as the EU AI Act or the AI Act, is a law that governs the development and use of artificial intelligence (AI) in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose. Officially in force since August 2024, the AI Act will be fully applicable within two years of that date, with some exceptions: for instance, rules for high-risk AI systems embedded in regulated products have an extended transition period, until 2027.

Considered the world’s first comprehensive regulatory framework for AI, the EU AI Act bans AI uses deemed to pose “unacceptable risk” outright and imposes strict governance, risk management and transparency requirements on “high-risk” systems, while most AI systems fall into a “minimal risk” category with lighter obligations.

The act also creates rules for general-purpose artificial intelligence (GPAI) models, such as IBM Granite and Meta’s Llama 3 open-source foundation model. Penalties can range from EUR 7.5 million or 1.5% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance.
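As a rough illustration of the penalty structure described above, the ceiling for a given violation tier is commonly described as the higher of the fixed amount and the percentage of worldwide annual turnover (hedged here as a simplification; the act itself contains exceptions, for example for smaller enterprises). The tiers below mirror the figures in the text; the example turnover is hypothetical.

```python
# Illustrative EU AI Act penalty ceilings: the higher of a fixed amount
# or a percentage of worldwide annual turnover. Tiers mirror the text above;
# this is a simplification of the act, not legal guidance.

def max_penalty(fixed_eur, pct_of_turnover, turnover_eur):
    """Penalty ceiling: whichever is higher, fixed amount or % of turnover."""
    return max(fixed_eur, pct_of_turnover * turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover
turnover = 2_000_000_000

# Most serious tier: EUR 35 million or 7% of turnover
print(max_penalty(35_000_000, 0.07, turnover))   # 7% dominates: EUR 140 million

# Lowest tier in the text: EUR 7.5 million or 1.5% of turnover
print(max_penalty(7_500_000, 0.015, turnover))   # 1.5% dominates: EUR 30 million
```

For a smaller firm, the fixed amount dominates instead, which is why both figures appear in each tier.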

In July 2025, the European Commission published three tools to support the responsible and ethical development and deployment of GPAI models.

The United States' SR-11-7

Historically, SR-11-7 was the US regulatory standard for model governance in banking.1 The regulation required bank officials to apply company-wide model risk management initiatives and to maintain an inventory of models implemented for use, under development for implementation or recently retired. However, as of April 17, 2026, the Federal Reserve issued its Revised Guidance on Model Risk Management (SR-26-2), superseding SR-11-7. The revised interagency model risk management guidance from the Federal Reserve, OCC and FDIC shifts toward an explicitly risk-based and proportional methodology, with regulatory expectations varying based on an institution’s size, complexity and model-risk profile, versus the earlier one-size-fits-all governance requirements.

Leaders of affected institutions must also prove that their models achieve the business purpose they were intended to serve, and that they are up to date and have not drifted. Model development and validation must enable anyone unfamiliar with a model to understand its operations, limitations and key assumptions.

Beyond these banking-specific regulations, the White House issued A National Policy Framework for Artificial Intelligence in March 2026. The framework urges Congress to address six key “policy topics” related to AI governance:

  1. Protecting Children and Empowering Parents
  2. Safeguarding and Strengthening American Communities
  3. Respecting Intellectual Property Rights and Supporting Creators
  4. Preventing Censorship and Protecting Free Speech  
  5. Enabling Innovation and Ensuring American AI Dominance
  6. Educating Americans and Developing an AI-Ready Workforce

Canada's Directive on Automated Decision-Making

Canada’s Directive on Automated Decision-Making describes how that country’s government uses AI to guide decisions in several departments.2 The directive uses a scoring system to assess the human intervention, peer review, monitoring and contingency planning needed for an AI tool built to serve citizens.

Organizations creating AI solutions with a high score must conduct two independent peer reviews, offer public notice in plain language, develop a human intervention failsafe and establish recurring training courses for the system. As Canada’s Directive on Automated Decision-Making is guidance for the country’s own development of AI, the regulation doesn’t directly affect companies the way SR-11-7 does in the US.
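The directive's tiered requirements can be sketched as a simple lookup from an assessed impact score to mandated safeguards. The level names and thresholds below are illustrative stand-ins, not the directive's actual scoring scale; only the high-impact safeguards listed are drawn from the text above, and the lower tiers are hypothetical placeholders.

```python
# Illustrative mapping from an assessed impact score to required safeguards.
# Thresholds and tier names are hypothetical; only the high-impact list
# reflects requirements described in the text (peer reviews, plain-language
# notice, human intervention failsafe, recurring training).

def required_safeguards(impact_score):
    """Return the safeguards mandated at a given (hypothetical) impact score."""
    if impact_score >= 75:  # high impact
        return [
            "two independent peer reviews",
            "public notice in plain language",
            "human intervention failsafe",
            "recurring training courses",
        ]
    if impact_score >= 50:  # moderate impact (placeholder tier)
        return ["one peer review", "public notice in plain language"]
    return ["basic documentation"]  # low impact (placeholder tier)

print(required_safeguards(80))
```

The design point is that obligations scale with assessed impact, so the review burden lands where the risk to citizens is highest.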

Since releasing its initial directive, the Canadian government has released an AI Strategy for the Federal Public Service 2025-2027, with additional considerations focusing on both the public and private sectors to better serve Canadians through responsible AI adoption.

AI governance regulations and guidelines in the Asia-Pacific region


In 2023, China issued its Interim Measures for the Administration of Generative Artificial Intelligence Services. Under the law, the provision and use of generative AI services must “respect the legitimate rights and interests of others” and are required to “not endanger the physical and mental health of others, and do not infringe upon others’ portrait rights, reputation rights, honor rights, privacy rights and personal information rights”.

Other countries in the Asia-Pacific region have released several principles and guidelines for governing AI. Singapore’s government released a proposed governance framework for generative AI in 2024, as well as an AI governance model framework for agentic AI in 2026. India, Japan, South Korea and Thailand are also exploring guidelines and legislation for AI governance.3

Authors

Tim Mucci

IBM Writer

Gather

Cole Stryker

Staff Editor, AI Models

IBM Think

Footnotes

1 “SR 11-7: Guidance on model risk management”. Board of Governors of the Federal Reserve System, Washington, D.C., Division of Banking Supervision and Regulation. April 4, 2011.

2 “Canada’s new federal directive makes ethical AI a national issue”. Digital. March 8, 2019.

3 “Asia-Pacific regulations keep pace with rapid evolution of artificial intelligence technology”. Sidley. August 16, 2024.
