Artificial intelligence (AI) governance refers to the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical. AI governance frameworks direct AI research, development and application to help ensure safety, fairness and respect for human rights.
Effective AI governance includes oversight mechanisms that address risks such as bias, privacy infringement and misuse while fostering innovation and building trust. An ethics-centered approach to AI governance requires the involvement of a wide range of stakeholders, including AI developers, users, policymakers and ethicists, helping to ensure that AI-related systems are developed and used in ways that align with society's values.
AI governance addresses the inherent flaws arising from the human element in AI creation and maintenance. Because AI is a product of code and machine learning (ML) models created by people, it is susceptible to human biases and errors that can result in discrimination and other harm to individuals.
Governance provides a structured approach to mitigate these potential risks. Such an approach can include sound AI policy, regulation and data governance. These help ensure that machine learning algorithms are monitored, evaluated and updated to prevent flawed or harmful decisions, and that the data sets used for training are well curated and maintained.
Governance also aims to establish the necessary oversight to align AI behaviors with ethical standards and societal expectations so as to safeguard against potential adverse impacts.
AI governance is essential for reaching a state of compliance, trust and efficiency in developing and applying AI technologies. With AI's increasing integration into organizational and governmental operations, its potential for negative impact has become more visible.
High-profile missteps, such as the Tay chatbot incident, in which a Microsoft AI chatbot learned toxic behavior from public interactions on social media, and the biased recidivism predictions of the COMPAS software used to inform sentencing decisions, have highlighted the need for sound governance to prevent harm and maintain public trust.
These instances show that AI can cause significant social and ethical harm without proper oversight, emphasizing the importance of governance in managing the risks associated with advanced AI. By providing guidelines and frameworks, AI governance aims to balance technological innovation with safety, helping to ensure AI systems do not violate human dignity or rights.
Transparent decision-making and explainability are also critical for building trust and ensuring AI systems are used responsibly. AI systems make decisions constantly, from choosing which ads to show to determining whether to approve a loan. Understanding how an AI system reaches its decisions is essential for holding it accountable and for helping to ensure those decisions are fair and ethical.
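As a concrete illustration of explainability, one simple technique is to measure how much each input feature drives a model's predictions. The sketch below applies permutation importance to a stand-in loan-approval model; the feature names, data and model are assumptions for the example, not a real system.

```python
# Illustrative sketch: permutation importance as a basic explainability check
# on a hypothetical loan-approval model. Shuffling an important feature
# should noticeably hurt accuracy.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "credit_history_len", "num_defaults"]

# Stand-in data for a loan-approval task (4 features, binary outcome).
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
model = LogisticRegression().fit(X, y)

# How much does shuffling each feature degrade the model's score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```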
Moreover, AI governance is not just about helping to ensure one-time compliance; it is also about sustaining ethical standards over time. AI models can drift, causing changes in output quality and reliability. Current trends in governance are moving beyond mere legal compliance toward ensuring AI's social responsibility, thereby safeguarding against financial, legal and reputational damage while promoting the responsible growth of technology.
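To make drift concrete, the sketch below computes the population stability index (PSI), a common statistic for quantifying how far live input data has shifted from the data a model was trained on. The distributions and the 0.2 alert threshold are assumptions for the example.

```python
# Illustrative sketch: population stability index (PSI) for drift detection.
# Larger PSI means the live distribution has moved further from training.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of one feature; returns a nonnegative drift score."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)  # distribution seen at training time
live = rng.normal(0.4, 1.2, 10_000)      # shifted distribution in production

psi = population_stability_index(training, live)
# A common rule of thumb: PSI above 0.2 warrants investigation.
print(f"PSI = {psi:.3f} -> {'drift suspected' if psi > 0.2 else 'stable'}")
```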
Examples of AI governance include a range of policies, frameworks and practices that organizations and governments implement to help ensure the responsible use of AI technologies. These examples demonstrate how AI governance happens in different contexts:
The General Data Protection Regulation (GDPR): The GDPR is an example of AI governance, particularly in the context of personal data protection and privacy. While the GDPR is not exclusively focused on AI, many of its provisions are highly relevant to AI systems, especially those that process the personal data of individuals within the European Union.
The OECD AI Principles: Adopted by the Organisation for Economic Co-operation and Development (OECD) and embraced by more than 40 countries, these principles emphasize responsible stewardship of trustworthy AI, including transparency, fairness and accountability in AI systems.
Corporate AI ethics boards: Many companies have established ethics boards or committees to oversee AI initiatives, ensuring they align with ethical standards and societal values. For example, IBM has launched an AI Ethics Council to review new AI products and services and help ensure that they align with IBM's AI principles. These boards often include cross-functional teams from legal, technical and policy backgrounds.
In an enterprise-level organization, the CEO and senior leadership are ultimately responsible for ensuring their organization applies sound AI governance throughout the AI lifecycle. Legal and general counsel are critical in assessing and mitigating legal risks, ensuring AI applications comply with relevant laws and regulations.
Audit teams are essential for validating the data integrity of AI systems and confirming that the systems operate as intended without introducing errors or biases. The CFO oversees the financial implications, managing the costs associated with AI initiatives and mitigating any financial risks.
However, the responsibility for AI governance does not rest with a single individual or department; it is a collective responsibility where every leader must prioritize accountability and help ensure that AI systems are used responsibly and ethically across the organization.
The CEO and senior leadership are responsible for setting the overall tone and culture of the organization. When they prioritize accountable AI governance, they send a clear message to all employees that everyone must use AI responsibly and ethically. The CEO and senior leadership can also invest in employee AI governance training, actively develop internal policies and procedures, and create a culture of open communication and collaboration.
AI governance is essential for managing rapid advancements in AI technology, particularly with the emergence of generative AI. Generative AI, which includes technologies capable of creating new content and solutions, such as text, images and code, has vast potential across many use cases.
From enhancing creative processes in design and media to automating tasks in software development, generative AI is transforming how industries operate. However, with its broad applicability comes the need for robust AI governance.
The principles of responsible AI governance are essential for organizations to safeguard themselves and their customers. These principles, which can guide organizations in the ethical development and application of AI technologies, include:
Empathy: Organizations should understand the societal implications of AI, not just the technological and financial aspects. They need to anticipate and address the impact of AI on all stakeholders.
Bias control: It is essential to rigorously examine training data to prevent embedding real-world biases into AI algorithms, helping to ensure fair and unbiased decision-making processes (a minimal fairness check is sketched after this list).
Transparency: There must be clarity and openness in how AI algorithms operate and make decisions, with organizations ready to explain the logic and reasoning behind AI-driven outcomes.
Accountability: Organizations should proactively set and adhere to high standards to manage the significant changes AI can bring, maintaining responsibility for AI's impacts.
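As an illustration of the bias control principle, the sketch below computes a disparate impact ratio, which compares favorable-outcome rates between two groups. The data, group labels and the four-fifths (0.8) rule of thumb are assumptions for the example.

```python
# Illustrative sketch: disparate impact ratio as a simple bias check.
# A ratio well below 1.0 means the unprivileged group receives favorable
# outcomes less often; 0.8 is the common "four-fifths" rule of thumb.
import numpy as np

def disparate_impact(predictions, groups, privileged, unprivileged):
    """Rate of favorable outcomes for one group divided by the other's."""
    preds = np.asarray(predictions)
    grps = np.asarray(groups)
    rate_unpriv = preds[grps == unprivileged].mean()
    rate_priv = preds[grps == privileged].mean()
    return rate_unpriv / rate_priv

# Hypothetical approval decisions (1 = approved) for two applicant groups.
predictions = [1, 0, 1, 1, 1, 1, 0, 0, 0, 1]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(predictions, groups, privileged="A", unprivileged="B")
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("potential adverse impact -- review training data and model")
```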
In late 2023, the White House issued an executive order to help ensure AI safety and security. This comprehensive strategy provides a framework for establishing new standards to manage the risks inherent in AI technology. The US government's AI safety and security standards exemplify how governments approach this highly sensitive issue.
AI safety and security: Mandates that developers of powerful AI systems share safety test results and critical information with the US government. It also requires the development of standards, tools and tests to help ensure AI systems are safe and trustworthy.
Privacy protection: Prioritizes developing and using privacy-preserving techniques and strengthens privacy-preserving research and technologies. It also sets guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
Equity and civil rights: Prevents AI from exacerbating discrimination and bias in various sectors. This includes providing guidance to landlords and federal programs, addressing algorithmic discrimination and helping to ensure fairness in the criminal justice system.
Consumer, patient and student protection: Helps advance responsible AI in healthcare and education, such as developing life-saving drugs and supporting AI-enabled educational tools.
Worker support: Develops principles to mitigate AI's harmful effects on jobs and workplaces, including addressing job displacement and workplace equity.
Promoting innovation and competition: Catalyzes AI research across the US, encourages a fair and competitive AI ecosystem and facilitates the entry of skilled AI professionals into the US.
While regulations and market forces standardize many governance metrics, organizations must still determine how best to balance measures for their business. Measuring AI governance effectiveness varies by organization; each must decide which focus areas to prioritize. With focus areas such as data quality, model security, cost-value analysis, bias monitoring, individual accountability, continuous auditing and adaptability differing by the organization's domain, it is not a one-size-fits-all decision.
AI governance doesn't have universally standardized "levels" in the way that, for example, cybersecurity might have defined levels of threat response. Instead, AI governance has structured approaches and frameworks developed by various entities that organizations can adopt or adapt to their specific needs.
Organizations can use several frameworks and guidelines to develop their governance practices. Some of the most widely used frameworks include the NIST AI Risk Management Framework, the OECD Principles on Artificial Intelligence, and the European Commission's Ethics Guidelines for Trustworthy AI. These frameworks provide guidance for a range of topics, including transparency, accountability, fairness, privacy, security and safety.
The levels of governance can vary depending on the organization's size, the complexity of the AI systems in use and the regulatory environment in which the organization operates.
An overview of these approaches:
Informal governance: This is the least intensive approach, based on the values and principles of the organization. There might be some informal processes, such as ethical review boards or internal committees, but there is no formal structure or framework for AI governance.
Ad hoc governance: This is a step up from informal governance and involves the development of specific policies and procedures for AI development and use. This type of governance is often developed in response to specific challenges or risks and might not be comprehensive or systematic.
Formal governance: This is the highest level of governance and involves the development of a comprehensive AI governance framework. This framework reflects the organization's values and principles and aligns with relevant laws and regulations. Formal governance frameworks typically include risk assessment, ethical review and oversight processes.
The concept of AI governance becomes increasingly vital as automation, driven by AI, becomes prevalent in sectors ranging from healthcare and finance to transportation and public services. The automation capabilities of AI can significantly enhance efficiency, decision-making and innovation, but they also introduce challenges related to accountability, transparency and ethical considerations.
The governance of AI involves establishing robust control structures containing policies, guidelines and frameworks to address these challenges. It involves setting up mechanisms to continuously monitor and evaluate AI systems, ensuring they comply with established ethical norms and legal regulations.
Effective governance structures in AI are multidisciplinary, involving stakeholders from various fields, including technology, law, ethics and business. As AI systems become more sophisticated and integrated into critical aspects of society, the role of AI governance in guiding and shaping the trajectory of AI development and its societal impact becomes ever more crucial.
AI governance best practices go beyond mere compliance to encompass a robust system for monitoring and managing AI applications. For enterprise-level businesses, the AI governance solution should enable broad oversight and control over AI systems. Here is a sample roadmap to consider:
Visual dashboard: Use a dashboard that provides real-time updates on the health and status of AI systems, offering a clear overview for quick assessments.
Health score metrics: Implement an overall health score for AI models by using intuitive and easy-to-understand metrics to simplify monitoring.
Automated monitoring: Employ automatic detection systems for bias, drift, performance and anomalies to help ensure models function correctly and ethically.
Performance alerts: Set up alerts for when a model deviates from its predefined performance parameters, enabling timely interventions (a minimal sketch follows this list).
Custom metrics: Define custom metrics that align with the organization's key performance indicators (KPIs) and thresholds to help ensure AI outcomes contribute to business objectives.
Audit trails: Maintain easily accessible logs and audit trails for accountability and to facilitate reviews of AI systems' decisions and behaviors.
Open source tools compatibility: Choose open source tools compatible with various machine learning development platforms to benefit from the flexibility and community support.
Seamless integration: Help ensure that the AI governance platform integrates seamlessly with the existing infrastructure, including databases and software ecosystems, to avoid silos and enable efficient workflows.
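To make the automated monitoring and performance alert items concrete, here is a minimal, product-agnostic sketch. The metric names, thresholds and alert handling are assumptions for illustration, not any specific platform's API.

```python
# Illustrative sketch: check a model's live metrics against predefined
# thresholds and emit alerts on deviation. Metric names and thresholds are
# assumptions chosen for the example.
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str
    minimum: float  # alert if the live value falls below this

THRESHOLDS = [
    Threshold("accuracy", 0.90),
    Threshold("disparate_impact_ratio", 0.80),
]

def check_model_health(live_metrics: dict[str, float]) -> list[str]:
    """Return an alert message for every threshold the model violates."""
    alerts = []
    for t in THRESHOLDS:
        value = live_metrics.get(t.metric)
        if value is not None and value < t.minimum:
            alerts.append(f"ALERT: {t.metric}={value:.2f} is below {t.minimum:.2f}")
    return alerts

# Hypothetical metrics scraped from a production scoring service.
for message in check_model_health({"accuracy": 0.87,
                                   "disparate_impact_ratio": 0.95}):
    print(message)  # in practice: page the on-call team, log to the audit trail
```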
By adhering to these practices, organizations can establish a robust AI governance framework that supports responsible AI development, deployment and management, helping to ensure that AI systems are compliant and aligned with ethical standards and organizational goals.
Several countries have adopted AI governance practices and AI regulations to prevent bias and discrimination. It is important to remember that regulation is always in flux, and organizations that manage complex AI systems need to watch closely as regional legal frameworks evolve.
The Artificial Intelligence Act of the European Union, also known as the EU AI Act or the AI Act, is a law that governs the development and use of artificial intelligence in the European Union (EU). The act takes a risk-based approach to regulation, applying different rules to AI systems according to the risk they pose.
Considered the world's first comprehensive regulatory framework for AI, the EU AI Act prohibits some AI uses outright and implements strict governance, risk management and transparency requirements for others.
The act also creates rules for general-purpose artificial intelligence models, such as IBM® Granite™ and Meta’s Llama 3 open source foundation model. Penalties can range from EUR 7.5 million or 1.5% of worldwide annual turnover to EUR 35 million or 7% of worldwide annual turnover, depending on the type of noncompliance.
SR-11-7 is the US supervisory guidance on model risk management in banking.1 The guidance requires bank officials to apply company-wide model risk management initiatives and maintain an inventory of models implemented for use, under development for implementation or recently retired.
Leaders of these institutions must also prove that their models achieve the business purpose for which they were built and remain up to date without drifting. Model development and validation must enable anyone unfamiliar with a model to understand its operations, limitations and key assumptions.
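For illustration only, the model inventory and documentation SR-11-7 calls for might be represented as structured records like the sketch below. The schema and field names are assumptions; the guidance mandates what an inventory must capture, not a particular format.

```python
# Illustrative sketch of an SR-11-7-style model inventory entry. Field names
# are assumptions; the guidance specifies content (purpose, status,
# limitations, validation), not a schema. Requires Python 3.10+.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    model_id: str
    business_purpose: str            # what the model is intended to solve
    status: str                      # "in use" | "in development" | "retired"
    owner: str
    key_assumptions: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    last_validated: date | None = None  # date of last independent validation

inventory = [
    ModelRecord(
        model_id="credit-risk-v3",
        business_purpose="Estimate probability of default for retail loans",
        status="in use",
        owner="model-risk-team",
        key_assumptions=["stable macroeconomic conditions"],
        limitations=["not validated for commercial lending"],
        last_validated=date(2024, 11, 1),
    ),
]

for record in inventory:
    print(record.model_id, record.status, record.last_validated)
```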
Canada’s Directive on Automated Decision-Making describes how that country’s government uses AI to guide decisions in several departments.2 The directive uses a scoring system to assess the human intervention, peer review, monitoring and contingency planning needed for an AI tool built to serve citizens.
Organizations creating AI solutions with a high score must conduct two independent peer reviews, offer public notice in plain language, develop a human intervention failsafe and establish recurring training courses for the system. Because Canada's Directive on Automated Decision-Making is guidance for the country's own development of AI, it doesn't directly affect companies the way SR-11-7 does in the US.
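As a hedged illustration of this score-based gating, the following sketch maps an impact score to the high-score requirements described above. The numeric band and the default tier are assumptions for illustration; the directive's actual scoring comes from a detailed impact-assessment questionnaire.

```python
# Illustrative sketch: gate oversight requirements on an impact score, in the
# spirit of Canada's Directive on Automated Decision-Making. The 75-point
# band and the default tier are assumptions, not values from the directive.
HIGH_IMPACT_REQUIREMENTS = [
    "two independent peer reviews",
    "public notice in plain language",
    "human intervention failsafe",
    "recurring training courses for system operators",
]

def required_mitigations(impact_score: int) -> list[str]:
    """Return the oversight measures an AI system must implement."""
    if impact_score >= 75:  # assumed threshold for a "high" score
        return HIGH_IMPACT_REQUIREMENTS
    return ["baseline monitoring and documentation (varies by tier)"]

for measure in required_mitigations(80):
    print(measure)
```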
In April 2021, the European Commission presented its AI package, including statements on fostering a European approach to excellence and trust, and a proposal for a legal framework on AI.3
The statements declare that while most AI systems fall into the category of "minimal risk," AI systems identified as "high risk" will be required to adhere to stricter requirements, and systems deemed "unacceptable risk" will be banned. Organizations must pay close attention to these rules or risk fines.
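As a schematic illustration of this risk-based logic, the sketch below routes a few example use cases to tiers and their consequences. The specific mappings are commonly cited examples offered here as assumptions; the legislation itself enumerates which use cases fall into each tier.

```python
# Illustrative sketch of the EU's tiered logic: each risk tier carries
# different obligations. The example use cases are assumptions for
# illustration, not a legal classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict governance, risk management and transparency requirements"
    MINIMAL = "few or no additional obligations"

EXAMPLE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV screening for hiring decisions": RiskTier.HIGH,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```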
In 2023, China issued its Interim Measures for the Administration of Generative Artificial Intelligence Services. Under the law, the provision and use of generative AI services must “respect the legitimate rights and interests of others” and are required to “not endanger the physical and mental health of others, and do not infringe upon others' portrait rights, reputation rights, honor rights, privacy rights and personal information rights.”
Other countries in the Asia-Pacific region have released principles and guidelines for governing AI. In 2019, Singapore's government released a framework with guidelines for addressing issues of AI ethics in the private sector and, more recently, in May 2024, released a governance framework for generative AI. India, Japan, South Korea and Thailand are also exploring guidelines and legislation for AI governance.3
1 "SR 11-7: Guidance on model risk management," Board of Governors of the Federal Reserve System, Division of Banking Supervision and Regulation, Washington, D.C., 4 April 2011.
2 "Canada's new federal directive makes ethical AI a national issue," Digital, 8 March 2019.
3 "Asia-Pacific regulations keep pace with rapid evolution of artificial intelligence technology," Sidley, 16 August 2024.