AI ethics

IBM is helping to advance responsible AI with a multidisciplinary, multidimensional approach

Learn about foundation model ethics
Now is the moment for responsible AI

Businesses face an increasingly complex and ever-changing global regulatory landscape for AI. The IBM approach to AI ethics balances innovation with responsibility, helping you adopt trusted AI at scale.

Foundation models: Opportunities, risks and mitigations.

Fostering a more ethical future by leveraging technology

Case study: Building trust in AI

Our principles and pillars

The Principles for Trust and Transparency are the guiding values that distinguish the IBM approach to AI ethics.

Read the Principles for Trust and Transparency

The purpose of AI is to augment human intelligence

IBM believes AI should make all of us better at our jobs, and that the benefits of the AI era should touch the many, not just the elite few.

Data and insights belong to their creator

IBM clients’ data is their data, and their insights are their insights. We believe that government data policies should be fair and equitable and prioritize openness.

Technology must be transparent and explainable

Companies must be clear about who trains their AI systems, what data was used in training and, most importantly, what went into the recommendations of their algorithms.

The Principles are supported by the Pillars of Trust, our foundational properties for AI ethics.

Explainability

Good design does not sacrifice transparency in creating a seamless experience.

AI Explainability 360
Fairness

Properly calibrated, AI can assist humans in making choices more fairly (a short metric sketch follows these pillars).

AI Fairness 360
Robustness

As systems are employed to make crucial decisions, AI must be secure and robust.

Adversarial Robustness 360
Transparency

Transparency reinforces trust, and the best way to promote transparency is through disclosure.

AI FactSheets 360
Privacy

AI systems must prioritize and safeguard consumers’ privacy and data rights.

AI Privacy 360 toolkit
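
Most of the toolkits linked above are open-source Python packages. As a hedged illustration of the Fairness pillar only, the sketch below uses AI Fairness 360 (the aif360 package) to compute two common group-fairness metrics over a small made-up hiring table; the column names, group definitions and toy data are assumptions for this example, not anything published on this page.

```python
# Minimal sketch with the open-source AI Fairness 360 (aif360) toolkit.
# The column names, group definitions and toy data are illustrative
# assumptions, not taken from this page.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring data: "sex" is the protected attribute (1 = privileged group),
# "hired" is the favorable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (unprivileged / privileged).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: difference between those rates.
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A disparate impact well below 1.0, or a statistical parity difference far from 0, is a signal to investigate the data and model further, not a verdict on its own.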

Ethics for generative AI

When ethically designed and responsibly brought to market, generative AI capabilities support unprecedented opportunities to benefit business and society alike.

Foundation models: Opportunities, risks and mitigations

Read the paper
The CEO’s Guide to Generative AI: Platforms, data, governance and ethics

Human values are at the heart of responsible AI.

The urgency of AI governance

IBM and the Data & Trust Alliance offer insights about the need for governance, particularly in the era of generative AI.

A policymaker’s guide to foundation models

A risk- and context-based approach to AI regulation can mitigate potential risks, including those posed by foundation models.

Putting principles into action

The IBM AI Ethics Board is at the center of IBM’s commitment to trust. Its mission is to:

  • Provide governance and decision-making as IBM develops, deploys, and uses AI and other technologies
  • Maintain consistency with the company’s values
  • Advance trustworthy AI for our clients, our partners and the world

Co-chaired by Francesca Rossi and Christina Montgomery, the Board sponsors workstreams that deliver thought leadership, policy advocacy, and education and training on AI ethics, driving responsible innovation and the advancement of AI and emerging technologies. It also assesses use cases that raise potential ethical concerns.

The Board is a critical mechanism by which IBM holds our company and all IBMers accountable to our values and commitments to the ethical development and deployment of technology.

Francesca Rossi

Learn more about Francesca

 Christina Montgomery

Learn more about Christina

Policies and perspectives

IBM advocates for policies that balance innovation with responsibility and trust to help build a better future for all. 

IBM's five best practices for incorporating and balancing human oversight, agency and accountability over decisions across the AI lifecycle.

IBM’s recommendations for policymakers to mitigate the harms of deepfakes.

IBM’s recommendations for policymakers to preserve an open innovation ecosystem for AI.

These standards can inform auditors and developers of AI about which protected characteristics should be considered in bias audits and how to translate them into the data points required to conduct those assessments.
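
As a small, hedged sketch of what translating a protected characteristic into audit data points can look like in practice, the example below uses pandas to derive an age band from a date of birth and compare selection rates across the resulting groups; the field names, bands and toy data are illustrative assumptions, not part of any IBM standard.

```python
# Illustrative sketch (not an IBM tool): turning a protected characteristic
# into the data points a bias audit needs, then comparing selection rates
# across groups. Field names, bands and data are assumptions.
import pandas as pd

applicants = pd.DataFrame({
    "date_of_birth": ["1959-04-02", "1988-11-23", "1995-06-30", "1961-01-15"],
    "selected":      [0, 1, 1, 0],
})

# Data point 1: derive age from the raw characteristic (fixed reference date
# chosen for reproducibility of the example).
dob = pd.to_datetime(applicants["date_of_birth"])
age = (pd.Timestamp("2024-01-01") - dob).dt.days // 365

# Data point 2: bucket age into the bands the audit will report on.
applicants["age_band"] = pd.cut(age, bins=[0, 39, 200],
                                labels=["under_40", "40_and_over"])

# The audit statistic: selection rate per band, and the ratio between bands.
rates = applicants.groupby("age_band", observed=True)["selected"].mean()
print(rates)
print("Selection-rate ratio:", rates.min() / rates.max())
```

The same pattern applies to other protected characteristics: decide which derived fields the audit will report on, then compute the comparison statistic per group.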

IBM recommends policymakers consider two distinct categories of data-driven business models and tailor regulatory obligations in proportion to the risk they pose to consumers.

Policymakers should understand the privacy risks that neurotechnologies pose as well as how they work and what data is necessary for them to function.

Five priorities to strengthen the adoption of testing, assessment and mitigation strategies to minimize bias in AI systems.

Companies should utilize a risk-based AI governance policy framework and targeted policies to develop and operate trustworthy AI.

watsonx.governance

Accelerate responsible, transparent and explainable data and AI workflows.

Get the AI governance ebook

Next steps

Find out how IBM can help you accelerate your responsible AI journey.

Explore our AI governance services

Discover watsonx.governance