Businesses face an increasingly complex, ever-changing global regulatory landscape for AI. The IBM approach to AI ethics balances innovation with responsibility, helping you adopt trusted AI at scale.
Just as important as what AI governance helps organizations achieve is what it helps them avoid. Explore the potential costs of not implementing an AI governance program.
Over the last five years of AI evolution, the IBM AI Ethics Board has helped IBM innovate responsibly by guiding the development and implementation of ethical guidelines for AI.
Members of the IBM AI Ethics Board reflect on their experiences helping to ensure that AI is used responsibly and for the benefit of all of society.
The Data & Trust Alliance Data Provenance Standards are helping IBM accelerate internal data diligence processes.
AI governance matters now more than ever. But how do you get started? Find out in a new guide from the IBM Institute for Business Value.
This recognition validates IBM’s differentiated approach to delivering enterprise-grade foundation models, helping clients accelerate the adoption of generative AI into their business workflows while mitigating foundation model-related risks.
The EU AI Act has ushered in a new era for AI governance. What do you need to know and do to achieve compliance?
Three IBM leaders offer their insights on the significant opportunities and challenges facing new CAIOs in their first 90 days.
Learn how the responsible development and deployment of AI technology can be better for people and the planet.
Good design does not sacrifice transparency in creating a seamless experience.
Properly calibrated, AI can assist humans in making choices more fairly.
As systems are employed to make crucial decisions, AI must be secure and robust.
Transparency reinforces trust, and the best way to promote transparency is through disclosure.
AI systems must prioritize and safeguard consumers’ privacy and data rights.
When ethically designed and responsibly brought to market, generative AI capabilities support unprecedented opportunities to benefit business and society alike.
Human values are at the heart of responsible AI.
IBM and the Data & Trust Alliance offer insights about the need for governance, particularly in the era of generative AI.
A risk- and context-based approach to AI regulation can mitigate potential risks, including those posed by foundation models.
The IBM AI Ethics Board is at the center of IBM’s commitment to trust. Its mission is to guide the responsible development and deployment of AI and emerging technologies across IBM.
Co-chaired by Francesca Rossi and Christina Montgomery, the Board sponsors workstreams that deliver thought leadership, policy advocacy, and education and training on AI ethics to drive responsible innovation and the advancement of AI and emerging technologies. It also assesses use cases that raise potential ethical concerns.
The Board is a critical mechanism by which IBM holds itself and all IBMers accountable to its values and its commitment to the ethical development and deployment of technology.
Learn more about Francesca
Learn more about Christina
IBM advocates for policies that balance innovation with responsibility and trust to help build a better future for all.
IBM's five best practices for including and balancing human oversight, agency and accountability over decisions across the AI lifecycle.
IBM’s recommendations for policymakers to mitigate the harms of deepfakes.
IBM’s recommendations for policymakers to preserve an open innovation ecosystem for AI.
These standards can inform AI auditors and developers about which protected characteristics should be considered in bias audits and how to translate them into the data points required to conduct these assessments.
IBM recommends policymakers consider two distinct categories of data-driven business models and tailor regulatory obligations proportionate to the risk they pose to consumers.
Policymakers should understand how neurotechnologies work, what data they need to function and the privacy risks they pose.
Five priorities to strengthen the adoption of testing, assessment and mitigation strategies to minimize bias in AI systems.
Companies should use a risk-based AI governance policy framework and targeted policies to develop and operate trustworthy AI.
At the Notre Dame-IBM Tech Ethics Lab, industry leaders gathered to discuss the opportunities and challenges of responsible AI in finance.
Co-created by IBM, the Data & Trust Alliance's new Data Provenance Standards offer a first-of-its-kind metadata taxonomy to support transparency about data provenance.
IBM’s Global Leader for Responsible AI Initiatives, Dr. Heather Domin, discusses how regulation, collaboration and skills demand are shaping the AI governance landscape.
Experts from IBM and the University of Notre Dame outline recommendations for getting the best ROI from AI ethics investments.
With input from IBM, Partnership on AI's new report explores safeguards for open foundation models.
Co-authored by IBM, the Data & Trust Alliance's new policy roadmap provides recommendations for balancing AI innovation with AI safety.
At The Futurist Summit, IBM Chief Privacy and Trust Officer Christina Montgomery and Partnership on AI CEO Rebecca Finlay discuss the critical relationship between open innovation and AI safety.
With support from the Notre Dame-IBM Tech Ethics Lab, ten research projects will be undertaken in 2024.
With support from the Notre Dame-IBM Technology Ethics Lab, the Pulitzer Center launches the AI Spotlight Series, a global training initiative.
IBM and Meta launch the AI Alliance in collaboration with over 50 founding members and collaborators globally.
In collaboration with IBM, the World Economic Forum offers three briefing papers to help guide responsible transformation with AI.