Businesses are facing an increasingly complex, ever-changing global regulatory landscape when it comes to AI. The IBM approach to AI ethics balances innovation with responsibility, helping you adopt trusted AI at scale.
Fostering a more ethical future by leveraging technology
Case study: Building trust in AI
Co-created by IBM, the Data & Trust Alliance's new Data Provenance Standards offer a first-of-its-kind metadata taxonomy to support transparency about data provenance.
This recognition validates IBM's differentiated approach to delivering enterprise-grade foundation models, helping clients accelerate the adoption of generative AI into their business workflows while mitigating foundation model-related risks.
The EU AI Act has ushered in a new era for AI governance. What do you need to know and do to achieve compliance?
Three IBM leaders offer their insights on the significant opportunities and challenges facing new CAIOs in their first 90 days.
Learn about strategies and tools that can help mitigate the unique risks posed by foundation models.
Learn how the responsible development and deployment of AI technology can be better for people and the planet.
IBM leaders Christina Montgomery and Joshua New outline three key priorities for policymakers to mitigate the harms of deepfakes.
Good design does not sacrifice transparency in creating a seamless experience.
Properly calibrated, AI can assist humans in making choices more fairly.
As systems are employed to make crucial decisions, AI must be secure and robust.
Transparency reinforces trust, and the best way to promote transparency is through disclosure.
AI systems must prioritize and safeguard consumers’ privacy and data rights.
Human values are at the heart of responsible AI.
IBM and the Data & Trust Alliance offer insights about the need for governance, particularly in the era of generative AI.
A risk- and context-based approach to AI regulation can mitigate potential risks, including those posed by foundation models.
The IBM AI Ethics Board was established as a central, cross-disciplinary body to support a culture of ethical, responsible and trustworthy AI throughout the organization.
Co-chaired by Francesca Rossi and Christina Montgomery, the Board’s mission is to support a centralized governance, review and decision-making process for IBM ethics policies, practices, communications, research, products and services. By infusing our long-standing principles and ethical thinking, the Board is one mechanism by which IBM holds our company and all IBMers accountable to our values.
Learn more about ethical impact in the 2023 IBM Impact Report
Take a look inside IBM's AI ethics governance framework
IBM advocates for policies that balance innovation with responsibility and trust to help build a better future for all.
IBM's five best practices for balancing human oversight, agency and accountability over decisions across the AI lifecycle.
IBM's perspective on the opportunities posed by foundation models as well as their risks and potential mitigations.
Awareness about risks and potential mitigations is a crucial first step toward building and using foundation models responsibly.
White paper outlining seven recommendations for policymakers on the risks of data-driven business models.
Companies should adopt a risk-based AI governance policy framework and targeted policies to develop and operate trustworthy AI.
White paper on the privacy risks of brain-computer interfaces.
Companies that collect, store, manage or process data have an obligation to handle it responsibly, ensuring ownership and privacy, security and trust.
IBM no longer produces facial recognition or analysis software. We believe in a governance framework informed by precision regulation.
Five priorities to strengthen the adoption of testing, assessment and mitigation strategies to minimize bias in AI systems.
A pioneering paper on accountability, compliance and ethics in the age of smart machines.
IBM's point of view on protecting at-risk groups in AI bias auditing.
IBM is proud to contribute to diverse, global efforts to advance responsible AI through partnerships, alliances and affiliations.
Experts from IBM and University of Notre Dame outline recommendations for getting the best ROI from AI ethics investments.
With input from IBM, Partnership on AI's new report explores safeguards for open foundation models.
Co-authored by IBM, the Data & Trust Alliance's new policy roadmap provides recommendations for balancing AI innovation with AI safety.
At The Futurist Summit, IBM Chief Privacy and Trust Officer Christina Montgomery and Partnership on AI CEO Rebecca Finlay discuss the critical relationship between open innovation and AI safety.
With support from the Notre Dame-IBM Tech Ethics Lab, ten research projects will be undertaken in 2024.
With support from the Notre Dame-IBM Technology Ethics Lab, the Pulitzer Center launches the AI Spotlight Series, a global training initiative.
IBM and Meta launch the AI Alliance in collaboration with over 50 founding members and collaborators globally.
Hear privacy predictions from key industry leaders, including IBM Chief Privacy and Trust Officer, Christina Montgomery.
In collaboration with IBM, the World Economic Forum offers three briefing papers to help guide responsible transformation with AI.