Big words often get bandied about to express what Trusted AI should be. Words like Transparency, Fairness, Explainability… but when it comes down to concrete features and functions, there are a lot of question marks.

The World Economic Forum’s AI Ethics board toolkit describes the urgent need organizations have to render standards actionable for practitioners. For this IBM Academy of Technology initiative, we provided a way for clients and practitioners to have more efficient conversations about the requirements for AI models that best align with client needs, model risk, regulations, and organizational values.

These conversations are more efficient because of both a prescribed maturity model structure and companion architectural assets that support the alignment discussions.

This body of work takes IBM’s own 5 Trustworthy AI Pillars and curates a set of Trustworthy AI functional and non-functional requirements at four Service Level Agreement (SLA) levels.


Which level of Service do you want to provide?

Given the AI model use case and your organization’s principles, which features and functions align to your needs, the model risk, regulations, and organization values?


Note: This body of work does not assess or judge AI model use cases. Having an SLA 0 does not necessarily mean that a bad decision is being made. It could be that the feature and function for that pillar is not applicable for the use case (e.g., an AI model that makes predictions about ants may not require data privacy features). The levels of feature and function are highly dependent on the nature of the use case, its inherent risk, and the values of the organization.

IBM’s 5 pillars of Trustworthy AI are:


  1. Fairness
  2. Explainability
  3. Transparency
  4. Adversarial Robustness
  5. Data Privacy

We are using these 5 pillars as a starting point to demonstrate how this might work, but it can be adopted by any organization and adapted for their pillars.
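To make this concrete, here is a minimal sketch, in Python, of how an organization might record a per-pillar SLA profile for a single use case. The 0-3 scale, the class name, and the field names are our own illustrative assumptions, not a published schema:

```python
# Illustrative sketch only: the 0-3 SLA scale and all names here are
# assumptions, not a published schema.
from dataclasses import dataclass

@dataclass
class TrustworthyAIProfile:
    use_case: str
    fairness: int               # 0 = not applicable / none, 3 = most rigorous
    explainability: int
    transparency: int
    adversarial_robustness: int
    data_privacy: int

# An SLA of 0 is not automatically a bad decision: for a model that makes
# predictions about ants, data privacy may simply not apply.
ant_model = TrustworthyAIProfile(
    use_case="Ant behavior prediction",
    fairness=0,
    explainability=1,
    transparency=1,
    adversarial_robustness=1,
    data_privacy=0,
)
```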

High-level findings that define the Service Level Agreements across each pillar

(For the purposes of this blog, we are offering high-level extrapolations of rather detailed functional and non-functional requirements for just three of the five pillars.)

Pillar 1: Transparency

Our main approach for the Transparency pillar was to consider the nature of AI model audits. It was not the ONLY consideration, but certainly one of the key themes.

When an organization is interested in auditing its model, there are various permutations of what could occur. Does it want to audit just once before deployment and never touch the model again? Does it want to audit annually for accuracy but ignore bias tests? This table shows a high-level extrapolation of the varying levels of strength an organization can take toward an audit strategy.
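Purely as an illustration of those permutations (and not the report’s actual requirement tables), an audit strategy at increasing SLA strength might be encoded like this; the level definitions, cadences, and checks below are our own assumptions:

```python
# Hypothetical mapping of SLA levels to audit obligations; the cadences
# and checks are illustrative assumptions, not the report's requirements.
AUDIT_SLA_LEVELS = {
    0: {"cadence": "none", "checks": []},
    1: {"cadence": "once, pre-deployment", "checks": ["accuracy"]},
    2: {"cadence": "annual", "checks": ["accuracy", "drift"]},
    3: {"cadence": "continuous", "checks": ["accuracy", "drift", "bias"],
        "auditor": "independent third party"},
}

def audit_plan(sla_level: int) -> dict:
    """Return the audit obligations implied by a chosen SLA level."""
    return AUDIT_SLA_LEVELS[sla_level]

print(audit_plan(2))  # {'cadence': 'annual', 'checks': ['accuracy', 'drift']}
```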

Pillar 2: Explainability

Our main approach was to characterize the features that THREE personas would experience for an explained model: a Subject, a Risk Assessor, and a Model Owner. The table below provides a high-level summary of features for SUBJECTS.

This table articulates that when a company says its AI model is explainable, it could actually have, at SLA level 1, language that looks something like this:

After each LAI Match refresh, we also conduct regression analysis to examine the data fluctuation from the previous refresh to the current one. The assumption is, Subject LAI category data should not fluctuate a lot from refresh to refresh, and if it does, then we need a reasonable explanation for that since a large data volatility may have implications to downstream applications.

Or… even worse, something like this:

Reason Code: A4312.

Not particularly empowering to an end user, right?
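By contrast, even a thin translation layer can turn an opaque reason code into language a Subject can act on. The sketch below is hypothetical: the code “A4312” is taken from the example above, and the explanation text is invented for illustration:

```python
# Hypothetical sketch: mapping a model's reason code to a subject-facing
# explanation. The code and wording are invented for illustration.
REASON_CODE_TEXT = {
    "A4312": (
        "Your application was declined primarily because the reported "
        "income was below the threshold for the requested amount. "
        "Providing additional income documentation may change the outcome."
    ),
}

def explain_for_subject(reason_code: str) -> str:
    """Return a plain-language explanation, or the raw code as a fallback."""
    return REASON_CODE_TEXT.get(
        reason_code,
        f"Reason Code: {reason_code}",  # the opaque fallback criticized above
    )

print(explain_for_subject("A4312"))
```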

Pillar 3: Fairness

For Fairness, we focused on whether clients want to mitigate bias only in the training data, or more holistically via design thinking, input from historically under-represented groups, ethics boards, and so on.

This means that if an organization just wants to use a tool to mine for bias in the data used to train a model, or in the output data, that organization is operating only at SLA level 1 strength. There are all kinds of processes that organizations can put in place to account for varying worldviews of fairness (not just equality, but also equity) that have nothing to do with tooling. These processes include Design Thinking workshops from the very inception of a model, focus groups, feedback loops, the use of ethics boards, and more. And of course, one can do all of these things and also use a third-party independent validator from outside the company.
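For a sense of what that tooling-only, SLA level 1 approach might look like, here is a minimal sketch that mines a model’s output data for bias using a disparate impact ratio. The column names, sample data, and the four-fifths (0.8) rule of thumb are assumptions for illustration:

```python
# Minimal sketch of SLA level 1 bias mining: compute a disparate impact
# ratio over a model's output data. Column names and the 0.8 threshold
# (the "four-fifths rule") are illustrative assumptions.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     protected: str, reference: str) -> float:
    """Favorable-outcome rate of the protected group divided by that of
    the reference group; values below ~0.8 often flag concern."""
    rate_protected = df.loc[df[group_col] == protected, outcome_col].mean()
    rate_reference = df.loc[df[group_col] == reference, outcome_col].mean()
    return rate_protected / rate_reference

scores = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   0,   0,   1,   1,   1],
})
print(f"Disparate impact ratio: "
      f"{disparate_impact(scores, 'group', 'approved', 'A', 'B'):.2f}")
```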


In summary, by introducing the architectural assets generated by this Academy initiative, we are helping practitioners have better, more granular conversations with clients about what they ultimately want with respect to functionality and best practices for responsibly used AI. The conversations then help inform contracts tied to these proffered Service Level Agreements. Additionally, once this framework is in one’s mind, it is very easy to look at a Factsheet asset and categorize it as SLA 1, 2, or 3 level work. This exercise offers a way of framing how assets and practices can apply more or less rigorous features depending on the situation at hand, which allows us to make our “Trustworthy AI words” actionable.

Download the full report now

Video: Introduction to IBM’s 5 Pillars and 3 Principles for Trustworthy AI

Blog post: Read How the Titanic helped us think about Explainable AI


Authors:

Phaedra Boinodiris, pboinodi@us.ibm.com

Mayan Murray, mayan.murray@ibm.com

Kim Holmes, holmesk@us.ibm.com

Bernard Freund, bfreund@ca.ibm.com
