Trust comes through understanding. It is crucial to understand how Artificial Intelligence (AI)-led decisions are made and which factors determined them. While transparency offers a view into the AI technology and algorithms in use, simple and straightforward explanations are needed to illustrate how the AI is being used.

People are entitled to understand how AI arrived at a conclusion, especially when those conclusions impact decisions about their employability, their creditworthiness, or their potential. The explanations provided need to be easy to understand.

The IBM Academy of Technology (AoT) is a wonderful collective of passionate technologists who work on initiatives that help guide the company in new directions. This AoT initiative focused on taking a human-centric approach to explainability, because there has been very little research on how a user may react when they find out that a model is “probably but not totally correct”. Additionally, we wanted to explore all the facets that must be considered when rendering an AI model explainable: not just the tooling, but also the user experience that ultimately empowers the end user.

Three major themes came out of this effort that we wish to share through this blog post:

1. Prioritize empowering end users.

Be hyper-focused on empowering end users, giving them the knowledge and autonomy they need to make better decisions about whether to trust an AI model. Applying this lens opened up our conversations across stakeholders.

2. AI Ethics is a Team Effort like no other.
The tech is easy; changing human behavior is hard. Getting people to understand the value of other skill sets for this exercise was crucial.

3. Designers, Behavioral Scientists and Psychologists must be at the table when curating Explainable AI.
Too often, we host conversations about AI that reinforce the false notion that it is all too easy: “just massage the data until you get the answer you want”. It is critical to invite other stakeholders to the table who offer a more holistic perspective and work across silos.

The Titanic dataset

We decided to start our initiative with the original Titanic dataset, which is in the public domain. It is a semi-fictitious dataset describing the survival status of individual passengers on the Titanic. Imagine an interface, integrated with an AI model, that could tell passengers their likelihood of receiving a life raft on the sinking ship. We deliberately chose this dataset because of its clear and easily understandable bias.

The Titanic dataset also supports an industry-agnostic model that is relatively simple to understand and captures the imagination of its consumers. It was paramount for the initiative to provide a model that did not require deep domain knowledge, as complex actuarial models do, in order to appeal to a broad audience and to engage as many people as possible, both technical and non-technical, in AI Explainability.

All organizations have biased data. The questions are whether the bias can be identified, what effect that bias may have, and what the organization is going to do about it. We selected this dataset because it is one of the most common datasets people use when learning data science, and it is admittedly biased. We believe it is important to choose a dataset that is this obviously classist in order to demonstrate how the bias is identified by the AI model and what effect the bias has on passengers. We are demonstrating how organizations can adopt explainable and transparent principles, rendering the bias associated with datasets transparent to end users so that they have the autonomy to make decisions.
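As a minimal, illustrative sketch (the post does not show the team's actual code), a few lines of pandas are enough to make the dataset's class and gender bias visible before any model is trained; the file name and column names assume the standard public Titanic CSV layout:

```python
# Sketch: surface the class and gender bias in the Titanic data before
# any model is trained. "titanic.csv" and the column names (Survived,
# Pclass, Sex) assume the standard public version of the dataset.
import pandas as pd

df = pd.read_csv("titanic.csv")  # assumed local copy of the public dataset

# Survival rate broken down by passenger class and sex makes the
# historical bias visible at a glance.
rates = (
    df.groupby(["Pclass", "Sex"])["Survived"]
      .agg(["mean", "count"])
      .rename(columns={"mean": "survival_rate", "count": "passengers"})
)
print(rates)
```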

Two Workstreams: Explainer and Design

We started our initiative by forming two workstreams. The first, the Explainer workstream, was made up of data scientists who dove deep into the data; the second, the Design workstream, focused on the user experience of the various personas.

The two workstreams worked closely together, with regular interlocks and playbacks of findings, which gave each workstream a means to understand the goals and constraints of the other. For example, the design team could understand the art of the possible for explanations, and the explainer team could comprehend the outputs required by each persona under evaluation.

Explainer Workstream

Building models on the Titanic dataset was the starting point for the explainer workstream. A common Jupyter notebook was built with two initial models that allowed the workstream to test and analyze which features to include, and to explore the dataset fully. The notebook provided the focal point for subsequent learning and experimentation with simple explainers (algorithms for capturing explanations) before progressing to incorporate IBM’s AI Explainability 360 (AIX360) toolkit.
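To give a flavor of what a "simple explainer" step in such a notebook could look like, here is a hedged sketch using scikit-learn only: a baseline logistic regression on a handful of Titanic features, followed by permutation importance as a simple global explainer. The feature list and preprocessing are assumptions for illustration, not the team's actual notebook:

```python
# Sketch of a "simple explainer" pass: train a baseline model on a few
# Titanic features and rank them by permutation importance.
# Feature choices and preprocessing here are illustrative assumptions.
import pandas as pd
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("titanic.csv")
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
features = ["Pclass", "Sex", "Age", "Fare"]
X = df[features].fillna(df[features].median())
y = df["Survived"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```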

Once AIX360 was included, each of its different types of explainers could be used on the initial models and their feature sets. This enabled the team to understand which explainers were appropriate for which types of models, and the relative strengths of each against the others.
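The post does not say which explainers the team ultimately ran, so as one hedged illustration of the global-versus-local contrast, the sketch below uses the standalone lime package (rather than any specific AIX360 call) to explain a single passenger's prediction, continuing from the model trained in the previous snippet:

```python
# Sketch of a local, per-passenger explanation using the standalone `lime`
# package; continues from the model and data in the previous snippet.
# Whether the initiative used this exact explainer is an assumption.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=features,
    class_names=["did not survive", "survived"],
    mode="classification",
)

# Explain a single passenger's predicted outcome: which features pushed
# the probability up or down for this one row?
exp = explainer.explain_instance(X_test.values[0], model.predict_proba, num_features=4)
print(exp.as_list())
```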

Design Workstream

The design workstream started by asking a set of framing questions whilst keeping the relevant personas in mind.

Progressive Disclosure of Information: Explaining AI through Multiple Layers of Information

The “onion” idea incorporates progressive disclosure of the many layers of information behind the AI’s outcome, so any user can dive as deep as needed into the explainability of the model.

Displaying information in this way avoids incorporating yet more bias, namely the assumption that a specific user of the model is not interested in, or savvy enough to understand, the model inside out; that is, to reach the inner layers of the onion. This technique in information design, which we adopt to explain how the model comes to its final decision, must also include data lineage and the confidence levels associated with the model.
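A hypothetical sketch of how such a layered “onion” payload could be structured is shown below; the layer names, fields, and values are illustrative assumptions, not artifacts from the initiative:

```python
# Hypothetical sketch of an "onion" payload for progressive disclosure:
# each layer adds detail, and the UI reveals deeper layers only on request.
# All field names and values here are illustrative, not from the initiative.
explanation_layers = {
    "layer_1_outcome": {
        "prediction": "unlikely to receive a life raft",
        "confidence": 0.71,
    },
    "layer_2_key_factors": [
        {"feature": "Pclass", "value": 3, "contribution": -0.32},
        {"feature": "Sex", "value": "male", "contribution": -0.25},
    ],
    "layer_3_model_details": {
        "model_type": "logistic regression",
        "training_data": "public Titanic passenger manifest",
        "known_bias": "survival strongly skewed by class and gender",
    },
    "layer_4_data_lineage": {
        "source": "titanic.csv",
        "preprocessing": ["median imputation of Age", "Sex encoded 0/1"],
    },
}

def render(depth: int) -> dict:
    """Return only the layers the user has chosen to peel back."""
    keys = list(explanation_layers)[:depth]
    return {k: explanation_layers[k] for k in keys}

print(render(depth=2))  # a casual user sees the outcome and key factors only
```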

The design workstream came up with journey maps and interface drawings detailing how to offer the most empowering experience for an end user, one that gives them enough information to make informed decisions about the AI. The insights from this work are now being used to train practitioners within IBM.

Conclusion

After working for months on this effort and confronting some of the biggest challenges that data scientists and designers alike face in making AI ethical, we came to the conclusion that there is no easy way to implement explainability and, therefore, trustworthy AI systems. Even though we are dealing with powerful technologies, this challenge is far more human than technological. AI Explainability requires a holistic approach that must also include humanistic disciplines, such as Design, Psychology, Cognitive Science, Communication, and Strategy. Ethics starts with the effort to bring together uncommon stakeholders to drive a significant cultural shift in how the organization adopts AI.

As we explored the personas, it became evident that different personas require different explanations, or different views of the explanations, of the AI model. They may also require the ability to look deeper into the explanations depending on their personal outcome (“peeling back the onion”). A data scientist or industry regulator will require a global and more detailed explanation than a service consumer who just wants to understand their own outcome.

There are also many different explainers in our AI Explainability toolkit from which to choose, each of which provides differing results based on its focus and strengths. The appropriateness of an explainer is determined by a balance between performance, scalability, Subject Matter Expert (SME) availability, the complexity of the model being assessed, and the persona being served. Indeed, many projects may have to employ multiple explainers to provide the insights into the model required by all personas.

Whilst the explainers provide a means to surface AI Explainability, they do not automatically make the AI model trustworthy. They provide insight into how the model reaches its conclusions based on the chosen feature sets and the data used to train that model. The model itself may be inherently flawed, with (at least) implicit or explicit bias from the data scientist, the SME, or the input data contributing to weaknesses in the model.

In conclusion, earning trust in AI is not a technological challenge; it is a socio-technological challenge. The truth is that we do not know how an end user will react to an imperfect AI model that may be making a decision that affects that person’s life. To earn people’s trust, considerable effort must be put into thinking about the experience an end user has, so as to empower them with knowledge. This knowledge may be used in myriad ways, including buying into the results of the model, challenging the model, offering new data to make it more accurate and fair, or deciding to opt out altogether. Through this initiative, we have demonstrated that there is a desperate need for designers, humanists, and I/O psychologists to work side by side with data scientists. This exercise is neither cheap nor easy: it requires multiple categories of skill sets. It requires a holistic approach and a common effort across diverse stakeholders, but especially a change in mindset about how to approach AI when building products and systems that impact our lives.

We have close to 30 amazing contributors from all over the world, spanning many skill sets, working together on this project. If you have further questions about this initiative, please reach out to:

Phaedra Boinodiris, Trustworthy AI Practice Leader (pboinodi@us.ibm.com); Andy Barnes, Executive IT Architect (andy_barnes@uk.ibm.com); or Kim Holmes, Senior User Experience Designer (holmesk@us.ibm.com).
