Trust in AI is a hot topic in the headlines. Can we trust the decisions AI models recommend? Can we trust that AI will not perpetuate bias? Can we explain the inner workings of AI to auditors and key stakeholders in a business? For multinational professional services firms like KPMG, trust in AI is critical to adoption of the technology.

In a recent leadership conversation, Kelly Combs, Director of Emerging Technology Risk Services at KPMG, and Srividya Sridharan, Vice President of Research at Forrester, discuss how establishing trust in AI strengthens an organization's competitive edge in the marketplace.

Learn how explainable and trusted AI can maximize competitive advantage.



Watch a summary (0:31) of the leadership conversation with Forrester Research and KPMG above, and watch the full conversation (23:34) here.

“Only about 5 percent of KPMG clients are heavily adopting AI at this point,” says Kelly Combs. “Why is that? And what’s holding them back?”

According to Combs, a lack of transparency and trust in how AI functions accounts for the slower adoption. The panel discussion covers a number of topics that address trust in AI:

  • The AI black box – a lack of understanding about what AI does creates mistrust.
  • AI integrity – detecting and preventing AI bias and drift, which can undermine trust that models will perform as intended or arrive at fair conclusions.
  • AI explainability – AI models and processes must be explainable to regulators, lines of business and external stakeholders.
  • AI and data and information architecture – whether businesses are structurally ready to realize the full benefits of AI.

The AI black box

“To many who aren’t data scientists,” says Combs, “AI still is a black box and that scares us.” Many are reluctant to adopt AI because they don’t fully understand it. And the fear is real. In the highly regulated financial industry, institutions face serious legal and financial consequences if AI models are incorrect or misinterpreted.

How can we foster trust in models and AI systems? It starts with data integrity.

AI integrity: preventing drift and bias

One of the most important concerns for organizations using AI is ensuring data integrity. To understand how and why AI systems reach particular conclusions, enterprises need to know the origin and quality of the data and how AI models are trained. Fairness in AI begins with AI models built, trained and monitored to prevent bias and drift.

  • Bias in AI occurs when models give preferential treatment to privileged groups. When models introduce bias into loan or credit approvals or other sensitive financial transactions, customers may be harmed and banks may be exposed to substantial legal and financial penalties.
  • Drift occurs when AI models evaluate data that differs from, or has changed since, the data on which they were trained, degrading their accuracy over time. A minimal drift check is sketched after this list. Learn more about AI Model Drift.
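
To make the idea of drift concrete, here is a minimal sketch of one common approach: comparing the distribution of a single input feature at training time with the distribution the model sees in production. The feature name, the synthetic data and the 0.05 significance threshold are illustrative assumptions, not a description of how Watson OpenScale works.

```python
# Minimal drift check: has the distribution of one input feature shifted
# between training time and production? (Illustrative sketch only.)
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Stand-ins for real data: applicant income at training time vs. today.
training_income = rng.normal(loc=55_000, scale=12_000, size=5_000)
production_income = rng.normal(loc=61_000, scale=15_000, size=5_000)

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution has drifted away from the training distribution.
statistic, p_value = ks_2samp(training_income, production_income)

if p_value < 0.05:
    print(f"Possible drift (KS statistic={statistic:.3f}, p-value={p_value:.4f})")
    print("Consider revalidating or retraining the model on recent data.")
else:
    print("No significant drift detected for this feature.")
```

In practice, checks like this run continuously across many features and model outputs, which is the kind of monitoring that trusted-AI tooling automates.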

“Financial institutions are using AI and machine learning to determine the creditworthiness of loan applicants, looking at attributes such as FICO scores, age and income,” says Combs. “It is the responsibility of the financial institutions to explain why and how a person was declined for a loan.”

Bias is not always easy to detect and can creep into models even when data scientists omit attributes likely to discriminate, such as gender, age, ethnicity or race. An AI model used in hiring that prefers candidates who own a car may inadvertently discriminate against people of color, who have lower rates of automobile ownership per capita. Similarly, an AI model trained primarily on the resumes of men may inadvertently perpetuate gender bias.
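
One simple way to surface this kind of indirect bias is to compare model outcomes across groups, even when the protected attribute was never used as an input. The sketch below computes a disparate impact ratio on hypothetical loan decisions; the group labels, the toy data and the 0.8 threshold (inspired by the commonly cited four-fifths rule) are assumptions for illustration only.

```python
# Illustrative disparate impact check on model approval decisions.
# Toy data: 1 = approved, 0 = declined. Groups A and B are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: approval rate of the unprivileged group (B)
# divided by the approval rate of the privileged group (A).
ratio = approval_rates["B"] / approval_rates["A"]

print(approval_rates)
print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")

if ratio < 0.8:
    print("Ratio below 0.8: the model may be treating group B unfavorably.")
```

A check like this does not explain why a disparity exists, but it flags where data scientists and business stakeholders need to look more closely.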

Forrester moderator Srividya Sridharan points out that “technologies and tools are actually giving us humans a great way to uncover the biases that may already have existed.” Among these solutions, IBM Watson OpenScale™, now a part of Watson Studio on IBM Cloud Pak® for Data, helps weed out bias and drift in models.

“If I need to articulate how and why a system came to its determination and what were the data attributes it used,” says Kelly Combs, “IBM Watson OpenScale™ on IBM Cloud Pak for Data is one of the only technologies in the marketplace that is helping solve and give transparency in business terms to our clients.”

Learn more about how IBM Cloud Pak for Data helps ensure that AI is trustworthy, transparent and compliant.

Explainability: Bridging the gap in AI understanding

Solving data integrity issues is just one part of establishing trust and winning buy-in from key stakeholders, buy-in that is difficult to achieve without a clear and thorough explanation and understanding of how the AI system works.

Data science teams create groundbreaking projects, but business leaders reject the models and their conclusions. Why? Often, those leaders were not involved in the training process, don't fully understand the AI and therefore lack trust in the system.

Creating trust in AI requires uniting minds across the organization, from the C-suite to legal and compliance, security, and lines of business. Stakeholders must understand from the inside out what AI can do for their organizations so they can structure their operations and processes to accommodate AI.

Data and information architecture and scaling AI

A recent report by the MIT Sloan Management Review and The Boston Consulting Group found that 81 percent of business leaders do not understand the data and infrastructure required for AI.

Experts agree: Everyone has a data architecture, but is it the right data architecture to support machine learning? Most companies find that it isn’t.

IBM Cloud Pak for Data helps create a trusted environment that inspires confidence in the integrity of data, on a platform ready to support machine learning and deep learning workloads.

Learn more about how you can automate the AI lifecycle for trust and value in this webinar.

Dive into how you can manage AI with trust and confidence in this solution brief.
