
Why avoiding bias is critical to AI success

Chapter 03
8 min read

As AI becomes more integral to human-run businesses, those who train and manage AI algorithms need to consider how human biases could influence that training and cause unintended discriminatory outcomes. If the people building an AI model are unaware that they are encoding human bias into its algorithms, the model's outputs can themselves be biased, deepening existing inequities. It can be difficult to tell how widespread these biases are in the technology we use every day, however. While mitigating bias in AI models is clearly a challenge, it's essential that businesses do so to reduce the likelihood of negative results.1

Machine learning models are being used more and more to inform high-stakes decisions. But unless the training that sets this process in motion is carefully monitored for bias, the AI algorithm could unintentionally place certain groups at a systemic advantage or disadvantage.2 Bias in training data itself can yield models with unwanted bias. Incomplete or inaccurate data sets (for example, over- or under-sampling within groups) can lead to model bias, as can a failure to account for nuances based on cultural, racial or gender considerations.
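The "systemic advantage or disadvantage" described above can be quantified. A minimal, pure-Python sketch (the groups, loan decisions, and the four-fifths guideline threshold are illustrative assumptions, not part of any standard API) of measuring whether a model's positive outcomes are distributed unevenly across groups:

```python
def selection_rate(outcomes, group):
    """Fraction of members of `group` who received a positive outcome."""
    members = [o for o in outcomes if o["group"] == group]
    return sum(o["approved"] for o in members) / len(members)

def disparate_impact(outcomes, unprivileged, privileged):
    """Ratio of selection rates; values far below 1.0 suggest bias."""
    return selection_rate(outcomes, unprivileged) / selection_rate(outcomes, privileged)

# Hypothetical loan decisions produced by a trained model.
decisions = (
    [{"group": "A", "approved": 1}] * 80 + [{"group": "A", "approved": 0}] * 20 +
    [{"group": "B", "approved": 1}] * 40 + [{"group": "B", "approved": 0}] * 60
)

ratio = disparate_impact(decisions, unprivileged="B", privileged="A")
print(round(ratio, 2))  # 0.5 -- well below the common "four-fifths" guideline
```

Here group B's 40% approval rate is half of group A's 80%, so the ratio falls well short of the commonly cited 0.8 threshold, flagging a disparity worth investigating.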

“AI can be used for social good. But it can also be used for other types of social impact in which one man's good is another man's evil. We must remain aware of that.”3
James Hendler
Director of the Institute for Data Exploration and Applications, Rensselaer Polytechnic Institute

So how can businesses ensure that they’re not building human bias directly into AI algorithms? Organizations need to have bias mitigation processes in place that allow for ongoing review and oversight of their AI systems. They need to continuously monitor and manage their models based on data.

Five ways to avoid bias

1. Choose the correct learning model. There are two types of learning model: supervised and unsupervised. In a supervised model, stakeholders select the training data, so it's critical that this group is equitably composed and that its members have received unconscious-bias training. An unsupervised model depends entirely on the AI itself to detect bias trends, so bias-prevention techniques need to be built into the neural network so that it learns to distinguish what is biased from what is not.

2. Train with representative data. Machine learning is only as good as the data that trains it. Whatever data you feed into your AI must be comprehensive and balanced, reflecting the actual demographics of society.
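One way to check whether training data reflects the population it should represent is to compare group shares in the data against a benchmark. A minimal sketch, where the group names and benchmark shares are hypothetical:

```python
from collections import Counter

def representation_gaps(samples, benchmark):
    """Return each group's share in the data minus its share in the population."""
    counts = Counter(samples)
    total = len(samples)
    return {g: counts[g] / total - share for g, share in benchmark.items()}

# Hypothetical training-set group labels and assumed census-style shares.
training_groups = ["A"] * 700 + ["B"] * 200 + ["C"] * 100
population = {"A": 0.5, "B": 0.3, "C": 0.2}

gaps = representation_gaps(training_groups, population)
# Group A is over-sampled (+0.20); groups B and C are under-sampled (-0.10 each)
```

Large positive gaps signal over-sampling and large negative gaps signal under-sampling, both of which the section above identifies as sources of model bias.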

3. Guard against bias at every processing stage. Businesses need to be aware of bias at each step when processing data. Whether during pre-processing, in-processing or post-processing, bias can creep in at any point and be fed into the AI. Any data that could introduce bias needs to be excluded. It's also important to ensure there is no human bias in how the data outputs created by the AI are interpreted.

4. Monitor continuously. No model should ever be considered “trained” or “finished.” Ongoing monitoring and testing with real-world data can help detect and correct bias before it causes harm.
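Continuous monitoring can be as simple as recomputing a fairness metric on each new batch of real-world decisions and flagging any batch that drifts past a tolerance. A minimal sketch, where the batches, the two groups, and the 0.1 tolerance are illustrative assumptions:

```python
def parity_difference(batch):
    """Difference in positive-outcome rates between groups 'A' and 'B'."""
    def rate(g):
        return (sum(y for grp, y in batch if grp == g)
                / sum(1 for grp, y in batch if grp == g))
    return rate("A") - rate("B")

def flag_biased_batches(batches, tolerance=0.1):
    """Indices of batches whose group disparity exceeds the tolerance."""
    return [i for i, b in enumerate(batches) if abs(parity_difference(b)) > tolerance]

# Hypothetical decision batches as (group, positive_outcome) pairs.
batches = [
    [("A", 1), ("A", 0), ("B", 1), ("B", 0)],            # parity difference: 0.0
    [("A", 1), ("A", 1), ("A", 1), ("B", 1), ("B", 0)],  # parity difference: 0.5
]

print(flag_biased_batches(batches))  # [1]
```

In production this check would run on a schedule against live model outputs, so a disparity is surfaced as soon as it appears rather than after harm is done.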

5. Scrutinize your infrastructure. Aside from human and data influences, the infrastructure itself can introduce inaccuracies that lead to bias. For example, if you're collecting data from mechanical sensors, malfunctioning sensors can introduce bias into the data they capture. This kind of bias can be difficult to detect and requires investment in up-to-date digital and technological infrastructure.4

Bias-avoidance progress is being made on the AI research front. One method that researchers are exploring to address inherent bias is inverse reinforcement learning, in which the AI observes human behavior in various situations to learn what people value. This teaches the system to make decisions consistent with fundamental ethical principles. From a tactical, business-level perspective, even things as simple as employing a diverse staff of programmers can make a huge difference in mitigating training bias. There is always room for improvement.

It’s also critical that businesses create, implement, and operationalize AI ethics principles, and ensure proper governance is in place. This end-to-end visibility throughout the AI lifecycle will help identify when bias could occur and allow for course-correction.

An open-source library such as the AI Fairness 360 toolkit is a helpful resource for detecting and mitigating bias in machine learning models. It offers a comprehensive set of fairness metrics for datasets and models, explanations of those metrics, and algorithms that can mitigate bias in datasets.
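One pre-processing mitigation the AI Fairness 360 toolkit provides is reweighing, which assigns each record a weight of P(group) x P(label) / P(group, label) so that under-represented group/label combinations count more heavily during training. The sketch below hand-rolls that underlying idea in pure Python on hypothetical data; it is not the toolkit's own API:

```python
from collections import Counter

def reweighing(records):
    """Weight each (group, label) pair by P(group) * P(label) / P(group, label)."""
    n = len(records)
    group_counts = Counter(g for g, _ in records)
    label_counts = Counter(y for _, y in records)
    joint_counts = Counter(records)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

# Hypothetical data: group B rarely gets positive labels, group A rarely negative.
data = [("A", 1)] * 40 + [("A", 0)] * 10 + [("B", 1)] * 10 + [("B", 0)] * 40
weights = reweighing(data)
# The under-represented combinations (A, 0) and (B, 1) receive the largest
# weights (2.5); the over-represented ones (A, 1) and (B, 0) receive 0.625.
```

After reweighing, group and label are statistically independent in the weighted data, which removes the spurious association a model would otherwise learn.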

Bias can creep into models from many sources, from incomplete or homogeneous datasets to unintentional human bias on the part of those overseeing model development.

2 AI Fairness 360 - Resources, IBM Research Trusted AI.
3 Building Trust in AI, IBM, October 2016.