Data bias risk for AI

Risks associated with input
Training and tuning phase
Fairness
Amplified by generative AI

Description

Historical and societal biases that are present in the data used to train and fine-tune the model.

Why is data bias a concern for foundation models?

Training an AI system on data with bias, such as historical or societal bias, can lead to biased or skewed outputs that can unfairly represent or otherwise discriminate against certain groups or individuals.

Example

Healthcare Bias

According to the research article on reinforcing disparities in medicine, using data and AI applications to transform how people receive healthcare is only as strong as the data behind the effort. For example, training data with poor minority representation, or data that reflects care that is already unequal, can lead to increased health inequalities.
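As a rough illustration of the representation gap described above, the following minimal Python sketch compares each group's share of a training set against a reference population share. The field name, group labels, and reference proportions are hypothetical placeholders, not values from the cited study; a real fairness audit would typically use a dedicated toolkit rather than a check like this.

```python
# Illustrative sketch only: a simple representation check on tabular training data.
# The "group" field and reference shares below are hypothetical placeholders.
from collections import Counter

def representation_gap(records, group_key, reference_shares):
    """Compare each group's share of the training data with a reference share
    (for example, its share of the population the model is expected to serve)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, ref_share in reference_shares.items():
        data_share = counts.get(group, 0) / total if total else 0.0
        gaps[group] = data_share - ref_share
    return gaps

# Toy training records with a hypothetical demographic field.
training_records = [
    {"group": "group_a", "outcome": 1},
    {"group": "group_a", "outcome": 0},
    {"group": "group_a", "outcome": 1},
    {"group": "group_b", "outcome": 0},
]

# Hypothetical population shares for the groups the model should serve equally well.
reference = {"group_a": 0.6, "group_b": 0.4}

for group, gap in representation_gap(training_records, "group", reference).items():
    flag = "under-represented" if gap < -0.05 else "ok"
    print(f"{group}: gap vs. reference = {gap:+.2f} ({flag})")
```

In this toy data, group_b makes up 25% of the records against a 40% reference share, so it would be flagged as under-represented, the kind of imbalance that can translate into unequal model performance.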

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation models' risks. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.