Bias in AI: How We Build Fair AI Systems and Less-Biased Humans
Feb 01, 2018

Artificial intelligence (AI) offers enormous potential to transform our businesses, solve some of our toughest problems and inspire the world to a better future. But our AI systems are only as good as the data we put into them. As AI becomes increasingly ubiquitous in all aspects of our lives, ensuring we’re developing and training these systems with data that is fair, interpretable and unbiased is critical.


Bad data can contain implicit racial, gender, or ideological biases. It can be poorly researched, with vague or unsourced origins. For some, the results can be catastrophic: qualified candidates can be passed over for employment, while others can be subjected to unfair treatment in areas such as education or financial lending. In other words, the age-old adage "garbage in, garbage out" still applies to data-driven AI systems.

The solution to reducing bias in AI may be AI itself: artificial intelligence may hold the key to mitigating bias in AI systems – and it offers an opportunity to shed light on the existing biases we hold as humans.

Without a process to guide the responsible development of trustworthy AI, our systems won’t benefit society — in fact, AI systems could exacerbate the negative consequences of unconscious bias. We therefore need to define an ethical framework that guides the development of this technology, roots out bias from our systems, and better aligns them with human values. This is a challenge for everyone in society, and it will require deep collaboration across industries, specialties and backgrounds. At IBM, we are committed to ensuring the responsible advancement and deployment of AI technologies. That includes providing clients and partners the ability to audit and understand how our AI systems arrived at a given decision or recommendation.


New Multi-Disciplinary Conference Dedicated to AI Ethics
Today marks a significant milestone in progressing these conversations. IBM’s Francesca Rossi, AI Ethics Global Leader and Distinguished Research Staff Member at IBM Research AI, will co-chair the inaugural Artificial Intelligence, Ethics and Society (AIES) conference in New Orleans. This multi-disciplinary, multi-stakeholder event is designed to shift the dynamics of the conversation on AI and ethics toward concrete actions that scientists, businesses and society alike can take to ensure this promising technology is ushered into the world responsibly. For the first time, academics, researchers and students across several disciplines and industries will come together to present research, collaborate, and, most importantly, share personal experiences and insights to accelerate our collective understanding of ethical AI imperatives.

Also for the first time, the AIES conference will bring together two leading scientific associations around the theme of AI ethics – the Association for Computing Machinery (through ACM SIGAI, its special interest group on artificial intelligence) and the Association for the Advancement of Artificial Intelligence (AAAI) – to reinforce multi-disciplinary scientific discussion of AI ethics. Over the course of three days, AIES attendees will present and discuss new peer-reviewed research on the ethical implications of artificial intelligence. Of the 165 papers submitted to the conference, 61 will be featured – including five from IBM Research – in sessions designed to ignite conversation and inspire actionable insight.

This conference is vital, because as we increasingly rely on apps and services that use AI, we need to be confident that AI is transparent, interpretable, unbiased, and trustworthy.


The Bias Test: New IBM Research on AI and Bias
Among the five new research papers IBM will present at AIES, Rossi will unveil her most recent work, “Towards Composable Bias Rating of AI Services,” written in collaboration with IBM researcher Biplav Srivastava. In this research, Rossi and Srivastava devised a testing methodology by which deployed AI systems can be evaluated even when the training data is not available. The research proposes an independent, three-level rating system to determine the relative fairness of an AI system: 1) it is not biased, 2) it inherits the bias properties of its data or training, or 3) it has the potential to introduce bias whether or not the data is fair. From that independent evaluation, end users can determine the trustworthiness of each system based on its level of bias.
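To make the rating idea concrete, here is a minimal sketch of how a third party might probe a deployed, black-box service with counterfactual inputs and map the result onto three levels. The probing strategy, the thresholds and every name below are illustrative assumptions, not the procedure from the paper:

```python
# Illustrative sketch only: probe a black-box AI service with input pairs
# that differ solely in a protected attribute, then map the observed
# disagreement rate onto a three-level bias rating. The thresholds and the
# toy service are assumptions for demonstration, not the paper's method.
import random

def rate_service(predict, probes, protected_key, flip):
    """Rate a black-box `predict` function using counterfactual probes."""
    flips = sum(
        predict(x) != predict({**x, protected_key: flip(x[protected_key])})
        for x in probes
    )
    rate = flips / len(probes)
    if rate == 0.0:
        return 1, rate   # level 1: no bias detected
    if rate < 0.2:
        return 2, rate   # level 2: bias consistent with biased data/training
    return 3, rate       # level 3: the service itself introduces bias

# Toy stand-in for a deployed service we can query but not inspect.
def toy_loan_service(applicant):
    score = applicant["income"] / 1000
    if applicant["gender"] == "F":   # a deliberately unfair rule
        score -= 5
    return "approve" if score > 50 else "deny"

random.seed(0)
probes = [{"income": random.randint(30_000, 80_000),
           "gender": random.choice("MF")} for _ in range(1000)]
level, rate = rate_service(toy_loan_service, probes, "gender",
                           flip=lambda g: "M" if g == "F" else "F")
print(f"bias rating: level {level} (counterfactual flip rate {rate:.1%})")
```

In this toy run, the unfair rule changes roughly one decision in ten when the gender field is flipped, so the sketch assigns the middle rating; a real rating service would need a far more careful probing and attribution strategy.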

But guidelines and evaluative testing aren’t the only viable approaches. Last December, IBM Research AI scientists Flavio Calmon, Dennis Wei, Bhanu Vinzamuri, Karthikeyan Natesan Ramamurthy and Kush Varshney developed a methodology to reduce the discrimination that may be present in a training dataset, so that any AI algorithm that later learns from that dataset perpetuates as little inequity as possible. The team’s paper, presented at the Neural Information Processing Systems (NIPS) conference, introduces a probabilistic formulation of data pre-processing for reducing discrimination, and shows that discrimination can be greatly reduced through an appropriately designed data transformation.
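The paper’s formulation finds a randomized transformation of the data that limits discrimination while controlling how much the dataset is distorted. As a much simpler sketch of the same “repair the data before any model learns from it” idea, the toy pre-processor below merely equalizes positive-outcome rates across groups by probabilistic relabeling; all names and numbers are illustrative assumptions, not the paper’s optimization:

```python
# Illustrative sketch only: probabilistically relabel outcomes so that every
# group's positive rate matches the overall rate. Calmon et al. instead solve
# an optimization over randomized transformations with distortion and utility
# constraints; this toy shows only the flavor of pre-processing for fairness.
import random

def preprocess(records, group_key, label_key):
    """Relabel outcomes so each group's positive rate matches the overall
    rate, flipping as few labels as possible in expectation."""
    overall = sum(r[label_key] for r in records) / len(records)
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    repaired = []
    for rows in groups.values():
        rate = sum(r[label_key] for r in rows) / len(rows)
        for r in rows:
            r = dict(r)
            if rate > overall and r[label_key] == 1:
                # demote just enough positives to reach the overall rate
                if random.random() < (rate - overall) / rate:
                    r[label_key] = 0
            elif rate < overall and r[label_key] == 0:
                # promote just enough negatives to reach the overall rate
                if random.random() < (overall - rate) / (1 - rate):
                    r[label_key] = 1
            repaired.append(r)
    return repaired

random.seed(1)
data = ([{"group": "A", "hired": int(random.random() < 0.7)} for _ in range(5000)]
        + [{"group": "B", "hired": int(random.random() < 0.4)} for _ in range(5000)])
fixed = preprocess(data, "group", "hired")
for g in ("A", "B"):
    before = sum(r["hired"] for r in data if r["group"] == g) / 5000
    after = sum(r["hired"] for r in fixed if r["group"] == g) / 5000
    print(f"group {g}: positive rate {before:.2f} -> {after:.2f}")
```

The real method is considerably more careful than random label flipping: it constrains how far any individual record may be distorted and preserves the data’s usefulness for downstream learning.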


Future State: AI Reduces Human Bias
Research and multidisciplinary conversations like those taking place at AIES are crucial to the advancement of fair, trustworthy AI. By progressing new ethical frameworks for AI and thinking critically about the quality of our datasets and how humans perceive and work with AI, we can accelerate the artificial intelligence field in a way that will benefit everyone. IBM believes that artificial intelligence actually holds the key to mitigating bias in AI systems – and offers an unprecedented opportunity to shed light on the existing biases we hold as humans.
