Previously on the Watson blog’s NLP series, we introduced sentiment analysis, which detects favorable and unfavorable sentiment in natural language. We examined how business solutions use sentiment analysis and how IBM is optimizing data pipelines with Watson Natural Language Understanding (NLU). But if a sentiment analysis model inherits discriminatory bias from its input data, it may propagate that discrimination into its results. As AI adoption accelerates, minimizing bias in AI models is increasingly important, and we all play a role in identifying and mitigating bias so we can use AI in a trusted and positive way. IBM Watson NLU can help with this.
The importance of mitigating bias in sentiment analysis
At IBM, we believe you can trust AI when it is explainable and fair; when you can understand how AI came to a decision and can be confident that the results are accurate and unbiased. Organizations developing and deploying AI have an obligation to put people and their interests at the center of the technology, enforce responsible use, and ensure that its benefits are felt by the many, not just an elite few.
Out of the box, our IBM Watson NLU sentiment analysis feature tells a user whether the sentiment of their data is “positive” or “negative” and presents an associated score. Machine learning models with unaddressed biases do not produce desirable or accurate results, and a biased algorithm can produce results informed by stereotypes. As artificial intelligence continues to automate business processes, it’s crucial to train AI in a neutral, unbiased, and consistent manner.
Watson NLU delivers sentiment analysis insights and more.
Identifying bias in sentiment analysis
Bias can lead to discrimination regarding sexual orientation, age, race, and nationality, among many other issues. This risk is especially high when examining content from unconstrained conversations on social media and the internet.
To examine the harmful impact of bias in sentiment analysis ML models, let’s analyze how bias can be embedded in the language used to describe gender.
Take these two statements, for example:
- The new agent is a woman.
- The new agent is a man.
Depending on how your sentiment model’s neural network was designed and trained, it can perceive one statement as positive and the other as negative, even though the two differ only in the gendered term.
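To make that failure mode concrete, here is a minimal sketch that scores both statements with Watson NLU, assuming the ibm-watson Python SDK; the credentials are placeholders, and an unbiased model should return near-identical scores for the two sentences:

```python
# A minimal sketch, assuming the ibm-watson Python SDK; credentials below
# are placeholders you would replace with your own service instance values.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
nlu.set_service_url("YOUR_SERVICE_URL")  # placeholder instance URL

for text in ["The new agent is a woman.", "The new agent is a man."]:
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions()),
        language="en",
    ).get_result()
    doc = result["sentiment"]["document"]
    print(f"{text!r}: label={doc['label']}, score={doc['score']:+.2f}")
```

If the two scores diverge, the only variable that changed is the gendered word, which is exactly the kind of signal a bias check looks for.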
For example, say your company uses an AI solution in HR to help review prospective new hires. If a sentiment model that never went through a proper bias detection process feeds its outputs into that data pipeline, the results could be detrimental to future business decisions and tarnish the company’s integrity and reputation. Your business could end up discriminating against prospective employees, customers, and clients simply because they fall into a category, such as gender identity, that your AI/ML has tagged as unfavorable.
How to reduce bias from AI
To reduce bias in sentiment analysis, data scientists and subject matter experts (SMEs) can build dictionaries of words that are near-synonyms of a term the model interprets with bias.
For example, a dictionary for the word woman could include concepts like person, lady, girl, and female. These substitute words are referred to as perturbations. After constructing the dictionary, you replace the flagged word with each perturbation and observe whether the sentiment output changes.
If the detected sentiment differs across the perturbations, you have detected bias within your model.
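Here is a sketch of that perturbation check. It reuses the `nlu` client from the earlier example; the dictionary, sentence template, and threshold are illustrative choices, not prescribed values:

```python
# A sketch of perturbation-based bias detection. The dictionary, template,
# and threshold below are illustrative; tune them for your model and domain.
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

PERTURBATIONS = {"woman": ["person", "lady", "girl", "female"]}
TEMPLATE = "The new agent is a {}."
THRESHOLD = 0.1  # how much sentiment drift across perturbations we tolerate

def document_sentiment(text: str) -> float:
    # Wraps the `nlu` client from the earlier sketch; returns a score in [-1, 1].
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions()),
        language="en",
    ).get_result()
    return result["sentiment"]["document"]["score"]

def detect_bias(flagged_word: str) -> bool:
    baseline = document_sentiment(TEMPLATE.format(flagged_word))
    for perturbation in PERTURBATIONS[flagged_word]:
        delta = document_sentiment(TEMPLATE.format(perturbation)) - baseline
        if abs(delta) > THRESHOLD:
            print(f"Possible bias: {flagged_word!r} -> {perturbation!r} "
                  f"shifts sentiment by {delta:+.2f}")
            return True
    return False

detect_bias("woman")
```

The threshold reflects a design choice: small score fluctuations are normal, so you flag bias only when a perturbation moves the sentiment by more than the noise you are willing to accept.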
To see how Natural Language Understanding can detect sentiment in language and text data, try the Watson Natural Language Understanding demo:
- Click Analyze.
- Click Classification.
- View the sentiment scores for entities and keywords.
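If you prefer the API to the demo UI, the same information is available programmatically. This sketch reuses the `nlu` client from the earlier example and enables sentiment on entities and keywords; the input text is illustrative:

```python
# API equivalent of the demo steps above, reusing the `nlu` client from the
# earlier sketch; the input text is illustrative.
from ibm_watson.natural_language_understanding_v1 import (
    Features, SentimentOptions, EntitiesOptions, KeywordsOptions)

result = nlu.analyze(
    text="IBM Watson helps our support agents resolve tickets faster.",
    features=Features(
        sentiment=SentimentOptions(),
        entities=EntitiesOptions(sentiment=True),   # sentiment per entity
        keywords=KeywordsOptions(sentiment=True),   # sentiment per keyword
    ),
).get_result()

for entity in result.get("entities", []):
    print("entity:", entity["text"], entity["sentiment"]["score"])
for keyword in result.get("keywords", []):
    print("keyword:", keyword["text"], keyword["sentiment"]["score"])
```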
Protect your enterprise from bias with IBM Watson NLU
The Watson NLU product team has made strides to identify and mitigate bias by introducing new product features. As of August 2020, users of IBM Watson Natural Language Understanding can use our custom sentiment model feature in Beta (currently English only).
After you train your custom sentiment model and its status is available, you can use the Analyze text method to understand the sentiment of both entities and keywords. You can also create custom models that extend the base English sentiment model to produce results that better reflect the training data you provide.
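As a sketch, here is how a trained custom model could be applied, assuming the beta’s convention of passing the model’s ID through the sentiment feature; the model ID below is a hypothetical placeholder:

```python
# A sketch of using a custom sentiment model (beta, English only); the model
# ID is a hypothetical placeholder for your own trained model's ID.
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions

result = nlu.analyze(
    text="The new agent is a woman.",
    features=Features(sentiment=SentimentOptions(model="YOUR_CUSTOM_MODEL_ID")),
    language="en",
).get_result()
print(result["sentiment"]["document"])
```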
To learn more about how you can create custom sentiment models to reduce bias, read our documentation.
A dedication to trust, transparency, and explainability permeates IBM Watson. As an offering manager on the Natural Language Understanding service, I lead my team in making sure that we are continuously working to address issues of bias, evolving features to help companies detect bias and make their services more inclusive, and ensuring that our customers feel confident implementing the technology into their business solutions.
Want to see what we’ve been working on? Give it a try at no charge. Start building now on the IBM Cloud. Explore apps, AI, analytics, and more.
Get started with Natural Language Understanding now.