Artificial Intelligence

Making sense of machine learning

Originally published in The Australian.

Artificial intelligence increasingly powers life’s touchpoints, from Spotify playlists to home loan approvals. The benefits are clear, but only if we understand how these systems reach their decisions.

When Nick Kyrgios met Rafael Nadal on a packed Centre Court at Wimbledon 2019, it was always going to be a tempestuous affair. The fiery Australian went into the match with his trademark, but possibly misplaced, swagger, given he had been at a local pub until 11pm the previous night. In typical Kyrgios fashion, there were spats with the chair umpire, cheeky underhand serves and some sublime tennis, including a nail-biting 23-shot rally in the second set that ended with a blistering down-the-line forehand winner that Nadal could only gaze at from the wrong side of the court.

Given the celebrity and combustibility of the players in question, the point was a shoo-in for the highlights reel. Meanwhile, down on sparsely populated Court 12, Elise Mertens pulled off a dazzling combination of baseline drives, lunging half-volleys, a smash and a final drop volley to take an early break point against Barbora Strycova. The rally also made the highlights reel, not because of some keen-eyed television producer but because of an artificial intelligence algorithm.

The footage was clipped almost instantaneously using IBM Watson Media, which analyses footage for player gestures and expressions, crowd noise and even the acoustics of ball on racquet to help select the best highlights. The AI technology was introduced at Wimbledon in 2017, delivering match highlights 15 minutes faster and resulting in 14.4 million views of video content with no human intervention. The software was obviously working, but according to Aleksandra Mojsilovic, IBM Fellow and head of trustworthy AI at IBM Research, the original algorithm’s use of crowd cheers as a determining factor for highlight inclusion meant it often favoured great play on popular courts to the detriment of potentially brilliant tennis on outside courts.

“If you are a famous player, you’re going to have a much bigger audience than anyone else so you will get more cheering. If you go by that standard, then (that player) is going to be basically getting all highlights of the tournament,” she said.

IBM ran the program through bias mitigation software on Watson OpenScale to see if it would even the field, with the software determining that crowd volume should be downplayed.
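To see how such a fix can work in principle, consider a toy scoring function (a sketch only; the feature names, weights and numbers below are invented, not IBM’s model). Down-weighting a single signal such as crowd noise changes which clips rank highest:

```python
# Toy highlight scorer: combines excitement signals into one ranking score.
# All features and weights are hypothetical, for illustration only.
def highlight_score(clip: dict, crowd_weight: float = 1.0) -> float:
    return (
        0.4 * clip["player_gesture"]     # fist pumps, celebrations
        + 0.3 * clip["rally_intensity"]  # shot count, pace of play
        + crowd_weight * 0.3 * clip["crowd_noise"]
    )

centre_court = {"player_gesture": 0.9, "rally_intensity": 0.7, "crowd_noise": 1.0}
court_12 = {"player_gesture": 0.9, "rally_intensity": 0.9, "crowd_noise": 0.2}

# At full weight, the packed court wins (0.87 vs 0.69)...
print(highlight_score(centre_court), highlight_score(court_12))
# ...but once crowd volume is downplayed, play quality decides (0.63 vs 0.64).
print(highlight_score(centre_court, crowd_weight=0.2),
      highlight_score(court_12, crowd_weight=0.2))
```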

“(Now) every player gets their own shot at fame,” Mojsilovic said.

Tennis highlights may seem a trivial use case, but they demonstrate how easily bias can become embedded in a computer model, and how invisible it can be to an outside observer. It’s a critical issue to address, as artificial intelligence plays an increasingly important role in our lives.

“We often think about bias just in terms of predetermined categories like race and gender. But it’s even more important to understand that bias exists everywhere and it can happen in any application and in any situation,” Mojsilovic said. “Handling biases requires us to be very open-minded and ask ‘how is the application going to be used? Who is going to be using it? What kind of decision is it going to be making? And is there potential to harm individuals and communities?’”

Race and gender biases are very much top of mind for citizens, governments and corporations in the wake of #MeToo and Black Lives Matter. The movements are shining a light on some of the structural imbalances in society, and prompting a rethink of whether those imbalances carry through into the technology we use.

In June, IBM declared it would no longer work on artificial intelligence for facial recognition, following concerns that the software was being used for citizen surveillance and racial profiling by law enforcement agencies. Amazon and Microsoft have followed suit. Facial recognition had been criticised for failing to correctly identify people of colour, with one study showing that while white men were correctly identified 99 per cent of the time, black women were misidentified in up to one third of tests.

Regulators in Australia are also looking at the issue of bias in artificial intelligence, with the Human Rights Commission soon to release the findings of its inquiry into AI and human rights. Commissioner Ed Santow believes that while Australians are happy early adopters of technology, their concerns have shifted from privacy to issues like non-discrimination and equality, and the right to a fair trial.

“The utopian vision is the idea that AI is going to improve every aspect of our lives. The dystopian vision might be something like the social credit scheme that we see in China,” he said.

“We need to be more realistic. All forms of technology have always existed in ways that can either help us or harm us. That is as true of the most sophisticated deep neural network as it is of a knife. And so what we need to do is make sure that we have a legal structure in place that makes it as likely as possible that people will be protected.”

One of the issues that will be of particular concern to the Human Rights Commission is the so-called “black box” problem: automated decisions whose basis cannot be scrutinised, whether because the underlying algorithm is kept secret to protect intellectual property or because the model itself is too complex to interpret.

“We’re very concerned about black box decision making using AI and particularly in high stakes areas. So if it’s a decision about whether someone might be given a bank loan or a decision in the criminal justice system, it’s fundamentally important that anyone who might be negatively affected should understand the basis of that decision. That’s not just important to that individual. That’s a really important principle for any liberal democracy that believes in the rule of law. People should be able to understand the basis of momentous decisions that affect them. And I would hope that our courts will uphold that ancient and enduring principle very strongly,” Santow said.

That need for transparency underpins an increasingly important element of the AI universe: explainable AI. As artificial intelligence becomes democratised and ubiquitous, moving into broader decision-making domains and out of the exclusive hands of highly trained data scientists, the need for models whose decisions can be understood becomes more pressing.
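To make the idea concrete, here is a minimal sketch of what an “explanation” can look like. For a simple linear scoring model, each feature’s contribution to a decision can be read off directly (the model and weights below are invented; tools such as LIME and SHAP generalise this idea to complex models):

```python
# Toy linear loan-scoring model: the explanation is simply the
# per-feature contribution to the final score. Weights are invented.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
bias = -1.0

def score_and_explain(applicant: dict) -> tuple[float, dict]:
    """Return the decision score plus each feature's contribution to it."""
    contributions = {f: w * applicant[f] for f, w in weights.items()}
    return bias + sum(contributions.values()), contributions

score, why = score_and_explain({"income": 6.0, "debt": 2.0, "years_employed": 4.0})
print(score)  # 1.6 -> approve if above 0
print(why)    # {'income': 3.0, 'debt': -1.6, 'years_employed': 1.2}
```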

To help companies address the issue, IBM launched Watson OpenScale, a tool that allows developers to check for unwanted biases in datasets and machine learning models, and suggests algorithms to mitigate them. IBM also provides an open-source toolkit, AI Fairness 360.
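As a sketch of what such a check looks like in practice, the open-source AI Fairness 360 toolkit (the aif360 Python package) can measure group fairness in a dataset and reweight it to mitigate bias. The loan data below is invented for illustration:

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Invented toy loan data: sex 1 = male (privileged), 0 = female (unprivileged).
df = pd.DataFrame({
    "income":   [30, 45, 60, 75, 50, 65],
    "sex":      [0,  0,  0,  1,  1,  1],
    "approved": [0,  0,  1,  1,  1,  1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Disparate impact: ratio of favourable-outcome rates (1.0 means parity).
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact:", metric.disparate_impact())  # 0.33 on this toy data

# Reweighing adjusts instance weights so a model trained on them is fairer.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
fair_dataset = rw.fit_transform(dataset)
```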

An example of the process is in approving loan applications using artificial intelligence. Using Watson OpenScale, applications are submitted to the model multiple times with the gender field switched. If the tool detects that identical applications from men are being treated differently from those from women, it alerts the user and recommends retraining the model or re-examining the way the system works.
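A minimal sketch of that gender-swap test, assuming a hypothetical predict function standing in for the loan model (this is illustrative, not the Watson OpenScale API):

```python
from typing import Callable

def flip_gender(application: dict) -> dict:
    """Return a copy of the application with only the gender field swapped."""
    flipped = dict(application)
    flipped["gender"] = "F" if application["gender"] == "M" else "M"
    return flipped

def gender_swap_audit(
    applications: list[dict],
    predict: Callable[[dict], bool],  # the loan-approval model under test
) -> list[dict]:
    """Flag applications whose outcome changes when only gender changes."""
    return [
        app for app in applications
        if predict(app) != predict(flip_gender(app))
    ]

# A deliberately biased toy model that approves only high-income men.
biased_model = lambda app: app["income"] > 50 and app["gender"] == "M"

apps = [{"income": 80, "gender": "F"}, {"income": 80, "gender": "M"}]
print(gender_swap_audit(apps, biased_model))  # both flagged: gender alone flips the outcome
```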

According to Ross Farrelly, IBM’s leader for data science and artificial intelligence, explainability is important not just to catch societal biases such as those around race or gender, but also to improve the likelihood of a valid AI decision being accepted.

“Especially when the recommendations coming up in our system are counterintuitive and go against corporate knowledge or the gut feel of experienced practitioners, then the output from the model will be challenged, and challenged quite vigorously. That’s where explainability really comes in,” he said.

For Farrelly, getting AI right will unlock benefits for companies and consumers alike.

“In my experience, the benefits outweigh the perils by orders of magnitude. People don’t like doing mundane tasks. They don’t like answering the same questions again and again when they’re interacting with an organisation. Ideally, they want to get their hands on those goods and services quickly, painlessly and with great customer service. That’s why I think it’s incumbent on organisations to explain the two-way benefits.”

Business is already seeing the enormous benefits that can be delivered through artificial intelligence. But as the use of AI becomes ubiquitous, more and more stakeholders, both within organisations and in the wider community, will need to understand how decisions are arrived at. Wrong decisions at scale will erode trust in the application of AI, which is why explainability will be a critical component of its widespread adoption.
