
Watson and other impossible Grand Challenges


By Maris van Sprang, Ph.D.

IBM regularly defines Grand Challenges: clear goals that are difficult to the point of being “just not impossible”, and that can only be achieved through multi-year cooperation between the hardware, software, services and research divisions. The idea, of course, is that the goal will be achieved. But even if it eventually turns out to be impossible, the results are likely to be valuable anyway, and the “frontier” of the possible will have been pushed towards the impossible.

Just before the end of the millennium, the Grand Challenge was to build a chess computer and beat the world chess champion. A very clear goal, but in the decades before Deep Blue this seemed completely impossible. Brute-force chess programs relied on evaluating every possible move, but they quickly ran into hardware limits, since increasing the search depth by one more move of White and Black increases the calculation effort by a factor of about 1000. The other approach relied on mimicking the way humans play, but those programs were at best amusing, playing surprisingly ridiculous moves when you least expected them. Nevertheless, in 1997 Deep Blue beat the reigning world champion Garry Kasparov.
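
To get a feeling for why brute force hits a wall, here is a back-of-the-envelope sketch in Python. It assumes an average of about 35 legal moves per chess position (a common rule of thumb, not a figure from this article) and simply counts how many positions a naive search of a given depth would have to evaluate.

    # Back-of-the-envelope sketch, not Deep Blue's actual search: assuming roughly
    # 35 legal moves per position, one extra full move (a ply by White plus a ply
    # by Black) multiplies the work by about 35 * 35, i.e. roughly 1000.
    BRANCHING_FACTOR = 35  # assumed average number of legal moves per position

    def positions_to_search(full_moves: int) -> int:
        """Positions a naive brute-force search evaluates at the given depth."""
        return BRANCHING_FACTOR ** (2 * full_moves)  # two plies per full move

    for depth in range(1, 6):
        print(f"{depth} full move(s): ~{positions_to_search(depth):,} positions")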

Kasparov vs Deep Blue

Around 2005, Watson was defined as the next Grand Challenge, to be launched during IBM’s centennial. The challenge was to build a Cognitive Computer: a system that resembles the way humans think, which at a minimum requires the ability to understand natural language (Natural Language Processing, NLP) and a way of handling incomplete, conflicting or partially incorrect input data that is more robust than traditional programming can offer. Machine Learning was the way to go.

These abilities are very broad and complex, and despite being research topics for many years, progress had not been impressive. But just measuring progress was a problem in itself: how do you measure “understanding natural language”? The system may be good in one area, say “travelling”, and bad in another, such as “sports”. This is because the abilities have many “dimensions”. The measurement problem already occurs with two dimensions: which is larger, a 3×4 rectangle or a 2×6 rectangle? Without solving this measurement problem first, it would be impossible to track progress (did this change lead to an improvement?) and it would always leave room for discussion in the end (yes, “travelling” was handled OK, but I am sure “sports” will be handled miserably).
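
The rectangle question makes the point concrete. The short sketch below (illustrative only, not from the original article) computes a few different “size” measures for both rectangles: by area they are equal, while by perimeter or diagonal the 2×6 rectangle wins, so the answer to “which is larger?” depends entirely on the metric you choose, which is exactly the problem with scoring an ability that has many dimensions.

    # Which rectangle is "larger"? It depends on the measure you pick.
    import math

    def measures(width, height):
        return {
            "area": width * height,
            "perimeter": 2 * (width + height),
            "diagonal": round(math.hypot(width, height), 2),
        }

    print(measures(3, 4))  # {'area': 12, 'perimeter': 14, 'diagonal': 5.0}
    print(measures(2, 6))  # {'area': 12, 'perimeter': 16, 'diagonal': 6.32}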

The very clever choice made for Watson was to copy the competition aspect of Deep Blue: have Watson compete with humans. The American quiz show Jeopardy! was an ideal battlefield: participants answer cryptic questions covering many fields of knowledge such as sports, geography, history, arts, and so on. So, just as in chess, skills needed to be built up and mastered in numerous areas, but in the end only a one-dimensional measurement is required to evaluate all skills together: both in chess and in Jeopardy!, you win by scoring more points than your opponents.

IBM Watson playing Jeopardy!

So, what needed to be done for Watson to win Jeopardy!? Of course, a huge pile of information, the equivalent of a million books, needed to be stored and made accessible. Access to the internet was forbidden, just as it was for the human players. Given a question, Watson searched this pile for pieces of information that could be (parts of) answers. But, most importantly, only one answer could be given, and because wrong answers result in a point deduction, Watson needed to be “sufficiently confident” about an answer before giving it. Watson therefore had to rank the candidate answers by confidence level, somehow; just counting hits was not sufficient. To build up that confidence, Watson used its Natural Language Processing skills to find evidence for the candidate answers. In essence, this led to a basic level of language understanding, clearly very useful if you want to answer a question successfully. Lastly, to stand a chance of beating the human champions, this whole process, starting with the search through a million books, had to be finished within a few seconds.
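
The decisive ingredient is that last ranking step: score the candidate answers by confidence and only answer when the confidence is high enough. The sketch below is a toy illustration of that idea, not Watson’s actual DeepQA pipeline; the answer texts, scores and threshold are made up.

    def pick_answer(candidates, threshold=0.7):
        """candidates maps an answer to a confidence score between 0 and 1."""
        best_answer = max(candidates, key=candidates.get)
        best_confidence = candidates[best_answer]
        if best_confidence < threshold:
            return None  # stay silent rather than risk a point deduction
        return best_answer, best_confidence

    # Hypothetical confidence scores after evidence gathering for one clue:
    print(pick_answer({"Toronto": 0.32, "Chicago": 0.81, "New York": 0.05}))
    # ('Chicago', 0.81)
    print(pick_answer({"Toronto": 0.40, "Chicago": 0.45}))
    # None: no candidate is convincing enough, so the system does not buzz in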

Impossible?

In 2011, Watson beat the human champions.

After winning Jeopardy!, sceptics asked: “OK, so IBM has this game-winning computer, so what?” But it has turned out that Cognitive Computers have great potential in many fields. They need to have access to all relevant sources and be “trained” before they can be used. During this training, humans tell the computer which answers are correct and which are wrong, and the computer adapts its software accordingly. Even during normal usage the training continues, as users give feedback on the quality of the answers. So Cognitive Computers keep learning and become better and better.
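
As a toy illustration of such a feedback loop (not Watson’s actual learning mechanism), imagine the system keeping a weight per piece of evidence and nudging those weights whenever a human marks an answer as correct or wrong; its confidence estimates then improve with every round of feedback. All names and numbers below are made up.

    def update_weights(weights, evidence, was_correct, learning_rate=0.1):
        """Perceptron-style update driven by human feedback."""
        direction = 1.0 if was_correct else -1.0
        for feature, strength in evidence.items():
            weights[feature] = weights.get(feature, 0.0) + learning_rate * direction * strength
        return weights

    weights = {}
    # The evidence behind a correct answer gets strengthened...
    weights = update_weights(weights, {"keyword_match": 0.9, "date_agrees": 0.4}, was_correct=True)
    # ...and the evidence behind a wrong answer gets weakened.
    weights = update_weights(weights, {"keyword_match": 0.7}, was_correct=False)
    print(weights)  # approximately {'keyword_match': 0.02, 'date_agrees': 0.04}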

By 2014, Watson had grown into a whole family of commercial applications:

  • in Healthcare to help doctors identify treatment options
  • in Finance to help planners recommend better investments
  • in Retail to help retailers transform customer relationships
  • in Public Sector to help government help its citizens
  • Watson Engagement Advisor to handle customer interactions with natural language skills
  • Watson Discovery Advisor to assist Research by discovering patterns in all kinds of data
  • Watson Ecosystem as a cloud based environment offering Watson capabilities to developers to create “cognitive apps”
  • Watson Foundations as a Big Data and Analytics Platform.


What will be the future of Watson?

Any area where decisions need to be made, where decision quality improves with the use of relevant knowledge, and where important questions have multiple correct answers but only one best answer, is a candidate area where Watson can have breakthrough impact. The data underlying this knowledge can be characterised in Big Data terms of Volume (how many terabytes or even petabytes the area spans), Velocity (how quickly it grows, in gigabytes per second), Variety (is the data structured or unstructured; does it consist of text, pictures, video, sound, etc.) and Veracity (to what extent the data can be trusted). Any of these four dimensions can grow beyond what a single human can grasp, and that puts Watson in a position to become valuable.
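
As a sketch of what such a characterisation could look like in practice, the snippet below profiles a hypothetical data source along the four V’s. The field names and example values are purely illustrative; they are not taken from any Watson documentation.

    from dataclasses import dataclass

    @dataclass
    class DataSourceProfile:
        volume_tb: float            # Volume: total size in terabytes
        velocity_gb_per_s: float    # Velocity: growth rate in gigabytes per second
        variety: list               # Variety: kinds of content in the source
        veracity: float             # Veracity: fraction of the data that can be trusted

    oncology_literature = DataSourceProfile(
        volume_tb=500.0,
        velocity_gb_per_s=0.02,
        variety=["journal articles", "clinical guidelines", "trial reports"],
        veracity=0.9,
    )
    print(oncology_literature)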

A great example is Watson’s impact in medical oncology. Medical information doubles in volume every five years, and physicians practicing in the rapidly changing field of oncology are challenged to remain current with medical literature, research, guidelines and best practices. Keeping up with the medical literature alone could take an individual physician as many as 160 hours a week. Watson can do this, and so ensure that physicians’ decision making remains based on up-to-date information.
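
The back-of-the-envelope arithmetic behind that doubling claim is simple: if medical information doubles every five years, the body of literature after t years is 2^(t/5) times as large as it is today, as the few lines below illustrate.

    def growth_factor(years, doubling_period=5):
        """How many times larger the literature is after the given number of years."""
        return 2 ** (years / doubling_period)

    for years in (5, 10, 20):
        print(f"after {years} years: {growth_factor(years):.0f}x as much medical information")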

IBM’s Watson helping to fight leukemia at MD Anderson

But imagine how much its value would grow if the data sources were expanded: include other related medical fields besides oncology, include other languages next to English, and include other media such as X-ray and MRI images. Perhaps all of this, and more, can be accomplished with a Mega Watson: Watson scaled up a million times.

Impossible?

Perhaps not. Watson has already benefitted from hardware evolution since 2011. Its original size of 10 racks of POWER 750 servers with 2880 cores has shrunk to only three “pizza boxes”. But a recent hardware revolution has the potential to improve Watson dramatically. TrueNorth is a neuro-synaptic computer chip that functions like the human brain, with the equivalent of 1 million neurons interconnected via 256 million synapses. With its 5.4 billion transistors it is one of the largest chips ever made, yet it consumes less than 0.1 watt. Running Watson’s cognitive software on cognitive TrueNorth hardware could result in Mega Watson in the coming years.

And have a look at the progress of the previous Grand Challenge. Today, computers “play” chess at astronomical levels that leave world champions without any hope of winning. And you don’t need the proverbial mainframe for that: you can buy a strong chess program for a few hundred euros and run it on your laptop or smartphone. Of course, it is strictly forbidden to use them during chess tournaments, but they are used during game preparation and to get a definitive judgement about game positions. If the program indicates that White gets a “decisive advantage” by playing the advised move, then that serves as ground truth. And if someone disagrees, the disagreement will be based on another, perhaps stronger, chess program. In their preparation, chess players depend on their computers. All in all, today’s chess players play stronger than those of decades ago, as indicated by the official Elo chess strength ratings.

The same will happen with Watson. Its advice had better be followed, because it is the best answer based on all available information. In just a few years, it will simply be a bad idea to make important decisions without support from cognitive systems. A decision maker will be held responsible, and may even be liable to prosecution, after unsuccessfully deviating from the option recommended by the cognitive system. Not using them will effectively be forbidden.

And after that? Imagine having the equivalent of a million TrueNorth chips, containing enough neurons to compare with the human brain, or a billion, which exceeds the human brain. Via its connections with the Internet of Things, it will have access to billions of external signals and it will sense what is happening in the world. Might self-consciousness emerge? I posit this as a future Grand Challenge: “build a Sentient Computer that is self-conscious, senses the world and outwits the smartest humans”. But you won’t find this in Watson’s roadmap documentation 😉

Impossible?

The first thing to solve is to find a one-dimensional metric. Only then can we answer “Where are we now?”, how much progress is needed and when it might be achieved. David Bowie would say: “the moment it knows it knows we know”.

This post was written by:

Maris van Sprang, Ph.D.
Senior IT Architect &
Benelux TEC Council Member

TOGAF 9 Certified

Maris can be reached at: m.vansprang@nl.ibm.com