IBM Support

Cognitive University for Watson Systems SmartSeller

Technical Blog Post


Abstract

Cognitive University for Watson Systems SmartSeller

Body

Last month, I had the pleasure of helping train Watson on its latest mission: answering questions from sellers. These are not just the IBM feet on the street, but also IBM distributors and IBM Business Partners.

In their post [Workers Spend Too Much Time Searching for Information], Cottrill Research explains the problem all too well. Here is an excerpt:

"... [survey by SearchYourCloud] revealed 'workers took up to 8 searches to find the right document and information.' Here are a few other statistics that help tell the tale of information overload and wasted time spent searching for correct information -- either external or internal:
  • 'According to a McKinsey report, employees spend 1.8 hours every day -- 9.3 hours per week, on average -- searching and gathering information. Put another way, businesses hire 5 employees but only 4 show up to work; the fifth is off searching for answers, but not contributing any value.' Source: [Time Searching for Information]
     
  • '19.8 percent of business time -- the equivalent of one day per working week -- is wasted by employees searching for information to do their job effectively,' according to Interact. Source: [A Fifth of Business Time is Wasted]
     
  • IDC data shows that 'the knowledge worker spends about 2.5 hours per day, or roughly 30 percent of the workday, searching for information ... 60 percent [of company executives] felt that time constraints and lack of understanding of how to find information were preventing their employees from finding the information they needed.' Source: [Information: The Lifeblood of the Enterprise]."
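Putting those numbers side by side, the three statistics roughly agree that somewhere between a fifth and a third of the workday goes to searching. Here is a quick back-of-the-envelope check; the workday and workweek lengths are my own assumptions, not figures from the cited sources:

    # Back-of-the-envelope check of the search-time statistics quoted above.
    # The workday and workweek lengths are assumptions, not from the sources.
    mckinsey_share = 1.8 / 9.0        # 1.8 hours of a 9-hour day
    one_in_five = 1 / 5               # "5 employees ... the fifth is off searching"
    interact_share = 0.198            # 19.8 percent of business time
    one_day_per_week = 1 / 5          # one day out of a 5-day working week
    idc_share = 2.5 / 8.0             # 2.5 hours of an 8-hour day

    print(f"McKinsey: {mckinsey_share:.0%} of the day, about {one_in_five:.0%} (one in five)")
    print(f"Interact: {interact_share:.1%}, close to {one_day_per_week:.0%} (one day per week)")
    print(f"IDC: {idc_share:.0%} of the day, quoted as roughly 30 percent")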

In the early days of the Internet, before search engines like Google or Bing, I competed in [Internet Scavenger Hunts]. A dozen or more contestants would gather in a room and be given a list of 20 questions to answer. Each of us would then hunt down the answers on the Internet. The person who found the most documented answers before time ran out won. It was quite the challenge!

Over the years, I have honed my skills as a [Search Ninja]. Because I have over 30 years of experience in IBM Storage, many sellers come to me for answers. Sometimes sellers are just too lazy to look for the answers themselves, too busy trying to meet client deadlines, or too green to know where to look.

A good portion of my 60-hour week is spent helping sellers find the answers they are looking for. Sometimes I dig into the [SSIC], product data sheets, or various IBM Redbooks.

Other times, I confer with experts, such as engineers and architects on particular development teams. Often, I learn something new myself. In a few cases, I have turned some of these questions into ideas for blog posts!

It was no surprise when I was asked to help train Watson for the new "Systems SmartSeller" tool. This will be a tool that runs on smartphones or desktops to help answer the questions sellers face when responding to RFPs and other client queries.

The premise was simple: treat Watson as a student at "Cognitive University", taking classes from dozens of IBM professors over a series of semesters, or "phases".

Phase I involved building the "Corpus" (the set of documents related to z Systems, POWER systems, Storage and SDI solutions) and a "Grading Tool" that would serve as the graphical user interface. I was not involved in Phase I.

Phase II was where I came in. Hundreds of questions were categorized by product area; I worked on some 500 questions for storage. For each question, Watson offered up to eleven different responses, typically a paragraph from the Corpus. My job as a professor was to grade each response:

Rating scale:
  • ★ (one star): Irrelevant; the answer is not even storage-related
  • ★★ (two stars): Relevant (at least it is storage-related), but does not answer the question, or answers it poorly
  • ★★★ (three stars): Relevant; adequately answers the question
  • ★★★★ (four stars): Relevant; answers the question well

Most of the answers were either 1-star (not storage-related) or 2-star (mentioned storage, but a poor response). I would search the existing Corpus for a better answer, and at best found only 3-star material, which I would add to the list and grade accordingly.

I then searched the Internet for better answers. Once I found a good match, I would type up a 4-star response, add it to the list, and point it to the appropriate resources on the Web.

Other professors, who were also looking at these questions, would then grade my suggested responses as well. Watson would learn from the consensus on how appropriate and accurate each response was.
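To give a feel for how consensus grades like these can drive the learning, here is a minimal sketch of ranking candidate responses by their average star rating. The data and names are my own illustration, an assumption about the general approach rather than the actual SmartSeller pipeline:

    # Minimal sketch: several professors grade each candidate response on the
    # 1-4 star scale, and the consensus (average) rating ranks the responses.
    # Illustrative only; not the actual SmartSeller or Watson tooling.
    from statistics import mean

    # grades[response description] = star ratings from different professors
    grades = {
        "Generic mainframe paragraph": [1, 1, 2],
        "Mentions Storwize, but no compression detail": [2, 2, 3],
        "Professor-written answer with Real-time Compression details": [4, 4, 3],
    }

    # Rank candidate responses by consensus rating, best first.
    ranked = sorted(grades.items(), key=lambda item: mean(item[1]), reverse=True)
    for response, stars in ranked:
        print(f"{mean(stars):.1f} stars  {response}")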

I don't know where the Cognitive University team got some of the questions, but they were quite representative of the ones I get every week. In some cases, the seller didn't understand the question they had heard from the client, which made it difficult for me to figure out what was actually being asked.

It reminds me of that parlor game ["Telephone" or "Chinese Whispers"], in which one person whispers a message to the ear of the next person through a line of people until the last player announces the message to the entire group. I have actually played this at an IBM event in China!

Watson needs to parse each question into nouns and verbs, and use Natural Language Processing (NLP) to search the Corpus for an appropriate answer. I identified three challenges for Watson in this case (see the sketch after this list):

  • The questions are not always fully formed sentences. For example, "Object storage?" Is this asking what object storage is in general, or rather what IBM offers in this area?

  • The questions often misspell product names or use informal abbreviations. "Can Store-wise V7 do RtC?" is a typical example, short for "Can the IBM Storwize V7000 storage controller perform Real-time Compression?"

  • The questions ask what is planned for the future. "When will IBM offer feature x in product y?" I am sorry, but Watson is not [Zoltar, the fortune teller]!
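To illustrate the second challenge, here is a toy sketch of the kind of normalization a shorthand question needs before the Corpus can be searched. The abbreviation table and code are my own illustration, not how Watson actually handles this:

    # Toy illustration: expand informal product names and abbreviations in a
    # seller's question before searching the Corpus. The abbreviation map is a
    # made-up example, not Watson's actual NLP pipeline.
    import re

    ABBREVIATIONS = {
        r"\bstore-?wise\b": "Storwize",
        r"\bv7\b": "V7000",
        r"\brtc\b": "Real-time Compression",
    }

    def normalize(question: str) -> str:
        """Rewrite informal abbreviations into full product terminology."""
        for pattern, replacement in ABBREVIATIONS.items():
            question = re.sub(pattern, replacement, question, flags=re.IGNORECASE)
        return question

    print(normalize("Can Store-wise V7 do RtC?"))
    # -> Can Storwize V7000 do Real-time Compression?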

I managed to grade the responses in the two weeks we were given. Part of my frustration was that the grading tool itself was a bit buggy, and I spent some time tracking down its flaws.

The next phase is in late January and February. This will give the Cognitive University team a chance to update the Corpus, improve the grading interface, and find more professors and a different set of questions. I volunteered the most recent four years' worth of my blog posts to be added to the Corpus.

Maybe this tool will help me turn my 60-hour week back to the 40-hour week it should be!


[{"Business Unit":{"code":"BU054","label":"Systems w\/TPS"},"Product":{"code":"HW206","label":"Storage Systems"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"","Edition":"","Line of Business":{"code":"LOB26","label":"Storage"}}]

UID

ibm16157119