IBM watsonx Assistant now offers conversational search, generating conversational answers grounded in enterprise-specific content to respond to customer and employee questions. Conversational search uses generative AI to free human authors from writing and updating answers manually, which accelerates time to value and decreases the total cost of ownership of virtual assistants.

IBM watsonx Assistant connects to watsonx, IBM’s enterprise-ready AI and data platform for training, deploying and managing foundation models, to enable business users to automate accurate, conversational question-answering with customized watsonx large language models.

IBM watsonx Assistant has used foundation models since 2020 for advanced processing and understanding of text, including customer conversations. Now, Assistant connects to watsonx to implement retrieval-augmented generation (RAG), a generative AI framework that responds to natural language questions with contextual answers grounded in relevant, enterprise-specific information.

Retrieval-augmented generation (RAG)

RAG is an AI framework that combines search with generative artificial intelligence to retrieve enterprise-specific information from a search tool or vector database and then generate a conversational answer grounded in that information.
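The two RAG phases can be sketched in a few lines of Python. This is a minimal, illustrative example, not watsonx Assistant's implementation: the in-memory corpus stands in for a real search tool or vector database, the keyword-overlap retriever stands in for a production search engine, and the prompt string is what would be passed to an LLM in the generation phase.

```python
import re
from typing import List

# Stand-in for enterprise content that would live in a knowledge base
# or search index (hypothetical sample passages).
CORPUS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Premium plans include 24/7 phone support and priority routing.",
]

def _tokens(text: str) -> set:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, corpus: List[str], k: int = 2) -> List[str]:
    """Retrieval phase: rank passages by naive keyword overlap with the question."""
    q = _tokens(question)
    ranked = sorted(corpus, key=lambda p: -len(q & _tokens(p)))
    return ranked[:k]

def build_grounded_prompt(question: str, passages: List[str]) -> str:
    """Generation phase input: the prompt an LLM would receive, grounded in retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

passages = retrieve("What is the return policy?", CORPUS)
prompt = build_grounded_prompt("What is the return policy?", passages)
```

In a production system, `retrieve` would call a search service and `prompt` would be sent to an LLM; the key point is that generation only ever sees content the retrieval phase returned.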

Retrieval phase

First, watsonx Assistant retrieves relevant information from your organization’s content. For example, your content might be stored in a knowledge base or content management system. Assistant connects to this content through a search tool, retrieving accurate, up-to-date information in response to prospect, customer or employee questions.

IBM watsonx Assistant supports various patterns to connect to your organization’s content, from no-code to low-code to custom configuration. Assistant supports an out-of-the-box, no-code integration with Watson Discovery for search. Watson Discovery allows non-technical business users to upload documents, crawl the web or connect to content stored in Microsoft SharePoint, Salesforce or Box.

Clients can also take advantage of watsonx Assistant’s starter kits, which lay out step-by-step how to connect to common search tools including Coveo, Google Custom Search, Magnolia and Zendesk Support.

Answer generation phase

Once watsonx Assistant retrieves relevant information from your organization’s content, it passes that into a watsonx large language model (LLM) to generate a conversational answer grounded in that content.

By passing the LLM accurate, up-to-date content to use to generate its answer, watsonx Assistant ensures that the LLM’s answers are grounded in a closed domain of enterprise-specific content instead of an open domain of internet-scale data. As a result, the LLM is less likely to ‘hallucinate’ incorrect or misleading information.
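One common way grounding reduces hallucination is through the prompt itself: the LLM is constrained to the closed domain of retrieved content and given an explicit way to decline when the answer is not there. The template below is a hypothetical illustration of that pattern, not watsonx Assistant's actual prompt.

```python
def grounded_prompt(question: str, passages: list) -> str:
    """Constrain the LLM to retrieved content and give it an explicit fallback.

    The instruction wording is illustrative; any production prompt would be
    tuned and evaluated against real traffic.
    """
    context = "\n\n".join(passages)
    return (
        "You are a customer-support assistant. Use ONLY the context below.\n"
        "If the context does not contain the answer, reply exactly:\n"
        '"I could not find that in our documentation."\n\n'
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "When are support hours?",
    ["Support hours are 9am to 5pm Eastern, Monday through Friday."],
)
```

The explicit refusal instruction matters: without it, a model asked about something outside the retrieved context tends to fall back on its open-domain training data, which is exactly where hallucinations come from.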

To support answer generation, watsonx Assistant has partnered with IBM Research and watsonx to develop customized watsonx LLMs that specialize in generating answers grounded in enterprise-specific content. Today, clients can connect watsonx Assistant to customized watsonx LLMs using step-by-step starter kits that walk through the entire process of setting up retrieval-augmented generation for conversational search. Clients can also connect to their own watsonx LLMs or third-party LLMs using the watsonx Assistant custom extensions framework, both for retrieval-augmented generation and other generative use cases.
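For teams wiring up their own watsonx LLMs (for example, via the custom extensions framework), the generation call is a REST request. The sketch below shows the shape of such a request; the endpoint path and field names follow watsonx.ai's public text-generation API at the time of writing, but verify them against current IBM documentation, and note that the model ID and project ID shown are placeholders.

```python
import json

# Placeholder values -- substitute your region, model and project.
WATSONX_URL = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29"
MODEL_ID = "ibm/granite-13b-chat-v2"  # placeholder model ID
PROJECT_ID = "<your-project-id>"      # placeholder project ID

def build_generation_request(prompt: str) -> dict:
    """Build the JSON body for a grounded-answer generation call."""
    return {
        "model_id": MODEL_ID,
        "project_id": PROJECT_ID,
        "input": prompt,
        "parameters": {"max_new_tokens": 200, "temperature": 0},
    }

body = build_generation_request("Context: ...\nQuestion: ...\nAnswer:")
payload = json.dumps(body)
# An actual call would POST `payload` to WATSONX_URL with an IAM bearer token:
# requests.post(WATSONX_URL, data=payload,
#               headers={"Authorization": f"Bearer {token}",
#                        "Content-Type": "application/json"})
```

Setting `temperature` to 0 is a common choice for grounded question-answering, since deterministic decoding keeps the answer close to the retrieved content.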

Conversational search in practice

What does conversational search, powered by this retrieval-augmented generation framework, mean for building, deploying and maintaining virtual assistants?

Building and deploying your first virtual assistant is much easier. With conversational search, watsonx Assistant can accurately answer a broad range of questions without requiring non-technical business users to write answers manually. Teams can expand an existing virtual assistant’s coverage to handle a new set of topics, or stand up and launch a new virtual assistant connected to their organization’s existing knowledge base, without any manual authoring.

Maintaining virtual assistants also requires less effort. Once watsonx Assistant is connected to a knowledge base for conversational search, it automatically pulls information from that source to inform its generated answers. When information changes or new information becomes available, teams can simply update the information in their knowledge base. IBM watsonx Assistant will automatically retrieve the updated information to inform its answers. Teams no longer need to manually update answers or retrain models.

Altogether, conversational search accelerates the time to value and drives down the effort required for teams that want to build and deploy exceptional conversational experiences with watsonx Assistant.

Why conversational search with watsonx Assistant?

IBM watsonx Assistant’s conversational search functionality builds on the foundation of its prebuilt integrations, low-code integrations framework, and no-code authoring experience. Developers and business users alike can automate question-answering with conversational search, freeing themselves up to build higher-value transactional flows and integrated digital experiences with their virtual assistants.

Beyond conversational search, Assistant continues to collaborate with IBM Research and watsonx to develop customized watsonx LLMs that specialize in classification, reasoning, information extraction, summarization and other conversational use cases. Watsonx Assistant has already achieved major advancements in its ability to understand customers with less effort using large language models.

Stay tuned for more updates on IBM watsonx Assistant’s generative AI capabilities. Or, to learn more about how you can engage your prospects, customers and employees with conversational experiences powered by generative AI, schedule a consult.


