October 10, 2023 By Charles Quincy 5 min read

Generative AI has taken the business world by storm. Organizations around the world are trying to understand the best way to harness these exciting new developments in AI while balancing the inherent risks of using these models in an enterprise context at scale. Whether there are concerns over hallucination, traceability, training data, IP rights, skills or costs, enterprises must grapple with a wide variety of risks in putting these models into production. However, the promise of transforming customer and employee experiences with AI is too great to ignore while the pressure to implement these models has become unrelenting.

Paving the way: Large language models

The current focus of generative AI has centered on large language models (LLMs). These language-based models are ushering in a new paradigm for discovering knowledge, both in how we access it and how we interact with it. Traditionally, enterprises have relied on enterprise search engines to harness corporate and customer-facing knowledge to support customers and employees alike. These search engines rely on keywords and human feedback. Search played a key role in the initial rollout of chatbots in the enterprise by covering the “long tail” of questions that did not have a predefined path or answer. In fact, IBM watsonx Assistant has been successfully enabling this pattern for close to four years. Now, we are excited to take this pattern even further with large language models and generative AI.

Introducing Conversational Search for watsonx Assistant  

Today, we are excited to announce the beta release of Conversational Search in watsonx Assistant. Powered by our IBM Granite large language model and our enterprise search engine Watson Discovery, Conversational Search is designed to scale conversational answers grounded in business content so your AI Assistants can drive outcome-oriented interactions, and deliver faster, more accurate answers to your customers and employees.

Conversational search is seamlessly integrated into our augmented conversation builder, enabling customers and employees to automate answers and actions: from helping your customers understand credit card rewards and apply for a card, to giving your employees information about time-off policies and the ability to seamlessly book their vacation time.

Last month, IBM announced the general availability of Granite, IBM Research’s latest foundation model series designed to accelerate the adoption of generative AI into business applications and workflows with trust and transparency. Now, with this beta release, users can leverage a Granite LLM pre-trained on enterprise-specialized datasets and apply it in watsonx Assistant to quickly power compelling and comprehensive question-and-answer assistants. Conversational Search expands the range of user queries handled by your AI assistant, so you can spend less time training and more time delivering knowledge to those who need it.

Users of the Plus or Enterprise plans of watsonx Assistant can now request early access to Conversational Search. Contact your IBM Representative to get exclusive access to Conversational Search Beta or schedule a demo with one of our experts.

Schedule a demo with our experts today

How does Conversational Search work behind the scenes?

When a user asks an assistant a question, watsonx Assistant first determines how to help the user – whether to trigger a prebuilt conversation, conversational search, or escalation to a human agent. This routing is done by our new transformer model, which achieves higher accuracy with dramatically less training needed.
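As a rough illustration, this routing step can be sketched as a confidence check over intent scores. The function, intent names, and threshold below are hypothetical stand-ins, not the watsonx Assistant API:

```python
# Illustrative sketch of assistant routing: a confident intent match triggers
# its prebuilt conversation flow; anything else falls back to conversational
# search. All names and the threshold are hypothetical, not watsonx APIs.

def route(intent_scores: dict[str, float], threshold: float = 0.7) -> str:
    """Pick a handler from the intent classifier's confidence scores."""
    best_intent, best_score = max(intent_scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        return f"prebuilt:{best_intent}"   # a predefined conversation flow
    return "conversational_search"         # long-tail question: search instead

print(route({"welcome_offer": 0.92, "apply_card": 0.05}))  # prebuilt:welcome_offer
print(route({"welcome_offer": 0.31, "apply_card": 0.12}))  # conversational_search
```

In a production assistant the scores would come from the transformer model itself; the threshold is a tuning knob that trades prebuilt-flow coverage against search fallback.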

Once conversational search is triggered, it relies on two fundamental steps: retrieval, finding the most relevant information possible, and generation, structuring that information to elicit the richest responses from the LLM. For both steps, IBM watsonx Assistant leverages the retrieval-augmented generation (RAG) framework, packaged as a no-code, out-of-the-box solution that reduces the need to retrain the LLM. Users can simply upload the latest business documentation or policies, and the model will retrieve that information and return an updated response.
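The two RAG steps can be sketched in a few lines. The functions below are hypothetical stand-ins, assuming a naive keyword-overlap retriever in place of Watson Discovery and a plain prompt template in place of the real Granite call:

```python
# Illustrative retrieval-augmented generation (RAG) sketch. retrieve() and
# build_prompt() are hypothetical stand-ins for Watson Discovery and the
# prompt sent to the Granite LLM; none of this is an actual watsonx API.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query (a real system
    would use semantic search) and return the top matches."""
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=overlap, reverse=True)[:top_k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Structure the retrieved passages into a grounded prompt for the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}\nAnswer:")

docs = [
    "Vacation policy: employees accrue 1.5 days of paid time off per month.",
    "Expense policy: meals over $50 require an itemized receipt.",
]
query = "How much vacation do I accrue?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

The key design point is that fresh business content enters through the retrieval step, so answers stay current without retraining the model.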

For the retrieval portion, watsonx Assistant leverages search capabilities to retrieve relevant content from business documents. IBM watsonx Discovery enables semantic searches that understand context and meaning to retrieve information. And, because these models understand language so well, business users can improve the quantity of topics and the quality of answers their AI assistant covers with no training. Semantic search is available today on IBM Cloud Pak for Data and will become a configurable option for both software and SaaS deployments in the coming months.
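The idea behind semantic search is to compare a query and a passage as vectors rather than as keyword sets. This toy sketch uses a bag-of-words vector and cosine similarity so it runs standalone; real systems such as watsonx Discovery use learned dense embeddings instead:

```python
# Toy semantic-search sketch: cosine similarity between query and passage
# vectors. A bag-of-words "embedding" stands in for the learned dense
# embeddings a real system would use, so the example is self-contained.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: a sparse bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

passages = [
    "Time off requests must be submitted two weeks in advance.",
    "The Platinum Card earns 3x points on travel purchases.",
]
query = "how do I book vacation time off"
best = max(passages, key=lambda p: cosine(embed(query), embed(p)))
print(best)  # the time-off passage ranks highest
```

With true semantic embeddings, the query would also match passages that share meaning but no words, such as “PTO must be requested 14 days ahead.”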

Once retrieval is done and the search results have been ranked by relevance, the information is passed to an LLM – in this case the IBM Granite model – to synthesize and generate a conversational answer grounded in that content. The answer is provided with traceability, so businesses and their users can see the source of the answer. The result: a trusted, contextual response based on your company’s content.
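One way to picture traceability is an answer object that carries the documents it was grounded in. The structure below is purely illustrative; a real system would pass the ranked passages to the LLM rather than quote the top hit:

```python
# Sketch of answer traceability: the generated answer carries the documents
# it was grounded in, so users can inspect the source. All names here are
# hypothetical, not a watsonx data model.
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str
    sources: list[str]  # document names the answer was drawn from

def answer_with_sources(question: str,
                        ranked_hits: list[tuple[str, str]]) -> GroundedAnswer:
    """ranked_hits: (passage, source) pairs, most relevant first.
    Quotes the top passage in place of a real LLM synthesis step."""
    top_passage, _ = ranked_hits[0]
    return GroundedAnswer(text=top_passage,
                          sources=[src for _, src in ranked_hits])

hits = [("PTO accrues at 1.5 days per month.", "hr-handbook.pdf"),
        ("Unused PTO carries over up to 5 days.", "hr-faq.html")]
result = answer_with_sources("How fast does PTO accrue?", hits)
print(result.sources)  # ['hr-handbook.pdf', 'hr-faq.html']
```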

At IBM, we understand the importance of using AI responsibly, and we enable our clients to do the same with conversational search. Organizations can enable the functionality only when certain topics are recognized, or use conversational search as a general fallback for long-tail questions. Enterprises can adjust their preference for search based on their corporate policies for generative AI. We also offer “trigger words” that automatically escalate to a human agent when certain topics are recognized, ensuring conversational search is not used for them.
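A trigger-word guard like this one could gate conversational search before it ever runs. The word list and function are purely illustrative; real deployments would configure their own phrases:

```python
# Illustrative trigger-word guard: messages touching sensitive topics skip
# conversational search and go straight to a human agent. The phrase list is
# hypothetical; this is not how watsonx Assistant is configured internally.

TRIGGER_PHRASES = {"credit score", "fraud", "dispute"}

def needs_human_agent(message: str) -> bool:
    """True if the message mentions a topic reserved for human agents."""
    text = message.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

print(needs_human_agent("Will applying affect my credit score?"))  # True
print(needs_human_agent("What rewards does the card offer?"))      # False
```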

Conversational Search in action

Let’s look at a real-life scenario and how watsonx Assistant leverages Conversational Search to help a customer of a bank apply for a credit card.

Let’s say a customer opens the bank’s assistant and asks what sort of welcome offer they would be eligible for if they apply for the Platinum Card. watsonx Assistant leverages its transformer model to examine the user’s message and route to a prebuilt conversation flow that can handle this topic. The assistant can seamlessly and naturally extract the relevant details from the user’s messages, call the appropriate backend service, and return the welcome offer details to the user.

Before the user applies, they have a couple of questions. They start by asking for more details on what sort of rewards the card offers. Again, watsonx Assistant utilizes its transformer model, but this time routes to Conversational Search because there are no suitable prebuilt conversations. Conversational Search looks through the bank’s knowledge documents and answers the user’s question.

The user is now ready to apply but wants to make sure applying won’t affect their credit score. When they ask this question to the assistant, the assistant recognizes this as a special topic and escalates to a human agent. IBM watsonx Assistant can condense the conversation into a concise summary and send it to the human agent, who can quickly understand the user’s question and resolve it for them.

From there, the user is satisfied and applies for their new credit card.

Conversational AI that drives open innovation

IBM has been and will continue to be committed to an open strategy, offering a range of deployment options that best suit clients’ enterprise needs. IBM watsonx Assistant Conversational Search provides a flexible platform that can deliver accurate answers across different channels and touchpoints by bringing together enterprise search capabilities and IBM base LLMs built on watsonx. Today, we offer the Conversational Search beta on IBM Cloud, as well as a self-managed Cloud Pak for Data deployment option for semantic search with watsonx Discovery. In the coming months, we will offer semantic search as a configurable option for Conversational Search in both software and SaaS deployments – ensuring enterprises can run and deploy where they want.

For greater flexibility in model-building, organizations can also bring their proprietary data to IBM’s LLMs and customize them using watsonx.ai, or leverage third-party models like Meta’s Llama and others from the Hugging Face community for conversational search and other use cases.

Just getting started on your generative AI journey for customer service? Tune in to our webinar to learn more about this new feature and how companies are seizing the opportunities of conversational AI to empower agents and elevate customer experiences.

Listen: Embracing Generative AI for Elevated Customer Service