Get deeper insights into your chatbot
Watson Assistant provides a summary of the interactions between users and your chatbot. Our bot analytics helps you understand the topics your users want to know more about, whether the chatbot is addressing those needs, and how to improve.
Holistic customer engagement analytics
Ensure full transparency by reviewing entire conversations between the customer and the assistant.
Leverage AI recommendations
Identify content gaps in current training with new and emerging topics your customers care about.
Compare coverage over time
Look at a timeline of intent coverage to see how you are improving content breadth.
Business impact through containment
Understand the impact on your business and how well the assistant can solve customer needs without a human agent.
Track primary KPIs and understand the traffic your assistant is receiving, with the ability to filter performance by intent or entity.
Data where you need it
Call our API to get the log data into your own database and business intelligence tools alongside other important business data.
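Once the raw log data is in hand, it typically needs to be flattened before loading into a database or BI tool. The sketch below assumes log entries shaped like those returned by the Watson Assistant logs API (a `log_id`, a `request_timestamp`, and a `response` payload with `input.text`, `intents`, and `context.conversation_id`); treat the field names and the `flatten_log_entry` helper as illustrative, not as the exact export schema.

```python
# Sketch: flatten raw Watson Assistant log entries into flat rows
# suitable for a database table or BI tool. The entry shape below is
# an assumption based on the logs API payload, not a guaranteed schema.

def flatten_log_entry(entry):
    """Reduce one raw log entry to a flat row (a dict)."""
    response = entry.get("response", {})
    intents = response.get("intents", [])
    # Keep only the highest-confidence intent for reporting.
    top = max(intents, key=lambda i: i["confidence"]) if intents else None
    return {
        "log_id": entry.get("log_id"),
        "conversation_id": response.get("context", {}).get("conversation_id"),
        "timestamp": entry.get("request_timestamp"),
        "user_input": response.get("input", {}).get("text", ""),
        "top_intent": top["intent"] if top else None,
        "confidence": top["confidence"] if top else None,
    }

sample = {
    "log_id": "abc123",
    "request_timestamp": "2023-05-01T14:02:11.000Z",
    "response": {
        "input": {"text": "reset my password"},
        "intents": [{"intent": "password_reset", "confidence": 0.94}],
        "context": {"conversation_id": "conv-1"},
    },
}

row = flatten_log_entry(sample)
print(row["top_intent"])  # password_reset
```

Rows in this shape can be bulk-inserted into a warehouse table and joined against other business data by timestamp or conversation ID.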
Autolearning and recommendations
Enable your chatbot to improve itself
Review visualizations that illustrate the impact that autolearning is having on the performance of your assistant.
Serve your customers better by using Watson-recommended sets of intents to cover the most common customer intents and inquiries.
Intent detection model
Combine traditional machine learning, transfer learning, and deep learning techniques in a cohesive model that is highly responsive at run time.
One of the benefits is looking at the responses and data that Watson collects. You’re able to see where the population has questions that you may not be answering directly on your website.
Kevin W. Sexton, MD
Associate Director, Institute for Digital Health & Innovation
University of Arkansas for Medical Sciences
Frequently asked questions
What conversational analytics does Watson Assistant offer?
The Analytics dashboard of Watson Assistant provides a history of conversations between users and a deployed assistant. You can use this history to improve how your assistants understand and respond to user requests.
What key metrics should I track for my AI chatbot?
Containment: Number of conversations in which the assistant is able to satisfy the customer's request without human intervention.
Coverage: Number of conversations in which the assistant is confident that it can address a customer's request.
Total conversations: The total number of conversations between active users and your assistant during the selected time period.
Average messages per conversation: The total messages received during the selected time period divided by the total conversations during the selected time period.
Total messages: The total number of messages received from active users over the selected time period.
Active users: The number of unique users who have engaged with your assistant within the selected time period.
Average conversations per user: The total conversations divided by the total number of unique users during the selected time period.
Use our metrics scorecard to view all.
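The count-based metrics above follow directly from the per-message log data. A minimal sketch, assuming each message record carries a user ID and a conversation ID (the field names here are illustrative, not the actual log schema):

```python
# Sketch: derive the scorecard metrics from a flat list of user
# messages. Each record is assumed to identify its user and its
# conversation; the field names are illustrative.
messages = [
    {"user": "u1", "conversation": "c1"},
    {"user": "u1", "conversation": "c1"},
    {"user": "u1", "conversation": "c2"},
    {"user": "u2", "conversation": "c3"},
]

total_messages = len(messages)
conversations = {m["conversation"] for m in messages}
users = {m["user"] for m in messages}

total_conversations = len(conversations)
active_users = len(users)
avg_messages_per_conversation = total_messages / total_conversations
avg_conversations_per_user = total_conversations / active_users

print(total_conversations, active_users)                # 3 2
print(round(avg_messages_per_conversation, 2))          # 1.33
print(avg_conversations_per_user)                       # 1.5
```

The averages are simple ratios of the totals over the same selected time period, so they can be recomputed for any date range by filtering the message list first.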
Can I view chatbot metrics in real time?
The Analytics Overview page provides a summary of the interactions between users and your assistant. You can view the amount of traffic for a given time period, as well as the intents and entities that were recognized most often in user conversations.
You can choose whether to view data for a single day, a week, a month, or a quarter. In each case, the data points on the graph adjust to an appropriate measurement period. For example, when viewing a graph for a day, the data is presented in hourly values, but when viewing a graph for a week, the data is shown by day. A week always runs from Sunday through Saturday. You cannot create custom time periods, such as a week that runs from Thursday to the following Wednesday, or a month that begins on a date other than the first.
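The granularity adjustment described above amounts to grouping timestamps into hourly buckets for a day view and daily buckets for longer views. A rough sketch of that bucketing (the `bucket` helper and its period names are hypothetical, not part of the product):

```python
from datetime import datetime

# Sketch: group message timestamps by hour for a one-day view and by
# day for a one-week view, mirroring how the graph granularity adjusts.
def bucket(timestamps, period):
    fmt = "%Y-%m-%d %H:00" if period == "day" else "%Y-%m-%d"
    counts = {}
    for ts in timestamps:
        key = ts.strftime(fmt)
        counts[key] = counts.get(key, 0) + 1
    return counts

stamps = [
    datetime(2023, 5, 1, 9, 15),
    datetime(2023, 5, 1, 9, 40),
    datetime(2023, 5, 2, 11, 5),
]
print(bucket(stamps, "day"))   # hourly keys
print(bucket(stamps, "week"))  # {'2023-05-01': 2, '2023-05-02': 1}
```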
How do I improve chatbot performance?
Measure the number of individual messages with weak understanding. These messages are not classified by an intent, and do not contain any known entities. Reviewing unrecognized messages can help you to identify potential dialog problems.
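In log terms, a weakly understood message is one whose record lists no recognized intent and no known entities. A minimal filter over illustrative log records (the field names are assumptions, not the exact log schema):

```python
# Sketch: flag messages with "weak understanding" -- no intent was
# classified and no known entities were found. Field names are
# illustrative.
def weakly_understood(message):
    return not message["intents"] and not message["entities"]

log = [
    {"text": "reset my password", "intents": ["password_reset"], "entities": []},
    {"text": "asdf qwerty", "intents": [], "entities": []},
]

unrecognized = [m["text"] for m in log if weakly_understood(m)]
print(unrecognized)  # ['asdf qwerty']
```

Reviewing the resulting list periodically is one way to spot missing intents or training examples.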
Can I measure user satisfaction?
Use built-in metrics to analyze logs from conversations between customers and your assistant to gauge how well it's doing and identify areas for improvement.
How can I measure my total number of users, active users, and new users?
The Analytics dashboard of Watson Assistant provides a history of conversations between users and a deployed assistant. You can use this history to measure the number of users interacting with your assistant.
How can I measure the total number of conversations and total number of messages?
A single conversation consists of messages that an active user sends to your assistant, and the messages your assistant sends to the user to initiate the conversation or respond. If your assistant starts by saying "Hi, how can I help you?", and then the user closes the browser without responding, that message is not included in the total conversation count.
The Analytics dashboard of Watson Assistant provides a history of conversations between users and a deployed assistant. You can use this history to determine the total number of conversations and messages.
Can I see the number of times my chatbot escalates to a human agent?
To measure containment accurately, the metric must be able to identify when a human intervention occurs. The metric primarily uses the Connect to human agent response type as an indicator. If a user conversation log includes a call to a Connect to human agent response type, then the conversation is considered to be not contained.
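The containment check therefore reduces to scanning each conversation's responses for that response type. A minimal sketch, assuming each turn's output follows the generic-response log shape with a `response_type` field (the exact log structure here is an assumption; `connect_to_agent` matches the Connect to human agent response type):

```python
# Sketch: a conversation is "not contained" if any turn's output
# includes a Connect to human agent response. The log shape below is
# illustrative; "connect_to_agent" is the response_type indicator.
def is_contained(conversation):
    for turn in conversation:
        for generic in turn.get("output", {}).get("generic", []):
            if generic.get("response_type") == "connect_to_agent":
                return False
    return True

escalated = [
    {"output": {"generic": [{"response_type": "text", "text": "One moment."}]}},
    {"output": {"generic": [{"response_type": "connect_to_agent"}]}},
]
self_served = [
    {"output": {"generic": [{"response_type": "text", "text": "Done!"}]}},
]

print(is_contained(escalated), is_contained(self_served))  # False True
```

The containment rate over a time period is then the share of conversations for which this check returns true.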
Can I monitor sentiment analysis?
Can I monitor user behavior and user interactions?
You can view the logs for a version of a skill that is running in production from the Analytics tab of a development version of the skill. As you find misclassifications or other issues, you can correct them in the development version of the skill, and then deploy the improved version to production after testing. See Improving across assistants for more details.