Get valuable insights into your chatbot’s performance
Watson Assistant provides a summary of the interactions between users and your virtual agent. Visualization and analysis of critical metrics and KPIs help you understand the topics users want addressed, if the virtual agent is meeting those needs, and how to improve the service it provides.
Holistic customer experience analytics
See the full impact of Watson Assistant on customer experience and your business.
Goal completion rate (GCR)
Fully understand how successful your chatbot is by seeing how often users complete the action flows you’ve created. This important metric empowers you to know if the content you’ve built enables successful completion of customer requests — by the virtual agent or by intentional handoff to a live agent.
Look deeper to understand why conversation flows are failing. Did the user get stuck? Were they looking for a different action? Was the customer support conversation abandoned? Was it escalated to an agent? Quickly identify enhancements to help your customers get the answers they need, when they need them.
Group chatbot logs at a conversation level, highlighting the different actions that a customer takes within that conversation. This enables quicker troubleshooting to understand problem areas, while still providing a full conversation view with every message when you want to dive deeper.
With Watson Assistant and Segment, users can incorporate the data generated by all of the daily conversations between customers and virtual agents into their choice of dozens of solutions. Clean, synthesize, and connect this data to the hundreds of analytics, raw data, and warehouse tools housed in the Segment integration catalog.
One of the benefits is looking at the responses and data that Watson collects. You’re able to see where the population has questions that you may not be answering directly on your website.
Kevin W. Sexton, MD
Associate Director, Institute for Digital Health & Innovation
University of Arkansas for Medical Sciences
Frequently asked questions
What conversational analytics does Watson Assistant offer?
What key metrics should I track for my artificial intelligence chatbot?
Containment: Number of conversations in which the assistant is able to satisfy the customer's request without human intervention.
Coverage: Number of conversations in which the assistant is confident that it can address a customer's request.
Total conversations: The total number of conversations between active users and your assistant during the selected time period.
Average messages per conversation: The total messages received during the selected time period divided by the total conversations during the selected time period.
Total messages: The total number of messages received from active users over the selected time period.
Active users: The number of unique users who have engaged with your assistant within the selected time period.
Average conversations per user: The total conversations divided by the total number of unique users during the selected time period.
Retention rate: Percentage of users that return to using the chatbot in the given time frame.
Use our metrics scorecard to view all.
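To make these definitions concrete, here is a minimal sketch of how several of the scorecard metrics could be computed from exported conversation logs. The log shape used here (a list of dicts with "user_id" and "messages" keys) is an assumption for illustration, not the actual Watson Assistant log format.

```python
def summarize(conversations):
    """Compute summary metrics from a list of conversation logs.

    Assumed (hypothetical) shape: each conversation is a dict with a
    "user_id" string and a "messages" list of user messages.
    """
    total_conversations = len(conversations)
    total_messages = sum(len(c["messages"]) for c in conversations)
    active_users = len({c["user_id"] for c in conversations})
    return {
        "total_conversations": total_conversations,
        "total_messages": total_messages,
        "active_users": active_users,
        # Total messages divided by total conversations, per the definition above.
        "avg_messages_per_conversation": (
            total_messages / total_conversations if total_conversations else 0.0
        ),
        # Total conversations divided by unique users, per the definition above.
        "avg_conversations_per_user": (
            total_conversations / active_users if active_users else 0.0
        ),
    }
```

Each value maps directly to one of the definitions listed above, so the same function can feed a dashboard or a scheduled report.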
Can I view chatbot metrics in real time?
The Analytics Overview page provides a summary of chatbot interactions. You can view the amount of traffic for a given time period, as well as the intents and entities that were recognized most often in user conversations.
You can choose whether to view data for a single day, a week, a month, or a quarter. In each case, the data points on the graph adjust to an appropriate measurement period. For example, when viewing a graph for a day, the data is presented in hourly values, but when viewing a graph for a week, the data is shown by day. A week always runs from Sunday through Saturday. You cannot create custom time periods, such as a week that runs from Thursday to the following Wednesday, or a month that begins on any date other than the first.
How do I improve chatbot performance?
Measure the number of individual messages with weak understanding. These messages are not classified by an intent, and do not contain any known entities. Reviewing unrecognized messages can help you to identify potential dialog problems.
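A simple way to surface these weakly understood messages is to filter logs for messages with no recognized entities and no confident intent. The message shape below (dicts with "text", "intents", and "entities" keys) is loosely modeled on Watson Assistant message logs but is an assumption for illustration, as is the 0.3 confidence cutoff.

```python
def weakly_understood(messages, confidence_threshold=0.3):
    """Return the text of messages the assistant did not understand well.

    Assumed (hypothetical) shape: each message is a dict with "text",
    "intents" (a list of {"intent", "confidence"}), and "entities".
    """
    weak = []
    for msg in messages:
        entities = msg.get("entities", [])
        intents = msg.get("intents", [])
        # Highest intent confidence, or 0.0 if no intent was classified.
        top_confidence = max((i["confidence"] for i in intents), default=0.0)
        # Weak understanding: no known entities and no confident intent.
        if not entities and top_confidence < confidence_threshold:
            weak.append(msg["text"])
    return weak
```

Reviewing the returned messages by hand is then a quick way to find missing intents or entity values worth adding.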
Can I measure user satisfaction?
Use built-in metrics to analyze logs from conversations between customers and your assistant to gauge how well it's doing and identify areas for improvement.
How can I measure my total number of users, active users, and new users?
The Analytics dashboard of Watson Assistant provides a history of conversations between users and a deployed assistant. You can use this history to measure the number of users interacting with your assistant.
How can I measure the total number of conversations and total number of messages?
A single conversation consists of messages that an active user sends to your assistant, and the messages your assistant sends to the user to initiate the conversation or respond. If your assistant starts by saying "Hi, how can I help you?", and then the user closes the browser without responding, that message is included in the total conversation count.
The Analytics dashboard of Watson Assistant provides a history of conversations between users and a deployed assistant. You can use this history to determine the total number of conversations and messages.
Can I see the number of times my chatbot escalates to a human agent?
To measure containment accurately, the metric must be able to identify when a human intervention occurs. The metric primarily uses the Connect to human agent response type as an indicator. If a user conversation log includes a call to a Connect to human agent response type, then the conversation is considered to be not contained.
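The containment logic described above can be sketched as a scan over conversation logs for the connect-to-agent response type. The "connect_to_agent" literal mirrors the response type named above, but the surrounding log structure here is an assumption for illustration, not the actual export schema.

```python
def containment_rate(conversations):
    """Fraction of conversations completed without a human handoff.

    Assumed (hypothetical) shape: each conversation is a dict with a
    "responses" list, where each response has a "response_type" field.
    A conversation containing a "connect_to_agent" response is treated
    as not contained, per the rule described above.
    """
    if not conversations:
        return 0.0
    escalated = sum(
        1
        for convo in conversations
        if any(
            r.get("response_type") == "connect_to_agent"
            for r in convo.get("responses", [])
        )
    )
    return (len(conversations) - escalated) / len(conversations)
```

Tracking this rate over time shows whether content improvements are reducing the share of conversations that escalate to live agents.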
What is chatbot fallback?
A chatbot triggers a fallback message when it can't intelligently respond to a message it receives from a user.
Can I monitor sentiment analysis?
Can I monitor user behavior and user interactions?
You can view the logs for a version of a skill that is running in production from the Analytics tab of a development version of the skill. As you find misclassifications or other issues, you can correct them in the development version of the skill, and then deploy the improved version to production after testing. See Improving across assistants for more details.