April 29, 2019 | Written by: Twana Daniel
Categorized: How-tos | Watson
An experience from a recent customer engagement on transcribing customer conversations
As a developer and technical offering manager, I get to work with some of IBM's large enterprise clients and their development teams. One of my recent engagements has been with an insurance company, helping it transform its customer service department by transcribing more than a million hours of calls per month. The customer service department can then pass the transcribed calls through a number of AI services to understand how conversations are flowing, categorize calls, and much more.
The journey involves understanding the customer's requirements, asking the right questions to confirm them, and deciding which IBM Cloud services should be used to achieve the required outcome. In this blog post series, you will learn how to break the problem statement into smaller problems, map an AI service to each one, chain those services together, and achieve the end result of transforming the call center customer service experience.
Achieving a high accuracy rate while transcribing calls is traditionally quite hard, but by creating custom language models with the Watson Speech to Text service on IBM Cloud, it can now be readily achieved. The transcribed data is then passed to the Watson Natural Language Understanding (NLU) service to categorize the calls, providing full insight into how each conversation flows. The NLU service requires a custom training model to understand the insurance domain. Finally, the data can be visualized on a dashboard to understand the end-to-end handling of calls.
Start with some of the questions
- Why should these calls be transcribed?
- What outcome is expected from the transcribed calls?
- What is the quality of these calls, speaker type, the topic of conversations, and data sensitivity?
- Should custom models be used for better accuracy or will the base model work fine?
- Do we need quality control checks to confirm that the beginning and end of each call match the insurance company's call scripts?
- Do we need to categorize these calls?
- What kinds of dashboard views are required to see all calls in a single view?
- When should calls be escalated to management for review?
- We need to use a number of Watson services chained to one another. Is that aligned with the company's roadmap and budget?
These were some of the questions asked. Once we had understanding and agreement on all of them, I moved on to the requirements needed to get the work done:
- Transcribe one million+ hours of calls per month.
- Improve accuracy for the transcribed calls.
- Categorize the calls—filter each call into the correct category: quotations, account modifications, claim support, or other.
- Detect tones and emotions, and notify the correct department if a caller is dissatisfied.
- Quality control check—confirm that the customer service representative follows the scripted beginning and ending of each call, per company policy.
- Get a dashboard view of all the calls—a quick view of how calls are being handled and satisfaction rates.
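The quality control requirement above can be previewed with a small sketch. This is not how the final solution performs the check; it is a minimal illustration of comparing a call's first and last utterances against scripted opening and closing lines, using fuzzy word matching. The scripts, threshold, and function names here are all hypothetical.

```python
from difflib import SequenceMatcher

# Illustrative call scripts (not the client's real scripts)
OPENING_SCRIPT = "thank you for calling acme insurance how may i help you"
CLOSING_SCRIPT = "is there anything else i can help you with today"

def _similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how closely two utterances match, word by word."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

def check_script_compliance(transcript: list[str], threshold: float = 0.7) -> dict:
    """Check whether the first and last utterances of a transcribed call
    match the scripted opening and closing closely enough."""
    return {
        "opening_ok": _similarity(transcript[0], OPENING_SCRIPT) >= threshold,
        "closing_ok": _similarity(transcript[-1], CLOSING_SCRIPT) >= threshold,
    }

call = [
    "Thank you for calling Acme Insurance, how may I help you?",
    "I'd like a quote for my car.",
    "Is there anything else I can help you with today?",
]
print(check_script_compliance(call))  # {'opening_ok': True, 'closing_ok': True}
```

A fuzzy threshold, rather than an exact string match, tolerates the small wording and punctuation variations a speech-to-text transcript will inevitably contain.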
The IBM Cloud services mapped to these requirements:
- Speech to Text (STT).
- Natural Language Understanding (NLU).
- Knowledge Studio to create custom categories models used by the NLU.
- Tone Analyzer to get the various tones of the calls, such as joy, sadness, anger, and agreeableness.
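Part 2 covers chaining these services in detail, but the shape of the flow can be previewed here. In this sketch, `transcribe`, `categorize`, and `analyze_tone` are stand-in stubs for the actual Watson API calls (Speech to Text, NLU with a Knowledge Studio model, and Tone Analyzer); the keyword rules and return values are purely illustrative, not the real custom models.

```python
# Sketch of the chain: STT -> NLU categorization -> Tone Analyzer.
# Each function below is a stub standing in for a Watson service call.

def transcribe(audio_file: str) -> str:
    # Stand-in for Speech to Text with a custom language model.
    return "i would like to make changes to my account please"

def categorize(transcript: str) -> str:
    # Stand-in for NLU with a Knowledge Studio custom categories model.
    # These keyword rules are purely illustrative.
    rules = {
        "quotation": ["quote", "price"],
        "account modification": ["account", "change", "changes"],
        "claim support": ["claim", "accident"],
    }
    words = transcript.split()
    for category, keywords in rules.items():
        if any(word in words for word in keywords):
            return category
    return "other"

def analyze_tone(transcript: str) -> str:
    # Stand-in for Tone Analyzer; flags dissatisfaction by keyword.
    return "anger" if "frustrated" in transcript else "neutral"

def process_call(audio_file: str) -> dict:
    # Chain the services: the output of each step feeds the next.
    transcript = transcribe(audio_file)
    return {
        "category": categorize(transcript),
        "tone": analyze_tone(transcript),
    }

print(process_call("call-0001.wav"))
```

The point of the sketch is the data flow: one transcript produced by STT is fanned out to the downstream analysis services, and their combined results form one record per call that a dashboard can aggregate.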
Stay tuned for Part 2
In the next part of this blog post series, you will see how these AI services can be chained together to analyze real-time customer conversations. Stay tuned!