May 17, 2017 By Vidyasagar Machupalli 3 min read

Who’s speaking? : Speaker Diarization with Watson Speech-to-Text API

Distinguishing between two speakers in a conversation is difficult, especially when you hear them virtually or for the first time. The same is true when multiple voices interact with AI/cognitive systems, virtual assistants, and home assistants like Alexa or Google Home. To address this, the Watson Speech to Text API has been enhanced to support real-time speaker diarization.

After we built a popular chatbot using Watson services, we received a couple of requests to include the speaker_labels setting in our code sample.

So, What is Speaker Diarization?

Speaker diarisation (or diarization) is the process of partitioning an input audio stream into homogeneous segments according to the speaker identity. It can enhance the readability of an automatic speech transcription by structuring the audio stream into speaker turns and, when used together with speaker recognition systems, by providing the speaker’s true identity.

Why Speaker Diarization?

Real-time speaker diarization is a need we've heard about from many businesses across the world that rely on transcribing large volumes of voice conversations every day. Imagine you operate a call center and regularly need to act while customer and agent conversations happen: providing product-related help, alerting a supervisor to negative feedback, or flagging calls tied to customer promotions. Until now, calls were typically transcribed and analyzed only after they ended. Watson's speaker diarization capability makes that data available immediately.

To experience speaker diarization via the Watson Speech to Text API on IBM Bluemix, head to this demo and play sample audio 1 or 2. If you check the input JSON below, you will see that the optional "speaker_labels" parameter is set to true. This is what distinguishes between speakers in a conversation.

{
  "continuous": true,
  "timestamps": true,
  "content-type": "audio/wav",
  "interim_results": true,
  "keywords": [
    "IBM",
    "admired",
    "AI",
    "transformations",
    "cognitive",
    "Artificial Intelligence",
    "data",
    "predict",
    "learn"
  ],
  "keywords_threshold": 0.01,
  "word_alternatives_threshold": 0.01,
  "smart_formatting": true,
  "speaker_labels": true,
  "action": "start"
}
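
For illustration, here is a minimal sketch of sending that start message over the service's WebSocket interface with Python and the websocket-client package. The endpoint URL, the token handling, and the sample1.wav file name are assumptions for this sketch; substitute your own credentials and audio.

import json
import websocket  # pip install websocket-client

# Assumption: you have exchanged your service credentials for a
# short-lived token via the Watson token service.
token = "<your-watson-token>"
url = ("wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
       "?watson-token=" + token)

ws = websocket.create_connection(url)

# The same start message shown above; speaker_labels is the key setting.
ws.send(json.dumps({
    "action": "start",
    "content-type": "audio/wav",
    "interim_results": True,
    "timestamps": True,
    "speaker_labels": True
}))

# Stream the audio as a binary frame, then signal the end of the stream.
with open("sample1.wav", "rb") as audio:  # hypothetical file name
    ws.send_binary(audio.read())
ws.send(json.dumps({"action": "stop"}))

# Print the first result message; a real client would keep reading
# until the final results (with speaker labels) arrive.
print(ws.recv())
ws.close()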

Part of the output JSON after real-time speech-to-text conversion:

{
  ....
      "confidence": 0.927,
      "transcript": "So thank you very much for coming Dave it's good to have you here. "
    }
  ],
  "final": true,
  "speaker": 0
}
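
In the full response, the word-level timestamps under results can be joined with the top-level speaker_labels array, whose entries carry from, to, speaker, and confidence fields, to attribute each word to a speaker. Below is a minimal sketch under that assumption, with the response already parsed into a Python dict:

# A minimal sketch: group transcribed words by speaker. Assumes the
# parsed response dict carries "results" (with word timestamps) and a
# top-level "speaker_labels" array, as returned when speaker_labels
# and timestamps are both enabled.
def words_by_speaker(response):
    # Map each word's start time to the word itself.
    words = {}
    for result in response.get("results", []):
        for word, start, end in result["alternatives"][0].get("timestamps", []):
            words[start] = word

    # Each label carries "from"/"to" times and a numeric "speaker" id.
    turns = {}
    for label in response.get("speaker_labels", []):
        word = words.get(label["from"])
        if word is not None:
            turns.setdefault(label["speaker"], []).append(word)
    return turns

# Example outcome: {0: ['So', 'thank', 'you', ...], 1: [...]}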

You can see that a speaker label is assigned to each speaker in the conversation.

Steps to enable speaker diarization

  • Watson Speech to Text is available as a service on IBM Bluemix, IBM's cloud platform. Create a new service instance to use in your application.

  • If you are taking the REST API approach, don't forget to include the optional parameter "speaker_labels": true in your request (see the sketch after this list).

  • Based on the programming language your application is written in, use any of the easy-to-use SDKs available on Watson Developer Cloud, including Python, Node, Java, and Swift.
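
As a concrete example of the REST API approach from the list above, the following sketch posts an audio file with speaker_labels enabled, using Python and the requests library. The Bluemix-era endpoint and the basic-auth credentials are placeholders; substitute the values from your own service instance.

import requests  # pip install requests

# Placeholders: Bluemix-era service credentials (username/password).
USERNAME = "<service-username>"
PASSWORD = "<service-password>"
URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"

# conversation.wav is a hypothetical file name; any supported audio works.
with open("conversation.wav", "rb") as audio:
    response = requests.post(
        URL,
        params={"speaker_labels": "true", "timestamps": "true"},
        headers={"Content-Type": "audio/wav"},
        data=audio,
        auth=(USERNAME, PASSWORD),
    )

# With speaker_labels enabled, the JSON includes per-word speaker labels.
print(response.json())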

Refer to the chatbot-watson-android code sample to get a gist of how to enable or add speaker diarization to an existing Android app. Similarly, you can use the other SDKs to achieve speaker diarization.

Note: Speaker labels are not enabled by default. Check the TODOs in the code for the lines to uncomment.

Use cases

From integration into chatbots to interaction with home assistants like Alexa and Google Home, and from call centers to medical services, the possibilities are endless.

For Bluemix code samples and tutorials, please visit our Bluemix GitHub page.

