October 6, 2016 By Pritam Gundecha 3 min read

IBM Watson just got more accurate at detecting emotions

Emotion detection is a central piece of the puzzle in making AI systems compassionate. With this goal in mind, earlier this year IBM Watson released textual emotion detection as a new capability within the AlchemyLanguage service and Tone Analyzer on the Watson Developer Cloud.

We are pleased to announce that IBM Watson’s emotion detection capability has undergone significant enhancements. These enhancements are pivotal to improving user interactions and to understanding users’ emotional states.

What are the new enhancements?

The newly released emotion model brings the following enhancements:

  • Expansion of the training data: We doubled our training dataset compared with the previous release. This systematic expansion has significantly improved the new model’s vocabulary coverage.

  • New feature selection process: Feature selection is one of the most important steps in building a large-scale machine learning system. In this release, we explored linear models penalized with the L1 norm, which drives the coefficients of unimportant features to zero so that only important features keep non-zero coefficients. In our experiments, a linear SVM with an L1 penalty was the most helpful for extracting important features. These selected features, along with topic and specialized engineered features, helped the classifiers in the ensemble model not only improve accuracy but also provide transparency for the final prediction (see the first sketch after this list).

  • Diverse classifiers: An ensemble framework performs better when it contains a diverse set of classifiers. In this release we add a new set of diverse classifiers exploring different hypotheses, including tree-based ensemble classifiers, kernel-based classifiers, and latent topic-based classifiers (also illustrated in the first sketch after this list). Because the training data is continuously growing, this diverse set of classifiers had to address scalability before being incorporated into our ensemble framework.

  • Improved lexicon support: Our new release significantly improves emotion detection at the lexicon (word) level.

  • Expanded support for emoticons, emojis, and slang: This is an important step toward detecting emotions in conversational systems (see the second sketch after this list).
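
To make the feature selection and diverse-classifier ideas above concrete, here is a minimal scikit-learn sketch. It is not IBM’s production pipeline; the toy texts, labels, and hyperparameters are illustrative assumptions. It keeps only features with non-zero coefficients under an L1-penalized linear SVM, then combines kernel-based, tree-based, and latent topic-based classifiers in a majority-vote ensemble.

```python
# Minimal sketch (not IBM's production pipeline): L1-based feature selection
# plus a diverse voting ensemble, using scikit-learn. Texts, labels, and
# hyperparameters are illustrative assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC, LinearSVC

# Toy training data: short texts labeled with an emotion.
texts = [
    "I am so happy today!", "What a wonderful surprise!",
    "This is terrible, I feel awful.", "I can't stop crying.",
    "Why does this keep happening?!", "I'm furious about the delay.",
]
labels = ["joy", "joy", "sadness", "sadness", "anger", "anger"]

# Kernel-based classifier: an L1-penalized linear SVM keeps only features
# with non-zero coefficients, then an RBF-kernel SVM classifies.
kernel_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("select", SelectFromModel(LinearSVC(penalty="l1", dual=False, C=1.0))),
    ("svc", SVC(kernel="rbf")),
])

# Tree-based ensemble classifier on the same kind of n-gram features.
tree_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Latent topic-based classifier: LDA topic proportions as features.
topic_clf = Pipeline([
    ("counts", CountVectorizer()),
    ("lda", LatentDirichletAllocation(n_components=5, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
])

# Majority-vote ensemble over the diverse classifiers.
ensemble = VotingClassifier(
    estimators=[("kernel", kernel_clf), ("trees", tree_clf), ("topics", topic_clf)],
    voting="hard",
)
ensemble.fit(texts, labels)
print(ensemble.predict(["I am thrilled with the new release!"]))
```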

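The expanded emoticon, emoji, and slang handling is essentially text normalization before classification. The snippet below is a simplified, hypothetical illustration; the mapping table and the `normalize` helper are not part of the Watson service, just an assumption about how such tokens could be rewritten as plain-language equivalents.

```python
import re

# Hypothetical mapping (not Watson's actual lexicon): emoticons, emojis, and
# slang are rewritten as plain-language words before emotion classification.
NORMALIZATION_MAP = {
    ":)": "happy",
    ":'(": "crying",
    ":(": "sad",
    "😀": "happy",
    "😢": "crying",
    "😡": "angry",
    "lol": "laughing",
    "smh": "disappointed",
    "ugh": "frustrated",
}

# One regex that matches any key; longest keys first so ":'(" wins over ":(".
_PATTERN = re.compile(
    "|".join(re.escape(key) for key in sorted(NORMALIZATION_MAP, key=len, reverse=True)),
    flags=re.IGNORECASE,
)

def normalize(text: str) -> str:
    """Replace emoticons, emojis, and slang with plain-language equivalents."""
    return _PATTERN.sub(lambda match: NORMALIZATION_MAP[match.group(0).lower()], text)

print(normalize("ugh, my flight got cancelled :'( smh"))
# -> frustrated, my flight got cancelled crying disappointed
```
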
All of these enhancements helped us improve accuracy (in terms of average F1-measure) beyond the state-of-the-art emotion models [Li et al. 2009; Kim et al. 2010; Liu 2012; Agrawal and An 2012; Wang and Pal 2015] included in our previous version. Some of these state-of-the-art emotion models are part of our ensemble framework.
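
For reference, here is a small sketch of the metric, assuming macro-averaged F1 (per-emotion F1 scores averaged across classes); the labels and predictions are made up purely for illustration.

```python
from sklearn.metrics import f1_score

# Made-up gold labels and predictions, only to illustrate the metric.
y_true = ["joy", "sadness", "anger", "joy", "fear", "sadness"]
y_pred = ["joy", "sadness", "joy", "joy", "fear", "anger"]

# Macro-average: compute F1 per emotion class, then average across classes.
print(f1_score(y_true, y_pred, average="macro"))
```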

This is the current state of our work at the time of this release. We are continuously improving our models and look forward to releasing enhanced models in the future.

Ready to try a demo?

Check out this fun (and possibly insightful) service demonstration:

Tone Analyzer demo

The API is currently available for English text input. More details about this service, the science behind it, how to use the APIs, and example applications are available in the documentation for AlchemyLanguage and Tone Analyzer.
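
As a rough sketch of calling the Tone Analyzer REST API from Python: the endpoint URL, version date, and credential placeholders below are assumptions based on the service at the time of writing, so consult the documentation for the current values.

```python
import requests

# Placeholder values; consult the Tone Analyzer documentation for the actual
# endpoint, version date, and your own service credentials.
TONE_ANALYZER_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"
USERNAME = "your-service-username"
PASSWORD = "your-service-password"

response = requests.post(
    TONE_ANALYZER_URL,
    params={"version": "2016-05-19"},  # assumed API version date
    auth=(USERNAME, PASSWORD),         # Basic Auth credentials from the service instance
    json={"text": "I am really excited about the new emotion model!"},
)
response.raise_for_status()
print(response.json())  # tone scores, including the emotion tones
```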

References

  • Sunghwan Mac Kim, Alessandro Valitutti, and Rafael A. Calvo. “Evaluation of unsupervised emotion models to textual affect recognition.” Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text. Association for Computational Linguistics, 2010.

  • Ameeta Agrawal and Aijun An. “Unsupervised emotion detection from text using semantic and syntactic relations.” Web Intelligence and Intelligent Agent Technology (WI-IAT), 2012 IEEE/WIC/ACM International Conferences on, Vol. 1. IEEE, 2012.

  • Tao Li, Yi Zhang, and Vikas Sindhwani. “A non-negative matrix tri-factorization approach to sentiment classification with lexical prior knowledge.” Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1. Association for Computational Linguistics, 2009.

  • Yichen Wang, and Aditya Pal. “Detecting emotions in social media: A constrained optimization approach.” Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI 2015). 2015.

  • Bing Liu. “Sentiment analysis and opinion mining.” Synthesis lectures on human language technologies 5.1 (2012): 1-167.

Technical team

The technical team responsible for emotion analysis includes Pritam Gundecha, Hau-wen Chang, Mateo Nicolas Bengualid, Vibha Sinha, Jalal Mahmud, Rama Akkiraju, Jonathan Herzig, Michal Shmueli-Scheuer, and David Konopnicki. Alexis Plair and Tanmay Sinha are the offering managers. Steffi Diamond is the release manager.
