Advancing Natural Language Processing for Enterprise Domains

Finding information in a company’s vast trove of documents and knowledge bases to answer users’ questions is never as easy as it should be. The answers may very well exist, but they often remain out of reach for a number of reasons.

For starters, unlike the Web, where information is connected through a rich set of links and is often captured redundantly in multiple forms (making it easier to find), enterprise content is usually stored in silos with far less repetition of key information. In addition, users searching enterprise content typically ask intricate questions and expect more detailed answers than they would from a Web search engine: questions about product support, billing, how the latest regulation applies to customer contracts, the implications of events reported in the news, and so on. Finally, enterprises are often reluctant to rely on ‘black box’ AI and may require techniques whose recommendations can be explained to decision makers or end users.

Natural language processing (NLP) holds great promise for finding such deep information in enterprise content by allowing users to express their information needs more freely and by providing accurate answers to increasingly complex questions. However, enterprise NLP systems are challenged by a number of factors: making sense of heterogeneous silos of information, dealing with incomplete data, training accurate models from small amounts of data, and navigating a changing environment in which new content, products, terms and other information is continuously being added.

IBM Research AI is tackling these challenges along three themes to improve NLP for enterprise domains. The first, advancing AI, builds systems that can learn from small amounts of data, leverage external knowledge and use neurosymbolic approaches to language that combine neural and symbolic processing. The second, trusting AI, focuses on explaining how a system reaches its decisions. The third, scaling AI, enables continuous adaptation and better monitoring and testing of systems, so that language systems can be deployed under the rigorous expectations of enterprises.

In my post on Towards Data Science, I provide specifics on IBM Research’s enterprise NLP work by highlighting four papers we’re presenting at the ACL 2019 conference (a complete list of all our ACL papers is here). The first two papers address semantic parsing: the first uses Abstract Meaning Representation (AMR) to represent the meaning of a sentence, and the second builds a semantic parser that converts a user’s question into a program for querying a knowledge base. I also briefly explore our work on integrating incomplete knowledge bases with text to improve coverage when answering questions. The fourth paper describes a system that enables subject matter experts to fine-tune the rules of an interpretable rule-based system.
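To make the semantic-parsing ideas above concrete, here is a minimal, illustrative sketch of the second idea: turning a user’s question into a small program that is executed against a knowledge base. The AMR shown in the comment is the standard PENMAN-notation example for “The boy wants to go”; the question patterns, knowledge base and function names below are hypothetical and are not taken from the ACL papers.

# A sentence's meaning can be represented as an AMR graph. For example, the
# AMR for "The boy wants to go" in PENMAN notation is:
#
#   (w / want-01
#      :ARG0 (b / boy)
#      :ARG1 (g / go-02
#               :ARG0 b))
#
# The toy pipeline below illustrates question -> program -> answer over a
# tiny knowledge base. All names and patterns are hypothetical examples.

import re

# Toy knowledge base of (subject, relation, object) triples.
KB = {
    ("router_x1", "warranty_period", "24 months"),
    ("switch_s2", "warranty_period", "12 months"),
    ("router_x1", "supports", "ipv6"),
}

def parse_question(question):
    """Map a question to a small 'program': a (relation, subject) lookup."""
    q = question.lower().rstrip("?")
    m = re.search(r"how long is the warranty (?:on|for) (.+)", q)
    if m:
        return ("warranty_period", m.group(1).strip().replace(" ", "_"))
    m = re.search(r"what does (.+) support", q)
    if m:
        return ("supports", m.group(1).strip().replace(" ", "_"))
    return None

def execute(program):
    """Run the parsed program against the knowledge base."""
    if program is None:
        return []
    relation, subject = program
    return [obj for (s, r, obj) in KB if s == subject and r == relation]

print(execute(parse_question("How long is the warranty on router x1?")))
# Prints: ['24 months']

A production semantic parser would, of course, produce far richer programs (for example, SPARQL queries or logical forms) learned from data rather than hand-written patterns; the sketch only shows the shape of the question-to-program-to-answer pipeline.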

Read my entire Towards Data Science article here.

IBM Fellow & CTO Translation Technologies, IBM Research
