
Learning to answer non-trivial questions: reasoning over knowledge bases with deep learning

While most of today’s question answering (QA) systems have proven adept at responding to simple questions about specific domains or topics, there is growing demand for systems that can answer questions spanning multiple domains, where the entities and relations involved are expressed only inexactly.

New research from IBM Research’s AI Foundations team addresses this challenge with a state-of-the-art system that automatically answers questions requiring multiple non-trivial reasoning steps over knowledge bases (KBs). The work will be presented in a talk this week at the Association for Computational Linguistics conference (ACL 2017) in Vancouver, Canada.

KBs are huge collections of human knowledge, stored in the form of entities, such as people, organizations, movies and dates, and the relations between those entities. For example, the actor “Robert Pattinson,” the movie “Twilight” and the year “2008” are connected by relations such as “FilmStarring,” between the actor and the movie, and “ReleasedDate,” between the movie and the year it debuted.

Because of the breadth and depth of human knowledge, KBs have become one of the most important resources for modern question answering systems. Such systems, usually called KB-QA in the research community, can be thought of as mapping the input question to sub-graphs of entities and relations in a KB. For example, given the question “What movie did Rob Pattinson play in in 2008?” the system first identifies the entities “Robert Pattinson” and “2008,” then detects the relations connecting the answer to the former entity (“FilmStarring”) and to the latter (“ReleasedDate”). Or, in the example below, “What TV show did Grant Show play on in 2008?”, the system uses “Grant Show” and “2008” as the entities and detects a chain of relations, “StarringRole-Series,” that points to the answer “SwingTown.” Only the entities in the KB satisfying all of these constraints can be selected as answers.


Example of a KB-QA system parsing a question to perform the tasks of entity linking and relation detection.
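To make the pipeline concrete, here is a minimal sketch in Python over a toy knowledge base. The triples, relation names and helper function are invented for illustration and do not reflect the actual KB schema the system queries.

```python
# Toy knowledge base as (subject, relation, object) triples. The relation
# names ("StarringRole", "Series", "ReleaseYear") are assumptions made for
# this illustration.
kb = [
    ("GrantShow", "StarringRole", "Role1"),
    ("Role1", "Series", "SwingTown"),
    ("SwingTown", "ReleaseYear", "2008"),
    ("GrantShow", "StarringRole", "Role2"),
    ("Role2", "Series", "MelrosePlace"),
    ("MelrosePlace", "ReleaseYear", "1992"),
]

def follow(subject, relation):
    """Return all objects reachable from `subject` via `relation`."""
    return {o for s, r, o in kb if s == subject and r == relation}

# Step 1 (entity linking): the question mentions "Grant Show" and "2008".
# Step 2 (relation detection): a chain StarringRole -> Series from the topic
# entity, plus a ReleaseYear constraint from the second linked entity.
candidates = {series
              for role in follow("GrantShow", "StarringRole")
              for series in follow(role, "Series")}
answers = {c for c in candidates if "2008" in follow(c, "ReleaseYear")}
print(answers)  # {'SwingTown'}
```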

All of the above steps need to deal with lexical and syntactic variation: the questions rarely contain the exact forms of the entities and relations. In the first question, the entity “Robert Pattinson” appears as “Rob Pattinson,” and no word form of “FilmStarring” exists in the question at all. Entities are also ambiguous, since multiple people may share the same name. It is because of such linguistic variation and ambiguity that KB-QA is well known as a hard problem in artificial intelligence.

The main focus of our research is detecting relations from question text. Based on the observation that relations defined in a KB can usually be broken into sub-units (such as the words that make up the relation name), and that smaller units tend to match shorter phrases in the question, the team proposed a deep learning model that learns to encode short and long question phrases differently, so that they can be matched against relation representations at different granularities. For example, the whole relation name “FilmStarring” corresponds to the long question pattern “movie did &lt;somebody&gt; play,” while its sub-units “Film” and “Starring” match the shorter question phrases “movie” and “play” separately.
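As a small illustration of the two granularities, the sketch below splits a KB relation name into a whole-relation token and its word-level sub-units. The function name and the CamelCase-splitting heuristic are assumptions for illustration, not the paper’s actual preprocessing.

```python
import re

def relation_granularities(relation_name: str):
    """Represent a KB relation name at two granularities:
    the full relation token and its word-level sub-units."""
    # Word-level sub-units: split CamelCase names like "FilmStarring".
    words = [w.lower() for w in re.findall(r"[A-Z][a-z]*", relation_name)]
    # Relation-level: treat the full name as a single token.
    return {"relation_level": [relation_name.lower()], "word_level": words}

print(relation_granularities("FilmStarring"))
# {'relation_level': ['filmstarring'], 'word_level': ['film', 'starring']}
```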

Another technique, residual connections, previously used in the computer vision community, is also applied to handle the less common cases where a long question phrase corresponds to a small relation unit because of linguistic variation.
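The sketch below, written with PyTorch, shows one way such a hierarchical question encoder with a residual connection could be put together: a first BiLSTM layer captures shorter patterns, a second layer builds longer ones, and an element-wise sum merges the two before pooling. The layer sizes, pooling choice and cosine scoring function are illustrative assumptions, not the authors’ exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalResidualEncoder(nn.Module):
    """Sketch of a hierarchical question encoder with a residual connection.
    Hyper-parameters and layer sizes here are illustrative only."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm1 = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        self.bilstm2 = nn.LSTM(2 * hidden_dim, hidden_dim,
                               bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, seq_len, emb_dim)
        h1, _ = self.bilstm1(x)        # shallow layer: shorter patterns
        h2, _ = self.bilstm2(h1)       # deep layer: longer patterns
        h = h1 + h2                    # residual connection merges both levels
        return h.max(dim=1).values     # max-pool over time -> (batch, 2*hidden_dim)

def relation_score(question_vec, relation_vec):
    """Score a candidate relation against the question by cosine similarity."""
    return F.cosine_similarity(question_vec, relation_vec, dim=-1)
```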

To deal with entity ambiguity, this research also improves the KB-QA pipeline by using the improved relation detection model to filter the candidate entities. The key idea is that different entities sharing the same name tend to connect to different relations, such as an actor and a politician both named “Robert Pattinson.” When most of the top-scoring relations detected from the question are about films and starring roles, the politician entity is filtered out.
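A minimal sketch of that filtering idea follows, again with invented entity identifiers and relation sets: candidate entities whose KB relations do not overlap with the top detected relations are dropped.

```python
def rerank_entities(candidate_entities, top_relations, kb_edges):
    """Toy re-ranking step: keep candidates whose outgoing KB relations
    overlap with the relations the detector scored highest.
    `kb_edges` maps an entity id to the set of relations it participates in;
    all names here are hypothetical."""
    top = set(top_relations)
    scored = [(ent, len(top & kb_edges.get(ent, set())))
              for ent in candidate_entities]
    # Entities with no overlap (e.g. the politician "Robert Pattinson" when the
    # top relations are about films) are filtered out.
    return [ent for ent, s in sorted(scored, key=lambda p: -p[1]) if s > 0]

candidates = ["RobertPattinson_actor", "RobertPattinson_politician"]
kb_edges = {
    "RobertPattinson_actor": {"FilmStarring", "FilmReleaseDate"},
    "RobertPattinson_politician": {"PoliticalParty", "GovernmentPosition"},
}
print(rerank_entities(candidates, ["FilmStarring", "FilmReleaseDate"], kb_edges))
# ['RobertPattinson_actor']
```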

The new approach outperforms previous work on several benchmarks for relation detection in questions. It also helps the IBM KB-QA system achieve state-of-the-art performance on well-studied question answering benchmarks such as SimpleQuestions and WebQSP, outperforming all previously reported results from other organizations.

Improved Neural Relation Detection for Knowledge Base Question Answering

Mo Yu, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, Bowen Zhou, IBM Research

Wenpeng Yin, Center for Information and Language Processing, LMU Munich


Mo Yu & Bowen Zhou

Members of the AI Foundations Lab, IBM Research