
Graph2Seq: A Generalized Seq2Seq Model for Graph Inputs

In a recent paper, “Graph2Seq: Graph to Sequence Learning with Attention-based Neural Networks,” we describe a general, end-to-end graph-to-sequence attention-based neural encoder-decoder architecture that encodes an input graph and decodes the target sequence. Its two core building blocks are a graph encoder and an attention-based sequence decoder. Two of our recent papers at the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), “Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model” and “SQL-to-Text Generation with Graph-to-Sequence Model,” further demonstrate the advantages of our Graph2Seq model over various Seq2Seq and Tree2Seq models.

The ability to infer a sequence of outputs from complex structured inputs is an important part of machine intelligence that goes beyond simple classification and clustering. Developing models with this capability is a key goal in AI research, since many real-world tasks require it. Despite the tremendous successes of Sequence-to-Sequence (Seq2Seq) learning, many inputs are naturally or best expressed as a more complex structure, such as a graph, rather than a simple sequence of observations, and existing Seq2Seq models cannot handle such inputs directly. A fundamental challenge, then, is to develop a Seq2Seq-like model that preserves as much of the information in the raw, complex inputs as possible.

Moving to a sequence of inputs and outputs

Many real-world tasks involve taking a sequence of raw observations and generating a sequence of target outputs. For instance, in machine translation the task is to translate a sequence of text in one language into a sequence of text in another language. Another task of growing interest in drug discovery is to “translate” a raw protein sequence into its secondary-structure sequence, or even directly into its dihedral angles. For such data, we rely on the Seq2Seq technique, which encodes a sequence of inputs into a sequence of vectors and decodes the target sequence from those vectors.

Seq2Seq: a brief overview

The celebrated Seq2Seq technique [1][2] and its numerous variants [3] achieve excellent performance on many tasks such as neural machine translation, natural language generation, speech recognition, and drug discovery. Most of the proposed Seq2Seq models can be viewed as a family of encoder-decoders, in which an encoder reads a source input in the form of a sequence and encodes it into a continuous vector representation of fixed dimension, and a decoder takes the encoded vectors and outputs a target sequence. Many enhancements, including attention mechanisms and bidirectional recurrent neural network or bidirectional long short-term memory encoders, have been proposed to further improve the practical performance of Seq2Seq for general or domain-specific applications.
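
To make the encoder-decoder pattern concrete, below is a minimal sketch of an attention-based Seq2Seq model in PyTorch. The class name, layer sizes, and the choice of GRUs are illustrative assumptions for exposition, not the exact models of [1][2][3].

import torch
import torch.nn as nn

class SimpleSeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        # Bidirectional encoder reads the source sequence into context vectors.
        self.encoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.bridge = nn.Linear(2 * dim, dim)     # maps the encoder summary to the initial decoder state
        self.attn = nn.Linear(dim, 2 * dim)       # scores encoder states against the decoder state
        self.decoder = nn.GRUCell(3 * dim, dim)   # input: target embedding + attention context
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        enc_states, _ = self.encoder(self.src_emb(src))           # (B, S, 2*dim)
        h = torch.tanh(self.bridge(enc_states.mean(dim=1)))       # initial decoder state (B, dim)
        logits = []
        for t in range(tgt.size(1)):
            # Attention: score every encoder state, then build a context vector.
            scores = torch.bmm(enc_states, self.attn(h).unsqueeze(2)).squeeze(2)                 # (B, S)
            context = torch.bmm(torch.softmax(scores, dim=-1).unsqueeze(1), enc_states).squeeze(1)
            h = self.decoder(torch.cat([self.tgt_emb(tgt[:, t]), context], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                         # (B, T, tgt_vocab)

# Example (teacher forcing with random token ids):
# model = SimpleSeq2Seq(src_vocab=1000, tgt_vocab=1000)
# logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1000, (2, 5)))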

Where Seq2Seq falls short

Despite their flexibility and expressive power, Seq2Seq models have a significant limitation: they can only be applied to problems whose inputs are represented as sequences. Sequences, however, are probably the simplest structured data, and many important problems are best expressed with a more complex structure, such as a graph, which has greater capacity to encode complicated pairwise relationships in the data. For example, one natural language generation task is to translate a graph-structured semantic representation, such as a SQL query, into text expressing its meaning (see our recent EMNLP 2018 paper “SQL-to-Text Generation with Graph-to-Sequence Model”).

Graph2Seq: overall architecture

To cope with complex graph-structured inputs, we propose Graph2Seq, a novel attention-based neural network architecture for graph-to-sequence learning. The Graph2Seq model follows the conventional encoder-decoder approach with two main components: a graph encoder and a sequence decoder. The graph encoder learns expressive node embeddings and then reassembles them into a corresponding graph embedding. To this end, inspired by a recent graph representation learning method [4], we propose an inductive graph-based neural network that learns node embeddings from node attributes by aggregating neighborhood information, for both directed and undirected graphs. It applies two distinct aggregators to each node, yielding two representations that are concatenated to form the final node embedding. We further design an RNN-based sequence decoder that takes the graph embedding as its initial hidden state and generates the target sequence, learning to align and translate jointly based on the context vectors associated with the corresponding nodes and on all previous predictions. The overall architecture of our Graph2Seq model is shown in Figure 1.


Figure 1: Framework of the Graph2Seq model.

Graph2Seq is simple yet general, and it is highly extensible: its two building blocks, the graph encoder and the sequence decoder, can be replaced by other models such as graph convolutional networks (or their extensions) and LSTMs.
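
To make the two building blocks more concrete, the sketch below illustrates the core ideas in PyTorch: one hop of neighbor aggregation with separate forward and backward aggregators whose outputs are concatenated per node, a pooled graph embedding, and an attention step over the node embeddings for the decoder. The mean aggregator, max pooling, single hop, and layer sizes are simplifying assumptions for exposition, not the exact configuration from the paper.

import torch
import torch.nn as nn

class BiDirNodeEncoder(nn.Module):
    """One hop of neighbor aggregation over a directed graph (dense adjacency)."""
    def __init__(self, dim):
        super().__init__()
        self.fwd = nn.Linear(2 * dim, dim)   # combines a node with its forward-neighbor summary
        self.bwd = nn.Linear(2 * dim, dim)   # combines a node with its backward-neighbor summary

    def aggregate(self, x, adj, proj):
        # Mean-aggregate the features of neighbors selected by the adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ x) / deg                                    # (N, dim)
        return torch.relu(proj(torch.cat([x, neigh], dim=-1)))     # (N, dim)

    def forward(self, x, adj):
        # adj[i, j] = 1 if there is an edge i -> j; the transpose gives incoming edges.
        h_fwd = self.aggregate(x, adj, self.fwd)
        h_bwd = self.aggregate(x, adj.t(), self.bwd)
        node_emb = torch.cat([h_fwd, h_bwd], dim=-1)               # final node embeddings (N, 2*dim)
        graph_emb = node_emb.max(dim=0).values                     # pooled graph embedding (2*dim,)
        return node_emb, graph_emb

def attend(decoder_state, node_emb):
    # Attention over node embeddings: score every node against the decoder state
    # and return the weighted sum as the context vector for the next prediction.
    weights = torch.softmax(node_emb @ decoder_state, dim=0)       # (N,)
    return weights @ node_emb                                      # (2*dim,)

# Example: five nodes with random attributes and a small chain of directed edges.
# enc = BiDirNodeEncoder(dim=64)
# adj = torch.zeros(5, 5); adj[0, 1] = adj[1, 2] = adj[2, 3] = adj[3, 4] = 1.0
# node_emb, graph_emb = enc(torch.randn(5, 64), adj)
# context = attend(graph_emb, node_emb)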

Graph2Seq for semantic parsing

The task of semantic parsing is to translate text into its formal meaning representation, such as a logical form or structured query. Recent neural semantic parsers approach this problem by learning soft alignments between natural language and logical forms from (text, logic) pairs. However, existing work considers only word-order features, neglecting useful syntactic information such as dependency and constituency parses.

Our recent EMNLP 2018 paper (“Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model”) introduces a structure, the syntactic graph, that represents three types of syntactic information: word order, dependency, and constituency features. We construct a syntactic graph (Figure 2) and then employ our novel Graph2Seq model. Experimental results show that our model achieves performance competitive with Seq2Seq and Seq2Tree models, and that it is more robust than these models on two types of adversarial examples. Our code and data are available at https://github.com/IBM/Text-to-LogicForm.


Figure 2: The syntactic graph for the Jobs640 question, “What are the jobs for programmers that have salary of 50,000, that use c++, and are not related with AI?” Due to space limitations, the constituent tree is partially shown here.
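
As an illustration of how such a syntactic graph can be assembled, the snippet below builds a small directed graph in networkx by combining word-order edges with dependency edges. The toy question and hand-written dependency triples are hypothetical stand-ins for real parser output, and the constituency (parse-tree) nodes used in the paper are omitted for brevity.

import networkx as nx

# Toy question and hand-written (head, dependent, relation) dependency triples.
tokens = ["what", "jobs", "use", "c++"]
dep_edges = [(2, 1, "nsubj"), (1, 0, "det"), (2, 3, "dobj")]

g = nx.MultiDiGraph()   # multigraph, so word-order and dependency edges can coexist
for i, tok in enumerate(tokens):
    g.add_node(i, word=tok)
for i in range(len(tokens) - 1):
    g.add_edge(i, i + 1, label="next")     # word-order edges
for head, dep, rel in dep_edges:
    g.add_edge(head, dep, label=rel)       # dependency edges

print(g.number_of_nodes(), g.number_of_edges())   # 4 nodes, 6 edges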

Graph2Seq for natural language generation

The goal of the SQL-to-text task is to automatically generate human-like descriptions that interpret the meaning of a given structured query language (SQL) query. This task is critical for natural language interfaces to databases, since it helps non-expert users understand the esoteric SQL queries used to retrieve answers in the question-answering process. Recently proposed approaches for this task rely heavily on Seq2Seq or Tree2Seq models, which, however, cannot fully capture the graph-structure information of SQL.

Our recent EMNLP 2018 paper (“SQL-to-Text Generation with Graph-to-Sequence Model”) introduces a strategy that represents the SQL query as a directed graph (as shown in Figure 3) and makes full use of our Graph2Seq model, which encodes the graph-structured SQL query and then decodes its interpretation. Experimental results show that our model achieves state-of-the-art performance on the WikiSQL and Stackoverflow datasets. Our code and data are available at https://github.com/IBM/SQL-to-Text.


Figure 3: The graph representation of an example SQL query.
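
To give a flavor of this strategy, the snippet below turns a tiny, hand-written SQL query into a directed graph whose nodes are the selected column, the logical operator, and the individual conditions. The node and edge scheme is an illustrative simplification, not the exact construction used in the paper.

import networkx as nx

# A hand-written toy query: SELECT name WHERE salary > 50000 AND dept = 'AI'
query = {"select": "name", "where": ("AND", ["salary > 50000", "dept = 'AI'"])}

g = nx.DiGraph()
g.add_node("SELECT name")
op, conditions = query["where"]
g.add_node(op)                                  # logical operator node
g.add_edge("SELECT name", op, label="where")
for cond in conditions:
    g.add_node(cond)                            # one node per comparison condition
    g.add_edge(op, cond, label="operand")

print(list(g.edges(data=True)))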

Outlook

The Graph2Seq model can impact several important areas of machine learning and AI. Conceptually, many applications to which current Seq2Seq models are applied can be easily adapted to our Graph2Seq model if the input can be cast as a graph. Although we have demonstrated its power in two natural language processing applications (semantic parsing and natural language generation), we expect that it can also be exploited in other natural language processing tasks such as machine translation and question answering. The Graph2Seq model can also be applied to other domains such as robotic planning and theorem proving. Finally, our graph encoder and its attention mechanism over node embeddings can be exploited in more complex models such as Graph2Tree and Graph2Graph.


We will present our two EMNLP papers on Friday, November 2, during Session 3C: Semantic Parsing / Generation (Short Papers Oral), 15:00–16:00, in the Silver Hall / Panoramic Hall.


[1] Kyunghyun Cho et al. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. EMNLP 2014.
[2] Ilya Sutskever et al. Sequence to Sequence Learning with Neural Networks. NIPS 2014.
[3] Dzmitry Bahdanau et al. Neural Machine Translation by Jointly Learning to Align and Translate. ICLR 2015.
[4] William L. Hamilton et al. Inductive Representation Learning on Large Graphs. NIPS 2017.

Lingfei Wu

Research Staff Member, IBM Research