
Text2Scene: Generating Compositional Scenes from Textual Descriptions


Generating images from textual descriptions has become an active and exciting area of research. Interest has been partially fueled by the adoption of generative adversarial networks (GANs) [1], which have demonstrated impressive results on a number of image synthesis tasks. However, challenges remain when attempting to synthesize images of complex scenes with multiple interacting objects. In our paper, a Best Paper Finalist at CVPR 2019, we propose to approach this problem from another direction. Inspired by the principle of compositionality [2], our model produces a scene by sequentially generating objects (in the form of clip art, bounding boxes, or segmented object patches) that contain the semantic elements composing the scene.

Compositional Scene Generation
We introduce Text2Scene, a model that interprets visually descriptive language to generate compositional scene representations. In particular, we focus on generating a scene representation consisting of a list of objects, along with their attributes (e.g., location, size, aspect ratio, pose, appearance). We adapt and train models to generate three types of scenes, as shown in Figure 1: cartoon-like scenes, object layouts, and synthetic images.
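To make this list-of-objects representation concrete, here is a minimal sketch of what such a scene might look like as a data structure. The field names and types are our own illustration (the exact attribute set differs per task), not the paper's interface.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SceneObject:
    """One element of a generated scene (illustrative fields only)."""
    category: str                   # e.g. "dog", "kite", "person"
    location: Tuple[float, float]   # normalized (x, y) position on the canvas
    size: float                     # normalized scale
    aspect_ratio: float = 1.0       # width / height
    pose: int = 0                   # discrete pose/flip index (cartoon-like scenes)
    appearance: List[float] = field(default_factory=list)  # embedding for patch retrieval


@dataclass
class Scene:
    caption: str                    # the input textual description
    objects: List[SceneObject] = field(default_factory=list)  # generated one at a time
```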

Figure 1: Tasks for generating scenes from text

We propose a unified sequence-to-sequence framework to handle these three different tasks.

Generally, Text2Scene consists of the following components:

- a text encoder (Fig. 2(A)) that maps the input sentence to a set of latent representations;
- an image encoder (Fig. 2(B)) that encodes the current generated canvas;
- a convolutional recurrent module (Fig. 2(C)) that passes the current scene state to the next step;
- attention modules (Fig. 2(D)) that focus on different parts of the input text;
- an object decoder (Fig. 2(E)) that predicts the next object conditioned on the current scene state and the attended input text;
- an attribute decoder (Fig. 2(F)) that assigns attributes to the predicted object;
- an optional foreground embedding step (Fig. 2(G)) that learns an appearance vector for patch retrieval in the synthetic image generation task.
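The sketch below shows one way these components could fit together in PyTorch. The layer choices, dimensions, and names (e.g., Text2SceneSketch, step) are assumptions made for illustration; they are not the authors' implementation.

```python
import torch
import torch.nn as nn


class Text2SceneSketch(nn.Module):
    """Illustrative skeleton of the modules (A)-(G) described above."""

    def __init__(self, vocab_size, num_categories, emb_dim=300, hid_dim=512):
        super().__init__()
        # (A) text encoder: word embeddings + bidirectional GRU
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.text_rnn = nn.GRU(emb_dim, hid_dim // 2,
                               bidirectional=True, batch_first=True)
        # (B) image encoder for the current canvas (a small CNN stand-in)
        self.canvas_enc = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, hid_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # (C) recurrent module that carries the scene state across steps
        self.state_rnn = nn.GRUCell(hid_dim, hid_dim)
        # (D) attention over the encoded words
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=1, batch_first=True)
        # (E) object decoder: which object to add next (plus an end token)
        self.obj_head = nn.Linear(2 * hid_dim, num_categories + 1)
        # (F) attribute decoder: e.g. normalized location and size
        self.attr_head = nn.Linear(2 * hid_dim, 4)
        # (G) optional appearance embedding for patch retrieval
        self.appearance_head = nn.Linear(2 * hid_dim, 128)

    def step(self, words, canvas, state):
        """One generation step: encode the inputs, attend over the text,
        and predict the next object, its attributes, and its appearance."""
        word_feats, _ = self.text_rnn(self.word_emb(words))             # (A)
        canvas_feat = self.canvas_enc(canvas)                           # (B)
        state = self.state_rnn(canvas_feat, state)                      # (C)
        ctx, _ = self.attn(state.unsqueeze(1), word_feats, word_feats)  # (D)
        fused = torch.cat([state, ctx.squeeze(1)], dim=-1)
        return (self.obj_head(fused),         # (E) object category logits
                self.attr_head(fused),        # (F) attribute predictions
                self.appearance_head(fused),  # (G) appearance vector
                state)
```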

The scene generation starts from an initially empty canvas that is updated at each time step. For the synthetic image generation task, our model sequentially retrieves and pastes object patches from other images to compose the scene. As the composite images may exhibit gaps between patches, we also leverage the stitching network in [5] for post-processing.
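A greedy generation loop following this description might look like the sketch below, reusing the hypothetical Text2SceneSketch.step above. Here patch_bank.retrieve and paste_patch are hypothetical helpers standing in for the patch-retrieval and compositing steps; the stitching network of [5] would be applied to the finished canvas afterwards.

```python
import torch


def generate_scene(model, words, canvas, patch_bank, max_steps=20, end_id=0):
    """Greedy scene generation from an initially empty canvas (batch size 1)."""
    state = torch.zeros(canvas.size(0), 512)   # initial recurrent scene state
    scene = []
    for _ in range(max_steps):
        obj_logits, attrs, appearance, state = model.step(words, canvas, state)
        obj_id = obj_logits.argmax(dim=-1).item()
        if obj_id == end_id:                   # the model decides the scene is complete
            break
        patch = patch_bank.retrieve(obj_id, appearance)  # hypothetical: nearest patch by appearance
        canvas = paste_patch(canvas, patch, attrs)       # hypothetical: paste onto the canvas
        scene.append((obj_id, attrs))
    return scene, canvas
```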


Figure 2: Text2Scene framework overview

Evaluation

We compare our approach to the latest GAN-based methods. Experimental results show that our model achieves near state-of-the-art performance on automatic metrics. In a human subject evaluation, 75% of participants preferred our outputs over those of the strongest GAN-based baselines, SG2IM [3] and AttnGAN [4].

Figure 3: Qualitative examples of scene generation results

Outlook
Synthesizing images from text requires a degree of joint language and visual understanding, and could enable applications such as image retrieval through natural language queries, representation learning for text, and automated computer graphics and image editing. Our work proposes an interpretable model that generates various forms of compositional scene representations. Experimental results demonstrate the capacity of our model to capture finer semantic meaning from descriptive text and to generate complex scenes.

[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), 2014.

[2] Xiaodan Zhu and Edward Grefenstette. Deep learning for semantic composition. In ACL tutorial, 2017.

[3] Justin Johnson, Agrim Gupta, and Li Fei-Fei. Image generation from scene graphs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

[4] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

[5] Xiaojuan Qi, Qifeng Chen, Jiaya Jia, and Vladlen Koltun. Semi-parametric image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.


Text2Scene: Generating Compositional Scenes from Textual Descriptions, Fuwen Tan, Song Feng, Vicente Ordonez

Research Staff Member, IBM Research
