
Text2Scene: Generating Compositional Scenes from Textual Descriptions


Generating images from textual descriptions has become an active and exciting area of research. Interest has been partially fueled by the adoption of generative adversarial networks (GANs) [1], which have demonstrated impressive results on a number of image synthesis tasks. However, challenges remain when attempting to synthesize images of complex scenes with multiple interacting objects. In our paper, a Best Paper Finalist at CVPR 2019, we approach this problem from a different direction. Inspired by the principle of compositionality [2], our model produces a scene by sequentially generating objects (in the form of clip-art, bounding boxes, or segmented object patches), the semantic elements that compose the scene.

Compositional Scene Generation
We introduce Text2Scene, a model to interpret visually descriptive language in order to generate compositional scene representations. In particular, we focus on generating a scene representation consisting of a list of objects, along with their attributes (e.g., location, size, aspect ratio, pose, appearance). We adapt and train models to generate three types of scenes as shown in Figure 1: cartoon-like scenes, object layouts, and synthetic images.

Figure 1: Tasks for generating scenes from text
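As a concrete illustration, the sketch below shows one possible way to represent such a compositional scene in Python. The class and field names are ours for exposition and do not correspond to the paper's actual data format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical representation of a compositional scene: an ordered list of
# objects, each carrying the attributes the model predicts. Field names are
# illustrative only.

@dataclass
class SceneObject:
    category: str                      # e.g. "dog", "table", "person"
    location: Tuple[float, float]      # normalized (x, y) position on the canvas
    size: float                        # relative scale of the object
    aspect_ratio: float                # width / height
    pose: int = 0                      # discrete pose / flip index (clip-art task)
    appearance: List[float] = field(default_factory=list)  # appearance vector (patch retrieval task)

@dataclass
class Scene:
    objects: List[SceneObject] = field(default_factory=list)

    def add(self, obj: SceneObject) -> None:
        """Objects are added one at a time, mirroring the sequential generation."""
        self.objects.append(obj)
```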

We propose a unified sequence-to-sequence framework to handle these three different tasks.

Text2Scene consists of: a text encoder (Fig. 2 (A)) that maps the input sentence to a set of latent representations; an image encoder (Fig. 2 (B)) that encodes the current state of the generated canvas; a convolutional recurrent module (Fig. 2 (C)) that passes the current state to the next step; attention modules (Fig. 2 (D)) that focus on different parts of the input text; an object decoder (Fig. 2 (E)) that predicts the next object conditioned on the current scene state and the attended input text; an attribute decoder (Fig. 2 (F)) that assigns attributes to the predicted object; and an optional foreground embedding step (Fig. 2 (G)) that learns an appearance vector for patch retrieval in the synthetic image generation task.
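To make the pipeline above more tangible, here is a simplified PyTorch-style sketch of components (A) through (F). The layer choices, dimensions, and single-head attention are our own simplifying assumptions for readability, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Text2SceneSketch(nn.Module):
    """Illustrative sketch of the modules labeled (A)-(F) in Figure 2."""

    def __init__(self, vocab_size, num_categories, txt_dim=256, img_dim=256, hid_dim=512):
        super().__init__()
        # (A) text encoder: word embeddings + bidirectional recurrent encoder
        self.embed = nn.Embedding(vocab_size, txt_dim)
        self.text_enc = nn.GRU(txt_dim, txt_dim // 2, bidirectional=True, batch_first=True)
        # (B) image encoder for the current canvas
        self.canvas_enc = nn.Sequential(
            nn.Conv2d(3, img_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # (C) recurrent module that carries the scene state from step to step
        self.state_rnn = nn.GRUCell(img_dim + hid_dim, hid_dim)
        # (D) attention over the encoded input text
        self.txt_proj = nn.Linear(txt_dim, hid_dim)
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=1, batch_first=True)
        # (E) object decoder and (F) attribute decoder
        self.obj_head = nn.Linear(2 * hid_dim, num_categories)
        self.attr_head = nn.Linear(2 * hid_dim, 5)  # e.g. x, y, size, aspect ratio, pose

    def encode_text(self, tokens):
        feats, _ = self.text_enc(self.embed(tokens))        # (B, T, txt_dim)
        return feats

    def step(self, text_feats, canvas, state):
        """One generation step: attend to the text, update the state, predict the next object."""
        canvas_feat = self.canvas_enc(canvas)                # (B, img_dim)
        keys = self.txt_proj(text_feats)
        ctx, _ = self.attn(state.unsqueeze(1), keys, keys)   # attend to the input text
        ctx = ctx.squeeze(1)
        state = self.state_rnn(torch.cat([canvas_feat, ctx], dim=-1), state)
        fused = torch.cat([state, ctx], dim=-1)
        return self.obj_head(fused), self.attr_head(fused), state
```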

Scene generation starts from an empty canvas that is updated at each time step. For the synthetic image generation task, our model sequentially retrieves object patches from other images and pastes them onto the canvas to compose the scene. Because the composite images may exhibit gaps between patches, we also leverage the stitching network of [5] for post-processing.
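The step-by-step composition can be sketched as a simple loop over the model above. Here, render_object, the fixed state size, and the end-of-scene convention are placeholders for exposition, not functions from the paper's implementation.

```python
import torch

def render_object(canvas, category, attrs):
    # Placeholder renderer: in the real model this pastes clip-art, draws a
    # bounding box, or composites a retrieved patch onto the canvas.
    return canvas

def generate_scene(model, tokens, canvas, max_steps=20, end_token=0):
    """Sequentially add objects to an initially empty canvas until the model stops."""
    text_feats = model.encode_text(tokens)
    state = torch.zeros(tokens.size(0), 512)                 # initial scene state
    scene = []
    for _ in range(max_steps):
        obj_logits, attrs, state = model.step(text_feats, canvas, state)
        category = obj_logits.argmax(dim=-1)
        if (category == end_token).all():                    # model signals the scene is complete
            break
        scene.append((category, attrs))
        canvas = render_object(canvas, category, attrs)      # update the canvas for the next step
    return scene, canvas
```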

 

Figure 2: Text2Scene framework overview

Evaluation

We compare our approach to the latest GAN-based methods. Experimental results show that our model achieves near state-of-the-art performance on automatic metrics. In a human subject evaluation, 75% of participants preferred our outputs over those of leading GAN-based methods such as SG2IM [3] and AttnGAN [4].

Figure 3: Qualitative examples of scene generation results

Outlook
Synthesizing images from text requires a degree of both language and visual understanding, and it could enable applications in image retrieval through natural-language queries, representation learning for text, and automated computer graphics and image editing. Our work proposes an interpretable model that generates various forms of compositional scene representations. Experimental results demonstrate the capacity of our model to capture fine-grained semantic meaning from descriptive text and generate complex scenes.

[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), 2014.

[2] Xiaodan Zhu and Edward Grefenstette. Deep learning for semantic composition. In ACL tutorial, 2017.

[3] Justin Johnson, Agrim Gupta, and Li Fei-Fei. Image generation from scene graphs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

[4] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

[5] Xiaojuan Qi, Qifeng Chen, Jiaya Jia, and Vladlen Koltun. Semi-parametric image synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.


Text2Scene: Generating Compositional Scenes from Textual Descriptions, by Fuwen Tan, Song Feng, and Vicente Ordonez (CVPR 2019)

