Keep Drawing It: Iterative language-based image generation and editing


Alaaeldin El-Nouby, Shikhar Sharma, Hannes Schulz, Devon Hjelm, Layla El Asri, Samira Ebrahimi Kahou, Yoshua Bengio, Graham W. Taylor


Abstract

Conditional text-to-image generation approaches commonly focus on generating a single image in a single step. One practical extension beyond one-step generation is an interactive system that generates an image iteratively, conditioned on ongoing linguistic input and feedback. This is significantly more challenging, as such a system must understand and keep track of the ongoing context and history. In this work, we present a recurrent image generation model which takes into account both the generated output up to the current step and all past instructions. We show that our model is able to generate the background, add new objects, apply simple transformations to existing objects, and correct previous mistakes. We believe our approach is an important step toward interactive generation.
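
As a rough illustration of the generation loop the abstract describes, the sketch below shows one way a recurrent, language-conditioned generator could be wired up in PyTorch: a GRU accumulates the instruction history, and at each turn the generator redraws the canvas conditioned on that history and on what has been drawn so far. All module names, layer sizes, and the overall wiring are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class IterativeGenerator(nn.Module):
    """Minimal sketch of a recurrent, language-conditioned image generator.

    At each step the model sees (a) the canvas generated so far and
    (b) an encoding of the new instruction; a GRU cell carries the
    history of all past instructions. Sizes are illustrative only.
    """

    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=256, img_ch=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.instr_rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.history = nn.GRUCell(hidden_dim, hidden_dim)
        # Encode the current canvas into a feature vector.
        self.img_enc = nn.Sequential(
            nn.Conv2d(img_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden_dim),
        )
        # Decode (canvas features + dialogue history) back into an image.
        self.img_dec = nn.Sequential(
            nn.Linear(2 * hidden_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, img_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def step(self, canvas, instruction_tokens, h):
        """One turn: fold the new instruction into the history state,
        then redraw the canvas conditioned on history and current canvas."""
        _, instr = self.instr_rnn(self.embed(instruction_tokens))
        h = self.history(instr.squeeze(0), h)   # update instruction history
        feats = self.img_enc(canvas)            # encode what has been drawn so far
        return self.img_dec(torch.cat([feats, h], 1)), h


if __name__ == "__main__":
    model = IterativeGenerator()
    canvas = torch.zeros(1, 3, 32, 32)           # blank canvas
    h = torch.zeros(1, 256)                      # empty dialogue history
    for turn in range(3):                        # three instructions in sequence
        tokens = torch.randint(0, 1000, (1, 7))  # stand-in for a tokenized instruction
        canvas, h = model.step(canvas, tokens, h)
    print(canvas.shape)                          # torch.Size([1, 3, 32, 32])

The loop mirrors the conditioning the abstract states: each new image depends on the previous output and on the accumulated instruction history rather than on a single prompt.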

BibTeX

@inproceedings{elnouby2018keepdrawingit,
  title     = {Keep Drawing It: Iterative language-based image generation and editing},
  author    = {El{-}Nouby, Alaaeldin and
               Sharma, Shikhar and
               Schulz, Hannes and
               Hjelm, Devon and
               El Asri, Layla and
               Ebrahimi Kahou, Samira and
               Bengio, Yoshua and
               Taylor, Graham W.},
  booktitle = {Neural Information Processing Systems (NIPS) Visually Grounded Interaction and Language (ViGIL) Workshop},
  month     = {December},
  year      = {2018},
  url       = {https://nips2018vigil.github.io/static/papers/accepted/13.pdf}
}

PDF: https://nips2018vigil.github.io/static/papers/accepted/13.pdf