Scribbler: Controlling Deep Image Synthesis with Sketch and Color

Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays
CVPR 2017

Abstract

Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch-based image synthesis system which allows users to ‘scribble’ over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and sketch constraints of the user. The network is feed-forward, which allows users to see the effect of their edits in real time. We compare to recent work on sketch-to-image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.
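Below is a minimal, illustrative PyTorch sketch of the kind of feed-forward conditional generator described above: a one-channel sketch map and a three-channel color-stroke map are concatenated channel-wise and mapped to an RGB image in a single forward pass. All layer sizes and names here are placeholder assumptions, not the paper's released architecture; during training such a generator would be paired with content and adversarial losses, as the abstract describes.

import torch
import torch.nn as nn

class SketchColorGenerator(nn.Module):
    """Toy feed-forward generator conditioned on a sketch and sparse
    color strokes. Layer sizes are illustrative placeholders."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=3, padding=1),  # 1 sketch + 3 color channels
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # RGB output
            nn.Tanh(),
        )

    def forward(self, sketch, color_strokes):
        # Condition on both user controls by channel-wise concatenation.
        x = torch.cat([sketch, color_strokes], dim=1)
        return self.net(x)

G = SketchColorGenerator()
sketch = torch.randn(1, 1, 128, 128)   # edge/sketch map
strokes = torch.randn(1, 3, 128, 128)  # sparse user color scribbles
fake = G(sketch, strokes)              # (1, 3, 128, 128) synthesized image

Because generation is a single feed-forward pass, edits to the sketch or strokes can be re-rendered at interactive rates.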

Paper

Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays
Scribbler: Controlling Deep Image Synthesis with Sketch and Color
CVPR 2017

Citation

@inproceedings{sangkloy2016scribbler,
  title={Scribbler: Controlling Deep Image Synthesis with Sketch and Color},
  author={Sangkloy, Patsorn and Lu, Jingwan and Fang, Chen and Yu, Fisher and Hays, James},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2017}
}

Related


TextureGAN: Controlling Deep Image Synthesis with Texture Patches

CVPR 2018 (Spotlight). We develop a local texture loss, in addition to adversarial and content losses, to train the generative network.
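As a rough, assumed illustration of the general idea (not TextureGAN's actual formulation): a local texture loss can compare Gram-matrix texture statistics of deep features over a cropped patch rather than over the whole image.

import torch

def gram_matrix(feat):
    # feat: (B, C, H, W) activations from a pretrained feature extractor
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)  # reshape handles non-contiguous patch slices
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def local_texture_loss(feat_fake, feat_real, patch=16):
    # Match texture statistics on a random local patch (assumes H, W >= patch).
    _, _, h, w = feat_fake.shape
    y = torch.randint(0, h - patch + 1, (1,)).item()
    x = torch.randint(0, w - patch + 1, (1,)).item()
    g_fake = gram_matrix(feat_fake[:, :, y:y + patch, x:x + patch])
    g_real = gram_matrix(feat_real[:, :, y:y + patch, x:x + patch])
    return torch.mean((g_fake - g_real) ** 2)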


PairedCycleGAN: Asymmetric Style Transfer for Applying and Removing Makeup

CVPR 2018. We introduce an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo.


Interactive 3D Modeling with a Generative Adversarial Network

3DV 2017. We propose using a generative adversarial network (GAN) to assist a novice user in designing real-world shapes with a simple interface.