MUNIT

MUNIT: disentangling style and content for style transfer

Released in: Multimodal Unsupervised Image-to-Image Translation

Contributor:

Summary

Unsupervised image-to-image translation is an important and challenging problem in computer vision. Given an image in the source domain, the goal is to learn the conditional distribution of corresponding images in the target domain, without seeing any pairs of corresponding images. While this conditional distribution is inherently multimodal, existing approaches make an overly simplified assumption, modeling it as a deterministic one-to-one mapping. As a result, they fail to generate diverse outputs from a given source domain image. To address this limitation, the authors propose a Multimodal Unsupervised Image-to-image Translation (MUNIT) framework, assuming that the image representation can be decomposed into a content code that is domain-invariant and a style code that captures domain-specific properties. To translate an image to another domain, the model recombines its content code with a random style code sampled from the style space of the target domain. The paper analyzes the proposed framework and establishes several theoretical results. Extensive experiments with comparisons to state-of-the-art approaches further demonstrate the advantage of the proposed framework. Moreover, the MUNIT framework allows users to control the style of translation outputs by providing an example style image.
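The recombination mechanism described above can be sketched in a few lines of PyTorch. The following is a minimal, illustrative sketch, not the authors' released implementation; the layer choices, feature widths, and the 8-dimensional style code are assumptions made for illustration. It shows the core idea: encode a source image into a spatial content code, then decode it with a style code that is either sampled from a Gaussian prior (for diverse outputs) or extracted from an example image (for user control).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEncoder(nn.Module):
    """Strided conv encoder -> spatial, domain-invariant content code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Conv encoder with global pooling -> low-dimensional style vector."""
    def __init__(self, style_dim=8):  # style_dim is an illustrative choice
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, style_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class Decoder(nn.Module):
    """Upsampling decoder: the style code sets per-channel feature statistics
    (adaptive instance normalization), while the content code carries layout."""
    def __init__(self, style_dim=8):
        super().__init__()
        self.affine = nn.Linear(style_dim, 2 * 256)  # per-channel scale, bias
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 5, padding=2), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, content, style):
        scale, bias = self.affine(style).chunk(2, dim=1)
        h = F.instance_norm(content)  # strip the content code's own statistics
        h = h * (1 + scale[:, :, None, None]) + bias[:, :, None, None]
        return self.up(h)

# Translate x_a (domain A) to domain B: keep its content code and pair it
# with a style code from domain B's style space.
enc_content_a, enc_style_b, dec_b = ContentEncoder(), StyleEncoder(), Decoder()
x_a = torch.randn(1, 3, 256, 256)          # stand-in for a source image
content = enc_content_a(x_a)
s_random = torch.randn(1, 8)               # sampled style -> diverse outputs
x_ab = dec_b(content, s_random)
x_b_example = torch.randn(1, 3, 256, 256)  # stand-in for a style example
x_ab_guided = dec_b(content, enc_style_b(x_b_example))  # example-guided style
```

At training time, MUNIT couples such modules across the two domains with adversarial losses plus image and latent reconstruction losses; the sketch above only illustrates the inference-time recombination of content and style.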

Year Released

2018

Key Links & Stats

MUNIT

Multimodal Unsupervised Image-to-Image Translation

@inproceedings{huang2018munit,
  title={Multimodal Unsupervised Image-to-image Translation},
  author={Huang, Xun and Liu, Ming-Yu and Belongie, Serge and Kautz, Jan},
  booktitle={ECCV},
  year={2018}
}

ML Tasks

  1. General
  2. Domain Adaptation
  3. Style Transfer

ML Platform

  1. PyTorch

Modalities

  1. General
  2. Still Image

Verticals

  1. General

CG Platform

  1. Not Applicable

Related organizations

NVIDIA

Cornell University