FUNIT: style transfer with styles defined by a few sample images

Released in: Few-Shot Unsupervised Image-to-Image Translation



Unsupervised image-to-image translation methods learn to map an image in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images. While remarkably successful, current methods require access to many images in both source and destination classes at training time. The authors argue this greatly limits their use. Drawing inspiration from the human capability of picking up the essence of a novel object from a small number of examples and generalizing from there, they seek a few-shot, unsupervised image-to-image translation algorithm that works on previously unseen target classes that are specified, at test time, only by a few example images. The proposed model achieves this few-shot generation capability by coupling an adversarial training scheme with a novel network design. Through extensive experimental validation and comparisons to several baseline methods on benchmark datasets, the paper verifies the effectiveness of the proposed framework.
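The key idea is to split the generator into a content encoder for the source image and a class encoder whose outputs are averaged over the K example images of the (possibly unseen) target class. Below is a minimal, hypothetical PyTorch sketch of that structure; layer sizes are illustrative, and simple additive conditioning stands in for the paper's AdaIN-based decoder:

```python
import torch
import torch.nn as nn

class FewShotTranslatorSketch(nn.Module):
    """Illustrative sketch of a FUNIT-style generator (not the paper's exact
    architecture): content encoder + averaged class encoder + decoder."""

    def __init__(self, channels=8):
        super().__init__()
        # Content encoder: keeps the spatial structure of the input image.
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
        )
        # Class encoder: maps each example image to a class-style vector.
        self.class_enc = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Decoder: conditions content features on the class code (additive
        # conditioning here, instead of the paper's AdaIN layers).
        self.dec = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, content_img, class_imgs):
        # content_img: (B, 3, H, W); class_imgs: (B, K, 3, H, W)
        b, k, c, h, w = class_imgs.shape
        content = self.content_enc(content_img)
        # Encode each of the K class examples, then average the class codes;
        # this averaging is what makes the model few-shot at test time.
        codes = self.class_enc(class_imgs.view(b * k, c, h, w))
        code = codes.view(b, k, -1, 1, 1).mean(dim=1)
        return self.dec(content + code)

# Translate one 32x32 image using K=5 examples of an unseen target class.
model = FewShotTranslatorSketch()
out = model(torch.randn(1, 3, 32, 32), torch.randn(1, 5, 3, 32, 32))
print(out.shape)  # torch.Size([1, 3, 32, 32])
```

At test time only the K example images of the new class are needed; no retraining is required, since the class encoder generalizes to unseen classes.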


Year Released

2019
Key Links & Stats


Few-Shot Unsupervised Image-to-Image Translation

@inproceedings{liu2019few,
  title     = {Few-Shot Unsupervised Image-to-Image Translation},
  author    = {Ming-Yu Liu and Xun Huang and Arun Mallya and Tero Karras and Timo Aila and Jaakko Lehtinen and Jan Kautz},
  booktitle = {IEEE International Conference on Computer Vision (ICCV)},
  year      = {2019}
}

ML Tasks

  1. General
  2. Domain Adaptation
  3. Style Transfer

ML Platform

  1. PyTorch


  1. General
  2. Still Image


  1. General

CG Platform

  1. Not Applicable

Related organizations