Bidirectional GANs that learn to generate new samples and extract features at the same time

Released in: Adversarial Feature Learning



The ability of the Generative Adversarial Networks (GANs) framework to learn generative models mapping from simple latent distributions to arbitrarily complex data distributions has been demonstrated empirically, with compelling results showing that the latent space of such generators captures semantic variation in the data distribution. Intuitively, models trained to predict these semantic latent representations given data may serve as useful feature representations for auxiliary problems where semantics are relevant. However, in their original form, GANs have no means of learning the inverse mapping — projecting data back into the latent space. The authors propose Bidirectional Generative Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and demonstrate that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
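To make the setup concrete: a BiGAN trains an encoder E alongside the generator G, and its discriminator D scores joint (data, latent) pairs rather than data alone, with the minimax objective E_x[log D(x, E(x))] + E_z[log(1 - D(G(z), z))]. Below is a minimal NumPy sketch of evaluating that objective; the linear "networks", the toy dimensions, and the helper names (`discriminate`, `bigan_value`) are illustrative assumptions, not the convolutional architectures or training code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper).
data_dim, latent_dim = 4, 2

# Linear "networks" as plain weight matrices -- a minimal sketch.
G = rng.normal(size=(latent_dim, data_dim))    # generator     z -> x
E = rng.normal(size=(data_dim, latent_dim))    # encoder       x -> z
D = rng.normal(size=(data_dim + latent_dim,))  # discriminator on (x, z) pairs

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def discriminate(x, z):
    """Probability the joint pair (x, z) comes from the encoder side."""
    return sigmoid(np.concatenate([x, z]) @ D)

def bigan_value(xs, zs, eps=1e-8):
    """Monte Carlo estimate of the BiGAN minimax objective:
    E_x[log D(x, E(x))] + E_z[log(1 - D(G(z), z))]."""
    enc_term = np.mean([np.log(discriminate(x, x @ E) + eps) for x in xs])
    gen_term = np.mean([np.log(1.0 - discriminate(z @ G, z) + eps) for z in zs])
    return enc_term + gen_term

xs = rng.normal(size=(8, data_dim))    # stand-in "data" samples
zs = rng.normal(size=(8, latent_dim))  # latent samples
v = bigan_value(xs, zs)
print(v)  # a finite scalar; D maximizes this value, G and E minimize it
```

In training, the gradients of this value would update D ascending and G, E descending; the paper shows that at the optimum E inverts G, which is what makes E(x) a useful feature representation.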


Year Released

2017

Key Links & Stats


Bidirectional GAN

Adversarial Feature Learning

@inproceedings{DBLP:conf/iclr/DonahueKD17,
  author    = {Jeff Donahue and Philipp Kr{\"{a}}henb{\"{u}}hl and Trevor Darrell},
  title     = {Adversarial Feature Learning},
  booktitle = {5th International Conference on Learning Representations, {ICLR} 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings},
  year      = {2017},
  timestamp = {Thu, 25 Jul 2019 14:25:40 +0200},
  bibsource = {dblp computer science bibliography}
}

ML Tasks

  1. General
  2. Image Generation

ML Platform

  1. Not Applicable


  1. General
  2. Still Image


  1. General

CG Platform

  1. Not Applicable

Related organizations

UC Berkeley

UT Austin