Large-scale GANs for diverse class-conditional image generation across hundreds of classes

Released in: Large Scale GAN Training for High Fidelity Natural Image Synthesis



Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, the authors train Generative Adversarial Networks at the largest scale yet attempted and study the instabilities specific to such scale. They find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator’s input. These modifications yield models that set a new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, their models (BigGANs) achieve an Inception Score (IS) of 166.5 and a Fréchet Inception Distance (FID) of 7.4, improving over the previous best IS of 52.52 and FID of 18.6.
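The two techniques named above can be sketched briefly. The truncation trick resamples latent-vector entries whose magnitude exceeds a threshold, so a lower threshold trades variety for fidelity; orthogonal regularization penalizes off-diagonal terms of WᵀW. This is a minimal NumPy sketch, not the authors' PyTorch implementation; the function names and the β value are illustrative.

```python
import numpy as np

def truncated_noise(batch, dim, threshold, rng):
    """Truncation trick: sample z ~ N(0, I) and resample any entry
    with |z| > threshold until all entries fall inside the bound."""
    z = rng.standard_normal((batch, dim))
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > threshold
    return z

def orthogonal_penalty(W, beta=1e-4):
    """Orthogonal regularization of a weight matrix W:
    beta * || (W^T W) * (1 - I) ||_F^2, penalizing only the
    off-diagonal terms so filter norms are left unconstrained."""
    WtW = W.T @ W
    off_diag = WtW * (1.0 - np.eye(WtW.shape[0]))
    return beta * np.sum(off_diag ** 2)
```

For a perfectly orthogonal matrix (e.g. the identity) the penalty is zero, and it grows as columns of W become correlated.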


Year Released

2019

Key Links & Stats



Large Scale GAN Training for High Fidelity Natural Image Synthesis

@inproceedings{DBLP:conf/iclr/BrockDS19,
  author    = {Andrew Brock and Jeff Donahue and Karen Simonyan},
  title     = {Large Scale {GAN} Training for High Fidelity Natural Image Synthesis},
  booktitle = {7th International Conference on Learning Representations, {ICLR} 2019, New Orleans, LA, USA, May 6-9, 2019},
  year      = {2019},
  timestamp = {Thu, 25 Jul 2019 13:03:18 +0200},
  bibsource = {dblp computer science bibliography}
}

ML Tasks

  1. General
  2. Image Generation

ML Platform

  1. PyTorch


  1. General
  2. Still Image


  1. General

CG Platform

  1. Not Applicable

Related organizations