The original progressively growing GAN from NVIDIA

Released in: Progressive Growing of GANs for Improved Quality, Stability, and Variation



The authors describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, new layers that model increasingly fine details are added as training progresses. This both speeds up training and greatly stabilizes it, allowing the method to produce images of unprecedented quality, e.g., CelebA images at 1024x1024. The authors also propose a simple way to increase the variation in generated images, achieving a record inception score of 8.80 on unsupervised CIFAR-10. Additionally, the paper describes several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, the authors suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, the paper presents a higher-quality version of the CelebA dataset.
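The core mechanic of progressive growing is that a newly added, higher-resolution block is faded in gradually: its output is linearly blended with a naively upsampled version of the previous stage's output, with a blending weight that ramps from 0 to 1 during training. A minimal NumPy sketch of that blend (function names and the toy tensors are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def fade_in(upsampled_coarse, new_block_out, alpha):
    """Blend a newly added high-resolution block's output with the
    upsampled output of the previously trained stage.

    alpha ramps linearly from 0 to 1 over the transition phase, so the
    new layers are introduced smoothly instead of shocking the network.
    """
    return (1.0 - alpha) * upsampled_coarse + alpha * new_block_out

def nearest_upsample_2x(img):
    """Nearest-neighbour 2x upsampling for an (H, W, C) array."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

# Toy stand-ins: a trained 4x4 stage output and a fresh 8x8 block output.
coarse_4x4 = np.ones((4, 4, 3))
fine_8x8 = np.zeros((8, 8, 3))

up = nearest_upsample_2x(coarse_4x4)          # 8x8 view of the coarse output
blended = fade_in(up, fine_8x8, alpha=0.25)   # early in the transition
# With alpha=0.25, the output is still 75% upsampled coarse signal.
```

In the paper, the same fade-in is applied symmetrically in the discriminator, so both networks grow in lockstep through each resolution transition.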


Year Released

  1. 2017

Key Links & Stats

Progressive growing of GANs


Progressive Growing of GANs for Improved Quality, Stability, and Variation

@article{DBLP:journals/corr/abs-1710-10196,
  author     = {Tero Karras and Timo Aila and Samuli Laine and Jaakko Lehtinen},
  title      = {Progressive Growing of GANs for Improved Quality, Stability, and Variation},
  journal    = {CoRR},
  volume     = {abs/1710.10196},
  year       = {2017},
  eprinttype = {arXiv},
  eprint     = {1710.10196},
  timestamp  = {Mon, 13 Aug 2018 16:46:42 +0200},
  bibsource  = {dblp computer science bibliography}
}

ML Tasks

  1. Image Generation

ML Platform

  1. TensorFlow


  1. Still Image


  1. General
  2. Facial
  3. Digital Human

CG Platform

  1. Not Applicable

Related organizations