StyleGAN3

StyleGAN3: making StyleGAN translation- and rotation-equivariant

Released in: Alias-Free Generative Adversarial Networks

Summary

The authors observe that despite their hierarchical convolutional nature, the synthesis process of typical generative adversarial networks depends on absolute pixel coordinates in an unhealthy manner. This manifests itself as, e.g., detail appearing to be glued to image coordinates instead of the surfaces of depicted objects. The paper traces the root cause to careless signal processing that causes aliasing in the generator network. Interpreting all signals in the network as continuous, the authors derive generally applicable, small architectural changes that guarantee that unwanted information cannot leak into the hierarchical synthesis process. The resulting networks match the FID of StyleGAN2 but differ dramatically in their internal representations, and they are fully equivariant to translation and rotation even at subpixel scales. These results pave the way for generative models better suited for video and animation.
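The key idea above is to interpret every signal in the network as a continuous, bandlimited function, so that subpixel translation is well defined and commutes with sampling. As a minimal illustration of why this matters (this sketch is not from the StyleGAN3 codebase; the function name and the 1-D setup are purely illustrative), a bandlimited periodic signal can be shifted by a fractional number of samples with a Fourier phase shift, and two fractional shifts compose exactly into one integer shift of the sample grid:

```python
import numpy as np

def subpixel_shift(x, dx):
    """Translate a periodic, bandlimited 1-D signal right by dx samples
    (dx may be fractional) via a Fourier-domain phase shift."""
    freqs = np.fft.fftfreq(len(x))
    return np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * freqs * dx)))

# Bandlimited test signal: a sum of low-frequency sinusoids,
# well below the Nyquist frequency of the 64-sample grid.
n = 64
t = np.arange(n)
x = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)

# Two subpixel shifts (0.3 + 0.7 samples) compose into one integer shift,
# which must agree exactly with rolling the discrete samples by one.
y = subpixel_shift(subpixel_shift(x, 0.3), 0.7)
assert np.allclose(y, np.roll(x, 1))
```

For a signal with frequency content above Nyquist (aliasing), this identity breaks down, which is the failure mode the paper's architectural changes are designed to rule out inside the generator.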

Year Released

2021

Key Links & Stats

StyleGAN3

Alias-Free Generative Adversarial Networks

@inproceedings{Karras2021,
  author    = {Tero Karras and Miika Aittala and Samuli Laine and Erik H\"ark\"onen and Janne Hellsten and Jaakko Lehtinen and Timo Aila},
  title     = {Alias-Free Generative Adversarial Networks},
  booktitle = {Proc. NeurIPS},
  year      = {2021}
}

ML Tasks

  1. General
  2. Image Generation
  3. Style Transfer

ML Platform

  1. PyTorch

Modalities

  1. Still Image

Verticals

  1. General
  2. Facial
  3. Digital Human

CG Platform

  1. Not Applicable

Related organizations

NVIDIA