StyleGAN

From HandWiki
Short description: Novel generative adversarial network
An image generated by a StyleGAN that looks deceptively like a portrait of a young woman. This image was generated by an artificial intelligence based on an analysis of portraits.

StyleGAN is a generative adversarial network (GAN) introduced by Nvidia researchers in December 2018,[1] and made source available in February 2019.[2][3]

StyleGAN depends on Nvidia's CUDA software, GPUs, and Google's TensorFlow,[4] or Meta AI's PyTorch, which supersedes TensorFlow as the official implementation library in later StyleGAN versions.[5] The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.[6][7] Nvidia introduced StyleGAN3, described as an "alias-free" version, on June 23, 2021, and made source available on October 12, 2021.[8]

History

A direct predecessor of the StyleGAN series is the Progressive GAN, published in 2017.[9]

In December 2018, Nvidia researchers distributed a preprint with accompanying software introducing StyleGAN, a GAN for producing an unlimited number of (often convincing) portraits of fake human faces. StyleGAN was able to run on Nvidia's commodity GPU processors.

In February 2019, Uber engineer Phillip Wang used the software to create This Person Does Not Exist, which displayed a new face on each web page reload.[10][11] Wang himself expressed amazement that, even though humans have evolved specifically to understand human faces, StyleGAN can nevertheless competitively "pick apart all the relevant features (of human faces) and recompose them in a way that's coherent."[12]

In September 2019, a website called Generated Photos published 100,000 images as a collection of stock photos.[13] The collection was made using a private dataset shot in a controlled environment with similar light and angles.[14]

Similarly, two faculty at the University of Washington's Information School used StyleGAN to create Which Face is Real?, which challenged visitors to differentiate between a fake and a real face side by side.[11] The faculty stated the intention was to "educate the public" about the existence of this technology so they could be wary of it, "just like eventually most people were made aware that you can Photoshop an image".[15]

The second version of StyleGAN, called StyleGAN2, was published on February 5, 2020. It removes some of the characteristic artifacts and improves the image quality.[6][7]

In 2021, a third version was released, improving consistency between fine and coarse details in the generator. Dubbed "alias-free", this version was implemented in PyTorch.[16]

Illicit use

In December 2019, Facebook took down a network of accounts with false identities, and mentioned that some of them had used profile pictures created with artificial intelligence.[17]

Architecture

Progressive GAN

Progressive GAN[9] is a method for stably training GANs for large-scale image generation, by growing the generator from small to large scale in a pyramidal fashion. Like SinGAN, it decomposes the generator as [math]\displaystyle{ G = G_1 \circ G_2 \circ \cdots \circ G_N }[/math], and the discriminator as [math]\displaystyle{ D = D_N \circ D_{N-1} \circ \cdots \circ D_1 }[/math].

During training, at first only [math]\displaystyle{ G_N, D_N }[/math] are used in a GAN game to generate 4x4 images. Then [math]\displaystyle{ G_{N-1}, D_{N-1} }[/math] are added to reach the second stage of the GAN game, generating 8x8 images, and so on, until the final stage of the GAN game generates 1024x1024 images.
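The staged doubling implies a fixed resolution schedule. A minimal sketch (the start and final resolutions are taken from the text; the function name is illustrative):

```python
def progressive_stages(start=4, final=1024):
    """List the square resolutions visited while progressively growing
    the GAN, doubling at each stage from `start` to `final`."""
    stages = []
    res = start
    while res <= final:
        stages.append(res)
        res *= 2
    return stages

print(progressive_stages())  # [4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

Nine stages in total, so eight new generator/discriminator pairs are blended in after the initial 4x4 stage.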

To avoid discontinuity between stages of the GAN game, each new layer is "blended in" (Figure 2 of the paper[9]). For example, this is how the second stage GAN game starts:

  • Just before, the GAN game consists of the pair [math]\displaystyle{ G_N, D_N }[/math] generating and discriminating 4x4 images.
  • Just after, the GAN game consists of the pair [math]\displaystyle{ ((1-\alpha) + \alpha\cdot G_{N-1})\circ u \circ G_N, D_N \circ d \circ ((1-\alpha) + \alpha\cdot D_{N-1}) }[/math] generating and discriminating 8x8 images. Here, the functions [math]\displaystyle{ u, d }[/math] are image up- and down-sampling functions, and [math]\displaystyle{ \alpha }[/math] is a blend-in factor (much like an alpha in image compositing) that glides smoothly from 0 to 1.
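On the generator side, the fade-in can be sketched as follows. This is a simplified illustration, not the paper's implementation: nearest-neighbour upsampling stands in for the paper's [math]\displaystyle{ u }[/math], and `new_layer` stands in for the freshly added stage [math]\displaystyle{ G_{N-1} }[/math]:

```python
import numpy as np

def upsample(img):
    """Nearest-neighbour 2x upsampling (stand-in for the paper's u)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def blended_output(low_res, new_layer, alpha):
    """Fade in a new generator stage: blend the upsampled old output
    with the new layer's output, weighted by alpha in [0, 1]."""
    up = upsample(low_res)
    return (1.0 - alpha) * up + alpha * new_layer(up)

# Toy 4x4 "image" and a placeholder new layer.
x = np.arange(16, dtype=float).reshape(4, 4)
out = blended_output(x, lambda t: t * 0.5, alpha=0.25)
print(out.shape)  # (8, 8)
```

At [math]\displaystyle{ \alpha = 0 }[/math] the output is just the upsampled old image, so the new stage starts as a no-op and is gradually given full control as [math]\displaystyle{ \alpha \to 1 }[/math].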

StyleGAN

The main architecture of StyleGAN-1 and StyleGAN-2

StyleGAN is designed as a combination of Progressive GAN with neural style transfer.[18]

The key architectural choice of StyleGAN-1 is a progressive growth mechanism, similar to Progressive GAN. Each generated image starts as a constant[note 1] [math]\displaystyle{ 4\times 4 \times 512 }[/math] array and is repeatedly passed through style blocks. Each style block applies a "style latent vector" via an affine transform ("adaptive instance normalization"), similar to how neural style transfer uses the Gramian matrix. It then adds noise and normalizes (subtracts the mean, then divides by the standard deviation).
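The adaptive instance normalization step can be sketched as below. This is a hedged illustration of the operation, not the official code; the packing of scales and shifts into one `style` vector is an assumption (in StyleGAN these parameters come from a learned affine map of the style latent):

```python
import numpy as np

def adain(features, style, eps=1e-8):
    """Adaptive instance normalization sketch: normalize each channel of
    a (channels, height, width) feature map, then scale and shift it with
    style-derived parameters. `style` packs one scale and one shift per
    channel (an illustrative convention)."""
    c = features.shape[0]
    scales, shifts = style[:c], style[c:]
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return scales[:, None, None] * normalized + shifts[:, None, None]

rng = np.random.default_rng(0)
feats = rng.normal(size=(512, 4, 4))   # the constant 4x4x512 starting array
style = rng.normal(size=(1024,))       # 512 scales followed by 512 shifts
out = adain(feats, style)
print(out.shape)  # (512, 4, 4)
```

Because each channel is renormalized before the style is applied, the style vector fully determines the per-channel statistics of the output, which is what lets it control the "style" at that scale.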

At training time, usually only one style latent vector is used per generated image, but sometimes two ("mixing regularization"), in order to encourage each style block to perform its stylization independently, without expecting help from other style blocks (since they might receive an entirely different style latent vector).

After training, multiple style latent vectors can be fed into each style block. Those fed to the lower layers control the large-scale styles, and those fed to the higher layers control the fine-detail styles.

Style-mixing between two images [math]\displaystyle{ x, x' }[/math] can be performed as well. First, run a gradient descent to find [math]\displaystyle{ z, z' }[/math] such that [math]\displaystyle{ G(z)\approx x, G(z')\approx x' }[/math]. This is called "projecting an image back to style latent space". Then, [math]\displaystyle{ z }[/math] can be fed to the lower style blocks, and [math]\displaystyle{ z' }[/math] to the higher style blocks, to generate a composite image that has the large-scale style of [math]\displaystyle{ x }[/math], and the fine-detail style of [math]\displaystyle{ x' }[/math]. Multiple images can also be composed this way.
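The routing of latents to style blocks described above can be sketched in a few lines. The function name, the block count, and the crossover convention are illustrative, not from the official implementation:

```python
def mixed_styles(z_coarse, z_fine, n_blocks, crossover):
    """Build the per-block style assignment for style mixing: blocks
    below `crossover` (the lower, coarse layers) receive the latent
    projected from the first image, the rest (the higher, fine layers)
    receive the latent projected from the second image."""
    return [z_coarse if i < crossover else z_fine for i in range(n_blocks)]

# With 9 style blocks and a crossover at block 4, the composite keeps the
# large-scale style of the first image and the fine detail of the second.
assignment = mixed_styles("z", "z_prime", n_blocks=9, crossover=4)
print(assignment)  # ['z', 'z', 'z', 'z', 'z_prime', 'z_prime', 'z_prime', 'z_prime', 'z_prime']
```

The projection step itself (finding [math]\displaystyle{ z }[/math] with [math]\displaystyle{ G(z)\approx x }[/math]) is an ordinary gradient descent on the latent with the generator's weights frozen.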

StyleGAN2

StyleGAN2 improves upon StyleGAN in two ways.

One, it applies the style latent vector to transform the convolution layer's weights instead, thus solving the "blob" problem.[19] Roughly speaking, the "blob" problem arises because using the style latent vector to normalize the generated image destroys useful information. Consequently, the generator learned to create a "distraction": a large blob that absorbs most of the effect of the normalization (somewhat like a flare distracting a heat-seeking missile).
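Moving the style into the weights works by "modulating" the convolution weights per input channel and then "demodulating" each output filter back to unit norm, so no explicit normalization of the activations is needed. A simplified per-layer sketch (the real implementation applies this per sample in the batch):

```python
import numpy as np

def modulate_demodulate(weights, style_scales, eps=1e-8):
    """Sketch of StyleGAN2-style weight demodulation.
    weights: (out_ch, in_ch, kh, kw); style_scales: (in_ch,).
    Scale the weights per input channel by the style, then rescale each
    output filter to unit L2 norm."""
    w = weights * style_scales[None, :, None, None]
    demod = 1.0 / np.sqrt((w ** 2).sum(axis=(1, 2, 3)) + eps)
    return w * demod[:, None, None, None]

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 4, 3, 3))   # 8 output filters over 4 input channels
s = rng.normal(size=(4,))           # style-derived per-channel scales
w2 = modulate_demodulate(w, s)
print(np.allclose((w2 ** 2).sum(axis=(1, 2, 3)), 1.0, atol=1e-4))  # True
```

Since the statistics of the activations are controlled through the weights rather than by normalizing the image itself, there is nothing for a "distraction blob" to absorb.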

Two, it uses residual connections, which help it avoid the phenomenon where certain features become stuck at fixed pixel intervals. For example, the seam between two teeth may be stuck at pixels divisible by 32, because the generator learned to generate teeth during stage N-5, and could therefore only place primitive teeth at that stage's coarse grid, before scaling up 5 times (thus intervals of [math]\displaystyle{ 2^5 = 32 }[/math] pixels).

StyleGAN2 was updated to StyleGAN2-ADA ("ADA" stands for "adaptive"),[20] which uses invertible data augmentation. It tunes the amount of data augmentation applied by starting at zero and gradually increasing it until an "overfitting heuristic" reaches a target level, hence the name "adaptive".
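The adaptive feedback loop amounts to a simple controller on the augmentation probability. A hedged sketch (the target, step size, and function name are illustrative; the paper's heuristic is computed from discriminator outputs):

```python
def update_ada_p(p, overfit_signal, target=0.6, step=0.01):
    """Nudge the augmentation probability p upward when the overfitting
    heuristic exceeds its target, downward otherwise, clamped to [0, 1].
    `target` and `step` are illustrative values."""
    p += step if overfit_signal > target else -step
    return min(max(p, 0.0), 1.0)

# Start with no augmentation and react to a stream of heuristic readings.
p = 0.0
for signal in [0.8, 0.9, 0.5, 0.7]:
    p = update_ada_p(p, signal)
print(round(p, 2))  # 0.02
```

Because the augmentations are invertible, the generator cannot "learn the augmentations" into its output distribution, so the extra data helps the discriminator without leaking artifacts.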

StyleGAN3

StyleGAN3[21] improves upon StyleGAN2 by solving the "texture sticking" problem, which can be seen in the official videos.[22] The authors analyzed the problem using the Nyquist–Shannon sampling theorem, and argued that the layers in the generator learned to exploit the high-frequency signal in the pixels they operate upon.

To solve this, they proposed imposing strict lowpass filters between the generator's layers, so that the generator is forced to operate on the pixels in a way faithful to the continuous signals they represent, rather than as merely discrete signals. They further imposed rotational and translational invariance by using additional signal filters. The resulting StyleGAN-3 can generate images that rotate and translate smoothly, without texture sticking.
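The core idea, in one dimension, is that a pointwise nonlinearity can create frequencies above the sampling rate, which alias back down as "stuck" texture; surrounding the nonlinearity with lowpass filters suppresses them. A heavily simplified sketch (the paper upsamples before the nonlinearity and uses carefully designed filters; the small binomial kernel here is only a stand-in):

```python
import numpy as np

def lowpass_1d(signal, kernel=(0.25, 0.5, 0.25)):
    """Apply a small binomial lowpass filter (a stand-in for the paper's
    designed filters) to attenuate high frequencies."""
    return np.convolve(signal, kernel, mode="same")

def filtered_nonlinearity(x):
    """Sketch of StyleGAN3's idea in 1-D: lowpass, apply the pointwise
    nonlinearity (ReLU here), lowpass again, so the layer treats its
    samples as a band-limited continuous signal rather than arbitrary
    discrete values."""
    return lowpass_1d(np.maximum(lowpass_1d(x), 0.0))

x = np.sin(np.linspace(0, 2 * np.pi, 16))
y = filtered_nonlinearity(x)
print(y.shape)  # (16,)
```

In the full model this filtering, combined with the extra rotation- and translation-invariant filters, is what decouples the generated texture from absolute pixel coordinates.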

Notes

  1. It is learned during the training, but afterwards it is held constant, much like a bias vector.

References

  1. "GAN 2.0: NVIDIA's Hyperrealistic Face Generator". December 14, 2018. https://syncedreview.com/2018/12/14/gan-2-0-nvidias-hyperrealistic-face-generator/. 
  2. "NVIDIA Open-Sources Hyper-Realistic Face Generator StyleGAN". February 9, 2019. https://medium.com/syncedreview/nvidia-open-sources-hyper-realistic-face-generator-stylegan-f346e1a73826. 
  3. Beschizza, Rob (February 15, 2019). "This Person Does Not Exist". https://boingboing.net/2019/02/15/this-person-does-not-exist.html. 
  4. Larabel, Michael (February 10, 2019). "NVIDIA Opens Up The Code To StyleGAN - Create Your Own AI Family Portraits". https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-StyleGAN-Open-Source. 
  5. "Looking for the PyTorch version? - Stylegan2". October 28, 2021. https://github.com/NVlabs/stylegan2-ada#looking-for-the-pytorch-version. 
  6. "Synthesizing High-Resolution Images with StyleGAN2 – NVIDIA Developer News Center". June 17, 2020. https://news.developer.nvidia.com/synthesizing-high-resolution-images-with-stylegan2/. 
  7. NVlabs/stylegan2, NVIDIA Research Projects, August 11, 2020, https://github.com/NVlabs/stylegan2, retrieved August 11, 2020 
  8. Kakkar, Shobha (October 13, 2021). "NVIDIA AI Releases StyleGAN3: Alias-Free Generative Adversarial Networks" (in en-US). https://www.marktechpost.com/2021/10/12/nvidia-ai-releases-stylegan3-alias-free-generative-adversarial-networks/. 
  9. Karras, Tero; Aila, Timo; Laine, Samuli; Lehtinen, Jaakko (2018). "Progressive Growing of GANs for Improved Quality, Stability, and Variation". International Conference on Learning Representations. https://openreview.net/pdf?id=Hk99zCeAb. 
  10. msmash, n/a (February 14, 2019). "'This Person Does Not Exist' Website Uses AI To Create Realistic Yet Horrifying Faces". https://tech.slashdot.org/story/19/02/14/199200/this-person-does-not-exist-website-uses-ai-to-create-realistic-yet-horrifying-faces. 
  11. Fleishman, Glenn (April 30, 2019). "How to spot the realistic fake people creeping into your timelines". Fast Company. https://www.fastcompany.com/90332538/how-to-spot-the-creepy-fake-faces-who-may-be-lurking-in-your-timelines-deepfaces. 
  12. Bishop, Katie (February 7, 2020). "AI in the adult industry: porn may soon feature people who don't exist". The Guardian. https://www.theguardian.com/culture/2020/feb/07/ai-in-the-adult-industry-porn-may-soon-feature-people-who-dont-exist. 
  13. Porter, Jon (September 20, 2019). "100,000 free AI-generated headshots put stock photo companies on notice" (in en). https://www.theverge.com/2019/9/20/20875362/100000-fake-ai-photos-stock-photography-royalty-free. 
  14. Timmins, Jane Wakefield and Beth (February 29, 2020). "Could deepfakes be used to train office workers?" (in en-GB). BBC News. https://www.bbc.com/news/technology-51064933. 
  15. Vincent, James (March 3, 2019). "Can you tell the difference between a real face and an AI-generated fake?" (in en). The Verge. https://www.theverge.com/2019/3/3/18244984/ai-generated-fake-which-face-is-real-test-stylegan. 
  16. NVlabs/stylegan3, NVIDIA Research Projects, October 11, 2021, https://github.com/NVlabs/stylegan3 
  17. "Facebook's latest takedown has a twist -- AI-generated profile pictures" (in en). https://abcnews.go.com/US/facebooks-latest-takedown-twist-ai-generated-profile-pictures/story?id=67925292. 
  18. Karras, Tero; Laine, Samuli; Aila, Timo (2019). "A Style-Based Generator Architecture for Generative Adversarial Networks". 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 4396–4405. doi:10.1109/CVPR.2019.00453. ISBN 978-1-7281-3293-8. https://openaccess.thecvf.com/content_CVPR_2019/papers/Karras_A_Style-Based_Generator_Architecture_for_Generative_Adversarial_Networks_CVPR_2019_paper.pdf. 
  19. Karras, Tero; Laine, Samuli; Aittala, Miika; Hellsten, Janne; Lehtinen, Jaakko; Aila, Timo (2020). "Analyzing and Improving the Image Quality of StyleGAN". 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 8107–8116. doi:10.1109/CVPR42600.2020.00813. ISBN 978-1-7281-7168-5. https://openaccess.thecvf.com/content_CVPR_2020/papers/Karras_Analyzing_and_Improving_the_Image_Quality_of_StyleGAN_CVPR_2020_paper.pdf. 
  20. Tero, Karras; Miika, Aittala; Janne, Hellsten; Samuli, Laine; Jaakko, Lehtinen; Timo, Aila (2020). "Training Generative Adversarial Networks with Limited Data" (in en). Advances in Neural Information Processing Systems 33. https://proceedings.neurips.cc/paper/2020/hash/8d30aa96e72440759f74bd2306c1fa3d-Abstract.html. 
  21. Karras, Tero; Aittala, Miika; Laine, Samuli; Härkönen, Erik; Hellsten, Janne; Lehtinen, Jaakko; Aila, Timo (2021). Alias-Free Generative Adversarial Networks. Advances in Neural Information Processing Systems. https://proceedings.neurips.cc/paper/2021/file/076ccd93ad68be51f23707988e934906-Paper.pdf. 
  22. Karras, Tero; Aittala, Miika; Laine, Samuli; Härkönen, Erik; Hellsten, Janne; Lehtinen, Jaakko; Aila, Timo. "Alias-Free Generative Adversarial Networks (StyleGAN3)". https://nvlabs.github.io/stylegan3. 