1 of 9

ON THE “STEERABILITY” OF

GENERATIVE ADVERSARIAL NETWORKS

Ali Jahanian*, Lucy Chai*, & Phillip Isola

ICLR 2020

2 of 9

Motivation

  • Understanding the latent space of GANs
  • Controlling GAN outputs using only the latent space
    • i.e., without extra training procedures or specially collected datasets

3 of 9

Contributions

  • Develop a self-supervised method to find walks in the latent space
  • Measure the extent of GAN “steerability”
  • Compare linear walks to non-linear walks (compositions of several smaller steps)
  • Propose data augmentation to improve the “steerability” of GANs
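To make the linear vs. non-linear distinction on this slide concrete, here is a minimal sketch (function names are illustrative, not from the paper): a linear walk moves the latent code along one fixed direction, while a non-linear walk composes many small learned steps.

```python
import numpy as np

def linear_walk(z, w, alpha):
    # One straight-line move of size alpha along a fixed direction w.
    return z + alpha * w

def nonlinear_walk(z, step_fn, n_steps):
    # Compose many small (possibly non-linear) learned steps, as the
    # slide describes: "several linear steps" chained together.
    for _ in range(n_steps):
        z = step_fn(z)
    return z

# Sanity check: when each small step happens to be the same linear
# step, the composed walk collapses back to a single linear walk.
w = np.array([1.0, 0.0])
z0 = np.zeros(2)
assert np.allclose(
    nonlinear_walk(z0, lambda q: linear_walk(q, w, 0.1), 5),
    linear_walk(z0, w, 0.5),
)
```

The paper's finding (later slide) is that the extra flexibility of the composed walk buys little: the simple linear parameterization steers about as well.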

4 of 9

Method

  • Define an `edit` function for each type of transformation
    • Position, rotation, hue, brightness
  • Optimize a latent-space direction (a vector w per edit) so that walking the latent code reproduces the edit, measured with an L2 or LPIPS loss
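The optimization above can be sketched on a toy problem. This is a hedged illustration, not the paper's implementation: the "generator" is a fixed linear map and the "edit" is a shift along a target direction, so minimizing the L2 objective ||G(z + a·w) − edit(G(z), a)||² over w reduces to an ordinary least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generator: a fixed linear map from latent
# space to "image" space (the real paper uses a trained GAN).
A = rng.normal(size=(4, 4))
def G(z):
    return A @ z

# Hypothetical edit: shift the output along a fixed target direction t
# (the real paper edits position, rotation, hue, brightness in pixels).
t = rng.normal(size=4)
def edit(x, alpha):
    return x + alpha * t

# For this linear toy, the residual G(z + a*w) - edit(G(z), a)
# equals a * (A @ w - t) for every z and a, so the L2 objective
# is minimized by solving the least-squares system A @ w ≈ t.
w, *_ = np.linalg.lstsq(A, t, rcond=None)

# Walking the latent code by a*w now reproduces the edit in output space.
z = rng.normal(size=4)
for a in (-1.0, 0.5, 2.0):
    assert np.allclose(G(z + a * w), edit(G(z), a), atol=1e-6)
```

With a real (non-linear) generator there is no closed form, so the paper instead optimizes w with stochastic gradient descent over sampled z and step sizes, using L2 or LPIPS as the distance.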

8 of 9

Findings

  • A linear walk is as effective as a non-linear walk
  • GAN steerability is limited by the variability of the training dataset
    • Data augmentation can increase steerability
  • The framework generalizes across GAN architectures
    • BigGAN, StyleGAN, and DCGAN

9 of 9

Limitations

  • Method
    • Need to develop an edit function for training
    • Cannot extrapolate far outside the training data distribution
  • Results
    • Images can change classes if steered too much
    • No control over unintended side effects, e.g., the background changes along with the target attribute