1 of 30

ME 5990: Introduction to Machine Learning

Autoencoder Discussion

2 of 30

Outline

  • Autoencoder
    • Principle
    • Stacked Autoencoder
    • Convolutional autoencoder
    • Application: Image Denoising
    • Application: Variational autoencoder
  • Generative Adversarial Network

3 of 30

Introduction

  • What does the convolution do?

  • How can we know the convolution maintains the most valuable information?

4 of 30

Basic Idea of Encoding and Decoding

  • Encoding and decoding

    Input      Code   Output
    00000001   000    00000001
    00000010   001    00000010
    00000100   010    00000100
    00001000   011    00001000
    00010000   100    00010000
    00100000   101    00100000
    01000000   110    01000000
    10000000   111    10000000

    Encoding: Input → Code        Decoding: Code → Output
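The table above is the textbook 8-to-3 binary encoder: the 3-bit code is simply the index of the active bit, written in binary. A minimal sketch of this lossless round trip (the helper names `encode`/`decode` are mine, not from the slides):

```python
import numpy as np

def encode(onehot):
    """8-bit one-hot vector -> 3-bit binary code (the active index in binary)."""
    idx = int(np.argmax(onehot))
    return [(idx >> k) & 1 for k in (2, 1, 0)]   # most significant bit first

def decode(code):
    """3-bit code -> 8-bit one-hot vector."""
    idx = 4 * code[0] + 2 * code[1] + code[2]
    out = np.zeros(8, dtype=int)
    out[idx] = 1
    return out

x = np.eye(8, dtype=int)[5]      # one-hot input with bit 5 set
code = encode(x)                 # -> [1, 0, 1]
x_rec = decode(code)             # reconstructs x exactly
```

Unlike this hand-designed code, an autoencoder has to *learn* such a compact representation from data.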

5 of 30

Autoencoders

  • Encoder: compresses the input into a latent space of (usually) smaller dimension: h = f(x)
  • Decoder: reconstructs the input from the latent space: r = g(f(x)), with r as close to x as possible

https://towardsdatascience.com/deep-inside-autoencoders-7e41f319999f
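A minimal shape-level sketch of the two maps, using untrained random linear layers (the 64 → 8 → 64 dimensions and the tanh nonlinearity are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 64, 8                              # input dim, latent dim (k << d)
W_enc = 0.1 * rng.normal(size=(d, k))     # hypothetical encoder weights
W_dec = 0.1 * rng.normal(size=(k, d))     # hypothetical decoder weights

def f(x):
    """Encoder: compress x into the latent code h."""
    return np.tanh(x @ W_enc)

def g(h):
    """Decoder: map the latent code back to input space."""
    return h @ W_dec

x = rng.normal(size=(1, d))
h = f(x)            # latent code, shape (1, 8)
r = g(h)            # reconstruction r = g(f(x)), shape (1, 64)
```

Training consists of adjusting the encoder and decoder weights so that r matches x as closely as possible.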

6 of 30

Autoencoder

  • An autoencoder is a network trained to learn the identity function: output = input
    • Subnetwork called encoder f(·) maps input to an embedded representation
    • Subnetwork called decoder g(·) maps back to input space
  • Can be thought of as lossy compression of the input
  • The network must identify the most important attributes of the input to reproduce it faithfully

7 of 30

Stacked Autoencoder


8 of 30

Stacked Autoencoder

  • We can simplify training by training incrementally
  • Start with a single hidden layer H1
  • Then train a 2nd autoencoder to mimic the output of H1
  • Insert this into the first network
  • Build further by using H1's output as the training set for the next phase
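The incremental (greedy, layer-wise) procedure above can be sketched with two tiny linear autoencoders trained by plain gradient descent on the reconstruction MSE; all sizes, learning rates, and the synthetic data are illustrative assumptions:

```python
import numpy as np

def train_linear_ae(X, h_dim, lr=0.02, steps=1000, seed=0):
    """Train one linear autoencoder X -> H -> X by gradient descent on MSE."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = 0.1 * rng.normal(size=(d, h_dim))   # encoder weights
    W2 = 0.1 * rng.normal(size=(h_dim, d))   # decoder weights
    for _ in range(steps):
        H = X @ W1                  # hidden activations
        E = H @ W2 - X              # reconstruction error
        G2 = (2.0 / n) * (H.T @ E)            # grad of mean ||E||^2 w.r.t. W2
        G1 = (2.0 / n) * (X.T @ (E @ W2.T))   # grad w.r.t. W1
        W1 -= lr * G1
        W2 -= lr * G2
    return W1, W2

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))      # synthetic training data

# Phase 1: train the first autoencoder (16 -> 8 -> 16) on the raw data.
W1_enc, W1_dec = train_linear_ae(X, 8)
H1 = X @ W1_enc                     # output of hidden layer H1

# Phase 2: train the second autoencoder (8 -> 4 -> 8) on H1's output.
# The two encoders can then be stacked: x -> H1 -> H2.
W2_enc, W2_dec = train_linear_ae(H1, 4)
```

Each phase only trains one shallow autoencoder, which is the point of the incremental scheme.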

9 of 30

Autoencoder example

10 of 30

Transfer learning

  • Can use the encoder as the pre-trained network with unlabeled data
  • Learn useful features and then train “logic” of FC layers with labeled data

11 of 30

Transfer Learning from Trained Classifier

  • Can also transfer from a classifier trained on a different task, e.g., transfer a GoogLeNet architecture to ultrasound classification
  • Often choose existing one from a model zoo

12 of 30

Autoencoders

  • Autoencoders are designed to reproduce their input, especially for images.
  • Key point is to reproduce the input from a learned encoding.

https://www.edureka.co/blog/autoencoders-tutorial/

13 of 30

Convolutional Autoencoder

  • Convolution: usually downsizes the image

https://epynn.net/Convolution.html
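To see why convolution downsizes the image, here is a "valid" 2D convolution (no padding) in plain NumPy; a K×K kernel shrinks an H×W input to (H−K+1)×(W−K+1):

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2D cross-correlation: no padding, so the output shrinks."""
    H, W = img.shape
    K = kernel.shape[0]
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + K, j:j + K] * kernel)
    return out

img = np.arange(36.0).reshape(6, 6)    # toy 6x6 "image"
k = np.ones((3, 3)) / 9.0              # 3x3 averaging kernel
out = conv2d_valid(img, k)             # shape (4, 4): the image is downsized
```

The decoder of a convolutional autoencoder therefore needs the opposite operation, which is where the transposed convolution comes in.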

14 of 30

Convolutional Autoencoder

  • 2D Transposed Convolution:
    • Stride = 1

https://www.geeksforgeeks.org/apply-a-2d-transposed-convolution-operation-in-pytorch/

15 of 30

Convolutional Autoencoder

  • 2D Transposed Convolution:
    • Stride = 2

https://www.geeksforgeeks.org/apply-a-2d-transposed-convolution-operation-in-pytorch/
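Both the stride-1 and stride-2 cases can be reproduced with a direct NumPy implementation of the transposed convolution: each input pixel stamps a scaled copy of the kernel into the output, overlaps are summed, and the output grows to (H−1)·stride + K per side:

```python
import numpy as np

def conv_transpose2d(x, kernel, stride=1):
    """Transposed convolution: every input pixel stamps a scaled copy of the
    kernel into the output (shifted by `stride`); overlaps are summed.
    Output side length: (H_in - 1) * stride + K."""
    H, W = x.shape
    K = kernel.shape[0]
    out = np.zeros(((H - 1) * stride + K, (W - 1) * stride + K))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + K,
                j * stride:j * stride + K] += x[i, j] * kernel
    return out

x = np.ones((2, 2))
k = np.ones((2, 2))
y1 = conv_transpose2d(x, k, stride=1)   # shape (3, 3); center sums 4 overlaps
y2 = conv_transpose2d(x, k, stride=2)   # shape (4, 4); stamps do not overlap
```

With stride 2 the output is strictly larger, which is what lets the decoder upsample back to the original image size.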

16 of 30

Convolutional Autoencoder


17 of 30

Denoising with an Autoencoder

  • Denoising: the input is a clean image plus noise, and the network is trained to reproduce the clean image.
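The data preparation for this training scheme is just a few lines; the noise level sigma = 0.2 and the random stand-in images are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "clean images" with pixel values in [0, 1] (real data would be
# e.g. MNIST digits); the shapes and sigma are arbitrary choices.
clean = rng.random((32, 28, 28))

sigma = 0.2
noisy = np.clip(clean + sigma * rng.normal(size=clean.shape), 0.0, 1.0)

# Training pairs for a denoising autoencoder:
#   network input   = noisy image
#   training target = clean image (NOT the noisy one)
```

Because the target is the clean image, the bottleneck cannot simply copy the input; it must learn features that separate signal from noise.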

18 of 30

Denoising with an Autoencoder

  • Image Denoise

Dor Bank, Noam Koenigstein, Raja Giryes, "Autoencoders": https://arxiv.org/abs/2003.05991

19 of 30

Denoising with an Autoencoder

20 of 30

Denoising


21 of 30

Variational Autoencoder

  • The Variational Autoencoder (VAE) is a generative network.
  • Its generative model is equivalent to a probabilistic decoder.
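The core of the probabilistic-decoder setup is the reparameterization trick: the encoder predicts a mean and (log-)variance for each latent dimension, and z is sampled as mu + sigma·eps. A sketch with made-up encoder outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for one input x: mean and log-variance of the
# approximate posterior q(z|x) over a 2-D latent space.
mu = np.array([0.5, -1.0])
log_var = np.array([0.1, -0.3])

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I), so the
# randomness sits outside the network and gradients can flow through mu, sigma.
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# KL divergence between q(z|x) and the standard-normal prior: the extra term
# in the VAE loss, alongside the reconstruction error.
kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))

# To generate new data, sample z directly from the prior and run the
# probabilistic decoder on it.
z_new = rng.normal(size=mu.shape)
```

The KL term pulls the latent codes toward N(0, I), which is what makes sampling from the prior produce sensible outputs.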

22 of 30

Variational Autoencoder

  • Results for a VAE on MNIST

23 of 30

Variational Autoencoder

  • Face generation

https://towardsdatascience.com/intuitively-understanding-variational-autoencoders-1bfe67eb5daf

24 of 30

Generative Adversarial Network

  • A powerful generative framework introduced by Ian Goodfellow et al.

https://developers.google.com/machine-learning/gan/gan_structure

25 of 30

Generative Adversarial Network

  • A GAN has two major modules: a generator (similar to a decoder) and a discriminator

26 of 30

GAN

  • Train the discriminator

27 of 30

GAN

  • Train the generator while keeping the discriminator fixed!

28 of 30

GAN

  • The generator and the discriminator have different training processes. So how do we train the GAN as a whole?
  • GAN training proceeds in alternating periods:
    1. The discriminator trains for one or more epochs.
    2. The generator trains for one or more epochs.
    3. Repeat steps 1 and 2 to continue training the generator and discriminator networks.
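The alternating schedule can be demonstrated end to end on a toy 1-D problem (real data ~ N(4, 1), an affine generator, and a logistic-regression discriminator; all sizes and learning rates are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Toy setup: real data ~ N(4, 1); generator g(z) = a*z + b with z ~ N(0, 1);
# discriminator D(x) = sigmoid(w*x + c), a 1-D logistic regressor.
w, c = 0.1, 0.0          # discriminator parameters
a, b = 1.0, 0.0          # generator parameters
lr, n = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, n)
    z = rng.normal(size=n)
    fake = a * z + b

    # Step 1: discriminator trains (generator fixed):
    # raise D(real), lower D(fake).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - dr) * real + df * fake)
    c -= lr * np.mean(-(1 - dr) + df)

    # Step 2: generator trains (discriminator fixed):
    # raise D(fake), using the non-saturating loss -log D(G(z)).
    df = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - df) * w * z)
    b -= lr * np.mean(-(1 - df) * w)

# Alternating the two steps drives the generator's offset b toward the
# real-data mean (4.0).
```

Each pass through the loop body is one round of the alternating schedule above; real GAN training does the same thing with deep networks and backpropagation in place of these hand-derived gradients.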

29 of 30

GAN

  • The result is very destructive…

30 of 30

GAN

  • The result is very destructive