Generative Models
Theory and techniques
Outline
Overview of Generative Models
Overview of Generative Models
The term generative model is also used for models that generate instances of variables in a way that has no clear relationship to probability distributions over potential samples of input variables. Generative adversarial networks (GANs) are examples of this class of generative models.
Generative vs. Discriminative Modeling
Discriminative Modeling
Input Data → Prediction
Generative Modeling
Random Input → Generated Example
Popular Probabilistic Generative Models
Naïve Bayes
P(X, Y) = P(Y) * P(X|Y) = P(Y) * P(x1|Y) * P(x2|Y) * … * P(xN|Y)
Classifier
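The factorization above can be sketched as a minimal Bernoulli Naïve Bayes classifier. The function names, smoothing constant, and toy data below are illustrative, not from any particular library:

```python
import numpy as np

def fit_naive_bayes(X, y, alpha=1.0):
    """Estimate P(Y) and P(x_i = 1 | Y) with Laplace smoothing."""
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    likelihoods = {
        c: (X[y == c].sum(axis=0) + alpha) / ((y == c).sum() + 2 * alpha)
        for c in classes
    }
    return priors, likelihoods

def predict(x, priors, likelihoods):
    """Pick argmax_y P(y) * prod_i P(x_i | y), computed in log space."""
    scores = {}
    for c, prior in priors.items():
        p = likelihoods[c]
        scores[c] = np.log(prior) + np.sum(
            x * np.log(p) + (1 - x) * np.log(1 - p)
        )
    return max(scores, key=scores.get)

# Toy binary features: class 1 tends to have the first feature on.
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
y = np.array([1, 1, 0, 0])
priors, likelihoods = fit_naive_bayes(X, y)
print(predict(np.array([1, 0]), priors, likelihoods))  # → 1
```

Working in log space avoids underflow when the product runs over many features.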
Popular Probabilistic Generative Models
Z → X (latent variable Z generates observation X)
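Sampling from such a latent-variable model proceeds by ancestral sampling: draw Z first, then X conditioned on Z. The Gaussian choices and parameters below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    z = rng.standard_normal(n)                          # Z ~ N(0, 1), latent
    x = 2.0 * z + 1.0 + 0.1 * rng.standard_normal(n)    # X | Z ~ N(2z + 1, 0.1^2)
    return z, x

z, x = sample(10_000)
print(np.mean(x))  # close to 1.0, since E[X] = 2*E[Z] + 1 = 1
```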
Popular Probabilistic Generative Models
Hidden Markov Model
P(X, Y) = P(y1) * P(x1| y1) * P(y2| y1) * P(x2| y2) * … * P(yN| yN-1) * P(xN| yN)
Sequence model
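The HMM joint probability above can be evaluated directly from the initial, transition, and emission distributions. The two-state tables below are illustrative toy values:

```python
import numpy as np

pi = np.array([0.6, 0.4])                # P(y1): initial state distribution
A = np.array([[0.7, 0.3], [0.2, 0.8]])   # P(y_t | y_{t-1}): transitions
B = np.array([[0.9, 0.1], [0.3, 0.7]])   # P(x_t | y_t): emissions, 2 symbols

def joint_prob(states, obs):
    """P(X, Y) = P(y1) P(x1|y1) * prod_t P(y_t|y_{t-1}) P(x_t|y_t)."""
    p = pi[states[0]] * B[states[0], obs[0]]
    for t in range(1, len(states)):
        p *= A[states[t - 1], states[t]] * B[states[t], obs[t]]
    return p

print(joint_prob([0, 0, 1], [0, 0, 1]))
```

In practice these products are computed in log space, and marginalizing over the hidden states Y uses the forward algorithm rather than enumerating sequences.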
Popular Probabilistic Generative Models
Popular Probabilistic Generative Models
Latent Dirichlet Allocation11
P(W, Z, Θ, Φ | α, β) = P(W| Z, Φ) * P(Φ| β) * P(Z| Θ) * P(Θ| α)
Admixture model
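LDA's generative process can be sketched by sampling each factor in turn: topic-word distributions Φ from a Dirichlet with parameter β, a per-document topic mixture Θ from a Dirichlet with parameter α, then a topic z and a word w for each slot. All sizes and hyperparameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n_topics, vocab_size, doc_len = 3, 8, 20
alpha, beta = 0.5, 0.1

phi = rng.dirichlet(np.full(vocab_size, beta), size=n_topics)  # P(w | z), one row per topic
theta = rng.dirichlet(np.full(n_topics, alpha))                # P(z) for one document

z = rng.choice(n_topics, size=doc_len, p=theta)                # topic for each word slot
words = np.array([rng.choice(vocab_size, p=phi[t]) for t in z])
print(words.shape)  # (20,)
```

The "admixture" character shows up in theta: each document mixes several topics rather than belonging to exactly one.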
Variational Autoencoders (VAEs)
Variational Autoencoder (VAE)1,2,7
Variational Autoencoder (VAE)2,7
VAE training
The loss function contains a reconstruction term and a regularization term:
Variational Autoencoder (VAE)2,6,7
The regularization term enforces:
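Both terms can be sketched in a few lines, assuming a Gaussian encoder q(z|x) = N(mu, sigma²) regularized toward an N(0, I) prior; the closed-form diagonal-Gaussian KL is standard, and the toy inputs are illustrative:

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Reconstruction error plus KL(q(z|x) || N(0, I)) for a diagonal Gaussian q."""
    recon = np.sum((x - x_recon) ** 2)                            # reconstruction term
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))   # regularization term
    return recon + kl

x = np.array([0.5, 1.0])
x_recon = np.array([0.4, 1.1])
mu = np.zeros(2)       # encoder mean
log_var = np.zeros(2)  # encoder log-variance
print(vae_loss(x, x_recon, mu, log_var))  # KL term is 0 when q exactly matches the prior
```

With mu = 0 and log_var = 0 the KL term vanishes, so the loss reduces to the pure reconstruction error.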
Probabilistic view of VAEs14
The VAE can be viewed as a probabilistic model where:
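In standard VAE notation, with encoder q_φ(z|x), decoder p_θ(x|z), and prior p(z), training maximizes the evidence lower bound (ELBO) on the data log-likelihood:

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```

The first term corresponds to the reconstruction loss and the second to the regularization term of the previous slides.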
Properties of Variational Autoencoders12
VAEs work better than other methods at exploring variations of existing data:
Applications of Variational Autoencoders
Powerful generative models8,13 for many kinds of complex data, including:
Generative Adversarial Networks (GANs)
A Probabilistic Motivation18
A Probabilistic Motivation18
Adversarial training is used to train the neural network and learn the complex transformation function.
Generative Adversarial Networks (GANs)15,16,17
An unsupervised modeling technique that learns the regularities or patterns in input data such that the model can generate new examples that plausibly could have been drawn from the original dataset.
The problem is framed as a supervised learning problem with two sub-models:
Generative Adversarial Networks (GANs)15,16,17
Generator Model: takes a fixed-length random vector as input and generates a sample in the domain.
Generative Adversarial Networks (GANs)15,16,17
Discriminator Model: takes an example from the domain (real or generated) as input and predicts a binary class label of real or fake (generated).
Generative Adversarial Networks (GANs)18
Generative Adversarial Networks (GANs)
Original (minimax) loss function: min_G max_D V(D, G) = E_x~pdata[log D(x)] + E_z~pz[log(1 − D(G(z)))]
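The value function V(D, G) of the original GAN objective can be evaluated directly on discriminator outputs; the toy probabilities below are illustrative:

```python
import numpy as np

def gan_value(d_real, d_fake):
    """V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))].

    d_real: D's outputs on real samples; d_fake: D's outputs on generated samples.
    """
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

d_real = np.array([0.9, 0.8])   # D is confident real data is real
d_fake = np.array([0.1, 0.2])   # D is confident generated data is fake
print(gan_value(d_real, d_fake))

# At equilibrium D(x) = 0.5 everywhere (G matches the data distribution):
print(gan_value(np.array([0.5]), np.array([0.5])))  # 2*log(0.5) ≈ -1.386
```

D is trained to maximize V while G is trained to minimize it; the second term is what G can influence through its samples.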
Challenges of Real-World GANs19,20
GAN Variations
MuseGAN audio sample: https://salu133445.github.io/musegan/audio/best_samples.mp3
Hands-on Naïve Bayes and GAN examples
References