Variational Autoencoder | Introduction to Generative Networks
Image Synthesis Through Generative Networks
Course Content


1. Introduction to Generative Networks
2. VAE implementation
3. GAN Implementation

Variational Autoencoder

A Variational Autoencoder (VAE) is a generative model that extends the traditional autoencoder architecture. It learns to generate new data samples by representing data as points in a continuous latent space.

In a VAE, the encoder network maps input data to a probability distribution in the latent space, typically modeled as a Gaussian. The decoder network then generates data samples by drawing points from this latent distribution and reconstructing the original input from them.
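The encoder's key difference from a plain autoencoder can be sketched in a few lines. The sketch below is a minimal NumPy illustration, not a trained model: the linear maps, dimensions (784-dimensional input, 2-dimensional latent space), and weight scales are all assumptions chosen for clarity. The encoder outputs a mean and log-variance instead of a single point, and a latent sample is drawn with the reparameterization trick (z = mu + sigma * eps).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions for illustration): 784-dim input, 2-dim latent space.
input_dim, latent_dim = 784, 2

# Linear maps standing in for a trained encoder network: they output the
# parameters of a Gaussian (mean and log-variance) rather than a single point.
W_mu = rng.normal(scale=0.01, size=(input_dim, latent_dim))
W_logvar = rng.normal(scale=0.01, size=(input_dim, latent_dim))

def encode(x):
    """Map an input to the mean and log-variance of its latent Gaussian."""
    return x @ W_mu, x @ W_logvar

def sample_latent(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.standard_normal((1, input_dim))   # a dummy "image"
mu, log_var = encode(x)
z = sample_latent(mu, log_var)
print(z.shape)  # (1, 2): a point in the 2-D latent space
```

Because the encoder outputs a distribution, encoding the same input twice yields two slightly different latent points, which is exactly what makes the latent space smooth enough to sample from.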

Latent Space Structure

The latent space is a lower-dimensional representation of the input data. It is a compressed, abstract space where data points are encoded into a more compact form.

In the case of VAEs, an additional constraint is imposed: the latent values are assumed to follow a Gaussian (or another chosen) distribution. The decoder network learns to reconstruct an image from samples drawn from this distribution.
As a result, we can generate new images simply by sampling from this distribution in the latent space and passing the samples through the decoder network.
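The generation step described above can be sketched as follows. This is a hedged toy example: the linear "decoder" and the sizes (2-dimensional latent, 784-dimensional output) are illustrative stand-ins for a trained decoder network, not a real implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, output_dim = 2, 784  # toy sizes chosen for illustration

# A linear map standing in for a trained decoder network.
W_dec = rng.normal(scale=0.01, size=(latent_dim, output_dim))

def decode(z):
    """Map latent points back to image space; a sigmoid keeps pixels in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))

# Because the latent space is regularized toward N(0, I), new images are
# generated simply by sampling from that prior and decoding the samples.
z_new = rng.standard_normal((5, latent_dim))
images = decode(z_new)
print(images.shape)  # (5, 784): five newly generated samples
```

Note that no encoder is involved at generation time: once trained, the decoder alone turns random latent samples into new data.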

Note

The latent space shown in the video is created using the MNIST dataset, which contains handwritten digits from 0 to 9.

Why Can't We Use a Simple Autoencoder for Content Generation?

There are two main reasons: the irregularity of the latent space and overfitting.

Irregularity of Latent Space

In a simple autoencoder, the latent space lacks regularity, so there is no obvious distribution from which to sample when generating new data.
Sampling from a distribution the model was not trained on typically produces meaningless outputs, since the decoder only behaves sensibly on the specific latent points it saw during training.

Variational Autoencoders (VAEs) address this issue by forcing the latent space to follow a specific distribution. With this constraint in place, we know which distribution to sample from, and the decoder is trained on exactly the kind of samples it will see at generation time, enabling it to produce new high-quality images.
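In practice this constraint is enforced during training with a Kullback-Leibler (KL) divergence penalty between the encoder's Gaussian and a standard normal prior, which has a simple closed form. The function below is a small sketch of that term (summed over latent dimensions); the example values are illustrative.

```python
import numpy as np

def kl_divergence(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)), summed over latent dims."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

# When the encoder already outputs the prior (mu = 0, sigma = 1), the penalty is zero.
print(kl_divergence(np.zeros(2), np.zeros(2)))  # 0.0

# Any deviation from the prior is penalized.
print(kl_divergence(np.array([1.0, 0.0]), np.zeros(2)) > 0)  # True
```

Minimizing this penalty is what pulls every encoded distribution toward N(0, I), giving the latent space the regular structure a plain autoencoder lacks.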

Overfitting

Simple autoencoders often suffer from overfitting, as they learn a deterministic mapping from each input to a single point in the latent space. As a result, many points in the latent space, including those between encoded inputs, produce meaningless content when decoded.

VAEs mitigate this by mapping each input to a probability distribution rather than to a single point, which smooths the latent space and reduces the risk of overfitting.
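Both ideas come together in the VAE training objective: a reconstruction term that keeps decoded samples close to the inputs, plus the KL regularizer that keeps latent distributions close to the prior. The sketch below uses squared error for reconstruction; real implementations may use other losses (e.g. binary cross-entropy for pixel values), and the example numbers are arbitrary.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Training objective: reconstruction error plus the KL regularizer."""
    recon = np.sum((x - x_recon) ** 2)                        # squared-error reconstruction
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))  # KL to the N(0, I) prior
    return recon + kl

# Arbitrary example values for a 2-D input and 2-D latent space.
x = np.array([0.5, 0.2])
x_recon = np.array([0.4, 0.3])
mu = np.array([0.1, -0.1])
log_var = np.array([0.0, 0.0])
print(round(vae_loss(x, x_recon, mu, log_var), 4))  # 0.03
```

The two terms trade off against each other: the reconstruction term alone would collapse back to a deterministic autoencoder, while the KL term alone would ignore the data, so both are needed.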

How does a VAE differ from a standard autoencoder in its encoder output?

Section 1. Chapter 4