Conditional GAN | GAN Implementation
Image Synthesis Through Generative Networks
Course content

1. Introduction to Generative Networks
2. VAE implementation
3. GAN Implementation

Conditional GAN

Conditional GANs (cGANs) extend the GAN framework by conditioning both the generator and the discriminator on additional information. This auxiliary data can be class labels, images, text, or any other contextual information the model can use to guide the generation process. This enables more controlled, targeted data generation.
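To make the conditioning concrete, here is a minimal NumPy sketch (the latent and label dimensions are illustrative) of the simplest form of conditioning: concatenating a noise vector with a one-hot class label into a single generator input:

```python
import numpy as np

latent_dim = 100  # illustrative latent size
label_dim = 10    # e.g. ten digit classes

noise = np.random.normal(0, 1, (1, latent_dim))  # random noise vector
label = np.eye(label_dim)[[7]]                   # one-hot label for class 7

# Concatenate noise and label into one conditioned input vector
conditioned_input = np.concatenate([noise, label], axis=1)
print(conditioned_input.shape)  # (1, 110)
```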

Generator

The generator takes two inputs: a random noise vector and the conditioning information. These inputs are combined and fed through the generator network, which outputs data conditioned on the provided label.

from tensorflow.keras.layers import (Input, Dense, Reshape, Concatenate,
                                     Conv2D, UpSampling2D)
from tensorflow.keras.models import Model

def build_generator(latent_dim, label_dim=10):
    # Input layers
    input_noise = Input(shape=(latent_dim,))
    input_label = Input(shape=(label_dim,))  # One-hot encoded label input

    # Project the noise vector to match the initial shape for convolutional operations
    x = Dense(7 * 7 * 1024, activation='elu')(input_noise)
    x = Reshape((7, 7, 1024))(x)  # Reshape to a 3D tensor (7, 7, 1024)

    # Project the label to a 7x7 spatial map so it can be concatenated
    # with the feature maps along the channel axis
    label_map = Dense(7 * 7, activation='elu')(input_label)
    label_map = Reshape((7, 7, 1))(label_map)

    # Concatenate the feature maps with the label map
    # This allows the generator to conditionally generate images based on the provided label
    x = Concatenate()([x, label_map])

    # Upsampling and convolutional layers for image generation
    x = Conv2D(512, (2, 2), padding='same', activation='elu')(x)
    x = Conv2D(256, (2, 2), padding='same', activation='elu')(x)
    x = UpSampling2D()(x)  # Upsample to (14, 14, 256)

    x = Conv2D(128, (3, 3), padding='same', activation='elu')(x)
    x = Conv2D(64, (2, 2), padding='same', activation='elu')(x)
    x = UpSampling2D()(x)  # Upsample to (28, 28, 64)

    x = Conv2D(32, (4, 4), padding='same', activation='elu')(x)

    # Final convolutional layer to generate the output image
    generated_image = Conv2D(1, (3, 3), padding='same', activation='sigmoid')(x)

    # Define the model with noise and label inputs
    model = Model(inputs=[input_noise, input_label], outputs=generated_image)

    return model
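Channel-wise concatenation requires the label information to share the feature maps' spatial shape; one common alternative to a learned projection is to tile the one-hot vector across the 7x7 grid. A minimal NumPy sketch of that shape arithmetic (batch size and dimensions chosen to match the generator above):

```python
import numpy as np

# Feature maps from the noise projection: batch of 2, shape (7, 7, 1024)
features = np.zeros((2, 7, 7, 1024))

# One-hot labels for the batch, label_dim = 10
labels = np.eye(10)[[3, 7]]  # shape (2, 10)

# Broadcast each label across the 7x7 spatial grid: (2, 10) -> (2, 7, 7, 10)
label_maps = np.tile(labels[:, None, None, :], (1, 7, 7, 1))

# Concatenate along the channel axis, as Concatenate() does in Keras
combined = np.concatenate([features, label_maps], axis=-1)
print(combined.shape)  # (2, 7, 7, 1034)
```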

Discriminator

The discriminator also receives two inputs: the data (either real or generated) and the conditioning information. It processes both to determine whether the data is real and whether it matches the conditioning information.

from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Concatenate, Dense)
from tensorflow.keras.models import Model

def build_discriminator(in_shape=(28, 28, 1), label_dim=10):
    input_img = Input(shape=in_shape)
    input_label = Input(shape=(label_dim,))  # One-hot encoded label input

    # Convolutional feature extractor for the (real or generated) image
    x = Conv2D(32, (3, 3), padding='same', activation='elu')(input_img)
    x = MaxPooling2D((2, 2))(x)

    x = Conv2D(64, (3, 3), padding='same', activation='elu')(x)
    x = MaxPooling2D((2, 2))(x)

    x = Conv2D(128, (3, 3), padding='same', activation='elu')(x)
    x = MaxPooling2D((2, 2))(x)

    x = Flatten()(x)

    # Concatenate the flattened features with the label input
    x = Concatenate()([x, input_label])

    x = Dense(256, activation='elu')(x)
    x = Dense(1, activation='sigmoid')(x)  # Probability that the input is real

    # Define the model with both image and label inputs
    model = Model(inputs=[input_img, input_label], outputs=x)
    return model

Data generation

We pass a one-hot-encoded image label, together with random noise, as input to the trained generator network to create new data samples.

import numpy as np

num_images = 10
latent_dim = 100  # Must match the latent dimension used during training
label_dim = 10    # Must match the label dimension used during training

# Generate random noise and random conditional labels
noise = np.random.normal(0, 1, (num_images, latent_dim))
labels = np.random.randint(0, label_dim, num_images)  # Random label for each image

# Convert labels to one-hot encoding
conditional_labels = np.zeros((num_images, label_dim))
conditional_labels[np.arange(num_images), labels] = 1

# Generate images using the trained conditional generator
gen_imgs = generator_model.predict([noise, conditional_labels])
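The manual one-hot assignment above can also be written with `np.eye`, which treats each label as a row index into the identity matrix. A small sketch showing the equivalence:

```python
import numpy as np

num_images = 10
label_dim = 10

labels = np.random.randint(0, label_dim, num_images)

# Manual one-hot encoding, as above
manual = np.zeros((num_images, label_dim))
manual[np.arange(num_images), labels] = 1

# np.eye shorthand: each label selects a row of the identity matrix
shorthand = np.eye(label_dim)[labels]

print(np.array_equal(manual, shorthand))  # True
```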


Section 3, Chapter 3

