Generator - Discriminator Principle | GAN Implementation
Image Synthesis Through Generative Networks

Generator - Discriminator Principle

The generator-discriminator principle is central to the functioning of Generative Adversarial Networks (GANs).

Generator

This network generates new data instances. The goal of the generator is to produce data that is indistinguishable from real data. It takes random noise as input and transforms it into a data sample that mimics the real data distribution.

from tensorflow.keras.layers import (
    Input, Dense, Reshape, Conv2D, UpSampling2D, MaxPooling2D, Flatten
)
from tensorflow.keras.models import Model

# Define generator: maps a latent noise vector to a 28x28x1 image
def build_generator(latent_dim):
    input_noise = Input(shape=(latent_dim,))

    # Project the noise vector and reshape it into a low-resolution feature map
    x = Dense(7 * 7 * 1024, activation='elu')(input_noise)
    x = Reshape((7, 7, 1024))(x)

    x = Conv2D(512, (2, 2), padding='same', activation='elu')(x)
    x = Conv2D(256, (2, 2), padding='same', activation='elu')(x)
    x = UpSampling2D()(x)  # 7x7 -> 14x14

    x = Conv2D(128, (3, 3), padding='same', activation='elu')(x)
    x = Conv2D(64, (2, 2), padding='same', activation='elu')(x)
    x = UpSampling2D()(x)  # 14x14 -> 28x28

    x = Conv2D(32, (4, 4), padding='same', activation='elu')(x)

    # Sigmoid output keeps pixel values in [0, 1]
    x = Conv2D(1, (3, 3), padding='same', activation='sigmoid')(x)

    return Model(inputs=input_noise, outputs=x)

Discriminator

This network evaluates the authenticity of the data. It takes a data instance as input and outputs a probability that indicates whether the input data is real (from the true data distribution) or fake (generated by the generator).

# Define discriminator: maps a 28x28x1 image to a single real/fake probability
def build_discriminator(in_shape=(28, 28, 1)):
    input_img = Input(shape=in_shape)

    # Progressively downsample the image while increasing the number of filters
    x = Conv2D(32, (3, 3), padding='same', activation='elu')(input_img)
    x = MaxPooling2D((2, 2))(x)

    x = Conv2D(64, (3, 3), padding='same', activation='elu')(x)
    x = MaxPooling2D((2, 2))(x)

    x = Conv2D(128, (3, 3), padding='same', activation='elu')(x)
    x = MaxPooling2D((2, 2))(x)

    x = Flatten()(x)
    x = Dense(256, activation='elu')(x)

    # Sigmoid output: probability that the input image is real
    x = Dense(1, activation='sigmoid')(x)

    return Model(inputs=input_img, outputs=x)

During training, the generator and discriminator are updated in alternation. Typically, for each iteration:

  • the discriminator is updated by maximizing the probability of correctly classifying real and fake samples;
  • the generator is updated by minimizing the probability that the discriminator correctly identifies its samples as fake.

This adversarial process continues until the generator produces data that is realistic enough that the discriminator cannot reliably distinguish between real and fake data, ideally achieving a balance where both networks are optimized.
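
To make the alternating updates concrete, here is a minimal training-loop sketch built on the models above. It is only an illustration under stated assumptions: real_images is a hypothetical NumPy array of training images scaled to [0, 1], and the loop relies on the common Keras pattern of freezing the discriminator inside a combined model before compiling it; actual hyperparameters and loop structure may differ.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam

latent_dim = 100
generator = build_generator(latent_dim)

discriminator = build_discriminator()
discriminator.compile(optimizer=Adam(1e-4), loss='binary_crossentropy')

# Combined model: noise -> generator -> (frozen) discriminator.
# Trainability is captured at compile time, so the standalone
# discriminator compiled above still trains normally.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(optimizer=Adam(1e-4), loss='binary_crossentropy')

batch_size = 64
for step in range(10000):
    # 1) Discriminator update: real images labeled 1, generated images labeled 0
    idx = np.random.randint(0, real_images.shape[0], batch_size)  # real_images: hypothetical dataset
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise, verbose=0)

    d_loss_real = discriminator.train_on_batch(real_images[idx], np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))

    # 2) Generator update: ask the combined model to make the discriminator output 1
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))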
