Conditional GAN
Conditional GANs (cGANs) extend the GAN framework by conditioning both the generator and the discriminator on additional information. This auxiliary data can be class labels, images, text, or any other contextual information that guides the generation process. Conditioning on it lets cGANs generate data in a more controlled and targeted way.
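Formally, conditioning turns the standard GAN minimax game into its conditional counterpart, with both the discriminator D and the generator G receiving the auxiliary information y (this is the objective from Mirza & Osindero's original cGAN paper):

$$
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x \mid y)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z \mid y) \mid y)\big)\big]
$$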
Generator
The generator takes two inputs: a random noise vector z and the conditioning information y (here, a one-hot encoded class label). These inputs are combined and fed through the generator network, which outputs data consistent with the given condition.
```python
from tensorflow.keras.layers import (Input, Dense, Reshape, Concatenate,
                                     Conv2D, UpSampling2D)
from tensorflow.keras.models import Model

def build_generator(latent_dim, label_dim=10):
    # Input layers
    input_noise = Input(shape=(latent_dim,))
    input_label = Input(shape=(label_dim,))  # One-hot encoded label input

    # Project the noise vector to match the initial shape for convolutional operations
    x = Dense(7 * 7 * 1024, activation='elu')(input_noise)
    x = Reshape((7, 7, 1024))(x)  # Reshape to a 3D tensor (7, 7, 1024)

    # Project the one-hot label to a (7, 7, 1) map and concatenate it with the
    # feature tensor along the channel axis; concatenating the raw one-hot
    # vector with a 4-D tensor would raise a shape error.
    # This allows the generator to conditionally generate images based on the provided label.
    y = Dense(7 * 7, activation='elu')(input_label)
    y = Reshape((7, 7, 1))(y)
    x = Concatenate()([x, y])

    # Upsampling and convolutional layers for image generation
    x = Conv2D(512, (2, 2), padding='same', activation='elu')(x)
    x = Conv2D(256, (2, 2), padding='same', activation='elu')(x)
    x = UpSampling2D()(x)  # Upsample to (14, 14, 256)
    x = Conv2D(128, (3, 3), padding='same', activation='elu')(x)
    x = Conv2D(64, (2, 2), padding='same', activation='elu')(x)
    x = UpSampling2D()(x)  # Upsample to (28, 28, 64)
    x = Conv2D(32, (4, 4), padding='same', activation='elu')(x)

    # Final convolutional layer to generate the output image
    generated_image = Conv2D(1, (3, 3), padding='same', activation='sigmoid')(x)

    # Define model with noise and label inputs
    model = Model(inputs=[input_noise, input_label], outputs=generated_image)
    return model
```
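To sanity-check the architecture, one might build the model and inspect its output shape (a quick illustrative check, not part of the original pipeline):

```python
generator = build_generator(latent_dim=100, label_dim=10)
generator.summary()  # final layer should report an output shape of (None, 28, 28, 1)
```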
Discriminator
The discriminator also receives two inputs: the data (either real or generated) and the conditioning information. It processes both inputs to determine whether the data is real and whether it matches the conditioning information.
```python
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Concatenate, Dense)
from tensorflow.keras.models import Model

def build_discriminator(in_shape=(28, 28, 1), label_dim=10):
    input_img = Input(shape=in_shape)
    input_label = Input(shape=(label_dim,))  # One-hot encoded label input

    # Convolutional feature extractor for the (real or generated) image
    x = Conv2D(32, (3, 3), padding='same', activation='elu')(input_img)
    x = MaxPooling2D((2, 2))(x)
    x = Conv2D(64, (3, 3), padding='same', activation='elu')(x)
    x = MaxPooling2D((2, 2))(x)
    x = Conv2D(128, (3, 3), padding='same', activation='elu')(x)
    x = MaxPooling2D((2, 2))(x)
    x = Flatten()(x)

    # Concatenate the flattened features with the label input
    x = Concatenate()([x, input_label])

    x = Dense(256, activation='elu')(x)
    x = Dense(1, activation='sigmoid')(x)  # Probability that the image-label pair is real

    # Define model with both image and label inputs
    model = Model(inputs=[input_img, input_label], outputs=x)
    return model
```
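The two networks are trained adversarially: the discriminator learns to tell real image-label pairs from generated ones, while the generator is trained through a stacked model in which the discriminator's weights are frozen. Below is a minimal sketch of how they might be wired together in Keras; the variable names (`cgan`, `validity`) and the Adam learning rate are illustrative assumptions, not from the original text:

```python
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

latent_dim, label_dim = 100, 10
generator = build_generator(latent_dim, label_dim)
discriminator = build_discriminator(label_dim=label_dim)
discriminator.compile(optimizer=Adam(1e-4), loss='binary_crossentropy',
                      metrics=['accuracy'])

# Freeze the discriminator while the generator is trained through the stack
discriminator.trainable = False
noise_in = Input(shape=(latent_dim,))
label_in = Input(shape=(label_dim,))
fake_image = generator([noise_in, label_in])
validity = discriminator([fake_image, label_in])

cgan = Model([noise_in, label_in], validity)
cgan.compile(optimizer=Adam(1e-4), loss='binary_crossentropy')
```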
Data generation
We pass a one-hot-encoded image label, together with a noise vector, as input to the trained generator network to create new data samples.
```python
import numpy as np

num_images = 10
latent_dim = 100  # Example latent dimension
label_dim = 10    # Example label dimension

# Generate random noise and random conditional labels
noise = np.random.normal(0, 1, (num_images, latent_dim))
labels = np.random.randint(0, label_dim, num_images)  # Random labels for each image

# Convert labels to one-hot encoding
conditional_labels = np.zeros((num_images, label_dim))
conditional_labels[np.arange(num_images), labels] = 1

# Generate images using the conditional generator
gen_imgs = generator_model.predict([noise, conditional_labels])
```
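To inspect the conditioned samples, a quick matplotlib grid works well; the figure size and styling here are illustrative choices:

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, num_images, figsize=(num_images, 1.5))
for i, ax in enumerate(axes):
    ax.imshow(gen_imgs[i, :, :, 0], cmap='gray')
    ax.set_title(str(labels[i]))  # the label each image was conditioned on
    ax.axis('off')
plt.show()
```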