In
https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py
you compute the generator loss as:

a_loss = self.adversarial.train_on_batch(noise, y)

However, this also updates the discriminator, and only on fake samples. Shouldn't you freeze the discriminator weights for this step?