How to Implement Wasserstein Loss for Generative Adversarial Networks
The Wasserstein Generative Adversarial Network, or Wasserstein GAN, is an extension to the generative adversarial network that both improves the stability when training the model and provides a loss function that correlates with the quality of generated images.
It is an important extension to the GAN model and requires a conceptual shift away from a discriminator that predicts the probability of a generated image being “real” and toward the idea of a critic model that scores the “realness” of a given image.
This conceptual shift is motivated mathematically by the earth mover's distance, or Wasserstein distance, which measures the distance between the data distribution observed in the training dataset and the distribution observed in the generated examples.
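For reference, the Wasserstein distance as used in the WGAN is typically expressed through the Kantorovich-Rubinstein duality, which is what makes a trainable critic possible:

```latex
W(p_r, p_g) = \sup_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim p_r}[f(x)] - \mathbb{E}_{x \sim p_g}[f(x)]
```

Here \(p_r\) is the data distribution, \(p_g\) the generator's distribution, and the supremum is taken over all 1-Lipschitz functions \(f\); the critic network plays the role of \(f\), and the Lipschitz constraint is enforced approximately in practice (e.g., via weight clipping).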
In this post, you will discover how to implement Wasserstein loss for Generative Adversarial Networks.
After reading this post, you will know:
- The conceptual shift in the WGAN from discriminator predicting a probability to a critic predicting a score.
- The implementation details for the WGAN as minor changes to the standard deep convolutional GAN.
- The intuition behind the Wasserstein loss function and how to implement it from scratch.
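To make the loss concrete, here is a minimal NumPy sketch. It assumes the common labeling convention of -1 for real images and +1 for fake images, so that minimizing the mean of label times critic score pushes real scores up and fake scores down; the actual post may use a different convention or a Keras backend implementation.

```python
import numpy as np

def wasserstein_loss(y_true, y_pred):
    """Wasserstein critic loss: mean of label * score.

    y_true: labels, -1.0 for real images, +1.0 for fake images
    y_pred: unbounded critic scores (no sigmoid is applied)
    """
    return np.mean(y_true * y_pred)

# Example: a batch of 4 real images (label -1) with critic scores.
real_labels = -np.ones(4)
real_scores = np.array([0.5, 1.2, -0.3, 0.8])
loss = wasserstein_loss(real_labels, real_scores)
```

Because the critic outputs a score rather than a probability, there is no sigmoid on the output layer and no log in the loss, which is the key difference from the standard GAN discriminator loss.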
Kick-start your project with my new book Generative Adversarial Networks.