A Gentle Introduction to Generative Adversarial Network Loss Functions
The generative adversarial network, or GAN for short, is a deep learning architecture for training a generative model for image synthesis.
The GAN architecture is relatively straightforward, although one aspect that remains challenging for beginners is the topic of GAN loss functions. The main reason is that the architecture involves the simultaneous training of two models: the generator and the discriminator.
The discriminator model is updated like any other deep learning neural network, whereas the generator is trained through the discriminator's output, meaning that the loss function for the generator is implicit and learned during training.
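This idea can be made concrete with a minimal sketch. The functions below are hypothetical stand-ins (a logistic-regression "discriminator" and a linear "generator" rather than real neural networks), but they show the key point: the generator has no loss of its own, and its loss is computed entirely from the discriminator's score on the generated samples.

```python
import numpy as np

# Hypothetical stand-ins: in a real GAN these would be neural networks.
def discriminator(x, w):
    # Logistic "discriminator": probability that a sample x is real.
    return 1.0 / (1.0 + np.exp(-x @ w))

def generator(z, v):
    # Linear "generator": maps a noise vector z to a sample.
    return z @ v

rng = np.random.default_rng(0)
w = rng.normal(size=(2, 1))   # discriminator parameters
v = rng.normal(size=(3, 2))   # generator parameters
z = rng.normal(size=(8, 3))   # batch of noise vectors

fake = generator(z, v)
d_fake = discriminator(fake, w)

# The generator's loss is defined by the discriminator's output on the
# generated samples (non-saturating form shown here): drive D(G(z)) toward 1.
generator_loss = -np.mean(np.log(d_fake + 1e-8))
print(f"generator loss: {float(generator_loss):.4f}")
```

In a real framework the same structure appears as a combined model in which the discriminator's weights are frozen while the generator's weights are updated through it.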
In this post, you will discover an introduction to loss functions for generative adversarial networks.
After reading this post, you will know:
- The GAN architecture is defined with the minimax GAN loss, although it is typically implemented using the non-saturating loss function.
- Common alternate loss functions used in modern GANs include the least squares and Wasserstein loss functions.
- Large-scale evaluation of GAN loss functions suggests little difference when other concerns, such as computational budget and model hyperparameters, are held constant.
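The loss functions named above can be sketched side by side. The discriminator scores below are illustrative values only, and treating the Wasserstein critic's output as a probability is a simplification (a real WGAN critic produces unbounded scores), but the sketch shows how each generator loss is computed from the same quantity D(G(z)).

```python
import numpy as np

# Illustrative discriminator scores on a batch of generated samples,
# expressed as probabilities in (0, 1).
d_fake = np.array([0.1, 0.2, 0.05, 0.3])

eps = 1e-8  # numerical stability for the logarithms

# Minimax (saturating) generator loss: minimize log(1 - D(G(z))).
# Its gradient vanishes when the discriminator confidently rejects fakes.
minimax_g = np.mean(np.log(1.0 - d_fake + eps))

# Non-saturating generator loss: minimize -log(D(G(z))),
# which gives much stronger gradients early in training.
non_saturating_g = -np.mean(np.log(d_fake + eps))

# Least squares generator loss (LSGAN): penalize the squared distance
# of fake scores from the "real" target of 1.
least_squares_g = 0.5 * np.mean((d_fake - 1.0) ** 2)

# Wasserstein generator loss (WGAN): the generator maximizes the critic's
# score, i.e. minimizes its negative. (A real critic is unbounded;
# reusing probabilities here is a simplification.)
wasserstein_g = -np.mean(d_fake)

for name, val in [("minimax", minimax_g),
                  ("non-saturating", non_saturating_g),
                  ("least squares", least_squares_g),
                  ("wasserstein", wasserstein_g)]:
    print(f"{name}: {float(val):.4f}")
```

Note how the minimax loss is nearly flat (close to zero gradient) for these low scores, which is exactly the saturation problem the non-saturating form avoids.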
Kick-start your project with my new book Generative Adversarial Networks with Python, including step-by-step tutorials and the Python source code.