Tips for Training Stable Generative Adversarial Networks
Last Updated on September 12, 2019
The Empirical Heuristics, Tips, and Tricks That You Need to Know to Train Stable Generative Adversarial Networks (GANs).
Generative Adversarial Networks, or GANs for short, are an approach to generative modeling using deep learning methods such as deep convolutional neural networks.
Although the results generated by GANs can be remarkable, it can be challenging to train a stable model. The reason is that the training process is inherently unstable, arising from the simultaneous dynamic training of two competing models.
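To make the "two competing models" concrete, here is a minimal sketch of the alternating update scheme most GAN implementations follow. The update functions are hypothetical placeholders standing in for real gradient steps, not a specific library API.

```python
import random

def update_discriminator(real_batch, fake_batch):
    # In a real GAN, one gradient step maximizing
    # log D(real) + log(1 - D(fake)). Here: a stand-in loss value.
    return random.random()

def update_generator(fake_batch):
    # One gradient step training G to fool D
    # (minimizing log(1 - D(fake)), or maximizing log D(fake)).
    return random.random()

def train_gan(n_steps, sample_real, sample_noise, generate):
    """Alternate discriminator and generator updates each step.

    Because both models are trained simultaneously, an improvement in
    one changes the loss surface seen by the other, which is the source
    of the instability described above.
    """
    history = []
    for _ in range(n_steps):
        real = sample_real()
        fake = generate(sample_noise())
        d_loss = update_discriminator(real, fake)
        g_loss = update_generator(generate(sample_noise()))
        history.append((d_loss, g_loss))
    return history

# Toy usage with dummy data sources (placeholders, not real models).
history = train_gan(
    n_steps=5,
    sample_real=lambda: [0.0] * 8,
    sample_noise=lambda: [random.gauss(0, 1) for _ in range(8)],
    generate=lambda z: z,
)
```

The key structural point is that neither model is trained to convergence on its own; each sees a moving target defined by the other's current parameters.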
Nevertheless, through a large amount of empirical trial and error by many practitioners and researchers, a small number of model architectures and training configurations have been found and reported that result in the reliable training of a stable GAN model.
In this post, you will discover empirical heuristics for the configuration and training of stable generative adversarial network models.
After reading this post, you will know:
- The simultaneous training of generator and discriminator models in GANs is inherently unstable.
- Hard-earned empirically discovered configurations for the DCGAN provide a robust starting point for most GAN applications.
- Stable training of GANs remains an open problem, and many other empirically discovered tips and tricks have been proposed.
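As a concrete reference for the second point above, the hard-earned DCGAN configurations (from Radford et al., 2015) can be collected in a small summary sketch. The dictionary and its key names are my own illustrative structure, not a library API.

```python
# Summary of the DCGAN architecture and training heuristics
# (Radford et al., 2015); key names here are illustrative only.
DCGAN_HEURISTICS = {
    # Architecture guidelines
    "downsampling": "strided convolutions in the discriminator (no pooling)",
    "upsampling": "fractionally strided (transpose) convolutions in the generator",
    "batch_norm": "use in both models, but not at G's output or D's input",
    "fully_connected": "avoid deep fully connected hidden layers",
    "generator_activation": "ReLU in hidden layers, tanh at the output",
    "discriminator_activation": "LeakyReLU with slope 0.2",
    # Training configuration
    "weight_init": "Gaussian, mean 0.0, standard deviation 0.02",
    "optimizer": "Adam",
    "learning_rate": 2e-4,
    "adam_beta1": 0.5,
    "batch_size": 128,
    "input_scaling": "scale images to [-1, 1] to match the tanh output",
}
```

These defaults are the "robust starting point" the post refers to: most modern GAN codebases begin from this configuration and adjust only when a specific problem demands it.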