A Gentle Introduction to Pix2Pix Generative Adversarial Network
Last Updated on December 6, 2019
Image-to-image translation is the controlled conversion of a given source image to a target image.
An example might be the conversion of black and white photographs to color photographs.
Image-to-image translation is a challenging problem and often requires specialized models and loss functions for a given translation task or dataset.
The Pix2Pix GAN is a general approach for image-to-image translation. It is based on the conditional generative adversarial network, where a target image is generated, conditional on a given input image. In this case, the Pix2Pix GAN modifies the loss function so that the generated image is both plausible in the content of the target domain and a plausible translation of the input image.
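To make the modified loss concrete, here is a minimal sketch of the generator objective used by Pix2Pix: a standard adversarial term (rewarding the generator for fooling the discriminator) plus an L1 term weighted by lambda (100 in the paper) that keeps the output close to the ground-truth target. The function name and the use of plain NumPy are illustrative; a real implementation would compute this inside a deep learning framework.

```python
import numpy as np

def pix2pix_generator_loss(disc_pred_on_fake, generated, target, lam=100.0):
    """Sketch of the Pix2Pix generator loss: adversarial term + lambda * L1 term.

    disc_pred_on_fake: discriminator probabilities (0..1) for generated images.
    generated, target: image arrays of the same shape.
    """
    eps = 1e-12  # avoid log(0)
    # Adversarial term: generator wants the discriminator to predict "real" (1)
    adversarial = -np.mean(np.log(disc_pred_on_fake + eps))
    # L1 term: generated image should stay close to the ground-truth target,
    # encouraging a plausible translation rather than just a plausible image
    l1 = np.mean(np.abs(generated - target))
    return adversarial + lam * l1
```

A perfect generator (discriminator fully fooled, output matching the target exactly) drives both terms toward zero; the large lambda means the L1 term dominates training early on.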
In this post, you will discover the Pix2Pix conditional generative adversarial network for image-to-image translation.
After reading this post, you will know:
- Image-to-image translation often requires specialized models and hand-crafted loss functions.
- Pix2Pix GAN provides a general purpose model and loss function for image-to-image translation.
- The Pix2Pix GAN was demonstrated on a wide variety of image generation tasks, including translating photographs from day to night and product sketches to photographs.