A Gentle Introduction to CycleGAN for Image Translation
Last Updated on August 17, 2019
Image-to-image translation involves generating a new synthetic version of a given image with a specific modification, such as translating a summer landscape to winter.
Training a model for image-to-image translation typically requires a large dataset of paired examples. Such datasets can be difficult and expensive to prepare, and in some cases impossible to collect, such as photographs of paintings by long-dead artists.
The CycleGAN is a technique for the automatic training of image-to-image translation models without paired examples. The models are trained in an unsupervised manner using a collection of images from the source and target domains that do not need to be related in any way.
This simple technique is powerful, achieving visually impressive results across a range of application domains, most notably translating photographs of horses to zebras, and the reverse.
In this post, you will discover the CycleGAN technique for unpaired image-to-image translation.
After reading this post, you will know:
- Image-to-Image translation involves the controlled modification of an image and requires large datasets of paired images that are complex to prepare or sometimes don’t exist.
- CycleGAN is a technique for training unsupervised image translation models via the GAN architecture.
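The core idea behind CycleGAN's unpaired training is cycle consistency: a generator G maps domain X to domain Y, a second generator F maps Y back to X, and the model is penalized when translating an image and translating it back fails to recover the original. The sketch below illustrates only this cycle-consistency loss with toy linear maps standing in for the neural-network generators; the function names `G`, `F`, and `cycle_consistency_loss` are illustrative, not part of any library.

```python
import numpy as np

def G(x):
    # Hypothetical forward generator, X -> Y (a toy linear map,
    # standing in for a trained convolutional generator).
    return 2.0 * x + 1.0

def F(y):
    # Hypothetical reverse generator, Y -> X. Here it is the exact
    # inverse of G, so the cycle loss below will be near zero.
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x_batch, y_batch):
    """L1 cycle loss: |F(G(x)) - x| + |G(F(y)) - y|, averaged."""
    forward_cycle = np.abs(F(G(x_batch)) - x_batch).mean()
    backward_cycle = np.abs(G(F(y_batch)) - y_batch).mean()
    return forward_cycle + backward_cycle

# Unpaired samples from each domain: no correspondence is assumed
# between x[i] and y[i], which is the point of the technique.
rng = np.random.default_rng(0)
x = rng.random((4, 8))
y = rng.random((4, 8))
print(cycle_consistency_loss(x, y))
```

In the real CycleGAN, this cycle term is added to the usual adversarial losses for each generator, encouraging translations that preserve the content of the input rather than mapping every image to an arbitrary member of the target domain.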