Minimal PyTorch implementation of Generative Latent Optimization
This is a reimplementation of the paper
Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam:
Optimizing the Latent Space of Generative Networks
I’m not one of the authors. I reimplemented parts of the paper in PyTorch to learn about PyTorch and generative models. I also liked the idea in the paper and was surprised that the approach actually works.
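For readers unfamiliar with GLO, the core idea is to give every training image its own learnable latent vector and to optimize these codes jointly with the generator by plain gradient descent, keeping the codes at bounded norm by projecting them after each step. The snippet below is only a minimal sketch of that loop under assumed placeholder names and sizes, not the code in this repository; MSE stands in for the paper's Laplacian-L1 loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the GLO training loop (illustrative, not this repo's code):
# one learnable latent vector per training image, optimized jointly with
# the generator weights.
num_images, latent_dim = 8, 16
images = torch.rand(num_images, 3, 32, 32) * 2 - 1        # stand-in dataset in [-1, 1]
latents = torch.randn(num_images, latent_dim, requires_grad=True)
generator = nn.Sequential(                                 # stand-in for the DCGAN generator
    nn.Linear(latent_dim, 3 * 32 * 32), nn.Tanh(), nn.Unflatten(1, (3, 32, 32)))

optimizer = torch.optim.SGD(
    [{"params": generator.parameters()}, {"params": [latents]}], lr=0.1)

for step in range(100):
    idx = torch.randperm(num_images)[:4]                   # random mini-batch of image indices
    loss = F.mse_loss(generator(latents[idx]), images[idx])  # paper uses a Laplacian-L1 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                                  # keep codes inside the unit l2 ball
        latents[idx] /= latents[idx].norm(dim=1, keepdim=True).clamp(min=1)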
Implementation of the Laplacian pyramid L1 loss is inspired by https://github.com/mtyka/laploss. DCGAN network architecture follows https://github.com/pytorch/examples/tree/master/dcgan.
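The Laplacian pyramid L1 loss compares images at several scales: each pyramid level keeps the detail removed by blurring and downsampling, and the loss sums per-level L1 distances. The sketch below is only an illustration of that idea, assuming a 5x5 binomial blur kernel and a 2^(-2j) per-level weighting; the actual kernel and weighting in this repo (and in mtyka/laploss) may differ.

import torch
import torch.nn.functional as F

def gauss_kernel(channels, device=None, dtype=None):
    # 5x5 binomial kernel, one copy per channel for depthwise convolution
    k = torch.tensor([1., 4., 6., 4., 1.], device=device, dtype=dtype)
    k = k[:, None] * k[None, :]
    k = k / k.sum()
    return k.expand(channels, 1, 5, 5).contiguous()

def laplacian_pyramid(x, levels=3):
    # Difference-of-Gaussians pyramid; the last entry is the low-pass residual.
    kernel = gauss_kernel(x.shape[1], x.device, x.dtype)
    pyramid, current = [], x
    for _ in range(levels):
        blurred = F.conv2d(F.pad(current, (2, 2, 2, 2), mode='reflect'),
                           kernel, groups=current.shape[1])
        pyramid.append(current - blurred)
        current = F.avg_pool2d(blurred, 2)
    pyramid.append(current)
    return pyramid

def laplacian_l1_loss(output, target, levels=3):
    # Sum of per-level L1 distances; the 2**(-2j) weighting is an assumption.
    return sum(2 ** (-2 * j) * F.l1_loss(a, b)
               for j, (a, b) in enumerate(zip(laplacian_pyramid(output, levels),
                                              laplacian_pyramid(target, levels))))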
Running the code
First, install the required packages. For example, in Anaconda, you can simply do
conda install pytorch torchvision -c pytorch
conda install scikit-learn tqdm plac python-lmdb pillow
Download the LSUN dataset (only the bedroom training images are used here) into