Diverse im2im and vid2vid selfie to anime translation

GANs N’ Roses (PyTorch)

Official PyTorch repo for GANs N’ Roses: diverse im2im and vid2vid selfie-to-anime translation.


Abstract:

We show how to learn a map that takes a content code, derived from a face image, and a randomly chosen style code to an anime image. We derive an adversarial loss from simple, effective definitions of style and content. This adversarial loss guarantees that the map is diverse: a very wide range of anime can be produced from a single content code. Under plausible assumptions, the map is not only diverse, but also correctly represents the probability of an anime image conditioned on an input face. In contrast, current multimodal generation procedures cannot capture the complex styles that appear in anime.
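The mapping described in the abstract can be sketched in PyTorch: a content encoder extracts a spatial content code from a face image, and a decoder combines that code with a randomly sampled style vector to produce an output image. Sampling several style codes for one content code yields diverse outputs. All module names, layer choices, and dimensions below are illustrative assumptions, not the paper's actual architecture or losses.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Toy encoder: face image -> spatial content code (illustrative)."""
    def __init__(self, content_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, content_dim, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Toy decoder: (content code, style code) -> anime-style image."""
    def __init__(self, content_dim=64, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(content_dim + style_dim, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, content, style):
        # Broadcast the style vector over the spatial content map,
        # then decode the concatenated features.
        style = style[:, :, None, None].expand(-1, -1, content.shape[2], content.shape[3])
        return self.net(torch.cat([content, style], dim=1))

enc, dec = ContentEncoder(), Decoder()
face = torch.randn(1, 3, 64, 64)            # one selfie
content = enc(face)                          # a single content code
styles = torch.randn(4, 8)                   # four random style codes
anime = dec(content.expand(4, -1, -1, -1), styles)
print(tuple(anime.shape))                    # four diverse outputs from one face
```

In the real system, the decoder would be trained with the paper's adversarial loss so that varying the style code sweeps over the range of anime styles while the content code pins down pose and layout; the sketch above only shows the data flow.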