Face Stylization based on the paper “AgileGAN: Stylizing Portraits by Inversion-Consistent Transfer Learning”

Introduction: This repo is an efficient toolkit for face stylization based on the paper “AgileGAN: Stylizing Portraits by Inversion-Consistent Transfer Learning”. Note that since the training code of AgileGAN has not been released yet, this repo merely adopts the pipeline from AgileGAN and combines other helpful practices from the literature. The project is built on MMCV and MMGEN; stars and forks are welcome 🤗! Results shown are from FaceStylor trained with MMGEN. Requirements: CUDA 10.0 / CUDA 10.1, Python […]
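As a rough illustration of the two-stage pipeline AgileGAN-style toolkits follow (invert a portrait into latent space with an encoder, then decode it with a generator fine-tuned on stylized faces), here is a minimal sketch. Every module, name, and shape below is an illustrative stand-in, not the actual MMGEN-FaceStylor API.

```python
# Minimal sketch of an AgileGAN-style inference pipeline: encode a portrait
# into a latent code, then decode with a generator fine-tuned on stylized
# faces. All modules and shapes are illustrative stand-ins.
import torch
import torch.nn as nn

class Encoder(nn.Module):            # stand-in for the inversion encoder
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))
    def forward(self, x):
        return self.net(x)           # latent code for the input portrait

class StylizedGenerator(nn.Module):  # stand-in for the fine-tuned StyleGAN
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(512, 3 * 64 * 64), nn.Tanh())
    def forward(self, w):
        return self.net(w).view(-1, 3, 64, 64)

encoder, stylized_g = Encoder(), StylizedGenerator()
portrait = torch.rand(1, 3, 64, 64) * 2 - 1  # stand-in for a real photo
with torch.no_grad():
    w = encoder(portrait)            # 1) invert the photo into latent space
    styled = stylized_g(w)           # 2) decode with the stylized generator
print(styled.shape)                  # torch.Size([1, 3, 64, 64])
```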

Read more

GAN encoders in PyTorch that can match pre-trained PGGAN, StyleGAN v1/v2, and BigGAN generators

MTV-TSA: Adaptable GAN Encoders for Image Reconstruction via Multi-type Latent Vectors with Two-scale Attentions. This is the official code release for “Adaptable GAN Encoders for Image Reconstruction via Multi-type Latent Vectors with Two-scale Attentions”. The code contains a set of encoders that match pre-trained GANs (PGGAN, StyleGANv1, StyleGANv2, BigGAN) via multi-scale vectors with two-scale attentions. Usage: to train an encoder with center attentions (aligned images), run python E_align.py; to train an encoder with Gram-based attentions (misaligned images), run python E_mis_align.py; to embed real images into latent […]
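The commands above train encoders against frozen pre-trained GANs. As a rough illustration of what such encoder training involves, here is a minimal PyTorch sketch that fits an encoder to invert a frozen generator with a pixel reconstruction loss; the tiny networks, latent size, and single loss term are all stand-ins for the repo's actual PGGAN/StyleGAN/BigGAN setups and richer objectives.

```python
# Minimal sketch of encoder-based GAN inversion training: only the encoder
# learns, the generator stays frozen. The tiny networks below are stand-ins,
# not the repo's models; the repo's losses are richer than plain MSE.
import torch
import torch.nn as nn

LATENT_DIM = 128  # assumed latent size for this sketch

class Generator(nn.Module):          # placeholder for a frozen pre-trained GAN
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, w):
        return self.net(w)           # (N, 3, 16, 16) generated image

class Encoder(nn.Module):            # the module actually being trained
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)           # predicted latent vector

G, E = Generator(), Encoder()
G.requires_grad_(False)              # generator stays frozen; only E learns
opt = torch.optim.Adam(E.parameters(), lr=1e-4)

for step in range(100):
    x = torch.rand(8, 3, 16, 16) * 2 - 1  # stand-in for real training images
    w = E(x)                              # encode image to latent
    x_rec = G(w)                          # reconstruct through frozen G
    loss = nn.functional.mse_loss(x_rec, x)  # pixel reconstruction loss only
    opt.zero_grad()
    loss.backward()
    opt.step()
```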

Read more

StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery (ICCV 2021 Oral). Or Patashnik*, Zongze Wu*, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski (*equal contribution, ordered alphabetically). https://arxiv.org/abs/2103.17249 Abstract: Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images. However, discovering semantically meaningful latent manipulations typically involves painstaking human examination of the many degrees […]
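The core mechanism behind StyleCLIP's latent-optimization variant is to treat the latent code as a trainable parameter and minimize a CLIP-based distance between the generated image and a text prompt, plus an L2 term that keeps the edit close to the starting latent. Below is a minimal sketch of that loop; it assumes OpenAI's clip package is installed, and the toy generator is a stand-in for a real pre-trained StyleGAN.

```python
# Minimal sketch of CLIP-guided latent optimization, the idea behind
# StyleCLIP's optimizer variant. Assumes OpenAI's `clip` package
# (pip install git+https://github.com/openai/CLIP); the toy generator
# below is a stand-in for a real pre-trained StyleGAN.
import torch
import torch.nn as nn
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()                # keep everything fp32 for the sketch
model.requires_grad_(False)          # CLIP stays frozen; only the latent moves

class ToyGenerator(nn.Module):       # placeholder: 512-d latent -> RGB image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(512, 3 * 32 * 32), nn.Tanh())
    def forward(self, w):
        img = self.net(w).view(-1, 3, 32, 32)
        return nn.functional.interpolate(img, size=224)  # CLIP wants 224x224

G = ToyGenerator().to(device).requires_grad_(False)

text = clip.tokenize(["a photo of a smiling face"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

w = torch.randn(1, 512, device=device, requires_grad=True)  # latent to optimize
w0 = w.detach().clone()
opt = torch.optim.Adam([w], lr=0.05)

for step in range(50):
    img = (G(w) + 1) / 2                           # map Tanh output to [0, 1]
    img_feat = model.encode_image(img)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    clip_loss = 1 - (img_feat * text_feat).sum()   # cosine distance to prompt
    l2_loss = (w - w0).pow(2).sum()                # stay near the start latent
    loss = clip_loss + 0.01 * l2_loss              # weighting is illustrative
    opt.zero_grad()
    loss.backward()
    opt.step()
```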

Read more