An implementation of the Adversarial Patch paper

adversarial-patch PyTorch implementation of adversarial patch. This is an implementation of the Adversarial Patch paper. Not official and likely to have bugs/errors. How to run: Data set-up: Run attack: python make_patch.py --cuda --netClassifier inceptionv3 --max_count 500 --image_size 299 --patch_type circle --outf log Results: Using patch shapes of both circles and squares gave good results (both achieved 100% success on the training set and eventually > 90% success on the test set). I managed to recreate the toaster example in the original […]
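The gist of the attack, sketched below with hypothetical variable names (not the repository's exact code): the patch is pasted onto the input image through a binary mask and optimized by gradient descent to maximize the classifier's probability of a chosen target class.

import torch
import torch.nn.functional as F

def patch_attack_step(model, image, patch, mask, target_class, lr=1.0):
    # Paste the patch onto the image wherever the mask is 1.
    patch = patch.clone().requires_grad_(True)
    adv_image = image * (1 - mask) + patch * mask
    log_probs = F.log_softmax(model(adv_image), dim=1)
    # Maximize the log-probability of the target class (e.g. "toaster").
    loss = -log_probs[:, target_class].mean()
    loss.backward()
    with torch.no_grad():
        new_patch = (patch - lr * patch.grad).clamp(0, 1)
    return new_patch.detach(), loss.item()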

Read more

Photographic Image Synthesis with Cascaded Refinement Networks-Pytorch

Photographic Image Synthesis with Cascaded Refinement Networks-Pytorch This is a PyTorch implementation of cascaded refinement networks to synthesize photographic images from semantic layouts. The pretrained model and the code for training the network from scratch are now available for 256×512 resolution. Thanks to Qifeng Chen for his TensorFlow implementation, which helped a lot in developing this PyTorch version. Testing Download this package and keep all the subsequently mentioned files in the same folder. Download the pretrained VGG19 Net from VGG19 Download […]
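For orientation, a minimal sketch of one cascaded refinement stage (my own simplification, not the repository's module definitions): the previous feature map is upsampled, the semantic layout is resized to the same resolution and concatenated, and two convolutions refine the result.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RefinementModule(nn.Module):
    # One stage of the cascade: double the spatial resolution, re-inject the
    # semantic layout, and refine with two 3x3 convolutions.
    def __init__(self, in_channels, label_channels, out_channels):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels + label_channels, out_channels, 3, padding=1)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1)

    def forward(self, prev_features, semantic_layout):
        h, w = prev_features.shape[2] * 2, prev_features.shape[3] * 2
        x = F.interpolate(prev_features, size=(h, w), mode='bilinear', align_corners=False)
        labels = F.interpolate(semantic_layout, size=(h, w), mode='nearest')
        x = torch.cat([x, labels], dim=1)
        x = F.leaky_relu(self.conv1(x), 0.2)
        return F.leaky_relu(self.conv2(x), 0.2)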

Read more

Efficient Neural Architecture Search (ENAS) in PyTorch

PyTorch implementation of Efficient Neural Architecture Search via Parameters Sharing. ENAS reduces the computational requirement (GPU-hours) of Neural Architecture Search (NAS) by 1000x via parameter sharing between models that are subgraphs within a large computational graph. SOTA on Penn Treebank language modeling. Prerequisites Python 3.6+ PyTorch==0.3.1 tqdm, scipy, imageio, graphviz, tensorboardX Usage Install prerequisites with: conda install graphviz pip install -r requirements.txt To train ENAS to discover a recurrent cell for RNN: python main.py --network_type rnn --dataset ptb --controller_optim adam […]
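The parameter sharing works roughly as sketched below (a simplified illustration, not the repository's code): every candidate operation lives in one shared module, and a sampled child architecture is just a choice of which shared op to apply at each node.

import torch
import torch.nn as nn

class SharedNode(nn.Module):
    # All candidate operations for this node are stored once, so every sampled
    # child model reuses (and keeps training) the same parameters.
    def __init__(self, dim):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.Tanh()),
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()),
            nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid()),
        ])

    def forward(self, x, op_index):
        # op_index comes from the controller's sampled architecture decision.
        return self.ops[op_index](x)

shared_nodes = nn.ModuleList([SharedNode(64) for _ in range(4)])

def run_child(x, sampled_ops):
    # One child model is one path through the shared computational graph.
    for node, op_index in zip(shared_nodes, sampled_ops):
        x = node(x, op_index)
    return x

out = run_child(torch.randn(1, 64), [0, 2, 1, 0])  # ops chosen by the controller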

Read more

A PyTorch Implementation of Neural IMage Assessment

NIMA: Neural IMage Assessment This is a PyTorch implementation of the paper NIMA: Neural IMage Assessment (accepted at IEEE Transactions on Image Processing) by Hossein Talebi and Peyman Milanfar. You can learn more from this post on the Google Research Blog. Implementation Details The model was trained on the AVA (Aesthetic Visual Analysis) dataset containing 255,500+ images. You can get it from here. Note: there may be some corrupted images in the dataset; remove them before you start training. Use […]
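The model itself is small: a pretrained backbone with a dropout plus 10-way softmax head that predicts a distribution over score buckets 1–10, from which a mean aesthetic score is taken. A rough sketch, assuming a VGG16 backbone and 224×224 inputs (names are illustrative, not the repository's):

import torch
import torch.nn as nn
import torchvision.models as models

class NIMA(nn.Module):
    # Backbone features followed by a 10-way softmax over score buckets 1..10.
    def __init__(self):
        super().__init__()
        self.features = models.vgg16(pretrained=True).features
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.75),
            nn.Linear(512 * 7 * 7, 10),
            nn.Softmax(dim=1),
        )

    def forward(self, x):  # x: (batch, 3, 224, 224)
        return self.head(self.features(x))

def mean_score(dist):
    # Expected score: sum over buckets of bucket value times its probability.
    buckets = torch.arange(1, 11, dtype=dist.dtype, device=dist.device)
    return (dist * buckets).sum(dim=1)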

Read more

Given a content photo and a style photo with python

FastPhotoStyle Given a content photo and a style photo, the code can transfer the style of the style photo to the content photo. The details of the algorithm behind the code are documented in our arXiv paper. Please cite the paper if this code repository is used in your publications. GitHub https://github.com/NVIDIA/FastPhotoStyle

Read more

A python implementation of Deep-Image-Analogy based on pytorch

Deep-Image-Analogy This project is a python implementation of Deep Image Analogy. Some results Requirements python 3 opencv3 If you use anaconda, you can install opencv3 by conda install opencv pytorch See pytorch for installation Code in branch “master” works with pytorch 0.4 Code in branch “pytorch0.3” works with pytorch 0.3 cuda (CPU version is not implemented yet) Usage (demo) python main.py --resize_ratio 0.5 --weight 2 --img_A_path data/demo/ava.png --img_BP_path data/demo/mona.png --use_cuda True GitHub https://github.com/Ben-Louis/Deep-Image-Analogy-PyTorch

Read more

Pytorch implementation of i-RevNets

i-RevNet: Deep Invertible Networks Pytorch implementation of i-RevNets. i-RevNets define a family of fully invertible deep networks, built from a succession of homeomorphic layers. Reference: Jörn-Henrik Jacobsen, Arnold Smeulders, Edouard Oyallon. i-RevNet: Deep Invertible Networks. International Conference on Learning Representations (ICLR), 2018. (https://iclr.cc/) The i-RevNet and its dual. The inverse can be obtained from the forward model with minimal adaptation and is an i-RevNet as well. Read the paper for theoretical background and detailed analysis of the trained models. Pytorch […]
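The invertibility comes from coupling blocks of the kind sketched below (a generic additive coupling, simplified from the paper's blocks): splitting the activations into two halves makes the forward map exactly invertible, whatever the inner bottleneck computes.

import torch
import torch.nn as nn

class InvertibleBlock(nn.Module):
    # Additive coupling: the outputs determine the inputs exactly, so a stack
    # of such blocks is invertible by construction.
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.bottleneck = nn.Sequential(
            nn.Conv2d(half, half, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(half, half, 3, padding=1),
        )

    def forward(self, x1, x2):
        return x2, x1 + self.bottleneck(x2)

    def inverse(self, y1, y2):
        return y2 - self.bottleneck(y1), y1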

Read more

An implementation of shampoo, proposed in Shampoo

shampoo.pytorch An implementation of Shampoo, proposed in Shampoo: Preconditioned Stochastic Tensor Optimization by Vineet Gupta, Tomer Koren and Yoram Singer.

# Suppose the size of the tensor grad is (i, j, k),
# dim_id = 1 and dim = j
grad = grad.transpose_(0, dim_id).contiguous()  # (j, i, k)
transposed_size = grad.size()
grad = grad.view(dim, -1)                       # (j, i x k)
grad_t = grad.t()                               # (i x k, j)
precond.add_(grad @ grad_t)                     # (j, j)
inv_precond.copy_(_matrix_power(precond, -1 / order))  # (j, […]
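Assuming the repository exposes a torch.optim-style Shampoo class (the import path below is a guess), using it would look like any other PyTorch optimizer:

import torch
import torch.nn.functional as F
from shampoo import Shampoo  # assumed import path for this repository

model = torch.nn.Linear(10, 2)
optimizer = Shampoo(model.parameters(), lr=1e-3)

data, target = torch.randn(8, 10), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = F.cross_entropy(model(data), target)
loss.backward()
optimizer.step()  # each step maintains one preconditioner per tensor dimension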

Read more

PyTorch NIMA: Neural IMage Assessment

PyTorch NIMA: Neural IMage Assessment PyTorch implementation of Neural IMage Assessment by Hossein Talebi and Peyman Milanfar. You can learn more from this post on the Google Research Blog. Installing Docker docker run -it truskovskiyk/nima:latest /bin/bash PYPI package (In Progress) pip install nima VirtualEnv git clone https://github.com/truskovskiyk/nima.pytorch.git cd nima.pytorch virtualenv -p python3.7 env source ./env/bin/activate Dataset The model was trained on the AVA (Aesthetic Visual Analysis) dataset. You can get it from here. Here are some examples of images with their scores. Pre-train […]
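The paper trains this network with an earth mover's distance (EMD) loss between the predicted and ground-truth score distributions; a minimal sketch of that loss (my own formulation of the standard squared EMD, not necessarily this repository's exact code):

import torch

def emd_loss(pred, target, r=2):
    # Compare the two score distributions via their cumulative distributions.
    cdf_pred = torch.cumsum(pred, dim=1)
    cdf_target = torch.cumsum(target, dim=1)
    diff = torch.abs(cdf_pred - cdf_target) ** r
    return (diff.mean(dim=1) ** (1.0 / r)).mean()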

Read more

Python’s ChainMap: Manage Multiple Contexts Effectively

Sometimes when you’re working with several different dictionaries, you need to group and manage them as a single one. In other situations, you can have multiple dictionaries representing different scopes or contexts and need to handle them as a single dictionary that allows you to access the underlying data following a given order or priority. In those cases, you can take advantage of Python’s ChainMap from the collections module. ChainMap groups multiple dictionaries and mappings in a single, updatable view […]
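For example, ChainMap can layer user settings over defaults so that lookups follow priority order while updates only touch the first mapping:

from collections import ChainMap

defaults = {"theme": "light", "language": "en"}
user_prefs = {"theme": "dark"}

# Lookups search the maps left to right, so user_prefs wins over defaults.
settings = ChainMap(user_prefs, defaults)
print(settings["theme"])     # dark
print(settings["language"])  # en

# Writes and deletions only affect the first mapping.
settings["language"] = "fr"
print(user_prefs)            # {'theme': 'dark', 'language': 'fr'}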

Read more