Articles About Machine Learning

End-to-End Pre-training for Vision-Language Representation Learning

Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning [CVPR’21, Oral]. By Zhicheng Huang*, Zhaoyang Zeng*, Yupan Huang*, Bei Liu, Dongmei Fu and Jianlong Fu. arXiv: https://arxiv.org/pdf/2104.03135.pdf This is the official implementation of the paper. In this paper, we propose SOHO to “See Out of tHe bOx”: it takes a whole image as input and learns vision-language representations in an end-to-end manner. SOHO does not require bounding-box annotations, which makes inference 10 times faster than region-based approaches. Architecture […]

Read more

Spatially-invariant Style-codes Controlled Makeup Transfer in python

SCGAN Implementation of the CVPR 2021 paper “Spatially-invariant Style-codes Controlled Makeup Transfer”. Prepare The pre-trained model is available at https://drive.google.com/file/d/1t1Hbgqqzc_rV5v3gF7HuJ-xiuEVNb8sh/view?usp=sharing. vgg_conv.pth: https://drive.google.com/file/d/1JNrSVZrK4TfC7pFG-r7AOmGvBXF2VFOt/view?usp=sharing Put the G.pth and VGG weights in “./checkpoints” and “./” respectively. Environment: python=3.8, pytorch=1.6.0, Ubuntu 20.04.1 LTS. Train Put the train list of makeup images in “./MT-Dataset/makeup.txt” and the train list of non-makeup images in “./MT-Dataset/non-makeup.txt”. Use “./scripts/handle_parsing.py” to convert the original MT-Dataset segmentation labels. Run python sc.py --phase train to train. Test 1. Global Makeup Transfer: python sc.py --phase test 2. Part-specific Makeup Transfer […]

Read more

Python package for covariance matrices manipulation and Biosignal classification

pyRiemann pyRiemann is a Python package for covariance matrix manipulation and classification through Riemannian geometry. The primary target is classification of multivariate biosignals, such as EEG, MEG or EMG. This is a work in progress … stay tuned. This code is BSD-licensed (3-clause). Documentation The documentation is available at http://pyriemann.readthedocs.io/en/latest/ Install Using PyPI: pip install pyriemann or using pip+git for the latest version of the code: pip install git+https://github.com/pyRiemann/pyRiemann Anaconda is not currently supported; if you want to use anaconda, […]
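The core primitive behind this kind of classification is a Riemannian distance between symmetric positive-definite (SPD) covariance matrices. As a rough, self-contained illustration (not pyRiemann's own API), the affine-invariant distance can be sketched with numpy and scipy; `riemann_distance` is a hypothetical helper name:

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemann_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    # sqrt(sum_i log^2(lambda_i)), where lambda_i are the eigenvalues
    # of A^{-1} B, obtained here as generalized eigenvalues of (B, A).
    lam = eigvalsh(B, A)
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Toy "trials": 3 epochs of 8-channel signals, 100 samples each.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 8, 100))
covs = np.array([x @ x.T / x.shape[1] for x in X])  # sample covariance per trial

d = riemann_distance(covs[0], covs[1])  # distance between two trials
```

The distance is zero between a matrix and itself and is symmetric in its arguments, which is what makes nearest-mean classifiers on the SPD manifold well defined.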

Read more

Biterm Topic Model : modeling topics in short texts

Biterm Topic Model Bitermplus implements the Biterm topic model for short texts introduced by Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. It is a Cythonized version of BTM. The package can also compute perplexity and semantic coherence metrics. Development Please note that bitermplus is actively improved. Refer to the documentation to stay up to date. Requirements cython numpy pandas scipy scikit-learn tqdm Setup Linux and Windows There should be no issues with installing bitermplus under these OSes. You […]
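The model's basic unit, the "biterm", is an unordered pair of distinct words co-occurring in the same short text. A minimal sketch of biterm extraction in plain Python (a simplification over word types per document, not bitermplus's actual implementation):

```python
from collections import Counter
from itertools import combinations

def extract_biterms(docs):
    # A biterm is an unordered pair of distinct words that co-occur
    # in the same short document; we count each pair once per document.
    counts = Counter()
    for doc in docs:
        words = sorted(set(doc.lower().split()))
        for w1, w2 in combinations(words, 2):
            counts[(w1, w2)] += 1
    return counts

docs = ["apple banana apple", "banana cherry"]
biterms = extract_biterms(docs)  # {("apple","banana"): 1, ("banana","cherry"): 1}
```

BTM then models the corpus-level distribution over these biterms rather than per-document word counts, which is what makes it robust on very short texts.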

Read more

An Adversarial Framework for (non-) Parametric Image Stylization

Fully Adversarial Mosaics (FAMOS) PyTorch implementation of the paper “Copy the Old or Paint Anew? An Adversarial Framework for (non-) Parametric Image Stylization”, available at http://arxiv.org/abs/1811.09236. This code generates image stylizations using an adversarial approach that combines parametric and non-parametric elements. Tested on Ubuntu 16.04 with PyTorch 0.4, Python 3.6, and an NVIDIA P100 GPU. A GPU with 12 GB, 16 GB, or more of VRAM is recommended. Parameters Our method has many possible settings. You can specify them […]

Read more

A Neural Network Approach to Fast Graph Similarity Computation

SimGNN A PyTorch implementation of SimGNN: A Neural Network Approach to Fast Graph Similarity Computation (WSDM 2019). Abstract Graph similarity search is among the most important graph-based applications, e.g. finding the chemical compounds that are most similar to a query compound. Graph similarity/distance computation, such as Graph Edit Distance (GED) and Maximum Common Subgraph (MCS), is the core operation of graph similarity search and many other applications, but it is very costly to compute in practice. Inspired by the recent success of […]
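To make GED usable as a regression target, it is commonly normalized by the average graph size and squashed to (0, 1] with an exponential; a sketch of this mapping (the exact normalization used in any given setup may differ, and `ged_to_similarity` is a hypothetical helper name):

```python
import math

def ged_to_similarity(ged, n1, n2):
    # Normalize the edit distance by the mean number of nodes,
    # then map to a similarity score in (0, 1].
    nged = ged / ((n1 + n2) / 2.0)
    return math.exp(-nged)

ged_to_similarity(0, 5, 5)   # identical graphs -> similarity 1.0
ged_to_similarity(4, 6, 10)  # nGED = 0.5 -> exp(-0.5) ~ 0.61
```

A network like SimGNN then learns to predict this bounded score directly from the two graphs, avoiding the exponential cost of exact GED.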

Read more

A PyTorch implementation of Graph Classification Using Structural Attention

GAM A PyTorch implementation of Graph Classification Using Structural Attention (KDD 2018). Abstract Graph classification is a problem with practical applications in many different domains. To solve this problem, one usually calculates certain graph statistics (i.e., graph features) that help discriminate between graphs of different classes. When calculating such features, most existing approaches process the entire graph. In a graphlet-based approach, for instance, the entire graph is processed to get the total count of different graphlets or subgraphs. In many […]
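As a concrete example of the graphlet counting the abstract describes, counting triangles (the 3-node clique graphlet) requires touching the whole graph; a minimal sketch in plain Python:

```python
from itertools import combinations

def count_triangles(edges):
    # Build an adjacency map with set lookups for O(1) membership tests.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Brute-force over all node triples: fine for small graphs,
    # but illustrates why whole-graph processing gets expensive.
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
count_triangles(edges)  # one triangle: {0, 1, 2}
```

GAM's attention mechanism is motivated by exactly this cost: instead of exhaustively counting substructures, it attends to informative parts of the graph.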

Read more

A PyTorch implementation of Capsule Graph Neural Network

CapsGNN A PyTorch implementation of Capsule Graph Neural Network (ICLR 2019). Abstract The high-quality node embeddings learned from the Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications and some of them have achieved state-of-the-art (SOTA) performance. However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings. Inspired by the Capsule Neural Network (CapsNet), we […]
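The CapsNet building block that CapsGNN borrows replaces scalar activations with vector "capsules", whose length encodes confidence via the squash nonlinearity. A minimal numpy sketch of that function (illustrative only, not the repo's code):

```python
import numpy as np

def squash(v, eps=1e-8):
    # CapsNet squash: keeps the vector's direction but maps its
    # norm into [0, 1), so length can act as a probability.
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return (norm ** 2 / (1.0 + norm ** 2)) * v / (norm + eps)

v = np.array([3.0, 4.0])      # norm 5
out = squash(v)               # same direction, norm 25/26 ~ 0.96
```

Preserving direction while bounding length is what lets capsule vectors carry richer node/graph properties than a single scalar activation.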

Read more

Watch Your Step: Learning Node Embeddings via Graph Attention

Attention Walk A PyTorch Implementation of Watch Your Step: Learning Node Embeddings via Graph Attention (NIPS 2018). Abstract Graph embedding methods represent nodes in a continuous vector space, preserving different types of relational information from the graph. These methods have many hyper-parameters (e.g. the length of a random walk) that have to be manually tuned for every graph. In this paper, we replace previously fixed hyper-parameters with trainable ones that we automatically learn via backpropagation. In particular, we […]
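The core idea can be sketched numerically: instead of fixing the random-walk length, learn a softmax attention over walk steps and take the attention-weighted sum of transition-matrix powers as the expected co-occurrence matrix. A minimal numpy sketch under that reading (`expected_cooccurrence` is a hypothetical helper; in the paper the step weights are trained by backpropagation rather than fixed):

```python
import numpy as np

def expected_cooccurrence(adj, q_logits):
    # Row-normalize the adjacency matrix into a transition matrix P.
    P = adj / adj.sum(axis=1, keepdims=True)
    # Softmax attention over walk steps k = 1 .. K (learned in the paper).
    q = np.exp(q_logits) / np.exp(q_logits).sum()
    E = np.zeros_like(P)
    Pk = np.eye(len(P))
    for qk in q:
        Pk = Pk @ P       # P^k
        E += qk * Pk      # attention-weighted sum of P^1 .. P^K
    return E

adj = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
E = expected_cooccurrence(adj, np.zeros(3))  # uniform attention over 3 steps
```

Because each power P^k is row-stochastic and the attention weights sum to one, E stays row-stochastic, so it can be interpreted as co-occurrence probabilities.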

Read more

Clockwork Variational Autoencoders using JAX and Flax

Clockwork VAEs in JAX/Flax Implementation of the experiments in the paper Clockwork Variational Autoencoders (project website) using JAX and Flax, ported from the official TensorFlow implementation. Running on a single TPU v3, training is 10x faster than reported in the paper (60 h -> 6 h on MineRL). Method Clockwork VAEs are deep generative models that learn long-term dependencies in video by leveraging hierarchies of representations that progress at different clock speeds. In contrast to prior video prediction methods, which typically focus on […]
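The "different clock speeds" idea reduces to a simple tick schedule: with a temporal abstraction factor k, latent level l updates only every k^l timesteps, so higher levels change slowly and capture long-range structure. A minimal sketch (`active_levels` is a hypothetical helper, not the repo's API):

```python
def active_levels(t, num_levels, factor):
    # Level l ticks when t is a multiple of factor**l;
    # level 0 updates every step, higher levels exponentially less often.
    return [l for l in range(num_levels) if t % (factor ** l) == 0]

# With 3 levels and factor 2:
# t=0 -> [0, 1, 2], t=1 -> [0], t=2 -> [0, 1], t=3 -> [0], t=4 -> [0, 1, 2]
schedule = [active_levels(t, 3, 2) for t in range(5)]
```

Between ticks, a level simply carries its state forward, which is what lets the top of the hierarchy model dependencies far beyond the bottom level's horizon.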

Read more