Articles About Machine Learning

Linear regression for data with measurement errors and intrinsic scatter

BCES is a Python module for performing robust linear regression on (X, Y) data points where both X and Y have measurement errors. The fitting method is bivariate correlated errors and intrinsic scatter (BCES) and follows the description given in Akritas & Bershady 1996, ApJ. Some of the advantages of BCES regression compared to ordinary least squares fitting (quoted from Akritas & Bershady 1996):

- it allows for measurement errors on both variables
- it permits the measurement errors for the two variables to […]
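To illustrate the debiasing idea behind the method's Y|X estimator, here is a minimal NumPy sketch (this is not the module's API; the estimator subtracts the mean measurement-error variances from the sample moments before forming the slope, and the data below are made up for the example):

```python
import numpy as np

def bces_yx(x, y, xerr, yerr, cov_xy_err=None):
    """Sketch of a BCES-style Y|X slope/intercept estimate: debias the
    sample covariance and variance by the mean measurement-error
    (co)variances (following Akritas & Bershady 1996)."""
    if cov_xy_err is None:
        cov_xy_err = np.zeros_like(x)          # assume uncorrelated errors
    cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
    slope = (cov_xy - cov_xy_err.mean()) / (x.var() - np.mean(xerr**2))
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Synthetic data around y = 2x + 1, with errors on both axes
rng = np.random.default_rng(0)
x_true = np.linspace(0, 10, 500)
xerr = np.full_like(x_true, 0.5)
yerr = np.full_like(x_true, 0.5)
x = x_true + rng.normal(0, xerr)
y = 2 * x_true + 1 + rng.normal(0, yerr)
b, a = bces_yx(x, y, xerr, yerr)               # close to slope 2, intercept 1
```

Note how naive OLS on the noisy x would be biased low (attenuation); the debiased denominator is what corrects this.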

Read more

Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers

aft-pytorch

Unofficial PyTorch implementation of Attention Free Transformer's layers by Zhai et al. [abs, pdf] from Apple Inc.

Installation

You can install aft-pytorch via pip:

```
pip install aft-pytorch
```

Usage

You can import the AFT-Full or AFT-Simple layer (as described in the paper) from the package like so:

AFTFull

```python
import torch
from aft_pytorch import AFTFull

layer = AFTFull(
    max_seqlen=20,
    dim=512,
    hidden_dim=64
)

# a batch of 32 sequences, each with 10 timesteps of dimension 512
x = torch.rand(32, 10, 512)
y = layer(x)  # […]
```
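For intuition about what these layers compute, here is a minimal NumPy sketch of the simpler AFT-Simple variant for a single sequence (no learned position biases, no projections; the package's layers are the real PyTorch implementations):

```python
import numpy as np

def aft_simple(q, k, v):
    """Sketch of AFT-Simple (Zhai et al.): instead of pairwise attention,
    values are pooled with a per-dimension softmax of the keys over time,
    then gated elementwise by sigmoid(q).
    q, k, v: arrays of shape (seq_len, dim)."""
    w = np.exp(k - k.max(axis=0, keepdims=True))   # stable softmax over time
    w = w / w.sum(axis=0, keepdims=True)
    pooled = (w * v).sum(axis=0, keepdims=True)    # (1, dim) global context
    return 1.0 / (1.0 + np.exp(-q)) * pooled       # gate by sigmoid(q)

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(10, 512)) for _ in range(3))
y = aft_simple(q, k, v)
print(y.shape)  # (10, 512)
```

The point of the design is cost: the pooled context is computed once, so the layer is linear in sequence length rather than quadratic like standard attention.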

Read more

Efficient Vision Transformers with Dynamic Token Sparsification

DynamicViT

This repository contains the PyTorch implementation for DynamicViT. Created by Yongming Rao, Wenliang Zhao, Benlin Liu, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh.

Model Zoo

We provide our DynamicViT models pretrained on ImageNet:

Usage

Requirements:
- torch>=1.7.0
- torchvision>=0.8.1
- timm==0.4.5

Data preparation: download and extract ImageNet images from http://image-net.org/. The directory structure should be

```
ILSVRC2012/
├── train/
│   ├── n01440764/
│   │   ├── n01440764_10026.JPEG
│   │   ├── n01440764_10027.JPEG
│   │   ├── ……
│   ├── ……
├── val/
│   ├── n01440764/
│   │   ├── ILSVRC2012_val_00000293.JPEG
│   […]
```
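The core idea behind DynamicViT, pruning uninformative tokens so later transformer blocks process fewer of them, can be sketched in plain NumPy (random scores stand in for the model's learned prediction module; the keep ratio here is illustrative):

```python
import numpy as np

def sparsify_tokens(tokens, scores, keep_ratio=0.7):
    """Sketch of dynamic token sparsification: given a per-token keep
    score (in DynamicViT these come from a lightweight prediction
    module), retain only the top keep_ratio fraction of tokens.
    tokens: (n, dim), scores: (n,)."""
    k = max(1, int(round(keep_ratio * len(scores))))
    keep = np.argsort(scores)[::-1][:k]       # indices of top-k scores
    return tokens[np.sort(keep)]              # preserve original token order

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 384))          # e.g. 14x14 patch tokens
scores = rng.random(196)
pruned = sparsify_tokens(tokens, scores, keep_ratio=0.7)
print(pruned.shape)  # (137, 384)
```

Applying such a step hierarchically at several depths is what yields the compute savings, since attention cost scales with the square of the token count.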

Read more

Open-Source Toolkit for End-to-End Speech Recognition leveraging PyTorch-Lightning

Openspeech

Openspeech provides reference implementations of various ASR modeling papers, with recipes for three languages, to perform automatic speech recognition tasks. We aim to make ASR technology easier to use for everyone. Openspeech is backed by two powerful libraries: PyTorch-Lightning and Hydra. Various features are available through these two libraries, including multi-GPU and TPU training, mixed precision, and hierarchical configuration management.

Get Started

We use Hydra to control all the training configurations. If you are not familiar with Hydra we […]

Read more

Text Analytics of Resume Dataset with NLP!

This article was published as a part of the Data Science Blogathon.

Introduction

We all have made our resumes at some point in time. In a resume, we try to include important facts about ourselves, such as our education, work experience, and skills. Let us work on a resume dataset today. The text we put in our resume says a lot about us: our education, skills, work experience, and other information about us are all present in it. […]
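As a minimal illustration of the kind of text analytics such an article performs, here is a sketch of a word-frequency step on resume text (the stopword list and sample text are made up for the example; real pipelines use a fuller stopword list and lemmatization):

```python
import re
from collections import Counter

def top_terms(resume_text, n=3, stopwords=frozenset({"and", "in", "of", "with"})):
    """Sketch of a basic text-analytics step on resume text:
    lowercase, tokenize on letter runs, drop stopwords,
    and count the most frequent terms."""
    tokens = re.findall(r"[a-z]+", resume_text.lower())
    counts = Counter(t for t in tokens if t not in stopwords)
    return counts.most_common(n)

resume = ("Python developer with experience in Python, SQL and machine "
          "learning. Built machine learning pipelines in Python.")
print(top_terms(resume))  # [('python', 3), ('machine', 2), ('learning', 2)]
```

Frequency counts like these are typically the first step before TF-IDF weighting or skill-keyword matching across the dataset.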

Read more

Reinforcement Learning via Sequence Modeling

Decision Transformer

Lili Chen*, Kevin Lu*, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas†, and Igor Mordatch† (*equal contribution, †equal advising)

Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling. Contains scripts to reproduce experiments.

Instructions

We provide code in two sub-directories: atari, containing code for the Atari experiments, and gym, containing code for the OpenAI Gym experiments. See the corresponding READMEs in each folder for instructions; scripts should be run from the respective directories. It may be necessary to […]
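Decision Transformer casts RL as sequence modeling by conditioning each timestep on the return-to-go, i.e. the sum of future rewards. A minimal sketch of that preprocessing step (the repo's actual data pipeline differs in details such as normalization):

```python
def returns_to_go(rewards, gamma=1.0):
    """Sketch of the return-to-go sequence Decision Transformer
    conditions on: at each timestep, the (discounted) sum of remaining
    rewards, computed in one backward pass over the trajectory."""
    rtg = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

print(returns_to_go([1, 0, 2, 1]))  # [4.0, 3.0, 3.0, 1.0]
```

At inference time the desired return is supplied as the first conditioning token, and the model autoregressively emits actions consistent with achieving it.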

Read more

Gotta Go Fast When Generating Data with Score-Based Models

score_sde_fast_sampling

This repo contains the official implementation for the paper Gotta Go Fast When Generating Data with Score-Based Models, which shows how to generate data as fast as possible with score-based models using a well-designed SDE solver. See the blog post for more details.

Pretrained checkpoints

https://drive.google.com/drive/folders/10pQygNzF7hOOLwP3q8GiNxSnFRpArUxQ?usp=sharing

References

If you find the code useful for your research, please consider citing

```
@article{jolicoeurmartineau2021gotta,
  title={Gotta Go Fast When Generating Data with Score-Based Models},
  author={Alexia Jolicoeur-Martineau and Ke Li and R{\'e}mi Pich{\'e}-Taillefer and Tal […]
```
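For context, the baseline SDE solver that this line of work improves on is Euler-Maruyama: fixed steps, one noise draw per step. A toy NumPy sketch on a simple Ornstein-Uhlenbeck SDE (this is not the paper's solver, which adds adaptive step sizes and error control, and in score-based sampling the drift would involve a learned score network):

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, t0, t1, n_steps, rng):
    """Sketch of the Euler-Maruyama method for dx = f(x,t) dt + g(t) dW:
    x <- x + f(x,t) dt + g(t) sqrt(|dt|) z, with z ~ N(0, I) each step."""
    dt = (t1 - t0) / n_steps
    x, t = np.array(x0, dtype=float), t0
    for _ in range(n_steps):
        z = rng.normal(size=x.shape)
        x = x + drift(x, t) * dt + diffusion(t) * np.sqrt(abs(dt)) * z
        t += dt
    return x

# Ornstein-Uhlenbeck toy SDE: dx = -x dt + 0.1 dW, integrated from t=0 to 5
rng = np.random.default_rng(0)
x = euler_maruyama(np.ones(1000), lambda x, t: -x, lambda t: 0.1, 0.0, 5.0, 500, rng)
print(abs(x.mean()) < 0.1)  # the ensemble mean decays from 1 toward 0
```

The paper's point is that a smarter solver reaches comparable sample quality with far fewer drift/score evaluations than a fixed-step scheme like this.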

Read more

Pytorch implementation of Generative Models as Distributions of Functions

Generative Models as Distributions of Functions

This repo contains code to reproduce all experiments in Generative Models as Distributions of Functions.

Requirements

Requirements for training the models can be installed using pip install -r requirements.txt. All experiments were run using Python 3.8.10.

Training a model

To train a model on CelebAHQ64, run

```
python main.py configs/config_celebahq64.json
```

Example configs to reproduce the results in the paper are provided in the configs folder. Note that you will have to provide a path to […]
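The title's viewpoint treats a data point such as an image as a function from coordinates to features, so a generative model becomes a distribution over such functions. A toy NumPy sketch of evaluating one such function on a pixel grid, with a small random MLP standing in for a sampled function (the paper learns the distribution over these functions; everything here is illustrative):

```python
import numpy as np

def random_function(coords, rng, hidden=64, out_dim=3):
    """Sketch of the 'data as functions' view: an image is a map from
    (x, y) pixel coordinates to RGB values, represented here by a tiny
    random two-layer MLP with tanh activations (outputs in (-1, 1))."""
    w1 = rng.normal(size=(coords.shape[1], hidden))
    w2 = rng.normal(size=(hidden, out_dim))
    return np.tanh(np.tanh(coords @ w1) @ w2)

# Evaluate one sampled function on a 64x64 coordinate grid
xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (4096, 2)
rng = np.random.default_rng(0)
img = random_function(coords, rng).reshape(64, 64, 3)
print(img.shape)  # (64, 64, 3)
```

Because the representation is a function rather than a pixel array, the same sampled model can be evaluated at any resolution simply by choosing a denser coordinate grid.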

Read more

Winning solution of the Indoor Location & Navigation Kaggle competition

Indoor-Location-Navigation-Public

This repository contains the code to generate the winning solution of the Kaggle competition on indoor location and navigation organized by Microsoft Research.

Authors: Are Haartveit, Dmitry Gordeev, Tom Van de Wiele

Steps to obtain the approximate winning submission

1. Clone the repository; it doesn't matter where you clone it to, since the source code and data are disentangled.
2. Create a project folder on a disk with at least 150GB of free space.
3. Create a "Data" subfolder in your project […]

Read more

A PyTorch library and evaluation platform for end-to-end compression research

CompressAI

CompressAI (compress-ay) is a PyTorch library and evaluation platform for end-to-end compression research. CompressAI currently provides:

- custom operations, layers and models for deep-learning-based data compression
- a partial port of the official TensorFlow compression library
- pre-trained end-to-end compression models for learned image compression
- evaluation scripts to compare learned models against classical image/video compression codecs

Note: Multi-GPU support is now experimental.

Installation

CompressAI supports Python 3.6+ and PyTorch 1.4+.

pip:

```
pip install compressai
```

Note: wheels are available for Linux and […]
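When comparing learned codecs against classical ones, the two standard axes are distortion (e.g. PSNR) and rate (bits per pixel). A minimal NumPy sketch of both metrics (this is not CompressAI's evaluation code; the sample arrays are made up):

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    its reconstruction; a standard distortion metric in codec papers."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10 * np.log10(max_val**2 / mse)

def bpp(num_bits, height, width):
    """Rate in bits per pixel for a compressed representation."""
    return num_bits / (height * width)

ref = np.zeros((64, 64), dtype=np.uint8)
rec = ref + 4                                    # uniform error of 4 levels
print(round(psnr(ref, rec), 2))                  # 36.09 dB
print(bpp(8192, 64, 64))                         # 2.0 bpp
```

Plotting PSNR against bpp across quality settings yields the rate-distortion curves on which learned and classical codecs are compared.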

Read more