Articles About Machine Learning

Rethinking the Design Principles of Robust Vision Transformer

Robust-Vision-Transformer Note: Since the model is trained on our private platform, this transferred code has not been tested and may have some bugs. If you encounter any problems, feel free to open an issue! This repository contains PyTorch code for Robust Vision Transformers. For details, see our paper “Rethinking the Design Principles of Robust Vision Transformer”. First, clone the repository locally: git clone https://github.com/vtddggg/Robust-Vision-Transformer.git Install PyTorch 1.7.0+, torchvision 0.8.1+, and pytorch-image-models 0.3.2: conda install -c pytorch pytorch torchvision pip […]

Read more

PyTorch implementation of some learning rate schedulers for deep learning researchers

pytorch-lr-scheduler PyTorch implementation of some learning rate schedulers for deep learning researchers. Usage WarmupReduceLROnPlateauScheduler import torch from lr_scheduler.warmup_reduce_lr_on_plateau_scheduler import WarmupReduceLROnPlateauScheduler if __name__ == '__main__': max_epochs, steps_in_epoch = 10, 10000 model = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))] optimizer = torch.optim.Adam(model, 1e-10) scheduler = WarmupReduceLROnPlateauScheduler( optimizer, init_lr=1e-10, peak_lr=1e-4, warmup_steps=30000, patience=1, factor=0.3, ) for epoch in range(max_epochs): for timestep in range(steps_in_epoch): … … if timestep < warmup_steps: scheduler.step() val_loss = validate() scheduler.step(val_loss) TransformerLRScheduler import torch from lr_scheduler.transformer_lr_scheduler import TransformerLRScheduler if __name__ == '__main__': max_epochs, steps_in_epoch […]
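The flattened snippet above leaves warmup_steps undefined inside the training loop; a minimal sketch of the intended warmup-then-plateau pattern, assuming the constructor signature shown in the excerpt and a hypothetical validate() function, could look like this:

import torch
from lr_scheduler.warmup_reduce_lr_on_plateau_scheduler import WarmupReduceLROnPlateauScheduler

max_epochs, steps_in_epoch = 10, 10000
warmup_steps = 30000  # must match the value passed to the scheduler

# Toy parameters stand in for a real model.
model = [torch.nn.Parameter(torch.randn(2, 2, requires_grad=True))]
optimizer = torch.optim.Adam(model, lr=1e-10)

scheduler = WarmupReduceLROnPlateauScheduler(
    optimizer, init_lr=1e-10, peak_lr=1e-4,
    warmup_steps=warmup_steps, patience=1, factor=0.3,
)

global_step = 0
for epoch in range(max_epochs):
    for timestep in range(steps_in_epoch):
        # ... forward pass, loss.backward(), optimizer.step() ...
        if global_step < warmup_steps:
            scheduler.step()      # per-step warmup towards peak_lr
        global_step += 1
    val_loss = validate()         # hypothetical validation routine
    scheduler.step(val_loss)      # plateau-based decay once per epoch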

Read more

An all-MLP replacement for Transformers in PyTorch

gMLP – PyTorch Implementation of gMLP, an all-MLP replacement for Transformers, in PyTorch Install $ pip install g-mlp-pytorch Usage For masked language modelling import torch from g_mlp_pytorch import gMLP model = gMLP( num_tokens = 20000, dim = 512, depth = 6, seq_len = 256 ) x = torch.randint(0, 20000, (1, 256)) logits = model(x) # (1, 256, 20000) For image classification import torch from g_mlp_pytorch import gMLPVision model = gMLPVision( image_size = 256, patch_size = 16, num_classes = 1000, dim […]
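The image-classification example is truncated above; a minimal sketch, assuming gMLPVision takes the same depth argument as the language-model variant shown earlier, might be:

import torch
from g_mlp_pytorch import gMLPVision

# depth = 6 is an assumed value; the excerpt is cut off before this argument.
model = gMLPVision(
    image_size = 256,
    patch_size = 16,
    num_classes = 1000,
    dim = 512,
    depth = 6
)

img = torch.randn(1, 3, 256, 256)   # one 256x256 RGB image
logits = model(img)                  # (1, 1000) class logits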

Read more

An ML-Ops platform that helps you collaborate and share your Machine Learning work

MLReef Your Machine Learning life cycle in one platform MLReef is an open source ML-Ops platform that helps you collaborate, reproduce, and share your Machine Learning work. MLReef is an ML/DL development platform containing four main sections: Data-Management – Fully versioned data hosting and processing infrastructure Publishing code repositories – Containerized and versioned script repositories for immutable use in data pipelines Experiment Manager – Experiment tracking, environments and results ML-Ops – Pipelines & Orchestration solution for ML/DL jobs (K8s / […]

Read more

An Unsupervised Graph-based Toolbox for Fraud Detection

UGFraud UGFraud is an unsupervised graph-based fraud detection toolbox that integrates several state-of-the-art graph-based fraud detection algorithms. It can be applied to bipartite graphs (e.g., user-product graph), and it can estimate the suspiciousness of both nodes and edges. The implemented models can be found here. The toolbox incorporates the Markov Random Field (MRF)-based algorithm, dense-block detection-based algorithm, and SVD-based algorithm. For MRF-based algorithms, the users only need the graph structure and the prior suspicious score of the nodes as the […]
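As a rough illustration of the inputs the MRF-based detectors work from (this is not UGFraud's actual API), a bipartite user-product graph plus prior suspicious scores for the nodes could be sketched with networkx:

import networkx as nx

# Hypothetical toy data: users review products; edges carry the review rating.
reviews = [
    ("user_1", "product_a", 5),
    ("user_1", "product_b", 5),
    ("user_2", "product_a", 1),
    ("user_3", "product_b", 4),
]

graph = nx.Graph()
for user, product, rating in reviews:
    graph.add_node(user, bipartite="user")
    graph.add_node(product, bipartite="product")
    graph.add_edge(user, product, rating=rating)

# Prior suspicious scores in [0, 1] for the user nodes, e.g. from simple
# heuristics; the graph structure plus these priors are what the MRF-based
# algorithms consume.
node_priors = {"user_1": 0.9, "user_2": 0.2, "user_3": 0.1}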

Read more

Generating a word cloud from Twitter with Python

auto_tweet_wordcloud This repo is an automated action that generates a word cloud from Twitter. Preconditions Install Python dependencies: pip install -r requirements.txt Download the neologd dictionary: sh scripts/download_neologd_dict.sh Usage python src/main.py Demo (sample images): Default, Default Alpha, Man Face in Profile, Man Face in Profile Alpha, Twitter Bird, Twitter Bird Alpha GitHub https://github.com/tubone24/auto_tweet_wordcloud
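Independent of this repository's mask images and Japanese tokenization via the neologd dictionary, the core word cloud step can be sketched with the wordcloud package (the tweet texts here are made up; the repo pulls them from the Twitter API instead):

from wordcloud import WordCloud

# Hypothetical tweet texts standing in for data fetched from Twitter.
tweets = [
    "Shipped a new release of my word cloud action today",
    "Word clouds are a quick way to see what you tweet about most",
]

text = " ".join(tweets)
cloud = WordCloud(width=800, height=400, background_color="white").generate(text)
cloud.to_file("wordcloud.png")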

Read more

PyTorch implementation of FNet: Mixing Tokens with Fourier Transforms

FNet: Mixing Tokens with Fourier Transforms PyTorch implementation of FNet: Mixing Tokens with Fourier Transforms. Citation: @misc{leethorp2021fnet, title={FNet: Mixing Tokens with Fourier Transforms}, author={James Lee-Thorp and Joshua Ainslie and Ilya Eckstein and Santiago Ontanon}, year={2021}, eprint={2105.03824}, archivePrefix={arXiv}, primaryClass={cs.CL} } GitHub https://github.com/rishikksh20/FNet-pytorch
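The paper's core idea is to replace self-attention with a parameter-free Fourier mixing step; a minimal sketch of that token-mixing operation (not taken from this repository) is:

import torch

def fourier_mix(x: torch.Tensor) -> torch.Tensor:
    # Apply an FFT over the hidden dimension, then over the sequence
    # dimension, and keep only the real part, as described in the paper.
    # x has shape (batch, seq_len, dim).
    return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real

x = torch.randn(1, 256, 512)   # (batch, tokens, hidden dim)
mixed = fourier_mix(x)         # same shape; tokens mixed via Fourier transforms

In the full model this mixing sublayer is followed by the usual residual connections, layer norms, and position-wise feed-forward layers.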

Read more

Vision-and-Language Transformer Without Convolution or Region Supervision

ViLT Code for the ICML 2021 (long talk) paper: “ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision” Install pip install -r requirements.txt pip install -e . Download Pretrained Weights We provide five pretrained weights ViLT-B/32 Pretrained with MLM+ITM for 200k steps on GCC+SBU+COCO+VG (ViLT-B/32 200k) link ViLT-B/32 200k finetuned on VQAv2 link ViLT-B/32 200k finetuned on NLVR2 link ViLT-B/32 200k finetuned on COCO IR/TR link ViLT-B/32 200k finetuned on F30K IR/TR link Out-of-the-box MLM + Visualization Demo pip install gradio==1.6.4 […]

Read more

Variational Relational Point Completion Network

VRCNet Real-scanned point clouds are often incomplete due to viewpoint, occlusion, and noise. Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details. Furthermore, they mostly learn a deterministic partial-to-complete mapping, but overlook structural relations in man-made objects. To tackle these challenges, this paper proposes a variational framework, Variational Relational point Completion network (VRCNet) with two appealing properties: 1) Probabilistic Modeling. In particular, we propose a dual-path architecture to enable principled probabilistic modeling […]

Read more

A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet

Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet TL;DR We replace the attention layer in a vision transformer with a feed-forward layer and find that it still works quite well on ImageNet. Abstract The strong performance of vision transformers on image classification and other vision tasks is often attributed to the design of their multi-head attention layers. However, the extent to which attention is responsible for this strong performance remains unclear. In this […]
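The substitution described above, replacing the attention sublayer with a feed-forward layer applied across the patch dimension, can be sketched independently of the authors' code as:

import torch
import torch.nn as nn

class FeedForwardOverPatches(nn.Module):
    # Drop-in replacement for an attention layer: a linear layer applied
    # across the patch (token) dimension rather than the feature dimension.
    def __init__(self, num_patches: int):
        super().__init__()
        self.mix = nn.Linear(num_patches, num_patches)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_patches, dim) -> mix information across patches
        return self.mix(x.transpose(1, 2)).transpose(1, 2)

block = FeedForwardOverPatches(num_patches=196)   # 14x14 patches, an assumed size
tokens = torch.randn(2, 196, 384)
out = block(tokens)                                # (2, 196, 384)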

Read more