Dataset Distillation by Matching Training Trajectories

Project Page | Paper. This repo contains code for training expert trajectories and distilling synthetic data from our Dataset Distillation by Matching Training Trajectories paper (CVPR 2022). Please see our project page for more results. Dataset Distillation by Matching Training Trajectories, by George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu (CMU, MIT, UC Berkeley), CVPR 2022. The task of “Dataset Distillation” is to learn a small number of synthetic images such that a model trained on this set […]
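The excerpt only states the distillation task; as a rough, hedged sketch of the trajectory-matching idea (not the repository's actual code), the objective can be written as below. The functional forward pass `student_net(images, params=...)`, the learning rate, and the step counts are all assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def trajectory_matching_loss(student_net, syn_images, syn_labels,
                             expert_start, expert_target,
                             n_student_steps=10, syn_lr=0.01):
    """Hedged sketch: start a student from an expert checkpoint, train it
    for a few steps on the synthetic data, and match its parameters to a
    later expert checkpoint, normalized by how far the expert moved."""
    theta_start = torch.cat([p.flatten() for p in expert_start])
    theta_target = torch.cat([p.flatten() for p in expert_target])

    # Initialize the student at the expert's starting parameters.
    params = [p.clone().requires_grad_(True) for p in expert_start]

    for _ in range(n_student_steps):
        logits = student_net(syn_images, params=params)  # assumed functional forward pass
        ce = F.cross_entropy(logits, syn_labels)
        grads = torch.autograd.grad(ce, params, create_graph=True)
        params = [p - syn_lr * g for p, g in zip(params, grads)]

    theta_student = torch.cat([p.flatten() for p in params])

    # Normalized squared distance between the student's and expert's endpoints.
    return ((theta_student - theta_target) ** 2).sum() / \
           ((theta_start - theta_target) ** 2).sum()
```

Gradients of this loss flow back into `syn_images`, which is what makes the synthetic set learnable.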

Read more

Implementation of some unbalanced losses, like focal_loss, dice_loss, DSC Loss, GHM Loss, etc.

Implementation of some losses for unbalanced NLP tasks, like focal_loss, dice_loss, DSC Loss, GHM Loss, etc. Summary: this repository contains implementations of losses for unbalanced data. How to use? You can find all the loss usage information in test_loss.py. Here is a simple demo of usage: import torch; from unbalanced_loss.focal_loss import MultiFocalLoss; batch_size, num_class = 64, 10; Loss_Func = MultiFocalLoss(num_class=num_class, gamma=2.0, reduction=
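The demo in the excerpt is cut off at `reduction=`; a possible completion of that usage example is sketched below (the `reduction='mean'` value and the random tensor shapes are assumptions, not taken from the repository):

```python
import torch
from unbalanced_loss.focal_loss import MultiFocalLoss

batch_size, num_class = 64, 10
Loss_Func = MultiFocalLoss(num_class=num_class, gamma=2.0, reduction='mean')  # reduction mode assumed

logits = torch.rand(batch_size, num_class, requires_grad=True)   # raw model outputs
targets = torch.randint(0, num_class, size=(batch_size,))        # ground-truth class indices

loss = Loss_Func(logits, targets)
loss.backward()
```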

Read more

Python wrapper kernel for Crystal

Simple Python wrapper kernel for the Crystal language. ICrystal is the widely used Jupyter kernel for Crystal, which uses ICR; this crystal_kernel, on the other hand, uses the official Crystal interpreter. Forked from bash_kernel. Installation: make sure the Crystal interpreter starts with crystal i, then type the following commands: pip install crystal_kernel; python -m crystal_kernel.install. Development: something is better than nothing.

Read more

Please stop writing shell scripts

When you’re automating some task, for example packaging your application for Docker, you’ll often find yourself writing shell scripts. You might have a bash script to drive the packaging process, and another script as an entry point for the container. As your packaging grows in complexity, so does your shell script. Everything works fine. And then, one day, your shell script does something completely wrong. That’s when you realize your mistake: bash, and shell scripting languages in general, are mostly […]

Read more

The first released system for detection and recognition of complex meters, implemented with computer vision techniques

This is the first released system for detection and recognition of complex meters in the wild. The system can be divided into three modules. First, a YOLO-based detector is applied to extract the pure meter region. Second, a spatial transformer module is established to rectify the position of the meter. Last, an end-to-end network reads the meter values, implemented via pointer/dial prediction and key number learning. Visualization results: the left row is the original image, the middle row is the process of […]
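As a rough illustration of how the three modules compose (every name below is a hypothetical placeholder, not the project's API):

```python
def read_meters(image, detector, rectifier, reader):
    """Hedged sketch of the three-stage pipeline described above:
    detect meter regions, rectify each crop, then read the value."""
    # 1) YOLO-based detection: crop the pure meter regions from the scene.
    boxes = detector(image)                      # [(x1, y1, x2, y2), ...]
    crops = [image[..., y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]

    readings = []
    for crop in crops:
        # 2) Spatial transformer: rectify the meter's position/pose.
        rectified = rectifier(crop)
        # 3) End-to-end reader: pointer/dial prediction plus key-number
        #    learning, combined into a final meter value.
        readings.append(reader(rectified))
    return readings
```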

Read more

Official implementation of AdaTime: A Benchmarking Suite for Domain Adaptation on Time Series Data

By Mohamed Ragab*, Emadeldeen Eldele*, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee Kwoh, Xiaoli Li. AdaTime is a PyTorch suite to systematically and fairly evaluate different domain adaptation methods on time series data. Requirements: Python 3, Pytorch==1.7, Numpy==1.20.1, scikit-learn==0.24.1, Pandas==1.2.4, skorch==0.10.0 (for DEV risk calculations), openpyxl==3.0.7 (for classification reports), Wandb==0.12.7 (for sweeps). Datasets: we used four public datasets in this study and also provide the preprocessed versions. Adding a new dataset, structure of data: To […]

Read more

Towards Data-Efficient Detection Transformers

By Wen Wang, Jing Zhang, Yang Cao, Yongliang Shen, and Dacheng Tao. This repository is an official implementation of DE-CondDETR and DELA-CondDETR in the paper Towards Data-Efficient Detection Transformers. For the implementation of DE-DETR and DELA-DETR, please refer to DE-DETRs. Introduction. TL;DR: We identify the data-hungry issue of existing detection transformers and alleviate it by simply alternating how key and value sequences are constructed in the cross-attention layer, with minimal modifications to the original models. Besides, we introduce a […]
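The TL;DR above centers on how the key and value sequences for decoder cross-attention are constructed; the sketch below only shows the general hook point (a pluggable key/value construction per cross-attention block) and is not the paper's actual alternation scheme. All names are placeholders:

```python
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Generic decoder cross-attention whose key/value sequences come from
    a pluggable `build_kv` callable; the paper's models change how this
    construction is done, which this sketch does not reproduce."""
    def __init__(self, d_model=256, n_heads=8, build_kv=None):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Default: use the encoder memory unchanged for both keys and values.
        self.build_kv = build_kv or (lambda queries, memory: (memory, memory))

    def forward(self, queries, memory):
        # queries: (B, num_queries, d_model); memory: (B, HW, d_model)
        keys, values = self.build_kv(queries, memory)
        out, _ = self.attn(queries, keys, values)
        return out
```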

Read more

Local-Global Context Aware Transformer for Language-Guided Video Segmentation

This repository is an official PyTorch implementation of the paper Local-Global Context Aware Transformer for Language-Guided Video Segmentation. Chen Liang, Wenguan Wang, Tianfei Zhou, Jiaxu Miao, Yawei Luo, Yi Yang, arXiv 2022. News & Update Logs: [2022-03-17] Repo created; paper, code, and data will come in a few days, stay tuned. [2022-03-18] Inference code, pretrained weights, and data for A2D-S+ released. [2022-03-21] arXiv (full paper available). Upcoming: instructions on usage, training code and detailed instructions, code for dataset creation. Abstract: We explore […]

Read more