Dataset Distillation by Matching Training Trajectories
Project Page | Paper
This repo contains code for training expert trajectories and distilling synthetic data, as described in our CVPR 2022 paper Dataset Distillation by Matching Training Trajectories. Please see our project page for more results.
Dataset Distillation by Matching Training Trajectories
George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu
CMU, MIT, UC Berkeley
CVPR 2022
The task of “Dataset Distillation” is to learn a small set of synthetic images such that a model trained on this set alone achieves test performance similar to that of a model trained on the full real dataset.
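To make the trajectory-matching idea behind this repo concrete, here is a minimal sketch of one outer iteration: a student network is unrolled from an expert checkpoint on the synthetic set, and the loss is the normalized squared distance between where the student lands and a later expert checkpoint. This is an illustrative sketch, not the repo's actual implementation; `net_fn` (a hypothetical functional forward pass over an explicit parameter list), the checkpoint lists, and the hyperparameter names are all assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def distill_step(net_fn, expert_start, expert_target, syn_images, syn_labels,
                 n_student_steps=10, student_lr=0.01):
    """One outer iteration of trajectory matching (sketch): unroll a student
    from an expert checkpoint on the synthetic data, then measure how far it
    ends up from the expert's later checkpoint."""
    # Initialize the student at the expert's parameters at step t.
    student = [p.detach().clone().requires_grad_(True) for p in expert_start]
    for _ in range(n_student_steps):
        logits = net_fn(syn_images, student)  # hypothetical functional forward pass
        ce = F.cross_entropy(logits, syn_labels)
        # create_graph=True keeps the update differentiable, so the final
        # matching loss can be backpropagated into syn_images.
        grads = torch.autograd.grad(ce, student, create_graph=True)
        student = [p - student_lr * g for p, g in zip(student, grads)]
    # Distance to the expert checkpoint M steps ahead, normalized by how far
    # the expert itself traveled over that interval.
    num = sum(((s - e) ** 2).sum() for s, e in zip(student, expert_target))
    den = sum(((a - e) ** 2).sum() for a, e in zip(expert_start, expert_target))
    return num / den
```

In an outer loop, one would backpropagate this loss into `syn_images` (and, as in the paper, optionally a learnable student learning rate) with a standard optimizer, repeating over randomly sampled expert trajectory segments.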