Image2Reverb

Image2Reverb: Cross-Modal Reverb Impulse Response Synthesis
Nikhil Singh, Jeff Mentch, Jerry Ng, Matthew Beveridge, Iddo Drori

Code for the ICCV 2021 paper [arXiv]. Image2Reverb is a method for generating audio impulse responses from a single 2D image of an environment, simulating that environment's acoustic reverberation.
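
A generated impulse response can be applied to dry audio by convolution to render the reverberation. Below is a minimal sketch using SciPy and PySoundfile (both listed under dependencies); the file names are placeholders, and mono signals at matching sample rates are assumed.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Placeholder file names: a dry (anechoic) recording and an impulse
# response generated by the model from an image of some environment.
dry, sr = sf.read("dry_speech.wav")
ir, sr_ir = sf.read("generated_ir.wav")
assert sr == sr_ir, "Resample one signal so the sample rates match."

# Convolving the dry signal with the impulse response simulates the
# reverberation of the pictured environment (mono signals assumed).
wet = fftconvolve(dry, ir, mode="full")
wet /= np.max(np.abs(wet)) + 1e-8  # normalize to avoid clipping
sf.write("wet_speech.wav", wet, sr)
```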

Dependencies

Model/Data:

  • PyTorch>=1.7.0
  • PyTorch Lightning
  • torchvision
  • torchaudio
  • librosa
  • PyRoomAcoustics
  • PIL

Eval/Preprocessing:

  • PySoundfile
  • SciPy
  • Scikit-Learn
  • python-acoustics
  • google-images-download
  • matplotlib
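
As a quick sanity check that the core model/data dependencies listed above are available (the import names below are the usual ones for these packages, but may vary by version):

```python
# Sanity check: import the core model/data dependencies.
import torch
import torchvision
import torchaudio
import librosa
import pyroomacoustics
import pytorch_lightning as pl
from PIL import Image

print("PyTorch", torch.__version__, "| Lightning", pl.__version__)
```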

Resources

Model Checkpoint
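
A rough inference sketch using the released checkpoint, assuming a PyTorch Lightning module; the module path, class name, checkpoint filename, input size, and output format below are assumptions rather than the repository's documented API, so consult the code here for the actual entry points.

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed names: the real module path, class name, and checkpoint
# filename are defined by this repository, not by this sketch.
from image2reverb.model import Image2Reverb

model = Image2Reverb.load_from_checkpoint("image2reverb.ckpt")
model.eval()

# Standard ImageNet-style preprocessing is assumed here; use the
# repository's own transforms if they differ.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("room.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    ir = model(image)  # assumed to return an impulse-response waveform
```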

Code Acknowledgements

We borrow and adapt code snippets from GANSynth (and this PyTorch re-implementation), as well as from this PGGAN implementation, monodepth2, this GradCAM implementation, and others.

Citation

If you find the code, data, or models useful for your research, please consider citing our paper:

@inproceedings{singh2021image2reverb,
  title={Image2Reverb: Cross-Modal Reverb Impulse Response Synthesis},
  author={Singh, Nikhil and Mentch, Jeff and Ng, Jerry and Beveridge, Matthew and Drori, Iddo},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}