# Experimental Deep Learning Video De-interlacer
Work-in-progress deep de-interlacing filter, based on the architecture proposed by Bernasconi et al. from Disney Research | Studios.

Original publication
## Differences
The publication appears to deliberately omit some implementation details, so the implementation presented here may not exactly match the one intended by the authors. First, the RDB (residual dense block) does not add the convolved input feature maps back to its output. In image denoising, adding the input back makes sense, since the RDB is expected to remove noise from an otherwise unchanged shot. Here, the network is trying to predict missing fields, and adding back temporally mismatched data at the output would, intuitively, increase aliasing, which is undesirable. A sketch of this modified block follows.
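To make that difference concrete, here is a minimal PyTorch sketch of a residual dense block with the final input-addition skip left out. It assumes a standard RDB layout (densely connected 3x3 convolutions followed by 1x1 local feature fusion); the channel count, growth rate, and number of layers are illustrative only and are not taken from the paper or from this repository.

```python
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    """Residual dense block without the final input-addition skip.

    Channel sizes and layer counts are illustrative, not the values
    used by this project or by Bernasconi et al.
    """

    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(
                nn.Sequential(
                    nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                )
            )
        # 1x1 local feature fusion back down to `channels`.
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Dense connectivity: each layer sees all previous feature maps.
            features.append(layer(torch.cat(features, dim=1)))
        fused = self.fusion(torch.cat(features, dim=1))
        # A conventional RDB (as used for denoising) would return `fused + x`.
        # That residual addition is intentionally omitted here, since adding
        # back the input feature maps could reintroduce aliased temporal data.
        return fused
```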
## Dependencies
- NumPy
- PyTorch
- VapourSynth
Additional dependencies for training:
## Usage
- Load an odd number N