GitHub Link
The repository is available at https://github.com/xiangz-0/gem
Introduction
The repository "XiangZ-0/GEM" contains a PyTorch implementation of the ICCV'23 paper "Generalizing Event-Based Motion Deblurring in Real-World Scenarios." The paper addresses the limitations of current event-based motion deblurring methods, which assume a fixed input spatial resolution and specific blur distributions. The proposed approach introduces a scale-aware network that handles varying spatial and temporal scales of motion blur, and a two-stage self-supervised learning scheme adapts the method to real-world data distributions. The code provides tools for testing and training on several datasets, including a real-world dataset called MS-RBD. The README reports significant performance improvements and provides a citation entry for those who find the work useful in their research. The code is built upon PyTorch Lightning, LIIF, and Deformable Convolution V2.
Content
Event-based motion deblurring has shown promising results by exploiting low-latency events. However, current approaches are limited in practical use, as they assume inputs of the same spatial resolution and specific blurriness distributions. This work addresses these limitations and aims to generalize event-based deblurring to real-world scenarios. We propose a scale-aware network that allows flexible input spatial scales and enables learning from different temporal scales of motion blur. A two-stage self-supervised learning scheme is then developed to fit the real-world data distribution. By utilizing the relativity of blurriness, our approach efficiently ensures the restored brightness and structure of latent images and further generalizes deblurring performance to varying spatial and temporal scales of motion blur in a self-distillation manner. Our method is extensively evaluated and demonstrates remarkable performance, and we also introduce a real-world dataset of multi-scale blurry frames and events to facilitate research on event-based deblurring.

Install the above dependencies and Deformable Convolution V2. Deblurred results will be saved in './results/'. Note that the script automatically computes PSNR and SSIM for the Ev-REDS and HS-ERGB datasets. Since MS-RBD is a real-world dataset without ground-truth images, we predict the central sharp latent image for qualitative evaluation in real-world scenarios.

If you want to train a model on your own dataset (especially a real-world one), it is recommended to pack your data in the MS-RBD format and then modify 'configs/msrbd_train.yaml' according to your needs for training :)

If you find our work useful in your research, please consider citing it.

This code is built on the PyTorch Lightning template, LIIF, and Deformable Convolution V2.
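The test script described above reports PSNR and SSIM against ground truth on Ev-REDS and HS-ERGB. As a minimal illustration of what that evaluation measures (not the repository's actual code), PSNR between a ground-truth and a restored frame can be sketched in NumPy; SSIM is typically computed with `skimage.metrics.structural_similarity`. The images and the `psnr` helper here are hypothetical examples:

```python
import numpy as np

def psnr(gt: np.ndarray, pred: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two same-sized uint8 images."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Hypothetical data: a mid-gray frame vs. a copy with a small uniform error.
gt = np.full((64, 64, 3), 128, dtype=np.uint8)
pred = gt.copy()
pred[::2] += 2  # perturb every other row by 2 gray levels
print(round(psnr(gt, pred), 2))  # higher PSNR = closer to ground truth
```

Higher PSNR indicates a restoration closer to the reference; this is why MS-RBD, which lacks ground-truth sharp images, is evaluated only qualitatively.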
