Unsupervised Image Denoising in Real-World Scenarios via Self-Collaboration Parallel Generative Adversarial Branches

Although unsupervised approaches based on generative adversarial networks (GANs) offer a promising solution for denoising without paired datasets, they struggle to surpass the performance limits of conventional GAN-based unsupervised frameworks without significantly modifying existing structures or increasing the computational complexity of the denoisers.

GitHub Link

The GitHub link is https://github.com/linxin0/scpgabnet

Introduction

The GitHub repository "SCPGabNet" presents an unsupervised image denoising approach for real-world scenarios using a Self-Collaboration Parallel Generative Adversarial Branches framework. The method enhances denoising performance without increasing computational complexity by iteratively replacing the less powerful denoiser in a filter-guided noise extraction module with the current, more powerful one. It also introduces parallel generative adversarial branches with complementary "self-synthesis" and "unpaired-synthesis" constraints for stable training. Experiments show that the approach outperforms existing unsupervised methods. The code requires Python 3.7.13, PyTorch 1.13.0, numpy 1.21.5, opencv 4.6.0, and scikit-image 0.19.3. Pre-trained models and dataset download links are provided, and the method's effectiveness is validated through denoising results on the SIDD validation dataset. Citation and contact information for inquiries are also provided.
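The iterative replacement idea above can be illustrated with a small numeric sketch. The toy denoiser, the scheduled "strength" values, and the signal below are all stand-ins invented for illustration, not the paper's networks or training procedure; the loop structure (extract noise with the current denoiser, synthesize new noisy images, obtain a stronger denoiser for the next round) is what mirrors the self-collaboration strategy:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(strength):
    """Return a toy 'denoiser': shrinkage toward a 3-tap local mean.
    `strength` in [0, 1] stands in for the learned network's quality."""
    def denoise(signal):
        blurred = (np.roll(signal, 1) + signal + np.roll(signal, -1)) / 3.0
        return strength * blurred + (1 - strength) * signal
    return denoise

def extract_noise(noisy, denoiser):
    """Filter-guided noise extraction: noise = noisy - denoiser(noisy)."""
    return noisy - denoiser(noisy)

# A clean 1-D "image" and its noisy observation.
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + 0.3 * rng.standard_normal(256)

# Self-collaboration loop (toy): the denoiser from round t is plugged back
# into the noise-extraction module for round t+1, yielding better synthetic
# clean-noisy pairs and, in the real method, a stronger next denoiser.
denoiser = toy_denoiser(strength=0.2)  # weak initial denoiser
for round_idx in range(5):
    noise = extract_noise(noisy, denoiser)        # 1. extract noise
    synthetic_noisy = clean + noise               # 2. synthesize a new pair
    # 3. stand-in for retraining: each round yields a stronger denoiser
    denoiser = toy_denoiser(strength=min(0.2 + 0.2 * (round_idx + 1), 0.9))

final_mse = np.mean((denoiser(noisy) - clean) ** 2)
noisy_mse = np.mean((noisy - clean) ** 2)
print(final_mse, noisy_mse)
```

In the actual framework the "stronger denoiser" comes from adversarial training on the synthesized pairs rather than a hand-set schedule; the sketch only shows how the output of one round feeds the noise-extraction module of the next.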

Content

This is the official code of SCPGabNet.

Deep learning methods have shown remarkable performance in image denoising, particularly when trained on large-scale paired datasets. However, acquiring such paired datasets for real-world scenarios poses a significant challenge. Although unsupervised approaches based on generative adversarial networks (GANs) offer a promising solution for denoising without paired datasets, they struggle to surpass the performance limits of conventional GAN-based unsupervised frameworks without significantly modifying existing structures or increasing the computational complexity of the denoisers. To address this problem, we propose a self-collaboration (SC) strategy for multiple denoisers. This strategy achieves a significant performance improvement without increasing the inference complexity of the GAN-based denoising framework. Its basic idea is to iteratively replace the previous, less powerful denoiser in the filter-guided noise extraction module with the current, more powerful denoiser. This process generates better synthetic clean-noisy image pairs, leading to a more powerful denoiser in the next iteration. In addition, we propose a baseline method that includes parallel generative adversarial branches with complementary "self-synthesis" and "unpaired-synthesis" constraints. This baseline ensures the stability and effectiveness of the training network. The experimental results demonstrate the superiority of our method over state-of-the-art unsupervised methods.

You can get the complete SIDD validation dataset from https://www.eecs.yorku.ca/~kamel/sidd/benchmark.php. The '.mat' files need to be converted to images ('.png'). Run test.py to output the denoising results of our proposed method.

If you have any questions, please contact [email protected]. A more detailed reproduction guide will be added soon.

Alternatives & Similar Tools

LongLLaMA - handles very long text contexts, up to 256,000 tokens

LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.