
Removing Atmospheric Turbulence via Deep Adversarial Learning.

Abstract

Restoring images degraded by atmospheric turbulence is challenging because the degradation comprises a mixture of distortions. Several deep learning methods consisting of a single-stage deep network have been proposed to reduce atmospheric distortions. However, we find that a single-stage deep network is insufficient to remove this mixture of distortions. To mitigate this, we propose a two-stage deep adversarial network that minimizes atmospheric turbulence: the first stage reduces geometric distortion, and the second stage minimizes image blur. We further improve the network by adding channel attention and a proposed sub-pixel mechanism, which exploit the information between channels and reduce the atmospheric turbulence at a finer level. Unlike previous methods, our approach neither uses prior knowledge about atmospheric turbulence conditions at inference time nor requires the fusion of multiple images to obtain a single restored image. Our final restoration models, DT-GAN+ and DTD-GAN+, outperform general state-of-the-art image-to-image translation models and baseline restoration models. We synthesize turbulent image datasets to train the restoration models, and we also curate a natural turbulent dataset from YouTube to show the generalizability of the proposed model. We perform extensive experiments by using the restored images for downstream tasks such as classification, pose estimation, semantic keypoint estimation, and depth estimation, and we observe that restored images outperform turbulent images on these tasks by a significant margin, demonstrating the restoration model's applicability to real-world problems.
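
For readers who think in code, the following is a minimal PyTorch-style sketch of the two-stage idea described above: a first generator stage targeting geometric distortion, followed by a second stage targeting blur, each using channel attention and sub-pixel (pixel-shuffle) upsampling. All module names, layer widths, and the squeeze-and-excitation-style attention are illustrative assumptions, not the authors' exact DT-GAN+/DTD-GAN+ architecture; the adversarial discriminators and losses are omitted.

```python
# Illustrative sketch only; layer choices are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style attention over feature channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reweight each channel by a learned importance score.
        return x * self.fc(self.pool(x))

class StageGenerator(nn.Module):
    """One restoration stage: conv features, channel attention, sub-pixel upsampling."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),  # downsample
            nn.ReLU(inplace=True),
            ChannelAttention(channels),
            nn.Conv2d(channels, 3 * 4, 3, padding=1),
            nn.PixelShuffle(2),  # sub-pixel mechanism: rearrange channels into pixels
        )

    def forward(self, x):
        # Residual restoration: predict a correction and add it to the input.
        return torch.clamp(x + self.body(x), 0.0, 1.0)

class TwoStageRestorer(nn.Module):
    """Stage 1 targets geometric distortion, stage 2 targets blur."""
    def __init__(self):
        super().__init__()
        self.deturbulence = StageGenerator()
        self.deblur = StageGenerator()

    def forward(self, turbulent):
        return self.deblur(self.deturbulence(turbulent))

if __name__ == "__main__":
    model = TwoStageRestorer()
    restored = model(torch.rand(1, 3, 128, 128))
    print(restored.shape)  # torch.Size([1, 3, 128, 128])
```

In this sketch each stage is trained adversarially in the full method; here only the generator path is shown to make the two-stage data flow explicit.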
