Semi-supervised atmospheric component learning in low-light image problem.

Abstract

Ambient lighting conditions play a crucial role in determining the perceptual quality of images from photographic devices. In general, inadequate transmission light and undesired atmospheric conditions jointly degrade image quality. If the ambient factors associated with a given low-light image were known, the enhanced image could be recovered easily. Typical deep networks perform enhancement mappings without investigating the light distribution and color formulation properties, which leads to a lack of image instance-adaptive performance in practice. Physical model-driven schemes, on the other hand, suffer from the need for inherent decompositions and multiple objective minimizations. Moreover, the above approaches are rarely data-efficient or free of post-prediction tuning. Motivated by these issues, this study presents a semi-supervised training method using no-reference image quality metrics for low-light image restoration. We incorporate the classical haze distribution model to explore the physical properties of the given image, learn the effect of atmospheric components, and minimize a single objective for restoration. We validate the performance of our network on six widely used low-light datasets. Experimental studies show that our proposed method achieves competitive performance on no-reference metrics compared to current state-of-the-art methods. We also show the improved generalization performance of our proposed method, which is effective at preserving face identities in extreme low-light scenarios.

Copyright: © 2023 Fahim et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
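
For context, the "classical haze distribution model" referenced in the abstract is the atmospheric scattering formulation I(x) = J(x) t(x) + A (1 - t(x)), where I is the observed image, J the underlying scene radiance, t the transmission map, and A the global atmospheric light. The snippet below is a minimal illustrative sketch of inverting this generic model only; the placeholder transmission map, the crude atmospheric-light estimate, and the t0 lower bound are assumptions for demonstration and do not represent the semi-supervised network or training procedure described in the paper.

```python
# Minimal sketch (not the authors' implementation) of the classical
# atmospheric scattering model:
#   I(x) = J(x) * t(x) + A * (1 - t(x))
# Given estimates of the transmission map t and atmospheric light A,
# the scene radiance J can be recovered by inverting the model.
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Invert the scattering model: J = (I - A) / max(t, t0) + A."""
    t = np.clip(t, t0, 1.0)[..., None]   # floor the transmission to avoid division by ~0
    return (I - A) / t + A

# Toy usage with hypothetical inputs (all estimates below are placeholders).
I = np.random.rand(64, 64, 3)            # observed (degraded) image in [0, 1]
A = I.reshape(-1, 3).max(axis=0)         # crude per-channel atmospheric-light guess
t = np.full(I.shape[:2], 0.7)            # constant placeholder transmission map
J = recover_scene(I, t, A)               # recovered scene radiance estimate
```

In practice, methods built on this model differ mainly in how t and A are estimated; the point of the sketch is only to show the single inversion step implied by the model.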
