Deep Multi-Exposure Image Fusion for Dynamic Scenes

Abstract

Recently, learning-based multi-exposure fusion (MEF) methods have made significant progress. However, these methods mainly focus on static scenes and are prone to producing ghosting artifacts in the more common scenario where the input images contain motion, owing to the lack of a benchmark dataset and a dedicated solution for dynamic scenes. In this paper, we fill this gap by creating an MEF dataset of dynamic scenes, which contains multi-exposure image sequences and their corresponding high-quality reference images. To construct the dataset, we propose a ‘static-for-dynamic’ strategy to obtain multi-exposure sequences with motion and their corresponding reference images. To the best of our knowledge, this is the first MEF dataset of dynamic scenes. Correspondingly, we propose a deep dynamic MEF (DDMEF) framework that reconstructs a ghost-free, high-quality image from only two differently exposed images of a dynamic scene. DDMEF proceeds in two steps: pre-enhancement-based alignment and privilege-information-guided fusion. The former pre-enhances the input images before alignment, which helps to resolve the misalignments caused by the large exposure difference. The latter introduces a privilege distillation scheme with an information attention transfer loss, which effectively improves the deghosting ability of the fusion network. Extensive qualitative and quantitative experimental results show that the proposed method outperforms state-of-the-art dynamic MEF methods. The source code and dataset are released at https://github.com/Tx000/Deep_dynamicMEF.
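The abstract names a privilege distillation scheme with an information attention transfer loss as the key to deghosting. As a rough sketch of how such a loss can be formed, the PyTorch snippet below matches normalized spatial attention maps between a privileged teacher (which sees information the fusion network does not, e.g., the reference image) and the student fusion network. The function names, feature shapes, and exact loss form here are illustrative assumptions, not the authors' implementation; see the linked repository for the released code.

```python
import torch
import torch.nn.functional as F


def spatial_attention(feat: torch.Tensor) -> torch.Tensor:
    # Pool a (B, C, H, W) feature map over channels into a (B, H*W)
    # spatial attention vector, L2-normalized per sample.
    attn = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(attn, dim=1)


def attention_transfer_loss(student_feats, teacher_feats):
    # Penalize the distance between the student's attention maps and the
    # privileged teacher's at each matched layer; the teacher is detached
    # so gradients only update the student fusion network.
    loss = torch.zeros((), device=student_feats[0].device)
    for s, t in zip(student_feats, teacher_feats):
        diff = spatial_attention(s) - spatial_attention(t.detach())
        loss = loss + diff.pow(2).mean()
    return loss


# Toy usage: two matched feature pyramids from student/teacher networks.
student = [torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)]
teacher = [torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16)]
print(attention_transfer_loss(student, teacher))
```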
