
Geometry-Aware Deep Video Deblurring via Recurrent Feature Refinement.

Abstract

Blurring in videos is a frequent phenomenon in real-world video data, owing to camera shake or object movement at different scene depths. Video deblurring is therefore an ill-posed problem that requires an understanding of both geometric and temporal information. Traditional model-based optimization methods first define a degradation model and then solve an optimization problem to recover the latent frames, using a variational model with additional external information such as optical flow, segmentation, depth, or camera motion. Recent deep-learning-based approaches instead learn from numerous training pairs of blurred and clean latent frames, exploiting the powerful representation ability of deep convolutional neural networks. Although deep models have achieved remarkable performance without an explicit degradation model, existing deep methods do not utilize geometric information as strong priors; consequently, they cannot handle extreme blur caused by large camera shake or scene depth variations. In this paper, we propose a geometry-aware deep video deblurring method built on a recurrent feature refinement module that combines the strengths of optimization-based and deep-learning-based schemes. In addition to off-the-shelf deep geometry estimation modules, we design an effective module that fuses geometric information with deep video features. Specifically, similar to model-based optimization, the proposed module recurrently refines both the video features and the geometric information to restore more precise latent frames. To evaluate the effectiveness and generalization of our framework, we test it on eight baseline networks whose structures are motivated by previous research. The experimental results show that our framework outperforms all eight baselines and achieves state-of-the-art performance on four video deblurring benchmark datasets.
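The core idea of the abstract, recurrently fusing geometric cues (e.g. optical flow or depth) with deep video features in an iterative refinement loop, can be illustrated with a minimal sketch. This is not the authors' implementation: the functions `fuse` and `recurrent_refine`, the linear weights, and the residual step size are all hypothetical stand-ins for the paper's learned fusion and refinement modules.

```python
import numpy as np

def fuse(features, geometry, w_f, w_g):
    # Hypothetical fusion: a learned combination of video features and
    # geometric cues (optical flow / depth), simplified here to two
    # linear maps followed by a nonlinearity.
    return np.tanh(features @ w_f + geometry @ w_g)

def recurrent_refine(features, geometry, w_f, w_g, steps=3):
    # Recurrent residual refinement, echoing the iterative updates of
    # model-based optimization that the abstract draws on: each step
    # refines the features using the fused geometric information.
    for _ in range(steps):
        features = features + 0.1 * fuse(features, geometry, w_f, w_g)
    return features

rng = np.random.default_rng(0)
C = 8
feats = rng.standard_normal((16, C))    # toy per-pixel feature vectors
geom = rng.standard_normal((16, C))     # toy geometric cue vectors
w_f = rng.standard_normal((C, C)) * 0.1
w_g = rng.standard_normal((C, C)) * 0.1
refined = recurrent_refine(feats, geom, w_f, w_g)
print(refined.shape)
```

In the actual method, the fusion and refinement would be convolutional modules trained end-to-end, and the geometry would come from off-the-shelf estimation networks rather than random toy vectors.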
