
A Broad Generative Network for Two-Stage Image Outpainting.

Abstract

Image outpainting is a challenging image processing task because it must generate a large scenery image from only a few patches. Two-stage frameworks are commonly used to decompose complex tasks and complete them step by step. However, the time required to train two networks hinders the method from adequately optimizing the network parameters within a limited number of iterations. In this article, a broad generative network (BG-Net) for two-stage image outpainting is proposed. As the reconstruction network in the first stage, it can be trained quickly using ridge regression optimization. In the second stage, a seam line discriminator (SLD) is designed to smooth the transition, which greatly improves image quality. Compared with state-of-the-art image outpainting methods, experimental results on the Wiki-Art and Places365 datasets show that the proposed method achieves the best results under the Fréchet inception distance (FID) and kernel inception distance (KID) metrics. The proposed BG-Net has good reconstructive ability and trains faster than deep learning-based networks, reducing the overall training duration of the two-stage framework to the level of a one-stage framework. Furthermore, the proposed method is adapted to recurrent image outpainting, demonstrating the model's powerful associative drawing capability.
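The abstract does not detail how BG-Net's first-stage reconstruction network is optimized, but ridge regression in broad-learning-style networks typically means solving the output weights in closed form, W = (AᵀA + λI)⁻¹AᵀY, rather than iterating with gradient descent, which is what makes training fast. The sketch below illustrates only that closed-form step; the function name, matrix shapes, and regularization value are illustrative assumptions, not the paper's implementation.

import numpy as np

def ridge_output_weights(A, Y, lam=1e-3):
    """Closed-form ridge regression: W = (A^T A + lam*I)^{-1} A^T Y.

    A: (n_samples, n_broad_features) matrix of broad/enhancement features.
    Y: (n_samples, n_outputs) target matrix (e.g., flattened target pixels).
    """
    n_features = A.shape[1]
    # Solve the regularized normal equations instead of forming an explicit inverse.
    return np.linalg.solve(A.T @ A + lam * np.eye(n_features), A.T @ Y)

# Toy usage: map 64 broad features to 32 output dimensions for 256 samples.
rng = np.random.default_rng(0)
A = rng.standard_normal((256, 64))
Y = rng.standard_normal((256, 32))
W = ridge_output_weights(A, Y)
print(W.shape)  # (64, 32)

Because the solution is obtained in a single linear solve, the first stage avoids the long iterative training that would otherwise dominate a two-stage pipeline, which is consistent with the abstract's claim that the overall training duration drops to the level of a one-stage framework.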
