Invertible Residual Blocks in Deep Learning Networks


Abstract

Residual blocks have been widely used in deep learning networks. However, information may be lost in residual blocks because rectified linear units (ReLUs) discard part of their input. To address this issue, invertible residual networks have been proposed recently, but they are generally subject to strict restrictions that limit their applications. In this brief, we investigate the conditions under which a residual block is invertible. A necessary and sufficient condition is presented for the invertibility of residual blocks with one layer of ReLU inside the block. In particular, for the widely used residual blocks with convolutions, we show that such blocks are invertible under weak conditions if the convolution is implemented with certain zero-padding methods. Inverse algorithms are also proposed, and experiments are conducted to demonstrate the effectiveness of the proposed inverse algorithms and to verify the correctness of the theoretical results.
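To make the setup concrete, below is a minimal NumPy sketch of a residual block with one ReLU layer, y = x + W2 ReLU(W1 x + b1) + b2, together with a generic fixed-point inversion x ← y − f(x). The weight scaling, the contraction assumption, and the iteration scheme are illustrative assumptions for this sketch only; they are not the invertibility conditions or the inverse algorithms established in the brief.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, b1, W2, b2):
    """One-ReLU residual block: y = x + W2 @ relu(W1 @ x + b1) + b2."""
    return x + W2 @ relu(W1 @ x + b1) + b2

def invert_residual_block(y, W1, b1, W2, b2, num_iters=200, tol=1e-10):
    """Recover x from y via the fixed-point iteration x <- y - f(x).

    This generic scheme converges when the residual branch f is a
    contraction (Lipschitz constant below 1); it is an assumed
    illustration, not necessarily the paper's inverse algorithm.
    """
    x = y.copy()
    for _ in range(num_iters):
        x_new = y - (W2 @ relu(W1 @ x + b1) + b2)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def spectral_rescale(W, scale):
    """Rescale W so its spectral norm equals `scale` (keeps f contractive)."""
    return scale * W / np.linalg.norm(W, 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    # Hypothetical weights, rescaled so the ReLU branch is a contraction.
    W1 = spectral_rescale(rng.standard_normal((d, d)), 0.7)
    W2 = spectral_rescale(rng.standard_normal((d, d)), 0.7)
    b1 = rng.standard_normal(d)
    b2 = rng.standard_normal(d)
    x = rng.standard_normal(d)

    y = residual_block(x, W1, b1, W2, b2)
    x_rec = invert_residual_block(y, W1, b1, W2, b2)
    print("reconstruction error:", np.linalg.norm(x - x_rec))
```

Running the script reconstructs x from y up to numerical precision; the spectral rescaling is only one convenient way to guarantee convergence of the fixed-point iteration in this toy example.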
