Regularizing Deep Neural Networks by Enhancing Diversity in Feature Extraction

Abstract

This paper proposes a new and efficient technique for regularizing deep neural networks using correlations among features. Previous studies have shown that oversized deep neural network models tend to learn many redundant features that are either shifted versions of one another or are very similar and show little or no variation, resulting in redundant filtering. We propose a way to address this problem and show that such redundancy can be avoided through regularization and an adaptive feature dropout mechanism. We show that regularizing both negatively and positively correlated features according to their differentiation and their relative cosine distances yields networks that extract dissimilar features, with less overfitting and better generalization. The concept is illustrated with deep multilayer perceptrons, convolutional neural networks, sparse autoencoders, gated recurrent units, and long short-term memory networks on the MNIST digit recognition, CIFAR-10, ImageNet, and Stanford Natural Language Inference data sets.
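The abstract does not spell out the exact form of the penalty, but the idea of discouraging both positively and negatively correlated feature detectors via cosine distance can be sketched as a pairwise cosine-similarity penalty on a layer's weight vectors. The function name `diversity_penalty`, the squared-similarity form, and the PyTorch framing below are illustrative assumptions, not the paper's published formulation.

```python
import torch
import torch.nn.functional as F

def diversity_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity penalty over a layer's feature detectors.

    `weight` has shape (num_features, fan_in), e.g. a Linear layer's weight,
    or a Conv2d weight flattened to (out_channels, -1). Illustrative sketch
    only; the paper's adaptive feature dropout is not reproduced here.
    """
    # L2-normalize each feature vector so dot products become cosine similarities.
    w = F.normalize(weight.view(weight.size(0), -1), dim=1)
    sim = w @ w.t()  # (num_features, num_features) cosine-similarity matrix
    # Remove the diagonal: each feature is trivially identical to itself.
    sim = sim - torch.diag(torch.diag(sim))
    # Squaring penalizes strong correlations of either sign, matching the
    # abstract's "both negatively and positively correlated features".
    n = w.size(0)
    return (sim ** 2).sum() / (n * (n - 1))
```

During training, such a term would simply be added to the task loss, e.g. `loss = criterion(output, target) + lam * diversity_penalty(model.fc1.weight)`, where `lam` is a hypothetical regularization coefficient.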
