
A Deep-Network Piecewise Linear Approximation Formula.

Abstract

The mathematical foundation of deep learning is the theorem that any continuous function can be approximated to any specified accuracy by a neural network with certain non-linear activation functions. However, this theorem does not tell us what the network architecture should be or what the values of the weights are. One must train the network to estimate the weights, and there is no guarantee that the optimal weights will be reached after training. This paper develops an explicit architecture for a universal deep network by using the Gray code order, together with an explicit formula for the weights of this deep network. The architecture is independent of the target function. Once the target function is known, the weights are calculated by the proposed formula, and no training is required; there is no concern about whether training reaches the optimal weights. This deep network gives the same result as the shallow piecewise linear interpolation function for an arbitrary target function.
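To illustrate the general idea of computing network weights in closed form rather than by training, the following sketch builds a shallow ReLU network whose weights are obtained directly from a target function's values at chosen breakpoints, so that the network exactly reproduces the piecewise linear interpolant. This is not the paper's Gray-code deep construction; the target function `np.sin` and the breakpoints are arbitrary illustrative choices.

```python
import numpy as np

def target(x):
    # Illustrative target function; any continuous function works.
    return np.sin(x)

# Breakpoints and target values for piecewise linear interpolation.
xs = np.linspace(0.0, np.pi, 9)
ys = target(xs)

# Slope of each linear piece between consecutive breakpoints.
slopes = np.diff(ys) / np.diff(xs)

# Closed-form weights: f(x) = y_0 + c_0*relu(x - x_0) + sum_i c_i*relu(x - x_i),
# where c_0 is the first slope and c_i (i >= 1) are the slope differences.
c = np.concatenate(([slopes[0]], np.diff(slopes)))

def relu(z):
    return np.maximum(z, 0.0)

def network(x):
    # One hidden layer of ReLU units; no training step is involved.
    return ys[0] + sum(ci * relu(x - xi) for ci, xi in zip(c, xs[:-1]))
```

On the interpolation interval the network agrees exactly with `np.interp`, the standard piecewise linear interpolant, mirroring the paper's claim that the constructed network reproduces the shallow interpolation function.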
