
Supervised Learning in Neural Networks: Feedback-Network-Free Implementation and Biological Plausibility.

Abstract

The well-known backpropagation algorithm is probably the most popular learning algorithm for artificial neural networks and has been widely used in deep learning applications. Backpropagation requires a separate feedback network to propagate errors backwards, and this feedback network must have the same topology and connection strengths (weights) as the feed-forward network. In this article, we propose a new learning algorithm that is mathematically equivalent to the backpropagation algorithm but does not require a feedback network. Eliminating the feedback network makes the new algorithm much simpler to implement. It also makes it far more biologically plausible that biological neural networks could learn with the new algorithm, via retrograde regulatory mechanisms that may exist in neurons. The new algorithm further eliminates the need for two-phase adaptation (a feed-forward phase followed by a feedback phase), so neurons can adapt asynchronously and concurrently, in a manner analogous to biological neurons.
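For context, here is a minimal NumPy sketch of standard backpropagation on a toy two-layer network. It is not the paper's feedback-network-free algorithm; it only illustrates the dependence the abstract refers to, namely that the error signal is propagated backwards through the transpose of the feed-forward weights, which is what an explicit feedback network with the same topology and weights would compute. The network sizes, data, and learning rate are hypothetical illustration choices.

```python
# Baseline backpropagation sketch (NOT the paper's algorithm).
# The line marked (*) is the implicit "feedback network": the error is sent
# backwards through the transpose of the same feed-forward weights W2.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 inputs, 1 target each (hypothetical example data).
X = rng.normal(size=(4, 3))
Y = rng.normal(size=(4, 1))

# Feed-forward weights of a 3-5-1 network.
W1 = rng.normal(scale=0.5, size=(3, 5))
W2 = rng.normal(scale=0.5, size=(5, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(1000):
    # Feed-forward phase.
    h = sigmoid(X @ W1)
    y_hat = h @ W2

    # Feedback phase: backpropagate the error (squared-error loss).
    delta2 = y_hat - Y                          # output-layer error
    delta1 = (delta2 @ W2.T) * h * (1.0 - h)    # (*) uses the transpose of W2

    # Weight updates.
    W2 -= lr * h.T @ delta2
    W1 -= lr * X.T @ delta1

print("final squared error:",
      float(np.mean((sigmoid(X @ W1) @ W2 - Y) ** 2)))
```

The abstract's claim is that the same weight updates can be obtained without maintaining this separate backward pathway, which both simplifies implementation and removes the biologically implausible requirement of a mirrored feedback network with identical weights.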
