
Jump-GRS: a multi-phase approach to structured pruning of neural networks for neural decoding.

Abstract

Neural decoding, an important area of neural engineering, links neuronal
activity to behavior in the outside world. Deep Neural Networks (DNNs), which
are becoming increasingly popular in many application fields of machine
learning, show promising performance in neural decoding compared to traditional
neural decoding methods. Various neural decoding applications, such as
Brain-Computer Interface (BCI) applications, require both high decoding
accuracy and real-time decoding speed. Pruning methods are used to produce
compact DNN models for faster computational speed. Greedy inter-layer order
with Random Selection (GRS) is a recently designed structured pruning method
that derives compact DNN models for calcium-imaging-based neural decoding.
Although GRS has advantages in terms of detailed structure analysis and
consideration of both learned information and model structure during the
pruning process, the method is very computationally intensive, and is not
feasible when large-scale DNN models need to be pruned within typical
constraints on time and computational resources. Large-scale DNN models arise
in neural decoding when large numbers of neurons are involved. In this paper,
we build on GRS to develop a new structured pruning algorithm called Jump GRS
(JGRS) that is designed to efficiently compress large-scale DNN models. We
compare the pruning performance and speed of JGRS and GRS with extensive
experiments in the context of neural decoding. Our results demonstrate that
JGRS provides significantly faster pruning speed compared to GRS, and at the
same time, JGRS produces pruned models that are comparably compact to those
generated by GRS.

Creative Commons Attribution license.
