Biophysically interpretable recurrent neural network for functional magnetic resonance imaging analysis and sparsity-based causal architecture discovery.

Abstract

Recent efforts have used state-of-the-art Recurrent Neural Networks (RNNs) to gain insight into neuroscience. A limitation of these works is that the generic RNNs they use lack biophysical meaning, making the results difficult to interpret in a neuroscience context. In this paper, we propose a biophysically interpretable RNN built on Dynamic Causal Modelling (DCM). DCM is an advanced nonlinear generative model typically used to test hypotheses about brain causal architectures and the associated effective connectivities. We show that DCM can be cast faithfully as a special form of a new generalized RNN. In the resulting DCM-RNN, the hidden states are neural activity, blood flow, blood volume, and deoxyhemoglobin content, and the parameters are biological quantities such as effective connectivity, oxygen extraction fraction, and vessel wall elasticity. DCM-RNN is a versatile tool for neuroscience with great potential, especially when combined with deep learning networks. In this study, we explore sparsity-based causal architecture discovery with DCM-RNN. In the experiments, we demonstrate that DCM-RNN equipped with $l_{1}$ connectivity regularization is more robust to noise and more powerful at discovering sparse architectures than classic DCM with $l_{2}$ connectivity regularization.
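To make the idea concrete, the sketch below shows how the bilinear DCM neural state equation, $\dot{z} = (A + \sum_j u_j B^{(j)}) z + C u$, can be unrolled as a discrete-time recurrent update, with an $l_1$ penalty on the off-diagonal entries of the effective-connectivity matrix $A$ to encourage sparse causal architectures. This is a minimal illustration, not the paper's implementation: the function names, the Euler discretization step, and the omission of the hemodynamic (balloon-model) states that map neural activity to the BOLD signal are all simplifications made here.

```python
import numpy as np

def dcm_rnn_step(z, u, A, B, C, dt=0.1):
    """One unrolled RNN step of the bilinear DCM neural dynamics.

    z : (n,) neural activity of n brain regions
    u : (m,) external/experimental inputs
    A : (n, n) effective connectivity (the causal architecture)
    B : (m, n, n) input-dependent modulations of connectivity
    C : (n, m) direct input weights
    dt: Euler integration step (illustrative choice)
    """
    # Input-modulated effective connectivity: A + sum_j u_j B_j
    J = A + np.tensordot(u, B, axes=1)
    # Forward-Euler discretization of dz/dt = J z + C u
    return z + dt * (J @ z + C @ u)

def l1_connectivity_penalty(A, lam=0.1):
    """l1 regularization on off-diagonal connections, promoting sparsity.

    Self-connections (the diagonal) are excluded, since they model
    intrinsic decay rather than inter-regional causal influence.
    """
    off_diagonal = A - np.diag(np.diag(A))
    return lam * np.abs(off_diagonal).sum()

# Tiny usage example: 3 regions, 1 input, sparse ground-truth A
n, m = 3, 1
A = np.array([[-1.0, 0.0, 0.0],
              [0.8, -1.0, 0.0],
              [0.0, 0.5, -1.0]])   # a sparse causal chain 1 -> 2 -> 3
B = np.zeros((m, n, n))
C = np.array([[1.0], [0.0], [0.0]])  # input drives region 1 only

z = np.zeros(n)
for t in range(50):
    u = np.array([1.0 if t < 25 else 0.0])  # boxcar stimulus
    z = dcm_rnn_step(z, u, A, B, C)
```

In the full DCM-RNN, the neural activity produced by this recurrence would additionally drive the hemodynamic hidden states (flow, volume, deoxyhemoglobin) to predict the measured fMRI signal, and the penalty above would be added to the data-fit loss when estimating $A$.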
