
Interpretation of deep non-linear factorization for autism.


Abstract

Autism, a neurodevelopmental disorder, presents significant challenges for diagnosis and classification. Although neural networks are widely used for autism classification, the interpretability of these models remains a crucial open issue. This study addresses that concern by investigating the interpretability of neural networks in autism classification using deep symbolic regression and brain-network interpretative methods. Specifically, we analyze publicly available autism fMRI data with our previously developed Deep Factor Learning model on a Hilbert Basis tensor (HB-DFL) and extend the interpretative Deep Symbolic Regression method to identify dynamic features from factor matrices, construct brain networks from the generated reference tensors, and help clinicians accurately diagnose abnormal brain-network activity in autism patients. Our experimental results show that the interpretative method effectively enhances the interpretability of neural networks and identifies crucial features for autism classification.

Copyright © 2023 Chen, Yin and Ke.
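The abstract describes constructing brain networks from factor matrices derived from fMRI data. As a rough illustration of that general idea (not the paper's HB-DFL or Deep Symbolic Regression pipeline), the sketch below builds a binary connectivity network by correlating regional time courses and thresholding; the function name, threshold value, and toy data are assumptions for demonstration only.

```python
import numpy as np

def build_brain_network(factor_matrix, threshold=0.5):
    """Hypothetical sketch: derive a binary adjacency matrix from a
    (regions x time) matrix by thresholding pairwise correlations."""
    corr = np.corrcoef(factor_matrix)            # (regions x regions) correlation matrix
    adjacency = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adjacency, 0)               # drop self-loops
    return adjacency

# Toy example: 4 "regions", 100 time points
rng = np.random.default_rng(0)
ts = rng.standard_normal((4, 100))
ts[1] = ts[0] + 0.1 * rng.standard_normal(100)   # make regions 0 and 1 strongly coupled
adj = build_brain_network(ts, threshold=0.5)     # edge expected between regions 0 and 1
```

In practice, the inputs would be the factor matrices produced by a learned decomposition rather than raw time series, and edge criteria are often chosen by statistical testing rather than a fixed threshold.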
