
A self-training deep neural network for early prediction of cognitive deficits in very preterm infants using brain functional connectome data.

Abstract

Deep learning has been employed using brain functional connectome data for evaluating the risk of cognitive deficits in very preterm infants. Although promising, training these deep learning models typically requires a large amount of labeled data, and labeled medical data are often very difficult and expensive to obtain. This study aimed to develop a self-training deep neural network (DNN) model for early prediction of cognitive deficits at 2 years of corrected age in very preterm infants (gestational age ≤32 weeks) using both labeled and unlabeled brain functional connectome data.

We collected brain functional connectome data from 343 very preterm infants at a mean (standard deviation) postmenstrual age of 42.7 (2.5) weeks, among whom 103 children had a cognitive assessment at 2 years (i.e. labeled data) and the remaining 240 children had not yet received 2-year assessments at the time this study was conducted (i.e. unlabeled data). To develop a self-training DNN model, we built an initial student model using the labeled brain functional connectome data. We then applied the trained model as a teacher model to generate pseudo-labels for the unlabeled brain functional connectome data. Next, we combined the labeled and pseudo-labeled data to train a new student model. We iterated this procedure to obtain the best student model for the early prediction task in very preterm infants.

In our cross-validation experiments, the proposed self-training DNN model achieved an accuracy of 71.0%, a specificity of 71.5%, a sensitivity of 70.4% and an area under the curve of 0.75, significantly outperforming transfer learning models based on pre-training approaches.

We report the first self-training prognostic study in very preterm infants, efficiently combining a small amount of labeled data with a larger amount of unlabeled data to aid model training. The proposed technique is expected to facilitate deep learning with insufficient training data.

© 2022. The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
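To make the teacher-student loop described in the abstract concrete, here is a minimal Python sketch of iterative pseudo-labeling. It assumes the connectomes have already been flattened into fixed-length feature vectors, uses a small scikit-learn MLP as a stand-in for the paper's DNN, and picks an illustrative confidence threshold and iteration count; none of these choices are taken from the study itself.

```python
# Minimal self-training (teacher-student) sketch, NOT the authors' implementation.
# Assumptions: connectomes are flattened feature vectors; the MLP architecture,
# confidence threshold and number of iterations are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier


def self_train(x_labeled, y_labeled, x_unlabeled,
               n_iters=5, confidence=0.8, seed=0):
    """Iteratively retrain a student on labeled data plus confident pseudo-labels."""
    x_train, y_train = x_labeled, y_labeled
    student = None
    for _ in range(n_iters):
        # 1) Train a student model on the current training set.
        student = MLPClassifier(hidden_layer_sizes=(64, 32),
                                max_iter=500, random_state=seed)
        student.fit(x_train, y_train)

        if len(x_unlabeled) == 0:
            break

        # 2) Use the trained student as the teacher: pseudo-label the unlabeled set,
        #    keeping only predictions above the confidence threshold.
        proba = student.predict_proba(x_unlabeled)
        keep = proba.max(axis=1) >= confidence
        if not keep.any():
            break
        pseudo_y = student.predict(x_unlabeled)[keep]

        # 3) Combine labeled and pseudo-labeled data to train the next student.
        x_train = np.vstack([x_labeled, x_unlabeled[keep]])
        y_train = np.concatenate([y_labeled, pseudo_y])
    return student


# Toy usage with synthetic "connectome" feature vectors (e.g. flattened
# upper-triangle entries of a functional connectivity matrix).
if __name__ == "__main__":
    rng = np.random.RandomState(0)
    x_lab = rng.randn(103, 300)            # 103 infants with 2-year outcomes
    y_lab = rng.randint(0, 2, size=103)    # cognitive deficit: yes / no
    x_unl = rng.randn(240, 300)            # 240 infants without outcomes yet
    model = self_train(x_lab, y_lab, x_unl)
    print("training-set accuracy of final student:", model.score(x_lab, y_lab))
```

In this sketch only confident pseudo-labels are merged back in each round, a common safeguard in self-training; the abstract does not specify the paper's exact pseudo-labeling criterion, network architecture, or stopping rule.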
