
Multiparametric mapping in the brain from conventional contrast-weighted images using deep learning.

Abstract

To develop a deep-learning-based method to quantify multiple parameters in the brain from conventional contrast-weighted images.

Eighteen subjects were imaged using an MR Multitasking sequence to generate reference T1 and T2 maps in the brain. Conventional contrast-weighted images consisting of T1 MPRAGE, T1 GRE, and T2 FLAIR were acquired as input images. A U-Net-based neural network was trained to estimate T1 and T2 maps simultaneously from the contrast-weighted images. Six-fold cross-validation was performed to compare the network outputs with the MR Multitasking references.

The deep-learning T1/T2 maps were comparable with the references, and brain tissue structures and image contrasts were well preserved. A peak signal-to-noise ratio >32 dB and a structural similarity index >0.97 were achieved for both parameter maps. Calculated on brain parenchyma (excluding CSF), the mean absolute errors (and mean percentage errors) for T1 and T2 maps were 52.7 ms (5.1%) and 5.4 ms (7.1%), respectively. ROI measurements on four tissue compartments (cortical gray matter, white matter, putamen, and thalamus) showed that the T1 and T2 values provided by the network outputs were in agreement with the MR Multitasking reference maps. The mean differences were smaller than ±1%, and the limits of agreement were within ±5% for T1 and within ±10% for T2 after accounting for the mean differences.

A deep-learning-based technique was developed to estimate T1 and T2 maps from conventional contrast-weighted images in the brain, enabling simultaneous qualitative and quantitative MRI without modifying clinical protocols.

© 2021 International Society for Magnetic Resonance in Medicine.
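The abstract describes a U-Net that takes the three co-registered contrast-weighted images as input channels and outputs the T1 and T2 maps as two channels, with errors evaluated over brain parenchyma. The sketch below is not the authors' implementation; it only illustrates that 3-channel-in / 2-channel-out setup and the masked MAE/percentage-error computation under assumed layer sizes, image dimensions, and helper names.

```python
# Minimal PyTorch sketch of a 3-in / 2-out U-Net-style mapper plus
# parenchyma-masked error metrics. All architectural details (channel
# counts, depth, names) are assumptions, not the published network.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MappingUNet(nn.Module):
    """Maps co-registered MPRAGE, GRE, and FLAIR slices to T1 and T2 maps."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 64)        # input: 3 contrast-weighted channels
        self.enc2 = conv_block(64, 128)
        self.bottleneck = conv_block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)
        self.head = nn.Conv2d(64, 2, 1)      # output: T1 and T2 maps
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

def masked_errors(pred, ref, mask):
    """Mean absolute error and mean percentage error over a parenchyma mask."""
    diff = (pred - ref)[mask]
    mae = diff.abs().mean()
    mpe = (diff.abs() / ref[mask].clamp(min=1e-6)).mean() * 100
    return mae, mpe

# Example shapes: a batch of 2D slices, 3 input contrasts, 2 output maps.
x = torch.randn(4, 3, 256, 256)
maps = MappingUNet()(x)              # -> (4, 2, 256, 256)
```

In practice the reported PSNR, SSIM, and ROI/Bland-Altman statistics would be computed per cross-validation fold against the MR Multitasking reference maps; the helper above only mirrors the parenchyma-masked MAE and percentage-error figures quoted in the abstract.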
