
Comparison of deep learning synthesis of synthetic CTs using clinical MRI inputs.


Abstract

  There has been substantial interest in developing techniques for the synthesis of CT-like images from MRI inputs, with important applications in simultaneous PET/MR and radiotherapy planning. Deep learning has recently shown great potential for solving this problem. The goal of this research was to investigate the capability of four common clinical MRI pulse sequences (T1-weighted gradient echo [T1], T2-weighted fat-suppressed fast spin echo [T2-FatSat], post-contrast T1-weighted gradient echo [T1-Post], and fast spin echo T2-weighted fluid attenuated inversion recovery [CUBE-FLAIR]) as inputs to a deep CT synthesis pipeline. Data were obtained retrospectively from 92 subjects who had undergone an MRI and a CT scan on the same day. Each patient's MR and CT scans were registered to one another using affine registration. The deep learning model was a convolutional neural network encoder-decoder with skip connections, similar to the U-net architecture, with Inception V3-inspired blocks in place of sequential convolution blocks. After training for 150 epochs with a batch size of 6, the model was evaluated using SSIM, PSNR, MAE, and Dice coefficients. We found that feasible results were attainable for each image type, and no single image type was superior for all analyses. The MAE of the resulting synthesized CT in the whole brain was 51.236 ± 4.504 for CUBE-FLAIR, 45.432 ± 8.517 for T1, 44.558 ± 7.478 for T1-Post, and 45.721 ± 8.7767 for T2. Deep learning-based synthesis of CT images from MRI is possible with a wide range of inputs, suggesting that viable images can be created from a variety of clinical input sequences.
© 2020 Institute of Physics and Engineering in Medicine.
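The evaluation metrics named in the abstract (MAE, PSNR, and the Dice coefficient) are standard image-similarity measures. As a rough illustration of how a synthesized CT might be scored against a ground-truth CT, here is a minimal NumPy sketch; the function names, the optional brain-mask argument, and the choice of intensity range are illustrative assumptions, not the authors' actual evaluation code.

```python
import numpy as np

def mae(ct_true, ct_syn, mask=None):
    """Mean absolute error (e.g. in HU), optionally restricted to a
    region such as a whole-brain mask (assumed interface)."""
    diff = np.abs(ct_true.astype(np.float64) - ct_syn.astype(np.float64))
    if mask is not None:
        diff = diff[mask]
    return diff.mean()

def psnr(ct_true, ct_syn, data_range):
    """Peak signal-to-noise ratio in dB for a stated intensity range."""
    mse = np.mean((ct_true.astype(np.float64) - ct_syn.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def dice(seg_a, seg_b):
    """Dice coefficient between two binary masks
    (e.g. bone segmented from the real and synthetic CT)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

SSIM, the fourth reported metric, involves local windowed statistics and is typically computed with an existing implementation (e.g. `skimage.metrics.structural_similarity`) rather than written by hand.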
