Transformer-Based T2-weighted MRI Synthesis from T1-weighted Images.

Abstract

Multi-modality magnetic resonance (MR) images provide complementary information for disease diagnosis. However, missing modalities are common in real-life clinical practice. Current methods usually employ a convolution-based generative adversarial network (GAN) or one of its variants to synthesize the missing modality. With the development of vision transformers, we explore their application to the MRI modality synthesis task in this work. We propose a novel supervised deep learning method for synthesizing a missing modality that makes use of a transformer-based encoder. Specifically, a model is trained to translate 2D MR images from T1-weighted to T2-weighted based on a conditional GAN (cGAN). We replace the encoder with a transformer and input adjacent slices to enrich spatial prior knowledge. Experimental results on a private dataset and a public dataset demonstrate that our proposed model outperforms state-of-the-art supervised methods for MR image synthesis, both quantitatively and qualitatively.

Clinical relevance: This work proposes a method to synthesize T2-weighted images from T1-weighted ones to address the missing-modality issue in MRI.
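The paper itself does not include code, but the described generator (a cGAN whose encoder is a transformer operating on a stack of adjacent T1-weighted slices) can be sketched as follows. This is a minimal, illustrative PyTorch sketch only: the class names (`ViTEncoder`, `Generator`), the 16x16 patch size, the 256-dimensional embedding, the use of three adjacent slices, and all layer sizes are assumptions, not the authors' implementation.

```python
# Minimal sketch of a transformer-encoder cGAN generator for T1 -> T2 slice synthesis.
# All hyperparameters below are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn


class ViTEncoder(nn.Module):
    """Transformer encoder over non-overlapping patches of the stacked input slices."""

    def __init__(self, in_ch=3, img_size=256, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        # Patch embedding: each 16x16 patch becomes one token of size `dim`.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))  # learned positions
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                       # x: (B, 3, H, W) adjacent T1 slices
        tokens = self.patch_embed(x)            # (B, dim, H/p, W/p)
        b, c, hp, wp = tokens.shape
        tokens = tokens.flatten(2).transpose(1, 2) + self.pos   # (B, N, dim)
        tokens = self.encoder(tokens)           # self-attention across all patches
        return tokens.transpose(1, 2).reshape(b, c, hp, wp)     # back to a feature map


class Generator(nn.Module):
    """Transformer encoder + convolutional decoder mapping T1 slices to one T2 slice."""

    def __init__(self):
        super().__init__()
        self.enc = ViTEncoder()
        # Upsample 16x (16x16 feature map -> 256x256 image), halving channels each step.
        self.dec = nn.Sequential(
            *[nn.Sequential(nn.ConvTranspose2d(ch, ch // 2, 4, 2, 1),
                            nn.InstanceNorm2d(ch // 2),
                            nn.ReLU(inplace=True))
              for ch in (256, 128, 64, 32)],
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))


# Usage: three adjacent T1-weighted slices in, the synthesized centre T2 slice out.
g = Generator()
t1_stack = torch.randn(2, 3, 256, 256)   # (batch, adjacent slices, H, W)
t2_fake = g(t1_stack)                     # (2, 1, 256, 256)
```

In a full cGAN setup, a patch-based discriminator conditioned on the T1 input and a pixel-wise reconstruction term (e.g., an L1 loss) would typically be combined with the adversarial loss during training; those components are omitted from the sketch for brevity and are not spelled out in the abstract.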
