McSTRA: A multi-branch cascaded swin transformer for point spread function-guided robust MRI reconstruction.

Abstract

Deep learning MRI reconstruction methods are often based on convolutional neural network (CNN) models; however, these are limited in capturing global correlations among image features due to the intrinsic locality of the convolution operation. Conversely, recent vision transformer (ViT) models capture global correlations by applying self-attention operations on image patches. Nevertheless, existing transformer models for MRI reconstruction rarely leverage the physics of MRI. In this paper, we propose a novel physics-based transformer model, the Multi-branch Cascaded Swin Transformers (McSTRA), for robust MRI reconstruction. McSTRA combines several interconnected MRI physics-related concepts with Swin transformers: it exploits global MRI features via the shifted-window self-attention mechanism; it extracts MRI features belonging to different spectral components via a multi-branch setup; it iterates between intermediate de-aliasing and data consistency via a cascaded network with intermediate loss computations; furthermore, we propose a point spread function-guided positional embedding generation mechanism for the Swin transformers, which exploits the spread of the aliasing artifacts for effective reconstruction. With the combination of all these components, McSTRA outperforms state-of-the-art methods while demonstrating robustness in adversarial conditions such as higher accelerations, noisy data, different undersampling protocols, out-of-distribution data, and abnormalities in anatomy.

Copyright © 2023 The Authors. Published by Elsevier Ltd. All rights reserved.
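The cascade of intermediate de-aliasing and data consistency described in the abstract follows a pattern common to physics-guided MRI reconstruction. The PyTorch sketch below illustrates that pattern for a single-coil Cartesian setup, along with the standard fact that the point spread function of an undersampling mask is its inverse Fourier transform. The helper names (kspace_psf, data_consistency, cascade), the hard data-consistency rule, and the generic denoiser stand-ins are illustrative assumptions, not the authors' implementation.

```python
import torch

def kspace_psf(mask: torch.Tensor) -> torch.Tensor:
    """Point spread function of a Cartesian undersampling mask.

    The PSF is the (centered) inverse FFT of the sampling pattern; its
    side lobes describe how aliasing energy spreads across the image.
    (Illustrative helper, not the paper's exact formulation.)
    """
    psf = torch.fft.fftshift(torch.fft.ifft2(torch.fft.ifftshift(mask)))
    return psf.abs()

def data_consistency(x: torch.Tensor,
                     k_measured: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """Hard data-consistency step between de-aliasing stages:
    re-insert the acquired k-space samples into the network output."""
    k = torch.fft.fft2(x, norm="ortho")
    k = torch.where(mask.bool(), k_measured, k)
    return torch.fft.ifft2(k, norm="ortho")

def cascade(k_measured: torch.Tensor,
            mask: torch.Tensor,
            denoisers) -> torch.Tensor:
    """Minimal cascade: alternate a learned de-aliasing module
    (a stand-in for a Swin transformer branch; real networks typically
    operate on real/imaginary channels) with data consistency."""
    x = torch.fft.ifft2(k_measured, norm="ortho")  # zero-filled start
    for net in denoisers:
        x = net(x)                                  # intermediate de-aliasing
        x = data_consistency(x, k_measured, mask)   # enforce measured samples
    return x
```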
