
MAIRNet: weakly supervised anatomy-aware multimodal articulated image registration network.

Abstract

Multimodal articulated image registration (MAIR) is a challenging problem because the resulting transformation must maintain rigidity for bony structures while allowing elastic deformation of the surrounding soft tissues. Existing deep learning-based methods ignore the articulated structures and treat the task as a purely deformable registration problem, leading to suboptimal results.

We propose a novel weakly supervised anatomy-aware multimodal articulated image registration network, referred to as MAIRNet, to solve this challenging problem. The architecture of MAIRNet comprises two branches: a non-learnable polyrigid registration branch that estimates an initial velocity field, and a learnable deformable registration branch that learns an increment. Together, the two branches produce a velocity field that is integrated to generate the final displacement field.

We designed and conducted comprehensive experiments on three datasets to evaluate the performance of the proposed method. On the hip dataset, our method achieved average Dice scores of 90.8%, 92.4% and 91.3% for the pelvis, the right femur, and the left femur, respectively. On the lumbar spine dataset, it obtained average Dice scores of 86.1% and 85.9% for the L4 and L5 vertebrae, respectively. On the thoracic spine dataset, it achieved average Dice scores of 76.7%, 79.5%, 82.9%, 85.5% and 85.7% for the five thoracic vertebrae from T6 to T10, respectively.

In summary, we developed a novel approach for multimodal articulated image registration. Comprehensive experiments on three typical yet challenging datasets demonstrated the efficacy of the proposed approach, which achieved better results than state-of-the-art methods.

© 2024. CARS.
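The abstract itself contains no code, but the two-branch design it describes can be sketched concretely. The following is a minimal PyTorch-style illustration (2-D for brevity) of that idea: a fixed polyrigid branch supplies an initial stationary velocity field, a learnable branch predicts an increment, and their sum is integrated by scaling and squaring into a displacement field. All names here (`integrate_velocity`, `TwoBranchRegistration`, `deform_branch`), the toy CNN standing in for the deformable branch, and the choice of scaling-and-squaring integration are our own assumptions for illustration, not the authors' implementation; the polyrigid velocity field is taken as a given input rather than computed from bone poses.

```python
# Minimal sketch of the two-branch velocity-field idea (NOT the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def integrate_velocity(v, steps=7):
    """Integrate a stationary velocity field by scaling and squaring.

    v: (B, 2, H, W) velocity field in voxel units; channel 0 is the
    x-displacement, channel 1 the y-displacement. Returns a displacement
    field of the same shape.
    """
    disp = v / (2 ** steps)  # start from a small displacement
    B, _, H, W = v.shape
    # Identity sampling grid in normalized [-1, 1] coordinates, (x, y) order.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=v.device),
        torch.linspace(-1, 1, W, device=v.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    # Factors converting voxel displacements to normalized coordinates.
    scale = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1)],
                         device=v.device)
    for _ in range(steps):
        # Compose the field with itself: disp <- disp + disp o (id + disp).
        norm_disp = disp.permute(0, 2, 3, 1) * scale
        warped = F.grid_sample(disp, grid + norm_disp,
                               align_corners=True, padding_mode="border")
        disp = disp + warped
    return disp


class TwoBranchRegistration(nn.Module):
    """Hypothetical module: fixed polyrigid init + learned deformable increment."""

    def __init__(self):
        super().__init__()
        # Toy CNN standing in for the learnable deformable branch (the paper
        # would use a far larger network, e.g. a U-Net).
        self.deform_branch = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, moving, fixed, v_polyrigid):
        # v_polyrigid: initial velocity field from the non-learnable polyrigid
        # branch (assumed precomputed from bone segmentations/poses).
        x = torch.cat([moving, fixed], dim=1)
        dv = self.deform_branch(x)       # learned increment
        v = v_polyrigid + dv             # combined velocity field
        return integrate_velocity(v)     # final displacement field


# Toy usage with random tensors (shapes are illustrative only):
model = TwoBranchRegistration()
moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
v_init = torch.zeros(1, 2, 64, 64)       # placeholder for the polyrigid output
disp = model(moving, fixed, v_init)      # (1, 2, 64, 64) displacement field
```

One plausible reading of the design: keeping the polyrigid branch non-learnable bakes the rigidity of bony structures into the initialization as a hard anatomical prior, so the learnable branch only has to model the residual soft-tissue deformation rather than rediscover rigidity from weak supervision.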
