
Ultrasound Frame-to-Volume Registration via Deep Learning for Interventional Guidance

Abstract

Fusing intra-operative 2D ultrasound (US) frames with preoperative 3D magnetic resonance (MR) images for guiding interventions has become the clinical gold standard in image-guided prostate cancer biopsy. However, developing an automatic image registration system for this application is challenging because of the modality gap between US and MR and the dimensionality gap between 2D and 3D data. To overcome these challenges, we propose a novel US frame-to-volume registration pipeline that bridges the dimensionality gap between 2D US frames and the 3D US volume. The pipeline is implemented with deep neural networks and is fully automatic, requiring no external tracking devices. The framework consists of three major components: (1) a frame-to-frame registration network that estimates the current frame's 3D spatial position from the preceding video context, (2) a frame-to-slice correction network that refines the estimated frame position using the 3D US volumetric information, and (3) a similarity filtering mechanism that selects the candidate with the highest image similarity to the query frame. We validated our method on a clinical dataset of 618 subjects and tested its potential on real-time 2D-US to 3D-MR fusion navigation tasks. The proposed Frame-to-Volume Registration (FVReg) achieved an average target navigation error of 1.93 mm at 5 to 14 frames per second. Our source code is publicly available at https://github.com/DIAL-RPI/Frame-to-Volume-Registration.
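To make the three-stage design concrete, the sketch below is a rough, non-authoritative illustration of how such a pipeline could be wired up in PyTorch. The class and function names (FrameToFrameNet, FrameToSliceNet, similarity_filter), the layer sizes, the six-parameter rigid-pose output, and the normalized cross-correlation used for the similarity step are all assumptions made for illustration; the authors' actual implementation is available in the linked repository.

```python
import torch
import torch.nn as nn


class FrameToFrameNet(nn.Module):
    """Stage 1 (sketch): estimate a coarse 6-DoF pose of the current US frame
    from a short clip of preceding frames (video context)."""

    def __init__(self, context_len=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(context_len + 1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)  # 3 rotation + 3 translation parameters

    def forward(self, clip):
        # clip: (B, context_len + 1, H, W) -- context frames stacked with the current frame
        return self.head(self.encoder(clip).flatten(1))


class FrameToSliceNet(nn.Module):
    """Stage 2 (sketch): predict a pose correction by comparing the query frame
    with the slice resampled from the 3D US volume at the coarse pose."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 6)

    def forward(self, query_frame, resampled_slice):
        # Both inputs: (B, 1, H, W); concatenated along the channel axis.
        x = torch.cat([query_frame, resampled_slice], dim=1)
        return self.head(self.encoder(x).flatten(1))


def similarity_filter(query_frame, candidate_slices):
    """Stage 3 (sketch): pick the candidate slice most similar to the query frame,
    here using normalized cross-correlation as a stand-in similarity measure."""
    q = (query_frame - query_frame.mean()) / (query_frame.std() + 1e-6)
    scores = []
    for s in candidate_slices:
        c = (s - s.mean()) / (s.std() + 1e-6)
        scores.append((q * c).mean())
    return int(torch.stack(scores).argmax())


if __name__ == "__main__":
    f2f, f2s = FrameToFrameNet(context_len=5), FrameToSliceNet()
    clip = torch.randn(1, 6, 128, 128)            # 5 context frames + current frame
    coarse_pose = f2f(clip)                       # (1, 6) coarse pose estimate
    query = torch.randn(1, 1, 128, 128)
    slice_at_pose = torch.randn(1, 1, 128, 128)   # placeholder for volume resampling
    refined_pose = coarse_pose + f2s(query, slice_at_pose)
    best = similarity_filter(query, [slice_at_pose, torch.randn(1, 1, 128, 128)])
    print(coarse_pose.shape, refined_pose.shape, best)
```

In this reading, the frame-to-frame network gives a fast temporal prediction, the frame-to-slice network anchors that prediction to the reconstructed 3D US volume, and the similarity filter arbitrates among candidate poses; how candidates are generated and how slices are resampled from the volume are details left to the original code.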
