
Deep learning-based reconstruction of interventional tools and devices from four X-ray projections for tomographic interventional guidance.

Abstract

Purpose: Image guidance for minimally invasive interventions is usually performed by acquiring fluoroscopic images with a monoplanar or biplanar C-arm system. However, the projective data provide only limited information about the spatial structure and position of interventional tools and devices such as stents, guide wires, or coils. In this work we propose a deep learning-based pipeline for real-time tomographic (four-dimensional) interventional guidance at conventional dose levels.

Methods: Our pipeline comprises two steps. In the first, interventional tools are extracted from four cone-beam CT projections using a deep convolutional neural network. These projections are then Feldkamp reconstructed and fed into a second network, which is trained to segment the interventional tools and devices in this highly undersampled reconstruction. Both networks are trained on simulated CT data and evaluated on both simulated data and C-arm cone-beam CT measurements of stents, coils, and guide wires.

Results: The pipeline is capable of reconstructing interventional tools from only four X-ray projections without the need for a patient prior. At an isotropic voxel size of 100 µm, our method achieves a precision/recall within a 100 µm environment of the ground truth of 93 %/98 %, 90 %/71 %, and 93 %/76 % for guide wires, stents, and coils, respectively.

Conclusions: A deep learning-based approach for four-dimensional interventional guidance can overcome the drawbacks of today's interventional guidance by providing full spatiotemporal (4D) information about the interventional tools at dose levels comparable to conventional fluoroscopy.
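
To make the two-step pipeline described in the abstract more concrete, here is a minimal sketch of how it could be wired up in PyTorch. The tiny placeholder CNNs, the `fdk_reconstruct` helper, and all shapes are hypothetical assumptions for illustration; the paper does not specify the architectures or the reconstruction code at this level of detail.

```python
# Hypothetical sketch of the two-stage guidance pipeline:
# 1) segment tools in each of the four projections, 2) Feldkamp-reconstruct the
# tool-only projections, 3) segment the tools in the undersampled 3D volume.
import torch
import torch.nn as nn


class ProjectionSegmenter(nn.Module):
    """Stage 1: extract interventional tools from each 2D cone-beam projection."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),   # per-pixel tool probability
        )

    def forward(self, x):
        return self.net(x)


class VolumeSegmenter(nn.Module):
    """Stage 2: segment tools in the highly undersampled 3D reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, 1), nn.Sigmoid(),    # per-voxel tool probability
        )

    def forward(self, x):
        return self.net(x)


def fdk_reconstruct(projections, volume_shape=(64, 64, 64)):
    """Placeholder for a Feldkamp (FDK) cone-beam reconstruction.

    A real implementation would filter and backproject the four tool-only
    projections along the acquisition geometry; here we only mimic the output
    shape so the pipeline wiring can be executed end to end.
    """
    batch = projections.shape[0]
    return torch.zeros(batch, 1, *volume_shape)


def guidance_pipeline(projections, proj_net, vol_net):
    """Four projections -> tool-only projections -> FDK volume -> 3D tool mask."""
    b, n_views, h, w = projections.shape
    views = projections.reshape(b * n_views, 1, h, w)
    tool_projections = proj_net(views).reshape(b, n_views, h, w)

    # Feldkamp reconstruction of the sparse, tool-only projections
    volume = fdk_reconstruct(tool_projections)

    # Segment the interventional tools in the undersampled reconstruction
    return vol_net(volume)


if __name__ == "__main__":
    projections = torch.rand(1, 4, 128, 128)  # four simulated X-ray views
    mask = guidance_pipeline(projections, ProjectionSegmenter(), VolumeSegmenter())
    print(mask.shape)  # torch.Size([1, 1, 64, 64, 64])
```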
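The reported "precision/recall within a 100 µm environment of the ground truth" can be read as a tolerance-based voxel metric: a predicted tool voxel counts toward precision if it lies within 100 µm of a ground-truth voxel, and a ground-truth voxel counts toward recall if a predicted voxel lies within 100 µm of it. At the stated 100 µm isotropic voxel size this is a one-voxel tolerance. The sketch below implements this reading with SciPy distance transforms; the exact definition used in the paper may differ.

```python
# Tolerance-based precision/recall for binary 3D masks (assumed reading of the
# "within a 100 µm environment" criterion; not necessarily the authors' code).
import numpy as np
from scipy.ndimage import distance_transform_edt


def precision_recall_with_tolerance(pred, gt, voxel_size_um=100.0, tol_um=100.0):
    """Precision/recall of boolean 3D masks with a spatial tolerance in µm."""
    # Distance (in µm) from every voxel to the nearest foreground voxel
    dist_to_gt = distance_transform_edt(~gt) * voxel_size_um
    dist_to_pred = distance_transform_edt(~pred) * voxel_size_um

    precision = np.mean(dist_to_gt[pred] <= tol_um) if pred.any() else 0.0
    recall = np.mean(dist_to_pred[gt] <= tol_um) if gt.any() else 0.0
    return precision, recall


if __name__ == "__main__":
    gt = np.zeros((32, 32, 32), dtype=bool)
    gt[10:20, 16, 16] = True               # a short "guide wire" segment
    pred = np.roll(gt, 1, axis=2)          # prediction shifted by one voxel (100 µm)
    print(precision_recall_with_tolerance(pred, gt))  # -> (1.0, 1.0)
```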
