
Deep learning-based liver segmentation for fusion-guided intervention.

Abstract

Tumors often have different imaging properties, and no single imaging modality can visualize all tumors. In CT-guided needle placement procedures, image fusion (e.g., with MRI, PET, or contrast-enhanced CT) is often used for guidance when the tumor is not directly visible in CT. To achieve image fusion, the interventional CT image needs to be registered to a modality in which the tumor is visible. However, multi-modality image registration is a very challenging problem. In this work, we develop a deep learning-based liver segmentation algorithm and use the segmented surfaces to assist image fusion, with applications to guided needle placement procedures for diagnosing and treating liver tumors.
The developed segmentation method integrates multi-scale input and multi-scale output features in a single network to abstract contextual information. The automatic segmentation results are used to register an interventional CT with a diagnostic image, and the registration helps visualize the target and guide the interventional procedure.
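To make the multi-scale idea concrete, the sketch below shows one common way to combine multi-scale inputs and multi-scale (deeply supervised) outputs in a single encoder-decoder network. It is written in PyTorch; the layer widths, scale factors, and the particular encoder-decoder backbone are illustrative assumptions, not the architecture reported in the paper.

```python
# Illustrative multi-scale input / multi-scale output segmentation network.
# Layer sizes and the U-Net-style backbone are assumptions for the sketch,
# not the authors' exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class MultiScaleSegNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        # Encoder: downsampled copies of the input image are injected at each
        # level ("multi-scale input") so coarse levels still see raw intensities.
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32 + in_ch, 64)
        self.enc3 = conv_block(64 + in_ch, 128)
        # Decoder with skip connections.
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        # Side outputs at two resolutions ("multi-scale output") for deep supervision.
        self.out_full = nn.Conv2d(32, n_classes, 1)
        self.out_half = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        x_half = F.avg_pool2d(x, 2)
        x_quarter = F.avg_pool2d(x, 4)
        e1 = self.enc1(x)
        e2 = self.enc2(torch.cat([F.max_pool2d(e1, 2), x_half], dim=1))
        e3 = self.enc3(torch.cat([F.max_pool2d(e2, 2), x_quarter], dim=1))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        # The half-resolution map can be supervised against a downsampled label mask.
        return self.out_full(d1), self.out_half(d2)

if __name__ == "__main__":
    net = MultiScaleSegNet()
    full, half = net(torch.randn(1, 1, 256, 256))
    print(full.shape, half.shape)  # torch.Size([1, 2, 256, 256]) torch.Size([1, 2, 128, 128])
```

During training, both outputs would typically be penalized (e.g., with Dice or cross-entropy losses at their respective resolutions), which encourages the network to capture context at multiple scales.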
The developed segmentation method is highly accurate, achieving a Dice score of 96.1% on 70 CT scans provided by the LiTS challenge. The segmentation algorithm is then applied to a set of images acquired for liver tumor intervention to perform surface-based image fusion. The effectiveness of the proposed methods is demonstrated through a number of clinical cases.
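For the surface-based fusion step, a standard way to align two segmented liver surfaces is rigid iterative closest point (ICP) on points sampled from the two masks. The sketch below is a generic ICP implementation under that assumption; the paper's actual registration pipeline may differ, and the point clouds here are stand-ins for points extracted from the interventional and diagnostic liver segmentations.

```python
# Generic rigid ICP between two liver surface point clouds (assumption: the
# paper's registration is surface-based; this is a standard stand-in, not
# the authors' exact method).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(moving, fixed, n_iter=50, tol=1e-6):
    """Align 'moving' surface points (Nx3) to 'fixed' surface points (Mx3)."""
    tree = cKDTree(fixed)
    current = moving.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(current)               # closest fixed point per moving point
        R, t = best_rigid_transform(current, fixed[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total

if __name__ == "__main__":
    # Synthetic demo: the "moving" cloud is the "fixed" cloud shifted by a known offset.
    fixed = np.random.rand(2000, 3)                   # stand-in for diagnostic-scan surface
    true_t = np.array([0.05, -0.02, 0.01])
    moving = fixed - true_t                           # stand-in for interventional-scan surface
    R, t = icp(moving, fixed)
    print(np.round(t, 3))                             # should approach true_t
```

The recovered transform can then be applied to the interventional CT to bring it into the frame of the diagnostic image, so the tumor visible in the diagnostic modality can be overlaid for needle guidance.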
Our study shows that deep learning-based image segmentation can produce useful results to support image fusion for interventional guidance. Such a technique may also enable a number of other applications.
