
Deep-learning based synthetization of real-time in-treatment 4D images using surface motion and pre-treatment images: a proof-of-concept study.

Abstract

To develop a deep learning model that maps body surface motion to internal anatomy deformation, potentially enabling dose-free, real-time 4D virtual image-guided radiotherapy driven by skin surface data.

Body contours were segmented from 4DCT images. A deformable image registration algorithm was used to register the end-of-exhalation (EOE) phase to the other phases. The resulting deformation vector fields were dimension-reduced to their first two principal components (PCs). A deep learning model was trained to predict the two PC scores of each phase from the surface displacement. The instantaneous deformation field can then be reconstructed and used to warp the EOE image, yielding a real-time CT image. The approach was validated on a 4D XCAT phantom and on the public DIR-Lab and 4D-Lung datasets, with and without simulated noise.

On the XCAT phantom, the validation accuracy of the tumor centroid trajectory was 0.04 ± 0.02 mm. For the DIR-Lab dataset, 300 landmarks were annotated on the end-of-inhalation (EOI) images of each patient, and the mean displacement between their predicted and reference positions was below 2 mm for all studied cases. For the 4D-Lung dataset, the average Dice coefficient (± std) between predicted and reference tumor contours at the EOI phase was 0.835 ± 0.092 across all studied cases.

A deep learning-based approach was proposed and validated to predict internal anatomy deformation from surface motion, which is potentially applicable to on-line target navigation for accurate radiotherapy based on real-time 4D skin surface data and pre-treatment images. This article is protected by copyright. All rights reserved.
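The core dimensionality-reduction step described above — compressing per-phase deformation vector fields (DVFs) to two PC scores, then reconstructing an instantaneous field from predicted scores — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the DVF shapes, the synthetic respiratory modes, and the reuse of the true scores in place of a trained network's predictions are all assumptions made for the example.

```python
import numpy as np

# Hypothetical setup: N breathing phases, each DVF flattened to D values
# (e.g. 3 displacement components per voxel). All data below is synthetic.
rng = np.random.default_rng(0)
n_phases, D = 10, 3 * 64

# Synthetic DVFs driven by two latent respiratory modes, mimicking a
# breathing cycle sampled at n_phases points.
t = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
mode1, mode2 = rng.normal(size=D), rng.normal(size=D)
dvfs = np.outer(np.sin(t), mode1) + 0.3 * np.outer(np.cos(t), mode2)

# PCA via SVD on the mean-centred DVFs; keep the first two components.
mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
pcs = Vt[:2]                        # (2, D) principal component vectors
scores = (dvfs - mean_dvf) @ pcs.T  # (n_phases, 2) PC scores per phase

# At treatment time, the deep learning model would map a surface
# displacement to predicted PC scores; here we reuse the true scores
# just to demonstrate the field reconstruction step.
reconstructed = mean_dvf + scores @ pcs
err = np.abs(reconstructed - dvfs).max()
print(f"max reconstruction error: {err:.2e}")
```

Because the synthetic DVFs here have exactly two latent modes, two PCs reconstruct them almost perfectly; real 4DCT deformation fields would retain some residual error, which the paper's validation quantifies against landmarks and tumor contours.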
