
Registration-guided deep learning image segmentation for cone beam CT-based online adaptive radiotherapy.

Abstract

Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of the online ART process is accurately and efficiently delineating organs at risk (OARs) and targets on online images, such as Cone Beam Computed Tomography (CBCT). Direct application of deep learning (DL)-based segmentation to CBCT images suffers from issues such as low image quality and limited available contour labels for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models.

The RgDL framework is composed of two components: image registration and registration-guided DL segmentation. The image registration algorithm transforms or deforms the planning contours, which are subsequently used as guidance by the DL model to obtain accurate final segmentations. We implemented the proposed framework in two ways, Rig-RgDL (Rig for rigid body) and Def-RgDL (Def for deformable), using rigid body (RB) registration or deformable image registration (DIR) as the registration algorithm, respectively, and U-Net as the DL model architecture. Both implementations of the RgDL framework were trained and evaluated on seven OARs in an institutional clinical Head and Neck (HN) dataset.

Compared to baseline approaches using registration or DL alone, RgDL achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSC) and other distance-based metrics. Rig-RgDL achieved an average DSC of 84.5% across the seven OARs, higher than RB or DL alone by 4.5% and 4.7%, respectively. The average DSC of Def-RgDL was 86.5%, higher than DIR or DL alone by 2.4% and 6.7%, respectively. The inference time required by the DL model component to generate the final segmentations of the seven OARs was less than one second. Examining the contours from RgDL and DL case by case, we found that RgDL was less susceptible to image artifacts. We also studied how the performance of RgDL and DL varies with the size of the training dataset: the DSC of DL dropped by 12.1% as the number of training cases decreased from 22 to 5, whereas that of RgDL dropped by only 3.4%.

By incorporating patient-specific registration guidance into a population-based DL segmentation model, the RgDL framework overcame the obstacles associated with online CBCT segmentation, including low image quality and insufficient training data, and achieved better segmentation accuracy than the baseline methods. The resulting segmentation accuracy and efficiency show promise for applying the RgDL framework to online ART.
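The abstract states that registration-propagated planning contours guide a U-Net, but it does not spell out how that guidance enters the network. Below is a minimal PyTorch sketch under one common assumption: the propagated contour mask is concatenated with the CBCT as an extra input channel. This is an illustration, not the authors' implementation; TinySegNet is a tiny stand-in for their U-Net, and all names, shapes, and data here are hypothetical placeholders.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for the paper's U-Net: 2-channel input
    (CBCT slice + registration-propagated contour mask), 1-channel OAR logit output."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(16, 1, 1)  # per-pixel logit for one OAR

    def forward(self, cbct, guidance_mask):
        # Assumption: registration guidance enters as a second input channel.
        x = torch.cat([cbct, guidance_mask], dim=1)
        return self.decoder(self.encoder(x))

def dice(pred, target, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy usage: one 2D CBCT slice and a contour mask propagated by rigid or
# deformable registration (random placeholders here, not real data).
cbct = torch.randn(1, 1, 128, 128)
guidance = (torch.rand(1, 1, 128, 128) > 0.5).float()
model = TinySegNet()
pred = (torch.sigmoid(model(cbct, guidance)) > 0.5).float()
print(f"DSC vs. guidance contour: {dice(pred, guidance):.3f}")

In this sketch the registration step supplies patient-specific prior shape information while the network remains a population-trained model, which is the general idea the abstract attributes to RgDL; the actual architecture, guidance encoding, and 2D/3D choice may differ in the paper.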
