
Multi-scale adversarial learning with difficult region supervision learning models for primary tumor segmentation.

Abstract

Recently, deep learning techniques have found extensive application in the accurate, automated segmentation of tumor regions. However, owing to the variety of tumor shapes, the complexity of tumor types, and the unpredictability of their spatial distribution, tumor segmentation still faces major challenges. Taking cues from deep supervision and adversarial learning, in this study we devise a cascade-based methodology that incorporates multi-scale adversarial learning and difficult-region supervision learning to tackle these challenges.
Approach. Overall, the method follows a coarse-to-fine strategy: it first roughly locates the target region and then refines the target object with a multi-stage cascaded binary segmentation that converts the complex multi-class segmentation problem into multiple simpler binary segmentation problems. In addition, we propose a Multi-Scale Adversarial Learning Difficult Supervised UNet (MSALDS-UNet) as the fine-segmentation model; it applies multiple discriminators along the decoding path of the segmentation network to implement multi-scale adversarial learning, thereby enhancing segmentation accuracy. Meanwhile, MSALDS-UNet introduces a difficult-region supervision loss that effectively exploits structural information when segmenting hard-to-distinguish areas, such as blurry boundary regions.
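
To make the approach more concrete, below is a minimal PyTorch-style sketch of how per-scale discriminators and a boundary-weighted "difficult region" term might be attached to a U-Net decoder. All names (PatchDiscriminator, difficult_region_loss, generator_step, lambda_adv, lambda_dr) and the boundary-band heuristic are illustrative assumptions, not the authors' released implementation; the discriminator update step is omitted.

```python
# Sketch: one small discriminator per decoder scale (multi-scale adversarial
# learning) plus a boundary-weighted BCE term as a proxy for "difficult" pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """Small convolutional discriminator judging a predicted probability map at one scale."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),  # patch-wise real/fake logits
        )

    def forward(self, x):
        return self.net(x)

def difficult_region_loss(prob, target, kernel=5):
    """Extra BCE weight near label boundaries (a simple stand-in for 'difficult' regions)."""
    dilated = F.max_pool2d(target, kernel, stride=1, padding=kernel // 2)
    eroded = -F.max_pool2d(-target, kernel, stride=1, padding=kernel // 2)
    boundary = (dilated - eroded).clamp(0, 1)          # 1 in a band around label edges
    bce = F.binary_cross_entropy(prob, target, reduction="none")
    return (boundary * bce).sum() / (boundary.sum() + 1e-6)

def generator_step(decoder_probs, target, discriminators, lambda_adv=0.01, lambda_dr=1.0):
    """decoder_probs: list of sigmoid maps, one per decoder scale (coarse to fine)."""
    loss = 0.0
    for prob, disc in zip(decoder_probs, discriminators):
        gt = F.interpolate(target, size=prob.shape[-2:], mode="nearest")
        loss = loss + F.binary_cross_entropy(prob, gt)            # deep supervision
        fake_logits = disc(prob)
        loss = loss + lambda_adv * F.binary_cross_entropy_with_logits(
            fake_logits, torch.ones_like(fake_logits))            # try to fool the discriminator
        loss = loss + lambda_dr * difficult_region_loss(prob, gt) # difficult-region term
    return loss
```

Each decoder scale feeds its own discriminator, mirroring deep supervision, so adversarial feedback reaches both coarse and fine feature maps.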
Main results. A thorough validation on three independent public databases (KiTS21 and the MSD Brain and Pancreas datasets) shows that our model achieves satisfactory tumor segmentation results in terms of key evaluation metrics, including DSC, JC, and HD95.
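
For reference, DSC and JC are the standard overlap metrics; a small NumPy sketch of their usual definitions follows. HD95, the 95th-percentile symmetric Hausdorff distance, needs surface distances and can be computed with e.g. medpy.metric.binary.hd95. This is the conventional formulation, not a detail taken from the paper.

```python
# Standard Dice similarity coefficient (DSC) and Jaccard coefficient (JC) on binary masks.
import numpy as np

def dice_and_jaccard(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    jc = (inter + eps) / (np.logical_or(pred, gt).sum() + eps)
    return dsc, jc
```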
Significance. This paper introduces a cascade approach that combines multi-scale adversarial learning and difficult-region supervision to achieve precise tumor segmentation, and it confirms that this combination improves segmentation performance, especially for small objects. (Our code will be publicly available after acceptance at https://zhengshenhai.github.io/.)

© 2024 Institute of Physics and Engineering in Medicine.
