Lung nodule malignancy classification with weakly supervised explanation generation.

Abstract

Purpose: Explainable AI aims to build systems that not only achieve high performance but can also provide insights into the decision making. Deriving such explanations, however, typically depends on fully annotated data (class labels plus local annotations), which are not readily available in the medical domain.

Approach: This paper addresses these constraints and presents an approach to classifying a lung nodule in a CT volume as malignant or benign while generating a morphologically meaningful explanation for the decision in the form of attributes such as nodule margin, sphericity, and spiculation. A deep learning architecture trained with a multi-phase regime is proposed: the nodule class label (benign/malignant) is learned with full supervision and is guided by semantic attributes that are learned in a weakly supervised manner.

Results: An extensive evaluation of the proposed system on the LIDC-IDRI dataset shows good performance compared with state-of-the-art, fully supervised methods. The proposed model labels nodules with an accuracy of 89.1% and an area under the curve (AUC) of 0.91, and provides eight attribute scores as an explanation, learned from a much smaller training set. The system's potential to be integrated with a sub-optimal nodule detection system was also tested: it correctly labeled 95% of false-positive or random input regions as benign, underscoring its robustness.

Conclusions: The proposed approach offers a way to design computer-aided diagnosis systems under the constraint of sparse availability of fully annotated images.

© 2021 Society of Photo-Optical Instrumentation Engineers (SPIE).
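The abstract does not give implementation details, but the design it describes, a shared encoder feeding a weakly supervised attribute branch and a fully supervised malignancy branch guided by those attributes, can be sketched in PyTorch. This is a minimal illustration only: the `NoduleNet` class, both data loaders, the encoder layout, and all hyperparameters below are assumptions for the sketch, not the authors' published configuration.

```python
import torch
import torch.nn as nn

class NoduleNet(nn.Module):
    """Hypothetical multi-task network: a shared 3D encoder feeds an
    attribute head (eight semantic scores, e.g., margin, sphericity,
    spiculation) and a malignancy head (benign/malignant)."""

    def __init__(self, num_attributes: int = 8):
        super().__init__()
        # Small 3D CNN encoder for a single-channel CT nodule patch.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.attribute_head = nn.Linear(32, num_attributes)
        # The malignancy head consumes encoder features plus the predicted
        # attribute scores, so the class decision is "guided" by them.
        self.malignancy_head = nn.Linear(32 + num_attributes, 2)

    def forward(self, x):
        feats = self.encoder(x)
        attrs = self.attribute_head(feats)
        logits = self.malignancy_head(torch.cat([feats, attrs], dim=1))
        return logits, attrs

model = NoduleNet()
attr_criterion = nn.MSELoss()          # weak attribute supervision
cls_criterion = nn.CrossEntropyLoss()  # full class supervision

# Phase 1: fit the encoder and attribute head on the small,
# weakly annotated subset (hypothetical weakly_labeled_loader).
opt = torch.optim.Adam(
    list(model.encoder.parameters()) + list(model.attribute_head.parameters())
)
for volume, attr_targets in weakly_labeled_loader:
    _, attrs = model(volume)
    loss = attr_criterion(attrs, attr_targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: train the malignancy head with full class supervision,
# leaving the attribute branch fixed (hypothetical fully_labeled_loader).
opt = torch.optim.Adam(model.malignancy_head.parameters())
for volume, label in fully_labeled_loader:
    logits, _ = model(volume)
    loss = cls_criterion(logits, label)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Feeding the attribute scores into the classifier, rather than predicting the two outputs independently, is one way to make the class decision depend on the same morphological attributes that serve as its explanation, which is the coupling the abstract describes.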
