
Diagnosing uterine cervical cancer on a single T2-weighted image: Comparison between deep learning versus radiologists.


Abstract

To compare deep learning with radiologists when diagnosing uterine cervical cancer on a single T2-weighted image.
This study included 418 patients (age range, 21-91 years; mean, 50.2 years) who underwent magnetic resonance imaging (MRI) between June 2013 and May 2020: 177 patients with pathologically confirmed cervical cancer and 241 non-cancer patients. Sagittal T2-weighted images were used for analysis. A deep convolutional neural network (DCNN) model based on the Xception architecture was trained for 50 epochs on 488 images from 117 cancer patients and 509 images from 181 non-cancer patients. It was then tested on 120 images, one each from 60 cancer patients and 60 non-cancer patients. Three experienced radiologists, blinded to the diagnoses, also interpreted the same 120 images independently. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were compared between the DCNN model and the radiologists.
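The classifier described above can be sketched as an Xception backbone with a single sigmoid output for cancer vs. non-cancer. This is a minimal illustration, not the authors' code: the input size, pooling head, optimizer, and use (or not) of pretrained weights are all assumptions, since the abstract specifies only the architecture (Xception) and the epoch count (50).

```python
# Minimal sketch of a binary Xception-based classifier for single
# sagittal T2-weighted images. Hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_dcnn(input_shape=(299, 299, 3)):
    # Xception backbone without its ImageNet classification head.
    backbone = tf.keras.applications.Xception(
        weights=None,          # the abstract does not state whether pretraining was used
        include_top=False,
        input_shape=input_shape,
    )
    x = layers.GlobalAveragePooling2D()(backbone.output)
    # Single sigmoid unit: probability of cervical cancer.
    out = layers.Dense(1, activation="sigmoid")(x)
    return models.Model(backbone.input, out)


model = build_dcnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=50)  # 997 training images in the study
```

The sigmoid output lets a single threshold (e.g. 0.5) produce the binary decision, while the raw probabilities support the ROC analysis reported in the results.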
The DCNN model achieved a sensitivity of 0.883, a specificity of 0.933, and an accuracy of 0.908; the radiologists' corresponding values ranged over 0.783-0.867, 0.917-0.950, and 0.867-0.892. The DCNN model performed equal to or better than the radiologists (AUC = 0.932; p for accuracy = 0.272-0.62).
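The four metrics compared above follow directly from the binary confusion matrix and the model's probability scores. The sketch below computes them in plain Python on tiny illustrative labels and scores (not the study's data); the AUC is computed as the probability that a randomly chosen positive case outscores a randomly chosen negative one.

```python
# Sensitivity, specificity, accuracy, and AUC for a binary classifier
# (1 = cancer, 0 = non-cancer). Toy data for illustration only.

def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy from thresholded predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy


def auc(y_true, y_score):
    """AUC as the probability a random positive outscores a random negative."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


y_true = [1, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0]              # predictions at a 0.5 threshold
y_score = [0.9, 0.4, 0.8, 0.3, 0.2]   # model probabilities
sens, spec, acc = confusion_metrics(y_true, y_pred)
# sens = 2/3, spec = 1.0, acc = 0.8, auc(y_true, y_score) = 1.0
```

In the study these metrics were computed on 120 test images, so the radiologists' ranges and the model's point estimates are directly comparable.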
Deep learning provided diagnostic performance equivalent to that of experienced radiologists when diagnosing cervical cancer on a single T2-weighted image.
Copyright © 2020 Elsevier B.V. All rights reserved.
