Improving breast mass classification by shared data with domain transformation using a generative adversarial network.


Abstract

Training a convolutional neural network (CNN) generally requires a large dataset, but collecting a large medical image dataset is difficult. The purpose of this study is to investigate the utility of synthetic images in training CNNs and to demonstrate that unrelated images can be made applicable by domain transformation. Mammograms showing 202 benign and 212 malignant masses were used for evaluation. To create synthetic data, a cycle generative adversarial network (CycleGAN) was trained with 599 lung nodules on computed tomography (CT) and 1430 breast masses on digitized mammograms (DDSM). A CNN was trained to classify masses as benign or malignant. Classification performance was compared between networks trained with the original data, augmented data, synthetic data, DDSM images, and natural images (the ImageNet dataset). The results were evaluated in terms of classification accuracy and the area under the receiver operating characteristic curve (AUC). Classification accuracy improved from 65.7% to 67.1% with data augmentation. Using an ImageNet-pretrained model was useful (79.2%). Performance improved slightly when only the synthetic images or the DDSM images were used for pretraining (67.6% and 72.5%, respectively). When the ImageNet-pretrained model was further trained with the synthetic images, classification performance improved slightly (81.4%), although the difference in AUCs was not statistically significant. The synthetic images had an effect similar to that of the DDSM images. These results indicate that synthetic data generated from unrelated lesions by domain transformation can be used to increase the number of training samples.
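The staged training the abstract describes (pretrain on synthetic or DDSM images, then fine-tune on the target mammograms) can be sketched roughly as below. This is a hypothetical illustration in PyTorch, not the authors' code: the tiny CNN is a stand-in for the unspecified architecture, and the study's best-performing configuration started from ImageNet weights rather than random initialization.

```python
import torch
import torch.nn as nn

def build_classifier() -> nn.Module:
    # Stand-in CNN for a single-channel mammogram patch; the paper's actual
    # architecture is not given in the abstract.
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(8, 2),  # benign vs. malignant logits
    )

def train_stage(model: nn.Module, batches, lr: float = 1e-4) -> nn.Module:
    """One training stage (pretraining or fine-tuning) over (image, label) batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for x, y in batches:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

# Staged use (hypothetical data loaders):
#   model = train_stage(build_classifier(), synthetic_batches)   # pretrain
#   model = train_stage(model, mammogram_batches, lr=1e-5)       # fine-tune
```

The same `train_stage` call is reused for both stages; only the data source and, typically, the learning rate change between pretraining and fine-tuning.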
Copyright © 2020 Elsevier Ltd. All rights reserved.
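The AUC comparisons quoted in the abstract rest on the standard area under the ROC curve, which can be computed directly from raw scores via the rank-based (Mann-Whitney) formulation. A minimal self-contained sketch (not the authors' evaluation code):

```python
import numpy as np

def auc(labels, scores):
    """AUC as the Mann-Whitney U statistic: the fraction of
    (malignant, benign) pairs whose scores are ordered correctly."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Pairs where the malignant score exceeds the benign score win outright;
    # tied pairs contribute half.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

With four label/score pairs, three of the four (malignant, benign) score pairs are ordered correctly, giving 0.75.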
