The Threat of Adversarial Attack on a COVID-19 CT Image-Based Deep Learning System

Abstract

The coronavirus disease 2019 (COVID-19) spread rapidly around the world and resulted in a global pandemic. Applying artificial intelligence (AI) to COVID-19 research has produced promising results. However, most work has focused on applying AI techniques to the study of COVID-19 while ignoring the security and reliability of the AI systems themselves. In this paper, we explore adversarial attacks on a deep learning system based on COVID-19 CT images, with the aim of drawing attention to this problem. First, we built a deep learning system that distinguishes COVID-19 CT images from non-COVID-19 CT images with an average accuracy of 76.27%. Second, we attacked the pretrained model with an adversarial attack algorithm, the fast gradient sign method (FGSM), causing the system to misclassify CT images; the classification accuracy on non-COVID-19 CT images dropped from 80% to 0%. Finally, in response to this attack, we propose how a more secure and reliable deep learning model based on COVID-19 medical images could be built. We hope this study of the security of a COVID-19 CT image recognition system draws more researchers' attention to the security and reliability of medical deep learning systems.
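
The paper does not include code, but the FGSM step it describes is a single gradient-sign perturbation: x_adv = x + ε · sign(∇_x L(θ, x, y)). Below is a minimal PyTorch sketch of that attack; the classifier `model`, the `ct_batch`/`labels` tensors, and the `epsilon` value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, epsilon):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    model.zero_grad()
    loss.backward()
    # Move each pixel in the direction that increases the classification loss.
    adv_images = images + epsilon * images.grad.sign()
    # Keep perturbed pixels in the valid intensity range.
    return torch.clamp(adv_images, 0.0, 1.0).detach()

# Hypothetical usage with a pretrained binary CT classifier:
# model.eval()
# adv_batch = fgsm_attack(model, ct_batch, labels, epsilon=8 / 255)
# preds = model(adv_batch).argmax(dim=1)  # often flips to the wrong class
```

Because FGSM needs only one gradient computation, even a small `epsilon` (imperceptible in a CT scan) can collapse a classifier's accuracy, which is consistent with the 80% to 0% drop reported in the abstract.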
