Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism.

Abstract

Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks (DNNs) are susceptible to small adversarial perturbations in the input image, which raises safety concerns about deploying these systems in clinical settings. Our analysis attributes the vulnerability of existing medical image DNN models to adversarial examples to two factors: the complex biological textures of medical images and the over-parameterization of medical image DNN models. To improve the defense of medical imaging systems against adversarial examples, we therefore propose a new model-based defense framework that equips medical image DNN models with a pruning module and an attention mechanism module. Experiments on three benchmark medical image datasets verify that our method improves the robustness of medical image DNN models. On the Chest X-Ray dataset, our defense achieves up to a 77.18% defense rate against the PGD (Projected Gradient Descent) attack and a 69.49% defense rate against the DeepFool attack. Ablation experiments on the pruning module and the attention mechanism module further confirm that both pruning and attention effectively improve the robustness of medical image DNN models. Compared with existing model-based defense methods proposed for natural images, our defense method is better suited to medical images. It can serve as a general strategy for designing more explainable and secure medical deep learning systems and can be widely applied across medical image tasks to improve model robustness. This article is protected by copyright. All rights reserved.
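For readers unfamiliar with the attack behind the reported defense rates, the following is a minimal sketch of an L∞-bounded PGD attack, run here against a toy logistic-regression classifier in NumPy. The model, step size `alpha`, budget `eps`, and step count are illustrative assumptions, not the paper's medical-image setup; in practice PGD is run against the full DNN using automatic differentiation.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD against a logistic-regression model (illustrative
    sketch only; the paper attacks full medical image DNNs).

    x: input vector in [0, 1], y: binary label, (w, b): model weights.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Forward pass and gradient of the binary cross-entropy
        # loss with respect to the input.
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        grad = (p - y) * w
        # Take a signed gradient ascent step on the loss, then
        # project back into the eps-ball around the clean input
        # and into the valid pixel range.
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

A "defense rate" such as the 77.18% reported above is then the fraction of such adversarial examples that the defended model still classifies correctly.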
