
A Feature Space-Restricted Attention Attack on Medical Deep Learning Systems.

Abstract

Deep neural networks have shown powerful performance in medical image analysis across a variety of diseases. However, a number of studies over the past few years have demonstrated that these deep learning systems can be vulnerable to well-designed adversarial attacks, in which minor perturbations are added to the input. As both the public and academia increasingly rely on deep learning in the health information economy, such adversarial attacks become more consequential and raise security concerns. In this article, adversarial attacks on deep learning systems in medicine are analyzed from two points of view: 1) white box and 2) black box. A fast adversarial sample generation method, the Feature Space-Restricted Attention Attack, is proposed to explore more confusing adversarial samples. It is based on a generative adversarial network with a bounded classification space that generates perturbations to carry out attacks. Meanwhile, an attention mechanism focuses the perturbation on the lesion region, tying it closely to the classification information and making the attack more efficient and less visible. The performance and specificity of the proposed attack method are demonstrated through extensive experiments on three different types of medical images. Finally, it is expected that this work can help practitioners become aware of current weaknesses in the deployment of deep learning systems in clinical settings, and it further investigates domain-specific features of medical deep learning systems to enhance model generalization and resistance to attacks.
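
The mechanism described in the abstract can be illustrated with a minimal, hypothetical sketch: a small generator network produces a bounded perturbation, an attention map (assumed here to be a precomputed lesion saliency mask) restricts that perturbation to the lesion region, and the attack objective pushes a target classifier away from the correct label while keeping the change small. The names and hyperparameters below (PerturbationGenerator, attack_step, eps, lam) are illustrative assumptions and not the authors' implementation.

```python
# Hypothetical sketch of an attention-masked, GAN-style perturbation generator.
# Not the authors' code: the attention mask, bounds, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Small convolutional generator mapping an image to a bounded perturbation."""
    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x):
        # tanh keeps each pixel's perturbation within [-eps, eps]
        return self.eps * torch.tanh(self.net(x))

def attack_step(generator, classifier, x, y, attention, opt, lam=10.0):
    """One training step: fool the classifier while confining the perturbation
    to the attended (lesion) region and keeping it small."""
    opt.zero_grad()
    delta = generator(x) * attention              # mask perturbation to the lesion area
    x_adv = torch.clamp(x + delta, 0.0, 1.0)      # keep the adversarial image valid
    logits = classifier(x_adv)
    adv_loss = -F.cross_entropy(logits, y)        # untargeted: push away from true label
    reg_loss = delta.abs().mean()                 # penalize magnitude for invisibility
    loss = adv_loss + lam * reg_loss
    loss.backward()
    opt.step()
    return x_adv.detach(), loss.item()
```

In practice, the generator's parameters would be optimized over a dataset with the classifier frozen, so that new adversarial samples can later be produced in a single forward pass, which is what makes this family of attacks fast compared with iterative per-image methods.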
