Blinding and blurring the multi-object tracker with adversarial perturbations.

Abstract

Adversarial attacks reveal a potential weakness of deep models: they are susceptible to being fooled by imperceptible perturbations added to images. Recent deep multi-object trackers combine detection and association in a single framework, making attacks on either the detector or the association component an effective means of deception. Existing attacks focus on increasing the frequency of ID switches, which greatly damages tracking stability but is not enough to render the tracker completely ineffective. To more fully explore the potential of adversarial attacks, we propose the Blind-Blur Attack (BBA), a novel attack method based on spatio-temporal motion information for fooling multi-object trackers. Specifically, a simple but efficient perturbation generator is trained with a blind-blur loss that simultaneously makes real targets invisible to the tracker and causes background regions to be perceived as moving targets. We take TraDeS as our main research tracker and verify our attack on other strong algorithms (i.e., CenterTrack, FairMOT, and ByteTrack) on the MOT-Challenge benchmark datasets (i.e., MOT16, MOT17, and MOT20). The BBA attack reduces the MOTA of TraDeS and ByteTrack from 69.1 and 80.3 to -238.1 and -357.0, respectively, indicating that it is an efficient method with a high degree of transferability.
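The abstract describes the blind-blur loss as a two-sided objective: a "blind" term that suppresses the tracker's responses on real targets, and a "blur" term that raises responses on the background so the tracker hallucinates moving objects. The sketch below is a minimal PyTorch illustration of that idea; the names (`PerturbationGenerator`, `blind_blur_loss`, the stand-in heatmap) are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of a BBA-style perturbation generator and blind-blur loss.
# All module and loss names are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class PerturbationGenerator(nn.Module):
    """Small conv net mapping a frame to a bounded, imperceptible perturbation."""

    def __init__(self, channels=32, eps=8.0 / 255.0):
        super().__init__()
        self.eps = eps  # L-infinity budget keeping the perturbation imperceptible
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, frame):
        # tanh squashes the raw output; eps scales it into the allowed budget
        return self.eps * torch.tanh(self.net(frame))


def blind_blur_loss(heatmap, target_mask):
    """Two-term objective in the spirit of the paper's blind-blur loss.

    heatmap:     detector center heatmap on the attacked frame, values in [0, 1]
    target_mask: 1 where ground-truth targets are, 0 on background
    """
    # "Blind": drive responses on real targets toward 0 (targets vanish)
    blind = (heatmap * target_mask).sum() / target_mask.sum().clamp(min=1)
    # "Blur": drive responses on background toward 1 (fake moving targets)
    background = 1.0 - target_mask
    blur = ((1.0 - heatmap) * background).sum() / background.sum().clamp(min=1)
    return blind + blur


if __name__ == "__main__":
    frame = torch.rand(1, 3, 256, 256)                 # stand-in video frame
    mask = (torch.rand(1, 1, 256, 256) > 0.9).float()  # stand-in target mask
    gen = PerturbationGenerator()
    adv = (frame + gen(frame)).clamp(0, 1)             # attacked frame
    # In practice the heatmap would come from running the tracker on `adv`;
    # here a random tensor stands in so the snippet runs end to end.
    heatmap = torch.rand(1, 1, 256, 256, requires_grad=True)
    loss = blind_blur_loss(heatmap, mask)
    loss.backward()  # gradients would flow back into the generator during training
```

One note on the reported numbers: under the CLEAR MOT metrics, MOTA = 1 - (FN + FP + IDSW) / GT, so an attack that both hides real targets (inflating false negatives) and hallucinates background detections (inflating false positives) can push MOTA far below zero, which would explain scores like -238.1 and -357.0.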
