
Deeply Coupled Convolution-Transformer With Spatial-Temporal Complementary Learning for Video-Based Person Re-Identification

Abstract

Advanced deep convolutional neural networks (CNNs) have shown great success in video-based person re-identification (Re-ID). However, they usually focus on the most obvious regions of persons and have limited global representation ability. Recently, Transformers have been shown to explore interpatch relationships with global observations, leading to performance improvements. In this work, we take advantage of both sides and propose a novel spatial-temporal complementary learning framework named deeply coupled convolution-transformer (DCCT) for high-performance video-based person Re-ID. First, we couple CNNs and Transformers to extract two kinds of visual features and experimentally verify their complementarity. Furthermore, in the spatial domain, we propose a complementary content attention (CCA) that exploits the coupled structure and guides independent features for spatial complementary learning. In the temporal domain, a hierarchical temporal aggregation (HTA) is proposed to progressively capture interframe dependencies and encode temporal information. In addition, a gated attention (GA) is used to deliver the aggregated temporal information into the CNN and Transformer branches for temporal complementary learning. Finally, we introduce a self-distillation training strategy to transfer the superior spatial-temporal knowledge to the backbone networks for higher accuracy and greater efficiency. In this way, two kinds of typical features from the same videos are integrated to obtain more informative representations. Extensive experiments on four public Re-ID benchmarks demonstrate that our framework attains better performance than most state-of-the-art methods.
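To make the coupled-branch idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: it pairs a ResNet-18 branch with a small Transformer branch over the same frames, uses a generic cross-attention block as a stand-in for the complementary content attention (CCA), and replaces the hierarchical temporal aggregation (HTA) and gated attention (GA) with simple frame averaging. All module names, layer sizes, and the ResNet-18 / patch-embedding choices are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of coupling a CNN branch and a Transformer
# branch for video Re-ID, assuming PyTorch and torchvision are available.
import torch
import torch.nn as nn
import torchvision


class CCABlock(nn.Module):
    """Cross-attends one branch's features to the other's (spatial complementarity)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, context_feats):
        # query_feats, context_feats: (B, N, dim) token sequences from the two branches
        out, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + out)


class DCCTSketch(nn.Module):
    """Couples a CNN branch and a Transformer branch, then pools over frames."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # CNN branch: ResNet-18 trunk producing one 512-d global feature per frame.
        resnet = torchvision.models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])
        self.cnn_proj = nn.Linear(512, dim)
        # Transformer branch: patch embedding + a small Transformer encoder.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        encoder_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Complementary content attention applied in both directions.
        self.cca_cnn = CCABlock(dim)
        self.cca_trf = CCABlock(dim)

    def forward(self, clip):
        # clip: (B, T, 3, H, W) video frames
        b, t, c, h, w = clip.shape
        frames = clip.reshape(b * t, c, h, w)

        cnn_feat = self.cnn_proj(self.cnn(frames).flatten(1)).unsqueeze(1)  # (B*T, 1, dim)
        tokens = self.patch_embed(frames).flatten(2).transpose(1, 2)        # (B*T, N, dim)
        trf_feat = self.transformer(tokens)                                  # (B*T, N, dim)

        # Each branch is guided by the other branch's content.
        cnn_feat = self.cca_cnn(cnn_feat, trf_feat).squeeze(1)               # (B*T, dim)
        trf_feat = self.cca_trf(trf_feat.mean(1, keepdim=True),
                                cnn_feat.unsqueeze(1)).squeeze(1)            # (B*T, dim)

        # Plain temporal averaging stands in for the paper's hierarchical aggregation.
        video_feat = torch.cat([cnn_feat, trf_feat], dim=-1).reshape(b, t, -1).mean(1)
        return video_feat  # (B, 2*dim) video-level representation


if __name__ == "__main__":
    model = DCCTSketch()
    clip = torch.randn(2, 4, 3, 256, 128)  # two clips of four 256x128 frames
    print(model(clip).shape)               # torch.Size([2, 512])
```

In this sketch the two branch features are simply concatenated before temporal pooling; the paper's framework instead exchanges temporal information between branches through GA and trains the backbones with self-distillation, which this toy example omits.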
