Balancing Transferability and Discriminability for Unsupervised Domain Adaptation.

Abstract

Unsupervised domain adaptation (UDA) aims to leverage a sufficiently labeled source domain to classify or represent a fully unlabeled target domain that follows a different distribution. Existing approaches generally learn a domain-invariant representation for feature transferability and add class-discriminability constraints for feature discriminability. However, transferability and discriminability are usually not synchronized, and there are even contradictions between them; this issue is often ignored and thus reduces recognition accuracy. In this brief, we propose a deep multirepresentations adversarial learning (DMAL) method to explore and mitigate the inconsistency between feature transferability and discriminability in the UDA task. Specifically, we consider feature representation learning at both the domain level and the class level and explore four types of feature representations: domain-invariant, domain-specific, class-invariant, and class-specific. The first two types characterize the transferability of features, and the last two characterize their discriminability. We develop an adversarial learning strategy among the four representations so that feature transferability and discriminability become gradually synchronized. A series of experiments verifies that the proposed DMAL achieves comparable and promising results on six UDA datasets.
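The abstract does not include implementation details, so the following is only a minimal sketch of the broader domain-adversarial idea the work builds on (DANN-style gradient reversal), not the authors' DMAL method with its four representation types. All module names, dimensions, and hyperparameters here are assumptions made for illustration.

```python
# Minimal sketch of a domain-adversarial UDA setup in PyTorch (an assumption,
# not the paper's DMAL implementation). A shared encoder is trained to be
# transferable (domain-invariant) via an adversarial domain head, while a
# classifier head on labeled source data enforces discriminability.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class UDAModel(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=256, num_classes=31):
        super().__init__()
        # Shared encoder: intended to yield a domain-invariant representation.
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Class head: enforces discriminability on labeled source features.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Domain head: trained adversarially via gradient reversal so the
        # encoder is pushed to hide domain-specific information.
        self.domain_disc = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x, lambd=1.0):
        f = self.encoder(x)
        class_logits = self.classifier(f)
        domain_logits = self.domain_disc(grad_reverse(f, lambd))
        return class_logits, domain_logits

# One illustrative training step with random tensors standing in for data.
model = UDAModel()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

xs, ys = torch.randn(8, 2048), torch.randint(0, 31, (8,))  # labeled source
xt = torch.randn(8, 2048)                                   # unlabeled target

cs, ds = model(xs)   # source: class + domain predictions
_, dt = model(xt)    # target: domain predictions only
loss = (ce(cs, ys)
        + ce(ds, torch.zeros(8, dtype=torch.long))
        + ce(dt, torch.ones(8, dtype=torch.long)))
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch only domain-level transferability and source-side discriminability interact; the paper's contribution, by contrast, is to model domain-specific, class-invariant, and class-specific representations as well and to synchronize transferability and discriminability adversarially across all four.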
