Unsupervised Domain Adaptation with Asymmetrical Margin Disparity Loss and Outlier Sample Extraction

Abstract

Unsupervised domain adaptation (UDA) trains models on labeled data from a source domain and transfers the learned knowledge to target domains that have few or no labels. Many prior measurement-based works have made considerable progress, but their ability to distinguish target samples with similar features remains limited; they do not adequately handle confusing target-domain samples that resemble the source domain; and they ignore the negative transfer caused by outlier samples in the source domain. We address these issues and propose a UDA method with an asymmetrical margin disparity loss and outlier sample extraction, called AMD-Net with OSE. We propose an Asymmetrical Margin Disparity Discrepancy (AMD) method and a training strategy based on a sample selection mechanism, which improve the network's feature extraction ability and help it escape local optima. First, in the AMD method, we design a multi-label entropy metric to evaluate the margin disparity loss of the confusing samples in the target domain. This asymmetric margin disparity loss applies different entropy measures to the two domains so as to expose their differences as fully as possible and thereby find their common features. Second, a sample selection mechanism is designed to determine which target-domain samples are confusable. We define a certainty measure for target samples and adopt a progressive learning scheme: the one-hot margin disparity loss is applied to the majority of target samples that have low uncertainty and are easy to distinguish, while the multi-label margin calculation is used only for uncertain target samples whose certainty falls below a threshold, so that the network can escape local optima as far as possible. Finally, we propose an outlier sample extraction (OSE) algorithm based on a weighted cosine similarity distance for the source domain to reduce the negative transfer caused by source-domain outlier samples. Extensive experiments on four datasets (Office-31, Office-Home, VisDA-2017, and DomainNet) demonstrate that our method works well in various UDA settings and outperforms state-of-the-art methods.
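
The abstract describes switching between a one-hot margin disparity term and a multi-label entropy-based term depending on per-sample certainty. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, assuming a max-softmax certainty proxy, a threshold, and a margin factor gamma; the paper's exact formulation is not given here, so all names and choices are assumptions.

```python
# Hypothetical sketch of certainty-based selection between a one-hot margin
# disparity term and a multi-label entropy-based term for target samples.
# certainty_threshold, gamma, and the certainty proxy are illustrative
# assumptions, not the authors' exact formulation.
import torch
import torch.nn.functional as F


def target_margin_disparity_loss(main_logits, aux_logits,
                                 certainty_threshold=0.8, gamma=4.0):
    """Combine one-hot and multi-label margin terms based on per-sample certainty.

    main_logits: main classifier outputs on target samples, shape (N, C)
    aux_logits:  auxiliary (adversarial) classifier outputs, shape (N, C)
    """
    main_prob = F.softmax(main_logits, dim=1)

    # Certainty proxy: maximum softmax probability of the main classifier.
    certainty, pseudo_label = main_prob.max(dim=1)
    confident = certainty >= certainty_threshold

    losses = torch.zeros(main_logits.size(0), device=main_logits.device)

    # Confident (easy) target samples: one-hot margin disparity term, pushing
    # the auxiliary classifier toward the main classifier's pseudo-label.
    if confident.any():
        losses[confident] = gamma * F.cross_entropy(
            aux_logits[confident], pseudo_label[confident], reduction="none"
        )

    # Uncertain (confusing) target samples: multi-label entropy-style term that
    # measures disagreement over the full soft distribution instead of a
    # single pseudo-label.
    uncertain = ~confident
    if uncertain.any():
        aux_log_prob = F.log_softmax(aux_logits[uncertain], dim=1)
        losses[uncertain] = -(main_prob[uncertain] * aux_log_prob).sum(dim=1)

    return losses.mean()
```

In this reading, most target samples take the cheaper one-hot path, and only the low-certainty minority pays the multi-label cost, which matches the progressive scheme the abstract outlines.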
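For the OSE step, the abstract only states that source-domain outliers are identified via a weighted cosine similarity distance. The following sketch shows one plausible interpretation, assuming per-class centroids, a density-based weight, and a quantile cutoff; these specifics are assumptions for illustration, not the published algorithm.

```python
# Hypothetical sketch of outlier sample extraction (OSE) for the source domain
# using a weighted cosine similarity distance to per-class centroids.
import torch
import torch.nn.functional as F


def extract_source_outliers(source_features, source_labels, outlier_quantile=0.05):
    """Flag source samples far (in weighted cosine distance) from their class centroid.

    source_features: backbone feature vectors, shape (N, D)
    source_labels:   ground-truth class indices, shape (N,)
    Returns a boolean mask of shape (N,) marking suspected outliers.
    """
    feats = F.normalize(source_features, dim=1)
    num_classes = int(source_labels.max().item()) + 1
    distances = torch.zeros(feats.size(0), device=feats.device)

    for c in range(num_classes):
        idx = (source_labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        class_feats = feats[idx]
        centroid = F.normalize(class_feats.mean(dim=0, keepdim=True), dim=1)

        # Cosine similarity to the class centroid, weighted by an (assumed)
        # density term so that samples in sparse neighbourhoods look farther away.
        sim_to_centroid = (class_feats @ centroid.t()).squeeze(1)
        density_weight = (class_feats @ class_feats.t()).mean(dim=1)
        distances[idx] = 1.0 - density_weight * sim_to_centroid

    # Treat the most distant fraction of source samples as outliers; these can
    # be down-weighted or dropped during adaptation to limit negative transfer.
    threshold = torch.quantile(distances, 1.0 - outlier_quantile)
    return distances >= threshold
```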
