Regular Paper Issue
Source-Free Unsupervised Domain Adaptation with Sample Transport Learning
Journal of Computer Science and Technology 2021, 36(3): 606-616
Published: 05 May 2021
Abstract

Unsupervised domain adaptation (UDA) has achieved great success in cross-domain machine learning applications. It typically benefits the model training of an unlabeled target domain by leveraging knowledge from a labeled source domain. For this purpose, existing work widely adopts the minimization of the marginal and conditional distribution divergences between the source and target domains. Nevertheless, for the sake of privacy preservation, the source domain often provides only a trained predictor (e.g., a classifier) rather than its training data. This renders the above approaches infeasible, because the marginal and conditional distributions of the source domain become incalculable. To this end, this article proposes a source-free UDA method that jointly models domain adaptation and sample transport learning, namely Sample Transport Domain Adaptation (STDA). Specifically, STDA constructs a pseudo source domain according to the aggregated decision boundaries that multiple source classifiers produce on the target domain. It then refines the pseudo source domain by augmenting it with transported high-confidence target samples, and consequently generates labels for the target domain. We train the STDA model by alternating between domain adaptation and sample transport, eventually adapting the source knowledge to the target domain and obtaining confident labels for it. Finally, evaluation results validate the effectiveness and superiority of the proposed method.
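The core construction the abstract describes — aggregating the decisions of several source classifiers on the target data, then transporting only the confident samples into a pseudo source domain — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name, the averaging rule, and the `threshold` parameter are assumptions for the sketch.

```python
import numpy as np

def build_pseudo_source(probs_per_clf, threshold=0.9):
    """Aggregate soft predictions of multiple source classifiers on the
    target samples, then keep only samples whose averaged prediction is
    confident enough to be transported into a pseudo source domain.

    probs_per_clf: list of (n_samples, n_classes) probability arrays,
                   one array per source classifier.
    Returns (indices, pseudo_labels) of the transported target samples.
    """
    avg = np.mean(probs_per_clf, axis=0)   # aggregate the decision boundaries
    conf = avg.max(axis=1)                 # confidence of each target sample
    labels = avg.argmax(axis=1)            # tentative pseudo label
    keep = np.where(conf >= threshold)[0]  # transport only confident samples
    return keep, labels[keep]
```

In the alternating scheme the abstract outlines, a step of domain adaptation on this pseudo source domain would follow, after which the classifiers' predictions (and hence the transported set) are refreshed.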

Regular Paper Issue
Discrimination-Aware Domain Adversarial Neural Network
Journal of Computer Science and Technology 2020, 35(2): 259-267
Published: 27 March 2020
Abstract

Domain adversarial neural network (DANN) methods have attracted much attention recently. In a DANN, a discriminator is trained to distinguish the domain labels of features produced by a generator, while the generator attempts to confuse it so that the feature distributions of the two domains are aligned. As a result, a DANN encourages alignment of the whole distributions between domains, while the inter-class discriminative information across domains is not considered. In this paper, we present a Discrimination-Aware Domain Adversarial Neural Network (DA2NN) method that introduces this discriminative information, i.e., the discrepancy among inter-class instances across domains, into deep domain adaptation. Via multiple discriminators, DA2NN considers both the alignment within the same class and the separation among different classes across domains during knowledge transfer. Empirical results show that DA2NN achieves better classification performance than DANN methods.
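The two objectives the abstract contrasts — pulling same-class features across domains together while pushing different classes apart — can be illustrated with a toy centroid-based discrepancy. This is only a stand-in for DA2NN's multi-discriminator adversarial objective; the function name and the squared-distance terms are assumptions, and in practice the target labels would be pseudo labels.

```python
import numpy as np

def class_aware_discrepancy(src_feats, src_labels, tgt_feats, tgt_labels):
    """Toy discrimination-aware discrepancy between two domains.

    Returns (align, sep): `align` sums squared distances between
    same-class centroids across domains (to be minimized), while `sep`
    sums squared distances between different-class centroids across
    domains (to be maximized).
    """
    classes = np.unique(src_labels)
    src_means = {c: src_feats[src_labels == c].mean(axis=0) for c in classes}
    tgt_means = {c: tgt_feats[tgt_labels == c].mean(axis=0) for c in classes}
    # within-class, cross-domain alignment term
    align = sum(np.sum((src_means[c] - tgt_means[c]) ** 2) for c in classes)
    # inter-class, cross-domain separation term
    sep = sum(np.sum((src_means[c] - tgt_means[k]) ** 2)
              for c in classes for k in classes if c != k)
    return align, sep
```

A plain DANN corresponds to optimizing only a domain-level alignment term; the discrimination-aware variant additionally keeps the separation term large so that classes do not collapse onto each other during alignment.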
