Open Access Research Article
Structure preserved ordinal unsupervised domain adaptation
Electronic Research Archive 2024, 32(11): 6338-6363
Published: 22 November 2024

Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain. The main challenge of UDA stems from the domain shift between the source and target domains. In discrete classification problems, most existing UDA methods adopt a distribution alignment strategy that forces unstable instances to pass through low-density areas. However, the ordinal regression (OR) scenario has rarely been studied in UDA, and traditional UDA methods cannot handle OR well because they do not preserve the order relationships among data labels, as in human age estimation. To address this issue, we propose a structure-oriented adaptation strategy, namely structure preserved ordinal unsupervised domain adaptation (SPODA). More specifically, on one hand, global structure information is modeled and embedded into an auto-encoder framework via a low-rank transferred structure matrix. On the other hand, local structure information is preserved through a weighted pair-wise strategy in the latent space. Guided by both the local and global structure information, a well-performing latent space is generated, whose geometric structure is further exploited to obtain a more discriminative ordinal regressor. To further enhance generalization, a counterpart of SPODA with a deep architecture is also developed. Finally, extensive experiments indicate that SPODA is more effective than existing related domain adaptation methods in addressing the OR problem.
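The weighted pair-wise idea in the abstract can be illustrated with a minimal sketch: penalize latent representations whose pairwise distances disagree with the ordinal gaps between labels. The hinge form, the use of the absolute label gap as both weight and margin, and the function name `pairwise_ordinal_loss` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pairwise_ordinal_loss(Z, y):
    """Toy ordinal-structure loss over a latent space Z (n x d).

    Samples with equal labels are pulled together (squared distance),
    while samples with different labels must be separated by a margin
    that grows with their label gap (hinge term).
    """
    n = len(y)
    loss = 0.0
    for i in range(n):
        for j in range(n):
            gap = abs(int(y[i]) - int(y[j]))        # ordinal label gap
            dist = float(np.sum((Z[i] - Z[j]) ** 2))  # squared latent distance
            if gap == 0:
                loss += dist                         # same label: pull together
            else:
                loss += max(0.0, gap - dist)         # gap-scaled hinge margin
    return loss / (n * n)

# A latent space ordered like the labels incurs no penalty,
# while a collapsed latent space is penalized.
Z_ordered = np.array([[0.0], [1.0], [2.0]])
Z_collapsed = np.zeros((3, 1))
y = np.array([0, 1, 2])
print(pairwise_ordinal_loss(Z_ordered, y))    # 0.0
print(pairwise_ordinal_loss(Z_collapsed, y))  # positive
```

This captures only the local-structure term; SPODA combines it with the global low-rank structure modeling inside the auto-encoder.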

Regular Paper
Source-Free Unsupervised Domain Adaptation with Sample Transport Learning
Journal of Computer Science and Technology 2021, 36(3): 606-616
Published: 05 May 2021

Unsupervised domain adaptation (UDA) has achieved great success in cross-domain machine learning applications. It typically benefits model training on an unlabeled target domain by leveraging knowledge from a labeled source domain. For this purpose, minimizing the marginal and conditional distribution divergences between the source and target domains is widely adopted in existing work. Nevertheless, for the sake of privacy preservation, the source domain often provides only a trained predictor (e.g., a classifier) rather than training data. This renders the above approaches infeasible, because the marginal and conditional distributions of the source domain cannot be computed. To this end, this article proposes a source-free UDA method that jointly models domain adaptation and sample transport learning, namely Sample Transport Domain Adaptation (STDA). Specifically, STDA constructs a pseudo source domain according to the aggregated decision boundaries that multiple source classifiers produce on the target domain. It then refines the pseudo source domain by augmenting it with high-confidence target samples transported into it, and consequently generates labels for the target domain. We train the STDA model by alternating between domain adaptation and sample transport, eventually achieving knowledge adaptation to the target domain and obtaining confident labels for it. Finally, evaluation results validate the effectiveness and superiority of the proposed method.
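The pseudo-source construction step described above can be sketched as follows: aggregate the predictions of several source classifiers on the target data and keep only the samples on which the ensemble is confident. Averaging class probabilities, the `threshold` parameter, and the function name `build_pseudo_source` are illustrative assumptions; the paper's exact aggregation and transport rules may differ.

```python
import numpy as np

def build_pseudo_source(probs_list, X_target, threshold=0.8):
    """Select confident target samples as a pseudo source domain.

    probs_list : list of (n_samples, n_classes) arrays, one per
                 source classifier, giving class probabilities
                 predicted on the target domain.
    X_target   : (n_samples, n_features) target data.
    Returns the selected samples, their pseudo labels, and the mask.
    """
    avg = np.mean(probs_list, axis=0)       # aggregate the classifiers
    confidence = avg.max(axis=1)            # ensemble confidence per sample
    pseudo_labels = avg.argmax(axis=1)      # ensemble label per sample
    mask = confidence >= threshold          # keep only confident samples
    return X_target[mask], pseudo_labels[mask], mask

# Two toy source classifiers agree confidently on samples 0 and 2
# but are uncertain about sample 1, which is therefore excluded.
probs1 = np.array([[0.95, 0.05], [0.6, 0.4], [0.1, 0.9]])
probs2 = np.array([[0.90, 0.10], [0.5, 0.5], [0.2, 0.8]])
X = np.arange(3).reshape(3, 1)
Xs, ys, mask = build_pseudo_source([probs1, probs2], X)
print(mask.tolist())  # [True, False, True]
print(ys.tolist())    # [0, 1]
```

In STDA this selection would alternate with the domain adaptation step, so that the pseudo source domain is progressively refined as more target samples become confidently labeled.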
