Regular Paper

Improving Open Set Domain Adaptation Using Image-to-Image Translation and Instance-Weighted Adversarial Learning

State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
Google DeepMind, Mountain View, CA 94043, U.S.A.

Abstract

We propose to address the open set domain adaptation problem by aligning images at both the pixel space and the feature space. Our approach, called Open Set Translation and Adaptation Network (OSTAN), consists of two main components: translation and adaptation. The translation is a cycle-consistent generative adversarial network, which translates any source image to the “style” of a target domain to eliminate domain discrepancy in the pixel space. The adaptation is an instance-weighted adversarial network, which projects both (labeled) translated source images and (unlabeled) target images into a domain-invariant feature space to learn a prior probability for each target image. The learned probability is applied as a weight to the unknown classifier to facilitate the identification of the unknown class. The proposed OSTAN model significantly outperforms the state-of-the-art open set domain adaptation methods on multiple public datasets. Our experiments also demonstrate that both the image-to-image translation and the instance-weighting framework can further improve the decision boundaries for both known and unknown classes.
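The instance-weighting idea described above — learning a prior probability that a target image belongs to a known class and using it to re-weight the unknown classifier — can be illustrated with a small sketch. This is a hypothetical NumPy illustration under assumed names (`softmax`, `weighted_prediction`, `known_prior`) and an assumed re-weighting rule, not the authors' OSTAN implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def weighted_prediction(logits, known_prior):
    """Re-weight the unknown class by a learned instance prior.

    logits      : length K+1 array; the last entry is the 'unknown' class.
    known_prior : probability in [0, 1] that this instance belongs to a
                  known class (e.g., produced by a domain discriminator).
    """
    p = softmax(logits)
    # Down-weight the unknown class for instances the discriminator
    # believes are known; leave it strong otherwise.
    p[-1] *= (1.0 - known_prior)
    return p / p.sum()  # renormalize to a valid distribution

# An instance judged very likely to belong to a known class:
probs = weighted_prediction(np.array([2.0, 0.5, 1.8]), known_prior=0.9)
```

With a high `known_prior`, the unknown-class probability shrinks relative to the plain softmax, which is the qualitative effect the abstract attributes to instance weighting.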

Electronic Supplementary Material

JCST-2010-11073-Highlights.pdf (587.6 KB)

Journal of Computer Science and Technology
Pages 644-658
Cite this article:
Zhang H-J, Li A, Guo J, et al. Improving Open Set Domain Adaptation Using Image-to-Image Translation and Instance-Weighted Adversarial Learning. Journal of Computer Science and Technology, 2023, 38(3): 644-658. https://doi.org/10.1007/s11390-021-1073-x


Received: 15 October 2020
Revised: 02 May 2021
Accepted: 30 July 2021
Published: 30 May 2023
© Institute of Computing Technology, Chinese Academy of Sciences 2023