[1]
K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770-778.
[3]
Q. Wang, B. Li, T. Xiao, J. Zhu, C. Li, D. F. Wong, and L. S. Chao, Learning deep transformer models for machine translation, in Proc. 57th Annu. Meeting of the Association for Computational Linguistics, Florence, Italy, 2019, pp. 1810-1822.
[5]
B. N. Oreshkin, P. Rodriguez, and A. Lacoste, TADAM: Task dependent adaptive metric for improved few-shot learning, in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2018, pp. 719-729.
[6]
M. Ren, R. Liao, E. Fetaya, and R. S. Zemel, Incremental few-shot learning with attention attractor networks, in Proc. 33rd Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2019, pp. 5276-5286.
[7]
B. Hariharan and R. B. Girshick, Low-shot visual recognition by shrinking and hallucinating features, in Proc. 2017 IEEE Int. Conf. Computer Vision, Venice, Italy, 2017, pp. 3037-3046.
[9]
Y. Lu, F. Yu, M. K. K. Reddy, and Y. Wang, Few-shot scene-adaptive anomaly detection, in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 125-141.
[10]
P. Wang, R. Yang, B. Cao, W. Xu, and Y. Lin, Dels-3D: Deep localization and segmentation with a 3D semantic map, in Proc. 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 5860-5869.
[11]
T. Hu, P. Yang, C. Zhang, G. Yu, Y. Mu, and C. G. M. Snoek, Attention-based multi-context guiding for few-shot semantic segmentation, in Proc. Thirty-Third AAAI Conf. Artificial Intelligence, Honolulu, HI, USA, 2019, pp. 8441-8448.
[12]
N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel, A simple neural attentive meta-learner, in Proc. 6th Int. Conf. Learning Representations, Vancouver, Canada, 2018, pp. 1-17.
[13]
L. Metz, N. Maheswaranathan, B. Cheung, and J. Sohl-Dickstein, Meta-learning update rules for unsupervised representation learning, in Proc. 7th Int. Conf. Learning Representations, New Orleans, LA, USA, 2019, pp. 1-27.
[14]
H. J. Ye, H. Hu, D. C. Zhan, and F. Sha, Few-shot learning via embedding adaptation with set-to-set functions, in Proc. 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 8805-8814.
[17]
C. Finn, P. Abbeel, and S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, in Proc. 34th Int. Conf. Machine Learning, Sydney, Australia, 2017, pp. 1126-1135.
[18]
A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell, Meta-learning with latent embedding optimization, arXiv preprint arXiv:1807.05960, 2019.
[19]
S. Ravi and H. Larochelle, Optimization as a model for few-shot learning, in Proc. 5th Int. Conf. Learning Representations, Toulon, France, 2017, pp. 1-11.
[20]
J. Oh, H. Yoo, C. Kim, and S. Y. Yun, BOIL: Towards representation change for few-shot learning, in Proc. 9th Int. Conf. Learning Representations, Vienna, Austria, 2021, pp. 1-24.
[21]
O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, Matching networks for one shot learning, in Proc. 30th Int. Conf. Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 3637-3645.
[22]
J. Snell, K. Swersky, and R. S. Zemel, Prototypical networks for few-shot learning, in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 4077-4087.
[23]
F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, Learning to compare: Relation network for few-shot learning, in Proc. 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 1199-1208.
[24]
W. Li, L. Wang, J. Xu, J. Huo, Y. Gao, and J. Luo, Revisiting local descriptor based image-to-class measure for few-shot learning, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 7253-7260.
[25]
S. W. Yoon, J. Seo, and J. Moon, TapNet: Neural network augmented with task-adaptive projection for few-shot learning, in Proc. 36th Int. Conf. Machine Learning, Long Beach, CA, USA, 2019, pp. 7115-7123.
[26]
J. Chen, L. M. Zhan, X. M. Wu, and F. L. Chung, Variational metric scaling for metric-based meta-learning, in Proc. Thirty-Fourth AAAI Conf. Artificial Intelligence, New York, NY, USA, 2020, pp. 3478-3485.
[27]
A. Antoniou, H. Edwards, and A. Storkey, How to train your MAML, in Proc. 7th Int. Conf. Learning Representations, New Orleans, LA, USA, 2019, pp. 1-11.
[28]
K. Lee, S. Maji, A. Ravichandran, and S. Soatto, Meta-learning with differentiable convex optimization, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 10649-10657.
[31]
J. Snell and R. Zemel, Bayesian few-shot classification with one-vs-each Pólya-gamma augmented Gaussian processes, in Proc. 9th Int. Conf. Learning Representations, Vienna, Austria, 2021, pp. 1-26.
[32]
L. Wang, Q. Cai, Z. Yang, and Z. Wang, On the global optimality of model-agnostic meta-learning, in Proc. 37th Int. Conf. Machine Learning, Virtual Event, 2020, pp. 9837-9846.
[33]
S. Sun and H. Gao, Meta-AdaM: A meta-learned adaptive optimizer with momentum for few-shot learning, in Proc. 37th Int. Conf. Neural Information Processing Systems, New Orleans, LA, USA, 2023, pp. 65441-65455.
[34]
B. Zhang, X. Li, S. Feng, Y. Ye, and R. Ye, MetaNODE: Prototype optimization as a neural ODE for few-shot learning, in Proc. Thirty-Sixth AAAI Conf. Artificial Intelligence, Virtual Event, 2022, pp. 9014-9021.
[37]
Y. Lee and S. Choi, Gradient-based meta-learning with learned layerwise metric and subspace, in Proc. 35th Int. Conf. Machine Learning, Stockholmsmässan, Sweden, 2018, pp. 2927-2936.
[38]
W. Li, L. Wang, J. Huo, Y. Shi, Y. Gao, and J. Luo, Asymmetric distribution measure for few-shot learning, in Proc. Twenty-Ninth Int. Joint Conf. Artificial Intelligence, Yokohama, Japan, 2020, pp. 2957-2963.
[39]
A. Li, W. Huang, X. Lan, J. Feng, Z. Li, and L. Wang, Boosting few-shot learning with adaptive margin loss, in Proc. 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 12573-12581.
[41]
M. Zhang, J. Zhang, Z. Lu, T. Xiang, M. Ding, and S. Huang, IEPT: Instance-level and episode-level pretext tasks for few-shot learning, in Proc. 9th Int. Conf. Learning Representations, Vienna, Austria, 2021, pp. 1-16.
[42]
N. Fei, Z. Lu, T. Xiang, and S. Huang, MELR: Meta-learning via modeling episode-level relationships for few-shot learning, in Proc. 9th Int. Conf. Learning Representations, Vienna, Austria, 2021, pp. 1-20.
[43]
S. Gidaris and N. Komodakis, Dynamic few-shot visual learning without forgetting, in Proc. 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 4367-4375.
[44]
H. Li, D. Eigen, S. Dodge, M. Zeiler, and X. Wang, Finding task-relevant features for few-shot learning by category traversal, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 1-10.
[45]
L. Qiao, Y. Shi, J. Li, Y. Tian, T. Huang, and Y. Wang, Transductive episodic-wise adaptive metric for few-shot learning, in Proc. 2019 IEEE/CVF Int. Conf. Computer Vision, Seoul, Korea (South), 2019, pp. 3602-3611.
[46]
L. Yang, L. Li, Z. Zhang, X. Zhou, E. Zhou, and Y. Liu, DPGN: Distribution propagation graph network for few-shot learning, in Proc. 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 13387-13396.
[47]
C. Zhang, Y. Cai, G. Lin, and C. Shen, DeepEMD: Few-shot image classification with differentiable earth mover’s distance and structured classifiers, in Proc. 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 12200-12210.
[48]
K. Cao, M. Brbic, and J. Leskovec, Concept learners for few-shot learning, in Proc. 9th Int. Conf. Learning Representations, Vienna, Austria, 2021, pp. 1-17.
[49]
S. Bartunov, J. W. Rae, S. Osindero, and T. P. Lillicrap, Meta-learning deep energy-based memory models, in Proc. 8th Int. Conf. Learning Representations, Addis Ababa, Ethiopia, 2020, pp. 1-23.
[52]
H. Cheng, S. Yang, J. T. Zhou, L. Guo, and B. Wen, Frequency guidance matters in few-shot learning, in Proc. 2023 IEEE/CVF Int. Conf. Computer Vision, Paris, France, 2023, pp. 11780-11790.
[53]
D. Guo, L. Tian, H. Zhao, M. Zhou, and H. Zha, Adaptive distribution calibration for few-shot learning with hierarchical optimal transport, in Proc. 36th Int. Conf. Neural Information Processing Systems, New Orleans, LA, USA, 2022, pp. 6996-7010.
[54]
Q. Sun, Y. Liu, T. S. Chua, and B. Schiele, Meta-transfer learning for few-shot learning, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 403-412.
[56]
L. Zhou, P. Cui, S. Yang, W. Zhu, and Q. Tian, Learning to learn image classifiers with visual analogy, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 11489-11498.
[57]
H. Yao, X. Wu, Z. Tao, Y. Li, B. Ding, R. Li, and Z. Li, Automated relational meta-learning, in Proc. 8th Int. Conf. Learning Representations, Addis Ababa, Ethiopia, 2020, pp. 1-19.
[58]
M. Chen, Y. Fang, X. Wang, H. Luo, Y. Geng, X. Zhang, C. Huang, W. Liu, and B. Wang, Diversity transfer network for few-shot learning, in Proc. Thirty-Fourth AAAI Conf. Artificial Intelligence, New York, NY, USA, 2020, pp. 10559-10566.
[59]
Z. Peng, Z. Li, J. Zhang, Y. Li, G. J. Qi, and J. Tang, Few-shot image recognition with knowledge transfer, in Proc. 2019 IEEE/CVF Int. Conf. Computer Vision, Seoul, Korea (South), 2019, pp. 441-449.
[60]
T. Chen, M. Xu, X. Hui, H. Wu, and L. Lin, Learning semantic-specific graph representation for multi-label image recognition, in Proc. 2019 IEEE/CVF Int. Conf. Computer Vision, Seoul, Korea (South), 2019, pp. 522-531.
[62]
H. Yao, C. Zhang, Y. Wei, M. Jiang, S. Wang, J. Huang, N. V. Chawla, and Z. Li, Graph few-shot learning via knowledge transfer, in Proc. Thirty-Fourth AAAI Conf. Artificial Intelligence, New York, NY, USA, 2020, pp. 6656-6663.
[63]
R. Chen, T. Chen, X. Hui, H. Wu, G. Li, and L. Lin, Knowledge graph transfer network for few-shot recognition, in Proc. Thirty-Fourth AAAI Conf. Artificial Intelligence, New York, NY, USA, 2020, pp. 10575-10582.
[64]
S. Qiao, C. Liu, W. Shen, and A. L. Yuille, Few-shot image recognition by predicting parameters from activations, in Proc. 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 7229-7238.
[65]
S. Liu, E. Johns, and A. J. Davison, End-to-end multi-task learning with attention, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 1871-1880.
[66]
Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola, Rethinking few-shot image classification: A good embedding is all you need? in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 266-282.
[67]
J. Liu, L. Song, and Y. Qin, Prototype rectification for few-shot learning, in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 741-756.
[68]
A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. P. Lillicrap, Meta-learning with memory-augmented neural networks, in Proc. 33rd Int. Conf. Machine Learning, New York, NY, USA, 2016, pp. 1842-1850.
[69]
K. Allen, E. Shelhamer, H. Shin, and J. Tenenbaum, Infinite mixture prototypes for few-shot learning, in Proc. 36th Int. Conf. Machine Learning, Long Beach, CA, USA, 2019, pp. 232-241.
[70]
S. Gidaris and N. Komodakis, Generating classification weights with GNN Denoising Autoencoders for few-shot learning, in Proc. 2019 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Long Beach, CA, USA, 2019, pp. 21-30.
[72]
R. Hou, H. Chang, B. Ma, S. Shan, and X. Chen, Cross attention network for few-shot classification, in Proc. 33rd Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2019, p. 360.
[73]
C. Simon, P. Koniusz, R. Nock, and M. Harandi, Adaptive subspaces for few-shot learning, in Proc. 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Seattle, WA, USA, 2020, pp. 4135-4144.
[74]
Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. J. Hwang, and Y. Yang, Learning to propagate labels: Transductive propagation network for few-shot learning, in Proc. 7th Int. Conf. Learning Representations, New Orleans, LA, USA, 2019, pp. 1-14.
[75]
G. S. Dhillon, P. Chaudhari, A. Ravichandran, and S. Soatto, A baseline for few-shot image classification, in Proc. 8th Int. Conf. Learning Representations, Addis Ababa, Ethiopia, 2020, pp. 1-20.
[76]
I. M. Ziko, J. Dolz, E. Granger, and I. B. Ayed, Laplacian regularized few-shot learning, in Proc. 37th Int. Conf. Machine Learning, Virtual Event, 2020, p. 1081.
[77]
M. Boudiaf, I. M. Ziko, J. Rony, J. Dolz, P. Piantanida, and I. B. Ayed, Transductive information maximization for few-shot learning, in Proc. 34th Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2020, p. 206.
[78]
J. Kim, H. Kim, and G. Kim, Model-agnostic boundary-adversarial sampling for test-time generalization in few-shot learning, in Proc. 16th European Conf. Computer Vision, Glasgow, UK, 2020, pp. 599-617.
[79]
W. Xu, Y. Xu, H. Wang, and Z. Tu, Attentional constellation nets for few-shot learning, in Proc. 9th Int. Conf. Learning Representations, Vienna, Austria, 2021, pp. 1-16.
[80]
A. Ravichandran, R. Bhotika, and S. Soatto, Few-shot learning with embedded class models and shot-free meta training, in Proc. 2019 IEEE/CVF Int. Conf. Computer Vision, Seoul, Korea (South), 2019, pp. 331-339.
[81]
D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf, Learning with local and global consistency, in Proc. 16th Int. Conf. Neural Information Processing Systems, Whistler, Canada, 2003, pp. 321-328.