[1] A. Souri, A. Hussien, M. Hoseyninezhad, and M. Norouzi, A systematic review of IoT communication strategies for an efficient smart environment, Trans. Emerg. Telecommun. Technol., vol. 33, no. 3, p. e3736, 2022.
[2] P. Hosseinioun, M. Kheirabadi, S. R. K. Tabbakh, and R. Ghaemi, A new energy-aware tasks scheduling approach in fog computing using hybrid meta-heuristic algorithm, J. Parallel Distrib. Comput., vol. 143, pp. 88–96, 2020.
[3] Cisco, Cisco Annual Internet Report (2018–2023) White Paper. San Jose, CA, USA: Cisco, 2020.
[4] M. Asad, A. Moustafa, and T. Ito, Federated learning versus classical machine learning: A convergence comparison, arXiv preprint arXiv:2107.10976, 2021.
[5] P. Li, J. Li, Z. Huang, T. Li, C. Z. Gao, S. M. Yiu, and K. Chen, Multi-key privacy-preserving deep learning in cloud computing, Future Gener. Comput. Syst., vol. 74, pp. 76–85, 2017.
[6] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. Agüera y Arcas, Communication-efficient learning of deep networks from decentralized data, in Proc. 20th Int. Conf. Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 2017, pp. 1273–1282.
[7] N. A. Jalali and H. Chen, Federated learning security and privacy-preserving algorithm and experiments research under Internet of Things critical infrastructure, Tsinghua Science and Technology, vol. 29, no. 2, pp. 400–414, 2024.
[8] W. Zhang, Z. Li, and X. Chen, Quality-aware user recruitment based on federated learning in mobile crowd sensing, Tsinghua Science and Technology, vol. 26, no. 6, pp. 869–877, 2021.
[9] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, Federated learning: Strategies for improving communication efficiency, arXiv preprint arXiv:1610.05492, 2017.
[10] D. Chen, Y. C. Liu, B. Kim, J. Xie, C. S. Hong, and Z. Han, Edge computing resources reservation in vehicular networks: A meta-learning approach, IEEE Trans. Veh. Technol., vol. 69, no. 5, pp. 5634–5646, 2020.
[11] C. Zhao, X. Sun, S. Yang, X. Ren, P. Zhao, and J. McCann, Exploration across small silos: Federated few-shot learning on network edge, IEEE Netw., vol. 36, no. 1, pp. 159–165, 2022.
[12] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, Matching networks for one shot learning, in Proc. 30th Int. Conf. Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 3637–3645.
[13] Y. Wang, Q. Yao, J. T. Kwok, and L. M. Ni, Generalizing from a few examples: A survey on few-shot learning, ACM Comput. Surveys, vol. 53, no. 3, p. 63, 2020.
[14] C. Zhao, X. Sun, S. Yang, X. Ren, P. Zhao, and J. McCann, Exploration across small silos: Federated few-shot learning on network edge, IEEE Netw., vol. 36, no. 1, pp. 159–165, 2022.
[15] F. Chen, M. Luo, Z. Dong, Z. Li, and X. He, Federated meta-learning with fast convergence and efficient communication, arXiv preprint arXiv:1802.07876, 2019.
[16] A. Fallah, A. Mokhtari, and A. Ozdaglar, Personalized federated learning: A meta-learning approach, arXiv preprint arXiv:2002.07948, 2020.
[17] Q. Yang, Y. Liu, T. Chen, and Y. Tong, Federated machine learning: Concept and applications, ACM Trans. Intell. Syst. Technol., vol. 10, no. 2, p. 12, 2019.
[18] S. Wang, X. Fu, K. Ding, C. Chen, H. Chen, and J. Li, Federated few-shot learning, arXiv preprint arXiv:2306.10234, 2023.
[19] S. Ji, Y. Tan, T. Saravirta, Z. Yang, Y. Liu, L. Vasankari, S. Pan, G. Long, and A. Walid, Emerging trends in federated learning: From model fusion to federated X learning, arXiv preprint arXiv:2102.12920, 2024.
[20] S. Reddi, Z. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Konečný, S. Kumar, and H. B. McMahan, Adaptive federated optimization, arXiv preprint arXiv:2003.00295, 2021.
[21] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, Federated optimization in heterogeneous networks, arXiv preprint arXiv:1812.06127, 2020.
[22] J. Chen, W. Xu, S. Guo, J. Wang, J. Zhang, and H. Wang, FedTune: A deep dive into efficient federated fine-tuning with pre-trained transformers, arXiv preprint arXiv:2211.08025, 2022.
[23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, Attention is all you need, in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 6000–6010.
[24] L. Qu, Y. Zhou, P. P. Liang, Y. Xia, F. Wang, E. Adeli, L. Fei-Fei, and D. Rubin, Rethinking architecture design for tackling data heterogeneity in federated learning, in Proc. 2022 IEEE/CVF Conf. Computer Vision and Pattern Recognition, New Orleans, LA, USA, 2022, pp. 10051–10061.
[25] G. Koch, R. Zemel, and R. Salakhutdinov, Siamese neural networks for one-shot image recognition, https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf, 2023.
[26] J. Snell, K. Swersky, and R. Zemel, Prototypical networks for few-shot learning, in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 4080–4090.
[27] S. X. Hu, D. Li, J. Stühmer, M. Kim, and T. M. Hospedales, Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference, in Proc. 2022 IEEE/CVF Conf. Computer Vision and Pattern Recognition, New Orleans, LA, USA, 2022, pp. 9058–9067.
[28] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., An image is worth 16x16 words: Transformers for image recognition at scale, arXiv preprint arXiv:2010.11929, 2021.
[29] C. Finn, P. Abbeel, and S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, in Proc. 34th Int. Conf. Machine Learning, Sydney, Australia, 2017, pp. 1126–1135.
[30] Z. Li, F. Zhou, F. Chen, and H. Li, Meta-SGD: Learning to learn quickly for few-shot learning, arXiv preprint arXiv:1707.09835, 2017.
[31] A. Shysheya, J. Bronskill, M. Patacchiola, S. Nowozin, and R. E. Turner, FiT: Parameter efficient few-shot transfer learning for personalized and federated image classification, arXiv preprint arXiv:2206.08671, 2023.
[32] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., ImageNet large scale visual recognition challenge, Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015.
[33] E. Perez, F. Strub, H. de Vries, V. Dumoulin, and A. Courville, FiLM: Visual reasoning with a general conditioning layer, in Proc. 32nd AAAI Conf. Artificial Intelligence, New Orleans, LA, USA, 2018, pp. 3942–3951.
[34] X. Sun, S. Yang, and C. Zhao, Lightweight industrial image classifier based on federated few-shot learning, IEEE Trans. Ind. Informat., vol. 19, no. 6, pp. 7367–7376, 2023.
[35] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, Learning to compare: Relation network for few-shot learning, in Proc. 2018 IEEE/CVF Conf. Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 2018, pp. 1199–1208.
[36] X. He, C. Tan, B. Liu, L. Si, W. Yao, L. Zhao, D. Liu, Q. Zhangli, Q. Chang, K. Li, et al., Dealing with heterogeneous 3D MR knee images: A federated few-shot learning method with dual knowledge distillation, arXiv preprint arXiv:2303.14357, 2023.
[37] W. Huang, M. Ye, B. Du, and X. Gao, Few-shot model agnostic federated learning, in Proc. 30th ACM Int. Conf. Multimedia, Lisboa, Portugal, 2022, pp. 7309–7316.
[38] H. Shi, V. Radu, and P. Yang, Lightweight workloads in heterogeneous federated learning via few-shot learning, in Proc. 4th Int. Workshop on Distributed Machine Learning, Paris, France, 2023, pp. 21–26.
[39] K. Ding, J. Wang, J. Li, K. Shu, C. Liu, and H. Liu, Graph prototypical networks for few-shot learning on attributed networks, in Proc. 29th ACM Int. Conf. Information & Knowledge Management, Virtual Event, 2020, pp. 295–304.
[40] K. Ding, Q. Zhou, H. Tong, and H. Liu, Few-shot network anomaly detection via cross-network meta-learning, in Proc. Web Conf. 2021, Ljubljana, Slovenia, 2021, pp. 2448–2456.
[41] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
[42] A. Krizhevsky, Learning multiple layers of features from tiny images, https://api.semanticscholar.org/CorpusID:18268744, 2023.
[43] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum, Human-level concept learning through probabilistic program induction, Science, vol. 350, no. 6266, pp. 1332–1338, 2015.
[44] Z. Zhu, J. Hong, and J. Zhou, Data-free knowledge distillation for heterogeneous federated learning, in Proc. 38th Int. Conf. Machine Learning, Virtual Event, 2021, pp. 12878–12889.