[3]
A. Hilmkil, S. Callh, M. Barbieri, L. R. Sütfeld, E. L. Zec, and O. Mogren, Scaling federated learning for fine-tuning of large language models, in Proc. 26th Int. Conf. Applications of Natural Language to Information Systems, Saarbrücken, Germany, 2021, pp. 15–23.
[4]
J. H. Ro, T. Breiner, L. McConnaughey, M. Chen, A. T. Suresh, S. Kumar, and R. Mathews, Scaling language model size in cross-device federated learning, in Proc. 1st Workshop on Federated Learning for Natural Language Processing (FL4NLP 2022), Dublin, Ireland, 2022, pp. 6–20.
[5]
C. Chen, X. Feng, J. Zhou, J. Yin, and X. Zheng, Federated large language model: A position paper, arXiv preprint arXiv:2307.08925, 2023.
[6]
Y. Wang, X. Zhang, M. Li, T. Lan, H. Chen, H. Xiong, X. Cheng, and D. Yu, Theoretical convergence guaranteed resource-adaptive federated learning with mixed heterogeneity, in Proc. 29th ACM SIGKDD Conf. Knowledge Discovery and Data Mining, Long Beach, CA, USA, 2023, pp. 2444–2455.
[7]
E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, How to backdoor federated learning, in Proc. 23rd Int. Conf. Artificial Intelligence and Statistics, Palermo, Italy, 2020, pp. 2938–2948.
[8]
H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J. Y. Sohn, K. Lee, and D. Papailiopoulos, Attack of the tails: Yes, you really can backdoor federated learning, in Proc. 34th Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2020, p. 1348.
[9]
Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan, Can you really backdoor federated learning? arXiv preprint arXiv:1911.07963, 2019.
[11]
A. Nguyen and A. Tran, WaNet - Imperceptible warping-based backdoor attack, arXiv preprint arXiv:2102.10369, 2021.
[12]
Y. Yu, Y. Wang, W. Yang, S. Lu, Y.-P. Tan, and A. C. Kot, Backdoor attacks against deep image compression via adaptive frequency trigger, in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition (CVPR 2023), Vancouver, Canada, 2023, pp. 12250–12259.
[13]
X. Chen, C. Liu, B. Li, K. Lu, and D. Song, Targeted backdoor attacks on deep learning systems using data poisoning, arXiv preprint arXiv:1712.05526, 2017.
[15]
Z. Zhang, A. Panda, L. Song, Y. Yang, M. Mahoney, P. Mittal, R. Kannan, and J. Gonzalez, Neurotoxin: Durable backdoors in federated learning, in Proc. 39th Int. Conf. Machine Learning, Baltimore, MD, USA, 2022, pp. 26429–26446.
[16]
P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, Machine learning with adversaries: Byzantine tolerant gradient descent, in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, CA, USA, 2017, pp. 118–128.
[17]
D. Yin, Y. Chen, R. Kannan, and P. L. Bartlett, Byzantine-robust distributed learning: Towards optimal statistical rates, in Proc. 35th Int. Conf. Machine Learning, Stockholmsmässan, Sweden, 2018, pp. 5650–5659.
[18]
A. Krizhevsky, Learning multiple layers of features from tiny images, Master's thesis, University of Toronto, Toronto, Canada, 2009.
[19]
K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition, in Proc. 2016 IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 2016, pp. 770–778.
[20]
Y. Liu, S. Ma, Y. Aafer, W. C. Lee, J. Zhai, W. Wang, and X. Zhang, Trojaning attack on neural networks, presented at 25th Annual Network and Distributed System Security Symp., San Diego, CA, USA, 2018.
[22]
Y. Liu, T. Zou, Y. Kang, W. Liu, Y. He, Z. Yi, and Q. Yang, Batch label inference and replacement attacks in black-boxed vertical federated learning, arXiv preprint arXiv:2112.05409, 2021.
[23]
Y. Li, Y. Li, B. Wu, L. Li, R. He, and S. Lyu, Invisible backdoor attack with sample-specific triggers, in Proc. 2021 IEEE/CVF Int. Conf. Computer Vision (ICCV), Montreal, Canada, 2021, pp. 16443–16452.
[25]
A. Saha, A. Subramanya, and H. Pirsiavash, Hidden trigger backdoor attacks, in Proc. AAAI Conf. Artificial Intelligence, vol. 34, no. 7, pp. 11957–11965, 2020.
[26]
T. D. Nguyen, P. Rieger, M. Miettinen, and A. R. Sadeghi, Poisoning attacks on federated learning-based IoT intrusion detection system, in Proc. Workshop on Decentralized IoT Systems and Security, San Diego, CA, USA, 2020, pp. 1–7.
[27]
K. Y. Yoo and N. Kwak, Backdoor attacks in federated learning by rare embeddings and gradient ensembling, in Proc. 2022 Conf. Empirical Methods in Natural Language Processing, Abu Dhabi, United Arab Emirates, 2022, pp. 72–88.
[28]
A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. B. Calo, Analyzing federated learning through an adversarial lens, in Proc. 36th Int. Conf. Machine Learning, Long Beach, CA, USA, 2019, pp. 634–643.
[29]
J. Jiang, X. Liu, and C. Fan, Low-parameter federated learning with large language models, arXiv preprint arXiv:2307.13896, 2023.
[31]
Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, Federated learning with non-IID data, arXiv preprint arXiv:1806.00582, 2018.
[32]
F. Seide, H. Fu, J. Droppo, G. Li, and D. Yu, 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs, in Proc. Interspeech 2014, Singapore, 2014, pp. 1058–1062.
[33]
S. U. Stich, J. B. Cordonnier, and M. Jaggi, Sparsified SGD with memory, in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montréal, Canada, 2018, pp. 4452–4463.
[37]
A. Mathew, P. Amudha, and S. Sivakumari, Deep learning techniques: An overview, in Proc. Int. Conf. Advanced Machine Learning Technologies and Applications, Singapore, 2021, pp. 599–608.
[38]
N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, Parameter-efficient transfer learning for NLP, in Proc. 36th Int. Conf. Machine Learning, Long Beach, CA, USA, 2019, pp. 2790–2799.
[39]
E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, LoRA: Low-rank adaptation of large language models, arXiv preprint arXiv:2106.09685, 2021.
[40]
A. Jeddi, M. J. Shafiee, and A. Wong, A simple fine-tuning is all you need: Towards robust deep learning via adversarial fine-tuning, arXiv preprint arXiv:2012.13628, 2020.
[41]
R. He, L. Liu, H. Ye, Q. Tan, B. Ding, L. Cheng, J.-W. Low, L. Bing, and L. Si, On the effectiveness of adapter-based tuning for pretrained language model adaptation, arXiv preprint arXiv:2106.03164, 2021.
[42]
Y. L. Sung, J. Cho, and M. Bansal, VL-ADAPTER: Parameter-efficient transfer learning for vision-and-language tasks, in Proc. 2022 IEEE/CVF Conf. Computer Vision and Pattern Recognition, New Orleans, LA, USA, 2022, pp. 5217–5227.
[43]
X. Wang, L. Aitchison, and M. Rudolph, LoRA ensembles for large language model fine-tuning, arXiv preprint arXiv:2310.00035, 2023.
[44]
J. Kaddour, J. Harris, M. Mozes, H. Bradley, R. Raileanu, and R. McHardy, Challenges and applications of large language models, arXiv preprint arXiv:2307.10169, 2023.
[45]
L. Truong, C. Jones, B. Hutchinson, A. August, B. Praggastis, R. Jasper, N. Nichols, and A. Tuor, Systematic evaluation of backdoor data poisoning attacks on image classifiers, in Proc. 2020 IEEE/CVF Conf. Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 2020, pp. 3422–3431.
[46]
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., PyTorch: An imperative style, high-performance deep learning library, in Proc. 33rd Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2019, p. 721.
[47]
S. Ryoo, C. I. Rodrigues, S. S. Baghsorkhi, S. S. Stone, D. B. Kirk, and W. W. Hwu, Optimization principles and application performance evaluation of a multithreaded GPU using CUDA, in Proc. 13th ACM SIGPLAN Symp. Principles and Practice of Parallel Programming, 2008, pp. 73–82.
[48]
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al., PyTorch: An imperative style, high-performance deep learning library, in Proc. Advances in Neural Information Processing Systems 32 (NeurIPS 2019), Vancouver, Canada, 2019, pp. 8024–8035.
[49]
T. M. H. Hsu, H. Qi, and M. Brown, Measuring the effects of non-identical data distribution for federated visual classification, arXiv preprint arXiv:1909.06335, 2019.
[50]
Y. Li, M. Ya, Y. Bai, Y. Jiang, and S.-T. Xia, BackdoorBox: A Python toolbox for backdoor learning, arXiv preprint arXiv:2302.01762, 2023.