Federated Learning (FL) enables clients to securely share gradients computed on their local data with the server, eliminating the need to expose their sensitive local datasets directly. In traditional FL, however, the server can exploit its privileged position during model aggregation to infer sensitive information from the clients' shared gradients. At the same time, malicious clients may submit forged or poisoned gradients during training; such behavior not only compromises the integrity of the global model but also degrades the usability and reliability of the trained models. To address these privacy and security threats, this work proposes a Blockchain-based Privacy-preserving and Secure Federated Learning (BPS-FL) scheme, which employs threshold homomorphic encryption to protect the local gradients of clients. To resist malicious gradient attacks, we design a Byzantine-robust aggregation protocol for BPS-FL that realizes secure model aggregation at the ciphertext level. Moreover, we use a blockchain as the underlying distributed architecture to record the entire learning process, ensuring the immutability and traceability of the data. Our extensive security analysis and numerical evaluation demonstrate that BPS-FL satisfies the stated privacy requirements and effectively defends against poisoning attacks.
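To illustrate the core idea of aggregating encrypted gradients without the server ever seeing plaintexts, the following is a minimal sketch using a toy Paillier-style additively homomorphic scheme (not the threshold variant used by BPS-FL, and with insecurely small keys chosen only for readability). All function names here are illustrative, not the paper's API; gradients are assumed to be quantized to nonnegative integers before encryption.

```python
import math
import random

def _rand_prime(bits):
    # Toy prime generation via Fermat tests; real deployments use >=1024-bit
    # primes and proper primality testing.
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if all(pow(a, p - 1, p) == 1 for a in (2, 3, 5, 7, 11)):
            return p

def keygen(bits=64):
    # Public key: n. Secret key: (n, lam, mu), with g fixed to n + 1.
    p, q = _rand_prime(bits), _rand_prime(bits)
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)  # invertible with overwhelming probability
    return (n,), (n, lam, mu)

def encrypt(pk, m):
    # Enc(m) = (n+1)^m * r^n mod n^2
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)  # assumed coprime to n (overwhelmingly likely)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    # Dec(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n
    n, lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_ct(pk, c1, c2):
    # Homomorphic addition: Enc(m1) * Enc(m2) mod n^2 = Enc(m1 + m2)
    (n,) = pk
    return c1 * c2 % (n * n)
```

In this sketch the server would receive only ciphertexts from each client and combine them with `add_ct`; decryption of the aggregate requires the secret key, which in a threshold scheme such as the one BPS-FL adopts would be split among multiple parties so that no single entity can decrypt individual gradients.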