In current federated learning frameworks, a central server randomly selects a small number of clients to train local models at the beginning of each global iteration. Because clients' local data are not independent and identically distributed (non-IID), some local models are inconsistent with the global model. Existing studies employ model cleaning methods to detect these inconsistent local models by measuring the cosine similarity between each local model and the global model. Inconsistent local models are filtered out and excluded from aggregation into the next global model. However, model cleaning methods incur drawbacks such as large computation overhead and limited model updates. In this paper, we propose a data distribution optimization method, called federated distribution optimization (FedDO), that aims to overcome the shortcomings of model cleaning methods. FedDO calculates the gradient of the Jensen-Shannon divergence to decrease the discrepancy between the selected clients' data distribution and the overall data distribution. We evaluate our method with a multi-class logistic regression model, a multi-layer perceptron, and a convolutional neural network on a handwritten digit image dataset. Compared with model cleaning methods, FedDO improves the training accuracy by 1.8%, 2.6%, and 5.6%, respectively.
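To make the selection criterion concrete, the following is a minimal sketch, not the authors' FedDO implementation, of how the Jensen-Shannon divergence between a selected client subset's data distribution and the overall data distribution could be computed. It assumes per-client label histograms stand in for the data distributions; the number of clients, the selected subset, and the helper names are all illustrative.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical per-client label histograms (10 classes, e.g., handwritten digits).
rng = np.random.default_rng(0)
client_label_counts = rng.integers(0, 100, size=(20, 10))  # 20 clients, illustrative
overall = client_label_counts.sum(axis=0).astype(float)

# Distribution induced by one candidate subset of selected clients.
selected = [0, 3, 7, 12]
selected_dist = client_label_counts[selected].sum(axis=0).astype(float)

print("JS divergence of selection vs. overall:",
      js_divergence(selected_dist, overall))
```

A selection with lower divergence is closer to the overall data distribution; FedDO goes further by using the gradient of this divergence to guide the optimization, which the sketch above does not reproduce.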