Open Access
Optimizing Data Distributions Based on Jensen-Shannon Divergence for Federated Learning
Tsinghua Science and Technology 2025, 30(2): 670-681
Published: 09 December 2024
Abstract

In current federated learning frameworks, a central server randomly selects a small number of clients to train local models at the beginning of each global iteration. Because clients' local data are non-independent and identically distributed (non-IID), some local models are inconsistent with the global model. Existing studies employ model cleaning methods to identify inconsistent local models: they measure the cosine similarity between each local model and the global model, and an inconsistent local model is cleaned out and excluded from aggregation into the next global model. However, model cleaning methods incur negative effects such as large computation overheads and limited updates. In this paper, we propose a data distribution optimization method, called federated distribution optimization (FedDO), aiming to overcome the shortcomings of model cleaning methods. FedDO calculates the gradient of the Jensen-Shannon divergence to decrease the discrepancy between the selected clients' data distribution and the overall data distribution. We test our method with a multi-classification regression model, a multi-layer perceptron, and a convolutional neural network on a handwritten digit image dataset. Compared with model cleaning methods, FedDO improves training accuracy by 1.8%, 2.6%, and 5.6%, respectively.
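To make the gradient-based distribution matching concrete, below is a minimal sketch, assuming each client's data distribution is summarized as a per-class label histogram and the server adjusts soft client-selection weights by gradient descent on the Jensen-Shannon divergence. The function names (`jsd_and_grad`, `optimize_weights`), the softmax parameterization of the weights, and the step size are illustrative assumptions, not the paper's exact FedDO procedure.

```python
import numpy as np

def jsd_and_grad(w, P, q, eps=1e-12):
    """JS divergence JSD(m || q) for the mixture m = w @ P, and its
    gradient with respect to the selection weights w.
    P: (n_clients, n_classes) per-client label histograms (rows sum to 1).
    q: (n_classes,) overall label distribution.
    Uses the identity dJSD/dm_j = 0.5 * log(m_j / a_j), where a = (m + q) / 2.
    """
    m = w @ P + eps
    a = 0.5 * (m + q) + eps
    jsd = (0.5 * np.sum(m * np.log(m / a))
           + 0.5 * np.sum((q + eps) * np.log((q + eps) / a)))
    return jsd, P @ (0.5 * np.log(m / a))  # chain rule through m = w @ P

def optimize_weights(P, q, steps=500, lr=0.5):
    """Gradient descent on client-selection weights, kept on the
    probability simplex via a softmax over unconstrained logits
    (an illustrative choice for the constraint, not the paper's)."""
    logits = np.zeros(P.shape[0])
    for _ in range(steps):
        w = np.exp(logits - logits.max())
        w /= w.sum()
        jsd, g_w = jsd_and_grad(w, P, q)
        # Backpropagate through the softmax: dJSD/dlogit_i = w_i * (g_i - w @ g_w)
        logits -= lr * w * (g_w - w @ g_w)
    return w
```

As a usage example, with `w = optimize_weights(P, q)` the server could sample the clients for the next round with probabilities `w`, biasing selection toward a pool whose combined label distribution tracks the overall one and thereby lowering the JS divergence that FedDO targets.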
