Abstract
Federated learning (FL) is an emerging distributed machine learning paradigm that provides privacy
guarantees for training robust models on distributed clients. The primary challenge in FL is data heterogeneity, which slows model convergence and degrades model performance. Knowledge distillation has recently demonstrated effectiveness in addressing this challenge. However, existing approaches neglect the statistical heterogeneity of local models and the uncertainty of the data distribution in the global model, so the ensemble knowledge cannot be fully utilized to guide local model learning. In this work, we propose an unsupervised knowledge distillation method that migrates the local class-level pseudo-data sampling scheme to the server for fine-tuning the global model. Specifically, we provide a conditional autoencoder for each client to maintain a dynamic generator on the server, which ensembles the clients' class-level information. Our method produces an auxiliary dataset representing the global class-level distribution to regularize the local model as an inductive knowledge bias, and it employs unsupervised knowledge distillation to enhance the aggregated model's performance. Extensive experiments show that our method significantly outperforms current state-of-the-art FL algorithms and can be integrated as a flexible plugin into existing FL optimization algorithms to further enhance model performance.