Publications
Boosting for Distributed Online Convex Optimization
Tsinghua Science and Technology 2023, 28(4): 811-821
Published: 06 January 2023

Decentralized Online Learning (DOL) extends online learning to distributed networks. However, the limited local data available at each node in decentralized settings reduces the accuracy of the resulting decisions or models compared with centralized methods. Motivated by the growing need to obtain high-precision models or decisions from data distributed across a network, this work applies ensemble methods to build a superior model or decision while transferring only gradients or models. A new boosting method, Boosting for Distributed Online Convex Optimization (BD-OCO), is designed to bring boosting to distributed scenarios. BD-OCO achieves a regret upper bound of $\mathcal{O}\left(\frac{M+N}{MN}T\right)$, where M is the size of the distributed network and N is the number of Weak Learners (WLs) at each node. The core idea of BD-OCO is to use the local models to train a strong global one. BD-OCO is evaluated on eight real-world datasets. Numerical results show that it achieves excellent accuracy and convergence and is robust to the size of the distributed network.
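The abstract only describes the setting at a high level. The following is a minimal sketch of distributed online boosting in that spirit, not BD-OCO itself: M nodes each run N weak online-gradient-descent learners on a synthetic linear prediction stream, combine the weak learners' outputs locally, and exchange only model parameters (here, plain averaging over a complete graph). All names, step sizes, and the averaging rule are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)
M, N, T, d = 4, 5, 200, 10          # nodes, weak learners per node, rounds, dimension
w_true = rng.normal(size=d)          # hidden target for the synthetic stream

# Weak-learner parameters: models[m][n] is the n-th weak learner at node m.
models = np.zeros((M, N, d))
etas = 0.1 / np.arange(1, N + 1)     # each weak learner uses its own step size

def node_predict(m, x):
    """Combine a node's weak learners by averaging their predictions."""
    return np.mean(models[m] @ x)

losses = []
for t in range(T):
    round_loss = 0.0
    for m in range(M):
        x = rng.normal(size=d)                    # local data point at node m
        y = w_true @ x + 0.1 * rng.normal()       # noisy label
        y_hat = node_predict(m, x)
        round_loss += 0.5 * (y_hat - y) ** 2
        # Each weak learner takes an online-gradient-descent step on the squared loss.
        for n in range(N):
            grad = (models[m, n] @ x - y) * x
            models[m, n] -= etas[n] * grad
    # Communication step: nodes average their weak learners' parameters,
    # a stand-in for the gradient/model exchange described in the abstract.
    models[:] = models.mean(axis=0, keepdims=True)
    losses.append(round_loss / M)

print(f"average loss, first 20 rounds: {np.mean(losses[:20]):.4f}")
print(f"average loss, last 20 rounds:  {np.mean(losses[-20:]):.4f}")

Running the sketch shows the average per-node loss shrinking over rounds, which illustrates the intent of the approach (stronger global behaviour from weak local learners while sharing only model parameters), though the real BD-OCO update and its regret analysis are given in the paper.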
