Open Access

Boosting for Distributed Online Convex Optimization

Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha 410073, China
Medical Engineering Laboratory of Chinese PLA General Hospital
School of Cyberspace Security, Dongguan University of Technology, Dongguan 523000, China

Abstract

Decentralized Online Learning (DOL) extends online learning to distributed networks. However, the limited local data available at each node in a decentralized setting reduces the accuracy of decisions or models compared with centralized methods. Motivated by the growing need to obtain high-precision models or decisions from data resources distributed across a network, this work applies ensemble methods to build a superior model or decision while transferring only gradients or models. A new boosting method, Boosting for Distributed Online Convex Optimization (BD-OCO), is designed to bring boosting to distributed scenarios. BD-OCO achieves the regret upper bound 𝒪((M+N)/(MN)·√T), where M is the size of the distributed network and N is the number of Weak Learners (WLs) in each node. The core idea of BD-OCO is to use the local models to train a strong global one. BD-OCO is evaluated on eight different real-world datasets. Numerical results show that BD-OCO achieves excellent accuracy and convergence, and is robust to the size of the distributed network.
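To make the abstract's idea concrete, the following is a minimal, hypothetical Python sketch of the general scheme it describes: each node keeps N weak online learners, combines them into a stronger local model, and exchanges models (never raw data) with its neighbors so that local training contributes to a global model. Every specific choice here, the ring topology, uniform boosting weights, linear weak learners, squared loss, and step size, is an illustrative assumption, not the BD-OCO algorithm itself.

```python
# Illustrative simulation only: NOT the authors' BD-OCO algorithm.
# Assumptions (all hypothetical): ring gossip topology, linear weak
# learners trained by online gradient descent, squared loss, and a
# uniform convex combination as the boosting weights.
import numpy as np

rng = np.random.default_rng(0)

M, N, T, d = 8, 5, 500, 10          # nodes, weak learners per node, rounds, dimension
w_true = rng.normal(size=d)          # hidden target generating the data stream

# Doubly stochastic gossip matrix for a ring: each node averages
# with itself and its two neighbors.
P = np.zeros((M, M))
for i in range(M):
    P[i, i] = P[i, (i - 1) % M] = P[i, (i + 1) % M] = 1.0 / 3.0

# models[i, k] is the k-th weak learner (a linear predictor) at node i.
models = np.zeros((M, N, d))
eta = 0.05                           # weak-learner step size

losses = []
for t in range(T):
    x = rng.normal(size=(M, d))      # each node observes its own sample
    y = x @ w_true + 0.1 * rng.normal(size=M)

    round_loss = 0.0
    for i in range(M):
        # Boosting-style combination: a convex combination of the weak
        # learners (uniform weights, the simplest possible choice).
        strong = models[i].mean(axis=0)
        pred = strong @ x[i]
        round_loss += 0.5 * (pred - y[i]) ** 2

        # Each weak learner takes an online gradient step on the same
        # convex loss; residual-style staging is omitted for brevity.
        for k in range(N):
            g = (models[i, k] @ x[i] - y[i]) * x[i]
            models[i, k] -= eta * g

    # Gossip step: mix weak-learner models with neighbors, so only
    # models (not data) cross the network.
    models = np.einsum('ij,jkd->ikd', P, models)
    losses.append(round_loss / M)

print(f"avg loss, first 50 rounds: {np.mean(losses[:50]):.4f}")
print(f"avg loss, last 50 rounds:  {np.mean(losses[-50:]):.4f}")
```

Running the sketch shows the average per-node loss decreasing over rounds: local weak learners improve on their own streams, and the gossip step spreads those improvements across the network, which is the qualitative behavior the abstract attributes to BD-OCO.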

Tsinghua Science and Technology
Pages 811–821
Cite this article:
Hu Y, Zhao Y, Luo L, et al. Boosting for Distributed Online Convex Optimization. Tsinghua Science and Technology, 2023, 28(4): 811–821. https://doi.org/10.26599/TST.2022.9010041


Received: 06 September 2022
Revised: 18 September 2022
Accepted: 20 September 2022
Published: 06 January 2023
© The author(s) 2023.

The articles published in this open access journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
