Large-scale models (LSMs) can process massive data and solve complex problems, effectively raising the intelligence level of urban intersections. However, as traffic conditions at intersections grow increasingly complex, intelligent intersection LSMs (I2LSMs) must be continuously retrained and updated. Traditional cloud-based training incurs substantial computational and storage overhead and carries a risk of data leakage. Combining edge artificial intelligence with federated learning offers an efficient, privacy-preserving computing paradigm. We therefore propose a hierarchical hybrid distributed training mechanism for I2LSMs. First, building on an intelligent intersection system with cloud-network-terminal integration, we construct a hierarchical hybrid distributed training architecture for I2LSMs. Second, we propose a hierarchical hybrid federated learning (H2Fed) algorithm that combines the advantages of centralized and decentralized federated learning. Third, we propose an adaptive compressed sensing algorithm to reduce communication overhead. Finally, we analyze the convergence of the H2Fed algorithm. Experimental results show that H2Fed reduces communication overhead by 21.6% while maintaining model accuracy.
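To make the hierarchical idea concrete, the sketch below shows a generic two-tier federated averaging round with a sparsifying compression step. It is an illustrative stand-in, not the paper's H2Fed or its adaptive compressed sensing: the function names (`compress_topk`, `edge_aggregate`, `cloud_aggregate`), the top-k sparsification, and the two-intersection setup are all assumptions introduced here for illustration.

```python
import numpy as np

def compress_topk(update, ratio=0.1):
    """Keep only the largest-magnitude entries of a model update.

    A simple stand-in for the communication-reduction step: like
    compressed sensing, it cuts the bytes each edge sends upstream.
    """
    flat = update.ravel()
    k = max(1, int(ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(update.shape)

def edge_aggregate(client_updates):
    """Decentralized tier: an edge node averages its local clients."""
    return np.mean(client_updates, axis=0)

def cloud_aggregate(edge_updates, weights):
    """Centralized tier: the cloud takes a weighted average over edges."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return np.tensordot(weights, np.stack(edge_updates), axes=1)

# Hypothetical round: two intersections, each with two roadside clients.
rng = np.random.default_rng(0)
edge_a = edge_aggregate([rng.normal(size=8) for _ in range(2)])
edge_b = edge_aggregate([rng.normal(size=8) for _ in range(2)])
global_update = cloud_aggregate(
    [compress_topk(edge_a, 0.5), compress_topk(edge_b, 0.5)],
    weights=[2, 2],  # weight each edge by its number of clients
)
print(global_update.shape)  # (8,)
```

The design point this illustrates is the hybrid split: client-to-edge aggregation stays local (the decentralized tier), and only compressed edge summaries travel to the cloud (the centralized tier), which is where the communication savings arise.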