Research Article | Open Access

Online learning-based model predictive trajectory control for connected and autonomous vehicles: Modeling and physical tests

Qianwen Li1, Peng Zhang2, Handong Yao1, Zhiwei Chen3, Xiaopeng Li2 (corresponding author)
1 School of Environmental, Civil, Agricultural and Mechanical Engineering, University of Georgia, Athens 30602, USA
2 Department of Civil and Environmental Engineering, University of Wisconsin–Madison, Madison 53706, USA
3 Department of Civil, Architectural, and Environmental Engineering, Drexel University, Philadelphia 19104, USA

Abstract

Motivated by the promising benefits of connected and autonomous vehicles (CAVs) in improving fuel efficiency, mitigating congestion, and enhancing safety, numerous theoretical models have been proposed to plan CAV multiple-step trajectories (time-specific speed/location trajectories) to accomplish various operations. However, limited effort has been made to develop trajectory control techniques that regulate vehicle movements to follow multiple-step trajectories, or to test the performance of theoretical trajectory planning models in field experiments. Without an effective control method, the benefits of theoretical CAV trajectory planning models are difficult to harvest. This study proposes an online learning-based model predictive vehicle trajectory control structure to follow time-specific speed and location profiles. Unlike the single-step controllers that predominate in the literature, a multiple-step model predictive controller is adopted to control the vehicle's longitudinal movements with higher accuracy. Because the model predictive controller's output (speed) cannot be directly executed by the vehicle, a reinforcement learning agent converts the speed command into the vehicle's direct control variable (i.e., throttle/brake). The reinforcement learning agent captures real-time changes in the operating environment, which saves parameter-calibration effort and improves trajectory control accuracy. A line tracking controller keeps vehicles on track. The proposed control structure is tested using reduced-scale robot cars, and its adaptivity is demonstrated by changing the vehicle load. Experiments on two fundamental CAV platoon operations (i.e., platooning and split) then show the effectiveness of the proposed trajectory control structure in regulating robot movements to follow time-specific reference trajectories.
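The abstract describes a two-layer longitudinal control idea: a model predictive controller outputs a reference speed, and a reinforcement learning agent learns online how to turn that speed command into throttle/brake. The sketch below illustrates that idea with a tabular Q-learning agent tracking a fixed reference speed (a stand-in for the MPC output) on a hypothetical first-order longitudinal plant. The plant constants, state discretization, and reward are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical reduced-scale car: first-order longitudinal dynamics with a
# load-dependent inertia (illustrative stand-in, not the paper's plant model).
def step_plant(speed, throttle, load=1.0, dt=0.1):
    accel = (4.0 * throttle - 0.5 * speed) / load
    return max(0.0, speed + accel * dt)

class QAgent:
    """Tabular Q-learning: state = binned speed error, action = throttle level."""
    def __init__(self, n_states=21, n_actions=11, alpha=0.3, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.n_states, self.n_actions = n_states, n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def state(self, err):
        # Discretize a speed error in [-1, 1] m/s into n_states bins.
        frac = (np.clip(err, -1.0, 1.0) + 1.0) / 2.0
        return int(round(frac * (self.n_states - 1)))

    def act(self, s, rng):
        if rng.random() < self.eps:
            return int(rng.integers(self.n_actions))  # explore
        return int(np.argmax(self.q[s]))              # exploit

    def update(self, s, a, r, s2):
        # Standard one-step Q-learning update toward the bootstrapped target.
        target = r + self.gamma * np.max(self.q[s2])
        self.q[s, a] += self.alpha * (target - self.q[s, a])

rng = np.random.default_rng(0)
agent = QAgent()
speed, ref = 0.0, 1.0  # constant 1 m/s reference stands in for the MPC output

for t in range(5000):  # online learning loop: observe error, try a throttle level
    s = agent.state(ref - speed)
    a = agent.act(s, rng)
    speed = step_plant(speed, a / (agent.n_actions - 1))
    agent.update(s, a, -abs(ref - speed), agent.state(ref - speed))

# Greedy evaluation of the learned speed-to-throttle mapping.
agent.eps = 0.0
errs = []
for t in range(100):
    a = agent.act(agent.state(ref - speed), rng)
    speed = step_plant(speed, a / (agent.n_actions - 1))
    errs.append(abs(ref - speed))
print(f"mean tracking error: {np.mean(errs):.3f} m/s")
```

Because the agent learns from rewards rather than a calibrated model, changing the `load` argument of `step_plant` mid-run would let it re-adapt online, loosely mirroring the paper's vehicle-load adaptivity experiment.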

Journal of Intelligent and Connected Vehicles
Pages 86-96
Cite this article:
Li Q, Zhang P, Yao H, et al. Online learning-based model predictive trajectory control for connected and autonomous vehicles: Modeling and physical tests. Journal of Intelligent and Connected Vehicles, 2024, 7(2): 86-96. https://doi.org/10.26599/JICV.2023.9210026


Received: 04 October 2023
Revised: 26 October 2023
Accepted: 09 November 2023
Published: 30 June 2024
© The author(s) 2023.

This is an open access article under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
