Review | Open Access

Leveraging reinforcement learning for dynamic traffic control: A survey and challenges for field implementation

Yu Han (a), Meng Wang (b), Ludovic Leclercq (c)
(a) School of Transportation, Southeast University, Nanjing, 211189, China
(b) Faculty of Transport and Traffic Sciences, Technische Universität Dresden, Dresden, Saxony, 01067, Germany
(c) LICIT-ECO7, Université Gustave Eiffel, ENTPE, Lyon, F-69675, France

Abstract

In recent years, advances in artificial intelligence have generated significant interest in reinforcement learning (RL) within the traffic and transportation community, and dynamic traffic control has emerged as a prominent application field for RL in traffic systems. This paper presents a comprehensive survey of RL studies in dynamic traffic control, examines the challenges of implementing RL-based traffic control strategies in practice, and identifies promising directions for future research. The first part of the paper reviews existing studies on RL-based traffic control strategies, covering their model designs, training algorithms, and evaluation methods. Only a few of these studies isolate the training and testing environments when evaluating their RL controllers. The second part investigates, through simulation experiments, the learning costs associated with online RL methods and the transferability of offline RL methods. The results reveal that online training with random exploration incurs high exploration and learning costs, while the performance of offline RL methods depends heavily on the accuracy of the training simulator. These limitations hinder the practical implementation of existing RL-based traffic control strategies. The final part of the paper summarizes and discusses existing efforts to overcome these challenges, highlighting a growing number of recent studies dedicated to mitigating these limitations and enabling field implementation.
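To make the class of controllers surveyed above concrete, the following is a minimal, self-contained Python sketch, not drawn from the paper, of a tabular Q-learning agent selecting signal phases at a single toy intersection. The environment dynamics, state discretization, reward (negative total queue length), and all hyperparameters are illustrative assumptions; the studies surveyed train against microscopic or macroscopic traffic simulators instead. The epsilon-greedy step is where the learning cost of online RL would arise if such a loop were run in the field, and evaluating the learned policy on an environment with a different random seed illustrates the separation of training and testing environments that the survey finds is often missing.

import random
from collections import defaultdict

class ToyIntersectionEnv:
    # Toy two-phase intersection: the state is the pair of queue-length
    # bins (capped at 9) on the north-south and east-west approaches.
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.queues = [0, 0]

    def reset(self):
        self.queues = [self.rng.randint(0, 5), self.rng.randint(0, 5)]
        return tuple(min(q, 9) for q in self.queues)

    def step(self, action):
        # action 0 serves the NS approach, action 1 the EW approach.
        for i in range(2):
            self.queues[i] += self.rng.randint(0, 2)           # random arrivals
        self.queues[action] = max(0, self.queues[action] - 3)  # departures on the served approach
        state = tuple(min(q, 9) for q in self.queues)
        reward = -sum(self.queues)                              # penalize total queue length
        return state, reward

def train_q_learning(env, episodes=200, steps=100,
                     alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(lambda: [0.0, 0.0])
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # Epsilon-greedy exploration: these random actions are the
            # source of "learning cost" if applied online in the field.
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = max(range(2), key=lambda i: q[s][i])
            s_next, r = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

def evaluate(q, env, steps=100):
    # Greedy rollout (no exploration) on a held-out environment instance.
    s, total = env.reset(), 0.0
    for _ in range(steps):
        a = max(range(2), key=lambda i: q[s][i])
        s, r = env.step(a)
        total += r
    return total

# Train and test with different seeds so the evaluation does not reuse the
# exact traffic conditions seen during training.
policy = train_q_learning(ToyIntersectionEnv(seed=0))
print("held-out return:", evaluate(policy, ToyIntersectionEnv(seed=42)))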

Communications in Transportation Research
Article number: 100104
Cite this article:
Han Y, Wang M, Leclercq L. Leveraging reinforcement learning for dynamic traffic control: A survey and challenges for field implementation. Communications in Transportation Research, 2023, 3: 100104. https://doi.org/10.1016/j.commtr.2023.100104


Received: 24 August 2023
Revised: 24 September 2023
Accepted: 25 September 2023
Published: 03 November 2023
© 2023 The Author(s).

This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
