Research Article | Open Access

A deep learning method for traffic light status recognition

Lan Yang, Zeyu He, Xiangmo Zhao, Shan Fang, Jiaqi Yuan, Yixu He, Shijie Li, Songyan Liu
School of Information Engineering, Chang’an University, Xi’an 710064, China

Abstract

Real-time and accurate traffic light status recognition can provide reliable data support for autonomous vehicle decision-making and control systems. To address problems such as the small proportion of the visual sensor's perceptual field occupied by traffic lights and the complexity of recognition scenarios, we propose an end-to-end traffic light status recognition method, ResNeSt50-CBAM-DINO (RC-DINO). First, we cleaned the Tsinghua–Tencent traffic lights dataset (TTTL) and fused it with the Shanghai Jiao Tong University traffic light dataset (S2TLD) to form a Chinese urban traffic light dataset (CUTLD). Second, we combined the 50-layer residual network with split-attention modules (ResNeSt50) and the convolutional block attention module (CBAM) to extract more salient traffic light features. Finally, the proposed RC-DINO and mainstream recognition algorithms were trained and evaluated on CUTLD. The experimental results show that, compared to the original DINO, RC-DINO improved the average precision (AP), AP at intersection over union (IOU) = 0.5 (AP50), AP for small objects (APs), average recall (AR), and balanced F score (F1-Score) by 3.1%, 1.6%, 3.4%, 0.9%, and 0.9%, respectively, and showed some capability to recognize the status of partially occluded traffic lights. These results indicate that RC-DINO improves recognition performance and robustness, making it more suitable for traffic light status recognition tasks.
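
The abstract's second step attaches CBAM to ResNeSt50 feature maps. For orientation only, the sketch below implements the standard CBAM block from Woo et al. (2018) in PyTorch and applies it to a hypothetical backbone feature map; the class names, channel count, and insertion point are illustrative assumptions, not the authors' RC-DINO implementation.

```python
# Minimal CBAM sketch (channel attention followed by spatial attention),
# applied to an assumed ResNeSt50-style feature map. Illustrative only.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # One shared MLP applied to both the average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))          # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))           # global max pooling
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale                            # reweight channels


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel-wise average and max maps indicate where to attend.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                            # reweight spatial locations


class CBAM(nn.Module):
    """Channel attention followed by spatial attention (Woo et al., 2018)."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))


# Example: refine a hypothetical 1024-channel backbone stage output before
# it is passed on to a detection head.
feat = torch.randn(1, 1024, 32, 32)
refined = CBAM(1024)(feat)
print(refined.shape)  # torch.Size([1, 1024, 32, 32])
```

In the method as summarized above, features refined in this manner would feed the DINO detection head; the exact stages of ResNeSt50 at which the authors insert CBAM are described in the full paper, not in this sketch.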

References

[1] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S., 2020. End-to-end object detection with transformers. In: European Conference on Computer Vision. Cham: Springer, 213–229.
[2] Chen, X., Chen, Y., Zhang, G., 2021. A computer vision algorithm for locating and recognizing traffic signal control light status and countdown time. J Intell Transp Syst, 25, 533–546.
[3] Fang, S., Yang, L., Zhao, X., Wang, W., Xu, Z., Wu, G., et al., 2023. A dynamic transformation car-following model for the prediction of the traffic flow oscillation. IEEE Intell Transp Syst Mag.
[4] Gong, J., Jiang, Y., Xiong, G., Guan, C., Tao, G., Chen, H., 2010. The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles. In: 2010 IEEE Intelligent Vehicles Symposium, 431–435.
[5] He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–778.
[6] He, Y., Liu, Y., Yang, L., Qu, X., 2023. Deep adaptive control: Deep reinforcement learning-based adaptive vehicle trajectory control algorithms for different risk levels. IEEE Trans Intell Veh. https://doi.org/10.1109/TIV.2023.3303408
[7] Jiang, J., Huang, H., 2023. Semantic segmentation of remote sensing images based on dual channel attention mechanism (DCAM). https://doi.org/10.21203/rs.3.rs-3006101/v1
[8] John, V., Yoneda, K., Qi, B., Liu, Z., Mita, S., 2014. Traffic light recognition in varying illumination using deep learning and saliency map. In: 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2286–2291.
[9] Lee, S. H., Kim, J. H., Lim, Y. J., Lim, J., 2018. Traffic light detection and recognition based on Haar-like features. In: 2018 International Conference on Electronics, Information, and Communication (ICEIC), 1–4.
[10] Lin, T., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Zitnick, C., 2014. Microsoft COCO: Common objects in context. In: Computer Vision – ECCV 2014: 13th European Conference, 740–755.
[11] Lin, H., He, Y., Liu, Y., Gao, K., Qu, X., 2023a. Deep demand prediction: An enhanced conformer model with cold-start adaptation for origin–destination ride-hailing demand prediction. IEEE Intell Transp Syst Mag, 2–15.
[12] Lin, H., Liu, Y., Li, S., Qu, X., 2023b. How generative adversarial networks promote the development of intelligent transportation systems: A survey. IEEE/CAA J Autom Sin, 10, 1781–1796.
[13] Liu, Y., Jia, R., Ye, J., Qu, X., 2022. How machine learning informs ride-hailing services: A survey. Commun Transp Res, 2, 100075.
[14] Liu, Y., Lyu, C., Zhang, Y., Liu, Z., Yu, W., Qu, X., 2021. DeepTSP: Deep traffic state prediction model based on large-scale empirical data. Commun Transp Res, 1, 100012.
[15] Liu, Y., Wu, F., Liu, Z., Wang, K., Wang, F., Qu, X., 2023. Can language models be used for real-world urban-delivery route optimization? Innovation, 4, 100520.
[16] Mentasti, S., Simsek, Y. C., Matteucci, M., 2023. Traffic lights detection and tracking for HD map creation. Front Robot AI, 10, 1065394.
[17] Müller, J., Dietmayer, K., 2018. Detecting traffic lights by single shot detection. In: 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 266–273.
[18] Omachi, M., Omachi, S., 2009. Fast detection of traffic light with color and edge information. Electronics Engineers of Japan, 38, 673–679.
[19] Qian, H., Wang, L., Mou, H., 2019. Fast detection and identification of traffic lights based on deep learning. Comput Sci, 46, 272–278. (in Chinese)
[20] Qi, F., Yang, C., Shi, B., Ma, S., 2023. Micro-expression recognition based on DCBAM-EfficientNet model. J Phys: Conf Ser, 2504, 012062.
[21] Reddy, V. P., Raja, A. R., Polasi, P. K., Ponnuri, R. T., Kumar, G. K., Ahamed, S. F., 2023. Design and development of traffic light recognition method for autonomous vehicles using V2I communication. In: 2023 3rd International Conference on Artificial Intelligence and Signal Processing (AISP), 1–6.
[22] Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell, 39, 1137–1149.
[23] Saini, S., Nikhil, S., Konda, K. R., Bharadwaj, H. S., Ganeshan, N., 2017. An efficient vision-based traffic light detection and state recognition for autonomous vehicles. In: 2017 IEEE Intelligent Vehicles Symposium (IV), 606–611.
[24] Sathiya, S., Balasubramanian, M., Priya, D. V., 2015. Real time recognition of traffic light and their signal count-down timings. In: International Conference on Information Communication and Embedded Systems (ICICES2014), 1–6.
[25] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., et al., 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
[26] Wang, K., Xiao, Y., He, Y., 2023. Charting the future: Intelligent and connected vehicles reshaping the bus system. J Intell Connect Veh. https://doi.org/10.26599/JICV.2023.9210024
[27] Wang, Q., Zhang, Q., Liang, X., Wang, Y., Zhou, C., Mikulovich, V. I., 2021. Traffic lights detection and recognition method based on the improved YOLOv4 algorithm. Sensors, 22, 200.
[28] Woo, S., Park, J., Lee, J. Y., Kweon, I. S., 2018. CBAM: Convolutional block attention module. In: European Conference on Computer Vision. Cham: Springer, 3–19.
[29] Zeng, S., Wang, R., Li, M., Guan, L., 2023. N_ResNet: A real-time traffic light recognition network using object detection. https://doi.org/10.21203/rs.3.rs-2526425/v1
[30] Zhang, H., Li, F., Liu, S., Zhang, L., Su, H., Zhu, J., et al., 2022a. DINO: DETR with improved DeNoising anchor boxes for end-to-end object detection. https://doi.org/10.48550/arXiv.2203.03605
[31] Zhang, H., Wu, C., Zhang, Z., Zhu, Y., Lin, H., Zhang, Z., et al., 2022b. ResNeSt: Split-attention networks. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2735–2745.
[32] Zong, F., He, Z., Zeng, M., Liu, Y., 2022a. Dynamic lane changing trajectory planning for CAV: A multi-agent model with path preplanning. Transp B Transp Dyn, 10, 266–292.
[33] Zong, F., Wang, M., Tang, J., Zeng, M., 2022b. Modeling AVs & RVs' car-following behavior by considering impacts of multiple surrounding vehicles and driving characteristics. Phys A Stat Mech Appl, 589, 126625.

Journal of Intelligent and Connected Vehicles
Pages 173-182
Cite this article:
Yang L, He Z, Zhao X, et al. A deep learning method for traffic light status recognition. Journal of Intelligent and Connected Vehicles, 2023, 6(3): 173-182. https://doi.org/10.26599/JICV.2023.9210022


Received: 20 July 2023
Revised: 23 September 2023
Accepted: 10 October 2023
Published: 30 September 2023
© The author(s) 2023.

This is an open access article under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
