Real-time and accurate traffic light status recognition can provide reliable data support for autonomous vehicle decision-making and control systems. To address problems such as the small proportion of the visual sensor's perceptual field occupied by traffic lights and the complexity of recognition scenarios, we propose an end-to-end traffic light status recognition method, ResNeSt50-CBAM-DINO (RC-DINO). First, we cleaned the Tsinghua–Tencent traffic lights dataset (TTTL) and fused it with Shanghai Jiao Tong University's traffic light dataset (S2TLD) to form a Chinese urban traffic light dataset (CUTLD). Second, we combined the residual network with split-attention module-50 (ResNeSt50) and the convolutional block attention module (CBAM) to extract more salient traffic light features. Finally, the proposed RC-DINO and mainstream recognition algorithms were trained and evaluated on CUTLD. The experimental results show that, compared to the original DINO, RC-DINO improved the average precision (AP), AP at intersection over union (IOU) = 0.5 (AP50), AP for small objects (APs), average recall (AR), and balanced F score (F1-Score) by 3.1%, 1.6%, 3.4%, 0.9%, and 0.9%, respectively, and retained some capability to recognize the status of partially occluded traffic lights. These results indicate that RC-DINO improves recognition performance and robustness, making it well suited to traffic light status recognition tasks.
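The CBAM stage mentioned above refines backbone feature maps with sequential channel and spatial attention gates. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: the shared MLP uses randomly initialized weights, and the 7×7 convolution of CBAM's spatial branch is simplified to a channel-wise pooling average, both hypothetical stand-ins for illustration.

```python
import numpy as np

def _sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def channel_attention(x, reduction=16, seed=0):
    """x: (C, H, W). Squeeze spatial dims by avg- and max-pooling,
    pass both through a shared two-layer MLP, and sigmoid-gate channels."""
    c = x.shape[0]
    avg = x.mean(axis=(1, 2))
    mx = x.max(axis=(1, 2))
    rng = np.random.default_rng(seed)
    # Random weights stand in for the learned bottleneck MLP.
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # ReLU bottleneck
    gate = _sigmoid(mlp(avg) + mlp(mx))           # per-channel gate in (0, 1)
    return x * gate[:, None, None]

def spatial_attention(x):
    """Pool along the channel axis, then gate each spatial location.
    (CBAM uses a 7x7 conv here; we average the two pooled maps instead.)"""
    avg = x.mean(axis=0)
    mx = x.max(axis=0)
    gate = _sigmoid(0.5 * (avg + mx))             # per-pixel gate in (0, 1)
    return x * gate[None, :, :]

def cbam(x):
    """Channel attention followed by spatial attention, as in CBAM."""
    return spatial_attention(channel_attention(x))
```

Because both gates are sigmoid outputs in (0, 1), the module rescales — never amplifies — each feature, letting the network emphasize the small image regions where traffic lights appear.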
This is an open access article under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).