Research Article | Open Access

Traffic signal detection and classification in street views using an attention model

TNList, Tsinghua University, Beijing 100084, China.
Department of Computer Science, University of Bath, United Kingdom.

Abstract

Detecting small objects is a challenging task. We focus on a special case: the detection and classification of traffic signals in street views. We present a novel framework that utilizes a visual attention model to make detection more efficient without loss of accuracy, and which generalizes well to other datasets. The attention model is designed to generate a small set of candidate regions at a suitable scale so that small targets can be better located and classified. To evaluate our method in the context of traffic signal detection, we have built a traffic light benchmark with over 15,000 traffic light instances, based on Tencent street view panoramas. We have tested our method both on the dataset we built and on the Tsinghua–Tencent 100K (TT100K) traffic sign benchmark. Experiments show that our method has superior detection performance and is quicker than the general Faster R-CNN object detection framework on both datasets. It is competitive with state-of-the-art specialist traffic sign detectors on TT100K, but is an order of magnitude faster. To show generality, we tested it on the LISA dataset without tuning, and obtained an average precision in excess of 90%.
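The abstract describes a two-stage idea: a lightweight attention model scans a downsampled view of the panorama and proposes a small set of candidate regions at a suitable scale, and a second network then classifies each region cropped from the full-resolution image. The sketch below is not the authors' released implementation; it is a minimal PyTorch illustration of that two-stage structure, and the module sizes, class count, top-k budget, crop size, and names (AttentionProposer, RegionClassifier, detect) are all illustrative assumptions.

```python
# Minimal sketch of an attention-then-classify pipeline for small targets
# in large street-view images. Everything here is an illustrative assumption,
# not the paper's actual architecture or hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionProposer(nn.Module):
    """Predicts a coarse 'objectness' heat map over a downsampled street view."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                       # 1-channel attention map
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))


class RegionClassifier(nn.Module):
    """Classifies a fixed-size crop, e.g., red / green / yellow / background."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, crops):
        return self.net(crops)


def detect(image, proposer, classifier, top_k=20, crop=64):
    """Run the attention stage on a downsampled image, then classify
    full-resolution crops centred on the strongest heat-map responses."""
    h, w = image.shape[-2:]
    small = F.interpolate(image, scale_factor=0.25, mode="bilinear",
                          align_corners=False)
    heat = proposer(small)[0, 0]                       # (H', W') attention map
    scores, idx = heat.flatten().topk(min(top_k, heat.numel()))
    results = []
    for score, i in zip(scores.tolist(), idx.tolist()):
        gy, gx = divmod(i, heat.shape[1])              # grid cell of this peak
        # Map the heat-map peak back to full-resolution image coordinates.
        cy, cx = int(gy * h / heat.shape[0]), int(gx * w / heat.shape[1])
        y0, x0 = max(cy - crop // 2, 0), max(cx - crop // 2, 0)
        patch = image[:, :, y0:y0 + crop, x0:x0 + crop]
        if patch.shape[-2:] != (crop, crop):
            continue                                   # skip boundary patches
        label = classifier(patch).argmax(dim=1).item()
        results.append(((x0, y0, crop, crop), label, score))
    return results


if __name__ == "__main__":
    panorama = torch.rand(1, 3, 512, 2048)             # stand-in street-view strip
    with torch.no_grad():
        detections = detect(panorama, AttentionProposer().eval(),
                            RegionClassifier().eval())
    print(f"classified {len(detections)} candidate regions")
```

The point of this structure is the efficiency argument the abstract makes: only a handful of attention peaks are examined at full resolution, so small targets can be classified at a suitable scale without running an expensive detector over the entire panorama.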

Computational Visual Media, 2018, Vol. 4, No. 3, Pages 253–266
Cite this article:
Lu Y, Lu J, Zhang S, et al. Traffic signal detection and classification in street views using an attention model. Computational Visual Media, 2018, 4(3): 253-266. https://doi.org/10.1007/s41095-018-0116-x


Revised: 09 March 2018
Accepted: 07 April 2018
Published: 04 August 2018
© The Author(s) 2018

This article is published with open access at Springerlink.com

The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
