Research Article | Open Access

Robust and efficient edge-based visual odometry

State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China


Abstract

Visual odometry, which estimates relative camera motion between sequential video frames, is widely used in augmented reality, virtual reality, and autonomous driving. However, low-texture scenes remain challenging for state-of-the-art approaches. In this paper, we propose a robust and efficient visual odometry algorithm that directly uses edge pixels to track camera pose. In contrast to direct methods, we build the optimization energy from reprojection error, which copes effectively with illumination changes. A distance transform map, computed from edge detection on each frame, improves tracking efficiency. A novel weighted edge alignment method combined with sliding window optimization further improves accuracy. Experiments on public datasets show that our method matches state-of-the-art methods in tracking accuracy while being faster and more robust.
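
To make the core idea concrete, the sketch below (our illustration, not the authors' code; all function and variable names are hypothetical) shows the standard distance-transform formulation of edge alignment that the abstract describes: each frame's Canny edge map is converted into a distance-to-nearest-edge map, and a candidate pose is scored by sampling that map at the reprojections of 3D edge points, yielding per-point reprojection residuals.

```python
# Minimal sketch of distance-transform-based edge alignment.
import cv2
import numpy as np

def build_edge_distance_map(gray, low=50, high=150):
    """Canny edges, then per-pixel distance to the nearest edge."""
    edges = cv2.Canny(gray, low, high)        # 255 on edge pixels
    # distanceTransform measures distance to the nearest ZERO pixel,
    # so invert the edge map first.
    return cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

def reprojection_residuals(dist_map, pts3d, R, t, K):
    """Residual per 3D edge point: distance-map value at its projection.

    pts3d : (N, 3) edge points in the reference frame
    R, t  : candidate rotation (3, 3) and translation (3,)
    K     : (3, 3) camera intrinsics
    """
    cam = pts3d @ R.T + t                     # transform into current frame
    valid = cam[:, 2] > 1e-6                  # keep points in front of camera
    uv = cam[valid] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]               # perspective division
    h, w = dist_map.shape
    u = np.clip(uv[:, 0], 0, w - 2)
    v = np.clip(uv[:, 1], 0, h - 2)
    # bilinear lookup of the distance transform at sub-pixel locations
    u0, v0 = u.astype(int), v.astype(int)
    du, dv = u - u0, v - v0
    return (dist_map[v0, u0] * (1 - du) * (1 - dv)
            + dist_map[v0, u0 + 1] * du * (1 - dv)
            + dist_map[v0 + 1, u0] * (1 - du) * dv
            + dist_map[v0 + 1, u0 + 1] * du * dv)
```

In the full system these residuals would additionally be weighted per edge point and minimized jointly over a sliding window of keyframes; the sketch returns them raw to keep the lookup mechanism visible.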

Electronic Supplementary Material

Video
101320TP-2022-3-467_ESM.mp4

Computational Visual Media
Pages 467-481
Cite this article:
Yan F, Li Z, Zhou Z. Robust and efficient edge-based visual odometry. Computational Visual Media, 2022, 8(3): 467-481. https://doi.org/10.1007/s41095-021-0251-7


Received: 30 June 2021
Accepted: 13 August 2021
Published: 07 March 2022
© The Author(s) 2021.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
