Research Article | Open Access

Localization and mapping algorithm based on Lidar-IMU-Camera fusion

Yibing Zhao1 (✉), Yuhe Liang1, Zhenqiang Ma1, Lie Guo1, Hexin Zhang2
1 School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China
2 Department of the Built Environment, Eindhoven University of Technology, Eindhoven 5600 MB, the Netherlands

Abstract

Positioning and mapping is a challenging and active research topic in environment perception for autonomous driving. In a complex traffic environment, the Global Navigation Satellite System (GNSS) signal can be blocked, leading to inaccurate vehicle positioning. To ensure the safety of automated electric campus vehicles, this study builds on the Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain (LEGO-LOAM) algorithm and adds a monocular vision system. An algorithm framework based on Lidar-IMU-Camera fusion (Lidar: light detection and ranging; IMU: inertial measurement unit) was proposed. A lightweight monocular visual odometry model was used, and the LEGO-LOAM system was employed to initialize the monocular vision module; the visual odometry output was taken as the initial value of the laser odometry. At the back-end optimization phase, an error-state Kalman filtering fusion algorithm was employed to fuse the visual odometry and the LEGO-LOAM system for positioning. A visual bag-of-words model was applied to perform loop closure detection, and based on the test results, the Lidar loop closure detection was further optimized, reducing the accumulated positioning error. Real-vehicle experiments showed that the proposed algorithm improves mapping quality and positioning accuracy in the campus environment. The Lidar-IMU-Camera framework was also verified on UrbanNav, a Hong Kong urban dataset. Compared with the LEGO-LOAM algorithm, the results show that the proposed algorithm can effectively reduce map drift, improve map resolution, and output more accurate driving trajectory information.
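The error-state Kalman filtering fusion step mentioned above can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: it fuses lidar odometry increments (prediction) with visual odometry pose measurements (correction) on a planar (x, y, yaw) state, and the noise covariances Q and R are placeholder values; the actual system presumably operates on full 6-DOF poses.

import numpy as np

class ErrorStateKF:
    """Minimal planar error-state Kalman filter sketch (illustrative only)."""

    def __init__(self):
        self.x = np.zeros(3)                   # nominal pose: x, y, yaw
        self.P = np.eye(3) * 1e-3              # error-state covariance
        self.Q = np.diag([0.02, 0.02, 0.005])  # lidar odometry noise (assumed)
        self.R = np.diag([0.10, 0.10, 0.020])  # visual odometry noise (assumed)

    def predict(self, d_pose):
        """Propagate the nominal pose with a lidar odometry increment
        (dx, dy, dyaw) expressed in the body frame."""
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x[0] += c * d_pose[0] - s * d_pose[1]
        self.x[1] += s * d_pose[0] + c * d_pose[1]
        self.x[2] += d_pose[2]
        # Jacobian of the motion model w.r.t. the error state (pre-update yaw)
        F = np.array([[1.0, 0.0, -s * d_pose[0] - c * d_pose[1]],
                      [0.0, 1.0,  c * d_pose[0] - s * d_pose[1]],
                      [0.0, 0.0, 1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z_pose):
        """Correct the nominal pose with a visual odometry pose measurement."""
        H = np.eye(3)                          # measurement observes the full pose
        y = z_pose - self.x                    # innovation = observed error state
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi  # wrap yaw to [-pi, pi]
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ y                        # inject error state into nominal state
        self.P = (np.eye(3) - K @ H) @ self.P

The structure mirrors what the abstract describes: the laser odometry drives the prediction, and the visual odometry supplies an independent pose observation whose residual (the error state) corrects the accumulated drift.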

References

 
Davison, A. J., Reid, I. D., Molton, N. D., Stasse, O., 2007. MonoSLAM: Real-time single camera SLAM. IEEE Trans Pattern Anal Mach Intell, 29, 1052–1067.

Chen, S., Ma, H., Jiang, C., Zhou, B., Xue, W., Xiao, Z., et al., 2022. NDT-LOAM: Real-time lidar odometry and mapping with weighted NDT and an LFA. IEEE Sens J, 22, 3660–3671.

Engel, J., Schöps, T., Cremers, D., 2014. LSD-SLAM: Large-scale direct monocular SLAM. In: European Conference on Computer Vision, 834–849.

Hess, W., Kohler, D., Rapp, H., Andor, D., 2016. Real-time loop closure in 2D LIDAR SLAM. In: 2016 IEEE International Conference on Robotics and Automation (ICRA), 1271–1278.

Khattak, S., 2017. Multi-modal landmark detection and tracking for odometry estimation in degraded visual environments. M.S. Thesis. Reno, NV, USA: University of Nevada.

Klein, G., Murray, D., 2007. Parallel tracking and mapping for small AR workspaces. In: 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, 225–234.

Kohlbrecher, S., von Stryk, O., Meyer, J., Klingauf, U., 2011. A flexible and scalable SLAM system with full 3D motion estimation. In: 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, 155–160.

Montemerlo, M., Thrun, S., 2003a. Simultaneous localization and mapping with unknown data association using FastSLAM. In: 2003 IEEE International Conference on Robotics and Automation, 1985–1991.

Montemerlo, M., Thrun, S., Koller, D., Wegbreit, B., 2003b. FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges. In: Proceedings of the 18th International Joint Conference on Artificial Intelligence, 1151–1156.

Mourikis, A. I., Roumeliotis, S. I., 2007. A multi-state constraint Kalman filter for vision-aided inertial navigation. In: Proceedings 2007 IEEE International Conference on Robotics and Automation, 3565–3572.

Murphy, K., Russell, S., 2001. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In: Sequential Monte Carlo Methods in Practice, 499–515.

Park, Y. S., Jang, H., Kim, A., 2020. I-LOAM: Intensity enhanced LiDAR odometry and mapping. In: 2020 17th International Conference on Ubiquitous Robots (UR), 455–458.

Pire, T., Fischer, T., Castro, G., De Cristóforis, P., Civera, J., Jacobo Berlles, J., 2017. S-PTAM: Stereo parallel tracking and mapping. Robot Auton Syst, 93, 27–42.

Shan, T., Englot, B., 2018. LeGO-LOAM: Lightweight and ground-optimized lidar odometry and mapping on variable terrain. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 4758–4765.

Shan, T., Englot, B., Meyers, D., Wang, W., Ratti, C., Rus, D., 2020. LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 5135–5142.

Wang, M., He, L., Yu, L., Chao, S., 2020. Mobile robot localization algorithm based on multi-sensor information fusion. J Meas Sci Instrum, 11, 152–160.

Zhang, J., Singh, S., 2014. LOAM: Lidar odometry and mapping in real-time. In: Robotics: Science and Systems Conference, 1–9.

Zhang, J., Singh, S., 2015. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), 2174–2181.
Journal of Intelligent and Connected Vehicles
Pages 97–107
Cite this article:
Zhao Y, Liang Y, Ma Z, et al. Localization and mapping algorithm based on Lidar-IMU-Camera fusion. Journal of Intelligent and Connected Vehicles, 2024, 7(2): 97–107. https://doi.org/10.26599/JICV.2023.9210027


Received: 16 June 2023
Revised: 18 August 2023
Accepted: 18 December 2023
Published: 30 June 2024
© The author(s) 2023.

This is an open access article under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
