Research Article | Open Access

THP: Tensor-field-driven hierarchical path planning for autonomous scene exploration with depth sensors

College of Computing, National University of Defense Technology, Changsha 410073, China
Abstract

Automatically exploring an unknown 3D environment with a robot equipped only with depth sensors is challenging due to the sensors' limited field of view. We introduce THP, a tensor-field-based framework for efficient environment exploration that exploits the geometric characteristics of tensor fields to make better use of the encoded depth information. Specifically, a tensor field is constructed incrementally and guides the robot both in formulating optimal global exploration paths and in executing a collision-free local movement strategy. Degenerate points generated during exploration are adopted as anchors to formulate a hierarchical traveling salesman problem (TSP) for global path optimization. This strategy helps the robot avoid long-distance round trips while maintaining scanning completeness. Furthermore, the tensor field enables a local movement strategy based on particle advection that avoids collisions, eliminating massive, time-consuming recalculations of local movement paths. We experimentally evaluated our method with a ground robot in 8 complex indoor scenes. On average, our method achieves 14% higher exploration efficiency and 21% higher exploration completeness than state-of-the-art alternatives using LiDAR scans. Moreover, compared to similar methods, our method makes path decisions 39% faster thanks to the hierarchical exploration strategy.
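To illustrate the particle-advection idea mentioned in the abstract, the following is a minimal sketch, not the authors' implementation: a particle is stepped along the major eigenvector of a symmetric 2D tensor field, with the eigenvector sign resolved by continuity with the previous step. The field `tensor_at` here is a hypothetical smooth placeholder standing in for a field constructed from scanned scene geometry.

```python
import numpy as np

def tensor_at(p):
    """Hypothetical smooth 2D tensor field: a trace-free symmetric 2x2
    tensor per point, whose major eigenvector rotates with position."""
    theta = 0.5 * np.arctan2(p[1], p[0] + 2.0)   # arbitrary smooth angle
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[c, s], [s, -c]])

def advect(start, steps=50, h=0.05):
    """Trace a path with forward-Euler steps along the major eigenvector."""
    p = np.asarray(start, dtype=float)
    prev_dir = np.array([1.0, 0.0])
    path = [p.copy()]
    for _ in range(steps):
        w, v = np.linalg.eigh(tensor_at(p))      # eigenvalues in ascending order
        d = v[:, np.argmax(w)]                   # unit major eigenvector
        if np.dot(d, prev_dir) < 0:              # eigenvectors are sign-ambiguous,
            d = -d                               # so keep the direction consistent
        p = p + h * d
        prev_dir = d
        path.append(p.copy())
    return np.array(path)

path = advect([0.0, 0.0])
```

Because the advected direction is read directly from the field, no per-step path replanning is needed; in the paper's setting this is what lets the robot follow smooth, collision-free local trajectories.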

Electronic Supplementary Material

Video
cvm-10-6-1121_ESM.mp4

Computational Visual Media
Pages 1121-1135
Cite this article:
Xi Y, Zhu C, Duan Y, et al. THP: Tensor-field-driven hierarchical path planning for autonomous scene exploration with depth sensors. Computational Visual Media, 2024, 10(6): 1121-1135. https://doi.org/10.1007/s41095-022-0312-6


Received: 03 August 2022
Accepted: 12 September 2022
Published: 18 May 2024
© The Author(s) 2024.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
