Research Article | Open Access

Point cloud completion via structured feature maps using a feedback network

College of Computer Science & Software Engineering, Shenzhen University, Shenzhen, China
Kuaishou Technology, China

Abstract

In this paper, we tackle the challenging problem of point cloud completion from the perspective of feature learning. Our key observation is that, to recover the underlying structures as well as surface details from a partial input, a fundamental component is a feature representation that captures both the global structure and local geometric details. We accordingly first propose FSNet, a feature structuring module that adaptively aggregates point-wise features into a 2D structured feature map by learning multiple latent patterns from local regions. We then integrate FSNet into a coarse-to-fine pipeline for point cloud completion. Specifically, a 2D convolutional neural network is adopted to decode the feature maps from FSNet into a coarse and complete point cloud. Next, a point cloud upsampling network is used to generate a dense point cloud from the partial input and the coarse intermediate output. To efficiently exploit local structures and enhance point distribution uniformity, we propose IFNet, a point upsampling module with a self-correction mechanism that progressively refines details of the generated dense point cloud. We have conducted qualitative and quantitative experiments on the ShapeNet, MVP, and KITTI datasets, which demonstrate that our method outperforms state-of-the-art point cloud completion approaches.
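
The abstract describes a two-stage, coarse-to-fine architecture. As a rough illustration of the coarse stage only, the following PyTorch-style sketch shows one way per-point features could be softly assigned to learnable latent patterns to form a 2D structured feature map, which a small 2D convolutional decoder then maps to a coarse point cloud. This is not the authors' implementation: all module names (FSNetSketch, CoarseDecoder), hyper-parameters, and the attention-based aggregation are illustrative assumptions, and the feedback-based IFNet refinement stage is omitted.

import torch
import torch.nn as nn


class FSNetSketch(nn.Module):
    """Aggregate per-point features into a 2D structured feature map (sketch).

    Each cell of the H x W map holds a learnable latent pattern; point features
    are softly assigned to the cells with attention, so an unordered set of point
    features becomes an image-like tensor that a 2D CNN can decode.
    """

    def __init__(self, feat_dim: int = 256, map_size: int = 8):
        super().__init__()
        self.map_size = map_size
        # One learnable latent pattern per cell of the 2D feature map.
        self.patterns = nn.Parameter(torch.randn(map_size * map_size, feat_dim))
        self.to_key = nn.Linear(feat_dim, feat_dim)
        self.to_value = nn.Linear(feat_dim, feat_dim)

    def forward(self, point_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N, C) features from any point-wise encoder.
        B, N, C = point_feats.shape
        keys = self.to_key(point_feats)                             # (B, N, C)
        values = self.to_value(point_feats)                         # (B, N, C)
        queries = self.patterns.unsqueeze(0).expand(B, -1, -1)      # (B, H*W, C)
        attn = torch.softmax(queries @ keys.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, H*W, N)
        cells = attn @ values                                       # (B, H*W, C)
        # Reshape the aggregated cells into a 2D structured feature map.
        return cells.transpose(1, 2).reshape(B, C, self.map_size, self.map_size)


class CoarseDecoder(nn.Module):
    """Decode the structured feature map into a coarse, complete point cloud with 2D convolutions."""

    def __init__(self, feat_dim: int = 256, points_per_cell: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(feat_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, points_per_cell * 3, kernel_size=1),     # 3 coordinates per generated point
        )

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        B = feat_map.shape[0]
        out = self.conv(feat_map)                                   # (B, points_per_cell*3, H, W)
        return out.reshape(B, 3, -1).transpose(1, 2)                # (B, H*W*points_per_cell, 3)


if __name__ == "__main__":
    feats = torch.randn(2, 2048, 256)         # stand-in for encoder output on a partial cloud
    coarse = CoarseDecoder()(FSNetSketch()(feats))
    print(coarse.shape)                       # torch.Size([2, 1024, 3])

In the full method described in the abstract, this coarse output would then be combined with the partial input and progressively refined by the feedback-based upsampling stage (IFNet) to produce the final dense point cloud.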

Electronic Supplementary Material

41095_0276_ESM.pdf (7.2 MB)

Cite this article:
Su Z, Huang H, Ma C, et al. Point cloud completion via structured feature maps using a feedback network. Computational Visual Media, 2023, 9(1): 71-85. https://doi.org/10.1007/s41095-022-0276-6

Received: 05 January 2022
Accepted: 15 February 2022
Published: 18 October 2022
© The Author(s) 2022.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
