Research Article | Open Access

FACNet: Feature alignment fast point cloud completion network

School of Computer Science and Engineering, Faculty of Innovation Engineering, Macau University of Science and Technology, Taipa, Macau, China
Faculty of Science and Technology, University of Macau, Taipa, Macau, China


Abstract

Point cloud completion aims to infer complete point clouds from partial 3D point cloud inputs. Many previous methods adopt a coarse-to-fine strategy to generate complete point clouds. However, such methods are not only relatively time-consuming but also fail to extract representative complete-shape features from partial inputs. In this paper, a novel feature alignment fast point cloud completion network (FACNet) is proposed to directly and efficiently generate the detailed shapes of objects. FACNet aligns the high-dimensional feature distributions of partial and complete point clouds to preserve global information about the complete shape. During decoding, local features from the partial point cloud are incorporated together with the preserved global information to generate the complete point cloud both accurately and efficiently. Experimental results show that FACNet outperforms the state of the art on the PCN, Completion3D, and MVP datasets, and achieves competitive performance on the ShapeNet-55 and KITTI datasets. Moreover, FACNet and its simplified version, FACNet-slight, achieve a significant speedup of 3–10 times over other state-of-the-art methods.
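
The core idea stated above, aligning the global feature of a partial point cloud with that of its complete counterpart, can be illustrated with a minimal PyTorch sketch. The shared PointNet-style encoder, the feature dimension, and the use of an L2 alignment loss below are illustrative assumptions for exposition only, not FACNet's actual architecture or training objective.

```python
# Illustrative sketch only: a shared PointNet-style encoder produces a global
# feature for both the partial and the complete point cloud, and an L2
# alignment loss pulls the partial-cloud feature toward the complete-cloud
# feature. The encoder design, feature size, and loss choice are assumptions.
import torch
import torch.nn as nn


class GlobalEncoder(nn.Module):
    """Maps a (B, N, 3) point cloud to a (B, feat_dim) global feature."""

    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
            nn.Conv1d(256, feat_dim, 1),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        x = self.mlp(pts.transpose(1, 2))   # (B, feat_dim, N) per-point features
        return torch.max(x, dim=2).values   # max-pool over points -> global feature


def feature_alignment_loss(encoder: GlobalEncoder,
                           partial: torch.Tensor,
                           complete: torch.Tensor) -> torch.Tensor:
    """L2 distance between partial- and complete-cloud global features."""
    f_partial = encoder(partial)
    f_complete = encoder(complete).detach()  # complete-cloud feature as the target
    return torch.mean((f_partial - f_complete) ** 2)


if __name__ == "__main__":
    enc = GlobalEncoder()
    partial = torch.rand(2, 2048, 3)    # batch of partial clouds
    complete = torch.rand(2, 16384, 3)  # corresponding complete clouds
    print(feature_alignment_loss(enc, partial, complete))
```

Because the encoder pools over points, the partial and complete clouds may contain different numbers of points; in practice such an alignment term would be combined with a reconstruction loss (e.g., Chamfer distance) on the generated points.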

Computational Visual Media
Pages 141-157
Cite this article:
Yu X, Li J, Wong C-C, et al. FACNet: Feature alignment fast point cloud completion network. Computational Visual Media, 2025, 11(1): 141-157. https://doi.org/10.26599/CVM.2025.9450449