Research Article | Open Access

Deep panoramic depth prediction and completion for indoor scenes

Giovanni Pintore 1,*, Eva Almansa 1,*, Armando Sanchez 2, Giorgio Vassena 2,3, Enrico Gobbetti 1

1 Visual and Data-intensive Computing, CRS4, Cagliari 09134, Italy
2 Gexcel srl, Elmas (CA) 09097, Italy
3 Department of Civil, Environment, Architectural Engineering, and Mathematics (DICATAM), Università degli Studi di Brescia (UNIBS), Brescia (BS) 25123, Italy

* Giovanni Pintore and Eva Almansa contributed equally to this work.


Abstract

We introduce a novel end-to-end deep-learning solution for rapidly estimating a dense spherical depth map of an indoor environment. Our input is a single equirectangular image registered with a sparse depth map, as provided by a variety of common capture setups. Depth is inferred by an efficient and lightweight single-branch network, which employs a dynamic gating system to jointly process dense visual data and sparse geometric data. We exploit the characteristics of typical man-made environments to efficiently compress multi-resolution features and find short- and long-range relations among scene parts. Furthermore, we introduce a new augmentation strategy to make the model robust to different types of sparsity, including those generated by various structured-light sensors and LiDAR setups. The experimental results demonstrate that our method provides interactive performance and outperforms state-of-the-art solutions in computational efficiency, adaptivity to variable depth sparsity patterns, and prediction accuracy for challenging indoor data, even when trained solely on synthetic data without any fine-tuning.
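As a rough illustration of the gating and sparsity-augmentation ideas named in the abstract, the following minimal PyTorch sketch (our own illustration, not the authors' code; all class names, shapes, and sample counts are invented) combines a gated-convolution block, which softly weights dense RGB features against sparse depth samples, with a helper that simulates structured-light-like scattered samples or LiDAR-like scanlines on a dense ground-truth depth map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConv2d(nn.Module):
    """One gated-convolution block: a feature branch modulated by a
    learned sigmoid mask, letting the network weight dense RGB features
    and sparse depth samples differently at each pixel."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.elu(self.feature(x)) * torch.sigmoid(self.gate(x))

def sparsify(depth: torch.Tensor, pattern: str = "random",
             n_samples: int = 500, n_lines: int = 16) -> torch.Tensor:
    """Simulate a sparse sensor on a dense depth map of shape
    (B, 1, H, W): uniformly scattered samples (structured-light-like)
    or horizontal scanlines (rotating-LiDAR-like). Unsampled pixels
    are set to zero."""
    B, _, H, W = depth.shape
    mask = torch.zeros_like(depth)
    if pattern == "random":
        for b in range(B):
            idx = torch.randperm(H * W)[:n_samples]
            mask[b].view(-1)[idx] = 1.0
    else:  # "scanlines"
        rows = torch.linspace(0, H - 1, n_lines).long()
        mask[:, :, rows, :] = 1.0
    return depth * mask

# Usage: concatenate the equirectangular image with an augmented
# sparse depth channel and feed the 4-channel tensor to the block.
rgb = torch.rand(2, 3, 256, 512)              # dense visual input
gt_depth = torch.rand(2, 1, 256, 512) * 10.0  # dense ground truth (m)
sparse = sparsify(gt_depth, pattern="scanlines")
feats = GatedConv2d(4, 64)(torch.cat([rgb, sparse], dim=1))
print(feats.shape)  # torch.Size([2, 64, 256, 512])
```

In this reading, the sigmoid gate stands in for the dynamic gating system, suppressing unreliable activations where depth samples are absent, while randomizing the `pattern` argument during training mimics the kind of sparsity augmentation the abstract describes.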

Cite this article:
Pintore G, Almansa E, Sanchez A, et al. Deep panoramic depth prediction and completion for indoor scenes. Computational Visual Media, 2024, 10(5): 903-922. https://doi.org/10.1007/s41095-023-0358-0

Received: 07 March 2023
Accepted: 03 June 2023
Published: 08 February 2024
© The Author(s) 2024.

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
