Research Article | Open Access

Depth error correction for projector-camera based consumer depth cameras

College of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan.
Faculty of Science and Engineering, Kindai University, Osaka, 577-8502, Japan.
Graduate School of Information Sciences, Hiroshima City University, Hiroshima, 731-3194, Japan.
The Institute of Scientific and Industrial Research, Osaka University, Osaka, 567-0047, Japan.

Abstract

This paper proposes a depth measurement error model for consumer depth cameras such as the Microsoft Kinect, together with a corresponding calibration method. These devices were originally designed as video game interfaces, and their output depth maps usually lack sufficient accuracy for 3D measurement. Models have been proposed to reduce these depth errors, but they consider only camera-related causes. Since the depth sensors are based on projector-camera systems, projector-related causes should also be considered. Moreover, previous models require disparity observations, which such sensors usually do not output, so they cannot be employed in practice. We give an alternative error model for projector-camera based consumer depth cameras, derived from their depth measurement algorithm and the intrinsic parameters of the camera and the projector; it does not need disparity values. We also give a corresponding new parameter estimation method that requires only observations of a planar board. Our calibrated error model allows a consumer depth sensor to be used as a 3D measuring device. Experimental results show the validity and effectiveness of the error model and calibration procedure.
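To make the idea concrete, the sketch below illustrates one generic way a triangulation-based depth error can be modeled and calibrated from reference observations. It is not the authors' actual model: the linear correction of inverse depth, the fit_inverse_depth_correction and correct_depth helpers, and the Kinect-like focal length and baseline values are all illustrative assumptions. The motivation is that a projector-camera sensor triangulates z = f·b/d (focal length f in pixels, baseline b, disparity d), so an additive disparity error appears as an additive offset in inverse depth, which a linear fit in 1/z can absorb.

```python
import numpy as np

# Hedged sketch (not the paper's model): since z = f * b / d, an additive
# disparity error d_err perturbs inverse depth,
#   1/z_obs = 1/z_true + d_err / (f * b),
# so a linear correction of inverse depth, 1/z_corr = a * (1/z_obs) + c,
# can absorb both a scale and an offset error.

def fit_inverse_depth_correction(z_obs, z_ref):
    """Least-squares fit of 1/z_ref ~ a * (1/z_obs) + c.

    z_obs: observed sensor depths in meters, shape (N,)
    z_ref: reference depths, e.g., sampled from a planar board
           at known poses, shape (N,)
    """
    x = 1.0 / np.asarray(z_obs, dtype=float)
    y = 1.0 / np.asarray(z_ref, dtype=float)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, c), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, c

def correct_depth(z_obs, a, c):
    """Apply the fitted inverse-depth correction to raw depths."""
    return 1.0 / (a / np.asarray(z_obs, dtype=float) + c)

# Usage: synthetic data with a simulated constant disparity offset.
f_px, baseline = 580.0, 0.075          # assumed Kinect-like values
z_true = np.linspace(0.8, 4.0, 50)     # reference depths in meters
d_err = 0.25                           # disparity error in pixels
z_obs = 1.0 / (1.0 / z_true + d_err / (f_px * baseline))

a, c = fit_inverse_depth_correction(z_obs, z_true)
z_corr = correct_depth(z_obs, a, c)
print(np.max(np.abs(z_corr - z_true)))  # residual after correction
```

Note that this toy fit uses a single global pair (a, c); a practical model would typically estimate parameters per pixel or as a smooth function over the image, since projector-related distortions vary spatially.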

Computational Visual Media
Pages 103-111
Cite this article:
Yamazoe H, Habe H, Mitsugami I, et al. Depth error correction for projector-camera based consumer depth cameras. Computational Visual Media, 2018, 4(2): 103-111. https://doi.org/10.1007/s41095-017-0103-7


Revised: 31 August 2017
Accepted: 26 December 2017
Published: 14 March 2018
© The Author(s) 2017

This article is published with open access at Springerlink.com

The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
