Research Article | Open Access

Accurate disparity estimation in light field using ground control points

Hao Zhu¹, Qing Wang¹
¹School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China.

Abstract

The recent development of light field cameras has attracted growing interest, as their rich angular information can benefit many computer vision tasks. In this paper, we introduce a novel method for obtaining a dense disparity map using ground control points (GCPs) in the light field. Previous work optimizes the disparity map from local estimates that include both reliable and unreliable points. To reduce the negative effect of the unreliable points, we predict the disparity at non-GCPs from the GCPs. Because we combine color information with local disparity, our method performs more robustly in shadow areas than previous GCP-based methods. Experiments and comparisons on a public dataset demonstrate the effectiveness of the proposed method.
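The core idea of predicting disparity at non-GCP pixels from nearby GCPs, weighted by both color similarity and spatial proximity, can be illustrated with a minimal sketch. The function name, parameters (sigma_c, sigma_s, radius), and the bilateral-style weighting below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def propagate_gcp_disparity(image, gcp_mask, gcp_disp,
                            sigma_c=10.0, sigma_s=5.0, radius=7):
    """Fill in disparity at non-GCP pixels from nearby GCPs.

    Each GCP within `radius` contributes a weight that falls off with
    color difference (sigma_c) and spatial distance (sigma_s) --
    a bilateral-style interpolation sketch of the GCP idea.
    """
    h, w = gcp_mask.shape
    disp = gcp_disp.astype(float).copy()
    ys, xs = np.where(~gcp_mask)
    for y, x in zip(ys, xs):
        # Clip the search window to the image bounds.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        m = gcp_mask[y0:y1, x0:x1]
        if not m.any():
            continue  # no GCP nearby; keep the initial local estimate
        dc = image[y0:y1, x0:x1].astype(float) - image[y, x]  # color diff
        yy, xx = np.mgrid[y0:y1, x0:x1]
        ds2 = (yy - y) ** 2 + (xx - x) ** 2                   # squared distance
        wgt = np.exp(-(dc ** 2).sum(-1) / (2 * sigma_c ** 2)
                     - ds2 / (2 * sigma_s ** 2)) * m
        disp[y, x] = (wgt * gcp_disp[y0:y1, x0:x1]).sum() / wgt.sum()
    return disp
```

In the paper, the propagated values regularize a global optimization rather than replacing it; this sketch only shows how color and spatial cues can be combined so that, e.g., a shadowed pixel still receives disparity from GCPs of similar color.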

Computational Visual Media
Pages 173-181
Cite this article:
Zhu H, Wang Q. Accurate disparity estimation in light field using ground control points. Computational Visual Media, 2016, 2(2): 173-181. https://doi.org/10.1007/s41095-016-0052-6


Revised: 01 December 2015
Accepted: 01 April 2016
Published: 17 May 2016
© The Author(s) 2016

This article is published with open access at Springerlink.com

The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
