Research Article | Open Access

Removing fences from sweep motion videos using global 3D reconstruction and fence-aware light field rendering

Department of Science and Technology, Keio University, Japan.
Institute of Computer Graphics and Vision, Graz University of Technology, Austria.

Abstract

Diminishing the appearance of a fence in an image is a challenging research problem due to the characteristics of fences (thinness, lack of texture, etc.) and the need to restore the occluded background. In this paper, we describe a fence removal method for an image sequence captured by a user making a sweep motion, during which the occluded background is potentially observed. To exploit the geometric and appearance information available in consecutive images, we use two well-known approaches: structure from motion and light field rendering. Results on real image sequences show that our method can stably segment fences and preserve background details for various combinations of fence and background. A new, frame-coherent video without the fence can be successfully produced.
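To make the idea above concrete, the following is a minimal sketch, not the authors' implementation, of fence-aware light-field (synthetic-aperture) averaging: given camera poses recovered by an SfM tool such as COLMAP, per-frame binary fence masks, and a chosen background focal depth, each source frame of the sweep is warped onto the focal plane of a reference view via a plane-induced homography, and the warped frames are averaged while excluding fence-labelled pixels. All function and variable names here are illustrative assumptions.

```python
# Minimal sketch of fence-aware synthetic-aperture (light field) averaging.
# Assumptions (not from the paper's code): per-frame intrinsics K_s and poses
# (R, t) mapping points from the reference-view camera frame into each source
# camera frame, e.g. recovered with an SfM tool such as COLMAP, plus binary
# fence masks (1 = fence) from a separate segmentation step.
import numpy as np
import cv2


def plane_homography(K_src, R, t, K_tgt, depth):
    """Homography taking reference-view pixels to source-view pixels for the
    fronto-parallel focal plane Z = depth in the reference camera frame."""
    n = np.array([[0.0, 0.0, 1.0]])  # plane normal as a row vector
    return K_src @ (R + t.reshape(3, 1) @ n / depth) @ np.linalg.inv(K_tgt)


def render_fence_free(images, fence_masks, K_src_list, R_list, t_list,
                      K_tgt, depth, out_size):
    """Warp every source frame onto the focal plane of the reference view and
    average only the pixels that are not covered by a fence mask."""
    width, height = out_size
    accum = np.zeros((height, width, 3), np.float32)
    weight = np.zeros((height, width, 1), np.float32)
    flags = cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP  # H maps target -> source

    for img, mask, K_s, R, t in zip(images, fence_masks,
                                    K_src_list, R_list, t_list):
        H = plane_homography(K_s, R, t, K_tgt, depth)
        warped = cv2.warpPerspective(img.astype(np.float32), H,
                                     (width, height), flags=flags)
        warped_fence = cv2.warpPerspective(mask.astype(np.float32), H,
                                           (width, height), flags=flags)
        # Track which target pixels actually fall inside this source frame.
        coverage = cv2.warpPerspective(np.ones(img.shape[:2], np.float32), H,
                                       (width, height), flags=flags)
        w = (coverage * (1.0 - np.clip(warped_fence, 0.0, 1.0)))[..., None]
        accum += w * warped
        weight += w

    # Pixels seen fence-free in at least one frame get a valid average;
    # pixels occluded in every frame stay black (a full system would inpaint).
    return (accum / np.maximum(weight, 1e-6)).astype(np.uint8)
```

Because a pixel hidden by the fence in one frame is typically visible in other frames of the sweep, this weighted average suppresses the fence wherever at least one unoccluded observation exists; regions never observed through the fence would still require inpainting or a denser capture.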

Electronic Supplementary Material

Video: CVM_2019_1_21-32_ESM.mp4

Computational Visual Media Vol. 5, No. 1, Pages 21-32
Cite this article:
Lueangwattana C, Mori S, Saito H. Removing fences from sweep motion videos using global 3D reconstruction and fence-aware light field rendering. Computational Visual Media, 2019, 5(1): 21-32. https://doi.org/10.1007/s41095-018-0126-8

Revised: 21 August 2018
Accepted: 31 October 2018
Published: 08 April 2019
© The author(s) 2019

This article is published with open access at Springerlink.com

The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
