Research Article | Open Access

Real-time distance field acceleration based free-viewpoint video synthesis for large sports fields

School of Telecommunications Engineering, Xidian University, Xi'an 710071, China
National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, SAIIP, School of Computer Science, Northwestern Polytechnical University, Xi'an 710129, China
Abstract

Free-viewpoint video allows the user to view objects from any virtual perspective, creating an immersive visual experience. This technology enhances the interactivity and freedom of multimedia performances. However, many free-viewpoint video synthesis methods struggle to run in real time with high precision, particularly for sports fields with large areas and numerous moving objects. To address these issues, we propose a free-viewpoint video synthesis method based on distance field acceleration. The central idea is to fuse multi-view distance field information and use it to adjust the search step size adaptively. Adaptive step-size search is used in two ways: for fast estimation of multi-object three-dimensional surfaces, and for synthetic view rendering based on global occlusion judgement. We have implemented our ideas for interactive display using parallel computing with the CUDA and OpenGL frameworks, and have evaluated them on real-world and simulated datasets. The results show that the proposed method can render free-viewpoint videos with multiple objects on large sports fields at 25 fps. Furthermore, the visual quality of our synthetic novel viewpoint images exceeds that of state-of-the-art neural-rendering-based methods.
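The adaptive step-size search described in the abstract is closely related to classic sphere tracing: when marching a ray through a signed distance field, the sampled distance itself is a safe step size, so the ray takes large steps in empty space and fine steps near surfaces. A minimal single-ray sketch of that general idea follows (NumPy; the `sdf` callable, tolerances, and step limits are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-3, t_max=100.0):
    """March a ray through a distance field using adaptive steps.

    The distance sampled at the current point bounds how far the ray can
    advance without crossing a surface, so it doubles as the step size.
    Returns the hit distance t along the ray, or None if nothing is hit.
    """
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:      # within tolerance of a surface: report a hit
            return t
        t += d           # adaptive step: safe by the distance-field property
        if t > t_max:
            break
    return None

# Toy scene: a unit sphere at the origin, viewed from z = -3 looking along +z.
unit_sphere = lambda p: np.linalg.norm(p) - 1.0
t = sphere_trace(np.array([0.0, 0.0, -3.0]),
                 np.array([0.0, 0.0, 1.0]),
                 unit_sphere)
# The ray meets the sphere's near surface at z = -1, i.e. t = 2.
```

The paper extends this idea by fusing distance information from multiple camera views and applying the same adaptive search both to surface estimation and to occlusion-aware rendering on the GPU; the sketch above only illustrates the single-field, single-ray case.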

Electronic Supplementary Material

Video
41095_0323_ESM.mp4

Computational Visual Media
Pages 331-353
Cite this article:
Dai Y, Li J, Jiang Y, et al. Real-time distance field acceleration based free-viewpoint video synthesis for large sports fields. Computational Visual Media, 2024, 10(2): 331-353. https://doi.org/10.1007/s41095-022-0323-3


Received: 13 April 2022
Accepted: 26 October 2022
Published: 03 January 2024
© The Author(s) 2023.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
