Research Article | Open Access

Dynamic ocean inverse modeling based on differentiable rendering

State Key Lab of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Qingdao Research Institute, Beihang University, Qingdao 266100, China, and Peng Cheng Lab, Shenzhen 518000, China
SKLCS, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China, and University of Chinese Academy of Sciences, Beijing 100049, China
Department of Computer Science, Stony Brook University (SUNY at Stony Brook), Stony Brook, New York 11794-2424, USA
An erratum to this article is available online at:

Graphical Abstract

Abstract

Learning and inferring the underlying motion patterns of captured 2D scenes, and then re-creating dynamic evolution consistent with real-world natural phenomena, holds great appeal for graphics and animation. To bridge the technical gap between virtual and real environments, we focus on the inverse modeling and reconstruction of visually consistent and property-verifiable oceans, taking advantage of deep learning and differentiable physics to learn geometry and constituent waves in a self-supervised manner. First, we infer hierarchical geometry using two networks, which are optimized via a differentiable renderer. We then extract wave components from the sequence of inferred geometry through a network equipped with a differentiable ocean model, after which ocean dynamics can be evolved using the reconstructed wave components. Through extensive experiments, we verify that our new method yields satisfactory results for both geometry reconstruction and wave estimation. Moreover, the new framework has the potential to facilitate a host of graphics applications, such as the rapid production of physically accurate scene animation and editing guided by real ocean scenes.
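To illustrate the final stage of the pipeline described above — evolving ocean dynamics from reconstructed wave components — the following is a minimal sketch, not the authors' implementation. It assumes each wave component is summarized by an amplitude, wavelength, travel direction, and phase, and synthesizes a height field as a sum of sinusoids using the standard deep-water dispersion relation (omega = sqrt(g·k)); the function name `height_field` and the sample component values are hypothetical.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def height_field(components, xs, ys, t):
    """Evaluate a sum-of-sinusoids ocean height field at time t.

    components: iterable of (amplitude, wavelength, direction_rad, phase).
    Deep-water dispersion relates angular frequency to wavenumber:
    omega = sqrt(G * k), with k = 2*pi / wavelength.
    """
    X, Y = np.meshgrid(xs, ys)
    h = np.zeros_like(X)
    for amp, wavelength, theta, phase in components:
        k = 2.0 * np.pi / wavelength           # wavenumber
        omega = np.sqrt(G * k)                 # deep-water dispersion
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        h += amp * np.cos(kx * X + ky * Y - omega * t + phase)
    return h

# Example: two hypothetical components traveling at different angles.
comps = [(0.5, 20.0, 0.0, 0.0), (0.2, 8.0, np.pi / 4, 1.0)]
xs = ys = np.linspace(0.0, 50.0, 64)
h0 = height_field(comps, xs, ys, t=0.0)
h1 = height_field(comps, xs, ys, t=1.0)
print(h0.shape)  # (64, 64)
```

Because each component advances under its own dispersion-derived frequency, evaluating the same components at successive times yields a temporally coherent animation, which is the sense in which reconstructed components can drive forward evolution.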

Electronic Supplementary Material

Download File(s)
41095_0338_ESM.wmv (85 MB)

Computational Visual Media, Pages 279–294
Cite this article:
Xie X, Gao Y, Hou F, et al. Dynamic ocean inverse modeling based on differentiable rendering. Computational Visual Media, 2024, 10(2): 279-294. https://doi.org/10.1007/s41095-023-0338-4


Received: 06 January 2023
Accepted: 26 February 2023
Published: 03 January 2024
© The Author(s) 2023.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
