Research Article | Open Access

A GAN-based temporally stable shading model for fast animation of photorealistic hair

Department of General Systems Studies, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan

Abstract

We introduce an unsupervised GAN-based model for shading photorealistic hair animations. Our model is much faster than previous rendering algorithms and produces fewer artifacts than other neural image translation methods. The main idea is to extend the Cycle-GAN structure to avoid a semi-transparent hair appearance and to exactly reproduce the interaction of light with the scene. We use two constraints to ensure temporal coherence and highlight stability. Our approach outperforms previous methods and is computationally more efficient.
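
The core idea stated above, a Cycle-GAN objective extended with constraints for temporal coherence and highlight stability, can be illustrated with a minimal sketch. Note that the abstract does not give the exact formulation: the loss names, the warp operator, and the weights lambda_cyc and lambda_temp below are all assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of a Cycle-GAN objective extended with an
# assumed temporal-coherence term. All names and weights are hypothetical;
# the paper's exact losses are not given in this abstract.
import torch.nn.functional as F

def temporal_coherence_loss(shaded_t, shaded_prev, warp):
    """Penalize frame-to-frame flicker: compare the current shaded frame
    with the previous shaded frame warped into the current frame
    (warp is an assumed motion-alignment operator)."""
    return F.l1_loss(shaded_t, warp(shaded_prev))

def total_loss(adv_loss, cycle_loss, temp_loss,
               lambda_cyc=10.0, lambda_temp=1.0):
    # Standard Cycle-GAN adversarial + cycle-consistency terms, plus the
    # assumed temporal penalty; the weights are placeholders.
    return adv_loss + lambda_cyc * cycle_loss + lambda_temp * temp_loss

# Example usage with an identity warp as a placeholder:
# t_loss = temporal_coherence_loss(frame_t, frame_prev, lambda x: x)
```

The highlight-stability constraint mentioned in the abstract would enter the objective as an additional term; its form is not specified in this excerpt.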

Electronic Supplementary Material

Video
41095_2020_201_MOESM1_ESM.mp4

Computational Visual Media
Pages 127-138
Cite this article:
Qiao Z, Kanai T. A GAN-based temporally stable shading model for fast animation of photorealistic hair. Computational Visual Media, 2021, 7(1): 127-138. https://doi.org/10.1007/s41095-020-0201-9

Received: 20 September 2020
Accepted: 27 November 2020
Published: 18 January 2021
© The Author(s) 2020

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
