Research Article | Open Access

Learning physically based material and lighting decompositions for face editing

Brown University, Providence, Rhode Island 02906, USA

Abstract

Lighting is crucial to portrait photography, yet the complex interactions between skin and incident light are expensive to model computationally in graphics and difficult to reconstruct analytically via computer vision. To enable fast and controllable reflectance and lighting editing, we instead develop a physically based decomposition using deep learned priors from path-traced portrait images. Previous approaches that used simplified material models or low-frequency or low-dynamic-range lighting struggled to model specular reflections, or relit directly without an intermediate decomposition. In contrast, we estimate the surface normal, skin albedo and roughness, and high-frequency HDRI maps, and propose an architecture to estimate both diffuse and specular reflectance components. Our experiments show that this approach represents the true appearance function more effectively than simpler baseline methods, leading to better generalization and higher-quality editing.
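To make the decomposition concrete, the image formation the abstract describes can be sketched as a diffuse-plus-specular composition over the estimated quantities (per-pixel surface normal, albedo, and roughness). The Python fragment below is an illustrative simplification under stated assumptions, not the paper's learned architecture: it substitutes a single directional light for the high-frequency HDRI environment map, uses a Lambertian diffuse term with a GGX microfacet specular lobe, and folds the Fresnel and shadowing terms into a constant f0; all names are hypothetical.

    import numpy as np

    def ggx_specular(normal, view_dir, light_dir, roughness, f0=0.04):
        # Half vector between the (unit) view and light directions.
        half = view_dir + light_dir
        half = half / np.linalg.norm(half)
        n_dot_h = np.clip(np.sum(normal * half, axis=-1, keepdims=True), 0.0, 1.0)
        alpha2 = roughness ** 4  # Disney-style remapping: alpha = roughness^2
        denom = n_dot_h ** 2 * (alpha2 - 1.0) + 1.0
        ndf = alpha2 / (np.pi * denom ** 2)  # GGX normal distribution term
        return f0 * ndf  # Fresnel and shadowing terms folded into constant f0

    def shade(normal, albedo, roughness, light_dir, view_dir, light_rgb):
        # Clamped cosine foreshortening (n . l), shared by both components.
        n_dot_l = np.clip(np.sum(normal * light_dir, axis=-1, keepdims=True), 0.0, 1.0)
        diffuse = albedo * n_dot_l  # Lambertian diffuse component
        specular = ggx_specular(normal, view_dir, light_dir, roughness) * n_dot_l
        return (diffuse + specular) * light_rgb  # composed image I = D + S

Under this factorization, editing amounts to changing one factor at a time: replacing albedo recolors the skin, lowering roughness tightens the specular highlight, and swapping light_rgb (or, in the paper, the HDRI map) relights the portrait without re-estimating the decomposition.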

Computational Visual Media
Pages 295-308
Cite this article:
Zhang Q, Thamizharasan V, Tompkin J. Learning physically based material and lighting decompositions for face editing. Computational Visual Media, 2024, 10(2): 295-308. https://doi.org/10.1007/s41095-022-0309-1


Received: 07 March 2022
Accepted: 01 September 2022
Published: 03 January 2024
© The Author(s) 2023.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
