Realistic human skin rendering has been a long-standing challenge in computer graphics. Recently, biophysically based skin rendering has received increasing attention, as it produces more realistic skin appearance and offers a more intuitive way to adjust skin style. In this work, we present a novel heterogeneous biophysically based volume rendering method for human skin that improves the realism of skin appearance while easily simulating various types of skin effects, including skin diseases, by modifying biological coefficient textures. Specifically, we introduce a two-layer skin representation built by mesh deformation that explicitly models the epidermis and dermis as heterogeneous volumetric medium layers containing spatially varying melanin and hemoglobin, respectively. Furthermore, to better facilitate skin acquisition, we introduce a learning-based framework that automatically estimates spatially varying biological coefficients from an albedo texture, enabling biophysically based and intuitive editing, such as tanning, pathological vitiligo, and freckles. We illustrate the effects of multiple skin-editing applications and demonstrate superior quality to the commonly used random-walk skin-rendering method, with more convincing subsurface-scattering detail.
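The core idea of driving layer absorption from biological coefficient textures can be sketched as follows. This is a minimal illustration, not the paper's implementation: the melanin power law is the common approximation from Jacques' skin-optics summary, the hemoglobin values are placeholder constants, and all function and parameter names are hypothetical.

```python
import numpy as np

def melanin_absorption(wavelength_nm):
    """Approximate melanosome absorption coefficient [1/cm] as a function of
    wavelength [nm], following the widely used power-law fit (Jacques):
    mu_a ~ 6.6e11 * lambda^-3.33."""
    return 6.6e11 * wavelength_nm ** -3.33

# Placeholder whole-blood absorption values [1/cm] at three RGB-ish
# wavelengths -- illustrative numbers only, not measured data.
HEMOGLOBIN_ABS = {612: 1.0, 549: 30.0, 465: 60.0}

def epidermis_sigma_a(f_mel, wavelength_nm):
    """Spatially varying epidermis absorption, driven by a melanin
    volume-fraction texture f_mel (array of values in [0, 1])."""
    return f_mel * melanin_absorption(wavelength_nm)

def dermis_sigma_a(f_blood, wavelength_nm):
    """Spatially varying dermis absorption, driven by a hemoglobin
    (blood) volume-fraction texture f_blood."""
    return f_blood * HEMOGLOBIN_ABS[wavelength_nm]
```

Editing effects such as tanning or vitiligo then reduce to scaling or zeroing regions of the `f_mel` texture, which is what makes the coefficient-texture parameterization intuitive.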


Gradient-domain rendering estimates finite-difference gradients of image intensities and reconstructs the final result by solving a screened Poisson problem, which improves over merely sampling pixel intensities. Adaptive sampling is an orthogonal research area that focuses on distributing samples adaptively in the primal domain. However, adaptive sampling in the gradient domain with a low sampling budget has been less explored. Our idea is based on the observation that signals in the gradient domain are sparse, which provides more flexibility for adaptive sampling. We propose a deep-learning-based end-to-end sampling and reconstruction framework for gradient-domain rendering, enabling adaptive sampling of the gradient and primal maps simultaneously. We conducted extensive experiments for evaluation and showed that our method produces better reconstruction quality than other methods on the test dataset.
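The screened Poisson reconstruction step can be sketched as follows: given a primal estimate p and finite-difference gradient estimates gx, gy, minimize α‖x − p‖² + ‖Dx·x − gx‖² + ‖Dy·x − gy‖² by solving the normal equations directly. This is a toy dense solve for tiny images, assuming forward differences; real reconstructors use FFT or iterative solvers, and the names here are illustrative.

```python
import numpy as np

def diff_matrix(n):
    # Forward-difference operator, shape (n-1, n): (Dv)[i] = v[i+1] - v[i].
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    return D

def screened_poisson(primal, gx, gy, alpha=0.2):
    """Reconstruct an image from a primal estimate and gradient estimates by
    solving (alpha*I + Dx^T Dx + Dy^T Dy) x = alpha*p + Dx^T gx + Dy^T gy.
    primal: (H, W); gx: (H, W-1) horizontal diffs; gy: (H-1, W) vertical."""
    H, W = primal.shape
    Dx = np.kron(np.eye(H), diff_matrix(W))   # differences along rows
    Dy = np.kron(diff_matrix(H), np.eye(W))   # differences along columns
    A = alpha * np.eye(H * W) + Dx.T @ Dx + Dy.T @ Dy
    b = alpha * primal.ravel() + Dx.T @ gx.ravel() + Dy.T @ gy.ravel()
    return np.linalg.solve(A, b).reshape(H, W)
```

The screening weight α trades off fidelity to the (noisy) primal estimate against fidelity to the gradients; because gradient signals are sparse, an adaptive sampler can spend its budget where the gradients are actually nonzero.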