Research Article | Open Access

Hierarchical vectorization for facial images

Qian Fu1,2,*, Linlin Liu3,*, Fei Hou4,5, Ying He1 (✉)
1. School of Computer Science and Engineering, Nanyang Technological University, 639798, Singapore
2. Data61, Commonwealth Scientific and Industrial Research Organisation, Sydney 2015, Australia
3. Interdisciplinary Graduate School, Nanyang Technological University and Alibaba Group, 639798, Singapore
4. State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
5. University of Chinese Academy of Sciences, Beijing 100049, China

* Qian Fu and Linlin Liu contributed equally to this work.

Abstract

The explosive growth of social media means portrait editing and retouching are in high demand. While portraits are commonly captured and stored as raster images, editing raster images is non-trivial and requires the user to be highly skilled. Aiming at developing intuitive and easy-to-use portrait editing tools, we propose a novel vectorization method that automatically converts raster images into a three-tier hierarchical representation. The base layer consists of a set of sparse diffusion curves (DCs), which characterize salient geometric features and low-frequency colors and provide a means for semantic color transfer and facial expression editing. The middle layer encodes specular highlights and shadows as large, editable Poisson regions (PRs), allowing the user to adjust illumination directly by tuning the strength and changing the shapes of the PRs. The top layer contains two types of pixel-sized PRs for high-frequency residuals and fine details such as pimples and pigmentation. We train a deep generative model that can produce high-frequency residuals automatically. Thanks to the inherent meaning of the vector primitives, editing portraits becomes easy and intuitive. In particular, our method supports color transfer, facial expression editing, highlight and shadow editing, and automatic retouching. To quantitatively evaluate the results, we extend the commonly used FLIP metric (which measures color and feature differences between two images) to also consider illumination. The new metric, illumination-sensitive FLIP, effectively captures salient changes in color transfer results and is more consistent with human perception than FLIP and other quality measures for portrait images. We evaluate our method on the FFHQR dataset and show it to be effective for common portrait editing tasks such as retouching, light editing, color transfer, and expression editing.
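To make the three-tier representation described above concrete, the sketch below shows one plausible way to organize it as a data structure. It is purely illustrative: all class and field names (DiffusionCurve, PoissonRegion, ResidualPR, HierarchicalVectorPortrait, and their members) are assumptions rather than the authors' implementation, and rendering (solving the diffusion and Poisson equations) is omitted.

```python
# Illustrative sketch only: names and fields are assumptions, not the authors' code.
# It shows one way the paper's three-tier vector representation could be laid out.
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]
Color = Tuple[float, float, float]

@dataclass
class DiffusionCurve:            # base layer: sparse geometry + low-frequency color
    control_points: List[Point]  # control points along a salient facial feature
    left_colors: List[Color]     # colors diffused from the left side of the curve
    right_colors: List[Color]    # colors diffused from the right side of the curve

@dataclass
class PoissonRegion:             # middle layer: large, editable highlight/shadow regions
    boundary: List[Point]        # closed region boundary (its shape can be edited)
    strength: float              # user-tunable illumination strength

@dataclass
class ResidualPR:                # top layer: pixel-sized PRs for residuals/fine details
    position: Point
    laplacian: Color             # per-pixel high-frequency residual

@dataclass
class HierarchicalVectorPortrait:
    base: List[DiffusionCurve] = field(default_factory=list)
    middle: List[PoissonRegion] = field(default_factory=list)
    top: List[ResidualPR] = field(default_factory=list)
```

Under a layout like this, illumination edits would touch only the middle layer, while retouching would amount to removing or regenerating top-layer residual PRs, leaving the base-layer geometry and colors untouched.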

Computational Visual Media
Pages 97-118
Cite this article:
Fu Q, Liu L, Hou F, et al. Hierarchical vectorization for facial images. Computational Visual Media, 2024, 10(1): 97-118. https://doi.org/10.1007/s41095-022-0314-4


Received: 27 May 2022
Accepted: 21 September 2022
Published: 30 November 2023
© The Author(s) 2023.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
