Research Article | Open Access

DualFace: Two-stage drawing guidance for freehand portrait sketching

Japan Advanced Institute of Science and Technology, Ishikawa, 9231211, Japan
The University of Tokyo, Tokyo, 1138654, Japan


Abstract

Portrait painting requires special skills, such as imagining the geometric structure and facial details of the final portrait design. This makes it difficult for users, especially novices without prior artistic training, to draw freehand portraits with high-quality details. In this paper, we propose dualFace, a portrait drawing interface that assists users of different skill levels in completing recognizable and authentic face sketches. Inspired by traditional artist workflows for portrait drawing, dualFace provides two stages of drawing assistance, offering global and local visual guidance. The former helps users draw contour lines for portraits (i.e., the geometric structure), and the latter helps users draw details of facial parts that conform to the user-drawn contour lines. In the global guidance stage, the user draws several contour lines, and dualFace then searches for relevant images in an internal database and displays the suggested face contour lines on the background of the canvas. In the local guidance stage, we synthesize detailed portrait images from the user-drawn contour lines with a deep generative model, and then use the synthesized results as detailed drawing guidance. We conducted a user study to verify the effectiveness of dualFace, which confirms that dualFace significantly helps users produce a detailed portrait sketch.
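The global guidance stage described above amounts to a sketch-based retrieval step: the user's strokes are matched against a database of face contour images, and the best matches are overlaid as guidance. The paper does not specify the retrieval method here, so the following is only a minimal illustrative sketch: the descriptor (block-averaged downsampling of a binary stroke raster) and the cosine-similarity ranking, along with the function names `stroke_descriptor` and `retrieve_global_guidance`, are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def stroke_descriptor(canvas, size=16):
    """Pool a binary stroke raster into a coarse size x size descriptor.

    This is an illustrative stand-in for whatever feature the real
    system extracts from user strokes.
    """
    h, w = canvas.shape
    bh, bw = h // size, w // size
    # Block-average: each descriptor cell is the mean ink coverage of a patch.
    pooled = canvas[:bh * size, :bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
    return pooled.flatten()

def retrieve_global_guidance(user_canvas, database, k=3):
    """Stage 1 (global guidance): rank database contour images by
    cosine similarity to the user's strokes; return top-k indices."""
    q = stroke_descriptor(user_canvas)
    scores = []
    for idx, img in enumerate(database):
        d = stroke_descriptor(img)
        denom = np.linalg.norm(q) * np.linalg.norm(d) + 1e-8
        scores.append((float(q @ d / denom), idx))
    scores.sort(reverse=True)
    return [idx for _, idx in scores[:k]]
```

The retrieved contours would then be blended into the canvas background as a shadow-like guide, and the local guidance stage would pass the user's refined contour lines to a generative model for detail synthesis.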

Electronic Supplementary Material

Video
101320TP-2022-1-063_ESM.mp4

Computational Visual Media
Pages 63-77
Cite this article:
Huang Z, Peng Y, Hibino T, et al. DualFace: Two-stage drawing guidance for freehand portrait sketching. Computational Visual Media, 2022, 8(1): 63-77. https://doi.org/10.1007/s41095-021-0227-7


Received: 14 January 2021
Accepted: 15 March 2021
Published: 27 October 2021
© The Author(s) 2021.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
