Research Article | Open Access

Stroke-GAN Painter: Learning to paint artworks using stroke-style generative adversarial networks

School of Computer Science and Engineering, Macau University of Science and Technology, Macau, China
Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
School of Computing and Information Engineering, Hanshan Normal University, Chaozhou, China
Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
School of Design, The Hong Kong Polytechnic University, Hong Kong, China


Abstract

It is a challenging task to teach machines to paint like human artists in a stroke-by-stroke fashion. Despite advances in stroke-based and deep learning-based image rendering, existing painting methods have three limitations: they (i) lack the flexibility to choose strokes of different art styles, (ii) lose content details of the input images, and (iii) generate only a few artistic styles of paintings. In this paper, we propose a stroke-style generative adversarial network, called Stroke-GAN, to address the first two limitations. Stroke-GAN learns stroke styles from different stroke-style datasets and can therefore produce diverse stroke styles. We design three players in Stroke-GAN to generate pure-color strokes close to those of human artists, thereby improving the quality of painted details. To overcome the third limitation, we devise a neural network named Stroke-GAN Painter, built on Stroke-GAN, which can generate paintings in different artistic styles. Experiments demonstrate that our artful painter generates various styles of paintings while preserving content details (such as details of human faces and building textures) and retaining high fidelity to the input images.
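The abstract does not spell out the roles of Stroke-GAN's three players. As a purely illustrative sketch, one can imagine a stroke generator opposed by two discriminators, one judging stroke-shape realism and one judging color purity, with the generator's loss combining both adversarial signals. All function names below are hypothetical stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def bce(pred, target):
    # Binary cross-entropy on probabilities in (0, 1).
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# Hypothetical stand-ins for the three players; in a real model each
# would be a neural network forward pass.
def generator(z):
    # Maps noise to a fake "stroke image"; sigmoid keeps pixels in (0, 1).
    return 1 / (1 + np.exp(-z))

def shape_discriminator(x):
    # Probability that the stroke's shape looks like a real artist's stroke.
    return np.clip(x.mean(axis=1), 0.01, 0.99)

def color_discriminator(x):
    # Probability that the stroke is a pure (low-variance) color.
    return np.clip(1 - x.std(axis=1), 0.01, 0.99)

z = rng.normal(size=(8, 16))        # batch of 8 noise vectors
fake = generator(z)

# The generator tries to fool BOTH discriminators at once
# (target label 1 for its own fake strokes).
ones = np.ones(len(fake))
g_loss = bce(shape_discriminator(fake), ones) + bce(color_discriminator(fake), ones)
print(round(g_loss, 3))
```

The point of the sketch is only the loss composition: a single generator receives gradient signal from two adversaries, which is one plausible reading of a "three-player" GAN.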

Computational Visual Media
Pages 787–806
Cite this article:
Wang Q, Guo C, Dai H-N, et al. Stroke-GAN Painter: Learning to paint artworks using stroke-style generative adversarial networks. Computational Visual Media, 2023, 9(4): 787–806. https://doi.org/10.1007/s41095-022-0287-3


Received: 14 February 2022
Accepted: 12 April 2022
Published: 11 March 2023
© The Author(s) 2023.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
