Research Article | Open Access

CTSN: Predicting cloth deformation for skeleton-based characters with a two-stream skinning network

College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, China
Aurora Studios, Tencent, Shenzhen 518057, China


Abstract

We present a novel learning method that uses a two-stream network to predict cloth deformation for skeleton-based characters. The characters handled by our approach are not limited to humans; they can be any target with a skeleton-based representation, such as fish or pets. Our novel network architecture consists of skeleton-based and mesh-based residual networks, which learn coarse features and wrinkle features, respectively; together these form the overall residual from the template cloth mesh. The network can predict deformation for both loose and tight-fitting clothing. Its memory footprint is low, resulting in reduced computational requirements: in practice, predicting a single cloth mesh for a skeleton-based character takes about 7 ms on an NVIDIA GeForce RTX 3090 GPU. Compared to prior methods, our network generates finer deformation results with details and wrinkles.
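The two-stream idea described above — a skeleton-based stream producing a coarse deformation and a mesh-based stream producing wrinkle detail, summed as a residual on top of the template cloth mesh — can be sketched as follows. This is a minimal NumPy mock-up, not the paper's actual architecture: the layer sizes, the plain fully connected streams, and names such as `coarse_net` and `wrinkle_net` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Apply a small fully connected network with ReLU on hidden layers."""
    for i, (W, b) in enumerate(weights):
        x = x @ W + b
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)  # ReLU on all but the output layer
    return x

def make_weights(sizes):
    """Random (W, b) pairs for consecutive layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

n_joints, n_verts = 20, 500
pose = rng.standard_normal(n_joints * 3)        # flattened joint rotations
template = rng.standard_normal((n_verts, 3))    # rest-pose cloth mesh vertices

# Stream 1: skeleton-based stream mapping pose to a coarse per-vertex offset.
coarse_net = make_weights([n_joints * 3, 64, n_verts * 3])
# Stream 2: mesh-based stream adding fine wrinkle detail per vertex.
wrinkle_net = make_weights([n_joints * 3, 128, n_verts * 3])

coarse = mlp(pose, coarse_net).reshape(n_verts, 3)
wrinkle = mlp(pose, wrinkle_net).reshape(n_verts, 3)

# Overall residual from the template mesh = coarse features + wrinkle features.
deformed = template + coarse + wrinkle
print(deformed.shape)  # (500, 3)
```

In the sketch, both streams consume only the pose for simplicity; in a real mesh-based stream the wrinkle branch would also operate on the mesh geometry itself. The key structural point is that the two outputs are summed as a residual added to the template vertices rather than predicting absolute positions directly.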

Electronic Supplementary Material

Video
41095_0344_ESM1.mp4
41095_0344_ESM2.mp4

Computational Visual Media
Pages 471-485
Cite this article:
Li Y, Tang M, Yang Y, et al. CTSN: Predicting cloth deformation for skeleton-based characters with a two-stream skinning network. Computational Visual Media, 2024, 10(3): 471-485. https://doi.org/10.1007/s41095-023-0344-6


Received: 14 January 2023
Accepted: 20 March 2023
Published: 19 April 2024
© The Author(s) 2024.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
