Abstract
In this paper, we introduce GarTrans, a novel graph-learning-based method for garment animation that efficiently produces realistic deformation effects. GarTrans goes beyond existing models by offering improved generalization and the ability to capture fine-scale garment dynamics and details. Our approach first constructs a garment graph that comprehensively encodes the dynamic state of the garment, accounting for its shape and topology as well as the underlying body shape and its motion. We further design a structure-augmented transformer (SAT) that processes both the node and edge information of the graph, enabling the generation of contextually informed deformation details. Our model employs a unified optimization scheme that combines supervised and unsupervised loss functions, yielding a robust approach that realistically mimics the behavior of intricate garments. Experimental evaluations show that our method surpasses the existing state of the art in both functional capabilities and visual fidelity, advancing the field of garment animation.