Bodyformer: Semantics-guided 3D Body Gesture Synthesis With Transformer
Description: This paper presents a new variational transformer framework for synthesizing speech-driven 3D body gestures. The system uses a mode position embedding and intra-modal pre-training to learn motion patterns from limited conversational data. Extensive studies show that the system generates realistic, diverse gestures comparable to the ground truth.
Location: Petree Hall C
Research & Education
Artificial Intelligence/Machine Learning
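The "mode position embedding" mentioned in the description can be pictured as an extra learned vector per modality, added alongside the usual token-position embeddings so one transformer can process speech and motion tokens in a single sequence. The sketch below is an illustrative assumption (names, shapes, and the NumPy implementation are made up, not the paper's code):

```python
import numpy as np

# Illustrative sketch of a mode position embedding: each token receives
# (a) a standard position embedding and (b) a per-modality "mode"
# embedding, so speech features and motion frames can share one
# transformer input sequence. All names and dimensions are assumptions.

rng = np.random.default_rng(0)
D = 16                                    # embedding dimension (assumed)
MODES = {"speech": 0, "motion": 1}        # modality ids (assumed)

pos_table = rng.normal(size=(128, D))         # learned position embeddings
mode_table = rng.normal(size=(len(MODES), D)) # one learned vector per mode

def embed(tokens: np.ndarray, mode: str) -> np.ndarray:
    """Add position and mode embeddings to a (T, D) token sequence."""
    T = tokens.shape[0]
    return tokens + pos_table[:T] + mode_table[MODES[mode]]

speech = embed(rng.normal(size=(10, D)), "speech")
motion = embed(rng.normal(size=(24, D)), "motion")
# Concatenate both modalities into one joint sequence for the transformer.
joint = np.concatenate([speech, motion], axis=0)  # shape (34, 16)
```

The mode embedding plays a role analogous to segment embeddings in BERT-style models: it lets attention layers distinguish which modality a token came from without separate encoders.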