We introduce MeshRet, a method for geometric interaction-aware motion retargeting across varied mesh topologies in a single pass. We present the SCS and the novel DMI field to guide the training of MeshRet, effectively capturing both contact and non-contact interaction semantics.
Given a source hand motion and a target hand model, our method can retarget realistic hand motions with high fidelity to the target while preserving intricate motion semantics.
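As a minimal sketch of the idea behind a dense interaction descriptor between mesh "sensor" points (not the authors' implementation; the function and variable names here are illustrative assumptions), the snippet below records, for every pair of sensors on different body parts, the offset vector and distance, so that both contact (small distance) and non-contact spatial relations are represented.

```python
# Illustrative sketch only: pairwise interaction features between sensor
# points across body parts; `sensors` and `part_ids` are assumed inputs.
import numpy as np

def dense_interaction_field(sensors: np.ndarray, part_ids: np.ndarray):
    """sensors: (N, 3) sensor positions; part_ids: (N,) body-part labels.
    Returns offsets (N, N, 3) and distances (N, N), masked to cross-part pairs."""
    offsets = sensors[None, :, :] - sensors[:, None, :]   # pairwise offset vectors
    dists = np.linalg.norm(offsets, axis=-1)              # pairwise Euclidean distances
    cross_part = part_ids[:, None] != part_ids[None, :]   # ignore same-part pairs
    return offsets * cross_part[..., None], dists * cross_part

# Example: eight random sensors split across two body parts
rng = np.random.default_rng(0)
sensors = rng.normal(size=(8, 3))
part_ids = np.array([0, 0, 0, 0, 1, 1, 1, 1])
offsets, dists = dense_interaction_field(sensors, part_ids)
print(dists.shape)  # (8, 8)
```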
We propose to synthesize co-speech gestures using a discrete motion representation (DMR). By learning a DMR space for gesture motions and modeling the distribution over that space, our approach generates more salient, higher-quality motions.
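The sketch below illustrates one common way such a discrete motion representation can be obtained, via a VQ-VAE-style codebook that snaps continuous motion features to their nearest code; this is an assumed design for illustration, and the paper's exact DMR architecture may differ. Gesture synthesis then amounts to modeling the distribution over code indices.

```python
# Sketch of vector-quantized motion codes (assumed VQ-VAE-style design).
import torch
import torch.nn as nn

class MotionQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 128):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, z: torch.Tensor):
        """z: (batch, frames, code_dim) continuous motion features."""
        # Distance from each frame feature to every codebook entry.
        d = torch.cdist(z, self.codebook.weight[None].expand(z.size(0), -1, -1))
        indices = d.argmin(dim=-1)            # (batch, frames) discrete motion codes
        z_q = self.codebook(indices)          # quantized motion representation
        z_q = z + (z_q - z).detach()          # straight-through gradient estimator
        return z_q, indices

quantizer = MotionQuantizer()
z = torch.randn(2, 30, 128)                   # 2 clips, 30 frames each
z_q, codes = quantizer(z)
print(z_q.shape, codes.shape)                 # (2, 30, 128), (2, 30)
```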
We propose to formalize human choreography knowledge by defining the CAU and introducing it into music-to-dance synthesis. We propose a two-stage framework, ChoreoNet, to implement the music-CAU-skeleton mapping. Experiments demonstrate the effectiveness of our method.
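The staged structure of the music-CAU-skeleton mapping can be sketched as below; the module names, signatures, and shapes are illustrative placeholders and not ChoreoNet's actual components. Stage 1 maps music features to a CAU sequence, and stage 2 converts each CAU into skeleton motion.

```python
# Illustrative two-stage pipeline sketch; placeholder stages, not ChoreoNet's API.
from typing import List

import numpy as np

def music_to_cau(music_features: np.ndarray) -> List[str]:
    """Stage 1 (placeholder): predict a sequence of CAU labels from music features."""
    num_beats = music_features.shape[0]
    return ["cau_placeholder"] * num_beats

def cau_to_skeleton(cau_sequence: List[str], frames_per_cau: int = 32) -> np.ndarray:
    """Stage 2 (placeholder): expand CAU labels into skeleton joint positions."""
    num_joints = 24
    return np.zeros((len(cau_sequence) * frames_per_cau, num_joints, 3))

music_features = np.random.randn(16, 64)      # 16 beats of (hypothetical) audio features
motion = cau_to_skeleton(music_to_cau(music_features))
print(motion.shape)                            # (512, 24, 3)
```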