Playmate: Flexible Control of Portrait Animation via 3D-Implicit Space Guided Diffusion

[Paper] [Code Coming Soon]


Anonymous Team

Anonymous Inc.

Abstract: Recent diffusion-based talking face generation models have demonstrated impressive potential in synthesizing videos that accurately match a speech audio clip with a given reference identity. However, existing approaches still encounter significant challenges due to uncontrollable factors, such as inaccurate lip-sync, inappropriate head posture, and a lack of fine-grained control over facial expressions. To introduce face-guided conditions beyond the speech audio clip, we propose Playmate, a novel two-stage training framework that generates more lifelike facial expressions and talking faces. In the first stage, we introduce a decoupled implicit 3D representation together with a meticulously designed motion-decoupled module to achieve more accurate attribute disentanglement and to generate expressive talking videos directly from audio cues. In the second stage, we introduce an emotion-control module that encodes emotion control information into the latent space, enabling fine-grained control over emotions and thereby making it possible to generate talking videos with the desired emotion. Extensive experiments demonstrate that Playmate outperforms existing state-of-the-art methods in video quality and lip-synchronization, and improves flexibility in controlling emotion and head pose.

Playmate can generate lifelike talking faces for arbitrary identities, guided by a speech audio clip and a variety of optional control conditions. (a) shows generation results for the same audio clip under different emotional conditions. In (b), the top row shows the driving images and the bottom row shows the generated results: head poses in the generated results are controlled by the driving images, while lip movements are guided by the driving audio. (c) demonstrates highly accurate lip synchronization and vivid, rich expressions across different style images.

Overview Of Playmate

Playmate is a two-stage training framework that leverages a 3D-implicit-space-guided diffusion model to generate lifelike talking faces. In the first stage, Playmate uses a motion-decoupled module to improve the accuracy of attribute disentanglement and trains a diffusion transformer to generate motion sequences directly from audio cues. In the second stage, an emotion-control module encodes emotion control information into the latent space, enabling fine-grained control over emotions and improving flexibility in controlling emotion and head pose.
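The two-stage flow described above can be sketched schematically. The following is a minimal, illustrative toy in NumPy, not the authors' implementation: all class/function names, dimensions, and the crude "denoising" update are assumptions standing in for the real diffusion transformer and emotion-control module.

```python
import numpy as np

MOTION_DIM = 64   # assumed size of the decoupled implicit 3D motion latent
EMO_DIM = 16      # assumed size of the emotion embedding

class PlaymateSketch:
    """Hypothetical two-stage pipeline: stage 1 maps audio features to a
    motion-latent sequence; stage 2 injects an emotion embedding into it."""

    def __init__(self, audio_dim: int, num_emotions: int = 7, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Random projections stand in for learned network weights.
        self.audio_proj = rng.standard_normal((audio_dim, MOTION_DIM)) * 0.1
        self.emotion_table = rng.standard_normal((num_emotions, EMO_DIM)) * 0.1
        self.emo_proj = rng.standard_normal((EMO_DIM, MOTION_DIM)) * 0.1

    def stage1_audio_to_motion(self, audio_feats: np.ndarray,
                               steps: int = 10) -> np.ndarray:
        # Stand-in for the diffusion transformer: start from noise and
        # repeatedly blend toward an audio-conditioned target.
        rng = np.random.default_rng(1)
        x = rng.standard_normal((audio_feats.shape[0], MOTION_DIM))
        target = audio_feats @ self.audio_proj
        for _ in range(steps):
            x = 0.5 * x + 0.5 * target   # crude "denoising" update
        return x

    def stage2_apply_emotion(self, motion: np.ndarray,
                             emotion_id: int) -> np.ndarray:
        # Encode the emotion label and shift the latent motion sequence.
        emo = self.emotion_table[emotion_id]
        return motion + emo @ self.emo_proj

    def generate(self, audio_feats: np.ndarray,
                 emotion_id: int) -> np.ndarray:
        motion = self.stage1_audio_to_motion(audio_feats)
        return self.stage2_apply_emotion(motion, emotion_id)
```

In the real system, stage 1's update is a learned denoising step and stage 2's shift is a learned conditioning mechanism; the sketch only mirrors the data flow, including how the same motion sequence can be re-rendered under different emotion labels.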

Audio-Driven Examples

Audio Driven (Talking)
Audio Driven (Singing)

Emotion Controllability

Angry Disgusted Contempt Fear Happy Sad Surprised

Method Comparison (Audio Driven)

Playmate Hallo Hallo2 JoyVASA MEMO Sonic

Acknowledgements: The webpage template is borrowed from Takin-ADA. We thank the authors for their codebase.