EDTalk: Efficient Disentanglement for Emotional Talking Head Synthesis

Shuai Tan1, Bin Ji1, Mengxiao Bi2, Ye Pan1
1Shanghai Jiao Tong University, 2Fuxi AI Lab, NetEase Inc.


Given an identity source, EDTalk synthesizes talking face videos whose mouth shapes, head poses, and expressions are consistent with the mouth, pose, and expression sources. These facial dynamics can also be inferred directly from driving audio. Importantly, EDTalk demonstrates superior efficiency in disentanglement training compared to other methods.

Abstract

Achieving disentangled control over multiple facial motions and accommodating diverse input modalities greatly enhance the applicability and entertainment value of talking head generation. This necessitates a deep exploration of the decoupling space for facial features, ensuring that they a) operate independently without mutual interference and b) can be preserved and shared across different input modalities, two aspects often neglected in existing methods. To address this gap, this paper proposes a novel Efficient Disentanglement framework for Talking head generation (EDTalk). Our framework enables individual manipulation of mouth shape, head pose, and emotional expression, conditioned on both video and audio inputs. Specifically, we employ three lightweight modules to decompose the facial dynamics into three distinct latent spaces representing the mouth, pose, and expression, respectively. Each space is characterized by a set of learnable bases whose linear combinations define specific motions. To ensure independence and accelerate training, we enforce orthogonality among the bases and devise an efficient training strategy that allocates motion responsibilities to each space without relying on external knowledge. The learned bases are then stored in corresponding banks, enabling a shared visual prior with audio inputs. Furthermore, considering the properties of each space, we propose an Audio-to-Motion module for audio-driven talking head synthesis. Extensive experiments demonstrate the effectiveness of EDTalk.
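To make the latent-bank idea concrete, the sketch below shows one such component space in PyTorch: a bank of learnable bases whose linear combination yields a motion code, plus an orthogonality penalty that encourages the bases to stay independent. The names (BaseBank, num_bases, latent_dim) and the exact form of the penalty are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of one component-aware latent space: learnable bases combined
# linearly into a motion code, with an orthogonality regularizer on the bases.
# All names and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BaseBank(nn.Module):
    """A bank of learnable bases for one facial component (mouth, pose, or expression)."""

    def __init__(self, num_bases: int = 20, latent_dim: int = 512):
        super().__init__()
        # Each row is one base direction in the shared latent space.
        self.bases = nn.Parameter(torch.randn(num_bases, latent_dim) * 0.01)

    def forward(self, coeffs: torch.Tensor) -> torch.Tensor:
        # coeffs: (batch, num_bases), e.g. predicted from a driving image or audio.
        # Normalize bases so only their directions matter, then combine linearly.
        bases = F.normalize(self.bases, dim=-1)
        return coeffs @ bases  # (batch, latent_dim) motion code

    def orthogonality_loss(self) -> torch.Tensor:
        # Penalize off-diagonal entries of the Gram matrix: ||B B^T - I||^2.
        bases = F.normalize(self.bases, dim=-1)
        gram = bases @ bases.t()
        eye = torch.eye(gram.size(0), device=gram.device)
        return ((gram - eye) ** 2).sum()


if __name__ == "__main__":
    bank = BaseBank(num_bases=20, latent_dim=512)
    coeffs = torch.randn(4, 20)      # output of a small coefficient head (hypothetical)
    motion = bank(coeffs)            # (4, 512) latent displacement for this component
    print(motion.shape, bank.orthogonality_loss().item())
```

A usage note: because each component keeps its own bank, a motion can be stored and later reused as a set of coefficients over that bank, which is what allows the visual priors to be shared with audio-driven inputs.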



Proposed Method



Illustration of our proposed EDTalk. (a) EDTalk framework. Given an identity source \( I^i \) and various driving images \( I^* \) (\( * \in \{m,p,e\} \)) for controlling the corresponding facial components, EDTalk animates the identity image \( I^i \) to mimic the mouth shape, head pose, and expression of \( I^m \), \( I^p \), and \( I^e \) with the assistance of three Component-aware Latent Navigation modules: MLN, PLN, and ELN. (b) Efficient Disentanglement. The disentanglement process consists of two parts: Mouth-Pose decoupling and Expression decoupling. For the former, we introduce a cross-reconstruction training strategy to separate mouth shape from head pose. For the latter, we achieve expression disentanglement via self-reconstruction complementary learning.
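The cross-reconstruction idea in (b) can be summarized by the toy sketch below: mouth and pose codes extracted from two frames are swapped before decoding, and the swapped reconstructions are supervised by the corresponding targets, which pushes each latent space to carry only its own factor. The Encoder, Navigator, and Decoder modules here are simplified placeholders, not EDTalk's actual networks or losses.

```python
# Self-contained toy sketch of cross-reconstruction for Mouth-Pose decoupling.
# All modules, shapes, and losses are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT = 128


class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, LATENT))

    def forward(self, x):
        return self.net(x)


class Navigator(nn.Module):
    """Maps a shared feature to one component-specific latent code (e.g. mouth or pose)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT, LATENT)

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(LATENT, 3 * 64 * 64)

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)


def cross_reconstruction_loss(enc, mouth_nav, pose_nav, dec, frame_a, frame_b,
                              target_a_mouth_b_pose, target_b_mouth_a_pose):
    """Swap mouth/pose codes of two frames; reconstructions must match the swapped targets."""
    m_a, p_a = mouth_nav(enc(frame_a)), pose_nav(enc(frame_a))
    m_b, p_b = mouth_nav(enc(frame_b)), pose_nav(enc(frame_b))
    # Mouth of A + pose of B should match the "A-mouth, B-pose" target, and vice versa.
    rec_ab = dec(m_a + p_b)
    rec_ba = dec(m_b + p_a)
    return F.l1_loss(rec_ab, target_a_mouth_b_pose) + F.l1_loss(rec_ba, target_b_mouth_a_pose)


if __name__ == "__main__":
    enc, m_nav, p_nav, dec = Encoder(), Navigator(), Navigator(), Decoder()
    a, b = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    # With random data the true swapped targets are unknown; reuse a/b just to show the call.
    loss = cross_reconstruction_loss(enc, m_nav, p_nav, dec, a, b, a, b)
    loss.backward()
    print(float(loss))
```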