Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality

Driving photo-realistic 3D avatars from VR headset cameras

Released in: Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality

Summary

Social presence, the feeling of being there with a real person, will fuel the next generation of communication systems driven by digital humans in virtual reality (VR). The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models. However, these PS models are time-consuming to build and are typically trained with limited data variability, which results in poor generalization and robustness. Major sources of variability that affect the accuracy of facial expression transfer algorithms include different VR headsets (e.g., camera configuration, slop of the headset), facial appearance changes over time (e.g., beard, make-up), and environmental factors (e.g., lighting, backgrounds). This is a major drawback for the scalability of these models in VR.

This paper makes progress toward overcoming these limitations by proposing an end-to-end multi-identity architecture (MIA) trained with specialized augmentation strategies. MIA drives the shape component of the avatar from three cameras in the VR headset (two eyes, one mouth) for subjects not seen during training, using minimal personalized information (i.e., the neutral 3D mesh shape). Similarly, if the PS texture decoder is available, MIA can drive the full avatar (shape + texture) robustly, outperforming PS models in challenging scenarios. The key contribution to improved robustness and generalization is that the proposed method implicitly decouples, in an unsupervised manner, the facial expression from nuisance factors (e.g., headset, environment, facial appearance). The authors demonstrate the superior performance and robustness of the proposed method versus state-of-the-art PS approaches in a variety of experiments.
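To make the summary's data flow concrete, below is a minimal PyTorch sketch of an MIA-style forward pass: three egocentric camera crops (left eye, right eye, mouth) plus the subject's neutral 3D mesh go in, and a driven mesh comes out. Every name and dimension here (MIASketch, view_encoder, expr_dim, the 7306-vertex mesh) is an illustrative assumption, not the authors' implementation, and the paper's specialized augmentation strategies and training losses are omitted.

import torch
import torch.nn as nn


class MIASketch(nn.Module):
    """Illustrative sketch only: maps three headset camera crops plus a
    neutral 3D mesh to a driven mesh, keeping a compact expression code
    separate from the identity conditioning."""

    def __init__(self, expr_dim=128, n_verts=7306):  # sizes are assumptions
        super().__init__()
        # One CNN encoder shared across the three camera views.
        self.view_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse the three view features into one expression code. Per the
        # summary, decoupling expression from nuisance factors is learned
        # implicitly via multi-identity training and augmentation.
        self.expr_head = nn.Linear(3 * 64, expr_dim)
        # Minimal personalized information: encode the neutral mesh.
        self.id_encoder = nn.Linear(n_verts * 3, expr_dim)
        # Decode (expression, identity) features to per-vertex offsets.
        self.shape_decoder = nn.Linear(2 * expr_dim, n_verts * 3)

    def forward(self, left_eye, right_eye, mouth, neutral_mesh):
        feats = [self.view_encoder(v) for v in (left_eye, right_eye, mouth)]
        expr = self.expr_head(torch.cat(feats, dim=-1))
        ident = self.id_encoder(neutral_mesh.flatten(1))
        offsets = self.shape_decoder(torch.cat([expr, ident], dim=-1))
        return neutral_mesh + offsets.view_as(neutral_mesh)


# Usage with dummy data (grayscale IR-style crops, batch of 2).
model = MIASketch()
left = torch.randn(2, 1, 64, 64)
right = torch.randn(2, 1, 64, 64)
mouth = torch.randn(2, 1, 64, 64)
neutral = torch.randn(2, 7306, 3)
driven = model(left, right, mouth, neutral)
print(driven.shape)  # torch.Size([2, 7306, 3])

Sharing one encoder across the three views is a simplification made here for brevity; whether the paper uses shared or per-view encoders, and how a PS texture decoder would attach for full shape + texture driving, is not reproduced in this sketch.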

Year Released

2022

Key Links & Stats

Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality

@InProceedings{Jourabloo_2022_CVPR,
  author    = {Jourabloo, Amin and De la Torre, Fernando and Saragih, Jason and Wei, Shih-En and Lombardi, Stephen and Wang, Te-Li and Belko, Danielle and Trimble, Autumn and Badino, Hernan},
  title     = {Robust Egocentric Photo-Realistic Facial Expression Transfer for Virtual Reality},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {20323-20332}
}

ML Tasks

  1. Face Animation
  2. Facial Modeling

ML Platform

  1. Not Applicable

Modalities

  1. 3D Asset

Verticals

  1. Facial
  2. Digital Human

CG Platform

  1. Not Applicable

Related organizations

Facebook Reality Labs

Imperial College London

Carnegie Mellon University