Registering Explicit to Implicit for Garment Mesh Reconstruction

3D reconstruction of garments from single in-the-wild photos

Released in: Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction from Single Images

Summary

Fueled by deep learning and implicit shape learning, recent advances in single-image human digitization have reached unprecedented accuracy and can recover fine-grained surface details such as garment wrinkles. However, a common problem with implicit-based methods is that they cannot produce a separate, topology-consistent mesh for each garment piece, which is crucial for current 3D content creation pipelines. To address this issue, the authors propose ReEF, a novel geometry inference framework that reconstructs topology-consistent, layered garment meshes by registering explicit garment templates to the whole-body implicit fields predicted from a single image. Experiments demonstrate that the method notably outperforms its counterparts on single-image layered garment reconstruction and can provide high-quality digital assets for downstream content creation.
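
To illustrate the general idea of registering an explicit template mesh to a predicted implicit field (this is a minimal sketch, not the authors' pipeline; the toy predicted_sdf, the random template vertices/edges, and the loss weights below are illustrative assumptions), one can optimize the template's vertex positions so they lie on the zero level set of a signed distance field while an edge-length term preserves the template's connectivity and shape prior:

    import torch

    # Stand-in for a network-predicted whole-body implicit field (here: SDF of a unit sphere).
    def predicted_sdf(points):            # points: (N, 3)
        return points.norm(dim=-1) - 1.0  # signed distance to a unit sphere

    # Toy "garment template": in practice the vertices and edges come from a fixed
    # garment mesh; here random placeholders keep the sketch self-contained.
    verts = torch.randn(500, 3) * 1.5
    verts.requires_grad_(True)
    edges = torch.randint(0, 500, (2000, 2))   # placeholder connectivity

    verts0 = verts.detach().clone()            # rest shape, used as a simple rigidity prior
    opt = torch.optim.Adam([verts], lr=1e-2)

    for step in range(500):
        opt.zero_grad()
        # Data term: pull every template vertex onto the zero level set of the implicit field.
        sdf_loss = predicted_sdf(verts).pow(2).mean()
        # Regularizer: keep edge lengths close to the template's, preserving its topology/shape.
        d  = (verts[edges[:, 0]] - verts[edges[:, 1]]).norm(dim=-1)
        d0 = (verts0[edges[:, 0]] - verts0[edges[:, 1]]).norm(dim=-1)
        reg_loss = (d - d0).pow(2).mean()
        loss = sdf_loss + 0.1 * reg_loss
        loss.backward()
        opt.step()

    # `verts`, together with the original template connectivity, now forms a
    # topology-consistent mesh registered to the implicit surface.

Because the output reuses the template's connectivity, every reconstruction shares the same topology, which is what makes the resulting garment meshes directly usable in standard content creation pipelines.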

Year Released

2022

Key Links & Stats

Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction from Single Images

@InProceedings{Zhu_2022_CVPR,
  author    = {Zhu, Heming and Qiu, Lingteng and Qiu, Yuda and Han, Xiaoguang},
  title     = {Registering Explicit to Implicit: Towards High-Fidelity Garment Mesh Reconstruction From Single Images},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {3845-3854}
}

ML Tasks

  1. General

ML Platform

  1. Not Applicable

Modalities

  1. 3D Asset

Verticals

  1. Digital Human
  2. Synthetic Media & Art

CG Platform

  1. Not Applicable

Related organizations

CUHK-Shenzhen

Shenzhen Research Institute of Big Data