clothed
Papers with tag clothed
2022
- ARAH: Animatable Volume Rendering of Articulated Human SDFs. Shaofei Wang, Katja Schwarz, Andreas Geiger, and Siyu Tang. In 2022
Combining human body models with differentiable rendering has recently enabled animatable avatars of clothed humans from sparse sets of multi-view RGB videos. While state-of-the-art approaches achieve realistic appearance with neural radiance fields (NeRF), the inferred geometry often lacks detail due to missing geometric constraints. Further, animating avatars in out-of-distribution poses is not yet possible because the mapping from observation space to canonical space does not generalize faithfully to unseen poses. In this work, we address these shortcomings and propose a model to create animatable clothed human avatars with detailed geometry that generalize well to out-of-distribution poses. To achieve detailed geometry, we combine an articulated implicit surface representation with volume rendering. For generalization, we propose a novel joint root-finding algorithm for simultaneous ray-surface intersection search and correspondence search. Our algorithm enables efficient point sampling and accurate point canonicalization while generalizing well to unseen poses. We demonstrate that our proposed pipeline can generate clothed avatars with high-quality pose-dependent geometry and appearance from a sparse set of multi-view RGB videos. Our method achieves state-of-the-art performance on geometry and appearance reconstruction while creating animatable avatars that generalize well to out-of-distribution poses beyond the small number of training poses.
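The joint root-finding step is the technical core of ARAH: for each camera ray it simultaneously solves for the ray depth and the corresponding canonical surface point, so ray-surface intersection and observation-to-canonical correspondence come out of a single Newton-style iteration. Below is a minimal sketch of that idea, not the authors' implementation; `skinning`, `sdf`, `skinning_jac`, and `sdf_grad` are assumed placeholders for the forward-skinning map, the canonical SDF, and their derivatives.

```python
# Hedged sketch of a joint root-finding iteration (illustrative, not ARAH's code).
# Solve simultaneously for ray depth t and canonical point x_c such that
#   (1) the skinned canonical point lands on the ray:  W(x_c) - (o + t*d) = 0
#   (2) the canonical point lies on the surface:       f_sdf(x_c) = 0
import numpy as np

def joint_root_find(o, d, t0, xc0, skinning, sdf, skinning_jac, sdf_grad,
                    iters=20, tol=1e-5):
    """Newton iteration on the 4-D residual [W(x_c) - o - t*d, f_sdf(x_c)]."""
    t, xc = float(t0), np.asarray(xc0, dtype=float)
    for _ in range(iters):
        # stack the 3-D ray residual and the scalar SDF residual
        r = np.concatenate([skinning(xc) - (o + t * d), [sdf(xc)]])
        if np.linalg.norm(r) < tol:
            break
        # Jacobian of the residual w.r.t. (x_c, t): shape (4, 4)
        J = np.zeros((4, 4))
        J[:3, :3] = skinning_jac(xc)   # dW/dx_c
        J[:3, 3] = -d                  # d(o + t*d)/dt enters with a minus sign
        J[3, :3] = sdf_grad(xc)        # df_sdf/dx_c
        delta = np.linalg.solve(J, -r)
        xc, t = xc + delta[:3], t + delta[3]
    return t, xc
```

The appeal of solving both unknowns jointly is that canonicalization of ray samples does not need a separate, possibly inconsistent correspondence search for every candidate depth.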
- Capturing and Animation of Body and Clothing from Monocular Video. Yao Feng, Jinlong Yang, Marc Pollefeys, Michael J. Black, and Timo Bolkart. In 2022
While recent work has shown progress on extracting clothed 3D human avatars from a single image, video, or a set of 3D scans, several limitations remain. Most methods use a holistic representation to jointly model the body and clothing, which means that the clothing and body cannot be separated for applications like virtual try-on. Other methods separately model the body and clothing, but they require training from a large set of 3D clothed human meshes obtained from 3D/4D scanners or physics simulations. Our insight is that the body and clothing have different modeling requirements. While the body is well represented by a mesh-based parametric 3D model, implicit representations and neural radiance fields are better suited to capturing the large variety in shape and appearance present in clothing. Building on this insight, we propose SCARF (Segmented Clothed Avatar Radiance Field), a hybrid model combining a mesh-based body with a neural radiance field. Integrating the mesh into the volumetric rendering in combination with a differentiable rasterizer enables us to optimize SCARF directly from monocular videos, without any 3D supervision. The hybrid modeling enables SCARF to (i) animate the clothed body avatar by changing body poses (including hand articulation and facial expressions), (ii) synthesize novel views of the avatar, and (iii) transfer clothing between avatars in virtual try-on applications. We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects. The code and models are available at https://github.com/YadiraF/SCARF.
Takes a monocular RGB video and a clothing segmentation as input, and outputs separate, animatable body and clothing layers (see the compositing sketch below).
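To make the hybrid idea concrete, here is a rough sketch of compositing a volume-rendered clothing field in front of a rasterized body mesh along one ray, with the body color filled in by the leftover transmittance. This is only an illustration under assumed inputs; `mesh_rgb`, `mesh_depth`, and the per-sample clothing values are hypothetical names, not SCARF's actual API.

```python
# Hedged sketch of mesh + NeRF compositing along a single ray (not SCARF's code).
import torch

def composite_ray(rgb, sigma, deltas, z_vals, mesh_rgb, mesh_depth):
    """rgb: (S,3) clothing colors, sigma: (S,) densities, deltas: (S,) bin widths,
    z_vals: (S,) sample depths, mesh_rgb: (3,) rasterized body color, mesh_depth: float."""
    # ignore clothing samples behind the rasterized body surface
    sigma = torch.where(z_vals < mesh_depth, sigma, torch.zeros_like(sigma))
    alpha = 1.0 - torch.exp(-sigma * deltas)          # per-sample opacity
    # transmittance before each sample: T_i = prod_{j<i} (1 - alpha_j)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)
    weights = alpha * trans[:-1]                      # standard volume-rendering weights
    clothing_rgb = (weights[:, None] * rgb).sum(dim=0)
    # whatever light passes through the clothing picks up the body mesh color
    return clothing_rgb + trans[-1] * mesh_rgb
```

Because both the rasterizer (producing `mesh_rgb` and `mesh_depth`) and the volume rendering are differentiable, such a composite can be supervised directly against video frames.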