contact
Papers with tag contact
2022
- Contact-aware Human Motion Forecasting. Wei Mao, Miaomiao Liu, Richard Hartley, and Mathieu Salzmann. In 2022
In this paper, we tackle the task of scene-aware 3D human motion forecasting, which consists of predicting future human poses given a 3D scene and a past human motion. A key challenge of this task is to ensure consistency between the human and the scene, accounting for human-scene interactions. Previous attempts to do so model such interactions only implicitly, and thus tend to produce artifacts such as "ghost motion" because of the lack of explicit constraints between the local poses and the global motion. Here, by contrast, we propose to explicitly model the human-scene contacts. To this end, we introduce distance-based contact maps that capture the contact relationships between every joint and every 3D scene point at each time instant. We then develop a two-stage pipeline that first predicts the future contact maps from the past ones and the scene point cloud, and then forecasts the future human poses by conditioning them on the predicted contact maps. During training, we explicitly encourage consistency between the global motion and the local poses via a prior defined using the contact maps and future poses. Our approach outperforms the state-of-the-art human motion forecasting and human synthesis methods on both synthetic and real datasets. Our code is available at https://github.com/wei-mao-2019/ContAwareMotionPred.
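The abstract's "distance-based contact maps" (a per-frame distance between every joint and every scene point) can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the array shapes, and the distance-to-score normalisation are all assumptions made here.

```python
import numpy as np

def contact_maps(joints, scene_points, scale=1e-3):
    """Illustrative distance-based contact maps.

    joints:       (T, J, 3) joint positions over T frames
    scene_points: (P, 3) scene point cloud
    Returns a (T, J, P) array in (0, 1], where 1 means the joint
    coincides with the scene point (i.e. contact). The soft score
    1 / (1 + d / scale) is an assumed normalisation for clarity.
    """
    # Pairwise differences via broadcasting: (T, J, P, 3)
    diff = joints[:, :, None, :] - scene_points[None, None, :, :]
    # Euclidean distance of every joint to every scene point: (T, J, P)
    dist = np.linalg.norm(diff, axis=-1)
    # Map distance to a soft contact score in (0, 1]
    return 1.0 / (1.0 + dist / scale)
```

Such a map gives the forecasting pipeline an explicit, per-joint notion of which scene surfaces the body is touching at each time instant, rather than leaving contact implicit in the pose.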
- HULC: 3D Human Motion Capture with Pose Manifold Sampling and Dense Contact Guidance. Soshi Shimada, Vladislav Golyanik, Zhi Li, Patrick Pérez, Weipeng Xu, and Christian Theobalt. In ECCV 2022
Marker-less monocular 3D human motion capture (MoCap) with scene interactions is a challenging research topic relevant for extended reality, robotics and virtual avatar generation. Due to the inherent depth ambiguity of monocular settings, 3D motions captured with existing methods often contain severe artefacts such as incorrect body-scene inter-penetrations, jitter and body floating. To tackle these issues, we propose HULC, a new approach for 3D human MoCap which is aware of the scene geometry. HULC estimates 3D poses and dense body-environment surface contacts for improved 3D localisations, as well as the absolute scale of the subject. Furthermore, we introduce a 3D pose trajectory optimisation based on a novel pose manifold sampling that resolves erroneous body-environment inter-penetrations. Although the proposed method requires less structured inputs compared to existing scene-aware monocular MoCap algorithms, it produces more physically-plausible poses: HULC significantly and consistently outperforms the existing approaches in various experiments and on different metrics. Project page: https://vcai.mpi-inf.mpg.de/projects/HULC/.