Quick Start
Demo for Motion Capture
Demo on multiple calibrated cameras
Download our demo dataset here and extract it. If you want to run this code on your own dataset, see Prepare Your MoCap Dataset for more details.
data=/path/to/dataset
python3 apps/demo/mocap.py ${data} --work lightstage-dense-smplh --subs_vis 01 --ranges 0 800 1
The visualization results can be found in `${data}/output-mv1p-smplh/smplmesh.mp4`.
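If `--ranges 0 800 1` follows the usual start/end/step convention (an assumption here; the actual parsing lives in `apps/demo/mocap.py`), the selected frame indices can be sketched as:

```python
# Sketch: interpreting "--ranges 0 800 1" as start/end/step frame selection.
# This mirrors Python's range() semantics; the real CLI parsing may differ.
start, end, step = 0, 800, 1
frames = list(range(start, end, step))
print(len(frames), frames[0], frames[-1])  # → 800 0 799
```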
Video comes from our ZJU-MoCap dataset with 19 calibrated and synchronized cameras.
Optionally, you can change the mode for other models:
Model | SMPL | MANO
---|---|---
Mode | `--work lightstage-dense-smpl` | `--work lightstage-dense-manol`
Our method takes only 30 seconds to optimize the SMPL model over 800 frames. Since rendering the results takes the longest time, you can add the flag `--disable_vismesh` to skip it.
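For a rough sense of throughput, the numbers quoted above (800 frames in 30 seconds) work out to about 27 frames per second of optimization; a back-of-the-envelope check:

```python
# Back-of-the-envelope optimization throughput from the figures above.
frames, seconds = 800, 30
fps = frames / seconds
print(f"{fps:.1f} frames/s")  # → 26.7 frames/s
```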
Demo on monocular videos
Download the demo dataset here and extract it.
data=<path/to/data>
python3 apps/demo/mocap.py ${data} --work internet
Download the challenging data here and extract it.
data=<path/to/data>
python3 apps/demo/mocap.py ${data} --work internet-rotate --fps 30 --render_side
Videos come from YouTube.
Demo on monocular+mirror videos
Download the example dataset here and extract it.
data=<path/to/data>
python3 apps/demo/mocap.py ${data} --work mirror --fps 30 --vis_scale 0.5
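Assuming `--vis_scale` scales the visualization resolution (an assumption; check the repository's help text for the flag's exact behavior), a factor of 0.5 would halve both output dimensions:

```python
# Hypothetical effect of --vis_scale 0.5 on a 1920x1080 visualization;
# the flag's actual behavior is defined in apps/demo/mocap.py.
width, height = 1920, 1080
scale = 0.5
print(int(width * scale), int(height * scale))  # → 960 540
```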
Videos come from YouTube.
Demo for Novel View Synthesis
Demo for NeuralBody
Download the example dataset here and extract it.
data=/path/to/data
# Train Neuralbody:
python3 apps/neuralbody/demo.py ${data} --mode neuralbody --gpus 0,
# Render Neuralbody:
python3 apps/neuralbody/demo.py ${data} --mode neuralbody --gpus 0, --demo
Full pipeline of motion capture and data preparation:
# motion capture
python3 apps/demo/mocap.py ${data} --work lightstage-dense-unsync --subs_vis 01 07 13 19 --disable_crop