1. Try the code
  2. Basic Method
  3. Step-by-step Explanation
    1. stages
  4. Fit MANO
  5. Reference

Try the code

The example dataset can be downloaded from 01_triangulate/street_dance.zip. After downloading, unzip it to the data/examples folder.

data=data/examples/street_dance
emc --data config/datasets/mvimage.yml --exp config/mv1p/detect_triangulate_fitSMPL.yml --root ${data} --subs_vis 07 01 05 03

The results can be found in output/detect_triangulate_fitSMPL.


The videos were captured outdoors using 9 smartphones.

Basic Method

Our method utilizes the SMPL [1] (Skinned Multi-Person Linear) parametric model. It minimizes the distance between the keypoints of the SMPL model and the triangulated keypoints, while incorporating a temporal smoothness loss on the SMPL parameters and a prior loss on the SMPL pose parameters. By jointly optimizing the SMPL parameters over all frames, we obtain temporally consistent results.
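The objective described above can be sketched as follows. This is a simplified illustration, not the actual EasyMocap implementation: the array shapes, weight values, and the L2 stand-in for the GMM pose prior are all assumptions.

```python
import numpy as np

def fitting_loss(pred_joints, gt_joints, conf, poses,
                 w_k3d=1.0, w_smooth=10.0, w_prior=0.01):
    """Toy version of the fitting objective.

    pred_joints: (T, J, 3) keypoints regressed from the SMPL model
    gt_joints:   (T, J, 3) triangulated 3D keypoints
    conf:        (T, J)    detection confidences used as weights
    poses:       (T, 72)   SMPL pose parameters over T frames
    """
    # 3D keypoint distance, weighted by detection confidence
    k3d = np.sum(conf[..., None] * (pred_joints - gt_joints) ** 2)
    # temporal smoothness: penalize frame-to-frame parameter changes
    smooth = np.sum((poses[1:] - poses[:-1]) ** 2)
    # placeholder L2 pose prior (the real method uses a GMM prior)
    prior = np.sum(poses ** 2)
    return w_k3d * k3d + w_smooth * smooth + w_prior * prior
```

Because all frames enter one objective, the smoothness term couples neighboring frames, which is what yields the temporally consistent result.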

Step-by-step Explanation

stages

The keypoint detection and triangulation parts are reused from the content of 01_triangulate. Here, we will mainly focus on describing the optimization process that occurs after triangulating all frames. First, we load the SMPL model and then construct the SMPL parameters. Through a multi-stage optimization approach, we optimize different parameters in each stage to obtain the final result.

...
  at_final:
    load_body_model: # Load SMPL model
      module: myeasymocap.io.model.SMPLLoader
      args:
        model_path: models/pare/data/body_models/smpl/SMPL_NEUTRAL.pkl
        regressor_path: models/J_regressor_body25.npy
    init_params: ... # initialize the SMPL parameters
    fitShape: ... # Fit the SMPL shape by 3D limbs
    init_RT: ... # Initialize the body rotation and translation
    refine_poses: ... # Optimize the poses
      # ...
      args:
        optimize_keys: [poses, Rh, Th]
        loss:
          k3d: ... # 3D keypoints distance
          smooth: ... # smooth term
          prior: ... # use GMM as pose prior
    write: ... # write the results
    render_ground: ... # render the SMPL with a ground
    make_video: ...
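The staged schedule in the config above can be sketched as a loop that frees a different subset of parameters in each stage while keeping the rest fixed. The stage names mirror the config; the `optimize` callback and parameter-dict layout are hypothetical simplifications.

```python
# Each stage optimizes only the listed keys, matching the config's
# fitShape -> init_RT -> refine_poses ordering (other keys stay fixed).
STAGES = [
    {"name": "fitShape",     "optimize_keys": ["shapes"]},
    {"name": "init_RT",      "optimize_keys": ["Rh", "Th"]},
    {"name": "refine_poses", "optimize_keys": ["poses", "Rh", "Th"]},
]

def run_stages(params, stages, optimize):
    """Run the multi-stage fitting.

    params:   dict of SMPL parameters, e.g. {"shapes": ..., "poses": ...}
    optimize: callback that minimizes the fitting loss over `keys` only
    """
    for stage in stages:
        params = optimize(params, keys=stage["optimize_keys"])
    return params
```

Fitting shape and global rotation/translation first gives the final pose refinement a good initialization, which makes the non-convex optimization much more stable.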

Fit MANO

The example dataset can be downloaded from 01_triangulate/412-hand.zip. After downloading, unzip it to the data/examples folder.

data=data/examples/412-hand
emc --data config/datasets/mvimage.yml --exp config/mv1p/detect_hand_triangulate_fitMANO.yml --root ${data} --subs_vis 0 1 2

The videos were captured using 3 USB cameras.

Reference

  1. Loper, Matthew, et al. “SMPL: A skinned multi-person linear model.” ACM transactions on graphics (TOG) 34.6 (2015): 1-16.