Prepare Your Mirrored-Human Dataset

  1. Capture
  2. Extract keypoints


Record the video yourself or download one from YouTube.

python3 scripts/dataset/ "" --database ${database}
python3 apps/preprocess/ ${database}
python3 apps/annotation/ ${database}
python3 apps/annotation/ ${database} --copy --out data/mirror-youtube-clip
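The `${database}` folder holds one record per clip. As a rough sketch of what registering a clip could look like (the `save_clip_record` helper and its field names are hypothetical, not the project's actual schema):

```python
import json
import os

def save_clip_record(database, clip_name, url, start, end):
    """Write a minimal per-clip record (hypothetical schema) into the database folder."""
    os.makedirs(database, exist_ok=True)
    record = {"name": clip_name, "url": url, "start": start, "end": end}
    path = os.path.join(database, clip_name + ".json")
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return path

# register one clip; the URL is left empty here as a placeholder
save_clip_record("data/mirror-youtube", "dance01", "", 0, 300)
```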

Extract keypoints

See prepare keypoints for detailed instructions.

python3 apps/preprocess/ ${data} --mode yolo-hrnet
# use OpenPose to detect the feet; skip this step if OpenPose is not installed
python3 apps/preprocess/ ${data} --mode feetcrop --hand
# track the human
python3 apps/preprocess/ ${data}
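The tracking step above links per-frame detections of the same person across time. A common way to do this, shown here as a minimal sketch (greedy frame-to-frame matching by bounding-box IoU; this is an illustration of the general technique, not the project's implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def track_by_iou(frames, threshold=0.5):
    """Greedily link detections frame-to-frame; returns per-frame track ids."""
    next_id = 0
    prev = []  # (track_id, box) pairs from the previous frame
    results = []
    for boxes in frames:
        assigned, used = [], set()
        for box in boxes:
            best, best_iou = None, threshold
            for i, (tid, pbox) in enumerate(prev):
                if i in used:
                    continue
                score = iou(box, pbox)
                if score > best_iou:
                    best, best_iou = i, score
            if best is None:
                tid = next_id  # no match above threshold: start a new track
                next_id += 1
            else:
                tid = prev[best][0]  # continue the matched track
                used.add(best)
            assigned.append((tid, box))
        results.append([tid for tid, _ in assigned])
        prev = assigned
    return results
```

When a person leaves the frame or a detection is badly wrong, the IoU match fails and a new track id is spawned; that is exactly the failure mode the two cases below repair.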

Case 1: tracking failed because of a wrong clip; re-clip the sequence:

python3 scripts/preprocess/ ${data} --start 0 --end <right_end_frame> --delete

This script automatically creates a new folder and copies the images and annotations into it. The --delete flag removes the original folder.
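Conceptually, the re-clip step just copies the frames inside the chosen range into a fresh folder. A minimal sketch, assuming frames are named by zero-padded index (e.g. `images/000000.jpg`) and a half-open `[start, end)` range; the actual script's layout and interval semantics may differ:

```python
import os
import shutil

def reclip(src, start, end, delete=False):
    """Copy frames/annotations in [start, end) into a new '<src>-clip' folder (sketch)."""
    dst = src + "-clip"
    for sub in ("images", "annots"):
        src_dir = os.path.join(src, sub)
        if not os.path.isdir(src_dir):
            continue
        dst_dir = os.path.join(dst, sub)
        os.makedirs(dst_dir, exist_ok=True)
        for name in sorted(os.listdir(src_dir)):
            frame = int(os.path.splitext(name)[0])  # '000003.jpg' -> 3
            if start <= frame < end:
                shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, name))
    if delete:
        shutil.rmtree(src)  # mirrors the --delete flag
    return dst
```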

Case 2: tracking failed because of wrong bounding boxes; annotate them manually:

python3 apps/annotation/ ${data} --sub <the wrong sub>
# estimate the 2D keypoints
python3 apps/preprocess/ ${data} --mode hrnet --subs <the wrong subs> --force
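The --force flag re-runs estimation even when results already exist. The skip-cached-unless-forced pattern behind it can be sketched as follows (the `needs_estimation` helper and the one-JSON-per-frame layout are assumptions for illustration):

```python
import os

def needs_estimation(annot_dir, frame_names, force=False):
    """Return frames whose keypoint files are missing, or all frames when force=True."""
    if force:
        return list(frame_names)  # --force: redo everything, ignoring cached results
    todo = []
    for name in frame_names:
        if not os.path.exists(os.path.join(annot_dir, name + ".json")):
            todo.append(name)
    return todo
```

Without --force the preprocessing would skip frames that already have annotations, so the corrected boxes you just drew would never be re-estimated.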