Synthetic Human Motion Video Generation Based on a Biomechanical Model
Abstract
Biomechanics studies the mechanisms linking body movements and forces, and accurate motion data is crucial to this research. Researchers currently rely on marker-based motion capture systems to record motion data, but these systems are not widely adopted because of their financial and time costs, limited portability, and other drawbacks. Video-based motion capture systems instead take videos recorded by webcams, cameras, or smartphones as input and estimate human motion from them; this simpler setup makes video-based motion capture far more accessible. However, existing motion capture datasets commonly lack biomechanically accurate annotations, which limits the biomechanical accuracy of existing video-based motion capture methods. The biomechanics community offers many validated, biomechanically accurate models and motion data, but corresponding video data is lacking. From these data, a human-like appearance can be constructed and a synthetic human motion video dataset can be generated with 3D graphics software. In this thesis, we propose a pipeline that generates synthetic human motion videos. The pipeline takes a subject-specific OpenSim model and motion as input and uses the SMPL-X model to generate a human-like appearance. We validated the synthetic data generated by our pipeline and demonstrated its biomechanical reliability. Using this pipeline, we created ODAH, a synthetic dataset with biomechanically accurate annotations for neural network training.
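The pipeline summarized above can be sketched at a high level as three stages: load a subject-specific OpenSim model and motion, retarget each frame's pose onto an SMPL-X-style body, and render the posed mesh to a video frame. The sketch below is purely illustrative; every function and type name is a hypothetical placeholder, not the thesis implementation or the actual OpenSim/SMPL-X APIs.

```python
# Illustrative outline of the synthetic-video pipeline described in the
# abstract. All names here (MotionFrame, load_opensim_motion, etc.) are
# hypothetical stand-ins, not real OpenSim or SMPL-X library calls.
from dataclasses import dataclass
from typing import List


@dataclass
class MotionFrame:
    # Per-frame joint angles, as would be read from an OpenSim motion file.
    joint_angles: List[float]


def load_opensim_motion(num_frames: int) -> List[MotionFrame]:
    # Stand-in for parsing a subject-specific .osim model and .mot motion file.
    return [MotionFrame(joint_angles=[0.0] * 23) for _ in range(num_frames)]


def retarget_to_smplx(frame: MotionFrame) -> List[float]:
    # Stand-in for mapping OpenSim joint angles onto SMPL-X pose parameters.
    return list(frame.joint_angles)


def render_frame(pose: List[float]) -> bytes:
    # Stand-in for rendering the posed SMPL-X mesh in 3D graphics software
    # and returning the encoded image bytes of one video frame.
    return bytes(len(pose))


def generate_video(num_frames: int) -> List[bytes]:
    # Full pipeline: motion in, one rendered frame per motion frame out.
    motion = load_opensim_motion(num_frames)
    return [render_frame(retarget_to_smplx(f)) for f in motion]
```

In the actual system each stage would also carry the biomechanical annotations (joint angles, model parameters) alongside the rendered frames, which is what makes the resulting dataset useful for training.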