Human-robot collaboration can be improved if the motions of the robot are more legible and predictable. This can be achieved by making the motions more human-like. It is assumed that humans move optimally with respect to a certain objective or cost function. To find this function, an inverse optimal control approach is developed. It uses a bilevel optimization to find which linearly weighted combination of physically interpretable cost functions best mimics human point-to-point motions. The upper level compares the optimal result of the lower level with a reference motion. Two depth cameras are combined in a motion capture setup to record this reference motion.
The human arm is modeled as a seven degrees of freedom manipulator, similar to the robot arm model of the YuMi. The bilevel optimization is performed with both models, resulting in two differently weighted cost functions. The cost function derived using the robot model is then used to generate new motions for the corresponding real robot arm. The results of an experiment show that humans experience these motions as more anthropomorphic and feel at least as safe as with existing motion planning strategies. This demonstrates the validity of the inverse optimal control method and shows it is a step forward for better collaboration between humans and robots.