Embodiment Matters: Affordance Grounding From Robot and Human Videos
D.J. Wright (TU Delft - Mechanical Engineering)
Anne Kemmeren – Mentor (TNO)
Gertjan Burghouts – Mentor (TNO)
Y.B. Eisma – Mentor (TU Delft - Human-Robot Interaction)
J.C.F. de Winter – Graduation committee member (TU Delft - Human-Robot Interaction)
Dimitra Dodou – Graduation committee member (TU Delft - Medical Instruments & Bio-Inspired Technology)
Abstract
Affordances, or action possibilities, have been explored to enable robotic manipulation of everyday objects; however, the effect of an agent's embodiment has received little attention. Here we investigate how embodiment changes affordances between a human and a robot. Because no dataset exists for this setting, we present a method to automatically generate affordance pseudo-labels from a robotic manipulator for the task of grounding (localising) affordances on an object. We then propose a general model for embodiment-conditioned affordance grounding and explore three ways to condition on the embodiment. Our model learns to apply an affine transformation to image embeddings that captures the effect of embodiment on the affordance. We evaluate all three variants of our model and compare them to a variant without embodiment conditioning and to a state-of-the-art affordance grounding method. Our best-performing model reduces affordance prediction error by 25% compared to the variant without embodiment conditioning and by 68% compared to the state-of-the-art method. These results demonstrate that embodiment matters when perceiving affordances.
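The abstract describes the conditioning mechanism only at a high level. Below is a minimal sketch of one plausible reading, assuming a design in the spirit of FiLM (feature-wise linear modulation), in which a learned embodiment code predicts per-channel scale and shift parameters that affinely transform the image embedding. The class and parameter names (EmbodimentFiLM, embodiment_codes, to_affine), the dimensions, and the use of PyTorch are illustrative assumptions, not the thesis' actual implementation.

```python
import torch
import torch.nn as nn


class EmbodimentFiLM(nn.Module):
    """Illustrative sketch: an embodiment id selects a learned code that
    predicts a per-channel affine transform (scale gamma, shift beta)
    applied to image embeddings. Hypothetical, not the thesis' code."""

    def __init__(self, embed_dim: int = 512, num_embodiments: int = 2):
        super().__init__()
        # One learned code per embodiment, e.g. 0 = human hand, 1 = robot gripper.
        self.embodiment_codes = nn.Embedding(num_embodiments, embed_dim)
        # Project the embodiment code to the affine parameters gamma and beta.
        self.to_affine = nn.Linear(embed_dim, 2 * embed_dim)

    def forward(self, image_emb: torch.Tensor,
                embodiment_id: torch.Tensor) -> torch.Tensor:
        # image_emb: (B, D) image embeddings; embodiment_id: (B,) integer ids.
        code = self.embodiment_codes(embodiment_id)           # (B, D)
        gamma, beta = self.to_affine(code).chunk(2, dim=-1)   # each (B, D)
        return gamma * image_emb + beta                       # conditioned affine


# Usage: transform a batch of four embeddings under the "robot" embodiment.
model = EmbodimentFiLM()
embeddings = torch.randn(4, 512)
robot_ids = torch.full((4,), 1, dtype=torch.long)
print(model(embeddings, robot_ids).shape)  # torch.Size([4, 512])
```

In such a design, an affordance grounding head (not shown) would decode the transformed embedding into a spatial prediction over the object, so the same image yields different affordance regions depending on the embodiment id.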