Affordances, or action possibilities, have been explored to enable robotic manipulation with everyday objects; however, the effect of an agent's embodiment has received little attention. Here we investigate how embodiment changes affordances between a human and a robot. As no suitable dataset exists, we present a method to automatically generate affordance pseudo-labels from a robotic manipulator for the task of grounding (localising) affordances on an object. We then propose a general model for embodiment-conditioned affordance grounding and explore three ways of conditioning on the embodiment. Our model learns to apply an affine transformation to image embeddings based on the effect of the embodiment on the affordance. We evaluate all three variants of our model and compare them against a variant without embodiment conditioning and a state-of-the-art affordance grounding method. Our best-performing model decreases affordance prediction error by 25% compared to the variant without embodiment conditioning and by 68% compared to the state-of-the-art method. These results demonstrate that embodiment matters when perceiving affordances.
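The embodiment-conditioned affine transformation described above can be sketched in the style of feature-wise linear modulation: an embodiment embedding predicts a per-channel scale and shift that are applied to the image embeddings. This is a minimal illustrative sketch, not the paper's implementation; all shapes, names (`W_gamma`, `W_beta`, `condition`), and the use of random weights are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

C = 8        # feature channels (illustrative)
H, W = 4, 4  # spatial size of the image feature map
E = 5        # embodiment embedding dimension (hypothetical)

# Hypothetical learned projections mapping the embodiment embedding
# to a per-channel scale (gamma) and shift (beta).
W_gamma = rng.standard_normal((E, C)) * 0.1
W_beta = rng.standard_normal((E, C)) * 0.1

def condition(features, embodiment):
    """Apply an embodiment-dependent affine transform per channel."""
    gamma = 1.0 + embodiment @ W_gamma  # scale, initialised near identity
    beta = embodiment @ W_beta          # shift
    return features * gamma[:, None, None] + beta[:, None, None]

feats = rng.standard_normal((C, H, W))  # image embeddings
emb = rng.standard_normal(E)            # embodiment descriptor
out = condition(feats, emb)
print(out.shape)  # (8, 4, 4)
```

Because the transform is affine per channel, the conditioned features keep the same spatial layout, so a downstream decoder can still localise affordance regions on the object.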