How can robots without expressive faces or bodies convey emotions? Why would it be useful if robots could express emotion? In the context of human-robot interaction, could emotional expression lead to a greater comprehension of robotic behaviors and intents? These are questions addressed by the field of affective robotics, which seeks to establish naturalistic social interaction between robots and humans. Emotions provide a natural communication modality that can augment the multi-modal capabilities of social robots across a variety of domains.
Historically, the emphasis in the field has been on facial and bodily expressions, relying heavily on anthropomorphic or zoomorphic robot appearances. This presents a challenge: most robots are designed with functionality in mind and lack expressive faces and bodies, which limits their ability to convey emotions effectively. This study investigates the potential for such appearance-constrained robots to convey emotions through variations in motion, light, and sound parameters.
We conducted an experiment in which participants rated the emotional qualities of a non-humanoid, faceless robot's behaviors, which were manipulated through variations in motion, light, and sound parameters. Our approach is novel in adopting a bottom-up methodology, analogous to the work of Jack et al. on facial expressions: by systematically varying individual features and observing the resulting emotional perceptions, we aimed to discern the specific affective contribution of each parameter. Using machine-learning-based regression models, we then sought to predict the perceived emotional qualities from the varied parameters.
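As a rough illustration of this setup (not the study's actual pipeline), the sketch below fits linear and random-forest regressors to predict a perceived-emotion rating from behavior parameters, assuming scikit-learn is available. The feature names (motion speed, light temperature, sound pitch), their ranges, and the synthetic ratings are placeholders for the experimental data.

```python
# Illustrative sketch of the regression setup; all data here are
# synthetic placeholders, not the study's participant ratings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Systematically varied behavior parameters (assumed ranges).
X = np.column_stack([
    rng.uniform(0.1, 1.0, n),    # motion speed (normalized)
    rng.uniform(2700, 6500, n),  # light temperature (Kelvin)
    rng.uniform(200, 2000, n),   # sound pitch (Hz)
])

# Hypothetical perceived-arousal ratings with noise, standing in
# for participants' responses on a rating scale.
y = 2.0 * X[:, 0] + 0.0005 * X[:, 2] + rng.normal(0, 0.3, n)

for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")
```

Cross-validated R^2 is one plausible way to compare how well each model family recovers the mapping from parameters to perceived emotion; the study's actual evaluation protocol may differ.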
Our findings reveal that variations in motion parameters, particularly speed, significantly influence the perceived intensity of arousal, joy, and dominance. Light temperature was found to affect the perceived intensity of anger and joy, while sound pitch influenced perceptions of surprise and fear. The regression models showed varying degrees of success, with the random forest models often outperforming linear models but also exhibiting a higher tendency to overfit the training data. The linear models, while less prone to overfitting, struggled to capture the full complexity of the emotional responses. These findings suggest that non-anthropomorphic robots can indeed convey emotional qualities through controlled variations in their behaviors, though the strength and clarity of these emotions remain limited. Future research should focus on enhancing the expressiveness of these parameters and testing the models with new data to better understand their generalizability and effectiveness.
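The overfitting contrast noted above can be made concrete with a train/test split: a much higher R^2 on the training data than on held-out data signals overfitting. The following self-contained sketch (again with synthetic placeholder data) shows the kind of gap a flexible random forest typically exhibits relative to a linear model.

```python
# Minimal sketch of a train/test gap check for overfitting;
# data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 3))         # stand-in behavior parameters
y = 2.0 * X[:, 0] + rng.normal(0, 0.3, 200)  # noisy stand-in ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: train R^2 = {model.score(X_tr, y_tr):.2f}, "
          f"test R^2 = {model.score(X_te, y_te):.2f}")
```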