A two-agent VR study: the effects of driver eye gaze visualisation on AV-pedestrian interaction
C.S. Mok (TU Delft - Mechanical Engineering)
Pavlo Bazilinskyy (TU Delft - Human-Robot Interaction)
J.C.F. de Winter (TU Delft - Human-Robot Interaction)
Abstract
Problem statement. The introduction of automated vehicles (AVs) changes the role of the driver and may lead to a lack of social interaction with pedestrians. This study proposes a concept in which the AV is controlled at the manoeuvre level via the driver’s eye gaze, and the driver’s gaze is visualised for both the driver and pedestrians. However, it was unknown whether gaze-based AV control is a viable concept and how the AV’s yielding behaviour should depend on the driver’s eye gaze. Method. A two-agent virtual-reality experiment was conducted using two Varjo VR-2 Pro head-mounted displays (HMDs). Seventeen pairs of participants (one pedestrian and one driver each) interacted in a road-crossing scenario. The pedestrians’ task was to hold down a button whenever they felt safe to cross the road, and the drivers’ task was to direct their gaze according to instructions. Each session consisted of three blocks of 16 trials: a baseline block, in which the AV driver did not communicate with the pedestrian, and two blocks in which the driver’s gaze was visualised, namely “gaze at the pedestrian to yield” (GTY) and “look away to yield” (LATY). The effectiveness of the interaction was examined using the pedestrians’ button presses. Acceptance and preference were measured using questionnaires. Results. Pedestrians showed the highest crossing performance and acceptance with the GTY mapping, followed by the LATY mapping and the baseline. The eye gaze visualisation caused pedestrians to spend more time looking at the AV; this effect was particularly pronounced when the driver looked at the pedestrian. Conclusion. Gaze visualisation combined with the GTY mapping has the potential to serve as a communication tool for AVs at intersections until full driving automation (SAE Level 5) is technically feasible.
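For clarity, the sketch below illustrates how the two gaze visualisation mappings could translate the driver’s gaze target into a yield decision. It is a minimal assumed example, not the implementation used in the study; the baseline block, in which gaze carries no communicative meaning, is omitted.

```python
# Illustrative sketch only (assumed logic, not the study's implementation):
# how the two gaze visualisation mappings could translate the driver's gaze
# target into the AV's yielding decision.

from enum import Enum


class Mapping(Enum):
    GTY = "gaze at the pedestrian to yield"
    LATY = "look away to yield"


def av_yields(mapping: Mapping, driver_looks_at_pedestrian: bool) -> bool:
    """Return True if the AV yields to the pedestrian under the given mapping."""
    if mapping is Mapping.GTY:
        # GTY: the driver's gaze on the pedestrian signals that the AV will yield.
        return driver_looks_at_pedestrian
    # LATY: the driver looking away from the pedestrian signals that the AV will yield.
    return not driver_looks_at_pedestrian


if __name__ == "__main__":
    for mapping in Mapping:
        for looking in (True, False):
            print(f"{mapping.name}: driver looks at pedestrian={looking} "
                  f"-> AV yields={av_yields(mapping, looking)}")
```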