A two-agent VR study: the effects of driver eye gaze visualisation on AV-pedestrian interaction

Master Thesis (2021)
Authors

C.S. Mok (TU Delft - Mechanical Engineering)

Supervisors

Pavlo Bazilinskyy (TU Delft - Human-Robot Interaction)

J.C.F. de Winter (TU Delft - Human-Robot Interaction)

Faculty
Mechanical Engineering
Copyright
© 2021 Johnson Mok
Publication Year
2021
Language
English
Graduation Date
16-06-2021
Awarding Institution
Delft University of Technology
Programme
Mechanical Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Problem statement. The introduction of automated vehicles (AVs) changes the role of the driver and may cause a lack of social interaction with pedestrians. This study proposes a concept in which the AV is controlled at the manoeuvre level via eye gaze, and the AV driver's gaze is visualised for both the driver and pedestrians. However, it was unknown whether gaze-based AV control is a viable concept and how the AV's yielding behaviour should depend on the driver's eye gaze.

Method. A two-agent virtual-reality-based experiment was conducted using two Varjo VR-2 Pro head-mounted displays (HMDs). Seventeen pairs of participants (a pedestrian and a driver) interacted in a road-crossing scenario. The pedestrians' task was to hold a button whenever they felt safe to cross the road, and the drivers' task was to direct their gaze according to instructions. Each session consisted of three blocks of 16 trials: a baseline block, in which the AV driver did not communicate with the pedestrian, and two blocks in which the driver's gaze was visualised, namely "gaze at the pedestrian to yield" (GTY) and "look away to yield" (LATY). The effectiveness of the interaction was examined using the pedestrians' button presses, and acceptance and preference were measured using questionnaires.

Results. Pedestrians showed the highest crossing performance and acceptance with the GTY mapping, followed by the LATY mapping and the baseline. The eye gaze visualisation caused pedestrians to spend more time looking at the AV; this effect was most pronounced when the driver looked at the pedestrian.

Conclusion. Gaze visualisation combined with the GTY mapping has the potential to serve as a communication tool for AVs at intersections until full automation of driving (SAE Level 5) is technically feasible.
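The crossing-performance measures above are derived from the pedestrians' button presses. As a minimal illustrative sketch only (the sampling setup and function names are assumptions, not the thesis' actual analysis code), a sampled 0/1 "feel safe to cross" button trace could be reduced per trial to a hold fraction and a first-press latency in Python:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class TrialSummary:
        hold_fraction: float                  # fraction of samples with the button held
        first_press_time_s: Optional[float]   # seconds until the first press, None if never pressed

    def summarise_trial(button_state: List[int], sample_rate_hz: float) -> TrialSummary:
        # Reduce a sampled 0/1 button trace to two simple per-trial measures.
        if not button_state:
            raise ValueError("empty button trace")
        hold_fraction = sum(button_state) / len(button_state)
        first_press_time_s = None
        for i, pressed in enumerate(button_state):
            if pressed:
                first_press_time_s = i / sample_rate_hz
                break
        return TrialSummary(hold_fraction, first_press_time_s)

    # Example: a 90 Hz trace in which the pedestrian holds the button
    # during the second half of a 4-second trial.
    print(summarise_trial([0] * 180 + [1] * 180, sample_rate_hz=90.0))

Averaging such per-trial summaries within each block (baseline, GTY, LATY) would yield the kind of between-mapping comparison reported in the results; the actual statistical analysis used in the thesis may differ.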

Files

Msc_paper_Repository.pdf
(pdf | 6.62 MB)
License info not available