Guiding the Eye Using the SEEV Model and Gaze-Contingent Feedback
D.J. Eijssen (TU Delft - Mechanical Engineering)
Joost C.F. de Winter (TU Delft - Human-Robot Interaction)
Y. B. Eisma (TU Delft - Human-Robot Interaction)
Abstract
Background: Automated vehicles are promoted as a safety improvement, but they may also introduce a new class of ‘out of the loop’ errors. Automation support in the form of gaze-contingent feedback might be a solution to these errors. Using the SEEV model, an operator’s attention allocation can be predicted, and using live gaze data, the driver’s attention can be guided towards areas of interest.
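As an illustration of the kind of prediction involved, the sketch below computes a predicted attention distribution over areas of interest from SEEV-style coefficients (salience, effort, expectancy, value). The AOI names, coefficient weights, and input values are hypothetical assumptions for illustration, not parameters taken from the study.

```python
# Illustrative SEEV-style prediction of percentage dwell time (PDT) per AOI.
# All AOIs, weights, and values below are made-up examples.

def predict_pdt(aois):
    """Return a predicted PDT per AOI from SEEV-style attention scores.

    Each AOI provides salience (S), effort (EF), expectancy (EX) and value (V)
    on arbitrary ordinal scales; the score follows the common form
    s*S - ef*EF + ex*EX + v*V and is normalized so all PDTs sum to 1.
    """
    s, ef, ex, v = 1.0, 1.0, 1.0, 1.0  # example coefficient weights
    scores = {
        name: max(s * p["S"] - ef * p["EF"] + ex * p["EX"] + v * p["V"], 0.0)
        for name, p in aois.items()
    }
    total = sum(scores.values()) or 1.0
    return {name: score / total for name, score in scores.items()}

# Hypothetical AOIs for a vehicle-monitoring task.
aois = {
    "road_ahead":  {"S": 2, "EF": 1, "EX": 3, "V": 3},
    "mirror_left": {"S": 1, "EF": 2, "EX": 2, "V": 2},
    "dials":       {"S": 1, "EF": 2, "EX": 2, "V": 1},
}
print(predict_pdt(aois))  # e.g. {'road_ahead': 0.58, 'mirror_left': 0.25, 'dials': 0.17}
```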
Methods: An experiment was designed in which twenty participants monitored an automated vehicle while performing a secondary task: either indicating cars driving next to them or monitoring dial crossings. Gaze data were measured with an eye tracker and compared to a predictive attention-allocation model; gaze-contingent visual feedback was provided when the observed gaze deviated too much from the SEEV prediction. Participants performed each task with and without gaze-contingent feedback in three different driving situations.
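A minimal sketch of the feedback-triggering logic described above, assuming observed dwell times are accumulated over a sliding window of eye-tracker samples and a visual cue is directed at the AOI whose observed PDT falls furthest below its SEEV prediction. The window handling, threshold value, and function name are illustrative assumptions, not the study's actual implementation.

```python
from collections import Counter

def feedback_target(gaze_samples, predicted_pdt, threshold=0.10):
    """Return the AOI to highlight, or None if gaze matches the prediction.

    gaze_samples: recent AOI labels from the eye tracker (sliding window).
    predicted_pdt: dict mapping AOI name -> predicted percentage dwell time.
    threshold: how far the observed PDT may fall below the prediction before
               a cue is shown (hypothetical value, for illustration only).
    """
    counts = Counter(gaze_samples)
    n = len(gaze_samples) or 1
    # Deficit = predicted share minus observed share of dwell time per AOI.
    deficits = {
        aoi: predicted_pdt[aoi] - counts.get(aoi, 0) / n
        for aoi in predicted_pdt
    }
    aoi, deficit = max(deficits.items(), key=lambda kv: kv[1])
    return aoi if deficit > threshold else None

# Example: the driver has been looking almost exclusively at the road ahead.
window = ["road_ahead"] * 45 + ["mirror_left"] * 5
print(feedback_target(window, {"road_ahead": 0.58, "mirror_left": 0.25, "dials": 0.17}))
# -> 'dials' (the most under-attended AOI would receive the visual cue)
```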
Results: The results showed that, with the support of gaze-contingent feedback, participants allocated their attention more actively over all areas of interest. They performed more saccades, and their fixations were distributed more widely over the monitor. The fit of the observed percentage dwell time (PDT) to the predicted PDT also improved with gaze-contingent feedback. Task performance on the dial-crossing secondary task decreased significantly with feedback on, while task performance on the hazard perception task did not change.
Conclusion: Attention allocation in complex environments can be successfully predicted using the SEEV model and implemented in a gaze-contingent feedback system. With gaze-contingent feedback on, attention was divided more evenly and more in line with the predicted PDT, but the current implementation did not improve secondary task performance.