EV-Eye

Rethinking High-frequency Eye Tracking through the Lenses of Event Cameras

Conference Paper (2023)
Authors

Guangrong Zhao (Shandong University)

Yurun Yang (Shandong University)

Jingwei Liu (Shandong University)

Ning Chen (TU Delft - Photovoltaic Materials and Devices, Shandong University)

Yiran Shen (Shandong University)

Hongkai Wen (University of Warwick)

Guohao Lan (TU Delft - Embedded Systems)

Publication Year
2023
Language
English
Research Group
Embedded Systems
Volume number
36
ISBN (print)
9781713899921
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In this paper, we present EV-Eye, a first-of-its-kind large-scale multimodal eye tracking dataset aimed at inspiring research on high-frequency eye/gaze tracking. EV-Eye utilizes the emerging bio-inspired event camera to capture independent pixel-level intensity changes induced by eye movements, achieving sub-microsecond latency. Our dataset was curated over two weeks and collected from 48 participants encompassing diverse genders and age groups. It comprises over 1.5 million near-eye grayscale images and 2.7 billion event samples generated by two DAVIS346 event cameras. Additionally, the dataset contains 675 thousand scene images and 2.7 million gaze references captured by a Tobii Pro Glasses 3 eye tracker for cross-modality validation. Compared with existing event-based high-frequency eye tracking datasets, our dataset is significantly larger in size, and the gaze references involve more natural and diverse eye movement patterns, i.e., fixation, saccade, and smooth pursuit. Alongside the event data, we also present a hybrid eye tracking method as a benchmark, which leverages both the near-eye grayscale images and event data for robust and high-frequency eye tracking. We show that our method achieves higher accuracy than the existing solution on both pupil and gaze estimation tasks.
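To make the event representation concrete, the sketch below shows how DAVIS346 event streams, recorded as (timestamp, x, y, polarity) tuples, could be accumulated into fixed-interval event frames that complement the lower-rate grayscale frames. This is a minimal illustration under stated assumptions, not the EV-Eye loader or the benchmark hybrid method: the dtype layout, field names, window length, and synthetic data are hypothetical, and only the 346x260 sensor resolution and the general event format reflect the DAVIS346 hardware.

```python
import numpy as np

# Each DAVIS event is typically a (t, x, y, p) tuple: a microsecond timestamp,
# pixel coordinates on the 346x260 sensor, and a polarity bit indicating
# whether intensity at that pixel increased or decreased.
# NOTE: this dtype is an illustrative assumption, not EV-Eye's on-disk format.
EVENT_DTYPE = np.dtype([("t", np.int64),   # timestamp in microseconds
                        ("x", np.int16),   # column index, 0..345
                        ("y", np.int16),   # row index, 0..259
                        ("p", np.int8)])   # polarity: +1 brighter, -1 darker


def events_to_frame(events: np.ndarray,
                    t_start: int,
                    t_end: int,
                    height: int = 260,
                    width: int = 346) -> np.ndarray:
    """Accumulate events with timestamps in [t_start, t_end) into a count image.

    Such an "event frame" can be paired with the nearest grayscale frame to
    update an eye/pupil estimate between full-frame exposures.
    """
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    frame = np.zeros((height, width), dtype=np.int32)
    # Signed accumulation: ON events add 1, OFF events subtract 1 per pixel.
    np.add.at(frame, (events["y"][mask], events["x"][mask]), events["p"][mask])
    return frame


if __name__ == "__main__":
    # Synthetic events standing in for a short slice of a recording.
    rng = np.random.default_rng(0)
    n = 10_000
    ev = np.zeros(n, dtype=EVENT_DTYPE)
    ev["t"] = np.sort(rng.integers(0, 40_000, n))   # 40 ms of activity
    ev["x"] = rng.integers(0, 346, n)
    ev["y"] = rng.integers(0, 260, n)
    ev["p"] = rng.choice([-1, 1], n)

    # One event frame per 5 ms window, i.e. a 200 Hz update rate between
    # roughly frame-rate-limited grayscale images.
    frames = [events_to_frame(ev, t0, t0 + 5_000)
              for t0 in range(0, 40_000, 5_000)]
    print(len(frames), frames[0].shape)
```

The 5 ms window is only an example; shrinking it trades noisier event frames for a higher update rate, which is the basic lever behind event-based high-frequency tracking.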
