Constructing A Dataset For Gesture Recognition Using Ambient Light

Bachelor Thesis (2022)
Author(s)

O.T. Akadiri (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Qing Wang – Coach (TU Delft - Embedded Systems)

Qing Wang – Mentor (TU Delft - Embedded Systems)

Christoph Lofi – Mentor (TU Delft - Web Information Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2022 Femi Akadiri
Publication Year
2022
Language
English
Graduation Date
22-06-2022
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project; Gesture Recognition Empowered by Ambient Light and Embedded AI
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Public technology relies heavily on physical touch, which increases the transmission of diseases. Gesture recognition reduces this transmission by removing the dependence on physical touch. Furthermore, using ambient visible light for gesture recognition would reduce the power consumption of public technology, since less power has to be supplied for a dedicated light source. In this paper, a dataset for gesture recognition using ambient light is presented, alongside the design process and the challenges faced. The gestures were recorded with three photodiodes so that a machine learning algorithm can identify the patterns made by the shadows cast as the gestures are performed. The dataset consists of 10 gestures, each performed 5 times with each hand by 50 participants, collected under 5 different light-intensity ranges. The dataset was then used to train and test a machine learning model, reaching a validation accuracy of 86.8% (3 s.f.). Several factors related to the light source kept the accuracy from being as high as expected; the highest accuracy was obtained in environments with light intensities of 100-1000 lux, corresponding to a well-lit indoor room.
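As a rough illustration of the dataset structure described above (10 gestures x 50 participants x 2 hands x 5 repetitions, each sample captured by 3 photodiodes) and of the kind of train/test pipeline the abstract mentions, the following is a minimal sketch. The array shapes, the window length, the use of synthetic stand-in data, and the choice of a random-forest classifier are all assumptions for illustration; they are not taken from the thesis itself.

```python
# Hypothetical sketch: shapes, window length, and classifier choice are assumptions,
# not the thesis's actual pipeline. Synthetic data stands in for the real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

N_GESTURES, N_PARTICIPANTS, N_HANDS, N_REPS = 10, 50, 2, 5
N_PHOTODIODES, N_TIMESTEPS = 3, 100  # assumed length of each recorded window

n_samples = N_GESTURES * N_PARTICIPANTS * N_HANDS * N_REPS
# Stand-in data; in practice the photodiode time series would be loaded from the dataset files.
X = np.random.rand(n_samples, N_PHOTODIODES * N_TIMESTEPS)
y = np.repeat(np.arange(N_GESTURES), n_samples // N_GESTURES)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```

On real shadow-pattern recordings, a classifier trained this way could be evaluated per light-intensity range to reproduce the kind of comparison the abstract reports (best performance at 100-1000 lux).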
