Assessing learner’s distraction in a multimodal platform for sustained attention in the remote learning context using mobile devices sensors

Bachelor Thesis (2021)
Author(s)

G. Di Giuseppe Deininger (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Yoon Lee – Mentor (TU Delft - Software Technology)

Marcus M. Specht – Graduation committee member (TU Delft - Software Technology)

M.A. Migut – Coach (TU Delft - Software Technology)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2021 Giuseppe Di Giuseppe Deininger
Publication Year
2021
Language
English
Graduation Date
01-07-2021
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Related content

GitHub repositories containing all code used during the research

https://github.com/MultimodalLearningAnalytics
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

During a learning task, maintaining a steady attentive state is essential for good performance. A person is subject to distraction from different sources, which may originate from within the learner or from external sources such as ambient sound. Detecting such distraction can improve the effectiveness of a task by providing feedback when necessary. Existing studies have tried to measure performance on specific activities using mobile devices such as smartphones and smartwatches, and one study showed a correlation between changes of posture and distraction. This paper addresses the main question "How can mobile device sensors indicate a learner's distraction in the remote learning context?". To answer it, raw data from the movement sensors of a smartphone and a smartwatch was recorded during a reading task, processed to highlight movements, and then used to train a Convolutional Long Short-Term Memory (LSTM) model. The resulting model achieved an F1 score of 0.919 on validation data and was also combined with an external model that detects distraction from ambient noise to form a multimodal model, which performed better than either model individually. The limitations of the data collected during the experiment and improvements for future work are also discussed.
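The abstract mentions segmenting raw movement-sensor recordings and processing them to highlight movements before training. As a rough illustration of that kind of preprocessing, the sketch below windows a 3-axis accelerometer stream and computes a per-window movement magnitude. It is a minimal NumPy sketch under assumed parameters: the window size, step, 50 Hz sampling rate, and differencing-based movement feature are all hypothetical and are not taken from the thesis itself.

```python
import numpy as np

def window_sensor_stream(samples, window_size=128, step=64):
    """Segment a (T, 3) accelerometer stream into overlapping windows.

    Hypothetical preprocessing: the window and step sizes here are
    illustrative, not the values used in the thesis.
    """
    windows = [
        samples[start:start + window_size]
        for start in range(0, len(samples) - window_size + 1, step)
    ]
    if not windows:
        return np.empty((0, window_size, samples.shape[1]))
    return np.stack(windows)

def movement_magnitude(window):
    """Highlight movement in one window: L2 norm of sample-to-sample
    deltas, which suppresses the constant gravity component."""
    deltas = np.diff(window, axis=0)       # change between consecutive samples
    return np.linalg.norm(deltas, axis=1)  # one magnitude value per step

# Example: 5 s of synthetic 3-axis data at an assumed 50 Hz sampling rate
stream = np.random.randn(250, 3)
wins = window_sensor_stream(stream)
print(wins.shape)                      # (2, 128, 3) with these sizes
print(movement_magnitude(wins[0]).shape)  # (127,)
```

Windows shaped like `(num_windows, window_size, channels)` are the usual input layout for sequence models such as the Convolutional LSTM described in the abstract, with one label per window.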

Files

RP_Final_Paper_no_email.pdf
(pdf | 15.6 MB)
License info not available