DeepSleep

A sensor-agnostic approach towards modelling a sleep classification system

Abstract

Sleep is a natural state of our mind and body during which our muscles heal and our memories are consolidated. It is such a habitual phenomenon that we tend to treat it as just another ordinary task in our day-to-day lives. However, in today's fast-paced, technology-driven world, we are letting ourselves become sleep-deprived, giving way to serious health concerns such as depression, insomnia, restlessness, apnea, and Alzheimer's disease. Polysomnography (PSG) studies are used for diagnosing and treating sleep-related disorders. Although PSG is considered the gold standard, it is obtrusive and does not allow for long-term monitoring. Various wearables have been manufactured to help people monitor their sleep-health; however, these devices have been shown to be inaccurate.

The ubiquitous sensor technology employed by wearables provides large volumes of data, recorded in the user's most natural setting. There is an opportunity to make use of this readily available sensor data to model a sleep scoring system that could help individuals monitor their sleep-health from the comfort of their home. In this thesis, we aim to bridge the gap between the highly accurate but obtrusive medical diagnosis (PSG) and the non-intrusive yet inaccurate wearables.

In this work, we propose DeepSleep, a deep neural network-based sleep classification model that uses an unobtrusive BCG-based heart sensor signal. Our proposed architecture combines CNN and LSTM layers to perform self-feature extraction and sequential learning, respectively. We show that our model can classify sleep stages with a mean F1-score of 74% using the BCG signal. We employ a 2-phase training strategy to build a pre-trained model that tackles the limited dataset size, and we test the transferability of the model to other types of heart signals. With average classification accuracies of 82% and 63% using ECG- and PPG-based heart signals, respectively, we show that our pre-trained model can also be used in a transfer learning setting. Lastly, with the help of a user study of 16 subjects, we show that the objective sleep quality metrics correlate with the perceived sleep quality reported by the subjects, with a correlation score of r = 0.43.
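To illustrate the kind of CNN-LSTM architecture described above, the following is a minimal sketch, assuming a Keras implementation: convolutional layers extract features from each heart-signal epoch, and an LSTM then learns the sequence of epochs over the night. The window length, layer sizes, and four-stage output (Wake, Light, Deep, REM) are illustrative assumptions, not the exact DeepSleep configuration.

```python
# Minimal sketch of a CNN + LSTM sleep-stage classifier (Keras).
# Window length, layer sizes, and the four-stage output are illustrative
# assumptions, not the exact DeepSleep configuration.
from tensorflow.keras import layers, models

def build_sleep_model(window_len=3000, n_channels=1, n_stages=4):
    """CNN layers extract features from each heart-signal epoch;
    an LSTM models the sequence of epochs across the night."""
    # (epochs per night, samples per epoch, channels); epoch count left variable.
    inputs = layers.Input(shape=(None, window_len, n_channels))

    # Per-epoch convolutional feature extractor (self-feature extraction).
    cnn = models.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        layers.Conv1D(32, kernel_size=7, activation="relu", padding="same"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
        layers.MaxPooling1D(4),
        layers.GlobalAveragePooling1D(),
    ])
    epoch_features = layers.TimeDistributed(cnn)(inputs)

    # Sequential learning across epochs captures sleep-cycle dynamics.
    x = layers.LSTM(64, return_sequences=True)(epoch_features)
    outputs = layers.TimeDistributed(
        layers.Dense(n_stages, activation="softmax"))(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sleep_model()
model.summary()
```

Wrapping the convolutional feature extractor in TimeDistributed keeps the epoch-level feature extraction and the night-level sequential learning decoupled, mirroring the two roles of the CNN and LSTM layers described above.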

Although our proposed model's performance is not yet comparable to the medical standard, we show that it is possible to monitor our sleep-health using wearable signals with minimal domain knowledge and preprocessing. The predictions of our DeepSleep model show that it learns the biological rules of sleep, for instance always following a Deep or REM stage with a transitional Light stage. Our model treats classification as a sequential problem and can therefore identify important, time-dependent sleep parameters such as the onset of sleep cycles and the time spent in different sleep stages. Furthermore, our user study, conducted using the SATED questionnaire, provides insight into the difference between the users' perceived sleep quality and the model's estimation. It shows that an automated classification system needs to incorporate external factors, such as environmental and ambient conditions, to correlate strongly with perceived or subjective sleep quality. We further discuss research gaps and opportunities that could improve the model's performance and extend it to other domains, such as irregular heartbeat and apnea detection. We consider this work a starting point for research into sleep and heart health using non-intrusive wearable sensors and deep neural network-based architectures.
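To make the comparison between objective and subjective sleep quality concrete, the sketch below computes a Pearson correlation between a model-derived metric and per-subject SATED scores, assuming SciPy; the arrays are random placeholders rather than the study data, and the variable names are hypothetical.

```python
# Hedged sketch: correlating an objective sleep metric with subjective
# SATED scores across subjects. The data below are random placeholders,
# not the values collected in the user study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# One value per subject (16 subjects in the user study).
objective_quality = rng.random(16)                  # e.g. model-derived sleep efficiency in [0, 1]
subjective_quality = rng.integers(0, 11, size=16)   # SATED total score (0-10)

r, p = pearsonr(objective_quality, subjective_quality)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```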