Brain Activity Recognition using Deep Electroencephalography Representation

Conference Paper (2023)
Author(s)

Riddhi Johri (Indian Institute of Technology Gandhinagar)

Pankaj Pandey (Indian Institute of Technology Gandhinagar)

Krishna Miyapuram (Indian Institute of Technology Gandhinagar)

James Derek Lomas (TU Delft - Form and Experience)

Research Group
Form and Experience
Copyright
© 2023 Riddhi Johri, Pankaj Pandey, Krishna Prasad Miyapuram, J.D. Lomas
DOI related publication
https://doi.org/10.1109/APSCON56343.2023.10100986
Publication Year
2023
Language
English
ISBN (electronic)
9781665461634
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Advances in neurotechnology have enhanced and simplified our ability to study brain activity with low-cost, effective equipment. One such scalable and noninvasive technique is electroencephalography (EEG), which detects and records the brain's electrical activity. As EEG wearables become more widely available, brain activity recognition is an emerging problem. Our research models EEG signals to classify three states: (i) music listening, (ii) movie watching, and (iii) meditating. The datasets containing the brain signals induced while performing these activities are NMED-T for music listening, SEED for movie watching, and VIP_Y_HYT for meditating. EEG activity is transformed into a deep representation using a convolutional neural network comprising three types of 2D convolution (Temporal, Spatial, and Separable) to capture dependencies and extract high-level features from the data. The Depthwise Convolution learns spatial filters within each temporal convolution, and the Separable Convolution learns how to optimally combine these spatial filters across all temporal bands. EEGNet and EEGNet-SSVEP are designed specifically for EEG signal processing and classification, while DeepConvNet incorporates additional convolution layers. Our findings demonstrate that increasing the number of layers in the network yields higher accuracy: DeepConvNet achieves 99.94%, whereas EEGNet and EEGNet-SSVEP reach 85.63% and 75.76%, respectively.
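The division of labor among the three convolution types can be illustrated by their parameter counts. The sketch below uses illustrative, assumed sizes (64 electrodes, EEGNet-style filter counts F1, D, F2), not the paper's exact configuration; it shows why the temporal, depthwise (spatial), and separable stages stay compact compared with a full 2D convolution.

```python
# Hypothetical sketch: parameter counts for the three convolution types an
# EEGNet-style network stacks. All sizes below are assumptions for
# illustration, not the configuration used in the paper.

n_channels = 64   # EEG electrodes (assumed)
kern_t = 64       # temporal kernel length (assumed)
F1 = 8            # number of temporal filters (assumed)
D = 2             # depth multiplier: spatial filters per temporal filter
F2 = F1 * D       # feature maps entering the separable stage
kern_sep = 16     # separable-stage temporal kernel length (assumed)

# Temporal convolution: F1 filters, each a 1 x kern_t kernel over time only.
params_temporal = F1 * kern_t

# Depthwise convolution: D spatial filters (n_channels x 1) learned
# independently *within* each of the F1 temporal feature maps.
params_depthwise = F1 * D * n_channels

# Separable convolution: a depthwise temporal filter per feature map,
# followed by a pointwise (1x1) convolution that learns how to combine
# the feature maps across all temporal bands.
params_separable = F2 * kern_sep + F2 * F2

print(params_temporal, params_depthwise, params_separable)
```

Because the depthwise stage never mixes electrodes across feature maps, and the pointwise stage never looks along time, each stage learns one factor of the problem with far fewer weights than a dense 2D kernel of the same receptive field would need.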

Files

Brain_Activity_Recognition_usi... (pdf)
(pdf | 0.557 MB)
Embargo expired on 17-10-2023
License info not available