Noise Attacks as a First Layer of Privacy Protection in Semantic Data Extraction From Brain Activity
T.C. Walter (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Xucong Zhang – Mentor (TU Delft - Pattern Recognition and Bioinformatics)
Abstract
This paper explores superimposing synthetic noise on fMRI data to selectively degrade the performance of the Generic Object Decoding (GOD) model developed at the Kamitani Lab. The GOD model predicts the category of the image a subject viewed from their recorded fMRI brain activity. To evaluate how selectively the noise impacts performance, a new measure is proposed: the Noise Specificity Score (NSS). A highly selective noise pattern would allow sensitive data to be protected while retaining decoding performance on non-sensitive categories. An evolutionary approach that iteratively mutates noise candidates was chosen to maximise the NSS.
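The abstract does not spell out the NSS formula or the exact evolutionary operators, so the following is only a minimal sketch of the described loop. It assumes an illustrative specificity score of the form other_acc × (1 − sensitive_acc) and a toy stand-in decoder; the names decode_accuracy, nss, and evolve_noise are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-in for the GOD decoder. In the actual study this would be the
# Kamitani Lab decoder evaluated on noisy fMRI recordings; here it is a
# dummy so the sketch runs end to end.
def decode_accuracy(noise):
    strength = float(np.linalg.norm(noise))
    sensitive_acc = max(0.0, 0.9 - 0.5 * strength)  # noise hurts the protected category
    other_acc = max(0.0, 0.9 - 0.1 * strength)      # and, less so, the rest
    return sensitive_acc, other_acc

def nss(noise):
    """Illustrative specificity score: high when accuracy on the protected
    category drops while accuracy on the remaining categories is retained.
    The actual NSS is defined in the paper body, not reproduced here."""
    sensitive_acc, other_acc = decode_accuracy(noise)
    return other_acc * (1.0 - sensitive_acc)

def evolve_noise(n_voxels=1000, pop_size=20, generations=50, sigma=0.01):
    """Iteratively mutate noise candidates, keeping the highest-scoring elites."""
    population = rng.normal(0.0, sigma, size=(pop_size, n_voxels))
    for _ in range(generations):
        scores = np.array([nss(c) for c in population])
        elite = population[np.argsort(scores)[-pop_size // 4:]]  # top quarter survives
        population = np.repeat(elite, 4, axis=0)                 # refill the population
        population += rng.normal(0.0, sigma, population.shape)   # mutate offspring
    scores = np.array([nss(c) for c in population])
    return population[np.argmax(scores)], scores.max()

best_noise, best_score = evolve_noise()
print(f"best NSS (toy setting): {best_score:.3f}")
```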
Scores between 0.75 and 0.8 were achieved across three different categories. The results further support the GOD hypothesis that large image classification algorithms and the human visual cortex share analogous structures.
Limitations included limited computational resources and the inherent challenges of evolutionary algorithms. Consequently, several opportunities for future research are proposed. These include improvements to the current approach, specifically increasing the population and generation sizes of the evolutionary algorithm and enabling adaptive learning rates to escape local maxima, as well as an alternative approach based on evaluating each individual voxel's impact on per-category performance. Additionally, a novel architecture is proposed for future research that leverages pre-generated noise candidates and selects the most promising one for each visual stimulus (sketched below); this is predicted to achieve much better results with shorter training times.
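As a rough illustration of that proposed architecture, the sketch below selects a pre-generated noise candidate matched to a recording's predicted category and superimposes it only when that category is marked sensitive. The paper does not specify the selection mechanism, so the names protect, candidate_bank, predict_category, and sensitive_categories are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def protect(recording, candidate_bank, predict_category, sensitive_categories):
    """Sketch of the proposed architecture: rather than evolving noise per
    stimulus, select the most promising pre-generated candidate and apply
    it only when the recording's predicted category is sensitive."""
    category = predict_category(recording)
    if category in sensitive_categories:
        return recording + candidate_bank[category]  # superimpose matched noise
    return recording  # non-sensitive recordings pass through untouched

# Toy usage: two pre-generated candidates and a dummy category predictor.
n_voxels = 1000
candidate_bank = {"face": rng.normal(0.0, 0.01, n_voxels),
                  "text": rng.normal(0.0, 0.01, n_voxels)}
recording = rng.normal(0.0, 1.0, n_voxels)
protected = protect(recording, candidate_bank,
                    predict_category=lambda r: "face",
                    sensitive_categories={"face"})
```

Because the candidates are generated offline, the per-stimulus cost reduces to one category prediction and one lookup, which is why this design is expected to cut training time relative to evolving noise from scratch.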