This paper explores superimposing synthetic noise on fMRI data to selectively impair the performance of the Generic Object Decoding (GOD) model developed at the Kamitani Lab. The GOD model predicts the categories of images that subjects viewed from their recorded fMRI brain activity. To evaluate how selectively the noise can impair performance, a new measure is proposed: the Noise Specificity Score (NSS). A highly selective noise pattern would allow sensitive data to be protected while retaining performance on non-sensitive categories. An evolutionary approach that iteratively mutates noise candidates was chosen to maximise the NSS.
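To illustrate the general shape of such an evaluation-and-mutation loop, the Python sketch below computes a specificity-style score and hill-climbs over Gaussian perturbations of a noise candidate. The decoder interface (decoder.predict), the scoring formula, and all hyperparameters are illustrative assumptions, not the paper's actual NSS definition or evolutionary setup.

```python
import numpy as np

def noise_specificity_score(decoder, fmri, labels, noise, target_category):
    """Illustrative specificity-style score (not the paper's exact NSS):
    rewards a large accuracy drop on the target category combined with
    preserved accuracy on all other categories, scaled to [0, 1]."""
    clean_pred = decoder.predict(fmri)          # assumed decoder API
    noisy_pred = decoder.predict(fmri + noise)  # noise superimposed on voxel values

    target = labels == target_category
    other = ~target

    def accuracy(pred, mask):
        return np.mean(pred[mask] == labels[mask])

    target_drop = accuracy(clean_pred, target) - accuracy(noisy_pred, target)
    other_retained = accuracy(noisy_pred, other) / max(accuracy(clean_pred, other), 1e-9)
    return 0.5 * (np.clip(target_drop, 0.0, 1.0) + np.clip(other_retained, 0.0, 1.0))


def evolve_noise(decoder, fmri, labels, target_category,
                 generations=100, population=20, sigma=0.05, seed=None):
    """Minimal (1 + lambda) evolutionary loop: mutate the current best noise
    candidate with Gaussian perturbations and keep any improvement."""
    rng = np.random.default_rng(seed)
    best = np.zeros(fmri.shape[1])  # one additive value per voxel
    best_score = noise_specificity_score(decoder, fmri, labels, best, target_category)

    for _ in range(generations):
        for _ in range(population):
            candidate = best + rng.normal(0.0, sigma, size=best.shape)
            score = noise_specificity_score(decoder, fmri, labels,
                                            candidate, target_category)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```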
Scores ranging from 0.75 to 0.8 were achieved across three different categories. The results also lend further support to the GOD hypothesis of analogous structures between large image-classification models and the human visual cortex.
Limitations included available computational resources and the inherent challenges of evolutionary algorithms. Consequently, multiple opportunities for future research are proposed. These include improvements to the current approach, such as increasing the population and generation sizes of the evolutionary algorithm and using adaptive learning rates to escape local maxima, as well as an alternative approach based on evaluating each individual voxel's impact on per-category performance. Additionally, a novel architecture is proposed for future research that leverages pre-generated noise candidates and selects the most promising one for each visual stimulus (a minimal sketch of the selection step follows below); this is predicted to achieve substantially better results with shorter training times.
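The selection step of that proposed architecture could, under assumed interfaces, be as simple as the sketch below. The noise bank and the scoring function are hypothetical placeholders standing in for pre-generated candidates and an NSS-style criterion; the final design is left to future work.

```python
import numpy as np

def select_noise_candidate(fmri_sample, noise_bank, score_fn):
    """Given one stimulus's fMRI pattern and a bank of pre-generated noise
    candidates, return the candidate that scores highest under score_fn.
    Both noise_bank and score_fn are hypothetical placeholders."""
    scores = np.array([score_fn(fmri_sample, noise) for noise in noise_bank])
    best = int(np.argmax(scores))
    return noise_bank[best], float(scores[best])
```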