Rights and Wrongs in Talk of Mind-Reading Technology

Journal Article (2024)
Author(s)

S. Rainey (TU Delft - Ethics & Philosophy of Technology)

Copyright
© 2024 S. Rainey
DOI
https://doi.org/10.1017/S0963180124000045
Publication Year
2024
Language
English
Journal
Cambridge Quarterly of Healthcare Ethics
Issue number
4
Volume number
33
Pages (from-to)
521-531
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This article examines the idea of mind-reading technology by focusing on an interesting case of applying a large language model (LLM) to brain data. On the face of it, experimental results appear to show that mental contents can be reconstructed directly from brain data by processing them through a ChatGPT-like LLM. However, the author argues that this apparent conclusion is not warranted. An examination of how LLMs work shows that they differ from natural language in an important respect: an LLM operates through nonrational data transformations derived from a large textual corpus, whereas natural language has a rational dimension, being based on reasons. On this basis, it is argued that brain data does not directly reveal mental content but can be processed to ground indirect predictions about mental content. The author concludes that this is impressive but differs in principle from technology-mediated mind reading. Applications of LLM-based brain data processing are nevertheless promising for speech rehabilitation and novel communication methods.