Understanding from Machine Learning Models

Journal Article (2019)
Author(s)

E. Sullivan (TU Delft - Web Information Systems)

Research Group
Web Information Systems
DOI
https://doi.org/10.1093/bjps/axz035
Publication Year
2019
Language
English
Issue number
1
Volume number
73
Pages (from-to)
109-133
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Simple idealized models seem to provide more understanding than opaque, complex, and hyperrealistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this article, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.

Files

Und_MLM_Sullivan_penultiamte.p... (pdf, 0.944 MB)
- Embargo expired in 01-01-2022
License info not available