The Risks of Risk Assessment
Causal Blind Spots When Using Prediction Models for Treatment Decisions
Nan van Geloven (Leiden University Medical Center)
Ruth H. Keogh (London School of Hygiene and Tropical Medicine)
Wouter van Amsterdam (University Medical Centre Utrecht)
Giovanni Cinà (Universiteit van Amsterdam, Pacmed)
Jesse H. Krijthe (TU Delft - Pattern Recognition and Bioinformatics)
Niels Peek (University of Cambridge, The University of Manchester)
Kim Luijken (University Medical Centre Utrecht)
Sara Magliacane (Universiteit van Amsterdam)
Paweł Morzywołek (Universiteit Gent, University of Washington)
et al.
Abstract
Clinicians increasingly rely on prediction models to guide treatment choices. Most prediction models, however, are developed using observational data that include patients who have already received the treatment the model is meant to inform decisions about. Special attention to the causal role of these earlier treatments is required when interpreting the resulting predictions.
“Causal blind spots” were identified in 3 common approaches to handling treatment when developing a prediction model: including treatment as a predictor, restricting to persons taking a certain treatment, and ignoring treatment. Through several real examples, this article illustrates how the risks obtained from models developed using such approaches may be misinterpreted and can lead to misinformed decision making. The discussion covers issues attributable to confounding, selection, mediation, and changes in treatment protocols over time.
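As a minimal illustration of the confounding blind spot, the simulation below uses an entirely hypothetical data-generating process (the variable names `severity`, `treated`, and the coefficients are assumptions, not taken from the article): sicker patients are more likely to be treated, and treatment lowers risk. A model that ignores treatment then understates the risk that high-severity patients would face if left untreated, which is the quantity a "should we treat?" decision needs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process (assumed for illustration only):
# sicker patients are more likely to be treated (confounding by
# indication), and treatment lowers the risk of the outcome.
severity = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-2 * severity))
treated = rng.random(n) < p_treat
p_outcome = 1 / (1 + np.exp(-(severity - 1.5 * treated)))
outcome = rng.random(n) < p_outcome

# "Ignoring treatment": the observed risk among high-severity patients,
# most of whom were in fact treated in the development data.
high = severity > 1
observed_risk = outcome[high].mean()

# The risk these patients would face if left untreated. We can compute it
# here only because we simulated the data; in real observational data it
# is not directly observed.
untreated_risk = (1 / (1 + np.exp(-severity[high]))).mean()

print(f"observed risk among high-severity patients: {observed_risk:.2f}")
print(f"their risk if left untreated:               {untreated_risk:.2f}")
```

The observed risk is markedly lower than the untreated risk, so reading the model's output as "risk if we do nothing" would understate the danger for exactly the patients most likely to benefit from treatment.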
An extension of guidelines for the development, reporting, and evaluation of prediction models is advocated to avoid such misinterpretations. Developers must clearly communicate the intended target population for the model and the treatment conditions under which its predictions hold. When prediction models are intended to inform treatment decisions, they need to provide estimates of risk under the specific treatment (or intervention) options being considered, known as “prediction under interventions.” In addition to suitable data, this requires causal reasoning and causal inference techniques during model development and evaluation. Being clear about what a given prediction model can and cannot be used for prevents misinformed treatment decisions and thereby avoids potential harm to patients.
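The "prediction under interventions" idea can be sketched with a toy example. Under the (strong, assumed) conditions of no unmeasured confounding and a correctly specified outcome model, one can fit P(outcome | confounders, treatment) and then predict a new patient's risk with treatment set to each option being considered. Everything below — the simulated data, the single confounder `severity`, and the coefficients — is a hypothetical illustration, not the article's method or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical training data (assumed DGP): treatment assignment depends
# on severity, so severity must be measured and adjusted for. This sketch
# assumes no unmeasured confounding.
severity = rng.normal(size=n)
treated = (rng.random(n) < 1 / (1 + np.exp(-severity))).astype(float)
outcome = (rng.random(n) < 1 / (1 + np.exp(-(severity - 1.0 * treated)))).astype(float)

# Fit P(outcome | severity, treatment) by logistic regression,
# using a few Newton-Raphson steps on the log-likelihood.
X = np.column_stack([np.ones(n), severity, treated])
w = np.zeros(3)
for _ in range(10):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - outcome) / n
    hess = (X * (p * (1 - p))[:, None]).T @ X / n
    w -= np.linalg.solve(hess, grad)

# Prediction under interventions for a new patient (severity = 1.2):
# evaluate the fitted model with treatment set to each option considered.
x_untreated = np.array([1.0, 1.2, 0.0])
x_treated = np.array([1.0, 1.2, 1.0])
risk_untreated = 1 / (1 + np.exp(-(x_untreated @ w)))
risk_treated = 1 / (1 + np.exp(-(x_treated @ w)))

print(f"risk if not treated: {risk_untreated:.2f}")
print(f"risk if treated:     {risk_treated:.2f}")
```

The pair of risks, rather than a single observational risk, is what supports a treatment decision. The validity of such predictions rests entirely on the causal assumptions (here, that `severity` captures all confounding), which is why model development and evaluation need causal inference techniques, not only predictive performance checks.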