Predicting Intensive Care Unit Readmission: Performance and Explainability of Machine Learning Algorithms

Abstract

Intensive Care Unit (ICU) readmission is a serious adverse event associated with high mortality rates and costs. Prediction of ICU readmission could support physicians in their decision to discharge patients from the ICU to lower-care wards. Due to the increasing availability of ICU data, Artificial Intelligence (AI) models in the form of machine learning (ML) algorithms can be used to build high-performing decision support tools. To have an impact on patient outcomes, these decision support tools should have high discriminative performance and should be explainable to the ICU physician. The goal of this thesis was to compare several types of ML models on predictive performance and explainability for the prediction of ICU readmission for discharge decision support. The scientific paper that addresses this goal can be found in Part III of this thesis. From a broader perspective, we proposed a framework for the development and implementation of clinically valuable AI-based decision support.
First, a systematic review was conducted to examine the current literature on ML prediction models for ICU readmission (Part I). We concluded that previously developed models reported inappropriate performance metrics and were not implemented in clinical practice. Furthermore, previous work did not compare models on their explainable outcomes, that is, the patient factors contributing to the risk of readmission. Second, we conducted a questionnaire among ICU physicians to investigate current discharge practices and their attitudes towards the use of AI tools in their work processes (Part II). Although not all physicians agreed that the decision to discharge ICU patients is complex, most of them believed in the clinical value of an AI-based discharge decision support tool. Third, we developed several prediction models for ICU readmission and compared them on discriminative performance, calibration properties, and explainability (Part III). We concluded that advanced ML models did not outperform logistic regression in terms of discriminative performance and calibration properties. However, the explanations of XGBoost, a state-of-the-art ML algorithm, were more in line with the ICU physician's clinical reasoning than those of logistic regression and neural networks. Lastly, we designed a study protocol to prospectively evaluate the predictive performance of Pacmed Critical, a CE-certified AI-based discharge decision support tool, and that of the ICU physician (Part IV).
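For illustration, the sketch below shows one way such a model comparison could be set up: three classifier types are fitted, evaluated on discrimination (AUROC) and calibration (Brier score), and the tree-based model is explained with SHAP values. This is not the thesis code; the data, features, and hyperparameters are synthetic placeholders rather than the clinical variables used in the study.

# Minimal sketch (not the thesis code): comparing three classifier types on
# discrimination (AUROC) and calibration (Brier score), and extracting
# per-patient SHAP explanations for the gradient-boosted model.
# Data are synthetic; the thesis used real ICU readmission data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss
from xgboost import XGBClassifier
import shap

# Synthetic stand-in for a discharge dataset (features at discharge, label = readmission).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.92], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "xgboost": XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss"),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    p = model.predict_proba(X_test)[:, 1]
    # Discrimination: AUROC (higher is better); calibration: Brier score (lower is better).
    print(f"{name}: AUROC={roc_auc_score(y_test, p):.3f}, Brier={brier_score_loss(y_test, p):.3f}")

# Patient-level explanations for the tree-based model via SHAP values.
explainer = shap.TreeExplainer(models["xgboost"])
shap_values = explainer.shap_values(X_test)
print("SHAP values for first test patient:", np.round(shap_values[0], 3))

In practice, such a comparison would also include calibration plots and a review of the explanations by ICU physicians, as described in Part III.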
This thesis contributed to making the step from developing high-performing prediction models to clinical adoption of an ICU discharge decision support system. Because the differences in discriminative power and calibration properties between models were small, the model that is best explainable to the physician and most in line with clinical reasoning should be chosen for decision support. Before final implementation, the impact on patient outcomes and costs will need to be studied in prospective trials.

Files

Master_thesis_Technical_Medici... (.pdf)
(.pdf | 4.8 Mb)
- Embargo expired in 01-01-2022