Accountable AI for Healthcare IoT Systems

Conference Paper (2022)
Authors

P. Bagave (TU Delft - Information and Communication Technology)

Marcus Westberg (TU Delft - Information and Communication Technology)

Roel Dobbe (TU Delft - Information and Communication Technology)

M.F.W.H.A. Janssen (TU Delft - Engineering, Systems and Services)

Aaron Yi Ding (TU Delft - Information and Communication Technology)

Research Group
Information and Communication Technology
Copyright
© 2022 P. Bagave, M. Westberg, R.I.J. Dobbe, M.F.W.H.A. Janssen, Aaron Yi Ding
Publication Year
2022
Language
English
Pages (from-to)
20-28
ISBN (electronic)
9781665474085
DOI
https://doi.org/10.1109/TPS-ISA56441.2022.00013
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

AI systems have become part of our daily lives, supporting decision-making in both critical and non-critical scenarios. Although these systems are widely adopted across different sectors, they have not been used to their full potential in critical domains such as the healthcare sector enabled by the Internet of Things (IoT). One important factor hindering adoption is the question of accountability for decisions and outcomes affected by an AI system, where accountability is understood as a means to ensure a system's performance. However, the term is interpreted differently across sectors. Since the EU's GDPR and the US Congress have emphasised the importance of enabling accountability in AI systems, there is a strong demand to understand and conceptualise the term. It is crucial to address the various aspects tied to accountability and to understand how they affect the adoption of AI systems. In this paper, we conceptualise the factors affecting accountability and how accountability contributes to a trustworthy healthcare AI system. Focusing on healthcare IoT systems, our conceptual mapping helps readers understand which system aspects these factors contribute to and how they affect system trustworthiness. Besides illustrating accountability in detail, we share our vision of causal interpretability as a means to enhance accountability for healthcare AI systems. The insights of this paper contribute to academic research on accountability and benefit AI developers and practitioners in the healthcare sector.

Files

Accountable_AI_for_Healthcare_... (pdf)
(pdf | 2.25 MB)
- Embargo expired on 15-09-2023
License info not available