Explainability in Deep Learning Segmentation Models for Breast Cancer by Analogy with Texture Analysis

Conference Paper (2024)
Author(s)

Md Masum Billah (Åbo Akademi University)

Pragati Manandhar (Åbo Akademi University)

Sarosh Krishan (Åbo Akademi University)

Alejandro Cedillo (Åbo Akademi University)

Hergys Rexha (Åbo Akademi University)

Sébastien Lafond (Åbo Akademi University)

Kurt K Benke (University of Melbourne)

Sepinoud Azimi (TU Delft - Information and Communication Technology)

Janan Arslan (Institut du Cerveau)

Research Group
Information and Communication Technology
Publication Year
2024
Language
English
Event
Medical Imaging with Deep Learning (2024-07-03 - 2024-07-05), Paris, France
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Despite their predictive capabilities and rapid advancement, the black-box nature of Artificial Intelligence (AI) models, particularly in healthcare, has sparked debate regarding their trustworthiness and accountability. In response, the field of Explainable AI (XAI) has emerged, aiming to create transparent AI technologies. We present a novel approach to enhancing AI interpretability by leveraging texture analysis, with a focus on cancer datasets. By focusing on specific texture features extracted from medical images and their correlations with prediction outcomes, our proposed methodology aims to elucidate the underlying mechanics of AI, improve AI trustworthiness, and facilitate human understanding. The code is available at https://github.com/xrai-lib/xai-texture.
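The core idea described in the abstract (correlating texture features from images with model predictions) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a classic gray-level co-occurrence matrix (GLCM) contrast feature and a Pearson correlation, and the images and prediction scores below are synthetic placeholders.

```python
import numpy as np

def glcm_contrast(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix (GLCM) contrast for one image.

    Quantize the image to `levels` gray levels, count co-occurring
    pixel pairs at the given `offset`, and return the contrast
    statistic: sum over (i, j) of (i - j)^2 * P(i, j).
    """
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    dy, dx = offset
    # Pixel pairs (a, b) separated by the offset.
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * p).sum())

# Hypothetical example: correlate a texture feature with model scores.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (32, 32)) for _ in range(20)]
scores = rng.random(20)  # placeholder for a segmentation model's outputs
contrasts = [glcm_contrast(im) for im in images]
r = np.corrcoef(contrasts, scores)[0, 1]  # Pearson correlation in [-1, 1]
```

A strong correlation between a texture feature and the model's output would suggest that feature is part of what the model responds to, which is the kind of human-interpretable evidence the approach aims to surface.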