Explainability in Deep Learning Segmentation Models for Breast Cancer by Analogy with Texture Analysis
Md Masum Billah (Åbo Akademi University)
Pragati Manandhar (Åbo Akademi University)
Sarosh Krishan (Åbo Akademi University)
Alejandro Cedillo (Åbo Akademi University)
Hergys Rexha (Åbo Akademi University)
Sébastien Lafond (Åbo Akademi University)
Kurt K Benke (University of Melbourne)
Sepinoud Azimi (TU Delft - Information and Communication Technology)
Janan Arslan (Institut du Cerveau)
Abstract
Despite their predictive capabilities and rapid advancement, the black-box nature of Artificial Intelligence (AI) models, particularly in healthcare, has sparked debate regarding their trustworthiness and accountability. In response, the field of Explainable AI (XAI) has emerged, aiming to create transparent AI technologies. We present a novel approach to enhancing AI interpretability by leveraging texture analysis, with a focus on cancer datasets. By correlating specific texture features extracted from medical images with prediction outcomes, our proposed methodology aims to elucidate the underlying mechanics of AI, improve AI trustworthiness, and facilitate human understanding. The code is available at https://github.com/xrai-lib/xai-texture.
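The core idea of relating texture features to prediction outcomes can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it assumes a classic Haralick-style grey-level co-occurrence (GLCM) contrast feature, synthetic image patches, and synthetic prediction scores, and then measures their Pearson correlation.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """GLCM contrast for horizontal neighbours.

    Quantise the image (values in [0, 1]) to `levels` grey levels,
    count horizontal co-occurrence pairs, and weight each pair
    (i, j) by (i - j)**2 -- a classic Haralick texture feature.
    """
    q = np.minimum((img * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    idx = np.arange(levels)
    return float((glcm * (idx[:, None] - idx[None, :]) ** 2).sum())

rng = np.random.default_rng(0)

# Hypothetical stand-ins for image patches and model outputs:
# noisier (more textured) patches are assigned higher scores.
noise_levels = rng.uniform(0.0, 1.0, size=50)
patches = [np.clip(0.5 + n * rng.normal(0, 0.3, (32, 32)), 0, 1)
           for n in noise_levels]
scores = noise_levels + rng.normal(0, 0.05, size=50)

# Correlate the texture feature with the prediction outcome.
contrasts = np.array([glcm_contrast(p) for p in patches])
r = np.corrcoef(contrasts, scores)[0, 1]
print(f"Pearson r between GLCM contrast and prediction score: {r:.2f}")
```

A strong correlation between a texture feature and the model's output suggests that feature is a human-interpretable proxy for what the model is responding to, which is the kind of link the proposed methodology exploits.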