Searched for: contributor:"Altmeyer, P. (mentor)"
(1 - 4 of 4)
Zagorac, Ivor (author), master thesis, 2024
Counterfactual explanations (CEs) are emerging as a crucial tool in Explainable AI (XAI) for understanding model decisions. This research investigates the impact of various factors on the quality of CEs generated for classification tasks. We explore how inter-class distance, data imbalance, balancing techniques, the presence of biased...
Angela, Giovan (author), bachelor thesis, 2022
Machine learning classifiers have become a household tool for banks, companies, and government institutes for automated decision-making. To help explain why a person was classified a certain way, solutions have been proposed that generate counterfactual explanations. Several generators have been introduced and tested but include...
Buszydlik, Aleksander (author), bachelor thesis, 2022
Algorithmic recourse aims to provide individuals affected by a negative classification outcome with actions which, if applied, would flip this outcome. Various approaches to the generation of recourse have been proposed in the literature; these are typically assessed on statistical measures such as the validity of generated explanations or their...
Dobiczek, Karol (author), bachelor thesis, 2022
Employing counterfactual explanations in a recourse process gives a positive outcome to an individual, but it also shifts their corresponding data point. For systems where models are updated frequently, a change might be seen when recourse is applied, and after multiple rounds, severe shifts in both model and domain may occur. Algorithmic...