Searched for: contributor: "Corti, L. (mentor)"
(1 - 7 of 7)
document
Karnani, Simran (author)
In recent years, there has been growing interest among researchers in the explainability, fairness, and robustness of Computer Vision models. While studies have explored the usability of these models for end users, limited research has delved into the challenges and requirements faced by the researchers investigating these properties. This study...
master thesis 2023
document
Oltmans, Rembrandt (author)
Despite the low adoption rates of artificial intelligence (AI) in respiratory medicine, its potential to improve patient outcomes is substantial. To facilitate the integration of AI systems into the clinical setting, it is essential to prioritise the development of explainable AI (XAI) solutions that improve the understanding of the AI...
master thesis 2023
document
Ziad Ahmad Saad Soliman Nawar, Ziad (author)
Machine learning (ML) systems for computer vision applications are widely deployed in decision-making contexts, including high-stakes domains such as autonomous driving and medical diagnosis. While largely accelerating the decision-making process, those systems have been found to suffer from a severe issue of reliability, i.e., they can easily...
master thesis 2023
document
Singh, Shivani (author)
The goal of this paper is to examine how different presentation strategies of Explainable Artificial Intelligence (XAI) explanation methods for textual data affect non-expert understanding in the context of fact-checking. The importance of understanding the decision of an Artificial Intelligence (AI) in human-AI interaction and the need for...
bachelor thesis 2023
document
Smit, Jean-Paul (author)
Deep-learning (DL) models could greatly advance the automation of fact-checking, yet have not widely been adopted by the public because of their hard-to-explain nature. Although various techniques have been proposed to use local explanations for the behaviour of DL models, little attention has been paid to global explanations. In response,...
bachelor thesis 2023
document
Simons, Annabel (author)
In today's society, claims are everywhere, in the online and offline world. Fact-checking models can check these claims and predict if a claim is true or false, but how can these models be checked? Post-hoc XAI feature attribution methods can be used for this. These methods give scores indicating the influence of the individual tokens on the...
bachelor thesis 2023
document
Afriat, Eliott (author)
We seek to examine the vulnerability of BERT-based fact-checking. We implement a gradient-based adversarial attack strategy based on HotFlip, swapping individual tokens in the input. We use this on a pre-trained ExPred model for fact-checking. We find that gradient-based adversarial attacks are ineffective against ExPred. Uncertainties about...
bachelor thesis 2023