Searched for: subject:"Explanation"
(1 - 20 of 56)

Andringa, Jilles (author)
Machine learning models have improved Prognostics and Health Management (PHM) in aviation, notably in estimating the Remaining Useful Life (RUL) of aircraft engines. However, their 'black-box' nature limits transparency, critical in safety-sensitive aviation maintenance. Explainable AI (XAI), particularly Counterfactual (CF) explanations, offers...
master thesis 2024
Mehrotra, S. (author), Centeio Jorge, C. (author), Jonker, C.M. (author), Tielman, M.L. (author)
Appropriate trust is an important component of the interaction between people and AI systems, in that ‘inappropriate’ trust can cause disuse, misuse or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from their users. Out of the aspects that influence trust, this paper...
journal article 2024
Sharma, Bhawana (author), Sharma, Lokesh (author), Lal, C. (author), Roy, Satyabrata (author)
The Internet of Things (IoT) is currently seeing tremendous growth due to new technologies and big data. Research in the field of IoT security is an emerging topic. IoT networks are becoming more vulnerable to new assaults as a result of the growth in devices and the production of massive data. In order to recognize the attacks, an intrusion...
journal article 2024
Wehner, Jan (author)
Learning rewards from humans is a promising approach to aligning AI with human values. However, existing methods cannot consistently extract the correct reward functions from demonstrations or feedback. To allow humans to understand the limitations and misalignments of a learned reward function, we adopt the technique of counterfactual...
master thesis 2023
Zhou, Jing (author)
Explainable AI (XAI) has gained increasing attention from more and more researchers with an aim to improve human interaction with AI systems. In the context of human-agent teamwork (HAT), providing explainability to the agent helps to increase shared team knowledge and belief, therefore improving overall teamwork. With various backgrounds and...
master thesis 2023
Robbemond, Vincent (author)
Advances in artificial intelligence and machine learning have led to a steep rise in the adoption of AI to augment or support human decision-making across domains. There has been an increasing body of work addressing the benefits of model interpretability and explanations to help end-users or other stakeholders decipher the inner workings of...
master thesis 2023
Najafian, S. (author)
My thesis investigates what makes good explanations for group recommendations, considering the privacy concerns of group members. Let’s give an example. Have you ever been to lunch with other colleagues on a business trip? Do you recall how long it took you to pick a restaurant? In these situations, recommender systems could help people decide,...
doctoral thesis 2023
Olatunji, Iyiola E (author), Rathee, Mandeep (author), Funke, Thorben (author), Khosla, M. (author)
Privacy and interpretability are two important ingredients for achieving trustworthy machine learning. We study the interplay of these two aspects in graph machine learning through graph reconstruction attacks. The goal of the adversary here is to reconstruct the graph structure of the training data given access to model explanations. Based on...
conference paper 2023
Buijsman, S.N.R. (author)
Machine learning is used more and more in scientific contexts, from the recent breakthroughs with AlphaFold2 in protein fold prediction to the use of ML in parametrization for large climate/astronomy models. Yet it is unclear whether we can obtain scientific explanations from such models. I argue that when machine learning is used to conduct...
journal article 2023
Mehrotra, S. (author), Centeio Jorge, C. (author), Jonker, C.M. (author), Tielman, M.L. (author)
Establishing an appropriate level of trust between people and AI systems is crucial to avoid the misuse, disuse, or abuse of AI. Understanding how AI systems can generate appropriate levels of trust among users is necessary to achieve this goal. This study focuses on the impact of displaying integrity, which is one of the factors that influence...
poster 2023
Altmeyer, P. (author), Liem, C.C.S. (author), van Deursen, A. (author)
We present CounterfactualExplanations.jl: a package for generating Counterfactual Explanations (CE) and Algorithmic Recourse (AR) for black-box models in Julia. CE explain how inputs into a model need to change to yield specific model predictions. Explanations that involve realistic and actionable changes can be used to provide AR: a set of...
conference paper 2023
Altmeyer, P. (author), Giovan, Angela (author), Buszydlik, Aleksander (author), Dobiczek, Karol (author), van Deursen, A. (author), Liem, C.C.S. (author)
Existing work on Counterfactual Explanations (CE) and Algorithmic Recourse (AR) has largely focused on single individuals in a static environment: given some estimated model, the goal is to find valid counterfactuals for an individual instance that fulfill various desiderata. The ability of such counterfactuals to handle dynamics like data and...
conference paper 2023
Barile, Francesco (author), Draws, T.A. (author), Inel, Oana (author), Rieger, A. (author), Najafian, S. (author), Ebrahimi Fard, Amir (author), Hada, Rishav (author), Tintarev, N. (author)
Social choice aggregation strategies have been proposed as an explainable way to generate recommendations to groups of users. However, it is not trivial to determine the best strategy to apply for a specific group. Previous work highlighted that the performance of a group recommender system is affected by the internal diversity of the group...
journal article 2023
Lyu, L. (author), Anand, A. (author)
This paper proposes a novel approach towards better interpretability of a trained text-based ranking model in a post-hoc manner. A popular approach to post-hoc interpretability of text ranking models is based on locally approximating the model behavior using a simple ranker. Since rankings have multiple relevance factors and are aggregations...
conference paper 2023
Yurrita Semperena, M. (author), Draws, T.A. (author), Balayn, A.M.A. (author), Murray-Rust, D.S. (author), Tintarev, N. (author), Bozzon, A. (author)
Recent research claims that information cues and system attributes of algorithmic decision-making processes affect decision subjects' fairness perceptions. However, little is still known about how these factors interact. This paper presents a user study (N = 267) investigating the individual and combined effects of explanations, human...
conference paper 2023
Agiollo, A. (author), Cavalcante Siebert, L. (author), Murukannaiah, P.K. (author), Omicini, Andrea (author)
Although popular and effective, large language models (LLMs) are characterised by a performance vs. transparency trade-off that hinders their applicability to sensitive scenarios. This is the main reason behind many approaches focusing on local post-hoc explanations recently proposed by the XAI community. However, to the best of our knowledge,...
conference paper 2023
Ciatto, Giovanni (author), Magnini, Matteo (author), Buzcu, Berk (author), Aydoğan, Reyhan (author), Omicini, Andrea (author)
Building on prior works on explanation negotiation protocols, this paper proposes a general-purpose protocol for multi-agent systems where recommender agents may need to provide explanations for their recommendations. The protocol specifies the roles and responsibilities of the explainee and the explainer agent and the types of information...
conference paper 2023
Bharos, Abri (author)
Powerful predictive AI systems have demonstrated great potential in augmenting human decision-making. Recent empirical work has argued that the vision for optimal human-AI collaboration requires ‘appropriate reliance’ of humans on AI systems. However, accurately estimating the trustworthiness of AI advice at the instance level is quite...
master thesis 2022
Kap, Ryan (author)
Communication is one of the main challenges in Human-Agent Teams (HATs). An important aspect of communication in HATs is the use of explanation styles. This thesis examines the influence of an explainable agent adapting its explanation style to a supervising human team leader on team performance, trust, situation awareness, collaborative fluency...
master thesis 2022
Buszydlik, Aleksander (author)
Algorithmic recourse aims to provide individuals affected by a negative classification outcome with actions which, if applied, would flip this outcome. Various approaches to the generation of recourse have been proposed in the literature; these are typically assessed on statistical measures such as the validity of generated explanations or their...
bachelor thesis 2022