Searched for: subject:"Interpretable Machine Learning"
(1 - 10 of 10)
document
Nawar, Ziad Ahmad Saad Soliman (author)
Machine learning (ML) systems for computer vision applications are widely deployed in decision-making contexts, including high-stakes domains such as autonomous driving and medical diagnosis. While such systems greatly accelerate decision-making, they have been found to suffer from severe reliability issues, i.e., they can easily...
master thesis 2023
document
KINDYNIS, Chrysanthos (author)
In this paper, we tackle the problem of creating decision trees that are both optimal and individually fair. While decision trees are popular due to their interpretability, achieving optimality can be difficult. Existing approaches either lack scalability or fail to consider individual fairness. To address this, we define individual fairness as...
bachelor thesis 2023
document
van den Bos, Mim (author)
Decision trees make decisions in a way that is interpretable to humans; this is important as machines are increasingly used to aid high-stakes and socially sensitive decisions. While heuristics have long been used to find decision trees with reasonable accuracy, recent approaches find fully optimal trees. Due to the computational...
bachelor thesis 2023
document
De Bosscher, Benjamin (author)
Airport terminals are complex sociotechnical systems, in which humans interact with diverse technical systems. A natural way to represent them is through agent-based modeling. However, this method has two drawbacks: it entails a heavy computational burden and the emergent properties are often difficult to analyze. The purpose of our research is...
master thesis 2023
document
De Bosscher, Benjamin C.D. (author), Mohammadi Ziabari, S.S. (author), Sharpanskykh, Alexei (author)
Airport terminals are complex sociotechnical systems, in which humans interact with diverse technical systems. A natural way to represent them is through agent-based modeling. However, this method has two drawbacks: it entails a heavy computational burden and the emergent properties are often difficult to analyze. The purpose of our research...
journal article 2023
document
Zheng, Meng (author)
Machine learning models are often called "black boxes," meaning people cannot easily observe the relationship between the input and output or explain the reason for such results. In recent years, much work has been done on interpretable machine learning, such as Shapley values, counterfactual explanations, partial dependence plots, or saliency...
master thesis 2022
document
WANG, Siwei (author)
master thesis 2022
document
Zhang, Zijian (author), Setty, Vinay (author), Anand, A. (author)
We introduce SparCAssist, a general-purpose risk assessment tool for the machine learning models trained for language tasks. It evaluates models' risk by inspecting their behavior on counterfactuals, namely out-of-distribution instances generated based on the given data instance. The counterfactuals are generated by replacing tokens in...
conference paper 2022
document
van der Waa, J.S. (author), Schoonderwoerd, Tjeerd (author), Diggelen, Jurriaan van (author), Neerincx, M.A. (author)
Decision support systems (DSS) have improved significantly but have become more complex due to recent advances in Artificial Intelligence. Current XAI methods generate explanations of model behaviour to facilitate a user's understanding, which fosters trust in the DSS. However, little focus has been placed on the development of methods that establish and...
journal article 2020
document
Shen, Qiaomu (author), Wu, Yanhong (author), Jiang, Yuzhe (author), Zeng, Wei (author), Lau, Alexis K.H. (author), Vilanova Bartroli, A. (author), Qu, Huamin (author)
Recent attempts at utilizing visual analytics to interpret Recurrent Neural Networks (RNNs) mainly focus on natural language processing (NLP) tasks that take symbolic sequences as input. However, many real-world problems like environment pollution forecasting apply RNNs on sequences of multi-dimensional data where each dimension represents an...
conference paper 2020