Searched for: +
(1 - 4 of 4)
document
Nadeem, A. (author), Vos, D.A. (author), Cao, C.S. (author), Pajola, Luca (author), Dieck, S. (author), Baumgartner, R. (author), Verwer, S.E. (author)
Explainable Artificial Intelligence (XAI) aims to improve the transparency of machine learning (ML) pipelines. We systematize the growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers,...
conference paper 2023
document
Vos, D.A. (author), Verwer, S.E. (author)
Interpretability of reinforcement learning policies is essential for many real-world tasks, but learning such interpretable policies is a hard problem. In particular, rule-based policies such as decision trees and rule lists are difficult to optimize due to their non-differentiability. While existing techniques can learn verifiable decision...
conference paper 2023
document
Vos, D.A. (author), Verwer, S.E. (author)
Decision trees are popular models for their interpretability and their success in ensemble models for structured data. However, common decision tree learning algorithms produce models that suffer from adversarial examples. Recent work on robust decision tree learning mitigates this issue by taking adversarial perturbations into...
conference paper 2023
document
Vos, D.A. (author), Verwer, S.E. (author)
Recently it has been shown that many machine learning models are vulnerable to adversarial examples: perturbed samples that trick the model into misclassifying them. Neural networks have received much attention but decision trees and their ensembles achieve state-of-the-art results on tabular data, motivating research on their robustness....
conference paper 2021