Improving Adversarial Attacks on Decision Tree Ensembles

Title: Improving Adversarial Attacks on Decision Tree Ensembles: Exploring the impact of starting points on attack performance
Author: Pigmans, Max (TU Delft Electrical Engineering, Mathematics and Computer Science; TU Delft Cyber Security)
Contributor: Verwer, S.E. (mentor); Anand, A. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science | Cyber Security
Date: 2024-04-15

Abstract:
Most adversarial attacks suitable for attacking decision tree ensembles work by running multiple local searches from randomly selected starting points around the victim to be attacked. In this thesis we investigate the impact of these starting points on the performance of the attack, and find that they matter significantly: some starting points do much better than others. However, this does not hold for all attacked points, as points differ widely in how difficult they are to attack, and for every dataset some points are always optimally attacked.

We compare the baseline of randomly selected points to three alternative strategies. First, we try alternative random distributions, varying both the standard deviation, to create a narrower cone around the victim point, and the mean, to create bimodal distributions further away from the victim point. For some datasets these yield up to $5$-$7\%$ improved performance on subsets of the dataset, but the improvements do not generalize to the remainder of the dataset. In general, as long as the distribution is wide enough to successfully find starting points, we observe no substantial performance change.

Secondly, we remove the randomness and attack from a fixed direction.
For the simpler datasets we find that a fixed starting direction can outperform random starting points, but for larger datasets performance becomes much worse. We also try attacking from all main directions around the victim point, which performs much worse than using $5$-$20$ times fewer random points. Lastly, we create an attack strategy that selects the closest points that scored well on previously attacked victims. On smaller test sets this strategy is outperformed by the baseline, but when we extend the attack and supply more previously well-performing starting points, we match or slightly outperform the baseline.

Subject: Adversarial attacks; Tree ensemble; Cyber security; Adversarial examples
To reference this document use: http://resolver.tudelft.nl/uuid:4062f33b-12c6-49ca-98f3-685c207a3fb7
Part of collection: Student theses
Document type: master thesis
Rights: © 2024 Max Pigmans
Files: PDF master_thesis_max_pigmans.pdf (2.65 MB)
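The abstract's first strategy, sampling local-search starting points from a distribution around the victim, can be sketched as follows. This is a minimal hypothetical illustration, not the thesis's implementation: the function name, the Gaussian perturbation model, and the `mean_offset` parameter (a per-coordinate shift with random sign, producing the bimodal distributions mentioned above) are all assumptions for the sake of the example.

```python
import numpy as np

def sample_starting_points(victim, n_points, std=0.1, mean_offset=0.0, seed=None):
    """Sample starting points for local searches around a victim point.

    Hypothetical sketch of the abstract's sampling strategy:
    - `std` controls the width of the cone around the victim
      (smaller values give a narrower cone);
    - a nonzero `mean_offset` shifts each coordinate away from the
      victim with a random sign, yielding a bimodal distribution
      centered away from the victim point.
    """
    rng = np.random.default_rng(seed)
    d = len(victim)
    signs = rng.choice([-1.0, 1.0], size=(n_points, d))
    noise = rng.normal(loc=0.0, scale=std, size=(n_points, d))
    return victim + signs * mean_offset + noise

# Example: 8 starting points around a 4-dimensional victim.
victim = np.zeros(4)
starts = sample_starting_points(victim, n_points=8, std=0.05, mean_offset=0.2, seed=0)
```

Each row of `starts` would then seed one local search; the abstract's finding is that which of these seeds is chosen matters considerably for attack quality, while the exact width of the distribution matters little once it is wide enough.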