
A. Agiollo


Backdoor attacks targeting Neural Networks meet little to no resistance in causing misclassifications, thanks to an injected trigger. Neuro-symbolic architectures combine such networks with symbolic components to introduce semantic knowledge into purely connectionist designs. Th ...

Towards Benchmarking the Robustness of Neuro-Symbolic Learning against Data Poisoning Backdoor Attacks

Evaluating the Robustness of Logic Tensor Networks under BadNet attacks

Neural Networks have become standard solutions in many relevant real-life applications, such as healthcare. Yet their vulnerability to backdoor attacks is a concern: these attacks modify a small portion of the data or the model to implant hidden, trigger-activated behaviors. Neuro-symbo ...
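As a rough, illustrative sketch of the data poisoning described in these abstracts (not code from any of the papers), the Python snippet below stamps a small trigger patch onto a random fraction of training images and flips their labels to an attacker-chosen class, in the style of BadNets. All names and parameters here (stamp_trigger, badnets_poison, rate, target_class) are hypothetical.

import numpy as np

def stamp_trigger(img, size=3, value=1.0):
    # Set a small square patch in the bottom-right corner to a fixed value;
    # this patch is the backdoor trigger.
    out = img.copy()
    out[-size:, -size:] = value
    return out

def badnets_poison(images, labels, target_class, rate=0.05, seed=0):
    # Dirty-label BadNets-style poisoning: stamp the trigger on a random
    # fraction of the training images and relabel them as target_class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = max(1, int(rate * len(labels)))
    for i in rng.choice(len(labels), size=n_poison, replace=False):
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels

A model trained on such a set typically behaves normally on clean inputs but predicts target_class whenever the trigger patch appears at test time.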
The growing reliance on Artificial Intelligence (AI) systems increases the need for their understandability and explainability. In response, Neuro-Symbolic (NeSy) models have been introduced to separate neural classification from symbolic logic. Traditional deep learning models ...
Neuro-Symbolic (NeSy) models combine the generalization ability of neural networks with the interpretability of symbolic reasoning. While the vulnerability of neural networks to backdoor data poisoning attacks is well-documented, the implications of such attacks for NeSy models remain underexp ...
Neuro-Symbolic (NeSy) models promise better interpretability and robustness than conventional neural networks, yet their resilience to data poisoning backdoors is largely untested. This work investigates that gap by attacking a Logic Tensor Network (LTN) with clean-label triggers ...
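For the clean-label setting mentioned in the last record, the trigger is stamped only on images whose true label already equals the target class, so no label is ever flipped; the attack relies on the model learning to associate the patch with that class. A minimal sketch under the same illustrative assumptions, reusing the hypothetical stamp_trigger helper from the snippet above:

import numpy as np

def clean_label_poison(images, labels, target_class, rate=0.05, seed=0):
    # Clean-label poisoning: stamp the trigger only on images that already
    # belong to target_class, leaving every label untouched.
    rng = np.random.default_rng(seed)
    images = images.copy()
    pool = np.flatnonzero(labels == target_class)
    n_poison = min(len(pool), max(1, int(rate * len(labels))))
    for i in rng.choice(pool, size=n_poison, replace=False):
        images[i] = stamp_trigger(images[i])
    return images, labels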