Xu, J., Abad, G., and Picek, S.
Backdoor attacks have been demonstrated to be a security threat to machine learning models. Traditional backdoor attacks aim to inject backdoor functionality into the model so that the backdoored model performs abnormally on inputs carrying a predefined backdoor trigger while retaining state-of-the-art performance on clean inputs. While...
Conference paper, 2023
Xu, J., and Picek, S.
Graph Neural Networks (GNNs) have achieved impressive results in various graph learning tasks. They have found their way into many applications, such as fraud detection, molecular property prediction, and knowledge graph reasoning. However, GNNs have recently been demonstrated to be vulnerable to backdoor attacks. In this work, we explore a...
Conference paper, 2022
Xu, J., Wang, R., Koffas, S., Liang, K., and Picek, S.
Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph-domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations of complex graph data. Due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to...
Conference paper, 2022
Xu, J., Xue, M., and Picek, S.
Backdoor attacks represent a serious threat to neural network models. A backdoored model misclassifies trigger-embedded inputs into an attacker-chosen target label while performing normally on other benign inputs. There are already numerous works on backdoor attacks against neural networks, but only a few consider graph neural...
Conference paper, 2021