Searched for: author:"Xu, J."
(1 - 8 of 8)
Xu, J. (author), Yang, Gongliu (author), Sun, Yiding (author), Picek, S. (author)
The navigation systems used in many autonomous mobile robotic applications, such as unmanned vehicles, are typically equipped with multiple sensors to obtain accurate navigation results. The key challenge is to fuse the information from different sensors efficiently. However, different sensors provide asynchronous measurements, some of which even...
journal article 2021
Xu, J. (author), Xue, Minhui (author), Picek, S. (author)
Backdoor attacks represent a serious threat to neural network models. A backdoored model will misclassify the trigger-embedded inputs into an attacker-chosen target label while performing normally on other benign inputs. There are already numerous works on backdoor attacks on neural networks, but only a few works consider graph neural...
conference paper 2021
Conti, M. (author), Li, Jiaxin (author), Picek, S. (author), Xu, J. (author)
Graph Neural Networks (GNNs), inspired by Convolutional Neural Networks (CNNs), aggregate the message of nodes' neighbors and structure information to acquire expressive representations of nodes for node classification, graph classification, and link prediction. Previous studies have indicated that node-level GNNs are vulnerable to Membership...
conference paper 2022
Xu, J. (author), Picek, S. (author)
Graph Neural Networks (GNNs) have achieved impressive results in various graph learning tasks. They have found their way into many applications, such as fraud detection, molecular property prediction, or knowledge graph reasoning. However, GNNs have been recently demonstrated to be vulnerable to backdoor attacks. In this work, we explore a...
conference paper 2022
Koffas, S. (author), Xu, J. (author), Conti, M. (author), Picek, S. (author)
This work explores backdoor attacks for automatic speech recognition systems where we inject inaudible triggers. By doing so, we make the backdoor attack challenging to detect for legitimate users and, consequently, potentially more dangerous. We conduct experiments on two versions of a speech dataset and three neural networks and explore the...
conference paper 2022
Xu, J. (author), Wang, R. (author), Koffas, S. (author), Liang, K. (author), Picek, S. (author)
Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations for complex graph data. Due to privacy concerns and regulation restrictions, centralized GNNs can be difficult to...
conference paper 2022
Xu, J. (author), Koffas, S. (author), Ersoy, Oğuzhan (author), Picek, S. (author)
Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. Building a powerful GNN model is not a trivial task, as it requires a large amount of training data, powerful computing resources, and human expertise. Moreover, with the development of adversarial attacks, e.g., model stealing attacks, GNNs...
conference paper 2023
Xu, J. (author), Abad, Gorka (author), Picek, S. (author)
Backdoor attacks have been demonstrated as a security threat for machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the backdoored model will perform abnormally on inputs with predefined backdoor triggers and still retain state-of-the-art performance on the clean inputs. While...
conference paper 2023