Searched for: contributor:"Huang, J. (mentor)"
(1 - 8 of 8)
document
van der Meulen, Jan (author)
Federated learning (FL) is a privacy-preserving machine learning approach that allows a machine learning model to be trained in a distributed fashion without ever sharing user data. Due to the large amount of valuable text and voice data stored on end-user devices, this approach works particularly well for natural language processing (NLP)...
bachelor thesis 2024
document
Nenovski, Lazar (author)
Federated Learning (FL) makes it possible for a network of clients to jointly train a machine learning model while keeping the training data private. There are several approaches to designing an FL network, and while most existing research focuses on a single-server design, new and promising variations are arising that make...
bachelor thesis 2024
document
Van Opstal, Quinten (author)
Federated learning offers many opportunities, especially through its built-in privacy protections. There is, however, one attack that might compromise the utility of federated learning: backdoor attacks [14]. Some defenses already exist, such as FLAME [13], but they are computationally expensive [14]. This paper evaluates a version...
bachelor thesis 2024
document
Psathas, Steffano (author)
A machine learning classifier can be tricked using adversarial attacks, attacks that alter images slightly to make the target model misclassify the image. To create adversarial attacks on black-box classifiers, a substitute model can be created using model stealing. The research question this report addresses is the topic of using model...
bachelor thesis 2022
document
Vigilanza Lorenzo, Pietro (author)
Machine Learning (ML) models are vulnerable to adversarial samples: human-imperceptible changes to regular input that elicit wrong output from a given model. Many adversarial attacks assume an attacker has access to the underlying model or to the data used to train the model. Instead, in this paper we focus on the effects the data...
bachelor thesis 2022
document
Dwivedi, Kanish (author)
Adversarial training and its variants have become the standard defense against adversarial attacks, i.e. perturbed inputs designed to fool the model. Boosting techniques such as AdaBoost have been successful for binary classification problems; however, there is limited research on applying them to provide adversarial robustness. In this...
bachelor thesis 2022
document
van Veen, Floris (author)
Model extraction attacks are attacks that generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a preexisting dataset, but doing so requires a very high number of queries to be sent to the victim model, often in the realm of several million. The more difficult the...
bachelor thesis 2022
document
Jansen, Joost (author)
In recent years, there have been a great many studies on optimising the generation of adversarial examples for Deep Neural Networks (DNNs) in a black-box environment. The use of gradient-based techniques to obtain adversarial images with a minimal number of input-output interactions with the attacked model has been extensively studied....
bachelor thesis 2022