
J. Huang

9 records found

Attacking Federated Time Series Forecasting Models

Reconstructing Private Household Energy Data during Federated Learning with Gradient Inversion Attacks

Federated learning for time series forecasting enables clients with privacy-sensitive time series data to collaboratively learn accurate forecasting models, e.g., in energy load prediction. Unfortunately, privacy risks in federated learning persist, as servers can potentially ...
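
The gradient inversion attack named in the title above is, at its core, an optimisation that searches for inputs whose gradients match the update a client sent to the server. Below is a minimal, hypothetical PyTorch sketch in the style of Deep Leakage from Gradients (DLG); the function names, the LBFGS optimiser, and the hyperparameters are illustrative assumptions, not details taken from the thesis.

import torch

def gradient_inversion(model, loss_fn, observed_grads, x_shape, y_shape, steps=300):
    # DLG-style sketch: optimise dummy data so that its gradient matches the
    # gradient the server observed from a client (illustrative only).
    dummy_x = torch.randn(x_shape, requires_grad=True)   # candidate private input
    dummy_y = torch.randn(y_shape, requires_grad=True)   # candidate private target
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        loss = loss_fn(model(dummy_x), dummy_y)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # distance between the dummy gradient and the observed client gradient
        grad_diff = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()

In the federated forecasting setting, observed_grads would be the per-layer gradients reported by a single household, and dummy_x would converge towards that household's load curve if the attack succeeds.
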
Federated learning (FL) is a privacy-preserving machine learning approach that allows a machine learning model to be trained in a distributed fashion without ever sharing user data. Due to the large amount of valuable text and voice data stored on end-user devices, this approach ...
Federated Learning (FL) makes it possible for a network of clients to jointly train a machine learning model while also keeping the training data private. There are several approaches to designing an FL network, and while most existing research is focused on a single-s ...
Federated learning offers many opportunities, especially with its built-in privacy considerations. There is, however, one class of attacks that might compromise the utility of federated learning: backdoor attacks [14]. Some defenses already exist, such as FLAME [13], but they ...
Model extraction attacks are attacks that generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a preexisting dataset, but doing so requires a very high number of queries to be sent to the victim model. This is often in ...
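
As a rough illustration of the query-based extraction loop this abstract describes, the sketch below trains a substitute on nothing but the victim's answers to synthetic queries. The query_victim callable, the uniform-random query strategy, and the KL-divergence objective are assumptions made for the example, not the method used in the record above.

import torch
import torch.nn.functional as F

def extract_model(query_victim, substitute, input_shape, n_queries=10_000, batch=64):
    # Train a substitute purely from black-box victim queries (illustrative sketch).
    # query_victim(x) is assumed to return the victim's output probabilities.
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    for _ in range(n_queries // batch):
        x = torch.rand(batch, *input_shape)          # synthetic queries, no real dataset
        with torch.no_grad():
            victim_probs = query_victim(x)           # each call spends one batch of queries
        opt.zero_grad()
        logits = substitute(x)
        # match the substitute's predictive distribution to the victim's answers
        loss = F.kl_div(F.log_softmax(logits, dim=1), victim_probs, reduction="batchmean")
        loss.backward()
        opt.step()
    return substitute

The query budget is the dominant cost here: every batch drawn is a batch of requests to the victim, which is why dataset-free extraction typically needs very many queries.
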

Black-box Adversarial Attacks Using Substitute Models

Effects of Data Distributions on Sample Transferability

Machine Learning (ML) models are vulnerable to adversarial samples: human-imperceptible changes to regular inputs that elicit wrong outputs from a given model. Many adversarial attacks assume an attacker has access to the underlying model or access to the data used to train the m ...
A machine learning classifier can be tricked using adversarial attacks, attacks that alter images slightly to make the target model misclassify the image. To create adversarial attacks on black-box classifiers, a substitute model can be created using model stealing. The resea ...
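
A common way to realise the substitute-based attack these abstracts describe is to craft the perturbation with a white-box method such as FGSM on the stolen substitute and then submit the result to the black-box target, relying on transferability. The sketch below is an illustrative assumption of that pipeline, not the exact attack evaluated in these records.

import torch

def fgsm_on_substitute(substitute, loss_fn, x, y, eps=0.03):
    # Craft an adversarial example on the (white-box) substitute and rely on
    # transferability to fool the black-box target (illustrative sketch).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(substitute(x_adv), y)
    loss.backward()
    # FGSM: one signed-gradient step, clipped to an assumed [0, 1] pixel range
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    return x_adv

The returned x_adv is then sent to the black-box classifier; the attack counts as successful when the misclassification transfers.
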
Adversarial training and its variants have become the standard defense against adversarial attacks: perturbed inputs designed to fool the model. Boosting techniques such as AdaBoost have been successful for binary classification problems; however, there is limited research in th ...
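
For reference, the "standard defense" mentioned here usually means training on adversarially perturbed batches. The sketch below shows one plain PGD-based adversarial training step as an illustration; it does not include the boosting angle the abstract investigates, and the step sizes and the [0, 1] input range are assumptions.

import torch

def adversarial_training_step(model, loss_fn, opt, x, y, eps=0.03, alpha=0.01, pgd_steps=5):
    # One adversarial-training step: perturb the batch with PGD, then train on it.
    x_adv = x.clone().detach()
    for _ in range(pgd_steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    opt.zero_grad()
    loss_fn(model(x_adv), y).backward()
    opt.step()
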
In recent years, there has been a great deal of research on optimising the generation of adversarial examples for Deep Neural Networks (DNNs) in a black-box environment. The use of gradient-based techniques to obtain adversarial images with a minimal number of input-output cor ...
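
Query-efficient black-box attacks of the kind referred to here typically replace true gradients with estimates built from input-output queries alone. The following is a small, hypothetical sketch of an NES-style finite-difference estimator; query_loss and the sample counts are placeholders for illustration, not the method used in this record.

import torch

def estimate_gradient(query_loss, x, sigma=0.01, n_samples=50):
    # Zeroth-order gradient estimate: probe the black-box loss query_loss(x)
    # with random perturbations and average the responses (illustrative sketch).
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += (query_loss(x + sigma * u) - query_loss(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

Each sample spends two queries, so the estimator's accuracy trades off directly against the query budget such black-box attacks are concerned with.
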