When Machine Learning Models Leak

An Exploration of Synthetic Training Data

Conference Paper (2022)
Author(s)

Manel Slokom (Radboud Universiteit Nijmegen, TU Delft - Multimedia Computing, Statistics Netherlands (CBS))

Peter Paul de Wolf (Statistics Netherlands (CBS))

M. Larson (TU Delft - Multimedia Computing, Radboud Universiteit Nijmegen)

Multimedia Computing
Copyright
© 2022 M. Slokom, Peter Paul de Wolf, M.A. Larson
DOI related publication
https://doi.org/10.1007/978-3-031-13945-1_20
Publication Year
2022
Language
English
Bibliographical Note
Green Open Access added to the TU Delft Institutional Repository as part of the Taverne project 'You share, we take care!' (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.
Pages (from-to)
283-296
ISBN (print)
9783031139444
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We investigate an attack on a machine learning classifier that predicts the propensity of a person or household to move (i.e., relocate) in the next two years. The attack assumes that the classifier has been made publicly available and that the attacker has access to information about a certain number of target individuals. The attacker might also have information about another set of people with which to train an auxiliary classifier. We show that the attack is possible for target individuals independently of whether they were contained in the original training set of the classifier. However, the attack is somewhat less successful for individuals that were not contained in the original data. Based on this observation, we investigate whether training the classifier on a data set that is synthesized from the original training data, rather than using the original training data directly, would help to mitigate the effectiveness of the attack. Our experimental results show that it does not, leading us to conclude that new approaches to data synthesis must be developed if synthesized data is to resemble "unseen" individuals closely enough to help block machine learning model attacks.
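
The following is a minimal sketch of the kind of attack setup the abstract describes, assuming an attribute-inference-style attack: the attacker queries the publicly released move-propensity classifier, combines its output with the attributes they already know, and trains an auxiliary classifier on a second set of people whose sensitive attribute is known. All names (released_model, aux_X, aux_sensitive, target_X, infer_sensitive_attribute) are illustrative and are not taken from the paper.

    # Hypothetical sketch of an auxiliary-classifier attack against a
    # publicly released classifier; not the authors' exact procedure.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def infer_sensitive_attribute(released_model, aux_X, aux_sensitive, target_X):
        # Query the public classifier for move-propensity scores and append
        # them to the attributes the attacker already knows.
        aux_scores = released_model.predict_proba(aux_X)[:, 1]
        aux_features = np.column_stack([aux_X, aux_scores])

        # Train the auxiliary (attack) classifier on records whose sensitive
        # attribute the attacker already knows.
        attack_clf = RandomForestClassifier(n_estimators=100, random_state=0)
        attack_clf.fit(aux_features, aux_sensitive)

        # Build the same features for the target individuals and infer the
        # attribute for them, whether or not they were in the original
        # training data of the released model.
        target_scores = released_model.predict_proba(target_X)[:, 1]
        target_features = np.column_stack([target_X, target_scores])
        return attack_clf.predict(target_features)

In this framing, the mitigation studied in the paper would correspond to fitting released_model on a synthesized version of its training data rather than on the original records; the attack code itself is unchanged.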

Files

978_3_031_13945_1_20.pdf
(pdf | 0.355 MB)
- Embargo expired in 01-07-2023