Differentially Private GAN for Time Series
P.H. te Marvelde (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Lydia Y. Chen – Mentor (TU Delft - Data-Intensive Systems)
A. Kunar – Mentor (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Z. Zhao – Mentor (TU Delft - Data-Intensive Systems)
D.M.J. Tax – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
GitHub repository containing the used models.
https://github.com/ptemarvelde/dp-timeseries
Abstract
Generative Adversarial Networks (GANs) aim to encourage public sharing of data, even when the data contains inherently private information, by generating synthetic data that resembles, but is not equal to, the data the GAN was trained on. However, GANs are prone to memorizing samples from the training data, so additional care is needed to guarantee privacy. Differentially Private (DP) GANs address this problem by protecting user privacy through a mathematical guarantee, achieved by adding carefully constructed noise at specific points in the training process. A state-of-the-art example of such a GAN is the Gradient Sanitized Wasserstein GAN (GS-WGAN) [1], which has been shown to create higher-quality synthetic images than other DP GANs. To extend the applicability of GS-WGAN, we first reproduce and extend its evaluation, verifying that the model outperforms DP-CGAN by an average of 40% when assessed across three qualitative metrics and two datasets. Secondly, we propose improvements to the architecture and training procedure to make GS-WGAN applicable to time series data. The experimental results show that GS-WGAN is a promising approach for generating synthetic time series.
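The "carefully constructed noise" mentioned above typically follows the Gaussian mechanism: each gradient is clipped to a fixed L2 norm bound (limiting the influence of any single sample) and then perturbed with Gaussian noise scaled to that bound. The sketch below illustrates this idea in NumPy; the function name and parameter values are illustrative assumptions, not the actual GS-WGAN implementation.

```python
import numpy as np

def sanitize_gradient(grad, clip_bound=1.0, noise_multiplier=1.07, rng=None):
    """Illustrative DP gradient sanitization: clip to an L2 bound, add noise.

    Clipping bounds the sensitivity of the gradient to any single training
    sample; the Gaussian noise, scaled by noise_multiplier * clip_bound,
    then provides the differential privacy guarantee.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale down only if the norm exceeds the bound, so ||clipped|| <= clip_bound.
    clipped = grad / max(1.0, norm / clip_bound)
    noise = rng.normal(0.0, noise_multiplier * clip_bound, size=grad.shape)
    return clipped + noise

# Example: a gradient with L2 norm 5 is scaled to norm 1 before noising.
g = np.array([3.0, 4.0])
sanitized = sanitize_gradient(g, clip_bound=1.0)
```

In GS-WGAN this sanitization is applied selectively, only to the gradients that flow from the discriminator into the generator, which is what distinguishes it from schemes that noise every parameter update.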
[1] D. Chen, T. Orekondy, and M. Fritz, "GS-WGAN: A gradient-sanitized approach for learning differentially private generators," 2021.