Differential Privacy (DP) has become one of the most widely used approaches to protecting individual data. However, its implementation can vary significantly depending on the context in which it is applied. In this study, we compare two such implementations of DP: Google's Differential Privacy, an open-source library used for structured data analytics, and Differentially Private Offsite Prompt Tuning (DP-OPT), a tool used for the adaptation of machine learning models.
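To make the shared concept concrete, the following is a minimal sketch of one classic DP mechanism, the Laplace mechanism applied to a count query; the function and dataset names are illustrative and do not come from either library. A count has sensitivity 1, so adding Laplace noise with scale 1/epsilon yields an epsilon-DP answer.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Answer a count query with epsilon-DP via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so noise scale is 1 / epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) via inverse-CDF: u ~ Uniform(-0.5, 0.5)
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical dataset: ages of individuals in a survey
ages = [23, 35, 41, 29, 52, 60, 34]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Smaller epsilon values inject more noise (stronger privacy, lower utility); both tools compared in this study build on this same privacy-utility trade-off, though at very different layers of the stack.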
The main research question of this study is the following: "How do DP-OPT and Google's Differential Privacy Library compare when accounting for different factors in different contexts?" We address this question through a literature-based study in which no empirical experiments are performed, owing to the underlying complexity of both tools, especially DP-OPT, and the time constraints of this project.
The main results of this study show that Google's DP library benefits from its interpretability and speed in data-analysis workloads, while DP-OPT achieves higher accuracy when adapting large language models (LLMs). This means there is no single mechanism that fits every problem; the appropriate choice depends on factors such as the underlying task complexity, the available resources, and the final goal of the task.
By comparing these tools side by side, this research aims to provide deeper insight into how each one behaves and performs across different contexts and tasks, and to help developers and researchers make better-informed decisions when choosing a tool for their intended use case.