Federated Learning for Mobile and Embedded Systems

Master Thesis (2020)
Author(s)

S. Hofman (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Z. Al-Ars – Mentor (TU Delft - Computer Engineering)

T.G.R.M. van Leuken – Graduation committee member (TU Delft - Signal Processing Systems)

Joost Hoozemans – Graduation committee member (TU Delft - Computer Engineering)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2020 Stefan Hofman
Publication Year
2020
Language
English
Graduation Date
27-11-2020
Awarding Institution
Delft University of Technology
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

An increase in the performance of mobile devices has started a revolution in deploying artificial intelligence (AI) algorithms on mobile and embedded systems. In addition, fueled by the need for privacy-aware insights into data, there is a strong push towards federated machine learning, where data is stored locally and never shared with a central server. By keeping data on client devices and performing training locally, we work towards a more privacy-friendly future. Furthermore, federated machine learning enables machine learning in bandwidth-constrained environments where uploading the entire dataset is not feasible. In this thesis, we examine the recent trend towards less complex machine learning models, which reduce resource usage while limiting the loss in accuracy, and we investigate how these simpler models hold up in a federated setting. We also review the development of AI frameworks and their capabilities on mobile platforms. Based on these findings, we show that model hyper-parameter optimization can maximize the accuracy of smaller networks during federated learning, reducing the accuracy loss from 15% to only 0.04%. We then demonstrate a mobile implementation and measure its performance on an iPhone X, showing that training on the iPhone takes less than twice as long as on a regular laptop. Finally, we demonstrate that modern weight quantization methods can reduce the model size by up to 7x.
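The aggregation step underlying the federated setting described above can be sketched with federated averaging (FedAvg): each client trains on its own data and sends only model weights to the server, which combines them weighted by local dataset size. This is a minimal illustrative sketch, not the thesis's actual implementation; the function name and data layout are assumptions.

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client model weight vectors, with each
    client weighted by the number of local training samples it holds.
    Raw data never leaves the clients; only weights are aggregated."""
    total = sum(client_sizes)
    num_params = len(client_weights[0])
    aggregated = [0.0] * num_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += w * (size / total)
    return aggregated

# Three hypothetical clients with different local dataset sizes
clients = [[0.2, 0.4], [0.6, 0.8], [1.0, 1.2]]
sizes = [10, 30, 60]
print(fed_avg(clients, sizes))
```

Clients with more data pull the global model proportionally harder, which is the standard FedAvg weighting; in practice the server repeats this round after each period of local training.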

Files

MSc_Thesis_Stefan_Hofman.pdf
(pdf | 0.449 MB)
License info not available