Federated Learning for Mobile and Embedded Systems

Abstract

Increases in the performance of mobile devices have started a revolution in deploying artificial intelligence (AI) algorithms on mobile and embedded systems. In addition, fueled by the need for privacy-aware insights into data, we see a strong push towards federated machine learning, where data is stored locally and never shared with a central server. By keeping data on client devices and training locally, we work towards a more privacy-friendly future. Furthermore, federated machine learning enables machine learning in bandwidth-constrained environments where uploading the entire dataset is infeasible. In this thesis, we examine the recent trend towards less complex machine learning models, which reduce resource usage while limiting the loss in accuracy, and we investigate how these simpler models hold up in a federated setting. We also review the development of AI frameworks and their capabilities on mobile platforms. Based on these findings, we propose model hyper-parameter optimization as a way to maximize accuracy for smaller networks during federated learning, and we show that it can reduce the accuracy loss from 15% to only 0.04%. We then demonstrate what a mobile implementation looks like and the performance we observe on an iPhone X: the iPhone implementation takes less than twice as long as a regular laptop implementation. Finally, we demonstrate that modern weight quantization methods can reduce the model size by up to 7x.
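To make the federated setting concrete, the sketch below shows federated averaging (FedAvg), the standard aggregation scheme in federated learning: each client trains on its own data locally and sends only model weights to the server, which averages them weighted by dataset size. The toy linear model, the client data, and all function names here are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain gradient descent on MSE.
    Only the resulting weights leave the device, never the data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: collect client updates and average them,
    weighted by each client's dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Three synthetic clients sharing the same underlying linear relation.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fed_avg(w, clients)
print(np.round(w, 2))  # converges toward [ 2. -1.]
```

Note that only the weight vectors cross the network each round, which is also why the bandwidth cost scales with model size rather than dataset size, and why the quantization results mentioned above matter for deployment.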