Exploring Deep Reinforcement Learning-Assisted Federated Learning for Online Resource Allocation in Privacy-Preserving EdgeIoT
Jingjing Zheng (Real-Time and Embedded Computing Systems Research Centre)
Kai Li (Real-Time and Embedded Computing Systems Research Centre)
N. Mhaisen (TU Delft - Embedded Systems)
Wei Ni (CSIRO: Commonwealth Scientific and Industrial Research Organisation)
Eduardo Tovar (Real-Time and Embedded Computing Systems Research Centre)
Mohsen Guizani (Mohamed Bin Zayed University of Artificial Intelligence)
Abstract
Federated learning (FL) has been increasingly considered to preserve the privacy of training data against eavesdropping attacks in mobile-edge computing-based Internet of Things (EdgeIoT). On the one hand, the learning accuracy of FL can be improved by selecting IoT devices with large data sets for training, at the cost of higher energy consumption. On the other hand, energy consumption can be reduced by selecting IoT devices with small data sets, at the cost of lower learning accuracy. In this article, we formulate a new resource allocation problem for privacy-preserving EdgeIoT that balances the learning accuracy of FL and the energy consumption of the IoT devices. We propose a new FL-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to achieve the optimal accuracy-energy balance in a continuous action domain. Furthermore, long short-term memory (LSTM) is leveraged in FL-DLT3 to predict the time-varying network state, while FL-DLT3 is trained to select the IoT devices and allocate their transmit power. Numerical results demonstrate that the proposed FL-DLT3 converges quickly (within 100 iterations) and improves the FL accuracy-to-energy consumption ratio by 51.8% compared with the existing state-of-the-art benchmark.
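
The abstract describes a TD3-style agent whose actor consumes an LSTM summary of the time-varying network state and outputs continuous actions interpreted as device selection and transmit-power allocation. The following is a minimal sketch of that structure; the observation window, layer sizes, and the per-device action encoding are illustrative assumptions and not the authors' exact FL-DLT3 architecture.

```python
# Sketch of an LSTM-based actor and TD3 twin critics, assuming each action
# carries one selection score and one normalized transmit-power level per
# IoT device. All dimensions are hypothetical.
import torch
import torch.nn as nn


class LSTMActor(nn.Module):
    """Maps a window of past network-state observations to a continuous action."""

    def __init__(self, obs_dim: int, num_devices: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        # Two outputs per IoT device: a selection score and a transmit-power level.
        self.head = nn.Linear(hidden, 2 * num_devices)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, T, obs_dim) -- T past snapshots of the network state.
        _, (h_n, _) = self.lstm(obs_seq)
        scores, power = self.head(h_n[-1]).chunk(2, dim=-1)
        # Selection probabilities and normalized transmit power, both in [0, 1].
        return torch.cat([torch.sigmoid(scores), torch.sigmoid(power)], dim=-1)


class TwinCritic(nn.Module):
    """TD3 twin Q-networks; the smaller of the two Q-values forms the target."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()

        def q_net() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        self.q1, self.q2 = q_net(), q_net()

    def forward(self, obs: torch.Tensor, act: torch.Tensor):
        x = torch.cat([obs, act], dim=-1)
        return self.q1(x), self.q2(x)
```

In a TD3-style training loop, the actor would be updated less frequently than the twin critics (delayed policy updates), and the LSTM encoder lets the policy act on predicted rather than only instantaneous channel and energy conditions, consistent with the role of LSTM described in the abstract.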