Accelerating Gossip-Based Deep Learning in Heterogeneous Edge Computing Platforms

Journal Article (2021)
Author(s)

Rui Han

Shilin Li (Beijing Institute of Technology)

Xiangwei Wang (Beijing Institute of Technology)

Chi Harold Liu

Gaofeng Xin (Beijing Institute of Technology)

Y. Chen (TU Delft - Data-Intensive Systems)

Research Group
Data-Intensive Systems
DOI (related publication)
https://doi.org/10.1109/TPDS.2020.3046440
Publication Year
2021
Language
English
Issue number
7
Volume number
32
Pages (from-to)
1591-1602

Abstract

With the exponential growth of data created at the network edge, decentralized and Gossip-based training of deep learning (DL) models on edge computing (EC) platforms has gained tremendous research momentum, owing to its capability to learn from resource-constrained edge nodes with limited network connectivity. Today's edge devices are extremely heterogeneous, e.g., in their hardware and software stacks, which results in high variation in training time and induces extra delay to synchronize and converge. The large body of prior art accelerates DL training, whether by data or model parallelization, via a centralized server, e.g., the parameter server scheme, which can easily become a system bottleneck or single point of failure. In this article, we propose EdgeGossip, a framework specifically designed to accelerate decentralized, Gossip-based DL training on heterogeneous EC platforms. EdgeGossip features: (i) low performance variation among multiple EC platforms during iterative training, and (ii) accuracy-aware training to quickly obtain the best possible model accuracy. We implement EdgeGossip on top of popular Gossip algorithms and demonstrate its effectiveness using real-world DL workloads: it reduces model training time by an average of 2.70 times while incurring accuracy losses of only 0.78 percent.
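The abstract describes Gossip-based training, in which edge nodes exchange and average model parameters directly with peers instead of routing updates through a central parameter server. The Python sketch below illustrates classic pairwise gossip averaging, the general family of algorithms EdgeGossip builds on; the node count, toy parameter vectors, and fully connected peer set are assumptions for illustration, not the paper's actual implementation.

import numpy as np

# Hypothetical sketch: each edge node holds a local copy of the model
# parameters (e.g., after a local SGD step) and, once per round, averages
# them with a randomly chosen peer. This is generic pairwise gossip
# averaging, not EdgeGossip's own algorithm.

rng = np.random.default_rng(seed=0)

NUM_NODES = 8   # number of edge nodes in the gossip network (assumed)
DIM = 4         # size of the toy parameter vector (assumed)
ROUNDS = 50     # number of gossip rounds (assumed)

# Each node starts from different parameters, as after divergent local training.
params = rng.normal(size=(NUM_NODES, DIM))

for _ in range(ROUNDS):
    # One node wakes up and gossips with a uniformly random peer.
    i = rng.integers(NUM_NODES)
    j = rng.integers(NUM_NODES)
    while j == i:
        j = rng.integers(NUM_NODES)
    # Pairwise averaging: both nodes move to the mean of their parameters.
    avg = (params[i] + params[j]) / 2.0
    params[i] = avg
    params[j] = avg

# All nodes converge toward the global average without any central server.
print("parameter spread after gossip:", np.max(np.std(params, axis=0)))

Because every exchange is pairwise, no single node is a bottleneck or single point of failure; the cost is slower consensus, which is what motivates acceleration work such as EdgeGossip on heterogeneous platforms.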

No files available

Metadata-only record. There are no files for this record.