Analysis of Loading Policies for Multi-model Inference on the Edge


Abstract

The rapid expansion of the Internet of Things (IoT), together with the convergence of enabling technologies such as next-generation 5G wireless broadband, is creating a paradigm shift from cloud computing towards edge computing. Performing tasks normally handled by the cloud directly on edge devices offers several benefits, such as lower latency and stronger data privacy. However, edge devices are resource-constrained and often lack the computational and memory capacity for demanding tasks; training or even running inference on a complete Deep Neural Network (DNN) is frequently infeasible on such devices.

In this paper we present a novel empirical study of the different ways in which multiple deep learning models can be loaded for inference on these edge devices. We analyse, under different resource limits, the run-time gains of various DNN layer loading policies that aim to minimize the overall run time of consecutive inference tasks, and we complement this with an analysis of memory usage and swapping behaviour during these inference tasks. Using these results, we show that when the memory overhead becomes too large, loading and executing DNN layers in an interleaved manner provides significant run-time gains. These findings are obtained through multiple experiments in EdgeCaffe, the evaluation environment we built and also present in this paper.
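To illustrate the idea behind the interleaved loading policy mentioned above, the sketch below contrasts a bulk policy (load the whole network, then execute it) with loading each layer just before it is executed. This is a minimal, hypothetical Python sketch: the `Layer` structure and the `load_layer_weights`, `free_layer_weights`, and `run_layer` helpers are placeholders for illustration and do not correspond to the actual EdgeCaffe API.

```python
# Minimal sketch (hypothetical helpers, not the EdgeCaffe API) contrasting two
# layer loading policies for a single inference task on a memory-constrained device.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class Layer:
    name: str
    weights_path: str          # where the layer's parameters live on disk
    weights: Any = None        # populated once the layer is loaded


def load_layer_weights(layer: Layer) -> None:
    """Placeholder: read the layer's parameters from storage into memory."""
    with open(layer.weights_path, "rb") as f:
        layer.weights = f.read()


def free_layer_weights(layer: Layer) -> None:
    """Placeholder: release the layer's parameters to bound resident memory."""
    layer.weights = None


def run_layer(layer: Layer, activations: Any) -> Any:
    """Placeholder: apply the layer to the input activations."""
    return activations  # a real implementation would compute the layer output


def infer_bulk(layers: List[Layer], x: Any) -> Any:
    """Bulk policy: load every layer first, then execute the network.
    Peak memory must hold all layer weights at once."""
    for layer in layers:
        load_layer_weights(layer)
    for layer in layers:
        x = run_layer(layer, x)
    return x


def infer_interleaved(layers: List[Layer], x: Any) -> Any:
    """Interleaved policy: load each layer just before executing it and free
    it afterwards, so at most one layer's weights are resident at a time."""
    for layer in layers:
        load_layer_weights(layer)
        x = run_layer(layer, x)
        free_layer_weights(layer)
    return x
```

The interleaved variant trades extra loading overhead for a much smaller resident set, which is the regime the abstract refers to: once the memory overhead would otherwise trigger swapping, keeping only the active layer in memory can reduce the overall run time.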