Monitoring hardware utilization when training on GPUs in distributed machine learning

Abstract

Large-scale machine learning frameworks can accelerate the training of a neural network by performing distributed training on a cluster, using multiple GPUs per node and multiple nodes. Because distributed training on a cluster involves many nodes that need to communicate and to load and exchange data, a machine learning framework may, at certain times during training, not fully utilize the available hardware of the system. Various techniques are assessed for their ability to measure the performance of specific hardware components of a cluster. We present ML Board, a tool that uses some of the assessed techniques to measure and visualize the utilization of the system while a neural network model is being trained, without requiring any changes to the machine learning framework in use. ML Board can be used to identify straggling nodes; by subsequently letting the user select different nodes through the Slurm job scheduler, it can help decrease the training time of a ResNet model by 15 to 45% on the ImageNet and CIFAR-10 datasets. Furthermore, the energy used by the GPUs can be measured and used to identify and replace GPUs, reducing the total energy used by 5 to 16%.
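
As an illustration of the kind of measurement described above, the sketch below samples per-GPU utilization and power draw via NVML (through the pynvml bindings) and accumulates a rough energy estimate. This is a minimal, hypothetical sketch for context only; the function name, sampling interval, and structure are assumptions and do not reflect ML Board's actual implementation.

```python
# Hypothetical sketch: sample per-GPU utilization and power via NVML (pynvml)
# and accumulate an approximate energy estimate. Not ML Board's implementation.
import time
import pynvml


def sample_gpus(duration_s=10.0, interval_s=1.0):
    """Print utilization/power per GPU and return an approximate energy total (joules)."""
    pynvml.nvmlInit()
    try:
        count = pynvml.nvmlDeviceGetCount()
        handles = [pynvml.nvmlDeviceGetHandleByIndex(i) for i in range(count)]
        energy_j = [0.0] * count
        elapsed = 0.0
        while elapsed < duration_s:
            for i, handle in enumerate(handles):
                util = pynvml.nvmlDeviceGetUtilizationRates(handle)        # percentages
                power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
                energy_j[i] += power_w * interval_s                        # crude integral
                print(f"GPU {i}: util={util.gpu}% mem={util.memory}% power={power_w:.1f} W")
            time.sleep(interval_s)
            elapsed += interval_s
        return energy_j
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    print("approx. energy per GPU (J):", sample_gpus())
```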