BackboneAnalysis: Structured Insights into Compute Platforms from CNN Inference Latency

Conference Paper (2022)
Author(s)

Frank M. Hafner (ZF Friedrichshafen AG)

Matthias Zeller (ZF Friedrichshafen AG)

Mark Schutera (ZF Friedrichshafen AG)

Jochen Abhau (ZF Friedrichshafen AG)

J.F.P. Kooij (TU Delft - Intelligent Vehicles)

Research Group
Intelligent Vehicles
Copyright
© 2022 Frank M. Hafner, Matthias Zeller, Mark Schutera, Jochen Abhau, J.F.P. Kooij
DOI (related publication)
https://doi.org/10.1109/IV51971.2022.9827260
Publication Year
2022
Language
English
Pages (from-to)
1801-1809
ISBN (print)
978-1-6654-8821-1
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Customization of a convolutional neural network (CNN) to a specific compute platform involves finding a Pareto-optimal trade-off between the computational complexity of the CNN and the resulting throughput in operations per second on the compute platform. However, existing inference performance benchmarks compare complete backbones whose CNN configurations differ in many respects, and therefore provide no insight into how fine-grained layer design choices affect this balance. We present BackboneAnalysis, a methodology for extracting structured insights into this trade-off for a chosen target compute platform. Within a one-factor-at-a-time analysis setup, CNN architectures are systematically varied and evaluated based on throughput and latency measurements, irrespective of model accuracy. In this way, we investigate the configuration factors input shape, batch size, kernel size, and convolutional layer type. In our experiments, we deploy BackboneAnalysis on a Xavier iGPU and a Coral Edge TPU accelerator. The analysis reveals that the common assumption from the Roofline model that higher operation density in CNNs leads to higher throughput does not always hold. These results highlight how important it is for a neural network architect to be aware of platform-specific latency and throughput behavior in order to make sensible configuration decisions for a custom CNN.
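The one-factor-at-a-time (OFAT) idea described in the abstract can be sketched in a few lines: vary exactly one configuration factor (here, kernel size) while holding all others fixed, and record the latency of each configuration. The snippet below is an illustrative, stdlib-only sketch, not the paper's benchmarking code; the naive pure-Python convolution stands in for a real CNN layer running on a target accelerator, and all names and default values are assumptions for the example.

```python
import time

def conv2d_naive(image, kernel):
    """Valid-mode 2D convolution on nested lists (illustrative workload only)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            row.append(acc)
        out.append(row)
    return out

def ofat_latency_sweep(kernel_sizes, input_size=32, repeats=3):
    """OFAT sweep: only the kernel size varies; input shape stays fixed.

    Returns a dict mapping kernel size -> best-of-N wall-clock latency
    in seconds, analogous to measuring one factor's latency effect
    while all other configuration factors are held constant.
    """
    image = [[1.0] * input_size for _ in range(input_size)]
    results = {}
    for k in kernel_sizes:
        kernel = [[1.0 / (k * k)] * k for _ in range(k)]
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            conv2d_naive(image, kernel)
            times.append(time.perf_counter() - t0)
        results[k] = min(times)  # best-of-N reduces timer noise
    return results

latencies = ofat_latency_sweep([1, 3, 5, 7])
```

On a real platform one would replace `conv2d_naive` with an actual layer deployed on the device (e.g. via TensorRT on a Xavier iGPU or a compiled model on an Edge TPU) and repeat the sweep for each factor of interest: input shape, batch size, kernel size, and convolutional layer type.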

Files

BackboneAnalysis_Structured_In... (pdf, 0.585 MB)
Embargo expired on 19-01-2023
License info not available