Jumping Shift

A Logarithmic Quantization Method for Low-Power CNN Acceleration

Conference Paper (2023)
Author(s)

Longxing Jiang (Student TU Delft)

D. Aledo Ortega (TU Delft - Signal Processing Systems)

R. van Leuken (TU Delft - Signal Processing Systems)

Research Group
Signal Processing Systems
Copyright
© 2023 Longxing Jiang, D. Aledo Ortega, T.G.R.M. van Leuken
DOI related publication
https://doi.org/10.23919/DATE56975.2023.10137169
Publication Year
2023
Language
English
Pages (from-to)
1-6
ISBN (print)
979-8-3503-9624-9
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Logarithmic quantization for Convolutional Neural Networks (CNNs): a) fits typical weight and activation distributions well, and b) allows the multiplication operation to be replaced by a shift operation that can be implemented with fewer hardware resources. We propose a new quantization method named Jumping Log Quantization (JLQ). The key idea of JLQ is to extend the quantization range by adding a coefficient parameter “s” to the power-of-two exponents $(2^{sx+i})$. This quantization strategy skips some of the values used by standard logarithmic quantization. In addition, we develop a small hardware-friendly optimization called weight de-zero: zero-valued weights, which cannot be implemented by a single shift operation, are all replaced with logarithmic weights to reduce hardware resources with almost no accuracy loss. To implement the Multiply-and-Accumulate (MAC) operation (needed to compute convolutions) when the weights are JLQ-ed and de-zeroed, a new Processing Element (PE) has been developed. This new PE uses a modified barrel shifter that can efficiently avoid the skipped values. Resource utilization, area, and power consumption of the new PE standing alone are reported. We find that JLQ performs better than other state-of-the-art logarithmic quantization methods when the bit width of the operands becomes very small.
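The abstract is dense enough that a small numerical model helps. The Python sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the code range, the nearest-level rounding rule, the default parameters s = 2 and i = 0, and the de-zero substitution value are all choices made here for illustration.

```python
import numpy as np

# Minimal NumPy sketch of Jumping Log Quantization (JLQ).
# Assumptions (not from the paper): integer codes x run over
# -(num_codes-1)..0, rounding picks the nearest level in the linear
# domain, and de-zero substitutes the smallest representable level.

def jlq_levels(s=2, i=0, num_codes=8):
    """Quantization grid 2**(s*x + i) for integer codes x <= 0.
    With s > 1 the exponents "jump", skipping values that plain
    logarithmic quantization (s = 1) would spend codes on, which
    extends the dynamic range for the same number of codes."""
    codes = np.arange(-(num_codes - 1), 1)          # e.g. -7 .. 0
    return 2.0 ** (s * codes + i)

def jlq_quantize(w, s=2, i=0, num_codes=8):
    """Map each weight to sign(w) * (nearest JLQ level)."""
    levels = jlq_levels(s, i, num_codes)
    idx = np.argmin(np.abs(np.abs(w)[..., None] - levels), axis=-1)
    return np.sign(w) * levels[idx]

def de_zero(wq, s=2, i=0, num_codes=8):
    """Weight de-zero: an exact zero cannot be produced by a single
    shift, so replace zeros with the smallest logarithmic level."""
    out = wq.copy()
    out[out == 0.0] = jlq_levels(s, i, num_codes).min()
    return out

def shift_mac(acc, activation, code, s=2, i=0):
    """Fixed-point MAC via a shift instead of a multiply:
    acc += activation * 2**(s*code + i), for integer operands."""
    shift = s * code + i
    return acc + (activation << shift if shift >= 0 else activation >> -shift)

w = np.array([0.0, 0.3, -0.04, 0.9])
print(de_zero(jlq_quantize(w)))   # every output is a power of two
```

Because every de-zeroed weight is an exact power of two $2^{sx+i}$, the multiply in each MAC collapses to a single shift by $sx+i$ bit positions; this is the operation the paper's modified barrel shifter implements while avoiding the skipped values.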

Files

Jumping_Shift_A_Logarithmic_Qu... (pdf)
(pdf | 0.586 MB)
Embargo expired on 02-12-2023
License info not available