Understanding weight-magnitude hyperparameters in training binary networks

Master Thesis (2022)
Author(s)

J.J.R. Quist (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Yunqiang Li – Mentor (TU Delft - Pattern Recognition and Bioinformatics)

J.C. van Gemert – Mentor (TU Delft - Pattern Recognition and Bioinformatics)

Christoph Lofi – Coach (TU Delft - Web Information Systems)

Publication Year
2022
Language
English
Copyright
© 2022 Joris Quist
Graduation Date
07-11-2022
Awarding Institution
Delft University of Technology
Programme
Computer Science | Data Science and Technology
Faculty
Electrical Engineering, Mathematics and Computer Science
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Binary Neural Networks (BNNs) are compact and efficient because they use binary weights instead of real-valued weights. Current BNNs are trained through latent real-valued weights, and several training hyperparameters are inherited from real-valued networks. The interpretation of these hyperparameters is based on the magnitude of the real-valued weights; for binary weights, however, magnitude is not meaningful, so it is unclear what these hyperparameters actually do. One example is weight decay, which aims to keep the magnitude of the real-valued weights small; others are the latent weight initialization, the learning rate, and learning rate decay, all of which influence the magnitude of the latent weights. Magnitude is thus interpretable for real-valued weights but loses its meaning once the weights are binarized.
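
To make the latent-weight setup concrete, here is a minimal sketch of one training step for a single binary layer, assuming sign() binarization, a squared-error loss, and a plain straight-through estimator (common BNN variants additionally clip the estimator where the latent magnitude exceeds 1). The function names and the loss are illustrative assumptions, not the thesis's implementation. The point is that weight decay, the learning rate, and the initialization scale all touch only the latent real-valued magnitudes, while the forward pass sees only signs.

    import numpy as np

    rng = np.random.default_rng(0)

    # Latent real-valued weights: only their SIGN enters the forward pass,
    # but the init scale (a magnitude hyperparameter) lives here.
    w_latent = rng.normal(0.0, 0.1, size=(4, 3))

    def forward(x, w_latent):
        w_bin = np.sign(w_latent)              # binary weights in {-1, +1}
        return x @ w_bin.T

    def train_step(x, y, w_latent, lr=0.01, weight_decay=1e-4):
        pred = forward(x, w_latent)
        grad_out = 2.0 * (pred - y) / len(x)   # d(MSE)/d(pred)
        # Straight-through estimator: treat sign() as the identity on the
        # backward pass, so the gradient lands on the latent weights.
        grad_w = grad_out.T @ x
        # Learning rate and weight decay act only on the latent
        # real-valued magnitudes; the forward pass never sees them.
        w_latent -= lr * (grad_w + weight_decay * w_latent)
        return w_latent

    x = rng.normal(size=(8, 3))
    y = rng.normal(size=(8, 4))
    for _ in range(100):
        w_latent = train_step(x, y, w_latent)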
In this paper we offer a new interpretation of these magnitude-based hyperparameters based on higher-order gradient filtering during network optimization. Our analysis makes it possible to understand how magnitude-based hyperparameters influence the training of binary networks which allows for new optimization filters specifically designed for binary neural networks that are independent of their real-valued interpretation. Moreover, our improved understanding reduces the number of hyperparameters, which in turn eases the hyperparameter tuning effort which may lead to better hyperparameter values for improved accuracy.
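
One way to read the filtering interpretation, sketched here under the assumption of plain SGD with weight decay on the latent weights (not the thesis's exact derivation): the update w <- (1 - lr*wd)*w - lr*g unrolls into an exponentially weighted sum of past gradients, so the binary weight sign(w) only ever sees a low-pass-filtered gradient history. The snippet below verifies this numerically for a single weight.

    import numpy as np

    rng = np.random.default_rng(1)
    lr, wd, T = 0.1, 0.01, 50
    grads = rng.normal(size=T)   # stand-in gradient sequence for one weight

    # Latent weight under plain SGD with weight decay.
    w0, w = 0.5, 0.5             # initial latent weight (init scale matters too)
    for g in grads:
        w = (1.0 - lr * wd) * w - lr * g

    # The same value, written as an explicit exponential gradient filter.
    a = 1.0 - lr * wd            # filter decay factor, set by lr AND wd together
    filtered = a**T * w0 - lr * sum(a**(T - 1 - t) * g for t, g in enumerate(grads))

    print(np.isclose(w, filtered))   # True: the latent weight is the filter output
    print(np.sign(w))                # only this sign reaches the binary network

In this reading, the learning rate and weight decay jointly set a single filter coefficient a = 1 - lr*wd, which illustrates one sense in which the number of effective hyperparameters can shrink.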
