Multiobjective Linear Ensembles for Robust and Sparse Training of Few-Bit Neural Networks

Journal Article (2025)
Author(s)

Ambrogio Maria Bernardelli (Università di Pavia)

Stefano Gualandi (Università di Pavia)

Simone Milanesi (Università di Pavia)

Hoong Chuin Lau (Singapore Management University)

N. Yorke-Smith (TU Delft - Algorithmics)

Research Group
Algorithmics
DOI (related publication)
https://doi.org/10.1287/ijoc.2023.0281
Publication Year
2025
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Issue number
3
Volume number
37
Pages (from-to)
623-643
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Training neural networks (NNs) using combinatorial optimization solvers has gained attention in recent years. In low-data settings, the use of state-of-the-art mixed integer linear programming solvers, for instance, has the potential to exactly train an NN while avoiding compute-intensive training and hyperparameter tuning, and to simultaneously train and sparsify the network. We study the case of few-bit discrete-valued neural networks, both binarized neural networks (BNNs), whose values are restricted to ±1, and integer-valued neural networks (INNs), whose values lie in the range {−P, …, P}. Few-bit NNs receive increasing recognition because of their lightweight architecture and ability to run on low-power devices: for example, being implemented using Boolean operations. This paper proposes new methods to improve the training of BNNs and INNs. Our contribution is a multiobjective ensemble approach based on training a single NN for each possible pair of classes and applying a majority voting scheme to predict the final output. Our approach results in the training of robust sparsified networks whose output is not affected by small perturbations on the input and whose number of active weights is as small as possible. We empirically compare this BeMi approach with the current state of the art in solver-based NN training and with traditional gradient-based training, focusing on BNN learning in few-shot contexts. We compare the benefits and drawbacks of INNs versus BNNs, shedding new light on the distribution of weights over the {−P, …, P} interval. Finally, we compare multiobjective versus single-objective training of INNs, showing that robustness and network simplicity can be acquired simultaneously, thus obtaining better test performance. Although the previous state-of-the-art approaches achieve an average accuracy of 51.1% on the Modified National Institute of Standards and Technology (MNIST) data set, the BeMi ensemble approach achieves an average accuracy of 68.4% when trained with 10 images per class and 81.8% when trained with 40 images per class, while having up to 75.3% of NN links removed.
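
To make the ensemble's voting rule concrete, the following Python sketch illustrates the one-versus-one majority vote described in the abstract. It assumes each pairwise classifier has already been trained (in the paper, via a MILP solver) and is exposed as a callable that returns one of its two classes; the names predict_ensemble and classifiers are illustrative, not from the paper.

from collections import Counter
from itertools import combinations

def predict_ensemble(x, classifiers, classes):
    # classifiers: dict mapping each class pair (i, j), with i < j, to a trained
    # pairwise model that returns either i or j for input x (hypothetical API).
    # classes: the full set of class labels, e.g., range(10) for MNIST.
    votes = Counter()
    for i, j in combinations(sorted(classes), 2):
        votes[classifiers[(i, j)](x)] += 1
    # The label winning the most pairwise votes is the ensemble prediction
    # (ties broken by Counter's insertion order in this sketch).
    return votes.most_common(1)[0][0]

For 10 classes, this scheme trains C(10, 2) = 45 small networks, each seeing data from only two classes, which is what keeps the per-network MILP instances tractable in the few-shot regime.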

Files

2024_Bernardelli_et_al_-_INFOR... (pdf)
(pdf | 1.87 MB)
- Embargo expired on 24-07-2025
License info not available