Enhancing Classifier Conservativeness and Robustness by Polynomiality

Conference Paper (2022)
Author(s)

Z. Wang (TU Delft - Pattern Recognition and Bioinformatics)

Marco Loog (TU Delft - Pattern Recognition and Bioinformatics, University of Copenhagen)

Research Group
Pattern Recognition and Bioinformatics
Copyright
© 2022 Z. Wang, M. Loog
DOI
https://doi.org/10.1109/CVPR52688.2022.01297
Publication Year
2022
Language
English
Pages (from-to)
13317-13326
ISBN (print)
978-1-6654-6947-0
ISBN (electronic)
978-1-6654-6946-3
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We illustrate the detrimental effects, such as overconfident decisions, that exponential behavior can have in methods like classical LDA and logistic regression. We then show how polynomiality can remedy the situation. Among other things, this purposefully leads to random-level performance in the tails, away from the bulk of the training data. A directly related, simple, yet important technical novelty we subsequently present is softRmax: a reasoned alternative to the standard softmax function employed in contemporary (deep) neural networks. It is derived by linking the standard softmax to Gaussian class-conditional models, as employed in LDA, and replacing those by a polynomial alternative. We show that two aspects of softRmax, conservativeness and inherent gradient regularization, lead to robustness against adversarial attacks without gradient obfuscation.
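The conservativeness argument in the abstract can be illustrated numerically. The sketch below is an illustrative assumption, not the paper's exact softRmax definition: it contrasts Gaussian class-conditionals (which yield softmax-style exponential posteriors that saturate far from the data) with a hypothetical polynomial, Cauchy-like class-conditional p(x|k) ∝ 1/(1 + ||x − μ_k||²)ⁿ, whose posterior tends toward the uniform (random-level) distribution in the tails.

```python
import numpy as np

def gaussian_posterior(x, centers):
    # LDA-style posterior: exponential class conditionals give a
    # softmax over negative squared distances to the class centers.
    d2 = np.array([np.sum((x - c) ** 2) for c in centers])
    e = np.exp(-0.5 * (d2 - d2.min()))  # shift for numerical stability
    return e / e.sum()

def polynomial_posterior(x, centers, n=1):
    # Hypothetical polynomial (Cauchy-like) class conditionals:
    # p(x|k) proportional to 1 / (1 + ||x - c_k||^2)^n.
    d2 = np.array([np.sum((x - c) ** 2) for c in centers])
    w = 1.0 / (1.0 + d2) ** n
    return w / w.sum()

# Two class centers; evaluate a point far from the bulk of the data.
centers = [np.array([-1.0]), np.array([1.0])]
far_point = np.array([100.0])

g = gaussian_posterior(far_point, centers)
p = polynomial_posterior(far_point, centers)
# The exponential posterior saturates to a near one-hot (overconfident)
# decision, while the polynomial posterior stays close to uniform.
```

For the far point, the squared distances 101² and 99² differ by 400, so the exponential posterior assigns essentially all mass to the nearer class, whereas the polynomial weights 1/10202 and 1/9802 normalize to roughly 0.49 and 0.51: random-level confidence away from the training data, as the abstract describes.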

Files

Enhancing_Classifier_Conservat... (pdf, 0.642 MB) — embargo expired 01-07-2023; license info not available.