Champagne Taste on a Beer Budget

Better Budget Utilisation in Multi-label Adversarial Attacks


Abstract

Multi-label classification is an important branch of classification problems, as in many real-world scenarios an object can belong to multiple classes simultaneously. Deep-learning-based classifiers perform well at image classification, but their predictions have been shown to be unstable under small input distortions, called adversarial perturbations. There are multi-class classifiers, which assign an image to a single class, and multi-label classifiers, which attribute multiple labels to an image. In multi-class scenarios, adversarial attacks are conventionally constrained by a perturbation-magnitude budget in order to enforce visual imperceptibility. In related studies on multi-label attacks there has been no notion of a budget, which results in visible perturbations in the image. In this paper we develop attacks that cause the most severe disruption of the binary label predictions, i.e. a maximum number of label flips, while adhering to a perturbation budget. To achieve this, we first analyse the applicability of the existing single-label attack MI-FGSM to multi-label problems. A naive way of using MI-FGSM in a multi-label scenario is to use binary cross-entropy loss and target all labels simultaneously. Our key observations are that targeting all labels simultaneously under a small budget leads to inefficient budget use, that labels differ in their attackability, and that labels exhibit different correlation structures, which influences their combined attackability. Moreover, we show that the loss function determines the optimisation direction by prioritising labels with certain confidence values. We find that there are two different strategies to optimise budget use and propose two distinct methods: Smart Loss-function for Attacks on Multi-label models (SLAM) and Classification Landscape Attentive Subset Selection (CLASS).
SLAM comprises a loss function that uses an estimate of the potential number of flips to adapt the shape of the loss curve, and hence the label prioritisation. CLASS uses binary cross-entropy loss but focuses the budget on only a subset of the labels, constructed by considering label attackability and pairwise label correlation. CLASS has the drawback that it relies on classifier-specific heuristics for determining the size of the label subset. We extensively evaluate SLAM and CLASS on three datasets, using two state-of-the-art models, Query2Label and ASL. Our evaluation results show that CLASS and SLAM increase the number of flips under the budget constraint by up to 131% and 61%, respectively, compared to naive MI-FGSM.
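To make the naive baseline concrete, the sketch below shows MI-FGSM applied to a multi-label problem: momentum-accumulated gradient ascent on the summed binary cross-entropy over all labels, with each step projected onto an L∞ perturbation budget, and success measured as the number of flipped binary label predictions. This is an illustrative toy only; the linear classifier `p = sigmoid(W x + b)`, the parameter names, and the helper `label_flips` are our own assumptions, not the models or code evaluated in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mi_fgsm_multilabel(x, y, W, b, eps=0.5, steps=10, mu=1.0):
    """Naive multi-label MI-FGSM against a toy linear classifier
    p = sigmoid(W @ x + b): maximise the summed binary cross-entropy
    over ALL labels (the 'target everything' strategy), staying
    within an L_inf budget of eps."""
    alpha = eps / steps           # per-step size so the budget is reachable
    g = np.zeros_like(x)          # accumulated momentum
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(W @ x_adv + b)
        # gradient of sum_k BCE(p_k, y_k) w.r.t. the input:
        # d BCE / d logit_k = p_k - y_k, chained back through W
        grad = W.T @ (p - y)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # L1-normalised momentum
        x_adv = x_adv + alpha * np.sign(g)                # ascent step (more loss)
        x_adv = np.clip(x_adv, x - eps, x + eps)          # project onto the budget
    return x_adv

def label_flips(x, x_adv, y, W, b):
    """Number of labels whose binary prediction changed."""
    pred = lambda v: (sigmoid(W @ v + b) >= 0.5).astype(int)
    return int((pred(x) != pred(x_adv)).sum())
```

With a small budget, this baseline spreads its perturbation across every label's gradient; SLAM and CLASS differ precisely in how they re-concentrate that budget, via the loss shape and via a label subset respectively.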
