Robust Anomaly Detection on Unreliable Data

Conference Paper (2019)
Author(s)

Zilong Zhao (Université Grenoble Alpes)

Sophie Cerf (Université Grenoble Alpes)

Robert Birke (ABB Research)

Bogdan Robu (Université Grenoble Alpes)

Sara Bouchenak (INSA Lyon)

Sonia Ben Mokhtar (INSA Lyon)

Y. Chen (TU Delft - Data-Intensive Systems)

DOI
https://doi.org/10.1109/DSN.2019.00068
Publication Year
2019
Language
English
Research Group
Data-Intensive Systems
Pages (from-to)
630-637
ISBN (print)
978-1-7281-0058-6
ISBN (electronic)
978-1-7281-0056-2

Abstract

Classification algorithms have been widely adopted to detect anomalies for various systems, e.g., IoT and cloud, under the common assumption that the data source is clean, i.e., features and labels are correctly set. However, data collected from the field can be unreliable due to careless annotations or malicious data transformations that induce incorrect anomaly detection. In this paper, we present a two-layer learning framework for robust anomaly detection (RAD) in the presence of unreliable anomaly labels. The first layer, a quality model, filters the suspicious data, while the second layer, a classification model, detects the anomaly types. We specifically focus on two use cases: (i) detecting 10 classes of IoT attacks and (ii) predicting 4 classes of task failures in big data jobs. Our evaluation results show that RAD robustly improves the accuracy of anomaly detection, reaching up to 98% for IoT device attacks (i.e., +11%) and up to 83% for cloud task failures (i.e., +20%), under a significant percentage of altered anomaly labels.

Index Terms: Unreliable Data; Anomaly Detection; Failures; Attacks; Machine Learning.
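
For intuition, the following is a minimal Python sketch of the two-layer idea described in the abstract; it is not the authors' implementation. It assumes scikit-learn-style classifiers, uses a simple agreement test (the quality model must reproduce a sample's given label) as the filtering criterion, and the function name rad_sketch together with all model choices are illustrative assumptions.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def rad_sketch(X, y_noisy, X_test, y_test, seed=0):
    # Split the (possibly mislabeled) data: one part trains the quality model,
    # the other is the candidate pool to be filtered.
    X_q, X_c, y_q, y_c = train_test_split(
        X, y_noisy, test_size=0.5, random_state=seed, stratify=y_noisy
    )

    # Layer 1: quality model, trained on one slice of the noisy data.
    quality_model = RandomForestClassifier(n_estimators=100, random_state=seed)
    quality_model.fit(X_q, y_q)

    # Keep only candidate samples whose given label matches the quality
    # model's prediction (a simple proxy for "this label looks reliable").
    keep = quality_model.predict(X_c) == y_c
    X_clean, y_clean = X_c[keep], y_c[keep]

    # Layer 2: classification model, trained on the filtered data,
    # detects the anomaly types (e.g., attack or failure classes).
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    clf.fit(X_clean, y_clean)

    return accuracy_score(y_test, clf.predict(X_test))

In the paper, the framework is evaluated on the two use cases above (IoT attacks and cloud task failures) while a significant fraction of the anomaly labels in the training data is altered.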

Metadata only record. There are no files for this record.