Is Your Anomaly Detector Ready for Change? Adapting AIOps Solutions to the Real World

Conference Paper (2024)
Author(s)

Lorena Poenaru-Olaru (TU Delft - Software Engineering)

Natalia Karpova (Student TU Delft)

Luis Cruz (TU Delft - Software Engineering)

Jan S. Rellermeyer (TU Delft - Data-Intensive Systems, Leibniz Universität)

Arie van Deursen (TU Delft - Software Engineering)

Research Group
Software Engineering
DOI
https://doi.org/10.1145/3644815.3644961
Publication Year
2024
Language
English
Pages (from-to)
222-233
ISBN (electronic)
9798400705915
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Anomaly detection techniques are essential for automating the monitoring of IT systems and operations. They rely on machine learning models that are trained on operational data from a specific period of time and then continuously evaluated on newly emerging data. Because operational data changes constantly over time, the performance of deployed anomaly detection models degrades, and continuous model maintenance is required to preserve it. In this work, we analyze two anomaly detection model maintenance techniques that differ in their model update frequency: blind model retraining and informed model retraining. We further investigate the effects of updating the model by retraining it on all available data (full-history approach) versus only the newest data (sliding window approach). Finally, we investigate whether a data change monitoring tool can determine when the anomaly detection model needs to be updated through retraining.
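To make the four maintenance regimes named in the abstract concrete, the following minimal sketch combines the two update-frequency strategies (blind vs. informed retraining) with the two training-data regimes (full-history vs. sliding window). It is not the authors' implementation: scikit-learn's IsolationForest stands in for an arbitrary anomaly detector, the mean-shift check is a toy stand-in for the paper's data change monitoring tool, and the batch sizes, window length, and threshold are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

def fit_detector(data):
    # IsolationForest is an illustrative stand-in; the paper does not
    # prescribe a particular detector here.
    return IsolationForest(random_state=0).fit(data)

def mean_shift_drift(reference, current, threshold=3.0):
    # Toy data-change monitor (an assumption, not the paper's tool):
    # flag drift when any per-feature mean of the current batch moves
    # more than `threshold` reference standard deviations away.
    ref_mean = reference.mean(axis=0)
    ref_std = reference.std(axis=0) + 1e-9
    z = np.abs(current.mean(axis=0) - ref_mean) / ref_std
    return bool((z > threshold).any())

def maintain(batches, strategy="blind", data_regime="sliding", window=3):
    """Yield the (possibly retrained) model in use for each incoming batch.

    strategy:    'blind'    -> retrain on every new batch
                 'informed' -> retrain only when the monitor flags drift
    data_regime: 'full'     -> retrain on all data seen so far
                 'sliding'  -> retrain on the `window` newest batches
    """
    seen = [batches[0]]
    model = fit_detector(batches[0])
    yield model
    for batch in batches[1:]:
        retrain = strategy == "blind" or mean_shift_drift(np.vstack(seen), batch)
        seen.append(batch)
        if retrain:
            train_data = seen if data_regime == "full" else seen[-window:]
            model = fit_detector(np.vstack(train_data))
        yield model

# Usage: synthetic batches whose distribution shifts halfway through,
# so the informed strategy retrains only once the shift appears.
rng = np.random.default_rng(42)
batches = [rng.normal(0, 1, size=(200, 4)) for _ in range(5)] + \
          [rng.normal(5, 1, size=(200, 4)) for _ in range(5)]
models = list(maintain(batches, strategy="informed", data_regime="sliding"))

Under this sketch, blind retraining refits after every batch regardless of need, while informed retraining defers the (potentially expensive) refit until the monitor reports a change, which is precisely the trade-off the paper evaluates.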