Re-evaluating Method-Level Bug Prediction

Conference Paper (2018)
Author(s)

Luca Pascarella (TU Delft - Software Engineering)

Fabio Palomba (Universität Zürich)

Alberto Bacchelli (Universität Zürich)

Research Group
Software Engineering
Copyright
© 2018 L. Pascarella, F. Palomba, A. Bacchelli
DOI (related publication)
https://doi.org/10.1109/SANER.2018.8330264
Publication Year
2018
Language
English
Pages (from-to)
1-10
ISBN (electronic)
978-1-5386-4969-5
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Bug prediction aims to support developers in identifying the code artifacts that are more likely to be defective. Researchers have proposed prediction models to identify bug-prone methods and provided promising evidence that it is possible to operate at this level of granularity. In particular, models based on a mixture of product and process metrics, used as independent variables, led to the best results.
In this study, we first replicate previous research on method-level bug prediction on different systems/timespans. Afterwards, we reflect on the evaluation strategy and propose a more realistic one. Key results of our study show that, when evaluated with the same strategy, the performance of the method-level bug prediction model is similar to what was previously reported, also for different systems/timespans. However, when evaluated with the more realistic strategy, all the models show a dramatic drop in performance, with results close to those of a random classifier. Our replication and negative results indicate that method-level bug prediction is still an open challenge.
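As an illustration of the two evaluation strategies the abstract contrasts, the sketch below compares a mixed cross-validation with a release-based split in which the model is trained on one release and tested on the next. This is not the authors' code: the synthetic data, the metric names, and the random-forest classifier are illustrative assumptions.

# Minimal sketch (not from the paper): mixed cross-validation vs. a
# release-based (train on past, test on future) evaluation of a
# method-level bug prediction model. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical per-method features: product metrics (e.g., size,
# complexity) and process metrics (e.g., number of changes, authors).
n = 1000
X = rng.normal(size=(n, 4))          # [loc, complexity, n_changes, n_authors]
y = rng.binomial(1, 0.1, size=n)     # 1 = bug-prone method, 0 = clean
release = np.repeat([0, 1], n // 2)  # first half: release N; second: N+1

clf = RandomForestClassifier(random_state=0)

# Strategy 1: 10-fold cross-validation mixing methods from both releases.
cv_auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()

# Strategy 2: realistic split, training on release N and testing on N+1.
train, test = release == 0, release == 1
clf.fit(X[train], y[train])
realistic_auc = roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1])

print(f"cross-validation AUC-ROC: {cv_auc:.2f}")
print(f"release-based AUC-ROC:    {realistic_auc:.2f}")

On real project data, the mixed cross-validation estimate is typically more optimistic, since training folds contain methods from the same timespan as the test fold; the release-based split avoids this leakage, which is the effect the study measures.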

Files

TUD_SERG_2018_006.pdf
(pdf | 0.343 MB)
License info not available