Java Unit Testing Tool Competition - Seventh Round

Conference Paper (2019)
Author(s)

Fitsum Kifetew (Fondazione Bruno Kessler)

Xavier Devroey (TU Delft - Software Engineering)

Urko Rueda (Universitat Politècnica de València)

DOI
https://doi.org/10.1109/SBST.2019.00014 (final published version)
Publication Year
2019
Language
English
Article number
8812209
Pages (from-to)
15-20
ISBN (print)
978-1-7281-2234-2
ISBN (electronic)
978-1-7281-2233-5
Collections
Institutional Repository
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We report on the results of the seventh edition of the JUnit tool competition. This year, four tools were executed on a benchmark comprising (i) new classes selected from real-world software projects and (ii) challenging classes from the previous edition. We used Randoop and the projects' manual test suites as baselines. Given the interesting findings of last year, we also analyzed the effectiveness of the combined test suites generated by all competing tools, comparing them with the manual test suites of the projects as well as with the test suites generated by the individual tools. This paper describes our methodology and results, highlighting the challenges faced during the contest.

Files

Sbst_contest_final.pdf
(pdf | 0.397 MB)
License info not available