JUGE: An Infrastructure for Benchmarking Java Unit Test Generators

Journal Article (2022)
Author(s)

Xavier Devroey (University of Namur)

Alessio Gambi (University of Passau)

Juan Pablo Galeotti (Universidad de Buenos Aires)

René Just (University of Washington)

Fitsum M. Kifetew (Fondazione Bruno Kessler)

Sebastiano Panichella (Zurich University of Applied Sciences (ZHAW))

A. Panichella (TU Delft - Software Engineering)

Research Group
Software Engineering
Copyright
© 2022 Xavier Devroey, Alessio Gambi, Juan Pablo Galeotti, René Just, Fitsum Meshesha Kifetew, Sebastiano Panichella, A. Panichella
DOI related publication
https://doi.org/10.1002/stvr.1838
Publication Year
2022
Language
English
Issue number
3
Volume number
33
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and various platforms (e.g., desktop, web, or mobile applications). The generators exhibit varying effectiveness and efficiency, depending on the testing goals they aim to satisfy (e.g., unit-testing of libraries versus system-testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the one best suited to their requirements, while researchers seek to identify future research directions. This can be achieved by systematically executing large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to select appropriate benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this Software Note, we present our JUnit Generation Benchmarking Infrastructure (JUGE), which supports generators (search-based, random-based, symbolic execution, etc.) that automate the production of unit tests for various purposes (validation, regression testing, fault localization, etc.). The primary goal is to reduce the overall benchmarking effort, ease the comparison of several generators, and enhance the knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, several editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place where JUGE was used and evolved. As a result, an increasing number of tools (over 10) from academia and industry have been evaluated on JUGE, matured over the years, and enabled the identification of future research directions. Based on the experience gained from the competitions, we discuss the expected impact of JUGE in improving the knowledge transfer on tools and approaches for test generation between academia and industry. Indeed, the JUGE infrastructure features an implementation design that is flexible enough to enable the integration of additional unit test generation tools, which is practical for developers and allows researchers to experiment with new and advanced unit testing tools and approaches.
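To make the benchmarking workflow described in the abstract more concrete, the following is a minimal, hypothetical sketch in Java of how competing unit test generators could be invoked uniformly by a harness, given a class under test and a time budget. The interface name, method signatures, and the stub generator are illustrative assumptions only; they do not reflect JUGE's actual tool contract or API.

```java
import java.nio.file.Path;
import java.time.Duration;
import java.util.List;

// Hypothetical adapter (not JUGE's real interface): every generator exposes the
// same entry point so the harness can treat search-based, random-based, and
// symbolic-execution tools uniformly.
interface UnitTestGenerator {
    String name();

    // Generate JUnit test sources for one class under test within a time budget;
    // returns the paths of the generated test files.
    List<Path> generateTests(String classUnderTest, Path classpath, Duration budget) throws Exception;
}

// Minimal harness loop: run each registered generator on each benchmark class.
final class BenchmarkHarness {
    private final List<UnitTestGenerator> generators;

    BenchmarkHarness(List<UnitTestGenerator> generators) {
        this.generators = generators;
    }

    void run(List<String> benchmarkClasses, Path classpath, Duration budgetPerClass) {
        for (UnitTestGenerator generator : generators) {
            for (String cut : benchmarkClasses) {
                try {
                    List<Path> tests = generator.generateTests(cut, classpath, budgetPerClass);
                    // A real infrastructure would now compile, execute, and score the
                    // tests (e.g., coverage, mutation score); here we only report counts.
                    System.out.printf("%s produced %d test file(s) for %s%n",
                            generator.name(), tests.size(), cut);
                } catch (Exception e) {
                    System.out.printf("%s failed on %s: %s%n", generator.name(), cut, e.getMessage());
                }
            }
        }
    }

    public static void main(String[] args) {
        // Dummy stub standing in for a real tool; the class name org.example.StringUtils
        // is a made-up benchmark entry used purely for illustration.
        UnitTestGenerator stub = new UnitTestGenerator() {
            public String name() { return "stub-generator"; }
            public List<Path> generateTests(String cut, Path cp, Duration budget) {
                return List.of(); // a real tool would write JUnit sources here
            }
        };
        new BenchmarkHarness(List.of(stub))
                .run(List.of("org.example.StringUtils"), Path.of("lib"), Duration.ofMinutes(2));
    }
}
```

In a setup of this kind, the adapter boundary is what keeps the comparison fair: every tool receives the same class under test, classpath, and time budget, and is scored by the same downstream pipeline.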

Files

Software_Testing_Verif_Rel_202... (pdf, 2.75 MB)
Embargo expired on 01-07-2023
License info not available