Revisiting Test Smells in Automatically Generated Tests: Limitations, Pitfalls, and Opportunities

Conference Paper (2020)
Author(s)

Annibale Panichella (TU Delft - Software Engineering)

Sebastiano Panichella (Zurich University of Applied Science (ZHAW))

Gordon Fraser (University of Passau)

Anand Ashok Sawant (University of California)

Vincent J. Hellendoorn (University of California)

Research Group
Software Engineering
Copyright
© 2020 A. Panichella, Sebastiano Panichella, Gordon Fraser, Anand Ashok Sawant, Vincent J. Hellendoorn
DOI related publication
https://doi.org/10.1109/ICSME46990.2020.00056
Publication Year
2020
Language
English
Pages (from-to)
523-533
ISBN (print)
978-1-7281-5620-0
ISBN (electronic)
978-1-7281-5619-4
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Test smells attempt to capture design issues in test code that reduce its maintainability. Previous work found such smells to be highly common in automatically generated test cases, but based this result on specific static detection rules; although these follow the original definition of “test smells”, a recent empirical study showed that developers perceive them as overly strict and not representative of the maintainability and quality of test suites. This leads us to investigate how effective such test smell detection tools are on automatically generated test suites. In this paper, we build a dataset of 2,340 test cases automatically generated by EVOSUITE for 100 Java classes. We perform a multi-stage, cross-validated manual analysis to identify six types of test smells and label their instances. We benchmark the performance of two test smell detection tools: one widely used in prior work, and one recently introduced with the express goal of matching developer perceptions of test smells. Our results show that these detection strategies poorly characterize the issues in automatically generated test suites; the older tool’s detection strategies, in particular, misclassified over 70% of test smells, both missing real instances (false negatives) and marking many smell-free tests as smelly (false positives). We identify common patterns in these tests that can be used to improve the tools, refine and update the definition of certain test smells, and highlight issues that have not yet been characterized. Our findings suggest the need for (i) more appropriate metrics that match development practice; and (ii) more accurate detection strategies, to be evaluated primarily in industrial contexts.
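For readers unfamiliar with the smells studied here, the sketch below shows what two commonly flagged smells can look like in an EvoSuite-style JUnit test. It is an illustrative example only, not taken from the paper's dataset: the ShoppingCart class, its methods, and the test values are invented, and the minimal class under test is included solely to make the snippet self-contained.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import org.junit.Test;

// Hypothetical class under test, included only to make the example self-contained.
class ShoppingCart {
    private int itemCount = 0;
    private double total = 0.0;

    void addItem(String name, int quantity, double unitPrice) {
        itemCount += quantity;
        total += quantity * unitPrice;
    }

    void applyDiscount(double rate) {
        total *= (1.0 - rate);
    }

    double getTotal()   { return total; }
    int getItemCount()  { return itemCount; }
    boolean isEmpty()   { return itemCount == 0; }
}

public class ShoppingCart_ESTest {

    // The numbered test name and timeout annotation mimic EvoSuite's output style.
    @Test(timeout = 4000)
    public void test00() throws Throwable {
        // Eager Test: a single test case exercises several production methods.
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", 2, 9.99);
        cart.applyDiscount(0.10);

        // Assertion Roulette: several assertions without explanatory messages,
        // so a failure does not indicate which expectation was violated.
        assertEquals(17.982, cart.getTotal(), 0.01);
        assertEquals(2, cart.getItemCount());
        assertFalse(cart.isEmpty());
    }
}
```

Static detection rules of the kind benchmarked in the paper typically flag patterns such as these from the test code alone, which is where the mismatch with developer perception and the false positives/negatives discussed in the abstract arise.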

Files

Main.pdf
(pdf | 0.453 MB)
GNU LGPL