The Impact of Test Case Summaries on Bug Fixing Performance
An Empirical Investigation
Sebastiano Panichella (Universität Zürich)
Annibale Panichella (TU Delft - Software Engineering)
M.M. Beller (TU Delft - Software Engineering)
A.E. Zaidman (TU Delft - Software Engineering)
Harald C. Gall (Universität Zürich)
Abstract
Automated test generation tools have been widely investigated with the goal of reducing the cost of testing activities. However, generated tests have been shown not to help developers in detecting and fixing more bugs, even though they reach higher structural coverage than manually written tests. The main reason is that generated tests are difficult to understand and maintain. Our paper proposes an approach, coined TestDescriber, which automatically generates test case summaries of the portion of code exercised by each individual test, thereby improving understandability. We argue that this approach can complement existing automated unit test generation techniques and search-based techniques designed to generate a possibly minimal set of test cases. In evaluating our approach we found that (1) developers find twice as many bugs, and (2) test case summaries significantly improve the comprehensibility of test cases, which developers consider particularly useful.
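To make the idea concrete, the sketch below shows what a summary-annotated generated test might look like. It is a hypothetical illustration, not TestDescriber's actual output: the class under test (Rational), the test name, and the wording of the summary comment are all assumptions introduced here for illustration.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class RationalTest {

    /**
     * Summary comment of the kind the approach would generate:
     * The test creates a Rational with numerator 2 and denominator 4,
     * then calls simplify(), covering the branch where the greatest
     * common divisor is greater than 1, and checks that the fraction
     * is reduced to 1/2.
     */
    @Test
    public void testSimplifyReducesFraction() {
        Rational r = new Rational(2, 4);  // exercised: constructor
        Rational s = r.simplify();        // exercised: gcd > 1 branch
        assertEquals(1, s.numerator());
        assertEquals(2, s.denominator());
    }

    // Minimal class under test, included so the sketch is self-contained.
    static final class Rational {
        private final int num, den;
        Rational(int num, int den) { this.num = num; this.den = den; }
        Rational simplify() {
            int g = gcd(Math.abs(num), Math.abs(den));
            return new Rational(num / g, den / g);
        }
        int numerator() { return num; }
        int denominator() { return den; }
        private static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
    }
}
```

The summary comment plays the role described in the abstract: it states, in natural language, which portion of the production code the test exercises, so a developer reading an otherwise machine-generated test can judge what behavior it checks.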