Running a Red Light

An Investigation into Why Software Engineers (Occasionally) Ignore Coverage Checks

Conference Paper (2024)
Author(s)

Alexander Sterk (Student, TU Delft)

Mairieli Wessel (Radboud Universiteit Nijmegen)

Eli Hooten (Sentry.io)

A.E. Zaidman (TU Delft - Software Technology)

Department
Software Technology
DOI
https://doi.org/10.1145/3644032.3644444
Publication Year
2024
Language
English
Pages (from-to)
12-22
ISBN (print)
979-8-4007-0588-5
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Many modern code coverage tools track and report code coverage data generated from running tests during continuous integration. They report this data through a variety of channels, including email, Slack, Mattermost, and the web interface of social coding platforms such as GitHub. In fact, this ensemble of tools can be configured so that the software engineer gets a failing status check when code coverage drops below a certain threshold. In this study, we broadly investigate opinions on and experiences with code coverage tools through a survey among 279 software engineers whose projects use the Codecov coverage tool and bot. In particular, we investigate why software engineers ignore a failing status check caused by a drop in code coverage. We observe that more than 80% of software engineers at least sometimes ignore these failing status checks, and we gain insight into the main reasons why they do so.
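As a concrete illustration of such a threshold check (a minimal sketch of our own, not taken from the paper), a Codecov status check can be configured in a repository's codecov.yml; the target and threshold values below are assumed placeholders:

```yaml
# codecov.yml -- minimal sketch; the 80% target and 1% threshold are
# illustrative placeholders, not values reported in the paper.
coverage:
  status:
    project:
      default:
        target: 80%     # fail the status check if total coverage is below 80%
        threshold: 1%   # tolerate a drop of up to 1% before failing
```

When such a check fails, the pull request shows a red failing status, the "red light" of the paper's title, which the engineer may choose to heed or ignore.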