Search-Based Software Testing (SBST) tools can automatically generate tests that achieve high code coverage; however, a systematic understanding of why they fail in specific situations is still lacking. This thesis addresses this gap by developing a comprehensive taxonomy of coverage failures through an empirical analysis of three prominent SBST tools: Pynguin (Python), SynTest (JavaScript), and EvoSuite (Java). By classifying and analysing failure patterns across these tools and language paradigms, this research provides a foundational framework for diagnosing shortcomings, prioritising future development, and enhancing the practical effectiveness of automated test generation.