TypeEvalPy
A Micro-benchmarking Framework for Python Type Inference Tools
Ashwin Prasad S. Venkatesh (Paderborn University)
Samkutty Sabu (Paderborn University)
Jiawei Wang (Monash University)
S.A.M. Mir (TU Delft - Software Engineering)
Li Li (Beihang University)
Eric Bodden (Paderborn University)
Abstract
In light of the growing interest in type inference research for Python, both researchers and practitioners require a standardized process to assess the performance of various type inference techniques. This paper introduces TypeEvalPy, a comprehensive micro-benchmarking framework for evaluating type inference tools. TypeEvalPy contains 154 code snippets with 845 type annotations across 18 categories that target various Python features. The framework manages the execution of containerized tools, transforms inferred types into a standardized format, and produces meaningful metrics for assessment. Through our analysis, we compare the performance of six type inference tools, highlighting their strengths and limitations. Our findings provide a foundation for further research and optimization in the domain of Python type inference.
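To make the evaluation pipeline concrete, the sketch below shows how tool output in a standardized format might be scored against ground-truth annotations. This is a minimal illustration, not TypeEvalPy's actual implementation: the JSON record fields (file, line_number, function, variable, type) and the file names are assumptions chosen for the example.

```python
# Minimal sketch of the comparison step described in the abstract:
# scoring a tool's inferred types against ground-truth annotations.
# The record schema and file paths here are illustrative assumptions.
import json


def load_annotations(path):
    """Load a list of type-annotation records from a JSON file."""
    with open(path) as f:
        return json.load(f)


def exact_matches(ground_truth, inferred):
    """Count ground-truth annotations a tool reproduced exactly."""
    # Key each record by its source location so records line up.
    def key(rec):
        return (rec["file"], rec["line_number"],
                rec.get("function"), rec.get("variable"))

    inferred_by_loc = {key(r): r.get("type", []) for r in inferred}
    matches = sum(
        1 for rec in ground_truth
        if inferred_by_loc.get(key(rec)) == rec.get("type", [])
    )
    return matches, len(ground_truth)


if __name__ == "__main__":
    truth = load_annotations("ground_truth.json")  # hypothetical paths
    preds = load_annotations("tool_output.json")
    hit, total = exact_matches(truth, preds)
    print(f"Exact matches: {hit}/{total}")
```

Normalizing every tool's output into one record shape before scoring is what lets a single metric routine compare tools with otherwise incompatible output formats.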