TypeEvalPy

A Micro-benchmarking Framework for Python Type Inference Tools

Conference Paper (2024)
Author(s)

Ashwin Prasad S. Venkatesh (Paderborn University)

Samkutty Sabu (Paderborn University)

Jiawei Wang (Monash University)

S.A.M. Mir (TU Delft - Software Engineering)

Li Li (Beihang University)

Eric Bodden (Paderborn University)

Research Group
Software Engineering
DOI (related publication)
https://doi.org/10.1145/3639478.3640033
Publication Year
2024
Language
English
Pages (from-to)
49-53
ISBN (electronic)
9798400705021
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In light of the growing interest in type inference research for Python, both researchers and practitioners require a standardized process to assess the performance of various type inference techniques. This paper introduces TypeEvalPy, a comprehensive micro-benchmarking framework for evaluating type inference tools. TypeEvalPy contains 154 code snippets with 845 type annotations across 18 categories that target various Python features. The framework manages the execution of containerized tools, transforms inferred types into a standardized format, and produces meaningful metrics for assessment. Through our analysis, we compare the performance of six type inference tools, highlighting their strengths and limitations. Our findings provide a foundation for further research and optimization in the domain of Python type inference.
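The pipeline the abstract describes (run containerized tools, translate each tool's output into one standardized format, and score it against ground-truth annotations) can be pictured with a small sketch. Everything below, including the example snippet, the JSON-like annotation schema, and the exact-match scoring, is an illustrative assumption for exposition, not TypeEvalPy's actual benchmark data or output format.

```python
# Illustrative sketch only: the snippet, the schema, and the matching logic
# are assumptions, not TypeEvalPy's actual benchmark contents or format.

# A hypothetical micro-benchmark snippet targeting one Python feature
# (inferring a function's return type and parameter type):
#
#     def greet(name):
#         return "Hello, " + name

# Hypothetical ground-truth annotations for that snippet, in a simple
# standardized form (file, line, name, expected type).
ground_truth = [
    {"file": "snippet.py", "line": 1, "name": "greet", "type": ["str"]},
    {"file": "snippet.py", "line": 1, "name": "name", "type": ["str"]},
]

# Hypothetical output of one containerized tool, already translated into
# the same standardized form by a per-tool adapter.
inferred = [
    {"file": "snippet.py", "line": 1, "name": "greet", "type": ["str"]},
    {"file": "snippet.py", "line": 1, "name": "name", "type": ["Any"]},
]


def exact_matches(truth, predictions):
    """Count ground-truth entries whose location, name, and type are
    reproduced exactly by the tool's predictions."""
    predicted = {(p["file"], p["line"], p["name"]): p["type"] for p in predictions}
    return sum(
        1
        for t in truth
        if predicted.get((t["file"], t["line"], t["name"])) == t["type"]
    )


print(f"exact matches: {exact_matches(ground_truth, inferred)}/{len(ground_truth)}")
# -> exact matches: 1/2
```

Normalizing every tool's output into one shared record shape is what makes a metric like exact matches comparable across the six evaluated tools; the schema and scoring rule shown here are only one plausible way to realize that idea.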