On the Evaluation of NLP-based Models for Software Engineering

Conference Paper (2022)
Authors

M. Izadi (TU Delft - Software Engineering)

Martin Nili Ahmadabadi (University of Tehran)

Research Group
Software Engineering
Copyright
© 2022 M. Izadi, Martin Nili Ahmadabadi
To reference this document use:
https://doi.org/10.1145/3528588.3528665
More Info
Publication Year
2022
Language
English
Pages (from-to)
48-50
ISBN (print)
978-1-6654-6231-0
ISBN (electronic)
978-1-4503-9343-0
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

NLP-based models are increasingly being adopted to address SE problems. These models are either applied in the SE domain with little to no change, or they are heavily tailored to source code and its unique characteristics. Many of these approaches are reported to outperform or complement existing solutions. However, an important question arises: are these models evaluated fairly and consistently in the SE community? To answer this question, we reviewed how NLP-based models for SE problems are evaluated by researchers. The findings indicate that there is currently no consistent and widely accepted protocol for evaluating these models. Different studies assess different aspects of the same task, metrics are defined through custom choices rather than a systematic procedure, and results are collected and interpreted on a case-by-case basis. Consequently, there is a pressing need for a methodological way of evaluating NLP-based models that enables consistent assessment and preserves the possibility of fair and efficient comparison.