Predicting the objective and priority of issue reports in software repositories

Journal Article (2022)
Authors

M. Izadi (Sharif University of Technology)

Kiana Akbari (Sharif University of Technology)

Abbas Heydarnoori (External organisation)
To reference this document use:
https://doi.org/10.1007/s10664-021-10085-3
Publication Year
2022
Language
English
Issue number
2
Volume number
27

Abstract

Software repositories such as GitHub host a large number of software entities. Developers collaboratively discuss, implement, use, and share these entities. Proper documentation plays an important role in successful software management and maintenance. Users rely on Issue Tracking Systems, a facility of software repositories, to keep track of issue reports, manage workloads and processes, and document the highlights of their team's efforts. An issue report is a rich source of collaboratively curated software knowledge; it can describe a reported problem, a request for new features, or merely a question about the software product. As the number of these issues grows, managing them manually becomes harder. GitHub provides labels for tagging issues as a means of issue management; however, about half of the issues in GitHub's top 1000 repositories have no labels at all. In this work, we aim to automate the process of managing issue reports for software teams. We propose a two-stage approach that predicts both the objective behind opening an issue and its priority level, using feature engineering methods and state-of-the-art text classifiers. To the best of our knowledge, we are the first to fine-tune a Transformer for issue classification. We train and evaluate our models in both project-based and cross-project settings. The latter provides a generic prediction model applicable to any unseen software project or to projects with little historical data. Our proposed approach predicts the objective and priority level of issue reports with 82% (fine-tuned RoBERTa) and 75% (Random Forest) accuracy, respectively. Moreover, we conducted human labeling and evaluation on unlabeled issues from six unseen GitHub projects to assess the performance of the cross-project model on new data; the model achieves 90% accuracy on this sample set. Measuring inter-rater reliability, we obtain an average Percent Agreement of 85.3% and a Randolph's free-marginal Kappa of 0.71, which translates to substantial agreement among labelers.
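
The first stage described in the abstract classifies an issue's objective with a fine-tuned Transformer. As a minimal sketch of what this can look like with the Hugging Face transformers library, the snippet below fine-tunes roberta-base on toy issue texts; the label set (bug, enhancement, question), the hyperparameters, and the data are illustrative assumptions, not the authors' exact pipeline.

    # Hedged sketch: fine-tuning RoBERTa for issue-objective classification.
    # Labels, data, and hyperparameters are assumptions for illustration only.
    import torch
    from transformers import (RobertaForSequenceClassification,
                              RobertaTokenizerFast, Trainer, TrainingArguments)

    texts = ["App crashes on startup when the config file is missing",
             "Please add dark mode support",
             "How do I configure a proxy for the CLI?"]
    labels = [0, 1, 2]  # 0 = bug, 1 = enhancement, 2 = question (assumed classes)

    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=3)

    class IssueDataset(torch.utils.data.Dataset):
        # Wraps tokenized issue texts (e.g., title and body concatenated)
        # together with their integer class labels.
        def __init__(self, texts, labels):
            self.enc = tokenizer(texts, truncation=True, padding=True,
                                 max_length=256)
            self.labels = labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="issue-objective",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=IssueDataset(texts, labels),
    )
    trainer.train()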
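
The second stage predicts priority, for which the abstract reports a Random Forest reaching 75% accuracy. The sketch below pairs TF-IDF text features (one plausible form of the feature engineering the abstract mentions) with scikit-learn's RandomForestClassifier; the priority labels and example texts are again illustrative assumptions.

    # Hedged sketch: TF-IDF features + Random Forest for priority prediction.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline

    texts = ["Crash: data loss when saving large projects",
             "Typo in the README installation section"]
    priorities = ["high", "low"]  # assumed priority labels

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # unigram and bigram features
        RandomForestClassifier(n_estimators=100, random_state=42),
    )
    clf.fit(texts, priorities)
    print(clf.predict(["App freezes and corrupts the database"]))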
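
On the reliability figures: Randolph's free-marginal multirater kappa corrects the observed agreement \bar{P}_o for the agreement expected by chance among k rating categories:

    \kappa_{\text{free}} = \frac{\bar{P}_o - \frac{1}{k}}{1 - \frac{1}{k}}

Assuming each issue received a binary judgment (k = 2), the reported numbers are internally consistent: (0.853 - 0.5) / (1 - 0.5) \approx 0.71, matching the stated Kappa of 0.71.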

Metadata only record. There are no files for this record.