The Last Dependency Crusade: Solving Python Dependency Conflicts with LLMs

Conference Paper (2025)
Author(s)

A.J. Bartlett (TU Delft - Multimedia Computing)

C.C.S. Liem (TU Delft - Multimedia Computing)

A. Panichella (TU Delft - Software Engineering)

Research Group
Multimedia Computing
DOI
https://doi.org/10.1109/ASEW67777.2025.00022
Publication Year
2025
Language
English
Pages (from-to)
66-73
Publisher
IEEE
ISBN (print)
979-8-3315-8504-4
ISBN (electronic)
979-8-3315-8503-7
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Resolving Python dependency issues remains a tedious and error-prone process, forcing developers to manually trial compatible module versions and interpreter configurations. Existing automated solutions, such as knowledge-graph-based and database-driven methods, face limitations due to the variety of dependency error types, large sets of possible module versions, and conflicts among transitive dependencies. This paper investigates the use of Large Language Models (LLMs) to automatically repair dependency issues in Python programs. We propose PLLM (pronounced "plum"), a novel retrieval-augmented generation (RAG) approach that iteratively infers missing or incorrect dependencies. PLLM builds a test environment in which the LLM proposes module combinations, observes execution feedback, and refines its predictions using natural language processing (NLP) to parse error messages. We evaluate PLLM on the Gistable HG2.9K dataset, a curated collection of real-world Python programs. Using this benchmark, we explore multiple PLLM configurations, including six open-source LLMs evaluated both with and without RAG. Our findings show that RAG consistently improves fix rates, with the best performance achieved by Gemma-2 9B combined with RAG. Compared to two state-of-the-art baselines, PyEGo and ReadPyE, PLLM achieves significantly higher fix rates: +15.97% more than ReadPyE and +21.58% more than PyEGo. Further analysis shows that PLLM is especially effective for projects with numerous dependencies and those using specialized numerical or machine-learning libraries.
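
The feedback loop the abstract describes can be sketched in a few lines of Python. The sketch below is illustrative only, not PLLM's actual implementation: the query_llm helper, its prompt format, and the regex-based error parsing are assumptions (it handles only the missing-module case, whereas the paper's NLP covers a wider variety of dependency errors and the RAG component retrieves package metadata).

```python
# Minimal sketch of an iterative LLM-driven dependency repair loop:
# run the script, parse the dependency error, ask an LLM for a pinned
# requirement, install it, and retry until the script executes.
import re
import subprocess
import sys

MISSING_MODULE = re.compile(r"No module named '([^']+)'")


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for the RAG-backed LLM. A real system would
    retrieve package metadata and prior feedback before prompting."""
    raise NotImplementedError("plug in your LLM client here")


def try_run(script: str) -> subprocess.CompletedProcess:
    """Execute the target script, capturing stderr as execution feedback."""
    return subprocess.run([sys.executable, script],
                          capture_output=True, text=True, timeout=300)


def repair(script: str, max_iters: int = 10) -> bool:
    for _ in range(max_iters):
        result = try_run(script)
        if result.returncode == 0:
            return True  # script runs: dependencies resolved
        match = MISSING_MODULE.search(result.stderr)
        if not match:
            return False  # not a missing-module error; out of this sketch's scope
        module = match.group(1)
        # Ask the LLM for a pinned requirement (e.g. "numpy==1.24.4"),
        # feeding back the error text so it can refine earlier guesses.
        requirement = query_llm(
            f"Script failed with:\n{result.stderr}\n"
            f"Suggest a pip requirement for module '{module}'."
        ).strip()
        subprocess.run([sys.executable, "-m", "pip", "install", requirement],
                       capture_output=True, text=True)
    return False
```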

Files

Taverne

File under embargo until 19-07-2026