Scalable Task Planning via Large Language Models and Structured World Representations

Journal Article (2025)
Author(s)

Rodrigo Pérez-Dattari (TU Delft - Learning & Autonomous Control)

Z. Li (TU Delft - Learning & Autonomous Control)

R. Babuska (Czech Technical University, TU Delft - Learning & Autonomous Control)

J. Kober (TU Delft - Learning & Autonomous Control)

C. Della Santina (Deutsches Zentrum für Luft- und Raumfahrt (DLR), TU Delft - Learning & Autonomous Control)

Research Group
Learning & Autonomous Control
DOI
https://doi.org/10.1002/adrr.202500002
Publication Year
2025
Language
English
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Planning methods often struggle with computational intractability when solving task-level problems in large-scale environments. This work explores how the commonsense knowledge encoded in Large Language Models (LLMs) can be leveraged to enhance planning techniques for such complex scenarios. Specifically, we propose an approach that uses LLMs to efficiently prune irrelevant components from the planning problem's state space, thereby substantially reducing its complexity. We demonstrate the efficacy of our system through extensive experiments in a household simulation environment as well as real-world validation on a 7-DoF manipulator (video: https://youtu.be/6ro2UOtOQS4).
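The core idea in the abstract — querying an LLM to discard state-space components irrelevant to the task before planning — can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the yes/no relevance query, and the hard-coded stand-in for the LLM's answers are all illustrative assumptions.

```python
def mock_llm_relevance(task: str, obj: str) -> bool:
    """Stand-in for an LLM query such as:
    'Is <obj> relevant to the task "<task>"? Answer yes or no.'
    A real system would prompt a language model here; this stub
    hard-codes plausible commonsense answers for one task."""
    relevant = {
        "make coffee": {"mug", "coffee_machine", "coffee_beans", "water"},
    }
    return obj in relevant.get(task, set())


def prune_state_space(task: str, objects: list[str]) -> list[str]:
    """Keep only the objects the (mock) LLM judges relevant,
    shrinking the planning problem handed to a symbolic planner."""
    return [o for o in objects if mock_llm_relevance(task, o)]


# A large household environment reduced to the handful of objects
# that matter for the current task.
household = ["mug", "coffee_machine", "coffee_beans", "water",
             "sofa", "television", "bed", "toothbrush", "lamp"]
pruned = prune_state_space("make coffee", household)
print(pruned)
```

Because planner complexity typically grows combinatorially with the number of objects and predicates, even coarse relevance filtering of this kind can reduce the downstream search space substantially.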