Language-Conditioned Navigation Affordance Prediction under Occlusion

Master Thesis (2026)
Author(s)

X. Gao (TU Delft - Mechanical Engineering)

Contributor(s)

Javier Alonso-Mora – Mentor (TU Delft - Learning & Autonomous Control)

Gang Cheng – Mentor

Holger Caesar – Graduation committee member (TU Delft - Intelligent Vehicles)

R. Sabzevari – Graduation committee member (TU Delft - Group Sabzevari)

Faculty
Mechanical Engineering
Publication Year
2026
Language
English
Graduation Date
30-03-2026
Awarding Institution
Delft University of Technology
Programme
Mechanical Engineering, Vehicle Engineering, Cognitive Robotics
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Language-conditioned local navigation requires a robot to infer a nearby traversable target location from its current observation and an open-vocabulary, relational instruction. Existing vision-language spatial grounding methods usually rely on vision–language models (VLMs) to reason in image space, producing 2D predictions tied to visible pixels. As a result, they struggle to infer target locations in regions that are occluded, typically by furniture or moving humans. To address this issue, we propose BEACON, which predicts an ego-centric Bird’s-Eye-View (BEV) affordance heatmap over a bounded local region, including occluded areas. Given an instruction and surround-view RGB-D observations from four directions around the robot, BEACON injects spatial cues into a VLM and fuses the VLM’s output with depth-derived BEV features to produce the BEV heatmap. Using an occlusion-aware dataset built in the Habitat simulator, we conduct a detailed experimental analysis to validate both our BEV-space formulation and the design choices of each module. On the validation subset with occluded target locations, our method improves accuracy, averaged across geodesic thresholds, by 22.74 percentage points over the state-of-the-art image-space baseline.
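The pipeline described in the abstract can be illustrated with a minimal sketch. Everything below is a toy illustration under stated assumptions, not the thesis implementation: the grid size, cell resolution, and function names (`depth_to_bev_occupancy`, `predict_affordance_heatmap`) are hypothetical, and the language-conditioned VLM output is stood in for by a plain logit grid. The sketch shows only the structural idea of fusing depth-derived BEV occupancy with per-cell language-conditioned scores to obtain a normalized BEV affordance heatmap.

```python
import numpy as np

GRID = 32    # BEV grid resolution (assumed, for illustration only)
CELL = 0.25  # metres per cell (assumed)

def depth_to_bev_occupancy(points_xy):
    """Scatter ego-frame (x, y) obstacle points, as would be unprojected
    from the four surround-view depth maps, into a BEV occupancy grid
    centred on the robot."""
    occ = np.zeros((GRID, GRID))
    half = GRID * CELL / 2.0
    for x, y in points_xy:
        col = int((x + half) / CELL)
        row = int((y + half) / CELL)
        if 0 <= row < GRID and 0 <= col < GRID:
            occ[row, col] = 1.0
    return occ

def predict_affordance_heatmap(lang_bev_logits, occupancy):
    """Fuse language-conditioned per-cell logits (here a placeholder for
    the VLM-derived features) with depth-derived occupancy: occupied
    cells are masked out, and a softmax over the remaining cells yields
    a BEV affordance heatmap that sums to one."""
    masked = np.where(occupancy > 0.5, -np.inf, lang_bev_logits)
    flat = masked.ravel()
    flat = flat - flat[np.isfinite(flat)].max()  # numerical stability
    probs = np.exp(flat)                         # exp(-inf) -> 0 for masked cells
    return (probs / probs.sum()).reshape(GRID, GRID)

# Toy usage: two obstacle points and uniform language logits.
occ = depth_to_bev_occupancy([(0.5, 0.5), (-1.0, 2.0)])
heatmap = predict_affordance_heatmap(np.zeros((GRID, GRID)), occ)
```

Note that this sketch assigns zero affordance to occupied cells; the thesis, by contrast, predicts affordance over occluded (not directly observed) regions as well, which requires the learned fusion module rather than a hard mask.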
