Learning task constraints in visual-action planning from demonstrations

Conference Paper (2021)
Author(s)

Francesco Esposito (KTH Royal Institute of Technology)

Christian Pek (KTH Royal Institute of Technology)

Michael C. Welle (KTH Royal Institute of Technology)

Danica Kragic (KTH Royal Institute of Technology)

Affiliation
External organisation
DOI
https://doi.org/10.1109/RO-MAN50785.2021.9515548
Publication Year
2021
Language
English
Pages (from-to)
131-138
ISBN (electronic)
9781665404921

Abstract

Visual planning approaches have shown great success for decision-making tasks with no explicit model of the state space. Learning a suitable representation and constructing a latent space where planning can be performed allows non-experts to set up and plan motions simply by providing images. However, learned latent spaces are usually not semantically interpretable, which makes it difficult to integrate task constraints. We propose a novel framework that determines whether plans satisfy constraints, given demonstrations of policies that satisfy or violate the constraints. The demonstrations are realizations of Linear Temporal Logic (LTL) formulas, which are employed to train Long Short-Term Memory (LSTM) networks directly on the latent-space representation. We demonstrate that our architecture enables designers to easily specify, compose, and integrate task constraints, and that it achieves high accuracy. Furthermore, the visual planning framework enables human interaction, coping with environmental changes that a human worker may introduce. We show the flexibility of the method on a box-pushing task in a simulated warehouse setting with different task constraints.
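To illustrate the kind of training signal the abstract describes, the following toy sketch (not the paper's code; all state names and the grid setup are hypothetical) labels demonstration trajectories against a simple LTL-style constraint of the form "always avoid the forbidden region and eventually reach the goal" (G !forbidden ∧ F goal), evaluated over finite discrete traces. A sequence classifier such as an LSTM would then be trained on labeled sequences like these, but in the learned latent space rather than on raw states.

```python
# Toy labeling step (illustrative only): evaluate a simple LTL-style
# formula, G(not forbidden) AND F(goal), over finite state sequences.
# Satisfying/violating traces provide positive/negative training labels.

def satisfies(trajectory, forbidden, goal):
    """True iff the trace never enters `forbidden` and eventually hits `goal`."""
    always_safe = all(s not in forbidden for s in trajectory)  # G !forbidden
    eventually_goal = any(s in goal for s in trajectory)       # F goal
    return always_safe and eventually_goal

# Hypothetical grid-world states: trajectories are sequences of cells.
forbidden = {(1, 1)}   # obstacle cell
goal = {(2, 2)}        # target cell

demo_ok = [(0, 0), (0, 1), (1, 2), (2, 2)]   # avoids obstacle, reaches goal
demo_bad = [(0, 0), (1, 1), (2, 2)]          # passes through the obstacle

labels = [satisfies(t, forbidden, goal) for t in (demo_ok, demo_bad)]
print(labels)  # → [True, False]
```

In the paper's setting, the formulas also let designers compose constraints (conjunctions of such clauses) and generate both positive and negative demonstrations automatically.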

Metadata only record. There are no files for this record.