Symbolic learning of interpretable reduced-order models for jumping quadruped robots

Journal Article (2026)
Author(s)

Gioele Buriani (Student TU Delft)

Jingyue Liu (TU Delft - Learning & Autonomous Control)

Maximilian Stölzle (Massachusetts Institute of Technology, TU Delft - Learning & Autonomous Control)

Cosimo Della Santina (Deutsches Zentrum für Luft- und Raumfahrt (DLR), TU Delft - Learning & Autonomous Control)

J. Ding (Università degli Studi di Trento)

Research Group
Learning & Autonomous Control
DOI related publication
https://doi.org/10.1016/j.ifacsc.2025.100360
Publication Year
2026
Language
English
Volume number
35
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Reduced-order models are central to motion planning and control of quadruped robots, yet existing templates are often hand-crafted for a specific locomotion modality. This motivates the need for automatic methods that extract task-specific, interpretable low-dimensional dynamics directly from data. We propose a methodology that combines a linear autoencoder with symbolic regression to derive such models. The linear autoencoder provides a consistent latent embedding for configurations, velocities, accelerations, and inputs, enabling the sparse identification of nonlinear dynamics (SINDy) to operate in a compact, physics-aligned space. A multi-phase, hybrid-aware training scheme ensures coherent latent coordinates across contact transitions. We focus our validation on quadruped jumping—a representative, challenging, yet contained scenario in which a principled template model is especially valuable. The resulting symbolic dynamics outperform the state-of-the-art handcrafted actuated spring-loaded inverted pendulum (aSLIP) baseline in simulation and hardware across multiple robots and jumping modalities.
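The following sketch is only an illustrative reading of the pipeline outlined in the abstract: full-order states are projected into a low-dimensional latent space through a shared linear encoder, and sparse latent dynamics are then identified with sequentially thresholded least squares, the regression at the core of SINDy. All variable names, dimensions, library terms, and the synthetic data are hypothetical placeholders and do not reproduce the paper's actual implementation or training scheme.

```python
import numpy as np

# --- Hypothetical full-order trajectory data (placeholders, not the paper's data) ---
# q: configurations, dq: velocities, ddq: accelerations, u: inputs
T, n_q, n_u = 500, 12, 4            # time steps, full-order dim, input dim
rng = np.random.default_rng(0)
q = rng.standard_normal((T, n_q))
dq = np.gradient(q, axis=0)
ddq = np.gradient(dq, axis=0)
u = rng.standard_normal((T, n_u))

# --- Linear "autoencoder": one shared projection applied to q, dq, and ddq ---
n_z = 2                              # latent dimension of the reduced-order model
_, _, Vt = np.linalg.svd(q - q.mean(axis=0), full_matrices=False)
W = Vt[:n_z].T                       # encoder/decoder weights (n_q x n_z)
z, dz, ddz = q @ W, dq @ W, ddq @ W

# --- Candidate feature library in the latent space (polynomials up to degree 2) ---
def library(z, dz, u):
    cols = [np.ones((len(z), 1)), z, dz, u]
    cols += [z[:, [i]] * z[:, [j]] for i in range(n_z) for j in range(i, n_z)]
    return np.hstack(cols)

Theta = library(z, dz, u)

# --- SINDy-style sequentially thresholded least squares ---
def stlsq(Theta, target, threshold=0.1, n_iter=10):
    xi = np.linalg.lstsq(Theta, target, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(target.shape[1]):
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(Theta[:, big], target[:, k], rcond=None)[0]
    return xi

Xi = stlsq(Theta, ddz)               # sparse coefficients: ddz ~ Theta @ Xi
print("Nonzero terms per latent coordinate:", (np.abs(Xi) > 0).sum(axis=0))
```

In the paper's setting the same latent coordinates would be kept consistent across contact phases (stance and flight), and the resulting symbolic expression for ddz would serve as the interpretable template model; the sketch above omits that hybrid, multi-phase aspect entirely.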