SymFormer: End-to-End Symbolic Regression Using Transformer-Based Architecture

Journal Article (2024)
Author(s)

Martin Vastl (Charles University)

Jonas Kulhanek (Czech Technical University)

Jiří Kubalík (Czech Technical University)

Erik Derner (Czech Technical University)

Robert Babuska (TU Delft - Learning & Autonomous Control)

Research Group
Learning & Autonomous Control
DOI
https://doi.org/10.1109/ACCESS.2024.3374649
Publication Year
2024
Language
English
Volume number
12
Pages (from-to)
37840–37849
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Many real-world systems can be naturally described by mathematical formulas. The task of automatically constructing formulas to fit observed data is called symbolic regression. Evolutionary methods such as genetic programming have commonly been used to solve symbolic regression tasks, but they have significant drawbacks, such as high computational complexity. Recently, neural networks have been applied to symbolic regression, among which transformer-based methods appear to be the most promising. After training a transformer on a large number of formulas, the actual inference, i.e., finding a formula for new, unseen data, is very fast (on the order of seconds). This is considerably faster than state-of-the-art evolutionary methods. The main drawback of existing transformer approaches is that they generate formulas without numerical constants, which then have to be optimized separately, yielding suboptimal results. We propose a transformer-based approach called SymFormer, which predicts the formula by outputting the symbols and the constants simultaneously. This helps to generate formulas that fit the data more accurately. In addition, the constants provided by SymFormer serve as a good starting point for subsequent tuning via gradient descent, which further improves the model's accuracy. We show on several benchmarks that SymFormer outperforms state-of-the-art methods while offering faster inference.
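
The two ideas in the abstract, a decoder head that emits a symbol and its numeric constant in one step, and gradient-descent refinement of those constants, can be illustrated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' implementation; the names DualHead and tune_constants and the example formula y = c0 * sin(c1 * x) are hypothetical.

import torch
import torch.nn as nn

class DualHead(nn.Module):
    # Maps a decoder hidden state to (a) a distribution over formula symbols
    # and (b) a numeric constant predicted alongside each symbol.
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.symbol_head = nn.Linear(d_model, vocab_size)
        self.const_head = nn.Linear(d_model, 1)

    def forward(self, h):
        # h: (batch, seq, d_model) decoder states
        return self.symbol_head(h), self.const_head(h).squeeze(-1)

def tune_constants(x, y, c_init, steps=200, lr=1e-2):
    # Refine the constants of an already-decoded formula by gradient descent,
    # starting from the transformer's predictions instead of random values.
    c = torch.tensor(c_init, requires_grad=True)
    opt = torch.optim.Adam([c], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = c[0] * torch.sin(c[1] * x)  # structure fixed by the decoder
        loss = ((pred - y) ** 2).mean()
        loss.backward()
        opt.step()
    return c.detach()

# Example: recover y = 2 sin(0.5 x) from a nearby initial guess.
x = torch.linspace(-3.0, 3.0, 100)
y = 2.0 * torch.sin(0.5 * x)
print(tune_constants(x, y, [1.5, 0.7]))

Starting the optimization from the predicted constants rather than from scratch reflects the abstract's point: gradient descent only has to correct a small residual error, which is why the joint prediction improves accuracy.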