Title: Mini-Batching, Gradient-Clipping, First- versus Second-Order: What Works in Gradient-Based Coefficient Optimisation for Symbolic Regression
Authors: Harrison, Joe (Centrum Wiskunde & Informatica (CWI)); Virgolin, Marco (Centrum Wiskunde & Informatica (CWI)); Alderliesten, T. (Leiden University Medical Center); Bosman, P.A.N. (TU Delft Algorithmics; Centrum Wiskunde & Informatica (CWI))
Date: 2023
Abstract: The aim of Symbolic Regression (SR) is to discover interpretable expressions that accurately describe data. The accuracy of an expression depends on both its structure and its coefficients. To keep the structure simple enough to be interpretable, effective coefficient optimisation becomes key. Gradient-based optimisation is clearly effective at training neural networks in Deep Learning (DL), which can essentially be viewed as large, over-parameterised expressions: in this paper, we study how gradient-based optimisation techniques as often used in DL transfer to SR. In particular, we first assess which techniques work well across random SR expressions, independent of any specific SR algorithm. We find that mini-batching and gradient-clipping can be helpful (similar to DL), while second-order optimisers outperform first-order ones (different from DL). Next, we consider whether including gradient-based optimisation in Genetic Programming (GP), a classic SR algorithm, is beneficial. On five real-world datasets, in a generation-based comparison, we find that second-order optimisation outperforms coefficient mutation (or no optimisation). However, in time-based comparisons, performance gaps shrink substantially because the computational expense of second-order optimisation causes GP to perform fewer generations. The interplay of computational costs between the optimisation of structure and coefficients is thus a critical aspect to consider.
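The techniques the abstract compares can be illustrated with a minimal sketch (not code from the paper). It assumes a hypothetical fixed SR expression f(x) = a*sin(b*x) + c and synthetic data, tunes only the coefficients theta = (a, b, c), and contrasts mini-batch gradient descent with gradient-norm clipping against SciPy's `least_squares` solver, which serves here as a stand-in for the second-order optimisers studied:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical fixed expression structure: f(x) = a*sin(b*x) + c.
# As in SR coefficient optimisation, the structure is fixed and only
# the coefficients theta = (a, b, c) are tuned against the data.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, 200)
y = 2.0 * np.sin(1.5 * X) + 0.5  # synthetic data from "true" coefficients

def predict(theta, x):
    a, b, c = theta
    return a * np.sin(b * x) + c

def grad(theta, x, t):
    # Analytic gradient of the mean squared error w.r.t. (a, b, c).
    a, b, c = theta
    r = predict(theta, x) - t  # residuals
    return 2.0 * np.array([
        np.mean(r * np.sin(b * x)),
        np.mean(r * a * x * np.cos(b * x)),
        np.mean(r),
    ])

theta0 = np.array([1.0, 1.0, 0.0])
mse0 = np.mean((predict(theta0, X) - y) ** 2)

# First-order: mini-batch gradient descent with gradient-norm clipping.
theta = theta0.copy()
lr, clip, batch = 0.05, 1.0, 32
for _ in range(3000):
    idx = rng.choice(len(X), size=batch, replace=False)  # draw a mini-batch
    g = grad(theta, X[idx], y[idx])
    norm = np.linalg.norm(g)
    if norm > clip:
        g *= clip / norm  # clip the gradient norm
    theta -= lr * g
mse_first = np.mean((predict(theta, X) - y) ** 2)

# "Second-order" stand-in: a Gauss-Newton/trust-region least-squares solver.
res = least_squares(lambda th: predict(th, X) - y, theta0)
mse_second = np.mean(res.fun ** 2)
```

Both optimisers should reduce the error of the initial guess; the sketch does not reproduce the paper's experiments, only the mechanics of the techniques being compared.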
Subject: coefficient optimisation; explainable AI; genetic programming; gradient descent; symbolic regression
To reference this document use: http://resolver.tudelft.nl/uuid:02197e3e-2cd1-4d98-a152-c6d28d4264f8
DOI: https://doi.org/10.1145/3583131.3590368
Publisher: Association for Computing Machinery (ACM)
ISBN: 9798400701191
Source: GECCO 2023 - Proceedings of the 2023 Genetic and Evolutionary Computation Conference
Event: 2023 Genetic and Evolutionary Computation Conference (GECCO 2023), 2023-07-15 → 2023-07-19, Lisbon, Portugal
Series: GECCO 2023 - Proceedings of the 2023 Genetic and Evolutionary Computation Conference
Part of collection: Institutional Repository
Document type: conference paper
Rights: © 2023 Joe Harrison, Marco Virgolin, T. Alderliesten, P.A.N. Bosman
Files: 3583131.3590368.pdf (1.23 MB)