Cooperative data-driven modeling

Journal Article (2023)
Author(s)

Aleksandr Dekhovich (TU Delft - Team Marcel Sluiter)

O. Taylan Turan (TU Delft - Pattern Recognition and Bioinformatics)

Jiaxiang Yi (TU Delft - Team Marcel Sluiter)

Miguel Bessa (TU Delft - Team Marcel Sluiter, Brown University)

Research Group
Team Marcel Sluiter
Copyright
© 2023 A. Dekhovich, O.T. Turan, Y. Jiaxiang, M.A. Bessa
DOI related publication
https://doi.org/10.1016/j.cma.2023.116432
Publication Year
2023
Language
English
Volume number
417
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Data-driven modeling in mechanics is evolving rapidly based on recent machine learning advances, especially on artificial neural networks. As the field matures, new data and models created by different groups become available, opening possibilities for cooperative modeling. However, artificial neural networks suffer from catastrophic forgetting, i.e., they forget how to perform an old task when trained on a new one. This hinders cooperation because adapting an existing model for a new task affects the performance on a previous task trained by someone else. We developed a continual learning method that addresses this issue, applying it here for the first time to solid mechanics. In particular, the method is applied to recurrent neural networks to predict history-dependent plasticity behavior, although it can be used with any other architecture (feedforward, convolutional, etc.) and to predict other phenomena. This work intends to spawn future developments on continual learning that will foster cooperative strategies among the mechanics community to solve increasingly challenging problems. We show that the chosen continual learning strategy can sequentially learn several constitutive laws without forgetting them, using less data to achieve the same error as standard (non-cooperative) training of one law per model.
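For illustration only (this is not the authors' method or code): the sketch below shows the catastrophic-forgetting problem described in the abstract by naively training a small recurrent network on two synthetic history-dependent "constitutive laws" in sequence and measuring how the error on the first law degrades afterwards. The toy data-generation rule, network size, and hyperparameters are assumptions made for this example; the paper's continual learning strategy is designed to avoid precisely this degradation.

```python
# Minimal sketch (assumptions throughout): naive sequential training of a GRU
# on two toy history-dependent tasks, exposing catastrophic forgetting.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n_paths, n_steps, hardening):
    # Toy history-dependent response: stress saturates with the accumulated
    # absolute strain increment, controlled by a task-specific 'hardening' value.
    d_eps = 0.01 * torch.randn(n_paths, n_steps, 1)              # strain increments
    strain = torch.cumsum(d_eps, dim=1)                          # strain path (input)
    accum = torch.cumsum(d_eps.abs(), dim=1)                     # accumulated |d_eps|
    stress = torch.tanh(hardening * accum) * torch.sign(strain)  # toy stress (target)
    return strain, stress

class RNNModel(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h)

def train(model, x, y, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

def mse(model, x, y):
    with torch.no_grad():
        return nn.functional.mse_loss(model(x), y).item()

# Two "constitutive laws" play the role of tasks contributed by different groups.
x1, y1 = make_task(64, 50, hardening=2.0)
x2, y2 = make_task(64, 50, hardening=8.0)

model = RNNModel()
train(model, x1, y1)
err1_before = mse(model, x1, y1)   # task-1 error right after learning it
train(model, x2, y2)               # naive sequential training on task 2
err1_after = mse(model, x1, y1)    # task-1 error typically increases markedly
print(f"task-1 MSE before: {err1_before:.4f}, after task-2 training: {err1_after:.4f}")
```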

Files

1_s2.0_S004578252300556X_main.... (pdf)
(pdf | 1.56 MB)
- Embargo expired on 25-03-2024
License info not available