Physically Recurrent Neural Networks for accelerating multiscale simulations of complex materials

Doctoral Thesis (2025)
Author(s)

M. Alves Maia (TU Delft - Applied Mechanics)

Contributor(s)

F.P. van der Meer – Promotor (TU Delft - Applied Mechanics)

I.B.C.M. Rocha – Copromotor (TU Delft - Applied Mechanics)

Research Group
Applied Mechanics
Publication Year
2025
Language
English
ISBN (electronic)
978-94-6518-203-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Modeling the mechanical behavior of high-performance materials often requires accounting for interactions occurring at a lower scale than the one of interest. This scale transition can be addressed in many ways, with different levels of fidelity and computational effort. Naturally, a trade-off exists between these two aspects, and no single method, whether analytical, numerical, or computational, perfectly balances them. Among the high-fidelity options for modeling complex materials (e.g., composite laminates and concrete) is concurrent multiscale analysis, or simply FE2.

In FE2, two distinct scales, e.g., macro and micro, are solved iteratively. At the microscale, the material geometry is explicitly described by the so-called Representative Volume Element (RVE), whose constituents are described by relatively simple constitutive models. At the macroscale, an RVE is coupled to each integration point, and homogenization operators downscale strains and upscale stresses, removing the need for a (macroscopic) constitutive model. However, this generality comes with high, often prohibitive, computational costs. The limited scalability of FE2 hinders its adoption for real-life engineering problems, driving the need for acceleration strategies that retain the generality of the multiscale framework.
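The coupling described above can be sketched in a few lines of Python. All names here are illustrative, and the linear `solve_rve` is a stand-in: in an actual FE2 simulation, each call would be a full finite element analysis of the RVE mesh.

```python
import numpy as np

def solve_rve(macro_strain):
    # Hypothetical stand-in for the microscopic boundary value problem.
    # A linear map plays the role of the RVE solve so the sketch runs
    # on its own; the stiffness values below are made up.
    micro_stiffness = np.diag([10.0, 10.0, 5.0])
    # Homogenization: the downscaled macro strain drives the RVE boundary
    # conditions, and the volume-averaged stress is upscaled in return.
    return micro_stiffness @ macro_strain

def macro_stress_at_gauss_point(macro_strain):
    # Each macroscopic integration point owns its own RVE, so no
    # closed-form macroscopic constitutive model is ever written down.
    return solve_rve(macro_strain)

sigma = macro_stress_at_gauss_point(np.array([1e-3, 0.0, 0.0]))
```

The expensive part is that `solve_rve` is invoked at every integration point of the macroscopic mesh, in every iteration of every load step, which is precisely what makes FE2 costly and what surrogate models aim to bypass.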

In the last decade, machine learning-based techniques have emerged as a popular alternative for reducing the computational cost of these simulations. Using a surrogate model to replace the RVE altogether is arguably the most popular approach. Nevertheless, critical issues in data-driven surrogate models remain unsolved and are particularly evident when modeling history-dependent materials. Among them are their data-hungry nature, limited extrapolation capabilities, and lack of interpretability.

To address these issues, we introduce a novel class of neural networks (NNs): the Physically Recurrent Neural Networks (PRNNs). The idea is to preserve the knowledge built into constitutive models by embedding them in an encoder-decoder NN architecture with several links to the computational homogenization framework. This hybrid approach, which is non-intrusive and not bound to any specific material model, seeks to combine the benefits of purely data-driven models with those of classical physics-based models.
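As a minimal sketch of the idea, the snippet below places fictitious material points, each carrying a real constitutive model with internal state, between a trainable encoder and decoder. The toy 1D elastoplastic model, the fixed weights, and all names are our own simplifications for illustration, not the thesis implementation; in practice the weights would be trained and the embedded models match the RVE constituents.

```python
import numpy as np

class Plastic1D:
    """Toy 1D elastoplasticity with linear hardening (illustrative only):
    the kind of constitutive-model unit a PRNN embeds."""
    def __init__(self, E=100.0, sig_y=1.0, H=10.0):
        self.E, self.sig_y, self.H = E, sig_y, H
        self.eps_p = 0.0   # plastic strain: a physically meaningful state
        self.alpha = 0.0   # accumulated plastic strain (hardening variable)

    def stress(self, eps):
        sig_trial = self.E * (eps - self.eps_p)
        f = abs(sig_trial) - (self.sig_y + self.H * self.alpha)
        if f > 0.0:  # yielding: return mapping updates the internal state
            dgamma = f / (self.E + self.H)
            self.eps_p += dgamma * np.sign(sig_trial)
            self.alpha += dgamma
            sig_trial = self.E * (eps - self.eps_p)
        return sig_trial

class TinyPRNN:
    """Encoder -> material layer -> decoder. Weights are fixed here so
    the sketch is deterministic; in a PRNN they would be trained."""
    def __init__(self):
        self.w_enc = np.array([1.0, 0.5])  # macro strain -> local strains
        self.w_dec = np.array([0.6, 0.4])  # local stresses -> macro stress
        self.points = [Plastic1D() for _ in self.w_enc]

    def step(self, macro_eps):
        local_eps = self.w_enc * macro_eps
        # The material layer keeps its own history, so the network is
        # recurrent through physics rather than through learned gates.
        local_sig = np.array([m.stress(e)
                              for m, e in zip(self.points, local_eps)])
        return float(self.w_dec @ local_sig)
```

Loading beyond yield and unloading back to zero strain leaves a residual stress: the history dependence comes from the embedded models' internal variables rather than from trained hidden states, which is what makes the architecture interpretable and frugal with data.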

Starting with a composite micromodel with elastic inclusions and an elastoplastic matrix, we demonstrate how training data requirements can be dramatically reduced compared to state-of-the-art approaches, with speed-ups of over four orders of magnitude compared to FE2. Then, we illustrate how architectural design choices not only improve interpretability but also push training requirements towards a new lower bound. Next, we incorporate cohesive zone models to capture microscopic debonding. In later chapters, we shift to a 3D finite strain setting and adapt the method to handle hyperelasticity and elasto-viscoplasticity. The final chapter focuses on a real-life scientific application, followed by closing remarks, contributions, and future research directions.