Generalizable models of magnetic hysteresis via physics-aware recurrent neural networks
Abhishek Chandra (Eindhoven University of Technology)
T. Kapoor (TU Delft - Railway Engineering)
Bram Daniels (Eindhoven University of Technology)
Mitrofan Curti (Eindhoven University of Technology)
Koen Tiels (Eindhoven University of Technology)
Daniel M. Tartakovsky (Stanford University)
Elena A. Lomonova (Eindhoven University of Technology)
Abstract
Hysteresis is a ubiquitous phenomenon in magnetic materials; its modeling and identification are crucial for understanding and optimizing the behavior of electrical machines. Such machines often operate under uncertain conditions, necessitating modeling methods that can generalize across unobserved scenarios. Traditional recurrent neural architectures struggle to generalize hysteresis patterns beyond their training domains. This paper mitigates the generalization challenge by introducing a physics-aware recurrent neural network approach to model and generalize hysteresis, which manifests as sequentiality and history dependence. The proposed method leverages the ordinary differential equations (ODEs) governing phenomenological hysteresis models to update the hidden recurrent states. The effectiveness of the proposed method is evaluated by predicting generalized scenarios, including first-order reversal curves and minor loops. The results demonstrate robust generalization to previously untrained regions, even with noisy data, an essential capability for practical hysteresis models. The results highlight the advantages of integrating physics-based ODEs into recurrent architectures, including superior performance over traditional methods in capturing the complex, nonlinear hysteresis behaviors in magnetic materials.
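The core idea, using a phenomenological hysteresis ODE as the recurrent state-transition rule, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simple Duhem-type model, dM/dt = α|dH/dt|(cH − M) + β dH/dt, discretized with a forward-Euler step, where the magnetization M plays the role of the hidden state that carries the field history. In the paper's approach, learnable network components would replace or parameterize such a hand-picked ODE.

```python
import numpy as np

def duhem_cell(H_prev, H_curr, M_prev, alpha=2.0, beta=0.2, c=1.0):
    """One recurrent step: forward-Euler update of a Duhem-type ODE,
    dM/dt = alpha*|dH/dt|*(c*H - M) + beta*dH/dt.
    The hidden state M carries the magnetization history; alpha, beta,
    and c are illustrative (assumed) material parameters."""
    dH = H_curr - H_prev
    dM = alpha * abs(dH) * (c * H_curr - M_prev) + beta * dH
    return M_prev + dM

def run_sequence(H_seq, M0=0.0, **params):
    """Unroll the cell over an applied-field sequence, exactly as an
    RNN unrolls over time: state at step k depends on the whole past."""
    M, out = M0, []
    for k in range(1, len(H_seq)):
        M = duhem_cell(H_seq[k - 1], H_seq[k], M, **params)
        out.append(M)
    return np.array(out)

# A sinusoidal field excursion traces out a hysteresis loop: the
# ascending and descending branches give different M at the same H.
t = np.linspace(0, 4 * np.pi, 400)
H = np.sin(t)
M = run_sequence(H)
```

Because the state update is the (discretized) physical ODE rather than a generic gated cell, the recurrence inherits the rate-independence and branching structure of the phenomenological model, which is what supports extrapolation to unseen excitations such as first-order reversal curves.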