Optimizing the Reduced Basis Construction for Reduced-Order Mechanical Models

Automatic and efficient load case selection using Bayesian machine learning

Abstract

Numerical simulations have become an essential part of design in every field of engineering, and the boundaries of technology are pushed further every year. In structural engineering, the desire to design structures with complex shapes, or structures that are simply cheaper and more efficient, has necessitated the use of complex numerical simulations. This is all the more true for structures such as wind turbines, which are subjected to extreme environmental conditions. In many cases, however, such simulations are prohibitively expensive due to complex material behaviour or the many-query nature of design optimization. The resulting lack of knowledge and understanding of behaviour and failure mechanisms is compensated for by adopting less complex designs and/or high safety factors, which leads to less efficient and more expensive designs. In recent years, methods to circumvent such high computational costs have been developed. Acceleration techniques such as model-order reduction (MOR) are now widely researched, and significant progress is being made in overcoming the cost of prohibitively expensive high-fidelity models. This thesis uses Proper Orthogonal Decomposition (POD) to drastically reduce the degrees of freedom of a simply supported beam loaded in the downward direction along its span. The result of the MOR process is a reduced-order model (ROM) that accurately approximates the behaviour of the full-order model (FOM). The ROM is constructed by determining a set of basis vectors that contain compressed information from representative full-order solutions collected in an offline training phase. The goal of this thesis is to collect this information and construct the reduced basis as efficiently as possible while guaranteeing a given accuracy of the ROM. Two methods are presented to iteratively construct the reduced basis.
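The standard way to extract such a POD basis is a singular value decomposition of a snapshot matrix whose columns are full-order solutions, truncated by an energy criterion. The sketch below illustrates this under simple assumptions (the function name, the energy tolerance, and the synthetic deflection snapshots are all hypothetical, not taken from the thesis):

```python
import numpy as np

def pod_basis(snapshots, energy_tol=1e-4):
    """Build a POD reduced basis from full-order snapshots.

    snapshots  : (n_dof, n_snapshots) array, each column a full-order solution.
    energy_tol : fraction of snapshot 'energy' allowed to be discarded.
    """
    # Thin SVD of the snapshot matrix; the left singular vectors are POD modes.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    # Keep the smallest number of modes whose singular values capture
    # a fraction (1 - energy_tol) of the total energy sum(s**2).
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, 1.0 - energy_tol) + 1)
    return U[:, :k]  # reduced basis V, so that u ≈ V @ u_reduced

# Example: 200 DOFs, 10 snapshots of a smooth (synthetic) deflection field
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.sin(np.pi * p * x) for p in np.linspace(1, 2, 10)])
V = pod_basis(snaps)
```

Because the columns of `V` come from an SVD they are orthonormal, which is what makes the Galerkin projection of the full-order operators onto the reduced space well conditioned.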
The first is the Surrogate Parameter Space (SPS) method, in which greedy sampling is performed on incrementally finer grids of points along the beam. Each individual grid is referred to as an SPS, and the grids are exhausted by selecting, in each iteration, the load location that will add the most new information to the ROM. The second method is based on Gaussian Process Regression (GPR): greedy sampling using a Bayesian machine learning algorithm involving GPR is used to predict the load locations along the beam that add the most new information to the ROM. A method to efficiently combine and compress the information obtained in each iteration was also developed. The results showed that the SPS method efficiently constructs an accurate ROM for the beam model in this thesis, but the GPR method recognized that some areas of the span did not have to be sampled as much as others. This makes GPR the more promising method for other high-fidelity models when high accuracy is desired. The results also showed that both methods depend greatly on input parameters that define how much information is kept in each iteration, and, for GPR, on an additional parameter that determines how accurate the regression itself should be. To execute an efficient offline phase, these parameters must be chosen carefully, and it is recommended for future work to develop methods that choose them adaptively. It was also shown that the number of ROM evaluations required by the greedy sampling is the efficiency bottleneck for both methods. For high-fidelity models with higher-dimensional parameter spaces, this bottleneck necessitates hyper-reduction techniques such as the Empirical Cubature Method (ECM) to reduce the computation time of the ROM itself. It is recommended that the methods investigated in this thesis, and the corresponding results, be used as a stepping stone for implementing automatic and efficient sampling methods for other high-fidelity models.
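The GPR-driven greedy step can be pictured as follows: fit a Gaussian process to an error indicator evaluated at the load locations sampled so far, then choose the next location where the predicted error (plus its uncertainty) is largest. The sketch below is a minimal, self-contained illustration with a standard RBF kernel; the kernel length scale, the error-indicator values, and the upper-confidence-bound acquisition rule are illustrative assumptions, not the exact choices made in the thesis:

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, length=0.1, noise=1e-8):
    """Posterior mean and standard deviation of a zero-mean GP with an
    RBF kernel; all inputs are scalar load positions along the beam span."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))  # train covariance
    K_s = rbf(x_test, x_train)                                # test-train covariance
    mean = K_s @ np.linalg.solve(K, y_train)
    # Predictive variance (diagonal only): k(x,x) - k_s K^{-1} k_s^T, k(x,x)=1
    v = np.linalg.solve(K, K_s.T)
    var = 1.0 - np.sum(K_s * v.T, axis=1)
    return mean, np.sqrt(np.clip(var, 0.0, None))

# Greedy step: sample where the upper confidence bound on the ROM error
# indicator is largest (the error values below are hypothetical).
x_train = np.array([0.1, 0.5, 0.9])     # load positions already sampled
y_train = np.array([0.02, 0.30, 0.05])  # ROM error indicator at those positions
x_cand = np.linspace(0.0, 1.0, 101)     # candidate load positions
mean, std = gp_posterior(x_train, y_train, x_cand)
x_next = x_cand[np.argmax(mean + 2.0 * std)]
```

This is where the regression-accuracy parameter mentioned above enters: a stricter GP fit (more candidate points, tighter hyperparameters) predicts informative load locations more reliably, but each greedy iteration still requires running the current ROM to evaluate the error indicator, which is the bottleneck identified in the thesis.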