Exploring the optimization process in neurally reparameterized topology optimization

Master Thesis (2021)
Author(s)

Surya Narayanan Suryanarayanan (TU Delft - Mechanical Engineering)

Contributor(s)

G.I. Kuś – Mentor (TU Delft - Novel Aerospace Materials)

M.A. Bessa – Mentor (TU Delft - Team Georgy Filonenko)

Faculty
Mechanical Engineering
Copyright
© 2021 Surya Narayanan Suryanarayanan
Publication Year
2021
Language
English
Graduation Date
09-11-2021
Awarding Institution
Delft University of Technology
Programme
Mechanical Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Inverse design with topology optimization has followed the same computational
graph for decades. The unknown material density is distributed within a domain,
a computational analysis predicts the response of that design and its derivative
with respect to the unknown, and this information is used by a chosen gradient-based
optimization algorithm to find the next design iteration, until an optimum
(local or global) is reached. Recently, however, a counter-intuitive strategy was
proposed that augments the computational graph by inserting a neural network
between the response prediction (computational analysis) and the generation of a
new design (image). This shifts the optimization problem from its original space
to the weight space of the neural network. Yet this indirect optimization process
was shown elsewhere to outperform conventional topology optimization on a
large number of structural compliance problems, at least when choosing a particular
convolutional neural network (part of a U-Net) and a particular optimizer
(L-BFGS). This investigation provides quantitative and qualitative arguments that
justify why these choices are successful, concluding that the line-search component
of L-BFGS is key to traversing the reparameterized objective (loss) landscape
and quickly reaching good solutions in "flat" regions of the landscape. Importantly,
these topology optimization problems are not stochastic, which makes them different
from the majority of conventional deep learning applications and favors the use of
line search. Similarly, although to a lesser extent, the approximation of the Hessian
provided by L-BFGS helps the optimizer move more effectively within the flat regions
by rescaling the gradients; the quality of the approximation itself is less relevant.
Together with the deep image prior effect associated with deep learning, these
arguments explain the early success of the neural reparameterization strategy in
topology optimization, in spite of the non-convex distortion it introduces into the
objective landscape, even for landscapes that were originally convex.
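The core idea can be sketched in a few lines. The following is a minimal illustration, not the thesis code: an objective that is convex in the design becomes non-convex in the weights of a small fixed "network", and L-BFGS (with its built-in line search) is run in the weight space instead of the design space. All sizes, names, and the toy quadratic objective are illustrative assumptions.

```python
# Toy sketch of neural reparameterization (illustrative, not the thesis code).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pix, n_w = 16, 32                      # "design image" size, weight count
A = rng.standard_normal((n_pix, n_w)) / np.sqrt(n_w)  # fixed random layer
target = rng.standard_normal(n_pix)      # stand-in for the desired response

def design(w):
    # "Network": one fixed linear layer + tanh, mapping weights -> design image
    return np.tanh(A @ w)

def loss(w):
    # Convex in the design d, but non-convex in the weights w
    d = design(w)
    return 0.5 * np.sum((d - target) ** 2)

def grad(w):
    # Chain rule through the reparameterization:
    # dL/dw = A^T ((d - target) * (1 - d^2))
    d = design(w)
    return A.T @ ((d - target) * (1.0 - d**2))

# Optimize in weight space with L-BFGS, which performs a line search
# at every iteration (the component the abstract highlights).
w0 = rng.standard_normal(n_w)
res = minimize(loss, w0, jac=grad, method="L-BFGS-B")
print(f"initial loss {loss(w0):.3f} -> final loss {res.fun:.3f}")
```

SciPy's `L-BFGS-B` enforces Wolfe conditions during its line search, so each accepted step decreases the loss, which is the behavior the thesis argues is decisive on the (deterministic, non-stochastic) reparameterized landscape.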

Files

Master_Thesis_2_.pdf
(pdf | 19.8 Mb)
License info not available