Inverse design with topology optimization has followed the same computational
graph for decades. The unknown material density is distributed within a domain,
a computational analysis predicts the response of that design and its derivative
with respect to the unknown, and this information is used by a chosen gradient-based
optimization algorithm to find the next design iteration until an optimum
(local or global) is reached. Recently, however, a counterintuitive strategy was
proposed that augments the computational graph by including a neural network
between the response prediction (computational analysis) and the generation of a
new design (image). This shifts the optimization problem from its original space
to the weight space of the neural network. Yet, this indirect optimization process was shown elsewhere to outperform conventional topology optimization for a
large number of structural compliance problems – at least when choosing a particular convolutional neural network (part of a U-Net) and a particular optimizer
(L-BFGS). This investigation provides quantitative and qualitative arguments that
justify why these choices are successful, concluding that the line-search component of L-BFGS is key to traversing the reparameterized objective (loss) landscape
and quickly reaching good solutions in “flat” regions of the landscape. Importantly,
these topology optimization problems are not stochastic, which makes them different
from the majority of conventional deep learning applications and favors the use of
line search. Similarly, although to a lesser extent, the Hessian approximation
provided by L-BFGS helps the optimizer move more effectively within the flat regions by
rescaling the gradients; the quality of the approximation itself is less relevant. Together with
the deep image prior effect associated with deep learning, these arguments explain
the early success of the neural reparameterization strategy in topology optimization,
in spite of the non-convex distortion that it introduces into the objective landscape,
even for landscapes that were originally convex.
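As a minimal, hypothetical sketch (not the paper's actual pipeline, and with a much smaller network than the U-Net discussed above), the reparameterization idea can be illustrated by wrapping a convex objective in a tiny fully connected network with a fixed latent input, so the optimization variables become the weights, and then minimizing in weight space with SciPy's L-BFGS-B, which performs a line search at every step:

```python
import numpy as np
from scipy.optimize import minimize

# Convex objective in the original design space: f(x) = ||x - target||^2.
# In topology optimization, f would be the computational analysis (e.g. compliance).
rng = np.random.default_rng(0)
target = rng.normal(size=8)

def f(x):
    return np.sum((x - target) ** 2)

# Tiny "network": x = W2 @ tanh(W1 @ z + b1) + b2 with a fixed latent input z.
# The unknowns are now the weights theta, not the design x itself.
z = rng.normal(size=4)
shapes = [(8, 4), (8,), (8, 8), (8,)]       # W1, b1, W2, b2
sizes = [int(np.prod(s)) for s in shapes]

def unpack(theta):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def design(theta):
    W1, b1, W2, b2 = unpack(theta)
    return W2 @ np.tanh(W1 @ z + b1) + b2

def loss(theta):
    # The reparameterized (generally non-convex) objective in weight space.
    return f(design(theta))

theta0 = 0.1 * rng.normal(size=sum(sizes))
# L-BFGS-B combines a limited-memory Hessian approximation with a line search;
# gradients are obtained here by SciPy's finite differences for brevity.
res = minimize(loss, theta0, method="L-BFGS-B")
print(res.fun)  # near 0: the reparameterized problem remains solvable
```

Even though the weight-space landscape is no longer convex, the line-search-equipped optimizer still recovers a near-optimal design in this toy setting, which is the behavior the abstract attributes to the L-BFGS choice.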