Neural topology optimization
the good, the bad, and the ugly
S. Manoj Sanu (TU Delft - Team Marcel Sluiter)
Alejandro M. Aragón (TU Delft - Computational Design and Mechanics)
M.A. Bessa (Brown University)
Abstract
Neural networks (NNs) hold great promise for advancing inverse design via topology optimization (TO), yet misconceptions about their application persist. This article focuses on neural topology optimization (neural TO), which leverages NNs to reparameterize the decision space and reshape the optimization landscape. While the method is still in its infancy, our analysis tools reveal critical insights into the NNs’ impact on the optimization process. We demonstrate that the choice of NN architecture significantly influences the objective landscape and the optimizer’s path to an optimum. Notably, NNs introduce non-convexities even in otherwise convex landscapes, potentially delaying convergence in convex problems but enhancing exploration for non-convex problems. This analysis lays the groundwork for future advancements by highlighting: (1) the potential of neural TO for non-convex problems and dedicated GPU hardware (the “good”), (2) the limitations in smooth landscapes (the “bad”), and (3) the complex challenge of selecting optimal NN architectures and hyperparameters for superior performance (the “ugly”).
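To make the idea of reparameterization concrete, the following is a minimal, hypothetical sketch (not the authors' code) of neural TO: a small coordinate-based MLP maps element coordinates to densities in [0, 1], and the optimizer updates the network weights rather than the densities directly. The physics objective (normally compliance from a finite-element solve) is replaced by a toy surrogate so the example runs standalone; all function names, network sizes, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of neural reparameterization for topology optimization.
# An MLP maps coordinates to densities rho(x) in [0, 1]; gradients flow to the
# network weights theta, which are the actual decision variables.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize weights and biases for a fully connected network."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

def density_field(params, coords):
    """Neural reparameterization: coordinates -> densities in [0, 1]."""
    h = coords
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return jax.nn.sigmoid(h @ w + b).squeeze(-1)

def objective(params, coords, target_vf=0.4):
    """Toy stand-in for the TO objective (normally compliance from an FEM
    solve), plus a volume-fraction penalty."""
    rho = density_field(params, coords)
    fake_compliance = jnp.mean((1.0 - rho) * coords[:, 0])  # placeholder physics
    volume_penalty = (jnp.mean(rho) - target_vf) ** 2
    return fake_compliance + 10.0 * volume_penalty

# 2D grid of element centroids for an illustrative 20x10 design domain.
xs, ys = jnp.meshgrid(jnp.linspace(0, 1, 20), jnp.linspace(0, 1, 10))
coords = jnp.stack([xs.ravel(), ys.ravel()], axis=-1)

params = init_mlp(jax.random.PRNGKey(0), [2, 32, 32, 1])
grad_fn = jax.jit(jax.grad(objective))

lr = 0.05
for step in range(200):
    grads = grad_fn(params, coords)
    # Plain gradient descent on the NN weights (the reparameterized variables).
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

print("final mean density:", float(jnp.mean(density_field(params, coords))))
```

The key design choice this sketch illustrates is that the objective landscape seen by the optimizer is now a function of the network weights, not of the densities, which is why the choice of architecture can reshape (and add non-convexity to) that landscape, as the abstract discusses.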
Files
File under embargo until 04-04-2026