Convolutional Neural Operators for robust and accurate learning of PDEs
Bogdan Raonić (ETH Zürich)
Roberto Molinaro (ETH Zürich)
Tim De Ryck (ETH Zürich)
Tobias Rohner (ETH Zürich)
Francesca Bartolucci (TU Delft - Analysis)
Rima Alaifari (ETH Zürich)
Siddhartha Mishra (ETH Zürich)
Emmanuel de Bézenac (ETH Zürich)
Abstract
Although highly successful in conventional machine learning, convolution-based neural network architectures, widely believed to be inconsistent in function space, have been largely ignored in the context of learning solution operators of PDEs. Here, we present novel adaptations of convolutional neural networks to demonstrate that they are indeed able to process functions as inputs and outputs. The resulting architecture, termed convolutional neural operators (CNOs), is designed specifically to preserve its underlying continuous nature, even when implemented in a discretized form on a computer. We prove a universality theorem showing that CNOs can approximate operators arising in PDEs to desired accuracy. CNOs are tested on a novel suite of benchmarks, encompassing a diverse set of PDEs with possibly multi-scale solutions, and are observed to significantly outperform baselines, paving the way for an alternative framework for robust and accurate operator learning.
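A key ingredient behind preserving the continuous (function-space) nature of a convolutional architecture under discretization is treating pointwise nonlinearities with care: applied naively on a grid, they generate frequencies above the grid's Nyquist limit and cause aliasing. The sketch below, a minimal 1-D illustration and not the authors' implementation (all function names are hypothetical), shows the standard remedy of evaluating the nonlinearity on a finer grid via spectral interpolation and projecting back with a low-pass filter:

```python
import numpy as np

def upsample(x, factor=2):
    """Sinc (Fourier) interpolation: zero-pad the spectrum, then inverse FFT."""
    n = x.shape[-1]
    spectrum = np.fft.rfft(x)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[: spectrum.shape[-1]] = spectrum
    # Scale by `factor` so sample values (not just coefficients) are preserved.
    return np.fft.irfft(padded, n=n * factor) * factor

def downsample(x, factor=2):
    """Low-pass filter by truncating the spectrum, then resample on the coarse grid."""
    n = x.shape[-1]
    spectrum = np.fft.rfft(x)
    m = n // factor
    truncated = spectrum[: m // 2 + 1].copy()
    return np.fft.irfft(truncated, n=m) / factor

def antialiased_relu(x, factor=2):
    """Apply ReLU on a finer grid, then project back to the original resolution."""
    return downsample(np.maximum(upsample(x, factor), 0.0), factor)

# The round trip upsample -> downsample is exact for bandlimited inputs,
# so the only change comes from the (now better-resolved) nonlinearity.
x = np.cos(2 * np.pi * np.arange(16) / 16)
y = antialiased_relu(x)
```

Because the up/downsampling pair acts as the identity on functions bandlimited to the coarse grid, the discrete computation remains consistent with an underlying continuous operator, which is the property the abstract refers to as preserving the architecture's continuous nature.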