Representation Equivalent Neural Operators

A Framework for Alias-free Operator Learning

Journal Article (2023)
Author(s)

Francesca Bartolucci (TU Delft - Analysis)

Emmanuel de Bézenac (ETH Zürich)

Bogdan Raonić (ETH Zürich)

Roberto Molinaro (ETH Zürich)

Siddhartha Mishra (ETH Zürich)

Rima Alaifari (ETH Zürich)

Research Group
Analysis
Publication Year
2023
Language
English
Volume number
36
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Recently, operator learning, or learning mappings between infinite-dimensional function spaces, has garnered significant attention, notably in relation to learning partial differential equations from data. Although conceptually clear when outlined on paper, neural operators necessitate discretization in the transition to computer implementations. This step can compromise their integrity, often causing them to deviate from the underlying operators. This research offers a fresh take on neural operators with a framework, Representation equivalent Neural Operators (ReNO), designed to address these issues. At its core is the concept of operator aliasing, which measures the inconsistency between a neural operator and its discrete representation. We explore this for widely used operator learning techniques. Our findings detail how aliasing introduces errors when handling different discretizations and grids, and how it leads to the loss of crucial continuous structures. More generally, this framework not only sheds light on existing challenges but, given its constructive and broad nature, also potentially offers tools for developing new neural operators.
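The discretization inconsistency that the abstract calls operator aliasing can be illustrated with a minimal NumPy sketch (this is an illustration of the general phenomenon, not the paper's ReNO construction). A continuous operator, here the derivative d/dx on periodic functions, is represented discretely by a spectral derivative on a grid. For an input that is band-limited with respect to both grids, the coarse and fine discrete representations agree; for an input whose frequency content exceeds the coarse grid's Nyquist limit, the coarse representation acts on an aliased signal and the two discretizations effectively compute different operators.

```python
import numpy as np

def spectral_derivative(u):
    """Discrete representation of d/dx for a periodic signal on [0, 1),
    computed via FFT differentiation."""
    n = len(u)
    k = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # wavenumbers
    return np.fft.ifft(k * np.fft.fft(u)).real

# Fine and coarse grids on [0, 1); coarse points are a subset of fine points.
x_fine = np.linspace(0, 1, 64, endpoint=False)
x_coarse = np.linspace(0, 1, 8, endpoint=False)

# Band-limited input (frequency 2 < coarse Nyquist 4): the discrete operator
# is consistent across discretizations -- no aliasing.
u = lambda x: np.sin(2 * np.pi * 2 * x)
d_fine = spectral_derivative(u(x_fine))[::8]   # restrict to coarse points
d_coarse = spectral_derivative(u(x_coarse))
print(np.max(np.abs(d_fine - d_coarse)))       # ~1e-13: grids agree

# High-frequency input (frequency 6 > coarse Nyquist 4): aliased on the
# coarse grid, so the two discretizations disagree on this input.
v = lambda x: np.sin(2 * np.pi * 6 * x)
d_fine_v = spectral_derivative(v(x_fine))[::8]
d_coarse_v = spectral_derivative(v(x_coarse))
print(np.max(np.abs(d_fine_v - d_coarse_v)))   # large: grids disagree
```

The same mechanism applies to learned operators: when a trained model is evaluated on a grid too coarse for the functions it acts on, its discrete computation no longer represents the intended continuous operator.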
