Error Bounds for Physics-Informed Neural Networks in Fokker-Planck PDEs

Journal Article (2025)
Author(s)

Chun Wei Kong (University of Colorado Boulder)

L. Laurenti (TU Delft - Team Luca Laurenti)

Jay McMahon (University of Colorado Boulder)

Morteza Lahijanian (University of Colorado Boulder)

Research Group
Team Luca Laurenti
Publication Year
2025
Language
English
Volume number
286
Pages (from-to)
2291-2301
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Stochastic differential equations are commonly used to describe the evolution of stochastic processes. The state uncertainty of such a process is best represented by its probability density function (PDF), whose evolution is governed by the Fokker-Planck partial differential equation (FP-PDE). However, it is generally infeasible to solve the FP-PDE in closed form. In this work, we show that physics-informed neural networks (PINNs) can be trained to approximate the solution PDF. Our main contribution is the analysis of the PINN approximation error: we develop a theoretical framework to construct tight error bounds using PINNs. In addition, we derive a practical error bound that can be efficiently constructed with standard training methods. We also discuss how this error-bound framework generalizes to approximate solutions of other linear PDEs. Empirical results on nonlinear, high-dimensional, and chaotic systems validate the correctness of our error bounds while demonstrating the scalability of PINNs and their significant computational speedup in obtaining accurate PDF solutions compared to the Monte Carlo approach.
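To make the abstract's core idea concrete, the following is a minimal sketch (not the paper's method or benchmark) of the Fokker-Planck residual that a PINN is trained to drive toward zero. It uses an Ornstein-Uhlenbeck process, whose transient PDF is known in closed form, and finite differences in place of the automatic differentiation a PINN framework would use; the system, parameters, and step size are illustrative assumptions.

```python
import math

# Illustrative SDE: dx = -theta*x dt + sigma dW, started at x0 (assumed values).
theta, sigma, x0 = 1.0, 0.5, 1.0

def p(x, t):
    """Exact transient PDF of the OU process: a Gaussian with decaying mean
    and saturating variance."""
    m = x0 * math.exp(-theta * t)
    v = sigma**2 / (2 * theta) * (1 - math.exp(-2 * theta * t))
    return math.exp(-(x - m)**2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def fp_residual(x, t, h=1e-4):
    """FP-PDE residual r = dp/dt + d/dx(f*p) - (sigma^2/2) d2p/dx2,
    with drift f(x) = -theta*x, via central finite differences."""
    dp_dt = (p(x, t + h) - p(x, t - h)) / (2 * h)
    flux = lambda y: -theta * y * p(y, t)          # f(y) * p(y, t)
    dflux_dx = (flux(x + h) - flux(x - h)) / (2 * h)
    d2p_dx2 = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h**2
    return dp_dt + dflux_dx - 0.5 * sigma**2 * d2p_dx2

# On the exact solution the residual is ~0 up to discretization error; a
# trained PINN's residual is small but nonzero, and the paper's contribution
# is bounding the resulting error in the approximate PDF.
max_res = max(abs(fp_residual(0.1 * i, 0.5)) for i in range(-20, 21))
print(max_res)
```

A PINN replaces the closed-form `p` above with a neural network and minimizes this same residual (plus initial/boundary terms) over sampled collocation points.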

Files

Kong25b.pdf
(pdf | 7.31 Mb)