Academic Seminar


Learning Fréchet Differentiable Operators via Prespecified Neural Operators

Published: 2025-05-13

Neural operators, built on neural networks, have emerged as a crucial tool in deep learning for approximating nonlinear operators. The present work develops an approximation and generalization theory for neural operators with prespecified encoders and decoders, improving and extending previous work by focusing on target operators that are Fréchet differentiable. To exploit this smoothness, we expand the target operator via Taylor's formula and apply a re-discretization technique. This enables us to derive an upper bound on the approximation error for Fréchet differentiable operators and to achieve improved approximation rates, for properly chosen classes of encoders and decoders, compared with those available for merely Lipschitz continuous operators. Furthermore, we establish an upper bound on the generalization error of the empirical risk minimizer induced by prespecified neural operators. Explicit learning rates are derived when the encoder-decoder pairs are chosen via polynomial approximation and principal component analysis. These findings quantitatively demonstrate how the reconstruction errors of infinite-dimensional spaces and the smoothness of the target operator influence learning performance.
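To make the encoder-decoder architecture concrete, the following Python sketch (not the authors' code) illustrates a neural operator with prespecified PCA encoder and decoder: discretized input and output functions are projected to finite-dimensional coefficient vectors via SVD, a small MLP is learned between the two coefficient spaces, and predictions are decoded back to function space. The synthetic antiderivative operator, PCA dimensions, and network sizes below are illustrative assumptions.

```python
# Minimal sketch: neural operator with prespecified PCA encoder/decoder.
# Illustrative only; the target operator here is the antiderivative map.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: inputs a(x) are short Fourier sums on a uniform grid,
# outputs u = G(a) are their (discretized) antiderivatives.
n_grid, n_samples = 64, 512
x = np.linspace(0.0, 1.0, n_grid)
coeffs = rng.normal(size=(n_samples, 4))
A = sum(coeffs[:, k:k + 1] * np.sin((k + 1) * np.pi * x) for k in range(4))
U = np.cumsum(A, axis=1) / n_grid

def fit_pca(data, dim):
    """Return (mean, basis) so that encode(f) = basis @ (f - mean)."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:dim]

d_in, d_out = 6, 6
a_mean, a_basis = fit_pca(A, d_in)    # prespecified encoder (input space)
u_mean, u_basis = fit_pca(U, d_out)   # prespecified decoder (output space)

Za = (A - a_mean) @ a_basis.T         # encoded inputs
Zu = (U - u_mean) @ u_basis.T         # encoded targets

# Small MLP between the coefficient spaces, trained by gradient descent.
h, lr = 32, 1e-2
W1 = rng.normal(scale=0.3, size=(d_in, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.3, size=(h, d_out)); b2 = np.zeros(d_out)
for _ in range(2000):
    H = np.tanh(Za @ W1 + b1)
    err = H @ W2 + b2 - Zu
    gW2 = H.T @ err / n_samples; gb2 = err.mean(axis=0)
    gH = err @ W2.T * (1 - H ** 2)
    gW1 = Za.T @ gH / n_samples; gb1 = gH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Decode predictions back to functions and report the relative L2 error.
U_hat = (np.tanh(Za @ W1 + b1) @ W2 + b2) @ u_basis + u_mean
print(f"relative training error: {np.linalg.norm(U_hat - U) / np.linalg.norm(U):.3f}")
```

The prespecified (data-driven but fixed) PCA maps play the role of the encoder-decoder pair in the theory; only the finite-dimensional network in the middle is trained, which is the setting in which the approximation and generalization bounds are stated.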

学术海报.pdf (seminar poster)