Neural operators

Neural operators are a class of deep learning architectures designed to learn maps between infinite-dimensional function spaces.[1] They extend traditional artificial neural networks, which typically learn mappings between finite-dimensional Euclidean spaces or finite sets. Neural operators instead learn operators between function spaces directly: they accept input functions, and their output functions can be evaluated at any discretization.[2]

The primary application of neural operators is in learning surrogate maps for the solution operators of partial differential equations (PDEs),[2] which are critical tools in modeling the natural environment.[3][4] Standard PDE solvers can be time-consuming and computationally intensive, especially for complex systems. Neural operators have demonstrated improved performance in solving PDEs[5] compared to existing machine learning methodologies while being significantly faster than numerical solvers.[6][7][8][9] Neural operators have also been applied to various scientific and engineering disciplines such as turbulent flow modeling, computational mechanics, graph-structured data,[10] and the geosciences.[11] In particular, they have been applied to learning stress-strain fields in materials, classifying complex data like spatial transcriptomics, predicting multiphase flow in porous media,[12] and climate modeling through long-term weather forecasting[13] and carbon dioxide migration simulations. The operator learning paradigm, which learns maps between function spaces, differs from parallel approaches that learn maps from finite-dimensional spaces to function spaces,[14][15] and it subsumes those settings when restricted to a fixed input resolution.

Operator learning

Understanding and mapping relationships between function spaces has many applications in engineering and the sciences. In particular, the problem of solving a partial differential equation can be cast as identifying a map between function spaces, such as from an initial condition to a time-evolved state. For other PDEs, the map takes a coefficient function as input and outputs a solution function. Operator learning is a machine learning paradigm for learning such solution operators, which map an input function to an output function.

Using traditional machine learning methods, addressing this problem would involve discretizing the infinite-dimensional input and output function spaces into finite-dimensional grids and applying standard learning models, such as neural networks. This approach reduces operator learning to finite-dimensional function learning, and it has limitations, such as the inability to generalize to discretizations other than the grid used in training.

The primary properties of neural operators that differentiate them from traditional neural networks are discretization invariance and discretization convergence.[2] Unlike conventional neural networks, which are tied to the discretization of their training data, neural operators can adapt to various discretizations without re-training. This property improves the robustness and applicability of neural operators in different scenarios, providing consistent performance across different resolutions and grids.

Definition and formulation

Architecturally, neural operators are similar to feed-forward neural networks in the sense that they are composed of alternating linear maps and non-linearities. Since neural operators act on and output functions, they are instead formulated as a sequence of alternating linear integral operators on function spaces and pointwise non-linearities.[1][2] Using an architecture analogous to finite-dimensional neural networks, similar universal approximation theorems have been proven for neural operators. In particular, it has been shown that neural operators can approximate any continuous operator on a compact set.[2]

Neural operators seek to approximate some operator [math]\displaystyle{ \mathcal{G} : \mathcal{A} \to \mathcal{U} }[/math] between function spaces [math]\displaystyle{ \mathcal{A} }[/math] and [math]\displaystyle{ \mathcal{U} }[/math] by building a parametric map [math]\displaystyle{ \mathcal{G}_\phi : \mathcal{A} \to \mathcal{U} }[/math]. Such parametric maps [math]\displaystyle{ \mathcal{G}_\phi }[/math] can generally be defined in the form

[math]\displaystyle{ \mathcal{G}_\phi := \mathcal{Q} \circ \sigma(W_T + \mathcal{K}_T + b_T) \circ \cdots \circ \sigma(W_1 + \mathcal{K}_1 + b_1) \circ \mathcal{P}, }[/math]

where [math]\displaystyle{ \mathcal{P}, \mathcal{Q} }[/math] are the lifting (lifting the codomain of the input function to a higher-dimensional space) and projection (projecting the codomain of the intermediate function to the output codomain) operators, respectively. These operators act pointwise on functions and are typically parametrized as multilayer perceptrons. [math]\displaystyle{ \sigma }[/math] is a pointwise nonlinearity, such as a rectified linear unit (ReLU) or a Gaussian error linear unit (GeLU). Each layer [math]\displaystyle{ t=1, \dots, T }[/math] has a respective local operator [math]\displaystyle{ W_t }[/math] (usually parameterized by a pointwise neural network), a kernel integral operator [math]\displaystyle{ \mathcal{K}_t }[/math], and a bias function [math]\displaystyle{ b_t }[/math]. Given some intermediate functional representation [math]\displaystyle{ v_t }[/math] with domain [math]\displaystyle{ D }[/math] in the [math]\displaystyle{ t }[/math]-th hidden layer, a kernel integral operator [math]\displaystyle{ \mathcal{K}_\phi }[/math] (written with subscript [math]\displaystyle{ \phi }[/math] to emphasize its learnable kernel parameters) is defined as

[math]\displaystyle{ (\mathcal{K}_\phi v_t)(x) := \int_D \kappa_\phi(x, y, v_t(x), v_t(y))v_t(y)dy, }[/math]

where the kernel [math]\displaystyle{ \kappa_\phi }[/math] is a learnable implicit neural network, parametrized by [math]\displaystyle{ \phi }[/math].
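The following is a minimal sketch of this generic architecture in Python (using PyTorch), representing each function by its values at sample points. The class and argument names (such as NeuralOperatorLayer and kernel_integral) are illustrative assumptions rather than the API of any particular library; the kernel integral operator is left abstract here, and concrete instantiations are discussed below.

```python
import torch
import torch.nn as nn


class NeuralOperatorLayer(nn.Module):
    """One layer v -> sigma(W_t v + K_t v + b_t), acting on point samples of a function."""

    def __init__(self, width, kernel_integral):
        super().__init__()
        self.w = nn.Linear(width, width)              # local (pointwise) operator W_t
        self.kernel_integral = kernel_integral        # kernel integral operator K_t
        self.bias = nn.Parameter(torch.zeros(width))  # bias b_t (taken constant in x here)

    def forward(self, v, grid):
        # v: (batch, n_points, width) values of the intermediate function v_t
        # grid: (batch, n_points, d) coordinates of the sample points
        return nn.functional.gelu(self.w(v) + self.kernel_integral(v, grid) + self.bias)


class NeuralOperator(nn.Module):
    """G_phi = Q o layer_T o ... o layer_1 o P, with pointwise lifting P and projection Q."""

    def __init__(self, in_channels, out_channels, width, kernel_integrals):
        super().__init__()
        self.lift = nn.Linear(in_channels, width)      # P: lift the input codomain
        self.layers = nn.ModuleList(
            NeuralOperatorLayer(width, k) for k in kernel_integrals
        )
        self.project = nn.Linear(width, out_channels)  # Q: project to the output codomain

    def forward(self, a, grid):
        v = self.lift(a)          # pointwise lifting
        for layer in self.layers:
            v = layer(v, grid)    # sigma(W_t v + K_t v + b_t)
        return self.project(v)    # pointwise projection
```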

In practice, one is often given the input function to the neural operator at a specific resolution. For instance, consider the setting where one is given the evaluation of [math]\displaystyle{ v_t }[/math] at [math]\displaystyle{ n }[/math] points [math]\displaystyle{ \{y_j\}_{j=1}^n }[/math]. Borrowing from Nyström integral approximation methods such as Riemann sum integration and Gaussian quadrature, the above integral operation can be computed as follows:

[math]\displaystyle{ \int_D \kappa_\phi(x, y, v_t(x), v_t(y))v_t(y)dy\approx \sum_{j=1}^n \kappa_\phi(x, y_j, v_t(x), v_t(y_j))v_t(y_j)\Delta_{y_j}, }[/math]

where [math]\displaystyle{ \Delta_{y_j} }[/math] is the sub-area volume or quadrature weight associated with the point [math]\displaystyle{ y_j }[/math]. Thus, a simplified layer can be computed as

[math]\displaystyle{ v_{t+1}(x) \approx \sigma\left(\sum_{j=1}^n \kappa_\phi(x, y_j, v_t(x), v_t(y_j))v_t(y_j)\Delta_{y_j} + W_t(v_t(x)) + b_t(x)\right). }[/math]

The above approximation, along with parametrizing [math]\displaystyle{ \kappa_\phi }[/math] as an implicit neural network, results in the graph neural operator (GNO).[16]
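As an illustration, the sketch below implements this quadrature-based kernel integration (with uniform weights [math]\displaystyle{ \Delta_{y_j} = 1/n }[/math]) in a form that can serve as the abstract kernel integral operator of the earlier sketch. The class name NystromKernelIntegral and the kernel network shape are assumptions for illustration, not the reference GNO implementation; note that evaluating the kernel on all point pairs costs [math]\displaystyle{ O(n^2) }[/math] time and memory.

```python
import torch
import torch.nn as nn


class NystromKernelIntegral(nn.Module):
    """(K_phi v)(x_i) ~ sum_j kappa_phi(x_i, y_j, v(x_i), v(y_j)) v(y_j) Delta_j."""

    def __init__(self, width, coord_dim, hidden=64):
        super().__init__()
        # kappa_phi maps (x, y, v(x), v(y)) to a width-by-width matrix
        self.kappa = nn.Sequential(
            nn.Linear(2 * coord_dim + 2 * width, hidden),
            nn.GELU(),
            nn.Linear(hidden, width * width),
        )
        self.width = width

    def forward(self, v, grid):
        # v: (batch, n, width) values v_t(y_j); grid: (batch, n, coord_dim) points y_j
        b, n, w = v.shape
        x = grid.unsqueeze(2).expand(b, n, n, grid.shape[-1])   # x_i, repeated over j
        y = grid.unsqueeze(1).expand(b, n, n, grid.shape[-1])   # y_j, repeated over i
        vx = v.unsqueeze(2).expand(b, n, n, w)                  # v(x_i)
        vy = v.unsqueeze(1).expand(b, n, n, w)                  # v(y_j)
        k = self.kappa(torch.cat([x, y, vx, vy], dim=-1)).view(b, n, n, w, w)
        # uniform quadrature weights Delta_j = 1/n
        return torch.einsum("bijpq,bjq->bip", k, v) / n
```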

There have been various parameterizations of neural operators for different applications.[6][7][16] These typically differ in their parameterization of [math]\displaystyle{ \kappa }[/math]. The most popular instantiation is the Fourier neural operator (FNO). FNO takes [math]\displaystyle{ \kappa_\phi(x, y, v_t(x), v_t(y)) := \kappa_\phi(x-y) }[/math] and, by applying the convolution theorem, arrives at the following parameterization of the kernel integral operator:

[math]\displaystyle{ (\mathcal{K}_\phi v_t)(x) = \mathcal{F}^{-1} (R_\phi \cdot (\mathcal{F}v_t))(x), }[/math]

where [math]\displaystyle{ \mathcal{F} }[/math] represents the Fourier transform and [math]\displaystyle{ R_\phi }[/math] represents the Fourier transform of some periodic function [math]\displaystyle{ \kappa_\phi }[/math]. That is, FNO parameterizes the kernel integration directly in Fourier space, using a prescribed number of Fourier modes. When the grid at which the input function is presented is uniform, the Fourier transform can be approximated using the discrete Fourier transform (DFT) with frequencies below some specified threshold. The discrete Fourier transform can be computed using a fast Fourier transform (FFT) implementation.
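A minimal one-dimensional version of such a Fourier layer, assuming the input function is sampled on a uniform grid, might look as follows (again in PyTorch; names and shapes are illustrative rather than the reference FNO implementation):

```python
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Kernel integration in Fourier space: irfft(R_phi * rfft(v)), keeping `modes` frequencies."""

    def __init__(self, width, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (width * width)
        # R_phi: one learned complex width-by-width matrix per retained Fourier mode
        self.weights = nn.Parameter(
            scale * torch.randn(width, width, modes, dtype=torch.cfloat)
        )

    def forward(self, v, grid=None):           # grid unused: a uniform grid is assumed
        # v: (batch, n_points, width), sampled on a uniform 1-D grid
        v = v.permute(0, 2, 1)                 # (batch, width, n)
        v_hat = torch.fft.rfft(v)              # DFT along the spatial dimension
        out_hat = torch.zeros_like(v_hat)
        m = min(self.modes, v_hat.shape[-1])
        # multiply the retained low-frequency modes by R_phi, zero out the rest
        out_hat[:, :, :m] = torch.einsum("bim,iom->bom", v_hat[:, :, :m], self.weights[:, :, :m])
        out = torch.fft.irfft(out_hat, n=v.shape[-1])   # back to physical space
        return out.permute(0, 2, 1)            # (batch, n, width)
```

Because the learned weights are attached to Fourier modes rather than to grid points, the number of parameters is independent of the input resolution, which is what allows the same trained layer to be applied across discretizations.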

Training

Training neural operators is similar to the training process for a traditional neural network. Neural operators are typically trained using a loss based on an [math]\displaystyle{ L^p }[/math] norm or a Sobolev norm. In particular, for a dataset [math]\displaystyle{ \{(a_i, u_i)\}_{i=1}^N }[/math] of size [math]\displaystyle{ N }[/math], neural operators minimize (a discretization of)

[math]\displaystyle{ \mathcal{L}_\mathcal{U}(\{(a_i, u_i)\}_{i=1}^N) := \sum_{i=1}^N \|u_i - \mathcal{G}_\theta (a_i) \|_\mathcal{U}^2 }[/math],

where [math]\displaystyle{ \|\cdot \|_\mathcal{U} }[/math] is a norm on the output function space [math]\displaystyle{ \mathcal{U} }[/math]. Neural operators can be trained directly using backpropagation and gradient descent-based methods.[1]
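A minimal training loop might look as follows; here the norm is discretized as a relative [math]\displaystyle{ L^2 }[/math] error per sample, a common practical choice, and the names model, loader, and the batch layout are assumptions for illustration:

```python
import torch


def relative_l2(pred, target):
    # discretization of ||u_i - G_theta(a_i)|| / ||u_i||, averaged over the batch
    num = torch.linalg.vector_norm(pred - target, dim=(-2, -1))
    den = torch.linalg.vector_norm(target, dim=(-2, -1))
    return (num / den).mean()


def train(model, loader, epochs=100, lr=1e-3):
    # standard gradient-based training of the operator parameters theta
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for a, u, grid in loader:            # batches of (a_i, u_i) sampled on a grid
            opt.zero_grad()
            loss = relative_l2(model(a, grid), u)
            loss.backward()                  # backpropagation through the operator layers
            opt.step()
    return model
```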

Another training paradigm is associated with physics-informed machine learning. In particular, physics-informed neural networks (PINNs) use complete physics laws to fit neural networks to solutions of PDEs. Extensions of this paradigm to operator learning are broadly called physics-informed neural operators (PINO),[17] where loss functions can include full physics equations or partial physical laws. In contrast to standard PINNs, the PINO paradigm incorporates a data loss (as defined above) in addition to the physics loss [math]\displaystyle{ \mathcal{L}_{PDE}(a, \mathcal{G}_\theta (a)) }[/math]. The physics loss quantifies how much the predicted solution [math]\displaystyle{ \mathcal{G}_\theta (a) }[/math] violates the PDE for the input [math]\displaystyle{ a }[/math].
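A sketch of such a combined objective is shown below; the PDE residual function and the weighting factor lam are placeholders that depend on the equation at hand and are assumptions for illustration:

```python
import torch


def pino_loss(model, a, u, grid, pde_residual, lam=1.0):
    # pde_residual(a, pred, grid) should return the pointwise PDE residual of the
    # predicted solution; it is a placeholder that depends on the equation at hand.
    pred = model(a, grid)
    data_loss = torch.mean((pred - u) ** 2)                      # data term L_U
    physics_loss = torch.mean(pde_residual(a, pred, grid) ** 2)  # physics term L_PDE
    return data_loss + lam * physics_loss
```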

References

  1. Patel, Ravi G.; Desjardins, Olivier (2018). "Nonlinear integro-differential operator regression with neural networks". arXiv:1810.08552 [cs.LG].
  2. Kovachki, Nikola; Li, Zongyi; Liu, Burigede; Azizzadenesheli, Kamyar; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2021). "Neural operator: Learning maps between function spaces". Journal of Machine Learning Research 24: 1–97. https://www.jmlr.org/papers/volume24/21-1524/21-1524.pdf.
  3. Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 0-8218-0772-2.
  4. "How AI models are transforming weather forecasting: A showcase of data-driven systems" (6 September 2023). Phys.org. https://phys.org/news/2023-09-ai-weather-showcase-data-driven.html
  5. Kadri Umay, Y. O. (20 September 2023). "Microsoft and Accenture partner to tackle methane emissions with AI technology". Microsoft Azure Blog. https://azure.microsoft.com/en-us/blog/microsoft-and-accenture-partner-to-tackle-methane-emissions-with-ai-technology/
  6. Patel, Ravi G.; Trask, Nathaniel A.; Wood, Mitchell A.; Cyr, Eric C. (January 2021). "A physics-informed operator regression framework for extracting data-driven continuum models". Computer Methods in Applied Mechanics and Engineering 373: 113500. doi:10.1016/j.cma.2020.113500. Bibcode: 2021CMAME.373k3500P.
  7. Li, Zongyi; Kovachki, Nikola; Azizzadenesheli, Kamyar; Liu, Burigede; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2020). "Fourier neural operator for parametric partial differential equations". arXiv:2010.08895 [cs.LG].
  8. Hao, K. (30 October 2020). "AI has cracked a key mathematical puzzle for understanding our world". MIT Technology Review. https://www.technologyreview.com/2020/10/30/1011435/ai-fourier-neural-network-cracks-navier-stokes-and-partial-differential-equations/
  9. Ananthaswamy, A. (2021). "Latest neural nets solve world's hardest equations faster than ever before". Quanta Magazine. https://www.quantamagazine.org/latest-neural-nets-solve-worlds-hardest-equations-faster-than-ever-before-20210419/
  10. Sharma, A.; Singh, S.; Ratna, S. (2023). "Graph Neural Network Operators: a Review". Multimedia Tools and Applications. https://doi.org/10.1007/s11042-023-16440-4
  11. Wen, Gege; Li, Zongyi; Azizzadenesheli, Kamyar; Anandkumar, Anima; Benson, Sally M. (2022). "U-FNO—An enhanced Fourier neural operator-based deep-learning model for multiphase flow". Advances in Water Resources 163: 104180. ISSN 0309-1708. https://doi.org/10.1016/j.advwatres.2022.104180.
  12. Choubineh, A.; Chen, J.; Wood, D. A.; Coenen, F.; Ma, F. (2023). "Fourier Neural Operator for Fluid Flow in Small-Shape 2D Simulated Porous Media Dataset". Algorithms 16(1): 24. https://doi.org/10.3390/a16010024
  13. Yang, Q.; Hernandez-Garcia, A.; Harder, P.; Ramesh, V.; Sattegeri, P.; Szwarcman, D.; et al. (2023). "Fourier Neural Operators for Arbitrary Resolution Climate Data Downscaling". arXiv:2305.14452.
  14. "MeshfreeFlowNet: A physics-constrained deep continuous space-time super-resolution framework" (2020). IEEE. pp. 1–15.
  15. Lu, Lu; Jin, Pengzhan; Pang, Guofei; Zhang, Zhongqiang; Karniadakis, George Em (2021). "Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators". Nature Machine Intelligence 3: 218–229.
  16. Li, Zongyi; Kovachki, Nikola; Azizzadenesheli, Kamyar; Liu, Burigede; Bhattacharya, Kaushik; Stuart, Andrew; Anandkumar, Anima (2020). "Neural operator: Graph kernel network for partial differential equations". arXiv:2003.03485 [cs.LG].
  17. Li, Zongyi; Zheng, Hongkai; Kovachki, Nikola; Jin, David; Chen, Haoxuan; Liu, Burigede; Azizzadenesheli, Kamyar; Anandkumar, Anima (2021). "Physics-Informed Neural Operator for Learning Partial Differential Equations". arXiv:2111.03794 [cs.LG].

External links

  • neuralop – Python library of various neural operator architectures