Gaussian process approximations

In statistics and machine learning, Gaussian process approximation is a computational method that accelerates inference tasks in the context of a Gaussian process model, most commonly likelihood evaluation and prediction. Like approximations of other models, such a method can often be expressed as additional assumptions imposed on the model, which do not correspond to any actual feature but which retain its key properties while simplifying calculations. Many of these approximation methods can be expressed in purely linear algebraic or functional analytic terms as matrix or function approximations. Others are purely algorithmic and cannot easily be rephrased as a modification of a statistical model.

Basic ideas

In statistical modeling, it is often convenient to assume that [math]\displaystyle{ y \in \mathcal{Y} }[/math], the phenomenon under investigation, is a Gaussian process indexed by [math]\displaystyle{ X \in \mathcal{X} = \mathcal{X}_1 \times \mathcal{X}_2 \times \dots \times \mathcal{X}_d }[/math] with mean function [math]\displaystyle{ \mu: \mathcal{X} \rightarrow \mathcal{Y} }[/math] and covariance function [math]\displaystyle{ K: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R} }[/math]. One can also assume that the data [math]\displaystyle{ \mathbf{y} = (y_1, \dots, y_n) }[/math] are values of a particular realization of this process for indices [math]\displaystyle{ \mathbf{X} = (X_1, \dots, X_n) }[/math].

Consequently, the joint distribution of the data can be expressed as

[math]\displaystyle{ \mathbf{y} \sim \mathcal{N}(\mathbf{\mu}, \mathbf{\Sigma}) }[/math],

where [math]\displaystyle{ \mathbf{\Sigma} = \left[ K(X_i, X_j) \right]_{i,j=1}^n }[/math] and [math]\displaystyle{ \mathbf{\mu} = \left(\mu(X_1), \mu(X_2), \dots, \mu(X_n)\right)^{\top} }[/math], i.e. respectively a matrix with the covariance function values and a vector with the mean function values at the corresponding (pairs of) indices. The negative log-likelihood of the data then takes the form

[math]\displaystyle{ -\log\ell(\mathbf{y}) = \frac{n}{2}\log(2\pi) + \frac{1}{2}\log\det(\mathbf{\Sigma}) + \frac{1}{2}\left(\mathbf{y}-\mathbf{\mu}\right)^\top\mathbf{\Sigma}^{-1}\left(\mathbf{y}-\mathbf{\mu}\right) }[/math]

Similarly, the best predictor of [math]\displaystyle{ \mathbf{y}^* }[/math], the values of [math]\displaystyle{ y }[/math] for a set of [math]\displaystyle{ m }[/math] new indices [math]\displaystyle{ \mathbf{X}^* = \left(X_1^*, X_2^*, \dots, X_m^*\right) }[/math], given the data [math]\displaystyle{ \mathbf{y} }[/math], has the form

[math]\displaystyle{ \mathbf{\mu}^*_\mathbf{y} = \mathbb{E}\left[\mathbf{y}^*|\mathbf{y}\right] = \mathbf{\mu}^* + \mathbf{\Sigma}_{\mathbf{y}^*\mathbf{y}} \mathbf{\Sigma}^{-1}\left(\mathbf{y} - \mathbf{\mu}\right) }[/math],

where [math]\displaystyle{ \mathbf{\Sigma}_{\mathbf{y}^*\mathbf{y}} = \left[ K(X_i^*, X_j) \right]_{i,j} }[/math] is the cross-covariance matrix between the new and the observed indices and [math]\displaystyle{ \mathbf{\mu}^* }[/math] is the vector of mean function values at the new indices.

In the context of Gaussian models, especially in geostatistics, prediction using the best predictor, i.e. mean conditional on the data, is also known as kriging.

The most computationally expensive component of the best predictor formula is inverting the covariance matrix [math]\displaystyle{ \mathbf{\Sigma} }[/math], which has cubic complexity [math]\displaystyle{ \mathcal{O}(n^3) }[/math]. Similarly, evaluating the likelihood involves calculating both [math]\displaystyle{ \mathbf{\Sigma}^{-1} }[/math] and the determinant [math]\displaystyle{ \det(\mathbf{\Sigma}) }[/math], which again has cubic complexity.
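
Both quantities can be computed exactly with a single Cholesky factorization of [math]\displaystyle{ \mathbf{\Sigma} }[/math], which makes the cubic cost explicit. The following is a minimal sketch in NumPy/SciPy; the squared-exponential covariance, the constant mean, the nugget term and all variable names are illustrative assumptions rather than part of any particular method.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def sq_exp_cov(A, B, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance matrix K(A, B) between two index sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_nll_and_predict(X, y, X_star, mu=0.0, nugget=1e-6):
    """Exact negative log-likelihood and kriging mean; both cost O(n^3)."""
    n = len(y)
    Sigma = sq_exp_cov(X, X) + nugget * np.eye(n)
    c, low = cho_factor(Sigma)                 # O(n^3) Cholesky factorization
    r = y - mu
    alpha = cho_solve((c, low), r)             # Sigma^{-1} (y - mu)
    nll = (0.5 * n * np.log(2 * np.pi)
           + np.sum(np.log(np.diag(c)))        # = 0.5 * log det(Sigma)
           + 0.5 * r @ alpha)
    K_star = sq_exp_cov(X_star, X)             # cross-covariance Sigma_{y* y}
    return nll, mu + K_star @ alpha            # best (kriging) predictor

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
X_star = rng.uniform(0, 10, size=(5, 2))
nll, y_star = gp_nll_and_predict(X, y, X_star)
```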

Gaussian process approximations can often be expressed in terms of assumptions on [math]\displaystyle{ y }[/math] under which [math]\displaystyle{ \log\ell(\mathbf{y}) }[/math] and [math]\displaystyle{ \mathbf{\mu}^*_\mathbf{y} }[/math] can be calculated with much lower complexity. Since these assumptions are generally not believed to reflect reality, the likelihood and the best predictor obtained in this way are not exact, but they are meant to be close to their original values.

Model-based methods

This class of approximations is expressed through a set of assumptions which are imposed on the original process and which, typically, imply some special structure of the covariance matrix. Although most of these methods were developed independently, they can largely be expressed as special cases of the sparse general Vecchia approximation.

Sparse covariance methods

These methods approximate the true model in such a way that the covariance matrix is sparse. Typically, each method proposes its own algorithm that takes full advantage of the sparsity pattern in the covariance matrix. Two prominent members of this class of approaches are covariance tapering and domain partitioning. The first method generally requires a metric [math]\displaystyle{ d }[/math] over [math]\displaystyle{ \mathcal{X} }[/math] and assumes that for [math]\displaystyle{ X, \tilde{X} \in \mathcal{X} }[/math] we have [math]\displaystyle{ Cov(y(X), y(\tilde{X}))\neq 0 }[/math] only if [math]\displaystyle{ d(X, \tilde{X})\lt r }[/math] for some radius [math]\displaystyle{ r }[/math]. The second method assumes that there exist [math]\displaystyle{ \mathcal{X}^{(1)}, \dots, \mathcal{X}^{(K)} }[/math] such that [math]\displaystyle{ \mathcal{X} = \bigcup_{k=1}^K\mathcal{X}^{(k)} }[/math]. Then, with an appropriate distribution of indices among the partition elements and an appropriate ordering of the elements of [math]\displaystyle{ \mathbf{X} }[/math], the covariance matrix is block diagonal.
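
Below is a minimal sketch of covariance tapering: covariances beyond a chosen radius are set to zero by multiplying the base covariance with a compactly supported (Wendland-type) taper, and the result is stored as a sparse matrix. The exponential base covariance, the taper choice and the radius are illustrative assumptions; a practical implementation would compute only the pairs within the radius instead of forming the dense matrix first.

```python
import numpy as np
from scipy import sparse
from scipy.spatial.distance import cdist

def tapered_covariance(X, base_cov, radius):
    """Zero out covariances beyond `radius` and store the result sparsely."""
    D = cdist(X, X)
    t = D / radius
    taper = np.clip(1.0 - t, 0.0, None) ** 4 * (1.0 + 4.0 * t)  # Wendland-type taper
    Sigma = base_cov(D) * taper       # elementwise product preserves positive definiteness
    return sparse.csr_matrix(Sigma)   # sparse solvers can now exploit the zeros

# Example: exponential covariance tapered at radius 1.5.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(500, 2))
Sigma_sparse = tapered_covariance(X, lambda D: np.exp(-D), radius=1.5)
print(Sigma_sparse.nnz / 500**2)      # fraction of non-zero entries that remain
```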

Sparse precision methods

This family of methods assumes that the precision matrix [math]\displaystyle{ \mathbf{\Lambda} = \mathbf{\Sigma}^{-1} }[/math] is sparse and generally specifies which of its elements are non-zero. This leads to fast inversion because only those elements need to be calculated. Some of the prominent approximations in this category include the approach based on the equivalence between Gaussian processes with Matérn covariance function and stochastic PDEs, periodic embedding, and Nearest Neighbour Gaussian processes. The first method applies in the case of [math]\displaystyle{ d=2 }[/math] when [math]\displaystyle{ \mathcal{X} }[/math] has a defined metric; it takes advantage of the fact that the Markov property holds, which makes [math]\displaystyle{ \mathbf{\Lambda} }[/math] very sparse. The second extends the domain and uses the Discrete Fourier Transform to decorrelate the data, which results in a diagonal precision matrix. The third one requires a metric on [math]\displaystyle{ \mathcal{X} }[/math] and takes advantage of the so-called screening effect, assuming that [math]\displaystyle{ \mathbf{\Lambda}_{i,j} \neq 0 }[/math] only if [math]\displaystyle{ d(x_i, x_j) \lt r }[/math] for some [math]\displaystyle{ r\gt 0 }[/math].
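
As a simplified one-dimensional illustration of the Markov-property idea behind these methods, the sketch below assumes a stationary AR(1) (discrete Ornstein–Uhlenbeck) process, whose precision matrix is exactly tridiagonal, and evaluates the likelihood through a sparse factorization. The AR coefficient, the zero mean and the placeholder data are assumptions made only for this example.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def ar1_precision(n, phi, sigma2=1.0):
    """Tridiagonal precision matrix of a stationary AR(1) process with coefficient phi."""
    main = np.full(n, (1 + phi**2) / sigma2)
    main[0] = main[-1] = 1 / sigma2
    off = np.full(n - 1, -phi / sigma2)
    return sparse.diags([off, main, off], offsets=[-1, 0, 1], format="csc")

def gaussian_nll_from_precision(y, Lambda):
    """Negative log-likelihood of a zero-mean Gaussian given its sparse precision matrix."""
    lu = splu(Lambda)                                        # sparse LU factorization
    logdet_Lambda = np.sum(np.log(np.abs(lu.U.diagonal())))  # log det(Lambda)
    quad = y @ (Lambda @ y)
    n = len(y)
    return 0.5 * (n * np.log(2 * np.pi) - logdet_Lambda + quad)

n, phi = 10_000, 0.9
rng = np.random.default_rng(2)
y = rng.standard_normal(n)            # placeholder data
nll = gaussian_nll_from_precision(y, ar1_precision(n, phi))
```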

Sparse Cholesky factor methods

In many practical applications, calculating [math]\displaystyle{ \mathbf{\Lambda} }[/math] is replaced with computing first [math]\displaystyle{ \mathbf{L} }[/math], the Cholesky factor of [math]\displaystyle{ \mathbf{\Sigma} }[/math], and second its inverse [math]\displaystyle{ \mathbf{L}^{-1} }[/math]. This is known to be more stable than a plain inversion. For this reason, some authors focus on constructing a sparse approximation of the Cholesky factor of the precision or covariance matrices. One of the most established methods in this class is the Vecchia approximation and its generalization. These approaches determine the optimal ordering of the indices and, consequently, of the elements of [math]\displaystyle{ \mathbf{y} }[/math], and then assume a dependency structure which minimizes fill-in in the Cholesky factor. Several other methods can be expressed in this framework, including the Multi-resolution Approximation (MRA), Nearest Neighbour Gaussian Process, Modified Predictive Process and Full-scale approximation.
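
The sketch below illustrates the Vecchia idea: under a fixed ordering, each observation is conditioned only on at most [math]\displaystyle{ m }[/math] previously ordered nearest neighbours, so the joint density factorizes into small conditional Gaussians (equivalently, the Cholesky factor of the precision is sparse). The input ordering, the nearest-neighbour conditioning sets, the zero mean and the exponential covariance used in the example are simplifying assumptions.

```python
import numpy as np
from scipy.linalg import solve
from scipy.spatial.distance import cdist

def vecchia_nll(X, y, cov_fn, m=10):
    """Approximate negative log-likelihood built from sparse conditional densities."""
    n = len(y)
    nll = 0.0
    for i in range(n):
        # Condition y_i on at most m nearest previously ordered observations.
        nb = np.argsort(cdist(X[:i], X[i:i + 1]).ravel())[:m] if i > 0 else []
        if len(nb) == 0:
            mean, var = 0.0, cov_fn(X[:1], X[:1])[0, 0]
        else:
            K_nn = cov_fn(X[nb], X[nb]) + 1e-10 * np.eye(len(nb))
            k_in = cov_fn(X[nb], X[i:i + 1]).ravel()
            w = solve(K_nn, k_in, assume_a="pos")
            mean = w @ y[nb]
            var = cov_fn(X[i:i + 1], X[i:i + 1])[0, 0] - w @ k_in
        nll += 0.5 * (np.log(2 * np.pi * var) + (y[i] - mean) ** 2 / var)
    return nll

# Usage with an exponential covariance; the ordering is simply the input order.
rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(1000, 2))
y = rng.standard_normal(1000)                      # placeholder data
expcov = lambda A, B: np.exp(-cdist(A, B))
approx_nll = vecchia_nll(X, y, expcov, m=10)
```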

Low-rank methods

While this approach encompasses many methods, the common assumption underlying them all is that [math]\displaystyle{ y }[/math], the Gaussian process of interest, is effectively low-rank. More precisely, it is assumed that there exists a set of indices [math]\displaystyle{ \bar{X} = \{\bar{x}_1, \dots, \bar{x}_p\} }[/math] such that for every other set of indices [math]\displaystyle{ X = \{x_1, \dots, x_n\} }[/math]

[math]\displaystyle{ y(X) \sim \mathcal{N}\left(\mathbf{A}_X\bar{\mathbf{\mu}}, \mathbf{A}_X\bar{\mathbf{\Sigma}}\mathbf{A}_X^{\top} + \mathbf{D}\right) }[/math]

where [math]\displaystyle{ \mathbf{A}_X }[/math] is an [math]\displaystyle{ n \times p }[/math] matrix, [math]\displaystyle{ \bar{\mathbf{\mu}} = \mu\left(\bar{X}\right) }[/math] and [math]\displaystyle{ \bar{\mathbf{\Sigma}} = K\left(\bar{X}, \bar{X}\right) }[/math], and [math]\displaystyle{ \mathbf{D} }[/math] is a diagonal matrix. Depending on the method and application, various ways of selecting [math]\displaystyle{ \bar{X} }[/math] have been proposed. Typically, [math]\displaystyle{ p }[/math] is selected to be much smaller than [math]\displaystyle{ n }[/math], which means that the computational cost of inverting [math]\displaystyle{ \bar{\mathbf{\Sigma}} }[/math] is manageable ([math]\displaystyle{ \mathcal{O}(p^3) }[/math] instead of [math]\displaystyle{ \mathcal{O}(n^3) }[/math]).

More generally, on top of selecting [math]\displaystyle{ \bar{X} }[/math], one may also find an [math]\displaystyle{ n \times p }[/math] matrix [math]\displaystyle{ \mathbf{A} }[/math] and assume that [math]\displaystyle{ \mathbf{y} = \mathbf{A}\mathbf{\eta} }[/math], where [math]\displaystyle{ \mathbf{\eta} }[/math] are [math]\displaystyle{ p }[/math] values of a Gaussian process, possibly independent of [math]\displaystyle{ y }[/math]. Many machine learning methods fall into this category, such as subset-of-regressors (SoR), the relevance vector machine, the sparse spectrum Gaussian process and others; they generally differ in the way they derive [math]\displaystyle{ \mathbf{A} }[/math] and [math]\displaystyle{ \mathbf{\eta} }[/math].
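
Below is a minimal sketch in the spirit of an inducing-point (subset-of-regressors-like) approximation: with [math]\displaystyle{ p }[/math] inducing indices, the Woodbury identity reduces the cost of the kriging weights from [math]\displaystyle{ \mathcal{O}(n^3) }[/math] to roughly [math]\displaystyle{ \mathcal{O}(np^2) }[/math]. The random selection of [math]\displaystyle{ \bar{X} }[/math], the exponential covariance, the zero prior mean and the diagonal noise term are all simplifying assumptions of this example.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

def low_rank_predict(X, y, X_star, p=100, noise=0.1, jitter=1e-8, seed=0):
    """Kriging mean under a low-rank (inducing-point) covariance, never forming an n x n matrix."""
    cov = lambda A, B: np.exp(-cdist(A, B))            # exponential covariance (assumed)
    rng = np.random.default_rng(seed)
    Xb = X[rng.choice(len(X), size=p, replace=False)]  # randomly chosen inducing indices
    K_ub = cov(X, Xb)                                  # n x p cross-covariance
    K_bb = cov(Xb, Xb) + jitter * np.eye(p)            # p x p inducing covariance
    # Woodbury identity for (K_ub K_bb^{-1} K_ub^T + noise * I)^{-1} y:
    B = noise * K_bb + K_ub.T @ K_ub                   # p x p inner matrix
    alpha = (y - K_ub @ cho_solve(cho_factor(B), K_ub.T @ y)) / noise
    return cov(X_star, Xb) @ cho_solve(cho_factor(K_bb), K_ub.T @ alpha)

# Example: 5,000 training points, 100 inducing points.
rng = np.random.default_rng(5)
X = rng.uniform(0, 10, size=(5000, 2))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(5000)
y_star = low_rank_predict(X, y, X[:5], p=100)
```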

Hierarchical methods

The general principle of hierarchical approximations consists of a repeated application of some other method, such that each consecutive application refines the quality of the approximation. Even though they can be expressed as a set of statistical assumptions, they are often described in terms of a hierarchical matrix approximation (HODLR) or a basis function expansion (LatticeKrig, MRA, wavelets). The hierarchical matrix approach can often be represented as a repeated application of a low-rank approximation to successively smaller subsets of the index set [math]\displaystyle{ X }[/math]. Basis function expansion relies on using functions with compact support. These features can then be exploited by an algorithm that steps through consecutive layers of the approximation. In the most favourable settings, some of these methods can achieve quasi-linear ([math]\displaystyle{ \mathcal{O}(n\log n) }[/math]) complexity.
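
The sketch below illustrates the hierarchical-matrix idea on a matrix-vector product: diagonal blocks of the covariance are split recursively while off-diagonal blocks are replaced by truncated low-rank factors. For clarity the factors are obtained here from a dense matrix via an SVD, which is a simplifying assumption; practical hierarchical methods construct the low-rank factors directly from the covariance function rather than from a stored dense matrix.

```python
import numpy as np

def hodlr_matvec(Sigma, v, rank=10, leaf=64):
    """Approximate Sigma @ v using low-rank off-diagonal blocks, recursively."""
    n = len(v)
    if n <= leaf:
        return Sigma @ v                              # small dense block at the leaves
    h = n // 2
    A, B = Sigma[:h, :h], Sigma[h:, h:]               # diagonal blocks: recurse
    C = Sigma[:h, h:]                                 # off-diagonal block: low rank
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]       # truncated factors of C
    top = hodlr_matvec(A, v[:h], rank, leaf) + U @ (s * (Vt @ v[h:]))
    bot = hodlr_matvec(B, v[h:], rank, leaf) + Vt.T @ (s * (U.T @ v[:h]))
    return np.concatenate([top, bot])
```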

Unified framework

Probabilistic graphical models provide a convenient framework for comparing model-based approximations. In this context, the value of the process at an index [math]\displaystyle{ x_k \in X }[/math] can be represented by a vertex in a directed graph, and edges correspond to the terms in the factorization of the joint density of [math]\displaystyle{ y(X) }[/math]. In general, when no independence relations are assumed, the joint probability distribution can be represented by an arbitrary directed acyclic graph. Using a particular approximation can then be expressed as a certain way of ordering the vertices and adding or removing specific edges.

Methods without a statistical model

This class of methods does not specify a statistical model or impose assumptions on an existing one. Three major members of this group are the meta-kriging algorithm, the gapfill algorithm and the Local Approximate Gaussian Process approach. The first one partitions the set of indices into [math]\displaystyle{ K }[/math] components [math]\displaystyle{ \mathcal{X}^{(1)}, \dots, \mathcal{X}^{(K)} }[/math], calculates the conditional distribution for each of those components separately and then uses the geometric median of the conditional PDFs to combine them. The second is based on quantile regression using values of the process which are close to the value one is trying to predict, where distance is measured in terms of a metric on the set of indices. The Local Approximate Gaussian Process uses a similar logic but constructs a valid stochastic process based on these neighbouring values.
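
As an illustration of prediction from neighbouring values only, the sketch below computes, for each prediction index, a kriging mean based on its [math]\displaystyle{ m }[/math] nearest training points, so each local solve costs [math]\displaystyle{ \mathcal{O}(m^3) }[/math] regardless of [math]\displaystyle{ n }[/math]. Plain nearest-neighbour selection, a zero prior mean and an exponential covariance are simplifying assumptions here; the actual Local Approximate Gaussian Process method selects its local designs greedily.

```python
import numpy as np
from scipy.linalg import solve
from scipy.spatial.distance import cdist

def local_gp_predict(X, y, X_star, m=50):
    """Local kriging mean at each prediction index, using only its m nearest neighbours."""
    cov = lambda A, B: np.exp(-cdist(A, B))                  # exponential covariance (assumed)
    preds = []
    for x in X_star:
        nb = np.argsort(cdist(X, x[None, :]).ravel())[:m]    # m nearest training indices
        K = cov(X[nb], X[nb]) + 1e-8 * np.eye(len(nb))
        k = cov(X[nb], x[None, :]).ravel()
        preds.append(k @ solve(K, y[nb], assume_a="pos"))    # local kriging mean
    return np.array(preds)

# Example usage with placeholder data.
rng = np.random.default_rng(6)
X = rng.uniform(0, 10, size=(20000, 2))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(20000)
y_star = local_gp_predict(X, y, rng.uniform(0, 10, size=(10, 2)), m=50)
```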
