Kernel methods for vector output

Kernel methods are a well-established tool to analyze the relationship between input data and the corresponding output of a function. Kernels encapsulate the properties of functions in a computationally efficient way and allow algorithms to easily swap functions of varying complexity. In typical machine learning algorithms, these functions produce a scalar output. Recent development of kernel methods for functions with vector-valued output is due, at least in part, to interest in simultaneously solving related problems. Kernels which capture the relationship between the problems allow them to borrow strength from each other. Algorithms of this type include multi-task learning (also called multi-output learning or vector-valued learning), transfer learning, and co-kriging. Multi-label classification can be interpreted as mapping inputs to (binary) coding vectors with length equal to the number of classes.

In Gaussian processes, kernels are called covariance functions. Multiple-output functions correspond to considering multiple processes. See Bayesian interpretation of regularization for the connection between the two perspectives.

History

The history of learning vector-valued functions is closely linked to transfer learning: storing knowledge gained while solving one problem and applying it to a different but related problem. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on “Learning to Learn,” which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. Research on transfer learning has attracted much attention since 1995 under different names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning.[1] Interest in learning vector-valued functions was particularly sparked by multitask learning, a framework which tries to learn multiple, possibly different tasks simultaneously.

Much of the initial research in multitask learning in the machine learning community was algorithmic in nature, and applied to methods such as neural networks, decision trees and k-nearest neighbors in the 1990s.[2] The use of probabilistic models and Gaussian processes was pioneered and largely developed in the context of geostatistics, where prediction over vector-valued output data is known as cokriging.[3][4][5] Geostatistical approaches to multivariate modeling are mostly formulated around the linear model of coregionalization (LMC), a generative approach for developing valid covariance functions that has been used for multivariate regression and in statistics for computer emulation of expensive multivariate computer codes. The regularization and kernel theory literature for vector-valued functions followed in the 2000s.[6][7] While the Bayesian and regularization perspectives were developed independently, they are in fact closely related.[8]

Notation

In this context, the supervised learning problem is to learn the function [math]\displaystyle{ f }[/math] which best predicts vector-valued outputs [math]\displaystyle{ \mathbf{y_i} }[/math] given inputs (data) [math]\displaystyle{ \mathbf{x_i} }[/math].

[math]\displaystyle{ f(\mathbf{x_i}) = \mathbf{y_i} }[/math] for [math]\displaystyle{ i=1, \ldots ,N }[/math]
[math]\displaystyle{ \mathbf{x_i} \in \mathcal{X} }[/math], an input space (e.g. [math]\displaystyle{ \mathcal{X} = \mathbb{R}^p }[/math])
[math]\displaystyle{ \mathbf{y_i} \in \mathbb{R}^D }[/math]

In general, each component of the output vector [math]\displaystyle{ \mathbf{y_i} }[/math] could have different input data ([math]\displaystyle{ \mathbf{x_{d,i}} }[/math]) with different cardinality ([math]\displaystyle{ p }[/math]) and even different input spaces ([math]\displaystyle{ \mathcal{X} }[/math]).[8] The geostatistics literature calls this case heterotopic, and uses isotopic to indicate that each component of the output vector has the same set of inputs.[9]

Here, for simplicity in the notation, we assume the number and sample space of the data for each output are the same.

Regularization perspective[8][10][11]

From the regularization perspective, the problem is to learn [math]\displaystyle{ f_* }[/math] belonging to a reproducing kernel Hilbert space of vector-valued functions ([math]\displaystyle{ \mathcal{H} }[/math]). This is similar to the scalar case of Tikhonov regularization, with some extra care in the notation.

Vector-valued case

  • Reproducing kernel: [math]\displaystyle{ \mathbf{K}: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}^{D \times D} }[/math]
  • Learning problem: [math]\displaystyle{ f_* = \operatorname{argmin} \sum\limits_{j=1}^D \frac{1}{N} \sum\limits_{i=1}^N (f_j(\mathbf{x_i}) - y_{j,i})^2 + \lambda \Vert \mathbf{f} \Vert_\mathbf{K}^2 }[/math]
  • Solution (derived via the representer theorem[math]\displaystyle{ ^{\dagger} }[/math]):
    [math]\displaystyle{ f_*(\mathbf{x}) = \sum\limits_{i=1}^N \mathbf{K}(\mathbf{x_i},\mathbf{x})c_i }[/math]
    with [math]\displaystyle{ \bar{\mathbf{c}} = (\mathbf{K}(\mathbf{X},\mathbf{X}) + \lambda N\mathbf{I})^{-1}\bar{\mathbf{y}} }[/math], where [math]\displaystyle{ \bar{\mathbf{c}} \text{ and } \bar{\mathbf{y}} }[/math] are the coefficient and output vectors concatenated to form [math]\displaystyle{ ND }[/math]-dimensional vectors, and [math]\displaystyle{ \mathbf{K}(\mathbf{X},\mathbf{X}) \text{ is an } ND \times ND }[/math] matrix composed of [math]\displaystyle{ N \times N }[/math] blocks whose entries are [math]\displaystyle{ (\mathbf{K}(\mathbf{x_i},\mathbf{x_j}))_{d,d'} }[/math].

Scalar case

  • Reproducing kernel: [math]\displaystyle{ k: \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R} }[/math]
  • Learning problem: [math]\displaystyle{ f_* = \operatorname{argmin} \frac{1}{N} \sum\limits_{i=1}^N (f(\mathbf{x_i}) - y_{i})^2 + \lambda \Vert f \Vert_k^2 }[/math]
  • Solution (derived via the representer theorem[math]\displaystyle{ ^{\dagger} }[/math]):
    [math]\displaystyle{ f_*(\mathbf{x}) = \sum\limits_{i=1}^N k(\mathbf{x_i},\mathbf{x})c_i = \mathbf{k}_\mathbf{x}^\intercal \mathbf{c} }[/math]
    Solving for [math]\displaystyle{ \mathbf{c} }[/math] by taking the derivative of the learning problem, setting it equal to zero, and substituting the above expression for [math]\displaystyle{ f_* }[/math] gives [math]\displaystyle{ \mathbf{c} = (\mathbf{K} + \lambda N \mathbf{I})^{-1}\mathbf{y} }[/math], where [math]\displaystyle{ \mathbf{K}_{ij} = k(\mathbf{x_i},\mathbf{x_j}) }[/math] is the [math]\displaystyle{ i^{\text{th}} }[/math] element of [math]\displaystyle{ \mathbf{k}_\mathbf{x_j} }[/math].

[math]\displaystyle{ ^{\dagger} }[/math]It is possible, though non-trivial, to show that a representer theorem also holds for Tikhonov regularization in the vector-valued setting.[8]
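
For illustration, the following NumPy sketch assembles the block matrix [math]\displaystyle{ \mathbf{K}(\mathbf{X},\mathbf{X}) }[/math] and evaluates the solution above. The function names, the output-by-output stacking convention, and the assumption that the matrix-valued kernel is supplied as a Python function returning a [math]\displaystyle{ D \times D }[/math] array are illustrative choices, not part of the formulation.

  import numpy as np

  def big_kernel_matrix(K, X, D):
      """Assemble the ND x ND matrix K(X, X) from a matrix-valued kernel K(x, x')."""
      N = X.shape[0]
      K_big = np.empty((N * D, N * D))
      for i in range(N):
          for j in range(N):
              Kij = K(X[i], X[j])          # D x D block with entries (K(x_i, x_j))_{d,d'}
              K_big[i::N, j::N] = Kij      # row d*N+i, column d'*N+j gets (K(x_i, x_j))_{d,d'}
      return K_big

  def fit(K, X, Y, lam):
      """Coefficients c_bar = (K(X,X) + lambda*N*I)^{-1} y_bar, outputs stacked output-by-output."""
      N, D = Y.shape
      y_bar = Y.T.reshape(-1)              # concatenated ND-dimensional output vector
      A = big_kernel_matrix(K, X, D) + lam * N * np.eye(N * D)
      return np.linalg.solve(A, y_bar)

  def predict(K, X, c_bar, x_new):
      """Evaluate f_*(x) = sum_i K(x_i, x) c_i as a length-D vector."""
      N = X.shape[0]
      D = c_bar.size // N
      C = c_bar.reshape(D, N)              # C[d, i] is the d-th component of c_i
      return sum(K(X[i], x_new) @ C[:, i] for i in range(N))

For example, a separable kernel (introduced below) could be supplied as K = lambda x, xp: np.exp(-0.5 * np.sum((x - xp)**2)) * B for some [math]\displaystyle{ D \times D }[/math] positive semi-definite matrix B.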

Note, the matrix-valued kernel [math]\displaystyle{ \mathbf{K} }[/math] can also be defined by a scalar kernel [math]\displaystyle{ R }[/math] on the space [math]\displaystyle{ \mathcal{X} \times \{1, \ldots ,D\} }[/math]. An isometry exists between the Hilbert spaces associated with these two kernels:

[math]\displaystyle{ (\mathbf{K}(x,x'))_{d,d'} = R((x,d),(x',d')) }[/math]

Gaussian process perspective

The estimator of the vector-valued regularization framework can also be derived from a Bayesian viewpoint using Gaussian process methods in the case of a finite dimensional reproducing kernel Hilbert space. The derivation is similar to the scalar-valued case (see Bayesian interpretation of regularization). The vector-valued function [math]\displaystyle{ \textbf{f} }[/math], consisting of [math]\displaystyle{ D }[/math] outputs [math]\displaystyle{ \left\{f_d\right\}_{d=1}^D }[/math], is assumed to follow a Gaussian process:

[math]\displaystyle{ \textbf{f} \sim \mathcal{GP}(\textbf{m},\textbf{K}) }[/math]

where [math]\displaystyle{ \textbf{m}: \mathcal{X} \to \textbf{R}^D }[/math] is now a vector of the mean functions [math]\displaystyle{ \left\{m_d(\textbf{x})\right\}_{d=1}^D }[/math] for the outputs and [math]\displaystyle{ \textbf{K} }[/math] is a positive definite matrix-valued function with entry [math]\displaystyle{ (\textbf{K}(\textbf{x},\textbf{x}'))_{d,d'} }[/math] corresponding to the covariance between the outputs [math]\displaystyle{ f_d(\textbf{x}) }[/math] and [math]\displaystyle{ f_{d'}(\textbf{x}') }[/math].

For a set of inputs [math]\displaystyle{ \textbf{X} }[/math], the prior distribution over the vector [math]\displaystyle{ \textbf{f}(\textbf{X}) }[/math] is given by [math]\displaystyle{ \mathcal{N}(\textbf{m}(\textbf{X}),\textbf{K}(\textbf{X},\textbf{X})) }[/math], where [math]\displaystyle{ \textbf{m}(\textbf{X}) }[/math] is a vector that concatenates the mean vectors associated to the outputs and [math]\displaystyle{ \textbf{K}(\textbf{X},\textbf{X}) }[/math] is a block-partitioned matrix. The distribution of the outputs is taken to be Gaussian:

[math]\displaystyle{ p(\textbf{y}\mid \textbf{f},\textbf{x}, \Sigma) = \mathcal{N}(\textbf{f}(\textbf{x}),\Sigma) }[/math]

where [math]\displaystyle{ \Sigma \in \mathcal{\textbf{R}}^{D \times D} }[/math] is a diagonal matrix with elements [math]\displaystyle{ \left\{\sigma_d^2\right\}_{d=1}^{D} }[/math] specifying the noise for each output. Using this form for the likelihood, the predictive distribution for a new vector [math]\displaystyle{ \textbf{x}_* }[/math] is:

[math]\displaystyle{ p(\textbf{f}(\textbf{x}_*)\mid\textbf{S},\textbf{f},\textbf{x}_*,\phi) = \mathcal{N}(\textbf{f}_*(\textbf{x}_*),\textbf{K}_*(\textbf{x}_*,\textbf{x}_*)) }[/math]

where [math]\displaystyle{ \textbf{S} }[/math] is the training data, and [math]\displaystyle{ \phi }[/math] is a set of hyperparameters for [math]\displaystyle{ \textbf{K}(\textbf{x},\textbf{x}') }[/math] and [math]\displaystyle{ \Sigma }[/math].

Equations for [math]\displaystyle{ \textbf{f}_* }[/math] and [math]\displaystyle{ \textbf{K}_* }[/math] can then be obtained:

[math]\displaystyle{ \textbf{f}_*(\textbf{x}_*) = \textbf{K}_{\textbf{x}_*}(\textbf{K}(\textbf{X},\textbf{X}) + \boldsymbol\Sigma)^{-1}\bar{\textbf{y}} }[/math]
[math]\displaystyle{ \textbf{K}_*(\textbf{x}_*,\textbf{x}_*) = \textbf{K}(\textbf{x}_*,\textbf{x}_*) - \textbf{K}_{\textbf{x}_*}(\textbf{K}(\textbf{X},\textbf{X}) + \boldsymbol\Sigma)^{-1}\textbf{K}_{\textbf{x}_*}^T }[/math]

where [math]\displaystyle{ \boldsymbol\Sigma = \Sigma \otimes \textbf{I}_N, \textbf{K}_{\textbf{x}_*} \in \mathcal{\textbf{R}}^{D \times ND} }[/math] has entries [math]\displaystyle{ (\textbf{K}(\textbf{x}_*,\textbf{x}_j))_{d,d'} }[/math] for [math]\displaystyle{ j = 1,\cdots,N }[/math] and [math]\displaystyle{ d,d' = 1,\cdots,D }[/math]. Note that the predictor [math]\displaystyle{ \textbf{f}_* }[/math] is identical to the predictor derived in the regularization framework. For non-Gaussian likelihoods, different methods such as the Laplace approximation and variational methods are needed to approximate the estimators.
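
The following sketch evaluates these predictive equations in NumPy. It assumes, for concreteness, a zero mean function and a separable covariance [math]\displaystyle{ \textbf{K}(\textbf{x},\textbf{x}') = k(\textbf{x},\textbf{x}')\textbf{B} }[/math] with an RBF choice of [math]\displaystyle{ k }[/math]; these choices and the helper names are illustrative.

  import numpy as np

  def rbf(X1, X2, ell=1.0):
      """Scalar RBF kernel matrix k(X1, X2); an illustrative choice of scalar kernel."""
      d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
      return np.exp(-0.5 * d2 / ell**2)

  def mogp_predict(X, Y, x_star, B, noise_vars, ell=1.0):
      """Predictive mean and covariance at x_star for a zero-mean multi-output GP
      with separable covariance K(x, x') = k(x, x') B and noise Sigma kron I_N."""
      N, D = Y.shape
      K_big = np.kron(B, rbf(X, X, ell))                    # (ND, ND), outputs stacked output-by-output
      Sigma_big = np.kron(np.diag(noise_vars), np.eye(N))   # Sigma kron I_N
      y_bar = Y.T.reshape(-1)

      k_star = rbf(x_star[None, :], X, ell).ravel()         # k(x_*, x_j), j = 1..N
      K_star = np.kron(B, k_star[None, :])                  # (D, ND) with entries (K(x_*, x_j))_{d,d'}

      A = K_big + Sigma_big
      mean = K_star @ np.linalg.solve(A, y_bar)             # f_*(x_*)
      cov = B * rbf(x_star[None, :], x_star[None, :], ell)[0, 0] \
            - K_star @ np.linalg.solve(A, K_star.T)         # K_*(x_*, x_*)
      return mean, cov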

Example kernels

Separable

A simple, but broadly applicable, class of multi-output kernels can be separated into the product of a kernel on the input space and a kernel representing the correlations among the outputs:[8]

[math]\displaystyle{ (\mathbf{K}(\mathbf{x},\mathbf{x'}))_{d,d'} = k(\mathbf{x},\mathbf{x'})k_T(d,d') }[/math]
[math]\displaystyle{ k }[/math]: scalar kernel on [math]\displaystyle{ \mathcal{X} \times \mathcal{X} }[/math]
[math]\displaystyle{ k_T }[/math]: scalar kernel on [math]\displaystyle{ \{1, \ldots ,D\} \times \{1, \ldots ,D\} }[/math]

In matrix form: [math]\displaystyle{ \mathbf{K}(\mathbf{x},\mathbf{x'}) = k(\mathbf{x},\mathbf{x'})\mathbf{B} }[/math]    where [math]\displaystyle{ \mathbf{B} }[/math] is a [math]\displaystyle{ D \times D }[/math] symmetric and positive semi-definite matrix. Note, setting [math]\displaystyle{ \mathbf{B} }[/math] to the identity matrix treats the outputs as unrelated and is equivalent to solving the scalar-output problems separately.

For a slightly more general form, adding several of these kernels yields sum of separable kernels (SoS kernels).
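
As an illustration, both constructions can be written as small kernel factories; the function names are illustrative, and the returned functions could be plugged into the fitting sketch given earlier in place of K.

  import numpy as np

  def separable_kernel(k, B):
      """K(x, x') = k(x, x') B for a scalar kernel k and a D x D positive semi-definite B."""
      return lambda x, x_prime: k(x, x_prime) * B

  def sos_kernel(scalar_kernels, B_matrices):
      """Sum of separable kernels: K(x, x') = sum_q k_q(x, x') B_q."""
      def K(x, x_prime):
          return sum(k_q(x, x_prime) * B_q for k_q, B_q in zip(scalar_kernels, B_matrices))
      return K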

From regularization literature[8][10][12][13][14]

Derived from regularizer

One way of obtaining [math]\displaystyle{ k_T }[/math] is to specify a regularizer which limits the complexity of [math]\displaystyle{ f }[/math] in a desirable way, and then derive the corresponding kernel. For certain regularizers, this kernel will turn out to be separable.

Mixed-effect regularizer

[math]\displaystyle{ R(\mathbf{f}) = A_\omega(C_\omega \sum\limits_{l=1}^D \| f_l \|_k^2 + \omega D \sum\limits_{l=1}^D \| f_l - \bar{f} \|_k^2) }[/math]

where:

  • [math]\displaystyle{ A_\omega = \frac{1}{2(1 - \omega)(1 - \omega + \omega D)} }[/math]
  • [math]\displaystyle{ C_\omega = (2 - 2\omega + \omega D) }[/math]
  • [math]\displaystyle{ \bar{f} = \frac{1}{D} \sum\limits_{q=1}^D f_q }[/math]
  • [math]\displaystyle{ K_\omega(x,x') = k(x,x')(\omega \mathbf{1} + (1-\omega) \mathbf{I}_D) }[/math]

where [math]\displaystyle{ \mathbf{1} \text{ is a } D \times D }[/math] matrix with all entries equal to 1.

This regularizer is a combination of limiting the complexity of each component of the estimator ([math]\displaystyle{ f_l }[/math]) and forcing each component of the estimator to be close to the mean of all the components. Setting [math]\displaystyle{ \omega = 0 }[/math] treats all the components as independent and is the same as solving the scalar problems separately. Setting [math]\displaystyle{ \omega = 1 }[/math] assumes all the components are explained by the same function.
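
A minimal sketch of the corresponding output matrix (the function name is illustrative):

  import numpy as np

  def mixed_effect_B(D, omega):
      """Output matrix of the mixed-effect kernel: omega * 1 + (1 - omega) * I_D."""
      return omega * np.ones((D, D)) + (1.0 - omega) * np.eye(D)

  # omega = 0 gives the identity (independent outputs);
  # omega = 1 gives the all-ones matrix (all outputs share one underlying function).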

Cluster-based regularizer

[math]\displaystyle{ R(\mathbf{f}) = \varepsilon_1 \sum_{c=1}^r \sum_{l \in I(c)} \| f_l - \bar{f_c}\|_k^2 + \varepsilon_2 \sum\limits_{c=1}^r m_c \| \bar{f_c}\|_k^2 }[/math]

where:

  • [math]\displaystyle{ I(c) }[/math] is the index set of components that belong to cluster [math]\displaystyle{ c }[/math]
  • [math]\displaystyle{ m_c }[/math] is the cardinality of cluster [math]\displaystyle{ c }[/math]
  • [math]\displaystyle{ \bar{f_c} = \frac{1}{m_c} \sum\limits_{q \in I(c)} f_q }[/math]
  • [math]\displaystyle{ \mathbf{M}_{l,q} = \frac{1}{m_c} }[/math] if [math]\displaystyle{ l }[/math] and [math]\displaystyle{ q }[/math] both belong to cluster [math]\displaystyle{ c }[/math]  ([math]\displaystyle{ \mathbf{M}_{l,q} = 0 }[/math] otherwise)
  • [math]\displaystyle{ K(x,x') = k(x,x') \mathbf{G}^\dagger }[/math]

where [math]\displaystyle{ \mathbf{G}_{l,q} = \varepsilon_1 \delta_{lq} + (\varepsilon_2 - \varepsilon_1)\mathbf{M}_{l,q} }[/math]

This regularizer divides the components into [math]\displaystyle{ r }[/math] clusters and forces the components in each cluster to be similar.
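
The output matrix [math]\displaystyle{ \mathbf{G}^\dagger }[/math] can be built directly from a cluster assignment, as in the following sketch (the function name and the use of a label array are illustrative):

  import numpy as np

  def cluster_output_matrix(cluster_labels, eps1, eps2):
      """Return G^+ for the cluster-based kernel K(x, x') = k(x, x') G^+.

      cluster_labels[l] is the cluster index of component l;
      G_{l,q} = eps1 * delta_{lq} + (eps2 - eps1) * M_{l,q}, with
      M_{l,q} = 1/m_c if l and q belong to the same cluster c, and 0 otherwise."""
      labels = np.asarray(cluster_labels)
      D = labels.size
      M = np.zeros((D, D))
      for c in np.unique(labels):
          idx = np.where(labels == c)[0]
          M[np.ix_(idx, idx)] = 1.0 / idx.size
      G = eps1 * np.eye(D) + (eps2 - eps1) * M
      return np.linalg.pinv(G)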

Graph regularizer

[math]\displaystyle{ R(\mathbf{f}) = \frac{1}{2} \sum\limits_{l,q=1}^D \Vert f_l - f_q \Vert_k^2 \mathbf{M}_{lq} + \sum\limits_{l=1}^D \Vert f_l \Vert_k^2 \mathbf{M}_{l,l} }[/math]

where [math]\displaystyle{ \mathbf{M} \text{ is a } D \times D }[/math] matrix of weights encoding the similarities between the components

[math]\displaystyle{ K(x,x') = k(x,x') \mathbf{L}^\dagger }[/math]

where [math]\displaystyle{ \mathbf{L} = \mathbf{D} - \mathbf{M} }[/math],   [math]\displaystyle{ \mathbf{D}_{l,q} = \delta_{l,q}(\sum\limits_{h=1}^D \mathbf{M}_{l,h} + \mathbf{M}_{l,q}) }[/math]

Note, [math]\displaystyle{ \mathbf{L} }[/math] is the graph Laplacian. See also: graph kernel.
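
A minimal sketch of the corresponding output matrix, given a symmetric similarity matrix [math]\displaystyle{ \mathbf{M} }[/math] (the function name is illustrative):

  import numpy as np

  def graph_output_matrix(M):
      """Return L^+ for the graph-regularizer kernel K(x, x') = k(x, x') L^+.

      M is a symmetric D x D matrix of non-negative similarity weights."""
      M = np.asarray(M, dtype=float)
      Deg = np.diag(M.sum(axis=1) + np.diag(M))   # D_{l,l} = sum_h M_{l,h} + M_{l,l}
      L = Deg - M                                  # graph Laplacian
      return np.linalg.pinv(L)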

Learned from data

Several approaches to learning [math]\displaystyle{ \mathbf{B} }[/math] from data have been proposed.[8] These include: performing a preliminary inference step to estimate [math]\displaystyle{ \mathbf{B} }[/math] from the training data,[9] a proposal to learn [math]\displaystyle{ \mathbf{B} }[/math] and [math]\displaystyle{ \mathbf{f} }[/math] together based on the cluster regularizer,[15] and sparsity-based approaches which assume only a few of the features are needed.[16] [17]

From Bayesian literature

Linear model of coregionalization (LMC)

In LMC, outputs are expressed as linear combinations of independent random functions such that the resulting covariance function (over all inputs and outputs) is a valid positive semidefinite function. Assuming [math]\displaystyle{ D }[/math] outputs [math]\displaystyle{ \left\{f_d(\textbf{x})\right\}_{d=1}^D }[/math] with [math]\displaystyle{ \textbf{x} \in \mathcal{\textbf{R}}^p }[/math], each [math]\displaystyle{ f_d }[/math] is expressed as:

[math]\displaystyle{ f_d(\textbf{x}) = \sum_{q=1}^Q{a_{d,q}u_q(\textbf{x})} }[/math]

where [math]\displaystyle{ a_{d,q} }[/math] are scalar coefficients and the independent functions [math]\displaystyle{ u_q(\textbf{x}) }[/math] have zero mean and covariance cov[math]\displaystyle{ [u_q(\textbf{x}),u_{q'}(\textbf{x}')] = k_q(\textbf{x},\textbf{x}') }[/math] if [math]\displaystyle{ q=q' }[/math] and 0 otherwise. In a slightly more general form, each group [math]\displaystyle{ q }[/math] contains [math]\displaystyle{ R_q }[/math] latent functions [math]\displaystyle{ u_q^i(\textbf{x}) }[/math] sharing the same covariance [math]\displaystyle{ k_q }[/math], so that [math]\displaystyle{ f_d(\textbf{x}) = \sum_{q=1}^Q\sum_{i=1}^{R_q}{a_{d,q}^i u_q^i(\textbf{x})} }[/math]. The cross covariance between any two functions [math]\displaystyle{ f_d(\textbf{x}) }[/math] and [math]\displaystyle{ f_{d'}(\textbf{x}') }[/math] can then be written as:

[math]\displaystyle{ \operatorname{cov}[f_d(\textbf{x}),f_{d'}(\textbf{x}')] = \sum_{q=1}^Q{\sum_{i=1}^{R_q}{a_{d,q}^ia_{d',q}^{i}k_q(\textbf{x},\textbf{x}')}} = \sum_{q=1}^Q{b_{d,d'}^qk_q(\textbf{x},\textbf{x}')} }[/math]

where the functions [math]\displaystyle{ u_q^i(\textbf{x}) }[/math], with [math]\displaystyle{ q=1,\cdots,Q }[/math] and [math]\displaystyle{ i=1,\cdots,R_q }[/math], have zero mean and covariance cov[math]\displaystyle{ [u_q^i(\textbf{x}),u_{q'}^{i'}(\textbf{x}')] = k_q(\textbf{x},\textbf{x}') }[/math] if [math]\displaystyle{ i=i' }[/math] and [math]\displaystyle{ q=q' }[/math]. But [math]\displaystyle{ \operatorname{cov}[f_d(\textbf{x}),f_{d'}(\textbf{x}')] }[/math] is given by [math]\displaystyle{ (\textbf{K}(\textbf{x},\textbf{x}'))_{d,d'} }[/math]. Thus the kernel [math]\displaystyle{ \textbf{K}(\textbf{x},\textbf{x}') }[/math] can now be expressed as

[math]\displaystyle{ \textbf{K}(\textbf{x},\textbf{x}') = \sum_{q=1}^Q{\textbf{B}_qk_q(\textbf{x},\textbf{x}')} }[/math]

where each [math]\displaystyle{ \textbf{B}_q \in \mathcal{\textbf{R}}^{D \times D} }[/math] is known as a coregionalization matrix. Therefore, the kernel derived from LMC is a sum of the products of two covariance functions, one that models the dependence between the outputs, independently of the input vector [math]\displaystyle{ \textbf{x} }[/math] (the coregionalization matrix [math]\displaystyle{ \textbf{B}_q }[/math]), and one that models the input dependence, independently of [math]\displaystyle{ \left\{f_d(\textbf{x})\right\}_{d=1}^D }[/math](the covariance function [math]\displaystyle{ k_q(\textbf{x},\textbf{x}') }[/math]).
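
The following sketch draws a joint sample of [math]\displaystyle{ D }[/math] correlated outputs from a zero-mean LMC prior. The rank-one choice [math]\displaystyle{ \textbf{B}_q = \textbf{a}_q\textbf{a}_q^\intercal }[/math] (i.e. [math]\displaystyle{ R_q = 1 }[/math], the semiparametric latent factor case discussed below), the RBF latent kernels, and the parameter values are illustrative assumptions.

  import numpy as np

  rng = np.random.default_rng(0)
  N, D, Q = 100, 2, 2
  X = np.linspace(0, 1, N)[:, None]
  ells = [0.05, 0.3]                               # one length scale per latent kernel k_q

  def rbf(X1, X2, ell):
      d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
      return np.exp(-0.5 * d2 / ell**2)

  # Rank-one coregionalization matrices B_q = a_q a_q^T (i.e. R_q = 1).
  A = rng.normal(size=(Q, D))

  K_big = np.zeros((N * D, N * D))
  for q in range(Q):
      B_q = np.outer(A[q], A[q])
      K_big += np.kron(B_q, rbf(X, X, ells[q]))    # sum_q B_q k_q(X, X)

  # One joint draw of all D outputs from the zero-mean LMC prior.
  f = rng.multivariate_normal(np.zeros(N * D), K_big + 1e-8 * np.eye(N * D))
  F = f.reshape(D, N)                              # row d is output d evaluated on X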

Intrinsic coregionalization model (ICM)

The ICM is a simplified version of the LMC, with [math]\displaystyle{ Q=1 }[/math]. ICM assumes that the elements [math]\displaystyle{ b_{d,d'}^q }[/math] of the coregionalization matrix [math]\displaystyle{ \mathbf{B}_q }[/math] can be written as [math]\displaystyle{ b_{d,d'}^q = v_{d,d'}b_q }[/math], for some suitable coefficients [math]\displaystyle{ v_{d,d'} }[/math]. With this form for [math]\displaystyle{ b_{d,d'}^q }[/math]:

[math]\displaystyle{ \operatorname{cov} \left [f_d(\mathbf{x}),f_{d'}(\mathbf{x}') \right ] = \sum_{q=1}^Q{v_{d,d'}b_qk_q (\mathbf{x},\mathbf{x}')} = v_{d,d'}\sum_{q=1}^Q{b_qk_q(\mathbf{x},\mathbf{x}')} = v_{d,d'}k(\mathbf{x},\mathbf{x}') }[/math]

where

[math]\displaystyle{ k(\mathbf{x},\mathbf{x}') = \sum_{q=1}^Q{b_qk_q(\mathbf{x},\mathbf{x}')}. }[/math]

In this case, the coefficients

[math]\displaystyle{ v_{d,d'} = \sum_{i=1}^{R_1}{a_{d,1}^ia_{d',1}^i} = b_{d,d'}^1 }[/math]

and the kernel matrix for multiple outputs becomes [math]\displaystyle{ \mathbf{K}(\mathbf{x},\mathbf{x}') = k(\mathbf{x},\mathbf{x}')\mathbf{B} }[/math]. ICM is much more restrictive than the LMC since it assumes that each basic covariance [math]\displaystyle{ k_q(\mathbf{x},\mathbf{x}') }[/math] contributes equally to the construction of the autocovariances and cross covariances for the outputs. However, the computations required for the inference are greatly simplified.

Semiparametric latent factor model (SLFM)

Another simplified version of the LMC is the semiparametric latent factor model (SLFM), which corresponds to setting [math]\displaystyle{ R_q = 1 }[/math] (instead of [math]\displaystyle{ Q = 1 }[/math] as in ICM). Thus each latent function [math]\displaystyle{ u_q }[/math] has its own covariance.

Non-separable

While simple, the structure of separable kernels can be too limiting for some problems.

Notable examples of non-separable kernels in the regularization literature include:

  • matrix-valued kernels designed to learn divergence-free or curl-free vector fields[18]
  • universal matrix-valued kernels developed for multi-task learning[19]

In the Bayesian perspective, LMC produces a separable kernel because the output functions evaluated at a point [math]\displaystyle{ \textbf{x} }[/math] only depend on the values of the latent functions at [math]\displaystyle{ \textbf{x} }[/math]. A non-trivial way to mix the latent functions is by convolving a base process with a smoothing kernel. If the base process is a Gaussian process, the convolved process is Gaussian as well. We can therefore exploit convolutions to construct covariance functions.[20] This method of producing non-separable kernels is known as process convolution. Process convolutions were introduced for multiple outputs in the machine learning community as "dependent Gaussian processes".[21]
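
As a minimal one-dimensional sketch, assume a white-noise base process and unnormalized Gaussian smoothing kernels [math]\displaystyle{ G_d(t) = \exp(-t^2/(2\lambda_d^2)) }[/math]; the convolution integral then has a closed form (the function name and parameters are illustrative):

  import numpy as np

  def pc_cross_cov(x, x_prime, lam_d, lam_dp):
      """Cross covariance cov[f_d(x), f_d'(x')] from convolving a white-noise base process
      with Gaussian smoothing kernels of widths lam_d and lam_dp (1-D inputs):
      integral of G_d(x - z) * G_d'(x' - z) dz."""
      a, b = lam_d ** 2, lam_dp ** 2
      return np.sqrt(2 * np.pi * a * b / (a + b)) * np.exp(-(x - x_prime) ** 2 / (2 * (a + b)))

Because the effective length scale [math]\displaystyle{ \lambda_d^2 + \lambda_{d'}^2 }[/math] depends on the pair of outputs, the resulting multi-output kernel is not separable.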

Implementation

When implementing an algorithm using any of the kernels above, practical issues such as tuning the parameters and keeping the computation time reasonable must be addressed.

Regularization perspective

Approached from the regularization perspective, parameter tuning is similar to the scalar-valued case and can generally be accomplished with cross validation. Solving the required linear system is typically expensive in memory and time. If the kernel is separable, a coordinate transform can convert [math]\displaystyle{ \mathbf{K}(\mathbf{X},\mathbf{X}) }[/math] to a block-diagonal matrix, greatly reducing the computational burden by solving D independent subproblems (plus the eigendecomposition of [math]\displaystyle{ \mathbf{B} }[/math]). In particular, for a least squares loss function (Tikhonov regularization), there exists a closed form solution for [math]\displaystyle{ \bar{\mathbf{c}} }[/math]:[8][14]

[math]\displaystyle{ \bar{\mathbf{c}}^d = \left (k(\mathbf{X},\mathbf{X}) + \frac{\lambda N}{\sigma_d} \mathbf{I} \right )^{-1}\frac{\bar{\mathbf{y}}^d}{\sigma_d} }[/math]
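
The following sketch illustrates this block-diagonalization for a separable kernel, reading the [math]\displaystyle{ \sigma_d }[/math] as the eigenvalues of [math]\displaystyle{ \mathbf{B} }[/math] and the [math]\displaystyle{ \bar{\mathbf{y}}^d }[/math] as the training outputs rotated into the corresponding eigenbasis; the function and variable names are illustrative.

  import numpy as np

  def fit_separable_blockwise(k_XX, Y, lam, B):
      """Tikhonov solution for a separable kernel K = k * B via the eigendecomposition of B.

      k_XX: (N, N) scalar kernel matrix; Y: (N, D) outputs; lam: regularization parameter.
      Returns C of shape (N, D); column d holds the coefficients of output d."""
      N, D = Y.shape
      sig, U = np.linalg.eigh(B)                   # B = U diag(sigma_d) U^T
      Y_tilde = Y @ U                              # outputs rotated into the eigenbasis of B
      C_tilde = np.empty((N, D))
      for d in range(D):
          # Equivalent to (k(X,X) + lam*N/sigma_d * I)^{-1} (y_tilde^d / sigma_d)
          C_tilde[:, d] = np.linalg.solve(sig[d] * k_XX + lam * N * np.eye(N), Y_tilde[:, d])
      return C_tilde @ U.T                          # rotate the coefficients back

Each subproblem only requires solving an [math]\displaystyle{ N \times N }[/math] system, instead of the full [math]\displaystyle{ ND \times ND }[/math] system.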

Bayesian perspective

There are many works related to parameter estimation for Gaussian processes. Some methods, such as maximization of the marginal likelihood (also known as evidence approximation, type II maximum likelihood, or empirical Bayes) and least squares, give point estimates of the parameter vector [math]\displaystyle{ \phi }[/math]. There are also works employing full Bayesian inference by assigning priors to [math]\displaystyle{ \phi }[/math] and computing the posterior distribution through a sampling procedure. For non-Gaussian likelihoods, there is no closed-form solution for the posterior distribution or for the marginal likelihood. However, the marginal likelihood can be approximated under a Laplace, variational Bayes, or expectation propagation (EP) approximation framework for multiple-output classification and used to find estimates for the hyperparameters.

The main computational problem in the Bayesian viewpoint is the same one that appears in regularization theory: inverting the matrix

[math]\displaystyle{ \overline{\mathbf{K}(\mathbf{X},\mathbf{X})} = \mathbf{K}(\mathbf{X},\mathbf{X}) + \boldsymbol{\Sigma}. }[/math]

This step is necessary for computing the marginal likelihood and the predictive distribution. For most proposed approximation methods that reduce computation, the efficiency gained is independent of the particular approach (e.g. LMC, process convolution) used to compute the multi-output covariance matrix. A summary of different methods for reducing computational complexity in multi-output Gaussian processes is given in the review by Álvarez et al.[8]
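
For reference, the following sketch computes the Gaussian log marginal likelihood through a Cholesky factorization of [math]\displaystyle{ \mathbf{K}(\mathbf{X},\mathbf{X}) + \boldsymbol{\Sigma} }[/math]; this is the standard [math]\displaystyle{ O((ND)^3) }[/math] computation that the approximation methods aim to avoid, and a zero mean and Gaussian likelihood are assumed.

  import numpy as np

  def log_marginal_likelihood(K_big, Sigma_big, y_bar):
      """Gaussian log marginal likelihood log p(y | X, phi) of a zero-mean multi-output GP,
      computed through a Cholesky factorization of A = K(X,X) + Sigma."""
      A = K_big + Sigma_big
      L = np.linalg.cholesky(A)
      alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_bar))   # A^{-1} y
      return (-0.5 * y_bar @ alpha
              - np.log(np.diag(L)).sum()                        # -0.5 * log det(A)
              - 0.5 * y_bar.size * np.log(2.0 * np.pi))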

References

  1. S.J. Pan and Q. Yang, "A survey on transfer learning," IEEE Transactions on Knowledge and Data Engineering, 22, 2010
  2. Rich Caruana, "Multitask Learning," Machine Learning, 41–76, 1997
  3. J. Ver Hoef and R. Barry, "Constructing and fitting models for cokriging and multivariable spatial prediction," Journal of Statistical Planning and Inference, 69:275–294, 1998
  4. P. Goovaerts, "Geostatistics for Natural Resources Evaluation," Oxford University Press, USA, 1997
  5. N. Cressie, "Statistics for Spatial Data," John Wiley & Sons Inc. (Revised Edition), USA, 1993
  6. C.A. Micchelli and M. Pontil, "On learning vector-valued functions," Neural Computation, 17:177–204, 2005
  7. C. Carmeli et al., "Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem," Anal. Appl. (Singap.), 4(4):377–408, 2006
  8. Mauricio A. Álvarez, Lorenzo Rosasco, and Neil D. Lawrence, "Kernels for Vector-Valued Functions: A Review," Foundations and Trends in Machine Learning 4, no. 3 (2012): 195–266. doi: 10.1561/2200000036 arXiv:1106.6251
  9. Hans Wackernagel. Multivariate Geostatistics. Springer-Verlag, Heidelberg New York, 2003.
  10. C.A. Micchelli and M. Pontil. On learning vector-valued functions. Neural Computation, 17:177–204, 2005.
  11. C.Carmeli, E.DeVito, and A.Toigo. Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem. Anal. Appl. (Singap.), 4(4):377–408, 2006.
  12. C. A. Micchelli and M. Pontil. Kernels for multi-task learning. In Advances in Neural Information Processing Systems (NIPS). MIT Press, 2004.
  13. T.Evgeniou, C.A.Micchelli, and M.Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615–637, 2005.
  14. L. Baldassarre, L. Rosasco, A. Barla, and A. Verri. Multi-output learning via spectral filtering. Technical report, Massachusetts Institute of Technology, 2011. MIT-CSAIL-TR-2011-004, CBCL-296.
  15. Laurent Jacob, Francis Bach, and Jean-Philippe Vert. Clustered multi-task learning: A convex formulation. In NIPS 21, pages 745–752, 2008.
  16. Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
  17. Andreas Argyriou, Andreas Maurer, and Massimiliano Pontil. An algorithm for transfer learning in a heterogeneous environment. In ECML/PKDD (1), pages 71–85, 2008.
  18. I. Macêdo and R. Castro. Learning divergence-free and curl-free vector fields with matrix-valued kernels. Technical report, Instituto Nacional de Matemática Pura e Aplicada, 2008.
  19. A. Caponnetto, C.A. Micchelli, M. Pontil, and Y. Ying. Universal kernels for multi-task learning. Journal of Machine Learning Research, 9:1615–1646, 2008.
  20. D. Higdon, "Space and space-time modeling using process convolutions," Quantitative Methods for Current Environmental Issues, 37–56, 2002
  21. P. Boyle and M. Frean, "Dependent Gaussian processes," Advances in Neural Information Processing Systems, 17:217–224, MIT Press, 2005