Modes of variation
In statistics, modes of variation[1] are a continuously indexed set of vectors or functions that are centered at a mean and are used to depict the variation in a population or sample. Typically, variation patterns in the data can be decomposed in descending order of eigenvalues, with the directions represented by the corresponding eigenvectors or eigenfunctions. Modes of variation provide a visualization of this decomposition and an efficient description of variation around the mean. Both in principal component analysis (PCA) and in functional principal component analysis (FPCA), modes of variation play an important role in visualizing and describing the variation in the data contributed by each eigencomponent.[2] In real-world applications, the eigencomponents and associated modes of variation help interpret complex data, especially in exploratory data analysis (EDA).
Formulation
Modes of variation are a natural extension of PCA and FPCA.
Modes of variation in PCA
If a random vector [math]\displaystyle{ \mathbf{X}=(X_1, X_2, \cdots, X_p)^T }[/math] has the mean vector [math]\displaystyle{ \boldsymbol{\mu}_p }[/math], and the covariance matrix [math]\displaystyle{ \mathbf{\Sigma}_{p\times p} }[/math] with eigenvalues [math]\displaystyle{ \lambda_1\geq \lambda_2\geq \cdots \geq \lambda_p\geq0 }[/math] and corresponding orthonormal eigenvectors [math]\displaystyle{ \mathbf{e}_1, \mathbf{e}_2, \cdots,\mathbf{e}_p }[/math], by eigendecomposition of a real symmetric matrix, the covariance matrix [math]\displaystyle{ \mathbf{\Sigma} }[/math] can be decomposed as
- [math]\displaystyle{ \mathbf{\Sigma}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^T, }[/math]
where [math]\displaystyle{ \mathbf{Q} }[/math] is an orthogonal matrix whose columns are the eigenvectors of [math]\displaystyle{ \mathbf{\Sigma} }[/math], and [math]\displaystyle{ \mathbf{\Lambda} }[/math] is a diagonal matrix whose entries are the eigenvalues of [math]\displaystyle{ \mathbf{\Sigma} }[/math]. By the Karhunen–Loève expansion for random vectors, one can express the centered random vector in the eigenbasis
- [math]\displaystyle{ \mathbf{X}-\boldsymbol{\mu}=\sum_{k=1}^p\xi_k\mathbf{e}_k, }[/math]
where [math]\displaystyle{ \xi_k=\mathbf{e}_k^T(\mathbf{X}-\boldsymbol{\mu}) }[/math] is the principal component[3] associated with the [math]\displaystyle{ k }[/math]-th eigenvector [math]\displaystyle{ \mathbf{e}_k }[/math], with the properties
- [math]\displaystyle{ \operatorname{E}(\xi_k)=0, \operatorname{Var}(\xi_k)=\lambda_k, }[/math] and [math]\displaystyle{ \operatorname{E}(\xi_k\xi_l)=0\ \text{for}\ l\neq k. }[/math]
Then the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ \mathbf{X} }[/math] is the set of vectors, indexed by [math]\displaystyle{ \alpha }[/math],
- [math]\displaystyle{ \mathbf{m}_{k, \alpha}=\boldsymbol{\mu}\pm \alpha\sqrt{\lambda_k}\mathbf{e}_k, \alpha\in[-A, A], }[/math]
where [math]\displaystyle{ A }[/math] is typically selected as [math]\displaystyle{ 2\ \text{or}\ 3 }[/math].
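As an illustration (not part of the original formulation), the construction above can be sketched in NumPy for a hypothetical mean vector and covariance matrix; the names `mu`, `Sigma`, and `mode_of_variation` are assumptions introduced here for the example.

```python
import numpy as np

# Hypothetical population quantities: mean vector and covariance matrix
mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])

# Eigendecomposition of the symmetric covariance matrix.
# np.linalg.eigh returns eigenvalues in ascending order, so reverse
# to obtain lambda_1 >= lambda_2 >= ... >= lambda_p.
eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def mode_of_variation(k, alpha):
    """k-th mode of variation mu + alpha * sqrt(lambda_k) * e_k (k is 1-based)."""
    return mu + alpha * np.sqrt(eigvals[k - 1]) * eigvecs[:, k - 1]

# The k-th mode is viewed as a set of vectors over a range of alpha,
# here alpha in [-2, 2]:
first_mode = [mode_of_variation(1, a) for a in np.linspace(-2.0, 2.0, 5)]
```

At `alpha = 0` every mode reduces to the mean vector, which matches the defining formula.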
Modes of variation in FPCA
For a square-integrable random function [math]\displaystyle{ X(t), t \in \mathcal{T}\subset R^p }[/math], where typically [math]\displaystyle{ p=1 }[/math] and [math]\displaystyle{ \mathcal{T} }[/math] is an interval, denote the mean function by [math]\displaystyle{ \mu(t) = \operatorname{E}(X(t)) }[/math], and the covariance function by
- [math]\displaystyle{ G(s, t) = \operatorname{Cov}(X(s), X(t)) = \sum_{k=1}^\infty \lambda_k \varphi_k(s) \varphi_k(t), }[/math]
where [math]\displaystyle{ \lambda_1\geq \lambda_2\geq \cdots \geq 0 }[/math] are the eigenvalues and [math]\displaystyle{ \{\varphi_1, \varphi_2, \cdots\} }[/math] are the orthonormal eigenfunctions of the linear Hilbert–Schmidt operator
- [math]\displaystyle{ G: L^2(\mathcal{T}) \rightarrow L^2(\mathcal{T}),\, G(f) = \int_\mathcal{T} G(s, t) f(s) ds. }[/math]
By the Karhunen–Loève theorem, one can express the centered function in the eigenbasis,
- [math]\displaystyle{ X(t) - \mu(t) = \sum_{k=1}^\infty \xi_k \varphi_k(t), }[/math]
where
- [math]\displaystyle{ \xi_k = \int_\mathcal{T} (X(t) - \mu(t)) \varphi_k(t) dt }[/math]
is the [math]\displaystyle{ k }[/math]-th principal component with the properties
- [math]\displaystyle{ \operatorname{E}(\xi_k) = 0, \operatorname{Var}(\xi_k) = \lambda_k, }[/math] and [math]\displaystyle{ \operatorname{E}(\xi_k \xi_l) = 0 \text{ for } l \ne k. }[/math]
Then the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ X(t) }[/math] is the set of functions, indexed by [math]\displaystyle{ \alpha }[/math],
- [math]\displaystyle{ m_{k, \alpha}(t)=\mu(t)\pm \alpha\sqrt{\lambda_k}\varphi_k(t),\ t\in \mathcal{T},\ \alpha\in [-A, A] }[/math]
that are viewed simultaneously over the range of [math]\displaystyle{ \alpha }[/math], usually for [math]\displaystyle{ A=2\ \text{or}\ 3 }[/math].[2]
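The functional case can be sketched numerically by discretizing the covariance function on a grid, where the integral operator eigenproblem becomes a matrix eigenproblem with a quadrature weight. The example below is an assumption-laden illustration using the Brownian-motion covariance [math]\displaystyle{ G(s,t)=\min(s,t) }[/math] on [math]\displaystyle{ [0,1] }[/math], for which the eigenpairs are known in closed form.

```python
import numpy as np

# Hypothetical example: Brownian motion on T = [0, 1], so G(s, t) = min(s, t)
# and the mean function is zero.
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
G = np.minimum.outer(t, t)
mu_t = np.zeros_like(t)

# Discretized eigenproblem: matrix eigenvalues must be rescaled by the
# quadrature weight dt, and eigenvectors renormalized so that
# integral of phi_k(t)^2 dt = 1.
w, v = np.linalg.eigh(G)
order = np.argsort(w)[::-1]
lam = w[order] * dt
phi = v[:, order] / np.sqrt(dt)

def mode_of_variation(k, alpha):
    """k-th functional mode mu(t) + alpha * sqrt(lambda_k) * phi_k(t)."""
    return mu_t + alpha * np.sqrt(lam[k - 1]) * phi[:, k - 1]
```

For Brownian motion the leading eigenvalue is [math]\displaystyle{ 4/\pi^2 \approx 0.405 }[/math], so the discretized estimate should land close to that value.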
Estimation
The formulation above is derived from population properties. In real-world applications these quantities are unknown and must be estimated from data. The key step is to estimate the mean and covariance, from which the eigenvalues and eigenvectors (or eigenfunctions) follow.
Modes of variation in PCA
Suppose the data [math]\displaystyle{ \mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n }[/math] represent [math]\displaystyle{ n }[/math] independent drawings from some [math]\displaystyle{ p }[/math]-dimensional population [math]\displaystyle{ \mathbf{X} }[/math] with mean vector [math]\displaystyle{ \boldsymbol{\mu} }[/math] and covariance matrix [math]\displaystyle{ \mathbf{\Sigma} }[/math]. These data yield the sample mean vector [math]\displaystyle{ \overline{\mathbf{x}} }[/math], and the sample covariance matrix [math]\displaystyle{ \mathbf{S} }[/math] with eigenvalue-eigenvector pairs [math]\displaystyle{ (\hat{\lambda}_1, \hat{\mathbf{e}}_1), (\hat{\lambda}_2, \hat{\mathbf{e}}_2), \cdots, (\hat{\lambda}_p, \hat{\mathbf{e}}_p) }[/math]. Then the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ \mathbf{X} }[/math] can be estimated by
- [math]\displaystyle{ \hat{\mathbf{m}}_{k, \alpha}=\overline{\mathbf{x}}\pm \alpha\sqrt{\hat{\lambda}_k}\hat{\mathbf{e}}_k, \alpha\in [-A, A]. }[/math]
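A minimal sketch of this plug-in estimate, under the assumption of a synthetic sample drawn here purely for illustration (the distribution parameters and the name `estimated_mode` are not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sample: n = 500 independent draws from a 3-dimensional population
X = rng.multivariate_normal(mean=[0.0, 1.0, 2.0],
                            cov=[[2.0, 0.5, 0.0],
                                 [0.5, 1.0, 0.3],
                                 [0.0, 0.3, 0.5]],
                            size=500)

x_bar = X.mean(axis=0)        # sample mean vector
S = np.cov(X, rowvar=False)   # sample covariance matrix (rows are observations)

# Eigenpairs of S, sorted in descending order of eigenvalue
lam_hat, e_hat = np.linalg.eigh(S)
order = np.argsort(lam_hat)[::-1]
lam_hat, e_hat = lam_hat[order], e_hat[:, order]

def estimated_mode(k, alpha):
    """Estimated k-th mode: x_bar + alpha * sqrt(lambda_hat_k) * e_hat_k."""
    return x_bar + alpha * np.sqrt(lam_hat[k - 1]) * e_hat[:, k - 1]
```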
Modes of variation in FPCA
Consider [math]\displaystyle{ n }[/math] realizations [math]\displaystyle{ X_1(t), X_2(t), \cdots, X_n(t) }[/math] of a square-integrable random function [math]\displaystyle{ X(t), t \in \mathcal{T} }[/math] with the mean function [math]\displaystyle{ \mu(t) = \operatorname{E}(X(t)) }[/math] and the covariance function [math]\displaystyle{ G(s, t) = \operatorname{Cov}(X(s), X(t)) }[/math]. Functional principal component analysis provides methods for estimating [math]\displaystyle{ \mu(t) }[/math] and [math]\displaystyle{ G(s, t) }[/math], often based on pointwise estimation followed by smoothing or interpolation. Substituting estimates for the unknown quantities, the [math]\displaystyle{ k }[/math]-th mode of variation of [math]\displaystyle{ X(t) }[/math] can be estimated by
- [math]\displaystyle{ \hat{m}_{k, \alpha}(t)=\hat{\mu}(t)\pm \alpha\sqrt{\hat{\lambda}_k}\hat{\varphi}_k(t), t\in \mathcal{T}, \alpha\in [-A, A]. }[/math]
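For curves observed on a common dense grid, the pointwise estimation step can be sketched as follows; the two-component data-generating model, the grid, and all variable names are assumptions made for this example, and real FPCA implementations would additionally smooth the estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 51)
dt = t[1] - t[0]

# Hypothetical sample of n = 200 curves on a common grid:
# X_i(t) = sin(2*pi*t) + xi_1 * sqrt(2)sin(2*pi*t) + xi_2 * sqrt(2)cos(2*pi*t),
# with independent scores of standard deviations 1.0 and 0.5.
n = 200
scores = rng.normal(size=(n, 2)) * np.array([1.0, 0.5])
basis = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * t),
                   np.sqrt(2) * np.cos(2 * np.pi * t)])
X = np.sin(2 * np.pi * t) + scores @ basis

mu_hat = X.mean(axis=0)                # pointwise estimate of the mean function
Xc = X - mu_hat
G_hat = (Xc.T @ Xc) / (n - 1)          # pointwise estimate of the covariance function

# Discretized eigenproblem, as in the population case
w, v = np.linalg.eigh(G_hat)
order = np.argsort(w)[::-1]
lam_hat = w[order] * dt
phi_hat = v[:, order] / np.sqrt(dt)

def estimated_mode(k, alpha):
    """Estimated k-th functional mode mu_hat(t) + alpha * sqrt(lambda_hat_k) * phi_hat_k(t)."""
    return mu_hat + alpha * np.sqrt(lam_hat[k - 1]) * phi_hat[:, k - 1]
```

Because the simulated curves lie in a two-dimensional subspace around the mean, only the first two estimated eigenvalues are substantially nonzero.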
Applications
Modes of variation are useful to visualize and describe the variation patterns in the data sorted by the eigenvalues. In real-world applications, modes of variation associated with eigencomponents help interpret complex data, such as the evolution of functional traits[5] and other infinite-dimensional data.[6] To illustrate how modes of variation work in practice, two examples are shown in the graphs to the right, which display the first two modes of variation. The solid curve represents the sample mean function. The dashed, dot-dashed, and dotted curves correspond to modes of variation with [math]\displaystyle{ \alpha=\pm1, \pm2, }[/math] and [math]\displaystyle{ \pm3 }[/math], respectively.
The first graph displays the first two modes of variation of female mortality data from 41 countries in 2003.[4] The object of interest is the log hazard function between ages 0 and 100 years. The first mode of variation suggests that the variation of female mortality is smaller for ages around 0 or 100, and larger for ages around 25. An appropriate and intuitive interpretation is that mortality around 25 is driven by accidental death, while around 0 or 100, mortality is related to congenital disease or natural death.
Compared with the female mortality data, the modes of variation of the male mortality data show higher mortality after around age 20, possibly related to the fact that life expectancy is higher for women than for men.
References
- ↑ Castro, P. E.; Lawton, W. H.; Sylvestre, E. A. (November 1986). "Principal Modes of Variation for Processes with Continuous Sample Curves". Technometrics 28 (4): 329. doi:10.2307/1268982. ISSN 0040-1706.
- ↑ 2.0 2.1 Wang, Jane-Ling; Chiou, Jeng-Min; Müller, Hans-Georg (June 2016). "Functional Data Analysis". Annual Review of Statistics and Its Application 3 (1): 257–295. doi:10.1146/annurev-statistics-041715-033624. ISSN 2326-8298. https://zenodo.org/record/895750.
- ↑ Kleffe, Jürgen (January 1973). "Principal components of random variables with values in a seperable hilbert space". Mathematische Operationsforschung und Statistik 4 (5): 391–406. doi:10.1080/02331887308801137. ISSN 0047-6277.
- ↑ 4.0 4.1 4.2 "Human Mortality Database". https://www.mortality.org/.
- ↑ Kirkpatrick, Mark; Heckman, Nancy (August 1989). "A quantitative genetic model for growth, shape, reaction norms, and other infinite-dimensional characters". Journal of Mathematical Biology 27 (4): 429–450. doi:10.1007/bf00290638. ISSN 0303-6812. PMID 2769086.
- ↑ Jones, M. C.; Rice, John A. (May 1992). "Displaying the Important Features of Large Collections of Similar Curves". The American Statistician 46 (2): 140–145. doi:10.1080/00031305.1992.10475870. ISSN 0003-1305.
Original source: https://en.wikipedia.org/wiki/Modes of variation.