Sufficient dimension reduction

In statistics, sufficient dimension reduction (SDR) is a paradigm for analyzing data that combines the ideas of dimension reduction with the concept of sufficiency.

Dimension reduction has long been a primary goal of regression analysis. Given a response variable y and a p-dimensional predictor vector [math]\displaystyle{ \textbf{x} }[/math], regression analysis aims to study the distribution of [math]\displaystyle{ y\mid\textbf{x} }[/math], the conditional distribution of [math]\displaystyle{ y }[/math] given [math]\displaystyle{ \textbf{x} }[/math]. A dimension reduction is a function [math]\displaystyle{ R(\textbf{x}) }[/math] that maps [math]\displaystyle{ \textbf{x} }[/math] to a subset of [math]\displaystyle{ \mathbb{R}^k }[/math], k < p, thereby reducing the dimension of [math]\displaystyle{ \textbf{x} }[/math].[1] For example, [math]\displaystyle{ R(\textbf{x}) }[/math] may be one or more linear combinations of [math]\displaystyle{ \textbf{x} }[/math].

A dimension reduction [math]\displaystyle{ R(\textbf{x}) }[/math] is said to be sufficient if the distribution of [math]\displaystyle{ y\mid R(\textbf{x}) }[/math] is the same as that of [math]\displaystyle{ y\mid\textbf{x} }[/math]. In other words, no information about the regression is lost in reducing the dimension of [math]\displaystyle{ \textbf{x} }[/math] if the reduction is sufficient.[1]
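For intuition, the following minimal sketch simulates data in which the response depends on a ten-dimensional predictor only through a single linear combination; the sample size and coefficients are arbitrary choices made here for illustration, not part of any standard method.

```python
import numpy as np

# Simulated illustration: y depends on the 10-dimensional predictor x only
# through the single linear combination beta^T x, so R(x) = beta^T x is a
# sufficient 1-dimensional reduction (the conditional distribution of
# y given x is the same as that of y given R(x)).
rng = np.random.default_rng(0)
n, p = 500, 10
beta = np.zeros(p)
beta[:2] = [1.0, -0.5]          # hypothetical direction; only one linear combination matters

x = rng.normal(size=(n, p))
r = x @ beta                    # the reduction R(x) = beta^T x
y = np.exp(r) + rng.normal(scale=0.1, size=n)
```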

Graphical motivation

In a regression setting, it is often useful to summarize the distribution of [math]\displaystyle{ y\mid\textbf{x} }[/math] graphically. For instance, one may consider a scatterplot of [math]\displaystyle{ y }[/math] versus one or more of the predictors or a linear combination of the predictors. A scatterplot that contains all available regression information is called a sufficient summary plot.

When [math]\displaystyle{ \textbf{x} }[/math] is high-dimensional, particularly when [math]\displaystyle{ p\geq 3 }[/math], it becomes increasingly challenging to construct and visually interpret sufficient summary plots without reducing the data. Even three-dimensional scatter plots must be viewed via a computer program, and the third dimension can only be visualized by rotating the coordinate axes. However, if there exists a sufficient dimension reduction [math]\displaystyle{ R(\textbf{x}) }[/math] with small enough dimension, a sufficient summary plot of [math]\displaystyle{ y }[/math] versus [math]\displaystyle{ R(\textbf{x}) }[/math] may be constructed and visually interpreted with relative ease.

Hence sufficient dimension reduction allows for graphical intuition about the distribution of [math]\displaystyle{ y\mid\textbf{x} }[/math], which might not have otherwise been available for high-dimensional data.

Most graphical methodology focuses primarily on dimension reduction involving linear combinations of [math]\displaystyle{ \textbf{x} }[/math]. The rest of this article deals only with such reductions.

Dimension reduction subspace

Suppose [math]\displaystyle{ R(\textbf{x}) = A^T\textbf{x} }[/math] is a sufficient dimension reduction, where [math]\displaystyle{ A }[/math] is a [math]\displaystyle{ p\times k }[/math] matrix with rank [math]\displaystyle{ k\leq p }[/math]. Then the regression information for [math]\displaystyle{ y\mid\textbf{x} }[/math] can be inferred by studying the distribution of [math]\displaystyle{ y\mid A^T\textbf{x} }[/math], and the plot of [math]\displaystyle{ y }[/math] versus [math]\displaystyle{ A^T\textbf{x} }[/math] is a sufficient summary plot.
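As a sketch, once [math]\displaystyle{ A^T\textbf{x} }[/math] is computed the summary plot can be drawn directly; here the data set and the matrix [math]\displaystyle{ A }[/math] are simulated and chosen arbitrarily for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of a sufficient summary plot, assuming the reduction matrix A is known.
# Here A is a hypothetical p x 1 matrix, so the summary plot is 2-dimensional.
rng = np.random.default_rng(1)
n, p = 500, 5
A = np.array([[1.0], [0.5], [0.0], [0.0], [-1.0]])   # p x k with k = 1

x = rng.normal(size=(n, p))
reduced = x @ A                                       # the n values of A^T x
y = np.sin(reduced[:, 0]) + rng.normal(scale=0.1, size=n)

plt.scatter(reduced[:, 0], y, s=10)
plt.xlabel("$A^T x$")
plt.ylabel("$y$")
plt.title("Sufficient summary plot")
plt.show()
```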

Without loss of generality, only the space spanned by the columns of [math]\displaystyle{ A }[/math] need be considered. Let [math]\displaystyle{ \eta }[/math] be a basis for the column space of [math]\displaystyle{ A }[/math], and let the space spanned by [math]\displaystyle{ \eta }[/math] be denoted by [math]\displaystyle{ \mathcal{S}(\eta) }[/math]. It follows from the definition of a sufficient dimension reduction that

[math]\displaystyle{ F_{y\mid x} = F_{y\mid\eta^Tx}, }[/math]

where [math]\displaystyle{ F }[/math] denotes the appropriate distribution function. Another way to express this property is

[math]\displaystyle{ y\perp\!\!\!\perp\textbf{x}\mid\eta^T\textbf{x}, }[/math]

or [math]\displaystyle{ y }[/math] is conditionally independent of [math]\displaystyle{ \textbf{x} }[/math], given [math]\displaystyle{ \eta^T\textbf{x} }[/math]. Then the subspace [math]\displaystyle{ \mathcal{S}(\eta) }[/math] is defined to be a dimension reduction subspace (DRS).[2]

Structural dimensionality

For a regression [math]\displaystyle{ y\mid\textbf{x} }[/math], the structural dimension, [math]\displaystyle{ d }[/math], is the smallest number of distinct linear combinations of [math]\displaystyle{ \textbf{x} }[/math] necessary to preserve the conditional distribution of [math]\displaystyle{ y\mid\textbf{x} }[/math]. In other words, the smallest dimension reduction that is still sufficient maps [math]\displaystyle{ \textbf{x} }[/math] to a subset of [math]\displaystyle{ \mathbb{R}^d }[/math]. The corresponding DRS will be d-dimensional.[2]
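For instance, if [math]\displaystyle{ y = (\beta_1^T\textbf{x})^2 + \beta_2^T\textbf{x} + \varepsilon }[/math] with [math]\displaystyle{ \varepsilon\perp\!\!\!\perp\textbf{x} }[/math] and with [math]\displaystyle{ \beta_1 }[/math] and [math]\displaystyle{ \beta_2 }[/math] linearly independent, then [math]\displaystyle{ y\mid\textbf{x} }[/math] depends on [math]\displaystyle{ \textbf{x} }[/math] only through the two linear combinations [math]\displaystyle{ \beta_1^T\textbf{x} }[/math] and [math]\displaystyle{ \beta_2^T\textbf{x} }[/math]; for a sufficiently rich predictor distribution (for example multivariate normal [math]\displaystyle{ \textbf{x} }[/math]), neither combination can be dropped, so the structural dimension is [math]\displaystyle{ d = 2 }[/math].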

Minimum dimension reduction subspace

A subspace [math]\displaystyle{ \mathcal{S} }[/math] is said to be a minimum DRS for [math]\displaystyle{ y\mid\textbf{x} }[/math] if it is a DRS and its dimension is less than or equal to that of all other DRSs for [math]\displaystyle{ y\mid\textbf{x} }[/math]. A minimum DRS [math]\displaystyle{ \mathcal{S} }[/math] is not necessarily unique, but its dimension is equal to the structural dimension [math]\displaystyle{ d }[/math] of [math]\displaystyle{ y\mid\textbf{x} }[/math], by definition.[2]

If [math]\displaystyle{ \mathcal{S} }[/math] has basis [math]\displaystyle{ \eta }[/math] and is a minimum DRS, then a plot of y versus [math]\displaystyle{ \eta^T\textbf{x} }[/math] is a minimal sufficient summary plot, and it is (d + 1)-dimensional.

Central subspace

If a subspace [math]\displaystyle{ \mathcal{S} }[/math] is a DRS for [math]\displaystyle{ y\mid\textbf{x} }[/math], and if [math]\displaystyle{ \mathcal{S}\subset\mathcal{S}_\text{drs} }[/math] for all other DRSs [math]\displaystyle{ \mathcal{S}_\text{drs} }[/math], then it is a central dimension reduction subspace, or simply a central subspace, and it is denoted by [math]\displaystyle{ \mathcal{S}_{y\mid x} }[/math]. In other words, a central subspace for [math]\displaystyle{ y\mid\textbf{x} }[/math] exists if and only if the intersection [math]\displaystyle{ \bigcap\mathcal{S}_\text{drs} }[/math] of all dimension reduction subspaces is also a dimension reduction subspace, and that intersection is the central subspace [math]\displaystyle{ \mathcal{S}_{y\mid x} }[/math].[2]

The central subspace [math]\displaystyle{ \mathcal{S}_{y\mid x} }[/math] does not necessarily exist because the intersection [math]\displaystyle{ \bigcap\mathcal{S}_\text{drs} }[/math] is not necessarily a DRS. However, if [math]\displaystyle{ \mathcal{S}_{y\mid x} }[/math] does exist, then it is also the unique minimum dimension reduction subspace.[2]

Existence of the central subspace

While the existence of the central subspace [math]\displaystyle{ \mathcal{S}_{y\mid x} }[/math] is not guaranteed in every regression situation, there are some rather broad conditions under which its existence follows directly. For example, consider the following proposition from Cook (1998):

Let [math]\displaystyle{ \mathcal{S}_1 }[/math] and [math]\displaystyle{ \mathcal{S}_2 }[/math] be dimension reduction subspaces for [math]\displaystyle{ y\mid\textbf{x} }[/math]. If [math]\displaystyle{ \textbf{x} }[/math] has density [math]\displaystyle{ f(a) \gt 0 }[/math] for all [math]\displaystyle{ a\in\Omega_x }[/math] and [math]\displaystyle{ f(a) = 0 }[/math] everywhere else, where [math]\displaystyle{ \Omega_x }[/math] is convex, then the intersection [math]\displaystyle{ \mathcal{S}_1\cap\mathcal{S}_2 }[/math] is also a dimension reduction subspace.

It follows from this proposition that the central subspace [math]\displaystyle{ \mathcal{S}_{y\mid x} }[/math] exists for such [math]\displaystyle{ \textbf{x} }[/math].[2]

Methods for dimension reduction

There are many existing methods for dimension reduction, both graphical and numeric. For example, sliced inverse regression (SIR) and sliced average variance estimation (SAVE) were introduced in the 1990s and continue to be widely used.[3] Although SIR was originally designed to estimate an effective dimension reducing subspace, it is now understood that it estimates only the central subspace, which is generally different.
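As an illustration of how such a method operates, the following is a minimal NumPy sketch of the basic SIR estimator (the function name, slicing scheme, and default number of slices are expository choices made here, not a standard API); it assumes a continuous response and a full-rank sample covariance matrix.

```python
import numpy as np

def sir_directions(x, y, d, n_slices=10):
    """Minimal sketch of sliced inverse regression (SIR).

    Assumes a continuous response y and a full-rank sample covariance of x.
    Returns a p x d matrix whose columns span an estimated basis of
    (a subspace of) the central subspace.
    """
    n, p = x.shape
    x_centered = x - x.mean(axis=0)
    sigma = x_centered.T @ x_centered / n

    # Symmetric inverse square root of the sample covariance matrix.
    evals, evecs = np.linalg.eigh(sigma)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    z = x_centered @ inv_sqrt                 # standardized predictors

    # Slice the response into groups of (nearly) equal size.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)

    # Weighted covariance of the slice means of z.
    m = np.zeros((p, p))
    for idx in slices:
        mean_z = z[idx].mean(axis=0)
        m += (len(idx) / n) * np.outer(mean_z, mean_z)

    # Leading eigenvectors, back-transformed to the original x scale.
    _, vecs = np.linalg.eigh(m)
    directions = inv_sqrt @ vecs[:, -d:][:, ::-1]
    return directions
```

Under the linearity condition assumed by SIR, the span of the returned directions estimates a subspace of the central subspace; production use would rely on a vetted implementation rather than this sketch.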

More recent methods for dimension reduction include likelihood-based sufficient dimension reduction,[4] estimating the central subspace based on the inverse third moment (or kth moment),[5] estimating the central solution space,[6] graphical regression,[2] envelope model, and the principal support vector machine.[7] For more details on these and other methods, consult the statistical literature.

Principal components analysis (PCA) and similar methods for dimension reduction are not based on the sufficiency principle.

Example: linear regression

Consider the regression model

[math]\displaystyle{ y = \alpha + \beta^T\textbf{x} + \varepsilon,\text{ where }\varepsilon\perp\!\!\!\perp\textbf{x}. }[/math]

Note that the distribution of [math]\displaystyle{ y\mid\textbf{x} }[/math] is the same as the distribution of [math]\displaystyle{ y\mid\beta^T\textbf{x} }[/math]. Hence, the span of [math]\displaystyle{ \beta }[/math] is a dimension reduction subspace. Also, [math]\displaystyle{ \beta^T\textbf{x} }[/math] is 1-dimensional (unless [math]\displaystyle{ \beta=\textbf{0} }[/math]), so the structural dimension of this regression is [math]\displaystyle{ d=1 }[/math].

The OLS estimate [math]\displaystyle{ \hat{\beta} }[/math] of [math]\displaystyle{ \beta }[/math] is consistent, and so the span of [math]\displaystyle{ \hat{\beta} }[/math] is a consistent estimator of [math]\displaystyle{ \mathcal{S}_{y\mid x} }[/math]. The plot of [math]\displaystyle{ y }[/math] versus [math]\displaystyle{ \hat{\beta}^T\textbf{x} }[/math] is a sufficient summary plot for this regression.
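A minimal sketch of this example with simulated data (arbitrary coefficients chosen here for illustration) fits the model by ordinary least squares and draws the estimated summary plot.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the linear-regression example: estimate beta by OLS and
# draw the estimated sufficient summary plot of y versus beta_hat^T x.
# Data and coefficients are simulated for illustration only.
rng = np.random.default_rng(2)
n, p = 300, 6
beta = rng.normal(size=p)
x = rng.normal(size=(n, p))
y = 1.0 + x @ beta + rng.normal(scale=0.5, size=n)

# OLS via least squares on the design matrix [1, x].
design = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
beta_hat = coef[1:]                 # its span estimates the central subspace

plt.scatter(x @ beta_hat, y, s=10)
plt.xlabel(r"$\hat\beta^T x$")
plt.ylabel("$y$")
plt.show()
```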

Notes

  1. Cook, R.D. and Adragni, K.P. (2009) "Sufficient Dimension Reduction and Prediction in Regression", Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906): 4385–4405
  2. Cook, R.D. (1998) Regression Graphics: Ideas for Studying Regressions Through Graphics, Wiley. ISBN 0471193658
  3. Li, K-C. (1991) "Sliced Inverse Regression for Dimension Reduction", Journal of the American Statistical Association, 86(414): 316–327
  4. Cook, R.D. and Forzani, L. (2009) "Likelihood-Based Sufficient Dimension Reduction", Journal of the American Statistical Association, 104(485): 197–208
  5. Yin, X. and Cook, R.D. (2003) "Estimating Central Subspaces via Inverse Third Moments", Biometrika, 90(1): 113–125
  6. Li, B. and Dong, Y.D. (2009) "Dimension Reduction for Nonelliptically Distributed Predictors", Annals of Statistics, 37(3): 1272–1298
  7. Li, B., Artemiou, A. and Li, L. (2011) "Principal Support Vector Machines for Linear and Nonlinear Sufficient Dimension Reduction", The Annals of Statistics, 39(6): 3182–3210. doi:10.1214/11-AOS932

References

  • Cook, R.D. (1998) Regression Graphics: Ideas for Studying Regressions through Graphics, Wiley Series in Probability and Statistics, Wiley.
  • Cook, R.D. and Adragni, K.P. (2009) "Sufficient Dimension Reduction and Prediction in Regression", Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906), 4385–4405.
  • Cook, R.D. and Weisberg, S. (1991) "Sliced Inverse Regression for Dimension Reduction: Comment", Journal of the American Statistical Association, 86(414), 328–332.
  • Li, K-C. (1991) "Sliced Inverse Regression for Dimension Reduction", Journal of the American Statistical Association, 86(414), 316–327.
