Sliced inverse regression

Sliced inverse regression (or SIR) is a tool for dimensionality reduction in the field of multivariate statistics.[1]

In statistics, regression analysis is a method of studying the relationship between a response variable y and its input variable [math]\displaystyle{ \underline{x} }[/math], a p-dimensional vector. Several approaches to regression exist; for example, parametric methods include multiple linear regression, while non-parametric methods include local smoothing.

Because the number of observations needed by local smoothing methods grows exponentially with the dimension p, reducing the number of dimensions can make the problem computationally tractable. Dimensionality reduction aims to achieve this by retaining only the most important directions of the data. SIR uses the inverse regression curve, [math]\displaystyle{ E(\underline{x}\,|\,y) }[/math], to perform a weighted principal component analysis with which the effective dimension reducing directions are identified.

Model

Given a response variable [math]\displaystyle{ \,Y }[/math] and a (random) vector [math]\displaystyle{ X \in \R^p }[/math] of explanatory variables, SIR is based on the model

[math]\displaystyle{ Y=f(\beta_1^\top X,\ldots,\beta_k^\top X,\varepsilon)\quad\quad\quad\quad\quad(1) }[/math]

where [math]\displaystyle{ \beta_1,\ldots,\beta_k }[/math] are unknown projection vectors, [math]\displaystyle{ \,k }[/math] is an unknown number smaller than [math]\displaystyle{ \,p }[/math], [math]\displaystyle{ \;f }[/math] is an unknown function on [math]\displaystyle{ \R^{k+1} }[/math] (its arguments are the [math]\displaystyle{ \,k }[/math] projections and the error), and [math]\displaystyle{ \varepsilon }[/math] is a random error with [math]\displaystyle{ E[\varepsilon|X]=0 }[/math] and finite variance [math]\displaystyle{ \sigma^2 }[/math]. The model describes an ideal situation in which [math]\displaystyle{ \,Y }[/math] depends on [math]\displaystyle{ X \in \R^p }[/math] only through a [math]\displaystyle{ \,k }[/math]-dimensional subspace; that is, one can reduce the dimension of the explanatory variables from [math]\displaystyle{ \,p }[/math] to a smaller number [math]\displaystyle{ \,k }[/math] without losing any information about [math]\displaystyle{ \,Y }[/math].

An equivalent version of [math]\displaystyle{ \,(1) }[/math] is: the conditional distribution of [math]\displaystyle{ \,Y }[/math] given [math]\displaystyle{ \, X }[/math] depends on [math]\displaystyle{ \, X }[/math] only through the [math]\displaystyle{ \,k }[/math] dimensional random vector [math]\displaystyle{ (\beta_1^\top X,\ldots,\beta_k^\top X) }[/math]. It is assumed that this reduced vector is as informative as the original [math]\displaystyle{ \,X }[/math] in explaining [math]\displaystyle{ \, Y }[/math].

The unknown [math]\displaystyle{ \,\beta_i }[/math] are called the effective dimension reducing directions (EDR-directions). The space spanned by these vectors is called the effective dimension reducing space (EDR-space).
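
To make the model concrete, the following minimal Python/NumPy sketch generates data according to [math]\displaystyle{ \,(1) }[/math]. The choices p = 10, k = 2, the particular link function f, the normal design and the noise level are all hypothetical and serve only as an illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 1000, 10, 2                     # sample size, dimension p, number of directions k

# Hypothetical projection vectors beta_1 = e_1 and beta_2 = e_2 (unknown in practice)
beta = np.zeros((p, k))
beta[0, 0] = 1.0
beta[1, 1] = 1.0

X = rng.standard_normal((n, p))           # explanatory variables
eps = 0.1 * rng.standard_normal(n)        # error with E[eps | X] = 0 and finite variance

# Hypothetical link function f: Y depends on X only through beta_1'X, beta_2'X and eps
proj = X @ beta
Y = proj[:, 0] / (0.5 + (proj[:, 1] + 1.5) ** 2) + eps
```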

Relevant linear algebra background

Given vectors [math]\displaystyle{ \underline{a}_1,\ldots,\underline{a}_r \in \R^n }[/math], the set [math]\displaystyle{ V:=L(\underline{a}_1,\ldots,\underline{a}_r) }[/math] of all linear combinations of these vectors is called a linear subspace and is therefore itself a vector space. One says that the vectors [math]\displaystyle{ \underline{a}_1,\ldots,\underline{a}_r }[/math] span [math]\displaystyle{ \,V }[/math]; however, the set of vectors spanning [math]\displaystyle{ \,V }[/math] is not unique.

The dimension of [math]\displaystyle{ \,V (\subseteq \R^n) }[/math] is equal to the maximum number of linearly independent vectors in [math]\displaystyle{ \,V }[/math]. A set of [math]\displaystyle{ \,n }[/math] linearly independent vectors of [math]\displaystyle{ \R^n }[/math] makes up a basis of [math]\displaystyle{ \R^n }[/math]. The dimension of a vector space is unique, but the basis itself is not: several bases can span the same space. Linearly dependent vectors can also span a space, but then the spanning set contains more vectors than the dimension of the space, and some of them are redundant.

Inverse regression

Computing the inverse regression (IR) curve means that instead of looking for

  • [math]\displaystyle{ \,E[Y|X=x] }[/math], the forward regression of [math]\displaystyle{ \,Y }[/math] on the [math]\displaystyle{ \,p }[/math]-dimensional [math]\displaystyle{ \,X }[/math],

one considers

  • [math]\displaystyle{ \,E[X|Y=y] }[/math], which is a curve in [math]\displaystyle{ \R^p }[/math] consisting of [math]\displaystyle{ \,p }[/math] one-dimensional regressions (each coordinate of [math]\displaystyle{ \,X }[/math] regressed on [math]\displaystyle{ \,Y }[/math]).

The center of the inverse regression curve is located at [math]\displaystyle{ \,E[E[X|Y]]=E[X] }[/math]. Therefore, the centered inverse regression curve is

  • [math]\displaystyle{ \,E[X|Y=y]-E[X] }[/math]

which is a curve in the [math]\displaystyle{ \,p }[/math]-dimensional space [math]\displaystyle{ \R^p }[/math].
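
As an illustration, the centered inverse regression curve can be estimated coordinate-wise by averaging X within slices of y, i.e. by p one-dimensional regressions of the coordinates of X on Y. The following minimal sketch uses quantile-based slice boundaries and assumes each slice is non-empty; both are choices made only for this illustration.

```python
import numpy as np

def centered_inverse_regression(X, y, n_slices=10):
    """Estimate the centered IR curve E[X | Y = y] - E[X] on y-slices.

    Each column of the result is one of the p one-dimensional (inverse)
    regressions of a coordinate of X on Y, evaluated per slice."""
    Xc = X - X.mean(axis=0)                                   # center X
    # slice boundaries at empirical quantiles of y (an illustrative choice)
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    labels = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_slices - 1)
    # slice means: crude pointwise estimates of E[X | Y = y] - E[X]
    return np.vstack([Xc[labels == s].mean(axis=0) for s in range(n_slices)])
```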

Inverse regression versus dimension reduction

The centered inverse regression curve lies in the [math]\displaystyle{ \,k }[/math]-dimensional subspace spanned by the [math]\displaystyle{ \,\Sigma_{xx}\beta_i }[/math]; this is the connection between the model and inverse regression. It relies on a linearity condition on the design: for any [math]\displaystyle{ \,b \in \R^p }[/math], the conditional expectation [math]\displaystyle{ E[b^\top X|\beta_1^\top X,\ldots,\beta_k^\top X] }[/math] is linear in [math]\displaystyle{ \beta_1^\top X,\ldots,\beta_k^\top X }[/math]. This condition holds, for example, when [math]\displaystyle{ \,X }[/math] has an elliptically symmetric distribution such as the multivariate normal.

Given this condition and [math]\displaystyle{ \,(1) }[/math], the centered inverse regression curve [math]\displaystyle{ \,E[X|Y=y]-E[X] }[/math] is contained in the linear subspace spanned by [math]\displaystyle{ \,\Sigma_{xx}\beta_k\;(k=1,\ldots,K) }[/math], where [math]\displaystyle{ \,\Sigma_{xx}=Cov(X) }[/math].
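
A brief sketch of why this holds (essentially the argument in Li (1991); the matrix [math]\displaystyle{ \,B=(\beta_1,\ldots,\beta_k) }[/math] is a notation introduced only for this sketch): under the linearity condition, the conditional expectation of [math]\displaystyle{ \,X }[/math] given the projections is the linear least-squares predictor,

[math]\displaystyle{ E[X\,|\,B^\top X]-E[X]=\Sigma_{xx}B(B^\top\Sigma_{xx}B)^{-1}B^\top\{X-E[X]\}, }[/math]

and since, by the equivalent form of [math]\displaystyle{ \,(1) }[/math], [math]\displaystyle{ \,Y }[/math] and [math]\displaystyle{ \,X }[/math] are conditionally independent given [math]\displaystyle{ \,B^\top X }[/math],

[math]\displaystyle{ E[X\,|\,Y]-E[X]=E\big[\,E[X\,|\,B^\top X]-E[X]\;\big|\;Y\big]=\Sigma_{xx}B(B^\top\Sigma_{xx}B)^{-1}E\big[B^\top\{X-E[X]\}\;\big|\;Y\big], }[/math]

which lies in the column space of [math]\displaystyle{ \,\Sigma_{xx}B }[/math], i.e. in the span of the [math]\displaystyle{ \,\Sigma_{xx}\beta_i }[/math].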

Estimation of the EDR-directions

After having had a look at the theoretical properties, the aim is now to estimate the EDR-directions. For that purpose, a weighted principal component analysis of the sliced sample means [math]\displaystyle{ \,\hat{m}_h }[/math] is used. Suppose [math]\displaystyle{ \,X }[/math] has been standardized to [math]\displaystyle{ \,Z=\Sigma_{xx}^{-1/2}\{X-E(X)\} }[/math]. Corresponding to the theorem above, the IR-curve [math]\displaystyle{ \,m_1(y)=E[Z|Y=y] }[/math] lies in the space spanned by [math]\displaystyle{ \,(\eta_1,\ldots,\eta_k) }[/math], where [math]\displaystyle{ \,\eta_i=\Sigma^{1/2}_{xx} \beta_i }[/math]. As a consequence, the covariance matrix [math]\displaystyle{ \,Cov[E[Z|Y]] }[/math] is degenerate in any direction orthogonal to the [math]\displaystyle{ \,\eta_i }[/math]. Therefore, the eigenvectors [math]\displaystyle{ \,\eta_k\;(k=1,\ldots,K) }[/math] associated with the [math]\displaystyle{ \,K }[/math] largest eigenvalues are the standardized EDR-directions.
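
Equivalently, the standardized EDR-directions can be read off from the eigenvalue problem

[math]\displaystyle{ Cov\big(E[Z\,|\,Y]\big)\,\eta=\lambda\,\eta, }[/math]

keeping the eigenvectors that belong to the [math]\displaystyle{ \,K }[/math] largest eigenvalues; the remaining eigenvalues vanish because the covariance matrix is degenerate in directions orthogonal to the [math]\displaystyle{ \,\eta_i }[/math].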

Algorithm

The algorithm to estimate the EDR-directions via SIR is as follows.

1. Let [math]\displaystyle{ \,\Sigma_{xx} }[/math] be the covariance matrix of [math]\displaystyle{ \,X }[/math]. Standardize [math]\displaystyle{ \,X }[/math] to

[math]\displaystyle{ \,Z=\Sigma_{xx}^{-1/2}\{X-E(X)\} }[/math]

([math]\displaystyle{ \,(1) }[/math] can also be rewritten as

[math]\displaystyle{ Y=f(\eta_1^\top Z,\ldots,\eta_k^\top Z,\varepsilon) }[/math]

where [math]\displaystyle{ \,\eta_k=\Sigma_{xx}^{1/2}\beta_k\quad\forall\; k }[/math].)

2. Divide the range of [math]\displaystyle{ \,y_i }[/math] into [math]\displaystyle{ \,S }[/math] non-overlapping slices [math]\displaystyle{ \,H_s\;(s=1,\ldots,S) }[/math]. Let [math]\displaystyle{ \,n_s }[/math] be the number of observations within slice [math]\displaystyle{ \,H_s }[/math] and [math]\displaystyle{ \,I_{H_s} }[/math] the indicator function of that slice:

[math]\displaystyle{ n_s=\sum_{i=1}^n I_{H_s}(y_i) }[/math]

3. Compute the mean of the [math]\displaystyle{ \,z_i }[/math] within each slice; these slice means give a crude estimate [math]\displaystyle{ \,\hat{m}_1 }[/math] of the inverse regression curve [math]\displaystyle{ \,m_1 }[/math]:

[math]\displaystyle{ \,\bar{z}_s=n_s^{-1}\sum_{i=1}^n z_i I_{H_s}(y_i) }[/math]

4. Calculate the estimate for [math]\displaystyle{ \,Cov\{m_1(y)\} }[/math]:

[math]\displaystyle{ \,\hat{V}=n^{-1}\sum_{s=1}^S n_s \bar{z}_s \bar{z}_s^\top }[/math]

5. Identify the eigenvalues [math]\displaystyle{ \,\hat{\lambda}_i }[/math] and the eigenvectors [math]\displaystyle{ \,\hat{\eta}_i }[/math] of [math]\displaystyle{ \,\hat{V} }[/math]; the eigenvectors associated with the [math]\displaystyle{ \,K }[/math] largest eigenvalues are the standardized EDR-directions.

6. Transform the standardized EDR-directions back to the original scale. The estimates for the EDR-directions are given by:

[math]\displaystyle{ \,\hat{\beta}_i=\hat{\Sigma}_{xx}^{-1/2}\hat{\eta}_i }[/math]

(which are not necessarily orthogonal)
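
The following is a minimal NumPy sketch of steps 1–6. The function name sir, the quantile-based slicing, and the eigendecomposition used to form [math]\displaystyle{ \,\Sigma_{xx}^{-1/2} }[/math] are choices made for this illustration (the sample covariance is assumed nonsingular and each slice non-empty); it is a sketch of the algorithm above, not a reference implementation.

```python
import numpy as np

def sir(X, y, n_slices=10, K=2):
    """Sliced inverse regression: estimate K EDR-directions via steps 1-6."""
    n, p = X.shape

    # Step 1: standardize X to Z = Sigma_xx^{-1/2} (X - mean(X));
    # assumes the sample covariance is nonsingular
    Xc = X - X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ Sigma_inv_sqrt

    # Step 2: divide the range of y into S non-overlapping slices
    # (quantile-based boundaries; each slice assumed non-empty)
    edges = np.quantile(y, np.linspace(0, 1, n_slices + 1))
    labels = np.clip(np.searchsorted(edges, y, side="right") - 1, 0, n_slices - 1)

    # Steps 3-4: slice means of the z_i and the weighted covariance estimate V-hat
    V = np.zeros((p, p))
    for s in range(n_slices):
        in_s = labels == s
        z_bar = Z[in_s].mean(axis=0)          # mean of z_i within slice s
        V += in_s.sum() * np.outer(z_bar, z_bar)
    V /= n

    # Step 5: eigenvectors of V-hat for the K largest eigenvalues
    # (standardized EDR-directions)
    lam, eta = np.linalg.eigh(V)              # eigenvalues in ascending order
    eta_K = eta[:, ::-1][:, :K]

    # Step 6: transform back to the original scale; columns are the beta_hat_i
    # (not necessarily orthogonal)
    beta_hat = Sigma_inv_sqrt @ eta_K
    return beta_hat, lam[::-1]
```

Applied to data generated as in the sketch under the Model section, the columns of the returned estimate should, up to sampling error, sign and scale, span approximately the same space as [math]\displaystyle{ \,\beta_1 }[/math] and [math]\displaystyle{ \,\beta_2 }[/math].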

References

  1. Li, Ker-Chau (1991). "Sliced Inverse Regression for Dimension Reduction". Journal of the American Statistical Association 86 (414): 316–327. doi:10.2307/2290563. ISSN 0162-1459. https://www.jstor.org/stable/2290563.