Cochran's theorem


In statistics, Cochran's theorem, devised by William G. Cochran,[1] is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.[2]

Statement

Let U1, ..., UN be i.i.d. standard normally distributed random variables, and [math]\displaystyle{ U = [U_1, ..., U_N]^T }[/math]. Let [math]\displaystyle{ B^{(1)},B^{(2)},\ldots, B^{(k)} }[/math] be symmetric matrices. Define ri to be the rank of [math]\displaystyle{ B^{(i)} }[/math]. Define [math]\displaystyle{ Q_i=U^T B^{(i)}U }[/math], so that the Qi are quadratic forms. Further assume [math]\displaystyle{ \sum_i Q_i = U^T U }[/math].

Cochran's theorem states that the following are equivalent:

  • [math]\displaystyle{ r_1+\cdots+r_k=N }[/math],
  • the [math]\displaystyle{ Q_i }[/math] are independent,
  • each [math]\displaystyle{ Q_i }[/math] is chi-squared distributed with [math]\displaystyle{ r_i }[/math] degrees of freedom.[1][3]

Often the theorem is stated with [math]\displaystyle{ \sum_i A_i = A }[/math], where [math]\displaystyle{ A }[/math] is idempotent, and with [math]\displaystyle{ \sum_i r_i = N }[/math] replaced by [math]\displaystyle{ \sum_i r_i = \operatorname{rank}(A) }[/math]. But after an orthogonal transform, [math]\displaystyle{ A = \operatorname{diag}(I_M, 0) }[/math] with [math]\displaystyle{ M = \operatorname{rank}(A) }[/math], and so the statement reduces to the theorem above.
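
As a quick numerical illustration of this reduction (a minimal sketch, assuming NumPy; the particular projection matrix below is a hypothetical example, not part of the theorem), a symmetric idempotent matrix is orthogonally similar to [math]\displaystyle{ \operatorname{diag}(I_M, 0) }[/math]:

```python
import numpy as np

# Hypothetical example: A is the orthogonal projection onto a random
# 3-dimensional subspace of R^5, so A is symmetric and idempotent.
rng = np.random.default_rng(0)
V, _ = np.linalg.qr(rng.standard_normal((5, 3)))  # orthonormal basis (5x3)
A = V @ V.T

assert np.allclose(A, A.T)       # symmetric
assert np.allclose(A @ A, A)     # idempotent

# An eigendecomposition gives an orthogonal transform diagonalizing A.
eigvals, P = np.linalg.eigh(A)
D = P.T @ A @ P
# Eigenvalues of an idempotent matrix are 0 or 1, so D is diag(I_M, 0)
# up to ordering, with M = rank(A) = 3.
assert np.allclose(np.sort(eigvals), [0, 0, 1, 1, 1])
assert np.allclose(D, np.diag(eigvals))
```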

Proof

Claim: Let [math]\displaystyle{ X }[/math] be a standard Gaussian in [math]\displaystyle{ \mathbb{R}^n }[/math]. Then for any symmetric matrices [math]\displaystyle{ Q, Q' }[/math], if [math]\displaystyle{ X^T Q X }[/math] and [math]\displaystyle{ X^T Q' X }[/math] have the same distribution, then [math]\displaystyle{ Q, Q' }[/math] have the same eigenvalues (counted with multiplicity).

Claim: [math]\displaystyle{ \sum_i B^{(i)} = I }[/math].

Lemma: If [math]\displaystyle{ \sum_i M_i = I }[/math], where each [math]\displaystyle{ M_i }[/math] is symmetric with eigenvalues in {0, 1}, then the [math]\displaystyle{ M_i }[/math] are simultaneously diagonalizable.
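
The lemma can be illustrated numerically (a sketch, assuming NumPy; the two projections used here are the [math]\displaystyle{ J_n/n }[/math] and [math]\displaystyle{ I_n - J_n/n }[/math] that appear in the examples below): an orthogonal eigenbasis of one summand diagonalizes the other as well.

```python
import numpy as np

n = 4
J = np.ones((n, n))
M1 = np.eye(n) - J / n   # symmetric, eigenvalues in {0, 1}, rank n-1
M2 = J / n               # symmetric, eigenvalues in {0, 1}, rank 1
assert np.allclose(M1 + M2, np.eye(n))

# Diagonalize M1; since M1 + M2 = I, the same orthogonal basis
# diagonalizes M2 (its eigenvalue is 1 exactly where M1's is 0).
_, P = np.linalg.eigh(M1)
for M in (M1, M2):
    D = P.T @ M @ P
    assert np.allclose(D, np.diag(np.diag(D)))  # off-diagonal entries vanish
```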

Now we prove the original theorem. The three cases are shown to be equivalent by proving that each implies the next in a cycle ([math]\displaystyle{ 1 \to 2 \to 3 \to 1 }[/math]).


Examples

Sample mean and sample variance

If X1, ..., Xn are independent normally distributed random variables with mean μ and standard deviation σ, then

[math]\displaystyle{ U_i = \frac{X_i-\mu}{\sigma} }[/math]

is standard normal for each i. Note that the total [math]\displaystyle{ Q = \sum_i Q_i }[/math] is equal to the sum of the squared [math]\displaystyle{ U_i }[/math], as shown here:

[math]\displaystyle{ \sum_i Q_i = \sum_i \sum_{jk} U_j B_{jk}^{(i)} U_k = \sum_{jk} U_j U_k \sum_i B_{jk}^{(i)} = \sum_{jk} U_j U_k\delta_{jk} = \sum_{j} U_j^2 }[/math]

which stems from the original assumption that [math]\displaystyle{ B^{(1)} + B^{(2)} + \cdots = I }[/math]. We therefore compute this total quantity first and later separate it into the individual [math]\displaystyle{ Q_i }[/math]. It is possible to write

[math]\displaystyle{ \sum_{i=1}^n U_i^2=\sum_{i=1}^n\left(\frac{X_i-\overline{X}}{\sigma}\right)^2 + n\left(\frac{\overline{X}-\mu}{\sigma}\right)^2 }[/math]

(here [math]\displaystyle{ \overline{X} }[/math] is the sample mean). To see this identity, multiply throughout by [math]\displaystyle{ \sigma^2 }[/math] and note that

[math]\displaystyle{ \sum(X_i-\mu)^2= \sum(X_i-\overline{X}+\overline{X}-\mu)^2 }[/math]

and expand to give

[math]\displaystyle{ \sum(X_i-\mu)^2= \sum(X_i-\overline{X})^2+\sum(\overline{X}-\mu)^2+ 2\sum(X_i-\overline{X})(\overline{X}-\mu). }[/math]

The third term is zero because it is equal to a constant times

[math]\displaystyle{ \sum(\overline{X}-X_i)=0, }[/math]

and the second term has just n identical terms added together. Thus

[math]\displaystyle{ \sum(X_i-\mu)^2 = \sum(X_i-\overline{X})^2+n(\overline{X}-\mu)^2 , }[/math]

and hence

[math]\displaystyle{ \sum\left(\frac{X_i-\mu}{\sigma}\right)^2= \sum\left(\frac{X_i-\overline{X}}{\sigma}\right)^2 +n\left(\frac{\overline{X}-\mu}{\sigma}\right)^2= \overbrace{\sum_i\left(U_i-\frac{1}{n}\sum_j{U_j}\right)^2}^{Q_1} +\overbrace{\frac{1}{n}\left(\sum_j{U_j}\right)^2}^{Q_2}= Q_1+Q_2. }[/math]

Now [math]\displaystyle{ B^{(2)}=\frac{J_n}{n} }[/math] with [math]\displaystyle{ J_n }[/math] the matrix of ones, which has rank 1. In turn [math]\displaystyle{ B^{(1)}= I_n-\frac{J_n}{n} }[/math], given that [math]\displaystyle{ I_n=B^{(1)}+B^{(2)} }[/math]. This expression can also be obtained by expanding [math]\displaystyle{ Q_1 }[/math] in matrix notation. The rank of [math]\displaystyle{ B^{(1)} }[/math] is [math]\displaystyle{ n-1 }[/math]: its rows sum to zero, so the rank is at most [math]\displaystyle{ n-1 }[/math], and since [math]\displaystyle{ B^{(1)} }[/math] is idempotent its rank equals its trace, which is [math]\displaystyle{ n-1 }[/math]. Thus the conditions for Cochran's theorem are met.
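
These properties can be checked numerically (a minimal sketch, assuming NumPy; the choice [math]\displaystyle{ n = 5 }[/math] and the seed are arbitrary):

```python
import numpy as np

n = 5
J = np.ones((n, n))
B1 = np.eye(n) - J / n   # residual projection, expected rank n-1
B2 = J / n               # projection onto the all-ones direction, rank 1

assert np.allclose(B1 + B2, np.eye(n))    # B1 + B2 = I
assert np.linalg.matrix_rank(B1) == n - 1
assert np.linalg.matrix_rank(B2) == 1

# Q1 + Q2 reproduces the full sum of squares for any U.
U = np.random.default_rng(1).standard_normal(n)
Q1, Q2 = U @ B1 @ U, U @ B2 @ U
assert np.isclose(Q1 + Q2, U @ U)
```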

Cochran's theorem then states that Q1 and Q2 are independent, with chi-squared distributions with n − 1 and 1 degree of freedom respectively. This shows that the sample mean and sample variance are independent. This can also be shown by Basu's theorem, and in fact this property characterizes the normal distribution – for no other distribution are the sample mean and sample variance independent.[4]
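
A Monte Carlo sanity check of this conclusion (a sketch, assuming NumPy and SciPy; the sample size, replication count and seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 5, 100_000
U = rng.standard_normal((reps, n))

Q1 = ((U - U.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
Q2 = U.sum(axis=1) ** 2 / n

# Q1 ~ chi2(n-1), Q2 ~ chi2(1): compare empirical means to theory.
print(Q1.mean(), "approx", n - 1)
print(Q2.mean(), "approx", 1)
# Independence: the correlation of Q1 and Q2 should be near zero.
print(np.corrcoef(Q1, Q2)[0, 1])
# Kolmogorov-Smirnov tests against the chi-squared CDFs.
print(stats.kstest(Q1, stats.chi2(n - 1).cdf).pvalue)
print(stats.kstest(Q2, stats.chi2(1).cdf).pvalue)
```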

Distributions

The result for the distributions is written symbolically as

[math]\displaystyle{ \sum\left(X_i-\overline{X}\right)^2 \sim \sigma^2 \chi^2_{n-1}, }[/math]
[math]\displaystyle{ n(\overline{X}-\mu)^2\sim \sigma^2 \chi^2_1. }[/math]

Both of these random variables are proportional to the true but unknown variance σ2, and they are statistically independent. Therefore their ratio does not depend on σ2, and its distribution is given by

[math]\displaystyle{ \frac{n\left(\overline{X}-\mu\right)^2} {\frac{1}{n-1}\sum\left(X_i-\overline{X}\right)^2}\sim \frac{\chi^2_1}{\frac{1}{n-1}\chi^2_{n-1}} \sim F_{1,n-1} }[/math]

where F1,n − 1 is the F-distribution with 1 and n − 1 degrees of freedom (see also Student's t-distribution). The final step here is effectively the definition of a random variable having the F-distribution.
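
To see this concretely, one can simulate the ratio and compare it with the F-distribution (a sketch, assuming NumPy and SciPy; all parameters are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, mu, sigma, reps = 8, 10.0, 2.0, 100_000
X = rng.normal(mu, sigma, size=(reps, n))
Xbar = X.mean(axis=1)

num = n * (Xbar - mu) ** 2                          # ~ sigma^2 chi2(1)
den = ((X - Xbar[:, None]) ** 2).sum(axis=1) / (n - 1)
ratio = num / den

# The ratio should follow F(1, n-1) regardless of sigma.
print(stats.kstest(ratio, stats.f(1, n - 1).cdf).pvalue)
```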

Estimation of variance

To estimate the variance σ2, one estimator that is sometimes used is the maximum likelihood estimator of the variance of a normal distribution

[math]\displaystyle{ \widehat{\sigma}^2= \frac{1}{n}\sum\left( X_i-\overline{X}\right)^2. }[/math]

Cochran's theorem shows that

[math]\displaystyle{ \frac{n\widehat{\sigma}^2}{\sigma^2}\sim\chi^2_{n-1} }[/math]

and the properties of the chi-squared distribution show that

[math]\displaystyle{ \begin{align} E \left(\frac{n \widehat{\sigma}^2}{\sigma^2}\right) &= E \left(\chi^2_{n-1}\right) \\ \frac{n}{\sigma^2}E \left(\widehat{\sigma}^2\right) &= (n-1) \\ E \left(\widehat{\sigma}^2\right) &= \frac{\sigma^2 (n-1)}{n} \end{align} }[/math]
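
In particular, [math]\displaystyle{ \widehat{\sigma}^2 }[/math] is a biased estimator of σ2, falling short of it by the factor [math]\displaystyle{ (n-1)/n }[/math]. A simulation sketch of the bias (assuming NumPy; the parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma2, reps = 5, 4.0, 200_000
X = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))

# Maximum likelihood estimator: divide the sum of squares by n.
sigma2_hat = ((X - X.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)
print(sigma2_hat.mean())       # approx sigma2 * (n-1)/n = 3.2
print(sigma2 * (n - 1) / n)    # theoretical expectation
```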

Alternative formulation

The following version is often seen when considering linear regression.[5] Suppose that [math]\displaystyle{ Y\sim N_n(0,\sigma^2I_n) }[/math] is a multivariate normal random vector (here [math]\displaystyle{ I_n }[/math] denotes the n-by-n identity matrix), and that [math]\displaystyle{ A_1,\ldots,A_k }[/math] are n-by-n symmetric matrices with [math]\displaystyle{ \sum_{i=1}^kA_i=I_n }[/math]. Then, on defining [math]\displaystyle{ r_i= \operatorname{Rank}(A_i) }[/math], any one of the following conditions implies the other two:

  • [math]\displaystyle{ \sum_{i=1}^kr_i=n , }[/math]
  • [math]\displaystyle{ Y^TA_iY\sim\sigma^2\chi^2_{r_i} }[/math] (thus the [math]\displaystyle{ A_i }[/math] are positive semidefinite)
  • [math]\displaystyle{ Y^TA_iY }[/math] is independent of [math]\displaystyle{ Y^TA_jY }[/math] for [math]\displaystyle{ i\neq j . }[/math]
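
For instance, in linear regression one may take [math]\displaystyle{ A_1 = H }[/math], the hat matrix of a design matrix [math]\displaystyle{ X }[/math], and [math]\displaystyle{ A_2 = I_n - H }[/math]. A numerical sketch (assuming NumPy; the design matrix below is a hypothetical example):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 10, 3
X = rng.standard_normal((n, p))            # hypothetical design matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T       # hat matrix: symmetric, idempotent
A1, A2 = H, np.eye(n) - H

assert np.allclose(A1 + A2, np.eye(n))
r1 = np.linalg.matrix_rank(A1)             # p
r2 = np.linalg.matrix_rank(A2)             # n - p
assert r1 + r2 == n                        # first condition of the theorem

# Hence Y^T A1 Y and Y^T A2 Y are independent, distributed as
# sigma^2 chi2(p) and sigma^2 chi2(n - p) respectively.
```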

References

  1. Cochran, W. G. (April 1934). "The distribution of quadratic forms in a normal system, with applications to the analysis of covariance". Mathematical Proceedings of the Cambridge Philosophical Society 30 (2): 178–191. doi:10.1017/S0305004100016595. 
  2. Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer. ISBN 978-0-387-98871-9. 
  3. "Cochran's theorem" (in en), A Dictionary of Statistics (Oxford University Press), 2008-01-01, doi:10.1093/acref/9780199541454.001.0001/acref-9780199541454-e-294, ISBN 978-0-19-954145-4, https://www.oxfordreference.com/view/10.1093/acref/9780199541454.001.0001/acref-9780199541454-e-294, retrieved 2022-05-18 
  4. "The Distribution of "Student's" Ratio for Non-Normal Samples". Supplement to the Journal of the Royal Statistical Society 3 (2): 178–184. 1936. doi:10.2307/2983669. 
  5. "Cochran's Theorem (A quick tutorial)". http://yangfeng.hosting.nyu.edu//slides/cochran's-theorem.pdf.