Pooled variance
In statistics, pooled variance (also known as combined variance, composite variance, or overall variance, and written [math]\displaystyle{ \sigma^2 }[/math]) is a method for estimating the variance of several different populations when the mean of each population may be different, but one may assume that the variance of each population is the same. The numerical estimate resulting from the use of this method is also called the pooled variance.
Under the assumption of equal population variances, the pooled sample variance provides a higher precision estimate of variance than the individual sample variances. This higher precision can lead to increased statistical power when used in statistical tests that compare the populations, such as the t-test.
The square root of a pooled variance estimator is known as a pooled standard deviation (also known as combined standard deviation, composite standard deviation, or overall standard deviation).
Motivation
In statistics, data are often collected for a dependent variable, y, over a range of values of an independent variable, x. For example, fuel consumption might be studied as a function of engine speed while the engine load is held constant. If achieving a small variance in y would require numerous repeated tests at each value of x, the expense of testing may become prohibitive. Reasonable estimates of variance can instead be determined by the principle of pooled variance after repeating each test at a particular x only a few times.
Definition and computation
The pooled variance is an estimate of the fixed common variance [math]\displaystyle{ \sigma ^2 }[/math] underlying various populations that have different means.
We are given a set of sample variances [math]\displaystyle{ s^2_i }[/math], where the populations are indexed [math]\displaystyle{ i = 1, \ldots, m }[/math],
- [math]\displaystyle{ s^2_i }[/math] = [math]\displaystyle{ \frac{1}{n_i-1} \sum_{j=1}^{n_i} \left(y_j - \overline{y}_i \right)^2. }[/math]
Assuming uniform sample sizes, [math]\displaystyle{ n_i=n }[/math], then the pooled variance [math]\displaystyle{ s^2_p }[/math] can be computed by the arithmetic mean:
- [math]\displaystyle{ s_p^2=\frac{\sum_{i=1}^m s_i^2}{m} = \frac{s_1^2+s_2^2+\cdots+s_m^2}{m}. }[/math]
If the sample sizes are non-uniform, then the pooled variance [math]\displaystyle{ s^2_p }[/math] can be computed by the weighted average, using as weights [math]\displaystyle{ w_i=n_i-1 }[/math] the respective degrees of freedom (see also: Bessel's correction):
- [math]\displaystyle{ s_p^2=\frac{\sum_{i=1}^m (n_i - 1)s_i^2}{\sum_{i=1}^m(n_i - 1)} = \frac{(n_1 - 1)s_1^2+(n_2 - 1)s_2^2+\cdots+(n_m - 1)s_m^2}{n_1+n_2+\cdots+n_m - m}. }[/math]
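The weighted-average formula can be sketched in a few lines of Python; the `pooled_variance` helper and the sample data are illustrative, not from any particular library:

```python
from statistics import variance

def pooled_variance(samples):
    """Average the sample variances, weighted by degrees of freedom (n_i - 1)."""
    num = sum((len(s) - 1) * variance(s) for s in samples)
    den = sum(len(s) - 1 for s in samples)
    return num / den

# Two small samples of unequal size
samples = [[31, 30, 29], [42, 41, 40, 39]]
print(pooled_variance(samples))  # (2*1 + 3*(5/3)) / 5 = 1.4
```

With uniform sample sizes the weights are all equal and the formula reduces to the arithmetic mean of the sample variances.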
Variants
The unbiased least squares estimate of [math]\displaystyle{ \sigma^2 }[/math] (as presented above), and the biased maximum likelihood estimate below:
- [math]\displaystyle{ s_p^2=\frac{\sum_{i=1}^m (n_i - 1)s_i^2}{\sum_{i=1}^m n_i }, }[/math]
are used in different contexts. The former can give an unbiased [math]\displaystyle{ s_p^2 }[/math] to estimate [math]\displaystyle{ \sigma^2 }[/math] when the two groups share an equal population variance. The latter one can give a more efficient [math]\displaystyle{ s_p^2 }[/math] to estimate [math]\displaystyle{ \sigma^2 }[/math], although subject to bias. Note that the quantities [math]\displaystyle{ s_i^2 }[/math] in the right hand sides of both equations are the unbiased estimates.
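The two variants differ only in the denominator, as a quick sketch with hypothetical samples shows:

```python
from statistics import variance

samples = [[31, 30, 29], [42, 41, 40, 39]]  # illustrative samples

num = sum((len(s) - 1) * variance(s) for s in samples)
unbiased = num / sum(len(s) - 1 for s in samples)  # divide by total degrees of freedom
biased_mle = num / sum(len(s) for s in samples)    # divide by total sample size

# The MLE is always the smaller of the two, shrunk by the factor (sum df)/(sum n)
```

Here the unbiased estimate is 7/5 = 1.4 while the maximum likelihood estimate is 7/7 = 1.0.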
Example
Consider the following set of data for y obtained at various levels of the independent variable x.
x | y |
---|---|
1 | 31, 30, 29 |
2 | 42, 41, 40, 39 |
3 | 31, 28 |
4 | 23, 22, 21, 19, 18 |
5 | 21, 20, 19, 18, 17 |
The number of trials, mean, variance and standard deviation are presented in the next table.
x | n | y_{mean} | s_{i}^{2} | s_{i} |
---|---|---|---|---|
1 | 3 | 30.0 | 1.0 | 1.0 |
2 | 4 | 40.5 | 1.67 | 1.29 |
3 | 2 | 29.5 | 4.5 | 2.12 |
4 | 5 | 20.6 | 4.3 | 2.07 |
5 | 5 | 19.0 | 2.5 | 1.58 |
These statistics represent the variance and standard deviation for each subset of data at the various levels of x. If we can assume that the same phenomena are generating random error at every level of x, the above data can be “pooled” to express a single estimate of variance and standard deviation. In a sense, this suggests finding a mean variance among the five results above. This mean variance is calculated by weighting each subset's variance by its degrees of freedom, one less than the size of the subset at each level of x. Thus, the pooled variance is defined by
- [math]\displaystyle{ s_p^2 = \frac{(n_1-1)s_1^2+(n_2-1)s_2^2 + \cdots + (n_k - 1)s_k^2}{(n_1 - 1) + (n_2 - 1) + \cdots +(n_k - 1)} }[/math]
where n_{1}, n_{2}, . . ., n_{k} are the sizes of the data subsets at each level of the variable x, and s_{1}^{2}, s_{2}^{2}, . . ., s_{k}^{2} are their respective variances.
The pooled variance of the data shown above is therefore:
- [math]\displaystyle{ s_p^2 = 2.764 \, }[/math]
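The worked example above can be reproduced with a short sketch using Python's standard `statistics` module:

```python
from statistics import variance

# The data table above, keyed by level of x
data = {
    1: [31, 30, 29],
    2: [42, 41, 40, 39],
    3: [31, 28],
    4: [23, 22, 21, 19, 18],
    5: [21, 20, 19, 18, 17],
}

num = sum((len(y) - 1) * variance(y) for y in data.values())
den = sum(len(y) - 1 for y in data.values())  # 14 degrees of freedom in total
s_p2 = num / den
print(round(s_p2, 3))  # 2.764
```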
Effect on precision
The pooled variance is only an approximation of the aggregate variance when the pooled data sets are correlated or their means differ. The larger the correlation between data sets, or the farther apart their means, the less precise the pooled estimate becomes.
The variance of the aggregate of non-overlapping data sets is:
- [math]\displaystyle{ \sigma_X^2 =\frac{ \sum_i \left[(N_{X_i} - 1) \sigma_{X_i}^2 + N_{X_i} \mu_{X_i}^2\right] - \left[\sum_i N_{X_i} \right] \mu_X^2 }{\sum_i N_{X_i} - 1} }[/math]
where the mean is defined as:
- [math]\displaystyle{ \mu_X = \frac{ \sum_i N_{X_i} \mu_{X_i} }{\sum_i N_{X_i} } }[/math]
Given the biased maximum likelihood estimate defined as:
- [math]\displaystyle{ s_p^2=\frac{\sum_{i=1}^k (n_i - 1)s_i^2}{\sum_{i=1}^k n_i }, }[/math]
Then the error in the biased maximum likelihood estimate is:
- [math]\displaystyle{ \begin{align} \text{Error} & = s_p^2 - \sigma_X^2 \\[6pt] & =\frac{\sum_i (N_{X_i} - 1)s_i^2}{\sum_i N_{X_i} } - \frac{1}{\sum_i N_{X_i} - 1} \left( \sum_i \left[(N_{X_i} - 1) \sigma_{X_i}^2 + N_{X_i} \mu_{X_i}^2\right] - \left[\sum_i N_{X_i} \right]\mu_X^2 \right) \end{align} }[/math]
Assuming the total sample size is large, such that:
- [math]\displaystyle{ \sum_i N_{X_i} \approx \sum_i N_{X_i} - 1 }[/math]
Then the error in the estimate reduces to:
- [math]\displaystyle{ \begin{align} E & =- \frac{\left( \sum_i \left[N_{X_i} \mu_{X_i}^2\right] - \left[\sum_i N_{X_i} \right]\mu_X^2 \right)}{\sum_i N_{X_i}}\\[3pt] & =\mu_X^2 - \frac{\sum_i \left[N_{X_i} \mu_{X_i}^2\right] }{\sum_i N_{X_i}} \end{align} }[/math]
Or alternatively:
- [math]\displaystyle{ \begin{align} E & =\left[ \frac{\sum_i N_{X_i} \mu_{X_i}}{\sum_i N_{X_i}} \right]^2 - \frac{\sum_i \left[N_{X_i} \mu_{X_i}^2\right] }{\sum_i N_{X_i}}\\[3pt] & =\frac{\left[\sum_i N_{X_i} \mu_{X_i} \right]^2 - \sum_i N_{X_i} \sum_i \left[N_{X_i} \mu_{X_i}^2\right] }{\left[\sum_i N_{X_i} \right]^2} \end{align} }[/math]
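Numerically, this large-N error is minus the population-weighted variance of the group means. A sketch with two hypothetical equal-sized groups:

```python
# Two hypothetical groups of equal size with different means
N1, N2 = 1000, 1000
mu1, mu2 = 0.0, 10.0

mu = (N1 * mu1 + N2 * mu2) / (N1 + N2)               # grand mean: 5.0
E = mu**2 - (N1 * mu1**2 + N2 * mu2**2) / (N1 + N2)  # error of the biased MLE

# E = 25 - 50 = -25: pooling discards the between-group spread of the means,
# so the biased MLE underestimates the aggregate variance by that amount
```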
Aggregation of standard deviation data
Rather than estimating a pooled standard deviation, the following shows how to aggregate standard deviations exactly when more statistical information is available.
Population-based statistics
The population of the union of two sets that may overlap can be calculated simply as follows:
- [math]\displaystyle{ \begin{align} &&N_{X \cup Y} &= N_X + N_Y - N_{X \cap Y}\\ \end{align} }[/math]
The population of the union of two sets that do not overlap can be calculated simply as follows:
- [math]\displaystyle{ \begin{align} X \cap Y = \varnothing &\Rightarrow &N_{X \cap Y} &= 0\\ &\Rightarrow &N_{X \cup Y} &= N_X + N_Y \end{align} }[/math]
Standard deviations of non-overlapping (X ∩ Y = ∅) sub-populations can be aggregated as follows if the size (actual or relative to one another) and means of each are known:
- [math]\displaystyle{ \begin{align} \mu_{X \cup Y} &= \frac{ N_X \mu_X + N_Y \mu_Y }{N_X + N_Y} \\[3pt] \sigma_{X\cup Y} &= \sqrt{ \frac{N_X \sigma_X^2 + N_Y \sigma_Y^2}{N_X + N_Y} + \frac{N_X N_Y}{(N_X+N_Y)^2}(\mu_X - \mu_Y)^2 } \end{align} }[/math]
For example, suppose it is known that American men have a mean height of 70 inches with a standard deviation of three inches, and that American women have a mean height of 65 inches with a standard deviation of two inches. Also assume that the number of men, N, is equal to the number of women. Then the mean and standard deviation of heights of American adults could be calculated as
- [math]\displaystyle{ \begin{align} \mu &= \frac{N\cdot70 + N\cdot65}{N + N} = \frac{70+65}{2} = 67.5 \\[3pt] \sigma &= \sqrt{ \frac{3^2 + 2^2}{2} + \frac{(70-65)^2}{2^2} } = \sqrt{12.75} \approx 3.57 \end{align} }[/math]
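This calculation can be sketched directly from the aggregation formula; the common group size N is an arbitrary assumed value, since it cancels out:

```python
import math

N = 1_000_000            # assumed equal number of men and women; cancels out
mu_m, sd_m = 70.0, 3.0   # men: mean height and standard deviation (inches)
mu_f, sd_f = 65.0, 2.0   # women

mu = (N * mu_m + N * mu_f) / (2 * N)
var = (N * sd_m**2 + N * sd_f**2) / (2 * N) \
      + (N * N) / (2 * N)**2 * (mu_m - mu_f)**2
print(mu, math.sqrt(var))  # 67.5 and sqrt(12.75), about 3.57
```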
For the more general case of M non-overlapping populations, X_{1} through X_{M}, and the aggregate population [math]\displaystyle{ X \,=\, \bigcup_i X_i }[/math],
- [math]\displaystyle{ \begin{align} \mu_X &= \frac{ \sum_i N_{X_i}\mu_{X_i} }{ \sum_i N_{X_i} } \\[3pt] \sigma_X &= \sqrt{ \frac{ \sum_i N_{X_i}\sigma_{X_i}^2 }{ \sum_i N_{X_i} } + \frac{ \sum_{i\lt j} N_{X_i}N_{X_j} (\mu_{X_i}-\mu_{X_j})^2 }{\big(\sum_i N_{X_i}\big)^2} } \end{align} }[/math],
where
- [math]\displaystyle{ X_i \cap X_j = \varnothing, \quad \forall\ i\lt j. }[/math]
If the size (actual or relative to one another), mean, and standard deviation of two overlapping populations are known for the populations as well as their intersection, then the standard deviation of the overall population can still be calculated as follows:
- [math]\displaystyle{ \begin{align} \mu_{X \cup Y} &= \frac{1}{N_{X \cup Y}}\left(N_X\mu_X + N_Y\mu_Y - N_{X \cap Y}\mu_{X \cap Y}\right)\\[3pt] \sigma_{X \cup Y} &= \sqrt{\frac{1}{N_{X \cup Y}}\left(N_X[\sigma_X^2 + \mu _X^2] + N_Y[\sigma_Y^2 + \mu _Y^2] - N_{X \cap Y}[\sigma_{X \cap Y}^2 + \mu _{X \cap Y}^2]\right) - \mu_{X\cup Y}^2} \end{align} }[/math]
If two or more sets of data are being added together datapoint by datapoint, the standard deviation of the result can be calculated if the standard deviation of each data set and the covariance between each pair of data sets is known:
- [math]\displaystyle{ \sigma_X = \sqrt{\sum_i{\sigma_{X_i}^2} + 2\sum_{i,j}\operatorname{cov}(X_i,X_j)} }[/math]
For the special case where no correlation exists between any pair of data sets, then the relation reduces to the root sum of squares:
- [math]\displaystyle{ \begin{align} &\operatorname{cov}(X_i, X_j) = 0,\quad \forall i\lt j\\ \Rightarrow &\;\sigma_X = \sqrt{\sum_i {\sigma_{X_i}^2}}. \end{align} }[/math]
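A minimal deterministic sketch of the root-sum-of-squares case, using two zero-mean series constructed so that their covariance is exactly zero:

```python
from statistics import pstdev

x = [3, -3, 3, -3]                 # population SD 3
y = [4, 4, -4, -4]                 # population SD 4; cov(x, y) = 0 by construction
z = [a + b for a, b in zip(x, y)]  # pointwise sum of the two series

print(pstdev(z))  # root sum of squares: sqrt(3**2 + 4**2) = 5.0
```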
Sample-based statistics
Standard deviations of non-overlapping (X ∩ Y = ∅) sub-samples can be aggregated as follows if the actual size and means of each are known:
- [math]\displaystyle{ \begin{align} \mu_{X \cup Y} &= \frac{1}{N_{X \cup Y}}\left(N_X\mu_X + N_Y\mu_Y\right)\\[3pt] \sigma_{X \cup Y} &= \sqrt{\frac{1}{N_{X \cup Y} - 1}\left([N_X - 1]\sigma_X^2 + N_X\mu_X^2 + [N_Y - 1]\sigma_Y^2 + N_Y\mu _Y^2 - [N_X + N_Y]\mu_{X \cup Y}^2\right) } \end{align} }[/math]
For the more general case of M non-overlapping data sets, X_{1} through X_{M}, and the aggregate data set [math]\displaystyle{ X \,=\, \bigcup_i X_i }[/math],
- [math]\displaystyle{ \begin{align} \mu_X &= \frac{1}{\sum_i { N_{X_i}}} \left(\sum_i { N_{X_i} \mu_{X_i}}\right)\\[3pt] \sigma_X &= \sqrt{\frac{1}{\sum_i {N_{X_i} - 1}} \left( \sum_i { \left[(N_{X_i} - 1) \sigma_{X_i}^2 + N_{X_i} \mu_{X_i}^2\right] } - \left[\sum_i {N_{X_i}}\right]\mu_X^2 \right) } \end{align} }[/math]
where
- [math]\displaystyle{ X_i \cap X_j = \varnothing,\quad \forall i\lt j. }[/math]
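Because the sample-based formula is exact for non-overlapping subsets, aggregating only the summary statistics reproduces the sample standard deviation of the concatenated raw data. A sketch with arbitrary illustrative subsets:

```python
import math
from statistics import mean, stdev

subsets = [[31, 30, 29], [42, 41, 40, 39], [21, 20, 19, 18, 17]]  # hypothetical

ns  = [len(s) for s in subsets]    # sizes N_i
mus = [mean(s) for s in subsets]   # means mu_i
sds = [stdev(s) for s in subsets]  # sample standard deviations sigma_i

n_total = sum(ns)
mu = sum(n * m for n, m in zip(ns, mus)) / n_total
var = (sum((n - 1) * sd**2 + n * m**2 for n, sd, m in zip(ns, sds, mus))
       - n_total * mu**2) / (n_total - 1)

flat = [v for s in subsets for v in s]  # concatenated raw data
print(math.sqrt(var), stdev(flat))     # the two agree
```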
If the size, mean, and standard deviation of two overlapping samples are known for the samples as well as their intersection, then the standard deviation of the aggregated sample can still be calculated. In general,
- [math]\displaystyle{ \begin{align} \mu_{X \cup Y} &= \frac{1}{N_{X \cup Y}}\left(N_X\mu_X + N_Y\mu_Y - N_{X\cap Y}\mu_{X\cap Y}\right)\\[3pt] \sigma_{X \cup Y} &= \sqrt{ \frac{[N_X - 1]\sigma_X^2 + N_X\mu_X^2 + [N_Y - 1]\sigma_Y^2 + N_Y\mu _Y^2 - [N_{X \cap Y}-1]\sigma_{X \cap Y}^2 - N_{X \cap Y}\mu_{X \cap Y}^2 - [N_X + N_Y - N_{X \cap Y}]\mu_{X \cup Y}^2}{N_{X \cup Y} - 1} } \end{align} }[/math]
See also
- Chi-squared distribution
- Cohen's d (effect size), calculated using the pooled standard deviation
- Distribution of the sample variance
- Pooled covariance matrix
- Pooled degree of freedom
- Pooled mean
External links
Original source: https://en.wikipedia.org/wiki/Pooled variance.