F-test of equality of variances


In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance. Notionally, any F-test can be regarded as a comparison of two variances, but the specific case discussed in this article is that of two populations, where the test statistic used is the ratio of two sample variances.[1] This particular situation is of importance in mathematical statistics since it provides a basic exemplar case in which the F-distribution can be derived.[2] In applied statistics, there is concern[3] that the test is so sensitive to the assumption of normality that it would be inadvisable to use it as a routine test for the equality of variances. In other words, this is a case where "approximate normality" (which in similar contexts would often be justified using the central limit theorem) is not good enough to make the test procedure approximately valid to an acceptable degree.

The test

Let X1, ..., Xn and Y1, ..., Ym be independent and identically distributed samples from two populations, each of which has a normal distribution. The expected values of the two populations can differ, and the hypothesis to be tested is that the variances are equal. Let

[math]\displaystyle{ \overline{X} = \frac{1}{n}\sum_{i=1}^n X_i\text{ and }\overline{Y} = \frac{1}{m}\sum_{i=1}^m Y_i }[/math]

be the sample means. Let

[math]\displaystyle{ S_X^2 = \frac{1}{n-1}\sum_{i=1}^n \left(X_i - \overline{X}\right)^2\text{ and }S_Y^2 = \frac{1}{m-1}\sum_{i=1}^m \left(Y_i - \overline{Y}\right)^2 }[/math]

be the sample variances. Then the test statistic

[math]\displaystyle{ F = \frac{S_X^2}{S_Y^2} }[/math]

has an F-distribution with n − 1 and m − 1 degrees of freedom if the null hypothesis of equality of variances is true. Otherwise it follows an F-distribution scaled by the ratio of the true variances. The null hypothesis is rejected if F is either too large or too small at the chosen significance level (alpha).
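
As a concrete illustration, the calculation can be written out in a few lines of Python. The sketch below (assuming NumPy and SciPy are available; the function name and the simulated sample data are purely illustrative) computes the variance ratio F, its degrees of freedom, and a two-sided p-value from the F-distribution.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def f_test_equal_variances(x, y):
    """Two-sided F-test of H0: Var(X) = Var(Y) for two independent normal samples."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # Unbiased sample variances S_X^2 and S_Y^2 (ddof=1 gives the n-1 and m-1 divisors)
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    dfn, dfd = len(x) - 1, len(y) - 1
    # Reject H0 when F is either too large or too small: double the smaller tail probability
    p = 2.0 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))
    return f, min(p, 1.0)

# Example with simulated data (both samples drawn with equal true variance)
rng = np.random.default_rng(42)
x = rng.normal(loc=10.0, scale=3.0, size=25)   # n = 25
y = rng.normal(loc=12.0, scale=3.0, size=30)   # m = 30
f_stat, p_value = f_test_equal_variances(x, y)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
</syntaxhighlight>

Doubling the smaller tail probability gives the usual equal-tailed two-sided p-value; other two-sided conventions are possible.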

Properties

This F-test is known to be extremely sensitive to non-normality,[4][5] so Levene's test, Bartlett's test, or the Brown–Forsythe test are better choices for testing the equality of two variances. (However, all of these tests create experiment-wise type I error inflations when conducted as a test of the assumption of homoscedasticity prior to a test of effects.[6]) F-tests for the equality of variances can be used in practice, with care, particularly where a quick check is required, and subject to associated diagnostic checking: practical textbooks[7] suggest both graphical and formal checks of the assumption.
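
For comparison, SciPy provides ready-made implementations of the alternatives mentioned above. The brief sketch below (with simulated data chosen only for illustration) runs Bartlett's test, Levene's test, and the Brown–Forsythe variant, the last being Levene's test with the group medians as centers.

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=40)
y = rng.normal(loc=0.0, scale=1.5, size=40)

# Bartlett's test: close to the F-test, also sensitive to non-normality
print(stats.bartlett(x, y))
# Levene's test (original form, centered at the group means)
print(stats.levene(x, y, center='mean'))
# Brown–Forsythe test: Levene's test centered at the group medians, more robust
print(stats.levene(x, y, center='median'))
</syntaxhighlight>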

F-tests are used for other statistical tests of hypotheses, such as testing for differences in means in three or more groups, or in factorial layouts. These F-tests are generally not robust to violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts.[8] However, for large alpha levels (e.g., at least 0.05) and balanced layouts, the F-test is relatively robust, although (if the normality assumption does not hold) it suffers from a loss of statistical power compared with non-parametric counterparts.

Generalization

The immediate generalization of the problem outlined above is to situations where there are more than two groups or populations, and the hypothesis is that all of the variances are equal. This is the problem treated by Hartley's test and Bartlett's test.
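
Bartlett's test extends directly to more than two samples. A minimal sketch follows (Hartley's test is not included in SciPy, so only Bartlett's test is shown; the group sizes and scales are arbitrary).

<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three normal samples; the third has a larger true standard deviation
groups = [rng.normal(loc=0.0, scale=s, size=20) for s in (1.0, 1.0, 2.0)]

# Bartlett's test of H0: all k population variances are equal
stat, p = stats.bartlett(*groups)
print(f"Bartlett statistic = {stat:.3f}, p = {p:.3f}")
</syntaxhighlight>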

References

  1. Snedecor, George W. and Cochran, William G. (1989), Statistical Methods, Eighth Edition, Iowa State University Press.
  2. Johnson, N.L., Kotz, S., Balakrishnan, N. (1995) Continuous Univariate Distributions, Volume 2, Wiley. ISBN:0-471-58494-0 (Section 27.1)
  3. Agresti, A. and Kateri, M. (2021), Foundations of Statistics for Data Scientists: With R and Python, CRC Press. ISBN:978-0-367-74845-6 (Section 5.3.2)
  4. Box, G.E.P. (1953). "Non-Normality and Tests on Variances". Biometrika 40 (3/4): 318–335. doi:10.1093/biomet/40.3-4.318. 
  5. Markowski, Carol A.; Markowski, Edward P. (1990). "Conditions for the Effectiveness of a Preliminary Test of Variance". The American Statistician 44 (4): 322–326. doi:10.2307/2684360. 
  6. Sawilowsky, S. (2002). "Fermat, Schubert, Einstein, and Behrens–Fisher: The Probable Difference Between Two Means When σ₁² ≠ σ₂²". Journal of Modern Applied Statistical Methods 1 (2): 461–472.
  7. Rees, D.G. (2001) Essential Statistics (4th Edition), Chapman & Hall/CRC, ISBN:1-58488-007-4. Section 10.15
  8. Blair, R. C. (1981). "A reaction to 'Consequences of failure to meet assumptions underlying the fixed effects analysis of variance and covariance'". Review of Educational Research 51 (4): 499–507. doi:10.3102/00346543051004499.