Bartlett's test
In statistics, Bartlett's test, named after Maurice Stevenson Bartlett,[1] is used to test homoscedasticity, that is, if multiple samples are from populations with equal variances.[2] Some statistical tests, such as the analysis of variance, assume that variances are equal across groups or samples, which can be verified with Bartlett's test.
In a Bartlett test, we construct the null and alternative hypotheses. Several test procedures have been devised for this purpose; the procedure due to M. S. Bartlett is presented here. It is based on a statistic whose sampling distribution is approximately a Chi-Square distribution with (k − 1) degrees of freedom, where k is the number of random samples, which may vary in size and are each drawn from independent normal distributions. Bartlett's test is sensitive to departures from normality. That is, if the samples come from non-normal distributions, then Bartlett's test may simply be testing for non-normality. Levene's test and the Brown–Forsythe test are alternatives to the Bartlett test that are less sensitive to departures from normality.[3]
Specification
Bartlett's test is used to test the null hypothesis, H0, that all k population variances are equal against the alternative that at least two are different.
If there are k samples with sizes [math]\displaystyle{ n_i }[/math] and sample variances [math]\displaystyle{ S_i^2 }[/math] then Bartlett's test statistic is
- [math]\displaystyle{ \chi^2 = \frac{(N-k)\ln(S_p^2) - \sum_{i=1}^k(n_i - 1)\ln(S_i^2)}{1 + \frac{1}{3(k-1)}\left(\sum_{i=1}^k(\frac{1}{n_i-1}) - \frac{1}{N-k}\right)} }[/math]
where [math]\displaystyle{ N = \sum_{i=1}^k n_i }[/math] and [math]\displaystyle{ S_p^2 = \frac{1}{N-k} \sum_i (n_i-1)S_i^2 }[/math] is the pooled estimate for the variance.
The test statistic has approximately a [math]\displaystyle{ \chi^2_{k-1} }[/math] distribution. Thus, the null hypothesis is rejected if [math]\displaystyle{ \chi^2 \gt \chi^2_{k-1,\alpha} }[/math] (where [math]\displaystyle{ \chi^2_{k-1,\alpha} }[/math] is the upper tail critical value for the [math]\displaystyle{ \chi^2_{k-1} }[/math] distribution).
Bartlett's test is a modification of the corresponding likelihood ratio test designed to make the approximation to the [math]\displaystyle{ \chi^2_{k-1} }[/math] distribution better (Bartlett, 1937).
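For illustration, the statistic and decision rule above can be computed directly from the definitions. The following Python sketch is not part of the original source: the sample data, the helper name bartlett_statistic, and the use of NumPy/SciPy are assumptions for demonstration. Where SciPy is available, scipy.stats.bartlett computes the same statistic and can be used as a cross-check.

```python
# Minimal sketch of Bartlett's test, computed from the formula above.
import numpy as np
from scipy.stats import chi2

def bartlett_statistic(*samples):
    """Return Bartlett's chi-squared statistic and its upper-tail p-value."""
    k = len(samples)
    n = np.array([len(s) for s in samples])               # sample sizes n_i
    s2 = np.array([np.var(s, ddof=1) for s in samples])   # sample variances S_i^2
    N = n.sum()
    sp2 = ((n - 1) * s2).sum() / (N - k)                   # pooled variance S_p^2
    numerator = (N - k) * np.log(sp2) - ((n - 1) * np.log(s2)).sum()
    correction = 1 + (np.sum(1.0 / (n - 1)) - 1.0 / (N - k)) / (3 * (k - 1))
    stat = numerator / correction
    p_value = chi2.sf(stat, df=k - 1)                      # P(chi^2_{k-1} > stat)
    return stat, p_value

# Illustrative data: three normal samples, the third with a larger variance.
rng = np.random.default_rng(0)
groups = [rng.normal(0.0, sd, size=30) for sd in (1.0, 1.0, 2.0)]
stat, p = bartlett_statistic(*groups)
print(f"chi-squared = {stat:.3f}, p = {p:.4f}")  # small p suggests unequal variances
```

The null hypothesis is rejected when the p-value falls below the chosen significance level, which is equivalent to the statistic exceeding the upper-tail critical value of the chi-squared distribution with k − 1 degrees of freedom.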
Notes
In some sources, the test statistic may be written with base-10 logarithms as:[4]
- [math]\displaystyle{ \chi^2 = 2.3026 \frac{(N-k)\log_{10}(S_p^2) - \sum_{i=1}^k(n_i - 1)\log_{10}(S_i^2)}{1 + \frac{1}{3(k-1)}\left(\sum_{i=1}^k(\frac{1}{n_i-1}) - \frac{1}{N-k}\right)} }[/math]
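This is equivalent to the natural-logarithm form above, since [math]\displaystyle{ \ln x = \ln(10)\,\log_{10} x \approx 2.3026\,\log_{10} x }[/math].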
References
- ↑ Bartlett, M. S. (1937). "Properties of sufficiency and statistical tests". Proceedings of the Royal Society of London, Series A 160, 268–282. JSTOR 96803.
- ↑ Snedecor, George W.; Cochran, William G. (1989). Statistical Methods (8th ed.). Iowa State University Press. ISBN 978-0-8138-1561-9.
- ↑ NIST/SEMATECH e-Handbook of Statistical Methods. Available online: http://www.itl.nist.gov/div898/handbook/eda/section3/eda357.htm. Retrieved 31 December 2013.
- ↑ Gunst, Richard F.; Hess, James L. (2003). Statistical Design and Analysis of Experiments: With Applications to Engineering and Science. Wiley. p. 98. ISBN 0471372161. OCLC 856653529.
External links
Original source: https://en.wikipedia.org/wiki/Bartlett's test.