Levene's test
In statistics, Levene's test is an inferential statistic used to assess the equality of variances for a variable calculated for two or more groups.[1] This test is used because some common statistical procedures assume that variances of the populations from which different samples are drawn are equal. Levene's test assesses this assumption. It tests the null hypothesis that the population variances are equal (called homogeneity of variance or homoscedasticity). If the resulting p-value of Levene's test is less than some significance level (typically 0.05), the obtained differences in sample variances are unlikely to have occurred based on random sampling from a population with equal variances. Thus, the null hypothesis of equal variances is rejected and it is concluded that there is a difference between the variances in the population.
Some of the procedures typically assuming homoscedasticity, for which one can use Levene's tests, include analysis of variance and t-tests.
Levene's test is sometimes used before a comparison of means, informing the decision on whether to use a pooled t-test or the Welch's t-test. However, it was shown that such a two-step procedure may markedly inflate the type 1 error obtained with the t-tests and thus should not be done in the first place.[2] Instead, the choice of pooled or Welch's test should be made a priori based on the study design.
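The recommendation above can be sketched in code: rather than gating the choice of t-test on a preliminary Levene's test, Welch's t-test (which does not assume equal variances) can simply be chosen a priori. The data below are synthetic, constructed with unequal variances for illustration.

```python
# Sketch: choosing Welch's t-test a priori instead of running a
# preliminary Levene's test. In SciPy, equal_var=False selects Welch's
# t-test; the samples here are synthetic, with unequal variances by
# construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=30)
y = rng.normal(loc=0.5, scale=2.0, size=40)  # larger variance on purpose

t, p = stats.ttest_ind(x, y, equal_var=False)  # Welch's t-test
print(f"Welch's t = {t:.3f}, p = {p:.4f}")
```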
Levene's test may also be used as a main test for answering a stand-alone question of whether two sub-samples in a given population have equal or different variances.[3]
Levene's test was developed by and named after American statistician and geneticist Howard Levene.
Definition
Levene's test is equivalent to a 1-way between-groups analysis of variance (ANOVA) with the dependent variable being the absolute value of the difference between a score and the mean of the group to which the score belongs (shown below as [math]\displaystyle{ Z_{ij} = |Y_{ij} - \bar{Y}_{i\cdot}| }[/math] ). The test statistic, [math]\displaystyle{ W }[/math], is equivalent to the [math]\displaystyle{ F }[/math] statistic that would be produced by such an ANOVA, and is defined as follows:
- [math]\displaystyle{ W = \frac{(N-k)}{(k-1)} \cdot \frac{\sum_{i=1}^k N_i (Z_{i\cdot}-Z_{\cdot\cdot})^2} {\sum_{i=1}^k \sum_{j=1}^{N_i} (Z_{ij}-Z_{i\cdot})^2}, }[/math]
where
- [math]\displaystyle{ k }[/math] is the number of different groups to which the sampled cases belong,
- [math]\displaystyle{ N_i }[/math] is the number of cases in the [math]\displaystyle{ i }[/math]th group,
- [math]\displaystyle{ N }[/math] is the total number of cases in all groups,
- [math]\displaystyle{ Y_{ij} }[/math] is the value of the measured variable for the [math]\displaystyle{ j }[/math]th case from the [math]\displaystyle{ i }[/math]th group,
- [math]\displaystyle{ Z_{ij} = \begin{cases} |Y_{ij} - \bar{Y}_{i\cdot}|, & \bar{Y}_{i\cdot} \text{ is a mean of the } i\text{-th group}, \\ |Y_{ij} - \tilde{Y}_{i\cdot}|, & \tilde{Y}_{i\cdot} \text{ is a median of the } i\text{-th group}. \end{cases} }[/math]
(Both definitions are in use though the second one is, strictly speaking, the Brown–Forsythe test – see below for comparison.)
- [math]\displaystyle{ Z_{i\cdot} = \frac{1}{N_i} \sum_{j=1}^{N_i} Z_{ij} }[/math] is the mean of the [math]\displaystyle{ Z_{ij} }[/math] for group [math]\displaystyle{ i }[/math],
- [math]\displaystyle{ Z_{\cdot\cdot} = \frac{1}{N} \sum_{i=1}^k \sum_{j=1}^{N_i} Z_{ij} }[/math] is the mean of all [math]\displaystyle{ Z_{ij} }[/math].
The test statistic [math]\displaystyle{ W }[/math] is approximately F-distributed with [math]\displaystyle{ k-1 }[/math] and [math]\displaystyle{ N-k }[/math] degrees of freedom. Hence the significance of an observed value [math]\displaystyle{ w }[/math] of [math]\displaystyle{ W }[/math] is assessed by comparing it against [math]\displaystyle{ F(1-\alpha;k-1,N-k) }[/math], the [math]\displaystyle{ 1-\alpha }[/math] quantile of the F-distribution with [math]\displaystyle{ k-1 }[/math] and [math]\displaystyle{ N-k }[/math] degrees of freedom, where [math]\displaystyle{ \alpha }[/math] is the chosen level of significance (usually 0.05 or 0.01). The null hypothesis of equal variances is rejected when [math]\displaystyle{ w \gt F(1-\alpha;k-1,N-k) }[/math].
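The definition above can be checked numerically: the sketch below computes [math]\displaystyle{ W }[/math] directly from the mean-based [math]\displaystyle{ Z_{ij} }[/math] and compares it with SciPy's implementation (whose `center='mean'` option corresponds to this definition). The three sample groups are made up for illustration.

```python
# Sketch: computing Levene's W by hand from the formula above and
# checking it against scipy.stats.levene with center='mean' (the
# mean-based definition). The sample data are invented for illustration.
import numpy as np
from scipy import stats

groups = [
    np.array([8.9, 10.2, 9.5, 11.1, 9.8]),
    np.array([12.0, 11.5, 13.2, 12.8]),
    np.array([9.0, 9.4, 10.1, 8.7, 9.9, 10.5]),
]

k = len(groups)                      # number of groups
N = sum(len(g) for g in groups)      # total number of cases

# Z_ij = |Y_ij - mean of group i|
Z = [np.abs(g - g.mean()) for g in groups]
Zbar_i = [z.mean() for z in Z]                 # per-group means of Z
Zbar = np.concatenate(Z).mean()                # grand mean of all Z

# Between-groups and within-groups sums of squares of the Z_ij
num = sum(len(z) * (zi - Zbar) ** 2 for z, zi in zip(Z, Zbar_i))
den = sum(((z - zi) ** 2).sum() for z, zi in zip(Z, Zbar_i))
W = (N - k) / (k - 1) * num / den

W_scipy, p = stats.levene(*groups, center='mean')
print(f"manual W = {W:.6f}, scipy W = {W_scipy:.6f}, p = {p:.4f}")
```

The two statistics agree, since `scipy.stats.levene` implements exactly this one-way ANOVA on the absolute deviations.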
Comparison with the Brown–Forsythe test
The Brown–Forsythe test uses the median instead of the mean in computing the spread within each group ([math]\displaystyle{ \bar{Y} }[/math] vs. [math]\displaystyle{ \tilde{Y} }[/math], above). Although the optimal choice depends on the underlying distribution, the definition based on the median is recommended as the choice that provides good robustness against many types of non-normal data while retaining good statistical power.[3] If the underlying distribution of the data is known, that knowledge may favor one of the other choices. Brown and Forsythe performed Monte Carlo studies indicating that the trimmed mean performed best when the underlying data followed a Cauchy distribution (a heavy-tailed distribution), and the median performed best when the underlying data followed a chi-squared distribution with four degrees of freedom (a heavily skewed distribution). Using the mean provided the best power for symmetric, moderate-tailed distributions.
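All three centring choices discussed above are exposed through the `center` argument of `scipy.stats.levene`: `'mean'` gives Levene's original test, `'median'` the Brown–Forsythe variant, and `'trimmed'` the trimmed-mean variant. The sketch below runs all three on synthetic heavy-tailed samples.

```python
# Sketch: comparing the three centring choices (mean, median, trimmed
# mean) via scipy.stats.levene's `center` argument. The samples are
# synthetic draws from a heavy-tailed (Cauchy) distribution, the case
# where Brown and Forsythe found the trimmed mean performed best.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.standard_cauchy(50)        # heavy-tailed sample
b = rng.standard_cauchy(50) * 2.0  # same shape, larger scale

for center in ('mean', 'median', 'trimmed'):
    w, p = stats.levene(a, b, center=center)
    print(f"{center:>7}: W = {w:.3f}, p = {p:.4f}")
```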
Software implementations
Many statistics packages and programming environments, including R, Python, Julia, and MATLAB, provide implementations of Levene's test.
| Language/Program | Function | Notes |
|---|---|---|
| Python | scipy.stats.levene(group1, group2, group3) | See [1] |
| MATLAB | vartestn(data,groups,'TestType','LeveneAbsolute') | See [2] |
| R | leveneTest(lm(y ~ x, data=data)) | See [3] |
| Julia | HypothesisTests.LeveneTest(group1, group2, group3) | See [4] |
References
- ↑ Levene, Howard (1960). "Robust tests for equality of variances". in Ingram Olkin. Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling. Stanford University Press. pp. 278–292.
- ↑ Zimmermann, Donald W. (2004). "A note on preliminary tests of equality of variances". British Journal of Mathematical and Statistical Psychology 57 (1): 173–181. doi:10.1348/000711004849222. https://pubmed.ncbi.nlm.nih.gov/15171807/.
- ↑ 3.0 3.1 Derrick, B; Ruck, A; Toher, D; White, P (2018). "Tests for equality of variances between two samples which contain both paired observations and independent observations". Journal of Applied Quantitative Methods 13 (2): 36–47. http://jaqm.ro/issues/volume-13,issue-2/pdfs/3_BE_AN_DE_PA_.pdf.
External links
- Parametric and nonparametric Levene's test in SPSS
- http://www.itl.nist.gov/div898/handbook/eda/section3/eda35a.htm
Original source: https://en.wikipedia.org/wiki/Levene's test.