Kruskal–Wallis one-way analysis of variance

(Figure: difference between ANOVA and the Kruskal–Wallis test with ranks.)

The Kruskal–Wallis test by ranks, Kruskal–Wallis H test[1] (named after William Kruskal and W. Allen Wallis), or one-way ANOVA on ranks[1] is a non-parametric method for testing whether samples originate from the same distribution.[2][3][4] It is used for comparing two or more independent samples of equal or different sample sizes. It extends the Mann–Whitney U test, which is used for comparing only two groups. The parametric equivalent of the Kruskal–Wallis test is the one-way analysis of variance (ANOVA).
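
With exactly two groups and no ties, the Kruskal–Wallis statistic equals the square of the Mann–Whitney z statistic, so its chi-squared p-value coincides with the two-sided normal-approximation Mann–Whitney p-value when no continuity correction is applied. A minimal R check of this correspondence, using made-up data:

a <- c(6.1, 5.4, 7.2, 6.8, 5.9)   # hypothetical sample 1
b <- c(4.9, 6.3, 5.1, 5.6)        # hypothetical sample 2
kruskal.test(list(a, b))$p.value
wilcox.test(a, b, exact = FALSE, correct = FALSE)$p.value  # same value (no ties here)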

A significant Kruskal–Wallis test indicates that at least one sample stochastically dominates at least one other sample. The test does not identify where this stochastic dominance occurs, nor for how many pairs of groups it holds. To analyze specific sample pairs for stochastic dominance, Dunn's test,[5] pairwise Mann–Whitney tests with Bonferroni correction,[6] or the more powerful but less well-known Conover–Iman test[6] are sometimes used.

Under the alternative hypothesis, the treatments affect the response level and there is an ordering among the treatments: one tends to give the lowest response, another the next lowest, and so forth.[7] Since it is a nonparametric method, the Kruskal–Wallis test does not assume a normal distribution of the residuals, unlike the analogous one-way analysis of variance. If the researcher can assume an identically shaped and scaled distribution for all groups, except for any difference in medians, then the null hypothesis is that the medians of all groups are equal, and the alternative hypothesis is that at least one population median differs from the population median of at least one other group. Otherwise, it is impossible to say whether a rejection of the null hypothesis stems from a shift in locations or from differences in group dispersions; the same issue arises with the Mann–Whitney test.[8][9][10] If the data contain potential outliers, if the population distributions have heavy tails, or if the population distributions are markedly skewed, the Kruskal–Wallis test is more powerful than the ANOVA F-test at detecting differences among treatments. Conversely, if the population distributions are normal or are light-tailed and symmetric, the ANOVA F-test will generally have greater power, that is, a higher probability of rejecting the null hypothesis when it should indeed be rejected.[11][12]
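
The power comparison can be made concrete by simulation. The following R sketch (a rough illustration; the shift sizes, group sizes, and error distribution are arbitrary choices, not taken from the sources cited above) estimates the power of the Kruskal–Wallis test and of the ANOVA F-test when the errors are heavy-tailed:

set.seed(1)
B <- 2000                     # number of simulated data sets
n <- 15                       # observations per group
shift <- c(0, 0.8, 1.6)       # true location shifts of the three groups

rejections <- replicate(B, {
  y <- rt(3 * n, df = 2) + rep(shift, each = n)  # t(2) errors: heavy tails
  g <- factor(rep(1:3, each = n))
  c(kruskal = kruskal.test(y, g)$p.value < 0.05,
    anovaF  = anova(lm(y ~ g))[1, "Pr(>F)"] < 0.05)
})
rowMeans(rejections)          # estimated power of each test

In runs of this sketch the Kruskal–Wallis test typically rejects more often; replacing rt(3 * n, df = 2) with rnorm(3 * n) tends to reverse the ordering, in line with the power statements above.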

Method

(Figure: an illustration of assigning tied values the average of the ranks.)
  1. Rank all data from all groups together; i.e., rank the data from 1 to N ignoring group membership. Assign any tied values the average of the ranks they would have received had they not been tied.
  2. The test statistic is given by
    [math]\displaystyle{ H = (N-1)\frac{\sum_{i=1}^{g} n_i(\bar{r}_{i\cdot} - \bar{r})^2}{\sum_{i=1}^{g} \sum_{j=1}^{n_i}(r_{ij} - \bar{r})^2}, }[/math] where
    • [math]\displaystyle{ N }[/math] is the total number of observations across all groups
    • [math]\displaystyle{ g }[/math] is the number of groups
    • [math]\displaystyle{ n_i }[/math] is the number of observations in group [math]\displaystyle{ i }[/math]
    • [math]\displaystyle{ r_{ij} }[/math] is the rank (among all observations) of observation [math]\displaystyle{ j }[/math] from group [math]\displaystyle{ i }[/math]
    • [math]\displaystyle{ \bar{r}_{i\cdot} = \frac{\sum_{j=1}^{n_i} r_{ij}}{n_i} }[/math] is the average rank of all observations in group [math]\displaystyle{ i }[/math]
    • [math]\displaystyle{ \bar{r} = \tfrac{1}{2}(N+1) }[/math] is the average of all the [math]\displaystyle{ r_{ij} }[/math].
  3. If the data contain no ties, the denominator of the expression for [math]\displaystyle{ H }[/math] is exactly [math]\displaystyle{ (N-1)N(N+1)/12 }[/math] and [math]\displaystyle{ \bar{r}=\tfrac{N+1}{2} }[/math]. Thus
    [math]\displaystyle{ \begin{align} H & = \frac{12}{N(N+1)}\sum_{i=1}^g n_i \left(\bar{r}_{i\cdot} - \frac{N+1}{2}\right)^2 \\ & = \frac{12}{N(N+1)}\sum_{i=1}^g n_i \bar{r}_{i\cdot }^2 -\ 3(N+1) \end{align} }[/math]
    The last formula only contains the squares of the average ranks.
  4. If the short-cut formula of the previous point is used, a correction for ties can be made by dividing [math]\displaystyle{ H }[/math] by [math]\displaystyle{ 1 - \frac{\sum_{i=1}^G (t_i^3 - t_i)}{N^3-N} }[/math], where [math]\displaystyle{ G }[/math] is the number of groupings of different tied ranks, and [math]\displaystyle{ t_i }[/math] is the number of tied values within tie group [math]\displaystyle{ i }[/math]. This correction usually makes little difference in the value of [math]\displaystyle{ H }[/math] unless there are a large number of ties.
  5. When performing multiple sample comparisons, the type I error rate tends to become inflated. The Bonferroni procedure can therefore be used to adjust the significance level: [math]\displaystyle{ \bar{\alpha}=\frac{\alpha}{k} }[/math], where [math]\displaystyle{ \bar{\alpha} }[/math] is the adjusted significance level, [math]\displaystyle{ \alpha }[/math] is the initial significance level, and [math]\displaystyle{ k }[/math] is the number of contrasts.[13]
  6. Finally, the decision to reject or not reject the null hypothesis is made by comparing [math]\displaystyle{ H }[/math] to a critical value [math]\displaystyle{ H_c }[/math] obtained from a table or from software for a given significance (alpha) level. If [math]\displaystyle{ H }[/math] is larger than [math]\displaystyle{ H_c }[/math], the null hypothesis is rejected. If possible (no ties, sample not too large), one should compare [math]\displaystyle{ H }[/math] to the critical value obtained from the exact distribution of [math]\displaystyle{ H }[/math]. Otherwise, the distribution of [math]\displaystyle{ H }[/math] can be approximated by a chi-squared distribution with [math]\displaystyle{ g-1 }[/math] degrees of freedom. If some [math]\displaystyle{ n_i }[/math] values are small (i.e., less than 5), the exact probability distribution of [math]\displaystyle{ H }[/math] can be quite different from this chi-squared distribution. If a table of the chi-squared probability distribution is available, the critical value of chi-squared, [math]\displaystyle{ \chi^2_{\alpha,\,g-1} }[/math], can be found by entering the table at [math]\displaystyle{ g-1 }[/math] degrees of freedom and looking under the desired significance (alpha) level.[14] A worked sketch of steps 1–4 and 6 in R appears after this list.
  7. If the statistic is not significant, then there is no evidence of stochastic dominance between the samples. However, if the test is significant then at least one sample stochastically dominates another sample. Therefore, a researcher might use sample contrasts between individual sample pairs, or post hoc tests using Dunn's test, which (1) properly employs the same rankings as the Kruskal–Wallis test, and (2) properly employs the pooled variance implied by the null hypothesis of the Kruskal–Wallis test in order to determine which of the sample pairs are significantly different.[5] When performing multiple sample contrasts or tests, the Type I error rate tends to become inflated, raising concerns about multiple comparisons.
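
As a concrete illustration of steps 1–4 and 6, the following R sketch computes [math]\displaystyle{ H }[/math] from scratch on a small made-up data set (the values and variable names are hypothetical) and cross-checks the result against R's built-in kruskal.test, which applies the same tie correction:

# Hypothetical observations in three groups (note the tied value 3.2)
x <- c(2.9, 3.0, 2.5, 2.6, 3.2,    # group 1
       3.8, 2.7, 4.0, 2.4,         # group 2
       2.8, 3.2, 3.7, 2.2, 2.0)    # group 3
grp <- factor(rep(1:3, times = c(5, 4, 5)))

N <- length(x)
r <- rank(x)                       # step 1: tied values get the average rank

# Steps 2-3: short-cut formula H = 12/(N(N+1)) * sum(n_i * rbar_i^2) - 3(N+1)
rbar <- tapply(r, grp, mean)       # average rank per group
n    <- tabulate(grp)              # group sizes
H    <- 12 / (N * (N + 1)) * sum(n * rbar^2) - 3 * (N + 1)

# Step 4: divide by the tie correction 1 - sum(t^3 - t) / (N^3 - N)
t.sizes <- table(r)                # sizes of the tie groupings (1 = untied)
H.corr  <- H / (1 - sum(t.sizes^3 - t.sizes) / (N^3 - N))

# Step 6: chi-squared approximation with g - 1 degrees of freedom
pchisq(H.corr, df = nlevels(grp) - 1, lower.tail = FALSE)

kruskal.test(x, grp)               # should report the same statistic and p-value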

Exact probability tables

Computing exact probabilities for the Kruskal–Wallis test requires a large amount of computing resources. Existing software provides exact probabilities only for sample sizes below about 30 participants, relying on the asymptotic approximation for larger sample sizes.

Exact probability values for larger sample sizes are available. Spurrier (2003) published exact probability tables for samples as large as 45 participants.[15] Meyer and Seaman (2006) produced exact probability distributions for samples as large as 105 participants.[16]

Exact distribution of H

Choi et al.[17] reviewed two methods that had been developed to compute the exact distribution of [math]\displaystyle{ H }[/math], proposed a new one, and compared the exact distribution to its chi-squared approximation.

Example

Test for differences in ozone levels by month

The following example uses data from Chambers et al.[18] on daily readings of ozone for May 1 to September 30, 1973, in New York City. The data are in the R data set airquality, and the analysis is included in the documentation for the R function kruskal.test. Boxplots of ozone values by month are shown in the figure.

(Figure: boxplots of ozone by month.)

The Kruskal–Wallis test finds a significant difference (p = 6.901e-06), indicating that ozone levels differ among the five months.

kruskal.test(Ozone ~ Month, data = airquality)

	Kruskal-Wallis rank sum test

data:  Ozone by Month
Kruskal-Wallis chi-squared = 29.267, df = 4, p-value = 6.901e-06

To determine which months differ, post-hoc tests may be performed using a Wilcoxon test for each pair of months, with a Bonferroni (or other) correction for multiple hypothesis testing; a Dunn's test alternative is sketched at the end of this section.

pairwise.wilcox.test(airquality$Ozone, airquality$Month, p.adjust.method = "bonferroni")

	Pairwise comparisons using Wilcoxon rank sum test

data:  airquality$Ozone and airquality$Month

  5      6      7      8     
6 1.0000 -      -      -     
7 0.0003 0.1414 -      -     
8 0.0012 0.2591 1.0000 -     
9 1.0000 1.0000 0.0074 0.0325

P value adjustment method: bonferroni

The post-hoc tests indicate that, after Bonferroni correction for multiple testing, the following differences are significant (adjusted p < 0.05).

  • Month 5 vs Months 7 and 8
  • Month 9 vs Months 7 and 8
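
Dunn's test, mentioned above as the post hoc procedure that reuses the Kruskal–Wallis rankings, is not part of base R. One possible sketch uses the third-party dunn.test package (an assumption about the available environment; the FSA package's dunnTest is an alternative):

library(dunn.test)  # install.packages("dunn.test") if needed
dunn.test(airquality$Ozone, airquality$Month, method = "bonferroni")

Unlike pairwise.wilcox.test, which re-ranks each pair of months separately, this approach bases all pairwise comparisons on the pooled rankings from the overall Kruskal–Wallis test.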

Implementation

The Kruskal–Wallis test is implemented in many programming tools and languages.

  • Mathematica implements the test as LocationEquivalenceTest.[19]
  • MATLAB's Statistics Toolbox has kruskalwallis to compute the p-value for a hypothesis test and display the ANOVA table.[20]
  • SAS has the "NPAR1WAY" procedure for the test.[21]
  • SPSS implements the test with the "Nonparametric Tests" procedure.[22]
  • Minitab implements the test in its "Nonparametrics" menu.[23]
  • In Python's SciPy package, the function scipy.stats.kruskal returns the test statistic and the p-value.[24]
  • R's base stats package implements the test as kruskal.test.[25]
  • In Java, an implementation is provided by Apache Commons Math.[26]
  • In Julia, the package HypothesisTests.jl has the function KruskalWallisTest(groups::AbstractVector{<:Real}...) to compute the p-value.[27]

References

  1. 1.0 1.1 Kruskal–Wallis H Test using SPSS Statistics, Laerd Statistics
  2. Kruskal; Wallis (1952). "Use of ranks in one-criterion variance analysis". Journal of the American Statistical Association 47 (260): 583–621. doi:10.1080/01621459.1952.10483441. 
  3. Corder, Gregory W.; Foreman, Dale I. (2009). Nonparametric Statistics for Non-Statisticians. Hoboken: John Wiley & Sons. pp. 99–105. ISBN 9780470454619. https://archive.org/details/nonparametricsta00cord. 
  4. Siegel; Castellan (1988). Nonparametric Statistics for the Behavioral Sciences (Second ed.). New York: McGraw–Hill. ISBN 0070573573. 
  5. 5.0 5.1 Dunn, Olive Jean (1964). "Multiple comparisons using rank sums". Technometrics 6 (3): 241–252. doi:10.2307/1266041. 
  6. 6.0 6.1 Conover, W. Jay; Iman, Ronald L. (1979). "On multiple-comparisons procedures" (Report). Los Alamos Scientific Laboratory. http://library.lanl.gov/cgi-bin/getfile?00209046.pdf. Retrieved 2016-10-28. 
  7. Lehmann, E. L., & D'Abrera, H. J. (1975). Nonparametrics: Statistical methods based on ranks. Holden-Day.
  8. Divine; Norton; Barón; Juarez-Colunga (2018). "The Wilcoxon–Mann–Whitney Procedure Fails as a Test of Medians". The American Statistician. doi:10.1080/00031305.2017.1305291. 
  9. Hart (2001). "Mann-Whitney test is not just a test of medians: differences in spread can be important". BMJ. doi:10.1136/bmj.323.7309.391. 
  10. Bruin (2006). "FAQ: Why is the Mann-Whitney significant when the medians are equal?". UCLA: Statistical Consulting Group. 
  11. Higgins, James J. (2004). An Introduction to Modern Nonparametric Statistics. Duxbury Advanced Series. Pacific Grove, CA: Brooks/Cole; Thomson Learning. ISBN 978-0-534-38775-4. 
  12. Berger, Paul D.; Maurer, Robert E.; Celli, Giovana B. (2018). Experimental Design. Cham: Springer International Publishing. doi:10.1007/978-3-319-64583-4. ISBN 978-3-319-64582-7. http://link.springer.com/10.1007/978-3-319-64583-4. 
  13. Corder, G.W. & Foreman, D.I. (2010). Nonparametric Statistics for Non-statisticians: A Step-by-Step Approach. Hoboken, NJ: Wiley.
  14. Montgomery, Douglas C.; Runger, George C. (2018). Applied statistics and probability for engineers. EMEA edition (Seventh ed.). Hoboken, NJ: Wiley. ISBN 978-1-119-40036-3. 
  15. Spurrier, J. D. (2003). "On the null distribution of the Kruskal–Wallis statistic". Journal of Nonparametric Statistics 15 (6): 685–691. doi:10.1080/10485250310001634719. 
  16. Meyer; Seaman (April 2006). "Expanded tables of critical values for the Kruskal–Wallis H statistic". Paper presented at the annual meeting of the American Educational Research Association, San Francisco.  Critical value tables and exact probabilities from Meyer and Seaman are available for download at http://faculty.virginia.edu/kruskal-wallis/ . A paper describing their work may also be found there.
  17. Won Choi, Jae Won Lee, Myung-Hoe Huh, and Seung-Ho Kang (2003). "An Algorithm for Computing the Exact Distribution of the Kruskal–Wallis Test". Communications in Statistics – Simulation and Computation 32 (4): 1029–1040. doi:10.1081/SAC-120023876. 
  18. John M. Chambers, William S. Cleveland, Beat Kleiner, and Paul A. Tukey (1983). Graphical Methods for Data Analysis. Belmont, Calif: Wadsworth International Group, Duxbury Press. ISBN 053498052X. 
  19. Wolfram Research (2010), LocationEquivalenceTest, Wolfram Language function, https://reference.wolfram.com/language/ref/LocationEquivalenceTest.html.
  20. "Kruskal-Wallis test - MATLAB kruskalwallis". https://www.mathworks.com/help/stats/kruskalwallis.html?searchHighlight=kruskalwallis%20test&s_tid=srchtitle_support_results_1_kruskalwallis%20test. 
  21. "The NPAR1WAY Procedure". https://documentation.sas.com/doc/en/pgmsascdc/9.4_3.4/statug/statug_npar1way_syntax01.htm. 
  22. Ruben Geert van den Berg. "How to Run a Kruskal-Wallis Test in SPSS?". https://www.spss-tutorials.com/kruskal-wallis-test-in-spss/. 
  23. "Overview for Kruskal-Wallis Test". https://support.minitab.com/en-us/minitab/21/help-and-how-to/statistics/nonparametrics/how-to/kruskal-wallis-test/before-you-start/overview/. 
  24. "scipy.stats.kruskal — SciPy v1.11.4 Manual". https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kruskal.html. 
  25. "kruskal.test function - RDocumentation". https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/kruskal.test. 
  26. "Math – The Commons Math User Guide - Statistics". https://commons.apache.org/proper/commons-math/userguide/stat.html#a1.8_Statistical_tests. 
  27. "Nonparametric tests · HypothesisTests.jl" (in en). https://juliastats.org/HypothesisTests.jl/stable/nonparametric/#Kruskal-Wallis-rank-sum-test. 
