Šidák correction

In statistics, the Šidák correction, or Dunn–Šidák correction, is a method used to counteract the problem of multiple comparisons. It is a simple method to control the family-wise error rate. When all null hypotheses are true, the method provides familywise error control that is exact for tests that are stochastically independent, conservative for tests that are positively dependent, and liberal for tests that are negatively dependent. It is credited to a 1967 paper [1] by the statistician and probabilist Zbyněk Šidák.[2] The Šidák method can be used to determine statistical significance and to compute adjusted p-values and confidence intervals.

Usage

  • Given m different null hypotheses and a familywise alpha level of [math]\displaystyle{ \alpha }[/math], reject each null hypothesis whose p-value is lower than [math]\displaystyle{ \alpha_{SID} = 1-(1-\alpha)^\frac{1}{m} }[/math].
  • This test produces a familywise Type I error rate of exactly [math]\displaystyle{ \alpha }[/math] when the tests are independent of each other and all null hypotheses are true. It is less stringent than the Bonferroni correction, but only slightly. For example, for [math]\displaystyle{ \alpha }[/math] = 0.05 and m = 10, the Bonferroni-adjusted level is 0.005 and the Šidák-adjusted level is approximately 0.005116 (see the sketch after this list).
  • One can also compute confidence intervals matching the test decision using the Šidák correction by using [math]\displaystyle{ 100 \cdot (1-\alpha)^{1/m} }[/math] % confidence intervals.
  • For continuous problems, one can employ Bayesian logic to compute [math]\displaystyle{ m }[/math] from the prior-to-posterior volume ratio.[3]
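
As a minimal illustrative sketch (not part of the article; the function names are our own), the following Python code computes the Šidák-adjusted per-test level described above, the Bonferroni level for comparison, and the matching per-interval confidence level:

  # Sidak-adjusted per-test significance level: 1 - (1 - alpha)^(1/m)
  def sidak_alpha(alpha: float, m: int) -> float:
      return 1.0 - (1.0 - alpha) ** (1.0 / m)

  # Bonferroni per-test level alpha/m, shown for comparison
  def bonferroni_alpha(alpha: float, m: int) -> float:
      return alpha / m

  alpha, m = 0.05, 10
  print(f"Sidak:      {sidak_alpha(alpha, m):.6f}")       # ~0.005116
  print(f"Bonferroni: {bonferroni_alpha(alpha, m):.6f}")  # 0.005000
  # Matching confidence level for intervals: 100 * (1 - alpha)^(1/m) %
  print(f"CI level:   {100 * (1 - alpha) ** (1.0 / m):.4f}%")  # ~99.4884%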

When the number of hypotheses is very large, or when the hypotheses are correlated, corrections such as Bonferroni and Šidák yield quite conservative results, which motivates consideration of other approaches.

Proof

The Šidák correction is derived by assuming that the individual tests are independent. Let the significance threshold for each test be [math]\displaystyle{ \alpha_1 }[/math]. If all null hypotheses are true, the probability that none of the tests is significant is, by independence, the product of the individual probabilities, [math]\displaystyle{ (1 - \alpha_1)^m }[/math]; hence the probability that at least one of the tests is significant is [math]\displaystyle{ 1 - (1 - \alpha_1)^m }[/math]. Our intention is for this probability to equal [math]\displaystyle{ \alpha }[/math], the significance threshold for the entire series of tests. Solving for [math]\displaystyle{ \alpha_1 }[/math] gives [math]\displaystyle{ \alpha_1 = 1 - (1 - \alpha)^{1/m}. }[/math] This shows that, in order to reach a given familywise level [math]\displaystyle{ \alpha }[/math], the [math]\displaystyle{ \alpha_1 }[/math] values used for the individual tests must be adapted accordingly.[4]
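
As a quick numerical check of this derivation (our own sketch, not part of the article), one can simulate independent true null hypotheses, whose p-values are uniform on (0, 1), and verify that the family-wise error rate at the Šidák threshold is close to [math]\displaystyle{ \alpha }[/math]:

  # Monte Carlo check: with m independent true nulls, p-values are Uniform(0,1);
  # the chance that at least one falls below the Sidak threshold should be ~alpha.
  import random

  alpha, m, trials = 0.05, 10, 200_000
  threshold = 1.0 - (1.0 - alpha) ** (1.0 / m)

  false_alarms = sum(
      any(random.random() < threshold for _ in range(m))
      for _ in range(trials)
  )
  print(f"Estimated FWER: {false_alarms / trials:.4f}  (target {alpha})")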

Šidák correction for t-test

Main page: Šidák correction for t-test

References

  1. Šidák, Z. K. (1967). "Rectangular Confidence Regions for the Means of Multivariate Normal Distributions". Journal of the American Statistical Association 62 (318): 626–633. doi:10.1080/01621459.1967.10482935. 
  2. Seidler, J.; Vondráček, J.; Saxl, I. (2000). "The life and work of Zbyněk Šidák (1933–1999)". Applications of Mathematics 45 (5): 321. doi:10.1023/A:1022238410461. 
  3. Bayer, Adrian E.; Seljak, Uroš (2020). "The look-elsewhere effect from a unified Bayesian and frequentist perspective". Journal of Cosmology and Astroparticle Physics 2020 (10): 009. doi:10.1088/1475-7516/2020/10/009. 
  4. "Abdi-Bonferonni2007-pretty.dvi". https://www.utdallas.edu/~herve/Abdi-Bonferroni2007-pretty.pdf. 
