Multiple comparisons problem

An example of a coincidence produced by data dredging (showing a correlation between the number of letters in a spelling bee's winning word and the number of people in the United States killed by venomous spiders). Given a large enough pool of variables for the same time period, it is possible to find a pair of graphs that show a correlation with no causation.

In statistics, the multiple comparisons, multiplicity or multiple testing problem occurs when one considers a set of statistical inferences simultaneously[1] or estimates a subset of parameters selected based on the observed values.[2]

The larger the number of inferences made, the more likely erroneous inferences become. Several statistical techniques have been developed to address this problem, for example, by requiring a stricter significance threshold for individual comparisons, so as to compensate for the number of inferences being made. Methods for family-wise error rate control provide guarantees on the rate of false positives resulting from the multiple comparisons problem.

History

The problem of multiple comparisons received increased attention in the 1950s with the work of statisticians such as Tukey and Scheffé. Over the ensuing decades, many procedures were developed to address the problem. In 1996, the first international conference on multiple comparison procedures took place in Tel Aviv.[3] This is an active research area with work being done by, for example, Emmanuel Candès and Vladimir Vovk.

Definition

Multiple comparisons arise when a statistical analysis involves multiple simultaneous statistical tests, each of which has a potential to produce a "discovery". A stated confidence level generally applies only to each test considered individually, but often it is desirable to have a confidence level for the whole family of simultaneous tests.[4] Failure to compensate for multiple comparisons can have important real-world consequences, as illustrated by the following examples:

  • Suppose the treatment is a new way of teaching writing to students, and the control is the standard way of teaching writing. Students in the two groups can be compared in terms of grammar, spelling, organization, content, and so on. As more attributes are compared, it becomes increasingly likely that the treatment and control groups will appear to differ on at least one attribute due to random sampling error alone.
  • Suppose we consider the efficacy of a drug in terms of the reduction of any one of a number of disease symptoms. As more symptoms are considered, it becomes increasingly likely that the drug will appear to be an improvement over existing drugs in terms of at least one symptom.

In both examples, as the number of comparisons increases, it becomes more likely that the groups being compared will appear to differ in terms of at least one attribute. Our confidence that a result will generalize to independent data should generally be weaker if it is observed as part of an analysis that involves multiple comparisons, rather than an analysis that involves only a single comparison.

For example, if one test is performed at the 5% level and the corresponding null hypothesis is true, there is only a 5% risk of incorrectly rejecting the null hypothesis. However, if 100 tests are each conducted at the 5% level and all corresponding null hypotheses are true, the expected number of incorrect rejections (also known as false positives or Type I errors) is 5. If the tests are statistically independent from each other (i.e. are performed on independent samples), the probability of at least one incorrect rejection is approximately 99.4%.

The multiple comparisons problem also applies to confidence intervals. A single confidence interval with a 95% coverage probability level will contain the true value of the parameter in 95% of samples. However, if one considers 100 confidence intervals simultaneously, each with 95% coverage probability, the expected number of non-covering intervals is 5. If the intervals are statistically independent from each other, the probability that at least one interval does not contain the population parameter is 99.4%.
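
These figures follow directly from the calculation above; the short Python sketch below (illustrative only, using the numbers of the 100-test example) computes the expected number of false positives and the probability of at least one false positive, which equals the probability of at least one non-covering interval in the confidence-interval version.

    # False-positive arithmetic for m independent tests, each at level alpha,
    # when every null hypothesis is true (equivalently, m independent 95% CIs).
    alpha = 0.05
    m = 100

    expected_false_positives = m * alpha          # expected number of wrong rejections
    p_at_least_one = 1 - (1 - alpha) ** m         # P(at least one wrong rejection)

    print(expected_false_positives)               # 5.0
    print(round(p_at_least_one, 3))               # 0.994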

Techniques have been developed to prevent the inflation of false positive rates and non-coverage rates that occur with multiple statistical tests.

Classification of multiple hypothesis tests

The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm. Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant. Summing each type of outcome over all Hi yields the following random variables:

                                    Null hypothesis is true (H0)    Alternative hypothesis is true (HA)    Total
Test is declared significant        V                               S                                      R
Test is declared non-significant    U                               T                                      m - R
Total                               m0                              m - m0                                 m

In m hypothesis tests of which [math]\displaystyle{ m_0 }[/math] are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables.
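
To make the table concrete, the following Python sketch simulates one realization of these counts with entirely hypothetical numbers: 900 true nulls out of m = 1000 tests, and an assumed mean shift of 3 standard deviations under the alternatives.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    m, m0 = 1000, 900            # hypothetical: 900 true nulls, 100 true alternatives
    alpha = 0.05

    # Z-statistics: mean 0 under a true null, mean 3 under a true alternative (assumed)
    z = np.concatenate([rng.normal(0.0, 1.0, m0), rng.normal(3.0, 1.0, m - m0)])
    p = norm.sf(z)               # one-sided p-values
    reject = p < alpha           # "test is declared significant"

    V = int(reject[:m0].sum())   # false positives (true null, declared significant)
    S = int(reject[m0:].sum())   # true positives (true alternative, declared significant)
    U = m0 - V                   # true negatives
    T = (m - m0) - S             # false negatives
    R = V + S                    # total number of rejections (the observable quantity)
    print(V, S, U, T, R)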

Controlling procedures

<graph>{"legends":[],"scales":[{"type":"linear","name":"x","zero":false,"domain":{"data":"chart","field":"x"},"range":"width","nice":true},{"type":"linear","name":"y","domain":{"data":"chart","field":"y"},"zero":false,"range":"height","nice":true},{"domain":{"data":"chart","field":"series"},"type":"ordinal","name":"color","range":"category10"}],"version":2,"marks":[{"type":"line","properties":{"hover":{"stroke":{"value":"red"}},"update":{"stroke":{"scale":"color","field":"series"}},"enter":{"y":{"scale":"y","field":"y"},"x":{"scale":"x","field":"x"},"stroke":{"scale":"color","field":"series"},"strokeWidth":{"value":2.5}}},"from":{"data":"chart"}}],"height":100,"axes":[{"type":"x","scale":"x","format":"d","properties":{"title":{"fill":{"value":"#54595d"}},"grid":{"stroke":{"value":"#54595d"}},"ticks":{"stroke":{"value":"#54595d"}},"axis":{"strokeWidth":{"value":2},"stroke":{"value":"#54595d"}},"labels":{"fill":{"value":"#54595d"}}},"grid":false},{"type":"y","title":"P(at least 1 H_0 is wrongly rejected)","scale":"y","properties":{"title":{"fill":{"value":"#54595d"}},"grid":{"stroke":{"value":"#54595d"}},"ticks":{"stroke":{"value":"#54595d"}},"axis":{"strokeWidth":{"value":2},"stroke":{"value":"#54595d"}},"labels":{"fill":{"value":"#54595d"}}},"grid":false}],"data":[{"format":{"parse":{"y":"number","x":"integer"},"type":"json"},"name":"chart","values":[{"y":0.050000000000000044,"series":"y","x":1},{"y":0.09750000000000003,"series":"y","x":2},{"y":0.1426250000000001,"series":"y","x":3},{"y":0.18549375000000012,"series":"y","x":4},{"y":0.22621906250000023,"series":"y","x":5},{"y":0.2649081093750002,"series":"y","x":6},{"y":0.3016627039062503,"series":"y","x":7},{"y":0.33657956871093775,"series":"y","x":8},{"y":0.3697505902753909,"series":"y","x":9},{"y":0.4012630607616213,"series":"y","x":10},{"y":0.43119990772354033,"series":"y","x":11},{"y":0.45963991233736334,"series":"y","x":12},{"y":0.4866579167204952,"series":"y","x":13},{"y":0.5123250208844705,"series":"y","x":14},{"y":0.536708769840247,"series":"y","x":15},{"y":0.5598733313482347,"series":"y","x":16},{"y":0.5818796647808229,"series":"y","x":17},{"y":0.6027856815417818,"series":"y","x":18},{"y":0.6226463974646927,"series":"y","x":19},{"y":0.6415140775914581,"series":"y","x":20},{"y":0.6594383737118852,"series":"y","x":21},{"y":0.676466455026291,"series":"y","x":22},{"y":0.6926431322749764,"series":"y","x":23},{"y":0.7080109756612276,"series":"y","x":24},{"y":0.7226104268781662,"series":"y","x":25},{"y":0.7364799055342579,"series":"y","x":26},{"y":0.7496559102575451,"series":"y","x":27},{"y":0.7621731147446679,"series":"y","x":28},{"y":0.7740644590074345,"series":"y","x":29},{"y":0.7853612360570628,"series":"y","x":30},{"y":0.7960931742542097,"series":"y","x":31},{"y":0.8062885155414992,"series":"y","x":32},{"y":0.8159740897644242,"series":"y","x":33},{"y":0.8251753852762029,"series":"y","x":34},{"y":0.8339166160123929,"series":"y","x":35},{"y":0.8422207852117732,"series":"y","x":36},{"y":0.8501097459511846,"series":"y","x":37},{"y":0.8576042586536253,"series":"y","x":38},{"y":0.8647240457209441,"series":"y","x":39},{"y":0.8714878434348969,"series":"y","x":40},{"y":0.877913451263152,"series":"y","x":41},{"y":0.8840177786999944,"series":"y","x":42},{"y":0.8898168897649947,"series":"y","x":43},{"y":0.895326045276745,"series":"y","x":44},{"y":0.9005597430129078,"series":"y","x":45},{"y":0.9055317558622624,"series":"y","x":46},{"y":0.9102551680691493,"series":"y","x":47},{"y":0.9147424096656918,"series":"y","x":48},{"y":0.9190052891824072,"se
ries":"y","x":49}]}],"width":300}</graph>
Probability that at least one null hypothesis is wrongly rejected, for [math]\displaystyle{ \alpha_\text{per comparison}=0.05 }[/math], as a function of the number of independent tests [math]\displaystyle{ m }[/math].
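
The plotted curve is simply [math]\displaystyle{ 1-(1-\alpha_\text{per comparison})^m }[/math] for independent tests (derived in the next subsection); a minimal Python sketch to regenerate it:

    import numpy as np
    import matplotlib.pyplot as plt

    alpha = 0.05
    m = np.arange(1, 50)
    fwer = 1 - (1 - alpha) ** m   # P(at least one wrong rejection) for m independent tests

    plt.plot(m, fwer)
    plt.xlabel("number of independent tests m")
    plt.ylabel("P(at least 1 H_0 is wrongly rejected)")
    plt.show()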

Multiple testing correction

Multiple testing correction refers to making statistical tests more stringent in order to counteract the problem of multiple testing. The best known such adjustment is the Bonferroni correction, but other methods have been developed. Such methods are typically designed to control the family-wise error rate or the false discovery rate.

If m independent comparisons are performed, the family-wise error rate (FWER) is given by

[math]\displaystyle{ \bar{\alpha} = 1-\left( 1-\alpha_{\{\text{per comparison}\}} \right)^m. }[/math]

Hence, unless the tests are perfectly positively dependent (i.e., identical), [math]\displaystyle{ \bar{\alpha} }[/math] increases as the number of comparisons increases. If we do not assume that the comparisons are independent, then we can still say:

[math]\displaystyle{ \bar{\alpha} \le m \cdot \alpha_{\{\text{per comparison}\}}, }[/math]

which follows from Boole's inequality. Example: [math]\displaystyle{ 0.2649=1-(1-.05)^6 \le .05 \times 6 = 0.3 }[/math]

There are different ways to assure that the family-wise error rate is at most [math]\displaystyle{ \alpha }[/math]. The most conservative method, which is free of dependence and distributional assumptions, is the Bonferroni correction [math]\displaystyle{ \alpha_\mathrm{\{per\ comparison\}}={\alpha}/m }[/math]. A marginally less conservative correction can be obtained by solving the equation for the family-wise error rate of [math]\displaystyle{ m }[/math] independent comparisons for [math]\displaystyle{ \alpha_\mathrm{\{per\ comparison\}} }[/math]. This yields [math]\displaystyle{ \alpha_{\{\text{per comparison}\}} = 1-{(1-{\alpha})}^{1/m} }[/math], which is known as the Šidák correction. Another procedure is the Holm–Bonferroni method, which uniformly delivers more power than the simple Bonferroni correction by testing only the lowest p-value ([math]\displaystyle{ i=1 }[/math]) against the strictest criterion and the higher p-values ([math]\displaystyle{ i\gt 1 }[/math]) against progressively less strict criteria: the [math]\displaystyle{ i }[/math]-th smallest p-value is compared against [math]\displaystyle{ \alpha_\mathrm{\{per\ comparison\}}={\alpha}/(m-i+1) }[/math].[5]
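
For concreteness, the per-comparison criteria just described can be computed in a few lines. The sketch below is a minimal illustration (the function names are illustrative, not a standard library API) of the Bonferroni threshold, the Šidák threshold, and the Holm–Bonferroni step-down rule.

    import numpy as np

    def bonferroni_threshold(alpha, m):
        return alpha / m

    def sidak_threshold(alpha, m):
        return 1 - (1 - alpha) ** (1 / m)

    def holm_bonferroni_reject(pvalues, alpha=0.05):
        """Step-down Holm-Bonferroni: compare the i-th smallest p-value to
        alpha / (m - i + 1) and stop at the first failure."""
        p = np.asarray(pvalues)
        m = len(p)
        order = np.argsort(p)                    # indices, smallest p-value first
        reject = np.zeros(m, dtype=bool)
        for i, idx in enumerate(order, start=1):
            if p[idx] <= alpha / (m - i + 1):
                reject[idx] = True
            else:
                break                            # all remaining hypotheses are retained
        return reject

    print(bonferroni_threshold(0.05, 10))        # 0.005
    print(sidak_threshold(0.05, 10))             # ~0.00512
    print(holm_bonferroni_reject([0.001, 0.004, 0.03, 0.2]))   # [ True  True False False]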

For continuous problems, one can employ Bayesian logic to compute [math]\displaystyle{ m }[/math] from the prior-to-posterior volume ratio. Continuous generalizations of the Bonferroni and Šidák corrections are presented in Bayer and Seljak (2020).[6]

Large-scale multiple testing

Traditional methods for multiple comparisons adjustments focus on correcting for modest numbers of comparisons, often in an analysis of variance. A different set of techniques has been developed for "large-scale multiple testing", in which thousands or even greater numbers of tests are performed. For example, in genomics, when using technologies such as microarrays, expression levels of tens of thousands of genes, and genotypes for millions of genetic markers, can be measured. Particularly in the field of genetic association studies, there has been a serious problem with non-replication: a result being strongly statistically significant in one study but failing to be replicated in a follow-up study. Such non-replication can have many causes, but it is widely considered that failure to fully account for the consequences of making multiple comparisons is one of them.[7] It has been argued that advances in measurement and information technology have made it far easier to generate large datasets for exploratory analysis, often leading to the testing of large numbers of hypotheses with no prior basis for expecting many of them to be true. In this situation, very high false positive rates are expected unless multiple comparisons adjustments are made.

For large-scale testing problems where the goal is to provide definitive results, the family-wise error rate remains the most accepted parameter for ascribing significance levels to statistical tests. Alternatively, if a study is viewed as exploratory, or if significant results can be easily re-tested in an independent study, control of the false discovery rate (FDR)[8][9][10] is often preferred. The FDR, loosely defined as the expected proportion of false positives among all significant tests, allows researchers to identify a set of "candidate positives" that can be more rigorously evaluated in a follow-up study.[11]
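
The FDR-controlling procedure of Benjamini and Hochberg[8] is a step-up rule on the sorted p-values. The sketch below is a minimal, unoptimized Python illustration of that rule (assuming independent or positively dependent p-values, the setting in which the basic procedure is valid).

    import numpy as np

    def benjamini_hochberg_reject(pvalues, q=0.05):
        """Reject the nulls with the k smallest p-values, where k is the largest
        i such that the i-th smallest p-value is <= i * q / m."""
        p = np.asarray(pvalues)
        m = len(p)
        order = np.argsort(p)
        passed = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
        reject = np.zeros(m, dtype=bool)
        if passed.size:
            reject[order[:passed[-1] + 1]] = True   # reject everything up to the last pass
        return reject

    # Toy p-values: the first two survive FDR control at q = 0.05
    print(benjamini_hochberg_reject([0.001, 0.008, 0.039, 0.041, 0.6]))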

The practice of trying many unadjusted comparisons in the hope of finding a significant one, whether applied unintentionally or deliberately, is a known problem sometimes called "p-hacking".[12][13]

Assessing whether any alternative hypotheses are true

A normal quantile plot for a simulated set of test statistics that have been standardized to be Z-scores under the null hypothesis. The departure of the upper tail of the distribution from the expected trend along the diagonal is due to the presence of substantially more large test statistic values than would be expected if all null hypotheses were true. The red point corresponds to the fourth largest observed test statistic, which is 3.13, versus an expected value of 2.06. The blue point corresponds to the fifth smallest test statistic, which is -1.75, versus an expected value of -1.96. The graph suggests that it is unlikely that all the null hypotheses are true, and that most or all instances of a true alternative hypothesis result from deviations in the positive direction.

A basic question faced at the outset of analyzing a large set of testing results is whether there is evidence that any of the alternative hypotheses are true. One simple meta-test that can be applied when it is assumed that the tests are independent of each other is to use the Poisson distribution as a model for the number of significant results at a given level α that would be found when all null hypotheses are true.[citation needed] If the observed number of positives is substantially greater than what should be expected, this suggests that there are likely to be some true positives among the significant results.

For example, if 1000 independent tests are performed, each at level α = 0.05, we expect 0.05 × 1000 = 50 significant tests to occur when all null hypotheses are true. Based on the Poisson distribution with mean 50, the probability of observing more than 62 significant tests is less than 0.05, so if more than 62 significant results are observed, it is very likely that some of them correspond to situations where the alternative hypothesis holds. A drawback of this approach is that it overstates the evidence that some of the alternative hypotheses are true when the test statistics are positively correlated, which commonly occurs in practice.[citation needed] On the other hand, the approach remains valid even in the presence of correlation among the test statistics, as long as the Poisson distribution can be shown to provide a good approximation for the number of significant results. This scenario arises, for instance, when mining significant frequent itemsets from transactional datasets. Furthermore, a careful two-stage analysis can bound the FDR at a pre-specified level.[14]
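
The Poisson calculation in this example is easy to reproduce; a minimal sketch using scipy (illustrative only):

    from scipy.stats import poisson

    m, alpha = 1000, 0.05
    mu = m * alpha                # 50 significant results expected if all nulls are true

    print(poisson.ppf(0.95, mu))  # smallest cutoff k with P(count <= k) >= 0.95
    print(poisson.sf(62, mu))     # P(more than 62 significant results) under the all-null model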

Another common approach that can be used in situations where the test statistics can be standardized to Z-scores is to make a normal quantile plot of the test statistics. If the observed quantiles are markedly more dispersed than the normal quantiles, this suggests that some of the significant results may be true positives.[citation needed]
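
Such a plot takes only a few lines of Python; a minimal sketch (the simulated Z-scores here are purely illustrative stand-ins for real test statistics):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    # Stand-in Z-scores: mostly null (mean 0), plus a handful of shifted alternatives
    rng = np.random.default_rng(1)
    z = np.concatenate([rng.normal(0.0, 1.0, 950), rng.normal(3.0, 1.0, 50)])

    stats.probplot(z, dist="norm", plot=plt)   # normal quantile (Q-Q) plot
    plt.show()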


References

  1. Miller, R.G. (1981). Simultaneous Statistical Inference 2nd Ed. Springer Verlag New York. ISBN 978-0-387-90548-8. 
  2. Benjamini, Y. (2010). "Simultaneous and selective inference: Current successes and future challenges". Biometrical Journal 52 (6): 708–721. doi:10.1002/bimj.200900299. PMID 21154895. 
  3. "Home". http://www.mcp-conference.org/. 
  4. Kutner, Michael; Nachtsheim, Christopher; Neter, John; Li, William (2005). Applied Linear Statistical Models. McGraw-Hill Irwin. pp. 744–745. ISBN 9780072386882. https://archive.org/details/appliedlinearsta00kutn_164. 
  5. Aickin, M; Gensler, H (May 1996). "Adjusting for multiple testing when reporting research results: the Bonferroni vs Holm methods". Am J Public Health 86 (5): 726–728. doi:10.2105/ajph.86.5.726. PMID 8629727. 
  6. Bayer, Adrian E.; Seljak, Uroš (2020). "The look-elsewhere effect from a unified Bayesian and frequentist perspective". Journal of Cosmology and Astroparticle Physics 2020 (10): 009. doi:10.1088/1475-7516/2020/10/009. Bibcode: 2020JCAP...10..009B. https://doi.org/10.1088%2F1475-7516%2F2020%2F10%2F009. 
  7. Qu, Hui-Qi; Tien, Matthew; Polychronakos, Constantin (2010-10-01). "Statistical significance in genetic association studies". Clinical and Investigative Medicine 33 (5): E266–E270. ISSN 0147-958X. PMID 20926032. 
  8. Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing". Journal of the Royal Statistical Society, Series B 57 (1): 289–300. 
  9. Storey, JD; Tibshirani, Robert (2003). "Statistical significance for genome-wide studies". PNAS 100 (16): 9440–9445. doi:10.1073/pnas.1530509100. PMID 12883005. Bibcode: 2003PNAS..100.9440S. 
  10. Efron, Bradley; Tibshirani, Robert; Storey, John D.; Tusher, Virginia (2001). "Empirical Bayes analysis of a microarray experiment". Journal of the American Statistical Association 96 (456): 1151–1160. doi:10.1198/016214501753382129. 
  11. Noble, William S. (2009-12-01). "How does multiple testing correction work?" (in en). Nature Biotechnology 27 (12): 1135–1137. doi:10.1038/nbt1209-1135. ISSN 1087-0156. PMID 20010596. 
  12. Young, S. S., Karr, A. (2011). "Deming, data and observational studies". Significance 8 (3): 116–120. doi:10.1111/j.1740-9713.2011.00506.x. http://www.niss.org/sites/default/files/Young%20Karr%20Obs%20Study%20Problem.pdf. 
  13. Smith, G. D., Shah, E. (2002). "Data dredging, bias, or confounding". BMJ 325 (7378): 1437–1438. doi:10.1136/bmj.325.7378.1437. PMID 12493654. 
  14. Kirsch, A; Mitzenmacher, M; Pietracaprina, A; Pucci, G; Upfal, E; Vandin, F (June 2012). "An Efficient Rigorous Approach for Identifying Statistically Significant Frequent Itemsets". Journal of the ACM 59 (3): 12:1–12:22. doi:10.1145/2220357.2220359. 

Further reading

  • F. Betz, T. Hothorn, P. Westfall (2010), Multiple Comparisons Using R, CRC Press
  • S. Dudoit and M. J. van der Laan (2008), Multiple Testing Procedures with Application to Genomics, Springer
  • Farcomeni, A. (2008). "A Review of Modern Multiple Hypothesis Testing, with particular attention to the false discovery proportion". Statistical Methods in Medical Research 17 (4): 347–388. doi:10.1177/0962280206079046. PMID 17698936. 
  • Phipson, B.; Smyth, G. K. (2010). "Permutation P-values Should Never Be Zero: Calculating Exact P-values when Permutations are Randomly Drawn". Statistical Applications in Genetics and Molecular Biology 9: Article39. doi:10.2202/1544-6115.1585. PMID 21044043. 
  • P. H. Westfall and S. S. Young (1993), Resampling-based Multiple Testing: Examples and Methods for p-Value Adjustment, Wiley
  • P. Westfall, R. Tobias, R. Wolfinger (2011) Multiple comparisons and multiple testing using SAS, 2nd edn, SAS Institute
  • A gallery of examples of implausible correlations sourced by data dredging
  • An xkcd comic about the multiple comparisons problem, using jelly beans and acne as an example