Harmonic mean p-value
The harmonic mean p-value[1][2][3] (HMP) is a statistical technique for addressing the multiple comparisons problem that controls the strong-sense family-wise error rate[2] (a claim that has been disputed[4]). It improves on the power of Bonferroni correction by performing combined tests, i.e. by testing whether groups of p-values are statistically significant, like Fisher's method.[5] However, unlike Fisher's method, it avoids the restrictive assumption that the p-values are independent.[2][3] Consequently, it controls the false positive rate when tests are dependent, at the expense of less power (i.e. a higher false negative rate) when tests are independent.[2] Besides providing an alternative to approaches such as Bonferroni correction that control the stringent family-wise error rate, it also provides an alternative to the widely used Benjamini-Hochberg procedure (BH) for controlling the less stringent false discovery rate.[6] This is because the power of the HMP to detect significant groups of hypotheses is greater than the power of BH to detect significant individual hypotheses.[2]
There are two versions of the technique: (i) direct interpretation of the HMP as an approximate p-value and (ii) a procedure for transforming the HMP into an asymptotically exact p-value. The approach provides a multilevel test procedure in which the smallest groups of p-values that are statistically significant may be sought.
Direct interpretation of the harmonic mean p-value
The weighted harmonic mean of p-values [math]\displaystyle{ p_1, \dots, p_L }[/math] is defined as [math]\displaystyle{ \overset{\circ}{p} = \frac{\sum_{i=1}^L w_{i}}{\sum_{i=1}^L w_{i}/p_{i}}, }[/math] where [math]\displaystyle{ w_1, \dots, w_L }[/math] are weights that must sum to one, i.e. [math]\displaystyle{ \sum_{i=1}^L w_i=1 }[/math]. Equal weights may be chosen, in which case [math]\displaystyle{ w_i=1/L }[/math].
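As an illustration, a minimal sketch of this calculation in Python (the function name and example p-values are illustrative, not taken from the references) is:

```python
# A minimal sketch of the weighted harmonic mean p-value; the function name
# and example p-values are illustrative, not from the references.
import numpy as np

def harmonic_mean_pvalue(p, w=None):
    """Weighted harmonic mean of p-values, with weights summing to one."""
    p = np.asarray(p, dtype=float)
    if w is None:
        w = np.full(p.shape, 1.0 / p.size)  # equal weights w_i = 1/L
    w = np.asarray(w, dtype=float)
    return w.sum() / np.sum(w / p)

# Example: five p-values, one of which is individually very small.
print(harmonic_mean_pvalue([0.0001, 0.2, 0.4, 0.6, 0.8]))  # approx. 0.0005
```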
In general, interpreting the HMP directly as a p-value is anti-conservative, meaning that the false positive rate is higher than the nominal level. However, under certain assumptions the discrepancy decreases as the HMP becomes smaller, so that direct interpretation achieves a false positive rate close to the nominal level for sufficiently small values (e.g. [math]\displaystyle{ \overset{\circ}{p}\lt 0.05 }[/math]).[2]
The HMP is never anti-conservative by more than a factor of [math]\displaystyle{ e\,\log L }[/math] for small [math]\displaystyle{ L }[/math], or [math]\displaystyle{ \log L }[/math] for large [math]\displaystyle{ L }[/math].[3] However, these bounds represent worst case scenarios under arbitrary dependence that are likely to be conservative in practice. Rather than applying these bounds, asymptotically exact p-values can be produced by transforming the HMP.
Asymptotically exact harmonic mean p-value procedure
The generalized central limit theorem shows that an asymptotically exact p-value, [math]\displaystyle{ p_{\overset{\circ}{p}} }[/math], can be computed from the HMP, [math]\displaystyle{ \overset{\circ}{p} }[/math], using the formula[2] [math]\displaystyle{ p_{\overset{\circ}{p}} = \int_{1/\overset{\circ}{p}}^\infty f_\textrm{Landau}\left(x\,|\,\log L+0.874,\frac{\pi}{2}\right) \mathrm{d} x. }[/math] Subject to the assumptions of the generalized central limit theorem, this transformed p-value becomes exact as the number of tests, [math]\displaystyle{ L }[/math], becomes large. The computation uses the Landau distribution, whose density function can be written [math]\displaystyle{ f_\textrm{Landau}(x\,|\,\mu,\sigma) = \frac{1}{\pi\sigma}\int_0^\infty \textrm{e}^{-t\frac{(x-\mu)}{\sigma} -\frac{2}{\pi}t \log t}\,\sin(2t)\,\textrm{d}t. }[/math] The test is implemented by the p.hmp command of the harmonicmeanp R package; a tutorial is available online.
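A hedged numerical sketch of this transformation (not the p.hmp reference implementation; the function name and tolerances are illustrative) can be obtained by integrating the Landau density over [math]\displaystyle{ x }[/math] first, which reduces the tail probability to a single integral in [math]\displaystyle{ t }[/math]:

```python
# A minimal numerical sketch, not the reference implementation (the published
# implementation is p.hmp in the harmonicmeanp R package). Integrating the
# Landau density over x first reduces the tail probability to a single
# integral in t:
#   P(X > x0) = (1/pi) * int_0^inf exp(-t*(x0 - mu)/sigma
#                                       - (2/pi)*t*log(t)) * sin(2*t)/t dt,
# with mu = log(L) + 0.874, sigma = pi/2 and x0 = 1/HMP.
import numpy as np
from scipy.integrate import quad

def hmp_asymptotically_exact(hmp, L):
    """Transform a harmonic mean p-value into an asymptotically exact p-value."""
    mu = np.log(L) + 0.874
    sigma = np.pi / 2
    x0 = 1.0 / hmp

    def integrand(t):
        if t == 0.0:
            return 2.0  # limit of sin(2t)/t as t -> 0, with the exponent -> 0
        return np.exp(-t * (x0 - mu) / sigma - (2 / np.pi) * t * np.log(t)) \
            * np.sin(2 * t) / t

    tail, _ = quad(integrand, 0.0, np.inf, limit=200)
    return tail / np.pi

# Example: 100 tests whose harmonic mean p-value is 0.03; the transformed
# p-value is somewhat larger than the HMP because of the Landau heavy tail.
print(hmp_asymptotically_exact(0.03, 100))
```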
Equivalently, one can compare the HMP to a table of critical values (Table 1). The table illustrates that the smaller the false positive rate, and the smaller the number of tests, the closer the critical value is to the false positive rate.
Table 1. Critical values [math]\displaystyle{ \alpha_L }[/math] for the HMP at false positive rate [math]\displaystyle{ \alpha }[/math], by number of tests [math]\displaystyle{ L }[/math].

| [math]\displaystyle{ L }[/math] | [math]\displaystyle{ \alpha=0.05 }[/math] | [math]\displaystyle{ \alpha=0.01 }[/math] | [math]\displaystyle{ \alpha=0.001 }[/math] |
|---|---|---|---|
| 10 | 0.040 | 0.0094 | 0.00099 |
| 100 | 0.036 | 0.0092 | 0.00099 |
| 1,000 | 0.034 | 0.0090 | 0.00099 |
| 10,000 | 0.031 | 0.0088 | 0.00098 |
| 100,000 | 0.029 | 0.0086 | 0.00098 |
| 1,000,000 | 0.027 | 0.0084 | 0.00098 |
| 10,000,000 | 0.026 | 0.0083 | 0.00098 |
| 100,000,000 | 0.024 | 0.0081 | 0.00098 |
| 1,000,000,000 | 0.023 | 0.0080 | 0.00097 |
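Assuming the numerical sketch above, the tabulated values can be cross-checked; for example, for [math]\displaystyle{ L=10 }[/math] the critical value 0.040 should transform to a p-value of approximately 0.05:

```python
# Cross-check of Table 1 using the sketch above: for L = 10 tests, an HMP equal
# to the tabulated critical value 0.040 should transform to roughly 0.05.
print(hmp_asymptotically_exact(0.040, 10))   # expected to be close to 0.05
```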
Multiple testing via the multilevel test procedure
If the HMP is significant at some level [math]\displaystyle{ \alpha }[/math] for a group of [math]\displaystyle{ L }[/math] p-values, one may search all subsets of the [math]\displaystyle{ L }[/math] p-values for the smallest significant group, while maintaining the strong-sense family-wise error rate.[2] Formally, this constitutes a closed-testing procedure.[7]
When [math]\displaystyle{ \alpha }[/math] is small (e.g. [math]\displaystyle{ \alpha\lt 0.05 }[/math]), the following multilevel test based on direct interpretation of the HMP controls the strong-sense family-wise error rate at level approximately [math]\displaystyle{ \alpha: }[/math]
1. Define the HMP of any subset [math]\displaystyle{ \mathcal{R} }[/math] of the [math]\displaystyle{ L }[/math] p-values to be [math]\displaystyle{ \overset{\circ}{p}_\mathcal{R} = \frac{\sum_{i\in\mathcal{R}} w_{i}}{\sum_{i\in\mathcal{R}} w_{i}/p_{i}}. }[/math]
2. Reject the null hypothesis that none of the p-values in subset [math]\displaystyle{ \mathcal{R} }[/math] are significant if [math]\displaystyle{ \overset{\circ}{p}_\mathcal{R}\leq\alpha\,w_\mathcal{R} }[/math], where [math]\displaystyle{ w_\mathcal{R}=\sum_{i\in\mathcal{R}}w_i }[/math]. (Recall that, by definition, [math]\displaystyle{ \sum_{i=1}^L w_i=1 }[/math].)
An asymptotically exact version of the above replaces [math]\displaystyle{ \overset{\circ}{p}_\mathcal{R} }[/math] in step 2 with [math]\displaystyle{ p_{\overset{\circ}{p}_\mathcal{R}} = \max\left\{\overset{\circ}{p}_\mathcal{R}, w_{\mathcal{R}} \int_{w_{\mathcal{R}}/\overset{\circ}{p}_\mathcal{R}}^\infty f_\textrm{Landau}\left(x\,|\,\log L +0.874,\frac{\pi}{2}\right) \mathrm{d} x\right\}, }[/math] where [math]\displaystyle{ L }[/math] gives the total number of p-values, not just those in subset [math]\displaystyle{ \mathcal{R} }[/math].[8]
Since direct interpretation of the HMP is faster, a two-pass procedure may be used to identify subsets of p-values that are likely to be significant using direct interpretation, subject to confirmation using the asymptotically exact formula.
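A minimal sketch of the direct-interpretation criterion (step 2 above) applied to a chosen collection of subsets, with illustrative function and variable names, is:

```python
# A hedged sketch (illustrative names) of the direct-interpretation multilevel
# test: a subset R is declared significant when its weighted HMP satisfies
# hmp_R <= alpha * w_R, where w_R is the total weight of the p-values in R.
import numpy as np

def multilevel_test(p, subsets, alpha=0.05, w=None):
    """Apply the direct HMP criterion to each subset of indices in `subsets`."""
    p = np.asarray(p, dtype=float)
    if w is None:
        w = np.full(p.shape, 1.0 / p.size)  # equal weights summing to one
    results = {}
    for subset in subsets:
        idx = np.asarray(list(subset))
        w_R = w[idx].sum()
        hmp_R = w_R / np.sum(w[idx] / p[idx])
        results[tuple(subset)] = (hmp_R, hmp_R <= alpha * w_R)
    return results

# Example: ten p-values; test the full set and the first two p-values together.
p = [0.0005, 0.003, 0.12, 0.33, 0.41, 0.52, 0.64, 0.77, 0.85, 0.93]
for subset, (hmp_R, significant) in multilevel_test(p, [range(10), [0, 1]]).items():
    print(subset, round(hmp_R, 5), significant)
```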
Properties of the HMP
The HMP has a range of properties that arise from the generalized central limit theorem.[2] It is:
- Robust to positive dependency between the p-values.
- Insensitive to the exact number of tests, L.
- Robust to the distribution of weights, w.
- Most influenced by the smallest p-values.
When the HMP is not significant, neither is any subset of the constituent tests. Conversely, when the multilevel test deems a subset of p-values to be significant, the HMP for all the p-values combined is likely to be significant; this is certain when the HMP is interpreted directly. When the goal is to assess the significance of individual p-values, so that combined tests concerning groups of p-values are of no interest, the HMP is equivalent to the Bonferroni procedure but subject to the more stringent significance threshold [math]\displaystyle{ \alpha_L\lt \alpha }[/math] (Table 1).
The HMP assumes the individual p-values have (not necessarily independent) standard uniform distributions when their null hypotheses are true. Large numbers of underpowered tests can therefore harm the power of the HMP.
While the choice of weights does not affect the validity of the HMP under the null hypothesis, the weights do influence the power of the procedure. Section 5C of the Supplementary Methods of the original paper[2] and an online tutorial consider the issue in more detail.
Bayesian interpretations of the HMP
The HMP was conceived by analogy to Bayesian model averaging and can be interpreted as inversely proportional to a model-averaged Bayes factor when combining p-values from likelihood ratio tests.[1][2]
The harmonic mean rule-of-thumb
I. J. Good reported an empirical relationship between the Bayes factor and the p-value from a likelihood ratio test.[1] For a null hypothesis [math]\displaystyle{ H_0 }[/math] nested in a more general alternative hypothesis [math]\displaystyle{ H_A, }[/math] he observed that often [math]\displaystyle{ \textrm{BF}_i\approx \frac{1}{\gamma\,p_i},\quad3\frac{1}{3}\lt \gamma\lt 30, }[/math] where [math]\displaystyle{ \textrm{BF}_i }[/math] denotes the Bayes factor in favour of [math]\displaystyle{ H_A }[/math] versus [math]\displaystyle{ H_0. }[/math] Extrapolating, he proposed a rule of thumb in which the HMP is taken to be inversely proportional to the model-averaged Bayes factor for a collection of [math]\displaystyle{ L }[/math] tests with a common null hypothesis: [math]\displaystyle{ \overline{\textrm{BF}}=\sum_{i=1}^L w_i\,\textrm{BF}_i \approx \sum_{i=1}^L \frac{w_i}{\gamma\,p_i} = \frac{1}{\gamma\,\overset{\circ}{p}}. }[/math] For Good, this rule of thumb supported an interchangeability between Bayesian and classical approaches to hypothesis testing.[9][10][11][12][13]
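As a rough numerical illustration of the rule of thumb (the HMP value below is arbitrary), the implied range of the model-averaged Bayes factor is:

```python
# Illustrative only: Good's rule of thumb converts a harmonic mean p-value into
# a rough range for the model-averaged Bayes factor, since gamma lies between
# 10/3 and 30.
hmp = 0.001                                   # arbitrary harmonic mean p-value
bf_low, bf_high = 1 / (30 * hmp), 1 / ((10 / 3) * hmp)
print(f"model-averaged BF roughly between {bf_low:.0f} and {bf_high:.0f}")
# model-averaged BF roughly between 33 and 300
```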
Bayesian calibration of p-values
If the distributions of the p-values under the alternative hypotheses follow Beta distributions with parameters [math]\displaystyle{ \left(0\lt \xi_i\lt 1, 1\right) }[/math], a form considered by Sellke, Bayarri and Berger,[14] then the inverse proportionality between the model-averaged Bayes factor and the HMP can be formalized as[2][15][math]\displaystyle{ \overline{\textrm{BF}}=\sum_{i=1}^L \mu_i\,\textrm{BF}_i=\sum_{i=1}^L \mu_i\,\xi_i\,p_i^{\xi_i-1}\approx\bar\xi\sum_{i=1}^L w_i\,p_i^{-1}=\frac{\bar\xi}{\overset{\circ}{p}}, }[/math] where
- [math]\displaystyle{ \mu_i }[/math] is the prior probability of alternative hypothesis [math]\displaystyle{ i, }[/math] such that [math]\displaystyle{ \sum_{i=1}^L\mu_i=1, }[/math]
- [math]\displaystyle{ \xi_i/(1+\xi_i) }[/math] is the expected value of [math]\displaystyle{ p_i }[/math] under alternative hypothesis [math]\displaystyle{ i, }[/math]
- [math]\displaystyle{ w_i=u_i/\bar\xi }[/math] is the weight attributed to p-value [math]\displaystyle{ i, }[/math]
- [math]\displaystyle{ u_i = \left(\mu_i\,\xi_i\right)^{1/(1-\xi_i)} }[/math] incorporates the prior model probabilities and powers into the weights, and
- [math]\displaystyle{ \bar\xi = \sum_{i=1}^L u_i }[/math] normalizes the weights.
The approximation works best for well-powered tests ([math]\displaystyle{ \xi_i\ll 1 }[/math]).
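A small numerical check of this approximation, using made-up values of [math]\displaystyle{ \xi_i }[/math], [math]\displaystyle{ \mu_i }[/math] and [math]\displaystyle{ p_i }[/math], is:

```python
# Illustrative numerical check (made-up values) that the model-averaged Bayes
# factor is approximately xi_bar / HMP under Beta(xi_i, 1) alternatives.
import numpy as np

xi = np.array([0.05, 0.10, 0.08])    # Beta(xi_i, 1) shape parameters
mu = np.array([0.5, 0.3, 0.2])       # prior model probabilities (sum to one)
p = np.array([0.002, 0.08, 0.3])     # observed p-values

bf = xi * p ** (xi - 1)              # Bayes factor for each alternative
bf_avg = np.sum(mu * bf)             # exact model-averaged Bayes factor

u = (mu * xi) ** (1 / (1 - xi))      # ingredients of the weights
xi_bar = u.sum()                     # normalizing constant
w = u / xi_bar                       # weights summing to one
hmp = w.sum() / np.sum(w / p)        # weighted harmonic mean p-value

print(bf_avg, xi_bar / hmp)          # the two quantities should roughly agree
```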
The harmonic mean p-value as a bound on the Bayes factor
For likelihood ratio tests with exactly two degrees of freedom, Wilks' theorem implies that [math]\displaystyle{ p_i=1/R_i }[/math], where [math]\displaystyle{ R_i }[/math] is the maximized likelihood ratio in favour of alternative hypothesis [math]\displaystyle{ i, }[/math] and therefore [math]\displaystyle{ \overset{\circ}{p}=1/\bar{R} }[/math], where [math]\displaystyle{ \bar{R} }[/math] is the weighted mean maximized likelihood ratio, using weights [math]\displaystyle{ w_1,\dots,w_L. }[/math] Since [math]\displaystyle{ R_i }[/math] is an upper bound on the Bayes factor, [math]\displaystyle{ \textrm{BF}_i }[/math], then [math]\displaystyle{ 1/\overset{\circ}{p} }[/math] is an upper bound on the model-averaged Bayes factor:[math]\displaystyle{ \overline{\textrm{BF}}\leq\frac{1}{\overset{\circ}{p}}. }[/math]While the equivalence holds only for two degrees of freedom, the relationship between [math]\displaystyle{ \overset{\circ}{p} }[/math] and [math]\displaystyle{ \bar{R}, }[/math] and therefore [math]\displaystyle{ \overline{\textrm{BF}}, }[/math] behaves similarly for other degrees of freedom.[2]
Under the assumption that the distributions of the p-values under the alternative hypotheses follow Beta distributions with parameters [math]\displaystyle{ \left(1, \kappa_i\gt 1\right), }[/math] and that the weights [math]\displaystyle{ w_i=\mu_i, }[/math] the HMP provides a tighter upper bound on the model-averaged Bayes factor:[math]\displaystyle{ \overline{\textrm{BF}}\leq \frac{1}{e\,\overset{\circ}{p}}, }[/math]a result that again reproduces the inverse proportionality of Good's empirical relationship.[16]
References
- ↑ 1.0 1.1 1.2 Good, I. J. (1958). "Significance tests in parallel and in series". Journal of the American Statistical Association 53 (284): 799–813. doi:10.1080/01621459.1958.10501480.
- ↑ 2.00 2.01 2.02 2.03 2.04 2.05 2.06 2.07 2.08 2.09 2.10 2.11 2.12 2.13 Wilson, Daniel J. (2019). "The harmonic mean p-value for combining dependent tests". Proceedings of the National Academy of Sciences USA 116 (4): 1195–1200. doi:10.1073/pnas.1814092116. PMID 30610179.
- ↑ 3.0 3.1 3.2 Vovk, Vladimir; Wang, Ruodu (April 25, 2019). "Combining p-values via averaging". http://alrw.net/articles/21.pdf.
- ↑ Goeman, Jelle J.; Rosenblatt, Jonathan D.; Nichols, Thomas E. (2019-11-19). "The harmonic mean p-value: Strong versus weak control, and the assumption of independence" (in en). Proceedings of the National Academy of Sciences 116 (47): 23382–23383. doi:10.1073/pnas.1909339116. ISSN 0027-8424. PMID 31662466.
- ↑ Fisher, R A (1934). Statistical Methods for Research Workers (5th ed.). Edinburgh, UK: Oliver and Boyd.
- ↑ "Controlling the false discovery rate: A practical and powerful approach to multiple testing". Journal of the Royal Statistical Society. Series B (Methodological) 57 (1): 289–300. 1995. doi:10.1111/j.2517-6161.1995.tb02031.x.
- ↑ "On closed testing procedures with special reference to ordered analysis of variance.". Biometrika 63 (3): 655–660. 1976. doi:10.1093/biomet/63.3.655.
- ↑ Wilson, Daniel J (August 17, 2019). "Updated correction to "The harmonic mean p-value for combining independent tests".". http://www.danielwilson.me.uk/files/wilson_2019_annotated_corrections.v2.pdf.
- ↑ "C192. One tail versus two-tails, and the harmonic-mean rule of thumb.". Journal of Statistical Computation and Simulation 19 (2): 174–176. 1984. doi:10.1080/00949658408810727.
- ↑ "C193. Paired versus unpaired comparisons and the harmonic-mean rule of thumb.". Journal of Statistical Computation and Simulation 19 (2): 176–177. 1984. doi:10.1080/00949658408810728.
- ↑ "C213. A sharpening of the harmonic-mean rule of thumb for combining tests "in parallel".". Journal of Statistical Computation and Simulation 20 (2): 173–176. 1984. doi:10.1080/00949658408810770.
- ↑ "C214. The harmonic-mean rule of thumb: Some classes of applications.". Journal of Statistical Computation and Simulation 20 (2): 176–179. 1984. doi:10.1080/00949658408810771.
- ↑ Good, Irving John. (2009). Good thinking : the foundations of probability and its applications. Dover Publications. ISBN 9780486474380. OCLC 319491702.
- ↑ Sellke, Thomas; Bayarri, M. J; Berger, James O (2001). "Calibration of p Values for Testing Precise Null Hypotheses". The American Statistician 55 (1): 62–71. doi:10.1198/000313001300339950. ISSN 0003-1305.
- ↑ "Reply to Held: When is a harmonic mean p-value a Bayes factor?". Proceedings of the National Academy of Sciences USA 116 (13): 5857–5858. 2019. doi:10.1073/pnas.1902157116. PMID 30890643. PMC 6442550. http://www.danielwilson.me.uk/preprints/wilson_2019b.pdf.
- ↑ "On the Bayesian interpretation of the harmonic mean p-value". Proceedings of the National Academy of Sciences USA 116 (13): 5855–5856. 2019. doi:10.1073/pnas.1900671116. PMID 30890644.
Original source: https://en.wikipedia.org/wiki/Harmonic_mean_p-value