False coverage rate
In statistics, a false coverage rate (FCR) is the average rate of false coverage, i.e. not covering the true parameters, among the selected intervals. The FCR gives simultaneous coverage at a (1 − α)×100% level for all of the parameters considered in the problem. The FCR has a strong connection to the false discovery rate (FDR): both address the problem of multiple comparisons, the FCR from the point of view of confidence intervals (CIs) and the FDR from that of p-values.
FCR control is needed because of the dangers of selective inference. Researchers and scientists tend to report or highlight only the portion of the data considered significant, without clearly indicating the various hypotheses that were considered. It is therefore necessary to account for how often the reported intervals fail to cover the true parameters. There are several FCR procedures, which differ in the length of the CIs they produce: Bonferroni-selected–Bonferroni-adjusted CIs and FCR-adjusted BH-selected CIs (Benjamini and Yekutieli 2005[1]). The incentive for choosing one procedure over another is to keep the CIs as narrow as possible while maintaining FCR control. For microarray experiments and other modern applications there are a huge number of parameters, often tens of thousands or more, and choosing the most powerful procedure is correspondingly important.
The FCR was first introduced by Daniel Yekutieli in his PhD thesis in 2001.[2]
Definitions
Failing to control the FCR means that [math]\displaystyle{ \text{FCR}\gt q }[/math], where [math]\displaystyle{ q= \frac{V}{R} = \frac{\alpha m_0}{R} }[/math], [math]\displaystyle{ m_0 }[/math] is the number of true null hypotheses, [math]\displaystyle{ R }[/math] is the number of rejected hypotheses, [math]\displaystyle{ V }[/math] is the number of false positives, and [math]\displaystyle{ \alpha }[/math] is the significance level. Intervals with simultaneous coverage probability [math]\displaystyle{ 1-q }[/math] control the FCR so that it is bounded by [math]\displaystyle{ q }[/math].
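As a concrete illustration, the false coverage proportion and the bound above can be computed as follows. This is a minimal sketch: the arrays `covered` and `selected` and the function name are hypothetical, standing for per-parameter indicators of whether each CI covers its true value and whether it was selected.

```python
import numpy as np

def false_coverage_proportion(covered, selected):
    """Proportion of selected CIs that fail to cover their parameter.

    covered  : boolean array, True if the CI for parameter i covers its true value
    selected : boolean array, True if parameter i was selected (its CI was constructed)
    Returns 0 by convention when no parameter is selected (R = 0).
    """
    covered = np.asarray(covered, dtype=bool)
    selected = np.asarray(selected, dtype=bool)
    R = selected.sum()
    if R == 0:
        return 0.0
    V = (selected & ~covered).sum()   # selected intervals missing the true parameter
    return V / R

# Bound from the definition above: with m0 true nulls, R selections and level alpha,
# intervals built with simultaneous coverage 1 - q keep the FCR at or below q.
alpha, m0, R = 0.05, 80, 40
print("q =", alpha * m0 / R)
```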
Classification of multiple hypothesis tests
The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm. Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant. Summing each type of outcome over all Hi yields the following random variables:
| | Null hypothesis is true (H0) | Alternative hypothesis is true (HA) | Total |
|---|---|---|---|
| Test is declared significant | V | S | R |
| Test is declared non-significant | U | T | [math]\displaystyle{ m - R }[/math] |
| Total | [math]\displaystyle{ m_0 }[/math] | [math]\displaystyle{ m - m_0 }[/math] | m |
- m is the total number of hypotheses tested
- [math]\displaystyle{ m_0 }[/math] is the number of true null hypotheses, an unknown parameter
- [math]\displaystyle{ m - m_0 }[/math] is the number of true alternative hypotheses
- V is the number of false positives (Type I error) (also called "false discoveries")
- S is the number of true positives (also called "true discoveries")
- T is the number of false negatives (Type II error)
- U is the number of true negatives
- [math]\displaystyle{ R=V+S }[/math] is the number of rejected null hypotheses (also called "discoveries", either true or false)
In m hypothesis tests of which [math]\displaystyle{ m_0 }[/math] are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables.
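The counts in the table can be tallied directly when the truth is known, for instance in a simulation study. The sketch below assumes two hypothetical boolean arrays, `true_null` and `rejected`, of length m; the function name is illustrative.

```python
import numpy as np

def tally_outcomes(true_null, rejected):
    """Tally the table's random variables from two boolean arrays of length m."""
    true_null = np.asarray(true_null, dtype=bool)
    rejected = np.asarray(rejected, dtype=bool)
    V = int((rejected & true_null).sum())     # false positives (false discoveries)
    S = int((rejected & ~true_null).sum())    # true positives (true discoveries)
    U = int((~rejected & true_null).sum())    # true negatives
    T = int((~rejected & ~true_null).sum())   # false negatives
    return {"V": V, "S": S, "U": U, "T": T, "R": V + S, "m": true_null.size}
```

In real data only R and m are observable; V, S, T, and U can be tallied like this only when the true nulls are known, as in a simulation.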
The problems addressed by FCR
Selection
Selection causes reduced average coverage. Selection can be presented as conditioning on an event defined by the data, and it may affect the coverage probability of a CI for a single parameter. Equivalently, the problem of selection changes the basic meaning of p-values. FCR procedures accept that the goal of conditional coverage following any selection rule, for every set of (unknown) parameter values, is impossible to achieve. A weaker property is achievable for selective CIs and still avoids false coverage statements: the FCR is a measure of interval coverage following selection. Therefore, even though a 1 − α CI does not offer selective (conditional) coverage, the probability of constructing a non-covering CI is at most α (see the simulation sketch after the inequality below), where
- [math]\displaystyle{ \Pr[\theta \not\in \mathrm{CI},\ \text{CI constructed}] \leq \Pr[\theta \not\in \mathrm{CI}] \leq \alpha }[/math]
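This can be seen in a small simulation, sketched here under an assumed normal model with unit variance and a hypothetical true value θ = 1: the coverage of an ordinary 95% CI, conditional on the estimate being "significant", falls below 95%, while the probability of the joint event "CI constructed and not covering" stays below α.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, z, theta, n_rep = 0.05, 1.96, 1.0, 200_000   # z: standard normal 97.5% quantile

x = rng.normal(theta, 1.0, n_rep)        # one estimate per replicate
lo, hi = x - z, x + z                    # ordinary 1 - alpha CIs
covered = (lo <= theta) & (theta <= hi)
selected = np.abs(x) > z                 # report the CI only when "significant"

print("coverage given selection:   ", covered[selected].mean())    # below 0.95
print("P(selected and not covered):", (selected & ~covered).mean())  # at most alpha
```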
Selection and multiplicity
When facing both multiplicity (inference about multiple parameters) and selection, keeping the expected proportion of coverage over the selected parameters at 1 − α is not equivalent to keeping the expected proportion of non-coverage at α, and the latter can no longer be ensured by constructing a marginal CI for each selected parameter. FCR procedures address this by controlling the expected proportion of parameters not covered by their CIs among the selected parameters, with the proportion taken as 0 if no parameter is selected. This false coverage-statement rate (FCR) is a property of any procedure that is defined by the way in which parameters are selected and the way in which the multiple intervals are constructed.
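In symbols, writing [math]\displaystyle{ R_{\mathrm{CI}} }[/math] for the number of CIs constructed for selected parameters and [math]\displaystyle{ V_{\mathrm{CI}} }[/math] for the number of those failing to cover their parameter (notation introduced here only for illustration), this definition reads

- [math]\displaystyle{ \text{FCR} = \operatorname{E}\!\left[\frac{V_{\mathrm{CI}}}{\max(R_{\mathrm{CI}},\,1)}\right] }[/math]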
Controlling procedures
Bonferroni procedure (Bonferroni-selected–Bonferroni-adjusted) for simultaneous CI
With the Bonferroni procedure for m parameters, each marginal CI is constructed at the 1 − α/m level. Without selection, these CIs offer simultaneous coverage, in the sense that the probability that all CIs cover their respective parameters is at least 1 − α. Unfortunately, even such a strong property does not ensure the conditional confidence property following selection.
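A minimal sketch of such Bonferroni-adjusted simultaneous CIs, assuming independent normal estimates with a known common standard error (the function name and the normal model are assumptions for illustration):

```python
import numpy as np
from scipy.stats import norm

def bonferroni_cis(estimates, alpha=0.05, se=1.0):
    """Marginal CIs at level 1 - alpha/m for m estimates with standard error se.

    Under the assumed normal model, the probability that every interval covers
    its parameter is at least 1 - alpha (simultaneous coverage).
    """
    estimates = np.asarray(estimates, dtype=float)
    m = estimates.size
    z = norm.ppf(1 - alpha / (2 * m))   # wider critical value than the usual 1 - alpha/2
    return np.column_stack((estimates - z * se, estimates + z * se))
```

With m = 20 and α = 0.05, for instance, each interval is built at the 99.75% level.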
FCR for Bonferroni-selected–Bonferroni-adjusted simultaneous CI
The Bonferroni–Bonferroni procedure cannot offer conditional coverage, but it does control the FCR below α. In fact, it does so too well, in the sense that the FCR is much too close to 0 for large values of θ. Here interval selection is based on Bonferroni testing, and Bonferroni-adjusted CIs are then constructed for the selected parameters. The FCR is estimated as the proportion of intervals failing to cover their respective parameters among the constructed CIs, setting the proportion to 0 when none are selected. In contrast, when selection is based on unadjusted individual testing and unadjusted CIs are constructed, the FCR is not controlled.
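A simulation sketch of this procedure under an assumed normal-means model (the particular means, seed, and replicate counts below are arbitrary choices for illustration); the estimated FCR typically comes out well below α, matching the conservativeness noted above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, m, n_rep = 0.05, 100, 2000
theta = np.zeros(m)
theta[:10] = 4.0                          # assumed: 10 non-null means, 90 true nulls
z_m = norm.ppf(1 - alpha / (2 * m))       # same cutoff for the Bonferroni test and CI

props = []
for _ in range(n_rep):
    x = rng.normal(theta, 1.0)
    selected = np.abs(x) > z_m            # Bonferroni-selected
    R = selected.sum()
    if R == 0:
        props.append(0.0)                 # convention: proportion is 0 when nothing is selected
        continue
    lo, hi = x - z_m, x + z_m             # Bonferroni-adjusted 1 - alpha/m CIs
    miss = selected & ((theta < lo) | (theta > hi))
    props.append(miss.sum() / R)

print("estimated FCR:", np.mean(props))   # well below alpha = 0.05 in this setting
```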
FCR-adjusted BH-selected CIs
In the BH procedure for the FDR, after sorting the p-values P(1) ≤ … ≤ P(m) and calculating R = max{j : P(j) ≤ j·q/m}, the R null hypotheses for which P(i) ≤ R·q/m are rejected. If testing is done using the Bonferroni procedure, then the lower bound of the FCR may drop well below the desired level q, implying that the intervals are too long. In contrast, applying the following procedure, which combines the general procedure with the FDR-controlling testing of the BH procedure, also yields a lower bound for the FCR, q/2 ≤ FCR. This procedure is sharp in the sense that for some configurations the FCR approaches q. (A code sketch of the four steps follows the list below.)
1. Sort the p-values used for testing the m hypotheses regarding the parameters, P(1) ≤ … ≤ P(m).
2. Calculate R = max{i : P(i) ≤ i·q/m}.
3. Select the R parameters for which P(i) ≤ R·q/m, corresponding to the rejected hypotheses.
4. Construct a 1 − R·q/m CI for each parameter selected.
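A sketch of these four steps, assuming normal estimates with known standard errors and two-sided p-values; the function name and the normal model are assumptions for illustration, not part of the procedure as stated above.

```python
import numpy as np
from scipy.stats import norm

def fcr_adjusted_bh_cis(estimates, se, q=0.05):
    """FCR-adjusted BH-selected CIs, sketched for a normal model.

    estimates : point estimates of the m parameters
    se        : their standard errors
    q         : desired FCR level
    Returns the indices of the selected parameters and their 1 - R*q/m CIs.
    """
    estimates = np.asarray(estimates, dtype=float)
    se = np.asarray(se, dtype=float)
    m = estimates.size
    p = 2 * norm.sf(np.abs(estimates / se))                   # step 1: two-sided p-values
    p_sorted = np.sort(p)
    below = p_sorted <= np.arange(1, m + 1) * q / m
    R = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0   # step 2
    if R == 0:
        return np.array([], dtype=int), np.empty((0, 2))
    selected = np.where(p <= R * q / m)[0]                    # step 3
    z = norm.ppf(1 - R * q / (2 * m))                         # step 4: level 1 - R*q/m
    cis = np.column_stack((estimates[selected] - z * se[selected],
                           estimates[selected] + z * se[selected]))
    return selected, cis
```

The selected parameters thus receive intervals at level 1 − R·q/m: wider than unadjusted 1 − q intervals, but no wider than Bonferroni 1 − q/m intervals.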
See also
- False positive rate
- Post-hoc analysis
References
Footnotes
- ↑ Benjamini, Yoav; Yekutieli, Daniel (March 2005). "False Discovery Rate–Adjusted Multiple Confidence Intervals for Selected Parameters". Journal of the American Statistical Association 100 (469): 71–93. doi:10.1198/016214504000001907. http://www.math.tau.ac.il/~ybenja/MyPapers/benjamini_yekutieli_JASA2005.pdf
- ↑ Yekutieli, Daniel (April 2001). Theoretical Results Needed for Applying the False Discovery Rate in Statistical Problems. PhD thesis (Section 3.2, page 51).
Other Sources
Original source: https://en.wikipedia.org/wiki/False_coverage_rate