Šidák correction for t-test

One application of Student's t-test is to test the location of a single sequence of independent and identically distributed random variables. If we want to test the locations of multiple sequences of such variables simultaneously, the Šidák correction should be applied in order to calibrate the level of the Student's t-test. Moreover, if the number of sequences grows without bound, the Šidák correction can still be used, but with caution. More specifically, the validity of the Šidák correction depends on how fast the number of sequences goes to infinity.

Introduction

Suppose we are interested in m different hypotheses, [math]\displaystyle{ H_{1},...,H_{m} }[/math], and would like to test whether all of them are true. The hypothesis testing scheme becomes

[math]\displaystyle{ H_{null} }[/math]: all of [math]\displaystyle{ H_{i} }[/math] are true;
[math]\displaystyle{ H_{alternative} }[/math]: at least one of [math]\displaystyle{ H_{i} }[/math] is false.

Let [math]\displaystyle{ \alpha }[/math] be the level of this test (the type-I error), that is, the probability that we falsely reject [math]\displaystyle{ H_{null} }[/math] when it is true.

We aim to design a test with a given level [math]\displaystyle{ \alpha }[/math].

Suppose when testing each hypothesis [math]\displaystyle{ H_{i} }[/math], the test statistic we use is [math]\displaystyle{ t_{i} }[/math].

If these [math]\displaystyle{ t_{i} }[/math]'s are independent, then a test for [math]\displaystyle{ H_{null} }[/math] can be developed by the following procedure, known as the Šidák correction.

Step 1: test each of the m null hypotheses at level [math]\displaystyle{ 1-(1-\alpha)^\frac{1}{m} }[/math].
Step 2: if any of these m null hypotheses is rejected, reject [math]\displaystyle{ H_{null} }[/math].
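
As a minimal sketch of this procedure in Python, assuming the per-hypothesis tests have already been reduced to p-values (the function name and inputs here are illustrative, not part of any standard library):

 import numpy as np

 def sidak_reject(p_values, alpha=0.05):
     # Test each hypothesis at the Sidak-adjusted level 1 - (1 - alpha)^(1/m),
     # and reject H_null if any individual test rejects.
     m = len(p_values)
     per_test_level = 1 - (1 - alpha) ** (1.0 / m)
     return bool(np.any(np.asarray(p_values) < per_test_level))

For example, with m = 10 and α = 0.05, each hypothesis is tested at level 1 - 0.95^{1/10} ≈ 0.00512, slightly less conservative than the Bonferroni level 0.05/10 = 0.005.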

Finite case

For finitely many t-tests, suppose [math]\displaystyle{ Y_{ij}=\mu_{i}+\epsilon_{ij}, i=1,...,N, j=1,...,n, }[/math] where, for each i, [math]\displaystyle{ \epsilon_{i1},...,\epsilon_{in} }[/math] are independent and identically distributed, for each j, [math]\displaystyle{ \epsilon_{1j},...,\epsilon_{Nj} }[/math] are independent but not necessarily identically distributed, and each [math]\displaystyle{ \epsilon_{ij} }[/math] has a finite fourth moment.

Our goal is to design a test for [math]\displaystyle{ H_{null}: \mu_{i}=0, \forall i=1,...,N }[/math] with level α. This test can be based on the t-statistic of each sequence, that is,

[math]\displaystyle{ t_{i}=\frac{\bar{Y}_{i}}{S_{i}/\sqrt{n}}, }[/math]

where:

[math]\displaystyle{ \bar{Y}_{i}=\frac{1}{n}\sum_{j=1}^{n}Y_{ij}, \qquad S_{i}^{2}=\frac{1}{n}\sum_{j=1}^{n}(Y_{ij}-\bar{Y}_{i})^{2}. }[/math]

Using the Šidák correction, we reject [math]\displaystyle{ H_{null} }[/math] if any of the t-tests based on the t-statistics above rejects at level [math]\displaystyle{ 1-(1-\alpha)^{\frac{1}{N}}. }[/math] More specifically, we reject [math]\displaystyle{ H_{null} }[/math] when

[math]\displaystyle{ \exists i \in \{1,\ldots,N\} : |t_{i}|\gt \zeta_{\alpha,N}, }[/math]

where

[math]\displaystyle{ P(|Z|\gt \zeta_{\alpha,N})=1-(1-\alpha)^{\frac{1}{N}}, \qquad Z\sim N(0,1). }[/math]
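
A minimal implementation sketch of this test, assuming the data are arranged as an N × n array Y with one sequence per row (the array and function names are illustrative):

 import numpy as np
 from scipy.stats import norm

 def sidak_t_test(Y, alpha=0.05):
     N, n = Y.shape
     Y_bar = Y.mean(axis=1)
     S = Y.std(axis=1)                         # divides by n, matching S_i above
     t = Y_bar / (S / np.sqrt(n))              # t-statistic of each sequence
     per_test_level = 1 - (1 - alpha) ** (1.0 / N)
     zeta = norm.ppf(1 - per_test_level / 2)   # two-sided critical value zeta_{alpha,N}
     return bool(np.any(np.abs(t) > zeta)), t, zeta

Here norm.ppf inverts the standard normal distribution function, so P(|Z| > zeta) equals the Šidák-adjusted level.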

The test defined above has asymptotic level α, because

[math]\displaystyle{ \begin{align} \text{level} &= P_{null} \left (\text{reject } H_{null} \right) \\ &= P_{null} \left(\exists i \in \{1,\ldots,N\} : |t_{i}|\gt \zeta_{\alpha,N} \right ) \\ &= 1-P_{null} \left (\forall i \in \{1,\ldots,N\} : |t_{i}|\leq\zeta_{\alpha,N} \right ) \\ &=1-\prod_{i=1}^{N}P_{null} \left (|t_{i}|\leq\zeta_{\alpha,N} \right ) \\ &\to 1-\prod_{i=1}^{N}P \left (|Z_{i}|\leq\zeta_{\alpha,N} \right ) && Z_{i}\sim N(0,1) \\ &=\alpha \end{align} }[/math]
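
This convergence can be checked with a small Monte Carlo experiment under [math]\displaystyle{ H_{null} }[/math]; the sample sizes and the seed below are arbitrary choices for illustration:

 import numpy as np
 from scipy.stats import norm

 rng = np.random.default_rng(0)
 N, n, alpha, reps = 10, 200, 0.05, 2000
 zeta = norm.ppf(1 - (1 - (1 - alpha) ** (1.0 / N)) / 2)

 rejections = 0
 for _ in range(reps):
     Y = rng.standard_normal((N, n))           # H_null holds: all mu_i = 0
     t = Y.mean(axis=1) / (Y.std(axis=1) / np.sqrt(n))
     rejections += bool(np.any(np.abs(t) > zeta))
 print(rejections / reps)                      # empirical level

The printed empirical level should be close to the nominal α = 0.05 here, since N is fixed and n is moderately large.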

Infinite case

In some cases, the number of sequences, [math]\displaystyle{ N }[/math], increases as the sample size of each sequence, [math]\displaystyle{ n }[/math], increases. In particular, suppose [math]\displaystyle{ N(n)\rightarrow \infty \text{ as } n \rightarrow \infty }[/math]. If this is the case, then we need to test a null hypothesis that includes infinitely many hypotheses in the limit, that is

[math]\displaystyle{ H_{null}: \text{ all of } H_{i} \text{ are true, } i=1,2,.... }[/math]

To design a test, the Šidák correction may be applied, as in the case of finitely many t-tests. However, when [math]\displaystyle{ N(n)\rightarrow \infty \text{ as } n\rightarrow \infty }[/math], the Šidák-corrected t-test may not achieve the level we want; that is, the true level of the test may not converge to the nominal level [math]\displaystyle{ \alpha }[/math] as n goes to infinity. This result is related to high-dimensional statistics and was proven by Fan, Hall and Yao (2007).[1] Specifically, if we want the true level of the test to converge to the nominal level [math]\displaystyle{ \alpha }[/math], then we need a restriction on how fast [math]\displaystyle{ N(n)\rightarrow \infty }[/math]. Indeed,

  • When all of the [math]\displaystyle{ \epsilon_{ij} }[/math] have distributions symmetric about zero, it is sufficient to require [math]\displaystyle{ \log N = o (n^{1/2}) }[/math] to guarantee that the true level converges to [math]\displaystyle{ \alpha }[/math].
  • When the distributions of the [math]\displaystyle{ \epsilon_{ij} }[/math] are asymmetric, the stronger condition [math]\displaystyle{ \log N = o(n^{1/3}) }[/math] must be imposed to ensure that the true level converges to [math]\displaystyle{ \alpha }[/math].
  • If the bootstrap is applied to calibrate the level, then [math]\displaystyle{ \log N = o (n^{1/2}) }[/math] again suffices, even if the [math]\displaystyle{ \epsilon_{ij} }[/math] have asymmetric distributions (see the numerical illustration below).
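
To get a sense of what these rates permit, the implied bounds on N can be tabulated; the constant in front of n^{1/3} and n^{1/2} is set to 1 purely for illustration:

 import numpy as np

 # Largest N compatible with log N = n**k, for k = 1/3 (asymmetric errors,
 # normal calibration) and k = 1/2 (symmetric errors or bootstrap calibration).
 for n in (100, 1000, 10000):
     print(n, np.exp(n ** (1 / 3)), np.exp(n ** (1 / 2)))

For n = 1000, for example, n^{1/3} = 10 and n^{1/2} ≈ 31.6, so the o(n^{1/2}) condition tolerates on the order of e^{31.6} ≈ 5 × 10^{13} simultaneous tests, against only about e^{10} ≈ 2.2 × 10^{4} under the o(n^{1/3}) condition.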

The results above are based on the central limit theorem. By the central limit theorem, each of our t-statistics [math]\displaystyle{ t_{i} }[/math] is asymptotically standard normal, and so the difference between the distribution of each individual [math]\displaystyle{ t_{i} }[/math] and the standard normal distribution is asymptotically negligible. The question is: if we aggregate the differences between the distribution of each [math]\displaystyle{ t_{i} }[/math] and the standard normal distribution over all i, is this aggregate difference still asymptotically negligible?

When we have finitely many [math]\displaystyle{ t_{i} }[/math], the answer is yes. But when we have infinitely many [math]\displaystyle{ t_{i} }[/math], the answer can become no, because in the latter case we are summing up infinitely many infinitesimal terms. If the number of terms goes to infinity too fast, that is, if [math]\displaystyle{ N(n) \rightarrow \infty }[/math] too fast, then the aggregate error may not vanish, the tails of the t-statistics can no longer be approximated by the standard normal distribution, the true level does not converge to the nominal level [math]\displaystyle{ \alpha }[/math], and the Šidák correction fails.
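
With skewed errors, a small n, and an N that is large relative to n, the resulting level inflation is easy to reproduce numerically; all parameters below are illustrative:

 import numpy as np
 from scipy.stats import norm

 rng = np.random.default_rng(1)
 N, n, alpha, reps = 5000, 30, 0.05, 500
 zeta = norm.ppf(1 - (1 - (1 - alpha) ** (1.0 / N)) / 2)

 rejections = 0
 for _ in range(reps):
     eps = rng.exponential(1.0, size=(N, n)) - 1.0   # skewed errors with mean zero
     t = eps.mean(axis=1) / (eps.std(axis=1) / np.sqrt(n))
     rejections += bool(np.any(np.abs(t) > zeta))
 print(rejections / reps)   # tends to sit well above the nominal alpha = 0.05

Here every mean is zero, so [math]\displaystyle{ H_{null} }[/math] holds, yet the skewness of the errors combined with the large number of tests typically inflates the family-wise rejection rate above α.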

References

  1. Fan, Jianqing; Hall, Peter; Yao, Qiwei (2007). "To How Many Simultaneous Hypothesis Tests Can Normal, Student's t or Bootstrap Calibration Be Applied?". Journal of the American Statistical Association 102 (480): 1282–1288. doi:10.1198/016214507000000969.