Boschloo's test

Boschloo's test is a statistical hypothesis test for analysing 2x2 contingency tables. It examines the association of two Bernoulli distributed random variables and is a uniformly more powerful alternative to Fisher's exact test. It was proposed in 1970 by R. D. Boschloo.[1]

Setting

A 2x2 contingency table visualizes [math]\displaystyle{ n }[/math] independent observations of two binary variables [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math]:

[math]\displaystyle{ \begin{array}{c|cc|c} & B = 1 & B = 0 & \mbox{Total}\\ \hline A = 1 & x_{11} & x_{10} & n_1 \\ A = 0 & x_{01} & x_{00} & n_0 \\ \hline \mbox{Total} & s_1 & s_0 & n\\ \end{array} }[/math]

The probability distribution of such tables can be classified into three distinct cases.[2]

  1. The row sums [math]\displaystyle{ n_1, n_0 }[/math] and column sums [math]\displaystyle{ s_1, s_0 }[/math] are fixed in advance and not random.
    Then all [math]\displaystyle{ x_{ij} }[/math] are determined by [math]\displaystyle{ x_{11} }[/math]. If [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are independent, [math]\displaystyle{ x_{11} }[/math] follows a hypergeometric distribution with parameters [math]\displaystyle{ n, n_1, s_1 }[/math]:
    [math]\displaystyle{ x_{11} \sim \mbox{Hypergeometric}(n, n_1, s_1) }[/math].
  2. The row sums [math]\displaystyle{ n_1, n_0 }[/math] are fixed in advance but the column sums [math]\displaystyle{ s_1, s_0 }[/math] are not.
    Then all table entries are determined by [math]\displaystyle{ x_{11} }[/math] and [math]\displaystyle{ x_{01} }[/math], which follow independent binomial distributions with success probabilities [math]\displaystyle{ p_1, p_0 }[/math]:
    [math]\displaystyle{ x_{11} \sim B(n_1, p_1) }[/math]
    [math]\displaystyle{ x_{01} \sim B(n_0, p_0) }[/math]
  3. Only the total number [math]\displaystyle{ n }[/math] is fixed but the row sums [math]\displaystyle{ n_1, n_0 }[/math] and the column sums [math]\displaystyle{ s_1, s_0 }[/math] are not.
    Then the random vector [math]\displaystyle{ (x_{11}, x_{10}, x_{01}, x_{00}) }[/math] follows a multinomial distribution with probability vector [math]\displaystyle{ (p_{11}, p_{10}, p_{01}, p_{00}) }[/math].

Fisher's exact test is designed for the first case and is therefore an exact conditional test (it conditions on the column sums). The classic example of such a case is the Lady tasting tea: A lady tastes 8 cups of tea with milk. In 4 of those cups the milk is poured in before the tea. In the other 4 cups the tea is poured in first. The lady tries to assign the cups to the two categories. Following our notation, the random variable [math]\displaystyle{ A }[/math] represents the method used (1 = milk first, 0 = milk last) and [math]\displaystyle{ B }[/math] represents the lady's guesses (1 = milk first guessed, 0 = milk last guessed). The row sums are the fixed numbers of cups prepared with each method: [math]\displaystyle{ n_1 = 4, n_0 = 4 }[/math]. The lady knows that there are 4 cups in each category, so she will assign 4 cups to each method. Thus, the column sums are also fixed in advance: [math]\displaystyle{ s_1 = 4, s_0 = 4 }[/math]. If she is not able to tell the difference, [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] are independent and the number [math]\displaystyle{ x_{11} }[/math] of correctly classified cups with milk first follows the hypergeometric distribution [math]\displaystyle{ \mbox{Hypergeometric}(8, 4, 4) }[/math].
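
Under independence, this null distribution can be evaluated directly. A short sketch with SciPy's hypergeometric distribution, using the tea-tasting numbers above (note SciPy's parameter order differs from the article's notation):

```python
from scipy.stats import hypergeom

# Lady tasting tea: x11 ~ Hypergeometric(8, 4, 4) under independence.
# SciPy parameterizes hypergeom(M, n, N) as: population size M,
# n success states in the population, N draws.
rv = hypergeom(M=8, n=4, N=4)

p_all_correct = rv.pmf(4)  # P(all four "milk first" cups identified) = 1/70
p_at_least_3 = rv.sf(2)    # P(at least 3 correct) = 17/70
```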

Boschloo's test is designed for the second case and is therefore an exact unconditional test. Examples of such a case are often found in medical research, where a binary endpoint is compared between two patient groups. Following our notation, [math]\displaystyle{ A = 1 }[/math] represents the first group, which receives the medication of interest. [math]\displaystyle{ A = 0 }[/math] represents the second group, which receives a placebo. [math]\displaystyle{ B }[/math] indicates whether a patient is cured (1 = cure, 0 = no cure). Then the row sums equal the group sizes and are usually fixed in advance. The column sums are the total numbers of cures and of continued disease, and are not fixed in advance.

An example of the third case can be constructed as follows: flip two distinguishable coins [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] simultaneously, and repeat this [math]\displaystyle{ n }[/math] times. If we count the number of results in our 2x2 table (1 = head, 0 = tail), we neither know in advance how often coin [math]\displaystyle{ A }[/math] shows head or tail (row sums random), nor do we know how often coin [math]\displaystyle{ B }[/math] shows head or tail (column sums random).

Test hypothesis

The null hypothesis of Boschloo's one-tailed test (high values of [math]\displaystyle{ x_{11} }[/math] favor the alternative hypothesis) is:

[math]\displaystyle{ H_0: p_1 \le p_0 }[/math]

The null hypothesis of the one-tailed test can also be formulated in the other direction (small values of [math]\displaystyle{ x_{11} }[/math] favor the alternative hypothesis):

[math]\displaystyle{ H_0: p_1 \ge p_0 }[/math]

The null hypothesis of the two-tailed test is:

[math]\displaystyle{ H_0: p_1 = p_0 }[/math]

There is no universal definition of the two-tailed version of Fisher's exact test.[3] Since Boschloo's test is based on Fisher's exact test, there is likewise no universal two-tailed version of Boschloo's test. In the following we deal with the one-tailed test and [math]\displaystyle{ H_0: p_1 \le p_0 }[/math].

Boschloo's idea

We denote the desired significance level by [math]\displaystyle{ \alpha }[/math]. Fisher's exact test is a conditional test and is appropriate for the first of the above-mentioned cases. But if we treat the observed column sum [math]\displaystyle{ s_1 }[/math] as fixed in advance, Fisher's exact test can also be applied to the second case. The true size of the test then depends on the nuisance parameters [math]\displaystyle{ p_1 }[/math] and [math]\displaystyle{ p_0 }[/math]. It can be shown that the size maximum [math]\displaystyle{ \max\limits_{p_1 \le p_0}\big(\mbox{size}(p_1, p_0)\big) }[/math] is attained at equal proportions [math]\displaystyle{ p=p_1=p_0 }[/math][4] and is still controlled by [math]\displaystyle{ \alpha }[/math].[1] However, Boschloo observed that for small sample sizes the maximal size is often considerably smaller than [math]\displaystyle{ \alpha }[/math], which leads to an undesirable loss of power.

Boschloo proposed to use Fisher's exact test with a greater nominal level [math]\displaystyle{ \alpha^* \gt \alpha }[/math]. Here, [math]\displaystyle{ \alpha^* }[/math] should be chosen as large as possible such that the maximal size is still controlled by [math]\displaystyle{ \alpha }[/math]: [math]\displaystyle{ \max\limits_{p \in [0, 1]}\big(\mbox{size}(p)\big) \le \alpha }[/math]. This method was especially advantageous at the time of Boschloo's publication because [math]\displaystyle{ \alpha^* }[/math] could be looked up for common values of [math]\displaystyle{ \alpha, n_1 }[/math] and [math]\displaystyle{ n_0 }[/math]. This made performing Boschloo's test computationally easy.
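
The search for [math]\displaystyle{ \alpha^* }[/math] can be sketched numerically: enumerate Fisher's one-sided p-values over all possible tables, compute the unconditional size as a function of the common proportion [math]\displaystyle{ p = p_1 = p_0 }[/math], and raise the nominal level as far as the maximal size allows. The grid maximization below is an illustrative shortcut rather than an exact optimization, and the sample sizes and level are hypothetical:

```python
import numpy as np
from scipy.stats import binom, hypergeom

alpha, n1, n0 = 0.05, 10, 10  # hypothetical level and group sizes

# one-sided Fisher p-values p_F(x1, x0) = P(X >= x1), X ~ Hypergeom(n, x1+x0, n1)
pF = np.array([[hypergeom.sf(x1 - 1, n1 + n0, x1 + x0, n1)
                for x0 in range(n0 + 1)]
               for x1 in range(n1 + 1)])

# binomial weights for a grid of common proportions p = p1 = p0
grid = np.linspace(0.001, 0.999, 199)
W1 = binom.pmf(np.arange(n1 + 1)[:, None], n1, grid)  # shape (n1+1, len(grid))
W0 = binom.pmf(np.arange(n0 + 1)[:, None], n0, grid)  # shape (n0+1, len(grid))

def max_size(a):
    """Maximum over the p-grid of P(p_F <= a) when p1 = p0 = p."""
    reject = (pF <= a).astype(float)
    return (W1 * (reject @ W0)).sum(axis=0).max()

# raised nominal level: largest attained p-value whose maximal size stays <= alpha
alpha_star = max(a for a in np.unique(pF) if max_size(a) <= alpha)
```

Only attained p-values need to be checked as candidates, because the size is a step function of the nominal level.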

Test statistic

The decision rule of Boschloo's approach is based on Fisher's exact test. An equivalent way of formulating the test is to use the p-value of Fisher's exact test as test statistic. Fisher's p-value is calculated from the hypergeometric distribution (for ease of notation we write [math]\displaystyle{ x_1, x_0 }[/math] instead of [math]\displaystyle{ x_{11}, x_{01} }[/math]):

[math]\displaystyle{ p_F = 1-F_{\mbox{Hypergeometric}(n, n_1, x_1+x_0)}(x_1-1) }[/math]

The distribution of [math]\displaystyle{ p_F }[/math] is determined by the binomial distributions of [math]\displaystyle{ x_1 }[/math] and [math]\displaystyle{ x_0 }[/math] and depends on the unknown nuisance parameter [math]\displaystyle{ p }[/math]. For a specified significance level [math]\displaystyle{ \alpha, }[/math] the critical value of [math]\displaystyle{ p_F }[/math] is the maximal value [math]\displaystyle{ \alpha^* }[/math] that satisfies [math]\displaystyle{ \max\limits_{p \in [0, 1]}P(p_F \le \alpha^*) \le \alpha }[/math]. The critical value [math]\displaystyle{ \alpha^* }[/math] is equal to the nominal level of Boschloo's original approach.
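
The test statistic [math]\displaystyle{ p_F }[/math] can be computed directly from the hypergeometric survival function; for a one-sided table it coincides with the p-value returned by `scipy.stats.fisher_exact`. The counts below are hypothetical:

```python
from scipy.stats import fisher_exact, hypergeom

x1, x0, n1, n0 = 7, 2, 10, 10  # hypothetical observed counts

# p_F = P(X >= x1) with X ~ Hypergeometric(n, n1, x1 + x0);
# in SciPy's (M, n, N) order: population n1+n0, successes x1+x0, draws n1
p_F = hypergeom.sf(x1 - 1, n1 + n0, x1 + x0, n1)

# cross-check: the same one-sided p-value from fisher_exact
_, p_fisher = fisher_exact([[x1, n1 - x1], [x0, n0 - x0]],
                           alternative="greater")
```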

Modification

Boschloo's test deals with the unknown nuisance parameter [math]\displaystyle{ p }[/math] by taking the maximum over the whole parameter space [math]\displaystyle{ [0,1] }[/math]. The Berger & Boos procedure takes a different approach by maximizing [math]\displaystyle{ P(p_F \le \alpha^*) }[/math] over a [math]\displaystyle{ (1-\gamma) }[/math] confidence interval of [math]\displaystyle{ p = p_1 = p_0 }[/math] and adding [math]\displaystyle{ \gamma }[/math].[5] [math]\displaystyle{ \gamma }[/math] is usually a small value such as 0.001 or 0.0001. This results in a modified Boschloo's test which is also exact.[6]
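
The confidence-interval step of the Berger & Boos procedure can be sketched as follows, assuming a Clopper–Pearson interval for the pooled proportion (the counts and [math]\displaystyle{ \gamma }[/math] are hypothetical). The maximization of [math]\displaystyle{ P(p_F \le \alpha^*) }[/math] would then run only over [math]\displaystyle{ [lo, hi] }[/math] instead of [math]\displaystyle{ [0,1] }[/math], with [math]\displaystyle{ \gamma }[/math] added to the resulting p-value:

```python
from scipy.stats import beta

x, n, gamma = 9, 20, 0.001  # hypothetical pooled successes, total, and gamma

# (1 - gamma) Clopper-Pearson interval for p = (x1 + x0) / (n1 + n0),
# expressed via beta-distribution quantiles
lo = beta.ppf(gamma / 2, x, n - x + 1) if x > 0 else 0.0
hi = beta.ppf(1 - gamma / 2, x + 1, n - x) if x < n else 1.0
```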

Comparison to other exact tests

All exact tests maintain the specified significance level but can have varying power in different situations. Mehrotra et al. compared the power of several exact tests in different situations.[6] The results regarding Boschloo's test are summarized below.

Modified Boschloo's test

Boschloo's test and the modified Boschloo's test have similar power in all considered scenarios. Boschloo's test has slightly more power in some cases, and the modified test in others.

Fisher's exact test

Boschloo's test is by construction uniformly more powerful than Fisher's exact test. For small sample sizes (e.g. 10 per group) the power difference is large, ranging from 16 to 20 percentage points in the cases considered. The power difference is smaller for larger sample sizes.

Exact Z-Pooled test

This test is based on the test statistic

[math]\displaystyle{ Z_P(x_1, x_0) = \frac{\hat p_1 - \hat p_0}{\sqrt{\tilde p(1-\tilde p)(\frac{1}{n_1} + \frac{1}{n_0})}}, }[/math]

where [math]\displaystyle{ \hat p_i = \frac{x_i}{n_i} }[/math] are the group event rates and [math]\displaystyle{ \tilde p = \frac{x_1+x_0}{n_1+n_0} }[/math] is the pooled event rate.
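
A direct transcription of this statistic (the counts in the usage example are hypothetical):

```python
import math

def z_pooled(x1, n1, x0, n0):
    """Pooled-variance Z statistic for comparing two proportions."""
    p1, p0 = x1 / n1, x0 / n0                      # group event rates
    p_tilde = (x1 + x0) / (n1 + n0)                # pooled event rate
    se = math.sqrt(p_tilde * (1 - p_tilde) * (1 / n1 + 1 / n0))
    return (p1 - p0) / se

z = z_pooled(7, 10, 2, 10)  # hypothetical counts: 7/10 vs 2/10 events
```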

The power of this test is similar to that of Boschloo's test in most scenarios. In some cases, the [math]\displaystyle{ Z }[/math]-Pooled test has greater power, with differences mostly ranging from 1 to 5 percentage points. In very few cases, the difference goes up to 9 percentage points.

This test can also be modified by the Berger & Boos procedure. However, the resulting test has very similar power to the unmodified test in all scenarios.

Exact Z-Unpooled test

This test is based on the test statistic

[math]\displaystyle{ Z_U(x_1, x_0) = \frac{\hat p_1 - \hat p_0}{\sqrt{\frac{\hat p_1(1-\hat p_1)}{n_1} + \frac{\hat p_0(1-\hat p_0)}{n_0}}}, }[/math]

where [math]\displaystyle{ \hat p_i = \frac{x_i}{n_i} }[/math] are the group event rates.
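
The unpooled variant differs only in the variance estimate (same hypothetical counts as before):

```python
import math

def z_unpooled(x1, n1, x0, n0):
    """Unpooled-variance Z statistic for comparing two proportions."""
    p1, p0 = x1 / n1, x0 / n0  # group event rates
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    return (p1 - p0) / se

z = z_unpooled(7, 10, 2, 10)  # hypothetical counts: 7/10 vs 2/10 events
```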

The power of this test is similar to that of Boschloo's test in many scenarios. In some cases, the [math]\displaystyle{ Z }[/math]-Unpooled test has greater power, with differences ranging from 1 to 5 percentage points. However, in some other cases, Boschloo's test has noticeably greater power, with differences up to 68 percentage points.

This test can also be modified by the Berger & Boos procedure. The resulting test has similar power to the unmodified test in most scenarios. In some cases, the power is considerably improved by the modification but the overall power comparison to Boschloo's test remains unchanged.

Software

Boschloo's test can be performed in the following software:

  • The function scipy.stats.boschloo_exact from SciPy
  • Packages Exact and exact2x2 of the programming language R
  • StatXact
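
With SciPy, for example (the table values are hypothetical; per the SciPy documentation, `statistic` is the one-sided Fisher p-value and `pvalue` is Boschloo's p-value, which by construction never exceeds it):

```python
from scipy.stats import boschloo_exact

# hypothetical 2x2 table: rows = groups, columns = event / no event
table = [[7, 3], [2, 8]]
res = boschloo_exact(table, alternative="greater")

# res.statistic: Fisher's one-sided p-value (the test statistic p_F)
# res.pvalue:    Boschloo's p-value, maximized over the nuisance parameter
print(res.statistic, res.pvalue)
```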

References

  1. Boschloo R.D. (1970). "Raised Conditional Level of Significance for the 2x2-table when Testing the Equality of Two Probabilities". Statistica Neerlandica 24: 1–35. doi:10.1111/j.1467-9574.1970.tb00104.x. 
  2. Lydersen, S., Fagerland, M.W. and Laake, P. (2009). "Recommended tests for association in 2×2 tables". Statist. Med. 28 (7): 1159–1175. doi:10.1002/sim.3531. PMID 19170020. 
  3. Martín Andrés, A, and I. Herranz Tejedor (1995). "Is Fisher's exact test very conservative?". Computational Statistics and Data Analysis 19 (5): 579–591. doi:10.1016/0167-9473(94)00013-9. 
  4. Finner, H, and Strassburger, K (2002). "Structural properties of UMPU-tests for 2x2 tables and some applications". Journal of Statistical Planning and Inference 104: 103–120. doi:10.1016/S0378-3758(01)00122-7. 
  5. Berger, R L, and Boos, D D (1994). "P Values Maximized Over a Confidence Set for the Nuisance Parameter". Journal of the American Statistical Association 89 (427): 1012–1016. doi:10.2307/2290928. http://www.lib.ncsu.edu/resolver/1840.4/237. 
  6. Mehrotra, D V, Chan, I S F, and Berger, R L (2003). "A cautionary note on exact unconditional inference for a difference between two independent binomial proportions". Biometrics 59 (2): 441–450. doi:10.1111/1541-0420.00051. PMID 12926729.