Binomial test

In statistics, the binomial test is an exact test of the statistical significance of deviations from a theoretically expected distribution of observations into two categories using sample data.

Usage

The binomial test is useful for testing hypotheses about the probability ([math]\displaystyle{ \pi }[/math]) of success:

[math]\displaystyle{ H_0\colon\pi=\pi_0 }[/math]

where [math]\displaystyle{ \pi_0 }[/math] is a user-defined value between 0 and 1.

If in a sample of size [math]\displaystyle{ n }[/math] there are [math]\displaystyle{ k }[/math] successes, while we expect [math]\displaystyle{ n\pi_0 }[/math], the probability mass function of the binomial distribution gives the probability of observing exactly this value under the null hypothesis:

[math]\displaystyle{ \Pr(X=k)=\binom{n}{k}\pi_0^k(1-\pi_0)^{n-k} }[/math]

If the null hypothesis [math]\displaystyle{ H_0 }[/math] were correct, then the expected number of successes would be [math]\displaystyle{ n\pi_0 }[/math]. We find the [math]\displaystyle{ p }[/math]-value for this test by considering the probability of seeing an outcome as extreme as, or more extreme than, the one observed. For a one-tailed test, this is straightforward to compute. Suppose that we want to test if [math]\displaystyle{ \pi\lt \pi_0 }[/math]. Then the [math]\displaystyle{ p }[/math]-value would be,

[math]\displaystyle{ p = \sum_{i=0}^k\Pr(X=i)=\sum_{i=0}^k\binom{n}{i}\pi_0^i(1-\pi_0)^{n-i} }[/math]

An analogous computation can be done if we are testing whether [math]\displaystyle{ \pi\gt \pi_0 }[/math], summing over the range from [math]\displaystyle{ k }[/math] to [math]\displaystyle{ n }[/math] instead.
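
As a concrete check of these sums, the one-tailed [math]\displaystyle{ p }[/math]-values can be evaluated with the binomial CDF. The following is a minimal sketch in Python, assuming SciPy is available (as in the software section below) and borrowing the dice-example numbers used later in the article:

    from scipy.stats import binom

    n, k, pi0 = 235, 51, 1/6   # sample size, observed successes, hypothesized probability

    # Lower tail (testing pi < pi0): p = P(X <= k) under the null
    p_lower = binom.cdf(k, n, pi0)

    # Upper tail (testing pi > pi0): p = P(X >= k) = 1 - P(X <= k - 1)
    p_upper = binom.sf(k - 1, n, pi0)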

Calculating a [math]\displaystyle{ p }[/math]-value for a two-tailed test is slightly more complicated, since a binomial distribution is not symmetric when [math]\displaystyle{ \pi_0\neq 0.5 }[/math], which means we cannot simply double the [math]\displaystyle{ p }[/math]-value from the one-tailed test. Recall that we want to consider events that are as extreme as, or more extreme than, the one observed, so we should consider the probability of seeing an event that is as likely as, or less likely than, [math]\displaystyle{ X=k }[/math]. Let [math]\displaystyle{ \mathcal{I}=\{i\colon\Pr(X=i)\leq \Pr(X=k)\} }[/math] denote all such events. Then the two-tailed [math]\displaystyle{ p }[/math]-value is calculated as,

[math]\displaystyle{ p = \sum_{i\in\mathcal{I}}\Pr(X=i)=\sum_{i\in\mathcal{I}}\binom{n}{i}\pi_0^i(1-\pi_0)^{n-i} }[/math]
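
A minimal sketch of this calculation in Python, again assuming SciPy; the relative tolerance in the comparison is our own addition, guarding against floating-point ties when collecting the set [math]\displaystyle{ \mathcal{I} }[/math]:

    import numpy as np
    from scipy.stats import binom

    n, k, pi0 = 235, 51, 1/6

    pmf = binom.pmf(np.arange(n + 1), n, pi0)    # Pr(X = i) for i = 0, ..., n
    observed = binom.pmf(k, n, pi0)

    # Sum Pr(X = i) over outcomes no more likely than X = k
    p_two_sided = pmf[pmf <= observed * (1 + 1e-7)].sum()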

Common use

One common use of the binomial test is the case where the null hypothesis is that two categories occur with equal frequency ([math]\displaystyle{ H_0\colon\pi=0.5 }[/math]), such as a coin toss. Tables are widely available giving the significance of observed numbers of observations in the categories for this case. However, as the example below shows, the binomial test is not restricted to this case.

When there are more than two categories, and an exact test is required, the multinomial test, based on the multinomial distribution, must be used instead of the binomial test.[1]

Large samples

For large samples such as the example below, the binomial distribution is well approximated by convenient continuous distributions, and these are used as the basis for alternative tests that are much quicker to compute, such as Pearson's chi-squared test and the G-test. However, for small samples these approximations break down, and there is no alternative to the binomial test.

The most common (and easiest) approximation is through the standard normal distribution, in which a z-test is performed on the test statistic [math]\displaystyle{ Z }[/math], given by

[math]\displaystyle{ Z=\frac{k-n\pi}{\sqrt{n\pi(1-\pi)}} }[/math]

where [math]\displaystyle{ k }[/math] is the number of successes observed in a sample of size [math]\displaystyle{ n }[/math] and [math]\displaystyle{ \pi }[/math] is the probability of success according to the null hypothesis. An improvement on this approximation is possible by introducing a continuity correction:

[math]\displaystyle{ Z=\frac{k-n\pi\pm \frac{1}{2}}{\sqrt{n\pi(1-\pi)}} }[/math]

For very large [math]\displaystyle{ n }[/math], this continuity correction will be unimportant, but for intermediate values, where the exact binomial test may be impractical, it yields a substantially more accurate result.

In terms of a measured sample proportion [math]\displaystyle{ \hat{p} }[/math], the null-hypothesis proportion [math]\displaystyle{ p_0 }[/math], and the sample size [math]\displaystyle{ n }[/math], where [math]\displaystyle{ \hat{p}=k/n }[/math] and [math]\displaystyle{ p_0=\pi }[/math], the z-test above may be rearranged and written as

[math]\displaystyle{ Z=\frac{ \hat{p}-p_0 } { \sqrt{ \frac{p_0(1-p_0)}{n} } } }[/math]

by dividing by [math]\displaystyle{ n }[/math] in both numerator and denominator, which is a form that may be more familiar to some readers.
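
As an illustration, the corrected statistic and its normal-approximation [math]\displaystyle{ p }[/math]-value can be computed directly. A minimal sketch in Python; the helper name and the two-sided tail convention are our own, and SciPy is assumed for the normal tail probability:

    from math import sqrt
    from scipy.stats import norm

    def z_test_proportion(k, n, pi0, continuity=True):
        """Normal-approximation z statistic for k successes in n trials under H0: pi = pi0."""
        diff = k - n * pi0
        if continuity and diff != 0:
            diff -= 0.5 if diff > 0 else -0.5    # shrink |k - n*pi0| by 1/2
        z = diff / sqrt(n * pi0 * (1 - pi0))
        return z, 2 * norm.sf(abs(z))            # two-sided p-value

    # For the dice example below this gives z of about 1.98 and p of about 0.047,
    # close to the exact two-tailed value of 0.0437.
    z, p = z_test_proportion(51, 235, 1/6)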

Example

Suppose we have a board game that depends on the roll of one die and attaches special importance to rolling a 6. In a particular game, the die is rolled 235 times, and 6 comes up 51 times. If the die is fair, we would expect 6 to come up

[math]\displaystyle{ 235\times1/6 = 39.17 }[/math]

times. We have now observed that the number of 6s is higher than we would expect on average by pure chance had the die been fair. But is the number high enough for us to conclude anything about the fairness of the die? This question can be answered by the binomial test. Our null hypothesis is that the die is fair (the probability of each number coming up on the die is 1/6).

To find an answer to this question using the binomial test, we use the binomial distribution

[math]\displaystyle{ B(n=235, p=1/6) }[/math] with pmf [math]\displaystyle{ f(k;n,p) = \Pr(X = k) = \binom{n}{k}p^k(1-p)^{n-k} }[/math].

As we have observed a value greater than the expected value, we could consider the probability of observing 51 or more 6s under the null, which would constitute a one-tailed test (here we are essentially testing whether the die is biased towards generating more 6s than expected). To calculate the probability of 51 or more 6s in a sample of 235 under the null hypothesis, we add up the probabilities of getting exactly 51 6s, exactly 52 6s, and so on up to the probability of getting exactly 235 6s:

[math]\displaystyle{ \sum_{i=51}^{235} {235\choose i}p^i(1-p)^{235-i} = 0.02654 }[/math]

At a significance level of 5%, this result (0.02654 < 0.05) provides evidence significant enough to reject the null hypothesis that the die is fair.

Normally, when testing the fairness of a die, we are also interested in whether the die is biased towards generating fewer 6s than expected, not only more 6s as in the one-tailed test above. To consider both biases, we use a two-tailed test. Note that we cannot simply double the one-tailed p-value unless the hypothesized probability is 1/2, because the binomial distribution becomes asymmetric as that probability deviates from 1/2. There are two methods to define the two-tailed p-value. One method is to sum the probabilities of all outcomes whose deviation from the expected value, in either direction, is at least as large as the observed deviation. The probability of that occurring in our example is 0.0437. The second method involves computing the probability that the deviation from the expected value is as unlikely as, or more unlikely than, the observed value, i.e. by comparing values of the probability mass function. The two methods can differ subtly, but in this example they yield the same probability of 0.0437. In both cases, the two-tailed test reveals significance at the 5% level, indicating that the number of 6s observed was significantly different from the number expected for a fair die.
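
Both conventions are easy to check numerically. A minimal sketch in Python, assuming SciPy (whose binomtest implements the second, "small-p" method); the equal-distance computation is our own reading of the first method:

    import math
    from scipy.stats import binom, binomtest

    n, k, pi0 = 235, 51, 1/6
    expected = n * pi0                  # about 39.17 expected 6s

    # Method 1 ("equal distance"): outcomes whose deviation from the
    # expectation, in either direction, is at least the observed deviation.
    dist = k - expected                 # positive in this example
    p_equal = binom.sf(k - 1, n, pi0) + binom.cdf(math.floor(expected - dist), n, pi0)

    # Method 2 ("small p"): outcomes no more likely than the observed one.
    p_small = binomtest(k, n, pi0, alternative='two-sided').pvalue

    # Both come out to about 0.0437 for this example.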

In statistical software packages

Binomial tests are available in most software used for statistical purposes, for example:

  • In R the above example could be calculated with the following code:
    • binom.test(51, 235, 1/6, alternative = "less") (one-tailed test)
    • binom.test(51, 235, 1/6, alternative = "greater") (one-tailed test)
    • binom.test(51, 235, 1/6, alternative = "two.sided") (two-tailed test)
  • In Java using the Apache Commons library:
    • new BinomialTest().binomialTest(235, 51, 1.0 / 6, AlternativeHypothesis.LESS_THAN) (one-tailed test)
    • new BinomialTest().binomialTest(235, 51, 1.0 / 6, AlternativeHypothesis.GREATER_THAN) (one-tailed test)
    • new BinomialTest().binomialTest(235, 51, 1.0 / 6, AlternativeHypothesis.TWO_SIDED) (two-tailed test)
  • In SAS the test is available in the Frequency procedure:
    PROC FREQ DATA=DiceRoll;
      TABLES Roll / BINOMIAL (P=0.166667) ALPHA=0.05;
      EXACT BINOMIAL;
      WEIGHT Freq;
    RUN;

  • In SPSS the test can be utilized through the menu Analyze > Nonparametric test > Binomial, or with syntax such as:
    npar tests
      /binomial (.5) = node1 node2.

  • In Python, use SciPy's binomtest:
    • scipy.stats.binomtest(51, 235, 1.0/6, alternative='greater') (one-tailed test)
    • scipy.stats.binomtest(51, 235, 1.0/6, alternative='two-sided') (two-tailed test)
  • In MATLAB, use myBinomTest, which is available via the MathWorks community File Exchange website. It directly calculates the p-value for the observations given the hypothesized probability of success: [pout]=myBinomTest(51, 235, 1/6) (two-tailed by default, with an option to perform a one-tailed test).
  • In Stata, use bitest.
  • In Microsoft Excel, use BINOM.DIST. The function takes parameters (Number of successes, Trials, Probability of Success, Cumulative). The "Cumulative" parameter takes a Boolean TRUE or FALSE, with TRUE giving the cumulative probability of finding at most this many successes (a left tail), and FALSE the exact probability of finding this many successes.

References

  1. Howell, David C. (2007). Statistical Methods for Psychology (6th ed.). Belmont, Calif.: Thomson. ISBN 978-0495012870.
