Binomial sum variance inequality


The binomial sum variance inequality states that the variance of the sum of binomially distributed random variables will always be less than or equal to the variance of a binomial variable with the same n and p parameters. In probability theory and statistics, the sum of independent binomial random variables is itself a binomial random variable if all the component variables share the same success probability. If the success probabilities differ, the probability distribution of the sum is not binomial.[1] The lack of uniformity in success probabilities across independent trials leads to a smaller variance,[2][3][4][5][6] and is a special case of a more general theorem involving the expected value of convex functions.[7] In some statistical applications, the standard binomial variance estimator can be used even if the component probabilities differ, though with a variance estimate that has an upward bias.
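
As a minimal illustration of these claims, the following Python sketch (with arbitrarily chosen success probabilities 0.2 and 0.8, not taken from the references) computes the exact distribution of the sum of two independent Bernoulli trials and compares it with the binomial distribution having the same mean; the sum is not binomial and its variance is smaller.

```python
from itertools import product

# Two independent Bernoulli trials with different success probabilities.
# The probabilities below are arbitrary illustrative choices.
p = [0.2, 0.8]

# Exact distribution of the sum S = X1 + X2.
pmf = {0: 0.0, 1: 0.0, 2: 0.0}
for outcome in product([0, 1], repeat=len(p)):
    prob = 1.0
    for xi, pi in zip(outcome, p):
        prob *= pi if xi == 1 else 1.0 - pi
    pmf[sum(outcome)] += prob

# Binomial(2, p_bar) distribution with the same mean (p_bar = 0.5).
p_bar = sum(p) / len(p)
binom = {0: (1 - p_bar) ** 2, 1: 2 * p_bar * (1 - p_bar), 2: p_bar ** 2}

mean_s = sum(k * pr for k, pr in pmf.items())
var_s = sum(k ** 2 * pr for k, pr in pmf.items()) - mean_s ** 2
var_binom = 2 * p_bar * (1 - p_bar)

print(pmf)               # approx. {0: 0.16, 1: 0.68, 2: 0.16} -- not binomial
print(binom)             # {0: 0.25, 1: 0.5, 2: 0.25}
print(var_s, var_binom)  # approx. 0.32 < 0.5
```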

Inequality statement

Consider the sum, Z, of two independent binomial random variables, X ~ B(m0, p0) and Y ~ B(m1, p1), where Z = X + Y. Then, the variance of Z is less than or equal to its variance under the assumption that p0 = p1, that is, if Z had a binomial distribution.[8] Symbolically, [math]\displaystyle{ Var(Z) \leqslant E[Z] (1 - \tfrac{E[Z]}{m_0+m_1}) }[/math].
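
A quick numerical check of the statement, using illustrative parameter values that are not drawn from the source:

```python
# Numerical check of Var(Z) <= E[Z] * (1 - E[Z] / (m0 + m1))
# for independent X ~ B(m0, p0) and Y ~ B(m1, p1) with Z = X + Y.
# The parameter values are illustrative only.
m0, p0 = 10, 0.3
m1, p1 = 15, 0.7

ez = m0 * p0 + m1 * p1                           # E[Z]
var_z = m0 * p0 * (1 - p0) + m1 * p1 * (1 - p1)  # actual Var(Z)
bound = ez * (1 - ez / (m0 + m1))                # variance if Z were binomial

print(var_z, bound, var_z <= bound)              # approx. 5.25 6.21 True
```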

Proof

We wish to prove that

[math]\displaystyle{ Var(Z) \leqslant E[Z] (1 - \frac{E[Z]}{m_0+m_1}) }[/math]

We will prove this inequality by finding an expression for Var(Z) and substituting it on the left-hand side, then showing that the inequality always holds.

If Z had a binomial distribution with parameters n and p, then the expected value of Z would be E[Z] = np and the variance of Z would be Var(Z) = np(1 - p). Letting n = m0 + m1 and substituting E[Z] for np shows that, under this assumption,

[math]\displaystyle{ Var(Z) = E[Z] (1 - \frac{E[Z]}{m_0+m_1}) }[/math]

The random variables X and Y are independent, so the actual variance of the sum is equal to the sum of the variances. Expressing each binomial variance in terms of its expected value, as above, gives

[math]\displaystyle{ Var(Z) = E[X] (1-\frac{E[X]}{m_0}) + E[Y] (1-\frac{E[Y]}{m_1}) }[/math]

In order to prove the theorem, it is therefore sufficient to prove that

[math]\displaystyle{ E[X](1 - \frac{E[X]}{m_0}) + E[Y](1 - \frac{E[Y]}{m_1}) \leqslant E[Z](1 - \frac{E[Z]}{m_0+m_1}) }[/math]


Substituting E[X] + E[Y] for E[Z] gives

[math]\displaystyle{ E[X](1 - \frac{E[X]}{m_0}) + E[Y](1 - \frac{E[Y]}{m_1}) \leqslant (E[X]+E[Y])(1 - \frac{E[X]+E[Y]}{m_0+m_1}) }[/math]

Multiplying out the brackets yields

[math]\displaystyle{ E[X] - \frac{E[X]^2}{m_0} + E[Y] - \frac{E[Y]^2}{m_1} \leqslant E[X] + E[Y] - \frac{(E[X]+E[Y])^2}{m_0+m_1} }[/math]

Subtracting E[X] + E[Y] from both sides gives

[math]\displaystyle{ - \frac{E[X]^2}{m_0} - \frac{E[Y]^2}{m_1} \leqslant - \frac{(E[X]+E[Y])^2}{m_0+m_1} }[/math]

Multiplying both sides by -1 and reversing the inequality gives

[math]\displaystyle{ \frac{E[X]^2}{m_0} + \frac{E[Y]^2}{m_1} \geqslant \frac{(E[X]+E[Y])^2}{m_0+m_1} }[/math]

Expanding the right-hand side gives

[math]\displaystyle{ \frac{E[X]^2}{m_0} + \frac{E[Y]^2}{m_1} \geqslant \frac{E[X]^2+2E[X]E[Y]+E[Y]^2}{m_0+m_1} }[/math]

Multiplying by [math]\displaystyle{ m_0 m_1 (m_0+m_1) }[/math] yields

[math]\displaystyle{ (m_0m_1+{m_1}^2){E[X]^2}+ ({m_0}^2+m_0m_1){E[Y]^2} \geqslant m_0m_1({E[X]}^2+2E[X]E[Y]+{E[Y]}^2) }[/math]

Subtracting the right-hand side from both sides gives the relation

[math]\displaystyle{ {m_1}^2{E[X]^2} -2m_0m_1E[X]E[Y] + {m_0}^2{E[Y]^2} \geqslant 0 }[/math]

or equivalently

[math]\displaystyle{ (m_1E[X] - m_0E[Y])^2 \geqslant 0 }[/math]

The square of a real number is always greater than or equal to zero, so this holds for any choice of the parameters m0, p0, m1 and p1. This is sufficient to prove the theorem.
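
The algebra above can also be verified symbolically. The sketch below, which uses sympy and writes a = E[X] and b = E[Y], confirms that the gap between the two sides of the inequality is exactly (m1 a - m0 b)^2 / (m0 m1 (m0 + m1)), a nonnegative quantity:

```python
import sympy as sp

# Symbolic check of the proof: with a = E[X] and b = E[Y], the gap between
# the binomial-assumption variance and the actual variance of Z is
# (m1*a - m0*b)^2 / (m0*m1*(m0 + m1)), which is nonnegative.
m0, m1, a, b = sp.symbols('m0 m1 a b', positive=True)

actual = a * (1 - a / m0) + b * (1 - b / m1)   # Var(X) + Var(Y)
bound = (a + b) * (1 - (a + b) / (m0 + m1))    # variance if Z were binomial

gap = sp.simplify(bound - actual - (m1 * a - m0 * b) ** 2 / (m0 * m1 * (m0 + m1)))
print(gap)  # prints 0, confirming the identity
```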


Although this proof was developed for the sum of two variables, it is easily generalized to sums of more than two variables. Additionally, if the individual success probabilities are known, then the variance is known to take the form[6]

[math]\displaystyle{ \operatorname{Var}(Z) = n \bar{p} (1 - \bar{p}) - ns^2, }[/math]

where [math]\displaystyle{ s^2 = \frac{1}{n}\sum_{i=1}^n (p_i-\bar{p})^2 }[/math]. This expression also implies that the variance is always less than or equal to that of the binomial distribution with [math]\displaystyle{ p=\bar{p} }[/math], because the standard expression for the variance is decreased by [math]\displaystyle{ ns^2 }[/math], which is strictly positive whenever the [math]\displaystyle{ p_i }[/math] are not all equal.
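
As a check of this expression, the sketch below (with arbitrary illustrative values of the p_i) compares n p̄(1 − p̄) − ns² with the elementary form Σ p_i(1 − p_i) and with the binomial variance n p̄(1 − p̄):

```python
# Check of Var(Z) = n * p_bar * (1 - p_bar) - n * s^2 against the elementary
# form sum(p_i * (1 - p_i)); the p_i values are arbitrary illustrative choices.
p = [0.1, 0.4, 0.5, 0.9]
n = len(p)

p_bar = sum(p) / n
s2 = sum((pi - p_bar) ** 2 for pi in p) / n

var_formula = n * p_bar * (1 - p_bar) - n * s2
var_direct = sum(pi * (1 - pi) for pi in p)
binomial_variance = n * p_bar * (1 - p_bar)

print(var_formula, var_direct)           # both approx. 0.67
print(var_formula <= binomial_variance)  # True: reduced by n * s^2
```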

Applications

The inequality can be useful in the context of multiple testing, where many statistical hypothesis tests are conducted within a particular study. Each test can be treated as a Bernoulli variable with a success probability p. Consider the total number of positive tests as a random variable denoted by S. This quantity is important in the estimation of false discovery rates (FDR), which quantify uncertainty in the test results. If the null hypothesis is true for some tests and the alternative hypothesis is true for other tests, then success probabilities are likely to differ between these two groups. However, the variance inequality theorem states that if the tests are independent, the variance of S will be no greater than it would be under a binomial distribution.
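
A small simulation of this setting illustrates the bound; the numbers of null and alternative tests and their success probabilities below are hypothetical choices, not taken from the references.

```python
import random

# Hypothetical multiple-testing setting (all numbers are illustrative):
# 900 tests with a true null (positive with probability 0.05) and
# 100 tests with a true alternative (positive with probability 0.8).
random.seed(0)
probs = [0.05] * 900 + [0.8] * 100
n = len(probs)

def total_positives(probs):
    """One draw of S, the total number of positive test results."""
    return sum(random.random() < p for p in probs)

draws = [total_positives(probs) for _ in range(5000)]
mean_s = sum(draws) / len(draws)
var_s = sum((s - mean_s) ** 2 for s in draws) / (len(draws) - 1)

p_bar = sum(probs) / n
binomial_variance = n * p_bar * (1 - p_bar)   # variance of B(n, p_bar)

# The empirical variance (about 59) stays below the binomial value (about 109).
print(round(var_s, 1), round(binomial_variance, 1))
```

With these hypothetical numbers the exact variance of S is Σ p_i(1 − p_i) ≈ 58.75, well below the binomial value of about 109.4, consistent with the theorem.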

References

  1. Butler, Ken; Stephens, Michael (1993). "The distribution of a sum of binomial random variables". Technical Report No. 467 (Department of Statistics, Stanford University). https://apps.dtic.mil/sti/pdfs/ADA266969.pdf. 
  2. Nedelman, J and Wallenius, T., 1986. Bernoulli trials, Poisson trials, surprising variances, and Jensen’s Inequality. The American Statistician, 40(4):286–289.
  3. Feller, W. 1968. An introduction to probability theory and its applications (Vol. 1, 3rd ed.). New York: John Wiley.
  4. Johnson, N. L. and Kotz, S. 1969. Discrete distributions. New York: John Wiley
  5. Kendall, M. and Stuart, A. 1977. The advanced theory of statistics. New York: Macmillan.
  6. Drezner, Zvi; Farnum, Nicholas (1993). "A generalized binomial distribution". Communications in Statistics - Theory and Methods 22 (11): 3051–3063. doi:10.1080/03610929308831202. ISSN 0361-0926.
  7. Hoeffding, W. 1956. On the distribution of the number of successes in independent trials. Annals of Mathematical Statistics (27):713–721.
  8. Millstein, J.; Volfson, D. (2013). "Computationally efficient permutation-based confidence interval estimation for tail-area FDR". Frontiers in Genetics 4 (179): 1–11. doi:10.3389/fgene.2013.00179. PMID 24062767.