Pairwise independence
In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent.^{[1]} Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Pairwise independent random variables with finite variance are uncorrelated.
A pair of random variables X and Y are independent if and only if the random vector (X, Y) with joint cumulative distribution function (CDF) [math]\displaystyle{ F_{X,Y}(x,y) }[/math] satisfies
 [math]\displaystyle{ F_{X,Y}(x,y) = F_X(x) F_Y(y), }[/math]
or equivalently, their joint density [math]\displaystyle{ f_{X,Y}(x,y) }[/math] satisfies
 [math]\displaystyle{ f_{X,Y}(x,y) = f_X(x) f_Y(y). }[/math]
That is, the joint distribution is equal to the product of the marginal distributions.^{[2]}
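As a quick numerical illustration of the product rule (the joint pmf below is a hypothetical example, not taken from the text), independence of a discrete pair can be checked by verifying that the joint pmf factors into its marginals at every point:

```python
from itertools import product

# Hypothetical joint pmf of two fair coin flips (0 = tails, 1 = heads).
joint = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

# Marginal pmfs obtained by summing out the other variable.
f_X = {x: sum(p for (a, _), p in joint.items() if a == x) for x in [0, 1]}
f_Y = {y: sum(p for (_, b), p in joint.items() if b == y) for y in [0, 1]}

# X and Y are independent iff the joint pmf factors everywhere.
independent = all(abs(joint[(x, y)] - f_X[x] * f_Y[y]) < 1e-12 for (x, y) in joint)
print(independent)  # True
```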
In practice, unless context dictates otherwise, the modifier "mutual" is usually dropped, so that independence means mutual independence. A statement such as " X, Y, Z are independent random variables" means that X, Y, Z are mutually independent.
Example
Pairwise independence does not imply mutual independence, as shown by the following example attributed to S. Bernstein.^{[3]}
Suppose X and Y are two independent tosses of a fair coin, where we designate 1 for heads and 0 for tails. Let the third random variable Z be equal to 1 if exactly one of those coin tosses resulted in "heads", and 0 otherwise (i.e., [math]\displaystyle{ Z = X \oplus Y }[/math]). Then jointly the triple (X, Y, Z) has the following probability distribution:
 [math]\displaystyle{ (X,Y,Z)=\left\{\begin{matrix} (0,0,0) & \text{with probability}\ 1/4, \\ (0,1,1) & \text{with probability}\ 1/4, \\ (1,0,1) & \text{with probability}\ 1/4, \\ (1,1,0) & \text{with probability}\ 1/4. \end{matrix}\right. }[/math]
Here the marginal probability distributions are identical: [math]\displaystyle{ f_X(0)=f_Y(0)=f_Z(0)=1/2, }[/math] and [math]\displaystyle{ f_X(1)=f_Y(1)=f_Z(1)=1/2. }[/math] The bivariate distributions also agree: [math]\displaystyle{ f_{X,Y}=f_{X,Z}=f_{Y,Z}, }[/math] where [math]\displaystyle{ f_{X,Y}(0,0)=f_{X,Y}(0,1)=f_{X,Y}(1,0)=f_{X,Y}(1,1)=1/4. }[/math]
Since each of the pairwise joint distributions equals the product of their respective marginal distributions, the variables are pairwise independent:
 X and Y are independent, and
 X and Z are independent, and
 Y and Z are independent.
However, X, Y, and Z are not mutually independent, since [math]\displaystyle{ f_{X,Y,Z}(x,y,z) \neq f_X(x)f_Y(y)f_Z(z); }[/math] for example, the left side equals 1/4 for (x, y, z) = (0, 0, 0), while the right side equals 1/8 there. In fact, any one of [math]\displaystyle{ \{X,Y,Z\} }[/math] is completely determined by the other two (each of X, Y, Z is the sum, modulo 2, of the other two). That is as far from independence as random variables can get.
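Bernstein's example is small enough to verify exhaustively. The sketch below (helper names are illustrative) enumerates the four outcomes, confirms that every pair of variables factors into its marginals, and exhibits the failure of mutual independence at (0, 0, 0):

```python
from itertools import product

# The four equally likely outcomes (X, Y, Z) of Bernstein's example.
joint = {(0, 0, 0): 0.25, (0, 1, 1): 0.25, (1, 0, 1): 0.25, (1, 1, 0): 0.25}

def marginal(i, a):
    """P(component i equals a)."""
    return sum(p for o, p in joint.items() if o[i] == a)

def pair_prob(i, j, a, b):
    """P(component i equals a and component j equals b)."""
    return sum(p for o, p in joint.items() if o[i] == a and o[j] == b)

# Every pair factors into its marginals, so the variables are pairwise independent.
pairwise = all(
    abs(pair_prob(i, j, a, b) - marginal(i, a) * marginal(j, b)) < 1e-12
    for (i, j) in [(0, 1), (0, 2), (1, 2)]
    for a, b in product([0, 1], repeat=2)
)

# The triple does not factor: P(0,0,0) = 1/4, but the product of marginals is 1/8.
mutual_at_000 = joint[(0, 0, 0)] == marginal(0, 0) * marginal(1, 0) * marginal(2, 0)

print(pairwise, mutual_at_000)  # True False
```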
Probability of the union of pairwise independent events
Bounds on the probability that the sum of Bernoulli random variables is at least one, commonly known as the union bound, are provided by the Boole–Fréchet^{[4]}^{[5]} inequalities. While these bounds assume only univariate information, several bounds that use knowledge of general bivariate probabilities have also been proposed. Denote by [math]\displaystyle{ \{{A}_i, i \in \{1,2,...,n\}\} }[/math] a set of [math]\displaystyle{ n }[/math] Bernoulli events with probability of occurrence [math]\displaystyle{ \mathbb{P}(A_{i})=p_i }[/math] for each [math]\displaystyle{ i }[/math]. Suppose the bivariate probabilities are given by [math]\displaystyle{ \mathbb{P}(A_{i} \cap A_{j})=p_{ij} }[/math] for every pair of indices [math]\displaystyle{ (i,j) }[/math]. Kounias^{[6]} derived the following upper bound:
 [math]\displaystyle{
\mathbb{P}(\displaystyle {\cup}_iA_{i}) \leq \displaystyle \sum_{i=1}^n p_{i} - \underset{j\in \{1,2,...,n\}}{\max} \sum_{i\neq j} p_{ij},
}[/math]
which subtracts the maximum weight of a star spanning tree on a complete graph with [math]\displaystyle{ n }[/math] nodes (where the edge weights are given by [math]\displaystyle{ p_{ij} }[/math]) from the sum of the marginal probabilities [math]\displaystyle{ \sum_i p_i }[/math].
Hunter–Worsley^{[7]}^{[8]} tightened this upper bound by optimizing over spanning trees [math]\displaystyle{ \tau \in T }[/math] as follows:
 [math]\displaystyle{ \mathbb{P}(\displaystyle {\cup}_i A_{i}) \leq \displaystyle \sum_{i=1}^n p_{i} - \underset{\tau \in T}{\max}\sum_{(i,j) \in \tau} p_{ij}, }[/math]
where [math]\displaystyle{ T }[/math] is the set of all spanning trees on the graph. These bounds are not the tightest possible with general bivariates [math]\displaystyle{ p_{ij} }[/math], even when feasibility is guaranteed, as shown in Boros et al.^{[9]} However, when the variables are pairwise independent ([math]\displaystyle{ p_{ij}=p_ip_j }[/math]), Ramachandra–Natarajan^{[10]} showed that the Kounias–Hunter–Worsley^{[6]}^{[7]}^{[8]} bound is tight, by proving that the maximum probability of the union of events admits a closed-form expression given as:

[math]\displaystyle{ \max \mathbb{P}(\displaystyle {\cup}_i A_{i}) = \displaystyle \min\left(\sum_{i=1}^n p_{i} - p_{n}\left(\sum_{i=1}^{n-1} p_{i}\right),1\right) }[/math]   (1)

where the probabilities are sorted in increasing order as [math]\displaystyle{ 0 \leq p_{1} \leq p_{2} \leq \ldots \leq p_{n} \leq 1 }[/math]. It is interesting to note that the tight bound in Eq. 1 depends only on the sum of the smallest [math]\displaystyle{ n-1 }[/math] probabilities [math]\displaystyle{ \sum_{i=1}^{n-1} p_{i} }[/math] and the largest probability [math]\displaystyle{ p_n }[/math]. Thus, while the ordering of the probabilities plays a role in the derivation of the bound, the ordering among the smallest [math]\displaystyle{ n-1 }[/math] probabilities [math]\displaystyle{ \{p_1,p_2,...,p_{n-1}\} }[/math] is inconsequential, since only their sum is used.
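These bounds are straightforward to evaluate numerically. The sketch below (function names and the three-event example are illustrative assumptions, not from the text) computes the Kounias bound, the Hunter–Worsley bound via a maximum-weight spanning tree, and the closed-form tight bound of Eq. 1 for pairwise independent events:

```python
def kounias_bound(p, pij):
    # Sum of marginals minus the heaviest "star" of bivariate weights.
    n = len(p)
    return sum(p) - max(sum(pij[i][j] for i in range(n) if i != j) for j in range(n))

def hunter_worsley_bound(p, pij):
    # Sum of marginals minus a maximum-weight spanning tree (Prim's algorithm
    # on the complete graph with symmetric edge weights pij).
    n = len(p)
    in_tree, tree_weight = {0}, 0.0
    while len(in_tree) < n:
        i, j = max(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: pij[e[0]][e[1]])
        tree_weight += pij[i][j]
        in_tree.add(j)
    return sum(p) - tree_weight

def tight_pairwise_bound(p):
    # Closed-form tight bound for pairwise independent events (Eq. 1):
    # only the largest probability and the sum of the rest matter.
    q = sorted(p)
    return min(sum(q) - q[-1] * sum(q[:-1]), 1.0)

# Three pairwise independent events, so p_ij = p_i * p_j.
p = [0.2, 0.3, 0.4]
pij = [[p[i] * p[j] for j in range(3)] for i in range(3)]
# All three bounds evaluate to 0.7 on this example (up to float rounding).
print(kounias_bound(p, pij), hunter_worsley_bound(p, pij), tight_pairwise_bound(p))
```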
Comparison with the Boole–Fréchet union bound
It is useful to compare the smallest bounds on the probability of the union with arbitrary dependence and pairwise independence respectively. The tightest Boole–Fréchet upper union bound (assuming only univariate information) is given as:

[math]\displaystyle{ \displaystyle \max \mathbb{P}(\displaystyle {\cup}_i A_{i}) = \displaystyle \min\left(\sum_{i=1}^n p_{i},1\right) }[/math]   (2)

As shown in Ramachandra–Natarajan,^{[10]} it can be easily verified that the ratio of the two tight bounds in Eq. 2 and Eq. 1 is at most [math]\displaystyle{ 4/3 }[/math], and the maximum value of [math]\displaystyle{ 4/3 }[/math] is attained when
 [math]\displaystyle{ \sum_{i=1}^{n-1} p_{i}=1/2 }[/math], [math]\displaystyle{ p_n=1/2 }[/math]
where the probabilities are sorted in increasing order as [math]\displaystyle{ 0 \leq p_{1} \leq p_{2} \leq \ldots \leq p_{n} \leq 1 }[/math]. In other words, in the best-case scenario, the pairwise independence bound in Eq. 1 provides an improvement of [math]\displaystyle{ 25\% }[/math] over the univariate bound in Eq. 2.
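The extremal ratio can be checked directly. In the sketch below, the specific probability vector is an illustrative assumption chosen to satisfy the stated conditions (the [math]\displaystyle{ n-1 }[/math] smallest probabilities sum to 1/2 and the largest equals 1/2):

```python
# Extremal configuration: sum of the two smallest probabilities is 1/2,
# and the largest probability is 1/2 (an illustrative choice, n = 3).
p = [0.25, 0.25, 0.5]
univariate_bound = min(sum(p), 1.0)            # Eq. 2: min(1.0, 1) = 1.0
pairwise_bound = min(sum(p) - 0.5 * 0.5, 1.0)  # Eq. 1: min(0.75, 1) = 0.75
print(univariate_bound / pairwise_bound)       # 4/3, i.e. a 25% improvement
```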
Generalization
More generally, we can talk about k-wise independence, for any k ≥ 2. The idea is similar: a set of random variables is k-wise independent if every subset of size k of those variables is independent. k-wise independence has been used in theoretical computer science, where it was used to prove a theorem about the problem MAXEkSAT.
k-wise independence is used in the proof that k-independent hashing functions are secure unforgeable message authentication codes.
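One reason k-wise independence is useful in computer science is that many pairwise independent fair bits can be derived from only a few truly random bits. The standard construction sketched below (names are illustrative) XORs k independent seed bits over each nonempty subset of indices, yielding 2^k − 1 pairwise independent bits; the check enumerates the entire seed space:

```python
from itertools import combinations, product

K = 3  # number of truly random seed bits
subsets = [s for r in range(1, K + 1) for s in combinations(range(K), r)]

def derived_bit(seed, s):
    """XOR of the seed bits indexed by the subset s."""
    out = 0
    for i in s:
        out ^= seed[i]
    return out

seeds = list(product([0, 1], repeat=K))  # uniform seed space of size 2^K

def prob(event):
    return sum(1 for z in seeds if event(z)) / len(seeds)

# All 2^K - 1 = 7 derived bits are pairwise independent ...
pairwise = all(
    abs(prob(lambda z: derived_bit(z, s) == a and derived_bit(z, t) == b)
        - prob(lambda z: derived_bit(z, s) == a)
        * prob(lambda z: derived_bit(z, t) == b)) < 1e-12
    for s, t in combinations(subsets, 2)
    for a, b in product([0, 1], repeat=2)
)

# ... yet they cannot be mutually independent: 7 mutually independent fair
# bits would need 2^7 outcomes, while the seed space has only 2^3 = 8.
print(pairwise)  # True
```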
See also
 Pairwise
 Disjoint sets
References
 ↑ Gut, A. (2005) Probability: a Graduate Course, Springer-Verlag. ISBN 0-387-27332-8. pp. 71–72.
 ↑ Hogg, R. V., McKean, J. W., Craig, A. T. (2005). Introduction to Mathematical Statistics (6 ed.). Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 0-13-008507-3. Definition 2.5.1, page 109.
 ↑ Hogg, R. V., McKean, J. W., Craig, A. T. (2005). Introduction to Mathematical Statistics (6 ed.). Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 0-13-008507-3. Remark 2.6.1, p. 120.
 ↑ Boole, G. (1854). An Investigation of the Laws of Thought, On Which Are Founded the Mathematical Theories of Logic and Probability. Walton and Maberly, London. See Boole's "major" and "minor" limits of a conjunction on page 299.
 ↑ Fréchet, M. (1935). Généralisations du théorème des probabilités totales. Fundamenta Mathematicae 25: 379–387.
 ↑ ^{6.0} ^{6.1} E. G. Kounias (1968). "Bounds for the probability of a union, with applications". The Annals of Mathematical Statistics 39 (6): 2154–2158. doi:10.1214/aoms/1177698049.
 ↑ ^{7.0} ^{7.1} D. Hunter (1976). "An upper bound for the probability of a union". Journal of Applied Probability 13 (3): 597–603. doi:10.2307/3212481.
 ↑ ^{8.0} ^{8.1} K. J. Worsley (1982). "An improved Bonferroni inequality and applications". Biometrika 69 (2): 297–302. doi:10.1093/biomet/69.2.297.
 ↑ Boros, E. et al. (2014). "Polynomially computable bounds for the probability of the union of events". Mathematics of Operations Research 39 (4): 1311–1329. doi:10.1287/moor.2014.0657.
 ↑ ^{10.0} ^{10.1} Ramachandra, Arjun Kodagehalli; Natarajan, Karthik (2023). "Tight Probability Bounds with Pairwise Independence". SIAM Journal on Discrete Mathematics 37 (2): 516–555. doi:10.1137/21M140829.
Original source: https://en.wikipedia.org/wiki/Pairwise independence.