Second moment method

In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the "moment method" consists of bounding the probability that a random variable fluctuates far from its mean, by using its moments.[1]

The method is often quantitative, in that one can frequently deduce a lower bound on the probability that the random variable is larger than some constant times its expectation. The method involves comparing the second moment of a random variable to the square of its first moment.

First moment method

The first moment method is a simple application of Markov's inequality for integer-valued variables. For a non-negative, integer-valued random variable X, we may want to prove that X = 0 with high probability. To obtain an upper bound for Pr(X > 0), and thus a lower bound for Pr(X = 0), we first note that since X takes only integer values, Pr(X > 0) = Pr(X ≥ 1). Since X is non-negative we can now apply Markov's inequality to obtain Pr(X ≥ 1) ≤ E[X]. Combining these we have Pr(X > 0) ≤ E[X]; the first moment method is simply the use of this inequality.
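To see the bound in action (a purely illustrative sketch, not part of the classical statement), the following Python snippet compares the empirical value of Pr(X > 0) with the first moment bound E[X] for a binomial random variable whose mean is small; the particular parameters are arbitrary.

<syntaxhighlight lang="python">
import random

# First moment (Markov) bound for a non-negative integer-valued X:
#   Pr(X > 0) = Pr(X >= 1) <= E[X].
# Illustration with X ~ Binomial(n, q), chosen so that E[X] = n*q is small.
n, q = 200, 0.0025          # E[X] = 0.5
trials = 50_000

positive = 0
for _ in range(trials):
    x = sum(random.random() < q for _ in range(n))  # one Binomial(n, q) sample
    positive += x > 0

print("empirical Pr(X > 0):", positive / trials)  # about 1 - (1 - q)**n, roughly 0.39
print("first moment bound E[X]:", n * q)          # 0.5, an upper bound on the above
</syntaxhighlight>

The bound is informative only when E[X] is small, which is exactly how the first moment method is used: to show that Pr(X = 0) is close to 1.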

Second moment method

In the other direction, E[X] being "large" does not directly imply that Pr(X = 0) is small. However, we can often use the second moment to derive such a conclusion, using the Cauchy–Schwarz inequality.

Theorem — If X ≥ 0 is a random variable with finite variance, then [math]\displaystyle{ \Pr( X \gt 0 ) \ge \frac{(\operatorname{E}[X])^2}{\operatorname{E}[X^2]}. }[/math]
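As a numerical sanity check (an illustrative sketch, not part of the theorem), the following Python snippet estimates both sides of this inequality for a simple non-negative random variable that vanishes with positive probability; the particular mixture used is arbitrary.

<syntaxhighlight lang="python">
import random

# Second moment bound: for X >= 0 with finite variance,
#   Pr(X > 0) >= E[X]^2 / E[X^2].
# Illustration with X = B * Y, where B ~ Bernoulli(0.3) and Y ~ Uniform(0, 1),
# so that X = 0 with probability 0.7.
trials = 200_000
samples = []
for _ in range(trials):
    b = 1 if random.random() < 0.3 else 0
    samples.append(b * random.random())

mean = sum(samples) / trials
second_moment = sum(x * x for x in samples) / trials
prob_positive = sum(x > 0 for x in samples) / trials

print("Pr(X > 0)       :", prob_positive)             # about 0.3
print("E[X]^2 / E[X^2] :", mean ** 2 / second_moment)  # about 0.225, a lower bound
</syntaxhighlight>

Equality holds precisely when X is a constant multiple of an indicator, that is, when X takes only one nonzero value.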

The method can also be used on distributional limits of random variables. Furthermore, the estimate of the previous theorem can be refined by means of the so-called Paley–Zygmund inequality. Suppose that Xn is a sequence of non-negative real-valued random variables which converge in law to a random variable X. If there are finite positive constants c1, c2 such that [math]\displaystyle{ \begin{align} \operatorname{E} \left [X_n^2 \right ] &\le c_1 \operatorname{E}[X_n]^2 \\ \operatorname{E} \left [X_n \right ] &\ge c_2 \end{align} }[/math]

hold for every n, then it follows from the Paley–Zygmund inequality that for every n and θ in (0, 1) [math]\displaystyle{ \Pr (X_n \geq c_2 \theta) \geq \frac{(1-\theta)^2}{c_1}. }[/math]

Consequently, the same inequality is satisfied by X.
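For a concrete (and purely illustrative) check of the Paley–Zygmund inequality in the form Pr(X ≥ θ E[X]) ≥ (1 − θ)² E[X]² / E[X²], which is what is applied above together with E[Xn] ≥ c2 and E[Xn²] ≤ c1 E[Xn]², the following Python snippet estimates both sides for an exponential random variable; the choice of distribution is arbitrary.

<syntaxhighlight lang="python">
import random

# Paley–Zygmund: for X >= 0 with finite variance and 0 < theta < 1,
#   Pr(X >= theta * E[X]) >= (1 - theta)^2 * E[X]^2 / E[X^2].
# Illustration with X ~ Exponential(1), so E[X] = 1 and E[X^2] = 2.
trials = 200_000
samples = [random.expovariate(1.0) for _ in range(trials)]

mean = sum(samples) / trials
second_moment = sum(x * x for x in samples) / trials

for theta in (0.25, 0.5, 0.75):
    lhs = sum(x >= theta * mean for x in samples) / trials
    rhs = (1 - theta) ** 2 * mean ** 2 / second_moment
    print(f"theta = {theta}: Pr(X >= theta*E[X]) = {lhs:.3f} >= bound {rhs:.3f}")
</syntaxhighlight>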

Example application of method

Setup of problem

The Bernoulli bond percolation subgraph of a graph G at parameter p is a random subgraph obtained from G by deleting every edge of G with probability 1−p, independently. The infinite complete binary tree T is an infinite tree where one vertex (called the root) has two neighbors and every other vertex has three neighbors. The second moment method can be used to show that at every parameter p ∈ (1/2, 1], with positive probability the connected component of the root in the percolation subgraph of T is infinite.

Application of method

Let K be the percolation component of the root, and let Tn be the set of vertices of T that are at distance n from the root. Let Xn be the number of vertices in Tn ∩ K. To prove that K is infinite with positive probability, it is enough to show that [math]\displaystyle{ \limsup_{n\to\infty}1_{X_n\gt 0}\gt 0 }[/math] with positive probability. By the reverse Fatou lemma, it suffices to show that [math]\displaystyle{ \inf_n\Pr(X_n\gt 0)\gt 0 }[/math]. The Cauchy–Schwarz inequality, applied to the product [math]\displaystyle{ X_n = X_n\,1_{X_n\gt 0} }[/math], gives [math]\displaystyle{ \operatorname{E}[X_n]^2\le \operatorname{E}[X_n^2] \, \operatorname{E}\left [(1_{X_n\gt 0})^2\right ] = \operatorname{E}[X_n^2]\,\Pr(X_n\gt 0). }[/math] Therefore, it is sufficient to show that [math]\displaystyle{ \inf_n \frac{\operatorname{E} \left[ X_n \right ]^2}{\operatorname{E} \left[ X_n^2 \right ]}\gt 0\,, }[/math] that is, that the second moment is bounded from above by a constant times the first moment squared (and both are nonzero). In many applications of the second moment method, one is not able to calculate the moments precisely, but can nevertheless establish this inequality.

In this particular application, these moments can be calculated. For every specific v in Tn, [math]\displaystyle{ \Pr(v\in K) = p^n. }[/math]

Since [math]\displaystyle{ |T_n| = 2^n }[/math], it follows that [math]\displaystyle{ \operatorname{E}[X_n] = 2^n\,p^n }[/math], which is the first moment. Now comes the second moment calculation.

[math]\displaystyle{ \operatorname{E}\!\left[X_n^2 \right ] = \operatorname{E}\!\left[\sum_{v\in T_n} \sum_{u\in T_n}1_{v\in K}\,1_{u\in K}\right] = \sum_{v\in T_n} \sum_{u\in T_n} \Pr(v,u\in K). }[/math]

For each pair v, u in Tn, let w(v, u) denote the vertex in T that is farthest away from the root and lies both on the simple path in T from the root to v and on the simple path from the root to u, and let k(v, u) denote the distance from w to the root. In order for v and u to both be in K, it is necessary and sufficient that the three simple paths from w(v, u) to v, to u and to the root all lie in K. Since the number of edges contained in the union of these three paths is 2n − k(v, u), we obtain [math]\displaystyle{ \Pr(v,u\in K) = p^{2n-k(v,u)}. }[/math]

The number of pairs (v, u) such that k(v, u) = s is equal to [math]\displaystyle{ 2^s\,2^{n-s}\,2^{n-s-1} = 2^{2n-s-1} }[/math], for s = 0, 1, ..., n. Hence, [math]\displaystyle{ \operatorname{E}\!\left[X_n^2 \right ] = \sum_{s=0}^n 2^{2n-s-1} p^{2n-s} = \frac 1 2\,(2p)^n \sum_{s=0}^n (2p)^s = \frac 1 2 \,(2p)^n \, \frac{(2p)^{n+1}-1}{2p-1} \le \frac p{2p-1} \,\operatorname{E}[X_n]^2, }[/math] which completes the proof.
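These closed-form expressions can be checked against a simulation. The following Python sketch (an illustration added here; the parameter values and the helper name sample_Xn are arbitrary) samples Xn by generating the percolation cluster level by level, which avoids storing the tree, and compares the empirical first moment with (2p)^n and the ratio E[Xn]²/E[Xn²] with the lower bound (2p − 1)/p obtained above.

<syntaxhighlight lang="python">
import random

def sample_Xn(n, p):
    """Number of depth-n vertices connected to the root after Bernoulli bond
    percolation at parameter p on the binary tree, simulated level by level."""
    count = 1                          # the root itself (level 0)
    for _ in range(n):
        # each surviving vertex has 2 child edges, each kept independently with prob. p
        count = sum(1 for _ in range(2 * count) if random.random() < p)
        if count == 0:
            break
    return count

n, p, trials = 6, 0.7, 50_000
samples = [sample_Xn(n, p) for _ in range(trials)]

mean = sum(samples) / trials
second_moment = sum(x * x for x in samples) / trials
prob_positive = sum(x > 0 for x in samples) / trials

print("E[X_n]: empirical vs (2p)^n :", mean, (2 * p) ** n)
print("E[X_n]^2 / E[X_n^2]         :", mean ** 2 / second_moment)
print("lower bound (2p - 1)/p      :", (2 * p - 1) / p)
print("empirical Pr(X_n > 0)       :", prob_positive)
</syntaxhighlight>

Up to Monte Carlo error, the printed ratio stays above (2p − 1)/p, and the empirical Pr(Xn > 0) is larger still, in line with the chain of inequalities above.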

Discussion

  • The choice of the random variables Xn was rather natural in this setup. In some more difficult applications of the method, some ingenuity might be required in order to choose the random variables Xn for which the argument can be carried through.
  • The Paley–Zygmund inequality is sometimes used instead of the Cauchy–Schwarz inequality and may occasionally give more refined results.
  • Under the (incorrect) assumption that the events v ∈ K and u ∈ K are always independent, one has [math]\displaystyle{ \Pr(v,u\in K) = \Pr(v\in K) \, \Pr(u\in K) }[/math], and the second moment would equal the first moment squared. The second moment method typically works in situations in which the corresponding events or random variables are "nearly independent".
  • In this application, the random variables Xn are given as sums [math]\displaystyle{ X_n = \sum_{v \in T_n} 1_{v\in K}. }[/math] In other applications, the corresponding useful random variables are integrals [math]\displaystyle{ X_n = \int f_n(t)\,d\mu(t), }[/math] where the functions fn are random. In such a situation, one considers the product measure μ × μ and calculates [math]\displaystyle{ \begin{align} \operatorname{E} \left[X_n^2 \right ] & = \operatorname{E}\left[\iint f_n(x)\,f_n(y)\,d\mu(x)\,d\mu(y)\right ] \\ & = \operatorname{E}\left[ \iint \operatorname{E}\left[f_n(x)\,f_n(y)\right]\,d\mu(x)\,d\mu(y)\right ], \end{align} }[/math] where the last step is typically justified using Fubini's theorem.

References