# Law of total variance

In probability theory, the law of total variance[1] (also known as the variance decomposition formula, the conditional variance formula, the law of iterated variances, or Eve's law[2]) states that if $\displaystyle{ X }$ and $\displaystyle{ Y }$ are random variables on the same probability space, and the variance of $\displaystyle{ Y }$ is finite, then

$\displaystyle{ \operatorname{Var}(Y) = \operatorname{E}[\operatorname{Var}(Y \mid X)] + \operatorname{Var}(\operatorname{E}[Y \mid X]). }$
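
The identity is easy to verify numerically. The following Python sketch is a minimal Monte Carlo check, not part of the article: it assumes NumPy and an arbitrary toy model in which $\displaystyle{ X }$ is a three-valued label and $\displaystyle{ Y \mid X }$ is normal with an $\displaystyle{ X }$-dependent mean and spread.

```python
# Minimal Monte Carlo sanity check of the law of total variance.
# X is a three-valued label; Y | X is normal with X-dependent parameters.
# All parameters are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.integers(0, 3, size=n)          # X takes the values 0, 1, 2
means = np.array([0.0, 2.0, 5.0])[x]    # E[Y | X] for each sample
sds = np.array([1.0, 0.5, 2.0])[x]      # sd(Y | X) for each sample
y = rng.normal(means, sds)

e_var = np.mean(sds**2)   # estimates E[Var(Y | X)]
var_e = np.var(means)     # estimates Var(E[Y | X])
print(np.var(y), e_var + var_e)   # agree up to Monte Carlo error
```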

In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM).[3] These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation".

## Example

Suppose $\displaystyle{ X }$ is a coin flip with probability of heads $\displaystyle{ h }$. Suppose that when $\displaystyle{ X = }$ heads, $\displaystyle{ Y }$ is drawn from a normal distribution with mean $\displaystyle{ \mu_h }$ and standard deviation $\displaystyle{ \sigma_h }$, and that when $\displaystyle{ X = }$ tails, $\displaystyle{ Y }$ is drawn from a normal distribution with mean $\displaystyle{ \mu_t }$ and standard deviation $\displaystyle{ \sigma_t }$. Then the first, "unexplained" term on the right-hand side of the above formula is the weighted average of the conditional variances, $\displaystyle{ h\sigma_h^2 + (1-h)\sigma_t^2 }$, and the second, "explained" term is the variance of the distribution that gives $\displaystyle{ \mu_h }$ with probability $\displaystyle{ h }$ and gives $\displaystyle{ \mu_t }$ with probability $\displaystyle{ 1-h }$.
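
For concreteness, here is the same example with illustrative numbers plugged in (the values of $\displaystyle{ h }$, the means, and the standard deviations are assumptions chosen purely for the demonstration); the variable names `evpv` and `vhm` echo the actuarial terms above.

```python
# The coin-flip example with assumed illustrative numbers.
import numpy as np

h, mu_h, sd_h, mu_t, sd_t = 0.3, 1.0, 2.0, -1.0, 0.5

# "Unexplained": weighted average of the two conditional variances.
evpv = h * sd_h**2 + (1 - h) * sd_t**2
# "Explained": variance of the two-point distribution of conditional means.
mean = h * mu_h + (1 - h) * mu_t
vhm = h * (mu_h - mean)**2 + (1 - h) * (mu_t - mean)**2

rng = np.random.default_rng(1)
heads = rng.random(1_000_000) < h
y = np.where(heads, rng.normal(mu_h, sd_h, heads.size),
                    rng.normal(mu_t, sd_t, heads.size))
print(evpv + vhm, np.var(y))   # closed form vs. simulated Var(Y)
```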

## Formulation

There is a general variance decomposition formula for $\displaystyle{ c \geq 2 }$ components (see below).[4] For example, with two conditioning random variables: $\displaystyle{ \operatorname{Var}[Y] = \operatorname{E}\left[\operatorname{Var}\left(Y \mid X_1, X_2\right)\right] + \operatorname{E}[\operatorname{Var}(\operatorname{E}\left[Y \mid X_1, X_2\right] \mid X_1)] + \operatorname{Var}(\operatorname{E}\left[Y \mid X_1\right]), }$ which follows from the law of total conditional variance:[4] $\displaystyle{ \operatorname{Var}(Y \mid X_1) = \operatorname{E} \left[\operatorname{Var}(Y \mid X_1, X_2) \mid X_1\right] + \operatorname{Var} \left(\operatorname{E}\left[Y \mid X_1, X_2 \right] \mid X_1\right). }$
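
A quick numerical check of the two-variable decomposition, using a constructed toy model that is an assumption for illustration: $\displaystyle{ X_1, X_2 }$ are independent fair coins and $\displaystyle{ Y \mid (X_1, X_2) }$ is conditionally normal.

```python
# Check of the two-conditioning-variable decomposition on a toy model:
# X1, X2 independent fair coins; Y | (X1, X2) normal with mean m[x1, x2]
# and standard deviation s[x1, x2]. All parameters are arbitrary.
import numpy as np

m = np.array([[0.0, 1.0], [3.0, 7.0]])   # E[Y | X1 = x1, X2 = x2]
s = np.array([[1.0, 2.0], [0.5, 1.5]])   # sd(Y | X1 = x1, X2 = x2)

term1 = np.mean(s**2)               # E[Var(Y | X1, X2)]
term2 = np.mean(np.var(m, axis=1))  # E[Var(E[Y | X1, X2] | X1)]
term3 = np.var(np.mean(m, axis=1))  # Var(E[Y | X1])

rng = np.random.default_rng(2)
n = 1_000_000
x1, x2 = rng.integers(0, 2, n), rng.integers(0, 2, n)
y = rng.normal(m[x1, x2], s[x1, x2])
print(term1 + term2 + term3, np.var(y))   # agree up to Monte Carlo error
```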

Note that the conditional expected value $\displaystyle{ \operatorname{E}(Y \mid X) }$ is a random variable in its own right, whose value depends on the value of $\displaystyle{ X. }$ The conditional expected value of $\displaystyle{ Y }$ given the event $\displaystyle{ X = x }$ is a function of $\displaystyle{ x }$ (this is where adherence to the conventional, rigidly case-sensitive notation of probability theory becomes important). If we write $\displaystyle{ \operatorname{E}(Y \mid X = x) = g(x), }$ then the random variable $\displaystyle{ \operatorname{E}(Y \mid X) }$ is just $\displaystyle{ g(X). }$ Similar comments apply to the conditional variance.

One special case (similar to the law of total expectation) states that if $\displaystyle{ A_1, \ldots, A_n }$ is a partition of the whole outcome space, that is, these events are mutually exclusive and exhaustive, then $\displaystyle{ \begin{align} \operatorname{Var} (X) = {} & \sum_{i=1}^n \operatorname{Var}(X\mid A_i) \Pr(A_i) + \sum_{i=1}^n \operatorname{E}[X\mid A_i]^2 (1-\Pr(A_i))\Pr(A_i) \\[4pt] & {} - 2\sum_{i=2}^n \sum_{j=1}^{i-1} \operatorname{E}[X \mid A_i] \Pr(A_i)\operatorname{E}[X\mid A_j] \Pr(A_j). \end{align} }$

In this formula, the first component is the expectation of the conditional variance; the remaining two components together make up the variance of the conditional expectation.
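
The partition form can be checked directly. The sketch below uses toy probabilities, conditional means, and conditional variances (all assumed for illustration), evaluates the right-hand side of the displayed formula, and compares it with $\displaystyle{ \operatorname{Var}(X) }$ computed from the mixture.

```python
# Checking the partition formula with assumed toy numbers: three
# exhaustive events A_i with probabilities p, conditional means mu,
# and conditional variances v.
import numpy as np

p = np.array([0.2, 0.5, 0.3])     # Pr(A_i), sums to 1
mu = np.array([1.0, -2.0, 4.0])   # E[X | A_i]
v = np.array([0.5, 1.0, 2.0])     # Var(X | A_i)

# Right-hand side of the displayed formula.
rhs = np.sum(v * p) + np.sum(mu**2 * (1 - p) * p)
for i in range(len(p)):
    for j in range(i):
        rhs -= 2 * mu[i] * p[i] * mu[j] * p[j]

# Var(X) computed directly from the mixture: E[X^2] - E[X]^2.
ex = np.sum(p * mu)
ex2 = np.sum(p * (v + mu**2))
print(rhs, ex2 - ex**2)   # identical up to rounding
```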

## Proof

The law of total variance can be proved using the law of total expectation.[5] First, $\displaystyle{ \operatorname{Var}[Y] = \operatorname{E}\left[Y^2\right] - \operatorname{E}[Y]^2 }$ from the definition of variance. Applying the law of total expectation and then the definition of conditional variance, we have $\displaystyle{ \operatorname{E}\left[Y^2\right] = \operatorname{E}\left[\operatorname{E}[Y^2\mid X]\right] = \operatorname{E} \left[\operatorname{Var}[Y \mid X] + \operatorname{E}[Y \mid X]^2\right]. }$

Now we subtract $\displaystyle{ \operatorname{E}[Y]^2 }$ and apply the law of total expectation to the subtracted term, $\displaystyle{ \operatorname{E}[Y] = \operatorname{E}[\operatorname{E}[Y \mid X]], }$ which gives $\displaystyle{ \operatorname{E}\left[Y^2\right] - \operatorname{E}[Y]^2 = \operatorname{E} \left[\operatorname{Var}[Y \mid X] + \operatorname{E}[Y \mid X]^2\right] - \operatorname{E} [\operatorname{E}[Y \mid X]]^2. }$

Since the expectation of a sum is the sum of expectations, the terms can now be regrouped: $\displaystyle{ = \left(\operatorname{E} [\operatorname{Var}[Y \mid X]]\right) + \left(\operatorname{E} \left[\operatorname{E}[Y \mid X]^2\right] - \operatorname{E} [\operatorname{E}[Y \mid X]]^2\right). }$

Finally, we recognize the terms in the second set of parentheses as the variance of the conditional expectation $\displaystyle{ \operatorname{E}[Y \mid X] }$: $\displaystyle{ = \operatorname{E} [\operatorname{Var}[Y \mid X]] + \operatorname{Var} [\operatorname{E}[Y \mid X]]. }$

## General variance decomposition applicable to dynamic systems

The following formula shows how to apply the general, measure-theoretic variance decomposition formula[4] to stochastic dynamic systems. Let $\displaystyle{ Y(t) }$ be the value of a system variable at time $\displaystyle{ t. }$ Suppose we have the internal histories (natural filtrations) $\displaystyle{ H_{1t},H_{2t},\ldots,H_{c-1,t} }$, each one corresponding to the history (trajectory) of a different collection of system variables. The collections need not be disjoint. The variance of $\displaystyle{ Y(t) }$ can be decomposed, for all times $\displaystyle{ t, }$ into $\displaystyle{ c \geq 2 }$ components as follows: $\displaystyle{ \begin{align} \operatorname{Var}[Y(t)] = {} & \operatorname{E}(\operatorname{Var}[Y(t)\mid H_{1t},H_{2t},\ldots,H_{c-1,t}]) \\[4pt] & {} + \sum_{j=2}^{c-1}\operatorname{E}(\operatorname{Var}[\operatorname{E}[Y(t)\mid H_{1t},H_{2t},\ldots,H_{jt}] \mid H_{1t},H_{2t},\ldots,H_{j-1,t}]) \\[4pt] & {} + \operatorname{Var}(\operatorname{E}[Y(t)\mid H_{1t}]). \end{align} }$

The decomposition is not unique. It depends on the order of the conditioning in the sequential decomposition.
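
As a sketch of how the decomposition can be checked in practice, the following example estimates the three components for $\displaystyle{ c = 3 }$ by grouping samples on encoded histories. The two-step system, the grouping helper `cond_mean`, and all parameters are constructed assumptions, not from the source.

```python
# Decomposition check for c = 3 on a constructed two-step system: A and B
# are binary variables evolving for two steps; H1 is the trajectory of A,
# H2 that of B, and Y depends on both histories plus noise.
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
a = rng.integers(0, 2, (n, 2))   # trajectory of A: (A_1, A_2)
b = rng.integers(0, 2, (n, 2))   # trajectory of B: (B_1, B_2)
y = a.sum(1) + 2.0 * b.sum(1) + a[:, 0] * b[:, 1] + rng.normal(0, 1, n)

def cond_mean(y, key):
    """Empirical E[y | key], evaluated at each sample, by grouping."""
    return (np.bincount(key, weights=y) / np.bincount(key))[key]

h1 = a[:, 0] * 2 + a[:, 1]            # encode the history of A as an int
h12 = h1 * 4 + b[:, 0] * 2 + b[:, 1]  # encode the joint history of A and B

e_y_h12 = cond_mean(y, h12)           # E[Y | H1, H2]
e_y_h1 = cond_mean(y, h1)             # E[Y | H1]

term1 = np.var(y - e_y_h12)           # E[Var(Y | H1, H2)]
term2 = np.var(e_y_h12 - e_y_h1)      # E[Var(E[Y | H1, H2] | H1)]
term3 = np.var(e_y_h1)                # Var(E[Y | H1])
print(term1 + term2 + term3, np.var(y))   # exactly equal in-sample
```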

## The square of the correlation and explained (or informational) variation

In cases where the conditional expected value is linear, that is, where $\displaystyle{ \operatorname{E}(Y \mid X) = a X + b, }$ it follows from the bilinearity of covariance that $\displaystyle{ a={\operatorname{Cov}(Y, X) \over \operatorname{Var}(X)} }$ and $\displaystyle{ b = \operatorname{E}(Y)-{\operatorname{Cov}(Y, X) \over \operatorname{Var}(X)} \operatorname{E}(X), }$ and the explained component of the variance divided by the total variance is just the square of the correlation between $\displaystyle{ Y }$ and $\displaystyle{ X; }$ that is, in such cases, $\displaystyle{ {\operatorname{Var}(\operatorname{E}(Y \mid X)) \over \operatorname{Var}(Y)} = \operatorname{Corr}(X, Y)^2. }$

One example of this situation is when $\displaystyle{ (X, Y) }$ have a bivariate normal (Gaussian) distribution.
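
The following sketch checks both claims on a simulated bivariate normal pair (the correlation $\displaystyle{ \rho }$ and sample size are arbitrary assumptions): the fitted linear coefficients recover $\displaystyle{ a = \operatorname{Cov}(Y,X)/\operatorname{Var}(X), }$ and the explained share of variance matches $\displaystyle{ \operatorname{Corr}(X, Y)^2. }$

```python
# Simulated bivariate normal check of the correlation-squared identity.
import numpy as np

rho, n = 0.6, 1_000_000
rng = np.random.default_rng(4)
x = rng.normal(0, 1, n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(0, 1, n)   # Corr(X, Y) = rho

a = np.cov(y, x, bias=True)[0, 1] / np.var(x)   # slope of the linear E[Y | X]
b = y.mean() - a * x.mean()                     # intercept
explained = np.var(a * x + b)                   # Var(E[Y | X])
print(explained / np.var(y), np.corrcoef(x, y)[0, 1]**2)   # both near rho**2
```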

More generally, when the conditional expectation $\displaystyle{ \operatorname{E}(Y \mid X) }$ is a non-linear function of $\displaystyle{ X, }$ the explained fraction of variance is[4] $\displaystyle{ \iota_{Y\mid X} = {\operatorname{Var}(\operatorname{E}(Y \mid X)) \over \operatorname{Var}(Y)} = \operatorname{Corr}(\operatorname{E}(Y \mid X), Y)^2, }$ which can be estimated as the $\displaystyle{ R }$ squared from a non-linear regression of $\displaystyle{ Y }$ on $\displaystyle{ X, }$ using data drawn from the joint distribution of $\displaystyle{ (X, Y). }$ When $\displaystyle{ \operatorname{E}(Y \mid X) }$ has a Gaussian distribution (and is an invertible function of $\displaystyle{ X }$), or $\displaystyle{ Y }$ itself has a (marginal) Gaussian distribution, this explained component of variation sets a lower bound on the mutual information:[4] $\displaystyle{ \operatorname{I}(Y; X) \geq \ln \left([1 - \iota_{Y \mid X}]^{-1/2}\right). }$
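
In the bivariate normal case the bound is tight: $\displaystyle{ \iota_{Y \mid X} = \rho^2 }$ and the mutual information is exactly $\displaystyle{ -\tfrac{1}{2}\ln(1-\rho^2), }$ so the two sides coincide. A minimal numeric comparison, with an assumed value of $\displaystyle{ \rho }$:

```python
# Gaussian case: the mutual-information lower bound holds with equality.
import numpy as np

rho = 0.6
iota = rho**2                         # explained fraction of variance
bound = np.log((1 - iota) ** -0.5)    # ln([1 - iota]^(-1/2))
mi = -0.5 * np.log(1 - rho**2)        # exact Gaussian mutual information
print(bound, mi)                      # equal: the bound is tight here
```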

## Higher moments

A similar law for the third central moment $\displaystyle{ \mu_3 }$ says $\displaystyle{ \mu_3(Y)=\operatorname{E}\left(\mu_3(Y \mid X)\right) + \mu_3(\operatorname{E}(Y \mid X)) + 3\operatorname{Cov}(\operatorname{E}(Y \mid X), \operatorname{Var}(Y \mid X)). }$
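
This law can also be checked by simulation. The sketch below reuses the coin-flip mixture from the example above (same assumed numbers); here $\displaystyle{ \operatorname{E}(\mu_3(Y \mid X)) = 0 }$ because the conditional distributions are normal.

```python
# Monte Carlo check of the third-moment law on the coin-flip mixture.
import numpy as np

h, mu_h, sd_h, mu_t, sd_t = 0.3, 1.0, 2.0, -1.0, 0.5
m = h * mu_h + (1 - h) * mu_t                             # E[Y]

mu3_means = h * (mu_h - m)**3 + (1 - h) * (mu_t - m)**3   # mu_3(E(Y | X))
ev = h * sd_h**2 + (1 - h) * sd_t**2                      # E[Var(Y | X)]
cov = h * mu_h * sd_h**2 + (1 - h) * mu_t * sd_t**2 - m * ev  # Cov(E, Var)

rng = np.random.default_rng(5)
heads = rng.random(2_000_000) < h
y = np.where(heads, rng.normal(mu_h, sd_h, heads.size),
                    rng.normal(mu_t, sd_t, heads.size))
print(mu3_means + 3 * cov, np.mean((y - y.mean())**3))   # agree closely
```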

For higher cumulants, a generalization exists. See law of total cumulance.