# Markov chain tree theorem

In the mathematical theory of Markov chains, the **Markov chain tree theorem** is an expression for the stationary distribution of a Markov chain with finitely many states. It expresses the stationary probability of each state as a sum of positive weights, one for each spanning tree rooted at that state. The Markov chain tree theorem is closely related to Kirchhoff's theorem on counting the spanning trees of a graph, from which it can be derived.^{[1]} It was first stated by Hill (1966), for certain Markov chains arising in thermodynamics,^{[1]}^{[2]} and proved in full generality by Leighton and Rivest (1986), motivated by an application in limited-memory estimation of the probability of a biased coin.^{[1]}^{[3]}

A finite Markov chain consists of a finite set of states, and a transition probability [math]\displaystyle{ p_{i,j} }[/math] for changing from state [math]\displaystyle{ i }[/math] to state [math]\displaystyle{ j }[/math], such that for each state the outgoing transition probabilities sum to one. From an initial choice of state (which turns out to be irrelevant to this problem), each successive state is chosen at random according to the transition probabilities from the previous state. A Markov chain is said to be irreducible when every state can reach every other state through some sequence of transitions, and aperiodic if, for every state, the possible numbers of steps in sequences that start and end in that state have greatest common divisor one. An irreducible and aperiodic Markov chain necessarily has a stationary distribution, a probability distribution on its states that describes the probability of being in a given state after many steps, regardless of the initial choice of state.^{[1]}
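The convergence described above can be checked numerically. The following is a minimal sketch using a hypothetical 3-state transition matrix (not from the source): repeatedly applying the transition matrix to an arbitrary initial distribution converges, for an irreducible aperiodic chain, to the stationary distribution.

```python
import numpy as np

# Hypothetical 3-state transition matrix; each row sums to one.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Power iteration: start from an arbitrary distribution concentrated
# on state 0 and repeatedly take one step of the chain.  Because the
# chain is irreducible and aperiodic, the result is the stationary
# distribution regardless of the starting state.
pi = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    pi = pi @ P

# Stationarity: taking one more step leaves the distribution unchanged.
assert np.allclose(pi @ P, pi)
print(pi)  # ≈ [0.3214, 0.4286, 0.25]
```

Starting instead from a distribution concentrated on state 1 or 2 yields the same limit, illustrating that the initial choice of state is irrelevant.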

The Markov chain tree theorem considers spanning trees for the states of the Markov chain, defined to be trees, directed toward a designated root, in which all directed edges are valid transitions of the given Markov chain. If a transition from state [math]\displaystyle{ i }[/math] to state [math]\displaystyle{ j }[/math] has transition probability [math]\displaystyle{ p_{i,j} }[/math], then a tree [math]\displaystyle{ T }[/math] with edge set [math]\displaystyle{ E(T) }[/math] is defined to have weight equal to the product of its transition probabilities:
[math]\displaystyle{ w(T)=\prod_{(i,j)\in E(T)} p_{i,j}. }[/math]
Let [math]\displaystyle{ \mathcal{T}_i }[/math] denote the set of all spanning trees having state [math]\displaystyle{ i }[/math] at their root. Then, according to the Markov chain tree theorem, the stationary probability [math]\displaystyle{ \pi_i }[/math] for state [math]\displaystyle{ i }[/math] is proportional to the sum of the weights of the trees rooted at [math]\displaystyle{ i }[/math]. That is,
[math]\displaystyle{ \pi_i=\frac{1}{Z}\sum_{T\in\mathcal{T}_i} w(T), }[/math]
where the normalizing constant [math]\displaystyle{ Z }[/math] is the sum of [math]\displaystyle{ w(T) }[/math] over all spanning trees.^{[1]}

## References

- ↑ "The combinatorics of hopping particles and positivity in Markov chains", *London Mathematical Society Newsletter* (500): 50–59, May 2022
- ↑ Hill, Terrell L. (April 1966), "Studies in irreversible thermodynamics IV: diagrammatic representation of steady state fluxes for unimolecular systems", *Journal of Theoretical Biology* **10** (3): 442–459, doi:10.1016/0022-5193(66)90137-8, PMID 5964691
- ↑ Leighton, Frank Thomson; Rivest, Ronald L. (1986), "Estimating a probability using finite memory", *IEEE Transactions on Information Theory* **32** (6): 733–742, doi:10.1109/TIT.1986.1057250

Original source: https://en.wikipedia.org/wiki/Markov chain tree theorem.