Moran process

A Moran process or Moran model is a simple stochastic process used in biology to describe finite populations. The process is named after Patrick Moran, who first proposed the model in 1958.[1] It can be used to model variety-increasing processes such as mutation as well as variety-reducing effects such as genetic drift and natural selection. The process can describe the probabilistic dynamics in a finite population of constant size N in which two alleles A and B are competing for dominance. The two alleles are considered to be true replicators (i.e. entities that make copies of themselves).

In each time step a random individual (of either type A or B) is chosen for reproduction and a random individual is chosen for death, ensuring that the population size remains constant. To model selection, one type has to have a higher fitness and is thus more likely to be chosen for reproduction. The same individual can be chosen for death and for reproduction in the same step.
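To make the dynamics concrete, a minimal simulation sketch of a single step in the neutral case (no fitness difference) is given below; the function name and parameters are illustrative choices for this example, not part of any standard library.

```python
import random

def moran_step(i, N):
    """One step of the neutral Moran process.

    i -- current number of A individuals (0 <= i <= N)
    N -- constant population size

    One individual is chosen uniformly at random for reproduction and one
    (possibly the same one) for death, so i changes by at most one.
    """
    reproduces_a = random.random() < i / N  # reproducing individual is of type A
    dies_a = random.random() < i / N        # dying individual is of type A
    return i + int(reproduces_a) - int(dies_a)

# Run the process until one type has taken over the whole population.
N, i = 20, 10
while 0 < i < N:
    i = moran_step(i, N)
print("A reached fixation" if i == N else "A went extinct")
```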

Neutral drift

Neutral drift is the idea that a neutral mutation can spread throughout a population, so that eventually the original allele is lost. A neutral mutation does not bring any fitness advantage or disadvantage to its bearer. The simple case of the Moran process can describe this phenomenon.

The Moran process is defined on the state space i = 0, ..., N, which counts the number of A individuals. Since the number of A individuals can change by at most one at each time step, transitions exist only between state i and states i − 1 and i + 1. Thus the transition matrix of the stochastic process is tri-diagonal in shape and the transition probabilities are

[math]\displaystyle{ \begin{align} P_{i,i-1} &= \frac{N-i}{N} \frac{i}{N}\\ P_{i,i} &= 1- P_{i,i-1} - P_{i,i+1}\\ P_{i,i+1} &= \frac{i}{N} \frac{N-i}{N}\\ \end{align} }[/math]

The entry [math]\displaystyle{ P_{i,j} }[/math] denotes the probability of going from state i to state j. To understand the formulas for the transition probabilities, recall from the definition of the process that in every step exactly one individual is chosen for reproduction and one is chosen for death. Once the A individuals have died out, they will never be reintroduced into the population, since the process does not model mutation (A cannot be reintroduced into the population once it has died out, and vice versa), and thus [math]\displaystyle{ P_{0,0}=1 }[/math]. For the same reason the number of A individuals will always stay at N once they have reached that number and taken over the population, and thus [math]\displaystyle{ P_{N,N}=1 }[/math]. The states 0 and N are called absorbing, while the states 1, ..., N − 1 are called transient. In the intermediate transition probabilities, the first factor is the probability of choosing, for reproduction, the individual whose abundance will increase by one, and the second factor is the probability of choosing the other type for death. If the same type is chosen for reproduction and for death, the abundance of neither type changes.
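As an illustrative sketch (assuming NumPy is available; the helper name is made up for this example), the tridiagonal matrix for the neutral case can be constructed and checked as follows.

```python
import numpy as np

def neutral_transition_matrix(N):
    """Transition matrix of the neutral Moran process on the states 0, ..., N."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        down = (N - i) / N * i / N   # P_{i,i-1}: a B reproduces and an A dies
        up = i / N * (N - i) / N     # P_{i,i+1}: an A reproduces and a B dies
        if i > 0:
            P[i, i - 1] = down
        if i < N:
            P[i, i + 1] = up
        P[i, i] = 1 - down - up      # covers the absorbing states 0 and N as well
    return P

P = neutral_transition_matrix(10)
assert np.allclose(P.sum(axis=1), 1.0)       # each row is a probability distribution
assert P[0, 0] == 1.0 and P[10, 10] == 1.0   # states 0 and N are absorbing
```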

Eventually the population will reach one of the absorbing states and then stay there forever. In the transient states, random fluctuations will occur, but eventually the population of A will either go extinct or reach fixation. This is one of the most important differences from deterministic processes, which cannot model random events. The expected value and the variance of the number of A individuals X(t) at timepoint t can be computed when an initial state X(0) = i is given:

[math]\displaystyle{ \begin{align} \operatorname{E}[X(t)\mid X(0) = i] &= i \\ \operatorname{Var}(X(t)\mid X(0) = i) &= \tfrac{2i}{N} \left(1-\tfrac{i}{N} \right ) \frac{1- \left(1-\frac{2}{N^2} \right )^t}{\frac{2}{N^2}} \end{align} }[/math]
The equations above can be derived as follows.

For the expected value the calculation runs as follows. Writing p = i/N,

[math]\displaystyle{ \begin{align} \operatorname{E}[X(t) \mid X(t-1) = i] &= (i-1)P_{i,i-1} + iP_{i,i} + (i+1)P_{i,i+1}\\ &= 2ip(1-p) + i(p^2 + (1-p)^2) \\ &= i. \end{align} }[/math]

Writing [math]\displaystyle{ Y = X(t) }[/math] and [math]\displaystyle{ Z = X(t-1) }[/math], and applying the law of total expectation, [math]\displaystyle{ \operatorname{E}[Y] = \operatorname{E}[\operatorname{E}[Y\mid Z]] = \operatorname{E}[Z]. }[/math] Applying the argument repeatedly gives [math]\displaystyle{ \operatorname{E}[X(t)] = \operatorname{E}[X(0)], }[/math] or [math]\displaystyle{ \operatorname{E}[X(t)\mid X(0) = i] = i. }[/math]

For the variance the calculation runs as follows. Writing [math]\displaystyle{ V_t = \operatorname{Var}(X(t)\mid X(0) = i), }[/math] we have

[math]\displaystyle{ \begin{align} V_1 &= E \left[X(1)^2\mid X(0) = i \right] - \operatorname{E}[X(1)\mid X(0)=i]^2 \\ &= (i-1)^2p(1-p) + i^2 \left (p^2+(1-p)^2 \right ) + (i+1)^2p(1-p) - i^2 \\ &= 2p(1-p) \end{align} }[/math]

For all t, [math]\displaystyle{ (X(t)\mid X(t-1) = i) }[/math] and [math]\displaystyle{ (X(1)\mid X(0) = i) }[/math] are identically distributed, so their variances are equal. Writing as before [math]\displaystyle{ Y = X(t) }[/math] and [math]\displaystyle{ Z = X(t-1) }[/math], and applying the law of total variance,

[math]\displaystyle{ \begin{align} \operatorname{Var}(Y) &= \operatorname{E}[\operatorname{Var}(Y\mid Z)] + \operatorname{Var}(\operatorname{E}[Y\mid Z]) \\ &= E \left [\left (\frac{2Z}{N} \right) \left(1-\frac Z N \right ) \right ] + \operatorname{Var}(Z)\\ &= \left(\frac{2\operatorname{E}[Z]}{N} \right ) \left (1-\frac{\operatorname{E}[Z]}{N} \right) + \left(1-\frac{2}{N^2}\right)\operatorname{Var}(Z). \end{align} }[/math]

If [math]\displaystyle{ X(0) = i }[/math], we obtain

[math]\displaystyle{ V_t = V_1 + \left (1-\frac{2}{N^2} \right)V_{t-1}. }[/math]

Rewriting this equation as

[math]\displaystyle{ V_t - \frac{V_1}{\frac{2}{N^2}} = \left (1-\frac{2}{N^2} \right )\left(V_{t-1}-\frac{V_1}{\frac{2}{N^2}}\right) = \left (1-\frac{2}{N^2} \right)^{t-1} \left(V_1-\frac{V_1}{\frac{2}{N^2}}\right) }[/math]

yields

[math]\displaystyle{ V_t = V_1 \frac{1-\left (1-\frac{2}{N^2} \right)^t}{\frac{2}{N^2}} }[/math]

as desired.
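As a numerical sanity check (an illustrative sketch, not part of the derivation above), the closed forms for the expectation and the variance can be compared with values obtained by propagating the distribution of X(t) through the transition matrix; NumPy is assumed to be available.

```python
import numpy as np

N, i0, t = 12, 4, 30

# Transition matrix of the neutral Moran process on states 0, ..., N.
P = np.zeros((N + 1, N + 1))
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = i * (N - i) / N**2
    P[i, i] = 1 - 2 * i * (N - i) / N**2
P[0, 0] = P[N, N] = 1.0

# Propagate the distribution of X(t) for t steps starting from X(0) = i0.
dist = np.zeros(N + 1)
dist[i0] = 1.0
for _ in range(t):
    dist = dist @ P
states = np.arange(N + 1)
mean = dist @ states
var = dist @ states**2 - mean**2

# Closed forms quoted above.
p = i0 / N
var_closed = 2 * p * (1 - p) * (1 - (1 - 2 / N**2)**t) / (2 / N**2)
assert np.isclose(mean, i0)          # E[X(t) | X(0) = i0] = i0
assert np.isclose(var, var_closed)   # Var(X(t) | X(0) = i0)
```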


The probability that A reaches fixation is called the fixation probability. For the simple Moran process this probability is [math]\displaystyle{ x_i = \frac{i}{N} }[/math].

Since all individuals have the same fitness, each individual has the same chance of becoming the ancestor of the whole population; this probability is 1/N, and summing over the i individuals of type A gives a fixation probability of i/N. The mean time to absorption starting in state i is given by

[math]\displaystyle{ k_i = N \left[ \sum_{j=1}^{i} \frac{N-i}{N-j} + \sum_{j=i+1}^{N-1} \frac{i}{j} \right] }[/math]
The formula above can be derived as follows.

The mean time spent in state j when starting in state i is given by

[math]\displaystyle{ k_i^j = \delta_{ij}+P_{i,i-1}k_{i-1}^j + P_{i,i}k_{i}^j + P_{i,i+1}k_{i+1}^j }[/math]

Here δij denotes the Kronecker delta. This recursive equation can be solved by introducing a new variable [math]\displaystyle{ q_i }[/math] so that [math]\displaystyle{ P_{i,i-1} = P_{i,i+1} = q_i }[/math] and thus [math]\displaystyle{ P_{i,i} = 1-2 q_i }[/math], and rewriting it as

[math]\displaystyle{ k_{i+1}^j = 2 k_{i}^j- k_{i-1}^j -\frac{\delta_{ij}}{q_i} }[/math]

The variable [math]\displaystyle{ y_i^{j} = k_{i}^j- k_{i-1}^j }[/math] is used (note that [math]\displaystyle{ k_0^j = 0 }[/math], since no time is spent in any transient state when starting in the absorbing state 0) and the equation becomes

[math]\displaystyle{ \begin{align} y_{i+1}^{j} &= y_i^{j} -\frac{\delta_{ij}}{q_i} \\ \\ \sum_{i=1}^m y_i^{j} &= (k_{1}^j- k_{0}^j) + (k_{2}^j- k_{1}^j) + \cdots + (k_{m-1}^j- k_{m-2}^j) + (k_{m}^j- k_{m-1}^j) \\ &= k_{m}^j - k_{0}^j \\ \sum_{i=1}^m y_i^{j} &= k_{m}^j \\ \\ y_1^{j} &= (k_{1}^j- k_{0}^j) = k_{1}^j \\ y_2^{j} &= y_1^{j} -\frac{\delta_{1j}}{q_1} = k_1^{j} -\frac{\delta_{1j}}{ q_1 } \\ y_3^{j} &= k_1^{j} -\frac{\delta_{1j}}{q_1} -\frac{\delta_{2j}}{ q_2 } \\ & \vdots \\ y_i^{j} &= k_1^{j} -\sum_{r=1}^{i-1} \frac{\delta_{rj}}{q_r} = \begin{cases} k_1^j & j \geq i\\ k_1^j - \frac{1}{q_j} & j \lt i \end{cases} \\ \\ k_i^j &= \sum_{m=1}^i y_m^{j} = \begin{cases} i \cdot k_1^j & j \geq i\\ i \cdot k_1^j - \frac{i-j}{q_j} & j \leq i \end{cases} \end{align} }[/math]

Knowing that [math]\displaystyle{ k_N^j = 0 }[/math] (state N is absorbing as well) and

[math]\displaystyle{ q_j = P_{j,j+1}=\frac{j}{N} \frac{N-j}{N} }[/math]

we can calculate [math]\displaystyle{ k_1^j }[/math]:

[math]\displaystyle{ \begin{align} k_N^j = \sum_{i=1}^N y_i^{j} = N \cdot k_1^j &- \frac{N-j}{q_j} = 0 \\ k_1^j &= \frac{N}{j} \end{align} }[/math]

Therefore

[math]\displaystyle{ k_i^j = \begin{cases} \frac{i}{j} \cdot k_j^j & j \geq i\\ \frac{N - i}{N-j} \cdot k_j^j & j \leq i\end{cases} }[/math]

with [math]\displaystyle{ k_j^j = N }[/math]. Now [math]\displaystyle{ k_i }[/math], the expected total time until absorption when starting from state i, can be calculated:

[math]\displaystyle{ \begin{align} k_i = \sum_{j=1}^{N-1}k_i^j &= \sum_{j=1}^{i}k_i^j + \sum_{j=i+1}^{N-1}k_i^j \\ &= \sum_{j=1}^{i}N \frac{N-i}{N-j} + \sum_{j=i+1}^{N-1}N \frac{i}{j} \end{align} }[/math]

For large N the approximation

[math]\displaystyle{ k_i \approx -N^2 \left[ (1-x_i) \ln(1-x_i) + x_i \ln(x_i) \right] }[/math]

holds, where [math]\displaystyle{ x_i = i/N }[/math].
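As an illustrative cross-check (a sketch assuming NumPy; the function names are made up for this example), the closed form for k_i can be compared with the mean absorption time obtained by solving the linear system (I − Q)k = 1 on the transient states, where Q is the transient block of the transition matrix.

```python
import numpy as np

def absorption_time_formula(i, N):
    """k_i from the closed form quoted above."""
    return N * (sum((N - i) / (N - j) for j in range(1, i + 1))
                + sum(i / j for j in range(i + 1, N)))

def absorption_time_linear_system(i, N):
    """k_i from solving (I - Q) k = 1 on the transient states 1, ..., N - 1."""
    Q = np.zeros((N - 1, N - 1))
    for a in range(1, N):
        q = a * (N - a) / N**2        # q_a = P_{a,a-1} = P_{a,a+1}
        Q[a - 1, a - 1] = 1 - 2 * q
        if a > 1:
            Q[a - 1, a - 2] = q
        if a < N - 1:
            Q[a - 1, a] = q
    k = np.linalg.solve(np.eye(N - 1) - Q, np.ones(N - 1))
    return k[i - 1]

N, i = 50, 10
exact = absorption_time_formula(i, N)
assert np.isclose(exact, absorption_time_linear_system(i, N))

# Large-N approximation quoted above, with x_i = i / N.
x = i / N
print(exact, -N**2 * ((1 - x) * np.log(1 - x) + x * np.log(x)))
```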

Selection

If one allele has a fitness advantage over the other allele, it will be more likely to be chosen for reproduction. This can be incorporated into the model by giving individuals with allele A fitness [math]\displaystyle{ f_i }[/math] and individuals with allele B fitness [math]\displaystyle{ g_i }[/math], where i is the number of individuals of type A; this describes a general birth-death process. The transition matrix of the stochastic process is tri-diagonal in shape and the transition probabilities are

[math]\displaystyle{ \begin{align} P_{i,i-1} &= \frac{g_i (N-i) }{f_i \cdot i + g_i (N-i)} \cdot \frac{i}{N}\\ P_{i,i} &= 1- P_{i,i-1} - P_{i,i+1}\\ P_{i,i+1} &= \frac{f_i \cdot i}{f_i \cdot i + g_i (N-i)} \cdot \frac{N-i}{N}\\ \end{align} }[/math]

The entry [math]\displaystyle{ P_{i,j} }[/math] denotes the probability of going from state i to state j. The difference from the neutral case is that fitness enters only the first factor of these expressions, the one concerned with reproduction: the probability that an individual of type A is chosen for reproduction is no longer i / N but

[math]\displaystyle{ \frac{f_i \cdot i}{f_i \cdot i + g_i (N-i)}. }[/math]
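A minimal sketch of a single step of the process with selection, assuming fitness functions f and g as above; the function name and the example parameters are illustrative only.

```python
import random

def moran_step_selection(i, N, f, g):
    """One step of the Moran process with selection.

    f(i), g(i) -- fitness of A and B individuals when there are i copies of A.
    The reproducing individual is chosen with probability proportional to its
    fitness; the dying individual is chosen uniformly at random.
    """
    total_fitness = f(i) * i + g(i) * (N - i)
    reproduces_a = random.random() < f(i) * i / total_fitness
    dies_a = random.random() < i / N
    return i + int(reproduces_a) - int(dies_a)

# Example run with a constant relative fitness advantage for A.
N, i = 50, 1
while 0 < i < N:
    i = moran_step_selection(i, N, lambda j: 1.2, lambda j: 1.0)
print("A reached fixation" if i == N else "A went extinct")
```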

Also in this case, the fixation probability when starting in state i can be defined by a recurrence. Writing [math]\displaystyle{ \alpha_i = P_{i,i+1} }[/math] and [math]\displaystyle{ \beta_i = P_{i,i-1} }[/math],

[math]\displaystyle{ x_i = \begin{cases} 0 & i=0\\ \beta_i x_{i-1}+(1-\alpha_i-\beta_i)x_i+\alpha_ix_{i+1} & 1 \leq i \leq N-1\\ 1 & i =N \end{cases} }[/math]

And the closed form is given by

[math]\displaystyle{ x_i = \frac{{\displaystyle 1 + \sum_{j=1}^{i-1}\prod_{k=1}^{j}\gamma_k}} {{\displaystyle 1 + \sum_{j=1}^{N-1}\prod_{k=1}^{j}\gamma_k}} \qquad \text{(1)} }[/math]

where [math]\displaystyle{ \gamma_i = P_{i,i-1} / P_{i,i+1} }[/math] by definition, which for the transition probabilities above is simply [math]\displaystyle{ g_i / f_i }[/math].
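Equation (1) can be evaluated numerically as in the following sketch (the function name and the NumPy dependency are assumptions of this example). In the neutral case all γ_k equal 1 and the formula reduces to i/N, which the final assertion checks.

```python
import numpy as np

def fixation_probability(i, N, f, g):
    """x_i from equation (1), with gamma_k = g(k) / f(k)."""
    gammas = np.array([g(k) / f(k) for k in range(1, N)])
    prods = np.cumprod(gammas)           # prod_{k=1}^{j} gamma_k for j = 1, ..., N-1
    numerator = 1 + prods[:i - 1].sum()  # 1 + sum_{j=1}^{i-1} prod_{k=1}^{j} gamma_k
    denominator = 1 + prods.sum()        # 1 + sum_{j=1}^{N-1} prod_{k=1}^{j} gamma_k
    return numerator / denominator

N = 30
x = fixation_probability(7, N, lambda k: 1.0, lambda k: 1.0)  # neutral case
assert np.isclose(x, 7 / N)
```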

Equation (1) can be derived as follows.

Also in this case, fixation probabilities can be computed, but the transition probabilities are not symmetric. The notation [math]\displaystyle{ P_{i,i+1}=\alpha_i, P_{i,i-1}=\beta_i, P_{i,i}=1-\alpha_i- \beta_i }[/math] and [math]\displaystyle{ \gamma_i = \beta_i / \alpha_i }[/math] is used. The fixation probability can be defined recursively and a new variable [math]\displaystyle{ y_i = x_i - x_{i-1} }[/math] is introduced.

[math]\displaystyle{ \begin{align} x_i &= \beta_i x_{i-1} + (1-\alpha_i - \beta_i)x_i + \alpha_i x_{i+1} \\ \beta_i (x_i - x_{i-1} ) &= \alpha_i (x_{i+1} - x_i ) \\ \gamma_i \cdot y_i &= y_{i+1} \end{align} }[/math]

Now two properties from the definition of the variable yi can be used to find a closed form solution for the fixation probabilities:

[math]\displaystyle{ \begin{align} \sum_{i=1}^{m} y_i &= x_m && 1\\ y_k &= x_1 \cdot \prod_{l=1}^{k-1}\gamma_l && 2\\ \Rightarrow \sum_{m=1}^{i}y_m &= x_1 + x_1 \sum_{j=1}^{i-1}\prod_{k=1}^{j}\gamma_k = x_i && 3 \end{align} }[/math]

Combining (3) and [math]\displaystyle{ x_N = 1 }[/math]:

[math]\displaystyle{ x_1 \left(1 + \sum_{j=1}^{N-1}\prod_{k=1}^{j}\gamma_k \right) = x_N = 1. }[/math]

which implies:

[math]\displaystyle{ x_1 = \frac{1}{1 + \sum_{j=1}^{N-1} \prod_{k=1}^{j}\gamma_k} }[/math]

This in turn gives us:

[math]\displaystyle{ x_i = \frac{{\displaystyle 1 + \sum_{j=1}^{i-1}\prod_{k=1}^{j}\gamma_k}}{{\displaystyle 1 + \sum_{j=1}^{N-1}\prod_{k=1}^{j}\gamma_k}} }[/math]

This general case where the fitness of A and B depends on the abundance of each type is studied in evolutionary game theory.

Less complex results are obtained if a constant relative fitness r is assumed: individuals of type A reproduce with a constant rate r and individuals with allele B reproduce with rate 1. If A has a fitness advantage over B, r will be larger than one; otherwise it will be smaller than one. The transition matrix of the stochastic process is again tri-diagonal in shape and the transition probabilities are

[math]\displaystyle{ \begin{align} P_{0,0}&=1\\ P_{i,i-1} &= \frac{N-i}{r \cdot i + N-i} \cdot \frac{i}{N}\\ P_{i,i} &= 1- P_{i,i-1} - P_{i,i+1}\\ P_{i,i+1} &= \frac{r \cdot i}{r \cdot i + N-i} \cdot \frac{N-i}{N}\\ P_{N,N}&=1. \end{align} }[/math]

In this case [math]\displaystyle{ \gamma_i = 1/r }[/math] is a constant factor for each composition of the population and thus the fixation probability from equation (1) simplifies to

[math]\displaystyle{ x_i = \frac{1-r^{-i}} { 1-r^{-N} } \quad \Rightarrow \quad x_1 = \rho = \frac{1-r^{-1}} {1-r^{-N}} \qquad \text{(2)} }[/math]

where ρ denotes the fixation probability of a single mutant A in a population of otherwise all B, a quantity that is often of particular interest.
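A short sketch (helper names chosen for this example) comparing equation (2) with the general formula (1) specialised to constant γ_k = 1/r:

```python
def rho(r, N):
    """Fixation probability of a single A mutant, equation (2)."""
    return (1 - 1 / r) / (1 - r**(-N))

def x_i_constant_r(i, r, N):
    """Equation (1) with gamma_k = 1/r, written out as geometric sums."""
    numerator = sum(r**(-j) for j in range(i))    # 1 + sum_{j=1}^{i-1} r^{-j}
    denominator = sum(r**(-j) for j in range(N))  # 1 + sum_{j=1}^{N-1} r^{-j}
    return numerator / denominator

r, N = 1.1, 100
assert abs(rho(r, N) - x_i_constant_r(1, r, N)) < 1e-9
print(rho(r, N), 1 - 1 / r)  # for r > 1 and large N, rho approaches 1 - 1/r
```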

Also in the case of selection, the expected value and the variance of the number of A individuals may be computed

[math]\displaystyle{ \begin{align} \operatorname{E}[ X(t) \mid X(t-1) = i ] &= p s \dfrac{1-p}{p s + 1} + i \\ \operatorname{Var}( X(t+1) \mid X(t)=i) &=p(1-p)\dfrac{ (s+1) + (p s + 1)^2 }{(p s +1)^2} \end{align} }[/math]

where p = i/N, and r = 1 + s.

These formulas can be derived as follows.

For the expected value the calculation runs as follows, where [math]\displaystyle{ \Delta(t) = X(t) - X(t-1) }[/math] denotes the change in one time step:

[math]\displaystyle{ \begin{align} \operatorname{E}[ \Delta(1) \mid X(0) = i ] &= (i-1-i) \cdot P_{i,i-1} + (i-i) \cdot P_{i,i} + (i+1-i) \cdot P_{i,i+1} \\ &= -\frac{N-i}{r i + N -i} \frac{i}{N} + \frac{ri}{r i + N -i} \frac{N-i}{N} \\ &= -\frac{(N-i)i}{(r i + N -i)N} + \frac{i(N-i)}{(r i + N -i)N} + \frac{si(N-i)}{(r i + N -i)N} \\ &= p s \dfrac{1-p}{p s + 1}\\ \operatorname{E}[ X(t) \mid X(t-1) = i ] &= p s \dfrac{1-p}{p s + 1}+i \end{align} }[/math]

For the variance the calculation runs as follows, using the variance of a single step

[math]\displaystyle{ \begin{align} \operatorname{Var}( X(t+1) \mid X(t)=i) &= \operatorname{Var}(X(t)\mid X(t)=i) + \operatorname{Var}(\Delta(t+1)\mid X(t)=i) \\ &= 0 + E\left [\Delta(t+1)^2\mid X(t)=i \right ] - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^2\\ &= (i-1-i)^2 \cdot P_{i,i-1} + (i-i)^2 \cdot P_{i,i} + (i+1-i)^2 \cdot P_{i,i+1} - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^2\\ &= P_{i,i-1} + P_{i,i+1} - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^2\\ &= \frac{(N-i)i}{(r i + N -i)N} + \frac{(N-i)i(1+s)}{(r i + N -i)N} - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^2\\ &= i (N-i)\frac{2+s}{(r i + N -i)N} - \operatorname{E}[\Delta(t+1)\mid X(t)=i]^2\\ &= i (N-i)\frac{2+s}{(r i + N -i)N} - \left (p s \dfrac{1-p}{ps + 1} \right )^2\\ &= p(1-p)\frac{(2+s)(ps + 1)}{(ps + 1)^2} - p(1-p) \frac{p s^2(1-p)}{(p s + 1)^2}\\ &= p(1-p)\dfrac{2+2ps + s + p^2 s^2}{(ps +1)^2} \end{align} }[/math]
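As a numerical check (an illustrative sketch assuming NumPy), the two closed forms can be compared with the mean and variance computed directly from the one-step transition probabilities for a constant relative fitness r:

```python
import numpy as np

N, i, r = 40, 10, 1.3
s, p = r - 1, i / N

# One-step transition probabilities for constant relative fitness r.
down = (N - i) / (r * i + N - i) * i / N       # P_{i,i-1}
up = r * i / (r * i + N - i) * (N - i) / N     # P_{i,i+1}
stay = 1 - down - up                           # P_{i,i}

# Mean and variance of X(t+1) given X(t) = i, computed directly.
mean_direct = (i - 1) * down + i * stay + (i + 1) * up
second_moment = (i - 1)**2 * down + i**2 * stay + (i + 1)**2 * up
var_direct = second_moment - mean_direct**2

# Closed forms quoted above.
mean_closed = p * s * (1 - p) / (p * s + 1) + i
var_closed = p * (1 - p) * ((s + 1) + (p * s + 1)**2) / (p * s + 1)**2
assert np.isclose(mean_direct, mean_closed)
assert np.isclose(var_direct, var_closed)
```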

Rate of evolution

In a population of all B individuals, a single mutant A will take over the whole population with the probability

[math]\displaystyle{ \rho = \frac{1-r^{-1}}{1-r^{-N}}. \qquad \text{(2)} }[/math]

If the mutation rate (from the B allele to the A allele) in the population is u, then the rate at which one member of the population mutates to A is given by N × u, and the rate at which the whole population goes from all B to all A is the rate at which a single mutant A arises times the probability that it will take over the population (the fixation probability):

[math]\displaystyle{ R = N \cdot u \cdot \rho = u \quad \text{if} \quad \rho = \frac{1}{N}. }[/math]

Thus if the mutation is neutral (i.e. the fixation probability is just 1/N), then the rate at which an allele arises and takes over a population is independent of the population size and is equal to the mutation rate. This important result is the basis of the neutral theory of evolution and suggests that the number of observed point mutations in the genomes of two different species would simply be given by the mutation rate multiplied by two times the time since divergence. The neutral theory of evolution thus provides a molecular clock, provided that its assumptions are fulfilled, which may not be the case in reality.
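A small numerical illustration of this result (the helper name is chosen for this sketch only): for a neutral mutation the substitution rate R equals u regardless of the population size, while an advantageous mutation substitutes at a higher rate.

```python
def rho(r, N):
    """Fixation probability of a single mutant with relative fitness r (equation (2))."""
    return 1 / N if r == 1 else (1 - 1 / r) / (1 - r**(-N))

N, u = 1000, 1e-8

print(N * u * rho(1.0, N))   # neutral: R = N * u * (1/N) = u = 1e-8
print(N * u * rho(1.01, N))  # advantageous: R is roughly ten times larger here
```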

See also

  • Weak Selection

References

  1. Moran, P. A. P. (1958). "Random processes in genetics". Mathematical Proceedings of the Cambridge Philosophical Society 54 (1): 60–71. doi:10.1017/S0305004100033193. 
