Darwin–Fowler method

In statistical mechanics, the Darwin–Fowler method is used for deriving the distribution functions with mean probability. It was developed by Charles Galton Darwin and Ralph H. Fowler in 1922–1923.[1][2]

Distribution functions are used in statistical physics to estimate the mean number of particles occupying an energy level (hence they are also called occupation numbers). These distributions are mostly derived as those occupation numbers for which the system under consideration is in its state of maximum probability. What one really requires, however, are the average numbers, and these can be obtained by the Darwin–Fowler method. Of course, for systems in the thermodynamic limit (large number of particles), as in statistical mechanics, the results are the same as those obtained by maximization.

Darwin–Fowler method

In most texts on statistical mechanics the statistical distribution functions [math]\displaystyle{ f }[/math] (in Maxwell–Boltzmann statistics, Bose–Einstein statistics and Fermi–Dirac statistics) are derived by determining those for which the system is in its state of maximum probability. But one really requires those with average or mean probability, although – of course – the results are usually the same for systems with a huge number of elements, as is the case in statistical mechanics. The method for deriving the distribution functions with mean probability was developed by C. G. Darwin and R. H. Fowler[2] and is therefore known as the Darwin–Fowler method. This method is the most reliable general procedure for deriving statistical distribution functions. Since the method employs a selector variable (a factor introduced for each element to permit a counting procedure), it is also known as the Darwin–Fowler method of selector variables.

Note that a distribution function is not the same as the probability – cf. Maxwell–Boltzmann distribution, Bose–Einstein distribution, Fermi–Dirac distribution. Also note that the distribution function [math]\displaystyle{ f_i }[/math], which is a measure of the fraction of those states actually occupied by elements, is given by [math]\displaystyle{ f_i = n_i/g_i }[/math] or [math]\displaystyle{ n_i= f_ig_i }[/math], where [math]\displaystyle{ g_i }[/math] is the degeneracy of energy level [math]\displaystyle{ i }[/math] of energy [math]\displaystyle{ \varepsilon_i }[/math] and [math]\displaystyle{ n_i }[/math] is the number of elements occupying this level (e.g. in Fermi–Dirac statistics 0 or 1). The total energy [math]\displaystyle{ E }[/math] and the total number of elements [math]\displaystyle{ N }[/math] are then given by [math]\displaystyle{ E = \sum_i n_i\varepsilon_i }[/math] and [math]\displaystyle{ N = \sum_i n_i }[/math].

The Darwin–Fowler method has been treated in the texts of E. Schrödinger,[3] Fowler[4] and Fowler and E. A. Guggenheim,[5] of K. Huang,[6] and of H. J. W. Müller–Kirsten.[7] The method is also discussed and used for the derivation of Bose–Einstein condensation in the book of R. B. Dingle.[8]

Classical statistics

For [math]\displaystyle{ N=\sum_in_i }[/math] independent elements, with [math]\displaystyle{ n_i }[/math] elements on the level with energy [math]\displaystyle{ \varepsilon_i }[/math] and total energy [math]\displaystyle{ E=\sum_in_i\varepsilon_i }[/math], in a canonical system in a heat bath at temperature [math]\displaystyle{ T }[/math] we set

[math]\displaystyle{ Z = \sum_\text{arrangements}e^{-E/kT} = \sum_\text{arrangements}\prod_iz_i^{n_i}, \;\;\; z_i = e^{-\varepsilon_i/kT}. }[/math]

The average over all arrangements is the mean occupation number

[math]\displaystyle{ (n_j)_\text{av} = \frac{\sum_\text{arrangements}n_j\prod_iz_i^{n_i}}{Z} = z_j\frac{\partial}{\partial z_j}\ln Z. }[/math]
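
As a purely illustrative check of this identity, the following Python sketch enumerates all arrangements of [math]\displaystyle{ N = 3 }[/math] distinguishable elements on two levels (so that an "arrangement" is simply a choice of level for each element) and verifies that the arrangement average of [math]\displaystyle{ n_1 }[/math] coincides with [math]\displaystyle{ z_1\,\partial\ln Z/\partial z_1 }[/math]; the number of elements and of levels are arbitrary choices.

# Brute-force check of (n_j)_av = z_j d(ln Z)/dz_j for N = 3 distinguishable
# elements on two levels; the Boltzmann factors z1, z2 are kept symbolic.
import itertools
import sympy as sp

z1, z2 = sp.symbols('z1 z2', positive=True)
N = 3
arrangements = list(itertools.product((0, 1), repeat=N))  # level chosen by each element

# Z as a sum over all arrangements, each contributing z1^n1 * z2^n2.
Z = sum(z1**arr.count(0) * z2**arr.count(1) for arr in arrangements)

# Average of n1 over all arrangements, computed directly ...
n1_direct = sum(arr.count(0) * z1**arr.count(0) * z2**arr.count(1)
                for arr in arrangements) / Z

# ... and via the logarithmic derivative of Z.
n1_logderiv = z1 * sp.diff(sp.log(Z), z1)

print(sp.simplify(n1_direct - n1_logderiv))   # 0, i.e. the two expressions agree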

Insert a selector variable [math]\displaystyle{ \omega }[/math] by setting

[math]\displaystyle{ Z_\omega = \sum \prod_i(\omega z_i)^{n_i}. }[/math]

In classical statistics the [math]\displaystyle{ N }[/math] elements are (a) distinguishable and can be arranged in packets of [math]\displaystyle{ n_i }[/math] elements on level [math]\displaystyle{ \varepsilon_i }[/math], the number of such arrangements being

[math]\displaystyle{ \frac{N!}{\prod_in_i!}, }[/math]

so that in this case

[math]\displaystyle{ Z_\omega = N!\sum_{n_i}\prod_i\frac{(\omega z_i)^{n_i}}{n_i!}. }[/math]

Allowing for (b) the degeneracy [math]\displaystyle{ g_i }[/math] of level [math]\displaystyle{ \varepsilon_i }[/math], this expression becomes

[math]\displaystyle{ Z_\omega = N!\prod_{i=1}^{\infty}\left(\sum_{n_i=0,1,2,\ldots}\frac{(\omega z_i)^{n_i}}{n_i!}\right)^{g_i} = N!e^{\omega\sum_ig_iz_i}. }[/math]

The selector variable [math]\displaystyle{ \omega }[/math] allows one to pick out the coefficient of [math]\displaystyle{ \omega^N }[/math] in [math]\displaystyle{ Z_\omega }[/math], which is [math]\displaystyle{ Z }[/math]. Thus

[math]\displaystyle{ Z = \left(\sum_ig_iz_i\right)^N, }[/math]

and hence

[math]\displaystyle{ (n_j)_\text{av} = z_j\frac{\partial}{\partial z_j}\ln Z = N\frac{g_je^{-\varepsilon_j/kT}}{\sum_ig_ie^{-\varepsilon_i/kT}}. }[/math]

This result, which agrees with the most probable value obtained by maximization, involves no approximation whatsoever and is therefore exact; it demonstrates the power of the Darwin–Fowler method.
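
Both formulas can be verified symbolically. The following Python sketch, with arbitrarily chosen degeneracies, Boltzmann factors and particle number (illustrative values, not taken from the text), extracts the coefficient of [math]\displaystyle{ \omega^N }[/math] from [math]\displaystyle{ N!e^{\omega\sum_ig_iz_i} }[/math] and confirms both [math]\displaystyle{ Z=\left(\sum_ig_iz_i\right)^N }[/math] and the mean occupation numbers.

# Symbolic check of the classical (Maxwell–Boltzmann) result: the coefficient
# of w^N in Z_w = N! exp(w * sum_i g_i z_i) equals (sum_i g_i z_i)^N, and the
# mean occupation numbers follow from the logarithmic derivative.
# Degeneracies, number of levels and N are arbitrary illustrative choices.
import sympy as sp

w = sp.symbols('w')
z = sp.symbols('z1:4', positive=True)          # Boltzmann factors z_1, z_2, z_3
g = (1, 2, 2)                                  # illustrative degeneracies
N = 4

S = sum(gi * zi for gi, zi in zip(g, z))       # sum_i g_i z_i

Zw = sp.factorial(N) * sp.exp(w * S)
Z = sp.series(Zw, w, 0, N + 1).removeO().coeff(w, N)
print(sp.simplify(Z - S**N))                   # 0, i.e. Z = (sum_i g_i z_i)^N

for j, zj in enumerate(z):
    n_av = zj * sp.diff(sp.log(Z), zj)         # mean occupation of level j
    print(sp.simplify(n_av - N * g[j] * zj / S))   # 0 for every level j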

Quantum statistics

We have as above

[math]\displaystyle{ Z_{\omega}=\sum\prod (\omega z_i)^{n_i}, \;\; z_i=e^{-\varepsilon_i/kT}, }[/math]

where [math]\displaystyle{ n_i }[/math] is the number of elements in energy level [math]\displaystyle{ \varepsilon_i }[/math]. Since in quantum statistics the elements are indistinguishable, no preliminary calculation of the number of ways of dividing the elements into packets [math]\displaystyle{ n_1, n_2, n_3, \ldots }[/math] is required. Therefore the sum [math]\displaystyle{ \sum }[/math] refers only to the sum over the possible values of [math]\displaystyle{ n_i }[/math].

In the case of Fermi–Dirac statistics we have

[math]\displaystyle{ n_i=0 }[/math] or [math]\displaystyle{ n_i=1 }[/math]

per state. There are [math]\displaystyle{ g_i }[/math] states for energy level [math]\displaystyle{ \varepsilon_i }[/math]. Hence we have

[math]\displaystyle{ Z_\omega=(1+\omega z_1)^{g_1}(1+\omega z_2)^{g_2}\cdots=\prod(1+\omega z_i)^{g_i}. }[/math]

In the case of Bose–Einstein statistics we have

[math]\displaystyle{ n_i=0,1,2,3, \ldots \infty. }[/math]

By the same procedure as before we obtain in the present case

[math]\displaystyle{ Z_{\omega}=(1+\omega z_1+(\omega z_1)^2 + (\omega z_1)^3 + \cdots)^{g_1}(1+\omega z_2 + (\omega z_2)^2 + \cdots)^{g_2} \cdots. }[/math]

But

[math]\displaystyle{ 1 + \omega z_1 + (\omega z_1)^2 + \cdots = \frac{1}{(1 - \omega z_1)}. }[/math]

Therefore

[math]\displaystyle{ Z_\omega=\prod_i(1-\omega z_i)^{-g_i}. }[/math]

Summarizing both cases and recalling the definition of [math]\displaystyle{ Z }[/math], we have that [math]\displaystyle{ Z }[/math] is the coefficient of [math]\displaystyle{ \omega^N }[/math] in

[math]\displaystyle{ Z_\omega=\prod_i(1\pm \omega z_i)^{\pm g_i}, }[/math]

where the upper signs apply to Fermi–Dirac statistics, and the lower signs to Bose–Einstein statistics.
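
For a small system the coefficient of [math]\displaystyle{ \omega^N }[/math] in this product can be compared directly with a brute-force sum over occupation numbers. The following Python sketch does this for both cases; the two levels, their degeneracies and [math]\displaystyle{ N }[/math] are arbitrary illustrative choices.

# Check that the coefficient of w^N in Z_w = prod_i (1 +- w z_i)^(+-g_i)
# equals the direct sum over occupation numbers, for a small system
# (two levels; degeneracies and N chosen arbitrarily for illustration).
import itertools
import sympy as sp

w = sp.symbols('w')
z1, z2 = sp.symbols('z1 z2', positive=True)    # Boltzmann factors
g = (2, 3)                                     # illustrative degeneracies
N = 3

# One entry per single-particle state, carrying its level's Boltzmann factor.
states = [z1] * g[0] + [z2] * g[1]

def coeff_wN(Zw):
    # Coefficient of w**N in the generating function Z_w.
    return sp.series(Zw, w, 0, N + 1).removeO().coeff(w, N)

def brute_force(max_occ):
    # Direct sum over per-state occupation numbers that add up to N.
    return sum(sp.Mul(*[zi**n for zi, n in zip(states, occ)])
               for occ in itertools.product(range(max_occ + 1), repeat=len(states))
               if sum(occ) == N)

Z_fd = coeff_wN((1 + w*z1)**g[0] * (1 + w*z2)**g[1])      # Fermi–Dirac (upper signs)
Z_be = coeff_wN((1 - w*z1)**-g[0] * (1 - w*z2)**-g[1])    # Bose–Einstein (lower signs)
print(sp.simplify(Z_fd - brute_force(1)))                 # 0: occupations 0 or 1
print(sp.simplify(Z_be - brute_force(N)))                 # 0: occupations 0, 1, 2, ...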

Next we have to evaluate the coefficient of [math]\displaystyle{ \omega^N }[/math] in [math]\displaystyle{ Z_\omega. }[/math] In the case of a function [math]\displaystyle{ \phi(\omega) }[/math] which can be expanded as

[math]\displaystyle{ \phi(\omega) = a_0 + a_1\omega + a_2\omega^2 + \cdots, }[/math]

the coefficient of [math]\displaystyle{ \omega^N }[/math] is, with the help of the residue theorem of Cauchy,

[math]\displaystyle{ a_N = \frac{1}{2\pi i}\oint \frac{\phi(\omega)d\omega}{\omega^{N+1}}. }[/math]
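
As a purely numerical illustration of this coefficient formula (independent of the physics), the contour integral over the unit circle can be approximated by uniform sampling; for the illustrative choice [math]\displaystyle{ \phi(\omega)=e^{\omega} }[/math] the exact coefficient is [math]\displaystyle{ 1/N! }[/math].

# Numerical check of a_N = (1/2 pi i) \oint phi(w) dw / w^(N+1), evaluated by
# sampling the unit circle uniformly; phi(w) = exp(w) is an illustrative
# choice, for which the exact coefficient of w^N is 1/N!.
import cmath
import math

def coefficient(phi, N, samples=2048):
    # With w_k = exp(2 pi i k / M) the integral is approximated by
    # (1/M) * sum_k phi(w_k) * w_k^(-N).
    total = 0.0
    for k in range(samples):
        w = cmath.exp(2j * cmath.pi * k / samples)
        total += phi(w) / w**N
    return total / samples

N = 7
print(coefficient(cmath.exp, N).real, 1 / math.factorial(N))   # both approx. 1.98413e-4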

We note that similarly [math]\displaystyle{ Z }[/math], being the coefficient of [math]\displaystyle{ \omega^N }[/math] in [math]\displaystyle{ Z_\omega }[/math], can be obtained as

[math]\displaystyle{ Z=\frac{1}{2\pi i}\oint\frac{Z_{\omega}}{\omega^{N+1}}d\omega\equiv \frac{1}{2\pi i}\int e^{f(\omega)}d\omega, }[/math]

where

[math]\displaystyle{ f(\omega)=\pm\sum_ig_i\ln (1\pm \omega z_i)-(N+1)\ln\omega. }[/math]

Differentiating one obtains

[math]\displaystyle{ f'(\omega) = \frac{1}{\omega}\left[\sum_i\frac{g_i}{(\omega z_i)^{-1}\pm 1}-(N+1)\right], }[/math]

and

[math]\displaystyle{ f''(\omega) = \frac{N+1}{\omega^2}\mp \frac{1}{\omega^2}\sum_i\frac{g_i}{[(\omega z_i)^{-1}\pm 1]^2}. }[/math]

One now evaluates the first and second derivatives of [math]\displaystyle{ f(\omega) }[/math] at the stationary point [math]\displaystyle{ \omega_0 }[/math] at which [math]\displaystyle{ f'(\omega_0)=0 }[/math]. This method of evaluating [math]\displaystyle{ Z }[/math] around the saddle point [math]\displaystyle{ \omega_0 }[/math] is known as the method of steepest descent. One then obtains

[math]\displaystyle{ Z = \frac{e^{f(\omega_0)}}{\sqrt{2\pi f''(\omega_0)}}. }[/math]
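
As an aside, the accuracy of this saddle-point formula can be gauged on a simple generating function chosen purely for illustration: for [math]\displaystyle{ Z_\omega=e^{\lambda\omega} }[/math] one has [math]\displaystyle{ f(\omega)=\lambda\omega-(N+1)\ln\omega }[/math], the exact coefficient of [math]\displaystyle{ \omega^N }[/math] is [math]\displaystyle{ \lambda^N/N! }[/math], and the steepest-descent estimate is already accurate to a fraction of a percent for moderate [math]\displaystyle{ N }[/math].

# Accuracy check of Z ~ exp(f(w0)) / sqrt(2 pi f''(w0)) for the illustrative
# generating function Z_w = exp(lam * w), whose exact coefficient of w^N
# is lam^N / N!. Here f(w) = lam*w - (N+1)*ln(w).
import math

lam, N = 2.5, 20                        # arbitrary illustrative values
w0 = (N + 1) / lam                      # saddle point, from f'(w0) = 0
f0 = lam * w0 - (N + 1) * math.log(w0)  # f(w0)
f2 = (N + 1) / w0**2                    # f''(w0)

Z_saddle = math.exp(f0) / math.sqrt(2 * math.pi * f2)
Z_exact = lam**N / math.factorial(N)
print(Z_saddle / Z_exact)               # approx. 1.004: accurate to about 0.4 percent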

We have [math]\displaystyle{ f'(\omega_0) = 0 }[/math] and hence

[math]\displaystyle{ (N+1) = \sum_i\frac{g_i}{(\omega_0z_i)^{-1}\pm 1} }[/math]

(the +1 being negligible since [math]\displaystyle{ N }[/math] is large). We shall see in a moment that this last relation is simply the formula

[math]\displaystyle{ N = \sum_in_i. }[/math]

We obtain the mean occupation number [math]\displaystyle{ (n_j)_\text{av} }[/math] by evaluating

[math]\displaystyle{ (n_j)_\text{av} = z_j\frac{d}{dz_j}\ln Z = \frac{g_j}{(\omega_0z_j)^{-1}\pm 1} = \frac{g_j}{e^{(\varepsilon_j-\mu)/kT} \pm 1}, \quad e^{\mu/kT}= \omega_0. }[/math]

This expression gives the mean number of elements, out of the total of [math]\displaystyle{ N }[/math] in the volume [math]\displaystyle{ V }[/math], which occupy at temperature [math]\displaystyle{ T }[/math] the one-particle level [math]\displaystyle{ \varepsilon_j }[/math] with degeneracy [math]\displaystyle{ g_j }[/math] (see e.g. a priori probability). For the relation to be reliable one should check that the higher-order contributions are initially decreasing in magnitude, so that the expansion around the saddle point does indeed yield an asymptotic expansion.
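
A minimal numerical sketch of the whole procedure, with arbitrarily chosen energy levels, degeneracies, temperature and particle number, and a standard root finder standing in for the analytic determination of the saddle point: the condition [math]\displaystyle{ f'(\omega_0)=0 }[/math], i.e. [math]\displaystyle{ N=\sum_ig_i/[(\omega_0z_i)^{-1}\pm 1] }[/math], fixes [math]\displaystyle{ \omega_0 }[/math] (and hence the chemical potential), after which the mean occupation numbers can be read off.

# Numerical sketch of the final formulas: fix w0 (i.e. the chemical potential
# mu = kT ln w0) from N = sum_i g_i / [(w0 z_i)^(-1) +- 1], then read off the
# mean occupation numbers. Energies, degeneracies, N and the statistics chosen
# below are arbitrary illustrative values.
import math
from scipy.optimize import brentq

eps = [0.0, 1.0, 2.0, 3.0]            # single-particle energies in units of kT
g = [1, 2, 2, 1]                      # degeneracies
N = 3                                 # total number of elements
sign = +1                             # +1: Fermi–Dirac, -1: Bose–Einstein

z = [math.exp(-e) for e in eps]       # z_i = exp(-eps_i / kT), with kT = 1

def total_occupation(w0):
    return sum(gi / (1.0 / (w0 * zi) + sign) for gi, zi in zip(g, z))

# Bracket for the root: any w0 > 0 works for Fermi–Dirac; Bose–Einstein needs w0 z_i < 1.
upper = 1e6 if sign > 0 else (1.0 - 1e-9) / max(z)
w0 = brentq(lambda w: total_occupation(w) - N, 1e-12, upper)

n_av = [gi / (1.0 / (w0 * zi) + sign) for gi, zi in zip(g, z)]
print(n_av, sum(n_av))                # level occupations; the sum reproduces N
print(math.log(w0))                   # mu / kT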

References

  1. "Darwin–Fowler method" (in en). https://www.encyclopediaofmath.org/index.php/Darwin-Fowler_method. 
  2. 2.0 2.1 Darwin, C. G.; Fowler, R. H. (1922). "On the partition of energy". Phil. Mag. 44: 450–479, 823–842. doi:10.1080/14786440908565189. 
  3. Schrödinger, E. (1952). Statistical Thermodynamics. Cambridge University Press. 
  4. Fowler, R. H. (1952). Statistical Mechanics. Cambridge University Press. 
  5. Fowler, R. H.; Guggenheim, E. A. (1960). Statistical Thermodynamics. Cambridge University Press. 
  6. Huang, K. (1963). Statistical Mechanics. Wiley. 
  7. Müller–Kirsten, H. J. W. (2013). Basics of Statistical Physics (2nd ed.). World Scientific. ISBN 978-981-4449-53-3. 
  8. Dingle, R. B. (1973). Asymptotic Expansions: Their Derivation and Interpretation. Academic Press. pp. 267–271. ISBN 0-12-216550-0. 
