Helmholtz free energy

In thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at constant temperature (isothermal). The decrease in the Helmholtz energy during a process equals the maximum amount of work that the system can perform in a thermodynamic process in which the temperature is held constant. At constant temperature and volume, the Helmholtz free energy is minimized at equilibrium.

In contrast, the Gibbs free energy (or free enthalpy) is most commonly used as a measure of thermodynamic potential, especially in chemistry, because it is convenient for processes that occur at constant pressure. In explosives research, for example, the Helmholtz free energy is often used instead, since explosive reactions by their nature induce pressure changes. The Helmholtz free energy is also frequently used to define fundamental equations of state of pure substances.

The concept of free energy was developed by Hermann von Helmholtz, a German physicist, and first presented in 1882 in a lecture called "On the thermodynamics of chemical processes".[1] Following the German word Arbeit (work), the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol A and the name Helmholtz energy.[2] In physics, the symbol F is also used, and the quantity is then often called the free energy or the Helmholtz function.

Definition

The Helmholtz free energy is defined as[3] [math]\displaystyle{ F \equiv U - TS, }[/math] where

  • F is the Helmholtz free energy (sometimes denoted A, particularly in chemistry) (SI: joules, CGS: ergs),
  • U is the internal energy of the system (SI: joules, CGS: ergs),
  • T is the absolute temperature (kelvins) of the surroundings, modelled as a heat bath,
  • S is the entropy of the system (SI: joules per kelvin, CGS: ergs per kelvin).

The Helmholtz energy is the Legendre transformation of the internal energy U, in which temperature replaces entropy as the independent variable.
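
As a quick numerical illustration, the hedged Python sketch below simply evaluates F = U − TS for made-up state values (the numbers are hypothetical and chosen only to show the units involved).

```python
# Minimal sketch: evaluating F = U - T*S for illustrative (hypothetical) state values.
U = 2.5e5   # internal energy of the system, in joules (hypothetical)
T = 300.0   # temperature of the surrounding heat bath, in kelvins
S = 5.0e2   # entropy of the system, in joules per kelvin (hypothetical)

F = U - T * S
print(f"Helmholtz free energy F = {F:.0f} J")  # 250000 - 300*500 = 100000 J
```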

Formal development

The first law of thermodynamics for a closed system gives

[math]\displaystyle{ \mathrm{d}U = \delta Q\ + \delta W, }[/math]

where [math]\displaystyle{ U }[/math] is the internal energy, [math]\displaystyle{ \delta Q }[/math] is the energy added as heat, and [math]\displaystyle{ \delta W }[/math] is the work done on the system. The second law of thermodynamics for a reversible process yields [math]\displaystyle{ \delta Q = T\,\mathrm{d}S }[/math]. In the case of a reversible change, the work done can be expressed as [math]\displaystyle{ \delta W = -p\,\mathrm{d}V }[/math] (ignoring electrical and other non-PV work), and so:

[math]\displaystyle{ \mathrm{d}U = T\,\mathrm{d}S - p\,\mathrm{d}V. }[/math]

Applying the product rule for differentiation to [math]\displaystyle{ \mathrm{d}(TS) = T\,\mathrm{d}S + S\,\mathrm{d}T }[/math], it follows that

[math]\displaystyle{ \mathrm{d}U = \mathrm{d}(TS) - S\,\mathrm{d}T - p\,\mathrm{d}V, }[/math]

and

[math]\displaystyle{ \mathrm{d}(U - TS) = -S\,\mathrm{d}T - p\,\mathrm{d}V. }[/math]

The definition [math]\displaystyle{ F = U - TS }[/math] enables this to be rewritten as

[math]\displaystyle{ \mathrm{d}F = -S\,\mathrm{d}T - p\,\mathrm{d}V. }[/math]

Because F is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible.

Minimum free energy and maximum work principles

The laws of thermodynamics are only directly applicable to systems in thermal equilibrium. If we wish to describe phenomena like chemical reactions, then the best we can do is to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.

Since the thermodynamic variables of the system are well defined in the initial and final states, the internal energy increase [math]\displaystyle{ \Delta U }[/math], the entropy increase [math]\displaystyle{ \Delta S }[/math], and the total amount of work [math]\displaystyle{ W }[/math] performed by the system (the work that can be extracted) are well-defined quantities. Conservation of energy implies

[math]\displaystyle{ \Delta U_\text{bath} + \Delta U + W = 0. }[/math]

The volume of the system is kept constant. This means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by

[math]\displaystyle{ Q_\text{bath} = \Delta U_\text{bath} = -(\Delta U + W). }[/math]

The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore, the entropy change of the heat bath is

[math]\displaystyle{ \Delta S_\text{bath} = \frac{Q_\text{bath}}{T} = -\frac{\Delta U + W}{T}. }[/math]

The total entropy change is thus given by

[math]\displaystyle{ \Delta S_\text{bath} + \Delta S = -\frac{\Delta U - T\Delta S + W}{T}. }[/math]

Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system:

[math]\displaystyle{ \Delta S_\text{bath} + \Delta S = -\frac{\Delta F + W}{T}. }[/math]

Since the total change in entropy must always be greater than or equal to zero, we obtain the inequality

[math]\displaystyle{ W \leq -\Delta F. }[/math]

We see that the total amount of work that can be extracted in an isothermal process is limited by the free-energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system, then

[math]\displaystyle{ \Delta F \leq 0, }[/math]

and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non-PV work, the total free energy during a spontaneous change can only decrease.
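
As a concrete illustration of the bound W ≤ −ΔF, consider the reversible isothermal expansion of an ideal gas: since U depends only on T, ΔU = 0 and ΔS = Nk ln(V2/V1), so the maximum extractable work is −ΔF = NkT ln(V2/V1). The hedged Python sketch below evaluates this for assumed values of N, T, and the volumes (illustrative numbers, not from the text).

```python
import numpy as np

# Hedged sketch: maximum isothermal work W_max = -dF for the reversible
# isothermal expansion of an ideal gas (assumed, illustrative quantities).
k_B = 1.380649e-23       # Boltzmann constant, J/K
N = 6.022e23             # number of particles (about one mole)
T = 300.0                # bath temperature, K
V1, V2 = 1.0e-3, 2.0e-3  # initial and final volumes, m^3

dU = 0.0                           # ideal gas at constant T: U unchanged
dS = N * k_B * np.log(V2 / V1)     # entropy change of the gas
dF = dU - T * dS                   # dF = dU - T*dS at constant T
W_max = -dF                        # maximum extractable work

print(f"dF = {dF:.1f} J, maximum work = {W_max:.1f} J")
# For one mole doubling its volume at 300 K this is about 1.7 kJ.
```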

The result [math]\displaystyle{ \Delta F \leq 0 }[/math] seems to contradict the equation dF = −S dT − P dV, as keeping T and V constant seems to imply dF = 0, and hence F = constant. In reality there is no contradiction: in a simple one-component system, to which the validity of the equation dF = −S dT − P dV is restricted, no process can occur at constant T and V, since there is a unique P(T, V) relation, and thus T, V, and P are all fixed. To allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamic state space of the system. In the case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j. The differential of the free energy then generalizes to

[math]\displaystyle{ dF = -S\,dT - P\,dV + \sum_j \mu_j\,dN_j, }[/math]

where the [math]\displaystyle{ N_{j} }[/math] are the numbers of particles of type j and the [math]\displaystyle{ \mu_{j} }[/math] are the corresponding chemical potentials. This equation is again valid for both reversible and irreversible changes. In the case of a spontaneous change at constant T and V, the last term will thus be negative.

In case there are other external parameters, the above relation further generalizes to

[math]\displaystyle{ dF = -S\,dT - \sum_i X_i\,dx_i + \sum_j \mu_j\,dN_j. }[/math]

Here the [math]\displaystyle{ x_i }[/math] are the external variables, and the [math]\displaystyle{ X_i }[/math] the corresponding generalized forces.

Relation to the canonical partition function

A system kept at constant volume, temperature, and particle number is described by the canonical ensemble. The probability of finding the system in energy eigenstate r is given by [math]\displaystyle{ P_r = \frac{e^{-\beta E_r}}{Z}, }[/math] where

  • [math]\displaystyle{ \beta = \frac{1}{k T}, }[/math]
  • [math]\displaystyle{ E_r }[/math] is the energy of accessible state [math]\displaystyle{ r }[/math]
  • [math]\displaystyle{ Z = \sum_r e^{-\beta E_r}. }[/math]

Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamic quantities must be defined as expectation values. In the thermodynamic limit of infinite system size, the relative fluctuations in these averages go to zero.

The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows:

[math]\displaystyle{ U \equiv \langle E \rangle = \sum_r P_r E_r = \sum_r \frac{e^{-\beta E_r} E_r}{Z} = \sum_r \frac{-\frac{\partial}{\partial \beta} e^{-\beta E_r}}{Z} = \frac{-\frac{\partial}{\partial \beta} \sum_r e^{-\beta E_r}}{Z} = -\frac{\partial \log Z}{\partial \beta}. }[/math]

If the system is in state r, then the generalized force corresponding to an external variable x is given by

[math]\displaystyle{ X_r = -\frac{\partial E_r}{\partial x}. }[/math]

The thermal average of this can be written as

[math]\displaystyle{ X = \sum_r P_r X_r = \frac{1}{\beta} \frac{\partial \log Z}{\partial x}. }[/math]

Suppose that the system has one external variable [math]\displaystyle{ x }[/math]. Then changing the system's temperature parameter by [math]\displaystyle{ d\beta }[/math] and the external variable by [math]\displaystyle{ dx }[/math] will lead to a change in [math]\displaystyle{ \log Z }[/math]:

[math]\displaystyle{ d(\log Z) = \frac{\partial\log Z}{\partial\beta}\,d\beta + \frac{\partial\log Z}{\partial x}\,dx = -U\,d\beta + \beta X\,dx. }[/math]

If we write [math]\displaystyle{ U\,d\beta }[/math] as

[math]\displaystyle{ U\,d\beta = d(\beta U) - \beta\, dU, }[/math]

we get

[math]\displaystyle{ d(\log Z) = -d(\beta U) + \beta\, dU + \beta X \,dx. }[/math]

This means that the change in the internal energy is given by

[math]\displaystyle{ dU = \frac{1}{\beta}\,d(\log Z + \beta U) - X\,dx. }[/math]

In the thermodynamic limit, the fundamental thermodynamic relation should hold:

[math]\displaystyle{ dU = T\, dS - X\, dx. }[/math]

This then implies that the entropy of the system is given by

[math]\displaystyle{ S = k\log Z + \frac{U}{T} + c, }[/math]

where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes [math]\displaystyle{ S = k \log \Omega_0 }[/math], where [math]\displaystyle{ \Omega_0 }[/math] is the ground-state degeneracy. The partition function in this limit is [math]\displaystyle{ \Omega_0 e^{-\beta U_0} }[/math], where [math]\displaystyle{ U_0 }[/math] is the ground-state energy. Thus we see that [math]\displaystyle{ c = 0 }[/math] and, since [math]\displaystyle{ F = U - TS }[/math], that

[math]\displaystyle{ F = -kT\log Z. }[/math]
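
As a concrete check of these relations, the minimal Python sketch below assumes a simple two-level system with level energies 0 and ε (illustrative values, not from the text) and computes the partition function Z, the free energy F = −kT log Z, the mean energy U, and the entropy S = (U − F)/T.

```python
import numpy as np

# Hedged sketch: canonical-ensemble quantities for an assumed two-level system.
k_B = 1.380649e-23     # Boltzmann constant, J/K
eps = 1.0e-21          # level spacing, J (hypothetical)
T = 300.0              # temperature, K
beta = 1.0 / (k_B * T)

E = np.array([0.0, eps])         # energy eigenvalues
Z = np.sum(np.exp(-beta * E))    # partition function
P = np.exp(-beta * E) / Z        # Boltzmann probabilities

F = -k_B * T * np.log(Z)         # F = -kT log Z
U = np.sum(P * E)                # U = <E> = -d(log Z)/d(beta)
S = (U - F) / T                  # entropy, consistent with F = U - T*S

print(f"Z = {Z:.4f}, F = {F:.3e} J, U = {U:.3e} J, S = {S:.3e} J/K")
```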

Relating free energy to other variables

Combining the definition of Helmholtz free energy

[math]\displaystyle{ F = U - T S }[/math]

along with the fundamental thermodynamic relation

[math]\displaystyle{ dF = -S\,dT - P\,dV + \mu\,dN, }[/math]

one can find expressions for entropy, pressure and chemical potential:[4]

[math]\displaystyle{ S = \left.-\left( \frac{\partial F}{\partial T} \right) \right|_{V,N}, \quad P = \left.-\left( \frac{\partial F}{\partial V} \right) \right|_{T,N}, \quad \mu = \left.\left( \frac{\partial F}{\partial N} \right) \right|_{T,V}. }[/math]

These three equations, along with the free energy in terms of the partition function,

[math]\displaystyle{ F = -kT\log Z, }[/math]

provide an efficient way of calculating thermodynamic variables of interest given the partition function and are often used in density of states calculations. One can also perform Legendre transformations for different systems. For example, for a system with a magnetic field B or an electric potential (with charge Q), it is true that

[math]\displaystyle{ m = \left.-\left( \frac{\partial F}{\partial B} \right) \right|_{T,N}, \quad V = \left.\left ( \frac{\partial F}{\partial Q} \right) \right|_{N,T}. }[/math]
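
These derivative relations are easy to verify numerically. The hedged sketch below (Python, reusing the hypothetical two-level system from the earlier sketch) builds F(T) = −kT log Z and checks S = −(∂F/∂T) at fixed V and N by a central finite difference.

```python
import numpy as np

# Hedged sketch: numerical check of S = -(dF/dT) at fixed V, N for an assumed
# two-level system with levels 0 and eps (illustrative value).
k_B = 1.380649e-23
eps = 1.0e-21

def F(T):
    """Helmholtz free energy F = -kT log Z for levels 0 and eps."""
    Z = 1.0 + np.exp(-eps / (k_B * T))
    return -k_B * T * np.log(Z)

T, dT = 300.0, 1.0e-3
S_numeric = -(F(T + dT) - F(T - dT)) / (2 * dT)   # central finite difference

# Direct expression: S = (U - F)/T with U = <E>
p1 = np.exp(-eps / (k_B * T)) / (1.0 + np.exp(-eps / (k_B * T)))
S_direct = (p1 * eps - F(T)) / T

print(f"S from -dF/dT:    {S_numeric:.6e} J/K")
print(f"S from (U - F)/T: {S_direct:.6e} J/K")
```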

Bogoliubov inequality

Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean-field theory, which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows.

Suppose we replace the real Hamiltonian [math]\displaystyle{ H }[/math] of the model by a trial Hamiltonian [math]\displaystyle{ \tilde{H} }[/math], which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that

[math]\displaystyle{ \left\langle\tilde{H}\right\rangle = \langle H \rangle, }[/math]

where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian [math]\displaystyle{ \tilde{H} }[/math], then the Bogoliubov inequality states

[math]\displaystyle{ F \leq \tilde{F}, }[/math]

where [math]\displaystyle{ F }[/math] is the free energy of the original Hamiltonian, and [math]\displaystyle{ \tilde{F} }[/math] is the free energy of the trial Hamiltonian. We will prove this below.

By including a large number of parameters in the trial Hamiltonian and minimizing the free energy, we can expect to get a close approximation to the exact free energy.

The Bogoliubov inequality is often applied in the following way. If we write the Hamiltonian as

[math]\displaystyle{ H = H_0 + \Delta H, }[/math]

where [math]\displaystyle{ H_0 }[/math] is some exactly solvable Hamiltonian, then we can apply the above inequality by defining

[math]\displaystyle{ \tilde{H} = H_0 + \langle\Delta H\rangle_0. }[/math]

Here we have defined [math]\displaystyle{ \langle X\rangle_0 }[/math] to be the average of X over the canonical ensemble defined by [math]\displaystyle{ H_0 }[/math]. Since [math]\displaystyle{ \tilde{H} }[/math] defined this way differs from [math]\displaystyle{ H_0 }[/math] by a constant, we have in general

[math]\displaystyle{ \langle X\rangle_0 = \langle X\rangle, }[/math]

where [math]\displaystyle{ \langle X\rangle }[/math] is still the average over [math]\displaystyle{ \tilde{H} }[/math], as specified above. Therefore,

[math]\displaystyle{ \left\langle\tilde{H}\right\rangle = \big\langle H_0 + \langle\Delta H\rangle \big\rangle = \langle H\rangle, }[/math]

and thus the inequality

[math]\displaystyle{ F \leq \tilde{F} }[/math]

holds. The free energy [math]\displaystyle{ \tilde{F} }[/math] is the free energy of the model defined by [math]\displaystyle{ H_0 }[/math] plus [math]\displaystyle{ \langle\Delta H\rangle }[/math]. This means that

[math]\displaystyle{ \tilde{F} = \langle H_0\rangle_0 - T S_0 + \langle\Delta H\rangle_0 = \langle H\rangle_0 - T S_0, }[/math]

and thus

[math]\displaystyle{ F \leq \langle H\rangle_0 - T S_0. }[/math]
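
A standard application of this bound is mean-field theory for the Ising model. For a nearest-neighbour Ising model with coupling J and coordination number z, taking a trial Hamiltonian of independent spins in an effective field gives, per spin, ⟨H⟩0/N − T S0/N = −(zJ/2)m² − T s(m), where m is the trial magnetization and s(m) the single-spin entropy. The hedged Python sketch below (J, z, and T are illustrative choices, in units with k = 1) minimizes this variational bound over m.

```python
import numpy as np

# Hedged sketch: variational (mean-field) bound F <= <H>_0 - T*S_0 for an Ising
# model, using a trial Hamiltonian of independent spins in an effective field.
# J, z and T are illustrative; units with k_B = 1.
J = 1.0   # ferromagnetic coupling (assumed)
z = 4     # coordination number, e.g. a square lattice
T = 2.0   # temperature (below the mean-field T_c = z*J)

def f_trial(m, T):
    """Variational free energy per spin, <H>_0/N - T*S_0/N, versus the trial
    magnetization m = <s>_0."""
    energy = -0.5 * z * J * m**2
    p_up, p_dn = (1 + m) / 2, (1 - m) / 2
    entropy = -(p_up * np.log(p_up) + p_dn * np.log(p_dn))
    return energy - T * entropy

m_grid = np.linspace(-0.999, 0.999, 4001)
f_vals = f_trial(m_grid, T)
m_best = m_grid[np.argmin(f_vals)]

print(f"best trial magnetization m = {m_best:.3f}")
print(f"upper bound on F per spin  = {f_vals.min():.4f}")
# Consistency check: at the minimum, m should satisfy m = tanh(z*J*m/T).
print(f"tanh(z*J*m/T) = {np.tanh(z * J * m_best / T):.3f}")
```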

Proof of the Bogoliubov inequality

For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by [math]\displaystyle{ P_{r} }[/math] and [math]\displaystyle{ \tilde{P}_{r} }[/math], respectively. From Gibbs' inequality we know that:

[math]\displaystyle{ \sum_{r} \tilde{P}_{r}\log\left(\tilde{P}_{r}\right)\geq \sum_{r} \tilde{P}_{r}\log\left(P_{r}\right) \, }[/math]

holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as:

[math]\displaystyle{ \sum_{r} \tilde{P}_{r}\log\left(\frac{\tilde{P}_{r}}{P_{r}}\right) \, }[/math]

Since

[math]\displaystyle{ \log\left(x\right)\geq 1 - \frac{1}{x}\, }[/math]

it follows that:

[math]\displaystyle{ \sum_{r} \tilde{P}_{r}\log\left(\frac{\tilde{P}_{r}}{P_{r}}\right)\geq \sum_{r}\left(\tilde{P}_{r} - P_{r}\right) = 0 \, }[/math]

where in the last step we have used that both probability distributions are normalized to 1.
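
The inequality is also easy to check numerically; the short hedged sketch below (Python, with randomly generated distributions, purely illustrative) verifies that the difference of the two sides is non-negative for arbitrary normalized distributions.

```python
import numpy as np

# Hedged sketch: numerical check of Gibbs' inequality for random distributions.
rng = np.random.default_rng(0)
for _ in range(5):
    P = rng.random(10); P /= P.sum()          # distribution for H
    P_t = rng.random(10); P_t /= P_t.sum()    # trial distribution for H-tilde
    gap = np.sum(P_t * np.log(P_t)) - np.sum(P_t * np.log(P))
    print(f"sum P~ log P~ - sum P~ log P = {gap:.4f}  (always >= 0)")
```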

We can write Gibbs' inequality as:

[math]\displaystyle{ \left\langle\log\left(\tilde{P}_{r}\right)\right\rangle\geq \left\langle\log\left(P_{r}\right)\right\rangle\, }[/math]

where the averages are taken with respect to [math]\displaystyle{ \tilde{P}_{r} }[/math]. If we now substitute in here the expressions for the probability distributions:

[math]\displaystyle{ P_{r}=\frac{\exp\left[-\beta H\left(r\right)\right]}{Z}\, }[/math]

and

[math]\displaystyle{ \tilde{P}_{r}=\frac{\exp\left[-\beta\tilde{H}\left(r\right)\right]}{\tilde{Z}}\, }[/math]

we get:

[math]\displaystyle{ \left\langle -\beta \tilde{H} - \log\left(\tilde{Z}\right)\right\rangle\geq \left\langle -\beta H - \log\left(Z\right)\right\rangle }[/math]

Since the averages of [math]\displaystyle{ H }[/math] and [math]\displaystyle{ \tilde{H} }[/math] are, by assumption, identical we have:

[math]\displaystyle{ F\leq\tilde{F} }[/math]

Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function.

We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of [math]\displaystyle{ \tilde{H} }[/math] by [math]\displaystyle{ \left|r\right\rangle }[/math]. We denote the diagonal components of the density matrices for the canonical distributions for [math]\displaystyle{ H }[/math] and [math]\displaystyle{ \tilde{H} }[/math] in this basis as:

[math]\displaystyle{ P_{r}=\left\langle r\left|\frac{\exp\left[-\beta H\right]}{Z}\right|r\right\rangle\, }[/math]

and

[math]\displaystyle{ \tilde{P}_{r}=\left\langle r\left|\frac{\exp\left[-\beta\tilde{H}\right]}{\tilde{Z}}\right|r\right\rangle=\frac{\exp\left(-\beta\tilde{E}_{r}\right)}{\tilde{Z}}\, }[/math]

where the [math]\displaystyle{ \tilde{E}_{r} }[/math] are the eigenvalues of [math]\displaystyle{ \tilde{H} }[/math].

We assume again that the averages of H and [math]\displaystyle{ \tilde{H} }[/math] in the canonical ensemble defined by [math]\displaystyle{ \tilde{H} }[/math] are the same:

[math]\displaystyle{ \left\langle\tilde{H}\right\rangle = \left\langle H\right\rangle \, }[/math]

where

[math]\displaystyle{ \left\langle H\right\rangle = \sum_{r}\tilde{P}_{r}\left\langle r\left|H\right|r\right\rangle\, }[/math]

The inequality

[math]\displaystyle{ \sum_{r} \tilde{P}_{r}\log\left(\tilde{P}_{r}\right)\geq \sum_{r} \tilde{P}_{r}\log\left(P_{r}\right) \, }[/math]

still holds, as both the [math]\displaystyle{ P_{r} }[/math] and the [math]\displaystyle{ \tilde{P}_{r} }[/math] sum to 1. On the left-hand side we can replace:

[math]\displaystyle{ \log\left(\tilde{P}_{r}\right)= -\beta \tilde{E}_{r} - \log\left(\tilde{Z}\right)\, }[/math]

On the right-hand side we can use the inequality

[math]\displaystyle{ \left\langle\exp\left(X\right)\right\rangle_{r}\geq\exp\left(\left\langle X\right\rangle_{r}\right)\, }[/math]

where we have introduced the notation

[math]\displaystyle{ \left\langle Y\right\rangle_{r}\equiv\left\langle r\left|Y\right|r\right\rangle\, }[/math]

for the expectation value of the operator Y in the state r. This inequality follows from Jensen's inequality, since the exponential function is convex. Taking the logarithm of this inequality gives:

[math]\displaystyle{ \log\left[\left\langle\exp\left(X\right)\right\rangle_{r}\right]\geq\left\langle X\right\rangle_{r}\, }[/math]

This allows us to write:

[math]\displaystyle{ \log\left(P_{r}\right)=\log\left[\left\langle\exp\left(-\beta H - \log\left(Z\right)\right)\right\rangle_{r}\right]\geq\left\langle -\beta H - \log\left(Z\right)\right\rangle_{r}\, }[/math]

The fact that the averages of H and [math]\displaystyle{ \tilde{H} }[/math] are the same then leads to the same conclusion as in the classical case:

[math]\displaystyle{ F\leq\tilde{F} }[/math]

Generalized Helmholtz energy

In the more general case, the mechanical term [math]\displaystyle{ p\mathrm{d}V }[/math] must be replaced by the product of volume, stress, and an infinitesimal strain:[5]

[math]\displaystyle{ \mathrm{d}F = V \sum_{ij} \sigma_{ij}\,\mathrm{d} \varepsilon_{ij} - S\,\mathrm{d}T + \sum_i \mu_i \,\mathrm{d}N_i, }[/math]

where [math]\displaystyle{ \sigma_{ij} }[/math] is the stress tensor, and [math]\displaystyle{ \varepsilon_{ij} }[/math] is the strain tensor. In the case of linear elastic materials that obey Hooke's law, the stress is related to the strain by

[math]\displaystyle{ \sigma_{ij} = C_{ijkl}\varepsilon_{kl}, }[/math]

where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. We may integrate the expression for [math]\displaystyle{ \mathrm{d}F }[/math] to obtain the Helmholtz energy:

[math]\displaystyle{ \begin{align} F &= \frac{1}{2}VC_{ijkl}\varepsilon_{ij}\varepsilon_{kl} - ST + \sum_i \mu_i N_i \\ &= \frac{1}{2}V\sigma_{ij}\varepsilon_{ij} - ST + \sum_i \mu_i N_i. \end{align} }[/math]
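
The quadratic elastic term is straightforward to evaluate numerically. The hedged sketch below (Python/NumPy, assuming an isotropic linear-elastic material with made-up Lamé constants and a small illustrative strain) computes σ = λ tr(ε) I + 2μ ε and the elastic contribution ½ V σij εij.

```python
import numpy as np

# Hedged sketch: elastic part of the Helmholtz energy, (1/2) V sigma_ij eps_ij,
# for an assumed isotropic material (Lame constants lam, mu are illustrative).
lam, mu = 60.0e9, 26.0e9   # Lame parameters, Pa (roughly aluminium-like)
V = 1.0e-6                 # volume, m^3 (1 cm^3)

eps = np.array([[1.0e-4, 2.0e-5, 0.0],
                [2.0e-5, -3.0e-5, 0.0],
                [0.0,    0.0,     5.0e-5]])  # small symmetric strain tensor

sigma = lam * np.trace(eps) * np.eye(3) + 2 * mu * eps  # isotropic Hooke's law
F_elastic = 0.5 * V * np.sum(sigma * eps)               # (1/2) V sigma_ij eps_ij

print(f"elastic free-energy contribution: {F_elastic:.4e} J")
```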

Application to fundamental equations of state

The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water, as given by the IAPWS in their IAPWS-95 release.

Application to training auto-encoders

Hinton and Zemel[6] derive an objective function for training auto-encoders based on the minimum description length (MDL) principle. The description length of an input vector using a particular code is the sum of the code cost and the reconstruction cost; they define this to be the energy of the code. The true expected combined cost is

[math]\displaystyle{ F = \sum_i p_i E_i - H, }[/math]

"which has exactly the form of Helmholtz free energy".

References

  1. von Helmholtz, H. (1882). Physical memoirs, selected and translated from foreign sources. Taylor & Francis. 
  2. Gold, Victor, ed. (2019). Gold Book. IUPAC. doi:10.1351/goldbook. http://goldbook.iupac.org/H02772.html. Retrieved 2012-08-19.
  3. Levine, Ira N. (1978). Physical Chemistry. University of Brooklyn: McGraw-Hill.
  4. "4.3 Entropy, Helmholtz Free Energy and the Partition Function". http://theory.physics.manchester.ac.uk/~judith/stat_therm/node70.html. 
  5. Landau, L. D. (1986). Theory of Elasticity (Course of Theoretical Physics Volume 7). (Translated from Russian by J. B. Sykes and W. H. Reid) (Third ed.). Boston, MA: Butterworth Heinemann. ISBN 0-7506-2633-X. 
  6. Hinton, G. E.; Zemel, R. S. (1994). "Autoencoders, minimum description length and Helmholtz free energy". Advances in Neural Information Processing Systems: 3–10. https://proceedings.neurips.cc/paper/1993/file/9e3cfc48eccf81a0d57663e129aef3cb-Paper.pdf. 

Further reading

  • Atkins' Physical Chemistry, 7th edition, by Peter Atkins and Julio de Paula, Oxford University Press
  • HyperPhysics: Helmholtz and Gibbs Free Energies