Quantum statistical mechanics
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
Expectation
From classical probability theory, we know that the expectation of a random variable X is defined in terms of its distribution DX by
- [math]\displaystyle{ \mathbb{E}(X) = \int_\mathbb{R} \lambda \, d \, \operatorname{D}_X(\lambda) }[/math]
assuming, of course, that the random variable is integrable or non-negative. Similarly, let A be an observable of a quantum mechanical system. A is given by a densely defined self-adjoint operator on H. The spectral measure of A, defined by
- [math]\displaystyle{ A = \int_\mathbb{R} \lambda \, d \operatorname{E}_A(\lambda), }[/math]
uniquely determines A and, conversely, is uniquely determined by A. EA is a Boolean homomorphism from the Borel subsets of R into the lattice Q of self-adjoint projections of H. In analogy with probability theory, given a state S, we introduce the distribution of A under S, which is the probability measure defined on the Borel subsets of R by
- [math]\displaystyle{ \operatorname{D}_A(U) = \operatorname{Tr}(\operatorname{E}_A(U) S). }[/math]
Similarly, the expected value of A is defined in terms of the probability distribution DA by
- [math]\displaystyle{ \mathbb{E}(A) = \int_\mathbb{R} \lambda \, d \, \operatorname{D}_A(\lambda). }[/math]
Note that this expectation is relative to the mixed state S which is used in the definition of DA.
Remark. For technical reasons, one needs to consider separately the positive and negative parts of A defined by the Borel functional calculus for unbounded operators.
One can easily show:
- [math]\displaystyle{ \mathbb{E}(A) = \operatorname{Tr}(A S) = \operatorname{Tr}(S A). }[/math]
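For bounded A this identity follows from the spectral theorem and the linearity of the trace; a sketch (for unbounded A one treats the positive and negative parts separately, as in the remark above):
- [math]\displaystyle{ \mathbb{E}(A) = \int_\mathbb{R} \lambda \, d \operatorname{Tr}(\operatorname{E}_A(\lambda) S) = \operatorname{Tr}\left( \left( \int_\mathbb{R} \lambda \, d \operatorname{E}_A(\lambda) \right) S \right) = \operatorname{Tr}(A S). }[/math]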
Note that if S is a pure state corresponding to the vector [math]\displaystyle{ \psi }[/math], then:
- [math]\displaystyle{ \mathbb{E}(A) = \langle \psi | A | \psi \rangle. }[/math]
The trace of an operator A is written in terms of an orthonormal basis [math]\displaystyle{ \{ | m \rangle \} }[/math] as follows:
- [math]\displaystyle{ \operatorname{Tr}(A) = \sum_{m} \langle m | A | m \rangle . }[/math]
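These formulas can be checked numerically in a finite-dimensional setting. The following sketch (in Python with NumPy; the observable, state, and vectors are illustrative choices, not from the text) computes the distribution DA from the spectral projectors of A and verifies that the expectation equals Tr(AS):

```python
import numpy as np

# Illustrative Hermitian observable A and density matrix S (trace 1).
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
S = np.diag([0.75, 0.25])

# Spectral decomposition of A; each eigenvalue lam carries a projector
# P = |v><v|, and D_A({lam}) = Tr(E_A({lam}) S) = Tr(P S).
eigvals, eigvecs = np.linalg.eigh(A)
for lam, v in zip(eigvals, eigvecs.T):
    P = np.outer(v, v.conj())
    print(f"P(A = {lam:+.4f}) = {np.trace(P @ S).real:.4f}")

# Expectation: E(A) = sum_k lam_k P(A = lam_k) = Tr(A S) = Tr(S A).
print("Tr(A S) =", np.trace(A @ S).real)

# For a pure state S = |psi><psi|, Tr(A S) reduces to <psi|A|psi>.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
S_pure = np.outer(psi, psi.conj())
print(np.trace(A @ S_pure).real, (psi.conj() @ A @ psi).real)
```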
Von Neumann entropy
Of particular significance for describing the randomness of a state is the von Neumann entropy of S, formally defined by
- [math]\displaystyle{ \operatorname{H}(S) = -\operatorname{Tr}(S \log_2 S) }[/math].
Actually, the operator S log2 S is not necessarily trace-class. However, if S is a non-negative self-adjoint operator not of trace class, we define Tr(S) = +∞. Also note that any density operator S can be diagonalized; that is, it can be represented in some orthonormal basis by a (possibly infinite) matrix of the form
- [math]\displaystyle{ \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 & \cdots \\ 0 & \lambda_2 & \cdots & 0 & \cdots\\ \vdots & \vdots & \ddots & \\ 0 & 0 & & \lambda_n & \\ \vdots & \vdots & & & \ddots \end{bmatrix} }[/math]
and we define
- [math]\displaystyle{ \operatorname{H}(S) = - \sum_i \lambda_i \log_2 \lambda_i. }[/math]
The convention is that [math]\displaystyle{ \; 0 \log_2 0 = 0 }[/math], since an event with probability zero should not contribute to the entropy. This value is an extended real number (that is, in [0, ∞]), and it is clearly a unitary invariant of S.
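A minimal numerical sketch (the function name von_neumann_entropy is ours): H(S) computed from the eigenvalues of S, with the convention 0 log2 0 = 0:

```python
import numpy as np

def von_neumann_entropy(S: np.ndarray) -> float:
    """Von Neumann entropy -Tr(S log2 S), computed from the eigenvalues of S."""
    lam = np.linalg.eigvalsh(S)   # S is self-adjoint: use eigvalsh
    lam = lam[lam > 1e-12]        # drop zero eigenvalues: 0 log2 0 = 0
    return float(-np.sum(lam * np.log2(lam)))

# Example: a diagonal density matrix with eigenvalues 1/2, 1/4, 1/4.
print(von_neumann_entropy(np.diag([0.5, 0.25, 0.25])))   # 1.5 bits
```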
Remark. It is indeed possible that H(S) = +∞ for some density operator S. In fact, let T be the diagonal matrix
- [math]\displaystyle{ T = \begin{bmatrix} \frac{1}{2 (\log_2 2)^2 }& 0 & \cdots & 0 & \cdots \\ 0 & \frac{1}{3 (\log_2 3)^2 } & \cdots & 0 & \cdots\\ \vdots & \vdots & \ddots & \\ 0 & 0 & & \frac{1}{n (\log_2 n)^2 } & \\ \vdots & \vdots & & & \ddots \end{bmatrix} }[/math]
T is non-negative and trace-class, but one can show that T log2 T is not trace-class; normalizing T by its (finite) trace thus yields a density operator with infinite entropy.
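To see why, note that up to this normalization the eigenvalues are [math]\displaystyle{ \lambda_n = 1/(n (\log_2 n)^2) }[/math] for n ≥ 2, and
- [math]\displaystyle{ -\lambda_n \log_2 \lambda_n = \frac{\log_2 n + 2 \log_2 \log_2 n}{n (\log_2 n)^2} \sim \frac{1}{n \log_2 n}, }[/math]
so [math]\displaystyle{ \sum_n \lambda_n }[/math] converges while the entropy series diverges, both by the integral test.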
Theorem. Entropy is a unitary invariant.
In analogy with classical entropy (notice the similarity in the definitions), H(S) measures the amount of randomness in the state S. The more dispersed the eigenvalues are, the larger the system entropy. For a system in which the space H is finite-dimensional, entropy is maximized for the states S which in diagonal form have the representation
- [math]\displaystyle{ \begin{bmatrix} \frac{1}{n} & 0 & \cdots & 0 \\ 0 & \frac{1}{n} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{n} \end{bmatrix} }[/math]
For such an S, H(S) = log2 n. The state S is called the maximally mixed state.
Recall that a pure state is one of the form
- [math]\displaystyle{ S = | \psi \rangle \langle \psi |, }[/math]
for ψ a vector of norm 1.
Theorem. H(S) = 0 if and only if S is a pure state.
This follows because S is a pure state if and only if its diagonal form has exactly one non-zero entry, which is a 1.
Entropy can be used as a measure of quantum entanglement.
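As an illustration (a sketch; the Bell-state example is an illustrative choice, not from the text): tracing out one qubit of the maximally entangled state [math]\displaystyle{ (|00\rangle + |11\rangle)/\sqrt{2} }[/math] leaves the maximally mixed state I/2 on the other qubit, and its entropy, log2 2 = 1 bit, quantifies the entanglement of the pure joint state:

```python
import numpy as np

# Bell state |phi+> = (|00> + |11>)/sqrt(2): a pure state, so H(S) = 0.
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2)
S = np.outer(phi, phi.conj())

# Partial trace over the second qubit: reshape to (2,2,2,2) and sum the
# repeated second-qubit index.
rho_A = np.einsum('ikjk->ij', S.reshape(2, 2, 2, 2))
print(rho_A)                          # I/2: the maximally mixed state

lam = np.linalg.eigvalsh(rho_A)
lam = lam[lam > 1e-12]
print(-np.sum(lam * np.log2(lam)))    # 1.0 bit of entanglement
```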
Gibbs canonical ensemble
Consider an ensemble of systems described by a Hamiltonian H with average energy E. If H has pure-point spectrum and the eigenvalues [math]\displaystyle{ E_n }[/math] of H go to +∞ sufficiently fast, [math]\displaystyle{ \mathrm{e}^{-rH} }[/math] will be a non-negative trace-class operator for every positive r.
The Gibbs canonical ensemble is described by the state
- [math]\displaystyle{ S= \frac{\mathrm{e}^{- \beta H}}{\operatorname{Tr}(\mathrm{e}^{- \beta H})}. }[/math]
Here β is such that the ensemble average of the energy satisfies
- [math]\displaystyle{ \operatorname{Tr}(S H) = E }[/math]
and
- [math]\displaystyle{ \operatorname{Tr}(\mathrm{e}^{- \beta H}) = \sum_n \mathrm{e}^{- \beta E_n} = Z(\beta). }[/math]
The quantity Z(β) is called the partition function; it is the quantum mechanical version of the canonical partition function of classical statistical mechanics. The probability that a system chosen at random from the ensemble will be in a state corresponding to energy eigenvalue [math]\displaystyle{ E_m }[/math] is
- [math]\displaystyle{ \mathcal{P}(E_m) = \frac{\mathrm{e}^{- \beta E_m}}{\sum_n \mathrm{e}^{- \beta E_n}}. }[/math]
Among all states with the same mean energy Tr(SH) = E, the Gibbs canonical ensemble maximizes the von Neumann entropy; this is the maximum-entropy characterization of the canonical ensemble.
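A numerical sketch of this construction (the four-level spectrum and the target energy are illustrative choices): given E, the inverse temperature β is fixed by solving Tr(SH) = E, here by root bracketing with SciPy:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

H = np.diag([0.0, 1.0, 2.0, 3.0])   # illustrative energy eigenvalues E_n

def mean_energy(beta: float) -> float:
    """Ensemble average Tr(S H) for the Gibbs state at inverse temperature beta."""
    S = expm(-beta * H)
    S /= np.trace(S)                # normalize by Z(beta) = Tr(exp(-beta H))
    return np.trace(S @ H).real

E_target = 1.0                      # must lie between min E_n and the beta -> 0 average
beta = brentq(lambda b: mean_energy(b) - E_target, 1e-6, 50.0)
print("beta =", beta)

# Occupation probabilities P(E_m) = exp(-beta E_m) / Z(beta).
w = np.exp(-beta * np.diag(H))
print("P(E_m) =", w / w.sum())
```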
Grand canonical ensemble
For open systems where the energy and numbers of particles may fluctuate, the system is described by the grand canonical ensemble, described by the density matrix
- [math]\displaystyle{ \rho = \frac{\mathrm{e}^{\beta (\sum_i \mu_iN_i - H)}}{\operatorname{Tr}\left(\mathrm{e}^{ \beta ( \sum_i \mu_iN_i - H)}\right)}. }[/math]
where the N1, N2, ... are the particle number operators for the different species of particles that are exchanged with the reservoir. Note that this density matrix describes many more states (of varying N) than the canonical ensemble does.
The grand partition function is
- [math]\displaystyle{ \mathcal Z(\beta, \mu_1, \mu_2, \cdots) = \operatorname{Tr}\left(\mathrm{e}^{\beta (\sum_i \mu_i N_i - H)}\right). }[/math]
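A minimal sketch for a single species (one fermionic mode; the parameter values are illustrative): on the two-dimensional Fock space {|0⟩, |1⟩}, the grand canonical state reproduces the Fermi–Dirac occupation:

```python
import numpy as np
from scipy.linalg import expm

eps, mu, beta = 1.0, 0.3, 2.0   # illustrative mode energy, chemical potential, 1/kT
N = np.diag([0.0, 1.0])         # particle number operator on {|0>, |1>}
H = eps * N                     # Hamiltonian H = eps * n

rho = expm(beta * (mu * N - H))
rho /= np.trace(rho)            # divide by the grand partition function

# Mean occupation Tr(rho N) matches the Fermi-Dirac distribution.
print(np.trace(rho @ N).real)
print(1.0 / (np.exp(beta * (eps - mu)) + 1.0))
```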