Two-state quantum system
In quantum mechanics, a two-state system (also known as a two-level system) is a quantum system that can exist in any quantum superposition of two independent (physically distinguishable) quantum states. The Hilbert space describing such a system is two-dimensional. Therefore, a complete basis spanning the space will consist of two independent states. Any two-state system can also be seen as a qubit.
Two-state systems are the simplest quantum systems that are of interest, since the dynamics of a one-state system is trivial (as there are no other states the system can exist in). The mathematical framework required for the analysis of two-state systems is that of linear differential equations and linear algebra of two-dimensional spaces. As a result, the dynamics of a two-state system can be solved analytically without any approximation. The generic behavior of the system is that the wavefunction's amplitude oscillates between the two states.
A well-known example of a two-state system is the spin of a spin-1/2 particle such as an electron, whose spin can have values +ħ/2 or −ħ/2, where ħ is the reduced Planck constant.
The two-state system cannot be used as a description of absorption or decay, because such processes require coupling to a continuum. Such processes would involve exponential decay of the amplitudes, but the solutions of the two-state system are oscillatory.
Analytical solutions for stationary state energies and time-dependence
Representation
Supposing the two available basis states of the system are [math]\displaystyle{ |1\rangle }[/math] and [math]\displaystyle{ |2\rangle }[/math], in general the state can be written as a superposition of these two states with probability amplitudes [math]\displaystyle{ c_1, c_2 }[/math], [math]\displaystyle{ |\psi\rangle = c_1 |1\rangle + c_2 |2\rangle . }[/math]
Since the basis states are orthonormal, [math]\displaystyle{ \langle i|j\rangle=\delta_{ij} }[/math] where [math]\displaystyle{ i,j\in{1,2} }[/math] and [math]\displaystyle{ \delta_{ij} }[/math] is the Kronecker delta, so [math]\displaystyle{ c_i=\langle i|\psi\rangle }[/math]. These two complex numbers may be considered coordinates in a two-dimensional complex Hilbert space.[1] Thus the state vector corresponding to the state [math]\displaystyle{ |\psi\rangle }[/math] is [math]\displaystyle{ |\psi\rangle \equiv \begin{pmatrix} \langle 1|\psi\rangle \\ \langle 2|\psi\rangle\end{pmatrix}=\begin{pmatrix} c_1 \\ c_2\end{pmatrix}=c_1\begin{pmatrix} 1 \\ 0\end{pmatrix} + c_2\begin{pmatrix} 0 \\ 1\end{pmatrix}=\mathbf{c}, }[/math] and the basis states correspond to the basis vectors, [math]\displaystyle{ |1\rangle \equiv \begin{pmatrix} \langle 1|1\rangle \\ \langle 2|1\rangle\end{pmatrix} = \begin{pmatrix} 1 \\ 0\end{pmatrix} }[/math] and [math]\displaystyle{ |2\rangle \equiv \begin{pmatrix} \langle 1|2\rangle \\ \langle 2|2\rangle\end{pmatrix} = \begin{pmatrix} 0 \\ 1\end{pmatrix}. }[/math]
If the state [math]\displaystyle{ |\psi\rangle }[/math] is normalized, the norm of the state vector is unity, i.e. [math]\displaystyle{ {|c_1|}^2+{|c_2|}^2 = 1 }[/math].
All observable physical quantities, such as energy, are associated with hermitian operators. In the case of energy and the corresponding Hamiltonian, H, this means [math]\displaystyle{ H_{ij}=\langle i|H|j \rangle=\langle j|H|i \rangle^*=H_{ji}^*, }[/math] i.e. [math]\displaystyle{ H_{11} }[/math] and [math]\displaystyle{ H_{22} }[/math] are real, and [math]\displaystyle{ H_{12} = H_{21}^* }[/math]. Thus, these four matrix elements [math]\displaystyle{ H_{ij} }[/math] produce a 2×2 hermitian matrix, [math]\displaystyle{ \mathbf{H} = \begin{pmatrix} \langle 1|H|1 \rangle & \langle 1|H|2 \rangle \\ \langle 2|H|1 \rangle & \langle 2|H|2 \rangle \end{pmatrix} = \begin{pmatrix} H_{11} & H_{12} \\ H_{12}^* & H_{22} \end{pmatrix} . }[/math]
The time-independent Schrödinger equation states that [math]\displaystyle{ H|\psi\rangle=E|\psi\rangle }[/math]; substituting for [math]\displaystyle{ |\psi\rangle }[/math] in terms of the basis states from above, and multiplying both sides by [math]\displaystyle{ \langle 1| }[/math] or [math]\displaystyle{ \langle 2| }[/math], produces a system of two linear equations that can be written in matrix form, [math]\displaystyle{ \begin{pmatrix} H_{11} & H_{12} \\ H_{12}^* & H_{22} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = E \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} , }[/math] or [math]\displaystyle{ \mathbf{Hc} = E\mathbf{c} }[/math], which is a 2×2 matrix eigenvalue problem. As mentioned above, this equation comes from plugging a general state into the time-independent Schrödinger equation. The time-independent Schrödinger equation is a restrictive condition used to specify the eigenstates, so plugging a general state into it shows what form the general state must take to be an eigenstate. Suppose the basis states are themselves eigenstates of the Hamiltonian, with energies [math]\displaystyle{ \varepsilon_1 }[/math] and [math]\displaystyle{ \varepsilon_2 }[/math]. Substituting and distributing gives [math]\displaystyle{ c_1 H | 1 \rangle + c_2 H | 2 \rangle = c_1 E | 1 \rangle + c_2 E | 2 \rangle }[/math], which requires [math]\displaystyle{ c_1 }[/math] or [math]\displaystyle{ c_2 }[/math] to be zero ([math]\displaystyle{ E }[/math] cannot equal both [math]\displaystyle{ \varepsilon_1 }[/math] and [math]\displaystyle{ \varepsilon_2 }[/math], which are by definition different). Upon setting [math]\displaystyle{ c_1 }[/math] or [math]\displaystyle{ c_2 }[/math] to zero, only one state remains, and [math]\displaystyle{ E }[/math] is the energy of the surviving state. This confirms that the time-independent Schrödinger equation is satisfied only by eigenstates of H, which (by definition of the state vector) are the states where all but one coefficient vanish.
Now, if we follow the same derivation, but before acting with the Hamiltonian on the individual states, we multiply both sides by [math]\displaystyle{ \langle 1 | }[/math] or [math]\displaystyle{ \langle 2 | }[/math], we get a system of two linear equations that can be combined into the above matrix equation. Like before, this can only be satisfied if [math]\displaystyle{ c_1 }[/math] or [math]\displaystyle{ c_2 }[/math] is zero, and when this happens, the constant [math]\displaystyle{ E }[/math] will be the energy of the remaining state. The above matrix equation should thus be interpreted as a restrictive condition on a general state vector to yield an eigenvector of [math]\displaystyle{ H }[/math], exactly analogous to the time-independent Schrödinger equation.
Of course, in general, multiplying the matrix into a state vector will not result in the same vector multiplied by a constant E. For general validity, one has to write the equation in the form [math]\displaystyle{ \begin{pmatrix} H_{11} & H_{12} \\ H_{12}^* & H_{22} \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} \varepsilon_1 c_1 \\ \varepsilon_2 c_2 \end{pmatrix} , }[/math] with the individual eigenstate energies still inside the product vector. In either case, the Hamiltonian matrix can be derived using the method specified above, or via the more traditional method of constructing a matrix using boundary conditions; specifically, by using the requirement that when it acts on either basis state, it must return that state multiplied by the energy of that state. (There are no boundary conditions on how it acts on a general state.) This results in a diagonal matrix, with the diagonal elements being the energies of the eigenstates and the off-diagonal elements being zero. The form of the matrix above that uses bra-ket-enclosed Hamiltonians is a more general version of this matrix.
One might ask why it is necessary to write the Hamiltonian matrix in such a general form with bra-ket-enclosed Hamiltonians, since [math]\displaystyle{ H_{ij}, i\neq j }[/math] should always equal zero and [math]\displaystyle{ H_{ii} }[/math] should always equal [math]\displaystyle{ \varepsilon_i }[/math]. The reason is that, in some more complex problems, the state vectors may not be eigenstates of the Hamiltonian used in the matrix. One place where this occurs is in degenerate perturbation theory, where the off-diagonal elements are nonzero until the problem is solved by diagonalization.
Because of the hermiticity of [math]\displaystyle{ \mathbf{H} }[/math] the eigenvalues are real; or, conversely, it is the requirement that the energies be real that implies the hermiticity of [math]\displaystyle{ \mathbf{H} }[/math]. The eigenvectors represent the stationary states, i.e., those for which the squared magnitudes of the probability amplitudes do not change with time.
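As a concrete numerical check (a sketch, not from the original text; the matrix entries are hypothetical), the eigenvalue problem Hc = Ec can be solved with NumPy's Hermitian eigensolver, confirming that the energies come out real and that each eigenvector is a stationary state:

```python
import numpy as np

# Hypothetical 2x2 Hermitian Hamiltonian: H11, H22 real, H21 = conj(H12).
H = np.array([[1.0, 0.5 - 0.2j],
              [0.5 + 0.2j, -1.0]])

# eigh assumes a Hermitian matrix and returns real eigenvalues in
# ascending order, together with orthonormal eigenvectors as columns.
energies, vectors = np.linalg.eigh(H)

assert np.allclose(energies.imag, 0)  # hermiticity implies real energies
for k in range(2):
    # Each column satisfies H c = E c, i.e. it is a stationary state.
    assert np.allclose(H @ vectors[:, k], energies[k] * vectors[:, k])
```

Here `np.linalg.eigh` does the diagonalization that the text performs analytically in the next section.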
Eigenvalues of the Hamiltonian
The most general form of a 2×2 Hermitian matrix such as the Hamiltonian of a two-state system is given by [math]\displaystyle{ \mathbf{H} = \begin{pmatrix} \varepsilon_1 & \beta - i \gamma \\ \beta + i\gamma & \varepsilon_2\end{pmatrix}, }[/math] where [math]\displaystyle{ \varepsilon_1, \varepsilon_2, \beta }[/math] and γ are real numbers with units of energy. The allowed energy levels of the system, namely the eigenvalues of the Hamiltonian matrix, can be found in the usual way.
Equivalently, this matrix can be decomposed as, [math]\displaystyle{ \mathbf{H} = \alpha\cdot\sigma_0 + \beta \cdot\sigma_1 + \gamma\cdot\sigma_2 + \delta\cdot\sigma_3 = \begin{pmatrix} \alpha+\delta & \beta-i\gamma\\ \beta+i\gamma & \alpha-\delta \end{pmatrix}. }[/math] Here, [math]\displaystyle{ \alpha = \frac 1 2 \left(\varepsilon_1 + \varepsilon_2\right) }[/math] and [math]\displaystyle{ \delta = \frac 1 2 \left(\varepsilon_1 - \varepsilon_2\right) }[/math] are real numbers. The matrix [math]\displaystyle{ \sigma_0 }[/math] is the 2×2 identity matrix and the matrices [math]\displaystyle{ \sigma_k }[/math] with [math]\displaystyle{ k = 1,2,3 }[/math] are the Pauli matrices. This decomposition simplifies the analysis of the system, especially in the time-independent case, where the values of [math]\displaystyle{ \alpha,\beta,\gamma }[/math] and [math]\displaystyle{ \delta }[/math] are constants.
The Hamiltonian can be further condensed as [math]\displaystyle{ \mathbf{H} = \alpha \cdot \sigma_0 + \mathbf{r}\cdot \boldsymbol{\sigma} . }[/math]
The vector [math]\displaystyle{ \mathbf{r} }[/math] is given by [math]\displaystyle{ (\beta,\gamma,\delta) }[/math] and [math]\displaystyle{ \sigma }[/math] is given by [math]\displaystyle{ (\sigma_1,\sigma_2,\sigma_3) }[/math]. This representation simplifies the analysis of the time evolution of the system and is easier to use with other specialized representations such as the Bloch sphere.
If the two-state system's time-independent Hamiltonian H is defined as above, then its eigenvalues are given by [math]\displaystyle{ E_{\pm} = \alpha \pm |\mathbf{r}| }[/math]. Evidently, α is the average energy of the two levels, and the splitting between them is [math]\displaystyle{ 2|\mathbf{r}| }[/math]. The corresponding eigenvectors are denoted as [math]\displaystyle{ |+\rangle }[/math] and [math]\displaystyle{ |-\rangle }[/math].
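The eigenvalue formula can be verified numerically. The sketch below (with hypothetical values for the energy parameters) builds H from the Pauli decomposition and checks that its eigenvalues are α ± |r|:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_1 (x)
s2 = np.array([[0, -1j], [1j, 0]])                # sigma_2 (y)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_3 (z)

# Hypothetical energy parameters (units of energy, not from the text)
eps1, eps2, beta, gamma = 2.0, 0.4, 0.3, 0.1
alpha = 0.5 * (eps1 + eps2)
delta = 0.5 * (eps1 - eps2)
r = np.array([beta, gamma, delta])

# H = alpha*s0 + r . sigma reproduces the general Hermitian form
H = alpha * s0 + beta * s1 + gamma * s2 + delta * s3
assert np.allclose(H, [[eps1, beta - 1j * gamma],
                       [beta + 1j * gamma, eps2]])

# Eigenvalues are alpha -+ |r| (eigvalsh returns ascending order)
E_minus, E_plus = np.linalg.eigvalsh(H)
assert np.isclose(E_minus, alpha - np.linalg.norm(r))
assert np.isclose(E_plus, alpha + np.linalg.norm(r))
```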
Time dependence
We now assume that the probability amplitudes are time-dependent, though the basis states are not. The time-dependent Schrödinger equation states [math]\displaystyle{ i\hbar \partial_t |\psi\rangle = H|\psi\rangle }[/math], and proceeding as before (substituting for [math]\displaystyle{ |\psi\rangle }[/math] and premultiplying by [math]\displaystyle{ \langle 1|, \langle 2| }[/math]) again produces a pair of coupled linear equations, but this time they are first-order differential equations in time: [math]\displaystyle{ i\hbar \partial_t \mathbf{c} = \mathbf{Hc} }[/math]. If [math]\displaystyle{ \mathbf{H} }[/math] is time independent, there are several approaches to finding the time dependence of [math]\displaystyle{ c_1, c_2 }[/math], such as normal modes. The result is [math]\displaystyle{ \mathbf{c}(t) = e^{-i \mathbf{H} t / \hbar} \mathbf{c}_0 = \mathbf{U}(t) \mathbf{c}_0, }[/math] where [math]\displaystyle{ \mathbf{c}_0 = \mathbf{c}(0) }[/math] is the state vector at [math]\displaystyle{ t = 0 }[/math]. Here the exponential of a matrix may be found from the series expansion. The matrix [math]\displaystyle{ \mathbf{U}(t) }[/math] is called the time evolution matrix (which comprises the matrix elements of the corresponding time evolution operator [math]\displaystyle{ U(t) }[/math]). It is easily proved that [math]\displaystyle{ \mathbf{U}(t) }[/math] is unitary, meaning that [math]\displaystyle{ \mathbf{U}^\dagger \mathbf{U} = 1 }[/math].
It can be shown that [math]\displaystyle{ \mathbf{U}(t) = e^{-i\mathbf{H}t/\hbar} = e^{-i \alpha t / \hbar} \left(\cos\left(\frac{|\mathbf{r}|}{\hbar}t\right)\sigma_0 - i \sin\left(\frac{|\mathbf{r}|}{\hbar}t\right) \hat{r} \cdot \boldsymbol{\sigma}\right) , }[/math] where [math]\displaystyle{ \hat{r} = \frac{\mathbf{r}}{|\mathbf{r}|}. }[/math]
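This closed form can be checked against a direct matrix exponential. The sketch below uses SciPy's `expm` and hypothetical parameter values, in units where ħ = 1:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential via its series definition

hbar = 1.0  # assumption: work in units with hbar = 1
s0 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

alpha = 0.7                        # hypothetical average energy
r = np.array([0.3, 0.1, 0.8])      # hypothetical (beta, gamma, delta)
rnorm = np.linalg.norm(r)
rhat = r / rnorm

H = alpha * s0 + sum(ri * si for ri, si in zip(r, sigma))

t = 1.3
U_expm = expm(-1j * H * t / hbar)
U_closed = np.exp(-1j * alpha * t / hbar) * (
    np.cos(rnorm * t / hbar) * s0
    - 1j * np.sin(rnorm * t / hbar)
    * sum(ri * si for ri, si in zip(rhat, sigma)))

assert np.allclose(U_expm, U_closed)                     # closed form holds
assert np.allclose(U_expm.conj().T @ U_expm, np.eye(2))  # U is unitary
```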
When one changes the basis to the eigenvectors of the Hamiltonian, in other words, if the basis states [math]\displaystyle{ |1\rangle , |2\rangle }[/math] are chosen to be the eigenvectors, then [math]\displaystyle{ \varepsilon_1=H_{11}=\langle 1|H|1 \rangle=E_1\langle 1|1 \rangle=E_1 }[/math] and [math]\displaystyle{ \beta+i\gamma = H_{21} =\langle 2|H|1 \rangle = E_1\langle 2|1 \rangle = 0 }[/math] and so the Hamiltonian is diagonal, i.e. [math]\displaystyle{ |\mathbf{r}| = \delta }[/math] and is of the form, [math]\displaystyle{ \mathbf{H} = \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix}. }[/math]
Now, the unitary time evolution operator [math]\displaystyle{ U }[/math] is easily seen to be given by: [math]\displaystyle{ \mathbf{U}(t) = e^{-i\mathbf{H}t/\hbar} = \begin{pmatrix} e^{-i E_1 t/\hbar} & 0\\ 0 & e^{-i E_2 t/\hbar} \end{pmatrix} = e^{-i \alpha t/\hbar} \begin{pmatrix} e^{-i \delta t/\hbar} & 0\\ 0 & e^{i \delta t/\hbar} \end{pmatrix} = e^{-i\alpha t/\hbar} \left(\cos\left(\frac \delta \hbar t\right)\sigma_0 - i \sin\left(\frac \delta \hbar t\right) \boldsymbol{\sigma}_3\right) . }[/math] The [math]\displaystyle{ e^{-i\alpha t/\hbar} }[/math] factor merely contributes to the overall phase of the operator, and can usually be ignored to yield a new time evolution operator that is physically indistinguishable from the original operator. Moreover, any perturbation to the system (which will be of the same form as the Hamiltonian) can be added to the system in the eigenbasis of the unperturbed Hamiltonian and analysed in the same way as above. Therefore, for any perturbation the new eigenvectors of the perturbed system can be solved for exactly, as mentioned in the introduction.
Rabi formula for a static perturbation
Suppose that the system starts in one of the basis states at [math]\displaystyle{ t = 0 }[/math], say [math]\displaystyle{ |1\rangle }[/math] so that [math]\displaystyle{ \mathbf{c}_0 = \begin{pmatrix} 1\\ 0 \end{pmatrix} }[/math], and we are interested in the probability of occupation of each of the basis states as a function of time when [math]\displaystyle{ \mathbf{H} }[/math] is the time-independent Hamiltonian. [math]\displaystyle{ \mathbf{c}(t) =\mathbf{U}(t)\mathbf{c}_0= \begin{pmatrix} U_{11}(t) & U_{12}(t)\\ U_{21}(t) & U_{22}(t) \end{pmatrix}\begin{pmatrix} 1\\ 0 \end{pmatrix}=\begin{pmatrix} U_{11}(t) \\ U_{21}(t) \end{pmatrix}. }[/math]
The probability of occupation of state i is [math]\displaystyle{ P_i(t) = |c_i(t)|^2 = |U_{i1}(t)|^2 }[/math]. In the case of the starting state, [math]\displaystyle{ P_1(t) = |c_1(t)|^2 = |U_{11}(t)|^2 }[/math], and from above, [math]\displaystyle{ U_{11}(t) = e^{\frac{-i\alpha t}{\hbar}} \left(\cos\left(\frac{|\mathbf{r}|}\hbar t\right) - i \sin\left(\frac{|\mathbf{r}|}\hbar t\right)\frac{\delta}{|\mathbf{r}|}\right) . }[/math] Hence, [math]\displaystyle{ P_1(t)= \cos^2(\Omega t) + \sin^2(\Omega t)\frac{\Delta^2}{\Omega^2}. }[/math]
Obviously, [math]\displaystyle{ P_1(0)=1 }[/math] due to the initial condition. The frequency [math]\displaystyle{ \Omega = \frac{|\mathbf{r}|}{\hbar} = \frac 1 \hbar \sqrt{\beta^2+\gamma^2+\delta^2} = \sqrt{|\Omega_R|^2+\Delta^2} }[/math] is called the generalised Rabi frequency, [math]\displaystyle{ \Omega_R = (\beta+i \gamma)/\hbar }[/math] is called the Rabi frequency, and [math]\displaystyle{ \Delta = \delta/\hbar }[/math] is called the detuning.
At zero detuning, [math]\displaystyle{ P_1(t) = \cos^2(|\Omega_R| t) }[/math], i.e., there is Rabi flopping from guaranteed occupation of state 1, to guaranteed occupation of state 2, and back to state 1, etc., with frequency [math]\displaystyle{ |\Omega_R| }[/math]. As the detuning is increased away from zero, the frequency of the flopping increases (to Ω) and the amplitude of the oscillation into state 2 decreases to [math]\displaystyle{ |\Omega_R|^2/\Omega^2 }[/math].
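The Rabi formula can be confirmed against exact propagation. The sketch below (hypothetical couplings and detuning, units with ħ = 1, SciPy's `expm` for the propagator) compares the two at several times:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0  # assumption: units with hbar = 1
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Hypothetical parameters; the alpha*identity term only adds an overall
# phase, so it is dropped from the Hamiltonian here.
beta, gamma, delta = 0.5, 0.2, 0.3
H = beta * sigma[0] + gamma * sigma[1] + delta * sigma[2]

Omega_R = abs(beta + 1j * gamma) / hbar   # Rabi frequency
Delta = delta / hbar                      # detuning
Omega = np.hypot(Omega_R, Delta)          # generalised Rabi frequency

c0 = np.array([1.0, 0.0], dtype=complex)  # system starts in state |1>
for t in np.linspace(0.0, 5.0, 11):
    c = expm(-1j * H * t / hbar) @ c0
    P1_exact = abs(c[0]) ** 2
    P1_rabi = (np.cos(Omega * t) ** 2
               + np.sin(Omega * t) ** 2 * Delta ** 2 / Omega ** 2)
    assert np.isclose(P1_exact, P1_rabi)  # Rabi formula matches exact evolution
```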
For time dependent Hamiltonians induced by light waves, see the articles on Rabi cycle and rotating wave approximation.
Some important two-state systems
Precession in a field
Consider the case of a spin-1/2 particle in a magnetic field [math]\displaystyle{ \mathbf{B} = B \mathbf{\hat n} }[/math]. The interaction Hamiltonian for this system is [math]\displaystyle{ H = -\boldsymbol{\mu} \cdot \mathbf{B} = -\mu\boldsymbol{\sigma} \cdot \mathbf{B}, }[/math] where [math]\displaystyle{ \mu }[/math] is the magnitude of the particle's magnetic moment and [math]\displaystyle{ \boldsymbol{\sigma} }[/math] is the vector of Pauli matrices. Solving the time dependent Schrödinger equation [math]\displaystyle{ H\psi = i\hbar \partial_t \psi }[/math] yields [math]\displaystyle{ \psi(t) = e^{i\omega t \boldsymbol{\sigma} \cdot \mathbf{\hat{n}}} \psi(0), }[/math] where [math]\displaystyle{ \omega = \mu B/\hbar }[/math] and [math]\displaystyle{ e^{i\omega t \boldsymbol{\sigma} \cdot \mathbf{\hat{n}}} = \cos{\left(\omega t\right)} I + i\; \mathbf{\hat{n}} \cdot \boldsymbol{\sigma} \sin{\left(\omega t\right)} }[/math]. Physically, this corresponds to the Bloch vector precessing around [math]\displaystyle{ \mathbf{\hat{n}} }[/math] with angular frequency [math]\displaystyle{ 2\omega }[/math]. Without loss of generality, assume the field is uniform and points in [math]\displaystyle{ \mathbf{\hat{z}} }[/math], so that the time evolution operator is given as [math]\displaystyle{ e^{i\omega t \boldsymbol{\sigma} \cdot \mathbf{\hat{n}}} = \begin{pmatrix} e^{i\omega t} & 0 \\ 0 & e^{-i\omega t} \end{pmatrix}. }[/math]
It can be seen that such a time evolution operator acting on a general spin state of a spin-1/2 particle will lead to precession about the axis defined by the applied magnetic field; this is the quantum mechanical equivalent of Larmor precession.[2]
The above method can be applied to the analysis of any generic two-state system that is interacting with some field (equivalent to the magnetic field in the previous case) if the interaction is given by an appropriate coupling term that is analogous to the magnetic moment. The precession of the state vector (which need not be a physical spinning as in the previous case) can be viewed as the precession of the state vector on the Bloch sphere.
The representation on the Bloch sphere for a state vector [math]\displaystyle{ \psi(0) }[/math] will simply be the vector of expectation values [math]\displaystyle{ \mathbf{R} = \left(\langle \sigma_x \rangle, \langle \sigma_y \rangle, \langle \sigma_z \rangle \right) }[/math]. As an example, consider a state vector [math]\displaystyle{ \psi(0) }[/math] that is a normalized superposition of [math]\displaystyle{ \left|\uparrow\right\rangle }[/math] and [math]\displaystyle{ \left|\downarrow\right\rangle }[/math], that is, a vector that can be represented in the [math]\displaystyle{ \sigma_z }[/math] basis as [math]\displaystyle{ \psi(0) = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix} }[/math]
The components of [math]\displaystyle{ \psi(t) }[/math] on the Bloch sphere will simply be [math]\displaystyle{ \mathbf{R} = \left(\cos{2\omega t}, -\sin{2\omega t}, 0\right) }[/math]. This is a unit vector that begins pointing along [math]\displaystyle{ \mathbf{\hat{x}} }[/math] and precesses around [math]\displaystyle{ \mathbf{\hat{z}} }[/math] in a left-handed manner. In general, by a rotation around [math]\displaystyle{ \mathbf{\hat{z}} }[/math], any state vector [math]\displaystyle{ \psi(0) }[/math] can be represented as [math]\displaystyle{ a\left|\uparrow\right\rangle + b\left|\downarrow\right\rangle }[/math] with real coefficients [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math]. Such a state vector corresponds to a Bloch vector in the xz-plane making an angle [math]\displaystyle{ \tan(\theta/2) = b/a }[/math] with the z-axis. This vector will proceed to precess around [math]\displaystyle{ \mathbf{\hat{z}} }[/math]. In theory, by allowing the system to interact with the field of a particular direction and strength for precise durations, it is possible to obtain any orientation of the Bloch vector, which is equivalent to obtaining any complex superposition. This is the basis for numerous technologies including quantum computing and MRI.
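The precession can be reproduced numerically. The sketch below (hypothetical ω, units with ħ = 1) evolves the superposition state under the diagonal evolution operator and checks the Bloch vector against (cos 2ωt, −sin 2ωt, 0):

```python
import numpy as np

omega = 0.9  # hypothetical precession parameter, mu*B/hbar
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def bloch(psi):
    """Bloch vector: expectation values of the three Pauli matrices."""
    return np.real([psi.conj() @ s @ psi for s in sigma])

# (|up> + |down>)/sqrt(2): Bloch vector initially along +x
psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

for t in np.linspace(0.0, 4.0, 9):
    # Time evolution operator for a uniform field along z
    U = np.diag([np.exp(1j * omega * t), np.exp(-1j * omega * t)])
    R = bloch(U @ psi0)
    # Left-handed precession around z at angular frequency 2*omega
    assert np.allclose(R, [np.cos(2 * omega * t),
                           -np.sin(2 * omega * t), 0.0])
```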
Evolution in a time-dependent field: Nuclear magnetic resonance
Nuclear magnetic resonance (NMR) is an important example in the dynamics of two-state systems because it involves the exact solution to a time dependent Hamiltonian. The NMR phenomenon is achieved by placing a nucleus in a strong, static field B0 (the "holding field") and then applying a weak, transverse field B1 that oscillates at some radiofrequency ωr.[3] Explicitly, consider a spin-1/2 particle in a holding field [math]\displaystyle{ B_0 \mathbf{\hat z} }[/math] and a transverse rf field B1 rotating in the xy-plane in a right-handed fashion around B0: [math]\displaystyle{ \mathbf{B} = \begin{pmatrix} B_1 \cos\omega_\mathrm{r} t \\ B_1 \sin\omega_\mathrm{r} t\\ B_0 \end{pmatrix}. }[/math]
As in the free precession case, the Hamiltonian is [math]\displaystyle{ H = -\mu \boldsymbol{\sigma} \cdot \mathbf{B} }[/math], and the evolution of a state vector [math]\displaystyle{ \psi(t) }[/math] is found by solving the time-dependent Schrödinger equation [math]\displaystyle{ H\psi = i\hbar\,\partial \psi/\partial t }[/math]. After some manipulation (given in the collapsed section below), it can be shown that the Schrödinger equation becomes [math]\displaystyle{ \frac{\partial \psi}{\partial t} = i\left(\omega_1 \sigma_x + \left(\omega_0+\frac{\omega_r}{2}\right)\sigma_z\right) \psi, }[/math] where [math]\displaystyle{ \omega_0 = \mu B_0/\hbar }[/math] and [math]\displaystyle{ \omega_1 = \mu B_1/\hbar }[/math].
As per the previous section, the solution to this equation has the Bloch vector precessing around [math]\displaystyle{ (\omega_1,0,\omega_0 + \omega_r/2) }[/math] with a frequency that is twice the magnitude of the vector. If the holding field (and hence [math]\displaystyle{ \omega_0 }[/math]) is sufficiently strong, some proportion of the spins will be pointing directly down prior to the introduction of the rotating field. If the angular frequency of the rotating magnetic field is chosen such that [math]\displaystyle{ \omega_r = -2 \omega_0 }[/math], in the rotating frame the state vector will precess around [math]\displaystyle{ \hat{x} }[/math] with frequency [math]\displaystyle{ 2 \omega_1 }[/math], and will thus flip from down to up, releasing energy in the form of detectable photons.[citation needed] This is the fundamental basis for NMR, and in practice is accomplished by scanning [math]\displaystyle{ \omega_r }[/math] until the resonant frequency is found, at which point the sample will emit light. Similar calculations are done in atomic physics, and in the case that the field is not rotating, but oscillating with a complex amplitude, use is made of the rotating wave approximation in deriving such results.
Here the Schrödinger equation reads [math]\displaystyle{ -\mu \boldsymbol{\sigma} \cdot \mathbf{B} \psi = i \hbar \frac{\partial \psi}{\partial t}. }[/math]
Expanding the dot product and dividing by [math]\displaystyle{ i\hbar }[/math] yields [math]\displaystyle{ \frac{\partial\psi}{\partial t} = i\left(\omega_1\sigma_x \cos{\omega_r t} + \omega_1\sigma_y \sin{\omega_r t} + \omega_0 \sigma_z\right)\psi. }[/math]
To remove the time dependence from the problem, the wave function is transformed according to [math]\displaystyle{ \psi \rightarrow e^{-i \sigma_z \omega_r t/2}\psi }[/math]. The time dependent Schrödinger equation becomes [math]\displaystyle{ -i \sigma_z \frac{\omega_r}{2} e^{-i \sigma_z \omega_r t/2}\psi + e^{-i \sigma_z \omega_r t/2}\frac{\partial \psi}{\partial t} = i\left(\omega_1\sigma_x \cos{\omega_r t} + \omega_1\sigma_y \sin{\omega_r t} + \omega_0 \sigma_z\right) e^{-i \sigma_z \omega_r t/2}\psi, }[/math] which after some rearrangement yields [math]\displaystyle{ \frac{\partial \psi}{\partial t} = ie^{i \sigma_z \omega_r t/2} \left(\omega_1\sigma_x \cos{\omega_r t} + \omega_1\sigma_y \sin{\omega_r t} + \left(\omega_0+\frac{\omega_r}{2}\right) \sigma_z\right) e^{-i \sigma_z \omega_r t/2} \psi }[/math]
Evaluating each term on the right hand side of the equation [math]\displaystyle{ e^{i \sigma_z \omega_r t/2}\sigma_x e^{-i \sigma_z \omega_r t/2} = \begin{pmatrix} e^{i\omega_r t/2} & 0 \\ 0 & e^{-i\omega_r t/2} \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} e^{-i\omega_r t/2} & 0 \\ 0 & e^{i\omega_r t/2} \end{pmatrix}= \begin{pmatrix} 0 & e^{i\omega_r t} \\ e^{-i\omega_r t} & 0 \end{pmatrix} }[/math] [math]\displaystyle{ e^{i \sigma_z \omega_r t/2}\sigma_y e^{-i \sigma_z \omega_r t/2} = \begin{pmatrix} e^{i\omega_r t/2} & 0 \\ 0 & e^{-i\omega_r t/2} \end{pmatrix} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \begin{pmatrix} e^{-i\omega_r t/2} & 0 \\ 0 & e^{i\omega_r t/2} \end{pmatrix}= \begin{pmatrix} 0 & -i e^{i\omega_r t} \\ i e^{-i\omega_r t} & 0 \end{pmatrix} }[/math] [math]\displaystyle{ e^{i \sigma_z \omega_r t/2}\sigma_z e^{-i \sigma_z \omega_r t/2} = \begin{pmatrix} e^{i\omega_r t/2} & 0 \\ 0 & e^{-i\omega_r t/2} \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} e^{-i\omega_r t/2} & 0 \\ 0 & e^{i\omega_r t/2} \end{pmatrix}=\sigma_z }[/math]
The equation now reads [math]\displaystyle{ \frac{\partial \psi}{\partial t} = i\left(\omega_1 \begin{pmatrix} 0 & e^{i\omega_r t}\left(\cos{\omega_r t} - i\sin{\omega_r t}\right) \\ e^{-i\omega_r t}\left(\cos{\omega_r t} + i\sin{\omega_r t}\right) & 0 \end{pmatrix}+\left(\omega_0+\frac{\omega_r}{2}\right)\sigma_z\right) \psi, }[/math] which by Euler's identity becomes [math]\displaystyle{ \frac{\partial \psi}{\partial t} = i\left(\omega_1 \sigma_x + \left(\omega_0+\frac{\omega_r}{2}\right)\sigma_z\right) \psi }[/math]
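The three conjugation identities used in this derivation can be verified directly. The sketch below (hypothetical ω_r and t, SciPy's `expm` for the rotating-frame transformation) checks each one:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

wr, t = 0.7, 1.1                    # hypothetical omega_r and time
Rz = expm(1j * sz * wr * t / 2)     # rotating-frame transformation
Rz_inv = expm(-1j * sz * wr * t / 2)

# Conjugating sigma_x and sigma_y picks up phases e^{+-i omega_r t};
# sigma_z commutes with the transformation and is unchanged.
assert np.allclose(Rz @ sx @ Rz_inv,
                   [[0, np.exp(1j * wr * t)],
                    [np.exp(-1j * wr * t), 0]])
assert np.allclose(Rz @ sy @ Rz_inv,
                   [[0, -1j * np.exp(1j * wr * t)],
                    [1j * np.exp(-1j * wr * t), 0]])
assert np.allclose(Rz @ sz @ Rz_inv, sz)
```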
Relation to Bloch equations
The optical Bloch equations for a collection of spin-1/2 particles can be derived from the time dependent Schrödinger equation for a two-level system. Starting with the Schrödinger equation for the previously stated Hamiltonian, [math]\displaystyle{ i \hbar\partial_t\psi = -\mu \boldsymbol{\sigma} \cdot \mathbf{B}\psi }[/math], it can be written in summation notation after some rearrangement as [math]\displaystyle{ \frac{\partial\psi}{\partial t} = i\frac{\mu}{\hbar}\sigma_i B_i \psi }[/math]
Multiplying by a Pauli matrix [math]\displaystyle{ \sigma_i }[/math] and the conjugate transpose of the wavefunction, and subsequently expanding the product of two Pauli matrices yields [math]\displaystyle{ \psi^\dagger \sigma_j \frac{\partial\psi}{\partial t} = i \frac{\mu}{\hbar} \psi^\dagger \sigma_j \sigma_i B_i \psi = i\frac{\mu}{\hbar}\psi^\dagger \left(I\delta_{ij} - i \sigma_k \varepsilon_{ijk}\right) B_i \psi = \frac{\mu}{\hbar} \psi^\dagger \left(iI\delta_{ij} + \sigma_k \varepsilon_{ijk}\right) B_i \psi }[/math]
Adding this equation to its own conjugate transpose yields a left hand side of the form [math]\displaystyle{ \psi^\dagger \sigma_j \frac{\partial\psi}{\partial t} + \frac{\partial\psi^\dagger}{\partial t} \sigma_j \psi = \frac{\partial \left( \psi^\dagger \sigma_j \psi\right)}{\partial t} }[/math]
And a right hand side of the form [math]\displaystyle{ \frac{\mu}{\hbar}\psi^\dagger \left(iI\delta_{ij} + \sigma_k \varepsilon_{ijk}\right) B_i \psi + \frac{\mu}{\hbar} \psi^\dagger \left(-iI\delta_{ij} + \sigma_k \varepsilon_{ijk}\right) B_i\psi = \frac{2\mu}{\hbar} \left( \psi^\dagger \sigma_k \psi\right) B_i \varepsilon_{ijk} }[/math]
As previously mentioned, the expectation value of each Pauli matrix is a component of the Bloch vector, [math]\displaystyle{ \langle \sigma_i \rangle = \psi^\dagger\sigma_i\psi = R_i }[/math]. Equating the left and right hand sides, and noting that [math]\displaystyle{ \frac{2\mu}{\hbar} }[/math] is the gyromagnetic ratio [math]\displaystyle{ \gamma }[/math], yields another form for the equations of motion of the Bloch vector [math]\displaystyle{ \frac{\partial R_j}{\partial t} = \gamma R_k B_i \varepsilon_{kij} }[/math] where the fact that [math]\displaystyle{ \varepsilon_{ijk} = \varepsilon_{kij} }[/math] has been used. In vector form these three equations can be expressed in terms of a cross product [math]\displaystyle{ \frac{\partial \mathbf{R}}{\partial t} = \gamma \mathbf{R} \times \mathbf{B} }[/math] Classically, this equation describes the dynamics of a spin in a magnetic field. An ideal magnet consists of a collection of identical spins behaving independently, and thus the total magnetization [math]\displaystyle{ \mathbf{M} }[/math] is proportional to the Bloch vector [math]\displaystyle{ \mathbf{R} }[/math]. All that is left to obtain the final form of the optical Bloch equations is the inclusion of the phenomenological relaxation terms.
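The cross-product form of the equation of motion can be checked numerically by comparing a finite-difference time derivative of the Bloch vector under Schrödinger evolution with γ R × B. The sketch below uses hypothetical field and state values, in units with ħ = μ = 1:

```python
import numpy as np
from scipy.linalg import expm

hbar, mu = 1.0, 1.0          # assumption: units with hbar = mu = 1
gyro = 2 * mu / hbar         # gyromagnetic ratio gamma = 2*mu/hbar

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

B = np.array([0.2, -0.4, 0.7])               # hypothetical static field
H = -mu * sum(Bi * si for Bi, si in zip(B, sigma))

psi = np.array([0.8, 0.6j])                  # hypothetical normalized state

def bloch(p):
    """Bloch vector: expectation values of the Pauli matrices."""
    return np.real([p.conj() @ s @ p for s in sigma])

# Finite-difference derivative of R under Schrodinger evolution
dt = 1e-6
R0 = bloch(psi)
R1 = bloch(expm(-1j * H * dt / hbar) @ psi)
dRdt = (R1 - R0) / dt

# Matches the classical precession equation dR/dt = gamma * R x B
assert np.allclose(dRdt, gyro * np.cross(R0, B), atol=1e-4)
```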
As a final aside, the above equation can be derived by considering the time evolution of the angular momentum operator in the Heisenberg picture. [math]\displaystyle{ i\hbar\frac{d\sigma_j}{dt} = \left[\sigma_j,H\right] = \left[\sigma_j, -\mu \sigma_i B_i\right] = -\mu \left(\sigma_j\sigma_i B_i - \sigma_i\sigma_j B_i\right) = \mu [\sigma_i,\sigma_j] B_i = 2\mu i \varepsilon_{ijk}\sigma_k B_i }[/math]
When coupled with the fact that [math]\displaystyle{ \mathbf{R}_i = \langle \sigma_i \rangle }[/math], this equation is the same equation as before.
Validity
Two-state systems are the simplest non-trivial quantum systems that occur in nature, but the above-mentioned methods of analysis are not just valid for simple two-state systems. Any general multi-state quantum system can be treated as a two-state system as long as the observable one is interested in has two eigenvalues. For example, a spin-1/2 particle may in reality have additional translational or even rotational degrees of freedom, but those degrees of freedom are irrelevant to the preceding analysis. Mathematically, the neglected degrees of freedom correspond to the degeneracy of the spin eigenvalues.
Another case where the effective two-state formalism is valid is when the system under consideration has two levels that are effectively decoupled from the system. This is the case in the analysis of the spontaneous or stimulated emission of light by atoms and that of charge qubits. In this case it should be kept in mind that the perturbations (interactions with an external field) are in the right range and do not cause transitions to states other than the ones of interest.
Significance and other examples
Pedagogically, the two-state formalism is among the simplest of mathematical techniques used for the analysis of quantum systems. It can be used to illustrate fundamental quantum mechanical phenomena such as the interference exhibited by the polarization states of the photon,[4] but also more complex phenomena such as neutrino oscillation or the neutral K-meson oscillation.
Two-state formalism can be used to describe simple mixing of states, which leads to phenomena such as resonance stabilization and other level crossing related symmetries. Such phenomena have a wide variety of application in chemistry. Phenomena with tremendous industrial applications such as the maser and laser can be explained using the two-state formalism.
The two-state formalism also forms the basis of quantum computing. Qubits, which are the building blocks of a quantum computer, are nothing but two-state systems. Any quantum computational operation is a unitary operation that rotates the state vector on the Bloch sphere.
Further reading
- A treatment of the two-state formalism, presented in the third volume of The Feynman Lectures on Physics.
- Lecture notes:
- from the Quantum mechanics II course offered at MIT, http://web.mit.edu/8.05/handouts/Twostates_03.pdf
- from the same course dealing with neutral particle oscillation, http://web.mit.edu/8.05/handouts/nukaon_07.pdf
- from the Quantum mechanics I course offered at TIFR, http://theory.tifr.res.in/~sgupta/courses/qm2013/hand4.pdf covers the essential mathematics
- from the same course, dealing with some physical two-state systems and other important aspects of the formalism, http://theory.tifr.res.in/~sgupta/courses/qm2013/hand5.pdf
- the mathematics in the initial section is done in a manner similar to these notes, http://www.math.columbia.edu/~woit/QM/qubit.pdf, from the Quantum Mechanics for Mathematicians course offered at Columbia University
- a book version of the same notes: http://www.math.columbia.edu/~woit/QM/qmbook.pdf
- Two-state systems and the two-sphere, R J Plymen, Il Nuovo Cimento B 13 (1973) 55-58
References
Original source: https://en.wikipedia.org/wiki/Two-state quantum system.