Quantum capacity
In the theory of quantum communication, the quantum capacity is the highest rate at which quantum information can be communicated over many independent uses of a noisy quantum channel from a sender to a receiver. It is also equal to the highest rate at which entanglement can be generated over the channel, and forward classical communication cannot improve it. The quantum capacity theorem is important for the theory of quantum error correction, and more broadly for the theory of quantum computation. The theorem giving a lower bound on the quantum capacity of any channel is colloquially known as the LSD theorem, after the authors Lloyd,[1] Shor,[2] and Devetak[3] who proved it with increasing standards of rigor.[4]
Hashing bound for Pauli channels
The LSD theorem states that the coherent information of a quantum channel is an achievable rate for reliable quantum communication. For a Pauli channel, the coherent information takes a simple form, and the proof that it is achievable is particularly simple as well. The theorem is proved for this special case by exploiting random stabilizer codes and correcting only the likely errors that the channel produces.
Theorem (hashing bound). There exists a stabilizer quantum error-correcting code that achieves the hashing limit [math]\displaystyle{ R=1-H\left(\mathbf{p}\right) }[/math] for a Pauli channel of the following form:[math]\displaystyle{ \rho \mapsto p_{I}\rho+p_{X}X\rho X+p_{Y}Y\rho Y+p_{Z}Z\rho Z, }[/math]where [math]\displaystyle{ \mathbf{p}=\left(p_{I},p_{X},p_{Y},p_{Z}\right) }[/math] and [math]\displaystyle{ H\left(\mathbf{p}\right) }[/math] is the entropy of this probability vector.
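As a concrete illustration of the rate formula, the following minimal sketch computes the hashing bound [math]\displaystyle{ R=1-H\left(\mathbf{p}\right) }[/math] for a hypothetical Pauli channel; the probability vector used here is an arbitrary example and is not taken from the references.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Hypothetical Pauli channel (p_I, p_X, p_Y, p_Z); any probability vector works.
p = (0.90, 0.04, 0.03, 0.03)

rate = 1 - entropy(p)
print(f"H(p) = {entropy(p):.4f} bits per channel use")
print(f"hashing bound R = 1 - H(p) = {rate:.4f} qubits per channel use")
```

For a heavily biased vector like the one above the bound is positive; as the error probabilities grow and [math]\displaystyle{ H\left(\mathbf{p}\right) }[/math] approaches one bit per channel use, the guaranteed rate drops to zero.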
Proof. Consider correcting only the typical errors. That is, consider defining the typical set of errors as follows:[math]\displaystyle{ T_{\delta}^{\mathbf{p}^{n}}\equiv\left\{ a^{n}:\left\vert -\frac{1}{n} \log_{2}\left( \Pr\left\{ E_{a^{n}}\right\} \right) -H\left( \mathbf{p}\right) \right\vert \leq\delta\right\} , }[/math]where [math]\displaystyle{ a^{n} }[/math] is some sequence consisting of the letters [math]\displaystyle{ \left\{I,X,Y,Z\right\} }[/math] and [math]\displaystyle{ \Pr\left\{E_{a^{n}}\right\} }[/math] is the probability that an IID Pauli channel issues some tensor-product error [math]\displaystyle{ E_{a^{n}}\equiv E_{a_{1}}\otimes\cdots\otimes E_{a_{n}} }[/math]. This typical set consists of the likely errors in the sense that[math]\displaystyle{ \sum_{a^{n}\in T_{\delta}^{\mathbf{p}^{n}}}\Pr\left\{E_{a^{n}}\right\} \geq 1-\epsilon, }[/math]for all [math]\displaystyle{ \epsilon\gt 0 }[/math] and sufficiently large [math]\displaystyle{ n }[/math]. The error-correcting conditions[5] for a stabilizer code [math]\displaystyle{ \mathcal{S} }[/math] in this case are that [math]\displaystyle{ \{E_{a^{n}}:a^{n}\in T_{\delta}^{\mathbf{p}^{n}}\} }[/math] is a correctable set of errors if
[math]\displaystyle{ E_{a^{n}}^{\dagger}E_{b^{n}}\notin N\left(\mathcal{S}\right) \backslash \mathcal{S}, }[/math]for all error pairs [math]\displaystyle{ E_{a^{n}} }[/math] and [math]\displaystyle{ E_{b^{n}} }[/math] such that [math]\displaystyle{ a^{n},b^{n}\in T_{\delta}^{\mathbf{p}^{n}} }[/math] where [math]\displaystyle{ N(\mathcal{S}) }[/math] is the normalizer of [math]\displaystyle{ \mathcal{S} }[/math]. Also, we consider the expectation of the error probability under a random choice of a stabilizer code.
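To see the typicality claim at work, the following Monte Carlo sketch samples i.i.d. Pauli error strings and estimates the probability mass of the [math]\displaystyle{ \delta }[/math]-typical set; the channel, block lengths, [math]\displaystyle{ \delta }[/math], and trial count are hypothetical choices made only for illustration.

```python
import math
import random

# Hypothetical channel and parameters, chosen only for illustration.
p = {'I': 0.90, 'X': 0.04, 'Y': 0.03, 'Z': 0.03}
H = -sum(q * math.log2(q) for q in p.values())
delta, trials = 0.1, 2000
letters, weights = zip(*p.items())

for n in (100, 500, 2000):
    typical = 0
    for _ in range(trials):
        a = random.choices(letters, weights=weights, k=n)
        # -(1/n) log2 Pr{E_{a^n}} for the sampled tensor-product error
        sample_entropy = -sum(math.log2(p[x]) for x in a) / n
        if abs(sample_entropy - H) <= delta:
            typical += 1
    print(f"n = {n:4d}: estimated probability of the delta-typical set "
          f"= {typical / trials:.3f}")
```

The estimated mass tends to one as [math]\displaystyle{ n }[/math] grows, which is the content of the bound [math]\displaystyle{ \sum_{a^{n}\in T_{\delta}^{\mathbf{p}^{n}}}\Pr\left\{E_{a^{n}}\right\} \geq 1-\epsilon }[/math].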
Proceed as follows:[math]\displaystyle{ \begin{align} \mathbb{E}_{\mathcal{S}}\left\{p_{e}\right\} &= \mathbb{E}_{\mathcal{S}} \left\{ \sum_{a^{n}} \Pr \left\{ E_{a^{n}}\right\} \mathcal{I}\left(E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\right) \right\} \\ &\leq \mathbb{E}_{\mathcal{S}} \left\{ \sum_{a^{n} \in T_{\delta}^{\mathbf{p}^{n}}} \Pr\left\{E_{a^{n}}\right\} \mathcal{I}\left(E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\right) \right\} + \epsilon \\ &= \sum_{a^{n} \in T_{\delta}^{\mathbf{p}^{n}}} \Pr\left\{E_{a^{n}}\right\} \mathbb{E}_{\mathcal{S}} \left\{ \mathcal{I}\left(E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\right) \right\} + \epsilon \\ &= \sum_{a^{n} \in T_{\delta}^{\mathbf{p}^{n}}} \Pr\left\{E_{a^{n}}\right\} \Pr_{\mathcal{S}} \left\{E_{a^{n}}\text{ is uncorrectable under }\mathcal{S}\right\} + \epsilon. \end{align} }[/math]The first equality follows by definition: [math]\displaystyle{ \mathcal{I} }[/math] is an indicator function equal to one if [math]\displaystyle{ E_{a^{n}} }[/math] is uncorrectable under [math]\displaystyle{ \mathcal{S} }[/math] and equal to zero otherwise. The first inequality follows because we correct only the typical errors; the atypical errors contribute at most [math]\displaystyle{ \epsilon }[/math] to the total probability. The second equality follows by exchanging the expectation and the sum. The third equality follows because the expectation of an indicator function is the probability that the corresponding event occurs.
Continuing, we have:[math]\displaystyle{ \begin{align} &=\sum_{a^{n}\in T_{\delta}^{\mathbf{p}^{n}}}\Pr\left\{ E_{a^{n}}\right\} \Pr_{\mathcal{S}}\left\{ \exists E_{b^{n}}:b^{n}\in T_{\delta}^{\mathbf{p}^{n}},\ b^{n}\neq a^{n},\ E_{a^{n}}^{\dagger}E_{b^{n}}\in N\left( \mathcal{S}\right) \backslash\mathcal{S}\right\} \\ &\leq\sum_{a^{n}\in T_{\delta}^{\mathbf{p}^{n}}}\Pr\left\{ E_{a^{n}}\right\} \Pr_{\mathcal{S}}\left\{ \exists E_{b^{n}}:b^{n}\in T_{\delta}^{\mathbf{p}^{n}},\ b^{n}\neq a^{n},\ E_{a^{n}}^{\dagger}E_{b^{n}}\in N\left( \mathcal{S}\right) \right\} \\ &=\sum_{a^{n}\in T_{\delta}^{\mathbf{p}^{n}}}\Pr\left\{ E_{a^{n}}\right\} \Pr_{\mathcal{S}}\left\{ \bigcup\limits_{b^{n}\in T_{\delta}^{\mathbf{p}^{n}},\ b^{n}\neq a^{n}}E_{a^{n}}^{\dagger}E_{b^{n}}\in N\left( \mathcal{S}\right) \right\} \\ &\leq\sum_{a^{n},b^{n}\in T_{\delta}^{\mathbf{p}^{n}},\ b^{n}\neq a^{n}}\Pr\left\{ E_{a^{n}}\right\} \Pr_{\mathcal{S}}\left\{ E_{a^{n}}^{\dagger}E_{b^{n}}\in N\left( \mathcal{S}\right) \right\} \\ &\leq\sum_{a^{n},b^{n}\in T_{\delta}^{\mathbf{p}^{n}},\ b^{n}\neq a^{n}}\Pr\left\{ E_{a^{n}}\right\} 2^{-\left( n-k\right) } \\ &\leq2^{2n\left[ H\left( \mathbf{p}\right) +\delta\right] }2^{-n\left[ H\left( \mathbf{p}\right) -\delta\right] }2^{-\left( n-k\right) } \\ &=2^{-n\left[ 1-H\left( \mathbf{p}\right) -k/n-3\delta\right] }. \end{align} }[/math]
The first equality follows from the error-correcting conditions for a quantum stabilizer code, where [math]\displaystyle{ N\left( \mathcal{S}\right) }[/math] is the normalizer of [math]\displaystyle{ \mathcal{S} }[/math]. The first inequality follows by ignoring any potential degeneracy in the code: we count an error as uncorrectable whenever [math]\displaystyle{ E_{a^{n}}^{\dagger}E_{b^{n}} }[/math] lies anywhere in the normalizer [math]\displaystyle{ N\left( \mathcal{S}\right) }[/math], and the probability can only become larger because [math]\displaystyle{ N\left( \mathcal{S}\right) \backslash\mathcal{S}\subseteq N\left( \mathcal{S}\right) }[/math]. The second equality follows because the existence of such an operator [math]\displaystyle{ E_{b^{n}} }[/math] is the same event as the union of the individual events over [math]\displaystyle{ b^{n} }[/math]. The second inequality follows by applying the union bound. The third inequality follows from the fact that the probability that a fixed operator [math]\displaystyle{ E_{a^{n}}^{\dagger}E_{b^{n}} }[/math], not equal to the identity, commutes with the stabilizer of a randomly chosen code can be upper bounded as follows: [math]\displaystyle{ \Pr_{\mathcal{S}}\left\{ E_{a^{n}}^{\dagger}E_{b^{n}}\in N\left( \mathcal{S} \right) \right\} =\frac{2^{n+k}-1}{2^{2n}-1}\leq2^{-\left( n-k\right) }. }[/math] The reasoning here is that the random choice of a stabilizer code is equivalent to fixing operators [math]\displaystyle{ Z_{1} }[/math], ..., [math]\displaystyle{ Z_{n-k} }[/math] and applying a uniformly random Clifford unitary. The probability that a fixed non-identity operator commutes with [math]\displaystyle{ \overline{Z}_{1} }[/math], ..., [math]\displaystyle{ \overline{Z}_{n-k} }[/math] is then just the number of non-identity operators in the normalizer ([math]\displaystyle{ 2^{n+k}-1 }[/math]) divided by the total number of non-identity operators ([math]\displaystyle{ 2^{2n}-1 }[/math]).

After applying the above bound, we then exploit the following typicality bounds: [math]\displaystyle{ \forall a^{n} \in T_{\delta}^{\mathbf{p}^{n}}:\Pr\left\{ E_{a^{n} }\right\} \leq2^{-n\left[ H\left( \mathbf{p}\right) -\delta\right] }, }[/math][math]\displaystyle{ \left\vert T_{\delta}^{\mathbf{p}^{n}}\right\vert \leq2^{n\left[ H\left( \mathbf{p}\right) +\delta\right] }. }[/math] We conclude that as long as the rate [math]\displaystyle{ k/n=1-H\left( \mathbf{p}\right) -4\delta }[/math], the expectation of the error probability becomes arbitrarily small with increasing [math]\displaystyle{ n }[/math], so that there exists at least one choice of a stabilizer code with the same bound on the error probability.
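The two purely arithmetic facts used above, the counting bound on the probability that a fixed non-identity operator lands in the normalizer of a random stabilizer code and the exponential decay of the final bound at the rate [math]\displaystyle{ k/n=1-H\left( \mathbf{p}\right) -4\delta }[/math], can be checked directly. The sketch below does so; the code parameters, channel, and [math]\displaystyle{ \delta }[/math] are hypothetical examples, not values from the source.

```python
import math
from fractions import Fraction

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

# Counting bound: (2^{n+k} - 1) / (2^{2n} - 1) <= 2^{-(n-k)} for sample (n, k).
for n, k in [(5, 1), (10, 4), (20, 10)]:
    exact = Fraction(2 ** (n + k) - 1, 2 ** (2 * n) - 1)
    bound = Fraction(1, 2 ** (n - k))
    print(f"n={n:2d}, k={k:2d}: exact={float(exact):.3e}, "
          f"2^-(n-k)={float(bound):.3e}, holds={exact <= bound}")

# Final bound at the rate chosen in the proof, k/n = 1 - H(p) - 4*delta,
# where the exponent 1 - H(p) - k/n - 3*delta reduces to delta.
p = (0.90, 0.04, 0.03, 0.03)   # hypothetical channel
H, delta = entropy(p), 0.01
rate = 1 - H - 4 * delta
for n in (100, 1000, 10000):
    err_bound = 2.0 ** (-n * (1 - H - rate - 3 * delta))
    print(f"n={n:6d}: expected error probability <= {err_bound:.3e} "
          f"(up to the epsilon term)")
```

With this choice of rate the bound decays as [math]\displaystyle{ 2^{-n\delta} }[/math], which is the sense in which the expected error probability becomes arbitrarily small.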
References
1. Lloyd, Seth (1997). "Capacity of the noisy quantum channel". Physical Review A 55 (3): 1613–1622. doi:10.1103/PhysRevA.55.1613. Bibcode: 1997PhRvA..55.1613L.
2. Shor, Peter (2002). "The quantum channel capacity and coherent information". Lecture Notes, MSRI Workshop on Quantum Computation. http://www.msri.org/publications/ln/msri/2002/quantumcrypto/shor/1/meta/aux/shor.pdf.
3. Devetak, Igor (2005). "The private classical capacity and quantum capacity of a quantum channel". IEEE Transactions on Information Theory 51: 44–55. doi:10.1109/TIT.2004.839515.
4. Wilde, Mark M. (2017). Quantum Information Theory (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN 978-1-316-80997-6. OCLC 972292559. https://www.worldcat.org/oclc/972292559.
5. Nielsen, Michael A.; Chuang, Isaac L. (2000). Quantum Computation and Quantum Information. Cambridge University Press. ISBN 978-0-521-63503-5.
Original source: https://en.wikipedia.org/wiki/Quantum capacity.