Classical capacity


In quantum information theory, the classical capacity of a quantum channel is the maximum rate at which classical data can be sent over it error-free in the limit of many uses of the channel.

Background

Mixed states and quantum channels

A mixed quantum state is a positive semidefinite operator of unit trace, known as a density operator, and is often denoted by $\rho$, $\sigma$, $\omega$, etc. The simplest model for a quantum channel is a classical-quantum (cq) channel

$$x \mapsto \rho_x,$$

which sends the classical letter $x$ at the transmitting end to a quantum state $\rho_x$ at the receiving end, with noise possibly introduced in between. The receiver's task is to perform a measurement to determine the input of the sender. If the states $\rho_x$ are perfectly distinguishable from one another (i.e., if they have orthogonal supports such that $\operatorname{Tr}\{\rho_x \rho_{x'}\} = 0$ for $x \neq x'$) and the channel is noiseless, then perfect decoding is trivially possible. If the states $\rho_x$ all commute with each other, then the channel is effectively classical. The situation becomes nontrivial only when the states $\rho_x$ have overlapping support and do not necessarily commute.
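As a numerical illustration (not part of the formalism above), the overlap criterion $\operatorname{Tr}\{\rho_x \rho_{x'}\}$ can be checked directly; the particular states below are chosen only for the example:

```python
import numpy as np

# Orthogonal pure states |0><0| and |1><1|: perfectly distinguishable.
rho_0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_1 = np.array([[0, 0], [0, 1]], dtype=complex)
print(np.trace(rho_0 @ rho_1).real)      # 0.0: orthogonal supports

# |0><0| versus |+><+|: overlapping supports, not perfectly distinguishable.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_plus = np.outer(plus, plus.conj())
print(np.trace(rho_0 @ rho_plus).real)   # ≈ 0.5: overlapping supports
```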

Quantum measurements

The most general way to describe a quantum measurement is with a positive operator-valued measure (POVM), whose elements are typically denoted $\{\Lambda_m\}_m$. These operators should satisfy positivity and completeness in order to form a valid POVM:

$$\Lambda_m \geq 0 \quad \forall m, \qquad \sum_m \Lambda_m = I.$$

The probabilistic interpretation of quantum mechanics states that if someone measures a quantum state $\rho$ using a measurement device corresponding to the POVM $\{\Lambda_m\}$, then the probability $p(m)$ of obtaining outcome $m$ is equal to

$$p(m) = \operatorname{Tr}\{\Lambda_m \rho\},$$

and the post-measurement state is

$$\rho_m = \frac{1}{p(m)}\, \Lambda_m^{1/2}\, \rho\, \Lambda_m^{1/2},$$

if the person measuring obtains outcome m.
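The Born-rule formulas above can be sketched numerically; the three-outcome qubit POVM below is an arbitrary choice for illustration:

```python
import numpy as np

def povm_outcome(rho, povm, m):
    """Return p(m) = Tr{Lam_m rho} and the post-measurement state
    rho_m = Lam_m^{1/2} rho Lam_m^{1/2} / p(m)."""
    lam = povm[m]
    w, v = np.linalg.eigh(lam)                       # lam is Hermitian PSD
    sqrt_lam = v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T
    p = np.trace(lam @ rho).real
    return p, sqrt_lam @ rho @ sqrt_lam / p

I2 = np.eye(2, dtype=complex)
P0 = np.diag([1, 0]).astype(complex)
P1 = I2 - P0
povm = [P0 / 2, P1 / 2, I2 / 2]                      # effects sum to I

rho = np.full((2, 2), 0.5, dtype=complex)            # the pure state |+><+|
probs = [np.trace(lam @ rho).real for lam in povm]
print(probs)                                         # probabilities sum to 1
p0, post = povm_outcome(rho, povm, 0)
print(p0)                                            # 0.25
print(post.real)                                     # collapses to |0><0|
```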

Classical communication over quantum channels

The above is sufficient to describe a classical communication scheme over a cq channel. The sender uses the cq channel to map a classical letter $x$ to a quantum state $\rho_x$, which is sent through a (possibly noisy) quantum channel and then measured using some POVM by the receiver, who obtains another classical letter.

Precise definition

The classical capacity can be defined as the maximum rate achievable by a coding scheme for classical information transmission, which can be defined as follows.[1]

Definition. (Coding scheme) An $(n, m, \delta)$-coding scheme for classical information transmission over a quantum channel $T : \mathcal{B}(\mathcal{H}_A) \to \mathcal{B}(\mathcal{H}_B)$ is given by a pair consisting of an encoding map $\mathcal{E} : \{0,1\}^m \to \mathcal{D}(\mathcal{H}_A^{\otimes n})$ and a decoding POVM $\mu : \{0,1\}^m \to \mathcal{B}(\mathcal{H}_B^{\otimes n})$ such that $\langle \mu(x), T^{\otimes n}(\mathcal{E}(x)) \rangle \geq 1 - \delta$ with respect to the Hilbert–Schmidt inner product for all $x \in \{0,1\}^m$.

Definition. (Achievable rate) A rate $R \geq 0$ is achievable for the channel $T$ if either $R = 0$, or $R > 0$ and for every $n$ there exists an $(n, m_n, \delta_n)$-coding scheme such that $R = \lim_{n \to \infty} m_n / n$ and $\lim_{n \to \infty} \delta_n = 0$ both hold.
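As a toy illustration of the definition (assuming a noiseless qubit channel, so $T$ is the identity; the helper names are illustrative only), a $(2, 2, 0)$-coding scheme with orthogonal codewords achieves rate $m/n = 1$ with zero error:

```python
import numpy as np
from functools import reduce
from itertools import product

# Hypothetical toy scheme: (n, m, delta) = (2, 2, 0) for the noiseless qubit
# channel (T = identity). Each bit is encoded as |0><0| or |1><1|, and the
# decoding POVM mu(x) is the projector onto the computational-basis string x.
proj = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
tensor = lambda ops: reduce(np.kron, ops)

def encode(bits):                 # E : {0,1}^m -> D(A^{⊗n})
    return tensor([proj[b] for b in bits])

def decode_povm(bits):            # mu : {0,1}^m -> B(B^{⊗n})
    return tensor([proj[b] for b in bits])

# Hilbert-Schmidt inner product <mu(x), T^{⊗n}(E(x))> for every message x
overlaps = [np.trace(decode_povm(x).T @ encode(x)).real
            for x in product(range(2), repeat=2)]
print(min(overlaps))              # 1.0: delta = 0, so rate m/n = 1 is achieved
```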

Holevo–Schumacher–Westmoreland theorem

The Holevo information (also called the Holevo χ quantity) of a quantum channel 𝒩 can be defined as

$$\chi(\mathcal{N}) = \max_{\rho^{XA}} I(X;B)_{\mathcal{N}(\rho)},$$

where ρXA is a classical-quantum state of the form

$$\rho^{XA} = \sum_x p_X(x)\, |x\rangle\langle x|^X \otimes \rho_x^A$$

for some probability distribution pX(x) and density operators ρxA which can be input to the given channel.

Schumacher and Westmoreland in 1997,[2] and Holevo independently in 1998,[3] proved that the classical capacity of a quantum channel can be equivalently defined as

$$C(T) = \lim_{k \to \infty} \frac{1}{k}\, \chi\!\left(T^{\otimes k}\right).$$
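For a fixed input ensemble (i.e., without the maximization over $\rho^{XA}$), the Holevo quantity reduces to $\chi = S\!\left(\sum_x p_X(x)\rho_x\right) - \sum_x p_X(x)\, S(\rho_x)$. A numerical sketch for an example two-state ensemble (the states and probabilities below are illustrative):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr{rho log2 rho}, via eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def holevo_chi(probs, states):
    """chi = S(sum_x p(x) rho_x) - sum_x p(x) S(rho_x) for a fixed ensemble;
    the Holevo information of a channel maximizes this over input ensembles."""
    avg = sum(p * r for p, r in zip(probs, states))
    return entropy(avg) - sum(p * entropy(r) for p, r in zip(probs, states))

# Example ensemble: |0> and |+> with equal probability (pure states, S = 0).
ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
states = [np.outer(ket0, ket0.conj()), np.outer(plus, plus.conj())]
print(holevo_chi([0.5, 0.5], states))   # ≈ 0.60 bits for this ensemble
```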

Gentle measurement lemma

The gentle measurement lemma states that a measurement succeeding with high probability does not disturb the state too much on average.

Lemma. (Winter) Given an ensemble $\{p_X(x), \rho_x\}$ with expected density operator $\rho \equiv \sum_x p_X(x)\, \rho_x$, suppose that an operator $\Lambda$ with $I \geq \Lambda \geq 0$ succeeds with probability at least $1 - \epsilon$ on the state $\rho$:

$$\operatorname{Tr}\{\Lambda \rho\} \geq 1 - \epsilon.$$

Then the subnormalized state $\sqrt{\Lambda}\, \rho_x \sqrt{\Lambda}$ is close in expected trace distance to the original state $\rho_x$:

$$\mathbb{E}_X\left[\left\|\sqrt{\Lambda}\, \rho_X \sqrt{\Lambda} - \rho_X\right\|_1\right] \leq 2\sqrt{\epsilon}.$$

The gentle measurement lemma has the following analog, which holds for any operators $\rho$, $\sigma$, $\Lambda$ such that $0 \leq \rho, \sigma, \Lambda \leq I$:

$$\operatorname{Tr}\{\Lambda \rho\} \leq \operatorname{Tr}\{\Lambda \sigma\} + \left\|\rho - \sigma\right\|_1. \qquad (1)$$

The quantum information-theoretic interpretation of this inequality is that the probability of obtaining outcome $\Lambda$ from a quantum measurement acting on the state $\rho$ is bounded by the probability of obtaining $\Lambda$ on the state $\sigma$ plus the distinguishability of the two states $\rho$ and $\sigma$, as measured by the trace distance.
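Inequality (1) can be spot-checked numerically on random operators; the Monte Carlo sampling scheme below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_state(d):
    """Random density operator via a Wishart-style construction."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def trace_norm(a):                       # ||a||_1 for Hermitian a
    return float(np.abs(np.linalg.eigvalsh(a)).sum())

d, violations = 4, 0
for _ in range(200):
    rho, sigma = rand_state(d), rand_state(d)
    h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    h = h @ h.conj().T
    lam = h / np.linalg.eigvalsh(h).max()    # random effect, 0 <= lam <= I
    lhs = np.trace(lam @ rho).real
    rhs = np.trace(lam @ sigma).real + trace_norm(rho - sigma)
    violations += lhs > rhs + 1e-9
print(violations)                        # 0: inequality (1) holds every time
```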

Non-commutative union bound

Lemma. (Sen's bound)[4] For a subnormalized state $\sigma$ such that $\sigma \geq 0$ and $\operatorname{Tr}\{\sigma\} \leq 1$, and for projectors $\Pi_1, \ldots, \Pi_N$, we have

$$\operatorname{Tr}\{\sigma\} - \operatorname{Tr}\{\Pi_N \cdots \Pi_1\, \sigma\, \Pi_1 \cdots \Pi_N\} \leq 2\sqrt{\sum_{i=1}^N \operatorname{Tr}\{(I - \Pi_i)\, \sigma\}}.$$

Intuitively, Sen's bound is a sort of "non-commutative union bound" because it is analogous to the union bound from classical probability theory: $\Pr\{(A_1 \cap \cdots \cap A_N)^c\} = \Pr\{A_1^c \cup \cdots \cup A_N^c\} \leq \sum_{i=1}^N \Pr\{A_i^c\}$, where $A_1, \ldots, A_N$ are events. The analogous quantum bound would be

$$\operatorname{Tr}\{(I - \Pi_1 \cdots \Pi_N \cdots \Pi_1)\, \rho\} \leq \sum_{i=1}^N \operatorname{Tr}\{(I - \Pi_i)\, \rho\}$$

if we think of $\Pi_1 \cdots \Pi_N$ as a projector onto the intersection of the corresponding subspaces. However, this bound only holds if the projectors $\Pi_1, \ldots, \Pi_N$ commute (choosing $\Pi_1 = |+\rangle\langle +|$, $\Pi_2 = |0\rangle\langle 0|$, and $\rho = |0\rangle\langle 0|$ gives a counterexample). If the projectors are non-commuting, then one must use a non-commutative or quantum union bound.
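The counterexample above is easy to verify numerically: with $\Pi_1 = |+\rangle\langle +|$, $\Pi_2 = |0\rangle\langle 0|$, and $\rho = |0\rangle\langle 0|$, the left-hand side of the commutative bound exceeds the right-hand side:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
P1 = np.outer(plus, plus.conj())         # Pi_1 = |+><+|
P2 = np.outer(ket0, ket0.conj())         # Pi_2 = |0><0|
rho = P2                                 # rho  = |0><0|
I2 = np.eye(2)

lhs = np.trace((I2 - P1 @ P2 @ P1) @ rho).real                        # 0.75
rhs = (np.trace((I2 - P1) @ rho) + np.trace((I2 - P2) @ rho)).real    # 0.5
print(lhs > rhs)                         # True: the commutative bound fails
```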

Proof

We now prove the HSW theorem with Sen's non-commutative union bound. We first describe how the code is chosen, then give the construction of Bob's POVM, and finally analyze the error of the protocol.

Encoding map

We first describe how Alice and Bob agree on a random choice of code. They have the channel $x \mapsto \rho_x$ and a distribution $p_X(x)$. They choose $M$ classical sequences $x^n$ according to the i.i.d. distribution $p_{X^n}(x^n)$. After selecting them, they label them with indices as $\{x^n(m)\}_{m \in [M]}$. This leads to the following quantum codewords:

$$\rho_{x^n(m)} = \rho_{x_1(m)} \otimes \cdots \otimes \rho_{x_n(m)}.$$

The quantum codebook is $\{\rho_{x^n(m)}\}_{m \in [M]}$. The average state of the codebook is

$$\mathbb{E}_{X^n}\left[\rho_{X^n}\right] = \sum_{x^n} p_{X^n}(x^n)\, \rho_{x^n} = \rho^{\otimes n}, \qquad (2)$$

where $\rho = \sum_x p_X(x)\, \rho_x$.
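Equation (2) can be verified directly for a small example (a two-letter cq channel and $n = 3$ channel uses, both chosen only for illustration):

```python
import numpy as np
from functools import reduce
from itertools import product

# Illustrative cq channel x -> rho_x on a two-letter alphabet, with n = 3.
ket0 = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_x = [np.outer(ket0, ket0.conj()), np.outer(plus, plus.conj())]
p = [0.5, 0.5]
n = 3
tensor = lambda ops: reduce(np.kron, ops)

# E[rho_{X^n}]: sum over all sequences x^n of p(x^n) rho_{x^n} ...
avg = sum(np.prod([p[x] for x in xs]) * tensor([rho_x[x] for x in xs])
          for xs in product(range(2), repeat=n))

# ... which equals rho^{⊗n} with rho = sum_x p(x) rho_x, as in (2).
rho = sum(px * r for px, r in zip(p, rho_x))
print(np.allclose(avg, tensor([rho] * n)))   # True
```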

Decoding POVM construction

Sen's bound from the above lemma suggests a method for Bob to decode a state that Alice transmits. Bob should first ask, "Is the received state in the average typical subspace?" He can do this operationally by performing a typical subspace measurement corresponding to $\{\Pi_{\rho,\delta}^n,\ I - \Pi_{\rho,\delta}^n\}$. Next, he asks in sequential order, "Is the received codeword in the $m$th conditionally typical subspace?" This is in some sense equivalent to the question, "Is the received codeword the $m$th transmitted codeword?" He can ask these questions operationally by performing the measurements corresponding to the conditionally typical projectors $\{\Pi_{\rho_{x^n(m)},\delta},\ I - \Pi_{\rho_{x^n(m)},\delta}\}$.

Why should this sequential decoding scheme work well? The reason is that the transmitted codeword lies in the typical subspace on average:

$$\mathbb{E}_{X^n}\left[\operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho_{X^n}\}\right] = \operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \mathbb{E}_{X^n}[\rho_{X^n}]\} = \operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho^{\otimes n}\} \geq 1 - \epsilon,$$

where the inequality follows from the unit-probability property of the typical subspace. Also, the projectors $\Pi_{\rho_{x^n(m)},\delta}$ are "good detectors" for the states $\rho_{x^n(m)}$ (on average) because the following condition holds from conditional quantum typicality:

$$\mathbb{E}_{X^n}\left[\operatorname{Tr}\{\Pi_{\rho_{X^n},\delta}\, \rho_{X^n}\}\right] \geq 1 - \epsilon.$$
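For a state $\rho$ diagonal in a known basis, the typical projector and the bound $\operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho^{\otimes n}\} \geq 1 - \epsilon$ reduce to classical typicality of bit strings; the following is a sketch with illustrative parameters:

```python
from math import comb, log2

# Typical-subspace sketch for the diagonal qubit state rho = diag(0.7, 0.3):
# eigenvectors of rho^{⊗n} are bit strings x^n with eigenvalue p(x^n), and the
# typical projector Pi keeps those with |-(1/n) log2 p(x^n) - H(rho)| <= delta.
p0, p1 = 0.7, 0.3
H = -p0 * log2(p0) - p1 * log2(p1)       # entropy H(rho) ≈ 0.881
n, delta = 20, 0.25

# Tr{Pi rho^{⊗n}} = total probability weight of the typical strings, grouped
# by the number k of ones (all strings with the same k share one eigenvalue).
typical_weight = sum(
    comb(n, k) * p0 ** (n - k) * p1 ** k
    for k in range(n + 1)
    if abs(-((n - k) * log2(p0) + k * log2(p1)) / n - H) <= delta
)
print(typical_weight)    # close to 1, i.e. Tr{Pi rho^{⊗n}} >= 1 - eps
```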

Error analysis

The probability of detecting the mth codeword correctly under our sequential decoding scheme is equal to

$$\operatorname{Tr}\left\{\Pi_{\rho_{X^n(m)},\delta}\, \hat\Pi_{\rho_{X^n(m-1)},\delta} \cdots \hat\Pi_{\rho_{X^n(1)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{x^n(m)}\, \Pi_{\rho,\delta}^n\, \hat\Pi_{\rho_{X^n(1)},\delta} \cdots \hat\Pi_{\rho_{X^n(m-1)},\delta}\, \Pi_{\rho_{X^n(m)},\delta}\right\},$$

where we make the abbreviation $\hat\Pi \equiv I - \Pi$. (Observe that we project onto the average typical subspace just once.) Thus, the probability of an incorrect detection for the $m$th codeword is given by

$$1 - \operatorname{Tr}\left\{\Pi_{\rho_{X^n(m)},\delta}\, \hat\Pi_{\rho_{X^n(m-1)},\delta} \cdots \hat\Pi_{\rho_{X^n(1)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{x^n(m)}\, \Pi_{\rho,\delta}^n\, \hat\Pi_{\rho_{X^n(1)},\delta} \cdots \hat\Pi_{\rho_{X^n(m-1)},\delta}\, \Pi_{\rho_{X^n(m)},\delta}\right\},$$

and the average error probability of this scheme is equal to

$$1 - \frac{1}{M}\sum_m \operatorname{Tr}\left\{\Pi_{\rho_{X^n(m)},\delta}\, \hat\Pi_{\rho_{X^n(m-1)},\delta} \cdots \hat\Pi_{\rho_{X^n(1)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{x^n(m)}\, \Pi_{\rho,\delta}^n\, \hat\Pi_{\rho_{X^n(1)},\delta} \cdots \hat\Pi_{\rho_{X^n(m-1)},\delta}\, \Pi_{\rho_{X^n(m)},\delta}\right\}.$$

Instead of analyzing the average error probability, we analyze the expectation of the average error probability, where the expectation is with respect to the random choice of code:

$$1 - \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\left\{\Pi_{\rho_{X^n(m)},\delta}\, \hat\Pi_{\rho_{X^n(m-1)},\delta} \cdots \hat\Pi_{\rho_{X^n(1)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\, \hat\Pi_{\rho_{X^n(1)},\delta} \cdots \hat\Pi_{\rho_{X^n(m-1)},\delta}\, \Pi_{\rho_{X^n(m)},\delta}\right\}\right]. \qquad (3)$$

Our first step is to apply Sen's bound to the above quantity. But before doing so, we should rewrite the above expression just slightly, by observing that

$$\begin{aligned}
1 &= \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\{\rho_{X^n(m)}\}\right] \\
&= \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \left(\operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\} + \operatorname{Tr}\{\hat\Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\}\right)\right] \\
&= \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\}\right] + \frac{1}{M}\sum_m \operatorname{Tr}\{\hat\Pi_{\rho,\delta}^n\, \mathbb{E}_{X^n}[\rho_{X^n(m)}]\} \\
&= \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\}\right] + \operatorname{Tr}\{\hat\Pi_{\rho,\delta}^n\, \rho^{\otimes n}\} \\
&\leq \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\}\right] + \epsilon.
\end{aligned}$$

Substituting into (3) (and forgetting about the small ϵ term for now) gives an upper bound of

$$\mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\{\Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\}\right] - \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\left\{\Pi_{\rho_{X^n(m)},\delta}\, \hat\Pi_{\rho_{X^n(m-1)},\delta} \cdots \hat\Pi_{\rho_{X^n(1)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\, \hat\Pi_{\rho_{X^n(1)},\delta} \cdots \hat\Pi_{\rho_{X^n(m-1)},\delta}\, \Pi_{\rho_{X^n(m)},\delta}\right\}\right].$$

We then apply Sen's bound to this expression, with $\sigma = \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n$ and the sequential projectors taken as $\Pi_{\rho_{X^n(m)},\delta},\ \hat\Pi_{\rho_{X^n(m-1)},\delta},\ \ldots,\ \hat\Pi_{\rho_{X^n(1)},\delta}$. This gives the upper bound

$$\mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m 2\left(\operatorname{Tr}\left\{\left(I - \Pi_{\rho_{X^n(m)},\delta}\right) \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\} + \sum_{i=1}^{m-1} \operatorname{Tr}\left\{\Pi_{\rho_{X^n(i)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\}\right)^{1/2}\right].$$

Due to concavity of the square root, we can bound this expression from above by

$$\begin{aligned}
&2\left(\mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \left(\operatorname{Tr}\left\{\left(I - \Pi_{\rho_{X^n(m)},\delta}\right) \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\} + \sum_{i=1}^{m-1} \operatorname{Tr}\left\{\Pi_{\rho_{X^n(i)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\}\right)\right]\right)^{1/2} \\
\leq\ &2\left(\mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \left(\operatorname{Tr}\left\{\left(I - \Pi_{\rho_{X^n(m)},\delta}\right) \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\} + \sum_{i \neq m} \operatorname{Tr}\left\{\Pi_{\rho_{X^n(i)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\}\right)\right]\right)^{1/2},
\end{aligned}$$

where the second bound follows by summing over all of the codewords not equal to the mth codeword (this sum can only be larger).

We now focus exclusively on showing that the term inside the square root can be made small. Consider the first term:

$$\begin{aligned}
&\mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\left\{\left(I - \Pi_{\rho_{X^n(m)},\delta}\right) \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\}\right] \\
\leq\ &\mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \left(\operatorname{Tr}\left\{\left(I - \Pi_{\rho_{X^n(m)},\delta}\right) \rho_{X^n(m)}\right\} + \left\|\rho_{X^n(m)} - \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\|_1\right)\right] \\
\leq\ &\epsilon + 2\sqrt{\epsilon},
\end{aligned}$$

where the first inequality follows from (1) and the second inequality follows from the gentle operator lemma and the properties of unconditional and conditional typicality. Consider now the second term and the following chain of inequalities:

$$\begin{aligned}
&\sum_{i \neq m} \mathbb{E}_{X^n}\left[\operatorname{Tr}\left\{\Pi_{\rho_{X^n(i)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\right\}\right] \\
=\ &\sum_{i \neq m} \operatorname{Tr}\left\{\mathbb{E}_{X^n}\left[\Pi_{\rho_{X^n(i)},\delta}\right] \Pi_{\rho,\delta}^n\, \mathbb{E}_{X^n}\left[\rho_{X^n(m)}\right] \Pi_{\rho,\delta}^n\right\} \\
=\ &\sum_{i \neq m} \operatorname{Tr}\left\{\mathbb{E}_{X^n}\left[\Pi_{\rho_{X^n(i)},\delta}\right] \Pi_{\rho,\delta}^n\, \rho^{\otimes n}\, \Pi_{\rho,\delta}^n\right\} \\
\leq\ &\sum_{i \neq m} 2^{-n[H(B)-\delta]}\, \operatorname{Tr}\left\{\mathbb{E}_{X^n}\left[\Pi_{\rho_{X^n(i)},\delta}\right] \Pi_{\rho,\delta}^n\right\}.
\end{aligned}$$

The first equality follows because the codewords $X^n(m)$ and $X^n(i)$ (with $i \neq m$) are independent, since they were selected independently. The second equality follows from (2). The first inequality follows from the equipartition property of the typical subspace, $\Pi_{\rho,\delta}^n\, \rho^{\otimes n}\, \Pi_{\rho,\delta}^n \leq 2^{-n[H(B)-\delta]}\, \Pi_{\rho,\delta}^n$. Continuing, we have

$$\begin{aligned}
&\leq \sum_{i \neq m} 2^{-n[H(B)-\delta]}\, \mathbb{E}_{X^n}\left[\operatorname{Tr}\left\{\Pi_{\rho_{X^n(i)},\delta}\right\}\right] \\
&\leq \sum_{i \neq m} 2^{-n[H(B)-\delta]}\, 2^{n[H(B|X)+\delta]} \\
&= \sum_{i \neq m} 2^{-n[I(X;B)-2\delta]} \\
&\leq M\, 2^{-n[I(X;B)-2\delta]}.
\end{aligned}$$

The first inequality follows from $\Pi_{\rho,\delta}^n \leq I$ and from exchanging the trace with the expectation. The second inequality follows from the bound $\operatorname{Tr}\{\Pi_{\rho_{X^n(i)},\delta}\} \leq 2^{n[H(B|X)+\delta]}$ on the dimension of the conditionally typical subspace. The final two steps are straightforward.

Putting everything together, we get our final bound on the expectation of the average error probability:

$$1 - \mathbb{E}_{X^n}\left[\frac{1}{M}\sum_m \operatorname{Tr}\left\{\Pi_{\rho_{X^n(m)},\delta}\, \hat\Pi_{\rho_{X^n(m-1)},\delta} \cdots \hat\Pi_{\rho_{X^n(1)},\delta}\, \Pi_{\rho,\delta}^n\, \rho_{X^n(m)}\, \Pi_{\rho,\delta}^n\, \hat\Pi_{\rho_{X^n(1)},\delta} \cdots \hat\Pi_{\rho_{X^n(m-1)},\delta}\, \Pi_{\rho_{X^n(m)},\delta}\right\}\right] \leq \epsilon + 2\left(\epsilon + 2\sqrt{\epsilon} + M\, 2^{-n[I(X;B)-2\delta]}\right)^{1/2}.$$

Thus, as long as we choose $M = 2^{n[I(X;B)-3\delta]}$, so that $M\, 2^{-n[I(X;B)-2\delta]} = 2^{-n\delta}$ vanishes as $n \to \infty$, there exists a code with vanishing error probability.

Non-additivity of the classical capacity

The HSW theorem can be seen as expressing the classical capacity of a channel Φ in terms of a regularization of the Holevo χ-quantity over multiple uses of Φ. An open problem in quantum information theory was to determine if the χ-quantity is additive, which would imply that the classical capacity could be expressed using a single use of Φ.[5] However, channels giving counterexamples to this statement were eventually given by Matthew Hastings in 2009.[6] Follow-up work showed that this is a generic phenomenon, in the sense that a channel chosen randomly from a natural probability distribution will give a counterexample with high probability. (This stands in contrast to proofs using the probabilistic method, where random sampling is shown to give a counterexample only with nonzero probability.) A proof of this can be given using Dvoretzky's theorem.[5]

Minimal output entropy

Non-additivity of the classical capacity is closely related to non-additivity of the minimal von Neumann entropy of the output of a quantum channel. An easier problem is to consider the minimal output quantum Rényi entropy of order $p > 2$, for which simple counterexamples using the inherent entanglement of fermions were given by Grudka, Horodecki, and Pankowski.[7]

References

Notes

  1. "Lecture 11: The classical capacity of a quantum channel". https://www.uio.no/studier/emner/matnat/math/MAT4430/v22/lecture-notes/lecture11.pdf. 
  2. Schumacher, Benjamin; Westmoreland, Michael (1997), "Sending classical information via noisy quantum channels", Phys. Rev. A 56 (1): 131–138, doi:10.1103/PhysRevA.56.131, Bibcode: 1997PhRvA..56..131S 
  3. Holevo, Alexander S. (1998), "The Capacity of Quantum Channel with General Signal States", IEEE Transactions on Information Theory 44 (1): 269–273, doi:10.1109/18.651037 
  4. Sen, Pranab (2012), "Achieving the Han-Kobayashi inner bound for the quantum interference channel by sequential decoding", IEEE International Symposium on Information Theory Proceedings (ISIT 2012), pp. 736–740, doi:10.1109/ISIT.2012.6284656 
  5. 5.0 5.1 Aubrun, Guillaume; Szarek, Stanisław; Werner, Elisabeth (2011). "Hastings's Additivity Counterexample via Dvoretzky's Theorem". Communications in Mathematical Physics 305 (1): 85–97. doi:10.1007/s00220-010-1172-y. ISSN 0010-3616. http://link.springer.com/10.1007/s00220-010-1172-y. 
  6. Hastings, M. B. (2009-03-15). "Superadditivity of communication capacity using entangled inputs". Nature Physics (Springer Science and Business Media LLC) 5 (4): 255–257. doi:10.1038/nphys1224. ISSN 1745-2473. 
  7. Grudka, Andrzej; Horodecki, Michał; Pankowski, Łukasz (2010-10-22). "Constructive counterexamples to the additivity of the minimum output Rényi entropy of quantum channels for all p > 2". Journal of Physics A: Mathematical and Theoretical 43 (42). doi:10.1088/1751-8113/43/42/425304. ISSN 1751-8113. https://iopscience.iop.org/article/10.1088/1751-8113/43/42/425304. Retrieved 2025-12-15.