Gilbert–Varshamov bound for linear codes

From HandWiki

The Gilbert–Varshamov bound for linear codes is related to the general Gilbert–Varshamov bound, which gives a lower bound on the maximal number of elements in an error-correcting code of a given block length and minimum Hamming distance over a field [math]\displaystyle{ \mathbb{F}_q }[/math]. This may be translated into a statement about the maximum rate of a code with given length and minimum distance. The Gilbert–Varshamov bound for linear codes asserts the existence of q-ary linear codes for any relative minimum distance less than the given bound that simultaneously have high rate. The existence proof uses the probabilistic method, and is therefore not constructive. The Gilbert–Varshamov bound is the best known in terms of relative distance for codes over alphabets of size less than 49.[citation needed] For larger alphabets, algebraic geometry codes sometimes achieve an asymptotically better rate vs. distance tradeoff than is given by the Gilbert–Varshamov bound.[1]

Gilbert–Varshamov bound theorem

Theorem: Let [math]\displaystyle{ q \geqslant 2 }[/math]. For every [math]\displaystyle{ 0 \leqslant \delta \lt 1 - \tfrac{1}{q} }[/math] and [math]\displaystyle{ 0 \lt \varepsilon \leqslant 1 - H_q (\delta ), }[/math] there exists a code with rate [math]\displaystyle{ R \geqslant 1 - H_q (\delta ) - \varepsilon }[/math] and relative distance [math]\displaystyle{ \delta. }[/math]

Here [math]\displaystyle{ H_q }[/math] is the q-ary entropy function defined as follows:

[math]\displaystyle{ H_q(x) = x\log_q(q-1)-x\log_qx-(1-x)\log_q(1-x). }[/math]
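The entropy function above is straightforward to evaluate numerically; a minimal sketch (the function name `H_q` is illustrative), taking the usual conventions [math]\displaystyle{ H_q(0) = 0 }[/math] and [math]\displaystyle{ H_q(1) = \log_q(q-1) }[/math] at the endpoints:

```python
import math

def H_q(q: int, x: float) -> float:
    """q-ary entropy function H_q(x) for 0 <= x <= 1 and q >= 2.

    By the convention 0*log(0) = 0, H_q(0) = 0 and H_q(1) = log_q(q-1).
    """
    if x == 0:
        return 0.0
    if x == 1:
        return math.log(q - 1, q)
    return (x * math.log(q - 1, q)
            - x * math.log(x, q)
            - (1 - x) * math.log(1 - x, q))
```

Note that [math]\displaystyle{ H_q(1 - \tfrac{1}{q}) = 1 }[/math], e.g. [math]\displaystyle{ H_2(\tfrac{1}{2}) = 1 }[/math], which is why the theorem restricts [math]\displaystyle{ \delta \lt 1 - \tfrac{1}{q} }[/math].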

The above result was proved by Edgar Gilbert for general (not necessarily linear) codes, using a greedy construction. For linear codes, Rom Varshamov proved the bound using the probabilistic method, applied to a random linear code. That proof is given below.

High-level proof:

To show the existence of a linear code satisfying these constraints, the probabilistic method is used to construct a random linear code. Specifically, the linear code is chosen by picking a random generator matrix [math]\displaystyle{ G }[/math] whose entries are chosen independently and uniformly over the field [math]\displaystyle{ \mathbb{F}_q }[/math]. Since the minimum Hamming distance of a linear code equals the minimum weight of a non-zero codeword, to prove that the linear code generated by [math]\displaystyle{ G }[/math] has Hamming distance [math]\displaystyle{ d }[/math] we must show that for any [math]\displaystyle{ m \in \mathbb{F}_q^k \smallsetminus \left\{ 0 \right\} }[/math], [math]\displaystyle{ \operatorname{wt}(mG) \ge d }[/math]. To do so, we bound the complementary event: the probability that the linear code generated by [math]\displaystyle{ G }[/math] has Hamming distance less than [math]\displaystyle{ d }[/math] is exponentially small in [math]\displaystyle{ n }[/math]. By the probabilistic method, there then exists a linear code satisfying the theorem.

Formal proof:

By the probabilistic method, to show that there exists a linear code with Hamming distance at least [math]\displaystyle{ d }[/math], we show that the probability that a random linear code has distance less than [math]\displaystyle{ d }[/math] is exponentially small in [math]\displaystyle{ n }[/math].

A linear code is defined by its generator matrix, so we use a "random generator matrix" [math]\displaystyle{ G }[/math] as a means of describing a random linear code. A random generator matrix [math]\displaystyle{ G }[/math] of size [math]\displaystyle{ k \times n }[/math] contains [math]\displaystyle{ kn }[/math] elements, which are chosen independently and uniformly over the field [math]\displaystyle{ \mathbb{F}_q }[/math].
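Sampling such a matrix is direct, identifying [math]\displaystyle{ \mathbb{F}_q }[/math] with [math]\displaystyle{ \{0, 1, \ldots, q-1\} }[/math]; a minimal sketch (the function name is illustrative):

```python
import random

def random_generator_matrix(k: int, n: int, q: int, rng=random):
    """Sample a k x n matrix whose kn entries are i.i.d. uniform
    over F_q, represented as the integers {0, ..., q-1}."""
    return [[rng.randrange(q) for _ in range(n)] for _ in range(k)]
```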

Recall that in a linear code, the distance equals the minimum weight of a non-zero codeword. Let [math]\displaystyle{ \operatorname{wt}(y) }[/math] denote the weight of the codeword [math]\displaystyle{ y }[/math]. So

[math]\displaystyle{ \begin{align} P & = \Pr_{\text{random }G} (\text{linear code generated by } G\text{ has distance} \lt d) \\[6pt] & = \Pr_{\text{random }G} (\text{there exists a non-zero codeword } y \text{ in a linear code generated by }G\text{ such that } \operatorname{wt}(y) \lt d) \\[6pt] &= \Pr_{\text{random }G} \left (\text{there exists } 0 \neq m \in \mathbb{F}_q^k \text{ such that } \operatorname{wt}(mG) \lt d \right ) \end{align} }[/math]

The last equality follows from the definition: if a codeword [math]\displaystyle{ y }[/math] belongs to a linear code generated by [math]\displaystyle{ G }[/math], then [math]\displaystyle{ y = mG }[/math] for some vector [math]\displaystyle{ m \in \mathbb{F}_q^k }[/math].
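The characterization of the distance as the minimum of [math]\displaystyle{ \operatorname{wt}(mG) }[/math] over non-zero messages can be checked directly for small codes (this enumeration takes [math]\displaystyle{ q^k - 1 }[/math] encodings, so it is only a sketch for prime [math]\displaystyle{ q }[/math] and tiny parameters; the function names are illustrative):

```python
from itertools import product

def encode(m, G, q):
    """Codeword mG over F_q (vector-matrix product modulo q, q prime)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % q for j in range(n)]

def minimum_distance(G, q):
    """Minimum of wt(mG) over all non-zero messages m in F_q^k.

    For a linear code with independent rows this equals the minimum
    distance; runtime is exponential in k."""
    k = len(G)
    return min(sum(c != 0 for c in encode(m, G, q))
               for m in product(range(q), repeat=k) if any(m))
```

For instance, the binary [3,1] repetition code with generator matrix [1 1 1] has minimum distance 3, and the [3,2] single-parity-check code has minimum distance 2.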

By Boole's inequality, we have:

[math]\displaystyle{ P \leqslant \sum_{0 \neq m \in \mathbb{F}_q^k} \Pr_{\text{random }G} (\operatorname{wt}(mG) \lt d) }[/math]

Now for a given message [math]\displaystyle{ 0 \neq m \in \mathbb{F}_q^k, }[/math] we want to compute

[math]\displaystyle{ W = \Pr_{\text{random }G} (\operatorname{wt}(mG) \lt d). }[/math]

Let [math]\displaystyle{ \Delta(m_1,m_2) }[/math] be the Hamming distance between two vectors [math]\displaystyle{ m_1 }[/math] and [math]\displaystyle{ m_2 }[/math]. Then for any vector [math]\displaystyle{ m }[/math], we have [math]\displaystyle{ \operatorname{wt}(m) = \Delta(0,m) }[/math]. Therefore:

[math]\displaystyle{ W = \sum_{\{y \in \mathbb{F}_q^n |\Delta (0,y) \leqslant d - 1\}} \Pr_{\text{random }G} (mG = y) }[/math]

For any fixed non-zero message [math]\displaystyle{ m }[/math], the randomness of [math]\displaystyle{ G }[/math] makes [math]\displaystyle{ mG }[/math] a uniformly random vector in [math]\displaystyle{ \mathbb{F}_q^n }[/math]: each coordinate of [math]\displaystyle{ mG }[/math] is a non-trivial linear combination of independent uniform entries of [math]\displaystyle{ G }[/math], hence uniform over [math]\displaystyle{ \mathbb{F}_q }[/math], and distinct coordinates involve disjoint sets of entries, hence are independent. So

[math]\displaystyle{ \Pr_{\text{random }G} (mG = y) = q^{-n} }[/math]

Let [math]\displaystyle{ \operatorname{Vol}_q(r,n) }[/math] denote the volume of the Hamming ball of radius [math]\displaystyle{ r }[/math] in [math]\displaystyle{ \mathbb{F}_q^n }[/math]. Then:[2]

[math]\displaystyle{ P \leqslant q^k W = q^k \left ( \frac{\operatorname{Vol}_q(d-1,n)}{q^n} \right ) \leqslant q^k \left ( \frac{q^{nH_q(\delta)}}{q^n} \right ) = q^k q^{-n(1-H_q(\delta))} }[/math]
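The ball volume [math]\displaystyle{ \operatorname{Vol}_q(r,n) = \sum_{i=0}^{r} \binom{n}{i}(q-1)^i }[/math] and the resulting union bound can be evaluated exactly for concrete parameters; a sketch (the function names are illustrative):

```python
from math import comb

def hamming_ball_volume(q: int, r: int, n: int) -> int:
    """Vol_q(r, n): number of vectors in F_q^n within Hamming
    distance r of a fixed point (sum over the i positions that differ)."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def union_bound(q: int, k: int, d: int, n: int) -> float:
    """The bound q^k * Vol_q(d-1, n) / q^n on the probability that a
    random k x n generator matrix yields a code of distance < d."""
    return q ** k * hamming_ball_volume(q, d - 1, n) / q ** n
```

Whenever `union_bound` evaluates to a value below 1, a code with those parameters must exist; e.g. for [math]\displaystyle{ q=2, n=7, k=2, d=3 }[/math] the bound is [math]\displaystyle{ 4 \cdot 29 / 128 \lt 1 }[/math].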

By choosing [math]\displaystyle{ k = (1-H_q(\delta)-\varepsilon)n }[/math], the above inequality becomes

[math]\displaystyle{ P \leqslant q^{-\varepsilon n} }[/math]

Finally, [math]\displaystyle{ q^{-\varepsilon n} \lt 1 }[/math], and it is exponentially small in [math]\displaystyle{ n }[/math], as claimed. By the probabilistic method, there exists a linear code [math]\displaystyle{ C }[/math] with relative distance [math]\displaystyle{ \delta }[/math] and rate [math]\displaystyle{ R }[/math] at least [math]\displaystyle{ (1-H_q(\delta)-\varepsilon) }[/math], which completes the proof.

Comments

  1. The Varshamov construction above is not explicit; that is, it does not give a deterministic method to construct a linear code satisfying the Gilbert–Varshamov bound. A naive approach is to search over all generator matrices [math]\displaystyle{ G }[/math] of size [math]\displaystyle{ k \times n }[/math] over the field [math]\displaystyle{ \mathbb{F}_q }[/math], checking whether the linear code associated to [math]\displaystyle{ G }[/math] achieves the predicted Hamming distance. This exhaustive search requires runtime exponential in [math]\displaystyle{ kn }[/math].
  2. There is also a Las Vegas construction that samples a random linear code and checks whether it has good Hamming distance; since verifying the distance requires examining all non-zero codewords, this construction also has exponential runtime.
  3. For sufficiently large non-prime q and for certain ranges of the variable δ, the Gilbert–Varshamov bound is surpassed by the Tsfasman–Vladut–Zink bound.[3]
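The Las Vegas construction in comment 2 can be sketched as a sample-and-check loop (function names are illustrative; prime [math]\displaystyle{ q }[/math] assumed, and the distance check is the exponential-time enumeration of all non-zero messages):

```python
import random
from itertools import product

def encode(m, G, q):
    """Codeword mG over F_q (vector-matrix product modulo q, q prime)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % q for j in range(n)]

def min_distance(G, q):
    """Minimum of wt(mG) over non-zero m; exponential in k."""
    k = len(G)
    return min(sum(c != 0 for c in encode(m, G, q))
               for m in product(range(q), repeat=k) if any(m))

def las_vegas_code(k, n, q, d, rng=random):
    """Resample a uniformly random k x n generator matrix until the
    resulting code has distance >= d. Terminates with probability 1
    whenever the union bound q^k * Vol_q(d-1, n) / q^n is below 1."""
    while True:
        G = [[rng.randrange(q) for _ in range(n)] for _ in range(k)]
        if min_distance(G, q) >= d:
            return G
```

For parameters within the Gilbert–Varshamov bound the expected number of trials is constant, but each trial's distance check dominates the cost.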

References

  1. Tsfasman, M.A.; Vladut, S.G.; Zink, T. (1982). "Modular curves, Shimura curves, and Goppa codes better than the Varshamov-Gilbert bound". Mathematische Nachrichten 104. 
  2. The latter inequality comes from the upper bound on the volume of the Hamming ball.
  3. Stichtenoth, H. (2006). "Transitive and self-dual codes attaining the Tsfasman–Vladut–Zink bound". IEEE Transactions on Information Theory 52 (5): 2218–2224. doi:10.1109/TIT.2006.872986. ISSN 0018-9448.