Expander code

Expander codes
[Figure: a bipartite expander graph]
Classification
Type: Linear block code
Block length: n
Message length: n − m
Rate: 1 − m/n
Distance: 2(1 − ε)γn
Alphabet size: 2
Notation: [n, n − m, 2(1 − ε)γn]₂-code

In coding theory, expander codes form a class of error-correcting codes that are constructed from bipartite expander graphs. Along with Justesen codes, expander codes are of particular interest since they have a constant positive rate, a constant positive relative distance, and a constant alphabet size. In fact, the alphabet contains only two elements, so expander codes belong to the class of binary codes. Furthermore, expander codes can be both encoded and decoded in time proportional to the block length of the code.

Expander codes

In coding theory, an expander code is an [n, n − m]₂ linear block code whose parity check matrix is the adjacency matrix of a bipartite expander graph. These codes have good relative distance 2(1 − ε)γ, where ε and γ are properties of the expander graph as defined later, rate 1 − m/n, and decodability (algorithms of running time O(n) exist).

Definition

Let B be a (c, d)-biregular graph between a set of n nodes {v1, …, vn}, called variables, and a set of cn/d nodes {C1, …, Ccn/d}, called constraints.

Let b(i, j) be a function designed so that, for each constraint Ci, the variables neighbouring Ci are vb(i,1), …, vb(i,d).

Let 𝒮 be an error-correcting code of block length d. The expander code 𝒞(B, 𝒮) is the code of block length n whose code words are the words (x1, …, xn) such that, for 1 ≤ i ≤ cn/d, (xb(i,1), …, xb(i,d)) is a code word of 𝒮.[1]
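The defining condition can be sketched directly in code. The graph, the constraint tuples, and the choice of inner code below are all illustrative (a tiny graph and a single parity check, not an actual expander):

```python
# Membership test for an expander code C(B, S): a word is a code word
# iff, for every constraint C_i, its restriction to the d variables
# neighbouring C_i is a code word of the inner code S.

def in_inner_code(bits):
    """Toy inner code S: the single-parity-check (even-weight) code."""
    return sum(bits) % 2 == 0

def is_codeword(x, constraints):
    """x: list of 0/1 of length n; constraints[i] = indices b(i, 1..d)."""
    return all(in_inner_code([x[j] for j in nbrs]) for nbrs in constraints)

# n = 4 variables, two constraints of degree d = 2.
constraints = [(0, 1), (2, 3)]
print(is_codeword([1, 1, 0, 0], constraints))  # True: both constraints see even parity
print(is_codeword([1, 0, 0, 0], constraints))  # False: constraint (0, 1) is violated
```

With a real construction, `constraints` would come from a bipartite expander and 𝒮 would be a stronger inner code, but the membership test has exactly this shape.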

It has been shown that nontrivial lossless expander graphs exist. Moreover, we can explicitly construct them.[2]

Rate

The rate of C is its dimension divided by its block length. In this case, the parity check matrix has size m × n, and hence C has rate at least (n − m)/n = 1 − m/n.
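The rate is "at least" 1 − m/n because the m parity checks need not be linearly independent over GF(2); dependent rows leave the dimension larger than n − m. A toy check with an illustrative 3 × 4 matrix whose third row is the sum of the first two:

```python
# Dimension of a linear code = n - rank(P) over GF(2), which can
# exceed n - m when rows of the parity check matrix P are dependent.

def gf2_rank(rows):
    """Gaussian elimination over GF(2); rows are lists of 0/1."""
    rank, cols = 0, len(rows[0])
    rows = [r[:] for r in rows]
    for c in range(cols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

P = [[1, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 1, 1, 1]]   # row 3 = row 1 + row 2 over GF(2)
n, m = 4, 3
print(n - gf2_rank(P))  # dimension 2 > n - m = 1
```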

Distance

Suppose ε < 1/2. Then the distance of an (n, m, d, γ, 1 − ε) expander code C is at least 2(1 − ε)γn.

Proof

Note that we can consider every code word c in C as a subset of vertices S ⊆ L, by saying that vertex vi ∈ S if and only if the ith index of the code word is a 1. Then c is a code word if and only if every vertex in R is adjacent to an even number of vertices in S. (In order to be a code word, cP = 0, where P is the parity check matrix; each vertex in R corresponds to a column of P, and matrix multiplication over GF(2) = {0, 1} then gives the desired result.) So, if a vertex in R is adjacent to a single vertex in S, we know immediately that c is not a code word. Let N(S) denote the neighbours in R of S, and U(S) denote those neighbours of S which are unique, i.e., adjacent to exactly one vertex of S.

Lemma 1

For every S ⊆ L of size |S| ≤ γn, we have d|S| ≥ |N(S)| ≥ |U(S)| ≥ d(1 − 2ε)|S|.

Proof

Trivially, |N(S)| ≥ |U(S)|, since v ∈ U(S) implies v ∈ N(S). |N(S)| ≤ d|S| follows since the degree of every vertex in S is d. By the expansion property of the graph, there must be a set of d(1 − ε)|S| edges which go to distinct vertices. The remaining dε|S| edges can make at most dε|S| of these neighbours non-unique, so |U(S)| ≥ d(1 − ε)|S| − dε|S| = d(1 − 2ε)|S|.
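The counting behind the lemma can be sanity-checked on any small graph: S emits exactly d|S| edges, and every non-unique neighbour absorbs at least two of them, so |U(S)| ≥ 2|N(S)| − d|S| always holds (which matches the bound above once expansion gives |N(S)| ≥ d(1 − ε)|S|). A toy check over every subset of a small d-left-regular bipartite graph (illustrative, not an expander):

```python
from itertools import combinations

# adj[v] = set of right-hand neighbours of left vertex v; left degree d = 2.
adj = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {3, 0}}
d = 2

for size in range(1, len(adj) + 1):
    for S in combinations(adj, size):
        edges = [r for v in S for r in adj[v]]       # d*|S| edges leaving S
        N = set(edges)                               # neighbours of S
        U = {r for r in N if edges.count(r) == 1}    # unique neighbours of S
        assert d * len(S) >= len(N) >= len(U) >= 2 * len(N) - d * len(S)
print("counting bound holds on all subsets S")
```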

Corollary

Every sufficiently small non-empty S (that is, |S| ≤ γn) has a unique neighbour. This follows since ε < 1/2 makes d(1 − 2ε)|S| > 0.

Lemma 2

Every non-empty subset T ⊆ L with |T| < 2(1 − ε)γn has a unique neighbour.

Proof

Lemma 1 proves the case |T| ≤ γn, so suppose γn < |T| < 2(1 − ε)γn. Let S ⊆ T be such that |S| = γn. By Lemma 1, we know that |U(S)| ≥ d(1 − 2ε)|S| = d(1 − 2ε)γn. Now, a vertex v ∈ U(S) lies in U(T) unless v ∈ N(T ∖ S), and we know that |T ∖ S| < 2(1 − ε)γn − γn = (1 − 2ε)γn, so by the first part of Lemma 1, |N(T ∖ S)| ≤ d|T ∖ S| < d(1 − 2ε)γn. Hence |U(T)| ≥ |U(S) ∖ N(T ∖ S)| ≥ |U(S)| − |N(T ∖ S)| > 0, and U(T) is not empty.

Corollary

Note that if a T ⊆ L has at least one unique neighbour, i.e. |U(T)| > 0, then the word c corresponding to T cannot be a code word, as it will not multiply to the all-zeros vector by the parity check matrix. By the previous argument, every non-zero c ∈ C has wt(c) ≥ 2(1 − ε)γn. Since C is linear, we conclude that C has distance at least 2(1 − ε)γn.

Encoding

The encoding time for an expander code is upper bounded by that of a general linear code, O(n²), by matrix multiplication. A result due to Spielman shows that encoding is possible in O(n) time.[3]

Decoding

Decoding of expander codes is possible in O(n) time when ε < 1/4, using the following algorithm.

Let vi be the vertex of L that corresponds to the ith index in the code words of C. Let y ∈ {0, 1}^n be a received word, and let V(y) = {vi : the ith position of y is a 1}. Let e(i) be |{v ∈ R : vi ∈ N(v) and |N(v) ∩ V(y)| is even}|, and let o(i) be |{v ∈ R : vi ∈ N(v) and |N(v) ∩ V(y)| is odd}|; that is, e(i) and o(i) count the satisfied and unsatisfied constraints adjacent to vi. Then consider the greedy algorithm:


Input: received word y.

initialize y' to y
while there is a v in R adjacent to an odd number of vertices in V(y')
    if there is an i such that o(i) > e(i)
        flip entry i in y'
    else
        fail

Output: fail, or modified code word y'.
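The greedy algorithm above can be sketched as follows; the function name, the toy 4-cycle of parity checks, and the loop cap are illustrative, not part of the original construction:

```python
# Bit-flipping decoder sketch: repeatedly flip a variable adjacent to
# more unsatisfied than satisfied constraints, until no constraint is
# unsatisfied (success) or no such variable exists (fail).

def flip_decode(y, checks, max_rounds=None):
    """y: list of 0/1; checks[j] = variable indices neighbouring C_j."""
    y = list(y)
    var_to_checks = {}
    for j, nbrs in enumerate(checks):
        for v in nbrs:
            var_to_checks.setdefault(v, []).append(j)
    # At most m = len(checks) flips are needed, plus one final check.
    rounds = max_rounds if max_rounds is not None else len(checks) + 1
    for _ in range(rounds):
        unsat = [j for j, nbrs in enumerate(checks)
                 if sum(y[v] for v in nbrs) % 2 == 1]
        if not unsat:
            return y                      # reached a code word
        flipped = False
        for i in var_to_checks:
            o = sum(1 for j in var_to_checks[i] if j in unsat)   # o(i)
            e = len(var_to_checks[i]) - o                        # e(i)
            if o > e:
                y[i] ^= 1                 # flip entry i
                flipped = True
                break
        if not flipped:
            return None                   # fail
    return None

checks = [(0, 1), (1, 2), (2, 3), (3, 0)]     # toy 4-cycle of parity checks
print(flip_decode([1, 1, 1, 1], checks))      # [1, 1, 1, 1]: already a code word
print(flip_decode([1, 0, 0, 0], checks))      # [0, 0, 0, 0]: single error corrected
```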


Proof

We show first the correctness of the algorithm, and then examine its running time.

Correctness

We must show that the algorithm terminates with the correct code word when the number of corrupted positions in the received word is small enough. Let the set of corrupt variables be S, with s = |S|, and call a vertex in R unsatisfied if it is adjacent to an odd number of vertices in V(y'). The following lemma will prove useful.

Lemma 3

If 0 < s < γn, then there is a vertex vi with o(i) > e(i).

Proof

By Lemma 1, we know that |U(S)| ≥ d(1 − 2ε)s. Recall that unique neighbours of S are unsatisfied, and each contributes to o(i) for the single corrupt variable vi it neighbours. So the average corrupt variable has at least d(1 − 2ε) > d/2 unique neighbours, since ε < 1/4; thus some vi has o(i) > d/2, and since o(i) + e(i) = d, we get o(i) > e(i).
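The averaging step can be written out explicitly (a sketch in the notation above, restating the argument just given):

```latex
\sum_{v_i \in S} o(i) \;\ge\; |U(S)| \;\ge\; d(1-2\varepsilon)\,s
\quad\Longrightarrow\quad
\max_{v_i \in S} o(i) \;\ge\; d(1-2\varepsilon) \;>\; \frac{d}{2},
```

and since o(i) + e(i) = d for every variable, any vi attaining the maximum satisfies o(i) > e(i).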

So, if we have not yet reached a code word, then there will always be some vertex to flip. Next, we show that the number of errors can never increase beyond γn.

Lemma 4

If we start with s < γ(1 − 2ε)n, then we never reach s = γn at any point in the algorithm.

Proof

When we flip a variable vi, o(i) and e(i) are interchanged, and since we had o(i) > e(i), the number of unsatisfied vertices on the right decreases by at least one after each flip; in particular, it never exceeds its initial value. Since s < γ(1 − 2ε)n, the initial number of unsatisfied vertices is at most ds < dγ(1 − 2ε)n, by the d-regularity of the variables. Since the number of errors changes by exactly one per flip, if we ever reached a string with γn errors, then by Lemma 1 that string would have at least d(1 − 2ε)γn unique neighbours, hence at least d(1 − 2ε)γn unsatisfied vertices, a contradiction.

Lemmas 3 and 4 show us that if we start with s < γ(1 − 2ε)n (less than half the distance of C), then we will always find a vertex vi to flip. Each flip reduces the number of unsatisfied vertices in R by at least one, and hence the algorithm terminates in at most m steps, and it terminates at some code word, by Lemma 3. (Were it not at a code word, there would be some vertex to flip.) Lemma 4 shows us that we can never be farther than γn away from the correct code word. Since the code has distance 2(1 − ε)γn > γn (since ε < 1/2), the code word the algorithm terminates on must be the correct code word, since the number of bit flips is less than half the distance (so we could not have travelled far enough to reach any other code word).

Complexity

We now show that the algorithm can achieve linear-time decoding. Let the ratio m/n be constant, and let r be the maximum degree of any vertex in R. Note that r is also constant for known constructions.

  1. Pre-processing: It takes O(mr) time to compute whether each vertex in R has an odd or even number of neighbours in V(y).
  2. Pre-processing 2: We take O(dn) time (dn counts the edges of the graph, which is at most mr) to compute a list of the vertices vi in L with o(i) > e(i).
  3. Each iteration: We simply remove the first list element. To update the list of odd/even vertices in R, we need only update O(d) entries, since flipping one variable affects only its d neighbouring constraints, inserting/removing as necessary. We then update O(dr) entries in the list of vertices in L with more odd than even neighbours, since each affected constraint touches at most r variables, inserting/removing as necessary. Thus each iteration takes O(dr) time.
  4. As argued above, the total number of iterations is at most m.

This gives a total runtime of O(mdr)=O(n) time, where d and r are constants.

Notes

This article is based on Dr. Venkatesan Guruswami's course notes.[4]

References

  1. Sipser, M.; Spielman, D.A. (1996). "Expander codes". IEEE Transactions on Information Theory 42 (6): 1710–1722. doi:10.1109/18.556667. 
  2. Capalbo, M.; Reingold, O.; Vadhan, S.; Wigderson, A. (2002). "Randomness conductors and constant-degree lossless expanders". STOC '02 Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. ACM. pp. 659–668. doi:10.1145/509907.510003. ISBN 978-1-58113-495-7. http://dl.acm.org/citation.cfm?id=510003. 
  3. Spielman, D. (1996). "Linear-time encodable and decodable error-correcting codes". IEEE Transactions on Information Theory 42 (6): 1723–31. doi:10.1109/18.556668. 
  4. Guruswami, V. (15 November 2006). "Lecture 13: Expander Codes". CSE 533: Error-Correcting. University of Washington. http://www.cs.washington.edu/education/courses/cse533/06au/lecnotes/lecture13.pdf. 
    Guruswami, V. (March 2010). "Notes 8: Expander Codes and their decoding". Introduction to Coding Theory. Carnegie Mellon University. https://www.cs.cmu.edu/~venkatg/teaching/codingtheory/notes/notes8.pdf. 
    Guruswami, V. (September 2004). "Guest column: error-correcting codes and expander graphs". ACM SIGACT News 35 (3): 25–41. doi:10.1145/1027914.1027924. http://dl.acm.org/citation.cfm?id=1027924.