Hamming space

Figure: The Hamming space of binary strings of length 3; the distance between vertices in the cube graph equals the Hamming distance between the strings.

In statistics and coding theory, a Hamming space (named after American mathematician Richard Hamming) is usually the set of all [math]\displaystyle{ 2^N }[/math] binary strings of length N.[1][2] It is used in the theory of signal coding and transmission.
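
For illustration (not part of the original article), the binary Hamming space of length N can be enumerated directly; a minimal Python sketch, with the helper name binary_hamming_space chosen here for clarity:

```python
from itertools import product

def binary_hamming_space(N):
    """Enumerate the 2**N binary strings of length N as tuples of 0s and 1s."""
    return list(product((0, 1), repeat=N))

space = binary_hamming_space(3)
print(len(space))   # 8 == 2**3
print(space[:4])    # [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]
```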

More generally, a Hamming space can be defined over any alphabet (set) Q as the set of words of a fixed length N with letters from Q.[3][4] If Q is a finite field, then a Hamming space over Q is an N-dimensional vector space over Q. In the typical, binary case, the field is thus GF(2) (also denoted by Z2).[3]
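
A hedged sketch of the more general construction: words of length N over an arbitrary alphabet Q (so q^N words in total), together with the componentwise mod-2 addition that makes the binary case a vector space over GF(2). Function names here are illustrative only:

```python
from itertools import product

def hamming_space(alphabet, N):
    """All words of length N over the given alphabet; there are q**N of them."""
    return list(product(alphabet, repeat=N))

# Ternary example: q = 3 and N = 2 give 3**2 = 9 words.
print(len(hamming_space((0, 1, 2), 2)))  # 9

# Over GF(2), words add componentwise modulo 2, giving a vector space structure.
def add_gf2(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

print(add_gf2((1, 0, 1), (1, 1, 0)))  # (0, 1, 1)
```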

In coding theory, if Q has q elements, then any subset C (usually assumed to have cardinality at least two) of the N-dimensional Hamming space over Q is called a q-ary code of length N; the elements of C are called codewords.[3][4] In the case where C is a linear subspace of its Hamming space, it is called a linear code.[3] A typical example of a linear code is the Hamming code. Codes defined via a Hamming space necessarily have the same length for every codeword, so they are called block codes when it is necessary to distinguish them from variable-length codes, which are defined by unique factorization on a monoid.
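
As a small worked example (a sketch, using the length-3 binary repetition code rather than the Hamming code mentioned above, purely for brevity), linearity of a code can be checked as closure under componentwise addition mod 2:

```python
from itertools import product

# The binary repetition code of length 3: a 1-dimensional subspace of GF(2)^3.
code = {(0, 0, 0), (1, 1, 1)}

def add_gf2(u, v):
    """Componentwise addition modulo 2 in the Hamming space over GF(2)."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# Linearity check: the sum of any two codewords is again a codeword.
assert all(add_gf2(u, v) in code for u, v in product(code, repeat=2))
print("closed under addition, hence a linear code")
```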

The Hamming distance endows a Hamming space with a metric, which is essential in defining basic notions of coding theory such as error detecting and error correcting codes.[3]
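
A minimal sketch (illustrative, not from the source) of the Hamming distance, the number of coordinates in which two words differ, together with a brute-force check of the metric axioms on the length-3 binary Hamming space:

```python
from itertools import product

def hamming_distance(u, v):
    """Number of positions in which the equal-length words u and v differ."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

space = list(product((0, 1), repeat=3))

# Exhaustively verify the metric axioms on {0,1}^3 (8**3 = 512 triples).
for x, y, z in product(space, repeat=3):
    assert hamming_distance(x, y) == hamming_distance(y, x)          # symmetry
    assert (hamming_distance(x, y) == 0) == (x == y)                 # identity
    assert hamming_distance(x, z) <= (hamming_distance(x, y)
                                      + hamming_distance(y, z))      # triangle
print("Hamming distance is a metric on the binary Hamming space of length 3")
```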

Hamming spaces over non-field alphabets have also been considered, especially over finite rings (most notably over Z4), giving rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between [math]\displaystyle{ \mathbb{Z}_2^{2m} }[/math] (i.e. [math]\displaystyle{ \mathrm{GF}(2^{2m}) }[/math]) with the Hamming distance and [math]\displaystyle{ \mathbb{Z}_4^m }[/math] (also denoted as GR(4,m)) with the Lee distance.[5][6][7]
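
A hedged sketch of the Gray isometry in its simplest form, assuming the standard Gray map 0→00, 1→01, 2→11, 3→10 applied coordinatewise; the code exhaustively checks that Lee distances on Z_4^m equal Hamming distances of the images for m = 2:

```python
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_map(word):
    """Map a word over Z_4 to a binary word of twice the length."""
    return tuple(bit for symbol in word for bit in GRAY[symbol])

def lee_distance(u, v):
    """Lee distance on Z_4^m: sum over coordinates of min(|a-b|, 4-|a-b|)."""
    return sum(min(abs(a - b), 4 - abs(a - b)) for a, b in zip(u, v))

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

# Check the isometry exhaustively for m = 2: 16 * 16 pairs of words.
for u, v in product(product(range(4), repeat=2), repeat=2):
    assert lee_distance(u, v) == hamming_distance(gray_map(u), gray_map(v))
print("Gray map is an isometry from (Z_4^2, Lee) to (Z_2^4, Hamming)")
```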

References

  1. Baylis, D. J. (1997), Error Correcting Codes: A Mathematical Introduction, Chapman Hall/CRC Mathematics Series, 15, CRC Press, p. 62, ISBN 9780412786907, https://books.google.com/books?id=ZAdDuZoJdn8C&pg=PA62 
  2. Cohen, G.; Honkala, I.; Litsyn, S.; Lobstein, A. (1997), Covering Codes, North-Holland Mathematical Library, 54, Elsevier, p. 1, ISBN 9780080530079, https://books.google.com/books?id=7KBYOt44sugC&pg=PA1 
  3. Derek J.S. Robinson (2003). An Introduction to Abstract Algebra. Walter de Gruyter. pp. 254–255. ISBN 978-3-11-019816-4. 
  4. Cohen et al., Covering Codes, p. 15
  5. Marcus Greferath (2009). "An Introduction to Ring-Linear Coding Theory". Gröbner Bases, Coding, and Cryptography. Springer Science & Business Media. ISBN 978-3-540-93806-4. 
  6. "Kerdock and Preparata codes - Encyclopedia of Mathematics". http://www.encyclopediaofmath.org/index.php/Kerdock_and_Preparata_codes. 
  7. J.H. van Lint (1999). Introduction to Coding Theory (3rd ed.). Springer. Chapter 8: Codes over [math]\displaystyle{ \mathbb{Z}_4 }[/math]. ISBN 978-3-540-64133-9. https://archive.org/details/introductiontoco0000lint_a3b9.