Code rate

Short description: Non-redundant proportion of an error correction code data stream
Figure: Different code rates (Hamming code).

In telecommunication and information theory, the code rate (or information rate[1]) of a forward error correction code is the proportion of the data stream that is useful (non-redundant). That is, if the code rate is [math]\displaystyle{ k/n }[/math], then for every k bits of useful information the coder generates a total of n bits of data, of which [math]\displaystyle{ n-k }[/math] are redundant.
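
As a minimal sketch of this definition (the Hamming(7,4) parameters below are chosen purely for illustration, echoing the figure above), the code rate and the number of redundant bits follow directly from k and n:

    def code_rate(k, n):
        """Return the code rate k/n of a code mapping k useful bits to n coded bits."""
        return k / n

    # Hamming(7,4): every 4 useful bits are encoded into 7 transmitted bits,
    # so n - k = 3 of the transmitted bits are redundant parity bits.
    k, n = 4, 7
    print(code_rate(k, n))  # 4/7 ≈ 0.571
    print(n - k)            # 3 redundant bits per block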

If R is the gross bit rate or data signalling rate (inclusive of redundant error coding), the net bit rate (the useful bit rate exclusive of error correction codes) is [math]\displaystyle{ \leq R \cdot k/n }[/math].
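
A quick numeric sketch of this bound (the 2 Mbit/s gross rate and the rate-3/4 code below are assumed purely for illustration):

    # Gross bit rate R (assumed figure) and an assumed rate-3/4 code.
    R = 2_000_000              # bit/s, inclusive of redundant error coding
    k, n = 3, 4
    net_bit_rate_bound = R * k / n
    print(net_bit_rate_bound)  # 1,500,000 bit/s; the actual net bit rate is at most this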

For example: the code rate of a convolutional code will typically be 1/2, 2/3, 3/4, 5/6, 7/8, etc., corresponding to one redundant bit inserted after every single, second, third, etc., bit. The code rate of the octet-oriented Reed–Solomon block code denoted RS(204,188) is 188/204, meaning that 204 − 188 = 16 redundant octets (or bytes) are added to each block of 188 octets of useful information.
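
The Reed–Solomon figures above can be checked with the same kind of arithmetic (a sketch using only the numbers already quoted):

    # RS(204,188): each block of 188 useful octets is transmitted as 204 octets.
    k, n = 188, 204
    redundant_octets = n - k   # 204 - 188 = 16 redundant octets per block
    rate = k / n               # 188/204 ≈ 0.922
    print(redundant_octets, rate)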

A few error correction codes, such as rateless erasure codes, do not have a fixed code rate.

Note, however, that bit/s is the more widespread unit of measurement for the information rate; used in that sense, the term is synonymous with the net bit rate, i.e. the useful bit rate exclusive of error-correction codes.

References

  1. Huffman, W. Cary; Pless, Vera (2003). Fundamentals of Error-Correcting Codes. Cambridge: Cambridge University Press.