Forney algorithm
In coding theory, the Forney algorithm (or Forney's algorithm) calculates the error values at known error locations. It is used as one of the steps in decoding BCH codes and Reed–Solomon codes (a subclass of BCH codes). George David Forney Jr. developed the algorithm.[1]
Procedure
Code words are treated as polynomials over a finite field. By design, the generator polynomial has the consecutive roots [math]\displaystyle{ \alpha^c, \alpha^{c+1}, \ldots, \alpha^{c+d-2} }[/math], where d is the designed distance of the code.
The syndromes are the evaluations of the received polynomial r(x) at those roots, [math]\displaystyle{ s_k = r(\alpha^{c+k}) }[/math] for k = 0, ..., d − 2. Since every code word vanishes at the roots of the generator polynomial, the syndromes depend only on the error pattern.
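As a concrete illustration of the syndrome computation (not part of the original article), the following sketch assumes a toy Reed–Solomon setting over the prime field GF(11) with primitive element α = 2, c = 1 and 2t = 4; practical codes usually work over GF(2^m), but a prime field keeps the modular arithmetic self-contained. The received word, error positions and error values are likewise assumptions made for the example.

```python
# Assumed toy setting for illustration: GF(11), alpha = 2, c = 1, 2t = 4.
p = 11          # prime field GF(11); practical RS codes usually use GF(2^m)
alpha = 2       # primitive element of GF(11)
c = 1           # first consecutive root ("fcr")
num_syn = 4     # number of syndromes = 2t

# Received word r(x) as a coefficient list (lowest degree first): here the
# zero code word corrupted by two errors, value 5 at position 3 and 4 at position 7.
r = [0] * 10
r[3], r[7] = 5, 4

def poly_eval(poly, x, p):
    """Evaluate a polynomial (coefficients, lowest degree first) at x modulo p."""
    return sum(coef * pow(x, k, p) for k, coef in enumerate(poly)) % p

# s_k = r(alpha^(c + k)); all syndromes are zero exactly when r(x) is a code word.
syndromes = [poly_eval(r, pow(alpha, c + k, p), p) for k in range(num_syn)]
print(syndromes)   # [2, 10, 5, 10]
```

Nonzero syndromes indicate that the received word is not a code word; the remaining steps locate and evaluate the errors.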
The error locator polynomial is[2]
- [math]\displaystyle{ \Lambda(x) = \prod_{i=1}^\nu (1- x \, X_i) = 1 + \sum_{i=1}^\nu \lambda_i \, x^i }[/math]
The zeros of Λ(x) are [math]\displaystyle{ X_1^{-1}, \ldots, X_\nu^{-1} }[/math], the reciprocals of the error locations [math]\displaystyle{ X_j = \alpha^{i_j} }[/math].
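Continuing the same assumed GF(11) example, the sketch below builds Λ(x) from two known error locations and checks that its zeros are the reciprocals of the error locators X_j.

```python
p, alpha = 11, 2                        # assumed toy field GF(11) and primitive element
error_positions = [3, 7]                # i_1, i_2 (assumed known here)
X = [pow(alpha, i, p) for i in error_positions]    # error locators X_j = alpha^(i_j)

def poly_mul(a, b, p):
    """Multiply two polynomials (coefficient lists, lowest degree first) modulo p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_eval(poly, x, p):
    return sum(coef * pow(x, k, p) for k, coef in enumerate(poly)) % p

# Lambda(x) = prod_j (1 - x X_j)
Lam = [1]
for Xj in X:
    Lam = poly_mul(Lam, [1, (-Xj) % p], p)
print(Lam)                              # [1, 7, 1], i.e. Lambda(x) = 1 + 7x + x^2

# The zeros of Lambda(x) are the reciprocals of the error locators.
for Xj in X:
    print(poly_eval(Lam, pow(Xj, p - 2, p), p))    # 0 for each X_j^(-1)
```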
Once the error locations are known, the next step is to determine the error values at those locations. These values are then used to correct the received word at those positions and recover the original codeword.
In the more general case, the error weights ej can be determined by solving the linear system
- [math]\displaystyle{ s_0 = e_1 \alpha^{(c + 0)\,i_1} + e_2 \alpha^{(c + 0)\,i_2} + \cdots \, }[/math]
- [math]\displaystyle{ s_1 = e_1 \alpha^{(c + 1)\,i_1} + e_2 \alpha^{(c + 1)\,i_2} + \cdots \, }[/math]
- [math]\displaystyle{ \cdots \, }[/math]
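For comparison, this linear system can be solved directly for the error values. The sketch below does so for the assumed two-error GF(11) example using Cramer's rule modulo p; with ν errors a general solve costs on the order of ν³ field operations, which is the work the Forney algorithm avoids.

```python
# Assumed toy example over GF(11): errors e_1, e_2 at positions 3 and 7,
# syndromes s_0 = 2, s_1 = 10 (from the sketch above), c = 1.
p, alpha, c = 11, 2, 1
i1, i2 = 3, 7
X1, X2 = pow(alpha, i1, p), pow(alpha, i2, p)      # error locators
s0, s1 = 2, 10

# Two equations in the two unknown error values:
#   s0 = e1 * X1^c       + e2 * X2^c
#   s1 = e1 * X1^(c + 1) + e2 * X2^(c + 1)
a11, a12 = pow(X1, c, p), pow(X2, c, p)
a21, a22 = pow(X1, c + 1, p), pow(X2, c + 1, p)

det = (a11 * a22 - a12 * a21) % p
det_inv = pow(det, p - 2, p)                       # modular inverse (p is prime)
e1 = (s0 * a22 - a12 * s1) * det_inv % p           # Cramer's rule
e2 = (a11 * s1 - s0 * a21) * det_inv % p
print(e1, e2)                                      # 5 4
```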
However, there is a more efficient method known as the Forney algorithm, which is based on Lagrange interpolation. First calculate the error evaluator polynomial[3]
- [math]\displaystyle{ \Omega(x) = S(x)\,\Lambda(x) \pmod{x^{2t}} \, }[/math]
where S(x) is the partial syndrome polynomial:[4]
- [math]\displaystyle{ S(x) = s_0 x^0 + s_1 x^1 + s_2 x^2 + \cdots + s_{2t-1} x^{2t-1}. }[/math]
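Continuing the assumed GF(11) example, the following sketch forms Ω(x) by multiplying S(x) and Λ(x) and discarding all terms of degree 2t and higher.

```python
p = 11                                   # assumed toy prime field
two_t = 4
S   = [2, 10, 5, 10]                     # partial syndrome polynomial s_0 .. s_{2t-1}
Lam = [1, 7, 1]                          # error locator 1 + 7x + x^2 from the example

def poly_mul_trunc(a, b, p, trunc):
    """Multiply two polynomials modulo p, keeping only terms of degree < trunc."""
    out = [0] * trunc
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < trunc:
                out[i + j] = (out[i + j] + ai * bj) % p
    return out

Omega = poly_mul_trunc(S, Lam, p, two_t)   # Omega(x) = S(x) Lambda(x) mod x^(2t)
print(Omega)                               # [2, 2, 0, 0], i.e. Omega(x) = 2 + 2x
```

In this example Ω(x) = 2 + 2x; its degree is less than the number of errors ν, as expected.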
Then evaluate the error values:[3]
- [math]\displaystyle{ e_j = - \frac{X_j^{1-c} \, \Omega(X_j^{-1})}{\Lambda'(X_j^{-1})} \, }[/math]
The value c is often called the "first consecutive root" or "fcr". Some codes select c = 1, so the expression simplifies to:
- [math]\displaystyle{ e_j = - \frac{\Omega(X_j^{-1})}{\Lambda'(X_j^{-1})} }[/math]
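Putting the pieces together for the assumed GF(11) example (where c = 1), the sketch below evaluates the Forney formula at each reciprocal error locator and recovers the two error values introduced earlier.

```python
p, alpha, c = 11, 2, 1                   # assumed toy field, primitive element and fcr
error_positions = [3, 7]
X = [pow(alpha, i, p) for i in error_positions]    # X_j = alpha^(i_j)
Omega = [2, 2]                           # error evaluator from the previous sketch
Lam_prime = [7, 2]                       # formal derivative of 1 + 7x + x^2 (next section)

def poly_eval(poly, x, p):
    return sum(coef * pow(x, k, p) for k, coef in enumerate(poly)) % p

for pos, Xj in zip(error_positions, X):
    Xj_inv = pow(Xj, p - 2, p)                              # X_j^(-1)
    num = pow(Xj, (1 - c) % (p - 1), p) * poly_eval(Omega, Xj_inv, p) % p
    den = poly_eval(Lam_prime, Xj_inv, p)
    e_j = (-num * pow(den, p - 2, p)) % p                   # Forney formula
    print(pos, e_j)                      # 3 5  and  7 4
```

Subtracting each e_j from the received symbol at position i_j restores the code word.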
Formal derivative
Λ'(x) is the formal derivative of the error locator polynomial Λ(x):[3]
- [math]\displaystyle{ \Lambda'(x) = \sum_{i=1}^{\nu} i \, \cdot \, \lambda_i \, x^{i-1} }[/math]
In the above expression, note that i is an integer, while [math]\displaystyle{ \lambda_i }[/math] is an element of the finite field. The operator · represents ordinary multiplication (repeated addition in the finite field), not the finite field's multiplication operator, i.e.
- [math]\displaystyle{ i\lambda = (1+\ldots+1)\lambda=\lambda+\ldots+\lambda. }[/math]
For instance, in characteristic 2, [math]\displaystyle{ i\lambda = 0 }[/math] if i is even and [math]\displaystyle{ i\lambda = \lambda }[/math] if i is odd.
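As a small illustrative helper (an assumption for this example, not from the article), the following function computes the formal derivative over a prime field GF(p); the factor i is reduced modulo the characteristic, so in characteristic 2 the rule simply drops the terms of even degree.

```python
def formal_derivative(poly, p):
    """Formal derivative of a polynomial over a prime field GF(p), coefficients listed
    lowest degree first; the integer index i acts by repeated addition, i.e. modulo p."""
    return [(i * coef) % p for i, coef in enumerate(poly)][1:]

# Over GF(11): d/dx (1 + 7x + x^2) = 7 + 2x
print(formal_derivative([1, 7, 1], 11))    # [7, 2]

# In characteristic 2 the terms of even degree vanish:
# d/dx (1 + x + x^2 + x^3) over GF(2) is 1 + x^2
print(formal_derivative([1, 1, 1, 1], 2))  # [1, 0, 1]
```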
Derivation
Lagrange interpolation
(Gill n.d.) gives a derivation of the Forney algorithm.
Erasures
Define the erasure locator polynomial
- [math]\displaystyle{ \Gamma(x) = \prod (1- x \, \alpha^{j_i}) }[/math]
where the erasure locations are given by [math]\displaystyle{ j_i }[/math]. Apply the procedure described above, substituting Γ for Λ.
If both errors and erasures are present, use the error-and-erasure locator polynomial
- [math]\displaystyle{ \Psi(x) = \Lambda(x) \, \Gamma(x) }[/math]
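As a rough sketch of the erasure case (again using the assumed GF(11) toy field), the erasure locator Γ(x) is built from the known erasure positions exactly as Λ(x) is built from the error positions, and the combined locator Ψ(x) is their product.

```python
p, alpha = 11, 2                          # assumed toy field and primitive element

def poly_mul(a, b, p):
    """Multiply two polynomials (coefficient lists, lowest degree first) modulo p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def locator(positions, p, alpha):
    """prod_i (1 - x alpha^(j_i)) for the given positions."""
    poly = [1]
    for j in positions:
        poly = poly_mul(poly, [1, (-pow(alpha, j, p)) % p], p)
    return poly

Gamma = locator([1], p, alpha)            # erasure at position 1, known to the receiver
Lam   = locator([3, 7], p, alpha)         # errors at positions 3 and 7, found by the decoder
Psi   = poly_mul(Lam, Gamma, p)           # error-and-erasure locator Psi = Lambda * Gamma
print(Gamma, Psi)                         # [1, 9] [1, 5, 9, 9]
```

The procedure above then runs with Ψ (or with Γ alone, when only erasures occur) playing the role of Λ.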
See also
- BCH code
- Reed–Solomon error correction
References
- Forney, G. (October 1965), "On Decoding BCH Codes", IEEE Transactions on Information Theory 11 (4): 549–557, doi:10.1109/TIT.1965.1053825, ISSN 0018-9448
- Gill, John (n.d.), EE387 Notes #7, Handout #28, Stanford University, pp. 42–45, archived from the original on June 30, 2014, https://web.archive.org/web/20140630172526/http://web.stanford.edu/class/ee387/handouts/notes7.pdf, retrieved April 21, 2010
- W. Wesley Peterson's book