Guruswami–Sudan list decoding algorithm


In coding theory, list decoding is an alternative to unique decoding of error-correcting codes in the presence of many errors. If a code has relative distance [math]\displaystyle{ \delta }[/math], then it is in principle possible to recover an encoded message when up to a [math]\displaystyle{ \delta / 2 }[/math] fraction of the codeword symbols is corrupted. But when the error rate is greater than [math]\displaystyle{ \delta / 2 }[/math], this is in general no longer possible. List decoding overcomes that issue by allowing the decoder to output a short list of messages that might have been encoded, and it can correct more than a [math]\displaystyle{ \delta / 2 }[/math] fraction of errors.

There are many polynomial-time algorithms for list decoding. In this article, we first present an algorithm for Reed–Solomon (RS) codes, due to Madhu Sudan, which corrects up to a [math]\displaystyle{ 1 - \sqrt{2R} }[/math] fraction of errors. Subsequently, we describe the improved Guruswami–Sudan list decoding algorithm, which can correct up to a [math]\displaystyle{ 1 - \sqrt{R} }[/math] fraction of errors.

(Figure: a plot of the rate [math]\displaystyle{ R }[/math] and the correctable distance [math]\displaystyle{ \delta }[/math] for the different decoding algorithms.)

Algorithm 1 (Sudan's list decoding algorithm)

Problem statement

Input : A field [math]\displaystyle{ F }[/math]; n distinct pairs of elements [math]\displaystyle{ {(x_{i},y_{i})_{i=1}^n} }[/math] in [math]\displaystyle{ F \times F }[/math]; and integers [math]\displaystyle{ d }[/math] and [math]\displaystyle{ t }[/math].

Output: A list of all functions [math]\displaystyle{ f: F \to F }[/math] satisfying

[math]\displaystyle{ f(x) }[/math] is a polynomial in [math]\displaystyle{ x }[/math] of degree at most [math]\displaystyle{ d }[/math]

[math]\displaystyle{ \#\{i|f(x_{i}) = y_{i}\} \ge t \qquad (1) }[/math]

To understand Sudan's algorithm better, one may first want to know the Berlekamp–Welch algorithm, which can be considered the earlier, fundamental version of the list-decoding algorithms for RS codes. Welch and Berlekamp initially came up with an algorithm that can solve the problem in polynomial time with the threshold [math]\displaystyle{ t \ge (n+d+1)/2 }[/math]. The mechanism of Sudan's algorithm is almost the same as that of the Berlekamp–Welch algorithm, except that in Step 1 one computes a bivariate polynomial of bounded [math]\displaystyle{ (1, d) }[/math]-weighted degree. Sudan's list decoding algorithm for Reed–Solomon codes, which is an improvement on the Berlekamp–Welch algorithm, can solve the problem with [math]\displaystyle{ t \approx \sqrt{2nd} }[/math]. This bound is better than the unique decoding bound [math]\displaystyle{ \frac{1-R}{2} }[/math] for [math]\displaystyle{ R \lt 0.07 }[/math].
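As a rough numerical illustration of these two agreement thresholds (a small sketch only; the values of n and d below are arbitrary sample parameters, not taken from the text):

```python
import math

# Arbitrary example parameters: n evaluation points, degree bound d (rate ~ d/n = 0.05).
n, d = 1000, 50

# Berlekamp-Welch (unique decoding) needs agreement t >= (n + d + 1) / 2.
t_unique = (n + d + 1) / 2

# Sudan's list decoder needs agreement roughly t >= sqrt(2 * n * d).
t_sudan = math.sqrt(2 * n * d)

print(f"unique decoding threshold:     t >= {t_unique:.1f}")
print(f"Sudan list decoding threshold: t ~  {t_sudan:.1f}")
# For these low-rate parameters the list decoder needs far fewer agreements.
```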

Algorithm

Definition 1 (weighted degree)

For weights [math]\displaystyle{ w_x,w_y \in \mathbb{Z}^+ }[/math], the [math]\displaystyle{ (w_x,w_y) }[/math] – weighted degree of monomial [math]\displaystyle{ q_{ij}x^i y^j }[/math] is [math]\displaystyle{ iw_x + jw_y }[/math]. The [math]\displaystyle{ (w_x,w_y) }[/math] – weighted degree of a polynomial [math]\displaystyle{ Q(x,y) = \sum_{ij} q_{ij}x^iy^j }[/math] is the maximum, over the monomials with non-zero coefficients, of the [math]\displaystyle{ (w_x,w_y) }[/math] – weighted degree of the monomial.

For example, [math]\displaystyle{ xy^2 }[/math] has [math]\displaystyle{ (1,3) }[/math]-weighted degree [math]\displaystyle{ 1 \cdot 1 + 2 \cdot 3 = 7 }[/math].
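The weighted degree is easy to compute mechanically. The following small sketch represents a bivariate polynomial as a dictionary from exponent pairs (i, j) to coefficients (a representation chosen only for illustration) and evaluates Definition 1:

```python
def weighted_degree(poly, wx, wy):
    """(wx, wy)-weighted degree of a bivariate polynomial.

    `poly` maps exponent pairs (i, j) to non-zero coefficients,
    i.e. it represents sum_{(i,j)} poly[(i, j)] * x^i * y^j.
    """
    return max(i * wx + j * wy for (i, j), c in poly.items() if c != 0)

# Example from the text: x * y^2 has (1, 3)-weighted degree 1*1 + 2*3 = 7.
print(weighted_degree({(1, 2): 1}, 1, 3))  # -> 7
```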

Algorithm:

Inputs: [math]\displaystyle{ n,d,t }[/math]; {[math]\displaystyle{ (x_1,y_1)\cdots(x_n,y_n) }[/math]} /* Parameters l,m to be set later. */

Step 1: Find a non-zero bivariate polynomial [math]\displaystyle{ Q : F^2 \to F }[/math] satisfying

  • [math]\displaystyle{ Q(x,y) }[/math] has [math]\displaystyle{ (1,d) }[/math]-weighted degree at most [math]\displaystyle{ D=m+ld }[/math]
  • For every [math]\displaystyle{ i \in [n] }[/math], [math]\displaystyle{ Q(x_i,y_i) = 0 \qquad (2) }[/math]

Step 2: Factor Q into irreducible factors.

Step 3: Output all the polynomials [math]\displaystyle{ f }[/math] such that [math]\displaystyle{ (y- f(x)) }[/math] is a factor of Q and [math]\displaystyle{ f(x_i) = y_i }[/math] for at least t values of [math]\displaystyle{ i \in [n] }[/math].
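A minimal sketch of Steps 2 and 3 follows. For simplicity it uses SymPy and factors over the integers rather than over the finite field F, and the polynomial Q, the point set and the threshold t are small hand-made placeholders rather than the output of Step 1:

```python
from sympy import symbols, factor_list, degree, solve

x, y = symbols("x y")

# Hand-made stand-ins for the output of Step 1 and the decoder inputs.
Q = ((y - (x + 1)) * (y - 2 * x**2)).expand()
pairs = [(0, 1), (1, 2), (2, 3), (3, 4)]   # (x_i, y_i); agrees with f(x) = x + 1
t = 3

# Step 2: factor Q into irreducible factors (over ZZ here, over F in general).
_, factors = factor_list(Q, x, y)

# Step 3: keep the factors of the form y - f(x) and output every f that
# agrees with at least t of the received pairs.
candidates = []
for fac, _mult in factors:
    if degree(fac, y) == 1:
        f = solve(fac, y)[0]          # the root in y, i.e. the candidate f(x)
        candidates.append(f)

output = [f for f in candidates
          if sum(1 for xi, yi in pairs if f.subs(x, xi) == yi) >= t]
print(output)   # -> [x + 1]
```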

Analysis

One has to prove that the above algorithm runs in polynomial time and outputs the correct result. That can be done by proving the following set of claims.

Claim 1:

If a function [math]\displaystyle{ Q : F^2 \to F }[/math] satisfying (2) exists, then one can find it in polynomial time.

Proof:

Note that a bivariate polynomial [math]\displaystyle{ Q(x, y ) }[/math] of [math]\displaystyle{ (1, d) }[/math]-weighted degree at most [math]\displaystyle{ D }[/math] can be uniquely written as [math]\displaystyle{ Q(x,y) = \sum_{j=0}^l \sum_{k=0}^{m+(l-j)d} q_{kj} x^k y^j }[/math]. Then one has to find the coefficients [math]\displaystyle{ q_{kj} }[/math] satisfying the constraints [math]\displaystyle{ \sum_{j=0}^l \sum_{k=0}^{m+(l-j)d} q_{kj} x_i^k y_i^j = 0 }[/math], for every [math]\displaystyle{ i \in [n] }[/math]. This is a linear set of equations in the unknowns {[math]\displaystyle{ q_{kj} }[/math]}. One can find a solution using Gaussian elimination in polynomial time.
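A minimal sketch of this step over a prime field GF(p), assuming illustrative values for p, d, l, m and the interpolation points (none of which come from the text):

```python
# Set up the homogeneous linear system of Claim 1 for the coefficients q_{kj}
# and find a non-zero solution by Gaussian elimination mod p.
p = 13

def interpolation_matrix(points, d, l, m):
    """One row per point (x_i, y_i); one column per monomial x^k y^j
    with 0 <= j <= l and 0 <= k <= m + (l - j) * d."""
    monomials = [(k, j) for j in range(l + 1) for k in range(m + (l - j) * d + 1)]
    rows = [[pow(xi, k, p) * pow(yi, j, p) % p for (k, j) in monomials]
            for (xi, yi) in points]
    return rows, monomials

def nonzero_nullspace_vector(rows, ncols):
    """Gaussian elimination mod p; returns a non-zero q with A q = 0
    (such a q exists whenever ncols > number of rows)."""
    rows = [row[:] for row in rows]
    pivot_row_of_col = {}
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue                       # c is a free column
        rows[r], rows[pivot] = rows[pivot], rows[r]
        inv = pow(rows[r][c], -1, p)
        rows[r] = [v * inv % p for v in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(vi - f * vr) % p for vi, vr in zip(rows[i], rows[r])]
        pivot_row_of_col[c] = r
        r += 1
    free = next(c for c in range(ncols) if c not in pivot_row_of_col)
    q = [0] * ncols
    q[free] = 1                            # one free variable set to 1
    for c, row in pivot_row_of_col.items():
        q[c] = (-rows[row][free]) % p      # pivot variables follow from the RREF
    return q

# Four illustrative points, degree bound d = 1 and small parameters l = 1, m = 2,
# giving 7 unknowns but only 4 constraints (cf. Claim 2).
points = [(0, 1), (1, 2), (2, 3), (3, 5)]
rows, monomials = interpolation_matrix(points, d=1, l=1, m=2)
q = nonzero_nullspace_vector(rows, len(monomials))
print(list(zip(monomials, q)))             # coefficients q_{kj} of a non-zero Q(x, y)
```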

Claim 2:

If [math]\displaystyle{ (m+1)(l+1)+d \begin{pmatrix}l + 1\\2\end{pmatrix} \gt n }[/math] then there exists a function [math]\displaystyle{ Q(x,y) }[/math] satisfying (2)

Proof:

To ensure that a non-zero solution exists, the number of coefficients of [math]\displaystyle{ Q(x,y) }[/math] should be greater than the number of constraints. Write [math]\displaystyle{ Q(x,y) = \sum_{j=0}^l \sum_{k=0}^{m+(l-j)d} q_{kj} x^k y^j }[/math] as in Claim 1, so that the [math]\displaystyle{ (1,d) }[/math]-weighted degree of [math]\displaystyle{ Q(x,y) }[/math] is at most [math]\displaystyle{ m+ld }[/math]. The linear system is homogeneous: setting every [math]\displaystyle{ q_{kj} = 0 }[/math] satisfies all linear constraints, but this does not satisfy (2), since the solution must not be identically zero. Counting the coefficients, the number of unknowns in the linear system is [math]\displaystyle{ \sum_{j=0}^l (m+(l-j)d+1) = (m+1)(l+1)+d \begin{pmatrix}l + 1\\2\end{pmatrix} }[/math]. Since by assumption this value is greater than n, the number of constraints, there are more variables than constraints and therefore a non-zero solution exists.
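As a quick sanity check on this count (with arbitrary sample values for d, l and m), one can enumerate the monomials of Q(x,y) directly and compare with the closed-form expression:

```python
from math import comb

def count_coefficients(d, l, m):
    """Number of monomials x^k y^j with 0 <= j <= l and 0 <= k <= m + (l - j) * d."""
    return sum(m + (l - j) * d + 1 for j in range(l + 1))

d, l, m = 3, 4, 7   # arbitrary sample parameters
direct = count_coefficients(d, l, m)
closed_form = (m + 1) * (l + 1) + d * comb(l + 1, 2)
print(direct, closed_form, direct == closed_form)   # -> 70 70 True
```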

Claim 3:

If [math]\displaystyle{ Q(x,y) }[/math] is a function satisfying (2), [math]\displaystyle{ f(x) }[/math] is a function satisfying (1), and [math]\displaystyle{ t\gt m+ld }[/math], then [math]\displaystyle{ (y-f(x)) }[/math] divides [math]\displaystyle{ Q(x,y) }[/math].

Proof:

Consider the function [math]\displaystyle{ p(x) = Q(x,f(x)) }[/math]. This is a polynomial in [math]\displaystyle{ x }[/math], and we argue that it has degree at most [math]\displaystyle{ m+ld }[/math]. Consider any monomial [math]\displaystyle{ q_{kj}x^k y^j }[/math] of [math]\displaystyle{ Q(x,y) }[/math]. Since [math]\displaystyle{ Q }[/math] has [math]\displaystyle{ (1,d) }[/math]-weighted degree at most [math]\displaystyle{ m+ld }[/math], one can say that [math]\displaystyle{ k+jd \le m+ld }[/math]. Since [math]\displaystyle{ f(x) }[/math] has degree at most [math]\displaystyle{ d }[/math], the term [math]\displaystyle{ q_{kj}x^kf(x)^j }[/math] is a polynomial in [math]\displaystyle{ x }[/math] of degree at most [math]\displaystyle{ k+jd \le m+ld }[/math]. Thus [math]\displaystyle{ p(x) }[/math] has degree at most [math]\displaystyle{ m+ld }[/math].

Next, we argue that [math]\displaystyle{ p(x) }[/math] is identically zero. Whenever [math]\displaystyle{ y_i = f(x_i) }[/math], we have [math]\displaystyle{ p(x_i) = Q(x_i,f(x_i)) = Q(x_i,y_i) = 0 }[/math]; by assumption this happens for at least [math]\displaystyle{ t \gt m+ld }[/math] values of [math]\displaystyle{ i }[/math], so [math]\displaystyle{ p(x_i) }[/math] is zero for strictly more than [math]\displaystyle{ m+ld }[/math] points. Thus [math]\displaystyle{ p }[/math] has more zeroes than its degree and hence is identically zero, implying [math]\displaystyle{ Q(x,f(x)) \equiv 0 }[/math]. Dividing [math]\displaystyle{ Q(x,y) }[/math] by [math]\displaystyle{ (y-f(x)) }[/math] as a polynomial in [math]\displaystyle{ y }[/math], the remainder is [math]\displaystyle{ Q(x,f(x)) \equiv 0 }[/math], so [math]\displaystyle{ (y-f(x)) }[/math] divides [math]\displaystyle{ Q(x,y) }[/math].

Finding optimal values for [math]\displaystyle{ m }[/math] and [math]\displaystyle{ l }[/math]: recall that we need [math]\displaystyle{ m+ld \lt t }[/math] and [math]\displaystyle{ (m+1)(l+1)+d \begin{pmatrix}l + 1\\2\end{pmatrix} \gt n }[/math].

For a given value of [math]\displaystyle{ l }[/math], one can compute the smallest [math]\displaystyle{ m }[/math] for which the second condition holds. Rearranging the second condition, it suffices (ignoring integrality) to take [math]\displaystyle{ m = \left(n+1-d \begin{pmatrix}l + 1\\2\end{pmatrix}\right)\Big/(l+1) - 1 }[/math]. Substituting this value of [math]\displaystyle{ m }[/math] into the first condition, [math]\displaystyle{ t }[/math] must satisfy [math]\displaystyle{ t \gt \frac{n+1}{l+1} + \frac{dl}{2} - 1 }[/math].

Next, minimize this expression over the unknown parameter [math]\displaystyle{ l }[/math]. Taking the derivative with respect to [math]\displaystyle{ l }[/math] and equating it to zero gives [math]\displaystyle{ l = \sqrt{\frac{2(n+1)}{d}} -1 }[/math].

Substituting this value of [math]\displaystyle{ l }[/math] back into [math]\displaystyle{ m }[/math] and [math]\displaystyle{ t }[/math]:

[math]\displaystyle{ m = \sqrt{\frac{(n+1)d}{2}} - \sqrt{\frac {(n+1)d}{2}} + \frac{d}{2} - 1 = \frac{d}{2} -1 }[/math]

[math]\displaystyle{ t \gt m + ld = \sqrt{2(n+1)d} - \frac {d}{2} -1 }[/math]
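The following small sketch computes these parameter choices for arbitrary sample values of n and d and compares the resulting agreement threshold with the closed-form estimate (rounding l to a nearby integer is a simplification; adjacent integer values of l can occasionally be marginally better):

```python
from math import sqrt, floor, comb

def sudan_parameters(n, d):
    """Parameter choices derived above: returns (l, m, t_min), where t_min is
    the smallest agreement t satisfying t > m + l*d."""
    l = max(1, round(sqrt(2 * (n + 1) / d)) - 1)     # l ~ sqrt(2(n+1)/d) - 1
    # smallest m with (m+1)(l+1) + d*C(l+1, 2) > n
    m = max(0, floor((n - d * comb(l + 1, 2)) / (l + 1)))
    t_min = m + l * d + 1
    return l, m, t_min

# Arbitrary sample parameters: n points, degree bound d.
n, d = 1000, 50
l, m, t_min = sudan_parameters(n, d)
print(l, m, t_min)                                           # -> 5 41 292
print("closed-form estimate:", sqrt(2 * (n + 1) * d) - d / 2 - 1)   # ~290.4
```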

Algorithm 2 (Guruswami–Sudan list decoding algorithm)

Definition

Consider an [math]\displaystyle{ (n,k) }[/math] Reed–Solomon code over the finite field [math]\displaystyle{ \mathbb{F} = GF(q) }[/math] with evaluation set [math]\displaystyle{ (\alpha_1,\alpha_2,\ldots,\alpha_n) }[/math] and a positive integer [math]\displaystyle{ r }[/math]. The Guruswami–Sudan list decoder accepts a vector [math]\displaystyle{ \beta = (\beta_1,\beta_2,\ldots,\beta_n) \in \mathbb{F}^n }[/math] as input, and outputs a list of polynomials of degree less than [math]\displaystyle{ k }[/math], which are in one-to-one correspondence with codewords.

The idea is to place more restrictions on the bivariate polynomial [math]\displaystyle{ Q(x,y) }[/math]: it is required to vanish with multiplicity [math]\displaystyle{ r }[/math] at each interpolation point. This increases the number of constraints, but it also increases the number of roots guaranteed at each point of agreement.

Multiplicity

A bivariate polynomial [math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity [math]\displaystyle{ r }[/math] at [math]\displaystyle{ (0,0) }[/math] if [math]\displaystyle{ Q(x,y) }[/math] has no monomial of total degree less than [math]\displaystyle{ r }[/math], where the total degree of a monomial [math]\displaystyle{ x^i y^j }[/math] is [math]\displaystyle{ i + j }[/math].

For example, let [math]\displaystyle{ Q(x,y) = y - 4x^2 }[/math]. Its monomial of lowest total degree is [math]\displaystyle{ y }[/math], of degree 1, so [math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity 1 at (0,0).

Let [math]\displaystyle{ Q(x,y) = y + 6x^2 }[/math]. Again the monomial of lowest total degree is [math]\displaystyle{ y }[/math], so [math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity 1 at (0,0).

Let [math]\displaystyle{ Q(x,y) = (y - 4x^2) (y + 6x^2) = y^2 + 2x^2y -24x^4 }[/math]. Its monomial of lowest total degree is [math]\displaystyle{ y^2 }[/math], of degree 2, so [math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity 2 at (0,0).

Similarly, if [math]\displaystyle{ Q(x,y) = [(y - \beta) - 4(x - \alpha)^2] [(y - \beta) + 6(x - \alpha)^2] }[/math], then [math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity 2 at [math]\displaystyle{ (\alpha,\beta) }[/math].

General definition of multiplicity

[math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity [math]\displaystyle{ r }[/math] at a point [math]\displaystyle{ (\alpha,\beta) }[/math] if the shifted polynomial [math]\displaystyle{ Q(x+\alpha,y+\beta) }[/math] has a zero of multiplicity [math]\displaystyle{ r }[/math] at [math]\displaystyle{ (0,0) }[/math], i.e. if it has no monomial of total degree less than [math]\displaystyle{ r }[/math].
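This definition can be checked mechanically by shifting the point to the origin and reading off the smallest total degree of a monomial. A small sketch using SymPy, applied to the examples above:

```python
from sympy import symbols, expand, Poly

x, y = symbols("x y")

def multiplicity(Q, a, b):
    """Multiplicity of the zero of Q(x, y) at (a, b): the smallest total degree
    of a monomial of the shifted polynomial Q(x + a, y + b)."""
    shifted = Poly(expand(Q.subs({x: x + a, y: y + b})), x, y)
    return min(i + j for (i, j), c in shifted.terms())

# The examples from the text:
print(multiplicity(y - 4 * x**2, 0, 0))                                              # -> 1
print(multiplicity((y - 4 * x**2) * (y + 6 * x**2), 0, 0))                           # -> 2
print(multiplicity(((y - 3) - 4 * (x - 2)**2) * ((y - 3) + 6 * (x - 2)**2), 2, 3))   # -> 2
```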

Algorithm

Let the transmitted codeword be [math]\displaystyle{ ( f(\alpha_1), f(\alpha_2),\ldots,f(\alpha_n)) }[/math], let [math]\displaystyle{ (\alpha_1,\alpha_2,\ldots,\alpha_n) }[/math] be the support set (evaluation points) of the transmitted codeword, and let the received word be [math]\displaystyle{ (\beta_1,\beta_2,\ldots,\beta_n) }[/math].

The algorithm is as follows:

Interpolation step

For the received vector [math]\displaystyle{ (\beta_1,\beta_2,\ldots,\beta_n) }[/math], construct a non-zero bivariate polynomial [math]\displaystyle{ Q(x,y) }[/math] with [math]\displaystyle{ (1,k-1) }[/math]-weighted degree at most [math]\displaystyle{ D }[/math] such that [math]\displaystyle{ Q }[/math] has a zero of multiplicity [math]\displaystyle{ r }[/math] at each of the points [math]\displaystyle{ (\alpha_i,\beta_i) }[/math], [math]\displaystyle{ 1 \le i \le n }[/math]; in particular,

[math]\displaystyle{ Q(\alpha_i,\beta_i) = 0 \, }[/math]

Factorization step

Find all the factors of [math]\displaystyle{ Q(x,y) }[/math] of the form [math]\displaystyle{ y - p(x) }[/math], where [math]\displaystyle{ p(x) }[/math] is a polynomial of degree less than [math]\displaystyle{ k }[/math], such that [math]\displaystyle{ p(\alpha_i) = \beta_i }[/math] for at least [math]\displaystyle{ t }[/math] values of [math]\displaystyle{ i }[/math], [math]\displaystyle{ 1 \le i \le n }[/math].

Recall that polynomials of degree less than [math]\displaystyle{ k }[/math] are in one-to-one correspondence with codewords. Hence, this step outputs the list of codewords.

Analysis

Interpolation step

Lemma: The requirement that [math]\displaystyle{ Q(x,y) }[/math] have a zero of multiplicity [math]\displaystyle{ r }[/math] at a single point imposes [math]\displaystyle{ \begin{pmatrix}r + 1\\2\end{pmatrix} }[/math] constraints on the coefficients [math]\displaystyle{ a_{i,j} }[/math].

Let [math]\displaystyle{ Q(x,y) = \sum_{i = 0, j = 0} ^{i = m, j = p} a_{i,j} x^i y^j }[/math] where [math]\displaystyle{ \deg_x Q(x,y) = m }[/math] and [math]\displaystyle{ \deg_y Q(x,y) = p }[/math]

Then [math]\displaystyle{ Q(x + \alpha, y + \beta) = \sum_{u,v} Q_{u,v} (\alpha, \beta)\, x^{u} y^{v} \qquad }[/math] (Equation 1)

where [math]\displaystyle{ Q_{u,v} (x, y) = \sum_{i = u}^{m} \sum_{j = v}^{p} \begin{pmatrix}i\\u\end{pmatrix} \begin{pmatrix}j\\v\end{pmatrix} a_{i,j}\, x^{i-u} y^{j-v} }[/math]

Proof of Equation 1:

[math]\displaystyle{ Q(x + \alpha,y + \beta) = \sum_{i,j} a_{i,j} (x + \alpha)^i (y + \beta)^j }[/math]
[math]\displaystyle{ = \sum_{i,j} a_{i,j} \Bigg ( \sum_u \begin{pmatrix}i\\u\end{pmatrix} x^u \alpha^{i-u} \Bigg ) \Bigg ( \sum_v \begin{pmatrix}j\\v\end{pmatrix} y^v \beta^{j-v} \Bigg ) }[/math] (using the binomial expansion)
[math]\displaystyle{ = \sum_{u,v} x^u y^v \Bigg ( \sum_{i,j} \begin{pmatrix}i\\u\end{pmatrix} \begin{pmatrix}j\\v \end{pmatrix} a_{i,j} \alpha^{i-u} \beta^{j-v} \Bigg ) }[/math]
[math]\displaystyle{ = \sum_{u,v} Q_{u,v} (\alpha, \beta)\, x^u y^v }[/math]

Proof of Lemma:

The polynomial [math]\displaystyle{ Q(x, y) }[/math] has a zero of multiplicity [math]\displaystyle{ r }[/math] at [math]\displaystyle{ (\alpha,\beta) }[/math] if and only if

[math]\displaystyle{ Q_{u,v} (\alpha,\beta) = 0 }[/math] for all [math]\displaystyle{ u, v \ge 0 }[/math] with [math]\displaystyle{ 0 \le u + v \le r - 1 }[/math].

For each [math]\displaystyle{ v }[/math] with [math]\displaystyle{ 0 \le v \le r-1 }[/math], [math]\displaystyle{ u }[/math] can take [math]\displaystyle{ r - v }[/math] values. Thus, the total number of constraints is

[math]\displaystyle{ \sum_{v = 0}^{r-1} (r - v) = \begin{pmatrix}r + 1\\2\end{pmatrix} }[/math]

Thus, [math]\displaystyle{ \begin{pmatrix}r + 1\\2\end{pmatrix} }[/math] pairs [math]\displaystyle{ (u,v) }[/math] can be selected, and each selection imposes one linear constraint on the coefficients [math]\displaystyle{ a_{i,j} }[/math].
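A small sketch, assuming SymPy, that computes the coefficient polynomials Q_{u,v} from Equation 1 and checks the constraint count on one of the multiplicity examples above:

```python
from math import comb
from sympy import symbols, Poly, S

x, y = symbols("x y")

def Q_uv(Q, u, v):
    """Q_{u,v}(x, y) = sum_{i,j} C(i,u) C(j,v) a_{i,j} x^{i-u} y^{j-v}."""
    terms = Poly(Q, x, y).terms()
    return sum((comb(i, u) * comb(j, v) * c * x**(i - u) * y**(j - v)
                for (i, j), c in terms if i >= u and j >= v), S.Zero)

def multiplicity_constraints(r):
    """All pairs (u, v) with u + v <= r - 1; there are C(r+1, 2) of them."""
    return [(u, v) for v in range(r) for u in range(r - v)]

Q = (y - 4 * x**2) * (y + 6 * x**2)   # has a zero of multiplicity 2 at (0, 0)
r = 2
pairs = multiplicity_constraints(r)
print(len(pairs), comb(r + 1, 2))     # both equal 3
# Q has a zero of multiplicity r at (0, 0) iff Q_{u,v}(0, 0) = 0 for all of them:
print(all(Q_uv(Q, u, v).subs({x: 0, y: 0}) == 0 for u, v in pairs))   # -> True
```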

Factorization step

Proposition:

[math]\displaystyle{ Q(x, p(x)) \equiv 0 }[/math] if [math]\displaystyle{ y - p(x) }[/math] is a factor of [math]\displaystyle{ Q(x,y) }[/math]

Proof:

Since [math]\displaystyle{ y - p(x) }[/math] is a factor of [math]\displaystyle{ Q(x,y) }[/math], divide [math]\displaystyle{ Q(x,y) }[/math] by [math]\displaystyle{ y - p(x) }[/math], viewed as a polynomial in [math]\displaystyle{ y }[/math], to write

[math]\displaystyle{ Q(x,y) = L(x,y) (y - p(x)) + R(x) }[/math]

where [math]\displaystyle{ L(x,y) }[/math] is the quotient and [math]\displaystyle{ R(x) }[/math] is the remainder, which has degree zero in [math]\displaystyle{ y }[/math]. Because [math]\displaystyle{ y - p(x) }[/math] is a factor, [math]\displaystyle{ R(x) \equiv 0 }[/math]. Replacing [math]\displaystyle{ y }[/math] by [math]\displaystyle{ p(x) }[/math] then gives [math]\displaystyle{ Q(x, p(x)) = L(x,p(x)) \cdot 0 \equiv 0 }[/math].

Theorem:

If [math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity [math]\displaystyle{ r }[/math] at [math]\displaystyle{ (\alpha,\beta) }[/math] and [math]\displaystyle{ p(\alpha) = \beta }[/math], then [math]\displaystyle{ (x - \alpha) ^r }[/math] is a factor of [math]\displaystyle{ Q(x,p(x)) }[/math].

Proof:

Applying Equation 1 with [math]\displaystyle{ x }[/math] replaced by [math]\displaystyle{ x - \alpha }[/math] and [math]\displaystyle{ y }[/math] by [math]\displaystyle{ y - \beta }[/math],

[math]\displaystyle{ Q(x, y) = \sum_{u,v} Q_{u,v} (\alpha, \beta)\, (x - \alpha)^{u} (y - \beta)^{v} }[/math]

[math]\displaystyle{ Q(x, p(x)) = \sum_{u,v} Q_{u,v} (\alpha, \beta)\, (x - \alpha)^{u} (p(x) - \beta)^{v} }[/math]

Since [math]\displaystyle{ p(\alpha) = \beta }[/math], [math]\displaystyle{ (x - \alpha) }[/math] divides [math]\displaystyle{ p(x) - \beta }[/math], i.e. [math]\displaystyle{ (p(x) - \beta) \bmod (x - \alpha) = 0 }[/math].

Hence [math]\displaystyle{ (x - \alpha)^{u} (p(x) - \beta)^{v} }[/math] is divisible by [math]\displaystyle{ (x - \alpha) ^{u+v} }[/math]. Since [math]\displaystyle{ Q(x,y) }[/math] has a zero of multiplicity [math]\displaystyle{ r }[/math] at [math]\displaystyle{ (\alpha,\beta) }[/math], [math]\displaystyle{ Q_{u,v}(\alpha,\beta) = 0 }[/math] whenever [math]\displaystyle{ u + v \le r - 1 }[/math], so every surviving term has [math]\displaystyle{ u + v \ge r }[/math]. Thus, [math]\displaystyle{ (x - \alpha) ^r }[/math] is a factor of [math]\displaystyle{ Q(x,p(x)) }[/math].
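A quick numeric sanity check of this theorem, using SymPy and the shifted multiplicity-2 example from the multiplicity section (the choice p(x) = x + 1 is simply an arbitrary polynomial with p(α) = β):

```python
from sympy import symbols, expand, rem

x, y = symbols("x y")

# Q has a zero of multiplicity r = 2 at (alpha, beta) = (2, 3),
# and p satisfies p(alpha) = beta.
alpha, beta, r = 2, 3, 2
Q = ((y - beta) - 4 * (x - alpha)**2) * ((y - beta) + 6 * (x - alpha)**2)
p = x + 1                                  # p(2) = 3 = beta

Q_on_curve = expand(Q.subs(y, p))          # Q(x, p(x))
print(rem(Q_on_curve, (x - alpha)**r, x))  # -> 0, so (x - alpha)^2 divides Q(x, p(x))
```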

Combining the two steps: by the theorem above, each of the at least [math]\displaystyle{ t }[/math] points with [math]\displaystyle{ p(\alpha_i) = \beta_i }[/math] contributes a factor [math]\displaystyle{ (x - \alpha_i)^r }[/math] to [math]\displaystyle{ Q(x, p(x)) }[/math], while [math]\displaystyle{ Q(x, p(x)) }[/math] has degree at most [math]\displaystyle{ D }[/math]. Hence [math]\displaystyle{ Q(x, p(x)) \equiv 0 }[/math], and [math]\displaystyle{ y - p(x) }[/math] is a factor of [math]\displaystyle{ Q(x,y) }[/math], whenever

[math]\displaystyle{ t \cdot r \gt D }[/math], i.e. [math]\displaystyle{ t \gt \frac{D} {r} }[/math]

For the interpolation step to have a non-zero solution, the number of coefficients of [math]\displaystyle{ Q(x,y) }[/math] must exceed the total number of constraints:

[math]\displaystyle{ \frac{D(D+2)} {2(k-1)} \gt n\begin{pmatrix}r + 1\\2\end{pmatrix} }[/math]

where the left-hand side is a lower bound on the number of coefficients of a polynomial of [math]\displaystyle{ (1,k-1) }[/math]-weighted degree at most [math]\displaystyle{ D }[/math], and the right-hand side is the number of constraints per point from the Lemma proved earlier.

Approximating [math]\displaystyle{ k-1 }[/math] by [math]\displaystyle{ k }[/math], this is satisfied by choosing

[math]\displaystyle{ D = \sqrt{knr(r+1)} \, }[/math]

Therefore, [math]\displaystyle{ t \gt \frac{D}{r} = \sqrt{kn \left(1 + \frac{1}{r}\right)} }[/math]

Substituting [math]\displaystyle{ r = 2kn }[/math],

[math]\displaystyle{ t \gt \sqrt{kn + \tfrac{1}{2}} \gt \sqrt{kn} }[/math]

Hence the Guruswami–Sudan list decoding algorithm can list decode Reed–Solomon codes whenever the agreement [math]\displaystyle{ t }[/math] exceeds [math]\displaystyle{ \sqrt{kn} }[/math], i.e. up to a [math]\displaystyle{ 1 - \sqrt{R} }[/math] fraction of errors.
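For concreteness, the following sketch evaluates the resulting agreement threshold for arbitrary sample code parameters and several multiplicities r, and compares it with unique decoding (approximating k - 1 by k, as above):

```python
from math import sqrt, floor

def gs_agreement_threshold(n, k, r):
    """Smallest integer agreement t with t > sqrt(k*n*(1 + 1/r)),
    approximating k - 1 by k as in the derivation above."""
    return floor(sqrt(k * n * (1 + 1 / r))) + 1

n, k = 255, 32                      # arbitrary sample RS code parameters
for r in (1, 2, 4, 16):
    t = gs_agreement_threshold(n, k, r)
    print(f"r = {r:2d}: decode whenever agreement t >= {t}  (up to {n - t} errors)")
print("unique decoding corrects up to", (n - k) // 2, "errors")
```

For these sample parameters, increasing the multiplicity r pushes the correctable error count well past the unique decoding radius, approaching the [math]\displaystyle{ 1 - \sqrt{R} }[/math] limit.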
