Sterbenz lemma

Short description: Exact floating-point subtraction theorem

In floating-point arithmetic, the Sterbenz lemma or Sterbenz's lemma[1] is a theorem giving conditions under which floating-point differences are computed exactly. It is named after Pat H. Sterbenz, who published a variant of it in 1974.[2]

Sterbenz lemma — In a floating-point number system with subnormal numbers, if [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are floating-point numbers such that

[math]\displaystyle{ \frac{y}{2} \leq x \leq 2 y, }[/math]

then [math]\displaystyle{ x - y }[/math] is also a floating-point number. Thus, a correctly rounded floating-point subtraction

[math]\displaystyle{ x \ominus y = \operatorname{fl}(x - y) = x - y }[/math]

is computed exactly.

The Sterbenz lemma applies to IEEE 754, the most widely used floating-point number system in computers.
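
For illustration, the following Python sketch (assuming IEEE 754 binary64 arithmetic, which CPython's float provides; it is not part of the cited sources, and the helper name sterbenz_holds is chosen here for exposition) checks the lemma empirically by comparing the floating-point difference against the exact rational difference:

    import random
    from fractions import Fraction

    def sterbenz_holds(x: float, y: float) -> bool:
        # Compare the floating-point difference with the exact rational difference.
        return Fraction(x - y) == Fraction(x) - Fraction(y)

    random.seed(0)
    for _ in range(100_000):
        y = random.uniform(1e-6, 1e6)
        x = random.uniform(y / 2, 2 * y)
        x = min(max(x, y / 2), 2 * y)   # guard against rounding at the interval ends
        assert sterbenz_holds(x, y)     # never fires: the subtraction is always exact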

Proof

Let [math]\displaystyle{ \beta }[/math] be the radix of the floating-point system and [math]\displaystyle{ p }[/math] the precision.

Consider several easy cases first:

  • If [math]\displaystyle{ x }[/math] is zero then [math]\displaystyle{ x - y = -y }[/math], and if [math]\displaystyle{ y }[/math] is zero then [math]\displaystyle{ x - y = x }[/math], so the result is trivial because floating-point negation is always exact.
  • If [math]\displaystyle{ x = y }[/math] the result is zero and thus exact.
  • If [math]\displaystyle{ x \lt 0 }[/math] then we must also have [math]\displaystyle{ y/2 \leq x \lt 0 }[/math] so [math]\displaystyle{ y \lt 0 }[/math]. In this case, [math]\displaystyle{ x - y = -\bigl((-x) - (-y)\bigr) }[/math], so the result follows from the theorem restricted to [math]\displaystyle{ x, y \geq 0 }[/math].
  • If [math]\displaystyle{ x \leq y }[/math], we can write [math]\displaystyle{ x - y = -(y - x) }[/math] with [math]\displaystyle{ x/2 \leq y \leq 2 x }[/math], so the result follows from the theorem restricted to [math]\displaystyle{ x \geq y }[/math].

For the rest of the proof, assume [math]\displaystyle{ 0 \lt y \lt x \leq 2 y }[/math] without loss of generality.

Write [math]\displaystyle{ x, y \gt 0 }[/math] in terms of their positive integral significands [math]\displaystyle{ s_x, s_y \leq \beta^p - 1 }[/math] and minimal exponents [math]\displaystyle{ e_x, e_y }[/math]:

[math]\displaystyle{ \begin{align} x &= s_x \cdot \beta^{e_x - p + 1} \\ y &= s_y \cdot \beta^{e_y - p + 1} \end{align} }[/math]

Note that [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] may be subnormal—we do not assume [math]\displaystyle{ s_x, s_y \geq \beta^{p - 1} }[/math].

The subtraction gives:

[math]\displaystyle{ \begin{align} x - y &= s_x \cdot \beta^{e_x - p + 1} - s_y \cdot \beta^{e_y - p + 1} \\ &= s_x \beta^{e_x - e_y} \cdot \beta^{e_y - p + 1} - s_y \cdot \beta^{e_y - p + 1} \\ &= (s_x \beta^{e_x - e_y} - s_y) \cdot \beta^{e_y - p + 1}. \end{align} }[/math]

Let [math]\displaystyle{ s' = s_x \beta^{e_x - e_y} - s_y }[/math]. Since [math]\displaystyle{ 0 \lt y \lt x }[/math] we have:

  • [math]\displaystyle{ e_y \leq e_x }[/math] (since [math]\displaystyle{ 0 \lt y \lt x }[/math] and the exponents are minimal, the smaller number [math]\displaystyle{ y }[/math] cannot require a larger exponent than [math]\displaystyle{ x }[/math]), so [math]\displaystyle{ e_x - e_y \geq 0 }[/math], from which we can conclude [math]\displaystyle{ \beta^{e_x - e_y} }[/math] is an integer and therefore so is [math]\displaystyle{ s' = s_x \beta^{e_x - e_y} - s_y }[/math]; and
  • [math]\displaystyle{ x - y \gt 0 }[/math], so [math]\displaystyle{ s' \gt 0 }[/math].

Further, since [math]\displaystyle{ x \leq 2 y }[/math], we have [math]\displaystyle{ x - y \leq y }[/math], so that

[math]\displaystyle{ s' \cdot \beta^{e_y - p + 1} = x - y \leq y = s_y \cdot \beta^{e_y - p + 1} }[/math]

which implies that

[math]\displaystyle{ 0 \lt s' \leq s_y \leq \beta^p - 1. }[/math]

Hence

[math]\displaystyle{ x - y = s' \cdot \beta^{e_y - p + 1}, \quad \text{for} \quad 0 \lt s' \leq \beta^p - 1, }[/math]

so [math]\displaystyle{ x - y }[/math] is a floating-point number.
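
The decomposition used in the proof can be made concrete in IEEE 754 binary64 ([math]\displaystyle{ \beta = 2 }[/math], [math]\displaystyle{ p = 53 }[/math]). The short Python sketch below is an illustration rather than part of the cited sources; the example pair and the helper decompose are chosen here. It extracts integral significands and exponents with math.frexp and verifies that [math]\displaystyle{ s' }[/math] is a positive integer that fits in a [math]\displaystyle{ p }[/math]-digit significand:

    import math

    P = 53  # precision of binary64

    def decompose(v: float):
        # Return (s, e) with v == s * 2.0**(e - P + 1) and 0 <= s < 2**P an integer.
        m, exp = math.frexp(v)      # v == m * 2**exp with 0.5 <= m < 1 (or m == 0)
        return int(m * 2 ** P), exp - 1

    x, y = 1.5, 0.8125              # satisfies 0 < y < x <= 2*y
    s_x, e_x = decompose(x)
    s_y, e_y = decompose(y)
    s_prime = s_x * 2 ** (e_x - e_y) - s_y             # the integer s' from the proof
    assert 0 < s_prime <= 2 ** P - 1                   # fits in a p-digit significand
    assert x - y == math.ldexp(s_prime, e_y - P + 1)   # hence x - y is representable and exact

The frexp-based decomposition matches the proof's minimal-exponent convention for normal numbers; subnormal inputs would need the exponent clamped at the format's minimum exponent.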

Note: Even if [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are normal, i.e., [math]\displaystyle{ s_x, s_y \geq \beta^{p - 1} }[/math], we cannot prove that [math]\displaystyle{ s' \geq \beta^{p - 1} }[/math] and therefore cannot prove that [math]\displaystyle{ x - y }[/math] is also normal. For example, the difference of the two smallest positive normal floating-point numbers [math]\displaystyle{ x = (\beta^{p - 1} + 1) \cdot \beta^{e_{\mathrm{min}} - p + 1} }[/math] and [math]\displaystyle{ y = \beta^{p - 1} \cdot \beta^{e_{\mathrm{min}} - p + 1} }[/math] is [math]\displaystyle{ x - y = 1 \cdot \beta^{e_{\mathrm{min}} - p + 1} }[/math], which is necessarily subnormal. In floating-point number systems without subnormal numbers, such as the arithmetic of CPUs running in a nonstandard flush-to-zero mode instead of the standard gradual underflow, the Sterbenz lemma does not apply.
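
This can be observed directly in IEEE 754 binary64 with Python 3.9 or later (a sketch for illustration, not drawn from the cited sources): the exact difference of the two smallest positive normal numbers is the smallest positive subnormal, which a flush-to-zero mode would replace with zero.

    import math
    import sys

    smallest_normal = sys.float_info.min                      # 2**-1022
    next_normal = math.nextafter(smallest_normal, math.inf)   # smallest_normal plus one ulp
    diff = next_normal - smallest_normal                      # exact by the Sterbenz lemma

    print(diff)                                # 5e-324, the smallest positive subnormal
    print(diff == math.ulp(smallest_normal))   # True
    print(diff < smallest_normal)              # True: the exact result is subnormal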

Relation to catastrophic cancellation

The Sterbenz lemma may be contrasted with the phenomenon of catastrophic cancellation:

  • The Sterbenz lemma asserts that if [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are sufficiently close floating-point numbers then their difference [math]\displaystyle{ x - y }[/math] is computed exactly by floating-point arithmetic [math]\displaystyle{ x \ominus y = \operatorname{fl}(x - y) }[/math], with no rounding needed.
  • The phenomenon of catastrophic cancellation is that if [math]\displaystyle{ \tilde x }[/math] and [math]\displaystyle{ \tilde y }[/math] are approximations to true numbers [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] (whether the approximations arise from prior rounding error, from series truncation, from physical uncertainty, or from anything else), the relative error of the computed difference [math]\displaystyle{ \tilde x - \tilde y }[/math] with respect to the desired difference [math]\displaystyle{ x - y }[/math] grows in inverse proportion to the magnitude of [math]\displaystyle{ x - y }[/math]. Thus, the closer [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are, the worse [math]\displaystyle{ \tilde x - \tilde y }[/math] may be as an approximation to [math]\displaystyle{ x - y }[/math], even if the subtraction itself is computed exactly.

In other words, the Sterbenz lemma shows that subtracting nearby floating-point numbers is exact; but if those numbers are themselves approximations, even their exact difference may be far from the difference of the true numbers you wanted to subtract.
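
The distinction can be made concrete with a small Python sketch (an illustration under binary64 arithmetic; the particular true values are chosen here and are not taken from the sources). The subtraction of the rounded inputs is exact by the Sterbenz lemma, yet the result is a far worse relative approximation to the true difference than the inputs were to the true values:

    from fractions import Fraction

    x_true = Fraction(1, 3)                 # true real values, neither exactly representable
    y_true = Fraction(333_333, 1_000_000)

    xt = float(x_true)                      # rounded inputs, each with relative error near 1e-16
    yt = float(y_true)

    # yt/2 <= xt <= 2*yt, so the floating-point subtraction itself is exact:
    assert Fraction(xt - yt) == Fraction(xt) - Fraction(yt)

    true_diff = x_true - y_true             # 1/3 - 333333/1000000 = 1/3000000
    rel_error = abs(Fraction(xt - yt) - true_diff) / true_diff
    print(float(rel_error))                 # on the order of 1e-10: prior rounding error dominates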

Use in numerical analysis

The Sterbenz lemma is instrumental in proving theorems on error bounds in numerical analysis of floating-point algorithms. For example, Heron's formula [math]\displaystyle{ A = \sqrt{s (s - a) (s - b) (s - c)} }[/math] for the area of a triangle with side lengths [math]\displaystyle{ a }[/math], [math]\displaystyle{ b }[/math], and [math]\displaystyle{ c }[/math], where [math]\displaystyle{ s = (a + b + c)/2 }[/math] is the semi-perimeter, may give poor accuracy for long narrow triangles if evaluated directly in floating-point arithmetic. However, for [math]\displaystyle{ a \geq b \geq c }[/math], the alternative formula [math]\displaystyle{ A = \frac{1}{4} \sqrt{\bigl(a + (b + c)\bigr) \bigl(c - (a - b)\bigr) \bigl(c + (a - b)\bigr) \bigl(a + (b - c)\bigr)} }[/math] can be proven, with the help of the Sterbenz lemma, to have low forward error for all inputs.[3][4][5]
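
As an illustration (a sketch under binary64 arithmetic, not an implementation taken from the cited references; the needle-like input triple is chosen here for exposition), the direct evaluation can lose every significant digit where the rearranged formula remains accurate:

    import math

    def area_naive(a, b, c):
        # Heron's formula evaluated directly.
        s = (a + b + c) / 2
        return math.sqrt(s * (s - a) * (s - b) * (s - c))

    def area_rearranged(a, b, c):
        # The alternative formula above; requires a >= b >= c.
        assert a >= b >= c
        return 0.25 * math.sqrt(
            (a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))
        )

    # A needle-like triangle: b + c exceeds a by only one unit in the last place.
    a, b, c = 1.0, 0.5 + 2.0 ** -52, 0.5

    print(area_naive(a, b, c))        # 0.0: the area is lost entirely to rounding in s
    print(area_rearranged(a, b, c))   # about 5.27e-9, correct to nearly full precision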

References

  1. Muller, Jean-Michel; Brunie, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Joldes, Mioara; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie et al. (2018). Handbook of Floating-Point Arithmetic (2nd ed.). Cham, Switzerland: Birkhäuser. Lemma 4.1, p. 101. doi:10.1007/978-3-319-76526-6. ISBN 978-3-319-76525-9. https://doi.org/10.1007/978-3-319-76526-6.
  2. Sterbenz, Pat H. (1974). Floating-Point Computation. Englewood Cliffs, NJ, United States: Prentice-Hall. Theorem 4.3.1 and Corollary, p. 138. ISBN 0-13-322495-3. https://archive.org/details/SterbenzFloatingPointComputation/page/n153/mode/2up. 
  3. Kahan, W. (2014-09-04). "Miscalculating Area and Angles of a Needle-like Triangle". Lecture Notes for Introductory Numerical Analysis Classes. https://people.eecs.berkeley.edu/~wkahan/Triangle.pdf. 
  4. Goldberg, David (March 1991). "What every computer scientist should know about floating-point arithmetic". ACM Computing Surveys (New York, NY, United States: Association for Computing Machinery) 23 (1): 5–48. doi:10.1145/103162.103163. ISSN 0360-0300. https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html. Retrieved 2020-09-17. 
  5. Boldo, Sylvie (April 2013). "How to Compute the Area of a Triangle: a Formal Revisit". in Nannarelli, Alberto; Seidel, Peter-Michael; Tang, Ping Tak Peter. 21st IEEE Symposium on Computer Arithmetic. IEEE Computer Society. pp. 91–98. doi:10.1109/ARITH.2013.29. ISBN 978-0-7695-4957-6. https://hal.inria.fr/hal-00790071/document.