Significance arithmetic

Short description: Rules for calculations with approximate numbers

Significance arithmetic is a set of rules (sometimes called significant figure rules) for approximating the propagation of uncertainty in scientific or statistical calculations. These rules can be used to determine the appropriate number of significant figures with which to represent the result of a calculation. If a calculation is done without an analysis of the uncertainty involved, a result written with too many significant figures can be taken to imply a higher precision than is actually known, while a result written with too few significant figures loses precision avoidably. Understanding these rules requires a good understanding of the concept of significant and insignificant figures.

The rules of significance arithmetic are an approximation based on statistical rules for dealing with probability distributions. See the article on propagation of uncertainty for these more advanced and precise rules. Significance arithmetic rules rely on the assumption that the number of significant figures in the operands gives accurate information about the uncertainty of the operands and hence the uncertainty of the result. For alternatives see Interval arithmetic and Floating-point error mitigation.

An important caveat is that significant figures apply only to measured values. Values known to be exact should be ignored for determining the number of significant figures that belong in the result. Examples of such values include:

  • integer counts (e.g. the number of oranges in a bag)
  • definitions of one unit in terms of another (e.g. a minute is 60 seconds)
  • actual prices asked or offered, and quantities given in requirement specifications
  • legally defined conversions, such as international currency exchange
  • scalar operations, such as "tripling" or "halving"
  • mathematical constants, such as π and e

Physical constants such as the gravitational constant, however, have a limited number of significant digits, because these constants are known to us only by measurement. On the other hand, c (the speed of light) is exactly 299,792,458 m/s by definition.

Multiplication and division using significance arithmetic

When multiplying or dividing numbers, the result is rounded to the number of significant figures in the factor with the least significant figures. Here, the quantity of significant figures in each of the factors is important—not the position of the significant figures. For instance, using significance arithmetic rules:

  • 8 × 8 ≈ 6 × 10¹
  • 8 × 8.0 ≈ 6 × 10¹
  • 8.0 × 8.0 ≈ 64
  • 8.02 × 8.02 ≈ 64.3
  • 8 / 2.0 ≈ 4
  • 8.6 / 2.0012 ≈ 4.3
  • 2 × 0.8 ≈ 2

If, in the above, the numbers are assumed to be measurements (and therefore probably inexact) then "8" above represents an inexact measurement with only one significant digit. Therefore, the result of "8 × 8" is rounded to a result with only one significant digit, i.e., "6 × 10¹" instead of the unrounded "64" that one might expect. In many cases, the rounded result is less accurate than the non-rounded result; a measurement of "8" has an actual underlying quantity between 7.5 and 8.5. The true square would be in the range between 56.25 and 72.25, so 6 × 10¹ is the best one can give, as other possible answers would give a false sense of accuracy. Further, 6 × 10¹ is itself somewhat misleading (it might be read as implying 60 ± 5, which is over-optimistic; 64 ± 8 would be more accurate).
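
As a concrete illustration of the multiplication rule, the following Python sketch rounds a product to the number of significant figures of the least precise factor. The helper names sig_figs, round_to_sig_figs and multiply are introduced here for illustration and are not part of any standard library; operands are passed as strings so that written precision such as "8.0" versus "8" is visible to the code.

    import math

    def sig_figs(s: str) -> int:
        """Count the significant figures in a decimal string such as '8.0' or '0.012'.
        Trailing zeros in integers like '100' are counted here, which is only one
        convention; the ambiguity of such zeros is discussed elsewhere in the article."""
        digits = s.replace('-', '').replace('.', '').lstrip('0')
        return len(digits) if digits else 1

    def round_to_sig_figs(value: float, n: int) -> float:
        """Round value to n significant figures."""
        if value == 0:
            return 0.0
        exponent = math.floor(math.log10(abs(value)))
        return round(value, n - 1 - exponent)

    def multiply(a: str, b: str) -> float:
        """Multiply two measured values, keeping only the justified significant figures."""
        n = min(sig_figs(a), sig_figs(b))
        return round_to_sig_figs(float(a) * float(b), n)

    print(multiply("8", "8"))        # 60.0, i.e. 6 × 10¹ (one significant figure)
    print(multiply("8.0", "8.0"))    # 64.0 (two significant figures)
    print(multiply("8.02", "8.02"))  # 64.3 (three significant figures)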

Addition and subtraction using significance arithmetic

When adding or subtracting using significant figure rules, the result is rounded to the position of the least significant digit in the most uncertain of the numbers being added (or subtracted).[citation needed] That is, the result is rounded to the last decimal place at which every one of the summands is significant. Here the position of the significant figures matters, but the quantity of significant figures is irrelevant. Some examples using these rules:

    1 + 1.1 ≈ 2
  • 1 is significant to the ones place, 1.1 is significant to the tenths place. Of the two, the least precise is the ones place. The answer cannot have any significant figures past the ones place.
    1.0 + 1.1 = 2.1
  • 1.0 and 1.1 are significant to the tenths place, so the answer will also have a number in the tenths place.
    9.9 + 9.9 + 9.9 + 9.9 + 3.3 + 1.1 = 44.0
  • All the addends are significant to the tenths place, so the answer is significant to the tenths place. While each term has two digits of significance, the sum carried over into the tens column, so the answer has three digits of significance.
    100 + 110 ≈ 200
  • We see the answer is 200, given the significance to the hundreds place of the 100. The answer maintains a single digit of significance in the hundreds place, just like the first term in the arithmetic.
    100. + 110. = 210.
  • 100. and 110. are both significant to the ones place (as indicated by the decimal), so the answer is also significant to the ones place.
    1 × 10² + 1.1 × 10² ≈ 2 × 10²
  • 100 is significant up to the hundreds place, while 110 is up to the tens place. Of the two, the least accurate is the hundreds place. The answer should not have significant digits past the hundreds place.
    1.0 × 10² + 111 ≈ 2.1 × 10²
  • 1.0 × 10² is significant to the tens place while 111 is significant to the ones place. The answer will have no significant figures past the tens place.
    123.25 + 46.0 + 86.26 ≈ 255.5
  • 123.25 and 86.26 are significant to the hundredths place while 46.0 is only significant to the tenths place. The answer will be significant only to the tenths place.
    100 − 1 ≈ 100
  • We see the answer is 100, given the significance to the hundreds place of the 100. It may seem counter-intuitive, but given that significant digits dictate precision, we can see how this follows from the standard rules.
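
A minimal Python sketch of the addition rule above (the names decimal_place and add are illustrative, not from any library) rounds the sum to the least precise decimal place among the summands. Trailing zeros without a decimal point, as in "100", are treated as not significant, matching the convention used in the examples:

    def decimal_place(s: str) -> int:
        """Place value of the last significant digit: 2 for hundreds, 0 for ones, -1 for tenths."""
        s = s.strip()
        if '.' in s:
            return -len(s.split('.')[1])        # digits after the decimal point
        return len(s) - len(s.rstrip('0'))      # trailing zeros in e.g. "100" treated as insignificant

    def add(*terms: str) -> float:
        """Add measured values and round to the least precise (largest) place value."""
        place = max(decimal_place(t) for t in terms)
        return round(sum(float(t) for t in terms), -place)

    print(add("1", "1.1"))                  # 2.0   (rounded to the ones place)
    print(add("1.0", "1.1"))                # 2.1   (tenths place)
    print(add("100", "110"))                # 200.0 (hundreds place)
    print(add("123.25", "46.0", "86.26"))   # 255.5 (tenths place)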

Transcendental functions

For transcendental functions, such as logarithmic, exponential, and trigonometric functions, determining the number of significant figures of the output is more involved. The significance of the output depends on the condition number. In general, the number of significant figures of the output is equal to the number of significant figures of the input (the function argument) minus the order of magnitude of the condition number.

The condition number of a differentiable function [math]\displaystyle{ f }[/math] at a point [math]\displaystyle{ x }[/math] is (see the article on condition numbers for details):

[math]\displaystyle{ \left|\frac{xf'(x)}{f(x)}\right| }[/math]

Note that if a function has a zero at a point, its condition number at that point is infinite, as infinitesimal changes in the input can change the output from zero to non-zero, yielding a ratio with zero in the denominator and hence an infinite relative change. The condition numbers of the most commonly used functions are as follows;[1] these can be used to compute significant figures for all elementary functions:

Name Symbol Condition number
Addition / subtraction [math]\displaystyle{ x + a }[/math] [math]\displaystyle{ \left|\frac{x}{x+a}\right| }[/math]
Scalar multiplication [math]\displaystyle{ a x }[/math] [math]\displaystyle{ 1 }[/math]
Division [math]\displaystyle{ 1 / x }[/math] [math]\displaystyle{ 1 }[/math]
Polynomial [math]\displaystyle{ x^n }[/math] [math]\displaystyle{ |n| }[/math]
Exponential function [math]\displaystyle{ e^x }[/math] [math]\displaystyle{ |x| }[/math]
Logarithm with base b [math]\displaystyle{ \log_b(x) }[/math] [math]\displaystyle{ \left|\frac{1}{\log_{b}(x)\ln(b)}\right| }[/math]
Natural logarithm function [math]\displaystyle{ \ln(x) }[/math] [math]\displaystyle{ \left|\frac{1}{\ln(x)}\right| }[/math]
Sine function [math]\displaystyle{ \sin(x) }[/math] [math]\displaystyle{ |x\cot(x)| }[/math]
Cosine function [math]\displaystyle{ \cos(x) }[/math] [math]\displaystyle{ |x\tan(x)| }[/math]
Tangent function [math]\displaystyle{ \tan(x) }[/math] [math]\displaystyle{ |x(\tan(x)+\cot(x))| }[/math]
Inverse sine function [math]\displaystyle{ \arcsin(x) }[/math] [math]\displaystyle{ \frac{x}{\sqrt{1-x^2}\arcsin(x)} }[/math]
Inverse cosine function [math]\displaystyle{ \arccos(x) }[/math] [math]\displaystyle{ \frac{|x|}{\sqrt{1-x^2}\arccos(x)} }[/math]
Inverse tangent function [math]\displaystyle{ \arctan(x) }[/math] [math]\displaystyle{ \frac{x}{(1+x^2)\arctan(x)} }[/math]
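
A rough numerical sketch of this rule in Python follows; the helper names condition_number and output_sig_figs are introduced here for illustration, and the derivative is approximated by a central difference rather than evaluated symbolically.

    import math

    def condition_number(f, x, h=1e-8):
        """|x·f'(x)/f(x)| with a central-difference approximation of the derivative."""
        dfdx = (f(x + h) - f(x - h)) / (2 * h)
        return abs(x * dfdx / f(x))

    def output_sig_figs(f, x, input_sig_figs):
        """Estimated significant figures of f(x), given the significant figures of x."""
        return input_sig_figs - math.log10(condition_number(f, x))

    # exp(x) at x = 10 has condition number |x| = 10, so about one significant figure is lost:
    print(condition_number(math.exp, 10.0))    # ≈ 10
    print(output_sig_figs(math.exp, 10.0, 5))  # ≈ 4

    # ln(x) near x = 1 has a very large condition number, so nearly all precision is lost:
    print(condition_number(math.log, 1.001))   # ≈ 1000
    print(output_sig_figs(math.log, 1.001, 5)) # ≈ 2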

Derivation

The fact that the number of significant figures of the function output is equal to the number of significant figures of the function input (function argument) minus the base-10 logarithm of the condition number (which is approximately the order of magnitude/number of digits of the condition number) can be easily derived from first principles: let [math]\displaystyle{ \hat{x} }[/math] and [math]\displaystyle{ f(\hat{x}) }[/math] be the true values and let [math]\displaystyle{ x }[/math] and [math]\displaystyle{ f(x) }[/math] be approximate values with errors [math]\displaystyle{ \delta x }[/math] and [math]\displaystyle{ \delta f }[/math] respectively, so that

[math]\displaystyle{ \hat{x} = x + \delta x }[/math] and [math]\displaystyle{ f(\hat{x}) = f(x) + \delta f }[/math].

Then

[math]\displaystyle{ \delta f = f(\hat{x}) - f(x) = f(x + {\delta x}) - f(x) = \frac{f(x + \delta x) - f(x)} {\delta x } \cdot {\delta x} \approx \frac{df(x)}{dx} {\delta x} }[/math],

and hence

[math]\displaystyle{ |\delta f| \approx \left|\frac{df(x)}{dx} {\delta x}\right| }[/math].

The number of significant figures of a value is related to the uncertainty of that value by

[math]\displaystyle{ \left\vert {\delta x} \right\vert \approx \left\vert { x \cdot 10^{-({\rm significant ~ figures ~ of ~} x)}} \right\vert }[/math]

where "significant figures of x" here means the number of significant figures of x. Substituting this into the above equation gives

[math]\displaystyle{ \left\vert {f(x) \cdot 10^{-({\rm significant ~ figures ~ of ~} f(x))}} \right\vert \approx \left\vert {\frac{df(x)}{dx} x \cdot 10^{-({\rm significant ~ figures ~ of ~} x)}}\right\vert }[/math]
[math]\displaystyle{ \left\vert {f(x)} \right\vert \cdot 10^{-({\rm significant ~ figures ~ of ~} f(x))} \approx \left\vert {\frac{df(x)}{dx} x}\right\vert \cdot 10^{-({\rm significant ~ figures ~ of ~} x)} }[/math].

Therefore

[math]\displaystyle{ -{({\rm significant ~ figures ~ of ~} f(x))} \approx \log_{10} \left ( \left\vert{\frac{df(x)}{dx} \frac{x}{f(x)}}\right\vert \cdot 10^{-{({\rm significant ~ figures ~ of ~} x)}} \right ) = {-({\rm significant ~ figures ~ of ~} x)} + \log_{10} \left( \left\vert{\frac{df(x)}{dx} \frac{x}{f(x)}}\right\vert \right) }[/math]

giving, finally:

[math]\displaystyle{ {({\rm significant ~ figures ~ of ~} f(x))} \approx {({\rm significant ~ figures ~ of ~} x)} - \log_{10} \left( \left\vert{\frac{df(x)}{dx} \frac{x}{f(x)}}\right\vert \right) }[/math].
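
The final formula can be checked numerically. The short Python sketch below (variable names are illustrative) perturbs x by the amount corresponding to its last significant figure and compares the implied significance of f(x) with the prediction, here for f(x) = eˣ:

    import math

    x = 2.345                               # taken as known to 4 significant figures
    sig_x = 4
    dx = abs(x) * 10 ** (-sig_x)            # |δx| ≈ |x|·10^(−significant figures of x)

    f, dfdx = math.exp, math.exp            # f(x) = e^x, so f'(x) = e^x as well
    df = abs(f(x + dx) - f(x))              # actual |δf| produced by the perturbation

    predicted = sig_x - math.log10(abs(dfdx(x) * x / f(x)))   # formula above
    implied = -math.log10(df / abs(f(x)))                     # digits implied by the actual |δf|

    print(predicted)   # ≈ 3.63
    print(implied)     # ≈ 3.63, agreeing with the derivation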

Rounding rules

Because significance arithmetic involves rounding, it is useful to understand a specific rounding rule that is often used when doing scientific calculations: the round-to-even rule (also called banker's rounding). It is especially useful when dealing with large data sets.

This rule helps to eliminate the upwards skewing of data that traditional rounding introduces. Whereas traditional rounding always rounds up when the following digit is 5, round-to-even rounds such ties to the nearest even digit, rounding up only about half the time and thereby eliminating the upwards bias. See the article on rounding for more information on rounding rules and a detailed explanation of the round-to-even rule.
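
The difference between the two tie-breaking rules can be seen with Python's decimal module, which allows the rounding mode to be chosen explicitly (decimals are used here because values such as 2.5 are not represented exactly as binary floats):

    from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

    for s in ("0.5", "1.5", "2.5", "3.5"):
        d = Decimal(s)
        half_up = d.quantize(Decimal("1"), rounding=ROUND_HALF_UP)      # traditional rounding
        half_even = d.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN)  # round-to-even
        print(s, "half-up:", half_up, "half-even:", half_even)

    # half-up gives 1, 2, 3, 4 (every tie rounds up, a systematic upward bias);
    # half-even gives 0, 2, 2, 4 (ties go to the even digit, so there is no bias).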

Disagreements about importance

Significant figures are used extensively in high school and undergraduate courses as a shorthand for the precision with which a measurement is known. However, significant figures are not a perfect representation of uncertainty, and are not meant to be. Instead, they are a useful tool for avoiding expressing more information than the experimenter actually knows, and for avoiding rounding numbers in such a way as to lose precision.

For example, here are some important differences between significant figure rules and uncertainty:

  • Uncertainty is not the same as a mistake. If the outcome of a particular experiment is reported as 1.234 ± 0.056 it does not mean the observer made a mistake; it may be that the outcome is inherently statistical, and is best described by an expression showing only the significant digits, i.e. the known digits plus one uncertain digit, in this case 1.23 ± 0.06. To describe that outcome simply as 1.234 would be incorrect under these circumstances, even though it appears to convey less uncertainty.
  • Uncertainty is not the same as insignificance, and vice versa. An uncertain number may be highly significant (example: signal averaging). Conversely, a completely certain number may be insignificant.
  • Significance is not the same as significant digits. Digit-counting is not as rigorous a way to represent significance as specifying the uncertainty separately and explicitly (such as 1.234 ± 0.056).
  • Manual, algebraic propagation of uncertainty—the nominal topic of this article—is possible, but challenging. Alternative methods include the crank three times method and the Monte Carlo method. Another option is interval arithmetic, which can provide a strict upper bound on the uncertainty, but generally it is not a tight upper bound (i.e. it does not provide a best estimate of the uncertainty). For most purposes, Monte Carlo is more useful than interval arithmetic.[citation needed] Kahan considers significance arithmetic to be unreliable as a form of automated error analysis.[2]
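
As a rough illustration of the Monte Carlo approach mentioned in the last point, the sketch below propagates the uncertainty of the earlier measurement "8" (treated, as above, as lying uniformly between 7.5 and 8.5) through a squaring operation by sampling:

    import random
    import statistics

    random.seed(0)                     # fixed seed so the sketch is reproducible
    samples = [random.uniform(7.5, 8.5) ** 2 for _ in range(100_000)]

    print(round(statistics.mean(samples), 1))   # ≈ 64.1, close to the nominal 64
    print(round(statistics.stdev(samples), 1))  # ≈ 4.6, the spread of the squared value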

In order to express the uncertainty in an uncertain result explicitly, the uncertainty should be given separately, as an uncertainty interval together with a confidence level. The expression 1.23 U95 = 0.06 implies that the true (unknowable) value of the variable is expected to lie in the interval from 1.17 to 1.29 with at least 95% confidence. If the confidence level is not specified, it has traditionally been assumed to be 95%, corresponding to two standard deviations from the mean. Confidence levels at one standard deviation (68%) and three standard deviations (99.7%) are also commonly used.

References

Further reading

  • Delury, D. B. (1958). "Computations with approximate numbers". The Mathematics Teacher 51 (7): 521–30. 
  • Bond, E. A. (1931). "Significant Digits in Computation with Approximate Numbers". The Mathematics Teacher 24 (4): 208–12. 
  • ASTM E29-06b, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications
