Discretization error

From HandWiki
Short description: Error from taking a finite number of steps in a computation to approximate an infinite process

In numerical analysis, computational physics, and simulation, discretization error is the error resulting from the fact that a function of a continuous variable is represented in the computer by a finite number of evaluations, for example, on a lattice. Discretization error can usually be reduced by using a more finely spaced lattice, with an increased computational cost.
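As a sketch of the trade-off described above, the following example (an illustration, not taken from the source) approximates the integral of f(x) = x² on [0, 1] by a left-endpoint Riemann sum on lattices of increasing resolution; the discretization error shrinks as the spacing h decreases, while the number of function evaluations grows:

```python
# Left-endpoint Riemann sum for the integral of f over [0, 1].
# Refining the lattice (smaller h) reduces the discretization
# error at the cost of more function evaluations.

def left_riemann(f, n):
    """Approximate the integral of f on [0, 1] using n lattice points."""
    h = 1.0 / n
    return sum(f(i * h) for i in range(n)) * h

exact = 1.0 / 3.0  # exact value of the integral of x^2 on [0, 1]
for n in (10, 100, 1000):
    approx = left_riemann(lambda x: x * x, n)
    print(f"n = {n:5d}: error = {abs(approx - exact):.2e}")
```

For this first-order rule the error decreases roughly in proportion to h, so each tenfold refinement of the lattice reduces the error by about a factor of ten.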

Examples

Discretization error is the principal source of error in methods of finite differences and the pseudo-spectral method of computational physics.

The derivative of [math]\displaystyle{ \,\!f(x) }[/math] is defined as [math]\displaystyle{ f'(x) = \lim_{h\rightarrow0}{\frac{f(x+h)-f(x)}{h}} }[/math], and on a computer it is approximated by [math]\displaystyle{ f'(x)\approx\frac{f(x+h)-f(x)}{h} }[/math], where [math]\displaystyle{ \,\!h }[/math] is a small but finite step size. The difference between the exact derivative and this finite-difference approximation is the discretization error.
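The forward-difference formula above can be tried numerically. In this sketch (f = sin and x = 1 are example choices, not from the source), the discretization error of the approximation shrinks roughly linearly with the step size h:

```python
# Forward-difference approximation of f'(x), showing the
# discretization error decrease as the step size h shrinks.
import math

def forward_diff(f, x, h):
    """Finite-difference approximation of f'(x) with step h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # exact derivative of sin at x
for h in (0.1, 0.01, 0.001):
    err = abs(forward_diff(math.sin, x, h) - exact)
    print(f"h = {h:5}: error = {err:.2e}")
```

A Taylor expansion shows the leading error term is h·f''(x)/2, which is why halving h roughly halves the error for this first-order formula.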

Related phenomena

In signal processing, the analog of discretization is sampling. Sampling incurs no loss of information if the conditions of the sampling theorem are satisfied; otherwise, the resulting error is called aliasing.
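Aliasing can be demonstrated directly. In the sketch below (the frequencies are example choices, not from the source), a 5 Hz sinusoid is sampled at only 6 Hz, below its Nyquist rate of 10 Hz; at the sample instants it is indistinguishable from a 1 Hz alias:

```python
# A 5 Hz sine sampled at 6 Hz (below the 10 Hz Nyquist rate)
# produces exactly the same samples as a -1 Hz sine: aliasing.
import math

fs = 6.0            # sampling rate in Hz -- too low for f0
f0 = 5.0            # true signal frequency in Hz
f_alias = f0 - fs   # -1 Hz: the frequency the samples actually trace out

for n in range(12):
    t = n / fs
    s_true = math.sin(2 * math.pi * f0 * t)
    s_alias = math.sin(2 * math.pi * f_alias * t)
    # The two sequences agree at every sample instant.
    print(f"n = {n:2d}: {s_true:+.6f}  {s_alias:+.6f}")
```

No lattice refinement of the *values* can recover the lost frequency information; only sampling faster (or band-limiting the signal first) avoids the aliasing error.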

Discretization error, which arises from finite resolution in the domain, should not be confused with quantization error, which is finite resolution in the range (values), nor with round-off error, which arises from floating-point arithmetic. Discretization error would occur even if it were possible to represent the values exactly and use exact arithmetic – it is the error from representing a function by its values at a discrete set of points, not an error in those values.[1]
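The domain/range distinction can be made concrete. In this sketch (f = sin and the step sizes are example choices, not from the source), the domain is discretized with lattice spacing h and the stored values are additionally quantized to multiples of q:

```python
# Discretization (domain) vs. quantization (range):
#  - discretization: f is known only at lattice points x_k = k * h
#  - quantization:   each stored value is rounded to a multiple of q
import math

h = 0.25   # lattice spacing: resolution in the domain
q = 0.01   # quantization step: resolution in the range

xs = [k * h for k in range(5)]                       # discretized domain
exact_vals = [math.sin(x) for x in xs]               # exact values at lattice points
quantized = [round(v / q) * q for v in exact_vals]   # values rounded to the q-grid

# Quantization error is bounded by q/2 at every lattice point...
assert all(abs(a - b) <= q / 2 for a, b in zip(exact_vals, quantized))
# ...while discretization error lives *between* lattice points:
# sin(0.1) is simply not represented at all when h = 0.25.
print(xs)
print(quantized)
```

Shrinking q reduces only the quantization error; shrinking h reduces only the discretization error, which is why the two are independent sources of error.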

References
