Inverse distance weighting

Inverse Distance Weighting as a sum of all weighting functions for each sample point. Each function has the value of one of the samples at its sample point and zero at every other sample point.

Inverse distance weighting (IDW) is a type of deterministic method for multivariate interpolation with a known scattered set of points. The values assigned to unknown points are calculated as a weighted average of the values available at the known points. This method can also be used to create spatial weights matrices in spatial autocorrelation analyses (e.g. Moran's I).[1]

The name of this type of method is motivated by the weighted average it applies: the weight assigned to each known point is based on the inverse of the distance to that point ("amount of proximity").

Definition of the problem

The expected result is a discrete assignment of the unknown function [math]\displaystyle{ u }[/math] in a study region:

[math]\displaystyle{ u(x): x \to \mathbb{R}, \quad x \in \mathbf{D} \subset \mathbb{R}^n, }[/math]

where [math]\displaystyle{ \mathbf{D} }[/math] is the study region.

The set of [math]\displaystyle{ N }[/math] known data points can be described as a list of tuples:

[math]\displaystyle{ [(x_1, u_1), (x_2, u_2), \ldots, (x_N, u_N)]. }[/math]

The function is to be "smooth" (continuous and once differentiable), to be exact ([math]\displaystyle{ u(x_i) = u_i }[/math]) and to meet the user's intuitive expectations about the phenomenon under investigation. Furthermore, the function should be suitable for a computer application at a reasonable cost (nowadays, a basic implementation will probably make use of parallel resources).
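
In code, the known data is often kept simply as parallel arrays of sample locations and values. A minimal sketch in NumPy (the coordinates and values are illustrative, not taken from any source):

<syntaxhighlight lang="python">
import numpy as np

# N = 3 known samples (x_i, u_i) in a two-dimensional study region D ⊂ R^2
points = np.array([[0.0, 0.0],
                   [1.0, 0.5],
                   [0.3, 2.0]])      # sample locations x_i ∈ R^n
values = np.array([1.2, 0.7, 3.4])   # sample values u_i ∈ R
</syntaxhighlight>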

Shepard's method

Historical reference

At the Harvard Laboratory for Computer Graphics and Spatial Analysis, beginning in 1965, a varied collection of scientists converged to rethink, among other things, what are now called geographic information systems.[2]

The motive force behind the Laboratory, Howard Fisher, conceived an improved computer mapping program that he called SYMAP, whose interpolation Fisher wanted to refine from the start. He showed Harvard College freshmen his work on SYMAP, and many of them participated in Laboratory events. One freshman, Donald Shepard, decided to overhaul the interpolation in SYMAP, resulting in his famous article from 1968.[3]

Shepard's algorithm was also influenced by the theoretical approach of William Warntz and others at the Lab who worked with spatial analysis. He conducted a number of experiments with the exponent of distance, deciding on something closer to the gravity model (exponent of -2). Shepard implemented not just basic inverse distance weighting, but also allowed barriers (permeable and absolute) to interpolation.

Other research centers were working on interpolation at this time, particularly the University of Kansas and its SURFACE II program. Still, the features of SYMAP were state-of-the-art, even though it was programmed by an undergraduate.

Basic form

Shepard's interpolation for different power parameters p, from scattered points on the surface [math]\displaystyle{ z=\exp(-x^2-y^2) }[/math]

Given a set of sample points [math]\displaystyle{ \{ (\mathbf{x}_i, u_i) \mid \mathbf{x}_i \in \mathbb{R}^n,\, u_i \in \mathbb{R} \}_{i=1}^N }[/math], the IDW interpolation function [math]\displaystyle{ u(\mathbf{x}): \mathbb{R}^n \to \mathbb{R} }[/math] is defined as:

[math]\displaystyle{ u(\mathbf{x}) = \begin{cases} \dfrac{\sum_{i = 1}^{N}{ w_i(\mathbf{x}) u_i } }{ \sum_{i = 1}^{N}{ w_i(\mathbf{x}) } }, & \text{if } d(\mathbf{x},\mathbf{x}_i) \neq 0 \text{ for all } i, \\ u_i, & \text{if } d(\mathbf{x},\mathbf{x}_i) = 0 \text{ for some } i, \end{cases} }[/math]

where

[math]\displaystyle{ w_i(\mathbf{x}) = \frac{1}{d(\mathbf{x},\mathbf{x}_i)^p} }[/math]

is a simple IDW weighting function, as defined by Shepard,[3] where [math]\displaystyle{ \mathbf{x} }[/math] denotes an interpolated (arbitrary) point, [math]\displaystyle{ \mathbf{x}_i }[/math] is an interpolating (known) point, [math]\displaystyle{ d }[/math] is a given distance (metric operator) from the known point [math]\displaystyle{ \mathbf{x}_i }[/math] to the unknown point [math]\displaystyle{ \mathbf{x} }[/math], [math]\displaystyle{ N }[/math] is the total number of known points used in the interpolation, and [math]\displaystyle{ p }[/math] is a positive real number, called the power parameter.

Here the weight decreases as the distance from the interpolated point increases. Greater values of [math]\displaystyle{ p }[/math] assign greater influence to the values closest to the interpolated point, with the result turning into a mosaic of tiles (a Voronoi diagram) of nearly constant interpolated value for large values of p. For two dimensions, power parameters [math]\displaystyle{ p \leq 2 }[/math] cause the interpolated values to be dominated by points far away, since with a density [math]\displaystyle{ \rho }[/math] of data points and neighboring points between distances [math]\displaystyle{ r_0 }[/math] and [math]\displaystyle{ R }[/math], the summed weight is approximately

[math]\displaystyle{ \sum_j w_j \approx \int_{r_0}^R \frac{2\pi r\rho \,dr}{r^p} = 2\pi\rho\int_{r_0}^R r^{1-p} \,dr, }[/math]

which diverges for [math]\displaystyle{ R\rightarrow\infty }[/math] and [math]\displaystyle{ p\leq2 }[/math], since for [math]\displaystyle{ p \lt 2 }[/math] the antiderivative [math]\displaystyle{ r^{2-p}/(2-p) }[/math] grows without bound as [math]\displaystyle{ R\rightarrow\infty }[/math], and for [math]\displaystyle{ p = 2 }[/math] the integral grows logarithmically. For [math]\displaystyle{ M }[/math] dimensions, the same argument holds for [math]\displaystyle{ p\leq M }[/math]. When choosing a value for p, one can consider the degree of smoothing desired in the interpolation, the density and distribution of the samples being interpolated, and the maximum distance over which an individual sample is allowed to influence the surrounding ones.
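
A minimal NumPy sketch of the basic form above; the function name, array layout and the use of the Euclidean distance are illustrative choices, not prescribed by Shepard's paper:

<syntaxhighlight lang="python">
import numpy as np

def idw_interpolate(x, points, values, p=2.0):
    """Basic Shepard (IDW) estimate of u at the query point x.

    points : (N, n) array of known sample locations x_i
    values : (N,)   array of sample values u_i
    p      : power parameter (a positive real number)
    """
    x = np.asarray(x, dtype=float)
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)

    d = np.linalg.norm(points - x, axis=1)   # distances d(x, x_i), here Euclidean
    exact = d == 0.0
    if np.any(exact):                        # exactness: u(x_i) = u_i
        return values[exact][0]
    w = 1.0 / d**p                           # w_i(x) = 1 / d(x, x_i)^p
    return np.sum(w * values) / np.sum(w)    # weighted average of the u_i
</syntaxhighlight>

Evaluating this on a dense grid of query points with a large p reproduces the Voronoi-like mosaic described above.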

Shepard's method is a consequence of minimizing a functional related to a measure of the deviations between the tuple of the interpolated point [math]\displaystyle{ \{\mathbf{x}, u\} }[/math] and the [math]\displaystyle{ N }[/math] tuples of interpolating points [math]\displaystyle{ \{\mathbf{x}_i, u_i\} }[/math], defined as:

[math]\displaystyle{ \phi(\mathbf{x}, u) = \left( \sum_{i = 1}^{N}{\frac{(u-u_i)^2}{d(\mathbf{x},\mathbf{x}_i)^p}} \right)^{\frac{1}{p}} , }[/math]

derived from the minimizing condition:

[math]\displaystyle{ \frac{\partial \phi(\mathbf{x}, u)}{\partial u} = 0. }[/math]
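
Carrying out the differentiation makes the connection explicit: the outer exponent 1/p does not affect the location of the minimum, and setting the derivative of the inner sum to zero recovers the weighted average given above:

[math]\displaystyle{ \sum_{i=1}^{N}\frac{2\,(u-u_i)}{d(\mathbf{x},\mathbf{x}_i)^p} = 0 \quad\Longrightarrow\quad u(\mathbf{x}) = \frac{\sum_{i=1}^{N} u_i \, d(\mathbf{x},\mathbf{x}_i)^{-p}}{\sum_{i=1}^{N} d(\mathbf{x},\mathbf{x}_i)^{-p}} = \frac{\sum_{i=1}^{N} w_i(\mathbf{x})\, u_i}{\sum_{i=1}^{N} w_i(\mathbf{x})}. }[/math]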

The method can easily be extended to higher-dimensional spaces, and it is in fact a generalization of Lagrange approximation to multidimensional spaces. A modified version of the algorithm designed for trivariate interpolation was developed by Robert J. Renka[4] and is available in Netlib as algorithm 661 in the TOMS Library.

Example in 1 dimension

Shepard's interpolation in 1 dimension, from 4 scattered points and using p = 2
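
A figure like this one can be reproduced with a few lines of NumPy; the four sample points below are illustrative and not the values used in the figure:

<syntaxhighlight lang="python">
import numpy as np

# Four illustrative 1-D samples (x_i, u_i)
xs = np.array([0.0, 1.0, 2.5, 4.0])
us = np.array([1.0, 3.0, 0.5, 2.0])
p = 2.0

grid = np.linspace(-0.5, 4.5, 201)   # query points
interp = np.empty_like(grid)
for k, x in enumerate(grid):
    d = np.abs(xs - x)               # 1-D distances d(x, x_i)
    if np.any(d == 0.0):             # exactness at the sample points
        interp[k] = us[d == 0.0][0]
    else:
        w = d**-p                    # w_i(x) = 1 / d(x, x_i)^p
        interp[k] = np.sum(w * us) / np.sum(w)

# Plotting (grid, interp) shows the characteristic flat "plateaus" around each sample.
</syntaxhighlight>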

Modified Shepard's method

Another modification of Shepard's method calculates the interpolated value using only the nearest neighbors within an R-sphere (instead of the full sample). The weights are slightly modified in this case:

[math]\displaystyle{ w_k(\mathbf{x}) = \left( \frac{\max(0,R-d(\mathbf{x},\mathbf{x}_k))}{R d(\mathbf{x},\mathbf{x}_k)} \right)^2. }[/math]

When combined with a fast spatial search structure (such as a k-d tree), it becomes an efficient O(N log N) interpolation method suitable for large-scale problems.
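
A sketch of this variant, assuming SciPy's cKDTree for the radius search; the function name and the handling of query points with no neighbors inside the R-sphere are illustrative choices:

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial import cKDTree

def modified_shepard(x, points, values, R):
    """Modified Shepard estimate at x, using only samples within radius R."""
    x = np.asarray(x, dtype=float)
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)

    tree = cKDTree(points)              # in practice, build once and reuse for many queries
    idx = tree.query_ball_point(x, R)   # indices of samples with d(x, x_k) <= R
    if not idx:
        return np.nan                   # no samples inside the R-sphere
    d = np.linalg.norm(points[idx] - x, axis=1)
    if np.any(d == 0.0):                # exactness at a sample point
        return values[idx][d == 0.0][0]
    w = ((R - d) / (R * d)) ** 2        # w_k(x) = (max(0, R - d) / (R d))^2; here d <= R already
    return np.sum(w * values[idx]) / np.sum(w)
</syntaxhighlight>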

See also

References