Closest pair of points problem
The closest pair of points problem or closest pair problem is a problem of computational geometry: given [math]\displaystyle{ n }[/math] points in a metric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane[1] was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.
Time bounds
Randomized algorithms that solve the problem in linear time are known, in Euclidean spaces whose dimension is treated as a constant for the purposes of asymptotic analysis.[2][3][4] This is significantly faster than the [math]\displaystyle{ O(n^2) }[/math] time (expressed here in big O notation) that would be obtained by a naive algorithm of finding distances between all pairs of points and selecting the smallest.
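For concreteness, the naive quadratic-time approach can be sketched as follows in Python (the function name and example are illustrative, not taken from the cited sources):

```python
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)


def closest_pair_naive(points):
    """Return the closest pair among `points` by checking all O(n^2) pairs."""
    return min(combinations(points, 2), key=lambda pair: dist(*pair))


# Example: closest_pair_naive([(0, 0), (3, 4), (1, 1)]) returns ((0, 0), (1, 1)).
```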
It is also possible to solve the problem without randomization, in random-access machine models of computation with unlimited memory that allow the use of the floor function, in near-linear [math]\displaystyle{ O(n\log\log n) }[/math] time.[5] In even more restricted models of computation, such as the algebraic decision tree, the problem can be solved in the somewhat slower [math]\displaystyle{ O(n\log n) }[/math] time bound,[6] and this is optimal for this model, by a reduction from the element uniqueness problem. Both sweep line algorithms and divide-and-conquer algorithms with this slower time bound are commonly taught as examples of these algorithm design techniques.[7][8]
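As an illustration of the commonly taught divide-and-conquer method, the following Python sketch solves the planar case in [math]\displaystyle{ O(n\log n) }[/math] time; the function names are illustrative, and the code assumes at least two distinct points given as coordinate tuples:

```python
from math import dist, inf


def closest_pair_divide_and_conquer(points):
    """Return (distance, pair) for the closest pair of 2D points.
    Textbook divide-and-conquer scheme: sort by x, solve each half
    recursively, then scan a strip around the dividing line in y order."""
    px = sorted(points)                       # sorted by x (ties by y)
    py = sorted(points, key=lambda p: p[1])   # sorted by y
    return _closest(px, py)


def _closest(px, py):
    n = len(px)
    if n <= 3:                                # base case: brute force
        best = (inf, None)
        for i in range(n):
            for j in range(i + 1, n):
                best = min(best, (dist(px[i], px[j]), (px[i], px[j])))
        return best
    mid = n // 2
    mid_x = px[mid][0]
    left = set(px[:mid])                      # split the y-sorted list to match
    pyl = [p for p in py if p in left]
    pyr = [p for p in py if p not in left]
    best = min(_closest(px[:mid], pyl), _closest(px[mid:], pyr))
    delta = best[0]
    # Candidate points within delta of the dividing line, still in y order.
    strip = [p for p in py if abs(p[0] - mid_x) < delta]
    for i, p in enumerate(strip):
        # A packing argument shows only O(1) later strip points can be closer.
        for q in strip[i + 1:i + 8]:
            best = min(best, (dist(p, q), (p, q)))
    return best
```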
Linear-time randomized algorithms
A linear expected time randomized algorithm of (Rabin 1976), modified slightly by Richard Lipton to make its analysis easier, proceeds as follows, on an input set [math]\displaystyle{ S }[/math] consisting of [math]\displaystyle{ n }[/math] points in a [math]\displaystyle{ k }[/math]-dimensional Euclidean space:
- Select [math]\displaystyle{ n }[/math] pairs of points uniformly at random, with replacement, and let [math]\displaystyle{ d }[/math] be the minimum distance of the selected pairs.
- Round the input points to a square grid of points whose size (the separation between adjacent grid points) is [math]\displaystyle{ d }[/math], and use a hash table to collect together pairs of input points that round to the same grid point.
- For each input point, compute the distance to all other inputs that either round to the same grid point or to another grid point within the Moore neighborhood of [math]\displaystyle{ 3^k-1 }[/math] surrounding grid points.
- Return the smallest of the distances computed throughout this process.
The algorithm will always correctly determine the closest pair, because it maps any pair closer than distance [math]\displaystyle{ d }[/math] to the same grid point or to adjacent grid points. The uniform sampling of pairs in the first step of the algorithm (compared to a different method of Rabin for sampling a similar number of pairs) simplifies the proof that the expected number of distances computed by the algorithm is linear.[4]
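The grid-and-hashing scheme can be sketched as follows for the planar case [math]\displaystyle{ k=2 }[/math]; this is a simplified illustration rather than Rabin's original formulation, and it assumes at least two distinct input points so that the sampled distance is positive:

```python
import random
from collections import defaultdict
from math import dist, floor


def randomized_grid_closest_pair(points, seed=None):
    """Sketch of the randomized grid scheme for a list of distinct 2D points:
    sample n random pairs to get an initial distance d, bucket the points on
    a grid of cell size d using a hash table, then compare each point only
    against points in its own cell and the 8 surrounding cells."""
    rng = random.Random(seed)
    n = len(points)
    # Step 1: n pairs of points, sampled uniformly at random.
    best = min((dist(p, q), (p, q))
               for p, q in (rng.sample(points, 2) for _ in range(n)))
    d = best[0]
    # Step 2: round each point to a grid cell of side d, kept in a hash table.
    cells = defaultdict(list)
    for p in points:
        cells[(floor(p[0] / d), floor(p[1] / d))].append(p)
    # Steps 3-4: compare each point with its own and neighboring cells only.
    for (cx, cy), bucket in cells.items():
        for p in bucket:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for q in cells.get((cx + dx, cy + dy), []):
                        if q is not p:
                            best = min(best, (dist(p, q), (p, q)))
    return best
```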
A different algorithm of (Khuller & Matias 1995) goes through two phases: a random iterated filtering process that approximates the closest distance to within an approximation ratio of [math]\displaystyle{ 2\sqrt{k} }[/math], followed by a finishing step that turns this approximate distance into the exact closest distance. The filtering process repeats the following steps, until [math]\displaystyle{ S }[/math] becomes empty:
- Choose a point [math]\displaystyle{ p }[/math] uniformly at random from [math]\displaystyle{ S }[/math].
- Compute the distances from [math]\displaystyle{ p }[/math] to all the other points of [math]\displaystyle{ S }[/math] and let [math]\displaystyle{ d }[/math] be the minimum such distance.
- Round the input points to a square grid of size [math]\displaystyle{ d/(2\sqrt{k}) }[/math], and delete from [math]\displaystyle{ S }[/math] all points whose Moore neighborhood has no other points.
The approximate distance found by this filtering process is the final value of [math]\displaystyle{ d }[/math], computed in the step before [math]\displaystyle{ S }[/math] becomes empty. Each step removes all points whose closest neighbor is at distance [math]\displaystyle{ d }[/math] or greater; in expectation this is at least half of the points, from which it follows that the total expected time for filtering is linear. Once an approximate value of [math]\displaystyle{ d }[/math] is known, it can be used for the final steps of Rabin's algorithm; in these steps each grid point has a constant number of inputs rounded to it, so again the time is linear.[3]
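The filtering phase can be sketched as follows for the planar case [math]\displaystyle{ k=2 }[/math]; this is an illustrative simplification that assumes distinct points, and the finishing step that converts the approximation into the exact closest pair is omitted:

```python
import random
from collections import defaultdict
from math import dist, floor, sqrt


def approximate_closest_distance(points, seed=None):
    """Sketch of the filtering phase for distinct 2D points: repeatedly pick
    a random point p, let d be its distance to the nearest other point, then
    delete every point whose Moore neighborhood on a grid of cell size
    d / (2 * sqrt(2)) contains no other point.  The last d computed before
    the set empties is within a factor 2 * sqrt(2) of the closest distance."""
    rng = random.Random(seed)
    S = list(points)
    d = None
    while len(S) > 1:
        p = rng.choice(S)
        d = min(dist(p, q) for q in S if q is not p)
        cell = d / (2 * sqrt(2))              # grid size d / (2 sqrt(k)), k = 2
        cells = defaultdict(list)
        for q in S:
            cells[(floor(q[0] / cell), floor(q[1] / cell))].append(q)

        def has_neighbor(q):
            cx, cy = floor(q[0] / cell), floor(q[1] / cell)
            return any(r is not q
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                       for r in cells.get((cx + dx, cy + dy), []))

        S = [q for q in S if has_neighbor(q)]  # keep only non-isolated points
    return d
```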
Dynamic closest-pair problem
The dynamic version of the closest-pair problem is stated as follows:
- Given a dynamic set of objects, find algorithms and data structures for efficient recalculation of the closest pair of objects each time the objects are inserted or deleted.
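For illustration of this interface only, the following naive Python sketch recomputes the closest pair from scratch after every update, and therefore does not achieve the time bounds discussed below; the class and method names are hypothetical:

```python
from math import dist, inf


class NaiveDynamicClosestPair:
    """Naive baseline for the dynamic problem: every insertion or deletion
    triggers an O(n^2) recomputation, whereas the data structures cited
    below support updates in (poly)logarithmic expected time."""

    def __init__(self):
        self.points = []
        self.best = (inf, None)   # (distance, pair)

    def insert(self, p):
        self.points.append(p)
        self._recompute()

    def delete(self, p):
        self.points.remove(p)
        self._recompute()

    def closest_pair(self):
        return self.best

    def _recompute(self):
        self.best = (inf, None)
        for i, p in enumerate(self.points):
            for q in self.points[i + 1:]:
                self.best = min(self.best, (dist(p, q), (p, q)))
```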
If the bounding box for all points is known in advance and the constant-time floor function is available, then an expected [math]\displaystyle{ O(n) }[/math]-space data structure has been suggested that supports insertions and deletions in expected [math]\displaystyle{ O(\log n) }[/math] time and answers queries in constant time. When modified for the algebraic decision tree model, insertions and deletions would require [math]\displaystyle{ O(\log^2 n) }[/math] expected time.[9] The complexity of this dynamic closest-pair algorithm is exponential in the dimension [math]\displaystyle{ d }[/math], and it therefore becomes less suitable for high-dimensional problems.
An algorithm for the dynamic closest-pair problem in [math]\displaystyle{ d }[/math]-dimensional space was developed by Sergey Bespamyatnikh in 1998.[10] Points can be inserted and deleted in [math]\displaystyle{ O(\log n) }[/math] time per point (in the worst case).
Notes
- ↑ Shamos, Michael Ian; Hoey, Dan (1975). "Closest-point problems". 16th Annual Symposium on Foundations of Computer Science, Berkeley, California, USA, October 13-15, 1975. IEEE Computer Society. pp. 151–162. doi:10.1109/SFCS.1975.8.
- ↑ Rabin, M. (1976). "Probabilistic algorithms". Algorithms and Complexity: Recent Results and New Directions. Academic Press. pp. 21–39. As cited by (Khuller & Matias 1995).
- ↑ Khuller, Samir; Matias, Yossi (1995). "A simple randomized sieve algorithm for the closest-pair problem". Information and Computation 118 (1): 34–37. doi:10.1006/inco.1995.1049.
- ↑ Lipton, Richard (24 September 2011). "Rabin Flips a Coin". Gödel's Lost Letter and P=NP. http://rjlipton.wordpress.com/2009/03/01/rabin-flips-a-coin/.
- ↑ Fortune, Steve; Hopcroft, John (1979). "A note on Rabin's nearest-neighbor algorithm". Information Processing Letters 8 (1): 20–23. doi:10.1016/0020-0190(79)90085-1.
- ↑ Clarkson, Kenneth L. (1983). "Fast algorithms for the all nearest neighbors problem". 24th Annual Symposium on Foundations of Computer Science, Tucson, Arizona, USA, 7-9 November 1983. IEEE Computer Society. pp. 226–232. doi:10.1109/SFCS.1983.16.
- ↑ Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "33.4: Finding the closest pair of points". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 957–961. ISBN 0-262-03293-7.
- ↑ Kleinberg, Jon M.; Tardos, Éva (2006). "5.4 Finding the closest pair of points". Algorithm Design. Addison-Wesley. pp. 225–231. ISBN 978-0-321-37291-8.
- ↑ Golin, Mordecai; Raman, Rajeev; Schwarz, Christian; Smid, Michiel (1998). "Randomized data structures for the dynamic closest-pair problem". SIAM Journal on Computing 27 (4): 1036–1072. doi:10.1137/S0097539794277718. http://repository.ust.hk/ir/bitstream/1783.1-1429/1/27771.pdf.
- ↑ Bespamyatnikh, S. N. (1998). "An optimal algorithm for closest-pair maintenance". Discrete & Computational Geometry 19 (2): 175–195. doi:10.1007/PL00009340.