Compressed cover tree
The compressed cover tree is a data structure in computer science designed to speed up the k-nearest neighbors algorithm in finite metric spaces.[1] It is a simplified version of the explicit representation of the cover tree, motivated by gaps found in past proofs of the cover tree's time complexity results.[2] The compressed cover tree was designed to achieve the time complexities claimed for the cover tree[3] in a mathematically rigorous way.
Problem statement
In the modern formulation, the k-nearest neighbor problem is to find, for every point of a given query set Q, its [math]\displaystyle{ k\geq 1 }[/math] nearest neighbors in a given reference set R. Both sets belong to a common ambient space X with a distance metric d satisfying all metric axioms.
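For a small reference set the problem can be solved directly by sorting distances. The following sketch (Python, with a Euclidean metric chosen purely for illustration; the function names are hypothetical and not taken from the cited literature) states the task that the compressed cover tree is designed to accelerate.

```python
import math

def euclidean(p, q):
    """Example metric; any function d satisfying the metric axioms may be used."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def brute_force_knn(query_set, reference_set, k, d=euclidean):
    """Return, for every query point, its k nearest neighbors in the reference set.

    This is the O(|Q| * |R| log|R|) baseline that tree-based methods aim to beat.
    """
    answers = {}
    for q in query_set:
        ranked = sorted(reference_set, key=lambda r: d(q, r))
        answers[q] = ranked[:k]
    return answers

# Toy planar example.
R = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 3.0)]
Q = [(0.1, 0.1)]
print(brute_force_knn(Q, R, k=2))  # {(0.1, 0.1): [(0.0, 0.0), (1.0, 0.0)]}
```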
Definitions
Compressed cover tree
Let (R,d) be a finite metric space. A compressed cover tree [math]\displaystyle{ \mathcal{T}(R) }[/math] has the vertex set R with a root [math]\displaystyle{ r \in R }[/math] and a level function [math]\displaystyle{ l:R \rightarrow \mathbb{Z} }[/math] satisfying the three conditions below (a verification sketch follows the list):
- Root condition: the level of the root node r satisfies [math]\displaystyle{ l(r) \geq 1 + \max\limits_{p \in R \setminus \{r\}}l(p) }[/math]
- Covering condition: for every node [math]\displaystyle{ q \in R\setminus \{r\} }[/math], we select a unique parent p and a level l(q) such that [math]\displaystyle{ d(q,p) \leq 2^{l(q)+1} }[/math] and [math]\displaystyle{ l(q) \lt l(p) }[/math]; this parent node p has a single link to its child node q.
- Separation condition: For [math]\displaystyle{ i \in \Z }[/math], the cover set [math]\displaystyle{ C_i = \{p \in R \mid l(p) \geq i\} }[/math] has [math]\displaystyle{ d_{\min}(C_i) = \min\limits_{p \in C_{i}}\min\limits_{q \in C_{i}\setminus \{p\}} d(p,q) \gt 2^{i} }[/math]
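The three conditions can be checked mechanically for a given level function and parent map. The sketch below is purely illustrative (the representation as plain dictionaries and the function name are assumptions, not part of the cited construction); it verifies the root, covering and separation conditions.

```python
def verify_compressed_cover_tree(points, root, level, parent, d):
    """Check the root, covering and separation conditions.

    points : all nodes of the tree (the reference set R)
    root   : the designated root node r
    level  : dict mapping each node p to its integer level l(p)
    parent : dict mapping each non-root node to its unique parent
    d      : metric on the nodes
    """
    others = [level[p] for p in points if p != root]
    # Root condition: l(r) >= 1 + max level among all other nodes.
    if others and level[root] < 1 + max(others):
        return False
    # Covering condition: d(q, parent(q)) <= 2^(l(q)+1) and l(q) < l(parent(q)).
    for q in points:
        if q == root:
            continue
        p = parent[q]
        if d(q, p) > 2 ** (level[q] + 1) or level[q] >= level[p]:
            return False
    # Separation condition: points of C_i = {p : l(p) >= i} are pairwise
    # more than 2^i apart, for every level i that actually occurs.
    for i in set(level.values()):
        C_i = [p for p in points if level[p] >= i]
        for a in range(len(C_i)):
            for b in range(a + 1, len(C_i)):
                if d(C_i[a], C_i[b]) <= 2 ** i:
                    return False
    return True
```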
Expansion constants
In a metric space, let [math]\displaystyle{ \bar B(p,t) }[/math] be the closed ball with a center p and a radius [math]\displaystyle{ t\geq 0 }[/math]. The notation [math]\displaystyle{ |\bar B(p,t)| }[/math] denotes the number (if finite) of points in the closed ball.
The expansion constant [3] [math]\displaystyle{ c(R) }[/math] is the smallest [math]\displaystyle{ c(R)\geq 2 }[/math] such that [math]\displaystyle{ |\bar{B}(p,2t)|\leq c(R) \cdot |\bar{B}(p,t)| }[/math] for any point [math]\displaystyle{ p\in R }[/math] and [math]\displaystyle{ t\geq 0 }[/math].
The new minimized expansion constant [1] [math]\displaystyle{ c_m }[/math] is a discrete analog of the doubling dimension [4]: [math]\displaystyle{ c_m(R) = \lim\limits_{\xi \rightarrow 0^{+}}\inf\limits_{R\subseteq A\subseteq X}\sup\limits_{p \in A,t \gt \xi}\dfrac{|\bar{B}(p,2t) \cap A|}{|\bar{B}(p,t) \cap A|} }[/math], where A is a locally finite set that covers R.
Note that [math]\displaystyle{ c_m(R) \leq c(R) }[/math] for any finite metric space (R,d).
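For a small finite set, the (non-minimized) expansion constant can be computed by brute force: the ratio [math]\displaystyle{ |\bar{B}(p,2t)| / |\bar{B}(p,t)| }[/math] can only change at radii derived from pairwise distances, so only finitely many radii need to be checked. The helper below is a sketch under that observation, not code from the cited papers.

```python
def expansion_constant(R, d):
    """Smallest c >= 2 with |B(p, 2t)| <= c * |B(p, t)| for all p in R, t >= 0."""
    def ball_size(p, t):
        # Number of points of R in the closed ball of radius t around p.
        return sum(1 for q in R if d(p, q) <= t)

    # The ratio only changes when one of the two balls gains a point,
    # i.e. at t = d(p, q) or t = d(p, q) / 2, so those radii suffice.
    dists = {d(p, q) for p in R for q in R if p != q}
    radii = sorted(dists | {t / 2 for t in dists})
    c = 2.0
    for p in R:
        for t in radii:
            c = max(c, ball_size(p, 2 * t) / ball_size(p, t))
    return c
```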
Aspect ratio
For any finite set R with a metric d, the diameter is [math]\displaystyle{ \mathrm{diam}(R) = \max_{p \in R}\max_{q \in R}d(p,q) }[/math]. The aspect ratio is [math]\displaystyle{ \Delta(R) = \dfrac{\mathrm{diam}(R)}{d_{\min}(R)} }[/math], where [math]\displaystyle{ d_{\min}(R) }[/math] is the shortest distance between points of R.
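As a similarly small sketch, the aspect ratio of a finite set with at least two points follows directly from the pairwise distances:

```python
def aspect_ratio(R, d):
    """Aspect ratio diam(R) / d_min(R) of a finite set R with at least two points."""
    pairs = [(p, q) for p in R for q in R if p != q]
    diam = max(d(p, q) for p, q in pairs)
    d_min = min(d(p, q) for p, q in pairs)
    return diam / d_min
```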
Complexity
Insert
Although cover trees provide faster searches than the naive approach, this advantage must be weighed against the additional cost of maintaining the data structure. In the naive approach, adding a new point to the dataset is trivial because order does not need to be preserved, whereas in a compressed cover tree the insertion time can be bounded as follows (a brute-force reference insertion is sketched after these bounds):
- Using expansion constant: [math]\displaystyle{ O(c(R)^{10} \cdot \log|R|) }[/math].
- Using minimized expansion constant / doubling dimension: [math]\displaystyle{ O(c_m(R)^{8} \cdot \log\Delta(R)) }[/math].
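As a correctness illustration only, the sketch below inserts a point by brute force (linear time per insertion) while preserving the three defining conditions; it does not achieve the logarithmic bounds above, which require the level-by-level algorithm analyzed in [1]. The tree representation (a dict holding the root, the level function and the parent map, matching the earlier verification sketch) is an assumption made for this example.

```python
import math

def insert(tree, x, d):
    """Insert a new point x (not yet a node) while keeping the three conditions.

    tree = {"root": r, "level": {node: l(node)}, "parent": {node: parent(node)}}
    This brute-force version scans all nodes; the fast algorithm in [1] avoids that.
    """
    root, level, parent = tree["root"], tree["level"], tree["parent"]

    # Raise the root level, if necessary, so that the root can cover x from above.
    level[root] = max(level[root], math.ceil(math.log2(d(x, root))))

    def separated(i):
        # Separation with l(x) = i: every existing p needs d(x, p) > 2^min(l(p), i).
        return all(d(x, p) > 2 ** min(level[p], i) for p in level)

    # Choose the largest admissible level strictly below the root level.
    i = level[root] - 1
    while not separated(i):
        i -= 1

    # Any node on a higher level within distance 2^(i+1) may serve as the parent;
    # one always exists (the root, or the node that blocked level i + 1).
    candidates = [p for p in level if level[p] > i and d(x, p) <= 2 ** (i + 1)]
    level[x] = i
    parent[x] = min(candidates, key=lambda p: d(x, p))
```

A tree on a whole reference set can then be built by taking any point as the root with an arbitrary starting level and inserting the remaining points one by one; the fast construction in [1] follows the same outline but locates the insertion level without scanning all nodes.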
K-nearest neighbor search
Let Q and R be finite subsets of a metric space (X,d). Once all points of R are inserted into a compressed cover tree [math]\displaystyle{ \mathcal{T}(R) }[/math], it can be used to answer find-queries for points of the query set Q. The following time complexities have been proven for finding the k nearest neighbors of a query point [math]\displaystyle{ q \in Q }[/math] in the reference set R (a simplified search sketch follows the list):
- Using expansion constant: [math]\displaystyle{ O\Big ( c(R \cup \{q\})^2 \cdot \log_2(k) \cdot \big((c_m(R))^{10} \cdot \log_2(|R|) + c(R \cup \{q\}) \cdot k\big) \Big) }[/math].
- Using minimized expansion constant / doubling dimension: [math]\displaystyle{ O\Big ((c_m(R))^{10} \cdot \log_2(k) \cdot \log_2(\Delta(R)) + |\bar{B}(q, 5d_k(q,R))| \cdot \log_2(k) \Big ) }[/math], where [math]\displaystyle{ |\bar{B}(q, 5d_k(q,R))| }[/math] is the number of points inside the closed ball around q whose radius is 5 times the distance from q to its k-th nearest neighbor.
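The bounds above come from the level-by-level traversal analyzed in [1]. As a simpler correctness sketch (assuming the tree additionally stores a children map, the inverse of the parent map used earlier), the branch-and-bound search below returns exact k-nearest neighbors: every descendant of a node p lies within [math]\displaystyle{ 2^{l(p)+1} }[/math] of p, so the subtree below p can be pruned once that ball cannot improve the current k-th nearest distance.

```python
import heapq
from itertools import count

def knn_search(tree, x, k, d):
    """Exact k-nearest neighbors of x among the nodes of the tree.

    tree = {"root": r, "level": {...}, "children": {node: [child, ...]}}
    This is a correctness sketch; it does not reproduce the traversal order
    behind the complexity bounds proven in [1].
    """
    root, level, children = tree["root"], tree["level"], tree["children"]
    best = []                    # max-heap of (-distance, tiebreak, point), size <= k
    tiebreak = count()
    stack = [root]
    while stack:
        p = stack.pop()
        dist = d(x, p)
        if len(best) < k:
            heapq.heappush(best, (-dist, next(tiebreak), p))
        elif dist < -best[0][0]:
            heapq.heapreplace(best, (-dist, next(tiebreak), p))
        kth = -best[0][0] if len(best) == k else float("inf")
        # Descend into p's subtree only if it may still contain a closer point:
        # every descendant q of p satisfies d(x, q) >= d(x, p) - 2^(l(p)+1).
        if dist <= kth + 2 ** (level[p] + 1):
            stack.extend(children.get(p, []))
    return [p for _, _, p in sorted(best, reverse=True)]  # nearest first
```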
Space
A compressed cover tree built on a finite metric space R requires O(|R|) space, both during the construction and during the execution of the find algorithm.
Comparison with similar data structures
The tables below show time complexity estimates that use the minimized expansion constant [math]\displaystyle{ c_m(R) }[/math] or the dimensionality constant [math]\displaystyle{ 2^{\text{dim}} }[/math] [4] related to the doubling dimension. Note that [math]\displaystyle{ \Delta }[/math] denotes the aspect ratio.
Results for building data structures
Name of data structure, source | Claimed time complexity | Claimed space complexity | Proof of result
---|---|---|---
Navigating nets [4] | [math]\displaystyle{ O\big(2^{O(\text{dim})} \cdot |R| \cdot \log(\Delta(R)) \cdot \log(\log(\Delta(R)))\big) }[/math] | [math]\displaystyle{ O(2^{O(\text{dim})} \cdot |R|) }[/math] | Theorem 2.5 [4]
Compressed cover tree [1] | [math]\displaystyle{ O\big(c_m(R)^{O(1)} \cdot |R| \cdot \log(\Delta(R))\big) }[/math] | [math]\displaystyle{ O(|R|) }[/math] | Theorem 3.6 [1]
Results for exact k-nearest neighbors of one query point [math]\displaystyle{ q \in Q }[/math] in the reference set R, assuming that all data structures are already built. Below, the distance between a query point q and the reference set R is denoted [math]\displaystyle{ d(q,R) }[/math], and the distance from q to its k-th nearest neighbor in R is denoted [math]\displaystyle{ d_k(q,R) }[/math]:
Name of data structure, source | Claimed time complexity | Claimed space complexity | Proof of result
---|---|---|---
Navigating nets [4] | [math]\displaystyle{ O\big(2^{O(\text{dim})} \cdot \log(\Delta) + |\bar{B}(q,O(d(q,R)))|\big) }[/math] | [math]\displaystyle{ O(2^{O(\text{dim})} \cdot |R|) }[/math] | Proof outline in Theorem 2.3 [4]
Compressed cover tree [1] | [math]\displaystyle{ O\big(\log(k)\cdot (c_m(R)^{O(1)} \cdot \log(\Delta) + |\bar{B}(q,O(d_k(q,R)))| ) \big ) }[/math] | [math]\displaystyle{ O(|R|) }[/math] | Corollary 3.7 [1]
The tables below show time complexity estimates that use [math]\displaystyle{ c(R) }[/math] or the KR-type constant [math]\displaystyle{ 2^{\text{dim}_{KR}} }[/math][4] as a hidden factor. Note that the dimensionality factor [math]\displaystyle{ 2^{\text{dim}_{KR}} }[/math] is equivalent to [math]\displaystyle{ c(R)^{O(1)} }[/math].
Results for building data structures
Name of data structure, source | Claimed time complexity | Claimed space complexity | Proof of result
---|---|---|---
Navigating nets [4] | [math]\displaystyle{ O\big(2^{O(\text{dim}_{KR})} \cdot |R| \cdot \log(|R|) \cdot \log(\log|R|)\big) }[/math] | [math]\displaystyle{ O(2^{O(\text{dim})} \cdot |R|) }[/math] | Not available
Cover tree [3] | [math]\displaystyle{ O(c(R)^{O(1)} \cdot |R| \cdot \log|R|) }[/math] | [math]\displaystyle{ O(|R|) }[/math] | Counterexample 4.2 [2] shows that the past proof is incorrect.
Compressed cover tree [1] | [math]\displaystyle{ O(c(R)^{O(1)} \cdot |R| \cdot \log|R|) }[/math] | [math]\displaystyle{ O(|R|) }[/math] | Corollary 3.10 [1]
Results for exact k-nearest neighbors of one query point [math]\displaystyle{ q \in X }[/math] assuming that all data structures are already built.
Name of data structure, source | Claimed time complexity | Claimed space complexity | Proof of result
---|---|---|---
Navigating nets [4] | [math]\displaystyle{ O\big(2^{O(\text{dim}_{KR}(R \cup \{q\}))} \cdot (k + \log|R|)\big) }[/math] | [math]\displaystyle{ O(2^{O(\text{dim})} \cdot |R|) }[/math] | Not available
Cover tree [3] | [math]\displaystyle{ O\big(c(R)^{O(1)} \cdot \log|R|\big ) }[/math] for k = 1. | [math]\displaystyle{ O(|R|) }[/math] | Counterexample 5.2 [2] shows that the past proof is incorrect.
Compressed cover tree [1] | [math]\displaystyle{ O\big(c(R \cup \{q\})^{O(1)} \cdot \log(k) \cdot (\log(|R|) + k)\big) }[/math] | [math]\displaystyle{ O(|R|) }[/math] | Theorem 4.9 [1]
See also
- Cover tree
- Nearest neighbor search
- kd-tree
References
- [1] Elkin, Yury; Kurlin, Vitaliy (2021). "A new near-linear time algorithm for k-nearest neighbor search using a compressed cover tree". arXiv:2111.15478 [cs.CG].
- [2] Elkin, Yury; Kurlin, Vitaliy (2022). "Counterexamples expose gaps in the proof of time complexity for cover trees introduced in 2006". arXiv:2208.09447 [cs.CG].
- [3] Beygelzimer, Alina; Kakade, Sham; Langford, John (2006). "Cover Trees for Nearest Neighbor". In Proceedings of the International Conference on Machine Learning (ICML).
- [4] Clarkson, Kenneth (2006). "Nearest-neighbor searching and metric space dimensions". In G. Shakhnarovich, T. Darrell, and P. Indyk, editors, Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, pages 15–59. MIT Press.
Original source: https://en.wikipedia.org/wiki/Compressed cover tree.