Cell-probe model

In computer science, the cell-probe model is a model of computation similar to the random-access machine, except that computation is free and only memory accesses are counted. This model is useful for proving lower bounds on the complexity of data structure problems.

Overview

The cell-probe model is a modification of the random-access machine model, in which computational cost is only assigned to accessing memory cells.

The model is intended for proving lower bounds on the complexity of data structure problems. One type of such problem has two phases: a preprocessing phase and a query phase. The input to the preprocessing phase is a set of data from which a structure of memory cells is built. The input to the query phase is a query parameter, and the query must consult the data structure in order to compute its result; for example, a query may ask whether the query parameter was included in the original input data set. Another type of problem involves both update operations, which modify the data structure, and query operations; for example, an update may add an element to the structure, or remove one. In both cases, the cell-probe complexity of the data structure is the number of memory cells accessed during preprocessing, queries and (where relevant) updates. The cell-probe complexity is a lower bound on the time complexity of the corresponding operations on a random-access machine, where memory transfers are among the operations counted when measuring time.

An example of such a problem is the dynamic partial sum problem.[1][2]

History

Andrew Yao's 1981 paper "Should Tables Be Sorted?"[3] is considered the introduction of the cell-probe model. Yao used it to establish the minimum number of memory-cell "probes", or accesses, necessary to determine whether a given query datum exists within a table stored in memory. In 1989, Fredman and Saks initiated the study of cell-probe lower bounds for dynamic data-structure problems (i.e., those involving updates and queries), and introduced the notation CPROBE(b) for the cell-probe model in which a memory cell (word) consists of b bits.[4]

Notable results

Searching Tables

Yao[3] considered a static data-structure problem where one has to build a data structure ("table") to represent a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements out of [math]\displaystyle{ 1,\dots,m }[/math]. The query parameter is a number [math]\displaystyle{ x \in \{1,\dots,m\} }[/math], and the query has to report whether [math]\displaystyle{ x }[/math] is in the set. Yao showed that as long as the table size is bounded independently of [math]\displaystyle{ m }[/math] and [math]\displaystyle{ m }[/math] is large enough, a query must perform [math]\displaystyle{ \lceil\log_2 (n+1)\rceil }[/math] probes in the worst case. This shows that a sorted table together with binary search for queries is an optimal scheme.
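As an illustrative sketch of this optimal scheme (the function names and the explicit probe counter are not from the source), a sorted table answered by binary search reads at most [math]\displaystyle{ \lceil\log_2 (n+1)\rceil }[/math] cells per query:

```python
def build_table(S):
    """Preprocessing: store the n elements of S in sorted order."""
    return sorted(S)

def member(table, x):
    """Query by binary search; `probes` counts reads of table cells."""
    lo, hi, probes = 0, len(table) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1              # one cell probe: read table[mid]
        if table[mid] == x:
            return True, probes
        elif table[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, probes

table = build_table({3, 7, 19, 42})
```

For a table of [math]\displaystyle{ n = 4 }[/math] elements, no query performs more than [math]\displaystyle{ \lceil\log_2 5\rceil = 3 }[/math] probes.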

Dynamic Partial Sums

The dynamic partial sums problem defines two operations: Update(k, v), which sets the value in an array A at index k to v, and Sum(k), which returns the sum of the values in A at indices 0 through k. A naïve implementation would take [math]\displaystyle{ O(1) }[/math] time for Update and [math]\displaystyle{ O(n) }[/math] time for Sum.[5]
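A minimal sketch of the naïve scheme (the class and method names are illustrative, not from the source):

```python
class NaivePartialSums:
    """O(1) Update, O(n) Sum: values kept in a plain array."""

    def __init__(self, n):
        self.A = [0] * n

    def update(self, k, v):
        self.A[k] = v               # a single cell write

    def sum(self, k):
        return sum(self.A[:k + 1])  # reads k + 1 cells
```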

Instead, values can be stored as leaves in a tree whose inner nodes each store the sum over the subtree rooted at that node. In this structure, Update requires [math]\displaystyle{ O(\log n) }[/math] time to update each node on the leaf-to-root path, and Sum similarly requires [math]\displaystyle{ O(\log n) }[/math] time to traverse the tree from leaf to root, summing the values of all subtrees left of the query index.
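A sketch of this tree-based structure, here realized as an implicit complete binary tree stored in an array (the names and array layout are illustrative, not from the source):

```python
class TreePartialSums:
    """Values at the leaves of an implicit complete binary tree;
    each inner node stores the sum over its subtree, so both
    Update and Sum touch O(log n) nodes (cells)."""

    def __init__(self, n):
        self.size = 1
        while self.size < n:         # round n up to a power of two
            self.size *= 2
        self.tree = [0] * (2 * self.size)

    def update(self, k, v):
        i = self.size + k            # position of leaf k
        self.tree[i] = v
        i //= 2
        while i >= 1:                # refresh sums on the leaf-to-root path
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]
            i //= 2

    def sum(self, k):
        """Sum of A[0..k]: walk leaf-to-root, adding each left sibling."""
        i = self.size + k
        total = self.tree[i]
        while i > 1:
            if i % 2 == 1:           # i is a right child: its left sibling
                total += self.tree[i - 1]  # covers indices left of i
            i //= 2
        return total
```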

Improving on a result of Fredman and Saks, Mihai Pătraşcu used the cell-probe model and an information transfer argument to show that the partial sums problem requires [math]\displaystyle{ \Omega\left(\log n\right) }[/math] time per operation in the worst case (i.e., the worse of query and update must consume this much time), assuming [math]\displaystyle{ b=\Omega(\log n) }[/math] bits per word.[1][2] He further exhibited the trade-off curve between update time and query time, and investigated the case in which updates are restricted to small numbers (of [math]\displaystyle{ \delta=o(b) }[/math] bits).

Disjoint Set Maintenance (Union-Find)

The disjoint-set data structure represents a collection of disjoint sets and supports two operations: an update, called Union, which unites two sets, and a query, called Find, which identifies the set to which a given element belongs. Fredman and Saks proved that in the model CPROBE(log n), any solution for this problem requires [math]\displaystyle{ \Omega(m\alpha(m,n)) }[/math] probes in the worst case (even in expectation) to execute [math]\displaystyle{ n-1 }[/math] unions and [math]\displaystyle{ m\ge n }[/math] finds.[4] This shows that the classic data structure described in the article on disjoint-set data structures is optimal.
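A sketch of that classic structure, with union by rank and path compression (names are illustrative); its amortized cost matches the lower bound above:

```python
class DisjointSets:
    """Union-find with union by rank and path compression,
    achieving O(m * alpha(m, n)) total cost for m operations."""

    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        # Path compression: repoint x and its ancestors at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:  # union by rank: attach the
            rx, ry = ry, rx                # shallower tree under the deeper
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
```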

Approximate Nearest Neighbor Searching

The exact nearest neighbor search problem is to determine the point in a set of input points that is closest to a given query point. An approximate version of this problem is often considered, since many applications arise in high-dimensional spaces and solving the problem exactly in high dimensions requires time or space exponential in the dimension.[6]

Chakrabarti and Regev proved that the approximate nearest neighbor search problem on the Hamming cube using polynomial storage and [math]\displaystyle{ d^{O(1)} }[/math] word size requires a worst-case query time of [math]\displaystyle{ \Omega\left(\frac{\log\log d}{\log\log\log d}\right) }[/math].[6] This proof used the cell-probe model and information-theoretic techniques from communication complexity.

The Cell-Probe Model versus Random Access Machines

In the cell-probe model, limiting the range of values that can be stored in a cell is essential (otherwise the whole data structure could be encoded in a single cell). The idealized random-access machine used as a computational model in computer science does not impose a limit on the contents of its cells (in contrast to the word RAM). Cell-probe lower bounds therefore apply to the word RAM, but not to the idealized RAM. Certain techniques for cell-probe lower bounds can, however, be carried over to the idealized RAM with an algebraic instruction set, yielding similar lower bounds.[7]

References

  1. 1.0 1.1 Pătraşcu, Mihai; Demaine, Erik D. (2006). "Logarithmic lower bounds in the cell-probe model". SIAM Journal on Computing 35 (4): 932–963. doi:10.1137/s0097539705447256. Bibcode: 2005cs........2041P.
  2. 2.0 2.1 Pătraşcu, Mihai. "Lower Bounds for Dynamic Partial Sums". http://cg.scs.carleton.ca/~morin/teaching/5408/refs/p09.pdf. Retrieved 9 April 2014. 
  3. 3.0 3.1 Yao, Andrew Chi-Chih (July 1981). "Should Tables Be Sorted?". J. ACM 28 (3): 615–628. doi:10.1145/322261.322274. 
  4. 4.0 4.1 Fredman, Michael; Saks, Michael (1989). "The cell probe complexity of dynamic data structures". Proceedings of the twenty-first annual ACM symposium on Theory of computing - STOC '89. 21. pp. 345–354. doi:10.1145/73007.73040. ISBN 0897913078. 
  5. Zatloukal, Kevin (November 10, 2010). "Notes on "Logarithmic Lower Bounds in the Cell-Probe Model"". http://homes.cs.washington.edu/~anuprao/pubs/CSE533Autumn2010/zatloukaldynamiclowerbound.pdf. Retrieved 9 April 2014. 
  6. Chakrabarti, Amit; Regev, Oded (2004). "An Optimal Randomised Cell Probe Lower Bound for Approximate Nearest Neighbour Searching". 45th Annual IEEE Symposium on Foundations of Computer Science. pp. 473–482. doi:10.1109/FOCS.2004.12. ISBN 0-7695-2228-9. 
  7. Ben-Amram, Amir M.; Galil, Zvi (2002). "Lower bounds for dynamic data structures on algebraic RAMs". Algorithmica 32 (3): 364–395. doi:10.1007/s00453-001-0079-6.