Cuckoo filter

A cuckoo filter is a space-efficient probabilistic data structure that is used to test whether an element is a member of a set, much like a Bloom filter. False positive matches are possible, but false negatives are not – in other words, a query returns either "possibly in set" or "definitely not in set". A cuckoo filter can also delete existing items, which Bloom filters do not support. In addition, for applications that store many items and target moderately low false positive rates, cuckoo filters can achieve lower space overhead than space-optimized Bloom filters.[1]

Cuckoo filters were first described in 2014.[2]

Algorithm description

A cuckoo filter uses a hash table based on cuckoo hashing to store the fingerprints of items.[2] The data structure is broken into buckets of some size [math]\displaystyle{ b }[/math]. To insert the fingerprint of an item [math]\displaystyle{ x }[/math], one first computes two potential buckets [math]\displaystyle{ h_1(x) }[/math] and [math]\displaystyle{ h_2(x) }[/math] where [math]\displaystyle{ x }[/math] could go. These buckets are calculated using the formula

[math]\displaystyle{ h_1(x)=\text{hash}(x) }[/math]
[math]\displaystyle{ h_2(x)=h_1(x)\oplus\text{hash}(\text{fingerprint}(x)) }[/math]

Note that, due to the symmetry of the XOR operation, one can compute [math]\displaystyle{ h_2(x) }[/math] from [math]\displaystyle{ h_1(x) }[/math], and [math]\displaystyle{ h_1(x) }[/math] from [math]\displaystyle{ h_2(x) }[/math]. As defined above, [math]\displaystyle{ h_2(x) = h_1(x)\oplus\text{hash}(\text{fingerprint}(x)) }[/math]; it follows that [math]\displaystyle{ h_1(x) = h_2(x)\oplus\text{hash}(\text{fingerprint}(x)) }[/math]. These properties are what make it possible to store the fingerprints with cuckoo hashing.
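The following minimal sketch (in Python) illustrates this computation; the table size, fingerprint width, and choice of hash function are illustrative assumptions, not part of any particular implementation. The table size is kept a power of two so that the XOR step maps valid bucket indices to valid bucket indices and is its own inverse.

import hashlib

NUM_BUCKETS = 1 << 16       # number of buckets; a power of two, so the XOR step below is an involution
FINGERPRINT_BITS = 8        # bits per stored fingerprint

def _hash(data: bytes) -> int:
    # Any well-distributed hash works; SHA-1 is used here purely for illustration.
    return int.from_bytes(hashlib.sha1(data).digest()[:8], "big")

def fingerprint(item: bytes) -> int:
    # Short fingerprint of the item, kept non-zero so that 0 can mark an empty slot.
    return _hash(item) % ((1 << FINGERPRINT_BITS) - 1) + 1

def bucket1(item: bytes) -> int:
    return _hash(item) % NUM_BUCKETS

def bucket2(item: bytes) -> int:
    # Partial-key cuckoo hashing: XOR the first bucket with a hash of the fingerprint.
    return (bucket1(item) ^ _hash(fingerprint(item).to_bytes(2, "big"))) % NUM_BUCKETS

def alternate(bucket: int, fp: int) -> int:
    # The same XOR recovers either candidate bucket from the other, given only the fingerprint.
    return (bucket ^ _hash(fp.to_bytes(2, "big"))) % NUM_BUCKETS

For any item x, alternate(bucket1(x), fingerprint(x)) equals bucket2(x) and vice versa, which is exactly the property exploited during evictions.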

The fingerprint of [math]\displaystyle{ x }[/math] is placed into one of buckets [math]\displaystyle{ h_1(x) }[/math] and [math]\displaystyle{ h_2(x) }[/math]. If both buckets are full, one of the fingerprints already stored there is evicted, as in cuckoo hashing, and relocated to its own alternate bucket. If that bucket is in turn full, the relocation may trigger another eviction, and so on, up to a maximum number of displacements.
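A minimal insertion routine along these lines, continuing the sketch above (BUCKET_SIZE and MAX_KICKS are illustrative choices, not prescribed values), could look as follows:

import random

BUCKET_SIZE = 4     # entries per bucket (b)
MAX_KICKS = 500     # bound on the eviction chain before the insertion is reported as failed

table = [[] for _ in range(NUM_BUCKETS)]    # each bucket holds up to BUCKET_SIZE fingerprints

def insert(item: bytes) -> bool:
    fp = fingerprint(item)
    i1, i2 = bucket1(item), bucket2(item)
    for i in (i1, i2):
        if len(table[i]) < BUCKET_SIZE:
            table[i].append(fp)
            return True
    # Both candidate buckets are full: evict a random fingerprint and relocate it
    # to its alternate bucket, repeating until a free slot is found or MAX_KICKS is hit.
    i = random.choice((i1, i2))
    for _ in range(MAX_KICKS):
        slot = random.randrange(len(table[i]))
        fp, table[i][slot] = table[i][slot], fp
        i = alternate(i, fp)
        if len(table[i]) < BUCKET_SIZE:
            table[i].append(fp)
            return True
    return False    # the table is considered too full; a larger table is needed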

The hash table can achieve both high utilization (thanks to cuckoo hashing), and compactness because only fingerprints are stored. Lookup and delete operations of a cuckoo filter are straightforward.[2]

At most two buckets, [math]\displaystyle{ h_1(x) }[/math] and [math]\displaystyle{ h_2(x) }[/math], need to be checked. If the fingerprint is found, the corresponding lookup or delete operation completes in [math]\displaystyle{ O(b) }[/math] time. In practice, [math]\displaystyle{ b }[/math] is a small constant, so these operations take constant time.
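Continuing the same sketch, lookup and deletion touch only the two candidate buckets:

def contains(item: bytes) -> bool:
    # Membership test: the fingerprint can only reside in one of two buckets.
    fp = fingerprint(item)
    return fp in table[bucket1(item)] or fp in table[bucket2(item)]

def delete(item: bytes) -> bool:
    # Remove one copy of the fingerprint. This is only safe for items that were
    # actually inserted; otherwise a colliding fingerprint belonging to a
    # different item could be removed, creating a false negative.
    fp = fingerprint(item)
    for i in (bucket1(item), bucket2(item)):
        if fp in table[i]:
            table[i].remove(fp)
            return True
    return False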

In order for the hash table to offer theoretical guarantees, the fingerprint size [math]\displaystyle{ f }[/math] must be at least [math]\displaystyle{ \Omega((\log n) / b) }[/math] bits, where [math]\displaystyle{ n }[/math] is the number of items stored.[2][3][4] Subject to this constraint, cuckoo filters guarantee a false-positive rate of [math]\displaystyle{ \epsilon \le b/2^{f - 1} }[/math].[2]
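For example, with buckets of size [math]\displaystyle{ b = 4 }[/math] and fingerprints of [math]\displaystyle{ f = 8 }[/math] bits (illustrative values), the bound gives [math]\displaystyle{ \epsilon \le 4/2^{7} = 1/32 \approx 3.1\% }[/math]; each additional fingerprint bit roughly halves the bound.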

Comparison to Bloom filters

A cuckoo filter is similar to a Bloom filter in that both are fast and compact, and both may return false positives as answers to set-membership queries. The two data structures differ in the following respects:

  • Space-optimal Bloom filters use [math]\displaystyle{ 1.44\log_2(1/\epsilon) }[/math] bits of space per inserted key, where [math]\displaystyle{ \epsilon }[/math] is the false positive rate. A cuckoo filter requires [math]\displaystyle{ (\log_2(1/\epsilon) + 1 + \log_2 b)/\alpha }[/math] bits per key,[2] where [math]\displaystyle{ \alpha }[/math] is the hash table load factor, which can reach [math]\displaystyle{ 95.5\% }[/math] depending on the cuckoo filter's configuration (see the worked comparison after this list). For reference, the information-theoretic lower bound is [math]\displaystyle{ \log_2(1/\epsilon) }[/math] bits per item. Both Bloom filters and cuckoo filters at low load can be compressed when not in use.
  • On a positive lookup, a space-optimal Bloom filter requires [math]\displaystyle{ \log_2(1/\epsilon) }[/math] memory accesses into the bit array, whereas a cuckoo filter requires at most [math]\displaystyle{ 2b }[/math] memory accesses (two buckets of [math]\displaystyle{ b }[/math] entries each), which is a constant in practice.
  • The insertion speed of a cuckoo filter degrades as the table approaches a load threshold, at which point expanding the table is recommended. In contrast, Bloom filters can keep inserting new items before expansion, at the cost of an increasing false positive rate.
  • Bloom filters offer fast union and approximate intersection operations using cheap bitwise operations, which can also be applied to compressed Bloom filters if streaming compression is used.
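As an illustrative comparison using the formulas above, take [math]\displaystyle{ b = 4 }[/math] and [math]\displaystyle{ \alpha = 95.5\% }[/math]. For a target false positive rate of [math]\displaystyle{ \epsilon = 1\% }[/math], a space-optimal Bloom filter needs about [math]\displaystyle{ 1.44\log_2(100) \approx 9.6 }[/math] bits per key, while a cuckoo filter needs about [math]\displaystyle{ (\log_2(100) + 3)/0.955 \approx 10.1 }[/math] bits per key; at [math]\displaystyle{ \epsilon = 0.1\% }[/math] the figures are roughly [math]\displaystyle{ 14.4 }[/math] versus [math]\displaystyle{ 13.6 }[/math] bits per key, so the cuckoo filter pulls ahead as the target false positive rate decreases.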

Limitations

  • A cuckoo filter can only delete items that are known to have been inserted previously.
  • Insertion can fail, and rehashing is then required, as with other cuckoo hash tables. Note that the amortized insertion complexity is still [math]\displaystyle{ O(1) }[/math].[5]
  • Cuckoo filters require a fingerprint size [math]\displaystyle{ f }[/math] of at least [math]\displaystyle{ \Omega((\log n) / b) }[/math] bits. This means that the space per key must be at least [math]\displaystyle{ \Omega((\log n) / b) }[/math] bits, even if [math]\displaystyle{ \epsilon }[/math] is large. In practice, [math]\displaystyle{ b }[/math] is chosen to be large enough that this is not a major issue.[2]

References

  1. "Bloom Filters, Cuckoo Hashing, Cuckoo Filters, Adaptive Cuckoo Filters, and Learned Bloom Filters". https://smartech.gatech.edu/handle/1853/60577. 
  2. Fan, Bin; Andersen, Dave G.; Kaminsky, Michael; Mitzenmacher, Michael D. (2014). "Cuckoo filter: Practically better than Bloom". Proc. 10th ACM International Conference on Emerging Networking Experiments and Technologies (CoNEXT '14). Sydney, Australia. pp. 75–88. doi:10.1145/2674005.2674994. ISBN 9781450332798.
  3. "Cuckoo filter: Simplification and analysis". Proc. 15th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2016). 53. Reykjavik, Iceland. 22 June 2016. pp. 8:1–8:12. doi:10.4230/LIPIcs.SWAT.2016.8. 
  4. Template:Cite tech report
  5. Pagh, Rasmus; Rodler, Flemming Friche (2001). "Cuckoo hashing". Proc. 9th Annual European Symposium on Algorithms (ESA 2001). LNCS 2161. Århus, Denmark. pp. 121–133. doi:10.1007/3-540-44676-1_10. ISBN 978-3-540-42493-2.
