Double hashing
Double hashing is a computer programming technique used in conjunction with open addressing in hash tables to resolve hash collisions, by using a secondary hash of the key as an offset when a collision occurs. Double hashing with open addressing is a classical data structure on a table [math]\displaystyle{ T }[/math]. The technique uses one hash value as an index into the table and then repeatedly steps forward by an interval until the desired value is located, an empty location is reached, or the entire table has been searched; this interval is set by a second, independent hash function. Unlike the alternative collision-resolution methods of linear probing and quadratic probing, the interval depends on the data, so that values mapping to the same location have different bucket sequences; this minimizes repeated collisions and the effects of clustering.
Given two random, uniform, and independent hash functions [math]\displaystyle{ h_1 }[/math] and [math]\displaystyle{ h_2 }[/math], the [math]\displaystyle{ i }[/math]th location in the bucket sequence for value [math]\displaystyle{ k }[/math] in a hash table of [math]\displaystyle{ |T| }[/math] buckets is: [math]\displaystyle{ h(i,k)=(h_1(k) + i \cdot h_2(k))\bmod|T|. }[/math] Generally, [math]\displaystyle{ h_1 }[/math] and [math]\displaystyle{ h_2 }[/math] are selected from a set of universal hash functions; [math]\displaystyle{ h_1 }[/math] is selected to have a range of [math]\displaystyle{ \{0, 1, \ldots, |T|-1\} }[/math] and [math]\displaystyle{ h_2 }[/math] to have a range of [math]\displaystyle{ \{1, 2, \ldots, |T|-1\} }[/math]. Double hashing approximates a random distribution; more precisely, pair-wise independent hash functions yield a probability of [math]\displaystyle{ (n/|T|)^2 }[/math] that any pair of keys will follow the same bucket sequence.
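The C sketch below shows one way a lookup can follow this probe sequence; it is illustrative only, and the table representation, the EMPTY sentinel, the table size, and the hash functions hash1() and hash2() are assumptions rather than part of any particular implementation.

#include <stddef.h>

#define TABLE_SIZE 13        /* |T|; an arbitrary example size (prime) */
#define EMPTY      (-1)      /* assumed sentinel marking an unused bucket */

/* Hypothetical hash functions; hash1() should cover 0..|T|-1 and
   hash2() should cover 1..|T|-1, as described above. */
extern size_t hash1(int key);
extern size_t hash2(int key);

/* Return the index of key in table[], or -1 if it is absent, by probing
   h(i,k) = (h1(k) + i*h2(k)) mod |T| for i = 0, 1, 2, ... */
long dh_search(const int table[], int key)
{
    size_t h1 = hash1(key) % TABLE_SIZE;
    size_t h2 = hash2(key) % (TABLE_SIZE - 1) + 1;  /* forces 1 <= h2 <= |T|-1 */
    size_t i;

    for (i = 0; i < TABLE_SIZE; i++) {
        size_t idx = (h1 + i * h2) % TABLE_SIZE;
        if (table[idx] == key)
            return (long)idx;   /* desired value located */
        if (table[idx] == EMPTY)
            return -1;          /* empty location ends the probe sequence */
    }
    return -1;                  /* entire table has been searched */
}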
Selection of h2(k)
The secondary hash function [math]\displaystyle{ h_2(k) }[/math] should have several characteristics:
- it should never yield an index of zero
- it should cycle through the whole table
- it should be very fast to compute
- it should be pair-wise independent of [math]\displaystyle{ h_1(k) }[/math]
- the distribution characteristics of [math]\displaystyle{ h_2 }[/math] are irrelevant; it is analogous to a random-number generator
- all values of [math]\displaystyle{ h_2(k) }[/math] should be relatively prime to [math]\displaystyle{ |T| }[/math]
In practice:
- If division hashing is used for both functions, the divisors are chosen as primes.
- If [math]\displaystyle{ |T| }[/math] is a power of 2, the first and last requirements are usually satisfied by making [math]\displaystyle{ h_2(k) }[/math] always return an odd number, as in the sketch below. This has the side effect of doubling the chance of collision due to one wasted bit.[1]
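The following sketch shows the power-of-two case; raw_hash2() is a hypothetical stand-in for whatever secondary hash function is actually in use.

/* Hypothetical underlying secondary hash with full unsigned range. */
extern unsigned int raw_hash2(const void *key);

/* Step size for a table whose size is a power of two: forcing the low bit
   to 1 makes the value odd, so it is nonzero and relatively prime to the
   table size, and the probe sequence visits every bucket.  One output bit
   of raw_hash2() is sacrificed in the process. */
unsigned int h2_for_pow2_table(const void *key)
{
    return raw_hash2(key) | 1u;
}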
Analysis
Let [math]\displaystyle{ n }[/math] be the number of elements stored in [math]\displaystyle{ T }[/math]; then [math]\displaystyle{ T }[/math]'s load factor is [math]\displaystyle{ \alpha = n/|T| }[/math]. For the analysis, start by randomly, uniformly and independently selecting two universal hash functions [math]\displaystyle{ h_1 }[/math] and [math]\displaystyle{ h_2 }[/math] to build a double hashing table [math]\displaystyle{ T }[/math]. All elements are put in [math]\displaystyle{ T }[/math] by double hashing using [math]\displaystyle{ h_1 }[/math] and [math]\displaystyle{ h_2 }[/math]. Given a key [math]\displaystyle{ k }[/math], the [math]\displaystyle{ (i+1) }[/math]-st hash location is computed by:
[math]\displaystyle{ h(i,k) = ( h_1(k) + i \cdot h_2(k) ) \bmod |T|. }[/math]
Let [math]\displaystyle{ T }[/math] have a fixed load factor [math]\displaystyle{ \alpha }[/math] with [math]\displaystyle{ 0 \lt \alpha \lt 1 }[/math]. Bradford and Katehakis[2] showed that the expected number of probes for an unsuccessful search in [math]\displaystyle{ T }[/math], still using these initially chosen hash functions, is [math]\displaystyle{ \tfrac{1}{1-\alpha} }[/math] regardless of the distribution of the inputs. Pair-wise independence of the hash functions suffices.
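For example, at a load factor of [math]\displaystyle{ \alpha = 0.5 }[/math] an unsuccessful search is expected to take [math]\displaystyle{ 1/(1-0.5) = 2 }[/math] probes, while at [math]\displaystyle{ \alpha = 0.9 }[/math] the expectation grows to [math]\displaystyle{ 10 }[/math] probes.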
Like all other forms of open addressing, double hashing becomes linear as the hash table approaches maximum capacity. The usual heuristic is to limit the table loading to 75% of capacity. Eventually, rehashing to a larger size will be necessary, as with all other open addressing schemes.
Variants
Peter Dillinger's PhD thesis[3] points out that double hashing produces unwanted equivalent hash functions when the hash functions are treated as a set, as in Bloom filters: If [math]\displaystyle{ h_2(y) = -h_2(x) }[/math] and [math]\displaystyle{ h_1(y) = h_1(x) + k\cdot h_2(x) }[/math], then [math]\displaystyle{ h(i, y) = h(k - i, x) }[/math] and the sets of hashes [math]\displaystyle{ \left\{h(0, x), ..., h(k, x)\right\} = \left\{h(0, y), ..., h(k, y)\right\} }[/math] are identical. This makes a collision twice as likely as the hoped-for [math]\displaystyle{ 1/|T|^2 }[/math].
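As a concrete illustration with made-up values, let [math]\displaystyle{ |T| = 11 }[/math], [math]\displaystyle{ k = 3 }[/math], and let [math]\displaystyle{ x }[/math] have [math]\displaystyle{ h_1(x) = 2 }[/math] and [math]\displaystyle{ h_2(x) = 3 }[/math], giving the hash set [math]\displaystyle{ \{2, 5, 8, 0\} }[/math]. A key [math]\displaystyle{ y }[/math] with [math]\displaystyle{ h_2(y) = 8 \equiv -3 }[/math] and [math]\displaystyle{ h_1(y) = (2 + 3 \cdot 3) \bmod 11 = 0 }[/math] generates the probes [math]\displaystyle{ 0, 8, 5, 2 }[/math]: the same set, visited in reverse order.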
There are additionally a significant number of mostly-overlapping hash sets; if [math]\displaystyle{ h_2(y) = h_2(x) }[/math] and [math]\displaystyle{ h_1(y) = h_1(x) \pm h_2(x) }[/math], then [math]\displaystyle{ h(i, y) = h(i\pm 1, x) }[/math], and comparing additional hash values (expanding the range of [math]\displaystyle{ i }[/math]) is of no help.
Triple hashing
Adding a quadratic term [math]\displaystyle{ i^2 }[/math],[4] [math]\displaystyle{ i(i+1)/2 }[/math] (a triangular number), or even [math]\displaystyle{ i^2 \cdot h_3(x) }[/math] (triple hashing)[5] to the hash function improves it somewhat[4] but does not fix this problem; if:
- [math]\displaystyle{ h_1(y) = h_1(x) + k \cdot h_2(x) + k^2 \cdot h_3(x), }[/math]
- [math]\displaystyle{ h_2(y) = -h_2(x) - 2k \cdot h_3(x), }[/math] and
- [math]\displaystyle{ h_3(y) = h_3(x). }[/math]
then
- [math]\displaystyle{ \begin{align} h(k-i, y) &= h_1(y) + (k - i) \cdot h_2(y) + (k-i)^2 \cdot h_3(y) \\ &= h_1(y) + (k - i) (-h_2(x) - 2k h_3(x)) + (k-i)^2 h_3(x) \\ &= h_1(x) + k h_2(x) + k^2 h_3(x) + (k - i)(-h_2(x) - 2k h_3(x)) + (k-i)^2 h_3(x) \\ &= h_1(x) + k h_2(x) + k^2 h_3(x) + (i - k) h_2(x) + \left((k-i)^2 - 2k(k-i)\right) h_3(x) \\ &= h_1(x) + k h_2(x) + k^2 h_3(x) + (i - k) h_2(x) + (i^2 - k^2) h_3(x) \\ &= h_1(x) + i h_2(x) + i^2 h_3(x) \\ &= h(i, x). \\ \end{align} }[/math]
Enhanced double hashing
Adding a cubic term [math]\displaystyle{ i^3 }[/math][4] or [math]\displaystyle{ (i^3-i)/6 }[/math] (a tetrahedral number)[1] does solve the problem, a technique known as enhanced double hashing. This can be computed efficiently by forward differencing:
struct key;    /// Opaque
/// Use other data types when needed. (Must be unsigned for guaranteed wrapping.)
extern unsigned int h1(struct key const *), h2(struct key const *);

/// Calculate n hash values from two underlying hash functions
/// h1() and h2() using enhanced double hashing.  On return,
///     hashes[i] = h1(x) + i*h2(x) + (i*i*i - i)/6.
/// Takes advantage of automatic wrapping (modular reduction)
/// of unsigned types in C.
void ext_dbl_hash(struct key const *x, unsigned int hashes[], unsigned int n)
{
    unsigned int a = h1(x), b = h2(x), i;

    for (i = 0; i < n; i++) {
        hashes[i] = a;
        b += i;     // Add linear difference to get quadratic
                    // (i++ adds constant difference to get linear)
        a += b;     // Add quadratic difference to get cubic
    }
}
In addition to rectifying the collision problem, enhanced double hashing also removes double-hashing's numerical restrictions on [math]\displaystyle{ h_2(x) }[/math]'s properties, allowing a hash function similar in property to (but still independent of) [math]\displaystyle{ h_1 }[/math] to be used.[1]
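As a usage sketch only, the values produced by ext_dbl_hash() above can be reduced modulo the filter size to obtain the bit indexes of a Bloom filter; the filter size and number of hash values below are arbitrary assumptions, and bloom_add() is a hypothetical helper rather than part of any library.

#define BLOOM_BITS 1024u   /* m: arbitrary example filter size in bits */
#define BLOOM_K    7u      /* k: arbitrary example number of hash values */

/* Set the k Bloom-filter bits for key x, reusing struct key and
   ext_dbl_hash() from the listing above. */
void bloom_add(unsigned char filter[BLOOM_BITS / 8], struct key const *x)
{
    unsigned int hashes[BLOOM_K];
    unsigned int i;

    ext_dbl_hash(x, hashes, BLOOM_K);
    for (i = 0; i < BLOOM_K; i++) {
        unsigned int bit = hashes[i] % BLOOM_BITS;
        filter[bit / 8] |= (unsigned char)(1u << (bit % 8));
    }
}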
References
- ↑ 1.0 1.1 1.2 Dillinger, Peter C.; Manolios, Panagiotis (November 15–17, 2004). "Bloom Filters in Probabilistic Verification". 5th International Conference on Formal Methods in Computer Aided Design (FMCAD 2004). Austin, Texas. doi:10.1007/978-3-540-30494-4_26. https://www.khoury.northeastern.edu/~pete/pub/bloom-filters-verification.pdf.
- ↑ Bradford, Phillip G.; Katehakis, Michael N. (April 2007), "A Probabilistic Study on Combinatorial Expanders and Hashing", SIAM Journal on Computing 37 (1): 83–111, doi:10.1137/S009753970444630X, http://phillipbradford.com/papers/AProbStudyExpandersAndHashing.pdf.
- ↑ Dillinger, Peter C. (December 2010). Adaptive Approximate State Storage (PDF) (PhD thesis). Northeastern University. pp. 93–112.
- ↑ 4.0 4.1 4.2 "Less Hashing, Same Performance: Building a Better Bloom Filter". Random Structures and Algorithms 33 (2): 187–218. September 2008. doi:10.1002/rsa.20208. https://www.eecs.harvard.edu/~michaelm/postscripts/rsa2008.pdf.
- ↑ Alternatively defined with the triangular number, as in Dillinger 2004.
External links
- How Caching Affects Hashing by Gregory L. Heileman and Wenbin Luo 2005.
- Hash Table Animation
- klib, a C library that includes double hashing functionality.
Original source: https://en.wikipedia.org/wiki/Double hashing.