Funnelsort
Funnelsort is a comparison-based sorting algorithm. It is similar to mergesort, but it is a cache-oblivious algorithm, designed for settings where the number of elements to sort is too large to fit in the cache in which operations are performed. It was introduced by Matteo Frigo, Charles Leiserson, Harald Prokop, and Sridhar Ramachandran in 1999 in the context of the cache-oblivious model.[1][2]
Mathematical properties
In the external memory model, the number of memory transfers funnelsort needs to sort [math]\displaystyle{ N }[/math] items on a machine with a cache of size [math]\displaystyle{ Z }[/math] and cache lines of length [math]\displaystyle{ L }[/math] is [math]\displaystyle{ O \left(\tfrac{N}{L} \log_{Z} N \right) }[/math], under the tall-cache assumption that [math]\displaystyle{ Z = \Omega(L^2) }[/math]. This number of memory transfers has been shown to be asymptotically optimal for comparison sorts. Funnelsort also achieves the asymptotically optimal running time of [math]\displaystyle{ \Theta(N \log N) }[/math].
Algorithm
Basic overview
Funnelsort operates on a contiguous array of [math]\displaystyle{ N }[/math] elements. To sort the elements, it performs the following:
- Split the input into [math]\displaystyle{ N^{1/3} }[/math] arrays of size [math]\displaystyle{ N^{2/3} }[/math], and sort the arrays recursively.
- Merge the [math]\displaystyle{ N^{1/3} }[/math] sorted sequences using an [math]\displaystyle{ N^{1/3} }[/math]-merger, a process described in more detail below.
Funnelsort is similar to merge sort in that some number of subarrays are recursively sorted, after which a merging step combines the subarrays into one sorted array. Merging is performed by a device called a k-merger, which is described in the section below.
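The recursion can be summarized in a short sketch (Python, used here purely for illustration; the function name is hypothetical and not from the original papers). heapq.merge stands in for the cache-oblivious k-merger, so the sketch reproduces the output of funnelsort but not its cache behaviour.

```python
import heapq
import random


def funnelsort(a):
    """Sort a list using the funnelsort recursion pattern.

    The input is split into about N^(1/3) runs of length about N^(2/3),
    each run is sorted recursively, and the sorted runs are merged.
    heapq.merge is a stand-in for the cache-oblivious k-merger."""
    n = len(a)
    if n <= 4:                                   # small base case
        return sorted(a)
    run_count = max(2, round(n ** (1 / 3)))      # about N^(1/3) runs
    run_length = -(-n // run_count)              # ceil(n / run_count), about N^(2/3)
    runs = [funnelsort(a[i:i + run_length]) for i in range(0, n, run_length)]
    return list(heapq.merge(*runs))              # stand-in for the N^(1/3)-merger


data = [random.randrange(1000) for _ in range(500)]
assert funnelsort(data) == sorted(data)
```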
k-mergers
A k-merger takes [math]\displaystyle{ k }[/math] sorted sequences. Upon one invocation of a k-merger, it outputs the first [math]\displaystyle{ k^3 }[/math] elements of the sorted sequence obtained by merging the k input sequences.
At the top level, funnelsort uses an [math]\displaystyle{ N^{1/3} }[/math]-merger on [math]\displaystyle{ N^{1/3} }[/math] sequences of length [math]\displaystyle{ N^{2/3} }[/math], and invokes this merger once.
The k-merger is built recursively out of [math]\displaystyle{ \sqrt{k} }[/math]-mergers. It consists of [math]\displaystyle{ \sqrt{k} }[/math] input [math]\displaystyle{ \sqrt{k} }[/math]-mergers [math]\displaystyle{ I_1, I_2, \ldots, I_{\sqrt{k}} }[/math], and a single output [math]\displaystyle{ \sqrt{k} }[/math]-merger [math]\displaystyle{ O }[/math]. The k inputs are separated into [math]\displaystyle{ \sqrt{k} }[/math] sets of [math]\displaystyle{ \sqrt{k} }[/math] inputs each. Each of these sets is an input to one of the input mergers. The output of each input merger is connected to a buffer, a FIFO queue that can hold [math]\displaystyle{ 2k^{3/2} }[/math] elements. The buffers are implemented as circular queues. The outputs of the [math]\displaystyle{ \sqrt{k} }[/math] buffers are connected to the inputs of the output merger [math]\displaystyle{ O }[/math]. Finally, the output of [math]\displaystyle{ O }[/math] is the output of the entire k-merger.
In this construction, each input merger outputs only [math]\displaystyle{ k^{3/2} }[/math] items per invocation, while the buffer it writes to has room for twice that many. As a result, an input merger needs to be invoked only when its buffer is running low on items, and each invocation outputs a large batch of items at once (namely, [math]\displaystyle{ k^{3/2} }[/math] of them).
A k-merger works recursively in the following way. To output [math]\displaystyle{ k^3 }[/math] elements, it recursively invokes its output merger [math]\displaystyle{ O }[/math] a total of [math]\displaystyle{ k^{3/2} }[/math] times. Before each call to [math]\displaystyle{ O }[/math], however, it checks all of its buffers and fills each one that is less than half full. To fill the i-th buffer, it recursively invokes the corresponding input merger [math]\displaystyle{ I_i }[/math] once; if this cannot be done (because that merger has run out of inputs), the step is skipped. Since such a call outputs [math]\displaystyle{ k^{3/2} }[/math] elements, the buffer then contains at least [math]\displaystyle{ k^{3/2} }[/math] elements. At the end of all these operations, the k-merger has output the first [math]\displaystyle{ k^3 }[/math] elements of the merged sequence, in sorted order.
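The recursive structure and the buffer-refill discipline can be sketched as follows (Python, hypothetical class name KMerger; none of this code is from the original papers). To stay short, the sketch uses unbounded deques instead of fixed-size circular buffers, replaces the output merger [math]\displaystyle{ O }[/math] with a direct minimum over the buffer heads, and checks the buffers before every output element rather than before each invocation of [math]\displaystyle{ O }[/math]; it therefore reproduces the merge order and the buffering idea, but not the exact call pattern or cache behaviour.

```python
import heapq
import math
import random
from collections import deque
from itertools import islice


class KMerger:
    """Illustrative k-merger: `sources` is a list of k sorted iterators,
    and each call to invoke() returns (up to) the next k**3 elements of
    their merge, in sorted order."""

    def __init__(self, sources):
        self.k = max(len(sources), 1)
        self.batch = self.k ** 3                    # output size per invocation
        if self.k <= 2:                             # base case: merge directly
            self.inputs = None
            self.merged = heapq.merge(*sources)
            return
        r = math.ceil(math.sqrt(self.k))            # about sqrt(k)
        self.inputs = [KMerger(sources[i:i + r])    # input sqrt(k)-mergers I_i
                       for i in range(0, self.k, r)]
        self.buffers = [deque() for _ in self.inputs]    # FIFO buffers
        self.capacity = 2 * math.ceil(self.k ** 1.5)     # 2 * k^(3/2)
        self.exhausted = [False] * len(self.inputs)

    def invoke(self):
        if self.inputs is None:                     # base case
            return list(islice(self.merged, self.batch))
        out = []
        while len(out) < self.batch:
            # Fill every buffer that is less than half full by invoking the
            # corresponding input merger once (unless it has run out of input).
            for i, (buf, child) in enumerate(zip(self.buffers, self.inputs)):
                if not self.exhausted[i] and len(buf) < self.capacity // 2:
                    chunk = child.invoke()          # about k^(3/2) elements
                    if chunk:
                        buf.extend(chunk)
                    else:
                        self.exhausted[i] = True
            # One output step: emit the smallest buffer head.  (A real
            # k-merger would instead invoke a sqrt(k)-merger O on the buffers.)
            live = [(buf[0], i) for i, buf in enumerate(self.buffers) if buf]
            if not live:
                break                               # every input is exhausted
            _, i = min(live)
            out.append(self.buffers[i].popleft())
        return out


runs = [sorted(random.sample(range(10000), 50)) for _ in range(9)]
merger = KMerger([iter(r) for r in runs])           # a 9-merger; 9**3 = 729 >= 450
assert merger.invoke() == sorted(x for r in runs for x in r)
```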
Analysis
Most of the analysis of this algorithm revolves around analyzing the space and cache miss complexity of the k-merger.
The first important bound is that a k-merger fits in [math]\displaystyle{ O(k^2) }[/math] space. To see this, let [math]\displaystyle{ S(k) }[/math] denote the space needed for a k-merger. The [math]\displaystyle{ \sqrt{k} }[/math] buffers of size [math]\displaystyle{ 2k^{3/2} }[/math] take [math]\displaystyle{ O(k^2) }[/math] space, and the [math]\displaystyle{ \sqrt{k} + 1 }[/math] smaller mergers take [math]\displaystyle{ (\sqrt{k} + 1) S(\sqrt{k}) }[/math] space. Thus, the space satisfies the recurrence [math]\displaystyle{ S(k) = (\sqrt{k} + 1) S(\sqrt{k}) + O(k^2) }[/math], which has solution [math]\displaystyle{ S(k) = O(k^2) }[/math].
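As an illustrative check (not part of the original analysis), the recurrence can be evaluated numerically with an arbitrary base case and constant; the ratio [math]\displaystyle{ S(k)/k^2 }[/math] stays bounded, consistent with the claimed solution.

```python
def merger_space(k, c=1.0):
    """Evaluate S(k) = (sqrt(k) + 1) * S(sqrt(k)) + c*k^2 numerically,
    with S(k) = k for k <= 4 as an arbitrary base case."""
    if k <= 4:
        return k
    return (k ** 0.5 + 1) * merger_space(k ** 0.5, c) + c * k * k


for i in range(2, 7):
    k = 2.0 ** (2 ** i)                  # k = 16, 256, 65536, 2**32, 2**64
    print(f"k = 2**{2 ** i}: S(k)/k^2 = {merger_space(k) / k ** 2:.4f}")
```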
It follows that there is a positive constant [math]\displaystyle{ \alpha }[/math] such that a k-merger with [math]\displaystyle{ k \le \alpha \sqrt{Z} }[/math] fits entirely in cache, meaning that operating it incurs no cache misses beyond those needed to read its inputs and write its output.
Letting [math]\displaystyle{ Q_M(k) }[/math] denote the number of cache misses incurred by a call to a k-merger, one can show that [math]\displaystyle{ Q_M(k) = O((k^3 \log_Z k)/L) }[/math]. The proof is by induction, with [math]\displaystyle{ k \le \alpha \sqrt{Z} }[/math] as the base case. For larger k, one can bound the number of times a [math]\displaystyle{ \sqrt{k} }[/math]-merger is invoked: the output merger is called exactly [math]\displaystyle{ k^{3/2} }[/math] times, and the total number of calls to input mergers is at most [math]\displaystyle{ k^{3/2} + 2\sqrt{k} }[/math], for a total of [math]\displaystyle{ 2 k^{3/2} + 2 \sqrt{k} }[/math] recursive calls. In addition, before each call the algorithm checks every buffer to see if it needs to be filled. This is done on [math]\displaystyle{ \sqrt{k} }[/math] buffers in each of [math]\displaystyle{ k^{3/2} }[/math] rounds, leading to at most [math]\displaystyle{ k^2 }[/math] cache misses for all the checks.
This leads to the recurrence [math]\displaystyle{ Q_M(k) \le (2k^{3/2} + 2 \sqrt{k}) Q_M(\sqrt{k}) + k^2 }[/math], which can be shown to have the solution given above.
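A sketch of the induction step (with constants handled loosely; the full proof in the original paper is more careful): substituting the hypothesis [math]\displaystyle{ Q_M(\sqrt{k}) \le c\, (\sqrt{k})^3 \log_Z (\sqrt{k}) / L = \tfrac{c}{2} k^{3/2} \log_Z k / L }[/math] into the dominant term of the recurrence gives [math]\displaystyle{ 2k^{3/2} \cdot \tfrac{c}{2} k^{3/2} \log_Z k / L = c\, k^3 \log_Z k / L, }[/math] which matches the claimed bound, while the remaining terms [math]\displaystyle{ 2\sqrt{k}\, Q_M(\sqrt{k}) + k^2 }[/math] are of lower order in [math]\displaystyle{ k }[/math] and are absorbed by a slightly stronger induction hypothesis.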
Finally, the total number of cache misses [math]\displaystyle{ Q(N) }[/math] for the entire sort can be analyzed. It satisfies the recurrence [math]\displaystyle{ Q(N) = N^{1/3} Q(N^{2/3}) + Q_M(N^{1/3}) }[/math], which can be shown to have solution [math]\displaystyle{ Q(N) = O((N/L) \log_Z N) }[/math].
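The intuition behind this solution can be sketched as follows (constants and base cases omitted). The merge cost at the top level is [math]\displaystyle{ Q_M(N^{1/3}) = O\left(\tfrac{N \log_Z N^{1/3}}{L}\right) = O\left(\tfrac{N \log_Z N}{3L}\right) }[/math]. At depth [math]\displaystyle{ i }[/math] of the recursion the subproblems still have total size [math]\displaystyle{ N }[/math] but individual size [math]\displaystyle{ N^{(2/3)^i} }[/math], so their combined merge cost is [math]\displaystyle{ O\left(\tfrac{N}{3L} \left(\tfrac{2}{3}\right)^i \log_Z N\right) }[/math]. Summing this geometric series over all levels gives [math]\displaystyle{ O\left(\tfrac{N}{L} \log_Z N\right) }[/math].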
Lazy funnelsort
Lazy funnelsort is a modification of funnelsort, introduced by Gerth Stølting Brodal and Rolf Fagerberg in 2002.[3] The modification is that when a merger is invoked, it does not have to fill each of its buffers; instead, it lazily fills a buffer only when that buffer is empty. This modification has the same asymptotic runtime and number of memory transfers as the original funnelsort, but has applications in cache-oblivious algorithms for problems in computational geometry, in a method known as distribution sweeping.
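A minimal sketch of the lazy refill rule (Python, hypothetical helper name, not from the papers; a real lazy funnel applies this rule at every buffer of the merger tree):

```python
from collections import deque
from itertools import islice


def lazy_pull(buffer, child, batch):
    """Refill the buffer from `child` only when it has run completely empty,
    rather than topping up every buffer that is less than half full before
    each output step.  `child` is any iterator producing a sorted stream."""
    if not buffer:                               # the lazy rule
        buffer.extend(islice(child, batch))
    return buffer.popleft() if buffer else None  # None signals exhaustion


# Toy usage: pull a few elements through a buffer fed by a sorted stream.
stream = iter(range(0, 20, 2))
buf = deque()
print([lazy_pull(buf, stream, batch=4) for _ in range(5)])   # [0, 2, 4, 6, 8]
```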
References
- ↑ Frigo, M.; Leiserson, C. E.; Prokop, H.; Ramachandran, S. (1999). "Cache-oblivious algorithms". Proceedings of the 40th IEEE Symposium on Foundations of Computer Science (FOCS 99). pp. 285–297.
- ↑ Prokop, Harald (1999). Cache-Oblivious Algorithms. Master's thesis, MIT.
- ↑ Brodal, Gerth Stølting; Fagerberg, Rolf (25 June 2002). "Cache Oblivious Distribution Sweeping". Automata, Languages and Programming. Lecture Notes in Computer Science. 2380. Springer. pp. 426–438. doi:10.1007/3-540-45465-9_37. ISBN 978-3-540-43864-9.
Original source: https://en.wikipedia.org/wiki/Funnelsort.