Hypercube (communication pattern)
A [math]\displaystyle{ d }[/math]-dimensional hypercube is a network topology for parallel computers with [math]\displaystyle{ 2^d }[/math] processing elements. The topology allows for an efficient implementation of some basic communication primitives such as Broadcast, All-Reduce, and Prefix sum.[1] The processing elements are numbered [math]\displaystyle{ 0 }[/math] through [math]\displaystyle{ 2^d - 1 }[/math]. Each processing element is adjacent to every processing element whose number differs from its own in exactly one bit. The algorithms described on this page utilize this structure efficiently.
Algorithm outline
Most of the communication primitives presented in this article share a common template.[2] Initially, each processing element possesses one message that must reach every other processing element during the course of the algorithm. The following pseudo code sketches the communication steps necessary. Hereby, Initialization, Operation, and Output are placeholders that depend on the given communication primitive (see next section).
Input: message [math]\displaystyle{ m }[/math].
Output: depends on Initialization, Operation and Output.

Initialization
[math]\displaystyle{ s := m }[/math]
for [math]\displaystyle{ 0 \leq k \lt d }[/math] do
    [math]\displaystyle{ y := i \text{ XOR } 2^k }[/math]
    Send [math]\displaystyle{ s }[/math] to [math]\displaystyle{ y }[/math]
    Receive [math]\displaystyle{ m }[/math] from [math]\displaystyle{ y }[/math]
    Operation[math]\displaystyle{ (s, m) }[/math]
endfor
Output
Each processing element iterates over its neighbors (the expression [math]\displaystyle{ i \text{ XOR } 2^k }[/math] negates the [math]\displaystyle{ k }[/math]-th bit in [math]\displaystyle{ i }[/math]'s binary representation, thereby yielding the number of the neighbor along dimension [math]\displaystyle{ k }[/math]). In each iteration, each processing element exchanges a message with this neighbor and afterwards processes the received message. The processing operation depends on the communication primitive.
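As a sketch, the template can be simulated sequentially in Python. The function name and the lockstep modeling of the Send/Receive pair (reading the partner's value from a snapshot of the previous round) are illustrative assumptions, not part of the source.

```python
# Minimal sequential simulation of the hypercube template, assuming
# 2**d processing elements that all exchange in lockstep per round.

def hypercube_template(messages, operation):
    """Run the generic template; operation(s, m) combines the local
    value s with the message m received from the current neighbor."""
    d = (len(messages) - 1).bit_length()        # number of dimensions
    assert len(messages) == 1 << d, "needs 2**d processing elements"
    s = list(messages)                          # s[i]: local value of PE i
    for k in range(d):
        prev = list(s)                          # snapshot before the exchange
        for i in range(len(s)):
            y = i ^ (1 << k)                    # neighbor: negate bit k
            s[i] = operation(prev[i], prev[y])  # process received message
    return s

# With addition as the operation, every PE ends up with the global sum:
print(hypercube_template([1, 2, 3, 4], lambda s, m: s + m))  # [10, 10, 10, 10]
```

With concatenation instead of addition, the same loop structure yields an all-gather (see below).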
Communication primitives
Prefix sum
In the beginning of a prefix sum operation, each processing element [math]\displaystyle{ i }[/math] owns a message [math]\displaystyle{ m_i }[/math]. The goal is to compute [math]\displaystyle{ \bigoplus_{0 \le j \le i} m_j }[/math], where [math]\displaystyle{ \oplus }[/math] is an associative operation. The following pseudo code describes the algorithm.
Input: message [math]\displaystyle{ m_i }[/math] of processor [math]\displaystyle{ i }[/math].
Output: prefix sum [math]\displaystyle{ \bigoplus_{0 \le j \le i} m_j }[/math] of processor [math]\displaystyle{ i }[/math].

[math]\displaystyle{ x := m_i }[/math]
[math]\displaystyle{ \sigma := m_i }[/math]
for [math]\displaystyle{ 0 \le k \le d - 1 }[/math] do
    [math]\displaystyle{ y := i \text{ XOR } 2^k }[/math]
    Send [math]\displaystyle{ \sigma }[/math] to [math]\displaystyle{ y }[/math]
    Receive [math]\displaystyle{ m }[/math] from [math]\displaystyle{ y }[/math]
    [math]\displaystyle{ \sigma := \sigma \oplus m }[/math]
    if bit [math]\displaystyle{ k }[/math] in [math]\displaystyle{ i }[/math] is set then [math]\displaystyle{ x := x \oplus m }[/math]
endfor
The algorithm works as follows. Observe that a hypercube of dimension [math]\displaystyle{ d }[/math] can be split into two hypercubes of dimension [math]\displaystyle{ d - 1 }[/math]. Refer to the sub cube containing nodes with a leading 0 as the 0-sub cube and to the sub cube consisting of nodes with a leading 1 as the 1-sub cube. Once both sub cubes have calculated the prefix sum, the sum over all elements in the 0-sub cube has to be added to every element in the 1-sub cube, since every processing element in the 0-sub cube has a lower rank than every processing element in the 1-sub cube. The pseudo code stores the prefix sum in the variable [math]\displaystyle{ x }[/math] and the sum over all nodes in a sub cube in the variable [math]\displaystyle{ \sigma }[/math]. This makes it possible for all nodes in the 1-sub cube to receive the sum over the 0-sub cube in every step.
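The pseudo code above can be sketched as a sequential Python simulation (the function name and the snapshot-based exchange are illustrative assumptions):

```python
# Sequential sketch of the hypercube prefix sum: sigma[i] holds the sum
# over i's current sub cube, x[i] the prefix sum over PEs 0..i.

def hypercube_prefix_sum(m, op=lambda a, b: a + b):
    d = (len(m) - 1).bit_length()
    assert len(m) == 1 << d, "needs 2**d processing elements"
    x = list(m)            # x[i]: prefix sum over PEs 0..i
    sigma = list(m)        # sigma[i]: sum over i's current sub cube
    for k in range(d):
        prev = list(sigma)
        for i in range(len(m)):
            y = i ^ (1 << k)            # neighbor along dimension k
            received = prev[y]
            sigma[i] = op(sigma[i], received)
            if i & (1 << k):            # bit k set: i is in the 1-sub cube,
                x[i] = op(x[i], received)  # so add the 0-sub cube's sum
    return x

print(hypercube_prefix_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
```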
This results in a factor of [math]\displaystyle{ \log p }[/math] for [math]\displaystyle{ T_\text{start} }[/math] and a factor of [math]\displaystyle{ n\log p }[/math] for [math]\displaystyle{ T_\text{byte} }[/math]: [math]\displaystyle{ T(n,p) = (T_\text{start} + nT_\text{byte})\log p }[/math].
All-gather / all-reduce
All-gather operations start with each processing element having a message [math]\displaystyle{ m_i }[/math]. The goal of the operation is for each processing element to know the messages of all other processing elements, i.e. [math]\displaystyle{ x := m_0 \cdot m_1 \cdots m_{p-1} }[/math] where [math]\displaystyle{ \cdot }[/math] is concatenation. The operation can be implemented following the algorithm template.
Input: message [math]\displaystyle{ m_i }[/math] at processing unit [math]\displaystyle{ i }[/math].
Output: all messages [math]\displaystyle{ m_0 \cdot m_1 \cdots m_{p-1} }[/math].

[math]\displaystyle{ x := m_i }[/math]
for [math]\displaystyle{ 0 \le k \lt d }[/math] do
    [math]\displaystyle{ y := i \text{ XOR } 2^k }[/math]
    Send [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math]
    Receive [math]\displaystyle{ x' }[/math] from [math]\displaystyle{ y }[/math]
    [math]\displaystyle{ x := x \cdot x' }[/math]
endfor
With each iteration, the transferred message doubles in length. This leads to a runtime of [math]\displaystyle{ T(n,p) \approx \sum_{j=0}^{d-1}(T_\text{start} + n \cdot 2^jT_\text{byte})= \log(p) T_\text{start} + (p-1)nT_\text{byte} }[/math].
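A sequential sketch of the all-gather, modeling concatenation with Python list `+` (an illustrative choice); note how the gathered list doubles in length each round, matching the runtime analysis:

```python
# Sequential sketch of hypercube all-gather: after d rounds every PE
# holds all p messages (the per-PE ordering may differ).

def hypercube_allgather(messages):
    d = (len(messages) - 1).bit_length()
    assert len(messages) == 1 << d, "needs 2**d processing elements"
    x = [[m] for m in messages]       # x[i]: messages gathered at PE i
    for k in range(d):
        prev = [list(v) for v in x]   # snapshot before the exchange
        for i in range(len(x)):
            y = i ^ (1 << k)          # exchange with neighbor in dimension k
            x[i] = prev[i] + prev[y]  # concatenate; length doubles
    return x

result = hypercube_allgather(['a', 'b', 'c', 'd'])
# every PE now holds all four messages
```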
The same principle can be applied to the All-Reduce operation: instead of concatenating the messages, a reduction operation is applied to the two messages. The result is a Reduce operation whose result is known to all processing units. Compared to a normal Reduce operation followed by a Broadcast, All-Reduce in hypercubes reduces the number of communication steps.
All-to-all
Here every processing element has a unique message for all other processing elements.
Input: message [math]\displaystyle{ m_{ij} }[/math] at processing element [math]\displaystyle{ i }[/math] to processing element [math]\displaystyle{ j }[/math].

for [math]\displaystyle{ d \gt k \geq 0 }[/math] do
    Receive from processing element [math]\displaystyle{ i \text{ XOR } 2^k }[/math]: all messages for my [math]\displaystyle{ k }[/math]-dimensional sub cube
    Send to processing element [math]\displaystyle{ i \text{ XOR } 2^k }[/math]: all messages for its [math]\displaystyle{ k }[/math]-dimensional sub cube
endfor
With each iteration, a message comes one dimension closer to its destination, if it has not arrived yet. Hence, all messages have reached their targets after at most [math]\displaystyle{ d = \log{p} }[/math] steps. In every step, [math]\displaystyle{ p / 2 }[/math] messages are sent: in the first iteration, half of the messages are not destined for the sender's own sub cube. In every following step, the sub cube is only half the size of the previous one, but in the previous step exactly the same number of messages arrived from another processing element.
This results in a run-time of [math]\displaystyle{ T(n,p) \approx \log{p} (T_\text{start} + \frac{p}{2}nT_\text{byte}) }[/math].
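The routing can be sketched sequentially as follows; representing in-flight messages as (source, destination, payload) triples and iterating the dimensions from highest to lowest are illustrative choices consistent with the pseudo code:

```python
# Sequential sketch of hypercube all-to-all personalized communication.
# In round k (highest dimension first), each PE keeps the messages whose
# destination lies in its own k-dimensional sub cube and hands the rest
# to its neighbor in dimension k.

def hypercube_all_to_all(m):
    """m[i][j]: payload from PE i to PE j; returns out with out[j][i] == m[i][j]."""
    p = len(m)
    d = (p - 1).bit_length()
    assert p == 1 << d, "needs 2**d processing elements"
    # buf[i]: list of (source, destination, payload) currently held by PE i
    buf = [[(i, j, m[i][j]) for j in range(p)] for i in range(p)]
    for k in reversed(range(d)):
        prev = [list(b) for b in buf]
        for i in range(p):
            y = i ^ (1 << k)
            # keep messages whose destination shares bit k with i,
            # receive those the neighbor held for i's sub cube
            keep = [t for t in prev[i] if (t[1] >> k) & 1 == (i >> k) & 1]
            recv = [t for t in prev[y] if (t[1] >> k) & 1 == (i >> k) & 1]
            buf[i] = keep + recv
    out = [[None] * p for _ in range(p)]
    for i in range(p):
        for src, dst, payload in buf[i]:
            out[i][src] = payload     # every message has reached dst == i
    return out
```

After the loop, bits [math]\displaystyle{ d-1 }[/math] down to [math]\displaystyle{ k }[/math] of each message's current location match its destination, which is exactly the invariant described above.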
ESBT-broadcast
The ESBT-broadcast (Edge-disjoint Spanning Binomial Tree) algorithm[3] is a pipelined broadcast algorithm with optimal runtime for clusters with hypercube network topology. The algorithm embeds [math]\displaystyle{ d }[/math] edge-disjoint binomial trees in the hypercube, such that each neighbor of processing element [math]\displaystyle{ 0 }[/math] is the root of a spanning binomial tree on [math]\displaystyle{ 2^d - 1 }[/math] nodes. To broadcast a message, the source node splits its message into [math]\displaystyle{ k }[/math] chunks of equal size and cyclically sends them to the roots of the binomial trees. Upon receiving a chunk, the binomial trees broadcast it.
Runtime
In each step, the source node sends one of its [math]\displaystyle{ k }[/math] chunks to a binomial tree. Broadcasting the chunk within the binomial tree takes [math]\displaystyle{ d }[/math] steps. Thus, it takes [math]\displaystyle{ k }[/math] steps to distribute all chunks and additionally [math]\displaystyle{ d }[/math] steps until the last binomial tree broadcast has finished, resulting in [math]\displaystyle{ k + d }[/math] steps overall. Therefore, the runtime for a message of length [math]\displaystyle{ n }[/math] is [math]\displaystyle{ T(n, p, k) = \left(\frac{n}{k} T_\text{byte} + T_\text{start} \right) (k + d) }[/math]. With the optimal chunk size [math]\displaystyle{ k^* = \sqrt{\frac{nd \cdot T_\text{byte}}{T_\text{start}}} }[/math], the optimal runtime of the algorithm is [math]\displaystyle{ T^*(n, p) = n \cdot T_\text{byte} + \log(p) \cdot T_\text{start} + \sqrt{n \log(p) \cdot T_\text{start} \cdot T_\text{byte}} }[/math].
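The trade-off behind the optimal chunk count can be checked numerically; the cost constants below are hypothetical, chosen only to illustrate that [math]\displaystyle{ k^* }[/math] minimizes the model:

```python
import math

# Numeric check of the ESBT runtime model T(n, p, k), assuming
# illustrative (hypothetical) values for T_start and T_byte.
T_start, T_byte = 100.0, 1.0     # startup latency, per-byte cost
n, d = 4096, 10                  # message length, hypercube dimension

def T(k):
    # (n/k chunks of cost n/k * T_byte + T_start) over k + d steps
    return (n / k * T_byte + T_start) * (k + d)

k_opt = math.sqrt(n * d * T_byte / T_start)   # optimal chunk count k*
best = min(range(1, 200), key=T)              # brute-force integer optimum
print(round(k_opt), best)                     # 20 20
```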
Construction of the binomial trees
This section describes how to construct the binomial trees systematically. First, construct a single binomial spanning tree on [math]\displaystyle{ 2^d }[/math] nodes as follows. Number the nodes from [math]\displaystyle{ 0 }[/math] to [math]\displaystyle{ 2^d - 1 }[/math] and consider their binary representation. The children of each node are then obtained by negating a single leading zero. This results in a single binomial spanning tree. To obtain [math]\displaystyle{ d }[/math] edge-disjoint copies of the tree, translate and rotate the nodes: for the [math]\displaystyle{ k }[/math]-th copy of the tree, apply a XOR operation with [math]\displaystyle{ 2^k }[/math] to each node. Subsequently, right-rotate all nodes by [math]\displaystyle{ k }[/math] digits. The resulting binomial trees are edge-disjoint and therefore fulfill the requirements of the ESBT broadcasting algorithm.
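The first step, constructing the single binomial spanning tree, can be sketched as follows (only this step is shown; the function name is an illustrative choice):

```python
# Sketch of the single binomial spanning tree described above: the
# children of a node are obtained by negating one of its leading zeroes;
# equivalently, the parent of v > 0 is v with its highest set bit cleared.

def binomial_tree(d):
    """Return children[v] for a binomial spanning tree on 2**d nodes."""
    children = {v: [] for v in range(1 << d)}
    for v in range(1 << d):
        h = v.bit_length()                       # leading zeroes sit at bits h..d-1
        for bit in range(h, d):
            children[v].append(v | (1 << bit))   # negate one leading zero
    return children

tree = binomial_tree(3)
# node 0 (000) has children 1, 2, 4; node 3 (011) has the single child 7 (111)
```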
References
- ↑ Grama, A. (2003). Introduction to Parallel Computing (2nd ed.). Addison Wesley. ISBN 978-0201648652.
- ↑ Foster, I. (1995). Designing and Building Parallel Programs: Concepts and Tools for Parallel Software Engineering. Addison Wesley. ISBN 0201575949.
- ↑ Johnsson, S.L.; Ho, C.-T. (1989). "Optimum broadcasting and personalized communication in hypercubes". IEEE Transactions on Computers 38 (9): 1249–1268. doi:10.1109/12.29465. ISSN 0018-9340.
Original source: https://en.wikipedia.org/wiki/Hypercube (communication pattern).