Collective operation

Collective operations are building blocks for interaction patterns that are often used in SPMD algorithms in the parallel programming context. Hence, there is an interest in efficient realizations of these operations. A realization of the collective operations is provided by the Message Passing Interface[1] (MPI).

Definitions

In all asymptotic runtime functions, we denote the latency by [math]\displaystyle{ \alpha }[/math], the communication cost per word by [math]\displaystyle{ \beta }[/math], the number of processing units by [math]\displaystyle{ p }[/math] and the input size per node by [math]\displaystyle{ n }[/math]. In cases where we have initial messages on more than one node, we assume that all local messages are of the same size. To address individual processing units we use [math]\displaystyle{ p_i \in \{ p_0, p_1, \dots, p_{p - 1} \} }[/math].

If we do not have an equal distribution, i.e. node [math]\displaystyle{ p_i }[/math] has a message of size [math]\displaystyle{ n_i }[/math], we get an upper bound for the runtime by setting [math]\displaystyle{ n = \max(n_0, n_1, \dots, n_{p-1}) }[/math].

A distributed memory model is assumed. The concepts are similar for the shared memory model. However, shared memory systems can provide hardware support for some operations like broadcast (§ Broadcast), which allows for convenient concurrent reads.[2] Thus, new algorithmic possibilities can become available.

Broadcast

Main page: Broadcast (parallel pattern)
Information flow of Broadcast operation performed on three nodes.

The broadcast pattern[3] is used to distribute data from one processing unit to all processing units, which is often needed in SPMD parallel programs to dispense input or global values. Broadcast can be interpreted as an inverse version of the reduce pattern (§ Reduce). Initially only root [math]\displaystyle{ r }[/math] with [math]\displaystyle{ id }[/math] [math]\displaystyle{ 0 }[/math] stores message [math]\displaystyle{ m }[/math]. During broadcast [math]\displaystyle{ m }[/math] is sent to the remaining processing units, so that eventually [math]\displaystyle{ m }[/math] is available to all processing units.

Since an implementation by means of a sequential for-loop with [math]\displaystyle{ p-1 }[/math] iterations becomes a bottleneck, divide-and-conquer approaches are common. One possibility is to utilize a binomial tree structure with the requirement that [math]\displaystyle{ p }[/math] has to be a power of two. When a processing unit is responsible for sending [math]\displaystyle{ m }[/math] to processing units [math]\displaystyle{ i..j }[/math], it sends [math]\displaystyle{ m }[/math] to processing unit [math]\displaystyle{ \left \lceil (i+j)/2 \right \rceil }[/math] and delegates responsibility for the processing units [math]\displaystyle{ \left \lceil (i+j)/2 \right \rceil .. j }[/math] to it, while its own responsibility is cut down to [math]\displaystyle{ i..\left \lceil (i+j)/2 \right \rceil-1 }[/math].
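A minimal C sketch of this range-splitting scheme, written with MPI point-to-point operations, is shown below. The helper tree_bcast and the example values are illustrative only; in practice the pattern is available directly as MPI_Bcast.

```c
/* Illustrative sketch of the binomial-tree broadcast described above.
   tree_bcast is not an MPI routine; real code would call MPI_Bcast. */
#include <mpi.h>
#include <stdio.h>

/* Rank `lo` holds the message and is responsible for ranks lo..hi.
   It sends to mid = ceil((lo+hi)/2), hands over mid..hi, keeps lo..mid-1. */
static void tree_bcast(int *buf, int count, int lo, int hi, int rank, MPI_Comm comm) {
    while (lo < hi) {
        int mid = (lo + hi + 1) / 2;           /* ceil((lo+hi)/2) */
        if (rank == lo) {
            MPI_Send(buf, count, MPI_INT, mid, 0, comm);
            hi = mid - 1;                      /* keep lo..mid-1 */
        } else if (rank == mid) {
            MPI_Recv(buf, count, MPI_INT, lo, 0, comm, MPI_STATUS_IGNORE);
            lo = mid;                          /* now responsible for mid..hi */
        } else if (rank > mid) {
            lo = mid;                          /* this rank belongs to the delegated half */
        } else {
            hi = mid - 1;                      /* this rank stays with the original sender */
        }
    }
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p, data = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    if (rank == 0) data = 42;                  /* root holds the message m */
    tree_bcast(&data, 1, 0, p - 1, rank, MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, data);
    MPI_Finalize();
    return 0;
}
```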

Binomial trees have a problem with long messages [math]\displaystyle{ m }[/math]. The receiving unit can only propagate [math]\displaystyle{ m }[/math] to other units after it has received the whole message. In the meantime, the communication network is not utilized. Therefore, pipelining on binary trees is used, where [math]\displaystyle{ m }[/math] is split into an array of [math]\displaystyle{ k }[/math] packets of size [math]\displaystyle{ \left \lceil n/k \right \rceil }[/math]. The packets are then broadcast one after another, so that data is distributed quickly in the communication network.

Pipelined broadcast on a balanced binary tree is possible in [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math].

Reduce

Main page: Reduce (parallel pattern)
Information flow of Reduce operation performed on three nodes. f is the associative operator and α is the result of the reduction.

The reduce pattern[4] is used to collect data or partial results from different processing units and to combine them into a global result by a chosen operator. Given [math]\displaystyle{ p }[/math] processing units, message [math]\displaystyle{ m_i }[/math] is on processing unit [math]\displaystyle{ p_i }[/math] initially. All [math]\displaystyle{ m_i }[/math] are aggregated by [math]\displaystyle{ \otimes }[/math] and the result is eventually stored on [math]\displaystyle{ p_0 }[/math]. The reduction operator [math]\displaystyle{ \otimes }[/math] must be at least associative. Some algorithms require a commutative operator with a neutral element. Operators like [math]\displaystyle{ sum }[/math], [math]\displaystyle{ min }[/math] and [math]\displaystyle{ max }[/math] are common.
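For illustration, a sum reduction onto processing unit 0 might be expressed with MPI as follows; the buffer names and values are chosen for the example.

```c
/* Illustrative sketch: sum reduction onto rank 0 with MPI_Reduce. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int local = rank + 1;        /* message m_i held by processing unit p_i (example value) */
    int global = 0;
    /* Combine all m_i with the associative (and here commutative) operator +,
       storing the result on root rank 0. */
    MPI_Reduce(&local, &global, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %d\n", global);
    MPI_Finalize();
    return 0;
}
```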

Implementation considerations are similar to broadcast (§ Broadcast). For pipelining on binary trees, the message must be representable as a vector of smaller objects for component-wise reduction.

Pipelined reduce on a balanced binary tree is possible in [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math].

All-Reduce

Information flow of All-Reduce operation performed on three nodes. f is the associative operator and α is the result of the reduction.

The all-reduce pattern[5] (also called allreduce) is used if the result of a reduce operation (§ Reduce) must be distributed to all processing units. Given [math]\displaystyle{ p }[/math] processing units, message [math]\displaystyle{ m_i }[/math] is on processing unit [math]\displaystyle{ p_i }[/math] initially. All [math]\displaystyle{ m_i }[/math] are aggregated by an operator [math]\displaystyle{ \otimes }[/math] and the result is eventually stored on all [math]\displaystyle{ p_i }[/math]. Analogously to the reduce operation, the operator [math]\displaystyle{ \otimes }[/math] must be at least associative.
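A corresponding MPI sketch of all-reduce, here with the max operator and illustrative values:

```c
/* Illustrative sketch: all-reduce with max; every rank ends up with the result. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int local = rank * rank;     /* message m_i on p_i (example value) */
    int global;
    /* Same combination as reduce, but the result is stored on every rank. */
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);
    printf("rank %d sees max = %d\n", rank, global);
    MPI_Finalize();
    return 0;
}
```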

All-reduce can be interpreted as a reduce operation with a subsequent broadcast (§ Broadcast). For long messages a corresponding implementation is suitable, whereas for short messages, the latency can be reduced by using a hypercube (Hypercube (communication pattern) § All-Gather/ All-Reduce) topology, if [math]\displaystyle{ p }[/math] is a power of two. All-reduce can also be implemented with a butterfly algorithm and achieve optimal latency and bandwidth.[6]

All-reduce is possible in [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math], since reduce and broadcast are possible in [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math] with pipelining on balanced binary trees. All-reduce implemented with a butterfly algorithm achieves the same asymptotic runtime.

Prefix-Sum/Scan

Main page: Prefix sum
Information flow of Prefix-Sum/Scan operation performed on three nodes. The operator + can be any associative operator.

The prefix-sum or scan operation[7] is used to collect data or partial results from different processing units and to compute intermediate results by an operator, which are stored on those processing units. It can be seen as a generalization of the reduce operation (§ Reduce). Given [math]\displaystyle{ p }[/math] processing units, message [math]\displaystyle{ m_i }[/math] is on processing unit [math]\displaystyle{ p_i }[/math]. The operator [math]\displaystyle{ \otimes }[/math] must be at least associative, while some algorithms also require a commutative operator and a neutral element. Common operators are [math]\displaystyle{ sum }[/math], [math]\displaystyle{ min }[/math] and [math]\displaystyle{ max }[/math]. Eventually processing unit [math]\displaystyle{ p_i }[/math] stores the prefix sum [math]\displaystyle{ \otimes_{i' \leq i} m_{i'} }[/math]. In the case of the so-called exclusive prefix sum, processing unit [math]\displaystyle{ p_i }[/math] stores the prefix sum [math]\displaystyle{ \otimes_{i' \lt i} m_{i'} }[/math]. Some algorithms also require the overall sum to be stored at each processing unit in addition to the prefix sums.
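In MPI, the inclusive and exclusive prefix sums correspond to MPI_Scan and MPI_Exscan; the following sketch uses illustrative values.

```c
/* Illustrative sketch: inclusive and exclusive prefix sums with MPI. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int local = rank + 1;            /* message m_i on p_i (example value) */
    int incl = 0, excl = 0;
    /* Inclusive scan: rank i receives m_0 + ... + m_i. */
    MPI_Scan(&local, &incl, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    /* Exclusive scan: rank i receives m_0 + ... + m_{i-1};
       the value on rank 0 is undefined by the standard. */
    MPI_Exscan(&local, &excl, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("rank %d: inclusive %d, exclusive %d\n", rank, incl, excl);
    MPI_Finalize();
    return 0;
}
```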

For short messages, the prefix sum can be computed with a hypercube topology if [math]\displaystyle{ p }[/math] is a power of two. For long messages, the hypercube (Hypercube (communication pattern) § Prefix sum, Prefix sum § Distributed memory: Hypercube algorithm) topology is not suitable, since all processing units are active in every step and therefore pipelining cannot be used. A binary tree topology is better suited for arbitrary [math]\displaystyle{ p }[/math] and long messages (Prefix sum § Large Message Sizes: Pipelined Binary Tree).

Prefix-sum on a binary tree can be implemented with an upward and a downward phase. In the upward phase reduction is performed, while the downward phase is similar to broadcast, where the prefix sums are computed by sending different data to the left and right children. With this approach pipelining is possible, because the operations are the same as in reduction (§ Reduce) and broadcast (§ Broadcast).

Pipelined prefix sum on a binary tree is possible in [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math].

Barrier

Main page: Barrier (computer science)

The barrier[8] as a collective operation is a generalization of the concept of a barrier that can be used in distributed computing. When a processing unit calls barrier, it waits until all other processing units have called barrier as well. Barrier is thus used to achieve global synchronization in distributed computing.

One way to implement barrier is to call all-reduce (§ All-Reduce) with an empty/dummy operand. We know the runtime of all-reduce is [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math]. Using a dummy operand shrinks the size [math]\displaystyle{ n }[/math] to a constant and leads to a runtime of [math]\displaystyle{ \mathcal{O}(\alpha \log p) }[/math].
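The sketch below shows both a direct MPI_Barrier call and the emulation through an all-reduce over a dummy operand described above; the dummy value itself carries no information.

```c
/* Illustrative sketch: a barrier, directly and via a dummy all-reduce. */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Direct barrier: no rank continues before all ranks have reached it. */
    MPI_Barrier(MPI_COMM_WORLD);

    /* Emulation: all-reduce over a constant-size dummy operand. The result is
       discarded; only the implied synchronization matters. */
    int dummy = 0, ignored;
    MPI_Allreduce(&dummy, &ignored, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```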

Gather

Information flow of Gather operation performed on three nodes.

The gather communication pattern[9] is used to store data from all processing units on a single processing unit. Given [math]\displaystyle{ p }[/math] processing units, message [math]\displaystyle{ m_i }[/math] is on processing unit [math]\displaystyle{ p_i }[/math]. For a fixed processing unit [math]\displaystyle{ p_j }[/math], we want to store the message [math]\displaystyle{ m_1 \cdot m_2 \cdot \ldots \cdot m_p }[/math] on [math]\displaystyle{ p_j }[/math]. Gather can be thought of as a reduce operation (§ Reduce) that uses the concatenation operator. This works due to the fact that concatenation is associative. By using the same binomial tree reduction algorithm we get a runtime of [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta p n) }[/math]. We see that the asymptotic runtime is similar to the asymptotic runtime of reduce [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math], but with an additional factor p in the term [math]\displaystyle{ \beta n }[/math]. This additional factor is due to the message size increasing in each step as messages get concatenated. Compare this to reduce, where the message size is constant for operators like [math]\displaystyle{ min }[/math].
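A minimal MPI sketch of gather onto rank 0; buffer names and values are illustrative.

```c
/* Illustrative sketch: gather one value from every rank onto rank 0. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int local = 10 * rank;                        /* message m_i on p_i (example value) */
    int *all = NULL;
    if (rank == 0) all = malloc(p * sizeof(int)); /* only the root needs room for the concatenation */
    MPI_Gather(&local, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        for (int i = 0; i < p; i++) printf("m_%d = %d\n", i, all[i]);
        free(all);
    }
    MPI_Finalize();
    return 0;
}
```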

All-Gather

Information flow of All-Gather operation performed on three nodes.

The all-gather communication pattern[9] is used to collect data from all processing units and to store the collected data on all processing units. Given [math]\displaystyle{ p }[/math] processing units [math]\displaystyle{ p_i }[/math], message [math]\displaystyle{ m_i }[/math] initially stored on [math]\displaystyle{ p_i }[/math], we want to store the message [math]\displaystyle{ m_1 \cdot m_2 \cdot \ldots \cdot m_p }[/math] on each [math]\displaystyle{ p_j }[/math].

It can be thought of in multiple ways. The first is as an all-reduce operation (§ All-Reduce) with concatenation as the operator, in the same way that gather can be represented by reduce. The second is as a gather operation followed by a broadcast of the new message of size [math]\displaystyle{ pn }[/math]. With this we see that all-gather is possible in [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta p n) }[/math].
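A minimal MPI sketch of all-gather, where every rank ends up with the full concatenation; values are illustrative.

```c
/* Illustrative sketch: all-gather, every rank holds m_0 . m_1 . ... . m_{p-1}. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int local = rank + 100;                  /* message m_i on p_i (example value) */
    int *all = malloc(p * sizeof(int));      /* every rank holds the concatenation */
    MPI_Allgather(&local, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);
    printf("rank %d holds m_0..m_%d, e.g. m_0 = %d\n", rank, p - 1, all[0]);
    free(all);
    MPI_Finalize();
    return 0;
}
```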

Scatter

Information flow of Scatter operation performed on three nodes.

The scatter communication pattern[10] is used to distribute data from one processing unit to all the processing units. It differs from broadcast in that it does not send the same message to all processing units. Instead, it splits the message and delivers one part of it to each processing unit.

Given [math]\displaystyle{ p }[/math] processing units [math]\displaystyle{ p_i }[/math] and a fixed processing unit [math]\displaystyle{ p_j }[/math] that holds the message [math]\displaystyle{ m = m_1 \cdot m_2 \cdot \ldots \cdot m_p }[/math], we want to transport the message [math]\displaystyle{ m_i }[/math] onto [math]\displaystyle{ p_i }[/math]. The same implementation concerns as for gather (§ Gather) apply. This leads to an optimal runtime in [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta p n) }[/math].
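A minimal MPI sketch of scatter from rank 0; buffer contents are illustrative.

```c
/* Illustrative sketch: rank 0 splits its buffer and delivers one piece per rank. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    int *m = NULL;
    if (rank == 0) {                         /* root holds m = m_0 . m_1 . ... . m_{p-1} */
        m = malloc(p * sizeof(int));
        for (int i = 0; i < p; i++) m[i] = i * i;   /* example values */
    }
    int piece;
    MPI_Scatter(m, 1, MPI_INT, &piece, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d received m_%d = %d\n", rank, rank, piece);
    if (rank == 0) free(m);
    MPI_Finalize();
    return 0;
}
```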

All-to-all

Main page: All-to-all (parallel pattern)

All-to-all[11] is the most general communication pattern. For [math]\displaystyle{ 0 \leq i, j \lt p }[/math], message [math]\displaystyle{ m_{i, j} }[/math] is the message that is initially stored on node [math]\displaystyle{ i }[/math] and has to be delivered to node [math]\displaystyle{ j }[/math]. We can express all communication primitives that do not use operators through all-to-all. For example, broadcast of message [math]\displaystyle{ m }[/math] from node [math]\displaystyle{ p_k }[/math] is emulated by setting [math]\displaystyle{ m_{i, j} = m }[/math] for [math]\displaystyle{ i = k }[/math] and setting [math]\displaystyle{ m_{l, j} }[/math] empty for [math]\displaystyle{ l \neq k }[/math].

Assuming we have a fully connected network, the best possible runtime for all-to-all is in [math]\displaystyle{ \mathcal{O}(p (\alpha + \beta n)) }[/math]. This is achieved through [math]\displaystyle{ p }[/math] rounds of direct message exchange. If [math]\displaystyle{ p }[/math] is a power of two, in communication round [math]\displaystyle{ k }[/math], node [math]\displaystyle{ p_i }[/math] exchanges messages with node [math]\displaystyle{ p_j }[/math], where [math]\displaystyle{ j = i \oplus k }[/math].
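The following C sketch illustrates this round-based pairwise exchange with MPI_Sendrecv, assuming [math]\displaystyle{ p }[/math] is a power of two; buffer names are chosen for the example, and in practice the same pattern is available directly as MPI_Alltoall.

```c
/* Illustrative sketch: p rounds of pairwise exchange, partner = rank XOR k.
   Assumes the number of ranks is a power of two. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, p;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    /* send[j] is the message m_{rank,j} destined for rank j (example values). */
    int *send = malloc(p * sizeof(int));
    int *recv = malloc(p * sizeof(int));
    for (int j = 0; j < p; j++) send[j] = 1000 * rank + j;

    for (int k = 0; k < p; k++) {
        int partner = rank ^ k;                  /* round k: exchange with rank XOR k */
        if (partner == rank) {
            recv[rank] = send[rank];             /* round 0 is a local copy */
        } else {
            MPI_Sendrecv(&send[partner], 1, MPI_INT, partner, 0,
                         &recv[partner], 1, MPI_INT, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }
    printf("rank %d received m_{0,%d} = %d\n", rank, rank, recv[0]);
    free(send); free(recv);
    MPI_Finalize();
    return 0;
}
```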

If the message size is small and latency dominates the communication, a hypercube algorithm can be used to distribute the messages in time [math]\displaystyle{ \mathcal{O}(\log p (\alpha + \beta p n)) }[/math].

Information flow of All-to-All operation performed on three nodes. Letters indicate nodes and numbers indicate information items.

Runtime Overview

This table[12] gives an overview of the best known asymptotic runtimes, assuming we have free choice of network topology. Example topologies that allow optimal runtimes are the binary tree, the binomial tree and the hypercube. In practice, we have to adjust to the available physical topologies, e.g. dragonfly, fat tree or grid network; more information can be found under Network topology.

For each operation, the optimal algorithm can depend on the input sizes [math]\displaystyle{ n }[/math]. For example, broadcast for short messages is best implemented using a binomial tree whereas for long messages a pipelined communication on a balanced binary tree is optimal.

The complexities stated in the table depend on the latency [math]\displaystyle{ \alpha }[/math] and the communication cost per word [math]\displaystyle{ \beta }[/math] in addition to the number of processing units [math]\displaystyle{ p }[/math] and the input message size per node [math]\displaystyle{ n }[/math]. The # senders and # receivers columns represent the number of senders and receivers that are involved in the operation respectively. The # messages column lists the number of input messages and the Computations? column indicates if any computations are done on the messages or if the messages are just delivered without processing. Complexity gives the asymptotic runtime complexity of an optimal implementation under free choice of topology.

| Name | # senders | # receivers | # messages | Computations? | Complexity |
|------|-----------|-------------|------------|---------------|------------|
| Broadcast | [math]\displaystyle{ 1 }[/math] | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ 1 }[/math] | no | [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math] |
| Reduce | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ 1 }[/math] | [math]\displaystyle{ p }[/math] | yes | [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math] |
| All-reduce | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | yes | [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math] |
| Prefix sum | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | yes | [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta n) }[/math] |
| Barrier | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ 0 }[/math] | no | [math]\displaystyle{ \mathcal{O}(\alpha \log p) }[/math] |
| Gather | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ 1 }[/math] | [math]\displaystyle{ p }[/math] | no | [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta p n) }[/math] |
| All-Gather | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | no | [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta p n) }[/math] |
| Scatter | [math]\displaystyle{ 1 }[/math] | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | no | [math]\displaystyle{ \mathcal{O}(\alpha \log p + \beta p n) }[/math] |
| All-To-All | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p }[/math] | [math]\displaystyle{ p^2 }[/math] | no | [math]\displaystyle{ \mathcal{O}(\log p (\alpha + \beta p n)) }[/math] or [math]\displaystyle{ \mathcal{O}(p (\alpha + \beta n)) }[/math] |

Notes

  1. Intercommunicator Collective Operations. The Message Passing Interface (MPI) standard, chapter 7.3.1. Mathematics and Computer Science Division, Argonne National Laboratory.
  2. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, p. 395
  3. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, pp. 396-401
  4. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, pp. 402-403
  5. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, pp. 403-404
  6. Yuan, Xin (February 2009). "Bandwidth optimal all-reduce algorithms for clusters of workstations". Journal of Parallel and Distributed Computing 69 (2). https://www.cs.fsu.edu/~xyuan/paper/09jpdc.pdf. 
  7. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, pp. 404-406
  8. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, p. 408
  9. 9.0 9.1 Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, pp. 412-413
  10. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, p. 413
  11. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, pp. 413-418
  12. Sanders, Mehlhorn, Dietzfelbinger, Dementiev 2019, p. 394

References

Sanders, Peter; Mehlhorn, Kurt; Dietzfelbinger, Martin; Dementiev, Roman (2019). Sequential and Parallel Algorithms and Data Structures - The Basic Toolbox. Springer Nature Switzerland AG. ISBN 978-3-030-25208-3.