Karp–Flatt metric


The Karp–Flatt metric is a measure of parallelization of code in parallel processor systems. This metric exists in addition to Amdahl's law and Gustafson's law as an indication of the extent to which a particular computer code is parallelized. It was proposed by Alan H. Karp and Horace P. Flatt in 1990.

Description

Given a parallel computation exhibiting speedup [math]\displaystyle{ \psi }[/math] on [math]\displaystyle{ p }[/math] processors, where [math]\displaystyle{ p }[/math] > 1, the experimentally determined serial fraction [math]\displaystyle{ e }[/math] is defined to be the Karp–Flatt metric:

[math]\displaystyle{ e = \frac{\frac{1}{\psi}-\frac{1}{p}}{1-\frac{1}{p}} }[/math]

The lower the value of [math]\displaystyle{ e }[/math], the better the parallelization.
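As a concrete illustration, the metric is straightforward to compute from a measured speedup (the function name and example numbers below are hypothetical, not from the original paper):

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction e for speedup psi on p > 1 processors."""
    if p <= 1:
        raise ValueError("p must be greater than 1")
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Perfect linear speedup (psi = p) gives e = 0: no serial bottleneck.
print(karp_flatt(8.0, 8))            # 0.0
# A speedup of 4 on 8 processors gives e = (1/4 - 1/8) / (1 - 1/8) = 1/7.
print(round(karp_flatt(4.0, 8), 3))  # 0.143
```

A value of e close to 0 indicates near-perfect parallelization, while e close to 1 indicates essentially serial execution.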

Justification

There are many ways to measure the performance of a parallel algorithm running on a parallel processor. The Karp–Flatt metric reveals aspects of that performance which are not easily discerned from other metrics. A pseudo-"derivation" of sorts follows from Amdahl's law, which can be written as:

[math]\displaystyle{ T(p) = T_s + \frac{T_p}{p} }[/math]

Where:

  • [math]\displaystyle{ T(p) }[/math] is the total time taken for code execution in a [math]\displaystyle{ p }[/math]-processor system
  • [math]\displaystyle{ T_s }[/math] is the time taken for the serial part of the code to run
  • [math]\displaystyle{ T_p }[/math] is the time taken for the parallel part of the code to run in one processor
  • [math]\displaystyle{ p }[/math] is the number of processors

Substituting [math]\displaystyle{ p }[/math] = 1 gives [math]\displaystyle{ T(1) = T_s + T_p }[/math]. If we define the serial fraction [math]\displaystyle{ e }[/math] = [math]\displaystyle{ \frac{T_s}{T(1)} }[/math], then the equation can be rewritten as

[math]\displaystyle{ T(p) = T(1) e + \frac{T(1) (1-e)}{p} }[/math]

In terms of the speedup [math]\displaystyle{ \psi }[/math] = [math]\displaystyle{ \frac{T(1)}{T(p)} }[/math] :

[math]\displaystyle{ \frac{1}{\psi} = e + \frac{1-e}{p} }[/math]

Solving for the serial fraction, we get the Karp–Flatt metric as above. Note that this is not a "derivation" from Amdahl's law as the left hand side represents a metric rather than a mathematically derived quantity. The treatment above merely shows that the Karp–Flatt metric is consistent with Amdahl's Law.
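This consistency can be checked numerically: pick a serial fraction, generate the speedup Amdahl's law predicts for it, and apply the metric to recover the same value. The sketch below uses arbitrary example values for [math]\displaystyle{ e }[/math] and [math]\displaystyle{ p }[/math]:

```python
def amdahl_speedup(e, p):
    """Speedup predicted by Amdahl's law for serial fraction e on p processors."""
    return 1.0 / (e + (1.0 - e) / p)

def karp_flatt(speedup, p):
    """Experimentally determined serial fraction for speedup psi on p > 1 processors."""
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Round trip: the metric recovers the serial fraction assumed by Amdahl's law.
for e in (0.05, 0.1, 0.25):
    for p in (2, 8, 64):
        psi = amdahl_speedup(e, p)
        assert abs(karp_flatt(psi, p) - e) < 1e-12
```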

Use

While the serial fraction e is often mentioned in the computer science literature, it is rarely used as a diagnostic tool in the way speedup and efficiency are; Karp and Flatt proposed the metric to correct this. The metric addresses shortcomings of the other laws and quantities used to measure the parallelization of computer code. In particular, Amdahl's law takes neither load balancing nor overhead into consideration. Using the serial fraction as a metric has definite advantages over the others, particularly as the number of processors grows.

For a problem of fixed size, the efficiency of a parallel computation typically decreases as the number of processors increases. By computing the serial fraction experimentally via the Karp–Flatt metric at several processor counts, we can determine whether the efficiency decrease is due to limited opportunities for parallelism or to increasing algorithmic or architectural overhead: a roughly constant e points to an inherently serial portion of the code, while an e that grows with p points to overhead.
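For instance (the measured speedups below are invented for illustration), computing e at each processor count makes the diagnosis immediate:

```python
def karp_flatt(speedup, p):
    """Experimentally determined serial fraction for speedup psi on p > 1 processors."""
    return (1.0 / speedup - 1.0 / p) / (1.0 - 1.0 / p)

# Hypothetical measured speedups for a fixed-size problem.
measurements = {2: 1.87, 4: 3.15, 8: 4.52, 16: 5.42}

for p, psi in sorted(measurements.items()):
    print(p, round(karp_flatt(psi, p), 2))
# e comes out to roughly 0.07, 0.09, 0.11, 0.13 - rising with p, which
# suggests growing overhead (communication, load imbalance) rather than a
# fixed serial fraction is limiting the speedup.
```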

References

  • Karp, Alan H.; Flatt, Horace P. (1990). "Measuring Parallel Processor Performance". Communications of the ACM 33 (5): 539–543. doi:10.1145/78607.78614. 
  • Quinn, Michael J. (2004). Parallel Programming in C with MPI and OpenMP. Boston: McGraw-Hill. ISBN 0-07-058201-7. 