
UPGMA (unweighted pair group method with arithmetic mean) is a simple agglomerative (bottom-up) hierarchical clustering method. It also has a weighted variant, WPGMA, and they are generally attributed to Sokal and Michener.[1]

Note that the term unweighted indicates that all distances contribute equally to each average that is computed; it does not refer to the arithmetic by which the average is obtained. Thus the simple averaging in WPGMA produces a weighted result, while the proportional averaging in UPGMA produces an unweighted result (see the working example).[2]

Algorithm

The UPGMA algorithm constructs a rooted tree (dendrogram) that reflects the structure present in a pairwise similarity matrix (or a dissimilarity matrix). At each step, the nearest two clusters are combined into a higher-level cluster. The distance between any two clusters [math]\displaystyle{ \mathcal{A} }[/math] and [math]\displaystyle{ \mathcal{B} }[/math], each of size (i.e., cardinality) [math]\displaystyle{ {|\mathcal{A}|} }[/math] and [math]\displaystyle{ {|\mathcal{B}|} }[/math], is taken to be the average of all distances [math]\displaystyle{ d(x,y) }[/math] between pairs of objects [math]\displaystyle{ x }[/math] in [math]\displaystyle{ \mathcal{A} }[/math] and [math]\displaystyle{ y }[/math] in [math]\displaystyle{ \mathcal{B} }[/math], that is, the mean distance between elements of each cluster:

[math]\displaystyle{ {1 \over {|\mathcal{A}|\cdot|\mathcal{B}|}}\sum_{x \in \mathcal{A}}\sum_{ y \in \mathcal{B}} d(x,y) }[/math]

In other words, at each clustering step, the distance between the newly joined cluster [math]\displaystyle{ \mathcal{A} \cup \mathcal{B} }[/math] and any other cluster [math]\displaystyle{ X }[/math] is given by the proportional averaging of the [math]\displaystyle{ d_{\mathcal{A},X} }[/math] and [math]\displaystyle{ d_{\mathcal{B},X} }[/math] distances:

[math]\displaystyle{ d_{(\mathcal{A} \cup \mathcal{B}),X} = \frac{|\mathcal{A}| \cdot d_{\mathcal{A},X} + |\mathcal{B}| \cdot d_{\mathcal{B},X}}{|\mathcal{A}| + |\mathcal{B}|} }[/math]

The UPGMA algorithm produces rooted dendrograms and requires a constant-rate assumption: it assumes an ultrametric tree, in which the distances from the root to every branch tip are equal. When the tips are molecular data (i.e., DNA, RNA or protein) sampled at the same time, the ultrametricity assumption becomes equivalent to assuming a molecular clock.
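The procedure above can be sketched in Python (an illustrative implementation written for this article, not a library routine; the function name and the tuple-based tree encoding are arbitrary choices):

```python
# Illustrative UPGMA sketch: greedily merge the closest pair of clusters,
# updating distances by proportional (size-weighted) averaging and
# tracking ultrametric node heights.

def upgma(labels, d):
    """labels: leaf names; d: {frozenset({i, j}): distance} over leaf indices."""
    # id -> (subtree, cluster size, height of the cluster's root above the tips)
    clusters = {i: (name, 1, 0.0) for i, name in enumerate(labels)}
    d = dict(d)
    nxt = len(labels)
    while len(clusters) > 1:
        pair = min(d, key=d.get)              # closest pair of active clusters
        i, j = sorted(pair)
        dij = d.pop(pair)
        ti, ni, hi = clusters.pop(i)
        tj, nj, hj = clusters.pop(j)
        h = dij / 2                           # ultrametric: new node at d(i,j)/2
        tree = ((ti, h - hi), (tj, h - hj))   # children as (subtree, branch length)
        for k in list(clusters):
            dik = d.pop(frozenset((i, k)))
            djk = d.pop(frozenset((j, k)))
            # d(A∪B, X) = (|A|·d(A,X) + |B|·d(B,X)) / (|A| + |B|)
            d[frozenset((nxt, k))] = (ni * dik + nj * djk) / (ni + nj)
        clusters[nxt] = (tree, ni + nj, h)
        nxt += 1
    (tree, _, height), = clusters.values()
    return tree, height

labels = ["a", "b", "c", "d", "e"]
M = [[0, 17, 21, 31, 23],
     [17, 0, 30, 34, 21],
     [21, 30, 0, 28, 39],
     [31, 34, 28, 0, 43],
     [23, 21, 39, 43, 0]]
d = {frozenset((i, j)): M[i][j] for i in range(5) for j in range(i + 1, 5)}
tree, height = upgma(labels, d)
print(tree, height)   # root height 16.5, matching the worked example
```

Run on the distance matrix of the worked example, this sketch reproduces the merge order ((a,b), then ((a,b),e), then (c,d)) and the final root height of 16.5.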

Working example

This working example is based on a JC69 genetic distance matrix computed from the 5S ribosomal RNA sequence alignment of five bacteria: Bacillus subtilis ([math]\displaystyle{ a }[/math]), Bacillus stearothermophilus ([math]\displaystyle{ b }[/math]), Lactobacillus viridescens ([math]\displaystyle{ c }[/math]), Acholeplasma modicum ([math]\displaystyle{ d }[/math]), and Micrococcus luteus ([math]\displaystyle{ e }[/math]).[3][4]

First step

  • First clustering

Let us assume that we have five elements [math]\displaystyle{ (a,b,c,d,e) }[/math] and the following matrix [math]\displaystyle{ D_1 }[/math] of pairwise distances between them:

      a    b    c    d    e
a     0   17   21   31   23
b    17    0   30   34   21
c    21   30    0   28   39
d    31   34   28    0   43
e    23   21   39   43    0

In this example, [math]\displaystyle{ D_1 (a,b)=17 }[/math] is the smallest value of [math]\displaystyle{ D_1 }[/math], so we join elements [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math].

  • First branch length estimation

Let [math]\displaystyle{ u }[/math] denote the node to which [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are now connected. Setting [math]\displaystyle{ \delta(a,u)=\delta(b,u)=D_1(a,b)/2 }[/math] ensures that elements [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] are equidistant from [math]\displaystyle{ u }[/math]. This corresponds to the expectation of the ultrametricity hypothesis. The branches joining [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math] to [math]\displaystyle{ u }[/math] then have lengths [math]\displaystyle{ \delta(a,u)=\delta(b,u)=17/2=8.5 }[/math] (see the final dendrogram).

  • First distance matrix update

We then proceed to update the initial distance matrix [math]\displaystyle{ D_1 }[/math] into a new distance matrix [math]\displaystyle{ D_2 }[/math] (see below), reduced in size by one row and one column because of the clustering of [math]\displaystyle{ a }[/math] with [math]\displaystyle{ b }[/math]. The new distances in [math]\displaystyle{ D_2 }[/math] are calculated by averaging the distances between each element of the first cluster [math]\displaystyle{ (a,b) }[/math] and each of the remaining elements:

[math]\displaystyle{ D_2((a,b),c)=(D_1(a,c) \times 1 + D_1(b,c) \times 1)/(1+1)=(21+30)/2=25.5 }[/math]

[math]\displaystyle{ D_2((a,b),d)=(D_1(a,d) + D_1(b,d))/2=(31+34)/2=32.5 }[/math]

[math]\displaystyle{ D_2((a,b),e)=(D_1(a,e) + D_1(b,e))/2=(23+21)/2=22 }[/math]

The other values of [math]\displaystyle{ D_2 }[/math] are not affected by the matrix update, as they correspond to distances between elements not involved in the first cluster.
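These updates are plain averages and are easy to check in Python (a verification snippet for this step, not part of the original article):

```python
# First update of the worked example: average each element of (a, b)
# against each remaining element c, d, e.
D1 = {("a", "c"): 21, ("b", "c"): 30,
      ("a", "d"): 31, ("b", "d"): 34,
      ("a", "e"): 23, ("b", "e"): 21}
D2 = {x: (D1[("a", x)] + D1[("b", x)]) / 2 for x in ("c", "d", "e")}
print(D2)   # {'c': 25.5, 'd': 32.5, 'e': 22.0}
```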

Second step

  • Second clustering

We now reiterate the three previous steps, starting from the new distance matrix [math]\displaystyle{ D_2 }[/math]:

        (a,b)     c     d     e
(a,b)       0  25.5  32.5    22
c        25.5     0    28    39
d        32.5    28     0    43
e          22    39    43     0

Here, [math]\displaystyle{ D_2 ((a,b),e)=22 }[/math] is the smallest value of [math]\displaystyle{ D_2 }[/math], so we join cluster [math]\displaystyle{ (a,b) }[/math] and element [math]\displaystyle{ e }[/math].

  • Second branch length estimation

Let [math]\displaystyle{ v }[/math] denote the node to which [math]\displaystyle{ (a,b) }[/math] and [math]\displaystyle{ e }[/math] are now connected. Because of the ultrametricity constraint, the branches joining [math]\displaystyle{ a }[/math] or [math]\displaystyle{ b }[/math] to [math]\displaystyle{ v }[/math], and [math]\displaystyle{ e }[/math] to [math]\displaystyle{ v }[/math] are equal and have the following length: [math]\displaystyle{ \delta(a,v)=\delta(b,v)=\delta(e,v)=22/2=11 }[/math]

We deduce the missing branch length: [math]\displaystyle{ \delta(u,v)=\delta(e,v)-\delta(a,u)=\delta(e,v)-\delta(b,u)=11-8.5=2.5 }[/math] (see the final dendrogram).

  • Second distance matrix update

We then proceed to update [math]\displaystyle{ D_2 }[/math] into a new distance matrix [math]\displaystyle{ D_3 }[/math] (see below), reduced in size by one row and one column because of the clustering of [math]\displaystyle{ (a,b) }[/math] with [math]\displaystyle{ e }[/math]. The new distances in [math]\displaystyle{ D_3 }[/math] are calculated by proportional averaging:

[math]\displaystyle{ D_3(((a,b),e),c)=(D_2((a,b),c) \times 2 + D_2(e,c) \times 1)/(2+1)=(25.5 \times 2 + 39 \times 1)/3=30 }[/math]

Thanks to this proportional average, the calculation of this new distance accounts for the larger size of the [math]\displaystyle{ (a,b) }[/math] cluster (two elements) with respect to [math]\displaystyle{ e }[/math] (one element). Similarly:

[math]\displaystyle{ D_3(((a,b),e),d)=(D_2((a,b),d) \times 2 + D_2(e,d) \times 1)/(2+1)=(32.5 \times 2 + 43 \times 1)/3=36 }[/math]

Proportional averaging therefore gives equal weight to the initial distances of matrix [math]\displaystyle{ D_1 }[/math]. This is the reason why the method is unweighted, not with respect to the mathematical procedure but with respect to the initial distances.
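The numerical difference between proportional (UPGMA) and simple (WPGMA) averaging is easy to see on this very step (a comparison snippet; the WPGMA value is not part of the worked example):

```python
# Updated distance from the 3-element cluster ((a,b),e) to c.
d_ab_c, d_e_c = 25.5, 39        # values taken from matrix D2
size_ab, size_e = 2, 1

upgma_dist = (size_ab * d_ab_c + size_e * d_e_c) / (size_ab + size_e)
wpgma_dist = (d_ab_c + d_e_c) / 2   # WPGMA ignores cluster sizes

print(upgma_dist)   # 30.0  -> each original D1 distance counts equally
print(wpgma_dist)   # 32.25 -> the distance through e is over-weighted
```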

Third step

  • Third clustering

We again reiterate the three previous steps, starting from the updated distance matrix [math]\displaystyle{ D_3 }[/math].

           ((a,b),e)     c     d
((a,b),e)          0    30    36
c                 30     0    28
d                 36    28     0

Here, [math]\displaystyle{ D_3 (c,d)=28 }[/math] is the smallest value of [math]\displaystyle{ D_3 }[/math], so we join elements [math]\displaystyle{ c }[/math] and [math]\displaystyle{ d }[/math].

  • Third branch length estimation

Let [math]\displaystyle{ w }[/math] denote the node to which [math]\displaystyle{ c }[/math] and [math]\displaystyle{ d }[/math] are now connected. The branches joining [math]\displaystyle{ c }[/math] and [math]\displaystyle{ d }[/math] to [math]\displaystyle{ w }[/math] then have lengths [math]\displaystyle{ \delta(c,w)=\delta(d,w)=28/2=14 }[/math] (see the final dendrogram).

  • Third distance matrix update

There is a single entry to update, keeping in mind that the two elements [math]\displaystyle{ c }[/math] and [math]\displaystyle{ d }[/math] each have a contribution of [math]\displaystyle{ 1 }[/math] in the average computation:

[math]\displaystyle{ D_4((c,d),((a,b),e))=(D_3(c,((a,b),e)) \times 1 + D_3(d,((a,b),e)) \times 1)/(1+1)=(30 \times 1 + 36 \times 1)/2=33 }[/math]

Final step

The final [math]\displaystyle{ D_4 }[/math] matrix is:

           ((a,b),e)  (c,d)
((a,b),e)          0     33
(c,d)             33      0

So we join clusters [math]\displaystyle{ ((a,b),e) }[/math] and [math]\displaystyle{ (c,d) }[/math].

Let [math]\displaystyle{ r }[/math] denote the (root) node to which [math]\displaystyle{ ((a,b),e) }[/math] and [math]\displaystyle{ (c,d) }[/math] are now connected. The branches joining [math]\displaystyle{ ((a,b),e) }[/math] and [math]\displaystyle{ (c,d) }[/math] to [math]\displaystyle{ r }[/math] then have lengths:

[math]\displaystyle{ \delta(((a,b),e),r)=\delta((c,d),r)=33/2=16.5 }[/math]

We deduce the two remaining branch lengths:

[math]\displaystyle{ \delta(v,r)=\delta(((a,b),e),r)-\delta(e,v)=16.5-11=5.5 }[/math]

[math]\displaystyle{ \delta(w,r)=\delta((c,d),r)-\delta(c,w)=16.5-14=2.5 }[/math]

The UPGMA dendrogram

[Figure: the final UPGMA dendrogram built from the 5S rRNA distance data.]

The dendrogram is now complete.[5] It is ultrametric because all tips ([math]\displaystyle{ a }[/math] to [math]\displaystyle{ e }[/math]) are equidistant from [math]\displaystyle{ r }[/math]:

[math]\displaystyle{ \delta(a,r)=\delta(b,r)=\delta(e,r)=\delta(c,r)=\delta(d,r)=16.5 }[/math]

The dendrogram is therefore rooted by [math]\displaystyle{ r }[/math], its deepest node.
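Summing the branch lengths computed above along each root-to-tip path confirms the ultrametricity (a verification snippet using the values from this example):

```python
# Root-to-tip paths in the final dendrogram (branch lengths from the text).
paths = {
    "a": [8.5, 2.5, 5.5],   # a→u, u→v, v→r
    "b": [8.5, 2.5, 5.5],   # b→u, u→v, v→r
    "e": [11.0, 5.5],       # e→v, v→r
    "c": [14.0, 2.5],       # c→w, w→r
    "d": [14.0, 2.5],       # d→w, w→r
}
depths = {tip: sum(branches) for tip, branches in paths.items()}
print(depths)   # every tip sits 16.5 below the root r
```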

Comparison with other linkages

Alternative linkage schemes include single linkage clustering, complete linkage clustering, and WPGMA average linkage clustering. Implementing a different linkage is simply a matter of using a different formula to calculate inter-cluster distances during the distance matrix update steps of the above algorithm. Complete linkage clustering avoids a drawback of single linkage clustering: the so-called chaining phenomenon, where clusters formed via single linkage may be forced together because single elements are close to each other, even though many of the elements in each cluster are very distant from each other. Complete linkage tends to find compact clusters of approximately equal diameters.[6]
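The four linkages differ only in the distance-update rule, which can be sketched as interchangeable functions (an illustrative sketch; nA and nB denote the sizes of the two merged clusters):

```python
# Distance-update rules: given d(A,X), d(B,X) and the sizes of A and B,
# compute d(A∪B, X). Swapping this single function changes the linkage.
def single_link(dAX, dBX, nA, nB):
    return min(dAX, dBX)          # nearest pair of elements

def complete_link(dAX, dBX, nA, nB):
    return max(dAX, dBX)          # farthest pair of elements

def wpgma_link(dAX, dBX, nA, nB):
    return (dAX + dBX) / 2        # simple average, sizes ignored

def upgma_link(dAX, dBX, nA, nB):
    return (nA * dAX + nB * dBX) / (nA + nB)   # size-weighted average

# First update of the worked example, d((a,b), c), under each rule:
for rule in (single_link, complete_link, wpgma_link, upgma_link):
    print(rule.__name__, rule(21, 30, 1, 1))
```

For singleton clusters (sizes 1 and 1) WPGMA and UPGMA coincide; they diverge only once clusters of unequal size are merged, as in the second update step above.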

[Figure: comparison of dendrograms obtained from the same distance matrix under four clustering methods: single-linkage clustering, complete-linkage clustering, WPGMA average linkage, and UPGMA average linkage.]

Uses

  • In ecology, it is one of the most popular methods for the classification of sampling units (such as vegetation plots) on the basis of their pairwise similarities in relevant descriptor variables (such as species composition).[7] For example, it has been used to understand the trophic interaction between marine bacteria and protists.[8]
  • In bioinformatics, UPGMA is used for the creation of phenetic trees (phenograms). UPGMA was initially designed for use in protein electrophoresis studies, but is currently most often used to produce guide trees for more sophisticated algorithms. It is used, for example, in sequence alignment procedures, where it proposes an order in which the sequences will be aligned. Indeed, the guide tree aims at grouping the most similar sequences, regardless of their evolutionary rate or phylogenetic affinities, and that is exactly the goal of UPGMA.[9]
  • In phylogenetics, UPGMA assumes a constant rate of evolution (molecular clock hypothesis) and that all sequences were sampled at the same time; it is not a well-regarded method for inferring evolutionary relationships unless this assumption has been tested and justified for the data set being used. Note that even under a strict clock, sequences sampled at different times should not be expected to yield an ultrametric tree.

Time complexity

A trivial implementation of the algorithm to construct the UPGMA tree has [math]\displaystyle{ O(n^3) }[/math] time complexity, and using a heap for each cluster to keep its distances from the other clusters reduces the running time to [math]\displaystyle{ O(n^2 \log n) }[/math]. Fionn Murtagh presented an [math]\displaystyle{ O(n^2) }[/math] time and space algorithm.[10]

References

  1. Sokal, R. R.; Michener, C. D. (1958). "A statistical method for evaluating systematic relationships". University of Kansas Science Bulletin 38: 1409–1438.
  2. Garcia, Santi; Puigbò, Pere. "DendroUPGMA: A dendrogram construction utility". p. 4. 
  3. "Collection of published 5S, 5.8S and 4.5S ribosomal RNA sequences". Nucleic Acids Research 14 Suppl (Suppl): r1–59. 1986. doi:10.1093/nar/14.suppl.r1. PMID 2422630. 
  4. Phylogenetic analysis using ribosomal RNA. Methods in Enzymology. 164. 1988. pp. 793–812. doi:10.1016/s0076-6879(88)64084-5. 
  5. Swofford, David L.; Olsen, Gary J.; Waddell, Peter J.; Hillis, David M. (1996). "Phylogenetic inference". Molecular Systematics, 2nd edition. Sunderland, MA: Sinauer. pp. 407–514. ISBN 9780878932825. 
  6. Everitt, B. S.; Landau, S.; Leese, M. (2001). Cluster Analysis. 4th Edition. London: Arnold. pp. 62–64. 
  7. Numerical Ecology. Developments in Environmental Modelling. 20 (Second English ed.). Amsterdam: Elsevier. 1998. 
  8. "Different marine heterotrophic nanoflagellates affect differentially the composition of enriched bacterial communities". Microbial Ecology 49 (3): 474–85. April 2005. doi:10.1007/s00248-004-0035-5. PMID 16003474. 
  9. "Multiple alignment by aligning alignments". Bioinformatics 23 (13): i559–68. July 2007. doi:10.1093/bioinformatics/btm226. PMID 17646343. 
  10. Murtagh, Fionn (1984). "Complexities of Hierarchic Clustering Algorithms: the state of the art". Computational Statistics Quarterly 1: 101–113.
