Generalized distributive law
The generalized distributive law (GDL) is a generalization of the distributive property which gives rise to a general message passing algorithm.[1] It is a synthesis of the work of many authors in the information theory, digital communications, signal processing, statistics, and artificial intelligence communities. The law and algorithm were introduced in a semi-tutorial by Srinivas M. Aji and Robert J. McEliece with the same title.[1]
Introduction
"The distributive law in mathematics is the law relating the operations of multiplication and addition, stated symbolically, [math]\displaystyle{ a*(b + c) = a*b + a*c }[/math]; that is, the monomial factor [math]\displaystyle{ a }[/math] is distributed, or separately applied, to each term of the binomial factor [math]\displaystyle{ b + c }[/math], resulting in the product [math]\displaystyle{ a*b + a*c }[/math]" - Britannica[2]
As can be observed from the definition, applying the distributive law to an arithmetic expression reduces the number of operations in it. In the previous example the total number of operations is reduced from three (two multiplications and one addition in [math]\displaystyle{ a*b + a*c }[/math]) to two (one multiplication and one addition in [math]\displaystyle{ a*(b + c) }[/math]). Generalizing the distributive law leads to a large family of fast algorithms, including the FFT and the Viterbi algorithm.
This is explained in a more formal way in the example below:
[math]\displaystyle{ \alpha(a,\, b) \stackrel{\mathrm{def}}{=} \displaystyle\sum \limits_{c,d,e \in A} f(a, \, c, \, b) \, g(a, \, d, \, e) }[/math] where [math]\displaystyle{ f(\cdot) }[/math] and [math]\displaystyle{ g(\cdot) }[/math] are real-valued functions, [math]\displaystyle{ a,b,c,d,e \in A }[/math] and [math]\displaystyle{ |A|=q }[/math] (say)
Here we are "marginalizing out" the independent variables ([math]\displaystyle{ c }[/math], [math]\displaystyle{ d }[/math], and [math]\displaystyle{ e }[/math]) to obtain the result. When we are calculating the computational complexity, we can see that for each [math]\displaystyle{ q^{2} }[/math] pairs of [math]\displaystyle{ (a,b) }[/math], there are [math]\displaystyle{ q^{3} }[/math] terms due to the triplet [math]\displaystyle{ (c,d,e) }[/math] which needs to take part in the evaluation of [math]\displaystyle{ \alpha(a,\, b) }[/math] with each step having one addition and one multiplication. Therefore, the total number of computations needed is [math]\displaystyle{ 2\cdot q^2 \cdot q^3 = 2q^5 }[/math]. Hence the asymptotic complexity of the above function is [math]\displaystyle{ O(n^5) }[/math].
If we apply the distributive law to the RHS of the equation, we get the following:
- [math]\displaystyle{ \alpha(a, \, b) \stackrel{\mathrm{def}}{=} \displaystyle\sum\limits_{c \in A} f(a, \, c, \, b ) \cdot \sum _{d,\,e \in A} g(a,\,d,\,e) }[/math]
This implies that [math]\displaystyle{ \alpha(a, \, b) }[/math] can be described as a product [math]\displaystyle{ \alpha_{1}(a,\, b) \cdot \alpha_{2}(a) }[/math] where [math]\displaystyle{ \alpha_{1}(a,b) \stackrel{\mathrm{def}}{=} \displaystyle\sum\limits_{c \in A} f(a, \, c, \, b ) }[/math] and [math]\displaystyle{ \alpha_{2}(a) \stackrel{\mathrm{def}}{=} \displaystyle\sum\limits_{d,\,e \in A} g(a,\, d, \,e ) }[/math]
Now, when we calculate the computational complexity, we can see that there are [math]\displaystyle{ q^{3} }[/math] additions each in [math]\displaystyle{ \alpha_{1}(a,\, b) }[/math] and [math]\displaystyle{ \alpha_{2}(a) }[/math], and [math]\displaystyle{ q^2 }[/math] multiplications when we use the product [math]\displaystyle{ \alpha_{1}(a,\, b) \cdot \alpha_{2}(a) }[/math] to evaluate [math]\displaystyle{ \alpha(a, \, b) }[/math]. Therefore, the total number of computations needed is [math]\displaystyle{ q^3 + q^3 + q^2 = 2q^3 + q^2 }[/math]. Hence the asymptotic complexity of calculating [math]\displaystyle{ \alpha(a,b) }[/math] reduces from [math]\displaystyle{ O(q^{5}) }[/math] to [math]\displaystyle{ O(q^{3}) }[/math]. This example shows that applying the distributive law reduces the computational complexity, which is one of the hallmarks of a "fast algorithm".
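The saving is easy to check numerically. The following is a minimal Python sketch, with illustrative random tables standing in for [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math], comparing the naive evaluation of [math]\displaystyle{ \alpha(a,b) }[/math] with the factored form:

```python
import itertools
import random

q = 4
A = range(q)
f = {(a, c, b): random.random() for a, c, b in itertools.product(A, A, A)}
g = {(a, d, e): random.random() for a, d, e in itertools.product(A, A, A)}

def alpha_naive(a, b):
    # One multiplication and one addition per (c, d, e) triple:
    # about 2 * q^3 operations for each of the q^2 pairs (a, b).
    return sum(f[a, c, b] * g[a, d, e]
               for c, d, e in itertools.product(A, A, A))

def alpha_factored(a, b):
    # Distributive law: alpha(a, b) = alpha1(a, b) * alpha2(a).
    alpha1 = sum(f[a, c, b] for c in A)                           # q terms
    alpha2 = sum(g[a, d, e] for d, e in itertools.product(A, A))  # q^2 terms
    return alpha1 * alpha2

assert all(abs(alpha_naive(a, b) - alpha_factored(a, b)) < 1e-9
           for a, b in itertools.product(A, A))
```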
History
Some of the problems that have been solved using the distributive law can be grouped as follows:
1. Decoding algorithms
A GDL-like algorithm was used by Gallager for decoding low-density parity-check codes. Building on Gallager's work, Tanner introduced the Tanner graph and expressed Gallager's algorithm in message-passing form. The Tanner graph also helped explain the Viterbi algorithm.
Forney observed that Viterbi's maximum-likelihood decoding of convolutional codes also used an algorithm of GDL-like generality.
2. Forward-backward algorithm
The forward–backward algorithm, used for tracking the states of a Markov chain, is likewise an algorithm of GDL-like generality.
3. Artificial intelligence
The notion of junction trees has been used to solve many problems in AI; the concept of bucket elimination builds on many of the same ideas.
The MPF problem
MPF, or marginalize a product of functions, is a general computational problem which includes as special cases many classical problems such as the computation of the discrete Hadamard transform, maximum-likelihood decoding of a linear code over a memoryless channel, and matrix chain multiplication. The power of the GDL lies in the fact that it applies to situations in which addition and multiplication are generalized. A commutative semiring is a good framework for explaining this behavior: it is defined over a set [math]\displaystyle{ K }[/math] with operators "[math]\displaystyle{ + }[/math]" and "[math]\displaystyle{ . }[/math]" such that [math]\displaystyle{ (K,\, +) }[/math] and [math]\displaystyle{ (K,\, .) }[/math] are commutative monoids and the distributive law holds.
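For illustration, a commutative semiring can be modeled in code as a pair of operations with their identities. The class and instance names below are assumptions of this sketch; the sum-product semiring gives ordinary marginalization, while the min-sum semiring turns the same machinery into Viterbi-style minimization:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    add: Callable[[Any, Any], Any]  # the commutative monoid (K, +)
    add_id: Any                     # identity element of "+"
    mul: Callable[[Any, Any], Any]  # the commutative monoid (K, .)
    mul_id: Any                     # identity element of "."

# (R, +, *): the sum-product semiring used for ordinary marginalization.
sum_product = Semiring(lambda x, y: x + y, 0.0, lambda x, y: x * y, 1.0)

# (R U {+inf}, min, +): the min-sum semiring; here the distributive law
# reads min(a + b, a + c) = a + min(b, c).
min_sum = Semiring(min, float("inf"), lambda x, y: x + y, 0.0)
```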
Let [math]\displaystyle{ p_1, \ldots, p_n }[/math] be variables such that [math]\displaystyle{ p_1 \in A_1, \ldots, p_n \in A_{n} }[/math], where each [math]\displaystyle{ A_i }[/math] is a finite set with [math]\displaystyle{ |A_i| = q_i }[/math], for [math]\displaystyle{ i = 1,\ldots, n }[/math]. If [math]\displaystyle{ S = \{i_{1}, \ldots, i_{r}\} }[/math] with [math]\displaystyle{ S \, \subset \{1,\ldots, n\} }[/math], let [math]\displaystyle{ A_{S} = A_{i_1} \times \cdots \times A_{i_r} }[/math], [math]\displaystyle{ p_{S} = (p_{i_1},\ldots, p_{i_r}) }[/math], [math]\displaystyle{ q_{S} = |A_{S}| }[/math], [math]\displaystyle{ \mathbf A = A_{1} \times \cdots \times A_{n} }[/math], and [math]\displaystyle{ \mathbf p = (p_{1}, \ldots, p_{n}) }[/math].
Let [math]\displaystyle{ S = \{S_{j}\}_{j=1}^M }[/math], where [math]\displaystyle{ S_{j} \subset \{1, ...\,,n\} }[/math]. Suppose that for each [math]\displaystyle{ i }[/math] a function [math]\displaystyle{ \alpha_{i}: A_{S_{i}} \rightarrow R }[/math] is given, where [math]\displaystyle{ R }[/math] is a commutative semiring. The variable lists [math]\displaystyle{ p_{S_{i}} }[/math] are called the local domains and the functions [math]\displaystyle{ \alpha_{i} }[/math] the local kernels.
Now the global kernel [math]\displaystyle{ \beta : \mathbf A \rightarrow R }[/math] is defined as: [math]\displaystyle{ \beta(p_{1}, ...\,, p_{n}) = \prod_{i=1}^M \alpha_{i}(p_{S_{i}}) }[/math]
Definition of MPF problem: For one or more indices [math]\displaystyle{ i = 1, ...\,, M }[/math], compute a table of the values of [math]\displaystyle{ S_{i} }[/math]-marginalization of the global kernel [math]\displaystyle{ \beta }[/math], which is the function [math]\displaystyle{ \beta_{i}:A_{S_{i}} \rightarrow R }[/math] defined as [math]\displaystyle{ \beta_{i}(p_{S_{i}}) \, = \displaystyle\sum\limits_{p_{S_{i}^c} \in A_{S_{i}^c}} \beta(p) }[/math]
Here [math]\displaystyle{ S_{i}^c }[/math] is the complement of [math]\displaystyle{ S_{i} }[/math] with respect to [math]\displaystyle{ \{1,...\,,n\} }[/math] and [math]\displaystyle{ \beta_i(p_{S_i}) }[/math] is called the [math]\displaystyle{ i^\text{th} }[/math] objective function, or the objective function at [math]\displaystyle{ S_i }[/math]. It can be observed that computing the [math]\displaystyle{ i^\text{th} }[/math] objective function in the obvious way needs [math]\displaystyle{ Mq_1 q_2 \cdots q_{n} }[/math] operations: there are [math]\displaystyle{ q_1 q_2\cdots q_n }[/math] additions and [math]\displaystyle{ (M-1)q_1 q_2\cdots q_n }[/math] multiplications in the computation. The GDL algorithm explained in the next section can reduce this computational complexity.
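For reference, the obvious computation can be sketched as follows (in the sum-product semiring); the data layout, with local domains as tuples of variable indices and kernels as dictionaries keyed by value tuples, is an assumption of this sketch:

```python
import itertools

def objective(i, domains, kernels, sizes):
    """Brute-force i-th objective: marginalize the global kernel over S_i^c.

    domains: list of tuples of variable indices (the sets S_j)
    kernels: list of dicts mapping value tuples to semiring values
    sizes:   alphabet sizes q_1..q_n, with A_k modeled as range(sizes[k])
    """
    table = {}
    for p in itertools.product(*[range(s) for s in sizes]):
        beta = 1.0                                  # global kernel at p ...
        for S, kernel in zip(domains, kernels):
            beta *= kernel[tuple(p[k] for k in S)]  # ... product of local kernels
        key = tuple(p[k] for k in domains[i])       # project p onto S_i
        table[key] = table.get(key, 0.0) + beta     # sum over S_i^c
    return table
```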
The following is an example of the MPF problem. Let [math]\displaystyle{ p_{1},\,p_{2},\,p_{3},\,p_{4}, }[/math] and [math]\displaystyle{ p_{5} }[/math] be variables such that [math]\displaystyle{ p_{1} \in A_{1}, p_{2} \in A_{2}, p_{3} \in A_{3}, p_{4} \in A_{4}, }[/math] and [math]\displaystyle{ p_{5} \in A_{5} }[/math]. Here [math]\displaystyle{ M=4 }[/math] and [math]\displaystyle{ S = \{\{1,2,5\},\{2,4\},\{1,4\}, \{2\}\} }[/math]. The given functions using these variables are [math]\displaystyle{ f(p_{1},p_{2},p_{5}) }[/math] and [math]\displaystyle{ g(p_{2},p_{4}) }[/math], and we need to calculate [math]\displaystyle{ \alpha(p_{1}, \, p_{4}) }[/math] and [math]\displaystyle{ \beta(p_{2}) }[/math] defined as:
- [math]\displaystyle{ \alpha(p_1, \, p_4) = \displaystyle\sum\limits_{p_2 \in A_2,\, p_3 \in A_3, \, p_5 \in A_5 } f(p_1,\, p_2,\, p_5 ) \cdot g(p_2, \, p_4) }[/math]
- [math]\displaystyle{ \beta(p_{2}) = \sum\limits_{p_1 \in A_1,\, p_3 \in A_3,\, p_4 \in A_4, \, p_5 \in A_5 } f(p_1, \, p_2, \, p_5) \cdot g(p_2, \, p_4) }[/math]
Here local domains and local kernels are defined as follows:
local domains | local kernels |
---|---|
[math]\displaystyle{ \{p_{1}, p_{2}, p_{5}\} }[/math] | [math]\displaystyle{ f(p_{1}, p_{2}, p_{5}) }[/math] |
[math]\displaystyle{ \{ p_{2}, p_{4}\} }[/math] | [math]\displaystyle{ g(p_{2}, p_{4}) }[/math] |
[math]\displaystyle{ \{p_{1}, p_{4}\} }[/math] | [math]\displaystyle{ 1 }[/math] |
[math]\displaystyle{ \{p_{2}\} }[/math] | [math]\displaystyle{ 1 }[/math] |
where [math]\displaystyle{ \alpha(p_{1}, p_{4}) }[/math] is the [math]\displaystyle{ 3^{rd} }[/math] objective function and [math]\displaystyle{ \beta(p_{2}) }[/math] is the [math]\displaystyle{ 4^{th} }[/math] objective function.
Consider another example where [math]\displaystyle{ p_{1},p_{2},p_{3},p_{4},r_{1},r_{2},r_{3},r_{4} \in \{0,1\} }[/math] and [math]\displaystyle{ f(r_{1},r_{2},r_{3},r_{4}) }[/math] is a real-valued function. Now we consider the MPF problem where the commutative semiring is the set of real numbers with ordinary addition and multiplication, and the local domains and local kernels are defined as follows:
local domains | local kernels |
---|---|
[math]\displaystyle{ \{r_1, r_2, r_3,r_4\} }[/math] | [math]\displaystyle{ f(r_1, r_2, r_3,r_4) }[/math] |
[math]\displaystyle{ \{ p_1, r_1\} }[/math] | [math]\displaystyle{ (-1)^{p_1 r_1} }[/math] |
[math]\displaystyle{ \{p_2, r_2\} }[/math] | [math]\displaystyle{ (-1)^{p_2 r_2} }[/math] |
[math]\displaystyle{ \{p_3, r_3\} }[/math] | [math]\displaystyle{ (-1)^{p_3 r_3} }[/math] |
[math]\displaystyle{ \{p_4, r_4\} }[/math] | [math]\displaystyle{ (-1)^{p_4 r_4} }[/math] |
[math]\displaystyle{ \{p_1,p_2, p_3, p_4\} }[/math] | [math]\displaystyle{ 1 }[/math] |
Now since the global kernel is defined as the product of the local kernels, it is
- [math]\displaystyle{ F(p_1, p_2, p_3,p_4, r_1, r_2, r_3,r_4) = f(r_1,r_2,r_3,r_4)\cdot(-1)^{p_1r_1 + p_2r_2 + p_3r_3 + p_4r_4} }[/math]
and the objective function at the local domain [math]\displaystyle{ p_1, p_2, p_3,p_4 }[/math] is
- [math]\displaystyle{ F(p_1, p_2, p_3,p_4) = \displaystyle\sum \limits_{r_1,r_2,r_3,r_4} f(r_1,r_2,r_3,r_4) \cdot(-1)^{p_1r_1 + p_2r_2 + p_3r_3 + p_4r_4}. }[/math]
This is the Hadamard transform of the function [math]\displaystyle{ f(\cdot) }[/math]. Hence the computation of the Hadamard transform is a special case of the MPF problem. Many more classical problems, as listed above, can likewise be shown to be special cases of the MPF problem; the details can be found in the original paper.[1]
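As a concrete check, the sketch below computes the Hadamard transform of a table on [math]\displaystyle{ \{0,1\}^4 }[/math] both by the defining sum and by the fast butterfly that reuses partial sums in GDL fashion; the sample table and all names are illustrative:

```python
import itertools

# Any real-valued table f(r1, r2, r3, r4) on {0,1}^4.
f = {r: r[0] + 2 * r[1] - r[2] + 3 * r[3]
     for r in itertools.product((0, 1), repeat=4)}

def hadamard_naive(f):
    # Direct evaluation of F(p) = sum_r f(r) * (-1)^(p . r).
    return {p: sum(v * (-1) ** sum(pi * ri for pi, ri in zip(p, r))
                   for r, v in f.items())
            for p in f}

def hadamard_fast(f):
    # Fast Walsh-Hadamard butterfly: one stage per variable, reusing sums.
    vals = [f[r] for r in sorted(f)]   # index (r1,r2,r3,r4) as a 4-bit integer
    for bit in (1, 2, 4, 8):
        vals = [vals[i] + vals[i ^ bit] if not i & bit
                else vals[i ^ bit] - vals[i]
                for i in range(16)]
    return dict(zip(sorted(f), vals))

assert hadamard_naive(f) == hadamard_fast(f)
```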
GDL: an algorithm for solving the MPF problem
If one can find a relationship among the elements of a given set [math]\displaystyle{ S }[/math], then one can solve the MPF problem using the notion of belief propagation, which is a special use of the "message passing" technique. The required relationship is that the given set of local domains can be organised into a junction tree. In other words, we create a graph-theoretic tree [math]\displaystyle{ T }[/math] with the elements of [math]\displaystyle{ S }[/math] as its vertices, such that for any two vertices [math]\displaystyle{ v_{i} }[/math] and [math]\displaystyle{ v_{j} }[/math], the intersection of the corresponding labels, viz. [math]\displaystyle{ S_{i}\cap S_{j} }[/math], is a subset of the label on every vertex on the unique path from [math]\displaystyle{ v_{i} }[/math] to [math]\displaystyle{ v_{j} }[/math].
For example,
Example 1: Consider the following nine local domains:
- [math]\displaystyle{ \{p_2\} }[/math]
- [math]\displaystyle{ \{p_3,p_2\} }[/math]
- [math]\displaystyle{ \{p_2,p_1\} }[/math]
- [math]\displaystyle{ \{p_3,p_4\} }[/math]
- [math]\displaystyle{ \{p_3\} }[/math]
- [math]\displaystyle{ \{p_1,p_4\} }[/math]
- [math]\displaystyle{ \{p_1\} }[/math]
- [math]\displaystyle{ \{p_4\} }[/math]
- [math]\displaystyle{ \{p_2,p_4\} }[/math]
For the above given set of local domains, one can organize them into a junction tree as shown below:
The situation differs if a set like the following is given.
Example 2: Consider the following four local domains:
- [math]\displaystyle{ \{p_1,p_2\} }[/math]
- [math]\displaystyle{ \{p_2,p_3\} }[/math]
- [math]\displaystyle{ \{p_3,p_4\} }[/math]
- [math]\displaystyle{ \{p_1,p_4\} }[/math]
Constructing a junction tree using only these local domains is not possible, since the intersection of any two of these domains is not contained in any third domain that could be placed on the path between them. However, if we add the two dummy domains shown below, then organizing the updated set into a junction tree becomes possible.
5. [math]\displaystyle{ \{p_{1},p_{2},p_{4}\} }[/math]
6. [math]\displaystyle{ \{p_{2},p_{3},p_{4}\} }[/math]
For this updated set of domains, the junction tree looks as shown below:
Generalized distributive law (GDL) algorithm
Input: A set of local domains.
Output: The objective functions for the given set of local domains, computed with the minimum possible number of operations.
If [math]\displaystyle{ v_{i} }[/math] and [math]\displaystyle{ v_{j} }[/math] are connected by an edge in the junction tree, then a message from [math]\displaystyle{ v_{i} }[/math] to [math]\displaystyle{ v_{j} }[/math] is a set/table of values given by a function [math]\displaystyle{ \mu_{i,j} }[/math]:[math]\displaystyle{ A_{S_{i}\cap S_{j}} \rightarrow R }[/math]. To begin with, all messages, i.e. [math]\displaystyle{ \mu_{i,j} }[/math] for all adjacent pairs [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] in the given tree, are defined to be identically [math]\displaystyle{ 1 }[/math]. When a particular message is updated, it follows the equation below.
- [math]\displaystyle{ \mu_{i,j}(p_{S_{i}\cap S_{j}}) = \sum_{p_{S_{i}\setminus S_{j}}\in A_{S_{i} \setminus S_{j}}} \alpha _{i} (p_{S_{i}}) \prod_{v_k \operatorname{adj} v_i,\, k \neq j} \mu_{k,i}(p_{S_k\cap S_i}) \qquad (1) }[/math]
where [math]\displaystyle{ v_k \operatorname{adj} v_i }[/math] means that [math]\displaystyle{ v_{k} }[/math] is adjacent to [math]\displaystyle{ v_{i} }[/math] in the tree.
Similarly, each vertex has a state, defined as a table containing the values of a function [math]\displaystyle{ \sigma_{i}: A_{S_{i}} \rightarrow R }[/math]. Just as messages are initialized identically to [math]\displaystyle{ 1 }[/math], the state of [math]\displaystyle{ v_{i} }[/math] is initialized to the local kernel [math]\displaystyle{ \alpha_i(p_{S_{i}}) }[/math]; whenever [math]\displaystyle{ \sigma_{i} }[/math] is updated, it follows the equation:
- [math]\displaystyle{ \sigma_i(p_{S_i}) = \alpha_i(p_{S_i}) \prod_{v_k \operatorname{adj} v_i} \mu_{k,i}(p_{S_k\cap S_i}) \qquad (2) }[/math]
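A minimal Python sketch of updates (1) and (2) in the sum-product semiring follows; the dictionary-based table layout and all names are assumptions made for illustration:

```python
import itertools

def message(i, j, labels, kernels, sizes, messages):
    """Equation (1): mu_{i,j} from alpha_i and incoming mu_{k,i}, k != j."""
    Si, Sj = labels[i], labels[j]
    keep = tuple(v for v in Si if v in Sj)          # variables in S_i n S_j
    out = {}
    for p in itertools.product(*[range(sizes[v]) for v in Si]):
        assignment = dict(zip(Si, p))
        val = kernels[i][p]
        for (src, dst), table in messages.items():  # incoming messages mu_{k,i}
            if dst == i and src != j:
                shared = tuple(v for v in labels[src] if v in Si)
                val *= table[tuple(assignment[v] for v in shared)]
        key = tuple(assignment[v] for v in keep)
        out[key] = out.get(key, 0.0) + val          # sum over S_i \ S_j
    return out

def state(i, labels, kernels, sizes, messages):
    """Equation (2): sigma_i = alpha_i times all incoming messages."""
    out = {}
    for p in itertools.product(*[range(sizes[v]) for v in labels[i]]):
        assignment = dict(zip(labels[i], p))
        val = kernels[i][p]
        for (src, dst), table in messages.items():
            if dst == i:
                shared = tuple(v for v in labels[src] if v in labels[i])
                val *= table[tuple(assignment[v] for v in shared)]
        out[p] = val
    return out
```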
Basic working of the algorithm
Given a set of local domains as input, we first find out whether a junction tree can be created, either from the set directly or by adding dummy domains to the set first. If no junction tree can be constructed, the algorithm reports that there is no way to reduce the number of steps needed to compute the given problem. Once we have a junction tree, the algorithm schedules messages and computes states; it is in these two steps that the number of operations is reduced, as discussed below.
Scheduling of the message passing and the state computation
There are two special cases to discuss here: the single-vertex problem, in which the objective function is computed at only one vertex [math]\displaystyle{ v_{0} }[/math], and the all-vertices problem, where the goal is to compute the objective function at all vertices.
Let us begin with the single-vertex problem. GDL starts by directing each edge towards the target vertex [math]\displaystyle{ v_0 }[/math], and messages are sent only in that direction. Note that each directed message is sent exactly once. Messages start from the leaf nodes (vertices of degree 1) and travel towards the target vertex [math]\displaystyle{ v_0 }[/math]: from the leaves to their parents, then from there to their parents, and so on, until they reach [math]\displaystyle{ v_0 }[/math]. The target vertex [math]\displaystyle{ v_0 }[/math] computes its state only when it has received messages from all its neighbors. Once we have that state, we have the answer and the algorithm terminates. A sketch of this message ordering follows below.
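The leaf-to-target message order can be computed with a simple depth-first pass. Below is an illustrative Python sketch; the adjacency map encodes the Example 1 junction tree (consistent with the edge set given in the scheduling-theorem section), and the function name is an assumption:

```python
def single_vertex_schedule(tree, v0):
    """Orient every edge toward v0; return (child, parent) message order,
    leaves first, so each message is sent after everything feeding into it."""
    order, stack, seen = [], [v0], {v0}
    while stack:
        v = stack.pop()
        for u in tree[v]:
            if u not in seen:
                seen.add(u)
                order.append((u, v))   # message u -> v, toward the target
                stack.append(u)
    order.reverse()
    return order

# The junction tree of Example 1; vertex 1 (local domain {p2}) is the target.
tree = {1: [2, 3], 2: [1, 4, 5], 3: [1, 6, 7], 4: [2, 8, 9],
        5: [2], 6: [3], 7: [3], 8: [4], 9: [4]}
print(single_vertex_schedule(tree, 1))  # leaf messages first, then (3,1), (2,1)
```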
For example, let us consider the junction tree constructed from the set of local domains given above, i.e. the set from Example 1.
Now the scheduling table for these domains, with the vertex whose local domain is [math]\displaystyle{ \{p_2\} }[/math] as the target, is:
Round | Message or state computation |
---|---|
1 | [math]\displaystyle{ \mu_{8,4}(p_{4}) = \alpha_{8}(p_{4}) }[/math] |
2 | [math]\displaystyle{ \mu_{9,4}(p_{4}) = \sum_{p_{2}} \alpha_{9}(p_{2},p_{4}) }[/math] |
3 | [math]\displaystyle{ \mu_{5,2}(p_{3}) = \alpha_{5}(p_{3}) }[/math] |
4 | [math]\displaystyle{ \mu_{6,3}(p_{1}) = \sum_{p_{4}} \alpha_{6}(p_{1},p_{4}) }[/math] |
5 | [math]\displaystyle{ \mu_{7,3}(p_{1}) = \alpha_{7}(p_{1}) }[/math] |
6 | [math]\displaystyle{ \mu_{4,2}(p_{3}) = \sum_{p_{4}} \alpha_{4}(p_{3},p_{4})\,\mu_{8,4}(p_{4})\,\mu_{9,4}(p_{4}) }[/math] |
7 | [math]\displaystyle{ \mu_{3,1}(p_{2}) = \sum_{p_{1}} \alpha_{3}(p_{2},p_{1})\,\mu_{6,3}(p_{1})\,\mu_{7,3}(p_{1}) }[/math] |
8 | [math]\displaystyle{ \mu_{2,1}(p_{2}) = \sum_{p_{3}} \alpha_{2}(p_{3},p_{2})\,\mu_{4,2}(p_{3})\,\mu_{5,2}(p_{3}) }[/math] |
9 | [math]\displaystyle{ \sigma_{1}(p_{2}) = \alpha_{1}(p_{2})\,\mu_{2,1}(p_{2})\,\mu_{3,1}(p_{2}) }[/math] |
Thus the complexity of single-vertex GDL can be expressed as
[math]\displaystyle{ \sum_{v \in V} d(v)\,|A_{S(v)}| }[/math] arithmetic operations,
where (the derivation of this bound is given later in the article)
[math]\displaystyle{ S(v) }[/math] is the label of [math]\displaystyle{ v }[/math], and
[math]\displaystyle{ d(v) }[/math] is the degree of [math]\displaystyle{ v }[/math] (i.e. the number of vertices adjacent to [math]\displaystyle{ v }[/math]).
To solve the all-vertices problem, we can schedule GDL in several ways. One is the fully parallel implementation, where in each round every state is updated and every message is computed and transmitted at the same time. In this implementation the states and messages stabilize after a number of rounds that is at most equal to the diameter of the tree, at which point the state of every vertex equals the desired objective function.
Another way to schedule GDL for this problem is the serial implementation. It is similar to the single-vertex problem, except that the algorithm does not stop until every vertex of the required set has received all messages from all its neighbors and has computed its state.
Thus the number of arithmetic operations this implementation requires is at most [math]\displaystyle{ \sum_{v \in V} d(v)|A_{S(v)}| }[/math].
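For illustration, a sketch of the fully parallel ("flooding") schedule, reusing the `message` helper from the earlier sketch; `init_messages` and `flooding` are illustrative names, and the fixed round count is the only stopping rule used here:

```python
import itertools

def init_messages(tree, labels, sizes):
    """All messages mu_{i,j} start out identically equal to 1."""
    msgs = {}
    for i in tree:
        for j in tree[i]:
            shared = [v for v in labels[i] if v in labels[j]]
            msgs[(i, j)] = dict.fromkeys(
                itertools.product(*[range(sizes[v]) for v in shared]), 1.0)
    return msgs

def flooding(tree, labels, kernels, sizes, rounds):
    msgs = init_messages(tree, labels, sizes)
    for _ in range(rounds):                       # rounds >= diameter suffices
        # every message is recomputed "in parallel" from last round's values
        msgs = {(i, j): message(i, j, labels, kernels, sizes, msgs)
                for i in tree for j in tree[i]}
    return msgs
```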
Constructing a junction tree
The key to constructing a junction tree lies in the local domain graph [math]\displaystyle{ G_{LD} }[/math], which is a weighted complete graph with [math]\displaystyle{ M }[/math] vertices [math]\displaystyle{ v_1,v_2,v_3,\ldots ,v_M }[/math] i.e. one for each local domain, having the weight of the edge [math]\displaystyle{ e_{i,j} : v_i \leftrightarrow v_j }[/math] defined by
[math]\displaystyle{ \omega_{i,j} = |S_{i} \cap S_{j}| }[/math].
If [math]\displaystyle{ x_{k} \in S_{i} \cap S_{j} }[/math], we say that [math]\displaystyle{ x_{k} }[/math] is contained in [math]\displaystyle{ e_{i,j} }[/math]. Denote by [math]\displaystyle{ \omega_{max} }[/math] the weight of a maximal-weight spanning tree of [math]\displaystyle{ G_{LD} }[/math]. The local domains can be organized into a junction tree if and only if [math]\displaystyle{ \omega_{max} }[/math] attains the value
- [math]\displaystyle{ \omega^{*} = \sum^M_{i=1}|S_{i}| - n, }[/math]
where [math]\displaystyle{ n }[/math] is the number of variables. For more clarity and details, please refer to these references.[3][4]
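This test is easy to sketch in code: build the local domain graph, extract a maximal-weight spanning tree with Kruskal's algorithm, and compare its weight against [math]\displaystyle{ \omega^{*} }[/math]. The function below is an illustrative sketch; applied to Example 2 it confirms that no junction tree exists until the dummy domains are added:

```python
def junction_tree(domains, n):
    """Return spanning-tree edges if the domains admit a junction tree, else None."""
    M = len(domains)
    edges = sorted(((len(set(domains[i]) & set(domains[j])), i, j)
                    for i in range(M) for j in range(i + 1, M)), reverse=True)
    parent = list(range(M))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x
    tree, weight = [], 0
    for w, i, j in edges:                        # Kruskal, heaviest edges first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
            weight += w
    omega_star = sum(len(s) for s in domains) - n
    return tree if weight == omega_star else None   # None: no junction tree

# Example 2 without the dummy domains: no junction tree exists.
assert junction_tree([{1, 2}, {2, 3}, {3, 4}, {1, 4}], 4) is None
# With the dummy domains {p1,p2,p4} and {p2,p3,p4} added, one does.
assert junction_tree([{1, 2}, {2, 3}, {3, 4}, {1, 4},
                      {1, 2, 4}, {2, 3, 4}], 4) is not None
```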
Scheduling theorem
Let [math]\displaystyle{ T }[/math] be a junction tree with vertex set [math]\displaystyle{ V }[/math] and edge set [math]\displaystyle{ E }[/math]. In this algorithm, messages are sent in both directions on any edge, so we can regard the edge set [math]\displaystyle{ E }[/math] as a set of ordered pairs of vertices. For example, from Figure 1, [math]\displaystyle{ E }[/math] can be defined as follows:
- [math]\displaystyle{ E = \{(1,2),(2,1),(1,3),(3,1),(4,2),(2,4),(5,2),(2,5),(6,3),(3,6),(7,3),(3,7),(8,4),(4,8),(9,4),(4,9)\} }[/math]
NOTE:[math]\displaystyle{ E }[/math] above gives you all the possible directions that a message can travel in the tree.
A schedule for the GDL is defined as a finite sequence of subsets of [math]\displaystyle{ E }[/math], generally written [math]\displaystyle{ \mathcal{E} = \{E_{1},E_{2},E_{3},\ldots, E_{N}\} }[/math], where [math]\displaystyle{ E_{k} }[/math] is the set of messages updated during the [math]\displaystyle{ k^\text{th} }[/math] round of running the algorithm.
Having fixed this notation, we can state the theorem. Given a schedule [math]\displaystyle{ \mathcal{E} =\{ E_1,E_2,E_3,\ldots, E_N\} }[/math], the corresponding message trellis is the finite directed graph with vertex set [math]\displaystyle{ V \times \{0,1,2,3,\ldots, N\} }[/math], in which a typical element is denoted by [math]\displaystyle{ v_{i}(t) }[/math] for [math]\displaystyle{ t \in \{0,1,2,3,\ldots,N\} }[/math]. Then, after completion of the message passing, the state at vertex [math]\displaystyle{ v_{j} }[/math] is the [math]\displaystyle{ j^\text{th} }[/math] objective function
- [math]\displaystyle{ \sigma_j(p_{S_j}) = \alpha_j(p_{S_j}) \prod_{v_k \operatorname{adj} v_j} \mu_{k,j}(p_{S_{k}\cap S_{j}}) }[/math]
if and only if there is a path from [math]\displaystyle{ v_i(0) }[/math] to [math]\displaystyle{ v_j(N) }[/math] for every vertex [math]\displaystyle{ v_i }[/math].
Computational complexity
Here we analyze the complexity of solving the MPF problem in terms of the number of arithmetic operations required. That is, we compare the number of operations required by the direct method (by the direct method we mean methods that do not use message passing or junction trees, in short, the concepts of GDL) with the number of operations required when using the generalized distributive law.
Example: Consider the simplest case where we need to compute the following expression [math]\displaystyle{ ab+ac }[/math].
Evaluating this expression naively requires two multiplications and one addition. Expressed using the distributive law, it can be written as [math]\displaystyle{ a(b+c) }[/math], a simple optimization that reduces the count to one addition and one multiplication.
Similarly to the example above, we will express the equations in different forms so as to perform as few operations as possible by applying the GDL.
As explained in the previous sections, we solve the problem using the concept of junction trees. The optimization obtained by the use of these trees is comparable to that obtained by solving a semigroup problem on trees. For example, to find the minimum of a group of numbers, observe that if we have a tree with all the elements at the leaves, then we can compare the minimum of pairs of items in parallel and write each resulting minimum to the parent. Propagating this process up the tree yields the minimum of the group of elements at the root. A sketch of this idea follows below.
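A minimal sketch of this tree-structured semigroup computation, with `min` as the operation; the function name is illustrative:

```python
def tree_min(items):
    """Tree reduction: each round combines disjoint pairs 'in parallel',
    so q elements reach the root after about log2(q) rounds."""
    level = list(items)
    while len(level) > 1:
        # pair up neighbours; a leftover odd element is promoted unchanged
        level = [min(level[k], level[k + 1]) if k + 1 < len(level) else level[k]
                 for k in range(0, len(level), 2)]
    return level[0]

assert tree_min([5, 3, 8, 1, 9, 2]) == 1
```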
The complexity of solving the junction tree using message passing is derived as follows.
We rewrite the formula used earlier in the following form. This is the equation for a message sent from vertex [math]\displaystyle{ v }[/math] to [math]\displaystyle{ w }[/math]:
- [math]\displaystyle{ \mu _{v,w} (p_{v \cap w}) = \sum _{p _{v \setminus w} \in A _{S(v) \setminus S(w)}} \alpha _{v} (p _{v}) \prod _{u \operatorname{adj} v,\, u \neq w} \mu _{u,v} (p _{u \cap v}) }[/math] ----message equation
Similarly, we rewrite the equation for calculating the state of vertex [math]\displaystyle{ v }[/math] as follows:
- [math]\displaystyle{ \sigma_v(p_v) = \alpha_v (p_v) \prod_{u \operatorname{adj} v} \mu _{u,v} (p _{u \cap v}) }[/math]
We first analyze the single-vertex problem, with target vertex [math]\displaystyle{ v_0 }[/math]; every edge of the tree is directed towards [math]\displaystyle{ v _{0} }[/math]. Suppose we have an edge [math]\displaystyle{ (v,w) }[/math]; we calculate the message using the message equation. Computing the message value [math]\displaystyle{ \mu_{v,w}(p _{v \cap w}) }[/math] for a single fixed [math]\displaystyle{ p _{v \cap w} }[/math] requires
- [math]\displaystyle{ q _{v \setminus w} -1 }[/math]
additions and
- [math]\displaystyle{ q _{v \setminus w} (d(v)-1) }[/math]
multiplications.
(We write [math]\displaystyle{ q _{v \setminus w} }[/math] for [math]\displaystyle{ |A _{S(v) \setminus S(w)}| }[/math].)
But there are [math]\displaystyle{ q _{v \cap w} \stackrel{\mathrm{def}}{=} | A _{S(v) \cap S(w)}| }[/math] possibilities for [math]\displaystyle{ p _{v \cap w} }[/math].
Thus the entire message will need
- [math]\displaystyle{ (q _{v \cap w})(q _{v \setminus w} -1) = q _{v} - q _{v \cap w} }[/math]
additions and
- [math]\displaystyle{ q _{v \cap w}\, q _{v \setminus w} (d(v) -1) = (d(v) -1) q _v }[/math]
multiplications
The total number of arithmetic operations required to send messages towards [math]\displaystyle{ v_0 }[/math] along all the edges of the tree will be
- [math]\displaystyle{ \sum _{ v \neq v_0} (q_v - q _{v \cap w}) }[/math]
additions and
- [math]\displaystyle{ \sum _{ v \neq v_0} (d(v) - 1) q_v }[/math]
multiplications.
Once all the messages have been transmitted, the algorithm terminates with the computation of the state at [math]\displaystyle{ v_0 }[/math]. The state computation requires [math]\displaystyle{ d(v_0) q _{v_0} }[/math] more multiplications. Thus the total number of calculations required, messages plus the final state, is given below:
- [math]\displaystyle{ \sum _{v \neq v _{0}} (q _{v} - q _{v \cap w}) }[/math]
additions and
- [math]\displaystyle{ \sum _{v \neq v _{0}} (d(v) -1) q _{v} + d(v _{0})q _{v _{0}} }[/math]
multiplications
Thus the grand total of the number of calculations is
- [math]\displaystyle{ \chi (T) = \sum _{v \in V} d(v)q _{v} - \sum _{e \in E} q _{e} }[/math] ----[math]\displaystyle{ (1) }[/math]
where [math]\displaystyle{ e = (v,w) }[/math] is an edge and its size is defined by [math]\displaystyle{ q _{e} = q _{v \cap w} }[/math].
The formula above gives us the upper bound.
If we define the complexity of the edge [math]\displaystyle{ e = (v,w) }[/math] as
- [math]\displaystyle{ \chi (e) = q _{v} + q _{w} - q _{v \cap w} }[/math]
Therefore, [math]\displaystyle{ (1) }[/math] can be written as
- [math]\displaystyle{ \chi(T) = \sum _{e \in E} \chi (e) }[/math]
We now calculate the edge complexities for the problem defined in Figure 1 as follows:
- [math]\displaystyle{ \chi(1,2) = q_2 + q_2 q_3 - q_2 }[/math]
- [math]\displaystyle{ \chi(2,4) = q_3 q_4 + q_2 q_3 - q_3 }[/math]
- [math]\displaystyle{ \chi(2,5) = q_3 + q_2 q_3 - q_3 }[/math]
- [math]\displaystyle{ \chi(4,8) = q_4 + q_3 q_4 - q_4 }[/math]
- [math]\displaystyle{ \chi(4,9) = q_2 q_4 + q_3 q_4 - q_4 }[/math]
- [math]\displaystyle{ \chi(1,3) = q _2 + q_2 q_1 - q_2 }[/math]
- [math]\displaystyle{ \chi(3,7) = q_1 + q_1 q_2 - q_1 }[/math]
- [math]\displaystyle{ \chi(3,6) = q_1 q _4 + q _1 q_2 - q _1 }[/math]
The total complexity will be [math]\displaystyle{ 3 q _{2}q _{3} + 3q _{3}q _{4}+ 3 q _{1}q _{2}+q _{2}q _{4} + q _{1}q _{4} - q _{1} - q _{3} - q _{4} }[/math], which is considerably lower than that of the direct method. (Here by the direct method we mean methods that do not use message passing; its cost is equivalent to computing the global kernel and marginalizing it separately at every node.)
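This total can be checked numerically. The sketch below recomputes [math]\displaystyle{ \chi(T) }[/math] edge by edge for the Figure 1 tree, using the Example 1 labels and the edge list from the scheduling-theorem section, and compares it with the closed form above; the particular alphabet sizes are arbitrary assumptions:

```python
from math import prod

q = {1: 2, 2: 3, 3: 4, 4: 5}                          # q1..q4 (any values work)
label = {1: [2], 2: [3, 2], 3: [2, 1], 4: [3, 4], 5: [3],
         6: [1, 4], 7: [1], 8: [4], 9: [2, 4]}        # Example 1 local domains
edges = [(1, 2), (2, 4), (2, 5), (4, 8), (4, 9), (1, 3), (3, 7), (3, 6)]

def size(vars_):                                      # |A_S| = product of q's
    return prod(q[v] for v in vars_)

# chi(T) = sum over edges of q_v + q_w - q_{v n w}
chi = sum(size(label[v]) + size(label[w]) - size(set(label[v]) & set(label[w]))
          for v, w in edges)

closed_form = (3 * q[2] * q[3] + 3 * q[3] * q[4] + 3 * q[1] * q[2]
               + q[2] * q[4] + q[1] * q[4] - q[1] - q[3] - q[4])
assert chi == closed_form
```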
Now we consider the all-vertices problem, where messages must be sent in both directions and the state must be computed at every vertex. Naively this would take [math]\displaystyle{ O( \sum _{v} d(v)^2 q _{v}) }[/math] operations, but by precomputing we can reduce the number of multiplications at a vertex of degree [math]\displaystyle{ d }[/math] to [math]\displaystyle{ 3(d-2) }[/math]. For example, if there is a set [math]\displaystyle{ (a _{1}, \ldots ,a _{d}) }[/math] of [math]\displaystyle{ d }[/math] numbers, it is possible to compute all [math]\displaystyle{ d }[/math] products of [math]\displaystyle{ d-1 }[/math] of the [math]\displaystyle{ a _{i} }[/math] with at most [math]\displaystyle{ 3(d-2) }[/math] multiplications, rather than the obvious [math]\displaystyle{ d(d-2) }[/math]. We do this by precomputing the quantities [math]\displaystyle{ b_1 = a_1,\; b_2= b_1 \cdot a_2 = a_1 \cdot a _2,\; \ldots,\; b _{d-1} = b _{d-2} \cdot a_{d-1} = a_1 a_2 \cdots a_{d-1} }[/math] and [math]\displaystyle{ c_d = a_d,\; c_{d-1} = a_{d-1} \cdot c_d = a _{d-1} \cdot a_d, \ldots , c_2 = a _2 \cdot c_3 = a _2 a_3 \cdots a_d }[/math]; this takes [math]\displaystyle{ 2 (d-2) }[/math] multiplications. Then, if [math]\displaystyle{ m_j }[/math] denotes the product of all [math]\displaystyle{ a_i }[/math] except for [math]\displaystyle{ a_j }[/math], we have [math]\displaystyle{ m_1 = c_2, m_2 = b_1 \cdot c_3 }[/math], and so on, which needs another [math]\displaystyle{ d-2 }[/math] multiplications, making the total [math]\displaystyle{ 3 (d-2) }[/math]. A sketch of this trick follows below.
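A sketch of the prefix/suffix precomputation, returning all [math]\displaystyle{ d }[/math] leave-one-out products with the stated [math]\displaystyle{ 3(d-2) }[/math] multiplications (for [math]\displaystyle{ d \ge 2 }[/math]); the function name is illustrative:

```python
def leave_one_out(a):
    d = len(a)
    b = [a[0]] * d
    for k in range(1, d - 1):        # prefixes b[k] = a[0]...a[k]: d-2 mults
        b[k] = b[k - 1] * a[k]
    c = [a[-1]] * d
    for k in range(d - 2, 0, -1):    # suffixes c[k] = a[k]...a[d-1]: d-2 mults
        c[k] = a[k] * c[k + 1]
    # m[j] = product of all a[i] with i != j: another d-2 mults in the middle
    return [c[1]] + [b[k - 1] * c[k + 1] for k in range(1, d - 1)] + [b[d - 2]]

assert leave_one_out([2, 3, 4, 5]) == [60, 40, 30, 24]
```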
There is not much choice in the construction of the junction tree, except that there may be many maximal-weight spanning trees; we should choose the spanning tree with the least [math]\displaystyle{ \chi(T) }[/math], and sometimes this might mean adding a local domain to lower the junction tree complexity.
It may seem that GDL is correct only when the local domains can be expressed as a junction tree. But even in cases where the graph has cycles, after a number of iterations the messages will converge approximately to the objective functions. Experiments on the Gallager–Tanner–Wiberg algorithm for low-density parity-check codes support this claim.
References
- ↑ 1.0 1.1 1.2 Aji, S.M.; McEliece, R.J. (Mar 2000). "The generalized distributive law". IEEE Transactions on Information Theory 46 (2): 325–343. doi:10.1109/18.825794. https://authors.library.caltech.edu/1541/1/AJIieeetit00.pdf.
- ↑ "distributive law". Encyclopædia Britannica. Encyclopædia Britannica Online. Encyclopædia Britannica Inc. http://www.britannica.com/EBchecked/topic/166204/distributive-law. Retrieved 1 May 2012.
- ↑ "Archived copy". Archived from the original on 2015-03-19. https://web.archive.org/web/20150319085443/https://ai.stanford.edu/~paskin/gm-short-course/lec3.pdf. Retrieved 2015-03-19. The Junction Tree Algorithms
- ↑ "The Junction Tree Algorithm". http://www-anw.cs.umass.edu/~cs691t/SS02/lectures/week7.PDF.
Original source: https://en.wikipedia.org/wiki/Generalized_distributive_law