Ford–Fulkerson algorithm
The Ford–Fulkerson method or Ford–Fulkerson algorithm (FFA) is a greedy algorithm that computes the maximum flow in a flow network. It is sometimes called a "method" instead of an "algorithm" because the approach to finding augmenting paths in the residual graph is not fully specified,[1] or is specified in several implementations with different running times.[2] It was published in 1956 by L. R. Ford Jr. and D. R. Fulkerson.[3] The name "Ford–Fulkerson" is often also used for the Edmonds–Karp algorithm, which is a fully defined implementation of the Ford–Fulkerson method.
The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path.
Algorithm
Let [math]\displaystyle{ G(V,E) }[/math] be a graph, and for each edge from u to v, let [math]\displaystyle{ c(u,v) }[/math] be the capacity and [math]\displaystyle{ f(u,v) }[/math] be the flow. We want to find the maximum flow from the source s to the sink t. After every step in the algorithm the following is maintained:
- Capacity constraints: [math]\displaystyle{ \forall (u, v) \in E: \ f(u,v) \le c(u,v) }[/math]. The flow along an edge cannot exceed its capacity.
- Skew symmetry: [math]\displaystyle{ \forall (u, v) \in E: \ f(u,v) = - f(v,u) }[/math]. The net flow from u to v must be the opposite of the net flow from v to u (see example).
- Flow conservation: [math]\displaystyle{ \forall u \in V: u \neq s \text{ and } u \neq t \Rightarrow \sum_{w \in V} f(u,w) = 0 }[/math]. The net flow to a node is zero, except for the source, which "produces" flow, and the sink, which "consumes" flow.
- Value(f): [math]\displaystyle{ \sum_{(s,u) \in E} f(s, u) = \sum_{(v,t) \in E} f(v, t) }[/math]. The flow leaving from s must be equal to the flow arriving at t.
This means that the flow through the network is a legal flow after each round in the algorithm. We define the residual network [math]\displaystyle{ G_f(V,E_f) }[/math] to be the network with capacity [math]\displaystyle{ c_f(u,v) = c(u,v) - f(u,v) }[/math] and no flow. Notice that it can happen that a flow from v to u is allowed in the residual network, though disallowed in the original network: if [math]\displaystyle{ f(u,v)\gt 0 }[/math] and [math]\displaystyle{ c(v,u)=0 }[/math] then [math]\displaystyle{ c_f(v,u)=c(v,u)-f(v,u)=f(u,v)\gt 0 }[/math].
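For a concrete illustration (the numbers here are chosen only for the example and do not come from the figures), suppose an edge [math]\displaystyle{ (u,v) }[/math] has capacity [math]\displaystyle{ c(u,v)=5 }[/math], currently carries flow [math]\displaystyle{ f(u,v)=3 }[/math], and has no reverse edge, so [math]\displaystyle{ c(v,u)=0 }[/math]. By skew symmetry [math]\displaystyle{ f(v,u)=-3 }[/math], and the residual capacities are [math]\displaystyle{ c_f(u,v)=5-3=2 }[/math] and [math]\displaystyle{ c_f(v,u)=c(v,u)-f(v,u)=0-(-3)=3 }[/math], so up to 3 units of the flow already sent from u to v may later be "returned" through the residual edge [math]\displaystyle{ (v,u) }[/math].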
Algorithm Ford–Fulkerson
- Inputs: a network [math]\displaystyle{ G = (V,E) }[/math] with flow capacity c, a source node s, and a sink node t
- Output: a flow f from s to t of maximum value
- [math]\displaystyle{ f(u,v) \leftarrow 0 }[/math] for all edges [math]\displaystyle{ (u,v) }[/math]
- While there is a path p from s to t in [math]\displaystyle{ G_f }[/math], such that [math]\displaystyle{ c_f(u,v) \gt 0 }[/math] for all edges [math]\displaystyle{ (u,v) \in p }[/math]:
- Find [math]\displaystyle{ c_f(p) = \min\{c_f(u,v) : (u,v) \in p\} }[/math]
- For each edge [math]\displaystyle{ (u,v) \in p }[/math]
- [math]\displaystyle{ f(u,v) \leftarrow f(u,v) + c_f(p) }[/math] (Send flow along the path)
- [math]\displaystyle{ f(v,u) \leftarrow f(v,u) - c_f(p) }[/math] (The flow might be "returned" later)
- "←" denotes assignment. For instance, "largest ← item" means that the value of largest changes to the value of item.
- "return" terminates the algorithm and outputs the following value.
The path in step 2 can be found with, for example, a breadth-first search (BFS) or a depth-first search in [math]\displaystyle{ G_f(V,E_f) }[/math]. If the former is used, the algorithm is called Edmonds–Karp.
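The following is a minimal Python sketch of the Ford–Fulkerson method itself, using a depth-first search to find augmenting paths in the residual graph. The adjacency-matrix layout and the function name are assumptions made for this illustration; a complete BFS-based (Edmonds–Karp) implementation is given further below.

```python
def ford_fulkerson(capacity, source, sink):
    """Ford–Fulkerson with DFS path search on the residual graph.

    `capacity` is an n x n matrix of residual capacities (illustrative layout);
    it is modified in place. Returns the value of a maximum flow.
    """
    n = len(capacity)

    def dfs(u, bottleneck, visited):
        # Push flow along one augmenting path; return the amount pushed, or 0.
        if u == sink:
            return bottleneck
        visited[u] = True
        for v in range(n):
            if not visited[v] and capacity[u][v] > 0:
                pushed = dfs(v, min(bottleneck, capacity[u][v]), visited)
                if pushed > 0:
                    capacity[u][v] -= pushed   # use residual capacity
                    capacity[v][u] += pushed   # flow may be "returned" later
                    return pushed
        return 0

    max_flow = 0
    while True:
        pushed = dfs(source, float("inf"), [False] * n)
        if pushed == 0:          # no augmenting path left
            break
        max_flow += pushed
    return max_flow
```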
When no more paths in step 2 can be found, s will not be able to reach t in the residual network. If S is the set of nodes reachable by s in the residual network, then the total capacity in the original network of edges from S to the remainder of V is on the one hand equal to the total flow we found from s to t, and on the other hand serves as an upper bound for all such flows. This proves that the flow we found is maximal. See also Max-flow Min-cut theorem.
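Once the algorithm has terminated, the set S and the corresponding minimum cut can be read off directly from the residual graph. The sketch below assumes the same adjacency-matrix layout as above and that the original capacities were kept in a separate matrix; the function name and that layout are illustrative, not part of the original text.

```python
from collections import deque

def min_cut_edges(original_capacity, residual_capacity, source):
    """Return the edges of a minimum s-t cut, given the final residual graph."""
    n = len(residual_capacity)
    reachable = [False] * n
    reachable[source] = True
    queue = deque([source])
    # BFS over edges that still have residual capacity: this computes the set S.
    while queue:
        u = queue.popleft()
        for v in range(n):
            if not reachable[v] and residual_capacity[u][v] > 0:
                reachable[v] = True
                queue.append(v)
    # Cut edges go from a node in S to a node outside S in the original network.
    return [(u, v) for u in range(n) for v in range(n)
            if reachable[u] and not reachable[v] and original_capacity[u][v] > 0]
```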
If the graph [math]\displaystyle{ G(V,E) }[/math] has multiple sources and sinks, we act as follows: Suppose that [math]\displaystyle{ T=\{t\mid t \text{ is a sink}\} }[/math] and [math]\displaystyle{ S=\{s\mid s \text{ is a source}\} }[/math]. Add a new source [math]\displaystyle{ s^* }[/math] with an edge [math]\displaystyle{ (s^*,s) }[/math] from [math]\displaystyle{ s^* }[/math] to every node [math]\displaystyle{ s\in S }[/math], with capacity [math]\displaystyle{ c(s^*,s)=d_s=\sum_{(s,u)\in E}c(s,u) }[/math]. And add a new sink [math]\displaystyle{ t^* }[/math] with an edge [math]\displaystyle{ (t, t^*) }[/math] from every node [math]\displaystyle{ t\in T }[/math] to [math]\displaystyle{ t^* }[/math], with capacity [math]\displaystyle{ c(t, t^*)=d_t=\sum_{(v,t)\in E}c(v,t) }[/math]. Then apply the Ford–Fulkerson algorithm.
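A possible way to carry out this construction on an adjacency-matrix representation is sketched below; the function name and the convention of appending the new nodes as the last two indices are assumptions made for illustration.

```python
def add_super_source_sink(capacity, sources, sinks):
    """Return a capacity matrix extended with a super-source (index n) and super-sink (index n+1)."""
    n = len(capacity)
    new = [row[:] + [0, 0] for row in capacity] + [[0] * (n + 2) for _ in range(2)]
    s_star, t_star = n, n + 1
    for s in sources:
        # c(s*, s) = total capacity of edges leaving s
        new[s_star][s] = sum(capacity[s])
    for t in sinks:
        # c(t, t*) = total capacity of edges entering t
        new[t][t_star] = sum(capacity[v][t] for v in range(n))
    return new, s_star, t_star
```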
Also, if a node u has capacity constraint [math]\displaystyle{ d_u }[/math], we replace this node with two nodes [math]\displaystyle{ u_{\mathrm{in}},u_{\mathrm{out}} }[/math], and an edge [math]\displaystyle{ (u_{\mathrm{in}},u_{\mathrm{out}}) }[/math], with capacity [math]\displaystyle{ c(u_{\mathrm{in}},u_{\mathrm{out}})=d_u }[/math]. Then apply the Ford–Fulkerson algorithm.
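The node-splitting step admits a similarly short sketch under the same assumed adjacency-matrix layout; here [math]\displaystyle{ u_{\mathrm{in}} }[/math] keeps the original index of u and [math]\displaystyle{ u_{\mathrm{out}} }[/math] is appended as a new index.

```python
def split_node(capacity, u, d_u):
    """Split node u into u_in (original index u) and u_out (new index) joined by an edge of capacity d_u."""
    n = len(capacity)
    u_out = n
    new = [row[:] + [0] for row in capacity] + [[0] * (n + 1)]
    # Outgoing edges of u now leave from u_out; incoming edges still enter u (= u_in).
    for v in range(n):
        new[u_out][v] = capacity[u][v]
        new[u][v] = 0
    new[u][u_out] = d_u
    return new
```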
Complexity
By adding each flow augmenting path to the flow already established in the graph, the maximum flow is reached when no more flow augmenting paths can be found in the graph. However, there is no certainty that this situation will ever be reached, so the best that can be guaranteed is that the answer will be correct if the algorithm terminates. In the case that the algorithm runs forever, the flow might not even converge towards the maximum flow. However, this situation only occurs with irrational flow values.[4] When the capacities are integers, the runtime of Ford–Fulkerson is bounded by [math]\displaystyle{ O(E f) }[/math] (see big O notation), where [math]\displaystyle{ E }[/math] is the number of edges in the graph and [math]\displaystyle{ f }[/math] is the maximum flow in the graph. This is because each augmenting path can be found in [math]\displaystyle{ O(E) }[/math] time and increases the flow by an integer amount of at least [math]\displaystyle{ 1 }[/math], with the upper bound [math]\displaystyle{ f }[/math].
A variation of the Ford–Fulkerson algorithm with guaranteed termination and a runtime independent of the maximum flow value is the Edmonds–Karp algorithm, which runs in [math]\displaystyle{ O(VE^2) }[/math] time.
Integral example
The following example shows the first steps of Ford–Fulkerson in a flow network with 4 nodes, source [math]\displaystyle{ A }[/math] and sink [math]\displaystyle{ D }[/math]. This example shows the worst-case behaviour of the algorithm. In each step, only a flow of [math]\displaystyle{ 1 }[/math] is sent across the network. If breadth-first search were used instead, only two steps would be needed.
Notice how flow is "pushed back" from [math]\displaystyle{ C }[/math] to [math]\displaystyle{ B }[/math] when finding the path [math]\displaystyle{ A,C,B,D }[/math].
Non-terminating example
Consider the flow network shown on the right, with source [math]\displaystyle{ s }[/math], sink [math]\displaystyle{ t }[/math], the capacities of edges [math]\displaystyle{ e_1 }[/math], [math]\displaystyle{ e_2 }[/math] and [math]\displaystyle{ e_3 }[/math] being respectively [math]\displaystyle{ 1 }[/math], [math]\displaystyle{ r=(\sqrt{5}-1)/2 }[/math] and [math]\displaystyle{ 1 }[/math], and the capacity of all other edges some integer [math]\displaystyle{ M \ge 2 }[/math]. The constant [math]\displaystyle{ r }[/math] was chosen so that [math]\displaystyle{ r^2 = 1 - r }[/math]. We use augmenting paths according to the following table, where [math]\displaystyle{ p_1 = \{ s, v_4, v_3, v_2, v_1, t \} }[/math], [math]\displaystyle{ p_2 = \{ s, v_2, v_3, v_4, t \} }[/math] and [math]\displaystyle{ p_3 = \{ s, v_1, v_2, v_3, t \} }[/math].
| Step | Augmenting path | Sent flow | Residual capacity of [math]\displaystyle{ e_1 }[/math] | Residual capacity of [math]\displaystyle{ e_2 }[/math] | Residual capacity of [math]\displaystyle{ e_3 }[/math] |
|---|---|---|---|---|---|
| 0 | | | [math]\displaystyle{ r^0=1 }[/math] | [math]\displaystyle{ r }[/math] | [math]\displaystyle{ 1 }[/math] |
| 1 | [math]\displaystyle{ \{ s, v_2, v_3, t \} }[/math] | [math]\displaystyle{ 1 }[/math] | [math]\displaystyle{ r^0 }[/math] | [math]\displaystyle{ r^1 }[/math] | [math]\displaystyle{ 0 }[/math] |
| 2 | [math]\displaystyle{ p_1 }[/math] | [math]\displaystyle{ r^1 }[/math] | [math]\displaystyle{ r^2 }[/math] | [math]\displaystyle{ 0 }[/math] | [math]\displaystyle{ r^1 }[/math] |
| 3 | [math]\displaystyle{ p_2 }[/math] | [math]\displaystyle{ r^1 }[/math] | [math]\displaystyle{ r^2 }[/math] | [math]\displaystyle{ r^1 }[/math] | [math]\displaystyle{ 0 }[/math] |
| 4 | [math]\displaystyle{ p_1 }[/math] | [math]\displaystyle{ r^2 }[/math] | [math]\displaystyle{ 0 }[/math] | [math]\displaystyle{ r^3 }[/math] | [math]\displaystyle{ r^2 }[/math] |
| 5 | [math]\displaystyle{ p_3 }[/math] | [math]\displaystyle{ r^2 }[/math] | [math]\displaystyle{ r^2 }[/math] | [math]\displaystyle{ r^3 }[/math] | [math]\displaystyle{ 0 }[/math] |
Note that after step 1 as well as after step 5, the residual capacities of edges [math]\displaystyle{ e_1 }[/math], [math]\displaystyle{ e_2 }[/math] and [math]\displaystyle{ e_3 }[/math] are in the form [math]\displaystyle{ r^n }[/math], [math]\displaystyle{ r^{n+1} }[/math] and [math]\displaystyle{ 0 }[/math], respectively, for some [math]\displaystyle{ n \in \mathbb{N} }[/math]. This means that we can use augmenting paths [math]\displaystyle{ p_1 }[/math], [math]\displaystyle{ p_2 }[/math], [math]\displaystyle{ p_1 }[/math] and [math]\displaystyle{ p_3 }[/math] infinitely many times and residual capacities of these edges will always be in the same form. Total flow in the network after step 5 is [math]\displaystyle{ 1 + 2(r^1 + r^2) }[/math]. If we continue to use augmenting paths as above, the total flow converges to [math]\displaystyle{ \textstyle 1 + 2\sum_{i=1}^\infty r^i = 3 + 2r }[/math]. However, note that there is a flow of value [math]\displaystyle{ 2M + 1 }[/math], by sending [math]\displaystyle{ M }[/math] units of flow along [math]\displaystyle{ sv_1t }[/math], 1 unit of flow along [math]\displaystyle{ sv_2v_3t }[/math], and [math]\displaystyle{ M }[/math] units of flow along [math]\displaystyle{ sv_4t }[/math]. Therefore, the algorithm never terminates and the flow does not even converge to the maximum flow.[5]
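The value of this limit can be verified from the defining relation [math]\displaystyle{ r^2 = 1 - r }[/math] (a short check, spelled out here for convenience). Since [math]\displaystyle{ r(1+r) = r + r^2 = 1 }[/math], we have [math]\displaystyle{ 1/r = 1 + r }[/math], and therefore

[math]\displaystyle{ \sum_{i=1}^\infty r^i = \frac{r}{1-r} = \frac{r}{r^2} = \frac{1}{r} = 1 + r, \qquad\text{so}\qquad 1 + 2\sum_{i=1}^\infty r^i = 3 + 2r \approx 4.24 \lt 2M + 1 . }[/math]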
Another non-terminating example based on the Euclidean algorithm is given by Backman and Huynh (2018), where they also show that the worst-case running time of the Ford–Fulkerson algorithm on a network [math]\displaystyle{ G(V,E) }[/math] in ordinal numbers is [math]\displaystyle{ \omega^{\Theta(|E|)} }[/math].
Python implementation of the Edmonds–Karp algorithm
```python
import collections


class Graph:
    """
    This class represents a directed graph using
    an adjacency matrix representation.
    """

    def __init__(self, graph):
        self.graph = graph  # residual graph
        self.row = len(graph)

    def bfs(self, s, t, parent):
        """
        Returns true if there is a path from source 's' to sink 't' in the
        residual graph. Also fills parent[] to store the path.
        """
        # Mark all the vertices as not visited
        visited = [False] * self.row
        # Create a queue for BFS
        queue = collections.deque()
        # Mark the source node as visited and enqueue it
        queue.append(s)
        visited[s] = True
        # Standard BFS loop
        while queue:
            u = queue.popleft()
            # Get all adjacent vertices of the dequeued vertex u.
            # If an adjacent vertex has not been visited and still has
            # residual capacity, mark it visited and enqueue it.
            for ind, val in enumerate(self.graph[u]):
                if not visited[ind] and val > 0:
                    queue.append(ind)
                    visited[ind] = True
                    parent[ind] = u
        # If we reached the sink in BFS starting from the source, return
        # true, else false
        return visited[t]

    # Returns the maximum flow from s to t in the given graph
    def edmonds_karp(self, source, sink):
        # This array is filled by BFS to store the path
        parent = [-1] * self.row
        max_flow = 0  # There is no flow initially
        # Augment the flow while there is a path from source to sink
        while self.bfs(source, sink, parent):
            # Find the minimum residual capacity of the edges along the
            # path filled by BFS, i.e. the maximum flow through the path found.
            path_flow = float("inf")
            s = sink
            while s != source:
                path_flow = min(path_flow, self.graph[parent[s]][s])
                s = parent[s]
            # Add path flow to overall flow
            max_flow += path_flow
            # Update residual capacities of the edges and reverse edges
            # along the path
            v = sink
            while v != source:
                u = parent[v]
                self.graph[u][v] -= path_flow
                self.graph[v][u] += path_flow
                v = parent[v]
        return max_flow
```
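As a usage sketch (the capacity matrix below is a small illustrative instance, not one of the networks from the examples above), the class can be exercised as follows:

```python
# Capacity matrix of a small network: node 0 is the source, node 5 the sink.
graph = [
    [0, 16, 13, 0, 0, 0],
    [0, 0, 10, 12, 0, 0],
    [0, 4, 0, 0, 14, 0],
    [0, 0, 9, 0, 0, 20],
    [0, 0, 0, 7, 0, 4],
    [0, 0, 0, 0, 0, 0],
]

g = Graph(graph)
print(g.edmonds_karp(0, 5))  # prints 23, the maximum flow of this instance
```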
Notes
1. Laung-Terng Wang, Yao-Wen Chang, Kwang-Ting (Tim) Cheng (2009). Electronic Design Automation: Synthesis, Verification, and Test. Morgan Kaufmann. pp. 204. ISBN 978-0080922003. https://archive.org/details/electronicdesign00wang.
2. Thomas H. Cormen; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2009). Introduction to Algorithms. MIT Press. pp. 714. ISBN 978-0262258104. https://archive.org/details/introductiontoal00corm_805.
3. Ford, L. R.; Fulkerson, D. R. (1956). "Maximal flow through a network". Canadian Journal of Mathematics 8: 399–404. doi:10.4153/CJM-1956-045-5. http://www.cs.yale.edu/homes/lans/readings/routing/ford-max_flow-1956.pdf.
4. "Ford–Fulkerson Max Flow Labeling Algorithm". 1998. CiteSeerX 10.1.1.295.9049.
5. Zwick, Uri (21 August 1995). "The smallest networks on which the Ford–Fulkerson maximum flow procedure may fail to terminate". Theoretical Computer Science 148 (1): 165–170. doi:10.1016/0304-3975(95)00022-O.
References
- Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 26.2: The Ford–Fulkerson method". Introduction to Algorithms (Second ed.). MIT Press and McGraw–Hill. pp. 651–664. ISBN 0-262-03293-7.
- George T. Heineman; Gary Pollice; Stanley Selkow (2008). "Chapter 8: Network Flow Algorithms". Algorithms in a Nutshell. Oreilly Media. pp. 226–250. ISBN 978-0-596-51624-6.
- Jon Kleinberg; Éva Tardos (2006). "Chapter 7: Extensions to the Maximum-Flow Problem". Algorithm Design. Pearson Education. pp. 378–384. ISBN 0-321-29535-8. https://archive.org/details/algorithmdesign0000klei/page/378.
- Samuel Gutekunst (2019). ENGRI 1101. Cornell University.
- Backman, Spencer; Huynh, Tony (2018). "Transfinite Ford–Fulkerson on a finite network". Computability 7 (4): 341–347. doi:10.3233/COM-180082.
External links
- A tutorial explaining the Ford–Fulkerson method to solve the max-flow problem
- Another Java animation
- Java Web Start application
Original source: https://en.wikipedia.org/wiki/Ford–Fulkerson algorithm.