# Difference between revisions of "Edmonds–Karp algorithm"


## Revision as of 17:10, 8 January 2014

In computer science and graph theory, the **Edmonds–Karp algorithm** is an implementation of the Ford–Fulkerson method for computing the maximum flow in a flow network in *O*(*V* *E*^{2}) time. It is asymptotically slower than the relabel-to-front algorithm, which runs in *O*(*V*^{3}) time, but it is often faster in practice for sparse graphs. The algorithm was first published by Yefim (Chaim) Dinic in 1970^{[1]} and independently published by Jack Edmonds and Richard Karp in 1972.^{[2]} Dinic's algorithm includes additional techniques that reduce the running time to *O*(*V*^{2}*E*).


## Algorithm

The algorithm is identical to the Ford–Fulkerson algorithm, except that the search order when finding the augmenting path is defined: the path used must be a shortest path that has available capacity. Such a path can be found by a breadth-first search, treating every edge as having unit length. The running time of *O*(*V* *E*^{2}) is obtained by showing that each augmenting path can be found in *O*(*E*) time; that every augmentation saturates at least one of the *E* edges; that whenever an edge becomes saturated, its distance from the source along the augmenting path is longer than the last time it was saturated; and that this distance is at most *V*. Another property of this algorithm is that the length of the shortest augmenting path increases monotonically. There is an accessible proof in *Introduction to Algorithms*.^{[3]}

## Pseudocode

*For a higher-level description, see Ford–Fulkerson algorithm.*

```
algorithm EdmondsKarp
    input:
        C[1..n, 1..n] (Capacity matrix)
        E[1..n, 1..?] (Neighbour lists)
        s             (Source)
        t             (Sink)
    output:
        f             (Value of maximum flow)
        F             (A matrix giving a legal flow with the maximum value)
    f := 0 (Initial flow is zero)
    F := array(1..n, 1..n) (Residual capacity from u to v is C[u,v] - F[u,v])
    forever
        m, P := BreadthFirstSearch(C, E, s, t, F)
        if m = 0
            break
        f := f + m
        (Backtrack search, and write flow)
        v := t
        while v ≠ s
            u := P[v]
            F[u,v] := F[u,v] + m
            F[v,u] := F[v,u] - m
            v := u
    return (f, F)

algorithm BreadthFirstSearch
    input:
        C, E, s, t, F
    output:
        M[t]          (Capacity of path found)
        P             (Parent table)
    P := array(1..n)
    for u in 1..n
        P[u] := -1
    P[s] := -2 (make sure source is not rediscovered)
    M := array(1..n) (Capacity of found path to node)
    M[s] := ∞
    Q := queue()
    Q.push(s)
    while Q.size() > 0
        u := Q.pop()
        for v in E[u]
            (If there is available capacity, and v is not seen before in search)
            if C[u,v] - F[u,v] > 0 and P[v] = -1
                P[v] := u
                M[v] := min(M[u], C[u,v] - F[u,v])
                if v ≠ t
                    Q.push(v)
                else
                    return M[t], P
    return 0, P
```
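As a sketch of how the pseudocode might look in a real language, here is a Python version that uses an edge list and a residual-capacity dictionary instead of the matrices C and F; the function name and data layout are illustrative choices, not part of the original. The sample edge list is reconstructed from the example's residual-capacity computations, so edges that cannot carry flow from A to G may be omitted.

```python
from collections import deque

def edmonds_karp(edges, source, sink):
    """Maximum flow via shortest (breadth-first) augmenting paths.

    edges: iterable of (u, v, capacity) triples describing the network.
    Returns the value of the maximum flow from source to sink.
    """
    # Residual capacities; every edge gets a reverse entry starting at 0.
    residual = {}
    for u, v, c in edges:
        residual.setdefault(u, {})[v] = residual.get(u, {}).get(v, 0) + c
        residual.setdefault(v, {}).setdefault(u, 0)

    max_flow = 0
    while True:
        # Breadth-first search for a shortest path with spare capacity.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, spare in residual[u].items():
                if spare > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return max_flow  # no augmenting path remains

        # Collect the path's edges and find its bottleneck capacity m.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        m = min(residual[u][v] for u, v in path)

        # Push m units of flow along the path.
        for u, v in path:
            residual[u][v] -= m
            residual[v][u] += m
        max_flow += m

# The seven-node example network, source A, sink G:
edges = [('A', 'B', 3), ('A', 'D', 3), ('B', 'C', 4), ('C', 'D', 1),
         ('C', 'E', 2), ('D', 'E', 2), ('D', 'F', 6), ('E', 'G', 1),
         ('F', 'G', 9)]
print(edmonds_karp(edges, 'A', 'G'))  # → 5
```

Because BFS explores nodes in order of distance from the source, the first time it reaches the sink it has found a shortest augmenting path, which is the only behaviour the method adds over plain Ford–Fulkerson.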

## Example

Given a network of seven nodes, source A, sink G, and capacities as shown below:

In the pairs *f*/*c* written on the edges, *f* is the current flow and *c* is the capacity. The residual capacity from *u* to *v* is *c*_{f}(*u*,*v*) = *c*(*u*,*v*) − *f*(*u*,*v*), the total capacity minus the flow that is already used. If the net flow from *u* to *v* is negative, it *contributes* to the residual capacity.
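As a minimal numeric illustration of this rule (the helper function below is purely illustrative):

```python
def residual_capacity(capacity, net_flow):
    # c_f(u, v) = c(u, v) - f(u, v)
    return capacity - net_flow

# 1 unit of flow on an edge of capacity 3 leaves 2 units:
print(residual_capacity(3, 1))   # → 2

# A negative net flow (flow already pushed the other way) contributes
# extra residual capacity, e.g. a 0 - (-1) term:
print(residual_capacity(0, -1))  # → 1
```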

| Capacity | Path | Resulting network |
|---|---|---|
| min(*c*_{f}(A,D), *c*_{f}(D,E), *c*_{f}(E,G)) = min(3−0, 2−0, 1−0) = min(3, 2, 1) = 1 | A,D,E,G | Edmonds-Karp flow example 1.svg |
| min(*c*_{f}(A,D), *c*_{f}(D,F), *c*_{f}(F,G)) = min(3−1, 6−0, 9−0) = min(2, 6, 9) = 2 | A,D,F,G | Edmonds-Karp flow example 2.svg |
| min(*c*_{f}(A,B), *c*_{f}(B,C), *c*_{f}(C,D), *c*_{f}(D,F), *c*_{f}(F,G)) = min(3−0, 4−0, 1−0, 6−2, 9−2) = min(3, 4, 1, 4, 7) = 1 | A,B,C,D,F,G | Edmonds-Karp flow example 3.svg |
| min(*c*_{f}(A,B), *c*_{f}(B,C), *c*_{f}(C,E), *c*_{f}(E,D), *c*_{f}(D,F), *c*_{f}(F,G)) = min(3−1, 4−1, 2−0, 0−(−1), 6−3, 9−3) = min(2, 3, 2, 1, 3, 6) = 1 | A,B,C,E,D,F,G | |

Notice how the length of the augmenting path found by the algorithm (in red) never decreases. The paths found are the shortest possible. The flow found is equal to the capacity across the minimum cut in the graph separating the source and the sink. There is only one minimal cut in this graph, partitioning the nodes into the sets {A, B, C, E} and {D, F, G}, with the capacity *c*(A,D) + *c*(C,D) + *c*(E,G) = 3 + 1 + 1 = 5.
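As a quick arithmetic check, the sketch below tallies the four augmentations from the table against the capacity crossing the cut separating {A, B, C, E} from {D, F, G}; the edge list is reconstructed from the residual computations in the table, since the figures are not reproduced here.

```python
# Capacities of the example's edges, as they appear in the table's
# residual computations (reconstructed; edges not on any augmenting
# path may be missing, but they would not cross the cut anyway).
capacities = {
    ('A', 'B'): 3, ('A', 'D'): 3, ('B', 'C'): 4,
    ('C', 'D'): 1, ('C', 'E'): 2, ('D', 'E'): 2,
    ('D', 'F'): 6, ('E', 'G'): 1, ('F', 'G'): 9,
}

# Augmenting amounts found by the algorithm in the table above.
augmentations = [1, 2, 1, 1]
total_flow = sum(augmentations)

# Forward edges crossing the cut {A, B, C, E} | {D, F, G}:
# A->D, C->D and E->G.
cut_edges = [(u, v) for (u, v) in capacities
             if u in {'A', 'B', 'C', 'E'} and v in {'D', 'F', 'G'}]
cut_capacity = sum(capacities[e] for e in cut_edges)

print(total_flow, cut_capacity)  # → 5 5, flow equals min-cut capacity
```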

## Notes

1. Dinic, E. A. (1970). "Algorithm for solution of a problem of maximum flow in a network with power estimation". *Soviet Math. Doklady*. **11**: 1277–1280.
2. Edmonds, Jack; Karp, Richard M. (1972). "Theoretical improvements in algorithmic efficiency for network flow problems". *Journal of the ACM*. **19** (2): 248–264. doi:10.1145/321694.321699.
3. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). "26.2". *Introduction to Algorithms* (3rd ed.). MIT Press. pp. 727–730. ISBN 978-0-262-03384-8.

## References

- Wilf, Herbert S. *Algorithms and Complexity* (see pages 63–69). http://www.cis.upenn.edu/~wilf/AlgComp3.html