
CMSC 451 Dave Mount

CMSC 451: Lecture 15

Network Flows: The Ford-Fulkerson Algorithm

Thursday, Nov 2, 2017

Reading: Sect. 7.2–7.3 in KT.

Network Flow: We continue our discussion of the network flow problem. Last time, we introduced basic concepts, such as s-t networks and flows. Today, we discuss the Ford-Fulkerson max flow algorithm, cuts, and the relationship between flows and cuts.

Recall that a flow network is a directed graph G = (V, E) in which each edge (u, v) ∈ E has a nonnegative capacity c(u, v) ≥ 0, with a designated source vertex s and sink vertex t. We assume that there are no edges entering s or exiting t. A flow is a function f that maps each edge to a nonnegative real number that does not exceed the edge's capacity, and such that the total flow into any vertex other than s and t equals the total flow out of this vertex. The total value of a flow, denoted |f|, is equal to the sum of flows coming out of s (which, by flow conservation, is equal to the total flow entering t). The objective of the max flow problem is to compute a flow of maximum value. Today we present an algorithm for this problem.

Why Greedy Fails: Before considering our algorithm, we start by considering why a simple greedy scheme for computing the maximum flow does not work. The idea behind the greedy algorithm is motivated by the path-based notion of flow. (Recall this from the previous lecture.) Initially the flow on each edge is set to zero. Next, find any path P from s to t such that the edge capacities on this path are all strictly positive. Let c_min be the minimum capacity of any edge on this path. This quantity is called the bottleneck capacity of the path. Push c_min units through this path: for each edge (u, v) ∈ P, set f(u, v) ← f(u, v) + c_min, and decrease the capacity of (u, v) by c_min. Repeat this until no s-t path (of positive-capacity edges) remains in the network.

While this may seem to be a very reasonable algorithm, and will generally produce a valid flow, it may fail to compute the maximum flow. To see why, consider the network shown in Fig. 1(a). Suppose we push 5 units along the topmost path, 8 units along the middle path, and 5 units along the bottommost path. We have a flow of value 18. After adjusting the capacities (see Fig. 1(b)) we see that there is no path of positive capacity from s to t. Thus, greedy gets stuck.

Fig. 1: The greedy flow algorithm can get stuck before finding the maximum flow.
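This failure mode does not depend on the particular network of Fig. 1. Below is a minimal Python sketch of the greedy path-pushing scheme just described; the tiny "diamond" network and all names here are our own illustration (not from the notes). A deliberately poor first path choice leaves greedy stuck at flow value 1, even though the maximum flow is 2.

```python
# Greedy path-pushing on a small network where a bad first path gets it stuck.

def push_path(cap, path):
    """Push the bottleneck capacity along `path`, decreasing capacities."""
    bottleneck = min(cap[(u, v)] for u, v in zip(path, path[1:]))
    for u, v in zip(path, path[1:]):
        cap[(u, v)] -= bottleneck
    return bottleneck

def find_positive_path(cap, s, t):
    """DFS for any s-t path whose edges all have strictly positive capacity."""
    stack, seen = [(s, [s])], {s}
    while stack:
        u, path = stack.pop()
        if u == t:
            return path
        for (a, b), c in cap.items():
            if a == u and c > 0 and b not in seen:
                seen.add(b)
                stack.append((b, path + [b]))
    return None

# Diamond network: the max flow is 2 (via s->u->t and s->v->t).
cap = {('s','u'): 1, ('u','t'): 1, ('s','v'): 1, ('v','t'): 1, ('u','v'): 1}
value = push_path(cap, ['s', 'u', 'v', 't'])   # a poor first choice
while (p := find_positive_path(cap, 's', 't')) is not None:
    value += push_path(cap, p)
print(value)   # greedy is stuck at 1; the true max flow is 2
```

After the first push, the edges s→u, u→v, and v→t are saturated, so no positive-capacity s-t path remains, exactly as in Fig. 1(b).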

Lecture 15 1 Fall 2017


Residual Network: The key insight to overcoming the problem with the greedy algorithm is to observe that, in addition to increasing flows on edges, it is possible to decrease flows on edges that already carry flow (as long as the flow never becomes negative). It may seem counterintuitive that this would help, but we shall see that it is exactly what is needed to obtain an optimal solution. To make this idea clearer, we first need to define the notions of the residual network and augmenting paths. Given a flow network G and a flow f, define the residual network, denoted G_f, to be a network having the same vertex set and same source and sink, and whose edges are defined as follows:

Forward edges: For each edge (u, v) for which f(u, v) < c(u, v), create an edge (u, v) in G_f and assign it the capacity c_f(u, v) = c(u, v) − f(u, v). Intuitively, this edge signifies that we can increase the flow along this edge by up to c_f(u, v) units without violating the original capacity constraint.

Backward edges: For each edge (u, v) for which f(u, v) > 0, create an edge (v, u) in G_f and assign it a capacity of c_f(v, u) = f(u, v). Intuitively, this edge signifies that we can decrease the existing flow by as much as c_f(v, u) units. Conceptually, by pushing positive flow along the reverse edge (v, u) we are decreasing the flow along the original edge (u, v).

Observe that every edge of the residual network has strictly positive capacity. (This will be important later on.) Note that each edge in the original network may result in the generation of up to two new edges in the residual network. Thus, the residual network is of the same asymptotic size as the original network.
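The forward/backward construction can be written directly from the definition. The sketch below is our own illustration (not code from the notes), storing edges in a dictionary keyed by (u, v) pairs:

```python
# Build the residual network G_f from capacities and a flow.

def residual_network(cap, flow):
    """cap, flow: dicts mapping edge (u, v) -> number. Returns G_f as a dict."""
    res = {}
    for (u, v), c in cap.items():
        f = flow.get((u, v), 0)
        if f < c:                     # forward edge: room to increase flow
            res[(u, v)] = c - f
        if f > 0:                     # backward edge: room to cancel flow
            res[(v, u)] = res.get((v, u), 0) + f
    return res

cap  = {('s', 'a'): 10, ('a', 't'): 8}
flow = {('s', 'a'): 8,  ('a', 't'): 8}
print(residual_network(cap, flow))
# (a,t) is saturated, so it contributes only a backward edge (t,a) of capacity 8
```

Note that, as observed above, every edge the function emits has strictly positive capacity: saturated edges contribute no forward edge, and zero-flow edges contribute no backward edge.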

An example of a flow and the associated residual network are shown in Fig. 2(a) and (b), respectively. For example, the edge (b, c) of capacity 2 in the residual network signifies that we can add up to 2 more units of flow to the edge (b, c) of G, and the edge (c, b) of capacity 8 signifies that we can cancel up to 8 units of flow from the edge (b, c).

Fig. 2: A flow f and the residual network G_f.

The capacity of each edge in the residual network is called its residual capacity. The key observation about the residual network is that if we can push flow through the residual network, then we can push this additional amount of flow through the original network. This is formalized in the following lemma. Given two flows f and f', we define their sum, f + f', in the natural way, by summing the flows along each edge. If f'' = f + f', then f''(u, v) = f(u, v) + f'(u, v). Clearly, the value of f + f' is equal to |f| + |f'|.
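The edge-wise sum just defined is easy to check mechanically. The tiny sketch below is our own example (two path flows in G, with the value taken as the total flow leaving s, since the network has no edges entering s):

```python
# Flows add edge-wise, and the value of the sum is the sum of the values.

def add_flows(f1, f2):
    """Edge-wise sum of two flows given as {(u, v): amount} dicts."""
    return {e: f1.get(e, 0) + f2.get(e, 0) for e in set(f1) | set(f2)}

def value(flow, s):
    """|f| = total flow leaving the source (no edges enter s)."""
    return sum(f for (u, v), f in flow.items() if u == s)

f  = {('s', 'a'): 5, ('a', 't'): 5}    # 5 units along s->a->t
fp = {('s', 'b'): 3, ('b', 't'): 3}    # 3 units along s->b->t
f2 = add_flows(f, fp)
print(value(f2, 's'))   # 8 == value(f, 's') + value(fp, 's')
```

(In the lemma below, f' lives in G_f and may use backward edges, which correspond to subtracting flow in G; this simple example only sums two flows in G itself.)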

Lemma: Let f be a flow in G and let f' be a flow in G_f. Then f + f' is a flow in G.

Proof: (Sketch) To show that the resulting flow is valid, we need to show that it satisfies both the capacity constraints and flow conservation. It is easy to see that the capacities of G_f were exactly designed so that any flow along an edge of G_f, when added to the flow f of G, will satisfy G's capacity constraints. Also, since both flows satisfy flow conservation, it is easy to see that their sum will as well. (More generally, any linear combination of f and f' will satisfy flow conservation.)

This lemma suggests that all we need to do to increase the flow is to find any flow in the residual network. This leads to the notion of an augmenting path.

Augmenting Paths and Ford-Fulkerson: Consider a network G, let f be a flow in G, and let G_f be the associated residual network. An augmenting path is a simple path P from s to t in G_f. The residual capacity (also called the bottleneck capacity) of the path is the minimum capacity of any edge on the path. It is denoted c_f(P). (Recall that all the edges of G_f have strictly positive capacity, so c_f(P) > 0.) By pushing c_f(P) units of flow along each edge of the path, we obtain a valid flow in G_f, and by the previous lemma, adding this to f results in a valid flow in G of strictly higher value.

For example, in Fig. 3(a) we show an augmenting path of capacity 3 in the residual network for the flow given earlier in Fig. 2. In (b), we show the result of adding this flow to every edge of the augmenting path. Observe that because of the backwards edge (c, b), we have decreased the flow along edge (b, c) by 3, from 8 to 5.

Fig. 3: (a) An augmenting path of capacity 3; (b) the flow after augmentation.
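One augmentation step, as just described, can be sketched in a few lines. The code below is our own illustration (BFS is one valid way to pick the path; the small residual network is made up for the example). Backward edges need no special handling: pushing along (v, u) simply lowers its residual capacity and raises that of (u, v).

```python
# One augmentation step: find an s-t path in G_f by BFS, push the bottleneck.

from collections import deque

def bfs_augmenting_path(res, s, t):
    """res: residual capacities {(u, v): c > 0}. Returns an s-t path or None."""
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:                       # reconstruct the path back to s
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for (a, b), c in res.items():
            if a == u and c > 0 and b not in parent:
                parent[b] = u
                queue.append(b)
    return None

def augment(res, path):
    """Push the bottleneck capacity along `path`, updating G_f in place."""
    bottleneck = min(res[(u, v)] for u, v in zip(path, path[1:]))
    for u, v in zip(path, path[1:]):
        res[(u, v)] -= bottleneck                       # use forward capacity
        res[(v, u)] = res.get((v, u), 0) + bottleneck   # open the reverse edge
    return bottleneck

res = {('s', 'a'): 2, ('a', 's'): 8, ('a', 't'): 3}
p = bfs_augmenting_path(res, 's', 't')
b = augment(res, p)
print(p, b)   # ['s', 'a', 't'] with bottleneck 2
```

After the push, the saturated edge (s, a) disappears from the positive-capacity edges, and the reverse edges record how much flow could later be cancelled.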

How is this different from the greedy algorithm? The greedy algorithm only increases flow on edges. Since an augmenting path may increase flow on a backwards edge, it may actually decrease the flow on some edge of the original network. This observation naturally suggests an algorithm for computing flows of ever larger value.

Start with a flow of value 0, and then repeatedly find an augmenting path. Repeat this until no such path exists. This, in a nutshell, is the simplest and best known algorithm for computing flows, called the Ford-Fulkerson method. (We do not call it an "algorithm," since the method of selecting the augmenting path is not specified. We will discuss various strategies in future lectures.) It is summarized in the code fragment below.

Ford-Fulkerson Network Flow

    ford-fulkerson-flow(G = (V, E, s, t)) {
        f = 0                                           // all edges carry zero flow
        while (true) {
            G' = the residual network of G for f
            if (G' has no s-t augmenting path) break    // no augmentations left
            P = any augmenting path of G'               // augmenting path
            c = minimum residual capacity of an edge on P   // augmentation amount
            augment f by adding c to the flow on every edge of P
        }
        return f
    }

There are three issues to consider before declaring this a reasonable algorithm.

How efficiently can we perform augmentation?

How many augmentations might be required until converging?
If no more augmentations can be performed, have we found the max flow?

Let us consider first the question of how to perform augmentation. First, given G and f, we need to compute the residual network, G_f. This is easy to do in O(n + m) time, where n = |V| and m = |E|. We assume that G_f contains only edges of strictly positive capacity. Next, we need to determine whether there exists an augmenting path from s to t in G_f. We can do this by performing either a DFS or BFS in the residual network starting at s and terminating as soon as (if ever) t is reached. Let P be the resulting path. Clearly, this can be done in O(n + m) time as well. Finally, we compute the minimum capacity edge along P, and increase the flow f by this amount for every edge of P. Two questions remain: What is the best way to select the augmenting path, and is this correct in the sense of converging to the maximum flow?

Next, we consider the issue of correctness. Before doing this, we will need to introduce the concept of a cut.

Cuts: In order to show that Ford-Fulkerson leads to the maximum flow, we need to formalize the notion of a "bottleneck" in the network. Intuitively, the flow cannot be increased forever, because there is some subset of edges whose capacities eventually become saturated with flow. Every path from s to t must cross one of these saturated edges, and so the sum of capacities of these edges imposes an upper bound on the size of the maximum flow. Thus, these edges form a bottleneck. We want to make this concept mathematically formal. Since such a set of edges lies on every path from s to t, their removal defines a partition separating the vertices that s can reach from the vertices that s cannot reach. This suggests the following concept.

Given a network G, define a cut (also called an s-t cut) to be a partition of the vertex set into two disjoint subsets X ⊆ V and Y = V \ X, where s ∈ X and t ∈ Y. We define the net flow from X to Y to be the sum of flows from X to Y minus the sum of flows from Y to X, that is,

    f(X, Y) = Σ_{x∈X} Σ_{y∈Y} f(x, y) − Σ_{y∈Y} Σ_{x∈X} f(y, x).

Observe that f(X, Y) = −f(Y, X).

For example, Fig. 4 shows a flow of value 17. It also shows a cut (X, Y) = ({s, a}, {b, c, d, t}), where f(X, Y) = 17.

Fig. 4: Flow of value 17 across the cut ({s, a}, {b, c, d, t}); here f(X, Y) = 5 + 0 − 1 + 8 + 5 = 17.

Lemma: Let (X, Y) be any s-t cut in a network. Given any flow f, the value of f is equal to the net flow across the cut, that is, f(X, Y) = |f|.

Proof: Recall that there are no edges leading into s, and so we have |f| = f_out(s) = f_out(s) − f_in(s). Since all the other nodes of X must satisfy flow conservation, it follows that

    |f| = Σ_{x∈X} (f_out(x) − f_in(x)).

Now, observe that every edge (u, v) where both u and v are in X contributes one positive term and one negative term of value f(u, v) to the above sum, and so all of these cancel out. The only terms that remain are the edges that either go from X to Y (which contribute positively) or from Y to X (which contribute negatively). Thus, it follows that the value of the sum is exactly f(X, Y), and therefore |f| = f(X, Y).

Define the capacity of the cut (X, Y) to be the sum of the capacities of the edges leading from X to Y, that is,

    c(X, Y) = Σ_{x∈X} Σ_{y∈Y} c(x, y).
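Both definitions are direct to compute. The sketch below is our own small example (not the network of Fig. 4): the net flow counts Y-to-X flow negatively, while the cut capacity counts only X-to-Y edges.

```python
# Net flow across a cut vs. cut capacity, per the two definitions above.

def net_flow(flow, X, Y):
    """Sum of X->Y flow minus sum of Y->X flow."""
    fwd = sum(f for (u, v), f in flow.items() if u in X and v in Y)
    bwd = sum(f for (u, v), f in flow.items() if u in Y and v in X)
    return fwd - bwd

def cut_capacity(cap, X, Y):
    """Sum of capacities of edges leading from X to Y only."""
    return sum(c for (u, v), c in cap.items() if u in X and v in Y)

# s -> a -> b -> t, plus a backward edge b -> a crossing the cut the wrong way.
cap  = {('s','a'): 10, ('a','b'): 10, ('b','a'): 4, ('b','t'): 10}
flow = {('s','a'): 7,  ('a','b'): 8,  ('b','a'): 1, ('b','t'): 7}
X, Y = {'s', 'a'}, {'b', 't'}
print(net_flow(flow, X, Y))      # 8 - 1 = 7, which equals |f| = 7
print(cut_capacity(cap, X, Y))   # 10: the backward edge (b,a) is not counted
```

Here the net flow across the cut equals the flow's value, 7, exactly as the lemma above asserts, while the capacity of the cut is an upper bound on it.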

There is a noteworthy asymmetry between the net flow of a cut and the capacity of a cut. In the net flow, we consider both the flows from X to Y and (negated) from Y to X. In contrast, in the definition of cut capacity, we only consider capacities from X to Y. To understand why this asymmetry exists, suppose that there are two edges (x, y) and (y, x), where x ∈ X and y ∈ Y, and that these edges have the same capacities and both carry the same flow. The flow from y to x effectively cancels out the flow from x to y (as if you have an "eddy" in the middle of a river, where the flow cycles around). This cycle does not contribute to the total flow value, and so it is important to consider both directions in the net flow. On the other hand, the capacity of the edge from y to x does not contribute to nor detract from the maximum amount of flow we can push from the X-side to the Y-side of the cut. Therefore, we do not need to consider it in computing the cut's total capacity. It is easy to see that it is not possible to push more flow through a cut than its capacity.

Combining this with the above lemma we have:

Lemma: Given any s-t cut (X, Y) and any flow f, we have |f| ≤ c(X, Y).

The optimality of the Ford-Fulkerson method is based on the following famous theorem, called the Max-Flow/Min-Cut Theorem. Basically, it states that in any flow network the minimum capacity cut acts like a bottleneck to limit the maximum amount of flow. The Ford-Fulkerson method terminates when it finds this bottleneck, and hence on termination, it finds both the minimum cut and the maximum flow.

Max-Flow/Min-Cut Theorem: The following three conditions are equivalent:

(i) f is a maximum flow in G,
(ii) the residual network G_f contains no augmenting paths,
(iii) |f| = c(X, Y) for some cut (X, Y) of G.

Proof:

(i) ⇒ (ii): (By contradiction.) If f were a max flow and there were an augmenting path in G_f, then by pushing flow along this path we would have a larger flow, a contradiction.

(ii) ⇒ (iii): If there are no augmenting paths, then s and t are not connected in the residual network. Let X be those vertices reachable from s in the residual network, and let Y be the rest. Clearly, (X, Y) forms a cut. Each edge crossing the cut from X to Y must be saturated (or else we could reach one more vertex on the Y-side by adding flow along this edge), and each edge crossing the cut from Y to X must be carrying zero flow (or else we could reach one more vertex on the Y-side by reducing flow along this edge). It follows that the flow across the cut equals the capacity of the cut, thus |f| = c(X, Y).

(iii) ⇒ (i): This follows directly from the previous lemma. No flow value can exceed any cut capacity, and so if any flow's value matches any cut's capacity, this flow must be maximum.

We have established that, on termination, Ford-Fulkerson generates the maximum flow. But is it guaranteed to terminate and, if so, how long does it take? We will consider this question next time.
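To tie the pieces together, here is a compact runnable sketch (our own code, not from the notes) of the Ford-Fulkerson method using BFS to find augmenting paths. It also extracts a minimum cut exactly as in the (ii) ⇒ (iii) step: on termination, X is taken to be the set of vertices reachable from s in the final residual network.

```python
# Ford-Fulkerson with BFS path selection, plus min-cut extraction on termination.

from collections import defaultdict, deque

def max_flow_min_cut(cap, s, t):
    """cap: {(u, v): capacity}. Returns (max flow value, min cut capacity)."""
    res = defaultdict(int)              # residual capacities c_f
    for (u, v), c in cap.items():
        res[(u, v)] += c
    nodes = {u for e in cap for u in e}

    def bfs():                          # BFS tree from s in G_f
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in nodes:
                if res[(u, v)] > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:             # no augmenting path remains
            return None, parent
        path, u = [], t                 # reconstruct s-t path
        while u is not None:
            path.append(u)
            u = parent[u]
        return path[::-1], parent

    value = 0
    while True:
        path, parent = bfs()
        if path is None:
            X = set(parent)             # vertices reachable from s in G_f
            mincut = sum(c for (u, v), c in cap.items()
                         if u in X and v not in X)
            return value, mincut
        bottleneck = min(res[(u, v)] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            res[(u, v)] -= bottleneck   # use up forward residual capacity
            res[(v, u)] += bottleneck   # and open up the backward edge
        value += bottleneck

# The diamond network that trapped greedy: Ford-Fulkerson reaches the max flow.
cap = {('s','u'): 1, ('s','v'): 1, ('u','v'): 1, ('u','t'): 1, ('v','t'): 1}
print(max_flow_min_cut(cap, 's', 't'))   # (2, 2): max flow equals min cut
```

With integral capacities each augmentation raises the value by at least 1, so the loop terminates, and the returned pair exhibits the theorem: the flow value matches a cut capacity, certifying both as optimal.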

