Min Cost Flow: Polynomial Algorithms
Overview
• Recap:
  • Min Cost Flow, Residual Network
  • Potential and Reduced Cost
• Polynomial Algorithms
  • Approach
  • Capacity Scaling
    • Successive Shortest Path Algorithm Recap
    • Incorporating Scaling
  • Cost Scaling
    • Preflow/Push Algorithm Recap
    • Incorporating Scaling
  • Double Scaling Algorithm - Idea
Min Cost Flow - Recap
• G=(V,E) is a directed graph
• Capacity u(v,w) for every edge (v,w) ∈ E
• Balances: for every node v ∈ V we have a number b(v)
• Cost c(v,w) for every edge (v,w) ∈ E
• Costs can be assumed non-negative
[Figure: an example network on nodes v1–v5 with node balances 5, −2, −3 and edges labeled (capacity, cost): 4,1; 3,4; 5,1; 1,1; 3,3.]
Min Cost Flow - Recap
• Compute a feasible flow x with minimum total cost Σ_{(i,j)∈E} c_ij·x_ij
Residual Network - Recap
• We replace each arc (v,w) by two arcs, (v,w) and (w,v).
• The arc (v,w) has cost c(v,w) and residual capacity r(v,w) = u(v,w) − x(v,w).
• The arc (w,v) has cost −c(v,w) and residual capacity r(w,v) = x(v,w).
• The residual network consists only of arcs with positive residual capacity.
• For a feasible flow x, G(x) is the residual network that corresponds to the flow x.
Reduced Cost - Recap
• Let π be a potential function on the nodes in V.
• Reduced cost of an edge (i,j): c^π(i,j) = c(i,j) − π(i) + π(j)
Reduced Cost - Recap
• Theorem (Reduced Cost Optimality): A feasible solution x* is an optimal solution of the minimum cost flow problem ⟺ there exists a node potential function π that satisfies the following reduced cost optimality conditions: c^π(i,j) ≥ 0 for every arc (i,j) in G(x*).
• Idea: no negative cycles in G(x*)
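The optimality conditions above are easy to check mechanically. A minimal sketch (illustrative names; residual arcs and costs are assumed to be given explicitly, with each reverse arc carrying the negated cost):

```python
def reduced_cost(c, pi, i, j):
    """c^pi(i,j) = c(i,j) - pi(i) + pi(j)."""
    return c[(i, j)] - pi[i] + pi[j]

def satisfies_optimality(residual_arcs, c, pi):
    """Reduced cost optimality: c^pi(i,j) >= 0 on every residual arc."""
    return all(reduced_cost(c, pi, i, j) >= 0 for i, j in residual_arcs)
```

Note that a zero potential certifies optimality only when the residual network has no negative-cost arcs; otherwise a suitable π must be found.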
Min Cost Flow: Polynomial Algorithms
Approach
• We have seen several algorithms for the MCF problem, but none polynomial in log U, log C.
• Idea: scaling!
  • Capacity/flow values
  • Costs
  • Both
• Next week: algorithms with running time independent of log U, log C
  • Strongly polynomial
  • Will solve problems with irrational data
Capacity Scaling
Successive Shortest Path - Recap
• Pseudo-Flow – a flow which doesn’t necessarily satisfy the balance constraints
• Define the imbalance function e for node i: e(i) = b(i) + Σ_{(j,i)∈E} x_ji − Σ_{(i,j)∈E} x_ij
• Define E and D as the sets of excess and deficit nodes.
• Observation: Σ_{i∈E} e(i) = −Σ_{i∈D} e(i)
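The imbalance function, and the observation that total excess equals total deficit, can be sketched directly (illustrative names; the pseudoflow is assumed given as a dictionary keyed by arc):

```python
def imbalances(n, b, x):
    """e(i) = b(i) + inflow(i) - outflow(i) for a pseudoflow x = {(i, j): x_ij}."""
    e = list(b)
    for (i, j), f in x.items():
        e[i] -= f   # flow leaving i
        e[j] += f   # flow entering j
    return e
```

Since every unit of flow leaves one node and enters another, the imbalances always sum to Σ b(i) = 0, which is exactly the observation above.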
Successive Shortest Path - Recap
• Lemma: Suppose pseudoflow x has potential π satisfying reduced cost optimality. Let d be the vector of distances from some node s in G(x), with c^π(i,j) as the length of (i,j). Then:
  • The potential π' = π − d also satisfies the optimality conditions.
  • c^{π'}(i,j) = 0 for edges on shortest paths from s.
• Idea: triangle inequality
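Written out (a reconstruction of the slide’s omitted formulas): for every arc (i,j) of G(x) the shortest-path distances satisfy the triangle inequality d(j) ≤ d(i) + c^π(i,j), hence with π' = π − d:

```latex
c^{\pi'}(i,j) \;=\; c(i,j) - \bigl(\pi(i) - d(i)\bigr) + \bigl(\pi(j) - d(j)\bigr)
\;=\; c^{\pi}(i,j) + d(i) - d(j) \;\ge\; 0 ,
```

with equality, c^{π'}(i,j) = 0, exactly when (i,j) lies on a shortest path from s.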
Successive Shortest Path - Recap
• Algorithm:
  • Maintain a pseudoflow x with potential π, satisfying reduced cost optimality.
  • Iteratively, choose a node s with positive excess, e(s) > 0, until there are none.
  • Compute shortest path distances d from s in G(x), with c^π(i,j) as the length of (i,j).
  • Update π := π − d; π still satisfies the optimality conditions.
  • Push as much flow as possible along a shortest path from s to some node t with e(t) < 0 (this retains the optimality conditions w.r.t. π).
Successive Shortest Path - Recap
• Algorithm Complexity:
  • Assuming integrality, at most nU iterations.
  • In each iteration, compute shortest paths.
  • Using Dijkstra: O(m + n·log n) per iteration.
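The recap above can be collected into a compact runnable sketch (illustrative names; a sketch assuming non-negative integer costs, a feasible instance, and that deficit nodes stay reachable, so Dijkstra on reduced costs is valid throughout):

```python
import heapq

def min_cost_flow(n, edges, b):
    """Successive shortest paths with potentials.

    edges: list of (u, v, capacity, cost), cost >= 0.  b: balances, sum(b) == 0.
    """
    graph = [[] for _ in range(n)]        # arc = [head, residual cap, cost, rev index]
    def add_arc(u, v, cap, cost):
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    for u, v, cap, cost in edges:
        add_arc(u, v, cap, cost)

    e = list(b)                           # imbalances of the zero pseudoflow
    h = [0] * n                           # h(i) = -pi(i); reduced length c + h(u) - h(v) >= 0
    total = 0
    while any(x > 0 for x in e):
        s = next(i for i in range(n) if e[i] > 0)
        INF = float("inf")
        dist = [INF] * n
        prev = [None] * n                 # (tail, arc index) on the shortest-path tree
        dist[s] = 0
        pq = [(0, s)]
        while pq:                         # Dijkstra with reduced costs
            du, u = heapq.heappop(pq)
            if du > dist[u]:
                continue
            for idx, (v, cap, cost, _) in enumerate(graph[u]):
                if cap > 0 and du + cost + h[u] - h[v] < dist[v]:
                    dist[v] = du + cost + h[u] - h[v]
                    prev[v] = (u, idx)
                    heapq.heappush(pq, (dist[v], v))
        t = min((i for i in range(n) if e[i] < 0 and dist[i] < INF),
                key=lambda i: dist[i], default=None)
        if t is None:
            raise ValueError("no augmenting path: problem is infeasible")
        for v in range(n):                # potential update: pi := pi - d
            if dist[v] < INF:
                h[v] += dist[v]
        delta = min(e[s], -e[t])          # push as much as possible along the path
        v = t
        while v != s:
            u, idx = prev[v]
            delta = min(delta, graph[u][idx][1])
            v = u
        v = t
        while v != s:
            u, idx = prev[v]
            graph[u][idx][1] -= delta
            graph[v][graph[u][idx][3]][1] += delta
            total += delta * graph[u][idx][2]
            v = u
        e[s] -= delta
        e[t] += delta
    return total
```

By the lemma above, updating the potentials after each Dijkstra run keeps all residual reduced costs non-negative, which is what lets the next iteration use Dijkstra rather than Bellman–Ford.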
Capacity Scaling - Scheme
• The Successive Shortest Path Algorithm may push very little flow in each iteration.
• Fix idea: use scaling
  • Modify the algorithm to push Δ units of flow
  • Ignore edges with residual capacity < Δ
  • ...until there is no node with excess ≥ Δ or no node with deficit ≥ Δ
  • Decrease Δ by a factor of 2, and repeat
  • Until Δ < 1.
Definitions
• Define S(Δ) – nodes with excess at least Δ; T(Δ) – nodes with deficit at least Δ.
• The Δ-residual network G(x, Δ) is the subgraph of G(x) consisting of the edges with residual capacity at least Δ.
[Figure: an example residual network G(x) with residual capacities between 1 and 4 on its edges.]
Definitions
• Define S(Δ) – nodes with excess at least Δ; T(Δ) – nodes with deficit at least Δ.
• The Δ-residual network G(x, Δ) is the subgraph of G(x) consisting of the edges with residual capacity at least Δ.
[Figure: G(x, 3) – the same network, keeping only the edges with residual capacity ≥ 3.]
Main Observation in Algorithm
• Observation: an augmentation of Δ units must start at a node in S(Δ), follow a path in G(x, Δ), and end at a node in T(Δ).
• In the Δ-phase, we find shortest paths in G(x, Δ) and augment along them.
• Thus, the edges in G(x, Δ) will satisfy the reduced cost optimality conditions.
• We will consider edges with smaller residual capacity later.
Initializing phases
• In later Δ-phases, “new” edges with residual capacity rij < 2Δ may be introduced into G(x, Δ).
• We ignored them until now, so possibly c^π(i,j) < 0.
• Solution: saturate such edges.
Capacity Scaling Algorithm
• Initial values: the zero pseudoflow and zero potential (optimal!), and Δ large enough.
• At the beginning of each Δ-phase, fix the optimality conditions on “new” edges with residual capacity rij < 2Δ by saturating them.
• Repeatedly augment Δ units along a shortest path in G(x, Δ) from a node in S(Δ) to a node in T(Δ).
Capacity Scaling Algorithm - Correctness
• Inductively, the algorithm maintains a reduced cost optimal flow x w.r.t. π in G(x, Δ).
  • Initially this is clear.
  • At the beginning of the Δ-phase, “new” arcs with rij < 2Δ are introduced and may not satisfy optimality. Saturating the edges with c^π(i,j) < 0 suffices, since the reversal then satisfies c^π(j,i) > 0.
  • Augmenting flow along shortest paths in G(x, Δ) retains optimality.
Capacity Scaling Algorithm - Correctness
• When Δ = 1, G(x, Δ) = G(x).
• The algorithm ends with S(1) = ∅ or T(1) = ∅; since the total excess equals the total deficit, both are then empty.
• By integrality, we have a feasible flow.
Capacity Scaling Algorithm - Assumption
• We assumed that a path from k ∈ S(Δ) to l ∈ T(Δ) exists in G(x, Δ), and that we can compute shortest distances from k to the rest of the nodes.
• Quick fix: initially, add a dummy node D with artificial edges (1, D) and (D, 1) of infinite capacity and very large cost.
Capacity Scaling Algorithm – Complexity
• The algorithm has O(log U) phases.
• We analyze each phase separately.
Capacity Scaling Algorithm – Phase Complexity
• Let’s assess the sum of excesses at the beginning of a phase.
• Observe that when the Δ-phase begins, either S(2Δ) = ∅ or T(2Δ) = ∅ (the previous phase ended).
• Thus the sum of excesses (= sum of deficits) is less than 2nΔ.
Capacity Scaling Algorithm – Phase Complexity – Cont.
• Saturation at the beginning of the phase saturates the edges with rij < 2Δ. This may add at most 2mΔ to the sum of excesses.
Capacity Scaling Algorithm – Phase Complexity – Cont.
• Thus the sum of excesses is less than 2(n + m)Δ.
• Therefore, at most O(m) augmentations (of Δ units each) can be performed in a phase.
• In total: O(m) shortest-path computations, i.e. O(m(m + n·log n)), per phase.
Capacity Scaling Algorithm –Complexity
• O(m(m + n·log n)) per phase, over O(log U) phases: O(m(m + n·log n)·log U) in total.
Cost Scaling
Approximate Optimality
• A pseudoflow (flow) x is said to be ε-optimal for some ε > 0 if, for some potential π, c^π(i,j) ≥ −ε for every edge (i,j) in G(x).
Approximate Optimality Properties
• Lemma: For a min cost flow problem with integer costs, any feasible flow is C-optimal. In addition, if ε < 1/n, then any ε-optimal feasible flow is optimal.
• Proof:
  • Part 1 is easy: set π = 0.
  • Part 2: for any cycle W in G(x), Σ_{(i,j)∈W} c_ij = Σ_{(i,j)∈W} c^π(i,j) ≥ −ε·|W| > −1. Applying integrality, it follows that Σ_{(i,j)∈W} c_ij ≥ 0. The lemma follows.
[Figure: an example cycle on nodes a, b, c, d with edge costs 4, −5, 3, −2.]
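The chain of inequalities in Part 2, written out (the potentials cancel when summed around a cycle, so the reduced costs sum to the plain costs):

```latex
\sum_{(i,j)\in W} c_{ij} \;=\; \sum_{(i,j)\in W} c^{\pi}_{ij}
\;\ge\; -\varepsilon\,|W| \;\ge\; -\varepsilon\,n \;>\; -1
\qquad\text{when } \varepsilon < 1/n ,
```

and with integer costs an integer greater than −1 is at least 0, so G(x) has no negative cycle and x is optimal.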
Algorithm Strategy
• The previous lemma suggests the following strategy:
  1. Begin with a feasible flow x and π = 0, which is C-optimal.
  2. Iteratively improve from an ε-optimal flow (x, π) to an ε/2-optimal flow (x’, π’), until ε < 1/n.
• We discuss two methods to implement the underlying improvement procedure.
• The first is a variation of the Preflow Push Algorithm.
Preflow Push Recap
Distance Labels
• Distance labels satisfy:
  • d(t) = 0, d(s) = n
  • d(v) ≤ d(w) + 1 if r(v,w) > 0
• d(v) is at most the distance from v to t in the residual network.
• Hence s must be disconnected from t in the residual network (d(s) = n exceeds the length of any simple path).
Terms
• Nodes with positive excess are called active.
• An admissible arc (v,w) in the residual graph: r(v,w) > 0 and d(v) = d(w) + 1.
The preflow push algorithm
While there is an active node {
    pick an active node v and push/relabel(v)
}

Push/relabel(v) {
    If there is an admissible arc (v,w) then {
        push min{e(v), r(v,w)} flow from v to w
    } else {
        d(v) := min{d(w) + 1 | r(v,w) > 0}   (relabel)
    }
}
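The recap above can be turned into a runnable sketch. This is a generic, unoptimized push/relabel maximum-flow routine (illustrative names; active nodes are kept in a simple list, and flow is stored antisymmetrically so that the residual capacity is cap − f):

```python
from collections import defaultdict

def push_relabel_max_flow(n, edges, s, t):
    """edges: list of (u, v, capacity). Returns the max flow value from s to t."""
    cap = defaultdict(int)
    nbrs = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        nbrs[u].add(v); nbrs[v].add(u)
    f = defaultdict(int)                  # antisymmetric: f[u,v] == -f[v,u]
    def residual(u, v):
        return cap[(u, v)] - f[(u, v)]
    d = [0] * n                           # distance labels
    e = [0] * n                           # excesses
    d[s] = n
    for v in list(nbrs[s]):               # saturate all source arcs
        delta = residual(s, v)
        if delta > 0:
            f[(s, v)] += delta; f[(v, s)] -= delta
            e[v] += delta; e[s] -= delta
    active = [v for v in range(n) if v not in (s, t) and e[v] > 0]
    while active:
        v = active[-1]
        pushed = False
        for w in nbrs[v]:
            if residual(v, w) > 0 and d[v] == d[w] + 1:     # admissible arc
                delta = min(e[v], residual(v, w))           # push
                f[(v, w)] += delta; f[(w, v)] -= delta
                e[v] -= delta; e[w] += delta
                if w not in (s, t) and e[w] > 0 and w not in active:
                    active.append(w)
                pushed = True
                if e[v] == 0:
                    break
        if not pushed:                                      # relabel
            d[v] = min(d[w] + 1 for w in nbrs[v] if residual(v, w) > 0)
        if e[v] == 0:
            active.remove(v)
    return e[t]
```

Any order of processing active nodes is correct; FIFO or highest-label selection only improves the running-time bounds quoted below.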
Running Time
• The # of relabelings is (2n−1)(n−2) < 2n²
• The # of saturating pushes is at most 2nm
• The # of nonsaturating pushes is at most 4n²m – using the potential Φ = Σ_{v active} d(v)
Back to Min Cost Flow…
Applying Preflow Push’s Technique
• Our motivation was to find a method to improve an ε-optimal flow (x, π) to an ε/2-optimal flow (x’, π’).
• We use Preflow Push’s technique:
  • Labels: the potentials π(i)
  • Admissible edge (i,j) in the residual network: c^π(i,j) = c(i,j) − π(i) + π(j) < 0
  • Relabel: increase π(i) by ε/2
Initialization
• We first transform the input ε-optimal flow (x, π) into an ε/2-optimal pseudoflow (x’, π).
• This is easy:
  • Saturate edges with negative reduced cost.
  • Clear the flow from edges with positive reduced cost.
• The procedure: obtain an ε/2-optimal pseudoflow, then push/relabel until no active nodes exist.
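The initialization step can be sketched as follows (illustrative names; a sketch assuming the flow, capacities, and costs are given as dictionaries keyed by arc). After it, every residual arc has non-negative reduced cost:

```python
def init_pseudoflow(edges, cap, flow, c, pi):
    """Saturate negative reduced-cost edges, clear positive ones."""
    x = dict(flow)
    for (i, j) in edges:
        rc = c[(i, j)] - pi[i] + pi[j]    # reduced cost c^pi(i,j)
        if rc < 0:
            x[(i, j)] = cap[(i, j)]       # saturate: only the reverse arc remains
        elif rc > 0:
            x[(i, j)] = 0                 # clear: only the forward arc remains
    return x
```

Forward residual arcs then have rc ≥ 0, and reverse arcs exist only where rc ≤ 0, so their reduced cost −rc ≥ 0: the resulting pseudoflow is in fact 0-optimal.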
Correctness
• Lemma 1: Let x be a pseudo-flow and x’ a feasible flow. Then, for every node v with excess in x, there exists a path P in G(x) from v ending at a node w with deficit, whose reversal is a path in G(x’).
• Proof: Look at the difference x’ − x, and observe the underlying graph (edges with negative difference are reversed).
Lemma 1: Proof Cont.
• Proof, continued: in the difference x’ − x, edges with negative difference are reversed.
[Figure: the flows x and x’ on an example network, and the resulting difference graph.]
Lemma 1: Proof Cont.
• Following the edges of the difference graph from v, there must be a node w with deficit reachable; otherwise, summing over the reachable set S, x’ isn’t feasible.
Correctness (cont)
• Corollary: There is an outgoing residual arc incident with every active vertex.
• Corollary: So we can push/relabel as long as there is an active vertex.
Correctness – Cont.
• Lemma 2: The algorithm maintains an ε/2-optimal pseudo-flow (x’, π’).
• Proof: By induction on the number of operations.
• Initially, the initialization yields an ε/2-optimal pseudoflow (every residual arc has non-negative reduced cost).
Correctness – Cont.
• A push on an admissible edge (i,j), where c^π(i,j) < 0, may introduce (j,i) into G(x), but that’s OK: c^π(j,i) = −c^π(i,j) > 0.
Correctness – Cont.
• A relabel operation at node i occurs when there’s no admissible arc – c^π(i,j) ≥ 0 for every edge (i,j) emanating from i in the residual network.
• A relabel decreases each such c^π(i,j) by ε/2, so it still satisfies the ε/2-optimality condition.
• A relabel increases c^π(j,i) on incoming residual arcs, so they clearly still satisfy the ε/2-optimality condition.
Correctness (cont)
• Lemma 3: When (and if) the algorithm stops, the pseudoflow is an ε/2-optimal feasible flow (x’, π’).
• Proof:
  • It is a feasible flow since there is no active vertex.
  • It is ε/2-optimal since π’ satisfies the ε/2-optimality conditions.
Complexity
• Lemma: a node is relabeled at most 3n times.
Lemma 2 – Cont.
• Proof: Let x’ be the ε-optimal input flow, and x an intermediate ε/2-optimal pseudo-flow. Let π’, π be the respective potentials. Take the path P from v (a node with excess in x) to w as described in Lemma 1.
• ε/2-optimality of (x, π) along P gives: c(P) − π(v) + π(w) ≥ −(ε/2)·len(P)
• ε-optimality of (x’, π’) along the reversal gives: −c(P) − π’(w) + π’(v) ≥ −ε·len(P)
Lemma 2 – Cont.
• Adding the two inequalities: −(3ε/2)·len(P) ≤ (π(w) − π’(w)) − (π(v) − π’(v))
• Nodes never become deficit nodes, so w was never relabeled and π(w) = π’(w).
Lemma 2 – Cont.
• Hence (3ε/2)·len(P) ≥ π(v) − π’(v). Since len(P) ≤ n and each relabel increases π(v) by ε/2, a node is relabeled at most 3n times.
Complexity Analysis (Cont.)
• Lemma: The # of saturating pushes is at most O(nm).
• Proof: same as in Preflow Push.
Complexity Analysis – non Saturating Pushes
• Def: The admissible network is the graph of admissible edges.
[Figure: a residual network with reduced costs 4, −1, −2, −2, 2, −4, 2 on its edges.]
Complexity Analysis – non Saturating Pushes
[Figure: the admissible network – only the edges with negative reduced cost remain.]
Complexity Analysis – non Saturating Pushes
• Lemma: The admissible network is acyclic throughout the algorithm.
• Proof: by induction. A push only removes admissible arcs; a relabel of node i makes the arcs entering i inadmissible, and the new admissible arcs all leave i, so no cycle is created.
Complexity Analysis – non Saturating Pushes – Cont.
• Lemma: The # of nonsaturating pushes is O(n2m).
• Proof: Let g(i) be the # of nodes reachable from i in the admissible network.
• Let Φ = Σ_{i active} g(i)
Complexity Analysis – non Saturating Pushes – Cont.
• Φ = Σ_{i active} g(i)
• By acyclicity, Φ decreases (by at least one) with every nonsaturating push: i becomes inactive, and g(j) ≤ g(i) − 1 for the receiving node j.
Complexity Analysis – non Saturating Pushes – Cont.
• Φ = Σi active g(i)
• Initially g(i) = 1 (after initialization the admissible network has no edges).
• Φ increases by at most n with each saturating push: total increase O(n²m).
• Φ increases by at most n with each relabeling (no incoming edges become admissible): total increase O(n³).
• Hence O(n²m) nonsaturating pushes.
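The accounting above can be written in one line, where Φ₀ ≤ n denotes the initial value of the potential:

```latex
\#\{\text{nonsaturating pushes}\}
\;\le\; \Phi_0
 + \underbrace{O(nm)\cdot n}_{\text{saturating pushes}}
 + \underbrace{O(n^2)\cdot n}_{\text{relabels}}
\;=\; O(n^2 m) ,
```

since Φ ≥ 0 throughout and each nonsaturating push decreases Φ by at least 1.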
Cost Scaling Summary
• Total complexity: O(n²m·log(nC)).
• Can be improved using the ideas used to improve preflow push.
Double Scaling (If Time Permits)
Double Scaling Idea
• Use capacity scaling to implement improve-approximation.
• New ‘tricks’ used:
  • Transform the network into a bipartite network on the nodes V ∪ E.
  • Augment on admissible paths (a generalization of shortest paths and pushes on edges).
  • Find admissible paths greedily (DFS), and relabel potentials on retreats.
Network Transformation
[Figure: each arc (i,j) with capacity uij and cost cij becomes a new node (i,j) with balance −uij; an arc i → (i,j) with cost cij carries xij, and an arc j → (i,j) with cost 0 carries rij.]
• Bipartite network on the nodes V ∪ E. For convenience, we shall refer to V as N1, and to E as N2.
• All edges go from N1 to N2.
• No capacities.
Improve approximation Initialization
• Recall: improve-approximation receives (x, π), which is ε-optimal.
• If we reset x’ := 0, then G(x’) = G.
• There are no capacities, so G is a subgraph of G(x).
• ε-optimality means c^π(i,j) ≥ −ε for all edges in G(x), and in particular in G.
• Adding ε/2 to π(j) for every j in N2 obtains an ε/2-optimal pseudoflow.
Capacity Scaling - Scheme
• While S(Δ) ≠ ∅:
  • Pick a node k ∈ S(Δ).
  • Search for an admissible path from k using DFS.
  • On retreat from node i, relabel: increase π(i) by ε/2.
  • Upon finding a deficit node, augment Δ units of flow along the path.
• Decrease Δ by a factor of 2, and repeat.
• Until Δ < 1.
Double Scaling - Correctness
• Assuming the algorithm ends, correctness is immediate, since we augment along admissible paths.
• (A residual path from an excess node to a deficit node always exists – see the cost scaling algorithm.)
• We relabel only when there are indeed no outgoing admissible edges.
Double Scaling - Complexity
• O(log U) phases.
• In each phase, each augmentation clears a node from S(Δ) and doesn’t introduce a new one, so O(m) augmentations per phase.
Double Scaling Complexity – Cont.
• In each augmentation:
  • We find a path of length at most 2n (the admissible network is acyclic and bipartite).
  • We need to account for the retreats.
Double Scaling Complexity – Cont.
• In each retreat, we relabel.
• Using the above lemma, a node’s potential cannot grow by more than O(ℓ) relabels, where ℓ is the length of a path to a deficit node.
• Since the graph is bipartite, ℓ = O(n).
• So over all augmentations, there are O(n·(m + n)) = O(mn) retreats.
Double Scaling Complexity – Cont.
• To sum up:
  • Implemented improve-approximation using capacity scaling.
  • O(log U) phases.
  • O(m) augmentations per phase.
  • O(n) nodes per path.
  • O(mn) node retreats in total.
• Total complexity: O(log(nC) log(U) mn)
Thank You