Graph Sparsifiers by Edge-Connectivity and Random Spanning Trees
Nick Harvey University of Waterloo
Department of Combinatorics and Optimization
Joint work with Isaac Fung
What are sparsifiers?
Weighted subgraphs that approximately preserve some properties of a graph. (n = # vertices, m = # edges)
• Approximating all cuts
  – Sparsifiers: number of edges = O(n log n / ε²), every cut approximated within 1±ε. [BK'96]
  – Õ(m)-time algorithm to construct them
• Spectral approximation
  – Spectral sparsifiers: number of edges = O(n log n / ε²), "entire spectrum" approximated within 1±ε. [SS'08]
    (Here the "spectrum" is that of the Laplacian matrix of G vs. the Laplacian matrix of the sparsifier.)
  – Õ(m)-time algorithm to construct them
  – O(n/ε²) edges suffice, in Poly(n) time. [BSS'09]
Why are sparsifiers useful?
• Approximating all cuts
  – Sparsifiers give fast algorithms for cut/flow problems:

  Problem                                   Approximation   Runtime           Reference
  Min st Cut                                1+ε             Õ(n²)             BK'96
  Sparsest Cut                              O(log n)        Õ(n²)             BK'96
  Max st Flow                               1               Õ(m+nv)           KL'02
  Sparsest Cut                              —               Õ(n²)             AHK'05
  Sparsest Cut                              O(log² n)       Õ(m+n^(3/2))      KRV'06
  Sparsest Cut                              —               Õ(m+n^(3/2+ε))    S'09
  Perfect Matching in Regular Bip. Graphs   n/a             Õ(n^1.5)          GKK'09
  Sparsest Cut                              —               Õ(m+n^(1+ε))      M'10

  (v = flow value; n = # vertices; m = # edges)
Our Motivation
• The BSS algorithm is very mysterious, and "too good to be true"
• Are there other methods to get sparsifiers with only O(n/ε²) edges?
• Wild Speculation: the union of O(1/ε²) random spanning trees gives a sparsifier (if weighted appropriately)
  – True for the complete graph [GRV '08]
• Corollary of our Main Result: the Wild Speculation is false, but the union of O(log² n / ε²) random spanning trees gives a sparsifier
Formal problem statement
• Design an algorithm with:
  • Input: an undirected graph G=(V,E)   (n = # vertices, m = # edges)
  • Output: a weighted subgraph H=(V,F,w), where F ⊆ E and w : F → ℝ
• Goals:
  • | |δ_G(U)| − w(δ_H(U)) | ≤ ε |δ_G(U)|   ∀ U ⊆ V   (we only want to preserve cuts)
  • |F| = O(n log n / ε²)
  • Running time = Õ(m / ε²)
• Here |δ_G(U)| is the # of edges between U and V∖U in G, and w(δ_H(U)) is the weight of the edges between U and V∖U in H. Dropping subscripts: | |δ(U)| − w(δ(U)) | ≤ ε |δ(U)|   ∀ U ⊆ V.
Sparsifying the Complete Graph
• Sampling: construct H by sampling every edge of G with probability p = 100 log n / n. Give each edge weight 1/p.
• Properties of H:
  • # sampled edges = O(n log n)
  • |δ_G(U)| ≈ |δ_H(U)|   ∀ U ⊆ V
• So H is a sparsifier of G
Proof Sketch
• Consider any cut δ_G(U) with |U| = k. Then |δ_G(U)| ≥ kn/2.
• Let X_e = 1 if edge e is sampled, and let X = Σ_{e∈C} X_e = |δ_H(U)|, where C = δ_G(U).
• Then μ = E[X] = p |δ(U)| ≥ 50 k log n.
• Say the cut fails if |X − μ| ≥ μ/2.
• By a Chernoff bound, Pr[cut fails] ≤ 2 exp(−μ/12) ≤ n^(−4k).
• The # of cuts with |U| = k is (n choose k) ≤ n^k.
• So Pr[any cut fails] ≤ Σ_k n^k · n^(−4k) < Σ_k n^(−3k) < n^(−2).
• Whp, every U has | |δ_H(U)| − p |δ(U)| | < p |δ(U)| / 2.
Key Ingredients of the Proof Sketch
• Chernoff bound: exponentially decreasing probability of failure
• Bound on # small cuts: exponentially increasing # of bad events
• Union bound: the failure probability decreases faster than the # of bad events increases
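The sampling scheme above fits in a few lines; a minimal sketch (the function name, the choice n = 1000, and the single-cut spot-check are illustrative assumptions, not from the slides):

```python
import itertools
import math
import random

def sparsify_complete(n, seed=0):
    """Sample each edge of K_n independently with p = 100 ln(n)/n and
    give each kept edge weight 1/p, as in the slide's scheme."""
    rng = random.Random(seed)
    p = min(1.0, 100 * math.log(n) / n)
    H = [(u, v) for u, v in itertools.combinations(range(n), 2)
         if rng.random() < p]
    return H, 1.0 / p

# Spot-check one cut U = {0, ..., k-1}: the weighted sampled cut should be
# within a factor (1 +/- 1/2) of the true cut size k(n-k), per the proof sketch.
n, k = 1000, 40
H, w = sparsify_complete(n)
U = set(range(k))
sampled_cut = w * sum(1 for u, v in H if (u in U) != (v in U))
true_cut = k * (n - k)
assert abs(sampled_cut - true_cut) <= 0.5 * true_cut
```

In practice the deviation is far below the ε = 1/2 slack used here, since the cut has Θ(kn) edges and the Chernoff concentration is very sharp.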
Generalize to arbitrary G?
• Can't sample all edges with the same probability!
• Idea [BK'96]: sample low-connectivity edges with high probability, and high-connectivity edges with low probability
  (Figure: "Keep this" refers to the few low-connectivity edges; "Eliminate most of these" to the many high-connectivity edges.)
Non-uniform sampling algorithm [BK'96]
• Input: graph G=(V,E), parameters p_e ∈ [0,1]
• Output: a weighted subgraph H=(V,F,w), where F ⊆ E and w : F → ℝ

    For i = 1 to ρ:
        For each edge e ∈ E:
            With probability p_e:
                Add e to F
                Increase w_e by 1/(ρ p_e)

• Main Question: can we choose ρ and the p_e's to achieve the sparsification goals?
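The loop above translates almost line-for-line into code; a sketch (representing H by a weight dictionary, whose key set is F, is an assumption):

```python
import random

def nonuniform_sample(edges, p, rho, seed=0):
    """BK'96-style sampling skeleton: rho rounds; in each round, keep edge e
    with probability p[e] and add 1/(rho * p[e]) to its weight, so that
    E[w_e] = rho * p[e] * 1/(rho * p[e]) = 1."""
    rng = random.Random(seed)
    w = {}  # F is the key set of w
    for _ in range(rho):
        for e in edges:
            if rng.random() < p[e]:
                w[e] = w.get(e, 0.0) + 1.0 / (rho * p[e])
    return w

# Sanity check: with p_e = 1, every edge is kept in every round, so each
# total weight is exactly rho * 1/(rho * 1) = 1, matching E[w_e] = 1.
edges = [(0, 1), (1, 2), (2, 0)]
w = nonuniform_sample(edges, {e: 1.0 for e in edges}, rho=8)
assert all(abs(x - 1.0) < 1e-9 for x in w.values())
```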
Non-uniform sampling algorithm [BK'96]
• Claim: H perfectly approximates G in expectation!
• For any e ∈ E, E[w_e] = ρ · p_e · 1/(ρ p_e) = 1
  ⇒ for every U ⊆ V, E[w(δ_H(U))] = |δ_G(U)|
• Goal: show that every w(δ_H(U)) is tightly concentrated
Prior Work (assume ε is constant)
• Benczur-Karger '96
  – Set ρ = O(log n), p_e = 1/("strength" of edge e)
    (strength: max k s.t. e is contained in a k-edge-connected vertex-induced subgraph of G)
  – All cuts are preserved
  – Σ_e p_e ≤ n ⇒ |F| = O(n log n)   (# edges in sparsifier)
  – Running time is O(m log³ n)
• Spielman-Srivastava '08
  – Set ρ = O(log n), p_e = 1/("effective conductance" of edge e)
    (view G as an electrical network where each edge is a 1-ohm resistor; similar to edge connectivity)
  – H is a spectral sparsifier of G ⇒ all cuts are preserved
  – Σ_e p_e = n−1 ⇒ |F| = O(n log n)   (# edges in sparsifier)
  – Running time is O(m log⁵⁰ n), improved to O(m log³ n) by [Koutis-Miller-Peng '10]
  – Uses a "Matrix Chernoff Bound"
Our Work (assume ε is constant)
• Fung-Harvey '10 (independently Hariharan-Panigrahi '10)
  – Set ρ = O(log² n), p_e = 1/(edge connectivity of e)
    (the edge connectivity of e is the min size of a cut that contains e)
  – Edge connectivity ≥ max { strength, effective conductance }
  – Σ_e p_e ≤ n ⇒ |F| = O(n log² n)
  – Running time is O(m log² n)
  – Advantages:
    • Edge connectivities are natural and easy to compute
    • Faster than previous algorithms
    • Implies that sampling by edge strength, effective resistances, or random spanning trees also works
      (Why trees? Pr[e ∈ T] = effective resistance of e, and the edges are negatively correlated ⇒ the Chernoff bound still works.)
  – Disadvantages:
    • Extra log factor, no spectral sparsification
• Extra trick: can shrink |F| to O(n log n) by using Benczur-Karger to sparsify our sparsifier!
  – Running time is O(m log² n) + Õ(n)
• Panigrahi '10
  – A sparsifier with O(n log n / ε²) edges, with running time O(m) in unweighted graphs and O(m) + Õ(n/ε²) in weighted graphs
• Notation: k_uv = min size of a cut separating u and v
• Main ideas:
  – Partition edges into connectivity classes E = E_1 ∪ E_2 ∪ ... ∪ E_{log n}, where E_i = { e : 2^(i−1) ≤ k_e < 2^i }
  – Prove that the weight of the sampled edges that each cut takes from each connectivity class is about right
  – Key point: edges in δ(U) ∩ E_i have nearly the same weight
  – This yields a sparsifier
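Each k_e is just a max-flow value between the endpoints of e, so the partition into connectivity classes is directly computable; a sketch using unit-capacity Edmonds-Karp (the helper names, the max-flow choice, and the example graph are illustrative assumptions; the slides do not specify how the connectivities are computed):

```python
from collections import defaultdict, deque
from math import floor, log2

def edge_connectivity(edges, s, t):
    """k_st = min # of edges separating s and t, computed as a unit-capacity
    max-flow (Edmonds-Karp); each undirected edge gives one unit of capacity
    in each direction."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        cap[(u, v)] += 1
        cap[(v, u)] += 1
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        parent = {s: None}          # BFS for an augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        v = t                        # push one unit along the path
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def connectivity_class(edges, e):
    """Class index i with 2^(i-1) <= k_e < 2^i, as in the partition above."""
    k = edge_connectivity(edges, e[0], e[1])
    return floor(log2(k)) + 1

# Two triangles joined by a bridge (2,3): the bridge has k_e = 1 (class 1),
# while triangle edges have k_e = 2 (class 2).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
assert connectivity_class(edges, (2, 3)) == 1
assert connectivity_class(edges, (0, 1)) == 2
```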
Prove that the weight of the sampled edges that each cut takes from each connectivity class is about right
• Notation:
  • C = δ(U) is a cut
  • C_i = δ(U) ∩ E_i is a cut-induced set
• Need to prove: for each cut-induced set C_i, its sampled weight is concentrated around |C_i|
  (Figure: a cut C partitioned into cut-induced sets C_1, C_2, C_3, C_4.)
• Prove ∀ cut-induced set C_i: w(C_i) ≈ |C_i| whp
• Key Ingredients
  • Chernoff bound: prove that each cut-induced set's failure probability is small
  • Bound on # small cuts: prove that #{ cut-induced sets C_i induced by a small cut C } is small
  • Union bound: the sum of the failure probabilities is small, so probably no failures
Counting Small Cut-Induced Sets
• Theorem: let G=(V,E) be a graph and fix any B ⊆ E.
  Suppose k_e ≥ K for all e ∈ B. (k_uv = min size of a cut separating u and v)
  Then, for every α ≥ 1:  |{ δ(U) ∩ B : |δ(U)| ≤ αK }| < n^(2α).
• Corollary (Counting Small Cuts [K'93]): let G=(V,E) be a graph and let K be the edge connectivity of G (i.e., the global min cut value).
  Then, for every α ≥ 1:  |{ δ(U) : |δ(U)| ≤ αK }| < n^(2α).
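The Corollary can be sanity-checked by brute force on a small graph; a sketch (the 6-cycle example and helper names are illustrative assumptions):

```python
import itertools

def small_cuts(n, edges, bound):
    """All distinct cuts delta(U) with 0 < |delta(U)| <= bound, found by
    brute force over vertex subsets U (dedup by the cut's edge set)."""
    cuts = set()
    for r in range(1, n // 2 + 1):
        for U in itertools.combinations(range(n), r):
            s = set(U)
            cut = frozenset(e for e in edges if (e[0] in s) != (e[1] in s))
            if 0 < len(cut) <= bound:
                cuts.add(cut)
    return cuts

# 6-cycle: edge connectivity K = 2. With alpha = 1, the cuts of size <= 2
# are exactly the pairs of cycle edges: C(6,2) = 15 of them, and indeed
# 15 < n^(2*alpha) = 36, as the Corollary promises.
n = 6
edges = [(i, (i + 1) % n) for i in range(n)]
cuts = small_cuts(n, edges, 2)
assert len(cuts) == 15
assert len(cuts) < n ** 2
```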
Comparison
• Theorem: let G=(V,E) be a graph and fix any B ⊆ E.
  Suppose k_e ≥ K for all e ∈ B. Then |{ δ(U) ∩ B : |δ(U)| ≤ c }| < n^(2c/K)   ∀ c ≥ 1.
• Corollary [K'93]: let G=(V,E) be a graph with edge connectivity K (the global min cut value).
  Then |{ δ(U) : |δ(U)| ≤ c }| < n^(2c/K)   ∀ c ≥ 1.
• How many cuts of size 1?
  – The Theorem says < n², taking K = c = 1.
  – The Corollary says < 1, because K = 0. (Slightly unfair.)
• Important point: a cut-induced set is a subset of edges, and many different cuts can induce the same set.
  (Figure: two distinct cuts δ(U) and δ(U′) inducing the same cut-induced set.)
Algorithm for Finding a Min Cut [K'93]
• Input: a graph
• Output: a minimum cut (maybe)
• While the graph has more than 2 vertices:
  – Pick an edge at random
  – Contract it
• Output the remaining edges
• Claim: for any fixed min cut, this algorithm outputs it with probability ≥ 1/n².
• Corollary: there are ≤ n² min cuts.
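The contraction algorithm above is only a few lines of code; a sketch (the union-find representation, the boosting loop, and the example graph of two triangles joined by a bridge are illustrative assumptions; it also assumes the input graph is connected):

```python
import random

def karger_min_cut(n, edges, seed=0):
    """One run of Karger's random contraction: repeatedly contract a
    uniformly random surviving edge until 2 super-vertices remain; the
    surviving edges form a cut, which is a fixed min cut w.p. >= 1/n^2."""
    rng = random.Random(seed)
    comp = list(range(n))            # union-find: comp[v] -> super-vertex
    def find(v):
        while comp[v] != v:
            comp[v] = comp[comp[v]]  # path halving
            v = comp[v]
        return v
    live = list(edges)               # edges between distinct super-vertices
    parts = n
    while parts > 2:
        u, v = live[rng.randrange(len(live))]
        comp[find(u)] = find(v)      # contract the chosen edge
        parts -= 1
        live = [e for e in live if find(e[0]) != find(e[1])]  # drop self-loops
    return live

# Boost the success probability by repeating with independent seeds and
# keeping the smallest cut found; here the unique min cut is the bridge.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
best = min((karger_min_cut(6, edges, seed=s) for s in range(50)), key=len)
assert best == [(2, 3)]
```

Note that the filter keeps parallel edges between super-vertices as separate entries, which is essential: contraction must be done in the multigraph, or the random edge choice would be biased.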
Finding a Small Cut-Induced Set
• Input: a graph G=(V,E) and B ⊆ E
• Output: a cut-induced subset of B
• While the graph has more than 2 vertices:
  – If some vertex v has no incident edges in B:
    • Split off all edges at v and delete v
  – Pick an edge at random
  – Contract it
• Output the remaining edges in B
• Claim: for any min cut-induced subset of B, this algorithm outputs it with probability > 1/n².
• Corollary: there are < n² min cut-induced subsets of B.
Splitting Off
• Replace edges {u,v} and {u′,v} with {u,u′}, while preserving the edge connectivity between all vertices other than v. [Mader]
  (Figure: the edges at v, before and after splitting off.)
Sparsifiers from Random Spanning Trees
• Let H be the union of ρ = log² n uniform random spanning trees, where w_e = 1/(ρ · (effective resistance of e))
• Then all cuts are preserved and |F| = O(n log² n)
• Why does this work?
  – Pr_T[e ∈ T] = effective resistance of edge e [Kirchhoff 1847]
  – Similar to the usual independent sampling algorithm, with p_e = effective resistance of e
  – Key difference: edges in a random spanning tree are not independent, but they are negatively correlated! [BSST 1940]
  – Chernoff bounds still work [Panconesi-Srinivasan 1997]
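The negative-correlation property [BSST 1940], Pr[e ∈ T and f ∈ T] ≤ Pr[e ∈ T]·Pr[f ∈ T] for a uniform random spanning tree T, can be verified exhaustively on a small graph; a brute-force sketch (the 4-cycle-plus-chord example is an illustrative assumption):

```python
import itertools

def spanning_trees(n, edges):
    """All spanning trees of a small graph: the (n-1)-edge subsets that are
    acyclic (an acyclic set of n-1 edges on n vertices is a spanning tree)."""
    trees = []
    for sub in itertools.combinations(edges, n - 1):
        comp = list(range(n))                 # tiny union-find
        def find(v):
            while comp[v] != v:
                v = comp[v]
            return v
        acyclic = True
        for u, v in sub:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False               # edge closes a cycle
                break
            comp[ru] = rv
        if acyclic:
            trees.append(set(sub))
    return trees

# 4-cycle 0-1-2-3-0 plus chord (0,2): it has 8 spanning trees, and every
# pair of distinct edges is negatively correlated under the uniform tree.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
trees = spanning_trees(4, edges)
assert len(trees) == 8
N = len(trees)
for e, f in itertools.combinations(edges, 2):
    pe = sum(e in t for t in trees) / N
    pf = sum(f in t for t in trees) / N
    pef = sum(e in t and f in t for t in trees) / N
    assert pef <= pe * pf + 1e-12
```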
Sparsifiers from Random Spanning Trees
• How is this different from independent sampling?
  – Consider an n-cycle. There are n/2 disjoint cuts of size 2.
  – With ρ = 1, each such cut has constant probability of containing no sampled edges ⇒ need ρ = Ω(log n) just to get a connected graph
  – With random trees, we get connectivity after just one tree
  – Are O(1) trees enough to preserve all cuts?
  – No! Ω(log n) trees are required
Conclusions
• Graph sparsifiers are important for fast algorithms and some combinatorial theorems
• Sampling by edge connectivities gives a sparsifier with O(n log² n) edges in O(m log² n) time
  – Improvements: O(n log n) edges in O(m) + Õ(n) time [Panigrahi '10]
• Sampling by effective resistances also works ⇒ sampling O(log² n) random spanning trees gives a sparsifier
Questions
• Improve log² n to log n?
• Does sampling o(log n) random trees give a sparsifier with o(log n) approximation?