(Near) Optimal Adaptivity Gaps for Stochastic Multi-Value Probing
TRANSCRIPT
(Near) Optimal Adaptivity Gaps for Stochastic Multi-Value Probing
Domagoj Bradac¹, Sahil Singla², Goran Zuzic³
¹ University of Zagreb → ETH Zurich
² Princeton and Institute for Advanced Study
³ Carnegie Mellon University
September 21, 2019
Slides are based on a deck by Sahil Singla.
1 / 21
Outline
1. Motivating example
2. Problem definition
3. Adaptivity gap
4. Results
5. Proof: Upper bound
6. Conclusion
2 / 21
Motivating example: Birthday party
[Figure: a metric graph of friends' homes; each friend is home independently with probability p ∈ {0.8, 0.5, 0.5, 0.4, 0.25}, and travel times between locations range from 15 to 30 minutes.]
Only 1 hour before deadline!
Remark: if all probabilities are 1 and the nodes' items are distinct, this becomes the Orienteering Problem [Blum et al. FOCS'03].
3 / 21
Things to note
Probabilities of being active: independent.
Objective: number of distinct items.
Constraint: 1 hour of travel in the given metric.
Goal: maximize the expected objective value.
[Figure: the same metric graph of friends, with probabilities p ∈ {0.8, 0.5, 0.5, 0.4, 0.25} and 15–30 minute travel times.]
4 / 21
[Figure: two candidate routes through the metric graph.]
Non-adaptive route (visit the p = 0.5, p = 0.5, and p = 0.4 friends): 0.5 + 0.5 + 0.4 = 1.4.
Adaptive route (branch on whether the first friend is home): 0.5 · (1 + 0.5 + 0) + 0.5 · (0 + 0.5 + 0.8) = 1.4.
Can we do better?
5 / 21
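When every probed friend holds a distinct item, the expected number of distinct items for a fixed (non-adaptive) set is just the sum of the probabilities, and with shared items it is computed item by item. A minimal sketch; the friend-to-item assignment below is a hypothetical instance, not the slide's exact graph:

```python
from math import prod

def expected_distinct(probed, p, item_of):
    """E[# distinct items] for a fixed probed set: item j is collected
    unless every probed friend holding j is away."""
    items = {item_of[i] for i in probed}
    return sum(
        1 - prod(1 - p[i] for i in probed if item_of[i] == j)
        for j in items
    )

# Hypothetical instance: three friends holding three distinct gifts.
p = {1: 0.5, 2: 0.5, 3: 0.4}
value = expected_distinct([1, 2, 3], p, {1: "a", 2: "b", 3: "c"})
print(value)  # 0.5 + 0.5 + 0.4 = 1.4
```

With duplicated items the expectation drops: two friends holding the same gift with p = 0.5 each yield 1 − 0.25 = 0.75, not 1.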
Problem definition: Stochastic probing

Given:
Universe [n] = {1, 2, ..., n}.
Probabilities p1, p2, ..., pn of being active.
Probing constraints C ⊆ 2^[n], where C is downward closed.
A monotone valuation function f : 2^[n] → R≥0.

Then:
Find a set Probed ⊆ [n] satisfying the constraints, i.e., Probed ∈ C.
Sample a set of active elements A ⊆ [n], i.e., (i ∈ A) ∼ Bernoulli(pi), independently across elements i.
Goal: maximize the expected value EA[f(Probed ∩ A)].
7 / 21
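The objective EA[f(Probed ∩ A)] can be estimated by sampling the active set. A minimal Monte Carlo sketch; the choice f = cardinality (one monotone submodular function) and the probabilities are illustrative:

```python
import random

def probe_value(probed, p, f, trials=100_000, seed=0):
    """Monte Carlo estimate of E_A[f(Probed ∩ A)], where each element i
    is active independently with probability p[i]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        active = {i for i in probed if rng.random() < p[i]}
        total += f(active)
    return total / trials

# f = cardinality (monotone submodular); by linearity the exact
# expectation is 0.5 + 0.5 + 0.4 = 1.4.
p = {0: 0.5, 1: 0.5, 2: 0.4}
estimate = probe_value([0, 1, 2], p, len)
print(estimate)  # ≈ 1.4
```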
Adaptive vs. non-adaptive strategies
Adaptive: a decision tree where every root-leaf path is feasible.
[Figure: a binary decision tree; each probed element branches on a yes/no outcome.]
Non-adaptive: a fixed feasible sequence of elements, probed regardless of outcomes.
8 / 21
Adaptivity gap
Definition (Adaptivity gap): the ratio of the best adaptive to the best non-adaptive strategy,

AdaptivityGap := E[Adap] / E[NA].

Example (p1 = 0.5, p2 = 0.5, p3 = 0.1):
Adaptive: probe element 1; if it is active, probe 3, otherwise probe 2:
E[Adap] = 0.5 · (1 + 0.1) + 0.5 · (0 + 0.5) = 0.8.
Non-adaptive: probe {1, 2}:
E[NA] = 1 − 0.5 · 0.5 = 0.75.

Main question: How large can the gap be?
10 / 21
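The slide's numbers are consistent with an instance in which elements 1 and 2 carry the same item while element 3 carries a different one, and f counts distinct items; this instance is inferred, not stated on the slide. Exact expectations by enumerating all activity outcomes:

```python
from itertools import product

# Inferred instance: elements 1 and 2 share an item, element 3 differs.
p = {1: 0.5, 2: 0.5, 3: 0.1}
item = {1: "a", 2: "a", 3: "b"}

def f(active):
    # Number of distinct items among the probed active elements.
    return len({item[i] for i in active})

def expect(strategy):
    """Exact expectation over all 2^3 activity outcomes; `strategy` maps
    an outcome (element -> active?) to the set it probes."""
    total = 0.0
    for bits in product([0, 1], repeat=3):
        outcome = dict(zip([1, 2, 3], bits))
        prob = 1.0
        for i in [1, 2, 3]:
            prob *= p[i] if outcome[i] else 1 - p[i]
        probed = strategy(outcome)
        total += prob * f({i for i in probed if outcome[i]})
    return total

adap = expect(lambda o: [1, 3] if o[1] else [1, 2])  # probe 1, then branch
na = expect(lambda o: [1, 2])                        # fixed set {1, 2}
print(round(adap, 3), round(na, 3))  # 0.8 0.75
```

The adaptive strategy may peek at element 1's outcome because it probes 1 first; the non-adaptive one ignores outcomes entirely.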
Why care about adaptivity gaps?
Adaptive strategy concerns:
Can be exponentially sized. How do we compute it? How do we represent it?
[Figure: a binary decision tree with yes/no branches.]

Best non-adaptive: select a feasible Probed ∈ C up front to maximize EA∼p[f(Probed ∩ A)].

Benefits:
Easier to represent: just output the set.
Easier to find: g(S) = EA∼p[f(S ∩ A)] is often submodular.

Concern: a large adaptivity gap := E[Adap] / E[NA].
11 / 21
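The claim that g(S) = EA∼p[f(S ∩ A)] is often submodular can be checked by brute force on a small universe; when f is submodular, so is g, since g is a nonnegative combination of the submodular functions S ↦ f(S ∩ T). A sketch on an illustrative coverage instance:

```python
from itertools import combinations, product

def g(S, p, f):
    """Exact g(S) = E_A[f(S ∩ A)] by enumerating all activity outcomes."""
    total = 0.0
    for bits in product([0, 1], repeat=len(p)):
        prob = 1.0
        for i, b in enumerate(bits):
            prob *= p[i] if b else 1 - p[i]
        active = {i for i, b in enumerate(bits) if b}
        total += prob * f(set(S) & active)
    return total

# f = coverage (monotone submodular); the instance is illustrative.
cover = {0: {"x"}, 1: {"x", "y"}, 2: {"y"}}
f = lambda S: len(set().union(*(cover[i] for i in S)))
p = [0.5, 0.3, 0.8]

# Submodularity of g: for all A ⊆ B and e ∉ B, the marginal value of e
# with respect to A is at least its marginal with respect to B.
subsets = [set(c) for r in range(4) for c in combinations(range(3), r)]
ok = all(
    g(A | {e}, p, f) - g(A, p, f) >= g(B | {e}, p, f) - g(B, p, f) - 1e-9
    for A in subsets for B in subsets if A <= B
    for e in range(3) if e not in B
)
print(ok)  # True
```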
Small adaptivity gap
Assume α is small.
An algorithm that α-approximates the best non-adaptive strategy also (α · gap)-approximates the best adaptive strategy.
Bound: α · gap.
Question: For what constraints and functions is the gap small?
12 / 21
Our results
Theorem. The adaptivity gap is 2:
always gap ≤ 2, and there exists an example where gap = 2.
Function: monotone submodular. Constraints: downward closed.

Downward closed: if a set can be probed, then so can each of its subsets.
Submodular: e.g., the number of distinct elements.

Theorem. The adaptivity gap is between Ω(√k) and O(k log k).
Function: weighted rank of a k-matroid intersection. Constraints: downward closed.
14 / 21
Prior work
| Reference | Function | Constraints | Gap LB | Gap UB |
| --- | --- | --- | --- | --- |
| [GN'13] | k1-matroid intersection | k2-matroid intersection | k1 + k2 | O((k1 + k2)^2) |
| [GNS'16] | k-matroid intersection | downward closed | | O(k^4 log kn) |
| [AN'16] | monotone submodular | 1-matroid | e/(e−1) ≈ 1.58 | e/(e−1) ≈ 1.58 |
| [GNS'17] | monotone submodular | downward closed | 1.58 via [AN'16] | 3 |
| this paper | monotone submodular | downward closed | 2 | 2 |
| this paper | k-matroid intersection | downward closed | Ω(√k) | O(k log k) |
15 / 21
Upper bound: Proof Steps
Theorem. The adaptivity gap is at most 2.
Function: monotone submodular. Constraints: downward closed.

Steps:
1. Transform an adaptive strategy into a non-adaptive one.
2. Prove E[NA] ≥ (1/2) · E[Adap].
17 / 21
Upper bound: Two ideas

(1) Take a random root-leaf path (we only need to show that a good non-adaptive strategy exists).
[Figure: a decision tree; each edge is followed with the probability of its outcome, e.g. 0.5/0.5, 0.2/0.8, 0.3/0.7.]
Example: the blue path has probability 0.7 · 0.5 · 0.7; on it, Adap gets 2 while NA gets 0.3 + 0.5 + 0.7 = 1.5.
Goal to show: E[NA] = E[RandomPath] ≥ (1/2) · E[Adap].
18 / 21
(2) Node-by-node induction: convert NA into a "greedy" algorithm so the induction goes through.
Upper bound: Proof
Goal: E[RandomPath] ≥ (1/2) · E[Adap].

Def: I = the root's active outcome in Adap; R = the root's active outcome in NA.

Adaptive:
Adap = E_I[f(I) + Adap(T_I, f_I)]
     ≤ E_{I,R}[f(I ∪ R) + Adap(T_I, f_{I∪R})]
     ≤ E_{I,R}[2 f(R) + Adap(T_I, f_{I∪R})],
where the last step uses subadditivity of f and the fact that I and R are identically distributed.

Non-adaptive:
NA = E_{I,R}[f(R) + NA(T_I, f_R)]
   ≥ E_{I,R}[f(R) + NA(T_I, f_{I∪R})].

Induction hypothesis:
E_{I,R}[Adap(T_I, f_{I∪R})] ≤ 2 · E_{I,R}[NA(T_I, f_{I∪R})].

Combining the three displays gives Adap ≤ 2 · NA.
19 / 21
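The step E[f(I ∪ R)] ≤ 2 E[f(R)] rests on subadditivity, f(I ∪ R) ≤ f(I) + f(R), which holds for any monotone submodular f with f(∅) = 0, plus the fact that I and R are identically distributed. A brute-force check of the subadditivity half on one illustrative coverage function:

```python
from itertools import combinations

def subsets(universe):
    """All subsets of `universe`, as frozensets."""
    return [frozenset(c) for r in range(len(universe) + 1)
            for c in combinations(universe, r)]

# A monotone submodular f with f(∅) = 0 (coverage; instance illustrative).
cover = {0: {"x"}, 1: {"x", "y"}, 2: {"y", "z"}, 3: {"z"}}
f = lambda S: len(set().union(*(cover[i] for i in S)))

# Verify f(I ∪ R) <= f(I) + f(R) for every pair of subsets.
U = range(4)
ok = all(f(I | R) <= f(I) + f(R) for I in subsets(U) for R in subsets(U))
print(ok)  # True
```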
Conclusion
Generalizations of stochastic probing:
Multi-value setting.
k-extendible systems.
XOS functions [GNS'17] (submodular ⊆ XOS ⊆ subadditive).

Open problem: an O(polylog(n)) adaptivity gap for subadditive functions?

Main takeaways:
Stochastic probing captures many natural problems.
It often has a small adaptivity gap.
This justifies focusing on the much simpler non-adaptive strategies.

Thank you! Questions?
21 / 21