Stochastic Optimization is (almost) as easy as Deterministic Optimization
Chaitanya Swamy (joint work with David Shmoys, done while at Cornell University)


TRANSCRIPT

Page 1:

Stochastic Optimization is (almost) as easy as Deterministic Optimization

Chaitanya Swamy

Joint work with David Shmoys, done while at Cornell University

Page 2:

Stochastic Optimization

• A way of modeling uncertainty.
• Exact data is unavailable or expensive – the data is uncertain, specified by a probability distribution. Want to make the best decisions given this uncertainty in the data.
• Applications in logistics, transportation models, financial instruments, network design, production planning, …
• Dates back to the 1950s and the work of Dantzig.

Page 3:

An Example

Components → Assembly → Products: an Assemble-To-Order system.

Demand is a random variable. Want to decide inventory levels now, anticipating future demand. Then we get to know the actual demand, and can adjust inventory levels after observing it.
Excess demand ⇒ can buy more stock, paying more at the last minute.
Low demand ⇒ stock the inventory, paying a holding cost.

Page 4:

Two-Stage Recourse Model

Given: a probability distribution over inputs.
Stage I: Make some advance decisions – plan ahead or hedge against uncertainty.
Observe the actual input scenario.
Stage II: Take recourse. Can augment the earlier solution, paying a recourse cost.

Choose stage I decisions to minimize (stage I cost) + (expected stage II recourse cost).

Page 5:

2-Stage Stochastic Facility Location

Distribution over clients gives the set of clients to serve.

[Figure: the client set D and candidate facilities; the stage I facilities are highlighted.]

Stage I: Open some facilities in advance; pay cost fi for facility i.
Stage I cost = ∑(i opened) fi.

Page 6:

2-Stage Stochastic Facility Location

Distribution over clients gives the set of clients to serve.
Stage I: Open some facilities in advance; pay cost fi for facility i. Stage I cost = ∑(i opened) fi.

How is the probability distribution on clients specified?
• A short (polynomial) list of possible scenarios.
• Independent probabilities that each client exists.
• A black box that can be sampled.

Page 7:

2-Stage Stochastic Facility Location

Distribution over clients gives the set of clients to serve.
Stage I: Open some facilities in advance; pay cost fi for facility i. Stage I cost = ∑(i opened) fi.
Actual scenario A = the set of clients to serve, materializes.
Stage II: Can open more facilities to serve the clients in A; pay cost fi^A to open facility i. Assign the clients in A to facilities.
Stage II cost = ∑(i opened in scenario A) fi^A + (cost of serving the clients in A).
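Putting the two stages together (a worked restatement of the objective on this slide; the notation F for the stage I facilities, F_A for the facilities added in scenario A, and c(·) for the client-assignment cost is mine):

    \min_{F}\; \sum_{i \in F} f_i \;+\; \sum_{A \subseteq D} p_A \Bigl( \min_{F_A}\; \sum_{i \in F_A} f_i^{A} \;+\; c\bigl(A,\; F \cup F_A\bigr) \Bigr)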


Page 9:

Want to decide which facilities to open in stage I.
Goal: Minimize Total Cost = (stage I cost) + E_{A⊆D}[stage II cost for A].
Two extremes:
1. Ultra-conservative: plan for “everything” and buy only in stage I.
2. Defer all decisions to stage II.
Both strategies may be sub-optimal.
Want to prove a worst-case guarantee: give an algorithm that “works well” on any instance, and for any probability distribution.

Page 10:

Approximation Algorithms

Hard to solve the problem exactly – even special cases are #P-hard. Settle for approximate solutions: give a polytime algorithm that always finds near-optimal solutions.
A is an α-approximation algorithm if:
• A runs in polynomial time;
• A(I) ≤ α·OPT(I) on all instances I. α is called the approximation ratio of A.

Page 11:

Previous Models Considered

• Dye, Stougie & Tomasgard:
  – approx. algorithm for a resource provisioning problem.
  – polynomial-scenario model.
• Ravi & Sinha (RS04):
  – other problems in the polynomial-scenario model.
• Immorlica, Karger, Minkoff & Mirrokni:
  – polynomial-scenario model and independent-activation model.
  – proportional costs: (stage II cost) = λ·(stage I cost), e.g., fi^A = λ·fi for each facility i, in each scenario A.
• Gupta, Pal, Ravi & Sinha (GPRS04):
  – black-box model: arbitrary probability distributions.
  – also need proportional costs.

Page 12:

Our Results

• Give the first approximation algorithms for 2-stage stochastic integer optimization problems
  – black-box model – no assumptions on costs.
  For some problems, we improve upon previous results obtained in more restricted models.
• Give a fully polynomial approximation scheme for a large class of 2-stage stochastic linear programs (contrast with Nesterov & Vial ’00; Kleywegt, Shapiro & Homem-de-Mello ’01; Dyer, Kannan & Stougie ’02).
• Give a way to “reduce” stochastic optimization problems to their deterministic versions.

Page 13:

Stochastic Set Cover (SSC)

Universe U = {e1, …, en}, subsets S1, S2, …, Sm ⊆ U; set S has weight wS.
Deterministic problem: pick a minimum-weight collection of sets that covers every element.
Stochastic version: the set of elements to be covered is given by a probability distribution.
 – choose some sets initially, paying wS for set S
 – the subset A ⊆ U to be covered is revealed
 – can pick additional sets, paying wS^A for set S.
Minimize (w-cost of sets picked in stage I) + E_{A⊆U}[w^A-cost of new sets picked for scenario A].

Page 14:

An Integer Program

For simplicity, consider wS^A = WS for every scenario A.
pA : probability of scenario A ⊆ U.
xS : indicates if set S is picked in stage I.
yA,S : indicates if set S is picked in scenario A.

Minimize ∑S wS xS + ∑A⊆U pA ∑S WS yA,S
subject to  ∑S:e∈S xS + ∑S:e∈S yA,S ≥ 1   for each A ⊆ U, e∈A
            xS, yA,S ∈ {0,1}              for each S, A.

A Linear Program: relax to xS, yA,S ≥ 0 for each S, A.
Exponential number of variables and an exponential number of constraints.

Page 15:

A Rounding Theorem

Assume the LP can be solved in polynomial time.
Suppose for the deterministic problem we have an α-approximation algorithm wrt. the LP relaxation, i.e., an algorithm A such that A(I) ≤ α·(optimal LP solution for I) for every instance I.
E.g., “the greedy algorithm” for set cover is a log n-approximation algorithm wrt. the LP relaxation.
Theorem: Can use such an α-approx. algorithm to get a 2α-approximation algorithm for stochastic set cover.

Page 16:

Rounding the LP

Assume the LP can be solved in polynomial time. Suppose we have an α-approximation algorithm wrt. the LP relaxation for the deterministic problem.
Let (x,y) : optimal LP solution with cost OPT. Since
   ∑S:e∈S xS + ∑S:e∈S yA,S ≥ 1 for each A ⊆ U, e∈A,
for every element e, either ∑S:e∈S xS ≥ ½, OR in each scenario A with e∈A, ∑S:e∈S yA,S ≥ ½.
Let E = {e : ∑S:e∈S xS ≥ ½}. So (2x) is a fractional set cover for the set E ⇒ can round it to get an integer set cover S̃ for E of cost ∑S∈S̃ wS ≤ α·(∑S 2wS xS). S̃ is the first-stage decision.

Page 17:

Rounding (contd.)

[Figure: sets and elements; the sets in S̃ and the elements in E are highlighted.]

Consider any scenario A. The elements in A ∩ E are covered.
For every e ∈ A\E, it must be that ∑S:e∈S yA,S ≥ ½. So (2yA) is a fractional set cover for A\E ⇒ can round it to get a set cover of W-cost ≤ α·(∑S 2WS yA,S).
Using this to augment S̃ in scenario A, the expected cost is
   ≤ ∑S∈S̃ wS + 2α·∑A⊆U pA (∑S WS yA,S) ≤ 2α·OPT.
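A compact sketch of this rounding scheme in Python (illustrative only, not code from the paper; `det_round(elements, frac_cover)` is a stand-in for the deterministic α-approximation algorithm, run here on the doubled fractional solutions):

    # Sketch of the 2*alpha rounding for stochastic set cover.
    # Assumes a deterministic alpha-approximation oracle `det_round`; all names are illustrative.

    def stage_one(sets, x, det_round):
        """Pick the first-stage sets from the fractional stage-I solution x."""
        # sets: dict S -> frozenset of elements covered by S;  x: dict S -> x_S
        universe = set().union(*sets.values())
        # E = elements that are at least half-covered by the stage-I fractional solution.
        E = {e for e in universe if sum(x[S] for S in sets if e in sets[S]) >= 0.5}
        # (2x) fractionally covers E, so round it with the deterministic oracle.
        return E, det_round(E, {S: 2 * x[S] for S in sets})

    def stage_two(sets, y_A, A, E, det_round):
        """Augment the first-stage sets once scenario A materializes."""
        residual = set(A) - set(E)          # elements of A not handled in stage I
        # (2 y_A) fractionally covers A \ E, so round it with the same oracle.
        return det_round(residual, {S: 2 * y_A[S] for S in sets})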

Page 18:

Rounding (contd.)

An α-approx. algorithm for the deterministic problem gives a 2α-approximation guarantee for the stochastic problem. In the polynomial-scenario model, this gives simple polytime approximation algorithms for covering problems:
• 2log n-approximation for SSC.
• 4-approximation for stochastic vertex cover.
• 4-approximation for stochastic multicut on trees.
Ravi & Sinha gave a log n-approximation algorithm for SSC and a 2-approximation algorithm for stochastic vertex cover in the polynomial-scenario model.


Page 20:

A Compact Formulation

pA : probability of scenario A ⊆ U.
xS : indicates if set S is picked in stage I.

Minimize h(x) = ∑S wS xS + f(x)   s.t. xS ≥ 0 for each S        (SSC-P)

where f(x) = ∑A⊆U pA fA(x) and
fA(x) = min. ∑S WS yA,S
        s.t. ∑S:e∈S yA,S ≥ 1 – ∑S:e∈S xS for each e∈A
             yA,S ≥ 0 for each S.

Equivalent to the earlier LP. Each fA(x) is convex, so f(x) and h(x) are convex functions.
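To make the recourse function concrete, here is a small Python sketch (my own illustration, not from the talk) that evaluates fA(x) for a single scenario A by solving the inner LP with scipy.optimize.linprog:

    import numpy as np
    from scipy.optimize import linprog

    def f_A(x, sets, W, A):
        """Scenario-A recourse cost f_A(x): min sum_S W_S y_S s.t. the residual covering constraints."""
        names = list(sets)                          # fix an ordering of the sets
        c = np.array([W[S] for S in names])         # objective: sum_S W_S * y_S
        A_ub, b_ub = [], []
        for e in A:
            # sum_{S: e in S} y_S >= 1 - sum_{S: e in S} x_S, written as <= for linprog
            row = [-1.0 if e in sets[S] else 0.0 for S in names]
            rhs = 1.0 - sum(x[S] for S in names if e in sets[S])
            A_ub.append(row)
            b_ub.append(-rhs)
        if not A_ub:                                # empty scenario: nothing to cover
            return 0.0
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * len(names), method="highs")
        return res.fun if res.success else float("inf")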

Page 21:

The General Strategy

1. Get a (1+ε)-optimal solution x to the convex program using the ellipsoid method.
2. Convert the fractional solution x to an integer solution:
   – decouple stage I and the stage II scenarios;
   – use the α-approx. algorithm for the deterministic problem to solve the subproblems.
Obtain a c·α-approximation algorithm for the stochastic integer problem.

Page 22:

The Ellipsoid Method

Min c·x subject to x∈P.

Ellipsoid ≡ squashed sphere. Start with a ball containing the polytope P.
yi = center of the current ellipsoid.
If yi is infeasible, use a violated inequality to chop off the infeasible half-ellipsoid.

Page 23:

The Ellipsoid Method (contd.)

New ellipsoid = the minimum-volume ellipsoid containing the “unchopped” half-ellipsoid.

Page 24:

The Ellipsoid Method (contd.)

If yi ∈ P, use the objective function cut c·x ≤ c·yi to chop off part of the polytope and the half-ellipsoid.


Page 26:

The Ellipsoid Method (contd.)

x1, x2, …, xk: the feasible points (lying in P) encountered during the run; c·xk is close to the optimal value c·x*.
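A minimal sketch of one central-cut ellipsoid update in Python (my own illustration of the textbook step, not code from the talk); the same update is used whether the cut vector d comes from a violated inequality, the objective function, or a subgradient:

    import numpy as np

    def ellipsoid_step(y, Q, d):
        """One central-cut update (dimension n >= 2).

        Current ellipsoid: {x : (x - y)^T Q^{-1} (x - y) <= 1}, center y, PSD matrix Q.
        Keep the half-ellipsoid {x : d.(x - y) <= 0} and return the minimum-volume
        ellipsoid (new center, new matrix) containing it.
        """
        y, Q, d = (np.asarray(a, dtype=float) for a in (y, Q, d))
        n = len(y)
        Qd = Q @ d
        step = Qd / np.sqrt(d @ Qd)                        # normalized cut direction
        y_new = y - step / (n + 1)                         # shift the center into the kept half
        Q_new = (n * n / (n * n - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(step, step))
        return y_new, Q_new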

Page 27:

Ellipsoid for Convex Optimization

Min h(x) subject to x∈P.

Start with a ball containing the polytope P. yi = center of the current ellipsoid.
If yi is infeasible, use a violated inequality.
If yi ∈ P – how to make progress? Add the inequality h(x) ≤ h(yi)? Separation becomes difficult.

Page 28:

Ellipsoid for Convex Optimization (contd.)

d ∈ ℝ^m is a subgradient of h(.) at u if, for every v, h(v) – h(u) ≥ d·(v – u).
Let d = a subgradient at yi. Use the subgradient cut d·(x – yi) ≤ 0 and generate the new minimum-volume ellipsoid.

Page 29:

Ellipsoid for Convex Optimization (contd.)

x1, x2, …, xk: the points found in P. Can show that min_{i=1…k} h(xi) ≤ OPT + ρ, where ρ is the (arbitrarily small) additive error tolerance.

Page 30:

Ellipsoid for Convex Optimization (contd.)

Problem: the subgradient is difficult to compute. Use an approximate version instead.
d' ∈ ℝ^m is an ε-subgradient of h(.) at u if, for every v∈P, h(v) – h(u) ≥ d'·(v – u) – ε·h(u).
Let d' = an ε-subgradient at yi. Use the ε-subgradient cut d'·(x – yi) ≤ 0.
x1, x2, …, xk: the points found in P. Can show that min_{i=1…k} h(xi) ≤ OPT/(1–ε) + ρ.

Page 31:

Subgradients and ε-subgradients

Vector d is a subgradient of h(.) at u if, for every v, h(v) – h(u) ≥ d·(v – u).
Vector d' is an ε-subgradient of h(.) at u if, for every v∈P, h(v) – h(u) ≥ d'·(v – u) – ε·h(u).
Here P = {x : 0 ≤ xS ≤ 1 for each set S} and
   h(x) = ∑S wS xS + ∑A⊆U pA fA(x) = w·x + ∑A⊆U pA fA(x).
Lemma: Let d be a subgradient at u, and let d' be a vector such that dS – ε·wS ≤ d'S ≤ dS for each set S. Then d' is an ε-subgradient at the point u.
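A short proof sketch of the lemma (my own reconstruction of the one-line argument; it only uses 0 ≤ dS – d'S ≤ ε·wS, that u, v ∈ P ⊆ [0,1]^m, and that h(u) ≥ w·u):

    h(v) - h(u) \;\ge\; d\cdot(v-u)
                \;=\; d'\cdot(v-u) + (d-d')\cdot(v-u)
                \;\ge\; d'\cdot(v-u) - \sum_S (d_S - d'_S)\, u_S
                \;\ge\; d'\cdot(v-u) - \varepsilon \sum_S w_S u_S
                \;\ge\; d'\cdot(v-u) - \varepsilon\, h(u).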

Page 32:

Getting a “nice” subgradient

h(x) = w·x + ∑A⊆U pA fA(x)

fA(x) = min. ∑S WS yA,S
        s.t. ∑S:e∈S yA,S ≥ 1 – ∑S:e∈S xS   ∀ e∈A
             yA,S ≥ 0                       ∀ S

Page 33:

Getting a “nice” subgradient (contd.)

h(x) = w·x + ∑A⊆U pA fA(x)

fA(x) = min. ∑S WS yA,S  s.t. ∑S:e∈S yA,S ≥ 1 – ∑S:e∈S xS ∀ e∈A;  yA,S ≥ 0 ∀ S
      = max. ∑e∈A (1 – ∑S:e∈S xS) zA,e  s.t. ∑e∈A∩S zA,e ≤ WS ∀ S;  zA,e ≥ 0 ∀ e∈A      (LP duality)

Page 34:

Getting a “nice” subgradient (contd.)

fA(x) = min. ∑S WS yA,S  s.t. ∑S:e∈S yA,S ≥ 1 – ∑S:e∈S xS ∀ e∈A;  yA,S ≥ 0 ∀ S
      = max. ∑e (1 – ∑S:e∈S xS) zA,e  s.t. ∑e∈S zA,e ≤ WS ∀ S;  zA,e = 0 ∀ e∉A;  zA,e ≥ 0 ∀ e

Consider a point u ∈ ℝ^m and let zA be the optimal dual solution for A at u, so fA(u) = ∑e (1 – ∑S:e∈S uS) zA,e.
For any other point v, zA is a feasible dual solution for A, so fA(v) ≥ ∑e (1 – ∑S:e∈S vS) zA,e.
We get h(v) – h(u) ≥ ∑S (wS – ∑A⊆U pA ∑e∈S zA,e)(vS – uS) = d·(v – u), where dS = wS – ∑A⊆U pA ∑e∈S zA,e.
So d is a subgradient of h(.) at the point u.

Page 35:

Computing an ε-Subgradient

Given a point u ∈ ℝ^m, let zA be the optimal dual solution for A at u. The vector d with
   dS = wS – ∑A⊆U pA ∑e∈S zA,e = ∑A⊆U pA (wS – ∑e∈S zA,e)
is a subgradient at u.
Goal: get d' such that dS – ε·wS ≤ d'S ≤ dS for each S.
For each S, –WS ≤ dS ≤ wS. Let λ = maxS WS/wS.
• Sample once from the black box to get a random scenario A, and compute XS = wS – ∑e∈S zA,e for each S. Then E[XS] = dS and Var[XS] ≤ WS².
• Sample O(λ²/ε² · log(m/δ)) times to compute d' such that Pr[∀ S, dS – ε·wS ≤ d'S ≤ dS] ≥ 1 – δ.
  ⇒ d' is an ε-subgradient at u with probability ≥ 1 – δ.
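A sketch of this estimator in Python (illustrative; `sample_scenario()` and `optimal_dual(A, u)` stand in for the black box and for the dual LP solve, and the downward shift by ε·wS/2 is one simple way to land in the required one-sided window with high probability):

    import math

    def approx_subgradient(u, sets, w, W, sample_scenario, optimal_dual, eps, delta):
        """Estimate an eps-subgradient of h(.) at u by sampling scenarios."""
        names = list(sets)
        lam = max(W[S] / w[S] for S in names)          # lambda = max_S W_S / w_S
        # O(lam^2/eps^2 * log(m/delta)) samples; the constant 4 is illustrative.
        N = math.ceil(4 * lam ** 2 / eps ** 2 * math.log(2 * len(names) / delta))
        avg = {S: 0.0 for S in names}
        for _ in range(N):
            A = sample_scenario()                      # one draw from the black box
            z = optimal_dual(A, u)                     # dict e -> z_{A,e}
            for S in names:
                # one sample of X_S = w_S - sum_{e in S} z_{A,e};  E[X_S] = d_S
                avg[S] += (w[S] - sum(z.get(e, 0.0) for e in sets[S])) / N
        # shift each coordinate down so that, w.h.p., d_S - eps*w_S <= d'_S <= d_S
        return {S: avg[S] - 0.5 * eps * w[S] for S in names}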

Page 36:

Putting it all together

Min h(x) subject to x∈P. We can compute ε-subgradients.
Run the ellipsoid algorithm. Given yi = the center of the current ellipsoid:
 – if yi is infeasible, use a violated inequality as a cut;
 – if yi ∈ P, use an ε-subgradient cut.
Continue with the smaller ellipsoid.
Generate points x1, x2, …, xk in P and return x̄ = argmin_{i=1…k} h(xi). Get that h(x̄) ≤ OPT/(1–ε) + ρ.

Page 37:

One last hurdle

Cannot evaluate h(.) – so how do we compute x̄ = argmin_{i=1…k} h(xi)?
Will instead find a point x̄ in the convex hull of x1, …, xk such that h(x̄) is close to min_{i=1…k} h(xi).
Take points x1 and x2. Do a bisection search using ε-subgradients to find a point x on the x1–x2 line segment with value close to min(h(x1), h(x2)): at the current point y, an ε-subgradient d' with d'·(x – y) ≥ 0 certifies that h cannot decrease much in that direction, so that half of the segment can be discarded.


Page 40:

One last hurdle (contd.)

Stop when the search interval is small enough; set x = either end point of the remaining segment.

Page 41:

One last hurdle (contd.)

Iterate using x and x3, …, xk, updating x along the way.


Page 44:

One last hurdle (contd.)

Can show that h(x) is close to min_{i=1…k} h(xi).
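A sketch of the bisection step in Python (illustrative; `eps_subgradient(y)` stands in for the sampled ε-subgradient at y, and the stopping rule and precise guarantee are as claimed in the talk, not re-derived here):

    def bisect_on_segment(x1, x2, eps_subgradient, tol=1e-3, max_iter=100):
        """Find a point on the segment [x1, x2] whose h-value is close to min(h(x1), h(x2)).

        eps_subgradient(y): returns a vector d' that is an eps-subgradient of h at y.
        At the midpoint we keep the half of the segment in which h may still decrease.
        """
        lo, hi = list(x1), list(x2)
        for _ in range(max_iter):
            if sum((a - b) ** 2 for a, b in zip(lo, hi)) <= tol ** 2:
                break                                            # interval small enough
            mid = [(a + b) / 2 for a, b in zip(lo, hi)]
            d = eps_subgradient(mid)
            slope = sum(di * (b - a) for di, a, b in zip(d, lo, hi))   # d' along lo -> hi
            if slope >= 0:
                hi = mid        # h cannot decrease much towards hi: keep the lo half
            else:
                lo = mid        # h cannot decrease much towards lo: keep the hi half
        return lo               # either end point of the tiny remaining segment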

Page 45:

Finally,

Get a solution x with h(x) close to OPT.
Sample initially to detect whether OPT = Ω(1/ε) – this allows one to get a (1+ε)·OPT guarantee.
Theorem: (SSC-P) can be solved to within a factor of (1+ε) in polynomial time, with high probability.
Gives a (2log n + ε)-approximation algorithm for the stochastic set cover problem.

Page 46:

A Solvable Class of Stochastic LPs

Minimize h(x) = w·x + ∑A⊆U pA fA(x)   s.t. x ∈ ℝ^m, x ≥ 0, x ∈ P,
where fA(x) = min. wA·yA + cA·rA
              s.t. BA rA ≥ jA
                   DA rA + TA yA ≥ lA – TA x
                   yA ∈ ℝ^m, rA ∈ ℝ^n, yA ≥ 0, rA ≥ 0.

Theorem: Can get a (1+ε)-optimal solution for this class of stochastic programs in polynomial time.

Page 47:

Sample Average Approximation

Sample Average Approximation (SAA) method:
 – sample N times from the scenario distribution initially;
 – solve the 2-stage problem, estimating pA by the frequency of occurrence of scenario A.
How large should N be?
Kleywegt, Shapiro & Homem-de-Mello (KSH01):
 – bound N by the variance of a certain quantity – need not be polynomially bounded, even for our class of programs.
Shmoys & Swamy (recent work):
 – show, using ε-subgradients, that for our class N can be poly-bounded.
Nemirovski & Shapiro:
 – show that for SSC with non-scenario-dependent costs, KSH01 gives a polynomial bound on N for a (preprocessing + SAA) algorithm.
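A minimal sketch of the SAA recipe in Python (illustrative; `sample_scenario()` is the black box and `solve_two_stage(scenarios, probs)` is a hypothetical solver for the resulting polynomial-scenario 2-stage problem):

    from collections import Counter

    def saa_first_stage(sample_scenario, solve_two_stage, N):
        """Replace the true scenario distribution by an empirical one and solve that."""
        # sample_scenario() must return something hashable, e.g. a frozenset of elements.
        counts = Counter(sample_scenario() for _ in range(N))
        scenarios = list(counts)
        probs = [counts[A] / N for A in scenarios]   # estimate p_A by empirical frequency
        return solve_two_stage(scenarios, probs)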

Page 48:

Summary of Results

• Give the first FPRAS to solve a class of stochastic linear programs.
• Obtain the first approximation algorithms for 2-stage discrete stochastic problems – no assumptions on costs or the probability distribution:
  – 2log n-approx. for set cover;
  – 4-approx. for vertex cover (VC) and multicut on trees.
  GPRS04 gave an 8-approx. for VC under proportional costs in the black-box model.

Page 49:

Results (contd.)

 – 3.23-approx. for uncapacitated facility location (FL). Get constant guarantees for several variants, such as FL with penalties, or soft capacities, or services. Improve upon the results of GPRS04 and RS04 obtained in restricted models.
 – (1+ε)-approx. for multicommodity flow. Immorlica et al. gave an optimal algorithm for the single-commodity case in the polynomial-“scenario” (demand) setting.
• Give a general technique to lift deterministic guarantees to the stochastic setting.

Page 50:

Open Questions

• Practical impact: can one use ε-subgradients in other deterministic optimization methods, e.g., cutting-plane methods? Interior-point algorithms?
• Multi-stage stochastic optimization.
• Characterize which deterministic problems are “easy to solve” in the stochastic setting, and which are “hard”.

Page 51:

Thank You.