
Introduction to robust optimization

Michael POSS

May 30, 2017

Michael POSS Introduction to robust optimization May 30, 2017 1 / 53

Outline

1 General overview

2 Static problems

3 Adjustable RO

4 Two-stages problems with real recourse

5 Multi-stage problems with real recourse

6 Multi-stage with integer recourse


Robust optimization

1 How much do we know?

[Figure: a spectrum of models ordered by knowledge of the uncertain parameter u — (Deterministic) mean value E[u]; Stochastic: the distribution of u, hence of f(u), is known; Distributionally robust; Robust: only a set U with u ∈ U is known.]

2 Worst-case approach

static VS adjustable

Static decisions 99K uncertainty revealed

Complexity Easy for LP ©, NP-hard for combinatorial optimization §

MILP reformulation ©

Two-stages decisions 99K uncertainty revealed 99K more decisions

Complexity NP-hard for LP §, decomposition algorithms ©

Multi-stages decisions 99K uncertainty 99K decisions 99K uncertainty 99K · · ·Complexity NP-hard for LP §, cannot be solved to optimality §


discrete uncertainty VS convex uncertainty

U = vertices(P)

Observation

In many cases, U and P yield the same robust problem (U ∼ P).

Exceptions:

robust constraints f(x, u) ≤ b with f non-concave in u

multi-stage problems with integer adjustable variables


Outline

1 General overview

2 Static problems

3 Adjustable RO

4 Two-stages problems with real recourse

5 Multi-stage problems with real recourse

6 Multi-stage with integer recourse


Robust combinatorial optimization

Combinatorial problem

X ⊆ {0,1}^n, u_0 ∈ R^n

CO: min_{x∈X} u_0^T x.

Robust counterparts with cost uncertainty

1 X ⊆ {0,1}^n, U ⊂ R^n

U-CO: min_{x∈X} max_{u_0∈U} u_0^T x

2 Regret version:

min_{x∈X} max_{u_0∈U} ( u_0^T x − min_{y∈X} u_0^T y ) = min_{x∈X} max_{u_0∈U} min_{y∈X} ( u_0^T x − u_0^T y )

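For a discrete uncertainty set, both counterparts can be checked by brute force over X and U. A minimal sketch on a selection problem (the sets and costs below are purely illustrative):

```python
from itertools import combinations

# Selection problem: pick p of n items; X = all feasible selections
n, p = 4, 2
U = [(3, 1, 4, 2), (1, 5, 2, 3)]  # discrete uncertainty set: two cost scenarios

X = [frozenset(S) for S in combinations(range(n), p)]

def cost(S, u):
    return sum(u[i] for i in S)

# U-CO: minimize the worst-case cost
worst = {S: max(cost(S, u) for u in U) for S in X}
x_minmax = min(worst, key=worst.get)

# Regret version: minimize the worst-case gap to the scenario-wise optimum
best = {u: min(cost(S, u) for S in X) for u in U}
regret = {S: max(cost(S, u) - best[u] for u in U) for S in X}
x_regret = min(regret, key=regret.get)

print(sorted(x_minmax), worst[x_minmax])   # min-max solution and value
print(sorted(x_regret), regret[x_regret])  # min-max-regret solution and value
```

Note that the two criteria may select different solutions in general; on this instance they happen to agree.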

General robust counterpart

X = X^comb ∩ X^num:

X^comb Combinatorial nature, known.

X^num Numerical uncertainty: u_j^T x ≤ b_j, j = 1,...,m, with u_j uncertain.

Robust counterpart

U-CO: min { z : (1)
u_j^T x ≤ b_j, j = 1,...,m, u_j ∈ U_j, (2)
u_0^T x ≤ z, u_0 ∈ U_0, (3)
a_k^T x ≤ d_k, k = 1,...,ℓ, (4)
x ∈ {0,1}^n } (5)

Examples: knapsack, constrained shortest path


discrete uncertainty: U-CO is hard [Kouvelis and Yu, 2013]

Theorem

The robust shortest path, assignment, spanning tree, ... are NP-hard even when |U| = 2.

Proof.

1 SELECTION PROBLEM: min_{S⊆N, |S|=p} Σ_{i∈S} u_i

2 ROBUST SELECTION PROBLEM: min_{S⊆N, |S|=p} max_{u∈U} Σ_{i∈S} u_i

3 PARTITION PROBLEM: min_{S⊆N, |S|=|N|/2} max( Σ_{i∈S} a_i, Σ_{i∈N\S} a_i )

4 Reduction: p = |N|/2 and U = {u^1, u^2} such that

u^1_i = a_i and u^2_i = (2/|N|) Σ_k a_k − a_i

Since |S| = |N|/2, we get Σ_{i∈S} u^2_i = Σ_k a_k − Σ_{i∈S} a_i = Σ_{i∈N\S} a_i, hence

⇒ max_{u∈U} Σ_{i∈S} u_i = max( Σ_{i∈S} a_i, Σ_{i∈N\S} a_i )

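The key identity in step 4 is easy to check numerically: with u² built as above, the robust objective of any half-size subset S equals the partition objective. A quick sketch on random data (exact arithmetic via Fraction to avoid rounding):

```python
from fractions import Fraction
from itertools import combinations
import random

random.seed(0)
N = range(6)
a = [random.randint(1, 9) for _ in N]

# Reduction scenarios: u1_i = a_i and u2_i = (2/|N|) * sum(a) - a_i
u1 = a[:]
u2 = [Fraction(2 * sum(a), len(a)) - ai for ai in a]

# For every half-size subset S, the robust cost equals the partition objective
for S in combinations(N, len(a) // 2):
    robust = max(sum(u1[i] for i in S), sum(u2[i] for i in S))
    partition = max(sum(a[i] for i in S), sum(a[i] for i in N if i not in S))
    assert robust == partition

print("identity verified for all half-size subsets")
```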

polyhedral uncertainty: U-CO is still hard (but solvable)

Theorem

The robust shortest path, assignment, spanning tree, ... are NP-hard even when U has a compact description.

Proof.

1 U = conv(u^1, u^2) ⇒ n equalities and 2 inequalities

2 u^T x ≤ b ∀u ∈ U ⇔ u^T x ≤ b ∀u ∈ ext(U)

Theorem (Ben-Tal and Nemirovski [1998])

Problem U-CO is equivalent to a mixed-integer linear program.


Dualization - cost uncertainty

Theorem (Ben-Tal and Nemirovski [1998])

Consider α ∈ R^{l×n} and β ∈ R^l that define the polytope

U := { u ∈ R^n_+ : α_k^T u ≤ β_k, k = 1,...,l }.

Problem min_{x∈X} max_{u∈U} u^T x is equivalent to a compact MILP.

Proof.

Dualizing the inner maximization:

min_{x∈X} max_{u∈U} u^T x = min_{x∈X} min { Σ_{k=1}^l β_k z_k : Σ_{k=1}^l α_{ki} z_k ≥ x_i, i = 1,...,n, z ≥ 0 }

Robust constraint (e.g. the knapsack)

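The dualization step can be checked numerically on a toy polytope with an LP solver; a sketch using scipy (the polytope and the vector x below are illustrative, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative polytope U = {u >= 0 : alpha_k^T u <= beta_k} and a fixed x
alpha = np.array([[1.0, 1.0, 1.0],   # budget row
                  [1.0, 0.0, 0.0],   # per-component upper bounds
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
beta = np.array([2.0, 1.0, 1.0, 1.0])
x = np.array([1.0, 1.0, 0.0])

# Inner maximization: max_u u^T x  (linprog minimizes, so negate the objective)
primal = linprog(-x, A_ub=alpha, b_ub=beta, bounds=[(0, None)] * 3)

# Dual: min beta^T z  s.t.  alpha^T z >= x, z >= 0
dual = linprog(beta, A_ub=-alpha.T, b_ub=-x, bounds=[(0, None)] * 4)

print(round(-primal.fun, 6), round(dual.fun, 6))  # equal by LP duality
```

In the MILP reformulation, the dual variables z simply replace the inner maximization, so the min-min collapses to a single minimization over (x, z).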

Cutting plane algorithms [Bertsimas et al., 2016]

U*_0 ⊂ U_0, U*_j ⊂ U_j

Master problem

MP: min { z :
u_j^T x ≤ b_j, j = 1,...,m, u_j ∈ U*_j,
u_0^T x ≤ z, u_0 ∈ U*_0,
a_k^T x ≤ d_k, k = 1,...,ℓ,
x ∈ {0,1}^n }

1 Solve MP → get x, z

2 Solve max_{u_0∈U_0} u_0^T x and max_{u_j∈U_j} u_j^T x → get u_0, ..., u_m

3 If u_0^T x > z or u_j^T x > b_j then U*_0 ← U*_0 ∪ {u_0} and U*_j ← U*_j ∪ {u_j}; go back to 1

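A minimal sketch of the loop for a selection problem under budgeted cost uncertainty, with the master problem solved by enumeration and the separation step solved greedily (all data and problem sizes below are illustrative):

```python
from itertools import combinations

# Selection problem under budgeted cost uncertainty (illustrative data)
n, p, Gamma = 5, 2, 1
u_bar = [4, 3, 5, 2, 6]      # nominal costs
u_hat = [2, 4, 1, 3, 2]      # deviations
X = list(combinations(range(n), p))

def separate(S):
    """Worst-case scenario for S: raise the Gamma largest deviations in S."""
    dev = sorted(S, key=lambda i: u_hat[i], reverse=True)[:Gamma]
    return tuple(u_bar[i] + (u_hat[i] if i in dev else 0) for i in range(n))

U_star = [tuple(u_bar)]       # start from the nominal scenario
while True:
    # Master: min over X of the max cost over the scenarios generated so far
    S, z = min(((S, max(sum(u[i] for i in S) for u in U_star)) for S in X),
               key=lambda t: t[1])
    u = separate(S)
    if sum(u[i] for i in S) <= z:   # no violated scenario: (x, z) is optimal
        break
    U_star.append(u)

print(sorted(S), z)
```

Each iteration either proves optimality or adds a scenario that cuts off the current master solution, so the loop terminates after finitely many cuts.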

Simpler structure: UΓ-robust combinatorial optimization

U = vertices(P): good, but need “simpler” P

UΓ = { u : ū_i ≤ u_i ≤ ū_i + û_i, i = 1,...,n, Σ_{i=1}^n (u_i − ū_i)/û_i ≤ Γ }

[Figure: the box [ū_1, ū_1 + û_1] × [ū_2, ū_2 + û_2] cut by the budget constraint for Γ = 2, 1.5 and 1.]

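For a fixed x, the inner maximization over UΓ has a simple greedy solution: spend the budget Γ on the selected items with the largest deviations û_i (with a fractional part when Γ is not integer). A sketch with illustrative data:

```python
def worst_case_cost(x, u_bar, u_hat, Gamma):
    """max over U^Gamma of u^T x: spend the deviation budget greedily
    on the selected items with the largest deviations u_hat."""
    nominal = sum(ub for ub, xi in zip(u_bar, x) if xi)
    devs = sorted((uh for uh, xi in zip(u_hat, x) if xi), reverse=True)
    full, frac = int(Gamma), Gamma - int(Gamma)
    extra = sum(devs[:full]) + (frac * devs[full] if full < len(devs) else 0)
    return nominal + extra

# Items 0, 1, 3 selected: nominal 4+3+2, deviations 2, 4, 3, budget 1.5
print(worst_case_cost([1, 1, 0, 1], [4, 3, 5, 2], [2, 4, 1, 3], 1.5))
```

This greedy rule is exact because the inner problem is a linear program over the budget polytope, whose optimum is attained at a vertex.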

Iterative algorithms for UΓ

UΓ = vertices( { u : ū_i ≤ u_i ≤ ū_i + û_i, i = 1,...,n, Σ_{i=1}^n (u_i − ū_i)/û_i ≤ Γ } )

[Figure: vertices of the budgeted set for Γ = 3, Γ = 2.5 and Γ = 2.]

Theorem (Bertsimas and Sim [2003], Goetzmann et al. [2011], Alvarez-Miranda et al. [2013], Lee and Kwon [2014])

Cost uncertainty: UΓ-CO ⇒ solving ∼ n/2 problems CO.

Numerical uncertainty: UΓ-CO ⇒ solving ∼ (n/2)^m problems CO.

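For cost uncertainty and integer Γ, the Bertsimas–Sim result reduces UΓ-CO to about n nominal problems: for each threshold θ taken among the deviations û_i (plus 0), solve a nominal CO with modified costs ū_i + max(û_i − θ, 0) and add the fixed charge Γθ. A sketch on a small selection problem (data illustrative, the nominal problem solved by enumeration):

```python
from itertools import combinations

# Bertsimas-Sim: robust min-max over U^Gamma via ~n nominal problems
n, p, Gamma = 5, 2, 1
u_bar = [4, 3, 5, 2, 6]
u_hat = [2, 4, 1, 3, 2]
X = list(combinations(range(n), p))

def nominal_min(cost):
    """Nominal CO oracle: here, brute-force selection of p items."""
    return min(sum(cost[i] for i in S) for S in X)

# Candidate thresholds: every deviation value plus the 0 sentinel
thetas = sorted(set(u_hat), reverse=True) + [0]

Z = min(
    Gamma * theta
    + nominal_min([u_bar[i] + max(u_hat[i] - theta, 0) for i in range(n)])
    for theta in thetas
)
print(Z)  # robust optimal value
```

Replacing the brute-force oracle with a polynomial algorithm for the nominal problem gives the polynomial robust algorithm of the theorem.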

Other convex U (recall that U ⇔ conv(U))

Total deviation: { ū ≤ u ≤ ū + û, Σ_{i=1}^n (u_i − ū_i) ≤ Ω } ⇒ solving 2 problems CO

Knapsack uncertainty [Poss, 2017]: { ū ≤ u ≤ ū + û, Σ_{i=1}^n a_i u_i ≤ b } ⇒ solving n problems CO

Decision-dependent [Poss, 2013, 2014, Nohadani and Sharma, 2016]: { ū ≤ u ≤ ū + û, Σ_{i=1}^n a_i u_i ≤ b(x) } ⇒ solving n problems CO

Axis-parallel ellipsoids [Mokarami and Hashemi, 2015]: { Σ_{i=1}^n ((u_i − ū_i)/û_i)^2 ≤ Ω } ⇒ solving n·max_i û_i problems CO


Dynamic Programming [Klopfenstein and Nace, 2008, Monaci et al., 2013, Poss, 2014]

[Figure: a state s reached from predecessor states p(s, i), p(s, i′), p(s, i″) through the decisions i, i′, i″ ∈ q(s).]

Classical recurrence

F(s) = cheapest cost up to state s; F(O) = 0

F(s) = min_{i∈q(s)} { F(p(s, i)) + ū_i }, s ∈ S \ {O}

Robust recurrence

F(s, α) = cheapest cost up to state s with α remaining deviations; F(O, α) = 0

F(s, α) = min_{i∈q(s)} max( F(p(s, i), α) + ū_i, F(p(s, i), α − 1) + ū_i + û_i ), s ∈ S \ {O}, 1 ≤ α ≤ Γ,

F(s, 0) = min_{i∈q(s)} { F(p(s, i), 0) + ū_i }, s ∈ S \ {O}.

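The robust recurrence can be run directly on a small instance; a sketch for the robust shortest path on a DAG with nominal lengths ū, deviations û and budget Γ (the instance below is made up):

```python
from functools import lru_cache

# Robust shortest path on a DAG with deviation budget Gamma
# edges[node] = list of (successor, u_bar, u_hat)
edges = {
    "o": [("a", 2, 3), ("b", 4, 1)],
    "a": [("d", 5, 2)],
    "b": [("d", 1, 4)],
}
preds = {}  # predecessors: preds[node] = [(pred, u_bar, u_hat)]
for v, outs in edges.items():
    for w, ub, uh in outs:
        preds.setdefault(w, []).append((v, ub, uh))

@lru_cache(maxsize=None)
def F(s, alpha):
    """Cheapest robust cost from origin 'o' to s with alpha deviations left."""
    if s == "o":
        return 0
    if alpha == 0:
        return min(F(p, 0) + ub for p, ub, uh in preds[s])
    return min(
        max(F(p, alpha) + ub, F(p, alpha - 1) + ub + uh)
        for p, ub, uh in preds[s]
    )

print(F("d", 1))  # worst-case length of the robust shortest o-d path, Gamma = 1
```

Here the path o-a-d has worst-case length 7 + 3 = 10 and o-b-d has 5 + 4 = 9, so the recurrence returns 9.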

Are all problems easy?

Hard problems must have one of

1 a non-constant number of robust “linear” constraints

2 “non-linear” constraints/cost function

Theorem (Pessoa et al. [2015])

The UΓ-robust shortest path with time windows is NP-hard in the strong sense.

Theorem (Bougeret et al. [2016])

Minimizing the weighted sum of completion times is NP-hard in the strong sense.


UΓ-TWSP is NP-hard in the strong sense

ROBUST PATH WITH DEADLINES (UΓ-PD)
Input: Graph D = (N, A), û_a, Γ, ū = 0.
Question: Does there exist a path p = o i_2 i_3 · · · d such that

Σ_{k=1}^{h−1} u_{i_k i_{k+1}} ≤ b_{i_h} for each h = 1,...,l and u ∈ UΓ?

INDEPENDENT SET (IS)
Input: An undirected graph G = (V, E) and a positive integer K.
Question: Does there exist W ⊆ V such that |W| ≥ K and {i, j} ⊄ W for each {i, j} ∈ E?


We are given an instance of IS with |V| = n nodes and |E| = m

[Figure: a chain of nodes 0, 1, 2, ..., n, n+1, ..., n+m with a pair of parallel arcs p_{2i−1}, p_{2i} between nodes i−1 and i, followed by arcs p_{2n+1}, p_{2n+2}, ..., p_{2n+m} on the tail n, n+1, ..., n+m.]

Set W ⊆ V corresponds to path p_W:

p_W contains p_{2i} iff i ∈ W

p_W contains p_{2i−1} iff i ∉ W

Observation

Σ_{k=1}^{h−1} u_{i_k i_{k+1}} ≤ b_{i_h} ∀u ∈ UΓ ⇔ max_{u∈UΓ} Σ_{k=1}^{h−1} u_{i_k i_{k+1}} ≤ b_{i_h}

Parameters û and b are chosen such that

1 max_{u∈UΓ} Σ_{k=1}^{n−1} u_{i_k i_{k+1}} ≤ b_n for p_W ⇔ |W| ≥ K

2 max_{u∈UΓ} Σ_{k=1}^{n+h−1} u_{i_k i_{k+1}} ≤ b_{n+h} for p_W ⇔ e_h = {i, j} ⊄ W


Cutting plane algorithms 2

Master problem

MP: min { c^T x :
f(x, u) ≤ 0, u ∈ U*,
a_k^T x ≤ d_k, k = 1,...,ℓ,
x ∈ {0,1}^n }

1 Solve MP → get x; solve max_{u∈U} f(x, u) → get u

2 If f(x, u) > 0 then U* ← U* ∪ {u}; go back to 1

Examples [Agra et al., 2016]

Minimizing tardiness: f(x, u) = Σ_{i=1}^n w_i max{ C_i(x, u) − d_i, 0 }

Lot-sizing: f(x, u) = Σ_{i=1}^n max( h_i (Σ_{j=1}^i x_j − Σ_{j=1}^i u_j), p_i (Σ_{j=1}^i u_j − Σ_{j=1}^i x_j) )

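The lot-sizing function compares cumulative production with cumulative demand and charges, in each period, the worse of the holding cost on the surplus and the backlog cost on the shortfall. A minimal evaluation sketch (all data illustrative):

```python
from itertools import accumulate

def lotsizing_f(x, u, h, p):
    """f(x, u): per-period worst of holding cost h_i on surplus and
    backlog cost p_i on shortfall of cumulative production vs demand."""
    cum_x, cum_u = list(accumulate(x)), list(accumulate(u))
    return sum(
        max(h[i] * (cum_x[i] - cum_u[i]), p[i] * (cum_u[i] - cum_x[i]))
        for i in range(len(x))
    )

# Production (3, 2, 2) against demand scenario (2, 3, 1)
print(lotsizing_f([3, 2, 2], [2, 3, 1], h=[1, 1, 1], p=[4, 4, 4]))
```

In the cutting plane loop, the separation step maximizes this function over u ∈ U for the current production plan x.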

Cookbook for static problems

Dualization

good easy to apply

bad breaks combinatorial structure (e.g. shortest path)

Cutting plane algorithms (branch-and-cut)

good handle non-linear functions

bad implementation effort

Iterative algorithms, dynamic programming

good good theoretical bounds

bad solving n^s problems can be too much


Open questions

Knapsack/budget uncertainty

Easy problems that turn NP-hard

Approximation algorithms

Scheduling seems to be a good niche.

Ellipsoidal uncertainty

Axis-parallel NP-hard in general? (known FPTAS)

General Approximation algorithms


Outline

1 General overview

2 Static problems

3 Adjustable RO

4 Two-stages problems with real recourse

5 Multi-stage problems with real recourse

6 Multi-stage with integer recourse


2-stages example: network design

Demand vectors u^1, ..., u^n that must be routed non-simultaneously on a network to be designed.
⇒ two-stage program:

1 capacities

2 routing.

[Figure: a three-node network on a, b, c showing the demands for scenario 1, the demands for scenario 2, the capacity cost per unit, the routing for each scenario, and the resulting capacity installation.]


multistage example: lot sizing

Given

Production costs c

Uncertain demand vectors u1 = (u11, u12, . . . , u1t), . . . , un = (un1, un2, . . . , unt)

Storage costs h

Compute

A production plan that minimizes the costs

Michael POSS Introduction to robust optimization May 30, 2017 25 / 53

multistage example: lot sizing - formulation

Variables

yi (u) production at period i for demand scenario u

xi (u) stock at the end of period i for demand scenario u

min γ
s.t. γ ≥ ∑_{i=1}^{t} (c_i y_i(u) + h_i x_i(u))    u ∈ U
     x_{i+1}(u) = x_i(u) + y_i(u) − u_i    i = 1, . . . , t, u ∈ U
     x, y ≥ 0

Something is wrong !

Michael POSS Introduction to robust optimization May 30, 2017 26 / 53


Non-anticipativity - Example

Consider a lot-sizing problem with

two different products A and B
at most 1 unit of product (A and B together) can be produced at each period
two time periods
we know the demand of the current period at the beginning of the period
two scenarios u and u′ defined as follows:

         t = 1   t = 2                 t = 1   t = 2
u =  A:    0       2        u′ =  A:     0       0
     B:    0       0              B:     0       2

Question: Propose a feasible production plan.
Answer: The problem is infeasible!
Why? Because scenarios u and u′ cannot be distinguished at the beginning of period 1, i.e.

u1 = u′1

yet the demand of 2 at t = 2 exceeds the per-period capacity of 1, so one unit must already be produced at t = 1 — and the right product to produce differs between the two scenarios.

Michael POSS Introduction to robust optimization May 30, 2017 27 / 53
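The infeasibility claim can be checked by brute force. The sketch below is illustrative (not part of the slides): it enumerates integer production plans, sharing the period-1 decision between the two scenarios since u1 = u′1, while the period-2 decision may depend on the revealed demand.

```python
from itertools import product

# Per-period capacity 1 (A and B together); scenario u demands 2 units of A
# at t = 2, scenario u' demands 2 units of B at t = 2. Non-anticipativity:
# u_1 = u'_1, so the period-1 decision is shared between the scenarios.
feasible = False
for a1, b1 in product(range(2), repeat=2):            # shared period-1 plan
    if a1 + b1 > 1:
        continue
    ok_u = any(a1 + a2 >= 2                           # serve A-demand of 2 in u
               for a2, b2 in product(range(2), repeat=2) if a2 + b2 <= 1)
    ok_up = any(b1 + b2 >= 2                          # serve B-demand of 2 in u'
                for a2, b2 in product(range(2), repeat=2) if a2 + b2 <= 1)
    if ok_u and ok_up:
        feasible = True
print(feasible)   # False: no non-anticipative plan serves both scenarios
```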


Graphical representation - scenario tree

[Figure: scenario tree over periods 0, 1, 2 — scenarios ξ and ξ′ coincide at the root and at period 1 (ξ = ξ′ there) and branch apart at period 2]

Michael POSS Introduction to robust optimization May 30, 2017 28 / 53


multistage example: lot sizing - formulation

Variables

yi (u) production at period i for demand scenario u

xi (u) stock at the end of period i for demand scenario u

min γ
s.t. γ ≥ ∑_{i=1}^{t} (c_i y_i(u) + h_i x_i(u))    u ∈ U
     x_{i+1}(u) = x_i(u) + y_i(u) − u_i    i = 1, . . . , t, u ∈ U
     y_i(u) = y_i(u′)    i = 1, . . . , t, u, u′ ∈ U with u^i = u′^i
     x, y ≥ 0

Michael POSS Introduction to robust optimization May 30, 2017 29 / 53

multistage example: lot sizing - formulation

Variables

yi (u) production at period i for demand scenario u

xi (u) stock at the end of period i for demand scenario u

min γ
s.t. γ ≥ ∑_{i=1}^{t} (c_i y_i(u^i) + h_i x_i(u))    u ∈ U
     x_{i+1}(u) = x_i(u) + y_i(u^i) − u_i    i = 1, . . . , t, u ∈ U
     x, y ≥ 0

Michael POSS Introduction to robust optimization May 30, 2017 29 / 53

2-stages integer example: knapsack

Given a capacity C and a set of items I with profits c and weights w(u), find the subset of items N ⊆ I that maximizes its profit such that, for each u ∈ U, we can remove the items in K(u) from N and the total weight satisfies

∑_{n ∈ N\K(u)} w_n(u) ≤ C.

Michael POSS Introduction to robust optimization May 30, 2017 30 / 53

multistage integer example: lot sizing

Variables

yi (u) production at period i for demand scenario u

xi (u) stock at the end of period i for demand scenario u

zi (u) allowing production for period i for demand scenario u

min γ
s.t. γ ≥ ∑_{i=1}^{t} (c_i y_i(u^i) + h_i x_i(u))    u ∈ U
     x_{i+1}(u) = x_i(u) + y_i(u^i) − u_i    i = 1, . . . , t, u ∈ U
     y_i(u^i) ≤ M z_i(u^i)    i = 1, . . . , t, u ∈ U
     x, y ≥ 0
     z ∈ {0, 1}^{t|U|}

Michael POSS Introduction to robust optimization May 30, 2017 31 / 53

Outline

1 General overview

2 Static problems

3 Adjustable RO

4 Two-stage problems with real recourse

5 Multi-stage problems with real recourse

6 Multi-stage with integer recourse

Michael POSS Introduction to robust optimization May 30, 2017 32 / 53

Exact solution procedure

min c^T x
(P)  s.t. x ∈ X
          A(u)x + Ey(u) ≤ b    u ∈ U    (6)

where A(u) = A^0 + ∑_k A^k u_k.

Lemma

We can replace (6) by

A(u)x + Ey(u) ≤ b    u ∈ ext(U).

Idea of the proof: write u* = ∑_s λ_s u^s as a convex combination of the points u^s ∈ ext(U); then

A(u*)x* + Ey(u*) ≤ b  ⇔  ∑_{s=1}^{|ext(U)|} λ_s (A(u^s)x* + Ey(u^s)) ≤ ∑_{s=1}^{|ext(U)|} λ_s b.

Michael POSS Introduction to robust optimization May 30, 2017 33 / 53


Master problem

min c^T x
(U*-LSP′)  s.t. x ∈ X
               constraints corresponding to u ∈ U*

Separation

max (b − A^0 x*)^T π − ∑_{k∈K} (A^k x*)^T v_k
(SPL)  s.t. u ∈ U
            E^T π = 0
            1^T π = 1
            v_{km} ≥ π_m − (1 − u_k)    k ∈ K, m ∈ M
            v_{km} ≤ u_k    k ∈ K, m ∈ M
            π, v_{km} ≥ 0
            u ∈ {0, 1}^K.

Michael POSS Introduction to robust optimization May 30, 2017 34 / 53

Two different approaches

Benders: (b − A(u*)x)^T π* ≤ 0.    (7)

Row and column generation: A(u*)x + Ey(u*) ≤ b.    (8)

Algorithm 1: RG and RCG

repeat
    solve U*-LSP′; let x* be an optimal solution;
    solve (SPL); let (u*, π*) be an optimal solution and z* be the optimal solution cost;
    if z* > 0 then
        RG: add constraint (7) to U*-LSP′;
        RCG: add constraint (8) to U*-LSP′;
until z* ≤ 0;

Michael POSS Introduction to robust optimization May 30, 2017 35 / 53
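The repeat-until loop above can be sketched on a toy static robust knapsack with budgeted uncertainty. All data below are made up for illustration; the master problem is solved by brute force and the separation step uses the closed-form worst case, standing in for the MILPs U*-LSP′ and (SPL).

```python
from itertools import product

# Hypothetical toy data: item weight is w[i] + d[i] when deviated,
# and at most Gamma items may deviate simultaneously.
n, C, Gamma = 3, 9, 1
w, d, c = [3, 4, 5], [2, 2, 2], [6, 7, 8]

def separate(x):
    """Worst-case weight scenario for selection x: deviate the Gamma chosen
    items with the largest deviation d[i]."""
    chosen = sorted((i for i in range(n) if x[i]), key=lambda i: d[i], reverse=True)
    dev = set(chosen[:Gamma])
    return [w[i] + (d[i] if i in dev else 0) for i in range(n)]

scenarios = [w[:]]                  # start from the nominal scenario only
while True:
    # master problem: best selection feasible for every generated scenario
    best, xbest = -1, None
    for x in product((0, 1), repeat=n):
        if all(sum(u[i] * x[i] for i in range(n)) <= C for u in scenarios):
            profit = sum(c[i] * x[i] for i in range(n))
            if profit > best:
                best, xbest = profit, x
    u = separate(xbest)             # separation problem
    if sum(u[i] * xbest[i] for i in range(n)) <= C:
        break                       # no violated scenario: xbest is robust optimal
    scenarios.append(u)             # row generation: add the violated scenario

print(xbest, best)                  # (1, 1, 0) 13
```

Each iteration either proves robust feasibility of the incumbent or adds one violated scenario, mirroring the RG branch of the algorithm.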


Numerical results

K    Γ   tRCG    tSPL (%)   iter   tRG    tP′
30   2   150     64         18     4967   13
30   3   301     78         19     T      213
30   4   1500    90         27     T      M
30   5   1344    91         25     T      M
40   2   365     69         21     6523   49
40   3   1037    88         22     T      M
40   4   6879    96         30     T      M
40   5   5866    95         31     T      M
40   6   T       –          –      T      M
50   2   694     73         23     T      98
50   3   4446    94         27     T      M
50   4   22645   98         35     T      M
50   5   T       –          –      T      M
50   6   T       –          –      T      M

Table: Results from Ayoub and Poss (2013) on a network design problem (Janos - 26/84).

Michael POSS Introduction to robust optimization May 30, 2017 36 / 53

Outline

1 General overview

2 Static problems

3 Adjustable RO

4 Two-stages problems with real recourse

5 Multi-stage problems with real recourse

6 Multi-stage with integer recourse

Michael POSS Introduction to robust optimization May 30, 2017 37 / 53

Decision rules

min c^T x
s.t. x ∈ X
     A_t(u)x + ∑_{s=1}^{t} E_{ts} y_s(u^s) ≤ b_t    t = 1, . . . , T, u ∈ U

We cannot use the previous decomposition anymore.
We can use decision rules, e.g.

y(u) = y^0 + ∑_{k∈K} y^k u_k.

The problem gets the structure of a static robust problem. Can be dualized.
More complex decision rules exist. Some can lead to exact reformulations; others can be approximated efficiently.
Decision rules are “heuristic”: they provide feasible solutions, possibly suboptimal.

Michael POSS Introduction to robust optimization May 30, 2017 38 / 53
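A minimal numerical illustration of why substitution makes the problem static: with an affine rule, the constraint becomes affine in u, so its worst case over a box is attained at a vertex (which is what dualization exploits). All coefficients below are made up for the sketch.

```python
from itertools import product

# Hypothetical data: affine rule y(u) = y0 + sum_k yk[k]*u_k substituted into
# a constraint a^T x + e*y(u) <= b that must hold for all u in the box [0,1]^K.
K = 3
y0, yk = 1.0, [-0.5, 0.2, -0.1]     # assumed decision-rule coefficients
a_x, e, b = 2.0, 1.5, 4.0           # assumed constraint data (a^T x fixed)

def residual(u):
    y = y0 + sum(yk[k] * u[k] for k in range(K))
    return a_x + e * y - b          # feasible iff <= 0; affine in u

# checking the 2^K vertices certifies the whole box...
verts_ok = all(residual(u) <= 1e-9 for u in product((0.0, 1.0), repeat=K))
# ...which a dense grid sample over the box confirms
grid_ok = all(residual((i / 4, j / 4, l / 4)) <= 1e-9
              for i in range(5) for j in range(5) for l in range(5))
print(verts_ok, grid_ok)            # True True
```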


Decision rules: Example for network design problem

Static: y_{ka}(u) = y_{ka} u_k

Affine: y_{ka}(u) = y_{ka0} + ∑_{h∈K} y_{kah} u_h

Dynamic: y_{ka}(u) is an arbitrary function

Michael POSS Introduction to robust optimization May 30, 2017 39 / 53

Dual bound

Question: Can we obtain some guarantee on the quality of the affine solution?
Answer: Using a dual model ...

Michael POSS Introduction to robust optimization May 30, 2017 40 / 53

Outline

1 General overview

2 Static problems

3 Adjustable RO

4 Two-stage problems with real recourse

5 Multi-stage problems with real recourse

6 Multi-stage with integer recourse

Michael POSS Introduction to robust optimization May 30, 2017 41 / 53

What about integer adjustable variables ?

Notation: u^s = (u_1, . . . , u_s)

min c^T x
s.t. x ∈ X
     A_t(u)x + ∑_{s=1}^{t} E_{ts} y_s(u^s) ≤ b_t(u)    t = 1, . . . , T, u ∈ U    (9)
     y(u) ∈ R^{L1} × Z^{L2}    u ∈ U

Observation

Constraints (9) are not equivalent to

A_t(u)x + ∑_{s=1}^{t} E_{ts} y_s(u^s) ≤ b_t(u)    t = 1, . . . , T, u ∈ ext(U)

Michael POSS Introduction to robust optimization May 30, 2017 42 / 53


2-stages example: knapsack

Given

Set N

Capacity C

Weights u

Profit c

Removal limit K

Solve

max ∑_{i∈N} c_i x_i
s.t. ∑_{i∈N} u_i (x_i − y_i(u)) ≤ C    u ∈ U
     ∑_{i∈N} y_i(u) ≤ K    u ∈ U
     x, y(u) ∈ {0, 1}

Example (U ≠ ext(U))

Parameters: N = {1, 2}, weight bounds u̲_i = 0 and ū_i = 1, c_i = 1, C = 0, Γ = K = 1

U_Γ opt: x_1 = 1, x_2 = 0 with cost 1, worst u: (0.5, 0.5)

ext(U_Γ) opt: x_1 = x_2 = 1 with cost 2, worst u: (1, 0)

Michael POSS Introduction to robust optimization May 30, 2017 43 / 53
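The gap between U_Γ and ext(U_Γ) in this example can be checked by brute force. The sketch below is illustrative: a grid sample stands in for the full polytope U_Γ, and `fits` plays the second stage (remove at most K items).

```python
from itertools import combinations

def fits(x, u, C=0.0, K=1):
    """Second stage: can removing at most K items make the residual weight <= C?"""
    n = len(x)
    for r in range(K + 1):
        for removed in combinations(range(n), r):
            if sum(u[i] * x[i] for i in range(n) if i not in removed) <= C:
                return True
    return False

# Gamma = 1 budgeted set U_Gamma = {u in [0,1]^2 : u_1 + u_2 <= 1}
extremes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]                 # ext(U_Gamma)
grid = [(a / 10, b / 10) for a in range(11) for b in range(11) if a + b <= 10]

x = (1, 1)
print(all(fits(x, u) for u in extremes))   # True : x = (1,1) survives every extreme point
print(all(fits(x, u) for u in grid))       # False: u = (0.5, 0.5) defeats x = (1,1)
```

With integer recourse the worst case need not sit at a vertex, exactly as the slide's Observation states.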


What to do ?

Three lines of research have been proposed in the literature:

1 Partitioning the uncertainty set: U = U_1 ∪ . . . ∪ U_n. Constraints

A_t(u)x + ∑_{s=1}^{t} E_{ts} y_s(u^s) ≤ b_t(u)    t = 1, . . . , T, u ∈ U

become

A_t(u)x + ∑_{s=1}^{t} E_{ts} y_s^1 ≤ b_t(u)    t = 1, . . . , T, u ∈ U_1
. . .
A_t(u)x + ∑_{s=1}^{t} E_{ts} y_s^n ≤ b_t(u)    t = 1, . . . , T, u ∈ U_n

2 Row-and-column generation algorithms by Zhao and Zeng [2012]
Assumptions: problems with complete recourse, K(U) = K(ext(U))
Algorithms: nested row-and-column generation algorithms.

3 Non-linear decision rules proposed by Bertsimas and Georghiou [2015]

Michael POSS Introduction to robust optimization May 30, 2017 44 / 53


Dynamic partition [Bertsimas and Dunning, 2016, Postek and den Hertog, 2016]

Partition P = U1 ∪ · · · ∪ Un

Heuristic bound U-CO(P)

Algorithm

1 Solve U-CO(P)

2 Refine P, go back to 1

Partition step: active vectors u lie in different subsets ⇒ Voronoi diagrams

U-CO(P) dimensions increase linearly with |P|

Michael POSS Introduction to robust optimization May 30, 2017 45 / 53
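The effect of refining P can be illustrated on a one-dimensional toy (not the Voronoi-based schemes of the cited papers): with one constant recourse decision per cell, the worst-case cost shrinks as the partition is refined.

```python
# Toy adjustable problem: minimize the worst case of |y - u| over u in [0,1],
# approximated with one constant decision y per partition cell.
def worst_case(n_cells):
    cells = [(k / n_cells, (k + 1) / n_cells) for k in range(n_cells)]
    # the best constant in a cell is its midpoint; worst case = half the width
    return max((hi - lo) / 2.0 for lo, hi in cells)

bounds = [worst_case(n) for n in (1, 2, 4, 8)]
print(bounds)   # [0.5, 0.25, 0.125, 0.0625] -- the heuristic bound improves with |P|
```

Full recourse (y tracking u exactly) would pay 0; the partition bound interpolates between the static decision and full recourse as |P| grows.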


Comparison of Bertsimas and Georghiou [2015], Bertsimas and Dunning [2016], Postek and den Hertog [2016] on lot-sizing.

w_{ni}(u): order a fixed amount q_n at time i

Michael POSS Introduction to robust optimization May 30, 2017 46 / 53


Concluding remarks

Static problems

Numerical solution by dualization or decomposition algorithms.

U with “nice” structure and non-linear objective ⇒ interesting open problems

Adjustable problems

Hot topic

Very hard to solve!

Even good generic heuristic approaches would be interesting.

Michael POSS Introduction to robust optimization May 30, 2017 47 / 53


SI EJCO: Robust Combinatorial Optimization

valid inequalities for robust MILPs,

decomposition algorithms for robust MILPs,

constraint programming approaches to robust combinatorial optimization,

heuristic and meta-heuristic algorithms for hard robust combinatorial problems,

ad-hoc combinatorial algorithms,

novel applications of robust combinatorial optimization,

multi-stage integer robust optimization,

recoverable robust optimization,

Deadline: July 15 2017

Michael POSS Introduction to robust optimization May 30, 2017 48 / 53

References I

Agostinho Agra, Marcio C. Santos, Dritan Nace, and Michael Poss. A dynamic programming approach for a class of robust optimization problems. SIAM Journal on Optimization, (3):1799–1823, 2016.

E. Alvarez-Miranda, I. Ljubic, and P. Toth. A note on the Bertsimas & Sim algorithm for robust combinatorial optimization problems. 4OR, 11(4):349–360, 2013.

A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769–805, 1998.

D. Bertsimas and M. Sim. Robust discrete optimization and network flows. Math. Program., 98(1-3):49–71, 2003.

Dimitris Bertsimas and Iain Dunning. Multistage robust mixed-integer optimization with adaptive partitions, 2016. URL http://dx.doi.org/10.1287/opre.2016.1515.

Michael POSS Introduction to robust optimization May 30, 2017 49 / 53

References II

Dimitris Bertsimas and Angelos Georghiou. Design of near optimal decision rules in multistage adaptive mixed-integer optimization. Operations Research, 63(3):610–627, 2015. doi: 10.1287/opre.2015.1365. URL http://dx.doi.org/10.1287/opre.2015.1365.

Dimitris Bertsimas, Iain Dunning, and Miles Lubin. Reformulation versus cutting-planes for robust optimization. Computational Management Science, 13(2):195–217, 2016.

M. Bougeret, Artur A. Pessoa, and M. Poss. Robust scheduling with budgeted uncertainty, 2016. Submitted.

K.-S. Goetzmann, S. Stiller, and C. Telha. Optimization over integers with robustness in cost and few constraints. In WAOA, pages 89–101, 2011.

O. Klopfenstein and D. Nace. A robust approach to the chance-constrained knapsack problem. Oper. Res. Lett., 36(5):628–632, 2008.

P. Kouvelis and G. Yu. Robust discrete optimization and its applications, volume 14. Springer Science & Business Media, 2013.

Michael POSS Introduction to robust optimization May 30, 2017 50 / 53

References III

Taehan Lee and Changhyun Kwon. A short note on the robust combinatorial optimization problems with cardinality constrained uncertainty. 4OR, pages 373–378, 2014.

Shaghayegh Mokarami and S Mehdi Hashemi. Constrained shortest path with uncertain transit times. Journal of Global Optimization, 63(1):149–163, 2015.

M. Monaci, U. Pferschy, and P. Serafini. Exact solution of the robust knapsack problem. Computers & OR, 40(11):2625–2631, 2013.

Omid Nohadani and Kartikey Sharma. Optimization under decision-dependent uncertainty. arXiv preprint arXiv:1611.07992, 2016.

A. A. Pessoa, L. Di Puglia Pugliese, F. Guerriero, and M. Poss. Robust constrained shortest path problems under budgeted uncertainty. Networks, 66(2):98–111, 2015.

M. Poss. Robust combinatorial optimization with variable budgeted uncertainty. 4OR, 11(1):75–92, 2013.

Michael POSS Introduction to robust optimization May 30, 2017 51 / 53

References IV

M. Poss. Robust combinatorial optimization with variable cost uncertainty. European Journal of Operational Research, 237(3):836–845, 2014.

Michael Poss. Robust combinatorial optimization with knapsack uncertainty. 2017. Available at hal.archives-ouvertes.fr/tel-01421260.

Krzysztof Postek and Dick den Hertog. Multistage adjustable robust mixed-integer optimization via iterative splitting of the uncertainty set. INFORMS Journal on Computing, 28(3):553–574, 2016. doi: 10.1287/ijoc.2016.0696. URL http://dx.doi.org/10.1287/ijoc.2016.0696.

Long Zhao and Bo Zeng. An exact algorithm for two-stage robust optimization with mixed integer recourse problems. Submitted, available on Optimization-Online.org, 2012.

Michael POSS Introduction to robust optimization May 30, 2017 52 / 53
