
Chapter 12: Discrete Optimization Methods

12.1 Solving by Total Enumeration

• If a model has only a few discrete decision variables, the most effective method of analysis is often the most direct: enumeration of all the possibilities. [12.1]
• Total enumeration solves a discrete optimization by trying all possible combinations of discrete variable values, computing for each the best corresponding choice of any continuous variables. Among combinations yielding a feasible solution, those with the best objective function value are optimal. [12.2]
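Principle [12.2] is compact enough to state as code. The following Python skeleton is a sketch, not from the text; solve_continuous is a hypothetical stand-in for whatever routine returns the best objective value of the continuous subproblem once the discrete variables are fixed (or None when that combination is infeasible).

from itertools import product

def total_enumeration(n_binary, solve_continuous, maximize=False):
    # Try all 2^n settings of the binary variables [12.2].
    best_y, best_val = None, None
    for y in product((0, 1), repeat=n_binary):
        val = solve_continuous(y)      # best continuous completion, or None
        if val is None:
            continue                   # this combination is infeasible
        better = best_val is None or (val > best_val if maximize else val < best_val)
        if better:
            best_y, best_val = y, val
    return best_y, best_val            # an optimal combination and its value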


Swedish Steel Model with All-or-Nothing Constraints

min 16(75)y1 + 10(250)y2 + 8x3 + 9x4 + 48x5 + 60x6 + 53x7
s.t. 75y1 + 250y2 + x3 + x4 + x5 + x6 + x7 = 1000
     0.0080(75)y1 + 0.0070(250)y2 + 0.0085x3 + 0.0040x4 ≥ 6.5
     0.0080(75)y1 + 0.0070(250)y2 + 0.0085x3 + 0.0040x4 ≤ 7.5
     0.180(75)y1 + 0.032(250)y2 + 1.0x5 ≥ 30.0
     0.180(75)y1 + 0.032(250)y2 + 1.0x5 ≤ 30.5
     0.120(75)y1 + 0.011(250)y2 + 1.0x6 ≥ 10.0
     0.120(75)y1 + 0.011(250)y2 + 1.0x6 ≤ 12.0
     0.001(250)y2 + 1.0x7 ≥ 11.0
     0.001(250)y2 + 1.0x7 ≤ 13.0
     x3, …, x7 ≥ 0
     y1, y2 = 0 or 1                                 (12.1)

Optimal solution: y1* = 1, y2* = 0, x3* = 736.44, x4* = 160.06, x5* = 16.50, x6* = 1.00, x7* = 11.00; Cost = 9967.06.

Total enumeration of the four discrete combinations:

 y1  y2 |   x3      x4      x5     x6     x7   | Objective value
  0   0 | 823.11  125.89  30.00  10.00  11.00  |    10340.89
  0   1 | 646.67   63.33  22.00   7.25  10.75  |    10304.08
  1   0 | 736.44  160.06  16.50   1.00  11.00  |     9967.06
  1   1 | 561.56   94.19   8.50   0.00  10.75  |    10017.94


Exponential Growth of Cases to Enumerate

• Exponential growth makes total enumeration impractical with models having more than a handful of discrete decision variables. [12.3] A model with n binary variables already implies 2^n combinations; at n = 50 that exceeds 10^15 cases.

12.2 Relaxation of Discrete Optimization Models

Constraint Relaxations

• Model (P̄) is a constraint relaxation of model (P) if every feasible solution to (P) is also feasible in (P̄) and both models have the same objective function. [12.4]
• Relaxations should be significantly more tractable than the models they relax, so that deeper analysis is practical. [12.5]


Example 12.1: Bison Booster

The Boosters are trying to decide what fundraising projects to undertake at the next country fair. One option is customized T-shirts, which will sell for $20 each; the other is sweatshirts selling for $30. History shows that everything offered for sale will be sold before the fair is over.

Materials to make the shirts are all donated by local merchants, but the Boosters must rent the equipment for customization. Different processes are involved, with the T-shirt equipment renting at $550 for the period up to the fair, and the sweatshirt equipment for $720. Display space presents another consideration. The Boosters have only 300 square feet of display wall area at the fair, and T-shirts will consume 1.5 square feet each, sweatshirts 4 square feet each. What plan will net the most income?

Bison Booster Example Model

• Decision variables:
  x1 = number of T-shirts made and sold
  x2 = number of sweatshirts made and sold
  y1 = 1 if T-shirt equipment is rented, = 0 otherwise
  y2 = 1 if sweatshirt equipment is rented, = 0 otherwise

max 20x1 + 30x2 − 550y1 − 720y2     (net income)
s.t. 1.5x1 + 4x2 ≤ 300              (display space)
     x1 ≤ 200y1                     (T-shirts only if equipment rented)
     x2 ≤ 75y2                      (sweatshirts only if equipment rented)
     x1, x2 ≥ 0
     y1, y2 = 0 or 1                                 (12.2)

Optimal solution: x1* = 200, x2* = 0, y1* = 1, y2* = 0; Net Income = 3450.
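Total enumeration of the four (y1, y2) combinations in (12.2) can be scripted directly; this sketch assumes scipy is available and uses linprog for the continuous subproblem left once y1 and y2 are fixed.

from itertools import product
from scipy.optimize import linprog

best = None
for y1, y2 in product((0, 1), repeat=2):
    c = [-20.0, -30.0]                 # maximize 20x1 + 30x2 -> minimize the negation
    res = linprog(c, A_ub=[[1.5, 4.0]], b_ub=[300.0],
                  bounds=[(0, 200 * y1), (0, 75 * y2)],  # x1 <= 200y1, x2 <= 75y2
                  method="highs")
    if res.success:
        net = -res.fun - 550 * y1 - 720 * y2
        print(f"y = ({y1},{y2}): x = ({res.x[0]:.1f}, {res.x[1]:.1f}), net = {net:.1f}")
        if best is None or net > best[0]:
            best = (net, y1, y2)
print("best:", best)                   # expected: (3450.0, 1, 0)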


Constraint Relaxation Scenarios

• Double capacities:
  1.5x1 + 4x2 ≤ 600
  x1 ≤ 400y1
  x2 ≤ 150y2
  x1, x2 ≥ 0;  y1, y2 = 0 or 1
  Relaxation optimum: x̄1* = 400, x̄2* = 0, ȳ1* = 1, ȳ2* = 0; Net Income = 7450

• Dropping the display-space constraint 1.5x1 + 4x2 ≤ 300:
  x1 ≤ 200y1
  x2 ≤ 75y2
  x1, x2 ≥ 0;  y1, y2 = 0 or 1
  Relaxation optimum: x̄1* = 200, x̄2* = 75, ȳ1* = 1, ȳ2* = 1; Net Income = 4980

• Treating the discrete variables as continuous (LP relaxation):
  1.5x1 + 4x2 ≤ 300
  x1 ≤ 200y1
  x2 ≤ 75y2
  x1, x2 ≥ 0
  0 ≤ y1 ≤ 1, 0 ≤ y2 ≤ 1
  Relaxation optimum: x̄1* = 200, x̄2* = 0, ȳ1* = 1, ȳ2* = 0; Net Income = 3450


Linear Programming Relaxations

• Continuous relaxations (linear programming relaxations if the given model is an ILP) are formed by treating any discrete variables as continuous while retaining all other constraints. [12.6]

• LP relaxations of ILPs are by far the most used relaxation forms because they bring all the power of LP to bear on analysis of the given discrete models. [12.7]

Proving Infeasibility with Relaxations

• If a constraint relaxation is infeasible, so is the full model it relaxes. [12.8]


Solution Value Bounds from Relaxations

• The optimal value of any relaxation of a maximize model yields an upper bound on the optimal value of the full model. The optimal value of any relaxation of a minimize model yields a lower bound. [12.9]

[Figure: the feasible solutions of the true model form a subset of the relaxation's feasible solutions, so the true optimum lies inside the relaxation's feasible region.]

Example 11.3: EMS Location Planning

[Figure: map of the service region with ten candidate EMS sites, numbered 1 to 10.]


Minimum Cover EMS Model

min Σ_{j=1}^{10} xj
s.t. x2 ≥ 1            x4 + x6 ≥ 1
     x1 + x2 ≥ 1       x4 + x5 ≥ 1
     x1 + x3 ≥ 1       x4 + x5 + x6 ≥ 1
     x3 ≥ 1            x4 + x5 + x7 ≥ 1
     x3 ≥ 1            x8 + x9 ≥ 1
     x2 ≥ 1            x6 + x9 ≥ 1
     x2 + x4 ≥ 1       x5 + x6 ≥ 1
     x3 + x4 ≥ 1       x5 + x7 + x10 ≥ 1
     x8 ≥ 1            x8 + x9 ≥ 1
                       x9 + x10 ≥ 1
                       x10 ≥ 1
     xj = 0 or 1, j = 1, …, 10                       (12.3)

Optimal solution: x2* = x3* = x4* = x6* = x8* = x10* = 1, x1* = x5* = x7* = x9* = 0 (six facilities).

Minimum Cover EMS Model with Relaxation

min Σ_{j=1}^{10} xj
s.t. the same twenty coverage constraints as (12.3)
     0 ≤ xj ≤ 1, j = 1, …, 10                        (12.4)

LP relaxation optimum: x̄2* = x̄3* = x̄8* = x̄10* = 1, x̄4* = x̄5* = x̄6* = x̄9* = 0.5, all others 0 (objective value 6).


Optimal Solutions from Relaxations

• If an optimal solution to a constraint relaxation is also feasible in the model it relaxes, the solution is optimal in that original model. [12.10]

Rounded Solutions from Relaxations

• Many relaxations produce optimal solutions that are easily “rounded” to good feasible solutions for the full model. [12.11]

• The objective function value of any (integer) feasible solution to a maximizing discrete optimization problem provides a lower bound on the integer optimal value, and any (integer) feasible solution to a minimizing discrete optimization problem provides an upper bound. [12.12]


Rounded Solutions from Relaxation: EMS Model

   j    x̄j    ceiling x̂j = ⌈x̄j⌉    floor x̂j = ⌊x̄j⌋
   1    0.0          0                    0
   2    1.0          1                    1
   3    1.0          1                    1
   4    0.5          1                    0
   5    0.5          1                    0
   6    0.5          1                    0
   7    0.0          0                    0
   8    1.0          1                    1
   9    0.5          1                    0
  10    1.0          1                    1
 Σ x̂j               8                    4           (12.5)

Because every coverage constraint has ≥ form, rounding the LP optimum up can only help satisfy it: the ceiling solution is feasible, and its value 8 is an incumbent upper bound on the ILP optimum [12.12]. The floor solution (value 4) is not feasible; it violates constraints such as x4 + x5 + x7 ≥ 1.
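The relaxation-and-rounding pipeline of (12.4)-(12.5) can be scripted end to end. This sketch assumes scipy; note the solver may return an alternative LP optimum with the same value 6, so the rounded cover can differ from the table while remaining feasible.

import math
from scipy.optimize import linprog

# the twenty coverage requirements of (12.3): one list of covering sites each
covers = [[2], [1, 2], [1, 3], [3], [3], [2], [2, 4], [3, 4], [8], [4, 6],
          [4, 5], [4, 5, 6], [4, 5, 7], [8, 9], [6, 9], [5, 6],
          [5, 7, 10], [8, 9], [9, 10], [10]]
n = 10
A_ub = []
for S in covers:                        # sum_{j in S} xj >= 1  <=>  -sum <= -1
    row = [0.0] * n
    for j in S:
        row[j - 1] = -1.0
    A_ub.append(row)

res = linprog([1.0] * n, A_ub=A_ub, b_ub=[-1.0] * len(covers),
              bounds=[(0, 1)] * n, method="highs")
print("LP relaxation bound:", res.fun)               # lower bound [12.9]: 6
x_hat = [math.ceil(round(v, 6)) for v in res.x]      # ceiling rounding [12.11]
print("rounded cover:", x_hat, "size:", sum(x_hat))  # feasible upper bound [12.12]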

12.3 Stronger LP Relaxations, Valid Inequalities, and Lagrangian Relaxation

• A relaxation is strong or sharp if its optimal value closely bounds that of the true model, and its optimal solution closely approximates an optimum in the full model. [12.13]
• Equally correct ILP formulations of a discrete problem may have dramatically different LP relaxation optima. [12.14]


Choosing Big-M Constants

• Whenever a discrete model requires sufficiently large big-M's, the strongest relaxation will result from models employing the smallest valid choice of those constants. [12.15]

Bison Booster Example Model with Relaxation in Big-M Constants

Tight big-Ms (12.2):
max 20x1 + 30x2 − 550y1 − 720y2     (net income)
s.t. 1.5x1 + 4x2 ≤ 300              (display space)
     x1 ≤ 200y1                     (T-shirts only if equipment rented)
     x2 ≤ 75y2                      (sweatshirts only if equipment rented)
     x1, x2 ≥ 0;  y1, y2 = 0 or 1
Optimal value: Net Income = 3450 at x1* = 200, x2* = 0, y1* = 1, y2* = 0 (and its LP relaxation attains the same value; see the scenarios above).

Loose big-Ms (12.6):
max 20x1 + 30x2 − 550y1 − 720y2     (net income)
s.t. 1.5x1 + 4x2 ≤ 300              (display space)
     x1 ≤ 10000y1                   (T-shirts only if equipment rented)
     x2 ≤ 10000y2                   (sweatshirts only if equipment rented)
     x1, x2 ≥ 0;  y1, y2 = 0 or 1
LP relaxation optimum: Net Income = 3989 at x̄1 = 200, x̄2 = 0, ȳ1 = 0.02, ȳ2 = 0, a weaker bound.


Valid Inequalities

• A linear inequality is a valid inequality for a given discrete optimization model if it holds for all (integer) feasible solutions to the model. [12.16]
• To strengthen a relaxation, a valid inequality must cut off (render infeasible) some feasible solutions to the current LP relaxation that are not feasible in the full ILP model. [12.17]

Example 11.10: Tmark Facilities Location

[Figure: map of eight candidate call-center locations, numbered 1 to 8.]

Fixed cost fi of opening a center at location i:
  i :    1     2     3     4     5     6     7     8
 fi : 2400  7000  3600  1600  3000  4600  9000  2000


Unit cost r_{i,j} from possible center location i, and call demand d_j, by zone:

 Zone j |  i=1   i=2   i=3   i=4   i=5   i=6   i=7   i=8 | Demand d_j
    1   | 1.25  1.40  1.10  0.90  1.50  1.90  2.00  2.10 |   250
    2   | 0.80  0.90  0.90  1.30  1.40  2.20  2.10  1.80 |   150
    3   | 0.70  0.40  0.80  1.70  1.60  2.50  2.05  1.60 |  1000
    4   | 0.90  1.20  1.40  0.50  1.55  1.70  1.80  1.40 |    80
    5   | 0.80  0.70  0.60  0.70  1.45  1.80  1.70  1.30 |    50
    6   | 1.10  1.70  1.10  0.60  0.90  1.30  1.30  1.40 |   800
    7   | 1.40  1.40  1.25  0.80  0.80  1.00  1.00  1.10 |   325
    8   | 1.30  1.50  1.00  1.10  0.70  1.50  1.50  1.00 |   100
    9   | 1.50  1.90  1.70  1.30  0.40  0.80  0.70  0.80 |   475
   10   | 1.35  1.60  1.30  1.50  1.00  1.20  1.10  0.70 |   220
   11   | 2.10  2.90  2.40  1.90  1.10  2.00  0.80  1.20 |   900
   12   | 1.80  2.60  2.20  1.80  0.95  0.50  2.00  1.00 |  1500
   13   | 1.60  2.00  1.90  1.90  1.40  1.00  0.90  0.80 |   430
   14   | 2.00  2.40  2.00  2.20  1.50  1.20  1.10  0.80 |   200

Tmark Facilities Location Example with LP Relaxation

min Σ_{i=1}^{8} Σ_{j=1}^{14} (r_{i,j} d_j) x_{i,j} + Σ_{i=1}^{8} f_i y_i
s.t. Σ_{i=1}^{8} x_{i,j} = 1 for all j                           (carry zone j's load)
     1500 y_i ≤ Σ_{j=1}^{14} d_j x_{i,j} ≤ 5000 y_i for all i    (capacity of center i)
     x_{i,j} ≥ 0 for all i, j
     y_i = 0 or 1 for all i                                      (12.8)

ILP optimum: y4* = y8* = 1, y1* = y2* = y3* = y5* = y6* = y7* = 0; Total Cost = 10153.
LP relaxation optimum: ȳ1 = 0.230, ȳ2 = 0.000, ȳ3 = 0.000, ȳ4 = 0.301, ȳ5 = 0.115, ȳ6 = 0.000, ȳ7 = 0.000, ȳ8 = 0.650; Total Cost = 8036.60.


Tmark Facilities Location Example with Strengthened LP Relaxation

Adding the valid inequalities x_{i,j} ≤ y_i for all i, j leaves every integer feasible solution of (12.8) intact but sharpens the relaxation:

min Σ_{i=1}^{8} Σ_{j=1}^{14} (r_{i,j} d_j) x_{i,j} + Σ_{i=1}^{8} f_i y_i
s.t. Σ_{i=1}^{8} x_{i,j} = 1 for all j                           (carry zone j's load)
     1500 y_i ≤ Σ_{j=1}^{14} d_j x_{i,j} ≤ 5000 y_i for all i    (capacity of center i)
     x_{i,j} ≥ 0 and x_{i,j} ≤ y_i for all i, j
     y_i = 0 or 1 for all i

LP relaxation optimum: ȳ4 = 0.537, ȳ8 = 1.000, all other ȳi = 0.000; Total Cost = 10033.68, much closer to the ILP value 10153. (This strengthened LP is too large to solve with the basic Excel Solver.)

Lagrangian Relaxations

• Lagrangian relaxations partially relax some of the main linear constraints of an ILP by moving them to the objective function as terms

  ⋯ + ν_i ( b_i − Σ_j a_{i,j} x_j )

Here ν_i is a Lagrangian multiplier chosen as the relaxation is formed. If the relaxed constraint has form Σ_j a_{i,j} x_j ≥ b_i, multiplier ν_i ≤ 0 for a maximize model and ν_i ≥ 0 for a minimize model. If the relaxed constraint has form Σ_j a_{i,j} x_j ≤ b_i, multiplier ν_i ≥ 0 for a maximize model and ν_i ≤ 0 for a minimize model. Equality constraints Σ_j a_{i,j} x_j = b_i have unrestricted-sign (URS) multiplier ν_i. [12.18]


Example 11.6: CDOT Generalized Assignment

The Canadian Department of Transportation encountered a problem of the generalized assignment form when reviewing their allocation of coast guard ships on Canada's Pacific coast. The ships maintain such navigational aids as lighthouses and buoys. Each of the districts along the coast is assigned to one of a smaller number of coast guard ships. Since the ships have different home bases and different equipment and operating costs, the time and cost for assigning any district varies considerably among the ships. The task is to find a minimum cost assignment.

Table 11.6 shows data for our (fictitious) version of the problem. Three ships, the Estevan, the Mackenzie, and the Skidegate, are available to serve 6 districts. Entries in the table show the number of weeks each ship would require to maintain aids in each district, together with the annual cost (in thousands of Canadian dollars). Each ship is available 50 weeks per year.

                           District i
 Ship j             1    2    3    4    5    6
 1. Estevan   Cost 130   30  510   30  340   20
              Time  30   50   10   11   13    9
 2. Mackenzie Cost 460  150   20   40   30  450
              Time  10   20   60   10   10   17
 3. Skidegate Cost  40  370  120  390   40   30
              Time  70   10   10   15    8   12

x_{i,j} ≡ 1 if district i is assigned to ship j; 0 otherwise.


CDOT Assignment Model

min 130x1,1 + 460x1,2 + 40x1,3 + 30x2,1 + 150x2,2 + 370x2,3 + 510x3,1 + 20x3,2 + 120x3,3 + 30x4,1 + 40x4,2 + 390x4,3 + 340x5,1 + 30x5,2 + 40x5,3 + 20x6,1 + 450x6,2 + 30x6,3
s.t. x1,1 + x1,2 + x1,3 = 1
     x2,1 + x2,2 + x2,3 = 1
     x3,1 + x3,2 + x3,3 = 1
     x4,1 + x4,2 + x4,3 = 1
     x5,1 + x5,2 + x5,3 = 1
     x6,1 + x6,2 + x6,3 = 1
     30x1,1 + 50x2,1 + 10x3,1 + 11x4,1 + 13x5,1 + 9x6,1 ≤ 50
     10x1,2 + 20x2,2 + 60x3,2 + 10x4,2 + 10x5,2 + 17x6,2 ≤ 50
     70x1,3 + 10x2,3 + 10x3,3 + 15x4,3 + 8x5,3 + 12x6,3 ≤ 50
     xi,j = 0 or 1, i = 1, …, 6; j = 1, …, 3         (12.12)

Optimal solution: x1,1* = x4,1* = x6,1* = x2,2* = x5,2* = x3,3* = 1; Total Cost = 480.

Lagrangian Relaxation of the CDOT Assignment Example

Dualizing the six assignment equalities with multipliers ν1, …, ν6 gives

min 130x1,1 + 460x1,2 + 40x1,3 + 30x2,1 + 150x2,2 + 370x2,3 + 510x3,1 + 20x3,2 + 120x3,3 + 30x4,1 + 40x4,2 + 390x4,3 + 340x5,1 + 30x5,2 + 40x5,3 + 20x6,1 + 450x6,2 + 30x6,3
    + ν1(1 − x1,1 − x1,2 − x1,3) + ν2(1 − x2,1 − x2,2 − x2,3) + ν3(1 − x3,1 − x3,2 − x3,3)
    + ν4(1 − x4,1 − x4,2 − x4,3) + ν5(1 − x5,1 − x5,2 − x5,3) + ν6(1 − x6,1 − x6,2 − x6,3)
s.t. 30x1,1 + 50x2,1 + 10x3,1 + 11x4,1 + 13x5,1 + 9x6,1 ≤ 50
     10x1,2 + 20x2,2 + 60x3,2 + 10x4,2 + 10x5,2 + 17x6,2 ≤ 50
     70x1,3 + 10x2,3 + 10x3,3 + 15x4,3 + 8x5,3 + 12x6,3 ≤ 50
     xi,j = 0 or 1, i = 1, …, 6; j = 1, …, 3         (12.13)


More on Lagrangian Relaxations

• Constraints chosen for dualization in Lagrangian relaxations should leave a still-integer program with enough special structure to be relatively tractable. [12.19]

• The optimal value of any Lagrangian relaxation of a maximize model using multipliers conforming to [12.18] yields an upper bound on the optimal value of the full model. The optimal value of any valid Lagrangian relaxation of a minimize model yields a lower bound. [12.20]

• A search is usually required to identify Lagrangian multiplier values 𝜈𝑖 defining a strong Lagrangian relaxation. [12.21]

Lagrangian Relaxation of the CDOT Assignment Example (continued)

Using ν1 = 300, ν2 = 200, ν3 = 200, ν4 = 45, ν5 = 45, ν6 = 30 in relaxation (12.13) yields optimum

x1,1* = x2,2* = x3,3* = x4,1* = x4,2* = x5,2* = x5,3* = x6,1* = 1, all other xi,j* = 0

with value 470, a valid lower bound on the full model's optimal cost 480. Districts 4 and 5 are each assigned to two ships: the dualized assignment equalities need not hold in the relaxation.
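The bound 470 is easy to verify: once the assignment equalities are dualized, (12.13) separates into one small subproblem per ship, each solvable by brute-force enumeration. A sketch using the Table 11.6 data:

from itertools import product

cost = [[130, 460, 40], [30, 150, 370], [510, 20, 120],
        [30, 40, 390], [340, 30, 40], [20, 450, 30]]   # cost[i][j]: district i, ship j
time = [[30, 10, 70], [50, 20, 10], [10, 60, 10],
        [11, 10, 15], [13, 10, 8], [9, 17, 12]]        # time[i][j]
nu = [300, 200, 200, 45, 45, 30]

total = sum(nu)                        # constant term: sum_i nu_i * 1
for j in range(3):                     # ship j's independent subproblem
    best = 0.0                         # assigning no districts is always feasible
    for pick in product((0, 1), repeat=6):
        if sum(p * time[i][j] for i, p in enumerate(pick)) <= 50:
            best = min(best, sum(p * (cost[i][j] - nu[i])
                                 for i, p in enumerate(pick)))
    total += best
print("Lagrangian relaxation value:", total)           # expected: 470.0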


12.4 Branch and Bound Search

• Branch and bound algorithms combine a subset enumeration strategy with relaxation methods. They systematically form classes of solutions and investigate whether the classes can contain optimal solutions by analyzing associated relaxations.

Example 12.2: River Power

River Power has 4 generators currently available for production and wishes to decide which to put on line to meet the expected 700-MW peak demand over the next several hours. The following table shows the cost to operate each generator (in $1000/hr) and their outputs (in MW). Units must be completely on or completely off.

 Generator j:                 1    2    3     4
 Operating cost ($1000/hr):   7   12    5    14
 Output power (MW):         300  600  500  1600


The decision variables are defined as

xj ≡ 1 if generator j is turned on, and 0 otherwise.

min 7x1 + 12x2 + 5x3 + 14x4
s.t. 300x1 + 600x2 + 500x3 + 1600x4 ≥ 700
     xj = 0 or 1, j = 1, …, 4

Optimal solution: x1* = x3* = 1, x2* = x4* = 0; Cost = 12.

Branch and Bound Search: Definitions

• A partial solution has some decision variables fixed, with others left free or undetermined. We denote free components of a partial solution by the symbol #. [12.22]
• The completions of a partial solution to a given model are the possible full solutions agreeing with the partial solution on all fixed components. [12.23]


Tree Search

• Branch and bound search begins at the initial or root solution x(0) = (#, …, #) with all variables free. [12.24]
• Branch and bound searches terminate or fathom a partial solution when they either identify a best completion or prove that none can produce an optimal solution in the overall model. [12.25]
• When a partial solution cannot be terminated in a branch and bound search of a 0-1 discrete optimization model, it is branched by creating two subsidiary partial solutions derived by fixing a previously free binary variable. One of these partial solutions matches the current one except that the chosen variable is fixed = 1, and the other = 0. [12.26]
• Branch and bound search stops when every partial solution in the tree has been either branched or terminated. [12.27]
• Depth first search selects at each iteration an active partial solution with the most components fixed (i.e., one deepest in the search tree). [12.28]


LP-Based Branch and Bound Solution of the River Power Example

Node 0: x(0) = (#,#,#,#); incumbent value ν̂ = +∞. Relaxation optimum x̄(0) = (0, 0, 0, 0.4375), ν̄ = 6.125. Branch on x4.
Node 1 (x4 = 1): x(1) = (#,#,#,1); x̄(1) = (0, 0, 0, 1), ν̄ = 14. All-integer: terminated by solving, incumbent ν̂ = 14.
Node 2 (x4 = 0): x(2) = (#,#,#,0); x̄(2) = (0, 0.33, 1, 0), ν̄ = 9. Branch on x2.
Node 3 (x2 = 1): x(3) = (#,1,#,0); x̄(3) = (0, 1, 0.2, 0), ν̄ = 13. Branch on x3.
Node 4 (x3 = 1): x̄(4) = (0, 1, 1, 0), ν̄ = 17 ≥ 14: terminated by bound.
Node 5 (x3 = 0): x̄(5) = (0.33, 1, 0, 0), ν̄ = 14.33 ≥ 14: terminated by bound.
Node 6 (x2 = 0): x(6) = (#,0,#,0); x̄(6) = (0.67, 0, 1, 0), ν̄ = 9.67. Branch on x1.
Node 7 (x1 = 1): x̄(7) = (1, 0, 0.8, 0), ν̄ = 11. Branch on x3.
Node 8 (x3 = 1): x̄(8) = (1, 0, 1, 0), ν̄ = 12. All-integer: terminated by solving, new incumbent ν̂ = 12.
Node 9 (x3 = 0): completion (1, 0, 0, 0) provides only 300 < 700 MW: terminated by infeasibility.
Node 10 (x1 = 0): even the best completion (0, 0, 1, 0) provides only 500 < 700 MW: terminated by infeasibility.

Every node has been branched or terminated, so the incumbent x̂ = (1, 0, 1, 0) with cost 12 is optimal.

Incumbent Solutions

• The incumbent solution at any stage in a search of a discrete model is the best (in terms of objective value) feasible solution known so far. We denote the incumbent solution x̂ and its objective function value ν̂. [12.29]
• If a branch and bound search stops as in [12.27], with all partial solutions having been either branched or terminated, the final incumbent solution is a global optimum if one exists. Otherwise, the model is infeasible. [12.30]


Candidate Problems

• The candidate problem associated with any partial solution to an optimization model is the restricted version of the model obtained when variables are fixed as in the partial solution. [12.31]
• The feasible completions of any partial solution are exactly the feasible solutions to the corresponding candidate problem, and thus the objective value of the best feasible completion is the optimal objective value of the candidate problem. [12.32]

Terminating Partial Solutions with Relaxations

• If any relaxation of a candidate problem proves infeasible, the associated partial solution can be terminated because it has no feasible completions. [12.33]
• If any relaxation of a candidate problem has optimal objective value no better than the current incumbent solution value, the associated partial solution can be terminated because no feasible completion can improve on the incumbent. [12.34]
• If an optimal solution to any constraint relaxation of a candidate problem is feasible in the full candidate, it is a best feasible completion of the associated partial solution. After checking whether a new incumbent has been discovered, the partial solution can be terminated. [12.35]


Algorithm 12A: LP-Based Branch and Bound (0-1 ILPs)

Step 0: Initialization. Make the only active partial solution the one with all discrete variables free, and initialize solution index t ← 0. If any feasible solutions are known for the model, also choose the best as incumbent solution x̂ with objective value ν̂. Otherwise, set ν̂ ← −∞ if the model maximizes, and ν̂ ← +∞ if the model minimizes.

Step 1: Stopping. If active partial solutions remain, select one as x(t), and proceed to Step 2. Otherwise, stop. If there is an incumbent solution x̂, it is optimal; if not, the model is infeasible.

Step 2: Relaxation. Attempt to solve the LP relaxation of the candidate problem corresponding to x(t).

Step 3: Termination by Infeasibility. If the LP relaxation proved infeasible, there are no feasible completions of partial solution x(t). Terminate x(t), increment t ← t + 1, and return to Step 1.

Step 4: Termination by Bound. If the model maximizes and LP relaxation value ν̄ satisfies ν̄ ≤ ν̂, or it minimizes and ν̄ ≥ ν̂, the best feasible completion of partial solution x(t) cannot improve on the incumbent. Terminate x(t), increment t ← t + 1, and return to Step 1.


Step 5: Termination by Solving. If the LP relaxation optimum x̄(t) satisfies all binary constraints of the model, it provides the best feasible completion of partial solution x(t). After saving it as new incumbent solution x̂ ← x̄(t), ν̂ ← ν̄, terminate x(t), increment t ← t + 1, and return to Step 1.

Step 6: Branching. Choose some free binary-restricted component xp that was fractional in the LP relaxation optimum, and branch x(t) by creating two new active partial solutions. One is identical to x(t) except that xp is fixed = 0; the other is the same except that xp is fixed = 1. Then increment t ← t + 1, and return to Step 1.

Branching Rules for LP-Based Branch and Bound

• LP-based branch and bound algorithms always branch by fixing an integer-restricted decision variable that had a fractional value in the associated candidate problem relaxation. [12.36]
• When more than one integer-restricted variable is fractional in the relaxation optimum, LP-based branch and bound algorithms often branch by fixing the one closest to an integer value. [12.37]
• Remaining ties are settled by a tie-breaker rule.
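Algorithm 12A is short enough to implement directly. The sketch below (assuming scipy is available) applies it depth first to the River Power model and recovers the optimum x = (1, 0, 1, 0) with cost 12; the order in which nodes are processed may differ slightly from the tree shown earlier.

from scipy.optimize import linprog

c = [7.0, 12.0, 5.0, 14.0]              # operating costs ($1000/hr)
out = [300.0, 600.0, 500.0, 1600.0]     # generator outputs (MW)

def lp_relax(fixed):
    # LP relaxation of the candidate problem with the `fixed` variables pinned
    bounds = [(fixed.get(j, 0), fixed.get(j, 1)) for j in range(4)]
    return linprog(c, A_ub=[[-o for o in out]], b_ub=[-700.0],  # sum out*x >= 700
                   bounds=bounds, method="highs")

incumbent, nu_hat = None, float("inf")  # Step 0 (minimize): nu-hat = +infinity
active = [dict()]                       # root partial solution: all variables free
while active:                           # Step 1: stopping
    fixed = active.pop()                # depth first [12.28]
    res = lp_relax(fixed)               # Step 2: relaxation
    if not res.success:
        continue                        # Step 3: terminate by infeasibility
    if res.fun >= nu_hat:
        continue                        # Step 4: terminate by bound
    frac = [j for j in range(4) if 1e-6 < res.x[j] < 1 - 1e-6]
    if not frac:                        # Step 5: terminate by solving
        incumbent, nu_hat = [round(v) for v in res.x], res.fun
        continue
    p = min(frac, key=lambda j: min(res.x[j], 1 - res.x[j]))  # nearest integer [12.37]
    active.append({**fixed, p: 0})      # Step 6: branch
    active.append({**fixed, p: 1})
print("optimal:", incumbent, "cost:", nu_hat)   # expected: [1, 0, 1, 0], 12.0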




12.5 Rounding, Parent Bounds, Enumeration Sequences, and Stopping Early in Branch and Bound

• Refinements of the basic branch and bound algorithm.

NASA Example Model

max 200x1 + 3x2 + 20x3 + 50x4 + 70x5 + 20x6 + 5x7 + 10x8 + 200x9 + 150x10 + 18x11 + 8x12 + 300x13 + 185x14
s.t. 6x1 + 2x2 + 3x3 + 1x7 + 4x9 + 5x12 ≤ 10
     3x2 + 5x3 + 5x5 + 8x7 + 5x9 + 8x10 + 7x12 + 1x13 + 4x14 ≤ 12
     8x5 + 1x6 + 4x10 + 2x11 + 4x13 + 5x14 ≤ 14
     8x6 + 5x8 + 7x11 + 1x13 + 3x14 ≤ 14
     10x4 + 4x6 + 1x13 + 3x14 ≤ 14
     x4 + x5 ≤ 1
     x8 + x11 ≤ 1
     x9 + x14 ≤ 1
     x11 ≤ x2
     x4 ≤ x3
     x5 ≤ x3
     x6 ≤ x3
     x7 ≤ x3
     xj = 0 or 1, j = 1, …, 14                       (12.16)

Optimal solution: x1* = x3* = x4* = x8* = x13* = x14* = 1, all others 0; Value = 765.


Rounding for Incumbent Solutions

• If convenient rounding schemes are available, the relaxation optimum for every partial solution that cannot be terminated in a branch and bound search is usually rounded to a feasible solution for the full model prior to branching. The feasible solution provides a new incumbent if it is better than any known. [12.38]

Branch and Bound Family Tree Terminology

• Any node created directly from another by branching is called a child, and the branched node is its parent.
• The relaxation optimal value for the parent of any partial solution to a minimize model provides a lower bound on the objective value of any completion of its children. The relaxation optimal value for the parent in a maximize model provides an upper bound. [12.39]
• Whenever a branch and bound search discovers a new incumbent solution, any active partial solution with parent bound no better than the new incumbent solution value can immediately be terminated. [12.40]


Bounds on the Error of Stopping with the Incumbent Solution

• The least relaxation optimal value for parents of the active partial solutions in a minimizing branch and bound search always provides a lower bound on the optimal solution value of the full model. The greatest relaxation optimal value for parents in a maximizing search provides an upper bound. [12.41]
• At any stage of a branch and bound search, the difference between the incumbent solution value and the best parent bound of any active partial solution shows the maximum error in accepting the incumbent as an approximate optimum. [12.42]

Depth First, Best First, and Depth Forward Best Back Sequences

• Best first search selects at each iteration an active partial solution with the best parent bound. [12.43]
• Depth forward best back search selects a deepest active partial solution after branching a node, but one with best parent bound after a termination. [12.44]
• When several active partial solutions tie for deepest or best parent bound, the nearest-child rule chooses the one whose last fixed variable value is nearest the corresponding component of the parent LP relaxation optimum. [12.45]


Branch and Cut Search

• Branch and cut algorithms modify the basic branch and bound strategy of Algorithm 12A by attempting to strengthen relaxations with new inequalities before branching a partial solution. Added constraints should hold for all feasible solutions to the full discrete model, but they should cut off (render infeasible) the last relaxation optimum. [12.46]

Algorithm 12B: Branch and Cut (0-1 ILPs)

Steps 0-5: Identical to Steps 0-5 of Algorithm 12A (initialization; stopping; relaxation; termination by infeasibility, by bound, and by solving).


Step 6: Valid Inequality. Attempt to identify a valid inequality for the full ILP model that is violated by the current relaxation optimum x̄(t). If successful, make the constraint part of the full model, increment t ← t + 1, and return to Step 2.

Step 7: Branching. Choose some free binary-restricted component xp that was fractional in the LP relaxation optimum, and branch x(t) by creating two new active partial solutions. One is identical to x(t) except that xp is fixed = 0; the other is the same except that xp is fixed = 1. Then increment t ← t + 1, and return to Step 1.


12.6 Improving Search Heuristics for Discrete Optimization INLPs

• Many large combinatorial optimization models, especially INLPs with nonlinear objective functions, are too large for enumeration and lack strong relaxations that are tractable.
• Discrete Neighborhoods and Move Sets: Improving searches over discrete variables define neighborhoods by specifying a move set M of allowed moves. The current solution and all solutions reachable from it in a single move Δx ∈ M comprise its neighborhood. [12.47]

Algorithm 12C: Rudimentary Improving Search Algorithm

Step 0: Initialization. Choose any starting feasible solution x(0), and set solution index t ← 0.

Step 1: Local Optimum. If no move Δx in move set M is both improving and feasible at current solution x(t), stop. Point x(t) is a local optimum.

Step 2: Move. Choose some improving feasible move Δx ∈ M as Δx(t+1).

Step 3: Step. Update x(t+1) ← x(t) + Δx(t+1).

Step 4: Increment. Increment t ← t + 1, and return to Step 1.
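A minimal instance of Algorithm 12C for the TSP (anticipating the NCB example below) takes segment reversals, 2-opt style moves, as the move set M. This sketch runs on hypothetical random points, not the NCB data.

import random

def tour_length(tour, d):
    return sum(d[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def improving_search(tour, d):
    tour = tour[:]
    improved = True
    while improved:                     # Step 1: stop at a local optimum
        improved = False
        n = len(tour)
        for a in range(n - 1):          # scan the move set M
            for b in range(a + 2, n):
                cand = tour[:a + 1] + tour[a + 1:b + 1][::-1] + tour[b + 1:]
                if tour_length(cand, d) < tour_length(tour, d):
                    tour, improved = cand, True   # Steps 2-4: take the move
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(8)]
d = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in pts] for xa, ya in pts]
print(improving_search(list(range(8)), d))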


Example 11.8: NCB Circuit Board TSP

Figure 11.4 shows the tiny example that we will investigate for fictional board manufacturer NCB. We seek an optimal route through the 10 hole locations indicated. Table 11.7 reports straight-line distances d_{i,j} between hole locations i and j. Lines in Figure 11.4 show a fair-quality solution with total length 92.8 inches. The best route is 11 inches shorter (see Section 12.6).

Distances d_{i,j} (inches):
       1    2    3    4    5    6    7    8    9   10
  1    -  3.6  5.1 10.0 15.3 20.0 16.0 14.2 23.0 26.4
  2  3.6    -  3.6  6.4 12.1 18.1 13.2 10.6 19.7 23.0
  3  5.1  3.6    -  7.1 10.6 15.0 15.8 10.8 18.4 21.9
  4 10.0  6.4  7.1    -  7.0 15.7 10.0  4.2 13.9 17.0
  5 15.3 12.1 10.6  7.0    -  9.9 15.3  5.0  7.8 11.3
  6 20.0 18.1 15.0 15.7  9.9    - 25.0 14.9 12.0 15.0
  7 16.0 13.2 15.8 10.0 15.3 25.0    - 10.3 19.2 21.0
  8 14.2 10.6 10.8  4.2  5.0 14.9 10.3    - 10.2 13.0
  9 23.0 19.7 18.4 13.9  7.8 12.0 19.2 10.2    -  3.6
 10 26.4 23.0 21.9 17.0 11.3 15.0 21.0 13.0  3.6    -

[Figure 11.4: the ten hole locations on the circuit board, with the 92.8-inch tour drawn.]

Quadratic Assignment Formulation of the TSP

Let y_{k,i} ≡ 1 if the kth point visited is i, and 0 otherwise. Then

min Σ_{k=1}^{10} Σ_{i=1}^{10} Σ_{j=1}^{10} d_{i,j} y_{k,i} y_{k+1,j}     (total length)
s.t. Σ_{i} y_{k,i} = 1 for all k = 1, …, 10     (each position occupied)
     Σ_{k} y_{k,i} = 1 for all i = 1, …, 10     (each point visited)
     y_{k,i} = 0 or 1 for all k, i                                   (12.20)

(Position index k + 1 is interpreted cyclically, so y_{11,j} means y_{1,j} and the route closes into a tour.)


Choosing a Move Set

• The move set M of a discrete improving search must be compact enough to be checked at each iteration for improving feasible neighbors. [12.48]
• The solution produced by a discrete improving search depends on the move set (or neighborhood) employed, with larger move sets generally resulting in superior local optima. [12.49]

Multistart Search

• Multistart, keeping the best of several local optima obtained by searches from different starting solutions, is one way to improve the heuristic solutions produced by improving search. [12.50]


12.7 Tabu, Simulated Annealing, and Genetic Algorithm Extensions of Improving Search

Difficulty with Allowing Nonimproving Moves

• Nonimproving moves will lead to infinite cycling of improving search unless some provision is added to prevent repeating solutions. [12.51]

Tabu Search

• Tabu search deals with cycling by temporarily forbidding moves that would return to a solution recently visited. [12.52]


Algorithm 12D: Tabu Search

Step 0: Initialization. Choose any starting feasible solution x(0) and an iteration limit tmax. Then set incumbent solution x̂ ← x(0) and solution index t ← 0. No moves are tabu.

Step 1: Stopping. If no non-tabu move Δx in move set M leads to a feasible neighbor of current solution x(t), or if t = tmax, then stop. Incumbent solution x̂ is an approximate optimum.

Step 2: Move. Choose some non-tabu feasible move Δx ∈ M as Δx(t+1).

Step 3: Step. Update x(t+1) ← x(t) + Δx(t+1).

Step 4: Incumbent Solution. If the objective function value of x(t+1) is superior to that of incumbent solution x̂, replace x̂ ← x(t+1).

Step 5: Tabu List. Remove from the list of tabu (forbidden) moves any that have been on it for a sufficient number of iterations, and add a collection of moves that includes any returning immediately from x(t+1) to x(t).

Step 6: Increment. Increment t ← t + 1, and return to Step 1.
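A sketch of Algorithm 12D on the same tour representation as the earlier improving-search sketch. The tenure and move set here are illustrative choices; the tabu list forbids re-reversing a just-reversed segment for several iterations, which blocks the immediate return to the previous tour.

import random

def tour_length(tour, d):
    return sum(d[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def two_opt(tour, a, b):
    return tour[:a + 1] + tour[a + 1:b + 1][::-1] + tour[b + 1:]

def tabu_search(tour, d, t_max=200, tenure=8):
    best, cur = tour[:], tour[:]
    n = len(tour)
    moves = [(a, b) for a in range(n - 1) for b in range(a + 2, n)]
    tabu = {}                            # move -> first iteration it is free again
    for t in range(t_max):               # Step 1: stopping
        cands = [m for m in moves if tabu.get(m, 0) <= t]
        if not cands:
            break
        m = min(cands, key=lambda mv: tour_length(two_opt(cur, *mv), d))
        cur = two_opt(cur, *m)           # Steps 2-3: best non-tabu move, even if worse
        if tour_length(cur, d) < tour_length(best, d):
            best = cur[:]                # Step 4: incumbent update
        tabu[m] = t + tenure             # Step 5: update the tabu list
    return best                          # approximate optimum

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(9)]
d = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in pts] for xa, ya in pts]
print(tabu_search(list(range(9)), d))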


Simulated Annealing Search

• Simulated annealing algorithms control cycling by accepting nonimproving moves according to probabilities tested with computer-generated random numbers. [12.53]
• The move selection process at each iteration begins with random choice of a provisional feasible move, totally ignoring its objective function impact. Next, the net objective function improvement Δobj is computed for the chosen move. The move is always accepted if it improves (Δobj > 0); otherwise it is accepted with

  probability of acceptance = e^(Δobj/q)

  where q is the current temperature.

Algorithm 12E: Simulated Annealing Search

Step 0: Initialization. Choose any starting feasible solution x(0), an iteration limit tmax, and a relatively large initial temperature q > 0. Then set incumbent solution x̂ ← x(0) and solution index t ← 0.

Step 1: Stopping. If no move Δx in move set M leads to a feasible neighbor of current solution x(t), or if t = tmax, then stop. Incumbent solution x̂ is an approximate optimum.

Step 2: Provisional Move. Randomly choose a feasible move Δx ∈ M as a provisional Δx(t+1), and compute the net objective function improvement Δobj for moving from x(t) to (x(t) + Δx(t+1)) (increase for a maximize, decrease for a minimize).


Step 3: Acceptance. If Δx(t+1) improves, or with probability e^(Δobj/q) if Δobj ≤ 0, accept Δx(t+1) and update x(t+1) ← x(t) + Δx(t+1). Otherwise, return to Step 2.

Step 4: Incumbent Solution. If the objective function value of x(t+1) is superior to that of incumbent solution x̂, replace x̂ ← x(t+1).

Step 5: Temperature Reduction. If a sufficient number of iterations have passed since the last temperature change, reduce temperature q.

Step 6: Increment. Increment t ← t + 1, and return to Step 1.
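A sketch of Algorithm 12E on the same representation, minimizing tour length with randomly chosen segment-reversal moves. The cooling schedule (multiply q by 0.95 every 100 iterations) is an illustrative choice.

import math, random

def tour_length(tour, d):
    return sum(d[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def simulated_annealing(tour, d, t_max=5000, q=10.0):
    best, cur = tour[:], tour[:]
    n = len(tour)
    for t in range(t_max):
        a = random.randrange(n - 2)      # Step 2: random provisional move
        b = random.randrange(a + 2, n)
        cand = cur[:a + 1] + cur[a + 1:b + 1][::-1] + cur[b + 1:]
        d_obj = tour_length(cur, d) - tour_length(cand, d)   # net improvement
        if d_obj > 0 or random.random() < math.exp(d_obj / q):
            cur = cand                   # Step 3: accept (always if improving)
            if tour_length(cur, d) < tour_length(best, d):
                best = cur[:]            # Step 4: incumbent update
        if (t + 1) % 100 == 0:
            q *= 0.95                    # Step 5: temperature reduction
    return best                          # approximate optimum

random.seed(4)
pts = [(random.random(), random.random()) for _ in range(9)]
d = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in pts] for xa, ya in pts]
print(simulated_annealing(list(range(9)), d))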

Genetic Algorithms

• Genetic algorithms evolve good heuristic optima by operations combining members of an improving population of individual solutions. [12.54]
• Crossover combines a pair of "parent" solutions to produce a pair of "children" by breaking both parent vectors at the same point and reassembling the first part of one parent solution with the second part of the other, and vice versa. [12.55]


• The elitist strategy for implementation of genetic algorithms forms each new generation as a mixture of elite (best) solutions held over from the previous generation, immigrant solutions added arbitrarily to increase diversity, and children of crossover operations on non-overlapping pairs of solutions in the previous population. [12.56]
• Effective genetic algorithm search requires a choice for encoding problem solutions that often, if not always, preserves solution feasibility after crossover. [12.57]

Algorithm 12F: Genetic Algorithm Search

Step 0: Initialization. Choose a population size p, initial starting feasible solutions x(1), …, x(p), a generation limit tmax, and population subdivisions pe for elites, pi for immigrants, and pc for crossovers. Also set the generation index t ← 0.

Step 1: Stopping. If t = tmax, stop and report the best solution of the current population as an approximate optimum.

Step 2: Elite. Initialize the population of generation t + 1 with copies of the pe best solutions in the current generation.

Step 3: Immigrants. Arbitrarily choose pi new immigrant feasible solutions, and include them in the t + 1 population.


Step 4: Crossover. Choose pc/2 non-overlapping pairs of solutions from the generation t population, and execute crossover on each pair at an independently chosen random cut point to complete the generation t + 1 population.

Step 5: Increment. Increment t ← t + 1, and return to Step 1.
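A sketch of Algorithm 12F with the elitist strategy [12.56] on a small 0-1 knapsack-style model. All data here are hypothetical, and infeasible children are repaired by dropping items, one simple way of preserving feasibility in the spirit of [12.57].

import random

value  = [200, 3, 20, 50, 70, 20, 5, 10, 200, 150]   # hypothetical item values
weight = [6, 2, 3, 1, 4, 5, 8, 5, 5, 8]              # hypothetical item weights
CAP = 15

def fitness(x):
    return sum(v for v, g in zip(value, x) if g)

def repair(x):                           # drop random items until feasible
    x = list(x)
    while sum(w for w, g in zip(weight, x) if g) > CAP:
        x[random.choice([i for i, g in enumerate(x) if g])] = 0
    return x

def crossover(p1, p2):                   # one-point crossover [12.55]
    cut = random.randrange(1, len(p1))
    return repair(p1[:cut] + p2[cut:]), repair(p2[:cut] + p1[cut:])

def ga(p=12, pe=4, pi=2, pc=6, t_max=50):
    pop = [repair([random.randint(0, 1) for _ in range(10)]) for _ in range(p)]
    for _ in range(t_max):               # Step 1: generation limit
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:pe]                   # Step 2: elites
        nxt += [repair([random.randint(0, 1) for _ in range(10)])
                for _ in range(pi)]      # Step 3: immigrants
        pairs = random.sample(pop, pc)   # Step 4: non-overlapping pairs
        for p1, p2 in zip(pairs[::2], pairs[1::2]):
            nxt += list(crossover(p1, p2))
        pop = nxt
    return max(pop, key=fitness)

random.seed(5)
best = ga()
print(fitness(best), best)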

12.8 Constructive Heuristics

• The improving search heuristics of Sections 12.6 and 12.7 move from complete solution to complete solution.
• The constructive search alternative follows a strategy more like branch and bound, proceeding through partial solutions, choosing values for decision variables one at a time and (often) stopping upon completion of a first feasible solution.


Algorithm 12G: Rudimentary Constructive Search

Step 0: Initialization. Start with the all-free initial partial solution x(0) = (#, …, #) and set solution index t ← 0.

Step 1: Stopping. If all components of current solution x(t) are fixed, stop and output x̂ ← x(t) as an approximate optimum.

Step 2: Step. Choose a free component xp of partial solution x(t) and a value for it that plausibly leads to good feasible completions. Then advance to partial solution x(t+1), identical to x(t) except that xp is fixed at the chosen value.

Step 3: Increment. Increment t ← t + 1, and return to Step 1.

Greedy Choices of Variables to Fix

• Greedy constructive heuristics select as the next variable to fix, and its value, the choice that does least damage to feasibility and most helps the objective function, based on what has already been fixed in the current partial solution (see the sketch below). [12.58]
• In large, especially nonlinear, discrete models, or when time is limited, constructive search is often the only effective optimization-based approach to finding good solutions. [12.59]
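A sketch of a greedy constructive heuristic [12.58]: the classic nearest-neighbor rule for the TSP, which fixes one tour position per step by the cheapest feasible choice (again on hypothetical random points, not the NCB data).

import random

def nearest_neighbor(d, start=0):
    n = len(d)
    tour, free = [start], set(range(n)) - {start}     # partial solution; free points
    while free:                                       # fix one component per step
        nxt = min(free, key=lambda j: d[tour[-1]][j]) # greedy: least added length
        tour.append(nxt)
        free.remove(nxt)
    return tour                                       # first complete feasible solution

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(10)]
d = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in pts] for xa, ya in pts]
print(nearest_neighbor(d))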