Metaheuristics in Optimization

Panos M. Pardalos
University of Florida, ISE Dept., Florida, USA

Workshop on the European Chapter on Metaheuristics and Large Scale Optimization
Vilnius, Lithuania, May 19-21, 2005



Outline

1. Quadratic Assignment & GRASP

2. Classical Metaheuristics

3. Parallelization of Metaheuristics

4. Evaluation of Metaheuristics

5. Success Stories

6. Concluding Remarks


(joint work with Mauricio Resende and Claudio Meneses)


Metaheuristics

• Metaheuristics are high-level procedures that coordinate simple heuristics, such as local search, to find solutions of better quality than those found by the simple heuristics alone.

• Examples: simulated annealing, genetic algorithms, tabu search, scatter search, variable neighborhood search, and GRASP.


Quadratic assignment problem (QAP)

• Given N facilities f1, f2, …, fN and N locations l1, l2, …, lN

• Let A = (ai,j) be a positive real N×N matrix, where ai,j is the flow between facilities fi and fj

• Let B = (bi,j) be a positive real N×N matrix, where bi,j is the distance between locations li and lj



Quadratic assignment problem (QAP)

• Let p: {1,2,…,N} → {1,2,…,N} be an assignment of the N facilities to the N locations

• Define the cost of assignment p to be

  c(p) = Σi=1..N Σj=1..N ai,j bp(i),p(j)

• QAP: Find a permutation vector p ∈ ∏N that minimizes the assignment cost:

  min c(p) subject to p ∈ ∏N



Quadratic assignment problem (QAP)

[Figure: three locations l1, l2, l3 with pairwise distances 10, 30, 40, and three facilities f1, f2, f3 with pairwise flows 1, 5, 10]

cost of assignment: 10×1 + 30×10 + 40×5 = 510


Quadratic assignment problem (QAP)

swap locations of facilities f2 and f3

[Figure: the same locations and flows, with f2 and f3 exchanged]

cost of assignment: 10×10 + 30×1 + 40×5 = 330


Quadratic assignment problem (QAP)

swap locations of facilities f1 and f3

[Figure: the same locations and flows, with f1 and f3 exchanged]

cost of assignment: 10×10 + 30×5 + 40×1 = 290 (optimal!)
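The three assignment costs above can be checked exhaustively, since a 3-facility instance has only 3! = 6 permutations. A minimal Python sketch (the matrices are read off the example figures; the cost sums each facility pair once, matching the slide arithmetic):

```python
from itertools import permutations

# Distances between locations l1, l2, l3 and flows between facilities
# f1, f2, f3, read off the example figures.
D = [[0, 10, 30],
     [10, 0, 40],
     [30, 40, 0]]
F = [[0, 1, 10],
     [1, 0, 5],
     [10, 5, 0]]

def cost(p):
    """Cost of assignment p, where facility i is placed at location p[i].
    Each facility pair is counted once, matching the slide arithmetic."""
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]]
               for i in range(n) for j in range(i + 1, n))

print(cost((0, 1, 2)))  # identity assignment: 510
print(cost((0, 2, 1)))  # f2 and f3 swapped: 330
best = min(permutations(range(3)), key=cost)
print(best, cost(best))  # the optimal assignment, with cost 290
```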


GRASP for QAP

• GRASP multi-start metaheuristic: greedy randomized construction, followed by local search (Feo & Resende, 1989, 1995; Festa & Resende, 2002; Resende & Ribeiro, 2003)

• GRASP for QAP:
  – Li, Pardalos, & Resende (1994): GRASP for QAP
  – Resende, Pardalos, & Li (1996): Fortran subroutines for dense QAPs
  – Pardalos, Pitsoulis, & Resende (1997): Fortran subroutines for sparse QAPs
  – Fleurent & Glover (1999): memory mechanism in construction



GRASP for QAP

repeat {
    x = GreedyRandomizedConstruction();
    x = LocalSearch(x);
    save x as x* if best so far;
}
return x*;
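This loop can be sketched in Python. For brevity the construction below is a uniformly random permutation rather than the two-stage greedy randomized scheme described on the following slides, and the local search is the swap-based procedure described later; both simplifications are ours, not the slides':

```python
import random

def grasp_qap(F, D, iters=100, seed=0):
    """GRASP skeleton for the QAP: repeat (construction, local search),
    keeping the best solution found. Facility i is placed at location p[i];
    the cost counts each facility pair once."""
    rng = random.Random(seed)
    n = len(F)

    def cost(p):
        return sum(F[i][j] * D[p[i]][p[j]]
                   for i in range(n) for j in range(i + 1, n))

    def local_search(p):
        # Swap-based local search: keep making improving swaps.
        c = cost(p)
        improved = True
        while improved:
            improved = False
            for i in range(n):
                for j in range(i + 1, n):
                    p[i], p[j] = p[j], p[i]
                    nc = cost(p)
                    if nc < c:
                        c, improved = nc, True
                    else:
                        p[i], p[j] = p[j], p[i]  # undo the swap
        return p, c

    best_p, best_c = None, float("inf")
    for _ in range(iters):
        p = list(range(n))
        rng.shuffle(p)             # x = GreedyRandomizedConstruction()
        p, c = local_search(p)     # x = LocalSearch(x)
        if c < best_c:             # save x as x* if best so far
            best_p, best_c = list(p), c
    return best_p, best_c
```

On the 3-facility example of the earlier slides, every starting permutation reaches the optimum under the swap neighborhood, so this sketch returns cost 290.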


Construction

• Stage 1: make two assignments {fi→lk ; fj→ll}

• Stage 2: make the remaining N–2 assignments of facilities to locations, one facility/location pair at a time



Stage 1 construction

• sort distances bi,j in increasing order: bi(1),j(1) ≤ bi(2),j(2) ≤ … ≤ bi(N),j(N)

• sort flows ak,l in decreasing order: ak(1),l(1) ≥ ak(2),l(2) ≥ … ≥ ak(N),l(N)

• sort the products: ak(1),l(1)·bi(1),j(1), ak(2),l(2)·bi(2),j(2), …, ak(N),l(N)·bi(N),j(N)

• among the smallest products, select ak(q),l(q)·bi(q),j(q) at random, corresponding to the assignments {fk(q)→li(q) ; fl(q)→lj(q)}



Stage 2 construction

• If Ω = {(i1,k1), (i2,k2), …, (iq,kq)} are the q assignments made so far, then the cost of assigning fj→ll is

  cj,l = Σ(i,k)∈Ω ai,j bk,l

• Of all possible assignments, one is selected at random from the assignments having smallest costs and is added to Ω
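Both construction stages can be sketched together. The restricted-candidate-list size of 3 below is an illustrative parameter of ours; the slides only say the selection is made among the smallest products/costs:

```python
import random

def construct(F, D, rcl=3, rng=random):
    """Greedy randomized construction for the QAP (sketch).
    F[i][j]: flow between facilities i and j; D[k][l]: distance between
    locations k and l (both symmetric). Returns p with facility i at p[i].
    rcl is an illustrative candidate-list size, not taken from the slides."""
    n = len(F)
    # Stage 1: sort distances up and flows down, pair them term by term,
    # and pick one of the smallest flow-times-distance products at random.
    dists = sorted((D[k][l], k, l) for k in range(n) for l in range(k + 1, n))
    flows = sorted(((F[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    prods = sorted((flows[q][0] * dists[q][0], q) for q in range(len(dists)))
    _, q = rng.choice(prods[:rcl])
    _, i, j = flows[q]
    _, k, l = dists[q]
    assigned = {i: k, j: l}                 # facility -> location
    # Stage 2: add one facility/location pair at a time, picking at random
    # among the cheapest assignments c[j,l] = sum over (i,k) of a[i][j]*b[k][l].
    while len(assigned) < n:
        cand = []
        for f in range(n):
            if f in assigned:
                continue
            for loc in range(n):
                if loc in assigned.values():
                    continue
                c = sum(F[i][f] * D[k][loc] for i, k in assigned.items())
                cand.append((c, f, loc))
        cand.sort()
        _, f, loc = rng.choice(cand[:rcl])
        assigned[f] = loc
    return [assigned[f] for f in range(n)]
```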


Sped up in Pardalos, Pitsoulis, & Resende (1997) for QAPs with sparse A or B matrices.


Swap based local search

a) For all pairs of assignments {fi→lk ; fj→ll}, test if the swapped assignment {fi→ll ; fj→lk} improves the solution.

b) If so, make the swap and return to step (a).


Repeat (a)-(b) until no swap improves the current solution.


Path-relinking

• Intensification strategy exploring trajectories connecting elite solutions: Glover (1996)

• Originally proposed in the context of tabu search and scatter search.

• Paths in the solution space leading to other elite solutions are explored in the search for better solutions, by selecting moves that introduce attributes of the guiding solution into the current solution.



Path-relinking

• Exploration of trajectories that connect high-quality (elite) solutions:

[Figure: a path in the neighborhood of solutions, from an initial solution to a guiding solution]


Path-relinking

• The path is generated by selecting moves that introduce into the initial solution attributes of the guiding solution.

• At each step, all moves that incorporate attributes of the guiding solution are evaluated and the best move is selected.

[Figure: moves leading from the initial solution to the guiding solution]



Path-relinking

Combine solutions x and y. Let Δ(x,y) be the symmetric difference between x and y.

while ( |Δ(x,y)| > 0 ) {
    evaluate moves corresponding to Δ(x,y);
    make best move;
    update Δ(x,y);
}
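For permutation vectors the moves are swaps: each step places one more facility at the location it occupies in the guiding solution, so |Δ(x,y)| strictly decreases and the loop terminates. A Python sketch (the cost function is whatever the caller supplies):

```python
def path_relink(x, y, cost):
    """Walk from permutation x toward permutation y. At each step, among
    all swaps that make one more position of x agree with y, the cheapest
    resulting solution is chosen. Returns the best solution visited."""
    x = list(x)
    best, best_c = list(x), cost(x)
    delta = [i for i in range(len(x)) if x[i] != y[i]]  # symmetric difference
    while delta:
        moves = []
        for i in delta:
            j = x.index(y[i])            # position currently holding y[i]
            x[i], x[j] = x[j], x[i]
            moves.append((cost(x), i, j))
            x[i], x[j] = x[j], x[i]      # undo
        c, i, j = min(moves)             # make the best move
        x[i], x[j] = x[j], x[i]
        if c < best_c:
            best, best_c = list(x), c
        delta = [i for i in range(len(x)) if x[i] != y[i]]  # update
    return best, best_c
```

On the 3-facility example, relinking the identity assignment (cost 510) to the optimal one passes through the cost-330 solution and returns the cost-290 optimum.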


GRASP with path-relinking

• Originally used by Laguna and Martí (1999).

• Maintains a set of elite solutions found during GRASP iterations.

• After each GRASP iteration (construction and local search):
  – Use the GRASP solution as the initial solution.
  – Select an elite solution uniformly at random: the guiding solution.
  – Perform path-relinking between these two solutions.



GRASP with path-relinking

Repeat for Max_Iterations:
    Construct a greedy randomized solution.
    Use local search to improve the constructed solution.
    Apply path-relinking to further improve the solution.
    Update the pool of elite solutions.
    Update the best solution found.


P-R for QAP (permutation vectors)


Path-relinking for QAP

[Figure: path from the initial solution to the guiding solution; if a swap improves the solution, local search is applied; if a local minimum improves the incumbent, it is saved]


Path-relinking for QAP

Result of path-relinking: S*

[Figure: path S = S0, S1, S2, …, SN = T in the neighborhood of solutions, from initial solution S to guiding solution T, with best solution S* on the path]

If c(S*) < min {c(S), c(T)} and c(S*) ≤ c(Si) for i = 1,…,N, i.e. S* is the best solution in the path, then S* is returned.


Path-relinking for QAP

[Figure: path S = S0, …, Si–1, Si, Si+1, …, SN = T from the initial solution to the guiding solution]

Si is a local minimum w.r.t. PR if c(Si) < c(Si–1) and c(Si) < c(Si+1).

If path-relinking does not improve (S,T), then, if Si is a best local minimum w.r.t. PR, return S* = Si.

If no local minimum exists, return S* = argmin{c(S), c(T)}.


PR pool management

• S* is a candidate for inclusion in the pool of elite solutions (P)

• If c(S*) < c(Se) for all Se ∈ P, then S* is put in P

• Else, if c(S*) < max{c(Se): Se ∈ P} and |Δ(S*,Se)| ≥ 3 for all Se ∈ P, then S* is put in P

• If the pool is full, remove argmin{|Δ(S*,Se)|: Se ∈ P such that c(Se) ≥ c(S*)}
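These pool rules can be written down directly. In the sketch below, pool entries are (solution, cost) pairs; the pool size of 30 matches the experimental setup later in the talk, and the minimum difference of 3 follows the rule above:

```python
def diff(a, b):
    # |Delta(a, b)|: number of positions where permutations a and b differ.
    return sum(u != v for u, v in zip(a, b))

def try_insert(pool, s, c_s, max_size=30, min_diff=3):
    """Apply the elite-pool rules to candidate s with cost c_s.
    Entries of pool are (solution, cost) pairs. Returns True if s enters."""
    accept = (not pool
              or all(c_s < c for _, c in pool)
              or (c_s < max(c for _, c in pool)
                  and all(diff(s, e) >= min_diff for e, _ in pool)))
    if not accept:
        return False
    if len(pool) >= max_size:
        # Evict the member most similar to s among those no better than s.
        worse = [(e, c) for e, c in pool if c >= c_s]
        pool.remove(min(worse, key=lambda ec: diff(s, ec[0])))
    pool.append((s, c_s))
    return True
```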



PR pool management

S is the initial solution for path-relinking: favor the choice of a target solution T with a large symmetric difference with S. This leads to longer paths in path-relinking.

Probability of choosing Se ∈ P:

  p(Se) = |Δ(S,Se)| / Σ R∈P |Δ(S,R)|


Experimental results

• Compare GRASP with and without path-relinking.

• The new GRASP code in C outperforms the old Fortran codes; we use the same code to compare the algorithms.

• All QAPLIB (Burkard, Karisch, & Rendl, 1991) instances of size N ≤ 40.

• 100 independent runs of each algorithm, recording the CPU time to find the best known solution for the instance.



Experimental results

• SGI Challenge computer (28 196-MHz R10000 processors and 7 GB memory)

• A single processor used for each run

• GRASP RCL parameter chosen at random in the interval [0,1] at each GRASP iteration

• Size of elite set: 30

• Path-relinking done in both directions (S to T and T to S)

• Care taken to ensure that GRASP and GRASP with path-relinking iterations are in sync



Time-to-target-value plots

[Figure: empirical probability vs. time to target solution value (seconds), from 0 to 5 s]

The random variable time-to-target-solution-value fits a two-parameter exponential distribution (Aiex, Resende, & Ribeiro, 2002).

Sort the times such that t1 ≤ t2 ≤ ∙∙∙ ≤ t100 and plot {ti, pi}, for i = 1,…,100, where pi = (i – 0.5)/100.
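The plotted points follow directly from the sorted run times. A sketch with hypothetical times (the real plots use the 100 recorded runs):

```python
def ttt_points(times):
    """Empirical time-to-target points: sort the run times and pair the
    i-th smallest time with probability p_i = (i - 0.5)/n, as in the slide."""
    ts = sorted(times)
    n = len(ts)
    return [(t, (i + 0.5) / n) for i, t in enumerate(ts)]

# Four hypothetical run times, in seconds:
print(ttt_points([0.8, 0.3, 1.4, 0.5]))
# [(0.3, 0.125), (0.5, 0.375), (0.8, 0.625), (1.4, 0.875)]
```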


Time-to-target-value plots

[Figure: cumulative probability vs. time to target solution value (seconds)]

In 80% of the trials the target solution is found in less than 1.4 s.

The probability of finding the target solution in less than 1 s is about 70%.


Time-to-target-value plots

[Figure: cumulative probability vs. time to target solution for ALG 1 and ALG 2]

For a given time, compare the probabilities of finding the target solution in at most that time. For a given probability, compare the times required to find the target solution with that probability. By these criteria, we say ALG 1 is faster than ALG 2.


C.E. Nugent, T.E. Vollmann and J. Ruml [1968]

[Figure: four time-to-target-value plots (instances nug12, nug20, nug25, nug30) comparing GRASP with PR against GRASP; cumulative probability vs. time to target value, in seconds on an SGI Challenge 196MHz R10000]


E.D. Taillard [1991, 1994]

[Figure: four time-to-target-value plots (instances tai15a, tai17a, tai20a, tai25a) comparing GRASP with PR against GRASP; cumulative probability vs. time to target value, in seconds on an SGI Challenge 196MHz R10000]

Page 62: Y. Li and P.M. Pardalos [1992]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances lipa20a, lipa30a, and lipa40a]

Page 63: U.W. Thonemann and A. Bölte [1994]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances tho30 and tho40]

Page 64: L. Steinberg [1961]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances ste36a, ste36b, and ste36c]

Page 65: M. Scriabin and R.C. Vergin [1975]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances scr12, scr15, and scr20]

Page 66: S.W. Hadley, F. Rendl and H. Wolkowicz [1992]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances had14, had16, had18, and had20]

Page 67: R.E. Burkard and J. Offermann [1977]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances bur26a, bur26b, bur26c, and bur26d]

Page 68: N. Christofides and E. Benavent [1989]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances chr18a, chr20a, chr22a, and chr25a]

Page 69: C. Roucairol [1987]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances rou12, rou15, and rou20]

Page 70: J. Krarup and P.M. Pruzan [1978]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances kra30a and kra30b]

Page 71: B. Eschermann and H.J. Wunderlich [1990]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances esc32a, esc32b, esc32d, and esc32h]

Page 72: B. Eschermann and H.J. Wunderlich [1990]

[Figure: time-to-target plots (cumulative probability versus time to target value, seconds on an SGI Challenge 196MHz R10000) for GRASP and GRASP with path-relinking on instances esc32c, esc32e, esc32f, and esc32g]

Page 73: Remarks

• A new heuristic for the QAP is described.
• Path-relinking is shown to improve the performance of GRASP on almost all instances.
• Experimental results and code are available at http://www.research.att.com/~mgcr/exp/gqapspr


Page 76: Classical Metaheuristics

• Simulated Annealing
• Genetic Algorithms
• Memetic Algorithms
• Tabu Search
• GRASP
• Variable Neighborhood Search
• etc.

(see Handbook of Applied Optimization, P.M. Pardalos and M.G. Resende, Oxford University Press, Inc., 2002)

Page 77: Simulated Annealing

Input: A problem instance
Output: A (sub-optimal) solution

1. Generate an initial solution at random and initialize the temperature T
2. While (T > 0) do
   (a) While (thermal equilibrium not reached) do
       (i) Generate a neighbor state at random and evaluate the change in energy level ΔE
       (ii) If ΔE < 0, update the current state with the new state
       (iii) If ΔE ≥ 0, update the current state with the new state with probability e^(−ΔE/(k_B·T))
   (b) Decrease the temperature T according to the annealing schedule
3. Output the solution having the lowest energy
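As a rough sketch, the procedure above might be coded as follows; the neighbor move, energy function, and cooling parameters are illustrative placeholders, not part of the original slides:

```python
import math
import random

def simulated_annealing(energy, neighbor, x0, t0=10.0, cooling=0.95,
                        steps_per_temp=50, t_min=1e-3, seed=0):
    """Simulated annealing sketch: always accept improving moves,
    accept worsening moves with probability exp(-dE / T), and cool
    the temperature geometrically."""
    rng = random.Random(seed)
    x, best = x0, x0
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):      # "thermal equilibrium" loop
            y = neighbor(x, rng)
            d_e = energy(y) - energy(x)
            if d_e < 0 or rng.random() < math.exp(-d_e / t):
                x = y
            if energy(x) < energy(best):
                best = x
        t *= cooling                         # annealing schedule
    return best

# Toy instance: minimize (x - 3)^2 over the integers, moves are +/- 1.
sol = simulated_annealing(lambda x: (x - 3) ** 2,
                          lambda x, rng: x + rng.choice([-1, 1]),
                          x0=20)
```

Here the Boltzmann constant is absorbed into the temperature scale, a common simplification in practice.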

Page 78: Genetic Algorithms

Input: A problem instance
Output: A (sub-optimal) solution

1. t = 0; initialize P(t); evaluate the fitness of the individuals in P(t)
2. While (termination condition is not satisfied) do
   (i) t = t + 1
   (ii) Select P(t), recombine P(t), and evaluate P(t)
3. Output the best solution among all the population as the (sub-optimal) solution
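A toy rendering of the loop above; tournament selection, one-point crossover, and one-bit mutation are common choices filled in for illustration (the slide does not fix them):

```python
import random

def genetic_algorithm(fitness, pop, generations=100, seed=0):
    """Sketch of the slide's loop: select, recombine, evaluate.
    Selection is a size-2 tournament; recombination is one-point
    crossover plus a one-bit mutation."""
    rng = random.Random(seed)
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        for _ in range(len(pop)):
            a, b = rng.sample(pop, 2)            # tournament selection
            p1 = a if fitness(a) >= fitness(b) else b
            p2 = rng.choice(pop)
            cut = rng.randrange(1, len(p1))      # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(len(child))        # flip one random bit
            child = child[:i] + [1 - child[i]] + child[i + 1:]
            new_pop.append(child)
        pop = new_pop
        if fitness(max(pop, key=fitness)) > fitness(best):
            best = max(pop, key=fitness)         # keep the best ever seen
    return best

# Toy instance: OneMax, maximize the number of 1-bits in a length-12 string.
rng = random.Random(1)
init = [[rng.randint(0, 1) for _ in range(12)] for _ in range(20)]
best = genetic_algorithm(sum, init, generations=60)
```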

Page 79: Memetic Algorithms

Input: A problem instance
Output: A (sub-optimal) solution

1. t = 0; initialize P(t); evaluate the fitness of the individuals in P(t)
2. While (termination condition is not satisfied) do
   (i) t = t + 1
   (ii) Select P(t), recombine P(t), perform local search on each individual of P(t), and evaluate P(t)
3. Output the best solution among all the population as the (sub-optimal) solution

Page 80: Tabu Search

Input: A problem instance
Output: A (sub-optimal) solution

1. Initialization
   (i) Generate an initial solution x and set x* = x
   (ii) Initialize the tabu list T = Ø
   (iii) Set the iteration counters k = 0 and m = 0
2. While (N(x)\T ≠ Ø) do
   (i) k = k + 1; m = m + 1
   (ii) Select x as the best solution from the set N(x)\T
   (iii) If f(x) < f(x*), then update x* = x and set m = 0
   (iv) If k = kmax or m = mmax, go to step 3
3. Output the best solution found, x*
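The loop above can be sketched as follows; the short-term memory here is a fixed-length queue of recently visited solutions, one common realization of the tabu list T (the slide leaves T abstract):

```python
from collections import deque

def tabu_search(f, neighbors, x0, k_max=100, m_max=20, tabu_len=5):
    """Tabu search sketch: always move to the best non-tabu neighbor
    (even if it is worse), remember recently visited solutions, and
    stop after k_max iterations or m_max without improvement."""
    x, x_best = x0, x0
    tabu = deque([x0], maxlen=tabu_len)
    k = m = 0
    while True:
        allowed = [y for y in neighbors(x) if y not in tabu]
        if not allowed:
            break                      # N(x) \ T is empty
        k += 1
        m += 1
        x = min(allowed, key=f)        # best move, possibly uphill
        tabu.append(x)
        if f(x) < f(x_best):
            x_best, m = x, 0
        if k >= k_max or m >= m_max:
            break
    return x_best

# Toy instance: minimize (x - 5)^2 over the integers, unit-step moves.
best = tabu_search(lambda x: (x - 5) ** 2, lambda x: [x - 1, x + 1], x0=0)
```

Note that once the search reaches x = 5 it keeps moving uphill (the return move is tabu) until the no-improvement counter m triggers the stop; the best solution found is still reported.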

Page 81: GRASP

Input: A problem instance
Output: A (sub-optimal) solution

1. Repeat for Max_Iterations:
   (i) Construct a greedy randomized solution
   (ii) Use local search to improve the constructed solution
   (iii) Update the best solution found
2. Output the best solution found as the (sub-optimal) solution

Page 82: VNS (Variable Neighborhood Search)

Input: A problem instance
Output: A (sub-optimal) solution

1. Initialization:
   (i) Select the set of neighborhood structures Nk, k = 1, …, kmax, that will be used in the search
   (ii) Find an initial solution x
   (iii) Choose a stopping condition
2. Repeat until the stopping condition is met:
   (i) k = 1
   (ii) While (k ≤ kmax) do
       (a) Shaking: generate a point y at random from Nk(x)
       (b) Local search: apply some local search method with y as the initial solution; let z be the local optimum
       (c) Move or not: if z is better than the incumbent, move there (x = z) and set k = 1; otherwise set k = k + 1
3. Output the incumbent solution
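A sketch of the scheme above on a one-dimensional toy function with many local minima, where plain descent gets stuck but shaking in larger neighborhoods can escape (the neighborhoods and the instance are illustrative):

```python
import random

def local_search(f, x):
    """Steepest descent with unit steps until no neighbor improves."""
    while True:
        y = min((x - 1, x, x + 1), key=f)
        if y == x:
            return x
        x = y

def vns(f, x0, k_max=3, iters=100, seed=0):
    """VNS sketch following the slide: shake in N_k (here, a random
    jump of +/- k from x), run local search, move and reset k on
    improvement, otherwise try the next larger neighborhood."""
    rng = random.Random(seed)
    x = local_search(f, x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            y = x + rng.choice([-k, k])    # shaking in N_k(x)
            z = local_search(f, y)         # local search from y
            if f(z) < f(x):
                x, k = z, 1                # move and restart from N_1
            else:
                k += 1
    return x

# Toy instance: a local minimum at every multiple of 3, global minimum at 0.
f_toy = lambda x: abs(x) + 5 * (x % 3 != 0)
best = vns(f_toy, x0=30)
```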

Page 83: GRASP in more detail

• Construction phase: greediness + randomization
  – Builds a feasible solution:
    • Use greediness to build a restricted candidate list and apply randomness to select an element from the list.
    • Use randomness to build a restricted candidate list and apply greediness to select an element from the list.
• Local search: search in the current neighborhood until a local optimum is found
  – Solutions generated by the construction procedure are not necessarily optimal:
    • The effectiveness of local search depends on: the neighborhood structure, the search strategy, and fast evaluation of neighbors, but also on the construction procedure itself.

Page 84: Construction phase

• Greedy Randomized Construction:
  – Solution ← Ø
  – Evaluate the incremental costs of the candidate elements
  – While Solution is not complete do:
    • Build the restricted candidate list (RCL)
    • Select an element s from the RCL at random
    • Solution ← Solution ∪ {s}
    • Reevaluate the incremental costs
  endwhile

Page 85: Construction phase

• Minimization problem
• Basic construction procedure:
  – Greedy function c(e): incremental cost associated with the incorporation of element e into the current partial solution under construction
  – cmin (resp. cmax): smallest (resp. largest) incremental cost
  – RCL made up of the elements with the smallest incremental costs

Page 86: Construction phase

• Cardinality-based construction:
  – p elements with the smallest incremental costs
• Quality-based construction:
  – Parameter α defines the quality of the elements in the RCL.
  – RCL contains the elements with incremental cost
    cmin ≤ c(e) ≤ cmin + α(cmax − cmin)
    α = 0: pure greedy construction
    α = 1: pure randomized construction
• Select at random from the RCL using a uniform probability distribution
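The quality-based rule above can be sketched as follows (function and parameter names are illustrative; the cost function takes the partial solution so that incremental costs can be reevaluated at each step):

```python
import random

def greedy_randomized_construction(candidates, cost, alpha, rng):
    """Quality-based GRASP construction sketch: at each step keep in
    the RCL the candidates e with c(e) <= c_min + alpha*(c_max - c_min)
    and pick one uniformly at random.
    alpha = 0 is pure greedy, alpha = 1 pure random."""
    solution = []
    remaining = list(candidates)
    while remaining:
        costs = {e: cost(e, solution) for e in remaining}
        c_min, c_max = min(costs.values()), max(costs.values())
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [e for e in remaining if costs[e] <= threshold]
        e = rng.choice(rcl)            # uniform pick from the RCL
        solution.append(e)
        remaining.remove(e)
    return solution

# Toy use: order items by a static cost; alpha = 0 must reproduce
# the fully greedy (sorted) order.
items = [4, 1, 3, 2]
greedy = greedy_randomized_construction(items, lambda e, s: e, 0.0,
                                        random.Random(0))
```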

Page 87: Illustrative results: RCL parameter

[Figure: total CPU time and local search CPU time (seconds for 1000 iterations) versus the RCL parameter α, ranging from pure random to pure greedy, on a weighted MAX-SAT instance; SGI Challenge 196 MHz]

Page 88: Illustrative results: RCL parameter

[Figure: best and average solution values and CPU time (seconds for 1000 iterations) versus the RCL parameter α on a weighted MAX-SAT instance with 100 variables and 850 clauses; SGI Challenge 196 MHz]

Page 89: Path-relinking

• Intensification strategy exploring trajectories connecting elite solutions: Glover (1996)
• Originally proposed in the context of tabu search and scatter search.
• Paths in the solution space leading to other elite solutions are explored in the search for better solutions:
  – selection of moves that introduce attributes of the guiding solution into the current solution

Page 90: Path-relinking

• Exploration of trajectories that connect high-quality (elite) solutions:

[Figure: a path in the neighborhood of solutions from an initial solution to a guiding solution]

Page 91: Path-relinking

• A path is generated by selecting moves that introduce into the initial solution attributes of the guiding solution.
• At each step, all moves that incorporate attributes of the guiding solution are evaluated and the best move is selected.

[Figure: successive moves from the initial solution toward the guiding solution]

Page 92: Path-relinking

Elite solutions x and y
Δ(x,y): symmetric difference between x and y

while (|Δ(x,y)| > 0) {
    evaluate the moves corresponding to Δ(x,y)
    make the best move
    update Δ(x,y)
}
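For permutation solutions, the loop above might be rendered as follows; the swap move that fixes one differing position to agree with the guide is one common choice (the slides leave the move abstract):

```python
def path_relinking(f, x, guide):
    """Path-relinking sketch for permutations: while x and the guiding
    solution differ, try every swap that fixes one differing position
    to agree with the guide, make the best such move, and remember the
    best solution seen along the path."""
    x = list(x)
    best = list(x)
    while True:
        diff = [i for i in range(len(x)) if x[i] != guide[i]]
        if not diff:
            break                          # symmetric difference is empty
        moves = []
        for i in diff:
            j = x.index(guide[i])          # bring guide[i] into place i
            y = list(x)
            y[i], y[j] = y[j], y[i]
            moves.append(y)
        x = min(moves, key=f)              # best move along the path
        if f(x) < f(best):
            best = list(x)
    return best

# Toy cost: a linear assignment of values to positions.
f = lambda p: sum(i * v for i, v in enumerate(p))
best = path_relinking(f, [2, 0, 1, 3], [0, 1, 2, 3])
```

Each move fixes one position that disagrees with the guide and never breaks an already-matched one, so the symmetric difference shrinks at every step and the walk ends at the guiding solution; the best intermediate point is returned.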

Page 93: GRASP: 3-index assignment (AP3)

Complete tripartite graph: each triangle made up of three distinctly colored nodes has a cost (e.g., cost = 10, cost = 5).

AP3: Find a set of triangles such that each node appears in exactly one triangle and the sum of the costs of the triangles is minimized.

Page 94: 3-index assignment (AP3)

• Construction: the solution is built by selecting n triplets, one at a time, biased by triplet costs.
• Local search: explores the O(n²)-size neighborhood of the current solution, moving to a better solution if one is found.

Aiex, Pardalos, Resende, & Toraldo (2003)

Page 95: 3-index assignment (AP3)

• Path-relinking is done between:
  – Initial solution S = { (1, j1^S, k1^S), (2, j2^S, k2^S), …, (n, jn^S, kn^S) }
  – Guiding solution T = { (1, j1^T, k1^T), (2, j2^T, k2^T), …, (n, jn^T, kn^T) }

Page 96: GRASP with path-relinking

• Originally used by Laguna and Martí (1999).
• Maintains a set of elite solutions found during the GRASP iterations.
• After each GRASP iteration (construction and local search):
  – Use the GRASP solution as the initial solution.
  – Select an elite solution uniformly at random as the guiding solution (it may also be selected with probability proportional to its symmetric difference w.r.t. the initial solution).
  – Perform path-relinking between these two solutions.

Page 97: GRASP with path-relinking

• Repeat for Max_Iterations:
  – Construct a greedy randomized solution.
  – Use local search to improve the constructed solution.
  – Apply path-relinking to further improve the solution.
  – Update the pool of elite solutions.
  – Update the best solution found.
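The five steps above can be sketched as a single orchestration loop; all problem-specific pieces are passed in as callables, and the toy instance at the end is purely illustrative:

```python
import random

def grasp_pr(cost, construct, improve, relink, max_iters=50, pool_size=5,
             seed=0):
    """Orchestration sketch: construct a greedy randomized solution,
    improve it by local search, relink it to a randomly chosen elite
    solution, then update the elite pool (best pool_size distinct
    solutions) and the incumbent."""
    rng = random.Random(seed)
    pool, best = [], None
    for _ in range(max_iters):
        x = improve(construct(rng))
        if pool:
            x = relink(x, rng.choice(pool))     # random elite guide
        if x not in pool:                       # keep the elite set distinct
            pool = sorted(pool + [x], key=cost)[:pool_size]
        if best is None or cost(x) < cost(best):
            best = x
    return best

# Toy instance: minimize (x - 4)^2 over integers in [0, 10].
cost = lambda x: (x - 4) ** 2

def improve(x):                     # unit-step descent to a local optimum
    while True:
        y = min((x - 1, x, x + 1), key=cost)
        if y == x:
            return x
        x = y

best = grasp_pr(cost,
                construct=lambda rng: rng.randint(0, 10),
                improve=improve,
                relink=lambda x, g: min(range(min(x, g), max(x, g) + 1),
                                        key=cost))
```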

Page 98: GRASP with path-relinking

• Variants: trade-offs between computation time and solution quality
  – Explore different trajectories (e.g., backward, forward): it is better to start from the best of the two solutions, since the neighborhood of the initial solution is the one fully explored.
  – Explore both trajectories: twice as much time, often with only marginal improvements.
  – Do not apply PR at every iteration, but instead only periodically: similar to filtering during local search.
  – Truncate the search; do not follow the full trajectory.
  – PR may also be applied as a post-optimization step to all pairs of elite solutions.

Page 99: GRASP with path-relinking

• Successful applications:
  1) Prize-collecting minimum Steiner tree problem: Canuto, Resende, & Ribeiro (2001) (e.g., improved all solutions found by the approximation algorithm of Goemans & Williamson)
  2) Minimum Steiner tree problem: Ribeiro, Uchoa, & Werneck (2002) (e.g., best known results for open problems in series dv640 of the SteinLib)
  3) p-median: Resende & Werneck (2002) (e.g., best known solutions for problems in the literature)

Page 100: GRASP with path-relinking

• Successful applications (cont'd):
  4) Capacitated minimum spanning tree: Souza, Duhamel, & Ribeiro (2002) (e.g., best known results for the largest problems, with 160 nodes)
  5) 2-path network design: Ribeiro & Rosseti (2002) (better solutions than the greedy heuristic)
  6) Max-Cut: Festa, Pardalos, Resende, & Ribeiro (2002) (e.g., best known results for several instances)
  7) Quadratic assignment: Oliveira, Pardalos, & Resende (2003)

Page 101: GRASP with path-relinking

• Successful applications (cont'd):
  8) Job-shop scheduling: Aiex, Binato, & Resende (2003)
  9) Three-index assignment problem: Aiex, Resende, Pardalos, & Toraldo (2003)
  10) PVC routing: Resende & Ribeiro (2003)
  11) Phylogenetic trees: Ribeiro & Vianna (2003)

Page 102: GRASP with path-relinking

• P is a set (pool) of elite solutions.
• Each of the first |P| GRASP iterations adds one solution to P (if it is different from the others).
• After that, solution x is promoted to P if:
  – x is better than the best solution in P, or
  – x is not better than the best solution in P, but is better than the worst and is sufficiently different from all solutions in P.
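The promotion rule above can be sketched as follows; here cost and diff stand for the problem's objective and a solution-difference measure, and min_diff encodes "sufficiently different" (the threshold is left unspecified on the slide):

```python
def update_pool(pool, x, cost, diff, max_size, min_diff):
    """Elite pool promotion sketch: while the pool is not full, add x
    if it differs from every member; once full, promote x if it beats
    the best, or if it beats the worst and is sufficiently different
    from every member (replacing the worst)."""
    if len(pool) < max_size:
        if all(diff(x, e) > 0 for e in pool):
            return sorted(pool + [x], key=cost)
        return pool
    costs = [cost(e) for e in pool]
    beats_best = cost(x) < min(costs)
    beats_worst = cost(x) < max(costs)
    far_enough = all(diff(x, e) >= min_diff for e in pool)
    if beats_best or (beats_worst and far_enough):
        return sorted(pool + [x], key=cost)[:-1]   # drop the worst
    return pool

# Toy use: solutions are integers, cost is the value itself,
# "difference" is the absolute gap, pool of size 3, min_diff = 2.
cost = lambda x: x
diff = lambda a, b: abs(a - b)
pool = []
for x in [9, 7, 7, 5]:
    pool = update_pool(pool, x, cost, diff, 3, 2)
```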


Page 104: Parallelization of GRASP

• GRASP is easy to implement in parallel:
  – parallelization by problem decomposition
    • Feo, R., & Smith (1994)
  – iteration parallelization
    • Pardalos, Pitsoulis, & R. (1995); Pardalos, Pitsoulis, & R. (1996)
    • Alvim (1998); Martins & Ribeiro (1998)
    • Murphey, Pardalos, & Pitsoulis (1998)
    • R. (1998); Martins, R., & Ribeiro (1999)
    • Aiex, Pardalos, R., & Toraldo (2000)


Parallel independent implementation

• Parallelism in metaheuristics: robustness
  Cung, Martins, Ribeiro, & Roucairol (2001)

• Multiple-walk independent-thread strategy:
  – p processors available
  – Iterations evenly distributed over p processors
  – Each processor keeps a copy of data and algorithms.
  – One processor acts as the master, handling seeds, data, and the iteration counter, besides performing GRASP iterations.
  – Each processor performs Max_Iterations/p iterations.
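The independent-thread strategy above can be sketched in a few lines of Python. The GRASP iteration is a stand-in that just draws a random objective value, and thread workers stand in for the processors (a real implementation would use processes or MPI); all function names are illustrative:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def grasp_iteration(rng):
    """Stand-in for one GRASP iteration (construction + local search);
    here it just draws a random objective value (minimization)."""
    return rng.uniform(0, 100)

def walk(seed, iterations):
    """One independent walk: its own RNG and its own iteration share."""
    rng = random.Random(seed)
    return min(grasp_iteration(rng) for _ in range(iterations))

def independent_grasp(max_iterations, p):
    """Multiple-walk independent-thread strategy: Max_Iterations are
    split evenly over p workers with seeds 1..p, and the master keeps
    the best value found by any walk."""
    per_worker = max_iterations // p
    with ThreadPoolExecutor(max_workers=p) as ex:
        results = list(ex.map(lambda s: walk(s, per_worker),
                              range(1, p + 1)))
    return min(results)
```

Because each walk is seeded independently, the run is reproducible and the walks never communicate, which is what makes this strategy so easy to implement.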


Parallel independent implementation

[Diagram: p processors, each running GRASP with its own seed(i) and its own elite set; the best solution is sent to the master.]


Parallel cooperative implementation

• Multiple-walk cooperative-thread strategy:
  – p processors available
  – Iterations evenly distributed over p–1 processors
  – Each processor has a copy of data and algorithms.
  – One processor acts as the master: it handles seeds, data, the iteration counter, and the pool of elite solutions, but does not perform GRASP iterations.
  – Each processor performs Max_Iterations/(p–1) iterations.
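A minimal sketch of the cooperative-thread idea, with the master's role reduced to a lock-protected centralized elite pool that the p–1 slaves report into. All names are illustrative, the GRASP iteration is again a random stand-in, and a real implementation would use message passing rather than shared memory:

```python
import random
import threading

def cooperative_grasp(max_iterations, p, pool_size=5):
    """Multiple-walk cooperative-thread sketch: p-1 slave threads run
    GRASP iterations and send candidates to a centralized elite pool
    (here just a sorted, lock-protected list of objective values)."""
    pool, lock = [], threading.Lock()

    def slave(seed):
        rng = random.Random(seed)
        for _ in range(max_iterations // (p - 1)):
            value = rng.uniform(0, 100)   # stand-in GRASP iteration
            with lock:                    # report candidate to the master pool
                pool.append(value)
                pool.sort()
                del pool[pool_size:]      # keep only the best elites

    threads = [threading.Thread(target=slave, args=(s,))
               for s in range(1, p)]      # p-1 slaves, seeds 1..p-1
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return pool[0]                        # best elite solution found
```

In the full scheme the slaves would also read elite solutions back from the pool (e.g., for path-relinking), which is what lets the cooperative strategy keep a rich pool even as p grows.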


Parallel cooperative implementation

[Diagram: master processor holding the centralized pool of elite solutions; slave processors 1, 2, 3, ..., p exchange solutions with the master.]


Cooperative vs. independent strategies (for 3AP)

• Same instance: 15 runs with different seeds, 3200 iterations

• In the independent strategy, each pool is built from fewer GRASP iterations as the number of processors grows, so the pools are poorer and solution quality deteriorates.

        Independent      Cooperative
procs   best   avg.      best   avg.
  1     673    678.6      -      -
  2     676    680.4     676    681.6
  4     680    685.1     673    681.2
  8     687    690.3     676    683.1
 16     692    699.1     674    682.3
 32     702    708.5     678    684.8


[Plot: average speed-up vs. number of processors (1, 2, 4, 8, 16) for the independent and cooperative strategies, compared against linear speedup. Speedup on 3-index assignment (AP3), instance bs24; SGI Challenge 196 MHz.]


Evaluation of Heuristics

• Experimental design
  - problem instances
  - problem characteristics of interest (e.g., instance size, density, etc.)
  - upper/lower/optimal values


Evaluation of Heuristics (cont.)

• Sources of test instances
  - Real data sets
    It is not easy to obtain real data sets.
  - Random variants of real data sets
    The structure of the instance is preserved (e.g., graph), but details are changed (e.g., distances, costs).


Evaluation of Heuristics (cont.)

- Test problem libraries
  - Test problem collections with “best known” solutions
  - Test problem generators with known optimal solutions (e.g., QAP generators, Maximum Clique, Steiner Tree Problems, etc.)
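One common way to build test instances with a known solution is to plant it. The sketch below, in the spirit of (but not taken from) the clique generators cited on the next slide, plants a clique of size k in a random graph: the planted clique is a known feasible solution, and for small edge densities it is very likely the maximum clique, though only the planted solution itself is guaranteed. Names and parameters are illustrative:

```python
import random

def planted_clique_instance(n, k, edge_prob=0.1, seed=0):
    """Random graph on vertices 0..n-1 with a planted clique of size k.
    Returns (edges, clique): edges is a set of (u, v) pairs with u < v,
    clique is the set of planted vertices (a known solution)."""
    rng = random.Random(seed)
    clique = set(rng.sample(range(n), k))
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            # All clique pairs are connected; other pairs with prob edge_prob.
            if (u in clique and v in clique) or rng.random() < edge_prob:
                edges.add((u, v))
    return edges, clique
```

Generators of this kind let a heuristic be scored against a known target without having to solve each instance exactly.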


Evaluation of Heuristics (cont.)

Test problem generators with known optimal solutions (cont.)

• C.A. Floudas, P.M. Pardalos, C.S. Adjiman, W.R. Esposito, Z. Gumus, S.T. Harding, J.L. Klepeis, C.A. Meyer, and C.A. Schweiger, Handbook of Test Problems for Local and Global Optimization, Kluwer Academic Publishers (1999).

• C.A. Floudas and P.M. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, Springer-Verlag, Lecture Notes in Computer Science 455 (1990).

• J. Hasselberg, P.M. Pardalos and G. Vairaktarakis, Test case generators and computational results for the maximum clique problem, Journal of Global Optimization 3 (1993), pp. 463-482.

• B. Khoury, P.M. Pardalos and D.-Z. Du, A test problem generator for the Steiner problem in graphs, ACM Transactions on Mathematical Software, Vol. 19, No. 4 (1993), pp. 509-522.

• Y. Li and P.M. Pardalos, Generating quadratic assignment test problems with known optimal permutations, Computational Optimization and Applications, Vol. 1, No. 2 (1992), pp. 163-184.

• P.M. Pardalos, Construction of test problems in quadratic bivalent programming, ACM Transactions on Mathematical Software, Vol. 17, No. 1 (1991), pp. 74-87.

• P.M. Pardalos, Generation of large-scale quadratic programs for use as global optimization test problems, ACM Transactions on Mathematical Software, Vol. 13, No. 2 (1987), pp. 133-137.


Evaluation of Heuristics (cont.)

- Randomly generated instances (the quickest and easiest way to obtain a supply of test instances)


Evaluation of Heuristics (cont.)

• Performance measurement
  - Time (most used, but difficult to assess due to differences among computers)
  - Solution quality


Evaluation of Heuristics (cont.)

• Solution Quality
  - Exact solutions of small instances
    For “small” instances, verify results with exact algorithms.
  - Lower and upper bounds
    In many cases, the problem of finding good bounds is as difficult as solving the original problem.
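When bounds are available, they sandwich the optimum and give a worst-case quality guarantee for the heuristic value. A tiny helper illustrating the arithmetic (names are illustrative):

```python
def relative_gap(heuristic_value, lower_bound):
    """For a minimization problem, the true optimum lies in
    [lower_bound, heuristic_value], so the heuristic solution is
    within this fraction of optimal (0.10 means "at most 10% above")."""
    return (heuristic_value - lower_bound) / lower_bound
```

For example, a heuristic value of 110 against a lower bound of 100 certifies the solution is within 10% of optimal, even though the optimum itself is unknown.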


Evaluation of Heuristics (cont.)

• Space covering techniques


Success Stories

The success of metaheuristics can be seen in the numerous applications to which they have been applied.

Examples:
• scheduling, routing, logic, partitioning
• location
• graph-theoretic problems
• QAP & other assignment problems
• miscellaneous problems


Concluding Remarks

• Metaheuristics have been shown to perform well in practice.

• Many times the globally optimal solution is found, but there is no “certificate of optimality”.

• Large problem instances can be solved by implementing metaheuristics in parallel.

• This seems to be the most practical way to deal with massive data sets.


References

• Handbook of Applied Optimization, edited by Panos M. Pardalos and Mauricio G. C. Resende, Oxford University Press, Inc., 2002.

• Handbook of Massive Data Sets, Series: Massive Computing, Vol. 4, edited by J. Abello, P.M. Pardalos, M.G. Resende, Kluwer Academic Publishers, 2002.


THANK YOU ALL.