
Applied Mathematics and Computation 189 (2007) 371–383

www.elsevier.com/locate/amc

Iterated tabu search for the maximum diversity problem

Gintaras Palubeckis

Department of Practical Informatics, Kaunas University of Technology, Studentu 50, 51368 Kaunas, Lithuania

Abstract

In this paper, we deal with the maximum diversity problem (MDP), which asks to select a specified number of elements from a given set so that the sum of distances between the selected elements is as large as possible. We develop an iterated tabu search (ITS) algorithm for solving this problem. We also present a steepest ascent algorithm, which is well suited in application settings where solutions of satisfactory quality are required very quickly. Computational results for problem instances involving up to 5000 elements show that the ITS algorithm is a very attractive alternative to the existing approaches. In particular, we demonstrate the outstanding performance of ITS on the MDP instances taken from the literature. For 69 such instances, new best solutions were found.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Maximum diversity problem; Combinatorial optimization; Integer programming; Metaheuristics; Tabu search

1. Introduction

Suppose that we are given a set of n elements, two integers m1, m2, 0 ≤ m1 ≤ m2 ≤ n, and an n × n symmetric matrix D = (d_ij), where d_ij is an estimate of diversity (or distance) between elements i and j. In the general case, the entries of the matrix D are unrestricted in sign. The maximum diversity problem (MDP for short) is to select at least m1 but not more than m2 elements of the set so that the sum of distances between the selected elements is as large as possible. A common and important special case occurs when m1 = m2. In this case, it may be assumed without loss of generality that the matrix D is nonnegative. The problem is also known under a few other names: maxisum dispersion [1], MAX-AVG dispersion [2], edge-weighted clique [3], remote-clique [4], maximum edge-weighted subgraph [5], and dense k-subgraph [6].

The maximum diversity problem arises in various contexts, including location of undesirable or mutually competing facilities, aiding decision analysis with multiple objectives, composing jury panels and genetic engineering (see, e.g., [4,7–9] for more details on these and some other applications). In a recent paper [7], Duarte and Martí introduced a new application of the MDP. Actually, they proposed to use algorithms for the MDP in the context of evolutionary methods. The role devoted to such an algorithm is to update the population of individuals (solutions) by selecting a set of good and maximally diverse solutions from a larger set of possible candidates. Hence, the algorithm is used as part of an evolutionary method applied to solve the

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved.

doi:10.1016/j.amc.2006.11.090

E-mail address: [email protected]


problem at hand. Keeping the genetic diversity of the population is an especially complex and important issue in evolutionary multiobjective optimization [10,11].

The MDP can be defined more formally as follows. Let G = (V, E) be an edge-weighted complete graph with vertices corresponding to the elements of the given set. The weight of an edge (i, j) ∈ E is equal to the distance d_ij between elements i and j. The maximum diversity problem is to find a subset V′ ⊆ V such that m1 ≤ |V′| ≤ m2 and the sum Σ_{i,j∈V′, i<j} d_ij is maximized. Letting, for each i ∈ V, x_i = 1 if i ∈ V′ and x_i = 0 otherwise, we obtain the following 0–1 quadratic program:

maximize f(x) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} d_ij x_i x_j        (1)

subject to m1 ≤ Σ_{i=1}^{n} x_i ≤ m2,        (2)

x_i ∈ {0, 1},  i = 1, …, n.        (3)

When all the coefficients d_ij in (1) are nonnegative, (2) can be replaced by the following equation:

Σ_{i=1}^{n} x_i = m,        (4)

where, obviously, m = m2.

The MDP is known to be NP-hard and is computationally very demanding in practice. Actually, the recently proposed exact algorithms [3,12,13] were able to solve only instances with fewer than 50 variables in reasonable computation times. Therefore, efficient algorithms, which provide good, but not necessarily optimal, solutions are required. Several authors ([2,6,14] among others) proposed approximation algorithms with guaranteed performance ratios for solving the MDP. However, numerical results for such algorithms usually are not provided. Thus, it is not fully clear how well they perform in practice when compared to algorithms based on metaheuristics like evolutionary computation, simulated annealing, tabu search and other approaches. Metaheuristics have proven very successful in finding high-quality solutions to many combinatorial optimization problems. Early implementations of simulated annealing and tabu search for the model (1), (3), (4) were described in [15]. In that study, computational results were presented for three sets of problem instances of size 25. They show that tabu search performs somewhat better than simulated annealing. Ghosh [16] proposed a multi-start algorithm for the MDP. A restart consists of two phases: a construction phase followed by a local improvement phase. The first of them, performing m steps, builds a feasible solution to the problem. The basis of the second phase is a straightforward "hill climbing" procedure. Another tabu search algorithm for the MDP was presented by Macambira [5]. The algorithm follows the general guidelines provided in [17]. The results of experiments are reported for instances with up to 100 vertices. Macambira observed that, from a computational standpoint, the most difficult problem instances were obtained by taking m = n/2. A different implementation of tabu search for the MDP was proposed by Alidaee et al. [18]. Their method is centered around the use of strategic oscillation. The method alternates between constructive phases that progressively set variables to 1 and destructive phases that progressively set variables to 0. However, in [18], computational experience is given only for very small instances (of size less than 50). Silva et al. [19] developed a series of GRASP algorithms for solving the MDP. These algorithms are built by combining one of three constructive procedures with one of two iterative improvement procedures, one of which is that proposed by Ghosh [16] (see [19] for details). In [20], Silva et al. combined one of the GRASP algorithms with the path-relinking technique. This hybrid, named KLD + PR, produced good solutions for instances of size up to 500. Unfortunately, KLD + PR requires long computation times (e.g., about 10 h for n = 500, see [20]). Duarte and Martí [7] proposed two new types of constructive algorithms for the MDP and combined them with three iterative improvement procedures: local search by Ghosh [16], improved local search and short-term memory tabu search, both described in [7]. They presented computational results for problem instances involving up to 2000 vertices. Especially good performance was achieved by combining a constructive method based on the tabu search methodology with the short-term memory tabu search procedure. This hybrid, named Tabu_D2 + LS_TS, outperforms previous methods in the literature.


In some applications, the interest is focused on finding solutions of satisfactory quality very quickly. In such cases, fast constructive algorithms can be used. For the MDP, several such algorithms were developed by Glover et al. [8]. Different constructive algorithms are described in [7,19].

This paper is motivated by a need to increase the efficiency of tabu search for the MDP by incorporating a powerful search diversification mechanism. To provide such a mechanism, we propose a solution perturbation procedure. Our approach is iterative by nature: it alternates between two phases, solution perturbation and tabu search. To find an initial solution very quickly, we present a new constructive heuristic, called the steepest ascent algorithm. The results of extensive computational experiments show that the use of the solution perturbation procedure makes the tabu search algorithm considerably more effective. We compare the results of our algorithm, obtained on problem instances from the literature, with the best solutions available.

The remainder of this paper is structured as follows. In Sections 2 and 3, we describe the steepest ascent and, respectively, iterated tabu search algorithms for the MDP. In Section 4, we present computational results. Finally, Section 5 concludes the paper.

2. Steepest ascent algorithm

In this section, we describe a one-pass algorithm for the MDP. We consider the formulation with m1 = m2 = m and coefficients d_ij unrestricted in sign. The basic idea of the developed algorithm is to perform a steepest ascent from a point within the n-dimensional unit cube to some vertex of it (a 0–1 vector) defining a feasible solution of the problem, fixing one variable at either 0 or 1 at each step of this climb. It has been found that a good option is to start from the point (m/n, m/n, …, m/n). Each coordinate of this point is equal to the probability that a randomly chosen vertex of the graph G will belong to the constructed solution. To avoid fractional variables, it is convenient to transform the objective function f(x) by setting x_i = y_i/n, i = 1, …, n. Then we get the following problem:

maximize g(y) = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} d_ij y_i y_j        (5)

subject to Σ_{i=1}^{n} y_i = nm,        (6)

y_i ∈ {0, n},  i = 1, …, n.        (7)

Clearly, f(x) = g(y)/n². For the initial vector y⁰ = (m, m, …, m) we have

g(y⁰) = m² Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} d_ij.        (8)

Fixing the variable x_i at 1 corresponds to fixing y_i at n. Assume that we set y_i to a ∈ {0, n}. Then the change of the objective function is given by the following expression:

Δ_i(a) = (a − m)(m d̃_i + n d_i),        (9)

where

d̃_i = Σ_{j∈W\{i}} d_ij,        (10)

d_i = Σ_{j∈U} d_ij,        (11)

U = {k ∈ V | y_k = n}, W = {k ∈ V | y_k = m}. The set W consists of the vertices for which the corresponding variables are free in the sense that their values remain to be fixed (either to 0 or to n). At each execution of the main loop of the algorithm, a vertex k ∈ W with the largest value of max(Δ_k(0), Δ_k(n)) is selected and the corresponding variable y_k is assigned the better of the two possible values (0 or n). If the cardinality of U becomes equal to m, then each remaining free variable y_j, j ∈ W, is forced to 0.
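As a quick sanity check (not part of the paper), formula (9) can be verified numerically against direct evaluation of the objective (5). The instance below is randomly generated for illustration only; the variable d_bar plays the role of the quantity d_i defined in (11), and d_tilde that of d̃_i in (10).

```python
import itertools
import random

def g(y, D, n):
    # Objective (5): sum of d_ij * y_i * y_j over all pairs i < j
    return sum(D[i][j] * y[i] * y[j]
               for i, j in itertools.combinations(range(n), 2))

random.seed(0)
n, m = 6, 3
D = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        D[i][j] = D[j][i] = random.randint(-9, 9)

# State of the climb: vertex 0 already fixed at n, the rest still free (= m)
y = [m] * n
y[0] = n
U = {0}                      # vertices fixed at n
W = set(range(1, n))         # free vertices

i = 2                        # a free vertex whose variable we are about to fix
d_tilde = sum(D[i][j] for j in W if j != i)   # (10)
d_bar = sum(D[i][j] for j in U)               # (11)

for a in (0, n):
    delta = (a - m) * (m * d_tilde + n * d_bar)   # formula (9)
    y_new = list(y)
    y_new[i] = a
    assert delta == g(y_new, D, n) - g(y, D, n)
```

Since all quantities are integers, the comparison is exact; the check passes for both choices a = 0 and a = n.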


A detailed description of the algorithm, named STA, is given below. Recall that we are dealing with the formulation containing both positive and negative coefficients d_ij. For the case of all nonnegative d_ij, the algorithm can be simplified slightly. In the algorithm, b1 (respectively, b0) stands for the cardinality of the set U (respectively, V \ (U ∪ W)).

STA

1. Set W := V, b0 := 0, b1 := 0, and d_i := 0 for each i ∈ V. Set g* to the initial value of the function g calculated according to Eq. (8). Using (10), compute d̃_i for each i ∈ V.
2. Select a vertex k at random from the set of those vertices j ∈ W for which Δ_j = max_{i∈W} Δ_i, where Δ_i = max(Δ_i(0), Δ_i(n)) and Δ_i(a), a ∈ {0, n}, is given by (9).
3. Fix y_k at n (respectively, at 0) if Δ_k(n) > Δ_k(0) (respectively, Δ_k(n) < Δ_k(0)). If Δ_k(n) = Δ_k(0), then fix y_k at an arbitrary value from {0, n} at random. Add Δ_k to g*. Remove k from W. Increment b1 (if y_k = n) or b0 (if y_k = 0) by one.
4. For each i ∈ W, set d̃_i := d̃_i − d_ik and, if y_k = n, also set d_i := d_i + d_ik.
5. If b1 = m, then go to 6. Otherwise, check whether n − b0 = m. If so, then go to 7; else return to 2.
6. For each j ∈ W do
   6.1. Fix y_j at 0. Add Δ_j(0) to g*. Remove j from W.
   6.2. For each i ∈ W, set d̃_i := d̃_i − d_ij.
   Go to 8.
7. For each j ∈ W do
   7.1. Fix y_j at n. Add Δ_j(n) to g*. Remove j from W.
   7.2. For each i ∈ W, set d̃_i := d̃_i − d_ij, d_i := d_i + d_ij.
8. Set x_i := y_i/n for each i ∈ V. Stop with the solution x of value f* := g*/n².

As can be seen from the description of the algorithm, it is possible to operate directly on the variables x_i, i ∈ V, for example, to fix x_k at 1 if Δ_k(n) > Δ_k(0) in Step 3. We, however, have used the variables y_i instead of x_i to make the description consistent with the reformulation of the problem (5)–(7).

Notice that the algorithm can be simplified in the case where all the coefficients d_ij of the objective function are nonnegative. In this case, the only possible value for y_k in Step 3 is n, and the variable itself is selected according to the criterion Δ′_k = max_{i∈W} Δ′_i, where Δ′_i = m d̃_i + n d_i is a substitute for formula (9) (yet, Δ_k(n) must be added to g*). Also, in this case, Step 7 can be dropped and all operations regarding maintenance and usage of b0 can be eliminated.

It is easy to see that the time complexity of STA is O(n²). Indeed, the loop consisting of Steps 2 through 5 is executed fewer than n times. Each of Steps 2 and 4 performs O(n) operations. Observing that the complexity of Steps 1, 6 and 7 is O(n²), we get the above declared estimate.

The developed algorithm was used as an alternative method for generating initial solutions for the iterated tabu search technique described in the next section. The algorithm could also be used in other scenarios. For example, it can easily be randomized and applied in the construction phase of GRASP implementations for the considered problem.
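The steps of STA can be sketched in Python as follows. This is an illustrative reimplementation for the case m1 = m2 = m, not the author's C code; in particular, ties in Step 3 are broken deterministically toward n here rather than randomly.

```python
import random

def sta(D, n, m, seed=None):
    """Sketch of STA for m1 = m2 = m; D is a symmetric n x n matrix
    with zero diagonal. Returns a 0-1 vector x and its value f(x)."""
    rng = random.Random(seed)
    W = set(range(n))      # free vertices (y_i = m)
    U = set()              # vertices fixed at n (x_i = 1)
    b0 = b1 = 0            # counts of variables fixed at 0 and at n
    d_tilde = [sum(D[i][j] for j in range(n) if j != i) for i in range(n)]
    d_bar = [0] * n

    def delta(i, a):
        # Formula (9): change of g when the free y_i is set to a in {0, n}
        return (a - m) * (m * d_tilde[i] + n * d_bar[i])

    while W:
        if b1 == m:                  # Step 6: remaining variables forced to 0
            W.clear()
            break
        if n - b0 == m:              # Step 7: remaining variables forced to n
            U |= W
            W.clear()
            break
        best = max(max(delta(i, 0), delta(i, n)) for i in W)   # Step 2
        k = rng.choice([i for i in W
                        if max(delta(i, 0), delta(i, n)) == best])
        to_n = delta(k, n) >= delta(k, 0)                      # Step 3
        W.discard(k)
        if to_n:
            U.add(k)
            b1 += 1
        else:
            b0 += 1
        for i in W:                                            # Step 4
            d_tilde[i] -= D[i][k]
            if to_n:
                d_bar[i] += D[i][k]

    x = [1 if i in U else 0 for i in range(n)]
    f = sum(D[i][j] for i in U for j in U if i < j)            # objective (1)
    return x, f
```

Each pass through the main loop fixes one variable in O(n) time, matching the O(n²) bound derived above.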

3. Iterated tabu search

In this section, we present an iterated tabu search algorithm for the model (1)–(3). The main ingredients of this algorithm are tabu search and solution perturbation procedures.

Like STA, the algorithm maintains, for each i ∈ V, the sum of the weights of the edges connecting i with the vertices that belong to the current solution U = {j ∈ V | x_j = 1}. As before, we denote this sum by d_i = Σ_{j∈U} d_ij. The algorithm can be described as follows.

ITS

1. Construct an initial 0–1 vector x satisfying (2) (for example, randomly or using the STA algorithm). Set x* := x.
2. Apply TS(x, x*).
3. Check whether the stopping criterion is met. If so, then stop with the solution x* of value f(x*). Otherwise proceed to 4.
4. Apply GSP(x, p̄), where p̄ is an integer randomly and uniformly drawn from the interval [a1, ⌊a2 n⌋]. Return to 2.

The algorithm repeatedly invokes two procedures: GSP (Get Start Point) for construction of a starting solution and TS (Tabu Search) for iterative improvement of this solution. The input to the first of them includes the parameter p̄ that specifies the number of components of x whose values must be replaced by the opposite ones, that is, 0 by 1 or vice versa. This integer is randomly drawn from an interval whose bounds depend on the parameters a1 and a2. Clearly, we can require that 0 < a1 ≤ n, a2 ≤ 1 and a1 ≤ ⌊a2 n⌋. The precise values of a1 and a2 should be selected experimentally. We note that this pair of parameters controls the depth of solution perturbation.

In Step 3 of ITS, a stopping criterion must be specified. It may be any suitable rule, for example, an upper bound on the number of TS invocations or a stopping rule based on the CPU clock. In our experiments, we adopted the latter alternative.
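Under these conventions, the outer loop of ITS can be sketched as below. The names `tabu_search` and `perturb` are placeholder callables standing in for the TS and GSP procedures (they are not the paper's code), and the stopping rule is a wall-clock time limit, as adopted in the experiments.

```python
import random
import time

def its(x0, f, tabu_search, perturb, a1=10, a2=0.1,
        time_limit=1.0, seed=None):
    """Skeleton of the ITS loop (Steps 1-4). `tabu_search(x, x_best)` must
    return a pair (x, x_best); `perturb(x, p)` must flip p components of x.
    Requires a1 <= int(a2 * len(x0))."""
    rng = random.Random(seed)
    n = len(x0)
    x = list(x0)                                  # Step 1: initial solution
    x_best = list(x0)
    deadline = time.monotonic() + time_limit
    x, x_best = tabu_search(x, x_best)            # Step 2
    while time.monotonic() < deadline:            # Step 3: clock-based rule
        p = rng.randint(a1, int(a2 * n))          # Step 4: perturbation depth
        x = perturb(x, p)
        x, x_best = tabu_search(x, x_best)
    return x_best, f(x_best)
```

Note that x*, here `x_best`, is threaded through every call, reflecting the fact that invocations of TS are not fully independent.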

The procedure TS invoked in Step 2 of ITS contains only the main ingredient of tabu search, namely, a short-term memory tabu list without an aspiration criterion. The input to TS includes two 0–1 vectors: an initial solution x and the best solution x* found so far. We should note that the best vector x* is not overridden: it is passed from one run of TS to the next one. Thus, invocations of TS are not fully independent. If, during the run of TS, a solution x is found that is better than x*, a local search procedure LS is executed. It accepts x and returns, through the same parameter x, a locally optimal solution together with the difference f_local between the value of f on this solution and that on the submitted vector x. In the description given below, T_i, i ∈ V, and T_ij, i, j ∈ V, i < j, denote tabu values. The indices of positive T_i and T_ij constitute the tabu list. T_ij > 0 for a pair i, j ∈ V indicates that the simultaneous assignments x_i := 1 − x_i and x_j := 1 − x_j are forbidden. Likewise, T_i > 0, i ∈ V, suppresses flipping the value of x_i. We assume, for simplicity, that both T_ij and T_ji stand for the same tabu value. The procedure TS can be stated as follows:

TS(x, x*)

1. Set c := 0, f̃ := f(x), T_i := 0, i = 1, …, n, T_ij := 0, i = 1, …, n−1, j = i+1, …, n. Using (11) for U = {j ∈ V | x_j = 1}, compute d_i for each i ∈ V.
2. Set h := −∞, ĉ := 0, r := −1 (the flag ĉ indicates that a solution better than x* has been found).
3. If |U| = m2, then go to 4. Otherwise, for k = 1, …, n such that T_k = 0 and k ∉ U do
   3.1. Increment c by one. If f̃ + d_k > f(x*), then set h := d_k, q := k, ĉ := 1 and go to 6.
   3.2. If d_k > h, then set h := d_k, q := k.
4. If |U| = m1, then go to 5. Otherwise, for k = 1, …, n such that T_k = 0 and k ∈ U do
   4.1. Increment c by one. If f̃ − d_k > f(x*), then set h := −d_k, q := k, ĉ := 1 and go to 6.
   4.2. If −d_k > h, then set h := −d_k, q := k.
5. If m1 < |U| < m2, then go to 6. Otherwise, for k, l = 1, …, n such that T_kl = 0, k ∈ U and l ∉ U do
   5.1. Increment c by one. Set δ := d_l − d_k − d_kl. If f̃ + δ > f(x*), then set h := δ, q := k, r := l, ĉ := 1 and go to 6.
   5.2. If δ > h, then set h := δ, q := k, r := l.
6. Set f̃ := f̃ + h, x_q := 1 − x_q. If r > 0, then also set x_r := 1. Update U accordingly. Update d_i for each i ∈ V (except q if r < 0). If ĉ = 0, then go to 8. Otherwise proceed to 7.
7. Apply LS(x, U, d, f_local, c), d = (d_1, …, d_n), obtaining a possibly improved solution x. Set f̃ := f̃ + f_local, x* := x.
8. If c is greater than or equal to a predetermined upper limit c̄, then return. Otherwise, decrement by one each positive T_i, i ∈ V, and each positive T_ij, i, j ∈ V, i < j. If r < 0, then set T_q := T; otherwise, set T_qr := T′ (here T and T′ are tabu tenure values selected experimentally). Go to 2.

In Step 8 of TS, three parameters T, T′ and c̄ are used. After performing some preliminary experiments, it has been found that a good strategy is to fix both T and T′ at 20 (except for the case n < 80, in which the value n/4, for example, can be chosen). The parameter c̄ together with the counter c defines a stopping rule applied in TS. We prefer the choice c̄ = max(10,000, bn), where b is a tuning factor controlling search duration.

Obviously, if m1 = m2, then Steps 3 and 4 in the above description of TS become redundant. Also, in this case, the tabu values T_i, i ∈ V, are not used and, therefore, can be removed from the statement of Steps 1 and 8.
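For this case m1 = m2, the core of TS (the swap moves of Step 5, a pair tabu list without aspiration, and the best-solution update of Step 6) can be sketched as follows. The call to LS in Step 7 is omitted to keep the sketch short, so this is a simplified illustration rather than the full procedure.

```python
def ts_swap(x, D, x_best, tenure=20, c_bar=1000):
    """Sketch of TS for m1 = m2: swap moves only, no LS step."""
    n = len(x)
    U = {i for i in range(n) if x[i] == 1}
    d = [sum(D[i][j] for j in U) for i in range(n)]    # d_i as in (11)
    f = sum(D[i][j] for i in U for j in U if i < j)
    f_best = sum(D[i][j] for i in range(n) for j in range(i + 1, n)
                 if x_best[i] and x_best[j])
    tabu = {}            # pair (k, l) with k < l  ->  remaining tenure
    c = 0                # counter of evaluated moves, bounded by c_bar
    while c < c_bar:
        h, move = None, None
        for k in U:
            for l in range(n):
                if l in U or tabu.get((min(k, l), max(k, l)), 0) > 0:
                    continue
                c += 1
                gain = d[l] - d[k] - D[k][l]   # delta of swapping k out, l in
                if h is None or gain > h:
                    h, move = gain, (k, l)
        if move is None:
            break                              # every admissible move is tabu
        k, l = move
        U.discard(k)
        U.add(l)
        x[k], x[l] = 0, 1
        f += h
        for i in range(n):                     # maintain d_i after the swap
            d[i] += D[i][l] - D[i][k]
        if f > f_best:
            f_best, x_best = f, list(x)
        for key in list(tabu):                 # age the tabu list
            tabu[key] -= 1
            if tabu[key] == 0:
                del tabu[key]
        tabu[(min(k, l), max(k, l))] = tenure
    return x, x_best
```

The gain expression d_l − d_k − d_kl matches the quantity δ of Step 5, and the O(n)-time update of the vector d after each accepted move mirrors Step 6.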

The local search procedure applied within TS is a standard routine performing an ascent from a given point to a local optimum. It consists of the following five steps.

LS(x, U, d, f_local, c)

1. Set f_local := 0, ĉ := 0.
2. If |U| = m2, then go to 3. Otherwise, for q = 1, …, n such that q ∉ U perform the following actions. Increment c by one. Check whether d_q > 0. If so, then set ĉ := 1, x_q := 1, f_local := f_local + d_q, U := U ∪ {q} and update d_i for each i ∈ V \ {q}. If |U| = m2, then break from the "for" loop.
3. If |U| = m1, then go to 4. Otherwise, for q = 1, …, n such that q ∈ U perform the following actions. Increment c by one. Check whether d_q < 0. If so, then set ĉ := 1, x_q := 0, f_local := f_local − d_q, U := U \ {q} and update d_i for each i ∈ V \ {q}. If |U| = m1, then break from the "for" loop.
4. If m1 < |U| < m2, then go to 5. Otherwise, for q, r = 1, …, n such that q ∈ U and r ∉ U perform the following actions. Increment c by one. Set δ := d_r − d_q − d_qr. Check whether δ > 0. If so, then set ĉ := 1, x_q := 0, x_r := 1, f_local := f_local + δ, U := U ∪ {r} \ {q} and update d_i for each i ∈ V.
5. If ĉ > 0, then set ĉ := 0 and go to 2. Otherwise return.

Again, if m1 = m2, then two steps, namely Step 2 and Step 3 of LS, can be dropped. In this case, the pairwise exchanges of vertices performed in Step 4 keep the size of the set U constant.
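A sketch of this routine for m1 = m2 (so only the pairwise exchanges of Step 4 apply) is given below. Unlike the published description, this version restarts the scan after every accepted move, which is a minor simplification; it still terminates in a solution admitting no improving exchange.

```python
def local_search_swap(x, D):
    """Sketch of LS for m1 = m2: pairwise-exchange ascent to a local optimum.
    Returns the improved x and the total gain over the input solution."""
    n = len(x)
    U = {i for i in range(n) if x[i] == 1}
    d = [sum(D[i][j] for j in U) for i in range(n)]
    f_local = 0                        # total improvement over the input x
    improved = True
    while improved:
        improved = False
        for q in list(U):
            for r in range(n):
                if r in U:
                    continue
                delta = d[r] - d[q] - D[q][r]
                if delta > 0:          # improving exchange found: apply it
                    x[q], x[r] = 0, 1
                    U.discard(q)
                    U.add(r)
                    f_local += delta
                    for i in range(n):
                        d[i] += D[i][r] - D[i][q]
                    improved = True
                    break
            if improved:
                break
    return x, f_local
```

Termination is guaranteed because each accepted exchange strictly increases the objective, which can only take finitely many values over the feasible solutions.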

Another important mechanism employed in ITS is the procedure GSP, implementing a strategy for perturbation of a given solution. The input to GSP includes the number p̄ of variables to be chosen among x_1, …, x_n. The task assigned to the procedure is to select this number of variables and flip their values (from 0 to 1 or vice versa). The variable selection process is randomized. A variable (only if m1 < m2) or a suitable pair of variables is randomly selected from a candidate list of length b. This list is constructed by including variables x_j or pairs of variables x_j, x_k, j ∈ U, k ∈ V \ U, that are not yet flipped and for which the values of |d_j| and, respectively, d_k − d_j − d_jk are largest. The perturbed 0–1 vector x is a feasible solution to (1)–(3) and serves as a starting point for the tabu search procedure. We assume that p̄ is not very large compared to m. In particular, if m1 = m2, we require that p̄ ≤ 2 min(m, n − m). A similar restriction can be derived for the case m1 < m2. According to Step 4 of ITS, the value of p̄ is randomly drawn from the interval [a1, ⌊a2 n⌋]. After performing preliminary tests, we have chosen a2 = 0.1. For all instances of the MDP we have solved, 0.1n was less than 2 min(m, n − m). The perturbation procedure can be formally stated as follows.

GSP(x, p̄)

1. Set p := 0, I := V, U := {i ∈ V | x_i = 1}. Let Ū = V \ U.
2. Consider the set S = {s_1, …, s_i, …, s_l} = S0 ∪ S1 ∪ S2, where S0 = {s_i = {j} | j ∈ I ∩ Ū} if |U| < m2 and S0 = ∅ otherwise, S1 = {s_i = {j} | j ∈ I ∩ U} if |U| > m1 and S1 = ∅ otherwise, and S2 = {s_i = {j, k} | j ∈ I ∩ U, k ∈ I ∩ Ū} if |U| = m1 or |U| = m2 and S2 = ∅ otherwise. Attach z_i = d_j and, respectively, z_i = −d_j to s_i = {j} if s_i belongs to S0 and S1, respectively, and z_i = d_k − d_j − d_jk to s_i = {j, k} ∈ S2. Form a subset S′ ⊆ S, |S′| = b, such that z_l ≥ z_i for each l ∈ S′ and each i ∈ S \ S′ (in other words, pick the b largest values among z_1, …, z_l, breaking ties randomly).
3. Randomly select s_i ∈ S′. If s_i defines a single vertex, say q, then remove q from I and set x_q := 1 − x_q, p := p + 1. If s_i defines a pair of vertices, say q and r, then remove both q and r from I and set x_q := 1 − x_q, x_r := 1 − x_r, p := p + 2. In both cases, update U and d_i for each i ∈ I.
4. If p < p̄, then go to 2. Otherwise return with the perturbed solution x.

Clearly, if m1 = m2, then the description of GSP can be simplified. In this case, both S0 and S1 are empty and, therefore, S = S2.
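For this case m1 = m2 (so S = S2), the perturbation can be sketched as follows. This is an illustration, not the paper's implementation; in particular, ties in the ranking are broken by index order here rather than randomly.

```python
import random

def gsp(x, D, p_bar, b=5, seed=None):
    """Sketch of GSP for m1 = m2: repeatedly rank the not-yet-flipped pairs
    (j in U, k outside U) by z = d_k - d_j - d_jk, draw one of the b best
    at random, and swap it, until p_bar variables have been flipped."""
    rng = random.Random(seed)
    n = len(x)
    I = set(range(n))                   # vertices not flipped yet
    U = {i for i in range(n) if x[i] == 1}
    d = [sum(D[i][j] for j in U) for i in range(n)]
    p = 0
    while p < p_bar:
        cand = [(d[k] - d[j] - D[j][k], j, k)
                for j in I & U for k in I - U]
        if not cand:
            break
        cand.sort(reverse=True)
        _, q, r = rng.choice(cand[:b])  # candidate list of length b
        x[q], x[r] = 0, 1               # flip the pair
        U.discard(q)
        U.add(r)
        I.discard(q)
        I.discard(r)
        for i in range(n):              # keep d_i consistent
            d[i] += D[i][r] - D[i][q]
        p += 2
    return x
```

Because every flipped vertex is removed from I, no variable is flipped twice, so the returned solution differs from the input in exactly p positions while remaining feasible.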


The performance of ITS, of course, depends on the size b of S′. An empirical investigation has shown that the best results are obtained when b ∈ [5, 10]. For such b, the complexity of GSP is O(n³). When the value of p̄ submitted to GSP is close to the constant a1, the complexity of GSP decreases to O(n²).

As can be seen from the description, ITS alternates between calls to the tabu search and solution perturbation procedures. Each time, the second of them is applied to the solution x coming from the first one. Basically, within TS, the variable x is used to represent the current solution of tabu search. Upon termination of TS, this solution is usually rather good, that is, not much worse than the currently best solution x*. The strategy of submitting x to GSP instead of x* increases the level of diversification in the search process, yet the perturbed solution, for small b and modest p̄, remains of sufficiently high quality and can serve as a good starting point for the next invocation of TS. Computational experience with ITS allowed us to conclude that this strategy and the developed solution perturbation procedure provide an efficient mechanism for restarting the search.

4. Computational results

The main purpose of the experimentation was to show the competitiveness and attractiveness of the approach. The proposed algorithms were coded in the C programming language and run on a Pentium M 1733 MHz notebook. The sources are publicly available at http://www.soften.ktu.lt/~gintaras/max_div.html. To evaluate the performance of ITS, the following seven sets of problem instances were used: Silva instances introduced in [19], four sets of Duarte-Martí instances [7], Beasley instances from the OR-Library [21], and some larger instances of our own. For comparison purposes, we also implemented in C the tabu search algorithm of Macambira [5]. While writing the code, we followed the detailed description of this algorithm given in [5], making only one change: we replaced the termination criterion involving the number of iterations performed without improvement of the best solution by a stopping rule based on the CPU clock.

After preliminary testing, we fixed the values of the following ITS parameters: T = 20, T′ = 20, a1 = 10, a2 = 0.1, b = 5. So, the only algorithm parameter whose value is required to be submitted to the program implementing ITS is the tuning factor b (used to calculate c̄). It has been found, however, that the quality of the obtained results is not very sensitive to the choice of this factor. In our experiments, we ran ITS with it set to 1000 when n ≤ 2000 and to 10,000 otherwise. In Step 1 of ITS, we used the steepest ascent algorithm only for small problem instances (n ≤ 200). We observed that, for larger instances, the strategy of submitting a good-quality initial solution to tabu search does not provide better results than the strategy of generating an initial feasible solution randomly. Therefore, for n > 200, in Step 1 of ITS, a vector x is constructed by randomly selecting m ∈ [m1, m2] of its components and setting each of them to 1 and the rest to 0. In the main phase of the experiments, we ran both ITS and the tabu search algorithm of Macambira (named here MTS) 10 times on each instance. For larger instances, we additionally performed longer runs of ITS.

Table 1 shows the results obtained for a series of Silva instances. Each d_ij in (1) for these instances is an integer randomly and uniformly drawn from the interval [0, 9]. The number of variables n ∈ {100, 200, 300, 400, 500} and the problem parameter m ∈ {0.1n, 0.2n, 0.3n, 0.4n} are encoded in the name of an instance (first column). The second column gives, for each instance, the value of the best solution delivered by ITS. A maximum CPU time limit of 20 s was set for each run of ITS as well as MTS. For each instance, the first number in the third column is the difference between the best value, displayed in the second column, and the value of the best solution found by ITS. The number in parentheses is the difference between the best value and the average value of 10 runs of ITS. The next column gives these two characteristics for MTS. We also tried the steepest ascent algorithm on the Silva instances. The results are presented in the fifth column. The algorithm took much less than one second even for the largest problems. The last two columns display, for each instance, the difference between the best value and the objective value of a solution found by the KLD + PR method proposed by Silva et al. [20] and, respectively, by the Tabu_D2 + LS_TS method proposed by Duarte and Martí [7] (these values are reported, respectively, in [20] and [7]). The results for Tabu_D2 + LS_TS were obtained in 10 s per instance on a Pentium IV 3 GHz PC (see Appendix A in [7]). The computation times required for KLD + PR are much longer. For example, KLD + PR took 52,497 s on an AMD Athlon 1.4 GHz PC for the last problem instance in the set (see [20] for the other CPU times). The last row of Table 1 presents the results averaged over all 20 instances.


Table 1
Performance on the Silva instances

Instance        Best value   Solution difference (i.e., best value − heuristic solution value)
                             ITS        MTS          STA     KLD + PR   Tabu_D2 + LS_TS
Silva_100_10    333          0 (0)      0 (3.8)      8       0          0
Silva_100_20    1195         0 (0)      0 (8.6)      32      0          0
Silva_100_30    2457         0 (0)      0 (40.1)     104     0          0
Silva_100_40    4142         0 (0)      0 (2.5)      75      0          0
Silva_200_20    1247         0 (0)      6 (17.4)     51      0          0
Silva_200_40    4450         0 (0)      0 (16.6)     151     0          0
Silva_200_60    9437         0 (0)      0 (16.5)     159     0          0
Silva_200_80    16,225       0 (0)      0 (14.7)     173     0          0
Silva_300_30    2694         0 (0)      11 (31.8)    25      3          0
Silva_300_60    9689         0 (3.2)    0 (41.2)     268     0          0
Silva_300_90    20,743       0 (0)      18 (69.6)    315     0          9
Silva_300_120   35,881       0 (0)      3 (20.7)     291     2          3
Silva_400_40    4658         0 (0)      27 (52.6)    91      0          3
Silva_400_80    16,956       0 (0)      24 (78.7)    413     11         8
Silva_400_120   36,317       0 (0)      0 (101.6)    487     11         19
Silva_400_160   62,487       0 (7.0)    10 (88.1)    744     0          31
Silva_500_50    7141         0 (0)      24 (74.3)    170     11         8
Silva_500_100   26,258       0 (0)      15 (96.8)    352     4          4
Silva_500_150   56,572       0 (0)      0 (110.0)    824     0          0
Silva_500_200   97,344       0 (0)      18 (108.7)   868     0          0

Average                      0 (0.5)    7.8 (49.7)   280.0   2.1        4.2

378 G. Palubeckis / Applied Mathematics and Computation 189 (2007) 371–383

As Table 1 shows, the proposed iterated tabu search algorithm is superior to the other algorithms. It was able to find the best solutions in almost all runs in a reasonable amount of time on the rather slow computer we used. Moreover, for three instances, namely Silva_400_120, Silva_500_50 and Silva_500_100, ITS produced

Table 2
Performance on the Duarte–Martí Type1_55 instances (n = 500, m = 50)

Instance        Best value   Solution difference (i.e., best value − heuristic solution value)
                             ITS         MTS              Tabu_D2 + LS_TS
Type1_55.1         7833.83   0 (0)       46.82 (118.86)    0.01
Type1_55.2         7771.66   0 (6.42)    44.03 (83.35)    16.76
Type1_55.3         7759.36   0 (0)        2.30 (66.54)     9.72
Type1_55.4         7770.24   0 (1.97)     0 (56.87)       11.12
Type1_55.5         7755.23   0 (2.85)    49.25 (102.08)    6.72
Type1_55.6         7773.71   0 (0)       38.07 (80.99)    10.67
Type1_55.7         7771.73   0 (0.84)    22.11 (67.35)    19.03
Type1_55.8         7750.88   0 (0)       15.92 (88.47)    15.72
Type1_55.9         7770.07   0 (2.45)     9.38 (59.70)    16.14
Type1_55.10        7780.35   0 (0)       12.65 (77.19)     1.70
Type1_55.11        7770.95   0 (2.10)    49.07 (91.43)     1.34
Type1_55.12        7757.65   0 (0)        7.10 (90.24)     0
Type1_55.13        7798.43   0 (0)       54.90 (108.80)   14.63
Type1_55.14        7795.63   0 (0)       48.24 (87.37)     4.55
Type1_55.15        7736.84   0 (3.20)    29.59 (82.38)    18.13
Type1_55.16        7792.77   0 (0)       39.33 (85.43)     0
Type1_55.17        7787.20   0 (0)        0 (78.44)        1.22
Type1_55.18        7756.26   0 (1.19)    31.38 (83.06)     0
Type1_55.19        7755.41   0 (0)       58.53 (101.13)    0
Type1_55.20        7733.86   0 (0)       18.50 (71.08)     0

Average                      0 (1.05)    28.86 (84.04)     7.37



solutions whose value (shown in boldface) is larger than the best known value reported in the literature. As expected, STA provided worse solutions than the iterative approaches. However, STA is a very fast constructive algorithm, and in that light its results are quite acceptable.
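The excerpt does not reproduce STA itself, but its speed is plausible from the shape of a typical steepest-ascent construction for the MDP: keep, for every element, its total distance to the current selection, and repeatedly add the element with the largest such gain. A rough sketch under that assumption (function and variable names are mine, not the paper's):

```python
def steepest_ascent_construct(d, m):
    # Greedy construction for the MDP with m1 = m2 = m: repeatedly add the
    # element whose total distance to the already-selected set is largest.
    # Gains are maintained incrementally, so the build costs O(n*m) time,
    # consistent with sub-second runs even on the largest Silva instances.
    n = len(d)
    in_sel = [False] * n
    # Seed with the element of largest total distance to all others.
    first = max(range(n), key=lambda i: sum(d[i]))
    in_sel[first] = True
    selected = [first]
    gain = list(d[first])  # gain[j] = sum of d[i][j] over selected i
    while len(selected) < m:
        best = max((j for j in range(n) if not in_sel[j]),
                   key=lambda j: gain[j])
        in_sel[best] = True
        selected.append(best)
        for j in range(n):
            gain[j] += d[best][j]
    return selected
```

This is only one plausible design; the actual STA of the paper may differ in its seeding rule and tie-breaking.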

Computational experiments with ITS and MTS were also performed on the four sets of instances recently introduced by Duarte and Martí [7]. Those data sets are named Type1_55, Type1_52, Type1_22 and Type2. Their parameters are as follows: for Type1_55, n = 500, m = 50; for Type1_52, n = 500, m = 200; for Type1_22, n = 2000, m = 200; for Type2, n = 500, m = 50. For the instances in the Type2 set, each d_ij in the objective function is a real number randomly and uniformly drawn from the interval (0,1000). The other instances were generated similarly, except that the shorter interval (0,10) was used. The results obtained for the Duarte–Martí instances are summarized in Tables 2–5. The entries of the last three columns of these tables are calculated in the same way as those of the corresponding columns of Table 1. As in the case of the Silva instances, the runs of both ITS and MTS were limited to 20 s. The second column of Table 2 (respectively, Table 5) contains, for each instance in the Type1_55 (respectively, Type2) data set, the value of the best solution obtained from the 10 runs of ITS. Meanwhile, for each instance in the other two sets, an additional, longer run of ITS was performed. For these runs, the time limits were increased to 1200 s for Type1_52 and 3600 s for Type1_22. The objective function values of the solutions produced are displayed in the second column of Tables 3 and 4, respectively. The data in the last column of the tables are obtained by taking the difference between the values listed in the second column and those reported in [7] for the Tabu_D2 + LS_TS method. A few such differences (in Tables 3 and 5), however, appear to be negative. Since the distance matrices consist of real numbers (except in the case of Type1_22, where all d_ij are integers), the small discrepancies between the results can be attributed to rounding errors in floating-point computations. In the programs implementing ITS as well as MTS, the real numbers are stored in variables of type ``double'', so all calculations are performed in double-precision floating-point arithmetic.
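The rounding explanation is easy to corroborate: IEEE-754 double addition is not associative, so two programs that accumulate the same real-valued distances in a different order can legitimately report objective values that differ in the last decimals. A minimal self-contained illustration (not code from the paper):

```python
import math

# The same three "distances" summed in two different orders give two
# different machine numbers, even though the real-valued sums are equal.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False

# math.fsum tracks the accumulated rounding error, which removes the
# order dependence; the discrepancy itself is tiny, as in Tables 3 and 5.
assert math.fsum([a, b, c]) == math.fsum([c, b, a])
assert abs(left - right) < 1e-15
```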

The results given in Tables 2–5 clearly show that ITS performs significantly better than MTS. The comparison with Tabu_D2 + LS_TS is also in favour of the ITS algorithm. Specifically, ITS produced better solutions than those reported in [7] for 66 problem instances. The objective function values of these solutions are displayed in boldface (see the second column of the tables). The instances in the Type1_22 set appear to be more

Table 3
Performance on the Duarte–Martí Type1_52 instances (n = 500, m = 200)

Instance        Best value   Solution difference (i.e., best value − heuristic solution value)
                             ITS             MTS                Tabu_D2 + LS_TS
Type1_52.1       107394.58    0 (0.68)        26.82 (59.53)      −0.19
Type1_52.2       107251.75    0 (0)           82.13 (152.68)     95.76
Type1_52.3       107260.39    0 (0)            2.23 (81.10)      12.73
Type1_52.4       107010.90    0 (0.63)        17.77 (119.15)     24.60
Type1_52.5       106944.55    0 (12.82)        0 (149.52)        22.29
Type1_52.6       107167.36    0 (1.15)        55.45 (156.64)      2.83
Type1_52.7       107079.44    0 (0)            0.82 (118.72)     39.38
Type1_52.8       107077.45    0 (14.66)        1.50 (113.60)     63.21
Type1_52.9       107482.71    0 (0)            3.90 (69.45)       6.52
Type1_52.10      107265.81    0 (0)           29.23 (121.58)     70.06
Type1_52.11      107193.08    0 (1.15)        18.29 (97.82)      48.81
Type1_52.12      106853.46    0 (5.25)        20.78 (112.29)     40.73
Type1_52.13      107647.28    0 (0)            2.55 (61.42)      15.41
Type1_52.14      107427.17    0 (3.34)         8.36 (124.92)     23.30
Type1_52.15      107054.79   16.54 (16.54)    25.37 (125.79)     47.90
Type1_52.16      107420.66    0 (0)            5.31 (151.35)     50.64
Type1_52.17      107111.01    0 (5.32)        38.16 (108.82)     54.09
Type1_52.18      107006.35    0 (2.43)        24.25 (166.63)     52.71
Type1_52.19      107052.95    0 (12.46)       24.05 (87.74)      44.36
Type1_52.20      106815.65    0 (6.54)       106.67 (224.12)     80.51

Average                       0.83 (4.15)     24.68 (120.14)     39.78


Table 4
Performance on the Duarte–Martí Type1_22 instances (n = 2000, m = 200)

Instance        Best value   Solution difference (i.e., best value − heuristic solution value)
                             ITS             MTS               Tabu_D2 + LS_TS
Type1_22.1         114,271   110 (280.1)     634 (943.2)        431
Type1_22.2         114,327    75 (293.2)     652 (1117.1)       491
Type1_22.3         114,195    69 (242.1)     709 (957.1)        664
Type1_22.4         114,093    93 (200.9)     685 (936.7)        668
Type1_22.5         114,196   171 (362.7)     652 (822.9)        780
Type1_22.6         114,265   105 (249.8)     586 (869.9)        304
Type1_22.7         114,361    33 (208.3)     721 (988.9)        660
Type1_22.8         114,327    25 (271.7)     711 (1039.4)       599
Type1_22.9         114,199     9 (200.4)     551 (878.4)        501
Type1_22.10        114,229   166 (266.9)     618 (829.7)        658
Type1_22.11        114,214   166 (306.1)     639 (947.9)        583
Type1_22.12        114,214   124 (283.4)     548 (919.7)        566
Type1_22.13        114,233   163 (301.2)     278 (834.8)        777
Type1_22.14        114,216   147 (355.0)     589 (1001.1)       614
Type1_22.15        114,240    11 (138.2)     403 (809.8)        390
Type1_22.16        114,335    44 (263.6)     562 (952.4)        718
Type1_22.17        114,255   187 (260.7)     707 (916.9)        498
Type1_22.18        114,408    93 (201.9)     630 (860.4)        675
Type1_22.19        114,201   118 (281.0)     777 (974.8)        643
Type1_22.20        114,349   177 (329.1)     435 (830.2)        775

Average                      104.3 (264.8)   604.3 (921.6)      599.7


difficult than those in the other sets. Although, for these instances, some extra iterations of tabu search produce some improvement in the performance of ITS, the solutions obtained in 20 s of computation time are already of sufficiently high quality.

Table 5
Performance on the Duarte–Martí Type2 instances (n = 500, m = 50)

Instance        Best value   Solution difference (i.e., best value − heuristic solution value)
                             ITS             MTS                     Tabu_D2 + LS_TS
Type2.1          778030.57   0 (0)           5013.29 (10039.02)       140.44
Type2.2          779963.54   0 (0)           1018.11 (9600.52)         −0.27
Type2.3          776768.17   0 (0)           2958.72 (8743.68)        296.73
Type2.4          775394.47   0 (0)            524.02 (8633.02)         85.72
Type2.5          775610.96   0 (329.84)      4851.53 (8466.06)         −0.48
Type2.6          775153.58   0 (0)            983.37 (8812.20)        793.52
Type2.7          777232.88   0 (0)           1159.68 (4868.34)        844.13
Type2.8          779168.62   0 (0)              0 (8161.97)           945.12
Type2.9          774802.05   0 (0)           2879.18 (6736.58)          0.30
Type2.10         774961.12   0 (105.68)      1286.24 (6419.93)       1020.12
Type2.11         777468.78   0 (263.99)      3844.00 (11566.61)       518.47
Type2.12         775492.89   0 (0)           5704.64 (8981.63)       2722.20
Type2.13         780191.78   0 (197.62)      2179.18 (11166.08)        −0.16
Type2.14         782232.68   0 (398.76)      2329.20 (8345.42)        576.74
Type2.15         780300.33   0 (0)           5108.39 (11707.44)        −0.30
Type2.16         775436.19   0 (0)            432.20 (6718.14)          0.13
Type2.17         776618.99   0 (557.87)      2694.07 (7400.73)       1956.18
Type2.18         775850.64   0 (57.56)       2263.07 (8840.64)        349.58
Type2.19         778802.82   0 (36.39)       1612.60 (8320.34)          0.07
Type2.20         778644.65   0 (0)           4149.72 (10292.15)        72.90

Average                      0 (97.39)       2549.56 (8691.02)        516.06


Table 6
Performance on larger problem instances

Instance   Density   Best value    Solution difference (i.e., best value − heuristic solution value)
                                   ITS                MTS
p3000_1        10      6,501,999    330 (854.2)        5615 (8916.5)
p3000_2        30     18,272,568      0 (1124.3)      11,680 (19699.4)
p3000_3        50     29,867,138   1271 (2181.5)       9472 (16153.6)
p3000_4        80     46,914,817   1159 (2250.0)      12,521 (18588.0)
p3000_5       100     58,095,034      0 (818.2)        6096 (14428.2)
p5000_1        10     17,508,071    902 (1920.7)      11,591 (17426.4)
p5000_2        30     50,101,514   2205 (2796.2)      21,972 (31588.9)
p5000_3        50     82,038,723   4331 (6817.2)      25,795 (37562.6)
p5000_4        80    129,411,337    658 (2705.7)      26,290 (37354.4)
p5000_5       100    160,597,469   1370 (3644.1)      18,037 (28876.4)

Average                            1222.6 (2511.2)   14906.9 (23059.4)


Table 6 reports on our computational experience with a set of randomly generated problems that are larger than those considered in [7,19]. This set consists of two subsets with five instances each: instances in the first subset are of size 3000, while those in the second are of size 5000. The density of the matrix D is indicated in the second column. All nonzero coefficients of the objective function are integers drawn uniformly at random from the interval [0,100]. The value of m is set to 0.5n. The third column of the table provides, for each instance in the first (respectively, second) subset, the value of the solution found during the 5 h run (respectively, 10 h run) of ITS. The last two columns summarize the results obtained in 10 shorter runs of the two tested algorithms; their structure is the same as that of the similar columns of the previous tables. The imposed time limit for a run was 1200 s for instances in the first subset and 3600 s for instances in the second subset. As Table 6 shows, ITS is again definitely superior to the MTS algorithm. Though the problem instances are more difficult than those considered so far, ITS was able to find good solutions in relatively modest computation times.
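The construction of these larger instances can be sketched as follows; the original seeds and random number generator are not given in the excerpt, so this only mimics the family, and drawing nonzero weights from [1,100] rather than [0,100] is my assumption, made so that the realized density matches the target:

```python
import random

def dense_random_instance(n, density_percent, seed=0):
    # Symmetric matrix in which each off-diagonal pair is nonzero with
    # probability density_percent / 100. Nonzero weights are uniform
    # integers in [1, 100] (assumption: the stated range [0, 100]
    # refers to the nonzero entries, so 0 is excluded here).
    rng = random.Random(seed)
    p = density_percent / 100.0
    d = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                d[i][j] = d[j][i] = rng.randint(1, 100)
    return d

d = dense_random_instance(300, 30)  # a small stand-in for p3000_2-style data
m = 150                             # the experiments above use m = 0.5n
```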

The two final computational experiments were conducted on a set of test cases taken from the OR-Library [21]. Originally, these test cases were introduced as instances of the unconstrained binary quadratic optimization problem; we adapted them for testing algorithms for the MDP. In the data files specifying the problem instances, the d_ii values are superfluous in the case of the MDP and were not used. All the instances, named b2500-1, . . . , b2500-10, are of size 2500 and have density 10%. In each case, the matrix D contains both positive and negative numbers. We performed two experiments. In the first one, we set m = m1 = m2 = 1000, whereas in the second, we set m1 = 1620, m2 = 1655. The numerical results are summarized in Tables 7 and 8, respectively. The structure of Table 7 is similar to that of Table 6 (except that the column

Table 7
Performance on the Beasley instances: m = 1000

Instance    Best value   Solution difference (i.e., best value − heuristic solution value)
                         ITS               MTS
b2500-1      1,153,068    808 (2550.2)      8784 (16487.8)
b2500-2      1,129,310    602 (1251.6)     11,074 (17601.0)
b2500-3      1,115,538    208 (2074.4)     13,446 (20356.4)
b2500-4      1,147,840    746 (1688.6)     12,636 (18488.8)
b2500-5      1,144,756    558 (1211.4)     13,054 (19598.6)
b2500-6      1,133,572    250 (1512.4)     15,942 (20561.6)
b2500-7      1,149,064    306 (1044.4)      7586 (15856.6)
b2500-8      1,142,762    324 (1754.4)     12,850 (17868.2)
b2500-9      1,138,866    810 (2273.2)     11,506 (21466.6)
b2500-10     1,153,936    426 (1457.8)     11,740 (16726.8)

Average                   503.8 (1681.8)   11861.8 (18501.2)


Table 8
Performance on the Beasley instances: m1 = 1620, m2 = 1655

Instance    Best value    ~m     Solution difference for ITS (i.e., best value − heuristic solution value)
b2500-1      1,514,640   1655      0 (182.0)
b2500-2      1,470,970   1651      6 (209.0)
b2500-3      1,412,726   1620     48 (599.2)
b2500-4      1,508,654   1655     10 (85.4)
b2500-5      1,490,366   1655      0 (86.0)
b2500-6      1,470,200   1644      0 (0)
b2500-7      1,478,924   1633      0 (0)
b2500-8      1,483,440   1620      0 (175.8)
b2500-9      1,482,384   1649      0 (0.4)
b2500-10     1,484,416   1625    132 (159.0)

Average                           19.6 (149.7)


showing the density is not present). In Table 8, ~m is the number of ones in a solution x whose value is reported in the second column. Table 8 does not contain results for MTS because this algorithm was developed for solving the model (1), (3), (4), and not (1)–(3) with m1 < m2. The second column of each table gives, for each instance, the value of the solution found during the 5 h run of ITS. The time limit for the 10 shorter runs in both experiments was set to 1200 s. Table 7 shows that ITS again consistently outperforms the MTS algorithm. Comparing the results for ITS in Tables 7 and 8, we see that solutions to the less constrained problem (where m1 < m2) are generally closer to the best solutions than those to the more constrained problem (where m1 = m2). Notice that there are five instances of the former problem for which the number of ones ~m in the best solution appeared to be strictly inside the interval [1620, 1655].
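A checker for the variable-cardinality experiments of Table 8 is a useful companion to these numbers. The sketch below is mine, not the paper's; parsing of the OR-Library files is omitted and the symmetric matrix is assumed to be in memory, with its diagonal entries ignored as described above. It counts the selected elements, verifies m1 ≤ ~m ≤ m2, and evaluates the objective:

```python
def check_mdp_solution(d, x, m1, m2):
    # x is a 0/1 incidence vector over the n elements. Returns the pair
    # (m_tilde, objective value); raises ValueError when the cardinality
    # window m1 <= m_tilde <= m2 is violated. Diagonal entries d[i][i]
    # are never read, so any leftover values there are harmless.
    selected = [i for i, xi in enumerate(x) if xi]
    m_tilde = len(selected)
    if not m1 <= m_tilde <= m2:
        raise ValueError(f"infeasible: {m_tilde} ones, window [{m1}, {m2}]")
    value = sum(d[i][j]
                for a, i in enumerate(selected)
                for j in selected[a + 1:])
    return m_tilde, value
```

On a matrix with both positive and negative off-diagonal entries, as in the Beasley cases, the objective can decrease when elements are added, which is why ~m need not sit at the upper end of the window.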

5. Conclusions

In this paper we have described steepest ascent and iterated tabu search algorithms for the maximum diversity problem. Both algorithms are rather simple and easy to implement. The steepest ascent algorithm may be particularly appropriate in application settings where solutions of satisfactory quality must be provided for large problem instances very quickly, e.g. in less than one second of computation time. For obtaining high-quality solutions, the proposed iterated tabu search algorithm is a very attractive alternative to the existing algorithms based on various metaheuristics. Computational experience shows that this algorithm provides significantly better performance than other state-of-the-art algorithms. In particular, the proposed algorithm found new best solutions for 69 test problems appearing in the literature. The size of the problem instances for which good solutions can be obtained in a reasonable amount of time using the presented iterated tabu search algorithm reaches 5000 variables. We believe that similar principles could be successfully applied to develop practical algorithms for a number of other hard combinatorial optimization problems.

References

[1] M.J. Kuby, Programming models for facility dispersion: the p-dispersion and maxisum dispersion problems, Geographical Analysis 19 (1987) 315–329.
[2] S.S. Ravi, D.J. Rosenkrantz, G.K. Tayi, Heuristic and special case algorithms for dispersion problems, Operations Research 42 (1994) 299–310.
[3] E.M. Macambira, C.C. de Souza, The edge-weighted clique problem: valid inequalities, facets and polyhedral computations, European Journal of Operational Research 123 (2000) 346–371.
[4] B. Chandra, M.M. Halldorsson, Approximation algorithms for dispersion problems, Journal of Algorithms 38 (2001) 438–465.
[5] E.M. Macambira, An application of tabu search heuristic for the maximum edge-weighted subgraph problem, Annals of Operations Research 117 (2002) 175–190.
[6] U. Feige, G. Kortsarz, D. Peleg, The dense k-subgraph problem, Algorithmica 29 (2001) 410–421.
[7] A. Duarte, R. Martí, Tabu search and GRASP for the maximum diversity problem, European Journal of Operational Research 178 (2007) 71–84.
[8] F. Glover, C.C. Kuo, K.S. Dhir, Heuristic algorithms for the maximum diversity problem, Journal of Information and Optimization Sciences 19 (1998) 109–132.
[9] G. Kochenberger, F. Glover, Diversity data mining, Working paper, University of Mississippi, University, MS, 1999.
[10] O. Koksoy, T. Yalcinoz, Mean square error criteria to multiresponse process optimization by a new genetic algorithm, Applied Mathematics and Computation 175 (2006) 1657–1674.
[11] M.S. Osman, M.A. Abo-Sinna, A.A. Mousa, IT–CEMOP: an iterative co-evolutionary algorithm for multiobjective optimization problem with nonlinear constraints, Applied Mathematics and Computation 183 (2006) 373–389.
[12] M. Hunting, U. Faigle, W. Kern, A Lagrangian relaxation approach to the edge-weighted clique problem, European Journal of Operational Research 131 (2001) 119–131.
[13] M.M. Sorensen, New facets and a branch-and-cut algorithm for the weighted clique problem, European Journal of Operational Research 154 (2004) 57–70.
[14] R. Hassin, S. Rubinstein, A. Tamir, Approximation algorithms for maximum dispersion, Operations Research Letters 21 (1997) 133–137.
[15] R.K. Kincaid, Good solutions to discrete noxious location problems via metaheuristics, Annals of Operations Research 40 (1992) 265–281.
[16] J.B. Ghosh, Computational aspects of the maximum diversity problem, Operations Research Letters 19 (1996) 175–181.
[17] F. Glover, Tabu search — part I, ORSA Journal on Computing 1 (1989) 190–206.
[18] B. Alidaee, F. Glover, G. Kochenberger, H. Wang, Solving the maximum edge weight clique problem via unconstrained quadratic programming, European Journal of Operational Research, in press, doi:10.1016/j.ejor.2006.06.035.
[19] G.C. Silva, L.S. Ochi, S.L. Martins, Experimental comparison of greedy randomized adaptive search procedures for the maximum diversity problem, Lecture Notes in Computer Science 3059 (2004) 498–512.
[20] G.C. Silva, M.R.Q. de Andrade, L.S. Ochi, S.L. Martins, A. Plastino, New heuristics for the maximum diversity problem, Working paper, Universidade Federal Fluminense, Niteroi, Brazil, 2006.
[21] J.E. Beasley, Obtaining test problems via Internet, Journal of Global Optimization 8 (1996) 429–433.