

Computing
DOI 10.1007/s00607-013-0316-2

Essential Particle Swarm Optimization queen with Tabu Search for MKP resolution

Raida Ktari · Habib Chabchoub

Received: 4 July 2012 / Accepted: 23 February 2013
© Springer-Verlag Wien 2013

Abstract The Particle Swarm Optimization (PSO) algorithm is an innovative and promising optimization technique in evolutionary computation. The Essential Particle Swarm Optimization queen (EPSOq) is one of the recent discrete PSO versions that further simplifies the PSO principles and improves its optimization ability. Hybridization is the principle of combining two (or more) approaches in a wise way, such that the resulting algorithm includes the positive features of both (or all) of them. This paper proposes a new heuristic approach in which various features inspired from Tabu Search are incorporated into the EPSOq algorithm in order to obtain another improved discrete PSO version. The implementation of this idea is identified with the acronym TEPSOq (Tabu Essential Particle Swarm Optimization queen). Experimentally, this approach is able to solve optimally large-scale strongly correlated 0–1 Multidimensional Knapsack Problem (MKP) instances. Computational results show that TEPSOq outperforms not only the EPSOq, but also other existing PSO-based approaches and some other meta-heuristics in solving the 0–1 MKP. It was also found that this algorithm is able to locate solutions extremely close and even equal to the best known results available in the literature.

Keywords Particle Swarm Optimization · Essential Particle Swarm Optimization queen · Tabu Search · Hybridization · Multidimensional Knapsack Problem

Mathematics Subject Classification 68T20 Problem solving (heuristics, search strategies, etc.) · 68R05 Combinatorics

R. Ktari (B) · H. Chabchoub
L.O.G.I.Q. Research unit, Sfax University, Rte Mharza km 1,5, 3018 Sfax, Tunisia
e-mail: [email protected]

H. Chabchoub
e-mail: [email protected]


1 Introduction

Swarm intelligence is concerned with the design of intelligent multi-agent systems, taking inspiration from the collective behavior of flocks of birds or schools of fish. Swarm intelligence methods such as Particle Swarm Optimization (PSO) have been very successful in the area of optimization, which is of great importance both for industry and for science. In fact, PSO has gained widespread appeal amongst researchers and has been shown to be efficient in a variety of application domains such as power and voltage control [1], mass-spring systems [2], task assignment [3], biomedical image registration [4], and even music composition [5]. A comprehensive survey of PSO algorithms and applications can be found in [6].

Particle swarm optimization (PSO) was first introduced as an optimization method for solving continuous problems [7,8]. Later, a binary version of PSO (BPSO) was proposed to accommodate discrete binary variables and allow it to operate in a binary problem space [9]. Since the BPSO did not provide satisfying results [10], several BPSO variants have been proposed in order to improve its results and performance. Very recently, Chen et al. [11] suggested an improved version of BPSO, called EPSOq (Essential Particle Swarm Optimization queen).

Hybrid (or cooperative) algorithms are the topic of the present article since they are a growing area of intelligent-systems research, which aims to combine the desirable properties of different approaches to mitigate their individual weaknesses. These methods are not new in the operational research community. It is not easy to classify existing optimization methods: beyond the classical separation between exact methods and heuristic methods, several papers are devoted to the taxonomy of hybrid methods. In general, cooperative systems have drawn much attention over recent years since they often significantly outperform the individual "pure" approaches. The main purpose of this paper is to suggest an improvement of the original EPSOq algorithm and, consequently, to assess its performance relative to other methods.

Indeed, our contribution has a twofold aim. The first is to incorporate into the EPSOq algorithm various features inspired from Tabu Search, which has successfully solved many problems in applied science, business and engineering. These features drive the exploration (diversification) and the exploitation (intensification) of the search process, with the aim of coming as near as possible to the optimum solution. The second is to explore the unfeasible area of the search space: the repair operator suggested by Chu and Beasley [12], based on pseudo-utility ratios, is embedded in our algorithm in order to turn an unfeasible solution into a feasible one.

This paper deals with the application of EPSOq and TEPSOq to Combinatorial Optimization (CO) problems, a field rarely tackled by PSO. The problem discussed in this paper is the Multidimensional Knapsack Problem (MKP), which is well known to be an NP-hard Combinatorial Optimization problem [13]. Although recent advances have made the solution of medium-size instances possible, solving this NP-hard problem remains an increasingly interesting challenge, especially when the number of constraints increases.

The remainder of this paper is organized as follows: Sect. 2 presents a description of the 0–1 Multidimensional Knapsack Problem. Section 3 briefly explains the principles of the PSO in its continuous version, the BPSO, and the EPSOq that we use as a basis for our algorithm. Section 4 details our contribution, which incorporates Tabu Search characteristics into the EPSOq. The performance assessment of the EPSOq and TEPSOq is a critical point of this paper; that is why, in Sect. 5, we test these approaches on a very large variety of big-size 0–1 MKP instances from the OR-Library [14], which are considered rather difficult for optimization approaches. Then, we compare our results with the EPSOq performance, with the best known results in the literature [12,15,16] and also with other existing approaches. Finally, Sect. 6 provides some concluding remarks.

2 Multidimensional Knapsack Problem

2.1 MKP and its application domains

The 0–1 Multidimensional Knapsack Problem (MKP) is one of the most well-known constrained integer programming problems and has received wide attention from the operational research community during the last four decades. Several names have been used in the literature for the MKP: m-dimensional knapsack problem, multidimensional knapsack problem, multiknapsack problem, multiconstraint 0–1 knapsack problem, etc. Weingartner and Ness [17] were the first to formulate the MKP, which is a generalization of the standard 0–1 knapsack problem (m = 1) [18].

Indeed, this problem consists in selecting a subset of n given objects (or items) in such a way that the total profit of the selected objects is maximized while a set of knapsack constraints is satisfied. More formally, the 0–1 MKP can be stated as follows:

$$
(MKP)\quad
\begin{cases}
\max \displaystyle\sum_{j=1}^{n} c_j x_j & (1)\\[4pt]
\text{s.t.}\ \displaystyle\sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad \forall\, i \in M = \{1,\dots,m\} & (2)\\[4pt]
x_j \in \{0,1\}, \quad \forall\, j \in N = \{1,\dots,n\} & (3)
\end{cases}
$$

Equation (1) describes the objective function for the MKP. Each of the $m$ constraints described in condition (2) is called a knapsack constraint, where $M = \{1, 2, \dots, m\}$ and $N = \{1, 2, \dots, n\}$, with $b_i > 0$ for all $i \in M$ and $a_{ij} \ge 0$ for all $i \in M$, $j \in N$. A well-stated MKP assumes that $c_j > 0$ and $a_{ij} \le b_i < \sum_{j=1}^{n} a_{ij}$ for all $i \in M$, $j \in N$.
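As an illustration of the formulation above, the following sketch (Python with NumPy; the instance data are invented for illustration, not taken from the paper's benchmarks) evaluates a candidate 0–1 vector against objective (1) and constraints (2):

```python
import numpy as np

def mkp_evaluate(x, c, A, b):
    """Objective value of a 0-1 MKP solution, or None if infeasible.

    x : binary vector of length n (item selection)
    c : profit vector of length n
    A : m x n matrix of resource consumptions a_ij
    b : capacity vector of length m
    """
    if np.all(A @ x <= b):   # every knapsack constraint (2) satisfied
        return int(c @ x)    # objective (1): total profit of selected items
    return None              # at least one constraint violated

# Tiny illustrative instance: n = 3 items, m = 2 constraints
c = np.array([10, 7, 4])
A = np.array([[3, 2, 2],
              [4, 1, 3]])
b = np.array([5, 5])
```

For this instance, selecting items 1 and 2 is feasible (consumptions 5 and 5), while selecting all three items violates both constraints.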

The MKP is one of the most intensively studied discrete programming problems, mainly because of its simple structure, which can be seen as a general model for any kind of binary problem with positive coefficients.

The large domain of its applications has greatly contributed to its fame. In fact, many applications of this NP-hard problem are resource-allocation-based [17,19]. Recently, Meier et al. [20] used the MKP as a sub-problem in a new capital budgeting model. There are other applications such as cutting stock [21], loading problems [22], processor allocation in distributed systems [23] and the daily management of a satellite [24]. Given the practical and theoretical importance of the 0–1 MKP, it is not surprising to find a large number of studies in the literature; we give a brief review of these studies in the next section.


2.2 MKP and its resolution methods

Several exact and heuristic optimization methods designed to find the optimal solution of the problem have been successfully applied to small-size instances. Such success is unfortunately not guaranteed for problems of moderate and large size, due to memory and computational time requirements. Some exact algorithms, which are only applicable to very small MKPs, are presented e.g. in [25]. Besides, Balas and Martin [26] introduced a heuristic algorithm for the MKP which utilizes linear programming (LP) by relaxing the integrality constraints $x_j \in \{0, 1\}$ to $0 \le x_j \le 1$. Linear programming problems are not NP-hard and can be solved efficiently, e.g. with the famous Simplex algorithm [27]. The fractional $x_j$ are then set to 0 or 1 according to a heuristic which maintains feasibility. Further examples of heuristic algorithms for MKP resolution can be found in [28–31]. A comprehensive review of exact and heuristic algorithms is given in [32,33].

Due to the NP-hardness of the 0–1 MKP, which leads to intractable computation times for large instances, plenty of efficient algorithms have been used to solve it; several of them are meta-heuristics. Indeed, Chu and Beasley [12] proposed a genetic algorithm for the MKP, Hanafi and Fréville [34] used Tabu Search for MKP resolution, whereas Alonso et al. [35] suggested an evolutionary strategy for the MKP based on genetic computation of surrogate multipliers. Drexl [36] proposed a simulated annealing approach for the MKP and Stefka Fidanova [37] applied ant colony optimization. Li et al. [38] suggested a genetic algorithm based on orthogonal design for the MKP. Zhou et al. [39] suggested a chaotic neural network combined with a heuristic strategy. Angelelli et al. [40] proposed Kernel search, a general heuristic, for the MKP.

Reference [41] was the first attempt at solving the MKP using Particle Swarm Optimization (PSO); its authors incorporated a heuristic repair operator utilizing problem-specific knowledge into the modified algorithm. Reference [42] proposed a novel Probability Binary Particle Swarm Optimization algorithm for 0–1 MKP resolution, in which a novel updating strategy is adopted to update the swarm and search for the global solution, so as to further simplify the computations and improve the optimization ability. Reference [43] suggested a hybrid PSO obtained using linear programming, and in [44] the candidate solution and the velocity are defined as a crisp set and a set with possibilities, respectively. All arithmetic operators in the velocity and position updating rules of the original PSO are replaced by operators and procedures defined on crisp sets and sets with possibilities, so that their method (S-CLPSO) can follow a structure similar to the original PSO for searching in a discrete space.

3 Particle Swarm Optimization

3.1 Real-valued Particle Swarm Optimization

Particle Swarm Optimization (PSO) is one of the evolutionary optimization methods inspired by nature, which include evolution strategies (ES), evolutionary programming (EP), genetic algorithms (GA), and genetic programming (GP) [45]. It is a population-based search algorithm based on the simulation of the social behavior of animals [46]. PSO was originally designed and introduced by [7,8]. The PSO algorithm is based on the exchange of information between individuals (so-called particles) of the population (the so-called swarm). Indeed, each particle adjusts its own position towards its previous experience and towards the best previous position obtained in the swarm.

In fact, a swarm consists of $N_p$ particles moving around an $n$-dimensional search space, each representing a potential solution. The $i$th particle is characterized by its position vector $X_i = [x_{i1}, x_{i2}, \dots, x_{in}]$ and velocity vector $V_i = [v_{i1}, v_{i2}, \dots, v_{in}]$. A particle flies through the solution space on each dimension searching for the global optimal solution. Each particle remembers the position at which it achieved its highest performance ($x^*_{ij}$). Every particle is also a member of some neighborhood of particles and remembers which particle achieved the best overall position in that neighborhood (given by the index $i'$). During flight, the current position and velocity of the $i$th particle are updated according to Eqs. (4) and (5):

$$v_{ij}(t+1) = \omega\, v_{ij}(t) + U(0,\varphi_1)\bigl(x^*_{ij}(t) - x_{ij}(t)\bigr) + U(0,\varphi_2)\bigl(x^*_{i'j}(t) - x_{ij}(t)\bigr) \quad (4)$$

$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1) \quad (5)$$

where the function $U(\min, \max)$ is a uniform random number generator that generates a new value every time it is called. $\varphi_1$ is the weight given to the attraction to the previous best location of the current particle, and $\varphi_2$ is the weight given to the attraction to the previous best location of the particle's neighborhood. The parameter $\omega$ is an inertia weight in the interval [0.0, 1.0] that controls the influence of the previous velocity: it slows the velocity over time to prevent explosions of the swarm and to ensure ultimate convergence.

The velocity $v_i$ of each particle is clamped to a maximum velocity $V_{max}$ specified by the user. $V_{max}$ determines the resolution with which regions between the present position and the target position are searched. Large values of $V_{max}$ facilitate global exploration, while smaller values encourage local exploitation. If $V_{max}$ is too small, the swarm may not explore sufficiently beyond locally good regions; on the other hand, too large values of $V_{max}$ risk missing a good region [7]. The working of PSO is illustrated in Fig. 1.
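One particle's update per Eqs. (4) and (5), including the velocity clamping to $V_{max}$, can be sketched as follows (Python/NumPy; the parameter values are illustrative, not those used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, nbest, omega=0.7, phi1=1.5, phi2=1.5, vmax=4.0):
    """One update of a single particle per Eqs. (4)-(5).

    x, v  : current position / velocity vectors
    pbest : the particle's own best position (x*_i)
    nbest : best position found in its neighborhood (x*_i')
    """
    # U(0, phi) draws a fresh uniform number for each dimension
    v_new = (omega * v
             + rng.uniform(0, phi1, size=x.shape) * (pbest - x)
             + rng.uniform(0, phi2, size=x.shape) * (nbest - x))
    v_new = np.clip(v_new, -vmax, vmax)   # clamp velocity to [-Vmax, Vmax]
    return x + v_new, v_new               # Eq. (5): move to the new position
```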

3.2 Binary Particle Swarm Optimization

Standard PSO and most of its variants were developed for optimization problems in which the elements of the solution are continuous real numbers, and therefore they are not able to solve binary combinatorial optimization problems. To tackle this limitation, Kennedy and Eberhart [9] extended the original Particle Swarm Optimization and proposed the first BPSO. A sigmoid function, given by $S(x) = 1/(1 + e^{-x})$, is applied to map all real-valued velocities into the range [0.0, 1.0], so that a binary position can be decoded by using the velocity as a probability.

Fig. 1 PSO searching mechanism

In BPSO, the velocity updating Eq. (4) remains unchanged, whereas the position updating rule given in Eq. (5) is replaced by the following equation:

$$x_{ij}(t+1) = \begin{cases} 1 & \text{if } U(0,1) < S\bigl(v_{ij}(t+1)\bigr) \\ 0 & \text{otherwise} \end{cases} \quad (6)$$

The velocity value under the sigmoid function is used directly to determine the value of $x_{ij}(t+1)$, without considering its previous value $x_{ij}(t)$. In fact, this sigmoid-based update is clearly quite different from the update step of the continuous PSO version.
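A minimal sketch of this decoding step, Eq. (6), in Python/NumPy (function names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(v):
    """S(x) = 1 / (1 + e^-x), mapping velocities into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-v))

def bpso_position(v_new):
    """Decode a binary position per Eq. (6):
    each bit is set to 1 with probability S(v_ij(t+1))."""
    return (rng.uniform(0, 1, size=v_new.shape) < sigmoid(v_new)).astype(int)
```

With strongly positive velocities almost every bit decodes to 1; with strongly negative velocities, to 0.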

To improve the BPSO performance, a few modified versions have been proposed. Shen et al. [47] developed a modified BPSO for feature selection in MLR and PLS modeling. Then, Lee et al. [48] proposed a modified version of BPSO using the concepts of genotype–phenotype and the mutation operator of genetic algorithms.

To the best of our knowledge, the Essential Particle Swarm Optimization queen (EPSOq) introduced by [11] is the most recent improved PSO variant. It was tested on several benchmark functions, and the simulation results demonstrated that it has a high possibility of providing excellent performance on binary-space problems. This version is detailed in the next section.

3.3 Essential Particle Swarm Optimization queen

The Essential Particle Swarm Optimization queen (EPSOq) [11] is one of the few approaches focused on improving the BPSO. In fact, the EPSOq idea consists in introducing the concept of a queen informant particle, analogous to the pheromone array in Ant Colony Optimization [49]. The queen informant has two parts, each with the same dimension as any other particle in the swarm: one part is $q_0 = [q_{01}, q_{02}, \dots, q_{0n}]$, the other is $q_1 = [q_{11}, q_{12}, \dots, q_{1n}]$, where $q_{kj}$ ($k = 0, 1$ and $j = 1, 2, \dots, n$) represents the probability that $x_{gj}(t+1)$ equals $k$. Only the global best particle (given by the index $g$) updates the queen informant after each loop. The updating rules are given by:

$$q_{0j}(t+1) = (1-\rho)\,q_{0j}(t) + \rho\,\bigl(1 - x_{gj}(t)\bigr) \quad (7)$$

$$q_{1j}(t+1) = (1-\rho)\,q_{1j}(t) + \rho\,x_{gj}(t) \quad (8)$$

where $\rho \in (0, 1)$ and $(1-\rho)$ is the persistence rate of the previous probability. In fact, $\rho$ reflects the evaporation of probability over time; hence the sum $q_{0j}(t) + q_{1j}(t)$ always equals 1. As a result, the computation of Eq. (8) simplifies to:

$$q_{1j}(t+1) = 1 - q_{0j}(t+1) \quad (9)$$
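The queen update of Eqs. (7)–(9) amounts to an exponential moving average of the global best position; a sketch (Python/NumPy; variable names are ours):

```python
import numpy as np

def update_queen(q0, x_gbest, rho=0.1):
    """EPSOq queen-informant update, Eqs. (7)-(9).

    q0      : probability vector that each bit of x_g(t+1) equals 0
    x_gbest : current global best binary position
    rho     : evaporation rate in (0, 1)
    """
    q0 = (1 - rho) * q0 + rho * (1 - x_gbest)   # Eq. (7)
    q1 = 1 - q0                                  # Eq. (9), since q0 + q1 = 1
    return q0, q1
```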

Indeed, the EPSOq idea also consists in removing the velocity component algebraically from the PSO approach. Since $x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1)$ in Eq. (5), rearrangement gives $v_{ij}(t+1) = x_{ij}(t+1) - x_{ij}(t)$. The same holds on the previous time-step, so $v_{ij}(t) = x_{ij}(t) - x_{ij}(t-1)$. By substitution, the two rules (4) and (5) collapse into one:

$$
\begin{aligned}
x_{ij}(t+1) ={} & x_{ij}(t) + \omega\bigl(x_{ij}(t) - x_{ij}(t-1)\bigr) \\
& + U(0,\varphi_1)\bigl(x^*_{ij}(t) - x_{ij}(t)\bigr) + U(0,\varphi_2)\bigl(x^*_{i'j}(t) - x_{ij}(t)\bigr) \quad (10)
\end{aligned}
$$

By rearrangement, we get:

$$
\begin{aligned}
x_{ij}(t+1) ={} & \bigl(1 + \omega - U(0,\varphi_1) - U(0,\varphi_2)\bigr)x_{ij}(t) - \omega\, x_{ij}(t-1) \\
& + U(0,\varphi_1)\,x^*_{ij}(t) + U(0,\varphi_2)\,x^*_{i'j}(t) \quad (11)
\end{aligned}
$$

In fact, $\bigl(1 + \omega - U(0,\varphi_1) - U(0,\varphi_2)\bigr)$ is interpreted as the probability that $x_{ij}(t+1)$ equals $x_{ij}(t)$, $U(0,\varphi_1)$ as the probability that $x_{ij}(t+1)$ equals $x^*_{ij}(t)$, and $U(0,\varphi_2)$ as the probability that $x_{ij}(t+1)$ equals $x^*_{i'j}(t)$. On the other hand, since the coefficient of $x_{ij}(t-1)$ in the term $-\omega\, x_{ij}(t-1)$ is negative, the inertia weight $\omega$ is regarded as the probability that $x_{ij}(t+1)$ does not equal $x_{ij}(t-1)$; that is why, in binary-valued problems, $\omega$ is considered as the probability that $x_{ij}(t+1)$ equals $1 - x_{ij}(t-1)$. As a result, $P'\{x_{ij}(t+1) = 0\}$ is set as the probability that $x_{ij}(t+1)$ equals 0, and $P'\{x_{ij}(t+1) = 1\}$ as the probability that $x_{ij}(t+1)$ equals 1, obtained by simplifying Eqs. (10) and (11) and incorporating the queen informant as follows:

$$
\begin{aligned}
P'\{x_{ij}(t+1)=0\} ={} & U(0,1)\bigl(1-x_{ij}(t)\bigr) + U(0,c_0)\,x_{ij}(t-1) \\
& + U(0,c_1)\bigl(1-x^*_{ij}(t)\bigr) + U(0,c_2)\bigl(1-x^*_{i'j}(t)\bigr) \\
& + U(0,c_3)\,q_{0j}(t) \quad (12)
\end{aligned}
$$

$$
\begin{aligned}
P'\{x_{ij}(t+1)=1\} ={} & U(0,1)\,x_{ij}(t) + U(0,c_0)\bigl(1-x_{ij}(t-1)\bigr) \\
& + U(0,c_1)\,x^*_{ij}(t) + U(0,c_2)\,x^*_{i'j}(t) + U(0,c_3)\bigl(1-q_{0j}(t)\bigr) \quad (13)
\end{aligned}
$$

where $c_3$ is a new parameter giving the weight of $q_{kj}$ relative to the weight of $x_{ij}(t)$. In these rules, the authors set the weight of $x_{ij}(t)$ to 1. As to $c_0$, $c_1$ and $c_2$, they are the weights of $x_{ij}(t-1)$, $x^*_{ij}(t)$ and $x^*_{i'j}(t)$ relative to the weight of $x_{ij}(t)$, respectively. The likelihoods (12) and (13) are normalized as follows:

$$P\{x_{ij}(t+1)=0\} = \frac{P'\{x_{ij}(t+1)=0\}}{P'\{x_{ij}(t+1)=0\} + P'\{x_{ij}(t+1)=1\}} \quad (14)$$

$$P\{x_{ij}(t+1)=1\} = \frac{P'\{x_{ij}(t+1)=1\}}{P'\{x_{ij}(t+1)=0\} + P'\{x_{ij}(t+1)=1\}} \quad (15)$$

such that $P\{x_{ij}(t+1)=0\} + P\{x_{ij}(t+1)=1\} \equiv 1$.

In this research work, the authors dismantle the BPSO algorithm qualitatively, breaking it into its essential components and then reinterpreting them as a new program. Incorporating the queen informant does not increase the number of function evaluations, because it merely adds a new informer that offers information to the other particles. Improving this approach further is our main interest in this paper.
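Putting Eqs. (12)–(15) together, the sampling of a single bit can be sketched as follows (Python/NumPy; the weights $c_0, \dots, c_3$ are set to illustrative values, not those recommended in [11]):

```python
import numpy as np

rng = np.random.default_rng(2)

def epsoq_bit(x_t, x_prev, pbest, nbest, q0,
              c0=1.0, c1=1.0, c2=1.0, c3=1.0):
    """Sample one new bit per Eqs. (12)-(15).

    All arguments are in {0, 1}, except q0 in [0, 1] (queen informant).
    x_t, x_prev : current and previous value of the bit
    pbest, nbest: personal-best and neighborhood-best bit values
    """
    u = rng.uniform
    p0 = (u(0, 1) * (1 - x_t) + u(0, c0) * x_prev
          + u(0, c1) * (1 - pbest) + u(0, c2) * (1 - nbest)
          + u(0, c3) * q0)                        # Eq. (12)
    p1 = (u(0, 1) * x_t + u(0, c0) * (1 - x_prev)
          + u(0, c1) * pbest + u(0, c2) * nbest
          + u(0, c3) * (1 - q0))                  # Eq. (13)
    p1_norm = p1 / (p0 + p1)                      # Eqs. (14)-(15)
    return int(u(0, 1) < p1_norm)                 # draw the new bit
```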

3.4 Particle Swarm Optimization: advantages and drawbacks

Compared to other population-based optimization methods, the advantages of PSO are its structural simplicity, immediate accessibility for practical applications, ease of implementation, speed in acquiring solutions, and robustness [50].

But it is known that the original PSO version has trouble controlling the trade-off between exploration and exploitation and thus suffers from premature convergence. The underlying reason is that, for the global best PSO, particles converge to a single point on the line between the global best and the personal best positions; this point is not even guaranteed to be a local optimum [32]. Another reason for this problem is the fast rate of information flow between particles, which creates similar particles and a loss of diversity that increases the possibility of being trapped in local optima. Although PSO may outperform other evolutionary algorithms in the early iterations, its performance may not remain competitive as the number of generations increases [51].

A further drawback is that stochastic approaches have problem-dependent performance, which usually results from the parameter settings of each algorithm. Different parameter settings for a stochastic search algorithm produce high performance variances, and in general no single parameter setting can be applied to all problems. Increasing the inertia weight ($\omega$) increases the speed of the particles, resulting in more exploration (global search) and less exploitation (local search); reducing the inertia weight decreases the speed of the particles, resulting in more exploitation and less exploration. Thus, finding the best value for this parameter is not an easy task, and it may differ from one problem to another.

Therefore, from the above it can be concluded that PSO performance is problem-dependent. This problem-dependent performance can be addressed through a hybrid mechanism, which combines different approaches so as to benefit from the advantages of each.

Hence, to overcome these limitations of PSO, several investigations have been undertaken to improve the performance of standard PSO [52]. Many hybrid algorithms with GA have been proposed, the rationale being that such a hybrid approach is expected to combine the merits of PSO with those of GA. Attempts in the literature have enhanced the performance of PSO by incorporating into it the fundamentals of other popular techniques, like the selection, mutation and crossover operators of Genetic Algorithms (GA) [53] and Differential Evolution (DE) [54].

4 Tabu Essential Particle Swarm Optimization queen

A natural evolution of population-based search algorithms such as PSO can be achieved by including methods that have already been tested successfully for solving complex and intricate problems.

Our main goal, as mentioned, is to harness the strong points of both the Essential Particle Swarm Optimization queen (EPSOq) and the Tabu Search (TS) algorithm, with the aim of keeping a balance between the exploration (diversification) and exploitation (intensification) factors, thereby avoiding the stagnation of the population and preventing premature convergence.

The next subsections present a combination of the EPSOq principles with some concepts that characterize Tabu Search, in order to improve the efficiency of the search process and thus to solve the Multidimensional Knapsack Problem (MKP) efficiently. We identify this hybridization with the acronym TEPSOq.

In fact, TS has become an established optimization technique that has shown exceptional efficiency in solving various problems [55]. It can compete with almost all known techniques and can beat many classical procedures thanks to its flexibility. The roots of TS go back to the 1970s; it was first presented in its current form by Glover [56], and the pivotal ideas were also roughly sketched by Hansen [57]. Further formalization efforts are detailed in [58–60] and [61]. The proposed TEPSOq algorithm, with its specific Tabu Search features, is described in more detail in the next sub-sections.

4.1 Tabu lists

The risk of revisiting a solution, and more generally of cycling, is present. This is where the use of memory is helpful: it avoids cycles by forbidding moves that might lead to recently visited solutions. Tabu Search manages a memory of the solutions or moves recently applied, called the tabu list, with a FIFO (First In, First Out) data structure. The use of memory, which stores information related to the search process, is the distinctive feature of TS. This tabu list constitutes the short-term memory. For efficiency purposes, it may be convenient to associate with each particle one tabu list whose constituents are not allowed to be involved in a particle move already visited. At each iteration, every short-term memory is updated: the last move is added to the tabu list, whereas the oldest move is removed from it. Indeed, at each iteration we have to check that a solution generated by a particle does not belong to its tabu list.

The tabu list imposes many restrictions, prevents cycles and consequently encourages the diversification of the search, as many moves are forbidden.
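A per-particle tabu list with FIFO eviction can be sketched as follows (Python; the tenure value is an assumption for illustration, since the paper does not fix it here):

```python
from collections import deque

class TabuList:
    """Bounded FIFO tabu list of recently visited solutions, one per
    particle. When full, adding a new entry evicts the oldest one."""

    def __init__(self, tenure=7):
        self.moves = deque(maxlen=tenure)   # oldest entry drops automatically

    def is_tabu(self, solution):
        """Check whether a candidate solution was recently visited."""
        return tuple(solution) in self.moves

    def add(self, solution):
        """Record the latest accepted solution."""
        self.moves.append(tuple(solution))
```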

4.2 Diversification

Some advanced mechanisms are commonly introduced in Tabu Search to deal with the exploration (diversification) of the search. Diversification is worthwhile to ensure that every part of the solution domain is searched enough to provide a reliable estimate of the global optimum; this concept encourages the exploration of unvisited areas. Usually, in PSO, a given particle is dynamically attracted by the social and the cognitive components. Therefore, during evolution towards the best global solution, particles can exceed the boundaries of the feasible search space.

Indeed, the strategic oscillation [62] is a popular diversification strategy adopted in our algorithm. This strategy allows the search to move toward unfeasible solutions and then penalizes them so as to come back to feasible solutions. Unfeasible solutions have to be evaluated, and it is important to perform this computation efficiently. Indeed, PSO usually makes use of a penalty function technique to reduce the constrained problem to an unconstrained one by imposing a penalty on the fitness of such a particle [9].

Although the penalty function technique suffices for nearly all PSO applications to constrained problems, it raises a parameter-adjustment problem. If the penalty values are too high, the optimization algorithm often gets trapped in local minima; if they are too low, detecting feasible optimal solutions becomes difficult. Besides, because the penalty function technique does not exploit problem-specific information, its results are usually not convincing on Combinatorial Optimization problems.

Instead of this methodology, we incorporate the heuristic repair procedure suggested in [12], specially designed for the MKP. The repair operator is based on the notion of pseudo-utility ratios $u_j$ derived from the surrogate duality approach. The pseudo-utility is defined as follows:

$$u_j = \frac{c_j}{\sum_{i=1}^{m} w_i\, a_{ij}} \quad (16)$$

where $w = (w_1, w_2, \dots, w_m)$ is a set of surrogate multipliers (or weights), all positive real numbers. To obtain reasonably good surrogate weights, we can solve the Linear Programming (LP) relaxation of the original MKP and use the values of the dual variables as the weights; in other words, $w_i$ is set equal to the shadow price of the $i$th constraint in the LP relaxation of the MKP.


A brief description of this repair method is as follows. The first phase, called the DROP phase, changes the value of each bit of the solution from one to zero, in increasing order of $u_j$, while feasibility is violated. The second phase, the ADD phase, reverses the process by changing each bit from zero to one, in decreasing order of $u_j$, as long as feasibility is not violated. Thus, the feasibility of the solution is always guaranteed. The pseudo-code for the repair operator is given as follows:
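The DROP/ADD procedure described above can be sketched in Python/NumPy (a reconstruction from the description, with our own variable names and an invented toy instance, not the authors' original pseudo-code):

```python
import numpy as np

def repair(x, c, A, b, w):
    """Chu-Beasley-style DROP/ADD repair sketch.

    x : binary solution (a repaired copy is returned)
    c : profits, A : m x n consumptions, b : capacities
    w : surrogate multipliers, e.g. LP-relaxation dual values
    """
    x = x.copy()
    u = c / (w @ A)                  # pseudo-utility ratios, Eq. (16)
    order = np.argsort(u)            # item indices in increasing u_j
    # DROP phase: remove worst-ratio items until feasibility holds
    for j in order:
        if np.all(A @ x <= b):
            break
        if x[j] == 1:
            x[j] = 0
    # ADD phase: greedily re-add best-ratio items while feasible
    for j in order[::-1]:            # decreasing u_j
        if x[j] == 0:
            x[j] = 1
            if not np.all(A @ x <= b):
                x[j] = 0             # undo if it breaks feasibility
    return x

# Toy data for illustration
c = np.array([10, 7, 4])
A = np.array([[3, 2, 2],
              [4, 1, 3]])
b = np.array([5, 5])
w = np.array([1.0, 1.0])
```

On this toy instance, repairing the infeasible all-ones solution drops the item with the lowest pseudo-utility and returns a feasible selection.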

The diversification process tends to spread the exploration effort over different regions. It proves fruitful to the search because a particle that is unfeasible now can become feasible later, with a good fitness.

4.3 Aspiration

While central to the Tabu Search method, tabus are sometimes too powerful and restrictive. They may prohibit attractive moves, even when there is no danger of cycling, or


they may lead to an overall stagnation of the search process. We may thus end up rejecting solutions that have not even been generated yet. If a move is profitable but tabu, should we still reject it?

It is thus necessary to use algorithmic devices that allow one to revoke (cancel) tabus. These are called aspiration criteria. The simplest and most commonly used aspiration criterion, found in almost all Tabu Search implementations, allows a tabu move when it results in a solution with an objective value better than that of the current best-known solution (since the new solution has obviously not been previously visited). So it is worthwhile to accept a tabu move when it yields a novel solution that is better than the global best one. The key rule is that if cycling cannot occur, tabus can be disregarded. A pseudo aspiration algorithm is presented below.
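The aspiration rule just described fits in a few lines; the sketch below is illustrative (the names are ours), not the paper's exact pseudo-code.

```python
# Aspiration criterion sketch: a tabu move is overruled whenever it would
# produce a solution strictly better than the best one found so far (such
# a solution cannot have been visited before, so no cycling can occur).

def is_move_allowed(move, fitness_after, tabu_list, global_best_fitness):
    if move not in tabu_list:
        return True                                 # not tabu: always allowed
    return fitness_after > global_best_fitness      # aspiration overrides the tabu
```

For a maximization problem such as the MKP, `fitness_after > global_best_fitness` is the improvement test; a minimization variant would flip the comparison.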

This aspiration algorithm overrules the tabu condition in order to further investigate a promising region of the solution space that is not yet fully explored. It is another Tabu Search feature that can improve the Essential Particle Swarm Optimization queen.

4.4 Intensification

The idea behind search intensification is that, as an intelligent human being would probably do, one should explore more thoroughly the portions of the search space that seem "promising", in order to make sure that the best solutions in these areas are found.

Intensification is important to concentrate the search effort around the best solutions found so far, searching their neighborhoods to reach better solutions.

A typical approach to intensification is to restart the search from the best currently known solution and to fix in it the components that seem most attractive.


The role of intensification is to exploit the information of the best solutions found so far to guide the search into promising regions of the search space. This information is stored in a medium-term memory. The idea consists in extracting the (common) features of the best solutions and then intensifying the search around solutions sharing those features. A popular approach consists in restarting the search from the best solution obtained. Algorithm 3 summarizes the intensification process. Thus, the intensification idea is another way to avoid premature convergence to local optima.
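A minimal sketch of this restart-based intensification, assuming the medium-term memory is a list of elite 0/1 solutions with the best one first; treating the bits shared by all elites as the "common features" is our interpretation, not the paper's Algorithm 3.

```python
import random

# Intensification sketch: restart from the best solution found so far,
# keep the components (bits) shared by all elite solutions fixed, and
# re-randomize only the remaining, disagreeing components.

def intensify(memory):
    """memory: elite 0/1 solutions (best first), the medium-term memory."""
    n = len(memory[0])
    restart = list(memory[0])                  # seed: best solution so far
    for j in range(n):
        if len({s[j] for s in memory}) > 1:    # elites disagree on bit j
            restart[j] = random.randint(0, 1)  # re-explore this component
    return restart
```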

5 TEPSOq for 0–1 MKP resolution

5.1 TEPSOq pseudo-code

Combining all the ideas described above with the Essential Particle Swarm Optimization queen, we obtain a novel improved PSO variant. A pseudo-code of the proposed algorithm (TEPSOq) for optimizing the Multidimensional Knapsack Problem (MKP) is presented as follows:

When we use the TEPSOq algorithm to optimize the MKP, the particles' positions are first initialized randomly, while the particles' velocities as well as the tabu lists are initialized to zero. Moreover, we use the ring topology as the neighborhood structure, with the number of neighbors set to 2.
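The initialization step described above can be sketched as follows (illustrative names; the tabu lists start empty, which is our reading of "initialized to zero"):

```python
import random

# Initialization sketch for TEPSOq: random binary positions, zero
# velocities, empty tabu lists, and a ring neighborhood in which each
# particle is informed by its two adjacent particles.

def init_swarm(num_particles, n):
    positions  = [[random.randint(0, 1) for _ in range(n)]
                  for _ in range(num_particles)]
    velocities = [[0.0] * n for _ in range(num_particles)]
    tabu_lists = [[] for _ in range(num_particles)]       # "zeroed" tabu memory
    # ring topology, 2 neighbors: particle i sees i-1 and i+1 (wrapping around)
    neighbors  = [((i - 1) % num_particles, (i + 1) % num_particles)
                  for i in range(num_particles)]
    return positions, velocities, tabu_lists, neighbors
```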

5.2 Parameter tunings

Compared to other population-based algorithms, such as genetic algorithms, PSO has very few parameters to adjust, which makes it particularly easy to implement. Nevertheless, a PSO algorithm must be carefully designed, as the choice of its parameters influences not only the solution quality but also the computation time. Generally, a slow


search leads to better solutions. However, a slow search tends to consume more computation time. Therefore, a tradeoff between the two is necessary.

Being a novel algorithm, TEPSOq needs its parameters to be identified, and this is one of the challenges in getting the algorithm to work in an optimal manner. Obviously, the parameters of TEPSOq seriously affect its real optimization performance. To understand our algorithm well, we studied and tested its parameters. On the one hand, the inertia weight is set to ω = 0.8, and the learning factors are set to ϕ1 = ϕ2 = 2.0. In order to keep the EPSOq and TEPSOq algorithms working well, we set the effect on x_ij(t+1) of both x*_ij(t) and x*_i'j(t) equal to that of x_ij(t); that is, we select c1 = c2 = 1.0. Meanwhile, the influence on x_ij(t+1) of both x_ij(t−1) and the queen informant is far smaller than that of x_ij(t), so we roughly set c0 = 0.02, c3 = 0.1


and ρ = 0.05, as in [11]. Besides, the two parts q0 and q1 of the queen particle are initialized to 0.5.
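For reference, the parameter values quoted in this subsection can be collected into a single configuration mapping (a sketch; the key names mirror the paper's symbols, the dictionary itself is just a convenient illustrative container):

```python
# TEPSOq parameter settings as quoted in the text above.
TEPSOQ_PARAMS = {
    "omega": 0.8,              # inertia weight
    "phi1": 2.0, "phi2": 2.0,  # learning factors
    "c1": 1.0, "c2": 1.0,      # weight of x*_ij(t) and x*_i'j(t)
    "c0": 0.02,                # weight of x_ij(t-1)
    "c3": 0.1,                 # weight of the queen informant
    "rho": 0.05,               # as in [11]
    "q0": 0.5, "q1": 0.5,      # initial parts of the queen particle
}
```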

On the other hand, the size of the tabu list is a new and critical parameter that has a great impact on the performance of TEPSOq. Experimentally, it appeared that a large tabu list restricts many moves and consequently encourages diversification of the search, since many moves are forbidden.

A compromise must be found, depending on the landscape structure of the Multidimensional Knapsack Problem and its instances. In general, a static value is associated with the tabu list. It may depend on the size of the MKP instance and, in particular, on the size of the neighborhood. There is no optimal tabu list size for all problems, or even for all instances of a given problem. Moreover, the optimal value may vary during the search. To overcome this problem, a variable tabu list size may be used, updated according to the performance of the search over the last iterations [63].
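The variable-size policy of [63] can be sketched as a simple reactive rule; the bounds and step below are illustrative choices of ours, not values from the paper.

```python
# Reactive tabu-tenure sketch: grow the list when the search keeps
# revisiting solutions (forcing diversification), shrink it otherwise
# (relaxing the restrictions). Bounds lo/hi and the unit step are
# illustrative, not taken from the paper.

def update_tabu_size(size, repetitions_detected, lo=5, hi=50):
    if repetitions_detected:
        return min(hi, size + 1)  # longer tenure -> stronger diversification
    return max(lo, size - 1)      # shorter tenure -> weaker restriction
```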

The swarm size Np also has a great impact on the performance of our algorithm. In general, a small swarm accelerates the search process but reduces the diversity of the population. Conversely, a large swarm brings higher diversity to the population, but the computational effort of each single iteration increases significantly, and the search process therefore becomes slow. After many experiments, it appeared that Np = n ensures good results.

It can be argued that such a parameter tuning process does not guarantee that the best set of parameters is obtained; it nevertheless serves as a good starting point for setting up initial experiments.

5.3 Experiments, results and discussions

To investigate the performance and assess the effectiveness and viability of the TEPSOq algorithm, we apply it to large-scale strongly correlated 0–1 Multidimensional Knapsack Problem (MKP) instances from the OR-Library [14], which is widely used in the literature and is also the basis of most of the experiments presented in this paper. Indeed, we concentrated on 270 MKP instances with n = 100, 250 and 500 variables, m = 5, 10 and 30 constraints, and tightness ratios α = 0.25, 0.5 and 0.75. Thirty problems were generated for each (m_n_α) combination. The name of each problem is ORmxn-α_r, where m is the number of constraints, n the number of variables, α the value of the tightness ratio and r the number of the instance. Our algorithm has been tested on a machine with an Intel Core i5, 4 GB of RAM and a 2.67 GHz CPU.
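The ORmxn-α_r naming convention just described can be decoded mechanically; the helper below is ours, purely for illustration.

```python
import re

# Decode an OR-Library MKP instance name of the form ORmxn-alpha_r into
# (m constraints, n variables, tightness ratio alpha, instance number r).

def parse_instance_name(name):
    m, n, alpha, r = re.fullmatch(r"OR(\d+)x(\d+)-([\d.]+)_(\d+)", name).groups()
    return int(m), int(n), float(alpha), int(r)

print(parse_instance_name("OR10x250-0.25_7"))  # → (10, 250, 0.25, 7)
```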

After adjusting the parameters as mentioned and verifying the satisfaction of the MKP constraints, the simulation results obtained with the 10 constraint, 250 variable instances (OR10x250) are presented in Table 1, and those obtained with the 10 constraint, 500 variable instances (OR10x500) are presented in Table 2. The first column indicates the instance name, the second column gives the best-known solutions from the OR-Library, and the next two columns record the average solutions over 30 runs of TEPSOq and EPSOq, respectively.


Table 1 Performance of the TEPSOq and EPSOq algorithms on the 10 constraint, 250 variable instances (OR10x250). Results are averaged over 30 runs

Instance Best known TEPSOq EPSOq

OR10x250-0.25_1 59187 59187 59128

OR10x250-0.25_2 58781 58781 58244

OR10x250-0.25_3 58097 58097 57369

OR10x250-0.25_4 61000 60662 60369

OR10x250-0.25_5 58092 58092 57463

OR10x250-0.25_6 58824 58549 58425

OR10x250-0.25_7 58704 58350 57860

OR10x250-0.25_8 58936 57902 58141

OR10x250-0.25_9 59387 59387 58963

OR10x250-0.25_10 59208 59208 58612

OR10x250-0.50_1 110913 110913 109929

OR10x250-0.50_2 108717 108713 107989

OR10x250-0.50_3 108932 108491 108363

OR10x250-0.50_4 110086 110086 109446

OR10x250-0.50_5 108485 108225 107877

OR10x250-0.50_6 110845 110257 110002

OR10x250-0.50_7 106077 106077 105416

OR10x250-0.50_8 106686 106455 105689

OR10x250-0.50_9 109829 109225 108225

OR10x250-0.50_10 106723 106723 106319

OR10x250-0.75_1 151809 151194 151158

OR10x250-0.75_2 148772 148772 148572

OR10x250-0.75_3 151909 151858 151473

OR10x250-0.75_4 151324 151324 150595

OR10x250-0.75_5 151966 151372 151133

OR10x250-0.75_6 152109 152007 151905

OR10x250-0.75_7 153131 153046 152913

OR10x250-0.75_8 153578 153578 152709

OR10x250-0.75_9 149160 149160 148976

OR10x250-0.75_10 149704 149637 149324

These two tables clearly show the contribution of the proposed method in terms of computational results. On the one hand, TEPSOq outperforms EPSOq, with better solution quality on all the tested instances. While EPSOq meets some difficulties on large MKP instances, TEPSOq is still able to find good solutions.

On the other hand, the TEPSOq algorithm locates results that are at worst slightly below, and often equal to, the best known results in the literature [12,15,16]. Indeed, TEPSOq manages to reach 189 optimal solutions among the 270 available instances. The performance of the TEPSOq algorithm relative to the optimal results is summarized statistically in Fig. 2.


Table 2 Performance of the TEPSOq and EPSOq algorithms on the 10 constraint, 500 variable instances (OR10x500). Results are averaged over 30 runs

Instance Best known TEPSOq EPSOq

OR10x500-0.25_1 117811 117811 116747

OR10x500-0.25_2 119232 119232 118431

OR10x500-0.25_3 119215 118997 118893

OR10x500-0.25_4 118813 117999 117341

OR10x500-0.25_5 116509 115828 114800

OR10x500-0.25_6 119504 119410 118811

OR10x500-0.25_7 119827 119063 118701

OR10x500-0.25_8 118329 118329 117404

OR10x500-0.25_9 117815 117025 116271

OR10x500-0.25_10 117815 117815 117815

OR10x500-0.50_1 217377 217377 216920

OR10x500-0.50_2 219077 219068 218059

OR10x500-0.50_3 217847 217847 217321

OR10x500-0.50_4 216868 216257 216195

OR10x500-0.50_5 213859 213796 212713

OR10x500-0.50_6 215086 215086 214646

OR10x500-0.50_7 217940 217825 216257

OR10x500-0.50_8 219984 219825 218330

OR10x500-0.50_9 214375 214368 213884

OR10x500-0.50_10 220899 220168 219006

OR10x500-0.75_1 304387 304387 303023

OR10x500-0.75_2 302379 302196 301258

OR10x500-0.75_3 302416 302416 302416

OR10x500-0.75_4 300757 300645 299985

OR10x500-0.75_5 304374 304001 302651

OR10x500-0.75_6 301836 299774 299432

OR10x500-0.75_7 304952 304841 303496

OR10x500-0.75_8 296478 295875 295829

OR10x500-0.75_9 301359 300964 299575

OR10x500-0.75_10 307089 306010 305440

In fact, the Tabu Search features adopted in the proposed algorithm achieve a broader exploration of the search space and, in most cases, prevent trapping in local optima at the start of the algorithm. Moreover, the heuristic repair operator adopted in the present study plays a critical role in diversification and in exploitation near the global optimum at the end of the algorithm.

We compare the TEPSOq algorithm with other existing PSO-based approaches and some other meta-heuristics. In order to study the performance of TEPSOq on the MKP, we first compare our algorithm with two PSO algorithms implemented in [41] (PSO-R) and [44] (S-CLPSO). The algorithm in [41] is based on Kennedy and Eberhart's binary PSO [9]. It uses the ring topology as the neighborhood structure with


Fig. 2 TEPSOq performance: 70 % of the instances reach optimality, while 30 % of the instances remain below optimality

Table 3 Comparing TEPSOq with S-CLPSO and PSO-R for the MKP

Instance Best known TEPSOq S-CLPSO PSO-R

OR5x100-0.25_1 24381 24381 24356 24356

OR5x100-0.25_2 24274 24274 24213 24036

OR5x100-0.25_3 23551 23551 23530 23523

OR5x100-0.25_4 23534 23534 23478 23481

OR5x100-0.25_5 23991 23991 23963 23966

OR10x100-0.25_1 23064 23064 23051 23050

OR10x100-0.25_2 22801 22801 22725 22668

OR10x100-0.25_3 22131 22131 22073 22029

OR10x100-0.25_4 22772 22772 22741 22733

OR10x100-0.25_5 22751 22751 22605 22632

Results are averaged over 30 runs

the number of neighbors set to 2 and the repair procedure already described. The algorithm in [44] is a set-based Particle Swarm Optimization method that defines position and velocity using the concepts of set and possibility. Its search behavior is very similar to that of the original PSO in continuous space. In our implementation, we use MATLAB as the linear programming solver, and the MATLAB program is linked with the C program of the algorithm during execution. Note that the surrogate weights only need to be computed once and remain unchanged during the whole run of the algorithm.

These algorithm versions are tested on the first five instances of the instance sets OR5x100 and OR10x100 from the OR-Library. The results of PSO-R and S-CLPSO are extracted from [44] and verified against [41]. The TEPSOq algorithm is run 30 times for each instance, and the number of evaluations in each run is the same as that of PSO-R and S-CLPSO. According to the results in Table 3, TEPSOq reaches optimality in all these cases; S-CLPSO yields better average results than PSO-R on six out of the ten instances, while PSO-R obtains better results on three. In this sense, TEPSOq performs slightly better than S-CLPSO which, in its turn, performs slightly better than PSO-R.

In Table 4, the TEPSOq algorithm is further compared with Clerc's simplified BPSO (S-BPSO) algorithm with various strategies [64]. In S-BPSO, four strategies


Table 4 Comparing TEPSOq with S-BPSO_03 and S-BPSO_23 for the MKP

Instance Best known TEPSOq S-BPSO_03 S-BPSO_23

OR5x100-0.25_1 24381 24381 24336 24365

OR5x100-0.25_2 24274 24274 24178 24194

OR5x100-0.25_3 23551 23551 23521 23523

OR5x100-0.25_4 23534 23534 23486 23500

OR5x100-0.25_5 23991 23991 23933 23941

OR5x100-0.25_6 24613 24613 24580 24588

OR5x100-0.25_7 25591 25591 25537 25538

OR5x100-0.25_8 23410 23410 23397 23398

OR5x100-0.25_9 24216 24195 24184 24196

OR5x100-0.25_10 24411 24375 24339 24341

OR10x100-0.25_1 23064 23064 23040 23041

OR10x100-0.25_2 22801 22801 22662 22651

OR10x100-0.25_3 22131 22131 22010 22022

OR10x100-0.25_4 22772 22772 22737 22723

OR10x100-0.25_5 22751 22751 22633 22613

OR10x100-0.25_6 22777 22716 22641 22664

OR10x100-0.25_7 21875 21821 21774 21783

OR10x100-0.25_8 22635 22573 22454 22481

OR10x100-0.25_9 22511 22511 22329 22372

OR10x100-0.25_10 22702 22702 22642 22640

Results are averaged over 30 runs

are provided. Strategy 0 converts the binary coding into integer coding and follows the updating rule of the standard PSO. Strategy 1 builds two positions around the pbest and gbest positions, respectively, and merges them by the majority rule. Strategy 2 is the updating rule of Kennedy and Eberhart's BPSO, but a random topology is adopted rather than the global topology. Strategy 3 simply builds a random position around the gbest position. These strategies can be combined together. We denote the algorithm with strategy 0 as S-BPSO_0, the algorithm with strategies 0 and 1 as S-BPSO_01, and so on. The source code of S-BPSO is available in [65]; we modified it to solve the MKP in the experiment. All the parameter configurations in the source code remain unchanged. For strategy 0 of S-BPSO, the additional parameter numSize is set to n/25.

Two relatively good strategies for the MKP are selected in the experiments, namely S-BPSO_03 and S-BPSO_23 (due to the characteristics of the MKP, strategies 0 and 1 alone seem unsuitable and their performance is not satisfying). Each instance is run 30 times. According to Table 4, it can be observed that TEPSOq reaches the best results among the three algorithms in 18 out of 20 instances.

In order to further test the scalability of our approach, the TEPSOq algorithm is compared with Leguizamon and Michalewicz's Ant System algorithm [66] and the ant algorithm proposed by Alaya et al. [67]. Experimentally, the maximum number


Table 5 Comparing TEPSOq with the published results of the ant algorithms in [66] and [67] for the MKP

Instance Best known TEPSOq Ant system [66] Ant algorithm [67]

OR5x100-0.25_1 24381 24381 24331 24342

OR5x100-0.25_2 24274 24274 24245 24247

OR5x100-0.25_3 23551 23551 23527 23529

OR5x100-0.25_4 23534 23534 23463 23462

OR5x100-0.25_5 23991 23991 23949 23946

OR5x100-0.25_6 24613 24613 24563 24587

OR5x100-0.25_7 25591 25591 25504 25512

OR5x100-0.25_8 23410 23410 23361 23371

OR5x100-0.25_9 24216 24195 24173 24172

OR5x100-0.25_10 24411 24375 24326 24356

OR10x100-0.25_1 23064 23064 22996 23016

OR10x100-0.25_2 22801 22801 22672 22714

OR10x100-0.25_3 22131 22131 21980 22034

OR10x100-0.25_4 22772 22772 22631 22634

OR10x100-0.25_5 22751 22751 22578 22547

OR10x100-0.25_6 22777 22716 22565 22602

OR10x100-0.25_7 21875 21821 21758 21777

OR10x100-0.25_8 22635 22573 22519 22453

OR10x100-0.25_9 22511 22511 22292 22351

OR10x100-0.25_10 22702 22702 22588 22591

Results are averaged over 30 runs

of iterations run by the TEPSOq algorithm is 30. The average numbers of iterations for TEPSOq to achieve the best solution are also reported in Table 5.

According to these results, the convergence process of TEPSOq is slightly better than that of the two ACO algorithms. TEPSOq finds the optimal solutions in 15 out of 20 instances and obtains better average results than the Ant System algorithm [66] and the ant algorithm [67] on all 20 instances. These results reveal that our algorithm is more consistent in obtaining relatively good solutions across different runs.

Finally, we performed further comparisons between our approach and the one suggested in [68]; the results of these experiments are displayed in Table 6. In [68], the authors combined CPLEX and a Memetic Algorithm (MA). The MA is based on Chu and Beasley's principles and includes some improvements suggested in [69–71]. CPLEX and the MA run (quasi-)parallel and continuously exchange information in a bidirectional, asynchronous way, in order to solve the hardest instances of Chu and Beasley's benchmark library, with n = 500 items and m = 30 constraints. The neighborhood size parameter was set to 25, which yielded the best results on average.

It can be observed from Table 6 that the results achieved by TEPSOq are slightly better than those of CPLEX-MA for m = 5 and m = 10, and even equal to the


Table 6 Comparing TEPSOq with the CPLEX-MA algorithm on the 30 constraint, 500 variable instances (OR30x500). Results are averaged over 30 runs

Instance Best known TEPSOq CPLEX-MA

OR30x500-0.25_1 116056 116055 116014

OR30x500-0.25_2 114810 114810 114810

OR30x500-0.25_3 116712 115998 116661

OR30x500-0.25_4 115329 115268 115241

OR30x500-0.25_5 116525 116525 116420

OR30x500-0.25_6 115741 117626 115741

OR30x500-0.25_7 114181 114122 114024

OR30x500-0.25_8 114348 114305 114290

OR30x500-0.25_9 115419 115287 115419

OR30x500-0.25_10 117116 117101 117023

OR30x500-0.50_1 218104 218073 218041

OR30x500-0.50_2 214648 214645 214626

OR30x500-0.50_3 215978 215918 215903

OR30x500-0.50_4 217910 217836 217836

OR30x500-0.50_5 215689 213625 215601

OR30x500-0.50_6 215890 215086 215847

OR30x500-0.50_7 215907 214999 215883

OR30x500-0.50_8 216542 216425 216448

OR30x500-0.50_9 217340 216368 217312

OR30x500-0.50_10 214739 214168 214701

OR30x500-0.75_1 301675 301601 301656

OR30x500-0.75_2 300055 300002 299992

OR30x500-0.75_3 305087 304416 305051

OR30x500-0.75_4 302032 301645 302008

OR30x500-0.75_5 304462 304001 304423

OR30x500-0.75_6 297012 296774 296959

OR30x500-0.75_7 303364 303329 303322

OR30x500-0.75_8 307007 306940 306961

OR30x500-0.75_9 303199 303158 303199

OR30x500-0.75_10 300572 300129 300509

best-known solutions in some cases, whereas for the instances with m = 30 our results are slightly worse than those achieved by the CPLEX-MA algorithm and, obviously, we are not able to obtain the best known solutions.

From all the above comparisons, it can be concluded that TEPSOq achieves competitive results compared to the best-known solutions and performs well for this class of combinatorial problems, even on large instances. That is, TEPSOq seems to be efficient at navigating the hyper-surface of the search space and finding good results.

Besides, the reader needs to bear in mind that this algorithm is still in an early stage of development, and there is room to improve it further. Firstly, it needs to be stressed


that the parameter tuning process used here is extremely simplistic, and it is possible that a set of parameters obtained from a better tuning process would be more fruitful.

Secondly, the repair operator used in TEPSOq, the one proposed by Chu and Beasley [12], is intricate and computationally expensive. Therefore, the proposed algorithm might be improved by substituting this repair operator with another one that is more efficient and more straightforward. Eventually, due to the simplicity and extensibility of the TEPSOq algorithm, one may devise new heuristics and append them to the proposed algorithm.

Since the efficiency of any algorithm plays a critical and pivotal role when deciding whether it is suitable and adequate for an application, future work that assesses the efficiency of a range of algorithms is needed, with high emphasis on maintaining a consistent execution environment that can provide accurate and exact measurements.

6 Conclusion

To the best of our knowledge, no researcher has extended the Essential Particle Swarm Optimization queen. This paper presents the first study that embeds various features distinguishing the Tabu Search in a discrete binary PSO variant. In fact, our algorithm uses various concepts inspired from the Tabu Search in order to prevent trapping in local optima, to diversify the search, and to attempt to come near the optimal solution.

The performance of the TEPSOq algorithm is evaluated and compared with EPSOq, as well as with other existing PSO-based approaches and some other meta-heuristics, on a large number of benchmark Multidimensional Knapsack Problem instances. The experimental results support the claim that the proposed TEPSOq algorithm exhibits good optimization performance in terms of global search ability. It can also be concluded that the proposed algorithm is able to locate fitness values that are very close and even equal to the optimal solutions reported in the literature. These results can be useful for future researchers in the field of meta-heuristics who would like to evaluate their algorithms.

Seeing that this simple, extensible algorithm is still in an early stage of development, there exists the potential to increase its performance with better sets of parameters or the addition of new heuristics. It is hoped that these findings will inspire future researchers to analyze the proposed algorithm from different perspectives and to improve upon it further.

On a concluding note, we would like to say that the hybridization of algorithms is an interesting and promising field that can give us more insight into the behavior and the potential advantages and disadvantages of different meta-heuristics. The present study may motivate and help researchers working in the field of evolutionary algorithms to develop new hybrid models, or to apply the existing Tabu Essential Particle Swarm Optimization queen in a variety of application areas, especially when the values of the search space are discrete, as in decision making, lot sizing, the traveling salesman problem, scheduling and routing.


Acknowledgments The authors would like to thank the referees for their useful comments, which helped to improve the paper.

References

1. Abido AM (2002) Optimal power flow using particle swarm optimization. Electr Power Energy Syst 24(7):563–571
2. Brandstatter B, Baumgartner U (2002) Particle swarm optimization. Mass-spring system analogon. IEEE Trans Magn 38(2):997–1000
3. Salman A, Ahmad I, Al-Madani S (2002) Particle swarm optimization for task assignment problem. Microprocess Microsyst 26(8):363–371
4. Wachowiak M, Smolikova R, Zheng Y, Zurada J, Elmaghraby A (2004) An approach to multimodal biomedical image registration utilizing particle swarm optimization. IEEE Trans Evol Comput 8(3):289–301
5. Blackwell T, Bentley PJ (2002) Improvised music with swarms. In: Fogel DB, El-Sharkawi MA, Yao X, Greenwood G, Iba H, Marrow P, Shackleton M (eds) Proceedings of the 2002 Congress on Evolutionary Computation CEC 2002. IEEE Press, pp 1462–1467
6. Kennedy J, Eberhart RC, Shi Y (2001) Swarm intelligence. Morgan Kaufmann Publishers, San Francisco
7. Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp 39–43
8. Kennedy J, Eberhart RC (1995) Particle swarm optimisation. In: Proceedings of the IEEE International Conference, pp 942–948
9. Kennedy J, Eberhart RC (1997) A discrete binary version of the particle swarm algorithm. In: IEEE International Conference on Systems, Man, and Cybernetics, pp 4104–4109
10. Luh GC, Lin CY, Lin YS (2011) A binary particle swarm optimization for continuum structural topology optimization. Appl Soft Comput 11:2833–2844
11. Chen E, Li J, Liuc X (2011) In search of the essential binary discrete particle swarm. Appl Soft Comput 11(3):3260–3269
12. Chu P, Beasley J (1998) A genetic algorithm for the multidimensional Knapsack problem. J Heuristics 4:63–86
13. Martello S, Toth P (1990) Knapsack problems, algorithms and computer implementations. Wiley, New York

14. OR-Library, Beasley JE. http://people.brunel.ac.uk/~mastjjb/jeb/orlib/files/
15. Angelelli E, Speranza MG, Savelsbergh MWP (2007) Competitive analysis for dynamic multiperiod uncapacitated routing problems. Networks 49(4):308–317
16. Vasquez M, Vimont Y (2005) Improved results on the 0–1 multidimensional Knapsack problem. Eur J Oper Res 165:70–81
17. Weingartner HM, Ness DN (1967) Methods for the solution of the multidimensional 0/1 Knapsack problem. Oper Res 15(1):83–103
18. Fayard D, Plateau G (1982) An algorithm for the solution of the 0–1 Knapsack problem. Computing 28:269–287
19. Lorie JH, Savage LJ (1955) Three problems in capital rationing. J Bus 28:229–239
20. Meier H, Christofides N, Salkin G (2001) Capital budgeting under uncertainty: an integrated approach using contingent claims analysis and integer programming. Oper Res 49(2):196–206
21. Gilmore PC, Gomory RE (1966) The theory and computation of Knapsack functions. Oper Res 14(6):1045–1075
22. Shih W (1979) A branch and bound method for the multiconstraint zero–one Knapsack problem. J Oper Res Soc 39:369–378
23. Gavish B, Pirkul H (1982) Allocation of databases and processors in a distributed data processing. In: Akola DJ (ed) Management of Distributed Data Processing. North-Holland, pp 215–231
24. Vasquez M, Hao JK (2001) A logic-constrained Knapsack formulation and a Tabu algorithm for the daily photograph scheduling of an Earth observation satellite. Comput Optim Appl 20(2):137–157
25. Gavish B, Pirkul H (1985) Efficient algorithms for solving multiconstraint zero–one Knapsack problems to optimality. Math Program 31:78–105


26. Balas E, Martin CH (1980) Pivot and complement-A heuristic for 0–1 programming. Manag Sci26:86–96

27. Bronstein IN, Semendjajew KA (1991) Taschenbuch der Mathematik. B. G, Teubner, Leipzig28. Magazine MJ, Oguz D (1984) A heuristic algorithm for the multidimensional zero–one Knapsack

problem. Eur J Oper Res 16:319–32629. Martello S, Toth P (1990) Knapsack problems. Algorithms and computer implementations. Wiley,

New York30. Pirkul H (1987) A heuristic solution procedure for the multiconstrained zero–one Knapsack problem.

Nav Res Logist 34:161–17231. Volgenant A, Zoon JA (1990) An improved heuristic for multidimensional 0–1 Knapsack problems.

J Operat Res Soc 41:963–97032. Chu PC (1997) A genetic algorithm approach for combinatorial optimization problems. Ph.D. thesis,

The Management School, Imperial College of Science, London33. Chu PC, Beasley JE (1997) A genetic algorithm for the multidimensional Knapsack problem. Working

paper, The Management School, Imperial College of Science, London34. Hanafi S, Fréville A (1998) An efficient Tabu search approach for the 0–1 multidimensional Knapsack

problem. Eur J Oper Res 106(2):659–67535. Alonso CL, Caro F, Montana JL (2005) An evolutionary strategy for the multidimensional 0–1 Knap-

sack problem based on genetic computation of surrogate multipliers. In: Mira J, Alvarez JR (eds)IWINAC 2005. LNCS vol 3562, pp 63–73

36. Drexl A (1988) A simulated annealing approach to the multiconstraint zero–one Knapsack problem.Computing 40:1–8

37. Fidanova S (2005) Ant colony optimization for multiple Knapsack problem and model. In: Li BZ etal (eds) NAA 2004. LNCS vol 3401, pp 280–287

38. Li H, Jiao YC, Zhang L, Gu ZW (2006) Genetic algorithm based on the orthogonal design for multidi-mensional Knapsack problems. In: Jiao L et al (eds) ICNC 2006. Part I, LNCS vol 4221, pp 696–705

39. Zhou Y, Kuang Z, Wang J (2008) A chaotic neural network combined heuristic strategy for multidi-mensional Knapsack problem. In: Kang L et al (eds) ISICA 2008. LNCS vol 5370, pp 715–722

40. Angelelli E, Mansini R, Speranza MG (2010) Kernel search: a general heuristic for the multi-dimensional Knapsack problem. Comput Oper Res 37:2017–2026

41. Kong M, Tian P (2006) Apply the particle swarm optimization to the multidimensional Knapsackproblem. In: Rutkowski L et al (eds) ICAISC 2006. LNAI vol 4029, pp 1140–1149

42. Wang L, Wang X, Fu J (2008) A novel probability binary particle swarm optimization algorithm andits application. J Softw 3:28–35

43. Wan NF (2008) The particle swarm optimisation algorithm and the 0–1 Knapsack problem. MSc thesis,Nottingham Trent University, Nottingham

44. Chen WN, Zhang J, Chung HSH, Zhong WL, Wu WG, Shi Yh (2010) A novel set-based particle swarmoptimization method for discrete optimization problems. IEEE Trans Evol Comput 14:278–300

45. Banks A, Vincent J, Anyakoha C (2007) A review of particle swarm optimization. Part I: background and development. Nat Comput 6(4):467–484

46. Banks A, Vincent J, Anyakoha C (2007) A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization and indicative applications. Nat Comput 7(1):109–124

47. Shen Q, Jiang JH, Jiao CX, Shen GL, Yu RQ (2004) Modified particle swarm optimization algorithm for variable selection in MLR and PLS modeling. QSAR studies of antagonism of angiotensin II antagonists. Eur J Pharm Sci 22(2–3):145–152

48. Wang L, Wang X, Fu J, Zhen L (2008) A novel probability binary particle swarm optimization algorithm and its application. J Softw 3(9):28–35

49. Lee S, Soak S, Oh S, Pedrycz W, Jeon M (2008) Modified binary particle swarm optimization. Prog Nat Sci 18(9):1161–1166

50. Pan QK, Tasgetiren MF, Liang YC (2008) A discrete particle swarm optimization algorithm for the no-wait flowshop scheduling problem. Comput Oper Res 35(9):2807–2839

51. Angeline PJ (1998) Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Proceedings of the Evolutionary Programming Conference, San Diego, pp 601–610

52. Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization: an overview. Swarm Intell 1(1):33–57


53. Dorigo M, Stützle T (2004) Ant colony optimization. The MIT Press, Cambridge

54. Robinson J, Sinton S, Samii YR (2002) Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna. In: Proceedings of the IEEE International Symposium in Antennas and Propagation Society, pp 314–317

55. Talbi H, Batouche M (2004) Hybrid particle swarm with differential evolution for multimodal image registration. Proc IEEE Int Conf Ind Technol 3:1567–1573

56. Faigle U, Kern W (1992) Some convergence results for probabilistic Tabu search. ORSA J Comput 4:32–37

57. Glover F (1986) Future paths for integer programming and links to artificial intelligence. Comput Oper Res 13:533–549

58. Hansen P (1986) The steepest ascent mildest descent heuristic for combinatorial programming. Presented at the Congress on Numerical Methods in Combinatorial Optimization, Capri

59. Glover F (1989) Tabu search. Part I. ORSA J Comput 1:190–206

60. Friden C, Hertz A, de Werra D (1989) STABULUS: a technique for finding stable sets in large graphs with Tabu search. Computing 42:35–44

61. Glover F (1990) Tabu search. Part II. ORSA J Comput 2:4–32

62. de Werra D, Hertz A (1989) Tabu search techniques: a tutorial and an application to neural networks. OR Spektrum 11:131–141

63. Nanobe K, Ibaraki T (1998) A Tabu search approach to the constraint satisfaction problem as a general problem solver. Eur J Oper Res 106:599–623

64. http://clerc.maurice.free.fr/pso/

65. http://clerc.maurice.free.fr/pso/binary_pso/simpleBinaryPSO_C.zip

66. Leguizamon G, Michalewicz Z (1999) A new version of ant system for subset problems. Proc Congr Evol Comput 2:1459–1464

67. Alaya I, Solnon C, Ghédira K (2004) Ant algorithm for the multidimensional knapsack problem. In: Proceedings of the International Conference on Bio-Inspired Optimization Methods and Their Applications, pp 63–72

68. Puchinger J, Raidl GR, Pferschy U (2010) The multidimensional Knapsack problem: structure and algorithms. INFORMS J Comput 22:250–265

69. Raidl GR (1998) An improved genetic algorithm for the multiconstrained 0–1 knapsack problem. In: Proceedings of the 5th IEEE International Conference on Evolutionary Computation, pp 207–211

70. Gottlieb J (1999) On the effectivity of evolutionary algorithms for multidimensional knapsack problems. In: Proceedings of Artificial Evolution: Fourth European Conference, LNCS vol 1829, pp 22–37

71. Raidl GR, Gottlieb J (2005) Empirical analysis of locality, heritability and heuristic bias in evolutionary algorithms: a case study for the multidimensional knapsack problem. Evol Comput J 13:441–475
