Solving 0–1 knapsack problem by a novel global harmony search algorithm

Applied Soft Computing 11 (2011) 1556–1564
Contents lists available at ScienceDirect; journal homepage: www.elsevier.com/locate/asoc
doi:10.1016/j.asoc.2010.07.019

Review Article

Solving 0–1 knapsack problem by a novel global harmony search algorithm

Dexuan Zou a,∗, Liqun Gao a, Steven Li b, Jianhua Wu a
a School of Information Science and Engineering, Northeastern University, Shenyang, Liaoning 110004, PR China
b Division of Business, University of South Australia, GPO Box 2471, Adelaide, SA 5001, Australia
∗ Corresponding author. E-mail address: [email protected] (D. Zou).

Article history: Received 14 March 2009; received in revised form 22 September 2009; accepted 25 July 2010; available online 6 August 2010.

Keywords: Novel global harmony search algorithm; 0–1 knapsack problems; Position updating; Genetic mutation

Abstract

This paper proposes a novel global harmony search algorithm (NGHS) to solve 0–1 knapsack problems. The proposed algorithm includes two important operations: position updating and genetic mutation with a small probability. The former enables the worst harmony of the harmony memory to move rapidly toward the global best harmony in each iteration, and the latter effectively prevents the NGHS from being trapped in a local optimum. Computational experiments with a set of large-scale instances show that the NGHS can be an efficient alternative for solving 0–1 knapsack problems.

© 2010 Elsevier B.V. All rights reserved.

Contents

1. Introduction
2. Three harmony search algorithms
   2.1. Harmony search algorithm (HS)
   2.2. The improved harmony search algorithm (IHS)
   2.3. A novel global harmony search algorithm (NGHS)
3. Some preparation work for using the NGHS to solve 0–1 knapsack problems
   3.1. Constrained optimization
   3.2. Process for discrete variables
   3.3. Discrete genetic mutation
4. Experimental results and analysis
   4.1. The effect of pm on the performance of the NGHS
   4.2. Comparison among three harmony search algorithms on solving 0–1 knapsack problems
      4.2.1. The performance of three algorithms on solving 0–1 knapsack problems with small dimension sizes
      4.2.2. The performance of three algorithms on solving 0–1 knapsack problems with large dimension sizes
   4.3. The effect of step_i (adaptive step) on the performance of the NGHS
5. Conclusions
Acknowledgments
References

1. Introduction

The knapsack problem is one of the classical NP-hard problems, and it has been thoroughly studied in the last few decades. It offers many practical applications in many areas, such as project selection [1], resource distribution, investment decision-making and so on. Given N objects, where the j-th object has weight w_j and profit p_j, and a knapsack with a limited weight capacity C, the goal of the problem is to pack the knapsack so that the objects in it have the maximal total profit among all possible ways the knapsack can be packed. Mathematically, the 0–1 knapsack problem can be




described as follows:

Maximize f(x) = Σ_{j=1..N} p_j x_j

s.t.  Σ_{j=1..N} w_j x_j ≤ C,  x_j = 0 or 1, j = 1, 2, ..., N

The binary decision variables x_j are used to indicate whether item j is included in the knapsack or not. It may be assumed that all profits and weights are positive, and that all weights are smaller than the capacity C.
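As a concrete illustration (ours, not part of the original paper), the objective and constraint of Eq. (1) can be evaluated for a candidate bit-vector as follows; the instance data are those of test problem f4 from Table 1 (w = (2, 4, 6, 7), C = 11, p = (6, 10, 12, 13)):

```python
# Sketch: evaluating the objective and constraint of Eq. (1)
# for a candidate 0-1 vector x.
def knapsack_value(x, w, p, C):
    """Return (total profit, feasible?) for a 0-1 vector x."""
    weight = sum(wj * xj for wj, xj in zip(w, x))
    profit = sum(pj * xj for pj, xj in zip(p, x))
    return profit, weight <= C

# Test problem f4 from Table 1: optimum x* = (0, 1, 0, 1), f(x*) = 23.
w, p, C = [2, 4, 6, 7], [6, 10, 12, 13], 11
print(knapsack_value([0, 1, 0, 1], w, p, C))  # (23, True)
```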

In recent decades, many heuristic algorithms have been employed to solve 0–1 knapsack problems. Shi modified the parameters of the ant colony optimization (ACO) model to adapt it to 0–1 knapsack problems [2]; the improved ACO has a strong capability of escaping from the local optimum through the artificial interference of the tabu list. Liu and Liu proposed a schema-guiding evolutionary algorithm (SGEA) to solve 0–1 knapsack problems [3]. The SGEA contains two main improvements: first, it proposes a schema-modified operator to adjust the distribution of the population; second, it constructs an elite-schema space and utilizes the cluster-center schema to guide the direction of each individual's evolution. These two improvements not only improve the diversity of the population but also enhance the local and global search capability of the SGEA. Lin used a genetic algorithm to solve the knapsack problem with imprecise weights [4], investigating the possibility of using genetic algorithms to solve the fuzzy knapsack problem without defining membership functions for each imprecise weight coefficient. The approach proposed by Lin simulates a fuzzy number by distributing it over some partition points, and uses genetic algorithms to evolve the values at each partition point so that the final values represent the membership grade of a fuzzy number. Li proposed a binary particle swarm optimization based on a multi-mutation strategy (MMBPSO) to solve the knapsack problem [5]. The MMBPSO can effectively escape from local optima and avoid premature convergence thanks to its multi-mutation strategy; moreover, it uses a greedy transform algorithm and a penalty function method for constraint handling. Although many 0–1 knapsack problems have been solved successfully by these methods, research on them is still important, because some new and more difficult 0–1 knapsack problems hidden in the real world remain unsolved.

Many algorithms provide possible solutions for some 0–1 knapsack problems, but they may lose their efficiency on these problems due to their own disadvantages and limitations. For example, some recently proposed methods only solve 0–1 knapsack problems with very low dimension sizes and may be unable to solve 0–1 knapsack problems with high dimension sizes. The harmony search (HS) algorithm [6] is a recently developed heuristic algorithm inspired by the way musicians tune their instruments. The HS and its improved variants have been applied to many engineering optimization problems, but they have never been used to solve 0–1 knapsack problems. Furthermore, the HS cannot be directly applied to some difficult engineering optimization problems due to its poor convergence, so it needs to be improved significantly.

Given the above considerations, a novel global harmony search algorithm (NGHS) is proposed to solve 0–1 knapsack problems. The NGHS is inspired by the swarm intelligence of particle swarm optimization (PSO) [7]. Two important factors, the adaptive step and the trust region, are defined in the NGHS. Based on these two factors, a novel position updating equation is designed to make the worst solution of the harmony memory move toward the global best solution in each iteration. A genetic mutation operation with a low probability is carried out on the worst solution after position updating, as it can prevent the NGHS from being trapped in the local optimum. In a large number of experiments, the proposed algorithm has demonstrated promising performance on 0–1 knapsack problems, and it can find the required optima in some cases even when the problem to be solved is very complicated.

The remainder of the paper is organized as follows. In Section 2, three harmony search algorithms are introduced and their procedures are given; in particular, the novel global harmony search algorithm (NGHS) is described in detail. In Section 3, some preparation work for applying the NGHS to 0–1 knapsack problems is discussed. In Section 4, a series of experiments is carried out to test the optimization performance of the NGHS on knapsack problems. Conclusions and suggestions for further research are given in Section 5.

2. Three harmony search algorithms

This section mainly describes the application of the NGHS to knapsack problems. Before the NGHS is described, two other harmony search algorithms are introduced to aid understanding of the NGHS. First, a brief overview of the HS is provided. Second, the improved harmony search (IHS) algorithm is summarized. Finally, the modifications of the proposed NGHS algorithm for solving 0–1 knapsack problems are introduced.

2.1. Harmony search algorithm (HS)

Harmony search (HS) [6] is a recently developed algorithm based on natural musical performance processes that occur when a musician searches for a better state of harmony, such as during jazz improvisation. Jazz improvisation seeks to find a musically pleasing harmony (a perfect state) as determined by an aesthetic standard, just as the optimization process seeks to find a global solution (a perfect state) as determined by an objective function. In general, the HS algorithm works as follows:

• Step 1. Initialize the problem and algorithm parameters. The optimization problem is defined as Minimize f(x) subject to x_i^L ≤ x_i ≤ x_i^U (i = 1, 2, ..., N), where x_i^L and x_i^U are the lower and upper bounds of the decision variables. The HS algorithm parameters are also specified in this step: the harmony memory size (HMS), i.e., the number of solution vectors in the harmony memory; the harmony memory considering rate (HMCR); the bandwidth (bw); the pitch adjusting rate (PAR); and the number of improvisations (K), i.e., the stopping criterion.

• Step 2. Initialize the harmony memory. The initial harmony memory is randomly generated in the region [x_i^L, x_i^U] (i = 1, 2, ..., N) as follows: x_i^j = x_i^L + rand() × (x_i^U − x_i^L), j = 1, 2, ..., HMS, where rand() is a random number drawn from the uniform distribution on [0, 1].
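As a minimal sketch (ours, not the authors' Matlab code), Step 2 can be written as:

```python
import random

# Sketch of Step 2: fill the harmony memory with HMS random vectors,
# each component drawn uniformly from [xL_i, xU_i].
def init_harmony_memory(HMS, xL, xU):
    N = len(xL)
    return [[xL[i] + random.random() * (xU[i] - xL[i]) for i in range(N)]
            for _ in range(HMS)]

HM = init_harmony_memory(HMS=5, xL=[0.0] * 10, xU=[1.0] * 10)
print(len(HM), len(HM[0]))  # 5 10
```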

• Step 3. Improvise a new harmony. Generating a new harmony x′ is called improvisation, and the procedure works as follows:

for each i ∈ [1, N] do
  if rand() ≤ HMCR then
    x′_i = x_i^j, j ∈ {1, 2, ..., HMS}        % memory consideration
    if rand() ≤ PAR then
      x′_i = x′_i ± r × bw                    % pitch adjustment
    end
  else
    x′_i = x_i^L + rand() × (x_i^U − x_i^L)   % random selection
  end


end

Here, x′_i (i = 1, 2, ..., N) is the i-th component of x′, and x_i^j (j = 1, 2, ..., HMS) is the i-th component of the j-th candidate solution vector in HM. Both r and rand() are uniformly generated random numbers in [0, 1], and bw is an arbitrary distance bandwidth. In short, the new harmony vector x′ = (x′_1, x′_2, ..., x′_N) is determined by three rules: memory consideration, pitch adjustment and random selection.

• Step 4. Update the harmony memory. If the fitness of the improvised harmony vector x′ = (x′_1, x′_2, ..., x′_N) is better than that of the worst harmony, replace the worst harmony in the HM with x′.
• Step 5. Check the stopping criterion. If the stopping criterion (maximum number of iterations K) is satisfied, computation is terminated. Otherwise, Step 3 is repeated.
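Steps 3–5 can be sketched as a single improvisation-and-update routine. This is an illustrative Python rendering under our own assumptions (the ± r × bw adjustment is realized as a uniform draw from [−bw, bw]; minimization is assumed), not the authors' implementation:

```python
import random

# One HS iteration (Steps 3-5): improvise a new harmony, then replace the
# worst member of the harmony memory HM if the new harmony is better.
def hs_iteration(HM, f, HMCR, PAR, bw, xL, xU):
    x_new = []
    for i in range(len(xL)):
        if random.random() <= HMCR:
            xi = random.choice(HM)[i]                 # memory consideration
            if random.random() <= PAR:
                xi += random.uniform(-1.0, 1.0) * bw  # pitch adjustment
        else:
            xi = random.uniform(xL[i], xU[i])         # random selection
        x_new.append(min(max(xi, xL[i]), xU[i]))      # clip to the bounds
    worst = max(range(len(HM)), key=lambda j: f(HM[j]))
    if f(x_new) < f(HM[worst]):                       # minimization
        HM[worst] = x_new
```

Repeating `hs_iteration` K times realizes the stopping criterion of Step 5.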

2.2. The improved harmony search algorithm (IHS)

An improved harmony search algorithm (IHS) is proposed in [8], in which the key modifications concern PAR and bw. In the HS, PAR and bw are constants, whereas the IHS updates them dynamically: PAR(k) = PAR_min + ((PAR_max − PAR_min)/K) × k and bw(k) = bw_max × exp((ln(bw_min/bw_max)/K) × k), where k is the current iteration number and K is the maximum number of iterations. The IHS employs this method of generating new solution vectors to enhance the accuracy and convergence rate of the HS algorithm. The IHS algorithm has been successfully applied to various benchmark and standard engineering optimization problems, and numerical results reveal that it can find better solutions than the HS and other heuristic or deterministic methods.
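For concreteness, the two schedules can be computed as follows (a sketch; the parameter values in the usage lines are illustrative, not taken from [8]). PAR grows linearly from PAR_min to PAR_max, while bw decays exponentially from bw_max to bw_min:

```python
import math

# Sketch of the IHS parameter schedules: PAR(k) linear, bw(k) exponential,
# for iteration counter 0 <= k <= K.
def ihs_par(k, K, par_min, par_max):
    return par_min + (par_max - par_min) / K * k

def ihs_bw(k, K, bw_min, bw_max):
    return bw_max * math.exp(math.log(bw_min / bw_max) / K * k)

K = 1000
print(ihs_par(0, K, 0.2, 0.9), ihs_par(K, K, 0.2, 0.9))  # 0.2 0.9
print(round(ihs_bw(K, K, 0.001, 1.0), 6))                # 0.001
```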

2.3. A novel global harmony search algorithm (NGHS)

The HS and its improved variants have demonstrated successful performance on several engineering optimization problems. Thus, it is unsurprising that they can be used to solve 0–1 knapsack problems. However, the HS needs to be modified to solve some difficult 0–1 knapsack problems.

A prominent characteristic of particle swarm optimization (PSO) is that each individual is inclined to mimic its successful companions. Inspired by the swarm intelligence of the particle swarm, a novel global harmony search algorithm (NGHS) is proposed to solve 0–1 knapsack problems. The NGHS differs from the HS in the following three aspects.

(1) The harmony memory considering rate (HMCR) and pitch adjusting rate (PAR) are excluded from the NGHS, and a genetic mutation probability (pm) is included in the NGHS;

(2) The NGHS modifies the improvisation step of the HS, which now works as follows:

for each i ∈ [1, N] do
  step_i = |x_i^best − x_i^worst|             % calculating the adaptive step
  x′_i = x_i^best ± r × step_i                % position updating
  if rand() ≤ pm then
    x′_i = x_i^L + rand() × (x_i^U − x_i^L)   % genetic mutation
  end
end

Here, "best" and "worst" are the indexes of the global best harmony and the worst harmony in HM, respectively, and r and rand() are uniformly generated random numbers in [0, 1].
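The improvisation step above can be sketched in Python as follows (our illustration, with the ± r × step_i update realized as a uniform draw from [−step_i, step_i]):

```python
import random

# Sketch of the NGHS improvisation: position updating toward the global
# best harmony, followed by genetic mutation with probability pm.
def nghs_improvise(x_best, x_worst, pm, xL, xU):
    x_new = []
    for i in range(len(x_best)):
        step = abs(x_best[i] - x_worst[i])                 # adaptive step
        xi = x_best[i] + random.uniform(-1.0, 1.0) * step  # position updating
        if random.random() <= pm:                          # genetic mutation
            xi = xL[i] + random.random() * (xU[i] - xL[i])
        x_new.append(min(max(xi, xL[i]), xU[i]))           # clip to the bounds
    return x_new
```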

Fig. 1. The schematic diagram of position updating.

Fig. 1 illustrates the principle of position updating. step_i = |x_i^best − x_i^worst| is defined as the adaptive step of the i-th decision variable, and the region between P and R is defined as the trust region for the i-th decision variable. The trust region is actually a region near the global best harmony. In the early stage of optimization, all solution vectors are scattered in the solution space, so most adaptive steps are large and most trust regions are wide, which benefits the global search of the NGHS. In the late stage of optimization, all non-best solution vectors tend to move toward the global best solution vector, so most solution vectors are close to each other; thus, most adaptive steps are small and most trust regions are narrow, which benefits the local search of the NGHS. This design of step_i guarantees that the proposed algorithm has strong global search ability in the early stage of optimization and strong local search ability in the late stage; the dynamically adjusted step_i keeps a balance between the global search and the local search.

A genetic mutation operation with a small probability is carried out on the worst harmony of the harmony memory after position updating, as it can effectively prevent premature convergence of the NGHS.

(3) After improvisation, the NGHS replaces the worst harmony x^worst in HM with the new harmony x′ even if x′ is worse than x^worst.

3. Some preparation work for using the NGHS to solve 0–1 knapsack problems

3.1. Constrained optimization

There is a big difference between unconstrained and constrained optimization problems. For an unconstrained optimization problem, the global best solution is simply the one with the minimum objective function value. For a constrained optimization problem, however, the global best solution is hard to determine and measure, because it is difficult to find a balance between the constraints and the objective function value. A penalty function method is used for handling the constrained 0–1 knapsack problems in this paper; it penalizes infeasible solutions based on their distance from the feasible region. It is well known that the maximization of f(x) can be transformed into the minimization of −f(x). Thus, according to Eq. (1), the corresponding penalty function is defined as Min F(x) = −f(x) + λ × max(0, g), where g = Σ_{j=1..N} w_j x_j − C, and λ is the penalty coefficient, set to 10^20 in this paper.
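The transform just described can be sketched as follows (our illustration; λ = 10^20 as in the text, and the instance is test problem f4 from Table 1):

```python
# Sketch of the penalty-function transform: maximizing f(x) becomes
# minimizing F(x) = -f(x) + lam * max(0, g(x)), g(x) = sum_j w_j*x_j - C.
def penalized_objective(x, w, p, C, lam=1e20):
    f = sum(pj * xj for pj, xj in zip(p, x))
    g = sum(wj * xj for wj, xj in zip(w, x)) - C
    return -f + lam * max(0.0, g)

# Test problem f4 from Table 1: a feasible optimum scores -23, while an
# overweight packing is pushed far above any feasible score.
w, p, C = [2, 4, 6, 7], [6, 10, 12, 13], 11
print(penalized_objective([0, 1, 0, 1], w, p, C))      # -23.0
print(penalized_objective([1, 1, 1, 1], w, p, C) > 0)  # True
```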

3.2. Process for discrete variables

Any variable adjusted by the position updating equation is a realnumber, and the most direct processing method is to replace it withthe nearest integer.
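A minimal sketch of this discretization (together with the discrete genetic mutation x′_i = |x′_i − 1| introduced in Section 3.3, which simply flips a bit):

```python
# Sketch: round a real-valued component to the nearest integer and clip it
# to {0, 1}; the discrete genetic mutation |x - 1| then flips the bit.
def discretize(xi):
    return min(max(int(round(xi)), 0), 1)

def mutate_bit(bit):
    return abs(bit - 1)

print(discretize(0.7), discretize(0.2), discretize(1.6))  # 1 0 1
print(mutate_bit(1), mutate_bit(0))                       # 0 1
```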


Table 1. The dimension and parameters of ten test problems.

f1  (Dim 10): w = (95, 4, 60, 32, 23, 72, 80, 62, 65, 46), C = 269, p = (55, 10, 47, 5, 4, 50, 8, 61, 85, 87)
f2  (Dim 20): w = (92, 4, 43, 83, 84, 68, 92, 82, 6, 44, 32, 18, 56, 83, 25, 96, 70, 48, 14, 58), C = 878, p = (44, 46, 90, 72, 91, 40, 75, 35, 8, 54, 78, 40, 77, 15, 61, 17, 75, 29, 75, 63)
f3  (Dim 4): w = (6, 5, 9, 7), C = 20, p = (9, 11, 13, 15)
f4  (Dim 4): w = (2, 4, 6, 7), C = 11, p = (6, 10, 12, 13)
f5  (Dim 15): w = (56.358531, 80.874050, 47.987304, 89.596240, 74.660482, 85.894345, 51.353496, 1.498459, 36.445204, 16.589862, 44.569231, 0.466933, 37.788018, 57.118442, 60.716575), C = 375, p = (0.125126, 19.330424, 58.500931, 35.029145, 82.284005, 17.410810, 71.050142, 30.399487, 9.140294, 14.731285, 98.852504, 11.908322, 0.891140, 53.166295, 60.176397)
f6  (Dim 10): w = (30, 25, 20, 18, 17, 11, 5, 2, 1, 1), C = 60, p = (20, 18, 17, 15, 15, 10, 5, 3, 1, 1)
f7  (Dim 7): w = (31, 10, 20, 19, 4, 3, 6), C = 50, p = (70, 20, 39, 37, 7, 5, 10)
f8  (Dim 23): w = (983, 982, 981, 980, 979, 978, 488, 976, 972, 486, 486, 972, 972, 485, 485, 969, 966, 483, 964, 963, 961, 958, 959), C = 10000, p = (981, 980, 979, 978, 977, 976, 487, 974, 970, 485, 485, 970, 970, 484, 484, 976, 974, 482, 962, 961, 959, 958, 857)
f9  (Dim 5): w = (15, 20, 17, 8, 31), C = 80, p = (33, 24, 36, 37, 12)
f10 (Dim 20): w = (84, 83, 43, 4, 44, 6, 82, 92, 25, 83, 56, 18, 58, 14, 48, 70, 96, 32, 68, 92), C = 879, p = (91, 72, 90, 46, 55, 8, 35, 75, 61, 15, 77, 40, 63, 75, 29, 75, 17, 78, 40, 44)

Table 2. The detailed information of the optimal solutions.

f     Opt. solution x*                                  Opt. value f(x*)   Constraint g(x*)
f1    (0,1,1,1,0,0,0,1,1,1)                             295                0
f2    (1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,0,1,0,1,1)         1024               −7
f3    (1,1,0,1)                                         35                 −2
f4    (0,1,0,1)                                         23                 0
f5    (0,0,1,0,1,0,1,1,0,1,1,1,0,1,1)                   481.0694           −20.0392
f6    (0,0,1,0,1,1,1,1,0,0)                             50                 0
f7    (1,0,0,1,0,0,0)                                   107                0
f8    (1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,1,1,0,0,0,0,0,0)   9767               −232
f9    (1,1,1,1,0)                                       130                −20
f10   (1,1,1,1,1,1,1,1,1,0,1,1,1,1,0,1,0,1,1,1)         1025               −8

3.3. Discrete genetic mutation

After position updating and integer transforming, the genetic mutation operator can be modified as x′_i = |x′_i − 1|.

4. Experimental results and analysis

In this section, the performance of the NGHS algorithm is extensively investigated through a large number of experimental studies. Eighteen 0–1 knapsack problems are considered to verify the validity of the NGHS. All computational experiments are conducted with Matlab 7.0.

4.1. The effect of pm on the performance of the NGHS

Ten test problems are used to study the effect of pm on the performance of the NGHS; they are listed in Table 1.

Shi proposed an improved ant colony algorithm to solve two 0–1 knapsack problems (test problems 1 and 2) in [2]. Unfortunately, the optimal solutions of the two problems are not given in that paper. The optimal solution of test problem 1 obtained by the NGHS is x* = (0, 1, 1, 1, 0, 0, 0, 1, 1, 1), with f(x*) = 295. The optimal solution of test problem 2 obtained by the NGHS is x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1), with f(x*) = 1024.

An and Fu proposed a sequential combination tree algorithm to solve test problem 3 in [9]. The optimal solution found by this method is x* = (1, 1, 0, 1), with f(x*) = 35. This method can easily find the required solutions of 0–1 knapsack problems with small dimension sizes, but it does not have an obvious advantage to

solve 0–1 knapsack problems with large dimension sizes comparedto the other algorithms.

You used a greedy-policy-based algorithm to solve test problem 4 in [10]. This method introduces value density and modifies the greedy policy. The optimal solution found by this method is x* = (0, 1, 0, 1) and f4(x*) = 23.

Yoshizawa and Hashimoto used information about the search-space landscape to search for the optimum of test problem 5 in [11]. The landscape is the distribution of the solution set as function values over the search space of the problem. The optimum found by this method is x* = (0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1) and f5(x*) = 481.0694.

Fayard and Plateau employed a method derived from the "shrinking boundary method" to solve test problem 6 in [12]. The optimal solution found by this method is x* = (0, 0, 1, 0, 1, 1, 1, 1, 0, 0) and f6(x*) = 50.

Zhao proposed a nonlinear dimensionality reduction method to solve test problems 7 and 8 in [13]. The optimal solution of test problem 7 found by this method is x* = (1, 0, 0, 1, 0, 0, 0) and f7(x*) = 107. The optimal solution of test problem 8 found by this method is x* = (1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0) and f8(x*) = 9749. The optimal solution of test problem 8 obtained by the NGHS is better than that of nonlinear dimensionality reduction (see Table 2).

Test problem 9 is from [14], in which a DNA algorithm is proposed to solve 0–1 knapsack problems. The optimal solution found by this method is x* = (1, 1, 1, 1, 0) and f9(x*) = 130.

Test problem 10 is from [15], in which three algorithms are used to solve 0–1 knapsack problems. The optimal solution found in that paper is x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1) and f10(x*) = 1025.

The detailed information of the ten test problems above is given in Table 2.

In this subsection, the effect of the genetic mutation probability pm on the performance of the NGHS is investigated. For the ten 0–1 knapsack problems above, the harmony memory size HMS is set to 5, and the maximum number of iterations K is set to 10000. Fifty experiments are carried out in each case, and Table 3 gives the optimization success rate of the NGHS using different values of pm.

The best results were obtained when pm = 0.15 for all test prob-lems. In fact, the mutation probability with 0.1 ≤ pm ≤ 0.9 is moresuitable for f3, f4, f7 and f9 which shows a strong adaptivity of pm toproblems with very low dimension sizes. In addition, the adaptivityof pm to the problems with higher dimension sizes may decreasemore or less. This seems to imply that the adaptivity of geneticmutation probability pm to 0–1 knapsack problems will decrease

when the dimension sizes of problems increase. In other words,the complexity of 0–1 knapsack problem mainly depends on itsdimension size N. The NGHS should adapt itself to 0–1 knapsackproblems with different dimension sizes by dynamically adjusting
Page 5: Solving 0–1 knapsack problem by a novel global harmony search algorithm

1560 D. Zou et al. / Applied Soft Computing 11 (2011) 1556–1564

Table 3
The effect of pm on the performance of the NGHS (optimization success rate over 50 runs).

f     pm
      0     0.05   0.1    0.15   0.2    0.25   0.3    0.4    0.5    0.6    0.7    0.8    0.9    1
f1    2%    94%    100%   100%   100%   100%   100%   100%   100%   100%   78%    54%    34%    22%
f2    0%    100%   100%   100%   100%   76%    50%    12%    0%     0%     0%     0%     0%     0%
f3    32%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   90%
f4    40%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%
f5    2%    98%    100%   100%   100%   100%   100%   80%    20%    4%     4%     0%     0%     0%
f6    2%    66%    100%   100%   100%   100%   100%   100%   100%   100%   100%   78%    60%    30%
f7    4%    34%    96%    100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%
f8    0%    100%   100%   100%   88%    34%    18%    0%     0%     0%     0%     0%     0%     0%
f9    18%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%   100%
f10   0%    100%   100%   100%   100%   80%    54%    8%     0%     0%     0%     0%     0%     0%

In general, using a small value of pm is beneficial to the convergence and stability of the NGHS. The performance of the NGHS can be further understood and analyzed from Table 4.

Table 4
The optimization results of ten 0–1 knapsack problems (pm = 2/N, HMS = 5, K = 10000).

Fun   Dim  SR    Time (s)  Min.iter  Max.iter  Mean.iter  Median.iter  Std.iter
f1    10   100%  0.0093    13        1065      263.14     220.5        218.50
f2    20   100%  0.0293    49        2143      754.06     590.5        531.49
f3    4    100%  0.0005    1         50        11.1       7.5          12.01
f4    4    100%  0.0006    1         69        12.56      5            16.97
f5    15   100%  0.0210    63        3755      578.56     389          609.11
f6    10   100%  0.0052    6         956       234.66     187.5        187.86
f7    7    100%  0.0087    1         1494      325.46     216.5        328.34
f8    23   100%  0.0617    215       5781      1727       1491.5       1075.10
f9    5    100%  0.0023    1         161       28.72      16.5         34.28
f10   20   100%  0.0307    43        3323      831.38     784.5        651.24

Here, "Fun" denotes the function and "Dim" its dimension size; "SR" is the success rate; "Time" is the average time to reach a discrete global minimizer over 50 runs of the proposed algorithm; "Min.iter", "Max.iter", "Mean.iter" and "Median.iter" are, respectively, the minimal, maximal, average and median numbers of iterations needed to reach a discrete global minimizer over those 50 runs; and "Std.iter" is the corresponding standard deviation. As can be seen in Table 4, the NGHS needs little "Time" to solve 0–1 knapsack problems, and the maximal "Time" is smaller than 0.7 s. In addition, the "Mean.iter" of most problems is much smaller than the maximum number of iterations (K = 10000), which shows that the NGHS has strong convergence. "SR" is equal to 100% for every problem, which indicates the high efficiency of the NGHS in solving 0–1 knapsack problems with small dimension sizes.

It should be emphasized that pm should be neither 0 nor 1: pm = 0 means pure position updating, which leads the NGHS into a local optimum, while pm = 1 means pure mutation, which greatly interferes with the convergence of the NGHS. To further illustrate the effect of pm on the performance of the NGHS, function f1 is used to show the mutation behavior over consecutive generations. Harmony memory size HMS is set to 5, and the maximum number of iterations K is set to 5000. The states of the HM in different iterations of the NGHS algorithm are shown in Tables 5–7, where c = max(0, g) is the penalty term and f(x) is the objective function value.

The best solution found by the NGHS with pm = 0 is x* = (0, 0, 1, 0, 1, 1, 0, 1, 0, 1) with f(x*) = 249 (see Table 5). This solution is strictly feasible, but it is far from the global optimum. In addition, the NGHS reached premature convergence after 500 iterations because of the strong convergence of the position updating equation and the absence of mutation (pm = 0). In general, the mutation operation is very important, indeed indispensable, for improving the space-exploration capability of the NGHS.

The best solution found by the NGHS with pm = 1 is x* = (0, 0, 1, 0, 0, 0, 0, 1, 1, 1) with f(x*) = 280 (see Table 6), which is a rather poor solution. In addition, all solutions in the HM are dispersed and unorganized because of the blind mutation operation. In general, mutation with a very large probability is harmful to the convergence of the NGHS.

As can be seen in Table 7, the optimal solution is obtained by the NGHS when pm = 2/N. In fact, the optimal solution is reached by the 500th iteration at the latest, which is much smaller than the maximum number of iterations. The fewer iterations show that the NGHS with pm = 2/N finds optimal solutions of 0–1 knapsack problems more efficiently than the two extreme settings above. In general, a suitable mutation operation can increase the diversity of the candidate solutions and improve the space-exploration capability of the NGHS.

4.2. Comparison among three harmony search algorithms on solving 0–1 knapsack problems

In this paper, eighteen 0–1 knapsack problems are considered to compare the performance of three harmony search algorithms: the HS, the IHS and the NGHS. The parameter settings of the three algorithms are as follows. For the HS algorithm: harmony memory size HMS = 5, harmony memory consideration rate HMCR = 0.9, pitch adjusting rate PAR = 0.3, and bandwidth bw = xU − xL. For the IHS algorithm: HMS = 5, HMCR = 0.9, minimum adjusting rate PARmin = 0.01, maximum adjusting rate PARmax = 0.99, minimum bandwidth bwmin = 0.1 × (xU − xL), and maximum bandwidth bwmax = xU − xL. For the NGHS algorithm: HMS = 5, and genetic mutation probability pm = 2/N.
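Putting the pieces above together, the core NGHS move can be sketched roughly as follows. This is a simplified binary adaptation written for illustration only (the function name, the uniform sampling and the rounding rule are our assumptions, not the authors' exact implementation): a new harmony is sampled between the worst harmony and its mirror image 2·xbest − xworst, so the step implicitly adapts to |xbest,i − xworst,i|; each component is rounded back to {0, 1}; genetic mutation flips a component with small probability pm; and the result replaces the worst harmony.

```python
import random

def nghs_step(hm, fitness, pm):
    """One illustrative NGHS iteration on a binary harmony memory.

    hm      : list of 0/1 lists (the harmony memory)
    fitness : maps a harmony to its (penalized) objective value, higher is better
    pm      : genetic mutation probability, e.g. pm = 2/N
    """
    scores = [fitness(h) for h in hm]
    best = hm[scores.index(max(scores))]          # global best harmony
    worst_idx = scores.index(min(scores))
    worst = hm[worst_idx]

    new = []
    for b, w in zip(best, worst):
        lo, hi = sorted((w, 2 * b - w))           # region between worst and its mirror
        x = lo + random.random() * (hi - lo)      # position updating toward the best
        bit = 1 if x >= 0.5 else 0                # round back to {0, 1}
        if random.random() < pm:                  # genetic mutation with small probability
            bit = random.randint(0, 1)
        new.append(bit)
    hm[worst_idx] = new                           # new harmony always replaces the worst
    return hm
```

With pm = 0 the loop degenerates to pure position updating (the premature convergence seen in Table 5), while pm = 1 degenerates to a blind random walk (Table 6).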


Table 5
HM state in different iterations for the function f1 using the NGHS algorithm (pm = 0).

Rank  x1 x2 x3 x4 x5 x6 x7 x8 x9 x10   f(x)  c

Initial HM
1     0  0  1  0  1  0  1  1  1  0     205   21
2     1  1  1  0  0  1  0  0  0  1     249   8
3     1  0  0  0  0  0  0  0  0  1     142   0
4     0  0  1  1  1  1  0  1  0  0     167   0
5     1  0  1  0  0  0  1  0  1  0     195   31

Subsequent HM
1     0  0  1  0  1  0  1  1  1  0     205   21
2     1  1  1  0  0  1  0  0  0  1     249   8
3     1  0  0  0  0  0  0  0  0  1     142   0
4     0  0  1  1  1  1  0  1  0  0     167   0
5     0  0  1  1  0  0  0  1  0  0     113   0

HM after 10 iterations
1     0  0  1  0  1  1  0  1  0  1     249   0
2     0  0  1  0  1  1  0  1  0  1     249   0
3     0  0  1  0  1  1  0  1  0  1     249   0
4     0  0  1  0  1  1  0  1  0  1     249   0
5     0  0  1  0  1  0  0  1  0  1     199   0

HM after 500 iterations
1     0  0  1  0  1  1  0  1  0  1     249   0
2     0  0  1  0  1  1  0  1  0  1     249   0
3     0  0  1  0  1  1  0  1  0  1     249   0
4     0  0  1  0  1  1  0  1  0  1     249   0
5     0  0  1  0  1  1  0  1  0  1     249   0

HM after 5000 iterations
1     0  0  1  0  1  1  0  1  0  1     249   0
2     0  0  1  0  1  1  0  1  0  1     249   0
3     0  0  1  0  1  1  0  1  0  1     249   0
4     0  0  1  0  1  1  0  1  0  1     249   0
5     0  0  1  0  1  1  0  1  0  1     249   0

Table 6
HM state in different iterations for the function f1 using the NGHS algorithm (pm = 1).

Rank  x1 x2 x3 x4 x5 x6 x7 x8 x9 x10   f(x)  c

Initial HM
1     1  1  0  0  0  1  1  0  0  0     123   0
2     1  1  0  1  0  1  0  0  0  1     207   0
3     1  1  1  1  1  1  1  0  0  0     179   97
4     1  1  1  0  1  0  0  0  0  1     203   0
5     0  1  1  0  1  0  1  1  0  0     130   0

Subsequent HM
1     1  1  0  0  0  1  1  0  0  0     123   0
2     1  1  0  1  0  1  0  0  0  1     207   0
3     0  0  1  0  1  0  1  1  1  0     205   21
4     1  1  1  0  1  0  0  0  0  1     203   0
5     0  1  1  0  1  0  1  1  0  0     130   0

HM after 10 iterations
1     0  0  1  0  1  0  0  1  1  0     197   0
2     1  1  0  1  0  1  0  0  0  1     207   0
3     0  0  1  0  0  0  1  1  1  0     201   0
4     1  1  1  0  1  0  0  0  0  1     203   0
5     0  0  1  0  1  0  1  1  1  1     292   67

HM after 500 iterations
1     0  0  1  0  0  0  0  1  1  1     280   0
2     1  1  0  1  0  1  0  0  0  1     207   0
3     0  1  0  1  1  1  0  1  1  0     215   0
4     1  1  0  0  1  0  1  0  0  0     77    0
5     0  1  1  0  0  1  0  0  1  1     279   0

HM after 5000 iterations
1     0  0  1  0  0  0  0  1  1  1     280   0
2     1  1  0  1  1  0  0  0  1  1     246   0
3     1  1  0  1  0  0  0  0  1  1     242   0
4     1  1  0  1  1  0  1  1  1  1     315   138
5     0  1  1  0  0  1  0  0  1  1     279   0

4.2.1. The performance of three algorithms on solving 0–1 knapsack problems with small dimension sizes

The ten problems in Section 4.1 are selected to test the performance of the three algorithms, and the maximum number of iterations is accordingly set to 10000. 50 independent runs were made for each of the three algorithms, and the results obtained by the three harmony search algorithms are presented in Table 8.

As can be seen in Table 8, the NGHS has the best performance, and it can easily find the optimal solution in all cases; the HS can find the optimal solutions with SR = 100% in most cases except f7; the IHS has the worst performance, and it cannot guarantee finding
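In Tables 5–7 each harmony is scored by its objective value f(x) together with the penalty c = max(0, g), where g is the amount by which the selected weights exceed the knapsack capacity. A minimal sketch of this evaluation follows; the 5-item data below are made-up illustrative numbers, not the actual instance f1.

```python
def evaluate(x, profits, weights, capacity):
    """Return (f, c): total profit of the selection x, and the constraint
    violation c = max(0, g), where g = total selected weight - capacity."""
    f = sum(p for p, xi in zip(profits, x) if xi)
    g = sum(w for w, xi in zip(weights, x) if xi) - capacity
    return f, max(0, g)

# Hypothetical 5-item instance (not the paper's data)
profits = [55, 10, 47, 5, 4]
weights = [95, 4, 60, 32, 23]
f, c = evaluate([1, 0, 1, 0, 0], profits, weights, capacity=160)
# f = 102 and c = 0 here; c > 0 would flag an infeasible harmony
```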


the optimal solutions with SR = 100% for f1, f3, f5, f6 and f7. In short, the three algorithms can successfully find the optimal solutions of the ten 0–1 knapsack problems with small dimension sizes, and the NGHS has a better performance than the other two algorithms on solving the ten problems.

Table 7
HM state in different iterations for the function f1 using the NGHS algorithm (pm = 2/N).

Rank  x1 x2 x3 x4 x5 x6 x7 x8 x9 x10   f(x)  c

Initial HM
1     0  1  0  1  0  0  0  1  1  1     248   0
2     1  0  0  1  1  1  1  0  0  0     122   33
3     0  0  0  1  1  0  1  0  0  1     104   0
4     1  1  1  1  0  1  0  1  0  0     228   56
5     0  1  1  0  1  1  0  1  1  0     257   17

Subsequent HM
1     0  1  0  1  0  0  0  1  1  1     248   0
2     1  0  0  1  1  1  1  0  0  0     122   33
3     0  0  0  1  1  0  1  0  0  1     104   0
4     1  0  0  0  0  1  0  1  1  1     338   71
5     0  1  1  0  1  1  0  1  1  0     257   17

HM after 10 iterations
1     0  1  0  1  0  0  0  1  1  1     248   0
2     0  0  0  0  1  0  0  0  1  1     176   0
3     0  1  1  0  1  0  0  1  1  1     294   0
4     0  1  0  1  0  0  0  1  1  1     248   0
5     0  1  1  0  1  0  0  1  1  1     294   0

HM after 500 iterations
1     0  1  1  0  1  0  0  1  1  1     294   0
2     0  1  1  1  0  0  0  1  1  1     295   0
3     0  1  1  1  0  0  0  1  1  1     295   0
4     0  1  1  1  0  0  0  1  1  1     295   0
5     0  1  1  1  0  0  0  1  1  1     295   0

HM after 5000 iterations
1     0  0  1  1  1  0  0  1  1  0     202   0
2     0  1  1  1  0  0  0  1  1  1     295   0
3     0  1  1  1  0  0  0  1  1  1     295   0
4     0  1  1  1  0  0  0  1  1  1     295   0
5     0  1  1  1  0  0  0  1  1  1     295   0

Table 8
Comparison among three algorithms on solving 0–1 knapsack problems with small dimension sizes.

Fun   Opt.value  Algorithm  Best     Worst    Mean     Median   Sta.dev
f1    295        HS         295      295      295      295      0
                 IHS        295      288      294.78   295      1.06
                 NGHS       295      295      295      295      0
f2    1024       HS         1024     1024     1024     1024     0
                 IHS        1024     1024     1024     1024     0
                 NGHS       1024     1024     1024     1024     0
f3    35         HS         35       35       35       35       0
                 IHS        35       28       34.58    35       1.68
                 NGHS       35       35       35       35       0
f4    23         HS         23       23       23       23       0
                 IHS        23       23       23       23       0
                 NGHS       23       23       23       23       0
f5    481.07     HS         481.07   481.07   481.07   481.07   0.00
                 IHS        481.07   437.93   478.48   481.07   10.35
                 NGHS       481.07   481.07   481.07   481.07   0.00
f6    50         HS         50       50       50       50       0
                 IHS        50       44       49.2     50       1.85
                 NGHS       50       50       50       50       0
f7    107        HS         107      105      106.8    107      0.61
                 IHS        107      93       103.98   105      4.48
                 NGHS       107      107      107      107      0
f8    9767       HS         9767     9767     9767     9767     0
                 IHS        9767     9767     9767     9767     0
                 NGHS       9767     9767     9767     9767     0
f9    130        HS         130      130      130      130      0
                 IHS        130      130      130      130      0
                 NGHS       130      130      130      130      0
f10   1025       HS         1025     1025     1025     1025     0
                 IHS        1025     1025     1025     1025     0
                 NGHS       1025     1025     1025     1025     0

4.2.2. The performance of three algorithms on solving 0–1 knapsack problems with large dimension sizes

Eight 0–1 knapsack problems with large scales are devised to test and compare the performance of the three harmony search algorithms. N is set to 100, 200, 300, 500, 800, 1000, 1200 and 1500,


Table 9
Comparison among three algorithms on solving 0–1 knapsack problems with large dimension sizes.

Fun   Dim   Iterations  Algorithm  Best     Worst    Mean      Median    Sta.dev
f11   100   15000       HS         7381     7271     7331.4    7329.5    36.94
                        IHS        7527     7466     7497.1    7497      17.68
                        NGHS       7531     7452     7497.2    7498      16.33
f12   200   15000       HS         10191    9935     10052     10052     65.10
                        IHS        10589    10444    10522     10529     39.97
                        NGHS       10749    10610    10677     10671     37.41
f13   300   20000       HS         12777    12378    12566     12580     97.52
                        IHS        13506    13271    13368     13382     72.74
                        NGHS       13848    13618    13724.4   13721.5   57.08
f14   500   20000       HS         15196    14492    14929     14939     195.33
                        IHS        17059    16399    16697     16665     170.91
                        NGHS       18293    17955    18110.6   18101.5   84.33
f15   800   30000       HS         33293    32965    33131.3   33148     107.05
                        IHS        35102    34664    34935.3   34918     106.84
                        NGHS       37582    37170    37374.05  37375.5   116.19
f16   1000  30000       HS         52804    50918    51554.25  51501.5   467.29
                        IHS        59938    58685    59287.85  59277.5   342.15
                        NGHS       65025    64600    64779.55  64777.5   101.13
f17   1200  40000       HS         61773    60351    61055.65  61061     416.70
                        IHS        71184    69286    70331.3   70295     564.77
                        NGHS       86399    86105    86220.90  86227.5   75.00
f18   1500  40000       HS         75321    73418    74197.35  74255.5   417.52
                        IHS        86847    84503    85322.45  85276.5   642.88
                        NGHS       102143   101557   101911.2  101911    160.85

respectively, in order to test the three algorithms with different problem scales. For each N, the values of weight and profit are generated randomly: each weight wj (j = 1, 2, . . ., N) lies between 5 and 20, and each profit pj (j = 1, 2, . . ., N) lies between 50 and 100. The weight capacity C is accordingly set to 1100, 1500, 1700, 2000, 5000, 10000, 14000 and 16000, respectively, for the eight values of N. For each N, once the randomly generated parameters are determined, the same parameters are used to test the performance of the three harmony search algorithms over 20 independent runs. The results obtained by the three harmony search algorithms are presented in Table 9.

As can be seen in Table 9, the NGHS has demonstrated an overwhelming advantage over the other two algorithms on solving 0–1 knapsack problems with large scales. The best solutions found by the NGHS are all better than those obtained by the other two algorithms. Furthermore, except for f11, the worst solutions found by the NGHS are even better than the best solutions obtained by the other two algorithms. The HS performs better than the IHS on the ten problems in Section 4.1, but it loses this advantage on the above eight large-scale 0–1 knapsack problems: here the IHS outperforms the HS, finding a better solution in every case, and the worst solution found by the IHS is even better than the best solution found by the HS in each case. In short, the NGHS has demonstrated better performance on solving 0–1 knapsack problems than the other two algorithms, and it thus provides an efficient alternative for 0–1 knapsack problems.

4.3. The effect of stepi (adaptive step) on the performance of the NGHS

The above eight test problems are used to study the effect of stepi on the performance of the NGHS. For f11–f18, the best solutions, with the average solutions in parentheses, are presented in Table 10.

Table 10
The effect of stepi on the performance of the NGHS. The first column uses the adaptive step stepi = |xbest,i − xworst,i|; the remaining columns use the constant steps 0, 0.2, 0.4, 0.6, 0.8 and 1. Entries are best (average) solutions.

f     |xbest,i − xworst,i|  0                  0.2                0.4                0.6               0.8                1
f11   7531 (7497.2)         7505 (7481.25)     7505 (7482.1)      7505 (7462.4)      7363 (7300.15)    6867 (6609.2)      6430 (6278)
f12   10749 (10677)         10737 (10676.2)    10734 (2.4e+20)    10742 (10666.55)   10111 (9980.8)    9680 (9575.4)      9562 (9416.6)
f13   13848 (13724.4)       13797 (3.65e+20)   13818 (13689.05)   13804 (13720.95)   12642 (12510.85)  12044 (11907.15)   11837 (11713.05)
f14   18293 (18110.6)       18201 (1.8e+20)    18185 (4.0e+20)    18223 (18041)      15097 (14741.75)  13818 (8.2e+20)    12452 (1.2695e+22)
f15   37582 (37374.05)      37523 (1.0e+19)    37527 (2.7e+20)    37556 (37333.4)    33347 (32989.5)   32355 (31938.35)   31779 (31523.35)
f16   65025 (64779.55)      64913 (1.45e+20)   64918 (64734.95)   64983 (64749.25)   51650 (50855.6)   47177 (46538.75)   45546 (45179.8)
f17   86399 (86220.9)       86330 (86207.6)    86385 (86235.9)    86384 (1.05e+20)   60974 (60359.05)  56594 (55316.25)   54464 (53777.4)
f18   102143 (101911.2)     102109 (101881.7)  102127 (101985.6)  102129 (101771)    74671 (73824.9)   69355 (67894.9)    67686 (66585.7)

As can be seen in Table 10, the NGHS with the dynamically adaptive step outperforms all the constant-step variants and finds better solutions than any of them. With a constant step, the NGHS cannot even find a feasible solution in some cases, which indicates the poor convergence and stability of the NGHS with a constant step.
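The random instance generation described in Section 4.2.2 can be sketched as follows. This assumes integer-valued uniform draws, which the text does not state explicitly; the function and variable names are ours, not the authors'.

```python
import random

# Capacities given in Section 4.2.2 for N = 100, 200, 300, 500, 800, 1000, 1200, 1500
CAPACITIES = {100: 1100, 200: 1500, 300: 1700, 500: 2000,
              800: 5000, 1000: 10000, 1200: 14000, 1500: 16000}

def make_instance(n, seed=None):
    """Random large-scale 0-1 knapsack instance: weights drawn from [5, 20],
    profits from [50, 100], capacity fixed per dimension size n."""
    rng = random.Random(seed)
    weights = [rng.randint(5, 20) for _ in range(n)]
    profits = [rng.randint(50, 100) for _ in range(n)]
    return weights, profits, CAPACITIES[n]
```

Fixing the seed mirrors the paper's protocol of reusing the same generated parameters across all 20 independent runs of each algorithm.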

5. Conclusions

In this paper, the performance of the NGHS has been investigated extensively through a large number of experimental studies. The experimental results show that the NGHS exhibits strong convergence and stability on 0–1 knapsack problems thanks to its position updating equation. The results also reveal that the genetic mutation operation gives the NGHS a strong capacity for preventing premature convergence throughout the iterations. The proposed algorithm thus provides a new method for 0–1 knapsack problems, and it may find the required optima even when the problem to be solved is highly complicated.

Acknowledgments

This work was supported by the National Science Foundation of P.R. China under Grant 60674021.
