Applied Mathematics and Computation 216 (2010) 2687–2699
Chaotic harmony search algorithms
Bilal Alatas, Firat University, Faculty of Engineering, Department of Computer Engineering, 23119 Elazig, Turkey
Keywords: Harmony search; Chaos; Performance
doi:10.1016/j.amc.2010.03.114
E-mail addresses: [email protected], [email protected]
Harmony Search (HS) is one of the newest and easiest-to-code music-inspired heuristics for optimization problems. Motivated by the use of chaos in adjusting note parameters such as pitch, dynamics, rhythm, duration, tempo, instrument selection, attack time, etc. in real music and in sound synthesis and timbre construction, this paper proposes new HS algorithms that use chaotic maps for parameter adaptation in order to improve the convergence characteristics and to prevent HS from getting stuck in local solutions. This has been done by using chaotic number generators each time a random number is needed by the classical HS algorithm. Seven new chaotic HS algorithms have been proposed and different chaotic maps have been analyzed on the benchmark functions. It has been found that coupling emergent results from different areas, such as HS and complex dynamics, can improve the quality of results in some optimization problems. It has also been shown that some of the proposed methods somewhat increase the solution quality; that is, in some cases they improve the global searching capability by escaping local solutions.
© 2010 Elsevier Inc. All rights reserved.
1. Introduction
Classical optimization algorithms are inflexible in adapting the solution procedure to an optimization problem. Generally, a given problem is modeled in such a way that a classical algorithm can handle it. This usually requires making several assumptions and/or modifications which might not be easy to validate in many situations. These modifications and/or assumptions on the original problem parameters (rounding variables, softening constraints, etc.) certainly affect the solution quality [1]. They are insufficient if integer and/or discrete decision variables are required in optimization models [1]. Solution strategies of classical optimization algorithms generally depend on the type of objective and constraint functions (linear, nonlinear, etc.) and the type of variables used in the problem modeling (integer, real, etc.). Their efficiency is also very much dependent on the size of the solution space, the number of variables and constraints used in the problem modeling, and the structure of the solution space (convex, non-convex, etc.). Briefly, they do not offer general solution strategies that can be applied to problem formulations where different types of variables, objective, and constraint functions are simultaneously required by the optimization problems [1].
The inefficiency of classical optimization algorithms in solving large-scale and/or highly nonlinear problems has forced researchers to seek more flexible and adaptable general-purpose algorithms. Problem- and model-independent heuristic optimization algorithms have been proposed by researchers to overcome the drawbacks of the classical optimization procedures. These algorithms are efficient and flexible, and they can be modified and/or adapted to suit specific problem requirements. Research on these algorithms is still continuing all around the globe. Fig. 1 shows a classification of the heuristic algorithms.
Fig. 1. Heuristic algorithms. (The classification distinguishes biology-based, social-based, physics-based, musical-based, and hybrid heuristics; single-point vs. multi-point search; stable vs. dynamic objective function; single vs. dynamic neighborhood; and search with vs. without memory.)
A meta-heuristic algorithm mimicking the improvisation process of music players has been developed and named Harmony Search (HS) [2-7]. HS has several advantages with respect to traditional optimization techniques, such as the following:

(a) HS imposes fewer mathematical requirements.
(b) HS is free from divergence.
(c) HS does not require initial value settings of the decision variables, thus it may escape local optima. Furthermore, it may be easily adapted to multi-modal problems [8].
(d) As HS uses stochastic random searches, derivative information is unnecessary. HS has a novel stochastic derivative [9].
(e) HS can handle both discrete and continuous variables.
(f) The HS algorithm can overcome the drawback of the genetic algorithm's building-block theory by considering the relationship among decision variables using its ensemble operation. HS generates a new vector after considering all of the existing vectors, whereas the genetic algorithm only considers the two parent vectors [10].

These features increase the flexibility of the HS algorithm and produce better solutions. HS is good at identifying the high-performance regions of the solution space in a reasonable time, but gets into trouble when performing local search for numerical applications. Researchers are still trying to improve the fine-tuning characteristic and convergence rate of the HS algorithm [4,5,11,12].
Nonlinear dynamic systems have been iterated to generate chaotic sequences of numbers that are then mapped to various note parameters (pitch, dynamics, rhythm, duration, tempo, instrument selection, attack time, etc.) in real music. Four pioneers of these methods are Jeff Pressing, Michael Gogins, Rick Bidlack, and Jeremy Leach [13]. Chaos has also been used in sound synthesis and timbre construction [13].
Many chaotic maps in the literature possess certainty, ergodicity, and the stochastic property. Recently, chaotic sequences have been adopted instead of random sequences, and very interesting and somewhat good results have been shown in many applications [14-16]. They have also been used together with some heuristic optimization algorithms [17-19] to express optimization variables. The choice of chaotic sequences is justified theoretically by their unpredictability, i.e., by their spread-spectrum characteristic, non-periodic, complex temporal behavior, and ergodic properties.
In this paper, sequences generated from different chaotic systems substitute random numbers for different parameters of HS, a music-inspired heuristic algorithm, wherever a random-based choice is necessary. For this purpose, different HS methods that use chaotic maps as efficient alternatives to pseudorandom sequences have been proposed. In this way, it is intended to enhance the global convergence and to prevent the search from sticking at a local solution. However, in general, it is hard to estimate by statistical tests how good most chaotic random number generators are, as they do not follow the uniform distribution. The simulation results show that the application of deterministic chaotic signals instead of random sequences may be a possible strategy to improve the performance of HS.
The remainder of this paper is organized as follows. A review of HS is given in Section 2. Section 3 offers a short introduction to improvements for HS. Section 4 describes the proposed methods, Chaotic Harmony Search Algorithms (CHSAs). Section 5 describes the benchmark problems used for comparison of the proposed methods. In Section 6, the proposed methods are tested on the benchmark problems and the simulation results are compared with those obtained via other algorithms that have been reported to have good performance. Finally, the conclusion based on the reported comparison analysis is presented in Section 7.
2. Harmony search algorithm
The Harmony Search (HS) algorithm, originated by Geem et al. [2], is based on the natural musical performance process that occurs when a musician searches for a better state of harmony [2-7]. The resemblance between, for example, jazz improvisation that seeks to find musically pleasing harmony and optimization is that the optimum design process seeks to find the optimum solution as determined by the objective function. The pitch of each musical instrument determines the aesthetic quality, just as the set of values assigned to each design variable determines the objective function value. Aesthetic sound quality can be improved practice after practice, just as the objective function value can be improved iteration by iteration [2,3].
The analogy between improvisation and optimization is shown in Fig. 2. Each musician (double bassist, guitarist, and saxophonist) has some notes in memory, and the musicians correspond to the decision variables (x1, x2, and x3). The range of each music instrument (double bass = {Do, Re, Mi}; guitar = {Mi, Fa, Sol}; and saxophone = {La, Si, Do}) corresponds to each variable's value set (x1 = {1.2, 2.2, 3.1}; x2 = {3.2, 2.4, 1.8}; and x3 = {1.7, 2.8, 2.3}). If the double bassist plucks the note Mi, the guitarist plucks Sol, and the saxophonist toots Do, their notes together make a new harmony (Mi, Sol, Do). They intend to improvise a new harmony by considering in their minds which note to play. If the new harmony is better than the existing worst harmony, the new harmony is kept. Likewise, the new solution vector (1.2, 3.0, 1.6) generated in the optimization process is kept if it is better than the existing worst harmony in terms of objective function value. Just as the harmony quality is enhanced practice after practice, the solution quality is enhanced iteration by iteration. The HS algorithm was originally developed for discrete optimization and later expanded to continuous optimization [2,3].
According to the above algorithm concept, the HS algorithm, whose optimization procedure is shown in Fig. 3, consists of the following five steps:

Step 1: Problem and algorithm parameter initialization
Step 2: Harmony memory initialization and evaluation
Step 3: New harmony improvisation
Step 4: Harmony memory update
Step 5: Termination criterion check
Fig. 2. Analogy between improvisation and optimization (adapted from [3]). (Each musician's memory {Do, Re, Mi}, {Mi, Fa, Sol}, {La, Si, Do} corresponds to the candidate values of x1, x2, x3; through HM considering, pitch adjusting, and random selection the considered notes become the chosen harmony (Do, Mi Sharp, Sol), i.e., the new solution vector evaluated as f(1.2, 3.0, 1.6).)
Fig. 3. Block diagram of HS. (Step 1: initialize the problem parameters, namely the objective function f(x), the decision variables xi, and the number of decision variables N, and the algorithm parameters, namely the harmony memory size (HMS), harmony memory considering rate (HMCR), pitch adjustment rate (PAR), number of improvisations (NI), and distance bandwidth (bw). Step 2: initialize the Harmony Memory (HM) with as many random vectors as the HMS and evaluate HM. Step 3: improvise a new harmony; with probability HMCR select a value for each variable from HM, then with probability PAR choose a neighboring value, otherwise do nothing; with probability 1-HMCR select a new value from the possible value set. Step 4: update HM if the new harmony vector is better than the existing harmony vectors in the HM. Step 5: if the termination criteria are met, output HM; otherwise return to Step 3.)
The pseudo-code for the HS algorithm is shown in Fig. 4. The five steps of HS are described in the next five subsections.
2.1. Problem and algorithm parameter initialization
The optimization problem is specified as follows:
Minimize f(x) subject to x_j ∈ X_j, j = 1, 2, ..., N,
where f(x) is an objective function; x is the set of decision variables x_j; N is the number of decision variables; and X_j is the set of possible values for each decision variable, with x_j^min and x_j^max the lower and upper bounds of the jth decision parameter, respectively.

Fig. 4. Pseudo-code of HS.

The HS algorithm parameters are also specified in this step. These are the harmony memory size (HMS), or the number of solution vectors in the harmony memory; the harmony memory considering rate (HMCR); the pitch adjusting rate (PAR); the bandwidth distance (bw); and the number of improvisations (NI), or stopping criterion. Recently the bandwidth (bw) term has been renamed fret width (fw), a more musical term [20]; however, in this paper bw will be used. The harmony memory (HM) is a memory location where all the solution vectors (sets of decision variables) are stored. This HM is similar to the gene pool in the genetic algorithm [21]. Here, HMCR, PAR, and bw are used to improve the solution vector. They are defined in Section 2.3.
2.2. Harmony memory initialization and evaluation
In Step 2, the HM matrix is filled with as many randomly generated solution vectors as the HMS. This matrix has N columns, where N is the total number of decision variables, and HMS rows, as selected in the first step. This initial memory is created by assigning to each decision parameter of each vector random values that lie inside the lower and upper bounds of the decision variable, as shown in (1):

x_{i,j}^0 = x_j^min + r_j · (x_j^max - x_j^min), i = 1, ..., HMS, j = 1, ..., N, (1)

where x_j^min and x_j^max are the lower and upper bounds of the jth decision parameter, respectively, and r_j ∈ [0, 1] is a uniformly distributed random number generated anew for each value of j. The pseudo-code of memory initialization is shown in Fig. 5.
Thus, HM takes the form shown in Fig. 6. The candidate solution vectors in HM shown in Fig. 6 are then analyzed and their objective function values are calculated (f(x_{i,·}), i = 1, 2, ..., HMS).
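As a concrete illustration, Eq. (1) can be rendered directly in Python (the function and variable names here are illustrative, not from the paper). Note that passing a chaotic iterator in place of `random.random` is precisely the substitution the CHS variants of Section 4 make.

```python
import random

def initialize_hm(hms, lower, upper, rand=random.random):
    """Fill the harmony memory with HMS random vectors inside the bounds (Eq. (1))."""
    n = len(lower)  # number of decision variables N
    return [[lower[j] + rand() * (upper[j] - lower[j]) for j in range(n)]
            for _ in range(hms)]
```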
2.3. New harmony improvisation
In Step 3, a new harmony vector x'_i = (x'_{i,1}, x'_{i,2}, ..., x'_{i,N}) is generated based on three rules: memory consideration, pitch adjustment, and random selection. The value of a design variable can be selected from the values stored in HM with a probability equal to the harmony memory considering rate (HMCR). It can be further adjusted by moving to a neighboring value of a value selected from the HM, with a probability equal to the pitch adjusting rate (PAR). Or, it can be selected randomly from the set of all candidate values, without considering the stored values in HM, with probability (1 - HMCR).

Fig. 5. Memory initialization.

Fig. 6. HM form.

The improvisation process of the HS algorithm is depicted in Fig. 2. The for...do-end do loop in Fig. 4 depicts this step of HS. Memory consideration, pitch adjustment, or random selection is applied to each variable of the new harmony vector in turn.
2.3.1. Memory consideration
In Fig. 2, the first musician has a memory containing 3 notes, {Do, Re, Mi}. He decides to choose Do from his memory and plays it directly. Likewise, if the first decision variable represents the first musician, the value 1.2 can be chosen from a memory. In the memory consideration part of HS, the values of the decision variables x'_{i,j} for the new vectors are likewise chosen from any of the values in the specified HM range. The HMCR, which varies between 0 and 1, is the rate of choosing one value from the historical values stored in the HM, while (1 - HMCR) is the rate of randomly selecting one value from the possible range of values, as shown in Fig. 7.
2.3.2. Pitch adjustment
In Fig. 2, the second musician also has a 3-note memory, {Mi, Fa, Sol}. Differently from the first musician, he first chooses the note Mi. Then, he can play a neighboring pitch such as Mi Sharp. Likewise, the value 3.2 is chosen from the memory, and then it can be adjusted to a neighboring value, 3.0, in this step.
Likewise, in the pitch adjustment step of the algorithm, every component obtained by memory consideration is examined to determine whether it should be pitch-adjusted. The second if-end if part of the pseudo-code shown in Fig. 4 explains this step of HS. This operation uses the PAR parameter, which is the rate of pitch adjustment; its pseudo-code can be seen in Fig. 8. Here, r is a random number between 0 and 1, and bw is an arbitrary distance bandwidth.
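The three rules of this step can be sketched in Python as follows. This is a hedged reading of the pseudo-code: "moving to a neighboring value" is implemented here as a symmetric perturbation within ± bw, which is one common interpretation, and all names are illustrative.

```python
import random

def improvise(hm, lower, upper, hmcr, par, bw):
    """Improvise one new harmony vector from the harmony memory `hm`."""
    n = len(lower)
    new = []
    for j in range(n):
        if random.random() < hmcr:                       # memory consideration
            value = random.choice(hm)[j]
            if random.random() < par:                    # pitch adjustment
                value += (2 * random.random() - 1) * bw  # neighboring value within +/- bw
        else:                                            # random selection
            value = lower[j] + random.random() * (upper[j] - lower[j])
        new.append(min(max(value, lower[j]), upper[j]))  # keep the value inside its range
    return new
```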
2.3.3. Random selection
In Fig. 2, the last musician has some notes in his memory as well, {La, Si, Do}. Although this memory was used during the past improvisations, due to his musical knowledge he can also play all possible pitches, {Do, Re, Mi, Fa, Sol, La, Si, Do+}. Thus,

Fig. 7. Memory consideration.

Fig. 8. Pitch adjustment.

when he decides to play a note randomly, he can choose any of these notes, e.g., Sol as shown in Fig. 2. Being in the possible data set, 1.6 can be chosen randomly in this step, even if it does not exist in the memory. After each musician has decided what to play, the new harmony is composed of (Do, Mi Sharp, Sol). Similarly, a new solution vector is determined as (1.2, 3.0, 1.6).
This step of HS is shown in the else part of the memory consideration step. The value of the decision variable x'_{·,j} is randomly chosen within the value range X_j.
2.4. Harmony memory update
If the newly generated harmony vector gives a better objective function value than the worst one in the HM, the new harmony vector is included in the HM and the worst harmony is excluded.
2.5. Termination criterion check
The HS algorithm is terminated when the termination criterion (e.g., the maximum number of improvisations) has been met. Otherwise, Steps 3 and 4 (Sections 2.3 and 2.4) are repeated.
3. Improvements on convergence of HS
Some work has been performed to improve the convergence of HS. Mahdavi et al. [4] proposed a new variant of HS, called the improved harmony search (IHS). The IHS dynamically updates PAR according to (2):
PAR(t) = PARmin + ((PARmax - PARmin) / NI) · t, (2)

where PAR(t) is the pitch adjusting rate for iteration t, PARmin is the minimum adjusting rate, PARmax is the maximum adjusting rate, NI is the number of improvisations, and t is the iteration number.
In addition, bw is dynamically updated according to (3):

bw(t) = bwmax · exp((ln(bwmin / bwmax) / NI) · t), (3)

where bw(t) is the bandwidth for iteration t, bwmin is the minimum bandwidth, and bwmax is the maximum bandwidth.
In the IHS algorithm, four extra parameters (PARmin, PARmax, bwmin, and bwmax) should be adjusted for different problems, and it is not easy to initialize these terms. Selecting the best values is very difficult and may itself be another optimization problem.
Another work, called global-best harmony search (GHS), is inspired by the concept of particle swarm optimization (PSO) [5]. It modifies the pitch adjustment step of HS such that the new harmony can mimic the best harmony in the HM. Thus, this approach replaces the bw parameter altogether and adds a social dimension to HS [5]. GHS has exactly the same steps as IHS, with the exception of the new harmony improvisation step, as depicted in Fig. 9.
However, another research result has shown that this PSO feature works well only for small-sized problems [22]. According to the results obtained in that work, HS outperformed PSO-HS, GA, SA, and SA + TS for a large-scale real-world problem with 454 variables.
In [11], a more rational PAR function using the idea of simulated annealing has been proposed, which increases the robustness of the algorithm and therefore leads to a highly reliable algorithm. Their PAR function is:

PAR(t) = PARmax - ((PARmax - PARmin) / NI) · t. (4)
In [12], some improvements on convergence of HS have also been performed.
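The schedules of Eqs. (2)-(4) are straightforward to express in code. The sketch below uses illustrative names, with default bounds borrowed from the IHS parameter settings of Table 3, and shows how PAR and bw evolve with the iteration counter t:

```python
import math

def ihs_par(t, ni, par_min=0.10, par_max=0.99):
    """Linearly increasing pitch adjusting rate, Eq. (2)."""
    return par_min + (par_max - par_min) / ni * t

def ihs_bw(t, ni, bw_min=0.10, bw_max=2.0):
    """Exponentially decreasing bandwidth, Eq. (3)."""
    return bw_max * math.exp(math.log(bw_min / bw_max) / ni * t)

def sa_par(t, ni, par_min=0.10, par_max=0.99):
    """Linearly decreasing pitch adjusting rate, Eq. (4)."""
    return par_max - (par_max - par_min) / ni * t
```

At t = 0 the IHS schedules start at PARmin and bwmax, and at t = NI they reach PARmax and bwmin, respectively.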
Fig. 9. New harmony improvisation step of GHS.
4. Chaotic harmony search algorithms
In simulating complex phenomena, sampling, numerical analysis, decision making, and especially heuristic optimization need random sequences with a long period and good uniformity [23]. Chaos is a deterministic, random-like process found in nonlinear dynamical systems; it is non-periodic, non-converging, and bounded. Moreover, it has a very sensitive dependence on its initial condition and parameters [19,23]. The nature of chaos is apparently random and unpredictable, yet it also possesses an element of regularity. Mathematically, chaos is randomness of a simple deterministic dynamical system, and chaotic systems may be considered sources of randomness [19,23].
A chaotic map is a discrete-time dynamical system

x_{k+1} = f(x_k), 0 < x_k < 1, k = 0, 1, 2, ... (5)

running in a chaotic state. The chaotic sequence

{x_k : k = 0, 1, 2, ...}

can be used as a spread-spectrum sequence or as a random number sequence. Chaotic sequences have been proven easy and fast to generate and store; there is no need to store long sequences
[24]. Merely a few functions (chaotic maps) and a few parameters (initial conditions) are needed even for very long sequences. In addition, an enormous number of different sequences can be generated simply by changing the initial condition. Moreover, these sequences are deterministic and reproducible.
Recently, chaotic sequences have been adopted instead of random sequences, and very interesting and somewhat good results have been obtained in many applications such as secure transmission [25,26], nonlinear circuits [27], DNA computing [28], and image processing [29]. The choice of chaotic sequences is justified theoretically by their unpredictability, i.e., by their spread-spectrum characteristic and ergodic properties.
One of the drawbacks of HS is its premature convergence, especially while handling problems with many local optima [30,31].
The classical HS algorithm uses fixed values for HMCR, PAR, and bw. These values, which are key factors affecting the convergence of HS, are set in the initialization step (Step 1) and cannot be changed during later iterations. The main drawback of this method appears in the number of iterations required to find an optimal solution.
Small PAR values with large bw values can cause poor performance of the algorithm, and many iterations may be needed to find the optimum solution. Small bw values in final iterations increase the fine-tuning of solution vectors by local exploitation, while in early iterations a bigger bw value can increase the diversity of solution vectors for global exploration. Furthermore, large PAR values with small bw values usually improve the best solutions in final iterations, by which the algorithm converges to the optimal solution vector. With these considerations, IHS has been proposed [4].
However, a decreased bw value, for example, tends to trap the algorithm in local optima and slows the convergence speed when it is near a minimum. Furthermore, the HMCR parameter and the random initialization of HM may affect the convergence speed. In fact, these parameters cannot entirely ensure the ergodicity of the optimization in phase space, because they are random in classical HS. That is why these parameters may instead be selected chaotically by using chaotic maps. In this paper, sequences generated from chaotic systems substitute random numbers for the HS parameters wherever a random-based choice is necessary. In this way, it is intended to improve the global convergence and to prevent the search from sticking at a local solution.
This paper provides new approaches introducing chaotic maps, with their ergodicity, irregularity, and stochastic property, into HS to improve the global convergence by escaping local solutions. The use of chaotic sequences in HS can be helpful for escaping from local minima more easily than is possible with the classical HS. When a random number is needed by the classical HS algorithm, it is generated by iterating one step of the chosen chaotic map, which has been started from a random initial condition at the first iteration of HS. The chaotic maps selected for the experiments are listed in the following subsections.
4.1. Used chaotic maps
The chaotic maps used in the experiments to generate the chaotic sequences in the HS steps are listed below:
4.1.1. Logistic map
The logistic map, whose equation is given in Eq. (6), was brought to the attention of scientists by Sir Robert May in 1976 [32]. It appears in the nonlinear dynamics of biological populations evidencing chaotic behavior:

X_{n+1} = a · X_n · (1 - X_n). (6)

In this equation, X_n is the nth chaotic number, where n denotes the iteration number. Obviously, X_n ∈ (0, 1) under the conditions that the initial X_0 ∈ (0, 1) and that X_0 ∉ {0.0, 0.25, 0.5, 0.75, 1.0}. a = 4 has been used in the experiments.
4.1.2. Tent map
The tent map [33] resembles the logistic map. It generates chaotic sequences in (0, 1), assuming the following form:

X_{n+1} = X_n / 0.7 if X_n < 0.7; (10/3) · X_n · (1 - X_n) otherwise. (7)
4.1.3. Sinusoidal iterator
The third chaotic sequence generator used in this paper is the so-called sinusoidal iterator [32], represented by

X_{n+1} = a · X_n^2 · sin(π · X_n). (8)

When a = 2.3 and X_0 = 0.7, it has the simplified form represented by

X_{n+1} = sin(π · X_n). (9)

It generates a chaotic sequence in (0, 1).
4.1.4. Gauss map
The Gauss map is used for testing purposes in the literature [33] and is represented by:

X_{n+1} = 0 if X_n = 0; (1/X_n) mod 1 if X_n ∈ (0, 1), (10)

(1/X_n) mod 1 = 1/X_n - ⌊1/X_n⌋, (11)

where ⌊z⌋ denotes the largest integer not exceeding z and acts as a shift on the continued-fraction representation of numbers. This map also generates chaotic sequences in (0, 1).
4.1.5. Circle map
The circle map [34] is represented by:

X_{n+1} = (X_n + b - (a / 2π) · sin(2π · X_n)) mod 1. (12)

With a = 0.5 and b = 0.2, it generates a chaotic sequence in (0, 1).
4.1.6. Sinus map
The sinus map is defined as follows:

X_{n+1} = 2.3 · X_n^2 · sin(π · X_n). (13)

4.1.7. Henon map
The Henon map is a nonlinear 2-dimensional map, most frequently employed for testing purposes. It is represented by:

X_{n+1} = 1 - a · X_n^2 + b · Y_n, (14)
Y_{n+1} = X_n. (15)

It is sometimes written as a 2-step recurrence relation represented by:

X_{n+1} = 1 - a · X_n^2 + b · X_{n-1}. (16)

The suggested parameter values are a = 1.4 and b = 0.3.
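For reference, the maps of this section can be written as one-step Python functions (a sketch; parameter defaults are the values quoted above, and the helper name is ours). Iterating any of the one-dimensional maps from a suitable X_0 yields the chaotic stream that replaces the random number generator:

```python
import math

def logistic(x, a=4.0):       # Eq. (6)
    return a * x * (1.0 - x)

def tent(x):                  # Eq. (7)
    return x / 0.7 if x < 0.7 else (10.0 / 3.0) * x * (1.0 - x)

def sinusoidal(x, a=2.3):     # Eq. (8); also the sinus map of Eq. (13)
    return a * x * x * math.sin(math.pi * x)

def gauss(x):                 # Eqs. (10) and (11)
    return 0.0 if x == 0.0 else (1.0 / x) % 1.0

def circle(x, a=0.5, b=0.2):  # Eq. (12)
    return (x + b - (a / (2.0 * math.pi)) * math.sin(2.0 * math.pi * x)) % 1.0

def henon(x, x_prev, a=1.4, b=0.3):  # Eq. (16), two-step form; note: not confined to (0, 1)
    return 1.0 - a * x * x + b * x_prev

def chaotic_stream(step, x0, count):
    """Return `count` successive iterates of the one-dimensional map `step`."""
    seq, x = [], x0
    for _ in range(count):
        x = step(x)
        seq.append(x)
    return seq
```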
4.2. Proposed chaotic harmony search algorithms
New chaotic harmony search (CHS) algorithms may be simply classified and described as follows:
4.2.1. CHS1
The initial HM is generated by iterating the selected chaotic maps until the HMS is reached, as shown in Fig. 10.
4.2.2. CHS2
In this algorithm, the PAR value is not fixed in the algorithm parameter initialization step of HS; it is modified by the selected chaotic map as follows:

PAR(t + 1) = f(PAR(t)), 0 < PAR(t) < 1, t = 0, 1, 2, ... (17)

f(·) is the selected chaotic map, beginning with a value that takes the constraints into account.
4.2.3. CHS3
In this algorithm, the bw value is not fixed in the algorithm parameter initialization step of HS; it is modified by the selected chaotic map as follows:

bw(t + 1) = f(bw(t)), 0 < bw(t) < 1, t = 0, 1, 2, ... (18)

f(·) is the selected chaotic map, beginning with a value that takes the constraints into account.
4.2.4. CHS4
In this algorithm, the PAR and bw values are not fixed in HS; they are modified by the selected chaotic maps as follows:

PAR(t + 1) = f(PAR(t)), 0 < PAR(t) < 1, t = 0, 1, 2, ...
bw(t + 1) = f(bw(t)), 0 < bw(t) < 1, t = 0, 1, 2, ... (19)
4.2.5. CHS5
CHS1 and CHS2 are combined; that is, the initial HM is generated by iterating the selected chaotic maps, and the PAR value is modified by the selected chaotic map when needed.

4.2.6. CHS6
CHS1 and CHS3 are combined; that is, the initial HM is generated by iterating the selected chaotic maps, and the bw value is modified by the selected chaotic map when needed.
Fig. 10. Pseudo-code of CHS1.
4.2.7. CHS7
CHS1, CHS2, and CHS3 are combined. In this approach:

- HM is generated by iterating the selected chaotic maps.
- The PAR value is modified by the selected chaotic map.
- The bw value is modified by the selected chaotic map.
5. Test problems
Well-defined benchmark functions based on mathematical functions can be used as objective functions to measure and test the performance of optimization methods. The nature, complexity, and other properties of these benchmark functions can be easily obtained from their definitions. The difficulty level of most benchmark functions is adjustable by setting their parameters. From the standard set of benchmark problems available in the literature, two important functions which are multi-modal (containing many local optima, but only one global optimum) are considered to test the efficacy of the proposed methods. Table 1 shows the main properties of the selected benchmark functions used in the experiments.
6. Experimental results
The two selected benchmark problems are solved by simulating the HS, IHS, and GHS algorithms. Two criteria are applied to terminate the simulation of the algorithms: reaching the maximum number of iterations, which is set to a constant number, and reaching a minimum error.
All HMs were initialized in regions that include the global optimum for a fair evaluation. The algorithms were run 100 times to capture their stochastic properties. In this experiment, the maximum iteration number was set to 500, and the goal is not to find the global optimum values but to find out the potential of the algorithms. The algorithm success rate defined in Eq. (20) has been used for comparison of the results obtained from the different HS algorithms.
S = 100 · (Nsuccessful / Nall), evaluated at the given Q level. (20)

Nsuccessful is the number of trials which found the solution at the Q level within the allowable maximum number of iterations. Nall is the number of all trials. The Q level is the end condition to stop the algorithm, when it converges within the Q level tolerance.

Table 1. Properties of the test problems; lb indicates lower bound, ub indicates upper bound, opt indicates optimum point.

No.  Function name  Definition                                                      lb     ub    opt  Property
1    Griewangk      f1(x) = Σ_{i=1..N} x_i^2 / 4000 - Π_{i=1..N} cos(x_i / √i) + 1  -50    50    0    Multi-modal
2    Rastrigin      f2(x) = 10·N + Σ_{i=1..N} (x_i^2 - 10·cos(2π·x_i))              -5.12  5.12  0    Multi-modal

Table 2. Used parameters for the HS and GHS algorithms.

      HMS  HMCR  PAR  bw
HS    40   0.9   0.8  0.2
GHS   40   0.9   0.8  0.2

Table 3. Used parameters for the IHS algorithm.

HMS  HMCR  PARmin  PARmax  bwmin  bwmax
40   0.9   0.10    0.99    0.10   2.0

Table 4. Success rates of HS algorithms for the Rastrigin function (N = 2).

Q level  HS  IHS  GHS
1.e-5    44  51   47
1.e-6    10  14   11
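Eq. (20) amounts to the percentage of successful trials; a minimal Python rendering (illustrative names) is:

```python
def success_rate(final_errors, q_level):
    """Eq. (20): percentage of trials whose final error reached the Q-level tolerance."""
    n_successful = sum(1 for e in final_errors if e <= q_level)
    return 100.0 * n_successful / len(final_errors)
```

For example, four trials with final errors [1e-7, 1e-4, 1e-6, 2e-5] give a success rate of 50.0 at Q level 1e-5.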
The parameters used for HS and GHS are shown in Table 2, while the parameters used for IHS are shown in Table 3. Table 4 depicts the success rates of the HS algorithms for the Rastrigin function with N = 2. Success rates of CHS algorithms
Table 5. Success rates of CHS algorithms using different chaotic maps for the Rastrigin function (N = 2).

Map                  Q level  CHS1  CHS2  CHS3  CHS4  CHS5  CHS6  CHS7
Logistic map         1.e-5    56    48    66    79    48    99    59
                     1.e-6    18    10    21    53    13    52    41
Tent map             1.e-5    49    58    54    52    54    55    84
                     1.e-6    15    29    19    12    22    16    44
Sinusoidal iterator  1.e-5    48    53    87    52    52    99    54
                     1.e-6    15    21    28    18    29    80    17
Gauss map            1.e-5    45    47    55    55    47    55    60
                     1.e-6    10    13    18    35    14    39    33
Circle map           1.e-5    52    71    59    57    75    64    84
                     1.e-6    16    33    19    37    28    37    41
Sinus map            1.e-5    52    74    71    57    64    52    62
                     1.e-6    19    31    41    45    24    23    37
Henon map            1.e-5    50    49    52    52    47    52    52
                     1.e-6    19    14    31    45    12    26    30
Table 6. Success rates of HS algorithms for the Griewangk function (N = 2).

Q level  HS  IHS  GHS
1.e-5    62  68   68
1.e-6    55  62   56
Table 7. Success rates of CHS algorithms using different chaotic maps for the Griewangk function (N = 2).

Map                  Q level  CHS1  CHS2  CHS3  CHS4  CHS5  CHS6  CHS7
Logistic map         1.e-5    64    64    75    72    67    84    78
                     1.e-6    62    62    67    64    64    74    68
Tent map             1.e-5    67    72    91    71    60    73    71
                     1.e-6    62    69    72    65    56    64    64
Sinusoidal iterator  1.e-5    63    62    91    74    60    86    73
                     1.e-6    60    58    71    69    48    66    67
Gauss map            1.e-5    66    55    76    63    65    75    75
                     1.e-6    60    51    72    57    63    69    69
Circle map           1.e-5    84    77    74    71    88    93    72
                     1.e-6    84    67    58    66    78    85    63
Sinus map            1.e-5    84    73    81    68    80    80    64
                     1.e-6    59    63    63    62    70    68    62
Henon map            1.e-5    72    65    79    62    63    78    73
                     1.e-6    69    62    66    60    60    68    67
using different chaotic maps for the Rastrigin function with N = 2 are shown in Table 5. The CHS algorithms performed somewhat better than the other HS algorithms, with CHS3, CHS4, and CHS6 performing best among them. From the results in these tables, it can be concluded that adjusting the bw values with chaotic maps improves the convergence behaviour of the HS algorithm.
Table 6 reports the success rates of the HS algorithms for the second benchmark function, the Griewangk function with N = 2. The same parameter values selected for the Rastrigin function have also been used here. Success rates of CHS algorithms using different chaotic maps for the Griewangk function with N = 2 are shown in Table 7. As with the Rastrigin function, the CHS algorithms performed somewhat better than the other HS algorithms, and again the performances of the CHS3, CHS4, and CHS6 algorithms are better than the others.
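The mechanism behind these results can be sketched as follows: instead of drawing parameters such as bw from a fixed value or a uniform random generator, successive values are taken from a chaotic map. The sketch below uses the logistic map; the scaling of map iterates into a [bw_min, bw_max] range and the particular bounds are illustrative assumptions, not necessarily the exact scheme used in the experiments above:

```python
def logistic_map(x):
    # Logistic map x_{k+1} = 4 * x_k * (1 - x_k), chaotic on (0, 1)
    return 4.0 * x * (1.0 - x)

def chaotic_bw_sequence(x0, n, bw_min=0.0001, bw_max=0.2):
    # Generate n bandwidth values by scaling logistic-map iterates into [bw_min, bw_max].
    # x0 must avoid fixed points of the map (e.g. 0, 0.5, 0.75) to stay chaotic.
    bws, x = [], x0
    for _ in range(n):
        x = logistic_map(x)
        bws.append(bw_min + (bw_max - bw_min) * x)
    return bws
```

Because the chaotic sequence is ergodic rather than periodic, the bw values sweep the whole range without repeating, which is the property credited here with helping the search escape local solutions.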
7. Conclusions
In this paper, different chaotic maps have been embedded into the music-inspired HS algorithm to adapt its parameters. This has been done by using chaotic number generators each time a random number is needed by the classical HS algorithm. Seven new chaotic HS algorithms have been proposed, and different chaotic maps have been analyzed on the benchmark functions. It has been found that coupling emergent results from different areas, such as HS and complex dynamics, can improve the quality of results in some optimization problems, and also that chaos may be a desirable process, as it is in real music. It has also been shown that these methods, especially the CHS3, CHS4, and CHS6 algorithms, have somewhat increased the solution quality; that is, in some cases they improved the global searching capability by escaping local solutions. The proposed methods are new, and more elaborate experiments may be performed with parallel or distributed implementations.
References
[1] A. Baykasoglu, L. Ozbakir, P. Tapkan, Artificial bee colony algorithm and its application to generalized assignment problem, in: Felix T.S. Chan, Manoj Kumar Tiwari (Eds.), Swarm Intelligence: Focus on Ant and Particle Swarm Optimization, Chapter 8, Itech Education and Publishing, 2007, p. 532.
[2] Z.W. Geem, J.H. Kim, G.V. Loganathan, A new heuristic optimization algorithm: harmony search, Simulation 76 (2) (2001) 60–68.
[3] K.S. Lee, Z.W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput. Meth. Appl. Mech. Eng. 194 (2005) 3902–3933.
[4] M. Mahdavi, M. Fesanghary, E. Damangir, An improved harmony search algorithm for solving optimization problems, Appl. Math. Comput. 188 (2007) 1567–1579.
[5] M.G.H. Omran, M. Mahdavi, Global-best harmony search, Appl. Math. Comput. 198 (2008) 643–656.
[6] M. Mahdavi, M. Haghir Chehreghani, H. Abolhassani, R. Forsati, Novel meta-heuristic algorithms for clustering web documents, Appl. Math. Comput. 201 (2008) 441–451.
[7] Z.W. Geem, Novel derivative of harmony search algorithm for discrete design variables, Appl. Math. Comput. 199 (1) (2008) 223–230.
[8] X.Z. Gao, X. Wang, S.J. Ovaska, Uni-modal and multi-modal optimization using modified harmony search methods, Int. J. Innov. Comput. Inform. Control 5 (10(A)) (2009) 2985–2996.
[9] Z.W. Geem, Global optimization using harmony search: theoretical foundations and applications, in: A. Abraham, A.E. Hassanien, P. Siarry, A. Engelbrecht (Eds.), Foundations of Computational Intelligence, vol. 3, Springer, 2009, pp. 57–73.
[10] Z.W. Geem, Improved harmony search from ensemble of music players, Lect. Notes Artif. Int. 4251 (2006) 86–93.
[11] N. Taherinejad, Highly reliable harmony search algorithm, Eur. Conf. Circuit Theory Des. (2009) 818–822.
[12] Z.W. Geem, W.E. Roper, Various continuous harmony search algorithms for web-based hydrologic parameter optimization, Int. J. Math. Model. Numer. Optim. 1 (3) (2010) 213–226.
[13] J.A. Maurer, The influence of chaos on computer-generated music, <http://ccrma.stanford.edu/~blackrse/chaos.html>, 1999 (accessed 04.03.2009).
[14] M. Suneel, Chaotic sequences for secure CDMA, Ramanujan Inst. Adv. Study Math. (2006) 1–4.
[15] H. Gao, Y. Zhang, S. Liang, D. Li, A new chaotic algorithm for image encryption, Chaos Solitons Fractals 29 (2006) 393–399.
[16] P. Arena, R. Caponetto, L. Fortuna, A. Rizzo, M. La Rosa, Self organization in non recurrent complex systems, Int. J. Bifur. Chaos 10 (5) (2000) 1115–1125.
[17] B. Alatas, E. Akin, B. Ozer, Chaos embedded particle swarm optimization algorithms, Chaos Solitons Fractals 40 (4) (2009) 1715–1734.
[18] R. Caponetto, L. Fortuna, S. Fazzino, M.G. Xibilia, Chaotic sequences to improve the performance of evolutionary algorithms, IEEE Trans. Evol. Comput. 7 (3) (2003) 289–304.
[19] B. Alatas, Chaotic bee colony algorithms for global numerical optimization, Expert Syst. Appl. (2010). <http://dx.doi.org/10.1016/j.eswa.2010.02.042>.
[20] Z.W. Geem, Recent Advances in Harmony Search Algorithm, Springer, Berlin, 2010.
[21] Z.W. Geem, J.H. Kim, G.V. Loganathan, Harmony search optimization: application to pipe network design, Int. J. Model. Simul. 22 (2) (2002) 125–133.
[22] Z.W. Geem, Particle-swarm harmony search for water network design, Eng. Optim. 41 (4) (2009) 297–311.
[23] H.G. Schuster, Deterministic Chaos: An Introduction, 2nd revised ed., Physik-Verlag GmbH, Weinheim, Federal Republic of Germany, 1988.
[24] G. Heidari-Bateni, C.D. McGillem, A chaotic direct-sequence spread spectrum communication system, IEEE Trans. Commun. 42 (2–4) (1994) 1524–1527.
[25] K. Wong, K.P. Man, S. Li, X. Liao, More secure chaotic cryptographic scheme based on dynamic look-up table, Circuits Syst. Signal Process. 24 (5) (2005) 571–584.
[26] M. Suneel, Chaotic sequences for secure CDMA, Ramanujan Inst. Adv. Study Math. (2006) 1–4.
[27] P. Arena, R. Caponetto, L. Fortuna, A. Rizzo, M. La Rosa, Self organization in non recurrent complex systems, Int. J. Bifur. Chaos 10 (5) (2000) 1115–1125.
[28] G. Manganaro, G.J. de Pineda, DNA computing based on chaos, in: Proc. IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, 1997, pp. 255–260.
[29] F. Han, J. Hu, X. Yu, Y. Wang, Fingerprint images encryption via multi-scroll chaotic attractors, Appl. Math. Comput. 185 (2) (2007) 931–939.
[30] P. Chakraborty, G.G. Roy, S. Das, D. Jain, A. Abraham, An improved harmony search algorithm with differential mutation operator, Fundamenta Informaticae 95 (4) (2009) 401–426.
[31] S.O. Degertekin, Optimum design of steel frames via harmony search algorithm, in: Z.W. Geem (Ed.), Harmony Search Algorithms for Structural Design Optimization, Springer, 2009, pp. 51–78.
[32] R.M. May, Simple mathematical models with very complicated dynamics, Nature 261 (1976) 459.
[33] H. Peitgen, H. Jurgens, D. Saupe, Chaos and Fractals, Springer-Verlag, Berlin, Germany, 1992.
[34] W.M. Zheng, Kneading plane of the circle map, Chaos Solitons Fractals 4 (1994) 1221.