Journal of Experimental & Theoretical Artificial Intelligence. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/teta20
Metaheuristics: review and application. Anupriya Gogna (ECE Department, Northern India Engineering College, GGSIP University, FC-26, Shastri Park, Delhi 110053, India) and Akash Tayal (ECE Department, Indira Gandhi Institute of Technology, GGSIP University, Delhi, India). Published online: 20 May 2013.
To cite this article: Anupriya Gogna & Akash Tayal (2013) Metaheuristics: review and application, Journal of Experimental & Theoretical Artificial Intelligence, 25:4, 503-526, DOI: 10.1080/0952813X.2013.782347
To link to this article: http://dx.doi.org/10.1080/0952813X.2013.782347
Metaheuristics: review and application
Anupriya Gogna a,* and Akash Tayal b
a ECE Department, Northern India Engineering College, GGSIP University, FC-26, Shastri Park, Delhi 110053, India; b ECE Department, Indira Gandhi Institute of Technology, GGSIP University, Delhi, India
(Received 24 May 2012; final version received 30 September 2012)
The area of metaheuristics has grown immensely in the past two decades as a source of solutions to real-world optimisation problems. Metaheuristics perform well in situations where exact optimisation techniques fail to deliver satisfactory results. For complex optimisation problems (nondeterministic polynomial time (NP)-hard problems), metaheuristic techniques are able to generate good-quality solutions in much less time than traditional optimisation techniques. Metaheuristics find applications in a wide range of areas including finance, planning, scheduling and engineering design. This paper presents a review of various metaheuristic algorithms, their methodology, recent trends and applications.
Keywords: application of metaheuristics; memetic algorithm; P-metaheuristics; S-metaheuristics
1. Introduction
Most real-world problems have high complexity, non-linear constraints, interdependencies
amongst variables and a large solution space. This warrants the use of a technique that is capable
of solving complex optimisation problems in real time. Metaheuristic algorithms are one such
technique. They are optimisation methods that deliver reasonably good solutions in a reasonable amount of time. Optimisation is concerned with finding the best values of a set of variables in order to minimise (or maximise) an objective function subject to a given set of constraints. The need for optimisation is inherent in almost all walks of life, ranging from daily life to financial and business planning, and from industrial control to engineering design.
Optimisation techniques can be broadly classified into exact and approximate methods
(Talbi, 2009). Exact methods obtain optimal solutions and guarantee their optimality. This
category includes methods such as branch and bound algorithm, dynamic programming,
Bayesian search algorithms and successive approximation methods. Approximate methods are
aimed at providing good-quality solutions (instead of guaranteeing a global optimum solution) in a reasonable amount of time. Approximate methods can be further split into approximation algorithms and heuristic methods (Talbi, 2009). The former provide provable solution quality and provable run-time bounds, whereas the latter aim to find reasonably good solutions in a reasonable time. Heuristic algorithms are highly problem specific. Metaheuristics form a class of algorithms that act as a guiding mechanism for the underlying heuristics. They are not problem or domain specific and can be applied to any optimisation problem. The term
metaheuristics was introduced by Glover (1986).
© 2013 Taylor & Francis
*Corresponding author. Email: [email protected]
Journal of Experimental & Theoretical Artificial Intelligence, 2013
Vol. 25, No. 4, 503–526, http://dx.doi.org/10.1080/0952813X.2013.782347
Metaheuristics is formally defined as an iterative generation process that guides a
subordinate heuristic by combining intelligently different concepts for exploring and exploiting
the search space; learning strategies are used to structure information in order to find efficiently
near-optimal solutions (Osman & Laporte, 1996). Metaheuristic algorithms create a balance
between intensification and diversification of the search space. They can be used to efficiently
solve NP-hard problems with a large number of variables and non-linear objective functions.
Development in the field of metaheuristics largely stems from the importance of complex
optimisation problems to the industrial and scientific world. In the past two decades, many
metaheuristic algorithms have been proposed. Some of them are genetic algorithms (GA;
Holland, 1975), memetic algorithm (MA; Moscato, 1989), artificial immune system (AIS;
Farmer, Packard, & Perelson, 1986), simulated annealing (SA; Kirkpatrick, Gellat, & Vecchi,
1983), tabu search (TS; Glover & Laguna, 1997), ant colony optimisation (ACO; Dorigo, 1992),
particle swarm optimisation (PSO; Kennedy & Eberhart, 1995) and differential evolution (DE;
Price, Storn, & Lampinen, 2006). The need for such a large number of metaheuristic algorithms arises from the implications of the no free lunch (NFL) theorem proposed by Wolpert and Macready (1997). According to the NFL theorem, the averaged performance over all possible
problems is the same for all algorithms. Thus, no global distinction can be made between
performances of any two algorithms apart from conditions where one algorithm is more suited to
an application than the other. Metaheuristics find application in areas such as telecommunication, engineering design, machine learning, logistics and business planning.
This paper is organised as follows. Section 2 describes classification of metaheuristics.
Section 3 describes single-solution-based metaheuristic algorithms. Population-based
metaheuristic algorithms are discussed in Section 4. Section 5 discusses the applications of
metaheuristics. Finally, concluding remarks are mentioned in Section 6.
2. Classification of metaheuristics
There are different ways to classify metaheuristic algorithms based on characteristics used to
differentiate amongst them (Talbi, 2009). They include:
- Nature inspired versus non-nature inspired
The former are inspired by natural phenomena such as evolution (GA) or bird-flocking behaviour (PSO). The non-nature-inspired ones include tabu search and iterated local search (ILS).
- Population based versus single-point search
Single-solution-based methods, also called trajectory methods, manipulate a single solution. Population-based methods iterate on and manipulate a whole family of solutions. Single-solution-based techniques (e.g. TS, SA and local search) are intensification oriented, while population-based techniques (e.g. GA and PSO) are more focused on exploration of the search space. This classification is used in our paper.
- Iterative versus greedy
Iterative algorithms (e.g. PSO and SA) start from a solution (or set of solutions) that is manipulated as the search process proceeds. Greedy algorithms (e.g. ACO) start from an empty solution, and at each stage a decision variable is assigned a value, which is added to the solution set.
- Dynamic versus static objective function
The algorithms with static objective function (e.g. PSO) keep the objective function given in
the problem representation ‘as it is’ while others (e.g. guided local search) modify it during the
search process.
- Memory usage versus memory-less
Memory-less algorithms (e.g. SA) use information only about the current state of the search, while others make use of information gathered during the iterative process (e.g. TS).
- One versus multiple neighbourhood structures
Most metaheuristic algorithms work on a single neighbourhood structure (e.g. ILS). Other metaheuristics use a set of neighbourhood structures (e.g. variable neighbourhood search).
3. Single-solution-based metaheuristics (S-metaheuristics)
S-metaheuristics apply a generation and replacement procedure to a single solution. In the generation stage, a set of candidate solutions is generated from the current solution. In the replacement stage, a suitable solution from the generated set is chosen to replace the current solution. This process continues until a satisfactory (good-quality) solution is obtained. The main concepts of S-metaheuristics are:
- Generation of initial solution
The initial solution can be generated either randomly or by using a greedy heuristic. The latter technique is more complex but results in faster convergence.
- Definition of neighbourhood
A neighbourhood function N is a mapping, N: S → 2^S, that assigns to each solution s of S (the search space) a set of solutions N(s) ⊂ S. In a continuous space, the neighbourhood N(s) of a solution s is the ball with centre s and radius r > 0. For a discrete optimisation problem, the neighbourhood N(s) of a solution s is the set {s′ | d(s′, s) ≤ r}, where d is a distance measure related to the move operator. A solution s′ in the neighbourhood of s (s′ ∈ N(s)) is called a neighbour of s. A neighbourhood with strong locality leads to better performance. Choosing a rich neighbourhood improves the chances of finding a good solution but also increases the computation time. Some of the S-metaheuristic algorithms are described in this section.
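The discrete neighbourhood above can be illustrated with a short sketch (our illustration, not part of the original formulation): a bit-flip move operator on binary strings induces a Hamming-distance-1 neighbourhood.

```python
# Illustrative sketch: the neighbourhood N(s) induced by a bit-flip move
# operator, i.e. all binary strings at Hamming distance 1 from s.
def bit_flip_neighbourhood(s):
    neighbours = []
    for i in range(len(s)):
        flipped = s[:i] + (1 - s[i],) + s[i + 1:]  # flip the i-th bit
        neighbours.append(flipped)
    return neighbours
```

For s = (0, 1, 0) this yields the three neighbours (1, 1, 0), (0, 0, 0) and (0, 1, 1), one per flipped bit.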
3.1 Simulated annealing
The SA algorithm was proposed by Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller (1953). It was first applied in the field of optimisation by Kirkpatrick et al. (1983). SA is based on the annealing process in statistical mechanics, which involves heating a substance and then gradually cooling it to obtain a high-strength crystalline structure. The aim of annealing is to reach the lowest energy state while minimising the total entropy production. The analogy between a physical system and an optimisation problem is stated in Table 1. SA is a neighbourhood-search, memory-less algorithm with the capability to escape local optima and hence avoid premature convergence.
3.1.1 Methodology of SA
The SA algorithm starts with an initial solution, and at each iteration a new solution (a random neighbour) is generated using the neighbourhood strategy adopted. If the value of the objective
function for the new solution is lower (for a minimisation problem), then it is accepted and the existing solution is replaced. If the cost function of a new neighbour is higher than that of the current solution, then a probability measure is applied to decide whether or not to replace the original solution. The probability measure is defined as

P = exp(−ΔE/(KT)),   (1)

where ΔE is the change in energy (cost function) between the old and candidate solutions, K is the Boltzmann constant and T is the control parameter value at a given iteration.
The probability measure allows the selection of non-improving solutions, thereby helping the algorithm escape local optima. The algorithm starts with a high initial temperature to ensure that a large number of solutions can be accepted in the beginning. As the iterations proceed, the temperature (T) reduces and the algorithm begins to converge towards an optimum solution.
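Equation (1) translates directly into an acceptance test; the sketch below is illustrative, with K set to 1 (a common simplification in optimisation practice, not a value from the paper) and a pluggable random source.

```python
import math
import random

# Metropolis acceptance rule of Equation (1); K defaults to 1, a common
# simplification in optimisation practice (an assumption, not from the paper).
def accept(delta_e, temperature, k=1.0, rng=random.random):
    if delta_e <= 0:                       # improving move: always accept
        return True
    # worse move: accept with probability P = exp(-delta_e / (K * T))
    return rng() < math.exp(-delta_e / (k * temperature))
```

At high temperature almost any worse move is accepted; as T falls, the acceptance probability for a given ΔE shrinks towards zero.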
3.1.2 Pseudocode for SA
Select algorithm's parameters
Bounds of solution space, h_b, l_b
Initial temperature T0 ≥ 0
Cooling schedule CT(t)
Maximum iterations at fixed temperature, N
Objective function f(*)
Begin
Generate initial solution s ∈ S
s = rand × (h_b − l_b) + l_b
Start iteration at initial temperature
T = T0
Calculate fitness value for initial solution
val = f(s)
Initialize repetition counter
n = 0
Table 1. Analogy between physical system and optimisation problem.

Physical system     Optimisation problem
States              Solution
Energy              Objective function
Ground state        Optimal solution
Temperature         Control parameter
Change of state     Neighbour
Repeat
While n ≤ N
Generate candidate solution in neighbourhood of s, s′ ∈ S
Compute fitness value for candidate
val_new = f(s′)
If val > val_new
s = s′; val = val_new
elseif exp(−(val_new − val)/(KT)) > rand
s = s′; val = val_new
end if
Increment iteration counter
n = n + 1
End while
Reduce temperature as per cooling schedule
T = CT(T)
Until stopping criterion is met
The stopping criterion could be a minimum temperature being reached, no change in fitness over a given number of iterations, etc. A list of various cooling schedules is given in Section 3.1.3.
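As a complement to the pseudocode, the following compact, runnable sketch (our illustration; all parameter values are arbitrary) minimises f(x) = x² on [−5, 5] using a Gaussian neighbourhood move and an exponential cooling schedule:

```python
import math
import random

# Runnable sketch of the SA loop above, minimising f(x) = x^2 on [-5, 5].
# All parameter values are illustrative; K is taken as 1.
def simulated_annealing(f, l_b, h_b, t0=10.0, cooling=0.95,
                        n_inner=20, t_min=1e-3, seed=0):
    rng = random.Random(seed)
    s = rng.uniform(l_b, h_b)                 # initial solution
    val = f(s)
    t = t0
    while t > t_min:                          # stop at minimum temperature
        for _ in range(n_inner):              # N iterations at fixed temperature
            s_new = min(h_b, max(l_b, s + rng.gauss(0, 1)))  # random neighbour
            val_new = f(s_new)
            # accept improving moves, and worse moves with probability exp(-dE/T)
            if val_new < val or rng.random() < math.exp(-(val_new - val) / t):
                s, val = s_new, val_new
        t *= cooling                          # exponential cooling schedule
    return s, val
```

For the quadratic above, the returned solution settles close to the optimum at x = 0 as the temperature falls and only improving moves survive.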
3.1.3 Variations and advancements in SA
Cooling schedules affect the total entropy production in the process. Hence, they have a
significant impact on the performance of the algorithm. Because of this, several cooling
strategies have been proposed (Nourani & Andresen, 1998). The most commonly used cooling strategies are as follows:
- Exponential, T(t) = a^t × T0: it helps keep the system close to equilibrium.
- Linear, T(t) = T0 − bt: it is the most popular strategy used.
- Logarithmic, T(t) = c/log(d + t): it gives guaranteed, but very slow, convergence.
- Very slow decrease, T(t) = T(t − 1)/(1 + bT(t − 1)).
- Non-monotonic (Hu, Kahng, & Tsao, 1995), in which the temperature rises again. It helps improve the algorithm's finite-time performance.
- Adaptive, in which the cooling rate varies during the search based on information gathered (Ingber, 1996).
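These schedules can be written directly as functions; the sketch below is illustrative, with a, b, c and d left as tunable constants rather than values prescribed by the paper:

```python
import math

# Illustrative sketches of the cooling schedules above; a, b, c and d are
# tunable constants, and t is the iteration (time) index.
def exponential(t0, a, t):
    return (a ** t) * t0                 # T(t) = a^t * T0

def linear(t0, b, t):
    return t0 - b * t                    # T(t) = T0 - b*t

def logarithmic(c, d, t):
    return c / math.log(d + t)           # T(t) = c / log(d + t)

def very_slow(t_prev, b):
    return t_prev / (1 + b * t_prev)     # T(t) = T(t-1) / (1 + b*T(t-1))
```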
3.1.4 Features and considerations for SA
SA can be easily implemented to give good solutions if certain precautions are taken into
account. First, the initial temperature should be set adequately high. Too low a temperature can
cause the system to get stuck in local optima, and a very high value of T can cause difficulty in reaching the optimum solution. Also, there should be a gradual reduction in the control parameter T. The iterations at each temperature should be enough to stabilise the system. The employed cooling schedule greatly influences the quality of the solution. SA is well suited to problems with a rough landscape (a large number of local optima); if the optimisation problem has a smooth landscape with few local optima, SA is not of much use and unnecessarily delays convergence.
3.2 Tabu search
The TS algorithm, a local search methodology, was proposed by Glover in 1986 (Glover & Laguna, 1997). It uses information gathered during the iterations (stored in memory) to make the search process more efficient. As in the case of SA, it also accepts non-improving solutions to move out of local optima. However, the TS algorithm searches the whole neighbourhood deterministically, unlike SA, which employs a random search. The distinguishing feature of TS is the use of a short-term memory that prevents previously visited solutions from being accepted again. This speeds up the attainment of the optimum solution.
3.2.1 Methodology of TS
The TS algorithm starts with an initial (randomly selected) solution. It maintains a tabu list (short-term memory) that keeps a record of previously visited solutions. Any move (solution) stored in the tabu list is not accepted. This prevents the algorithm from being trapped in cycles. At each iteration, new candidate solutions are generated. A new solution is accepted, in accordance with the contents of the tabu list, if it is better than the current one. If no better solution exists, the best neighbour is selected to replace the existing solution. Worse solutions are also accepted to move out of local optima. Medium- and long-term memories can also be used to enhance the intensification and diversification of the TS algorithm (Talbi, 2009). The former stores the best solutions obtained during the search, and the latter helps in exploring unexplored areas. Aspiration criteria are another feature; they override the tabu list and allow tabu moves if these result in better solutions.
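The short-term memory can be sketched as a fixed-length first-in, first-out list; the code below is our illustration (the move strings are hypothetical), not the authors' implementation:

```python
from collections import deque

# Illustrative sketch of a short-term tabu memory: a fixed-length FIFO list;
# when the list is full, the earliest entry is dropped automatically.
class TabuList:
    def __init__(self, max_size):
        self._entries = deque(maxlen=max_size)   # deque discards oldest when full

    def add(self, move):
        self._entries.append(move)

    def is_tabu(self, move, aspiration=False):
        # an aspiration criterion overrides the tabu status of a move
        return move in self._entries and not aspiration

tabu = TabuList(max_size=2)
tabu.add("swap(1,2)")
tabu.add("swap(3,4)")
tabu.add("swap(5,6)")          # evicts "swap(1,2)", the earliest entry
```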
3.2.2 Pseudocode for TS
Select algorithm's parameters
Bounds of solution space, h_b, l_b
Objective function f(*)
Maximum size of tabu list, s_t
Begin
Initialize tabu list, medium term and long term memory
Tabu list = Ø
Medium term memory = Ø
Long term memory = Ø
Generate initial solution s ∈ S
s = rand × (h_b − l_b) + l_b
Calculate fitness value for initial solution
val = f(s)
Repeat
Generate candidate solution in neighbourhood of s, s′ ∈ S
if s′ ∉ tabu list or s′ satisfies aspiration criteria
Compute fitness value for candidate
val_new = f(s′)
If val > val_new
s = s′; val = val_new
end if
end if
Update tabu list, aspiration criteria, medium and long term memory
If tabu list size > s_t
Delete earliest entry
end if
Until stopping criterion is met
The stopping criterion could be a minimum value of the fitness function, a maximum number of iterations or a maximum number of iterations without improvement, etc.
3.2.3 Variations and advancements in TS
Advancements to the original TS scheme have been proposed to make the search process more effective. One variation is to store moves or solution attributes, rather than actual solutions, in the tabu list. This reduces the time and memory requirements of the algorithm. Also, the size of the tabu list may be static, dynamic or adaptive. Advancements in the TS algorithm concentrate on better exploitation of the information gathered, efficient neighbourhood operators, better initial solutions and the application of parallel search strategies. An efficient way of utilising the information gathered is to use the elite (best) solutions found so far to generate new solutions and speed up convergence. Other methods, such as reactive TS, attempt to make the search move away from local optima that have already been visited. An exhaustive description of advancements in TS can be found in the study by Gendreau (2002).
3.2.4 Features and considerations for TS
The effectiveness of the TS algorithm depends on the appropriate selection of the neighbourhood operator and search space. This requires in-depth knowledge of the problem at hand. In addition,
diversification should be handled very carefully to avoid getting stuck at local minima. Also, the tabu list should be carefully formulated to make the search effective while minimising computation time and memory requirements. The best feature of TS is its ability to avoid cycles. However, unlike SA, it is not as efficient at moving out of local minima.
4. Population-based metaheuristics (P-metaheuristics)
P-metaheuristics apply a generation and replacement procedure to a family of solutions spread over the search space. A set of solutions is initialised. It is then iterated on and manipulated to generate a new set. The new solution set is selected from amongst the existing and newly generated solutions using a replacement strategy. This process continues until the stopping criterion is met. Most P-metaheuristics are inspired by nature. P-metaheuristic algorithms vary amongst themselves in the way they perform generation and selection, and in how the search memory is organised. The main concepts of such algorithms are:
- Initial population
A well-initialised population leads to an efficient algorithm. Some of the initialisation techniques are summarised in Table 2.
- Stopping criteria
The stopping criteria can be a fixed number of iterations, a minimum value of the objective function or a maximum number of iterations without improvement.
Some of the prominent P-metaheuristic algorithms are described in this section.
4.1 Genetic algorithm
GA, a global search heuristic, was proposed by Holland (1975). It is inspired by Darwin's theory of evolution and is characterised by inherent parallelism.
4.1.1 Methodology of GA
GA begins with a set of solutions, or initial population, which can be generated using any of the techniques mentioned in Table 2. The individuals can be represented in the form of binary strings, tree structures, vectors of real numbers, etc., based on the problem under consideration. Each encoded solution is called a chromosome, with each decision variable in a chromosome being called a gene. From the current population, a set of individuals is selected, based on a selection criterion, to be parents. The selection criterion is based on the fitness function (objective
Table 2. Methods for initialising population.

Method                       Technique                                                   Diversity
Random generation            Solution set initialised using pseudo-random numbers        Medium to high
Sequential diversification   Solutions are generated in sequence to optimise diversity   Very high
Parallel diversification     Solutions generated in a parallel, independent way          Very high
Heuristic initialisation     Any heuristic can be used for initialisation                Low
function) value corresponding to each individual. These parents produce offspring that form part of the new generation. The selection criterion used can be:
- Roulette wheel selection
Here, the selection probability of an individual is directly proportional to its fitness.
- Rank-based fitness assignment
In this method, relative fitness values are associated with individuals. It prevents the selection process from being dominated by highly fit individuals, thus preserving diversity in the population.
- Tournament selection
Here, a set of individuals is randomly chosen and the best amongst them is selected for mating.
- Elitism
In this scheme, a fixed number of the best chromosomes are kept and the remaining population is generated using any of the selection procedures discussed above. This helps preserve the best solutions in the population.
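Two of the selection schemes above can be sketched as follows; this is our illustrative code (fitness values are assumed non-negative, with higher values meaning fitter individuals), not an implementation from the paper.

```python
import random

# Illustrative sketches of roulette wheel and tournament selection; fitness
# values are assumed non-negative and higher-is-better.
def roulette_wheel(population, fitnesses, rng=random):
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:              # selection probability proportional to fitness
            return individual
    return population[-1]                # guard against floating-point round-off

def tournament(population, fitnesses, k=2, rng=random):
    contestants = rng.sample(range(len(population)), k)
    best = max(contestants, key=lambda i: fitnesses[i])   # best of a random subset
    return population[best]
```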
Selected individuals are used to produce new solutions (offspring) using crossover and mutation operators.
- Crossover operators
The crossover operation is used to produce offspring that inherit characteristics from both parents. Crossover is applied with a high probability, typically in the range 0.6-1.0. Some of the crossover techniques are single-point, two-point and uniform crossover. For real-coded GAs (RCGAs), techniques such as arithmetic crossover, heuristic crossover and geometrical crossover can be used.
- Mutation operator
Mutation is a rare phenomenon with a low probability of around 0.001-0.1. It is used to change some information contained in randomly selected chromosomes. It aids the diversification of the population and a larger exploration of the search space.
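As an illustration (ours, with the mutation rate chosen arbitrarily within the quoted range), single-point crossover and uniform mutation for a real-coded chromosome might look like:

```python
import random

# Illustrative sketches of single-point crossover and uniform mutation for a
# real-coded chromosome; the crossover probability pc is applied by the caller.
def single_point_crossover(pa, ma, rng=random):
    point = rng.randrange(1, len(pa))            # cut point between genes
    return pa[:point] + ma[point:], ma[:point] + pa[point:]

def mutate(chromosome, l_b, h_b, pm=0.05, rng=random):
    # each gene is replaced by a uniform random value with probability pm
    return [rng.uniform(l_b, h_b) if rng.random() < pm else gene
            for gene in chromosome]
```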
The next task is to produce a new generation. For this, the old generation needs to be replaced by the new one. The extreme replacement strategies are (Talbi, 2009):
- Generational replacement
The replacement concerns the whole population of size m. The offspring population systematically replaces the parent population.
- Steady-state replacement
At each generation, only one offspring is generated, and it may replace the worst individual of the parent population.
Between these two extreme replacement strategies, many distinct schemes exist that involve replacement of a given number, l (1 < l < m), of individuals of the population.
4.1.2 Pseudocode for GA
Select algorithm’s parameters
Bounds of solution space, h_b, l_b
Population size, NP
Chromosome size, ND
Objective function f(*)
Maximum no. of generations, NG
Crossover probability, pc
Mutation probability, pm
Begin
Generate initial population
pop = rand(NP, ND) × (h_b − l_b) + l_b
Calculate fitness value for initial population
for i = 1:NP
val(i) = f(pop(i,:))
end
Repeat
While no. of offspring ≤ required
Select parents (pa, ma) from current population based on their fitness
If pc > rand
offspring = crossover(ma, pa)
end if
If pm > rand
offspring = mutate(offspring)
end if
End while
Generate new population using some replacement strategy
Until stopping criterion is met
4.1.3 Variations and advancements in GA
In addition to the traditional crossover (e.g. uniform and single-point) and mutation (e.g.
Gaussian and uniform) operators, new operators have been introduced to improve the quality of
solution. Some of the crossover operators are as follows:
- Unimodal normally distributed crossover (Ono & Kobayashi, 1997): three parents are used to generate two or more children.
- Simulated binary crossover (SBX): it is used to generate offspring close to the parents (Deb & Agrawal, 1995).
- Hybrid crossover with multiple descendants (Sanchez, Lozano, Villar, & Herrera, 2009).
- Independent component-analysis-based crossover (Takahashi & Kita, 2001): it shows good search ability.
Variations proposed in the mutation operator are as follows:
- Directed mutation operator (Korejo, Yang, & Li, 2010), which introduces a bias towards promising areas.
- Principal component analysis-based mutation operator (Munteanu & Lazarescu, 1999): it is used to homogenise the components so that a situation where only a few principal components are important (and the rest negligible) is avoided, ensuring diversity.
4.1.4 Features and considerations for GA
GA exhibits inherent parallelism and is a good technique for solving complex optimisation problems. However, it can suffer from premature convergence. The effectiveness of the algorithm depends on the choice of objective function, population size, and probabilities of crossover and mutation; hence, these parameters must be carefully selected. A larger population size and a large number of generations increase the likelihood of obtaining a global optimum solution, but substantially increase the processing time.
4.2 Differential evolution
DE was proposed by Storn and Price (1995). It resembles evolutionary algorithms (EAs) but differs from traditional ones in the way candidate solutions are generated and in its use of a greedy selection scheme.
4.2.1 Methodology of DE
DE begins with an initial randomly generated population. Each solution is a real-number vector; hence, DE is traditionally used for continuous optimisation. At each generation, a new set of candidate solutions is generated. DE, instead of the usual crossover operator, uses a recombination operator based on linear combination. The offspring replace the parents only if they are better. A child is created by mutating existing individuals, largely picked at random from the population. The notation DE/x/y/z is generally used to define a DE strategy (Talbi, 2009), where x specifies the vector to be mutated, y is the number of difference vectors used and z denotes the crossover scheme. The commonly used crossover variants are binomial and exponential. The former is similar to uniform crossover and the latter to two-point crossover in GA. The more popular of the two is binomial crossover, which is used in the pseudocode given in Section 4.2.2.
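Under the DE/x/y/z notation, the widely used DE/rand/1/bin strategy can be sketched as below; this is our hedged illustration, with F and cr values chosen as typical defaults rather than taken from the paper.

```python
import random

# Illustrative sketch of the DE/rand/1/bin strategy: mutate a random base
# vector with one scaled difference vector, then apply binomial crossover.
# F (scaling factor) and cr (crossover constant) are typical defaults.
def de_rand_1_bin(target, population, f_scale=0.5, cr=0.9, rng=random):
    # three mutually distinct parents, none of them the target vector
    r1, r2, r3 = rng.sample([p for p in population if p is not target], 3)
    j_rand = rng.randrange(len(target))          # at least one gene is mutated
    trial = []
    for j in range(len(target)):
        if rng.random() < cr or j == j_rand:     # binomial (uniform-like) crossover
            trial.append(r3[j] + f_scale * (r1[j] - r2[j]))
        else:
            trial.append(target[j])
    return trial
```

In the full algorithm, the trial vector replaces the target only if its fitness is better, which is the greedy selection scheme noted above.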
4.2.2 Pseudocode for DE
Select algorithm’s parameters
Bounds of solution space, h_b, l_b
Population size, NP
Chromosome size, ND
Objective function f(*)
Scaling factor, F
Maximum no. of generations, NG
Crossover constant, cr
Begin
Generate initial population
pop = rand(NP, ND) × (h_b − l_b) + l_b
Calculate fitness value for initial population
for i = 1:NP
val(i) = f(pop(i,:))
end
Repeat
Select 3 parents at random
for i = 1:ND
If rand(i) ≤ cr
offspring(i) = parent3(i) + F × (parent1(i) − parent2(i))
else
offspring(i) = parent(i)
end if
end
Compute fitness of offspring
val_new = f(offspring)
if val_new < val
parent = offspring
end if
Until stopping criterion is met
4.2.3 Variations and advancements in DE
DE is a simple and efficient algorithm and hence finds use in a large number of optimisation problems. Many improvements to the original DE algorithm have been proposed. DE can be used with a trigonometric mutation operator (Fan & Lampinen, 2003) or a neighbourhood-based mutation operator. Also, an adaptive crossover operator can be employed. These help in
balancing exploration and exploitation components. A summary of these advancements is given
in the study by Neri and Tirrone (2010).
4.2.4 Features and considerations for DE
The convergence of the DE algorithm can be controlled using the number of parents and the scaling factor. These parameters can be tuned to achieve a trade-off between convergence speed and robustness. Increasing the number of parents and reducing the scaling factor make convergence easier but computationally more intensive. The crossover constant is just a fine-tuning parameter in DE and does not have much impact on convergence speed.
4.3 Particle swarm optimisation
PSO was developed by Kennedy and Eberhart (1995). It is based on swarm behaviour (bird flocking). It is a population-based mechanism but, unlike EA, it maintains a single static population whose members are tweaked in response to the search history. PSO can be easily implemented in a wide range of optimisation problems.
4.3.1 Methodology of PSO
PSO also begins with an initial random population that mimics the birds in a flock. Each solution is called a particle and the population is termed a swarm. An analogue of directed mutation moves each particle in space based on the globally best position attained by any particle in the swarm (gBest, or global best) and the best position attained by the particle concerned (pBest, or personal best). The velocity is updated at each stage using the following equation:

v(t + 1) = w(t) × v(t) + c1 × r1 × (gBest − x(t)) + c2 × r2 × (pBest − x(t)).   (2)

The position is updated using the following equation:

x(t + 1) = x(t) + v(t + 1),   (3)

where w is the inertia weight, r1 and r2 are random numbers, and c1 and c2 are the acceleration constants.
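Equations (2) and (3) for a single particle can be sketched as follows (our illustration; the values of w, c1 and c2 are typical defaults, not values given in the paper):

```python
import random

# Illustrative sketch of Equations (2) and (3) for one particle; w, c1 and c2
# are typical defaults, and r1, r2 are fresh random numbers each step.
def pso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, rng=random):
    r1, r2 = rng.random(), rng.random()
    v_new = [w * vi + c1 * r1 * (g - xi) + c2 * r2 * (p - xi)
             for vi, xi, g, p in zip(v, x, g_best, p_best)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]    # x(t+1) = x(t) + v(t+1)
    return x_new, v_new
```

When a particle already sits at both its personal and the global best, the attraction terms vanish and the velocity simply decays by the inertia weight.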
4.3.2 Pseudocode for PSO
Select algorithm's parameters
Bounds of solution space, h_b, l_b
Swarm size, NS
Dimension of particle, ND
Objective function f(*)
Maximum no. of iterations, NMAX
Begin
Generate initial swarm
Journal of Experimental & Theoretical Artificial Intelligence 515
swarm ← rand(NS, ND) × (h_b − l_b) + l_b
Initialize velocity vector, V
V ← rand(NS, ND)
Calculate fitness value for initial swarm
for i = 1:NS
val(i) ← f(swarm(i,:))
end
Compute gBest and pBest
Repeat
Generate new velocity vector using equation (2)
Compute new position vector, new_swarm, using equation (3)
for i = 1:NS
new_val(i) ← f(new_swarm(i,:))
end
Update pBest and gBest values
Until stopping criterion is met
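The pseudocode above maps directly onto a short program. The following Python sketch implements equations (2) and (3) for a minimisation problem; the parameter values (w, c1, c2), the zero initial velocities and the clamping of positions to the bounds are illustrative simplifications, not choices prescribed by the paper.

```python
import random

def pso(f_obj, l_b, h_b, ND, NS=20, NMAX=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO following equations (2) and (3)."""
    swarm = [[random.uniform(l_b, h_b) for _ in range(ND)] for _ in range(NS)]
    vel = [[0.0] * ND for _ in range(NS)]  # pseudocode draws these at random
    pbest = [p[:] for p in swarm]
    pval = [f_obj(p) for p in swarm]
    g = min(range(NS), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(NMAX):
        for i in range(NS):
            for d in range(ND):
                r1, r2 = random.random(), random.random()
                # Equation (2): inertia plus pulls towards gBest and pBest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (gbest[d] - swarm[i][d])
                             + c2 * r2 * (pbest[i][d] - swarm[i][d]))
                # Equation (3): move the particle; clamped to the bounds here.
                swarm[i][d] = min(h_b, max(l_b, swarm[i][d] + vel[i][d]))
            v = f_obj(swarm[i])
            if v < pval[i]:            # update personal best
                pval[i], pbest[i] = v, swarm[i][:]
                if v < gval:           # update global best
                    gval, gbest = v, swarm[i][:]
    return gbest, gval
```

Keeping w constant is the simplest choice; as Section 4.3.4 notes, reducing w over the iterations shifts the search from exploration to intensification.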
4.3.3 Variations and advancements in PSO
Various modifications to the original PSO algorithm have been introduced. To reduce the
possibility of particles flying out of the problem space, a restriction can be imposed on the velocity
increment at each stage. Inertial weight can also be controlled via fuzzy techniques. Apart from
using the entire swarm to compute gBest, local small neighbourhood-based structures can be used.
This is similar to the neighbourhood topologies used with local search methods. A review of
developments in PSO and their application is given in the study by Eberhart and Shi (2001).
4.3.4 Features and considerations for PSO
Inertia weight decides the expanse of search space explored. A high value of the inertia weight w at the
start of the iterations helps in exploring a wide search space. As iterations proceed, w is reduced in order
to intensify the search. Acceleration factors are used to make the particles follow the best obtained
solution. Low values of acceleration constants allow particles to roam far from target regions before
being tugged back, while high values result in abrupt movement towards the target regions. The
performance of PSO is relatively insensitive to swarm size, provided it is not too small.
4.4 Ant colony optimisation
ACO, proposed by Dorigo (Dorigo, 1992), is based on the cooperative behaviour of
ants. Ants are able to find the shortest way to a food source using the pheromone trail left by other
ants foraging for food. The same principle was applied by Dorigo (1992) for solving
optimisation problems. Ants, while moving around for food, keep depositing pheromones on their
path. As the ants that took the shortest path to the food start returning, the concentration of
pheromone on that trail increases. Ants that move later, foraging for food, follow this trail, and
hence the concentration of pheromones increases further. This can lead to the shortest path being
followed by all the ants. The same principle is employed in optimisation problems to find the best solution.
4.4.1 Methodology of ACO
ACO uses artificial ants as iterative stochastic solution-generating procedures. The process starts
with generation of a random set of ants (solutions). Their fitness is evaluated using an objective
function suited to the concerned problem. Accordingly, pheromone concentration associated
with each possible route (Dorigo, Maniezzo, & Colorni, 1996) is changed in a way to reinforce
good solutions according to the following equation:
Gij(t) = r × Gij(t − 1) + ΔGij,  (4)
where Gij(t) is the pheromone concentration at iteration t, ΔGij is the change in pheromone
concentration between two iterations and r is the pheromone evaporation rate. As the solution
improves, pheromone concentration increases. Based on the updated value of pheromone
concentration, the ant’s path (associated solution) is changed. Evaporation of pheromone is
necessary to avoid getting stuck at local optima.
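Equation (4) can be sketched as a small update routine over a pheromone table. In this Python sketch, every trail decays by the factor r while the edges on a reinforced route receive the deposit ΔGij; the dictionary representation and all names are illustrative assumptions, not from the paper.

```python
def update_pheromones(G, best_route, deposit, r=0.9):
    """Apply equation (4) to a pheromone table G (dict mapping an edge
    (i, j) to its concentration). All edges decay by the factor r; edges
    on best_route additionally receive the deposit (the ΔGij term)."""
    # Decay term of equation (4): r × Gij(t − 1).
    new_G = {edge: r * level for edge, level in G.items()}
    # Reinforcement term ΔGij on the edges of the marked route.
    for edge in zip(best_route, best_route[1:]):
        new_G[edge] = new_G.get(edge, 0.0) + deposit
    return new_G
```

Edges off the good route thus fade towards zero over the iterations, which is the evaporation mechanism the text describes for escaping local optima.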
4.4.2 Pseudocode for ACO
Select algorithm's parameters
Bounds of solution space, h_b, l_b
No. of ants, NS
Dimension of solution, ND
Pheromone evaporation rate, r
Objective function f(*)
Maximum no. of iterations, NMAX
Begin
Generate initial solution set
set ← rand(NS, ND) × (h_b − l_b) + l_b
Calculate fitness value for initial solution set
for i = 1:NS
val(i) ← f(set(i,:))
end
Repeat
Generate new solution set, set′
Compute fitness of the new solutions
new_val ← f(set′)
For each ant, determine its best position
Determine the best global ant
Mark better solutions/routes with pheromone, ΔGij
Update pheromone concentration using equation (4)
Until stopping criterion is met
4.4.3 Variations and advancements in ACO
Modifications have been made in the traditional ACO algorithm to enhance its performance. An elitist
strategy was introduced in the study by Dorigo et al. (1996). The elitist strategy gives a strong
additional weight to the best route from the beginning. This helps the search process to move towards
the best solution and also increases the convergence speed. We can also introduce a limit on the value
of the pheromone trails.
4.4.4 Features and considerations for ACO
ACO has an inherent parallelism with an in-built positive feedback mechanism that helps in
reaching the optimum solution faster. The evaporation (of pheromones) phenomenon is used to
prevent premature convergence. The value of evaporation constant ranges from 0.01 to 0.2
(Talbi, 2009). ACO can be used to find solutions for complex optimisation problems if the
parameters of the algorithm are carefully chosen.
4.5 Memetic algorithms
The term memetic algorithm (Moscato, 1989), first introduced in 1989, refers to a broad class of
metaheuristic algorithms that exhibit a hybrid approach. The idea behind MAs is to combine the
evolutionary concept of GA (survival of the fittest, and passing on of traits of parents to
offspring) with the idea of every individual (of the population) developing on its own during its
lifetime. It also aims at introducing problem-dependent knowledge into the search process. It is
one of the most powerful techniques of solving NP-hard optimisation problems.
4.5.1 Methodology of MA
MA integrates EA (such as GA) and a local search procedure (e.g. hill climbing) to generate a
hybrid algorithm with features of both its constituents. In MA, agents are processing units that are
capable of holding multiple solutions (the individuals of an EA) and methods to improve them. These
agents interact with each other through cooperation and competition.
MA starts with generating an initial population of solutions (individuals). This population can
either be randomly generated or, for better results, it can be generated using a heuristic
(problem-specific) approach. The next step is the generation of a new population, which involves
the use of several operators to generate offspring. The first is the selection operator, which
selects the best individuals from the population in a way similar to that used in EAs. It is followed
by the recombination of the selected individuals to produce offspring, as in the case of GA.
The generated offspring undergo mutation to enhance diversity in the population. In the next step, a
speciality of MA, a local search operator is used to cause random variations in the generated
population. This operation is repeated in order to search, in the neighbourhood of each
individual, for a candidate solution with better fitness than the original solution (as in SA). It
mimics the biological phenomenon of an individual learning during its lifetime. This is followed
by the updating of the population, which follows either of the following two approaches:
. Plus strategy: a union of the sets containing the existing and the new population is formed, and
the best individuals are selected from it to form the next generation.
. Comma strategy: the new generation is selected from only the newly generated solution set.
The distinguishing and compulsory feature of MA is the restart population step. If the
population converges, i.e. no further improvement is likely, the entire process restarts. The best
individual from the current population is retained, while others are generated randomly as in the
case of the initial population. This ensures that there is no premature convergence. The process
continues till satisfactory results are achieved.
4.5.2 Pseudocode for MA
Select algorithm's parameters
Bounds of solution space, h_b, l_b
Population size, NP
Chromosome size, ND
Objective function f(*)
Maximum no. of generations, NG
Crossover probability, pc
Mutation probability, pm
Begin
Generate initial population
pop ← rand(NP, ND) × (h_b − l_b) + l_b
Calculate fitness value for initial population
for i = 1:NP
val(i) ← f(pop(i,:))
end
Repeat
While no. of offspring < required
Select parents (pa, ma) from current population based on their fitness
If pc > rand
offspring ← crossover(ma, pa)
If pm > rand
offspring ← mutate(offspring)
Apply local search procedure
End while
Update population based on chosen strategy
If plus strategy
new_pop ← pop ∪ offspring
end if
If comma strategy
new_pop ← offspring
end if
If population converges, restart
pop ← rand(NP, ND) × (h_b − l_b) + l_b
end if
Until stopping criterion is met
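A minimal Python sketch of the loop above, using tournament selection, arithmetic crossover, a simple mutation operator, hill-climbing local search and the plus strategy. All operator choices and parameter values are illustrative assumptions, and the population-restart step is omitted for brevity.

```python
import random

def local_search(x, f_obj, step=0.1, tries=10):
    """Hill-climbing refinement of one individual (the 'lifetime
    learning' step of MA); step size and budget are illustrative."""
    best, best_val = x[:], f_obj(x)
    for _ in range(tries):
        cand = [v + random.uniform(-step, step) for v in best]
        val = f_obj(cand)
        if val < best_val:
            best, best_val = cand, val
    return best

def memetic(f_obj, l_b, h_b, ND, NP=20, NG=50, pc=0.9, pm=0.1):
    """Minimal MA for minimisation: selection, crossover, mutation,
    local search, then a plus-strategy population update."""
    pop = [[random.uniform(l_b, h_b) for _ in range(ND)] for _ in range(NP)]
    for _ in range(NG):
        offspring = []
        while len(offspring) < NP:
            # Binary tournament selection of two parents.
            pa, ma = (min(random.sample(pop, 2), key=f_obj) for _ in range(2))
            # Arithmetic crossover with probability pc.
            child = ([(a + b) / 2 for a, b in zip(pa, ma)]
                     if random.random() < pc else pa[:])
            # Mutation with probability pm.
            if random.random() < pm:
                child = [v + random.uniform(-0.5, 0.5) for v in child]
            # Local search applied to every offspring.
            offspring.append(local_search(child, f_obj))
        # Plus strategy: next generation = best of parents plus offspring.
        pop = sorted(pop + offspring, key=f_obj)[:NP]
    return min(pop, key=f_obj)
```

Swapping the plus-strategy line for `pop = sorted(offspring, key=f_obj)[:NP]` would give the comma strategy described above.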
4.5.3 Variations and advancements in MA
There is scope for variation in the operators used with MAs. Traditionally, uniform crossover is
used. The recombination operator can also be a hybrid or heuristic operator with a problem-
specific design; this makes the search process more efficient. The way and order in which the
mutation, local search and recombination operators are applied can also be changed.
A combination of GA with various local search techniques, such as SA or TS, can also be used.
Techniques have been introduced to alter the neighbourhood definition. A study of these issues
is given by Santos and Santos (2004).
4.5.4 Features and considerations for MA
MA exploits problem knowledge by incorporating heuristics, approximation algorithms, local
search techniques and truncated exact methods. It aims at creating a balance between the explorative
feature of EA and the exploitative feature of local search algorithms. For a good MA, the solution
representation must be carefully selected. Also, careful design and use of operators tuned to the
problem under consideration generate better results.
4.6 Artificial immune system
The immune system is a highly robust, adaptive and inherently parallel system with learning
capabilities. These features inspired the design of an optimisation approach based on the natural
immune system. The field of AIS applies the concept of immunology to optimisation and
machine learning. Immune system can be divided into two parts: innate immune system and
adaptive immune system. The former is a static mechanism that detects and destroys certain
invading organisms. The latter system shows the capability of responding to unknown foreign
bodies and builds a response to them that can remain in the body over a long period of time. The
biological processes that are simulated to design AIS algorithms (Talbi, 2009) include pattern
recognition, clonal selection for B cells, negative selection of T cells, affinity maturation, danger
theory and immune network theory.
Clonal selection theory: It is a widely used theory for modelling immune system response to
antigens (foreign bodies). It is based on the concept of cloning and affinity (measure of
interaction between system components) maturation. When the body is exposed to an antigen,
B cells producing antibodies that best bind to the antigen clone themselves. This leads to the
production of a large amount of the required antibodies, thus fighting the infection rapidly. This process is a
kind of selection (only desired B-cell clone) based on fitness criteria (affinity measure). In
addition to this, a high-rate mutation (somatic mutation) is applied to diversify the population.
One of the main algorithms based on clonal selection theory, the CLONALG algorithm (de
Castro & Zuben, 2000), was proposed in 2000.
Negative selection mechanism: It is a mechanism by which self-cells (body’s own cells) are
protected from attack by antibodies. It allows only those cells (lymphocytes) to survive that do
not respond to self-cells. The antibodies that attach to self-cells are eliminated. Negative
selection algorithm (Forrest, Perelson, Allen, & Cherukuri, 1994) is based on the principles of
the maturation of the T cells and their ability to distinguish between self-/non-self cells. It was
put forth in 1994.
Immune network theory: According to immune network theory, immune cells interact not
only with the antigens but also amongst themselves by means of stimulating, activating and
suppressing each other. aiNET (de Castro & Zuben, 2001) is one of the most popular algorithms
based on immune network theory.
4.6.1 Methodology of AIS (CLONALG)
Implementation of AIS requires representation of solution, description of affinity measure,
selection of cells to be cloned, mutation to increase diversity and elimination of undesired cells.
Representation of the solution can be done in any of the traditional ways (e.g. binary or real-valued
strings) based on the optimisation problem. Affinity measure can be described in terms of a
distance metric (e.g. Euclidean distance, Manhattan distance or Hamming distance). For
CLONALG, an initial population of solutions is generated randomly. A set of solutions from the
population is selected (based on the selection criteria used) and then cloned. The clone size is
directly proportional to the affinity measure. It is followed by mutation, whose rate is inversely
proportional to the affinity measure. It helps in escaping out of local minima. Fitness of new
population is evaluated, and fitter members of the new population replace the weakest ones in
the existing population. Alongside, some random solutions are also added to increase diversity.
4.6.2 Pseudocode for CLONALG
Select algorithm's parameters
Bounds of solution space, h_b, l_b
Antibody population size, NP
Dimension of antibody, ND
Objective function f(*)
Begin
Generate initial population
antibody ← rand(NP, ND) × (h_b − l_b) + l_b
Calculate fitness value for initial population
for i = 1:NP
val(i) ← f(antibody(i,:))
end
Repeat
Select a predetermined number of antibodies with the highest affinities
Clone the selected antibodies
Maturate (mutate) the cloned antibodies
Evaluate all cloned antibodies
Add some of the best cloned antibodies to the pool of antibodies
Remove the worst members of the antibody pool
Add new random antibodies into the population
Until stopping criterion is met
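The loop above can be sketched in Python as follows. The clone-size and mutation-rate schedules are simple illustrative choices that respect the proportionality rules of Section 4.6.1 (clone size grows with affinity, mutation rate shrinks with it); all parameter names and values are assumptions, not from the CLONALG paper.

```python
import random

def clonalg(f_obj, l_b, h_b, ND, NP=20, n_select=5, n_random=2, NMAX=50):
    """Minimal CLONALG-style loop for minimisation (lower objective
    value is treated as higher affinity)."""
    pop = [[random.uniform(l_b, h_b) for _ in range(ND)] for _ in range(NP)]
    for _ in range(NMAX):
        pop.sort(key=f_obj)  # highest-affinity antibodies first
        clones = []
        for rank, ab in enumerate(pop[:n_select]):
            n_clones = n_select - rank      # more clones for higher affinity
            rate = 0.1 * (rank + 1)         # less mutation for higher affinity
            for _ in range(n_clones):
                # Maturation: perturb each gene, clamped to the bounds.
                clones.append([min(h_b, max(l_b, v + random.uniform(-rate, rate)))
                               for v in ab])
        # Fitter clones displace the worst members of the antibody pool.
        pop = sorted(pop + clones, key=f_obj)[:NP - n_random]
        # Fresh random antibodies maintain diversity.
        pop += [[random.uniform(l_b, h_b) for _ in range(ND)]
                for _ in range(n_random)]
    return min(pop, key=f_obj)
```

Because the sorted truncation always retains the current best antibody, the random insertions add diversity without ever losing the incumbent solution.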
4.6.3 Variations and advancements in AIS
Several new features and immune theories have been incorporated in AIS in the past decade.
One of them is the danger theory (Matzinger, 2001). The idea behind this theory is that the immune
system does not respond to foreign elements as such, but to danger; there is thus no need to discriminate
between internal (self) and foreign elements. A summary of many changes proposed in AIS is presented in
the study by Dasgupta (2006).
4.6.4 Features and considerations for AIS
Implementation of an AIS algorithm should take note of the nature of the problem's data and
choose a representation that intuitively maps the data's characteristics. Also, the affinity measure and
distance metric used can carry some bias reflecting the problem definition; however, care
should be taken not to introduce undesirable bias.
5. Application of metaheuristics
Metaheuristic algorithms are used in a range of real-world optimisation problems. The
application areas of metaheuristics include communication, image and signal processing,
scheduling problems, very-large-scale integration design and financial planning. Table 3 lists the
application areas of metaheuristic algorithms reviewed in this paper.
A comprehensive list of various metaheuristic techniques and their application is given in the
study by Osman and Laporte (1996).
6. Conclusion
Metaheuristic algorithms have garnered immense interest in the past two decades because of
their applicability to real-world optimisation problems. Metaheuristics are well suited to solving
NP-hard as well as multi-objective optimisation problems. Traditional optimisation methods fail
to deliver satisfactory result in such cases because of the complexity of problem structure and the
size of problem instance.
The salient features of metaheuristic algorithms that make them better suited to real-world
optimisation problems are mentioned below.
. General applicability
Table 3. Application area(s) of metaheuristics.

Simulated Annealing (SA)
† Job shop scheduling in production systems
† Vehicle routing in transport and logistics management
† Communication: mobile network design, routing, channel allocation

Tabu Search (TS)
† Resource allocation in industry, universities, etc.
† Engineering technology: cell placement in VLSI, power distribution, structural design
† Artificial intelligence: pattern recognition, data mining, clustering

Genetic Algorithm (GA)
† Financial planning, stock prediction
† Image processing: compression, segmentation
† Sequencing in FMS (flexible manufacturing systems)

Differential Evolution (DE)
† Signal and image processing
† Product design: aerodynamics
† Chemical engineering

Particle Swarm Optimization (PSO)
† Telecommunications: network design, routing
† Neural network training
† System simulation and identification
† Decision making and planning
† Signal processing

Ant Colony Optimization (ACO)
† Set problems: set partitioning and covering, maximum independent set, bin packing
† Scheduling: flow shop scheduling, process planning
† Bioinformatics: protein folding, DNA sequencing

Memetic Algorithm (MA)
† Machine learning: neural network training, pattern classification
† System engineering
† Scheduling: production planning, rostering
† Set problems: bin packing, set covering

Artificial Immune System (AIS)
† Data mining and data analysis
† Clustering and classification
† Network and computer security
† Engineering design optimization
Exact methods and heuristic techniques are problem-specific methodologies. For each
problem, a new mechanism needs to be adopted, and learning from one application cannot be
easily transferred to another problem. In the case of metaheuristics, however, the task is just to
adapt the algorithm to the specific problem rather than starting from scratch.
. Reasonable computation time
Even for highly complex problem structures, metaheuristic algorithms can generate near-
optimal solutions in a reasonable amount of time for real-time applications.
. Find global optima
They are more suited to problems with multiple local minima than exact methods that have
higher chances of being stuck at local optima. Metaheuristic algorithms create a good balance
between intensification and diversification of search space.
However, metaheuristic algorithms also have their own set of limitations. They are listed
below:
. Metaheuristic algorithms do not guarantee optimal solutions.
. It is not easy to theoretically prove the efficiency of the algorithms; studies usually rely on
empirical results to do so.
. Development times of algorithmic frameworks are often high.
. Each metaheuristic algorithm has its own set of advantages and limitations that makes it
more suitable to a particular kind of application; finding the best-suited algorithm is a
challenging task.
S-metaheuristic algorithms are aimed at intensification: they are good at exploiting promising
areas of the search space. However, they are local search algorithms and are thus not able to
efficiently explore the search space. This drawback is overcome by population-based methods,
which iterate over a set of solutions, leading to better coverage and exploration of the search space.
The drawbacks of both can be eliminated by the use of hybrid metaheuristics, an approach that
combines more than one metaheuristic method for problem solving. Another possibility is the use
of hyper-heuristics, which help in deciding on an optimum sequence of metaheuristic algorithms,
combining the advantages of each, to get the best solution possible. Thus, it can be safely said that
metaheuristics, hybrid metaheuristics and hyper-heuristics provide an efficient way of dealing with
highly complex optimisation problems encountered in industrial and scientific domains.
References
Dasgupta, D. (2006). Advances in artificial immune systems. IEEE Computational Intelligence Magazine,
1(4), 40–49.
de Castro, L. N., & Zuben, F. J. (2000). The clonal selection algorithm with engineering applications. Paper
presented at the Workshop on artificial immune systems and their applications (GECCO’00), Las
Vegas, NV (pp. 36–37).
de Castro, L. N., & Zuben, F. J. (2001). aiNET: An artificial immune network for data analysis. In H.A.
Abbass, R.A. Sarker, & C.S. Newton, (eds.), Data mining: A heuristic approach, Chapter XII, (pp.
231–259). USA: Idea Group Publishing.
Deb, K., & Agrawal, R. B. (1995). Simulated binary crossover for continuous search space. Complex
Systems, 9, 115–148.
Dorigo, M. (1992). Optimization, learning and natural algorithms (PhD thesis). Politecnico di Milano, Italy.
Dorigo, M., Maniezzo, V., & Colorni, A. (1996). The Ant System: Optimization by a colony of cooperating
agents. IEEE Transactions on Systems, Man, and Cybernetics—Part B, 26(1), 29–41.
Eberhart, R. C., & Shi, Y. (2001). Particle swarm optimization: Developments, applications and resources.
Proceedings of the 2001 congress on evolutionary computation (pp. 81–86). Piscataway, NJ: IEEE
Service Center.
Fan, H.-Y., & Lampinen, J. (2003). A trigonometric mutation operation to differential evolution. Journal of
Global Optimization, 27(1), 105–129.
Farmer, J.D., Packard, N., & Perelson A. (1986). The immune system, adaptation and machine learning.
Physica D, 2, 187–204.
Forrest, S., Perelson, L., Allen, R., & Cherukuri, R. (1994). Self-nonself discrimination in a computer.
IEEE symposium on research in security and privacy (pp. 202–212). Los Alamos, CA: IEEE
Computer Society Press.
Gendreau, M. (2002). Recent advances in tabu search. In C. C. Ribeiro & P. Hansen (Eds.), Essays and
surveys in metaheuristics (pp. 369–377). Dordrecht: Kluwer.
Glover, F. (1986). Future paths for integer programming and links to artificial intelligence. Computers and
Operation Research, 13(5), 533–549.
Glover, F., & Laguna, M. (1997). Tabu search. Boston, MA: Kluwer.
Holland, J. (1975). Adaptation in natural and artificial systems. Ann Arbor: University of Michigan Press.
Hu, T. C., Kahng, A. B., & Tsao, C.-W. A. (1995). Old bachelor acceptance: A new class of nonmonotone
threshold accepting methods. ORSA Journal on Computing, 7(4), 417–425.
Ingber, L. (1996). Adaptive simulated annealing. Control and Cybernetics, 25(1), 33–54.
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. Proceeding of the IEEE international
conference on neural networks (pp. 1942–1948). Piscataway, NJ: IEEE Press.
Kirkpatrick, S., Gellat, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220,
671–680.
Korejo, I., Yang, S., & Li, C. (2010). A directed mutation operator for real coded genetic algorithms.
In C. Di Chio, A. Brabazon, G. A. Di Caro, . . . , & N. Urquhart (Eds.), Applications of evolutionary
computation, Part I (LNCS 6024) (pp. 491–500). Berlin/Heidelberg: Springer-Verlag.
Matzinger, P. (2001). The danger model in its historical context. Scandinavian Journal of Immunology, 54,
4–9.
Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., & Teller, E. (1953). Equation of state
calculations by fast computing machines. Journal of Chemical Physics, 21, 1087–1092.
Moscato, P. (1989). On evolution, search, optimization, genetic algorithms and martial arts: Towards
memetic algorithms. Technical Report Caltech Concurrent Computation Program, Report No. 826.
California Institute of Technology, Pasadena, CA, USA.
Munteanu, C., & Lazarescu, V. (1999). Improving mutation capabilities in real coded genetic algorithm. In
R. Poli, H.-M. Voigt, S. Cagnoni, D. Corne, G. D. Smith & T. C. Fogarty (Eds.), Evolutionary image
analysis, signal processing and telecommunications (pp. 138–149). Berlin: Springer.
Neri, F., & Tirronen, V. (2010). Recent advances in differential evolution: A survey and experimental
analysis. Artificial Intelligence Review, 33, 61–106.
Nourani, Y., & Andresen, B. (1998). A comparison of simulated annealing cooling strategies. Journal of
Physics A—Mathematical and General, 31, 8373–8385.
Ono, I., & Kobayashi, S. (1997). A real-coded genetic algorithm for function optimization using unimodal
normal distribution crossover. Proceedings of the 7th international conference on genetic algorithms
(pp. 246–253). Michigan, USA: Morgan Kaufmann.
Osman, I. H., & Laporte, G. (1996). Metaheuristics: A bibliography. Annals of Operations Research, 63,
513–562.
Price, K. V., Storn, R. M., & Lampinen, J. A. (2006). Differential evolution: A practical approach to global
optimization. Berlin, Heidelberg: Springer-Verlag. ISBN 3540209506.
Sanchez, A. M., Lozano, M., Villar, P., & Herrera, F. (2009). Hybrid crossover operators with multiple
descendents for real-coded genetic algorithms: Combining neighborhood-based crossover operators.
International Journal of Intelligent System, 24, 540–567.
Santos, E. E., & Santos, E., Jr. (2004). Reducing the computational load of energy evaluations for protein
folding. Proceeding 4th IEEE symposium on bioinformatics and bioengineering (BIBE 04),
(pp. 79–88), Taichung, Taiwan.
Storn, R., & Price, K. (1995). Differential evolution: A simple and efficient adaptive scheme for global
optimization over continuous spaces. Technical Report No. TR-95-012. Berkeley: International
Computer Science Institute.
Takahashi, M., & Kita, H. (2001). A crossover operator using independent component analysis for real-
coded genetic algorithms. Proceedings of IEEE congress on evolutionary computation Vol. 1
(pp. 643–649). Los Alamitos: IEEE Press.
Talbi, E.-G. (2009). Metaheuristics: From design to implementation (1st ed.). Chichester, UK: John
Wiley & Sons.
Wolpert, D., & Macready, W. (1997). No free lunch theorems for optimization. IEEE Transactions on
Evolutionary Computation, 1(1), 67–82.