
0952-1976/$ - see front matter © 2007 Elsevier Ltd. All rights reserved.
doi:10.1016/j.engappai.2007.01.004

*Corresponding author. Tel.: +91 651 291116; fax: +91 651 2290860.
E-mail addresses: [email protected] (N. Khilwani), anoop.nifftian@gmail.com (A. Prakash), [email protected] (R. Shankar), mkt09@hotmail.com (M.K. Tiwari).

Engineering Applications of Artificial Intelligence 21 (2008) 106–128

www.elsevier.com/locate/engappai

Fast clonal algorithm

Nitesh Khilwani (a), Anoop Prakash (b), Ravi Shankar (c), M.K. Tiwari (d, *)

(a) Department of Metallurgy and Materials Engineering, National Institute of Foundry and Forge Technology, Hatia, Ranchi, India
(b) Department of Manufacturing Engineering, National Institute of Foundry and Forge Technology, Hatia, Ranchi, India
(c) Department of Management Studies, Indian Institute of Technology, Delhi, India
(d) Department of Forge Technology, National Institute of Foundry and Forge Technology, Hatia, Ranchi, Jharkhand 834003, India

Received 17 February 2006; received in revised form 6 January 2007; accepted 10 January 2007

Available online 30 March 2007

Abstract

The aim of this paper is to design an efficient and fast clonal algorithm for solving various numerical and combinatorial real-world optimization problems effectively and speedily, irrespective of their complexity. The idea is to identify accurately the inherent drawbacks of existing immune algorithms (IAs) and propose new techniques to resolve them. The basic features of IAs dealt with in this paper are the hypermutation mechanism, clonal expansion, immune memory, and several other features related to the initialization and selection of the candidate solutions present in a population set. Addressing these features, we propose a fast clonal algorithm (FCA) incorporating a parallel mutation operator that combines Gaussian and Cauchy mutation strategies. In addition, new concepts are proposed for the initialization, selection and clonal expansion processes, and the existing notion of immune memory is modified by using an elitist mechanism. Finally, to test the efficacy of the proposed algorithm in terms of search quality, computational cost, robustness and efficiency, quantitative analyses have been performed, and empirical analyses have been executed to establish the superiority of the proposed strategies. To demonstrate the applicability of the proposed algorithm to real-world problems, the machine-loading problem of a flexible manufacturing system (FMS) is worked out and the results are matched with those reported in the literature.

© 2007 Elsevier Ltd. All rights reserved.

Keywords: Clonal algorithm; Gaussian mutation; Cauchy mutation; Chaotic generator; Machine-loading problem

1. Introduction

Evolutionary computation (EC) is a generic computational technique inspired by the progression of biological life in the natural world. Initially, EC was proposed as an approach to artificial intelligence (AI) (Back et al., 1998), and it has since been used to solve various numerical and combinatorial optimization problems. Some algorithms that derive from EC techniques include genetic programming, evolutionary programming, and the genetic algorithm (GA) (Fozel and Corne, 2003). All EC techniques are related to each other in that every algorithm requires a population of individuals for initialization and further needs


to evaluate the fitness value of the individuals to suit the selection procedure. The recombination operators exchange genetic material to accentuate variation in the solution strings. Finally, this procedure is iterated until the optimal solution is obtained (Eiben and Schoenauer, 2002). Almost all evolutionary algorithms follow this general procedure and differ only in how these aspects are realized.

The immunological algorithm (IA), or artificial immune system (AIS), is one of the recently developed evolutionary techniques, inspired by the theory of immunology, i.e. the immune system (IS). Immunology is the scientific discipline that studies the response of ISs when a non-self antigenic pattern is recognized by antibodies (DeCastro and Zuben, 2005; Jong, 1975). The general characteristics of an IS are immune memory, hypermutation and receptor editing: immune memory is used to store feasible solutions; hypermutation is used to diversify the search process; and receptor editing helps the search escape from local optima (Dasgupta and Gonzalez, 2002).


In general, the clonal selection principle (DeCastro and Zuben, 2002) is utilized to design IAs due to its self-organizing and learning capability when solving combinatorial optimization problems. Of particular interest in optimization problems is the fact that non-self antigens correspond to constraints and antibodies to candidate solutions (Tiwari et al., 2005). Due to these peculiar characteristics, new computational models have been developed and successfully applied to various issues in science, industry and business, such as machine learning, computer and network security, etc. The general scheme of an evolutionary immune algorithm (IA) used in the literature is as follows (DeCastro and Zuben, 2005):

Procedure
  Initialize population (randomly) of individuals (candidate solutions)
  Evaluate (fitness function) all antibodies
  While (termination criterion not satisfied)
    Select superior antibodies from the parent population
    Clone based on fitness value
    Apply variation operators to clones (hypermutation)
    Evaluate the newly generated antibodies
    Select superior antibodies
    Create the next-generation population
  End
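A minimal executable sketch of this generic loop follows. This is illustrative only: the toy fitness function, population size, clone count and fixed Gaussian perturbation are placeholder choices, not the operators proposed later in this paper.

```python
import random

def clonal_algorithm(fitness, dim, pop_size=20, n_clones=5, generations=100):
    """Generic clonal-selection loop following the pseudocode above."""
    # Initialize population (randomly) of candidate solutions
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]              # select superior antibodies
        clones = []
        for ab in parents:                             # clonal expansion
            for _ in range(n_clones):
                # hypermutation (placeholder: fixed-width Gaussian steps)
                clones.append([g + random.gauss(0, 0.1) for g in ab])
        # evaluate clones and re-select to form the next generation
        pop = sorted(scored + clones, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# maximize -sum(x_i^2), i.e. minimize the sphere function
best = clonal_algorithm(lambda x: -sum(g * g for g in x), dim=3)
```

The selection, cloning and mutation steps are exactly the parts that FCA replaces in the following sections.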

There are certain critical issues that must be taken into consideration while designing and running a clonal algorithm (CA). One such issue is to maintain diversity in the population for as long as possible. It is well known that, like other evolutionary algorithms, IAs maintain a set of candidate solutions (antibodies) to cover the overall search space and obtain an optimal/near-optimal solution. However, if the population set traces a very narrow region, the algorithm converges prematurely without hitting the desired target. Another dilemma in IAs is the proper exploitation and exploration of the search space (Cutello and Nicosia, 2002a, b), i.e. whether to refine the current best solutions or to explore different regions of the search space (as the current solution may be a local optimum). An IA must be set up in such a way that it searches the solution space effectively and speedily, with low computational cost and time requirements.

To reduce the aforementioned criticalities, certain aspects of IAs must be taken into consideration. Some of the vital aspects that can be considered for designing an effective and efficient IA are: (1) representation of the population, (2) clonal expansion, (3) the hypermutation mechanism, (4) the generation scheme, and (5) immune memory. Even a small change in any of these aspects may lead to a considerable change in the performance of an IA (DeCastro and Zuben, 2002).

Basically, there are two driving forces in an IA, viz. clonal selection and the hypermutation mechanism (DeCastro and Zuben, 2005). The former maintains the quality of the solution by triggering the growth of high-affinity antibodies; hypermutation, implemented along with receptor editing, acts as a local search operator that maintains diversity in the solution. It is evident from the literature that various modifications have been made to IAs. DeCastro and Zuben (2002) combined the clonal selection concept with a hypermutation operator to develop a computational tool named CLONALG (here denoted by CA). Cutello et al. (2002) proposed opt-IA, a modified version of CA, using three immune operators, i.e. cloning, hypermutation and an aging operator. Recently, Cutello et al. (2006) introduced a new and improved version of opt-IA, named opt-IMMALG, featuring real-coded representation, a cloning operator, inversely proportional hypermutation and an aging operator. Following the path set by Cutello et al. (2006), Eiben

and Schoenauer (2002) and Muller et al. (2002), and observing the modifications accomplished in the field of IAs as well as in other evolutionary algorithms, this paper introduces a novel IA entitled the fast clonal algorithm (FCA). The modifications performed in FCA are manifold. First, it changes the basic driving forces of the IA, i.e. hypermutation and clonal expansion, by introducing a parallel mutation (PM) operator for the hypermutation mechanism (Gog and Dumitrescu, 2004). The PM operator comprises a Gaussian strategy for small-step mutation and a Cauchy mutation (CM) strategy for large-step mutation. Second, it gives a new equation for determining the clonal expansion and hypermutation rate. Third, chaotic sequences are used to generate the random numbers used in the process (Determan and Foster, 1999; Caponetto et al., 2003); a new chaotic generator combining the traits of existing chaotic sequences is proposed to generate the random numbers used in population initialization, the mutation process, etc. Fourth, it implements the roulette wheel selection strategy to overcome difficulties such as the premature convergence arising from the existing selection rule (Gen and Cheng, 1997). Finally, it uses immune memory as an elitist mechanism to preserve the superior antibodies present in the clonal pool and pass their traits to the next-generation population. Further, it provides a simple method to generate the population using chaotic sequences and to apply the PM operator to different types of representation scheme, viz. binary coding, decimal coding and permutation coding.

To prove the efficacy of the proposed algorithm, an intensive comparison with existing IAs, i.e. the simple IA (SIA) (Cutello and Nicosia, 2002b), CA (DeCastro and Zuben, 2002) and opt-IMMALG (Cutello et al., 2006), has been carried out by testing it on standard benchmark functions taken from the literature. A comparative study has been performed to establish the merits of FCA over CA and SIA with the help of 23 benchmark functions of varying dimensions and complexities (Yao and Liu, 1999). An intensive comparison, with graphical aids, among existing chaotic generators and the proposed mutation operator is also performed. Finally, our algorithm is tested on a real-life machine-loading problem and the results are compared with the results

Fig. 1. Process of clonal selection, proliferation and affinity maturation. [Figure: non-self antigens are recognized by cells with matured receptors (antibody molecules); the selected cells undergo proliferation and maturation, yielding high-affinity memory cells.]

obtained from the best heuristic procedures. In order to establish the superiority of the proposed algorithm on combinatorial optimization problems, in particular those with a permutation representation scheme, a loading problem has been considered alongside the 23 benchmark functions.

The rest of this paper is organized as follows. Section 2 discusses the preliminaries of clonal selection theory along with chaotic generators. The proposed FCA is presented in Section 3. Section 4 provides the test suite considered for testing the efficacy of the proposed algorithm. Section 5 consists of the experimental results obtained by parameter tuning and an analytical study of the proposed algorithm. A performance assessment of FCA and a comparison with other algorithms are provided in Section 6. Details and results related to the machine-loading problem are given in Section 7. Finally, Section 8 concludes the paper.

2. Preliminaries for clonal selection theory and chaotic generators

2.1. Clonal selection theory

This theory describes the response of the IS to an antigenic stimulus (DeCastro and Zuben, 2002, 2005). Knowledge pertaining to the IS is very important from the biological point of view as well as from the computational perspective. In the recent past, the IS has gained the attention of many researchers due to its complexity, flexibility and computational capability (Cutello and Nicosia, 2002b). Just as the artificial neural network (ANN) is inspired by our nervous system, our IS has motivated the emergence of the AIS. According to DeCastro and Zuben (2002), an AIS is defined as ``an abstract or metaphorical computational system using ideas gleaned from the theory and components of immunology.''

The function of the biological IS is to protect the body from foreign matter, more precisely known as antigens. Antigens stimulate the antibodies that reside in the body. The key roles of antibodies are to identify, bind and eliminate the antigens. Clonal selection explains the response of the IS when a non-self antigen pattern is recognized by the B-cells: a recognizing cell is selected to proliferate and produce antibodies in high volume by cloning. The new clonal cells undergo hypermutation to improve antibody affinity, which leads to antigenic neutralization and elimination (Dasgupta, 1999). The overall procedure of clonal selection is shown schematically in Fig. 1.

The class of IAs used for numerical and combinatorial optimization is based on the clonal selection principle. In the literature, such clonal selection algorithms have been applied in various fields of computation, such as pattern recognition, function approximation, machine learning, EC and programming, fault and anomaly detection, control and scheduling, computer and network security, and generative and emergent behavior (Dasgupta and Gonzalez, 2002; Cutello and Nicosia, 2002a). Some practical applications of AIS include data manipulation, classification, reasoning, and representation methodologies (Dasgupta, 1999).

Initially, the IS found application in maintaining diversity in the population of a GA (DeCastro and Zuben, 2005). However, the significant uses of the IS reported in the literature begin with the work of Cutello and Nicosia (2002a), who gave the SIA to analyze and predict the dynamics of the algorithm. Subsequently, DeCastro and Zuben (2002) introduced CA (CLONALG) by manipulating cloning and hypermutation: they used the fitness value for proportional cloning and counter-proportional hypermutation. Cutello and Nicosia (2002b) introduced a modified IA that uses the immune operators viz. cloning, hypermutation and an aging operator. Another related algorithm is the psycho-CA introduced by Tiwari et al. (2005). It inherits its traits from Maslow's need hierarchy theory and the theory of clonal selection; the special features of psycho-CA are its various levels of needs, immune memory and affinity maturation.

2.2. Chaotic generators

Chaotic sequences are a type of random number generator (RNG) whose choice is justified by their ergodic and stochastic behavior (spread-spectrum characteristic) (Determan and Foster, 1999). Due to their unpredictability, chaotic sequences have been applied in various fields, such as secure transmission and the modeling of natural phenomena, with very interesting results. Recently, chaotic sequences have been adopted in evolutionary algorithms for random number generation. Caponetto et al. (2003) utilized chaotic sequences in different phases of an evolutionary algorithm, such as the creation of the individuals present in a population set, the selection of potential individuals from a population set, and the introduction of random changes into the individuals present in the population. In this way they showed the applicability of chaotic sequences in EC techniques. Motivated by this concept, the chaotic time series viz. the logistic function, tent mapping and sinusoidal mapping are also used in this paper, as shown in Table 1. The reason for opting for chaotic sequences is their ability to converge fast toward the optimal solution while retaining a proper balance between exploitation and exploration (Chen and Aihara, 1995; Luo and Shao, 2003).

The logistic map (Caponetto et al., 2003) is one of the simplest dynamic systems evidencing chaotic behavior (Eq. (1)), where N(t) is the value of the chaotic variable in the tth iteration and R is the bifurcation parameter of the system. The graphical representation of the one-dimensional logistic chaotic function is illustrated in Fig. 2 (for 400 generations with initial value N(0) = 0.1 and R = 4). From this figure, it is evident that the spread-spectrum characteristic of the logistic mapping enables it to be utilized in

Table 1
Chaotic generators

Logistic map:   N(t+1) = R · N(t) · (1 − N(t))   (1)

Tent map:       N(t+1) = G(N(t)), where G(N(t)) = N(t)/0.7 if N(t) < 0.7, and G(N(t)) = (10/3) · N(t) · (1 − N(t)) otherwise   (2)

Sinusoidal map: N(t+1) = R′ · N(t)² · sin(π · N(t))   (3)

Fig. 2. Logistic mapping of the logistic function. [Plot: the chaotic values N(t) spread over (0, 1) across 400 iterations.]

place of RNGs. The other two generators considered in this paper are the tent function (see Eq. (2)) and the sinusoidal operator (Caponetto et al., 2003). The sinusoidal operator, as given in Eq. (3), is iterated with N(0) = 0.7 and R′ = 2.3.
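The three generators of Table 1 can be iterated as below. This is a sketch: R = 4, R′ = 2.3 and the 0.7 tent-map breakpoint follow the text, while the helper names are ours.

```python
import math

def logistic(n, R=4.0):
    """Eq. (1): N(t+1) = R * N(t) * (1 - N(t))."""
    return R * n * (1.0 - n)

def tent(n):
    """Eq. (2): piecewise tent map with breakpoint 0.7."""
    return n / 0.7 if n < 0.7 else (10.0 / 3.0) * n * (1.0 - n)

def sinusoidal(n, Rp=2.3):
    """Eq. (3): N(t+1) = R' * N(t)^2 * sin(pi * N(t))."""
    return Rp * n * n * math.sin(math.pi * n)

def iterate(gen, n0, steps):
    """Produce a chaotic sequence of the given length from seed n0."""
    seq, n = [], n0
    for _ in range(steps):
        n = gen(n)
        seq.append(n)
    return seq

series = iterate(logistic, 0.1, 400)  # the settings used for Fig. 2
```

All three maps keep their iterates inside the unit interval, which is what lets them stand in for a uniform RNG in the next section.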

3. Proposed FCA

The CA is an evolutionary search method that can provide optimal or near-optimal solutions for combinatorial optimization problems. This section formally presents the proposed FCA, which is marked by the positive traits of reduced computational cost and an effective search of the solution space, regardless of the difficulty of the problem. In designing the new algorithm, various modifications have been made to the existing features of the IA; these are explained in the following subsections.

3.1. Initialization

Traditionally, random-key representation is one of the most widely used approaches to represent the individuals present in a population when solving combinatorial optimization problems. The wide application of random keys is basically due to the elimination of infeasibility, without additional overhead, for a wide variety of problems (Gen and Cheng, 1997; Goldberg, 1989).

In contrast, the convergence properties of an evolutionary algorithm (Caponetto et al., 2003) are strongly affected by the RNG used for population initialization and by the variation operators during a run, even though such generators usually pass their own statistical tests. However, there are no analytical results that guarantee an improvement in the performance indices from using a particular RNG (Caponetto et al., 2003). A further limitation is that the search can stall due to the sequential correlation of successive numbers produced by RNGs; hence, more iterations are required to converge toward an optimal/near-optimal solution (Caponetto et al., 2003; Luo and Shao, 2003).

Motivated by the aforementioned criticalities, in the

present analysis, RNGs have been replaced with chaotic functions. Further, with the help of the existing chaotic generators (see Table 1), the authors have developed a new chaotic generator that selects one of the existing chaotic functions at a time in a stochastic manner. The proposed chaotic generator can be written as

Proposed chaotic sequence = Random(Logistic, Tent, Sinusoidal).   (4)

In general, chaotic sequences generate numbers in the range 0 to 1. Therefore, they cannot be used directly to represent the individuals, which are usually encoded in different schemes, viz. binary coding, decimal coding and permutation coding. Table 2 provides the generic steps for initializing the aforementioned encoding schemes using the chaotic generators. It is evident from the

Table 2
Initialization of various chaotic sequences
[Table contents not recoverable from the extraction: for each encoding (binary, decimal, permutation), it lists the steps for converting a chaotic string CS(m) into a solution string GS(m).]

table that, to generate a random string using chaotic sequences, initially a chaotic string CS(m) is generated. This chaotic sequence is then converted into a string GS(m) of the corresponding representation scheme by using the conversion method provided in Table 2. Thus, by utilizing this table, chaotic sequences can be used to generate strings of different representation schemes.
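Since Table 2's exact conversion rules did not survive extraction, the sketch below uses conventional conversions as stand-ins: thresholding for binary coding, affine scaling for decimal coding, and the random-key argsort decoding for permutation coding. These choices are assumptions, not necessarily the paper's rules.

```python
def chaotic_string(gen, n0, length):
    """CS(m): a chaotic sequence of values in (0, 1)."""
    cs, n = [], n0
    for _ in range(length):
        n = gen(n)
        cs.append(n)
    return cs

def to_binary(cs):
    # assumed rule: threshold each chaotic value at 0.5
    return [1 if v >= 0.5 else 0 for v in cs]

def to_decimal(cs, lo, hi):
    # assumed rule: scale each chaotic value into the variable's range
    return [lo + v * (hi - lo) for v in cs]

def to_permutation(cs):
    # assumed rule (random-key decoding): rank positions by chaotic value
    return sorted(range(len(cs)), key=lambda i: cs[i])

cs = chaotic_string(lambda n: 4.0 * n * (1.0 - n), 0.1, 6)  # logistic map
```

The same chaotic string CS(m) thus yields a valid GS(m) under any of the three schemes; in particular, the argsort decoding always produces a feasible permutation, matching the infeasibility-elimination argument made for random keys above.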

3.2. Selection

The antibodies present in a population set carry much information regarding the solution of the problem. The initial population is exposed to the threats posed by the antigens, and the antigenic affinity (f_k) is evaluated for each antibody present in the population. Based on their affinity, the antibodies are selected to proliferate and produce clones (the greater the affinity of an antibody, the greater its chance of being cloned). Traditionally, a deterministic selection rule is adopted to select better antibodies for proliferation. However, a deterministic rule selects only the best antibodies, and that may lead to premature convergence of the algorithm.

To overcome this difficulty, the roulette wheel selection rule

is adopted for the selection of antibodies for proliferation (Gen and Cheng, 1997). The basic reason for espousing this fitness-proportional rule is to maintain diversity in the clonal pool, so that it retains as much information as possible for the search for the optimal/near-optimal solution. In order to implement roulette wheel selection, the antibodies are first ranked by their fitness values, and the selection probability of each antibody is calculated by

p_k = f_k / Σ_{i=1}^{pop_size} f_i.   (6)

However, this simple scheme exhibits certain undesirable properties (Gen and Cheng, 1997; Goldberg, 1989):

- Some better antibodies take over the selection process too rapidly.
- There is less competition among antibodies in later generations.

To mitigate these problems, power-law scaling is used in the selection procedure (Gen and Cheng, 1997). This scaling function transforms the raw fitness into a scaled fitness using the function f′_l = pow(f_l, a), where a is a user-defined scaling parameter. The overall power-scaled roulette wheel selection can be explained in the following stepwise procedure:

Procedure
Parameters: fitness values f_l, population size Ps (pop_size), where l ∈ {1, 2, ..., pop_size}
Begin
  f′_l = pow(f_l, a),   (7)
  T = Σ_{i=1}^{pop_size} f′_i,
  p_l = f′_l / T,
  q_l = Σ_{i=1}^{l} p_i,   where k ∈ {1, 2, ..., pop_size}
  r = random(0, 1)   /* chaotically generate a number between 0 and 1 */
  if (r < q_1) select the 1st antibody, with fitness f_1
  else if (q_{k−1} < r ≤ q_k) select the kth antibody, with fitness f_k
  end if
End
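The stepwise procedure above can be sketched as follows (assuming positive fitness values; the `rand` argument is a placeholder that a chaotic generator could be plugged into, per the paper):

```python
import bisect
import random

def roulette_select(fitness, a=1.0, rand=random.random):
    """Power-scaled roulette wheel selection: Eq. (7) followed by the
    cumulative probabilities q_l; returns the index of the chosen antibody."""
    scaled = [f ** a for f in fitness]      # f'_l = pow(f_l, a)
    total = sum(scaled)                     # T
    q, acc = [], 0.0
    for s in scaled:                        # q_l = sum of p_i for i <= l
        acc += s / total
        q.append(acc)
    r = rand()                              # r in (0, 1)
    return bisect.bisect_left(q, r)         # k such that q_{k-1} < r <= q_k

winner = roulette_select([1.0, 3.0, 6.0], a=2.0)  # a > 1 sharpens the wheel
```

Raising `a` above 1 magnifies fitness differences (countering weak late-generation competition), while `a` below 1 flattens the wheel (countering early takeover).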

Fig. 3. Increasing trend of the number of clones generated as per the rank. [Plot: number of clones (1–10) vs. rank (0–100).]

3.3. Proliferation

As mentioned earlier, the proposed algorithm is based on the clonal selection principle, receiving from it the trait that the highest-affinity antibodies are selected and proliferated by duplication (cloning). This process is called clonal expansion (DeCastro and Zuben, 2005), where each antibody produces clones independently and proportionally to its antigenic affinity. Hence, the higher the fitness value, the higher the number of clones generated for each antibody Ab_n (Cutello and Nicosia, 2002a, b). The number of clones generated for each antibody, based on its fitness value, is obtained by the following equation:

NC_i = ⌈ [(Ps − b) + Ps · r · 0.01 · (b − 1)] / (Ps − 1) ⌉.   (8)

In Eq. (8), the parameter b represents the maximum number of clones to be generated by the best antibody selected from the population (of size Ps), and the parameter r is the rank of the selected antibody, where

r = ⌈ p_k · 100 ⌉.   (9)

From the above equation, it is clear that the number of clones generated for each antibody depends on the antigenic affinity and on the user-defined parameter b. The following lemma justifies the above equation.

Lemma 1. Based on Eq. (8), the number of clones (NC) generated for an antibody is directly proportional to the rank assigned to it.

Proof. The number of clones to be generated for an antibody is given by

NC = ⌈ [(Ps − b) + Ps · r · 0.01 · (b − 1)] / (Ps − 1) ⌉.   (10)

Consider the two extreme values of r. When r = 100 (the maximum value of r),

NC = ⌈ [(Ps − b) + Ps · (b − 1)] / (Ps − 1) ⌉ = ⌈ b · (Ps − 1) / (Ps − 1) ⌉,

∴ NC = b.   (11)

This shows that the maximum number of clones is generated for the antibody with the maximum rank. Similarly, when r = 0 (the minimum value of r),

NC = ⌈ (Ps − b) / (Ps − 1) ⌉, for b ≥ 1,

∴ NC = 1.   (12)

This verifies that a single clone is made for the antibody assigned the minimum rank. Between these extremes, the expression inside the brackets in Eq. (10) is affine and increasing in r, so the number of clones generated for an antibody grows with the rank assigned to it. This is also clear from Fig. 3, which shows an increasing trend of NC with increasing r (0 ≤ r ≤ 100). □
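Eq. (8) and the boundary cases of Lemma 1 can be checked numerically (a sketch; the values Ps = 50 and b = 10 are illustrative, not settings from the paper):

```python
import math

def num_clones(rank, Ps, b):
    """Number of clones for an antibody of rank r in [0, 100], per Eq. (8)."""
    return math.ceil(((Ps - b) + Ps * rank * 0.01 * (b - 1)) / (Ps - 1))

# Lemma 1: b clones at r = 100, a single clone at r = 0,
# and a monotonically increasing count in between (Fig. 3).
counts = [num_clones(r, Ps=50, b=10) for r in (0, 25, 50, 75, 100)]
```

Running this for Ps = 50 and b = 10 gives 1 clone at rank 0 and 10 clones at rank 100, with the counts never decreasing as the rank rises.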

3.4. Variation operators

The performance of evolutionary algorithms depends upon the ergodicity of the evolution operators, which makes the algorithm effective at performing a global search (Tiwari et al., 2005; DeCastro and Zuben, 2002). Such operators must maintain a compromise between exploration and exploitation of the search space to achieve an optimal/near-optimal solution. These operators create the next-generation population by performing random perturbations on the individuals of the present generation.

In IAs, variation in antibodies is performed by the hypermutation and receptor editing mechanisms. The hypermutation operator works in a fashion similar to


the mutation operator of a GA (Tiwari et al., 2005). The difference lies in the fact that inferior antibodies are hypermutated at a higher rate than antibodies with high antigenic affinity. This phenomenon is known as receptor editing. The following property shows the function of hypermutation and receptor editing (Dasgupta, 1999; Dasgupta and Gonzalez, 2002).

Property 1. Hypermutation guides the system toward localoptima, while receptor editing helps to avoid local optima.

Proof. Hypermutation is used in IAs to introduce random changes into the individuals. Generally, inferior antibodies are more prone to hypermutation, whereas better antibodies require only slight modification to increase their affinity. Therefore, the phenomenon of receptor editing is used to hypermutate inferior antibodies at a higher rate and higher-affinity antibodies at a lower rate. Fig. 4 illustrates how hypermutation guides the search toward a local optimum, while receptor editing allows it to leave the local optimum and reach the global optimum. □

George and Grey (1999) suggested point mutation to introduce random changes into the clones. Point mutation simply generates new antibodies by altering one or more genes (e.g. flipping binary bits, or swapping positions in a permutation string) based on the hypermutation rate given by (DeCastro and Zuben, 2002)

s = exp(−d · f),   (13)

where s is the rate of hypermutation, d the decay control factor and f the antigenic affinity. It is evident from the above equation that antibodies with low affinity are mutated at a higher rate than antibodies with higher affinity. As per the hypermutation mechanism, the mutation strategy should offer a larger step size for lower-affinity antibodies and vice versa. However, on the basis of the mutation rate alone, point

Fig. 4. Schematic representation of the shape space for an antigen binding site. [Sketch: affinity vs. antigen binding site; hypermutation climbs a local affinity peak, while receptor editing jumps between peaks.]

mutation cannot offer a gradual increase in step size with decreasing fitness value. The following property of point mutation is important in this regard.

Property 2. Point mutation is unable to satisfy thenecessary conditions of hypermutation.

Proof. Point mutation is performed on an individual by altering a randomly selected bit. There is no restriction on which bit is selected, i.e. any bit can be chosen for mutation, whereas according to the concept of hypermutation, low-affinity antibodies should be varied more strongly than high-affinity antibodies. Point mutation, however, operates without considering the step size of the mutation: the step size depends on which bit is altered, and the mutation rate has no significant effect on it.

To explain this, consider the 6-bit string shown in Fig. 5. If the change is made in the first bit, the fitness changes by 1. Similarly, if the change is made in the fourth or the sixth bit, the fitness changes by 8 and 24, respectively. From this it is clear that point mutation cannot tie the amount of variation of an antibody to its affinity. Therefore, point mutation does not satisfy the necessary condition of hypermutation. □

The above property shows that point mutation offers a random step size, which makes the algorithm converge only after a larger number of generations (Yao and Liu, 1999; Hyun and Oh, 2000). To overcome this difficulty, a new technique comprising two mutation strategies, viz. Gaussian mutation (GM) and Cauchy mutation (CM), is proposed, where the Gaussian strategy is utilized for small-step mutation and the Cauchy strategy for large-step mutation. These two strategies are combined to form a search operator entitled the PM operator (Gog and Dumitrescu, 2004). The two strategies of the PM operator are applied in parallel to antibodies based on their antigenic affinity: the Gaussian strategy is applied to low-affinity antibodies to escape from local minima, and the

Fig. 5. Demonstration of the fitness value changing with the mutated bit. [Example: the string 010110 (fitness 22) becomes 010111 (fitness 23) when the 1st bit is changed, 011110 (fitness 30) when the 4th bit is changed, and a string of fitness 46 when the 6th bit is changed.]

Fig. 6. Plot of the Cauchy and Gaussian distributions with standard deviation 1, mean = 0, t = 1. [Plot: both densities on the same scale over −5 ≤ x ≤ 5; the Cauchy curve has the lower peak and the heavier tails.]

Cauchy strategy is applied to high-affinity antibodies to escape from local minima and reach the global minimum. A generic procedure to implement the PM operator is as follows:

Procedure
Parameters: p_k — probability for antibody k (calculated in the selection procedure above), k ∈ {1, 2, ..., pop_size}
Begin
  for i = 1 : pop_size
    r = random(0, 1)   /* chaotically generate a number between 0 and 1 */
    if (r < p_k)
      apply Gaussian mutation
    else
      apply Cauchy mutation
    end if
  end for
End
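A sketch of this dispatch is given below. For brevity it draws the Cauchy step with the standard inverse-CDF trick (tan of a uniform) and the Gaussian step as N(0, 1), rather than the chaotic step-size sampler of Eqs. (19)–(20); the function and parameter names are ours.

```python
import math
import random

def parallel_mutation(antibody, p_k, eta=1.0, rng=random.random):
    """PM operator: mutate one randomly chosen element of the antibody with a
    Gaussian step (small, Eq. (16)) or a Cauchy step (large, Eq. (18)), the
    strategy being picked against the antibody's probability p_k."""
    child = list(antibody)
    g = random.randrange(len(child))        # element selected for mutation
    if rng() < p_k:
        child[g] += random.gauss(0.0, 1.0)  # x'_g = x_g + N(0, 1)
    else:
        u = rng()
        # standard Cauchy variate via inverse CDF: tan(pi * (u - 1/2))
        child[g] += eta * math.tan(math.pi * (u - 0.5))
    return child
```

Passing a chaotic generator as `rng` reproduces the "chaotically generate a number" step of the procedure.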

The following subsections give the details of the aforementioned mutation strategies.

3.4.1. GM

GM requires two parameters, i.e. the mean value (μ) and the standard deviation (σ), which define the step size of the mutation. In this approach, mutation is performed by replacing the element x_g (selected for mutation from the antibody {x_1, x_2, x_3, ..., x_g, ..., x_l}) with x′_g using the following relation:

x′_g = x_g + N(μ, σ),   (14)

where N(μ, σ) is a random Gaussian number generated from the Gaussian density function, given as (Yao and Liu, 1999; Gog and Dumitrescu, 2004)

f_Gaussian(x) = (1 / (σ√(2π))) · exp(−(1/2)((x − μ)/σ)²),  −∞ < x < ∞.   (15)

In order to simplify the mutation strategy, μ is often set to 0 and the value of σ is taken to be equal for all variables entering the evolution process (mostly σ = 1). Therefore, the simplified mutation can be realized by the following relation:

x′_g = x_g + N(0, 1).   (16)

3.4.2. CM

CM is used to introduce large variations into antibodies, owing to its ability to perform longer jumps with higher probability. The Cauchy density function is similar in shape to the Gaussian density function, but it approaches the axis so slowly that its variance is infinite. Fig. 6 shows the difference between the Gaussian and Cauchy density functions (both plotted on the same scale) (Yao and Liu, 1999; Gog and Dumitrescu, 2004).

The one dimension Cauchy density function centered atorigin is defined by

f CauchyðxÞ ¼t

pðt2 þ x2Þ; �1oxo1, (17)

where t > 0 is the scale parameter. The above function is used to introduce variation in antibodies by means of the following relation:

x′c = xc + η·δk,  (18)

where δk is a Cauchy random variable and η is the correction step.

In the aforementioned mutation strategies, the Cauchy and Gaussian density functions simply generate a random number that decides the step size for the mutation. Mathematically, the step sizes sg and sc are generated by (Yao and Liu, 1999):

sg = random(+, −) · √(−2·ln(wg·√(2π))),  (19)

sc = random(+, −) · √(t·(1/(π·wc) − t)),  (20)

where wg and wc are random numbers generated chaotically in the ranges [0, fGaussian(0)] and [0, fCauchy(0)], respectively. Eqs. (19) and (20) are derived from the following equations:

fGaussian(x) = (1/√(2π)) · exp(−x²/2),  −∞ < x < ∞,  (21)

fCauchy(x) = t/(π·(t² + x²)),  −∞ < x < ∞.  (22)
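The step-size sampling of Eqs. (19) and (20) amounts to inverting the density functions of Eqs. (21) and (22). A minimal Python sketch (assuming σ = 1 and t as the Cauchy scale, with a plain uniform draw standing in for the chaotic sequence):

```python
import math
import random

SQRT_2PI = math.sqrt(2.0 * math.pi)

def gaussian_step(w_g):
    # Invert Eq. (21): w = exp(-x^2/2)/sqrt(2*pi)  ->  |x| = sqrt(-2*ln(w*sqrt(2*pi)))
    magnitude = math.sqrt(-2.0 * math.log(w_g * SQRT_2PI))
    return random.choice((+1.0, -1.0)) * magnitude

def cauchy_step(w_c, t=1.0):
    # Invert Eq. (22): w = t/(pi*(t^2 + x^2))  ->  |x| = sqrt(t*(1/(pi*w) - t))
    magnitude = math.sqrt(t * (1.0 / (math.pi * w_c) - t))
    return random.choice((+1.0, -1.0)) * magnitude

# w_g is drawn from (0, f_Gaussian(0)] and w_c from (0, f_Cauchy(0)],
# here with ordinary uniform draws in place of the chaotic generator.
w_g = random.uniform(1e-9, 1.0 / SQRT_2PI)
w_c = random.uniform(1e-9, 1.0 / math.pi)
step_g, step_c = gaussian_step(w_g), cauchy_step(w_c)
```

A density value drawn near the peak inverts to a short step, while a value near zero inverts to a long one, which is what ties the chaotic draw to the mutation step size.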

However, these random step sizes cannot be applied directly to different types of encoding scheme. A simple technique for implementing the PM operator, containing the GM and CM, is given in Table 3, where the limitations of these operators have also been addressed and resolved in the best possible way.

Table 3
Mutation strategy for various codings

3.5. Immune memory

After the process of proliferation and hypermutation, the progenies developed in the clonal pool are evaluated for their antigenic affinity. As per the concept of clonal theory, instead of maintaining a large number of candidate solutions, a small set of high-affinity antibodies is maintained in immune memory (DeCastro and Zuben, 2005). We use the immune memory as an elitist mechanism in order to maintain the best solution found along the process (Tiwari et al., 2005). From the evolutionary-computing point of view, this is the most important characteristic of IAs: a candidate solution with a high fitness value must be preserved, and can only be replaced by improved antibodies (DeCastro and Zuben, 2005).

The basic reason for maintaining the secondary memory is to preserve the high-affinity antibodies present in the clonal pool at that point in the generation. In classical IAs, immune memory has not been used to create the next-generation population; for this reason, IAs need a larger number of iterations to reach the final solution. In addition, it increases the probability of the algorithm becoming trapped in local minima.

To overcome this difficulty, an operational characteristic of GAs, i.e. elitism (Ahn and Ramakrishna, 2003; Harik et al., 1999), has been introduced in the proposed FCA. Elitism ensures that the best chromosomes pass their traits to the next generation. This idea has been linked with the general concept of the classical IA to generate the next-generation population. Cutello et al. (2005) used the concept of elitism in two steps, viz. eliminating old strings by an aging operator and subsequently adding new ones by a merging function. In this paper, a new strategy is proposed for utilizing elitism, detailed in the following procedure:

Procedure: Function of immune memory in the tth generation.
Parameters: immune memory at the tth iteration (M_Ab)m^t; d = updation limit; population at the tth iteration (P_Ab)Ps^t
Begin
  (M_Ab)m^t = Update{(M_Ab)m^(t−1)}
  m = number of new antibodies entering the immune memory in the tth iteration
  if (m > d)
    (P_Ab)^t = m antibodies of the current best and (Ps − m) random antibodies
  else
    w = p/(1 + log(G)),  (23)
    (P_Ab)^t = w antibodies from (M_Ab)m^t and (Ps − w) random antibodies
    G = G + 1;
  end if
End

From the above steps, it is evident that if the number of updates to the immune memory exceeds a certain limit, then the best antibodies of the current generation are selected, to maintain the search direction. If the updation criterion is not satisfied, then the best antibodies stored in the immune memory are used, to exploit the search space more effectively and to escape from local minima. It is to be noted that selection from the memory is done using the roulette-wheel selection rule.
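A schematic Python version of the update rule above, under assumed representations (antibodies as lists with precomputed fitness values; `delta`, `p` and `G` as in the procedure, with G assumed to start at 1; the roulette-wheel helper is ordinary fitness-proportionate selection, included because the text states that memory selection uses this rule):

```python
import math
import random

def roulette_select(memory, fitness, k):
    """Fitness-proportionate (roulette-wheel) selection of k antibodies."""
    total = sum(fitness)
    picks = []
    for _ in range(k):
        threshold = random.uniform(0.0, total)
        acc = 0.0
        for antibody, f in zip(memory, fitness):
            acc += f
            if acc >= threshold:
                picks.append(antibody)
                break
    return picks

def next_population(current_best, memory, mem_fitness, m, delta, pop_size, p, G,
                    random_antibody):
    """Build the next-generation population per the immune-memory procedure."""
    if m > delta:
        # Many memory updates: keep the m current-best antibodies.
        pop = current_best[:m] + [random_antibody() for _ in range(pop_size - m)]
    else:
        # Eq. (23): w = p / (1 + log G) antibodies taken from memory (G >= 1).
        w = max(1, int(p / (1.0 + math.log(G))))
        pop = (roulette_select(memory, mem_fitness, w)
               + [random_antibody() for _ in range(pop_size - w)])
        G += 1
    return pop, G
```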

Fig. 7. Flowchart for fast CA (stages: parameter setting; population initialization; selection procedure; cloning; hypermutation; elitism/immune memory; next-generation population; termination check).

3.6. Computational complexity

The general sequence of the proposed FCA is summarized in Fig. 7. From the figure, it is evident that the algorithm can be divided into two main procedures. The first consists of the clonal selection principle, where antibodies are evaluated on the basis of their fitness values; the superior antibodies are selected, cloned and hypermutated as per their antigenic affinity, and these clones are then re-evaluated and selected for the next-generation population. The second procedure prepares the next-generation population as per the underlying conditions of elitism.

On the basis of the above procedure, the computational complexity of our algorithm is calculated as follows


(DeCastro and Zuben, 2005):

Generation of initial population: O(p·ℓ)
Generation of random antibodies: O(d·ℓ) /* assuming d antibodies are generated in each generation */
Computation of antigenic affinity: population O(f(£, ℓ)·p); clones O(f(£, ℓ)·cl) /* £: problem instance size */
Selection operations: population O(p·log p); clones O(cl·log cl)
Cloning of antibodies: O(cl·ℓ)
Hypermutation: O(cl·ℓ)
Updation: O(m·log m + p·log p)

Finally, the overall computational complexity can be calculated by summing the individual complexities; it is of the order of

O[p·ℓ + gen_max{d·ℓ + f(£, ℓ)·p + f(£, ℓ)·cl + p·log p + cl·log cl + cl·ℓ + cl·ℓ + p·log p + m·log m}]  (24)

= O[p·ℓ + gen_max{ℓ·(2·cl + d) + f(£, ℓ)·[p + cl] + 2·p·log p + m·log m + cl·log cl}].

Let cl = A·p and d = O·p; then

= O[gen_max{ℓ·(2·A·p + O·p) + f(£, ℓ)·[p + A·p] + 3·p·log p + A·p·log(A·p)}]

= O[gen_max·p·{ℓ·(2·A + O) + f(£, ℓ)·[1 + A] + 3·log p + A·log(A·p)}].  (25)

From (25), it is clear that the complexity of FCA depends mainly on the maximum number of generations and the population size, as the other factors also depend on these two.

4. Test suite

In order to investigate the efficiency and effectiveness of the proposed algorithm, numerical experiments have been conducted on a set of well-known benchmark problems assimilated from the literature (Yao and Liu, 1999). The aim is to compare the performance of the proposed FCA with the existing CAs, viz. CA, SIA and opt-IMMALG. Before executing the experimental studies on the benchmark functions, some guidelines have been adopted in this paper, explained as follows (Back et al., 2000):

- Test functions should include some unimodal functions, to exhibit the efficiency of the algorithm.
- They should include some multimodal functions of varying complexity, to exploit the regularity of the operators in the algorithm.
- There should be some multi-dimensional models, as these are closer to real-world problems.
- All functions should be used in standard form.
- The main purpose of this test suite is to test the proposed algorithm on problems having different representation schemes.

In this paper, the benchmark functions have been selected while considering the above guidelines. To demonstrate the robustness and reliability of the proposed FCA, a set of 23 benchmark functions (Yao and Liu, 1999) and 10 instances of the real-world machine-loading problem of a flexible manufacturing system (FMS) are selected. The former is considered to evaluate the performance for the binary and decimal representation schemes, and the latter for permutation coding.

The 23 benchmark functions, with the behavior of their solution spaces, can be obtained in Yao and Liu (1999). Broadly, the test functions can be divided into three categories, viz.

1) Unimodal functions, f1-f7 (i.e. with no local minima).
2) Multimodal functions with many local minima, f8-f13.
3) Multimodal functions with few local minima, f14-f23.

Without loss of generality, it can be stated that the unimodal functions are considered to test the convergence rate of the proposed algorithm, whereas the multimodal functions are utilized to demonstrate the searching ability of FCA in escaping from local optima and tracing the desired global optima. All test experiments presented in this paper have been performed on a 2.4 GHz Pentium 4 processor, and all the algorithms have been coded in C++.

Before testing the performance of the proposed FCA on numerical optimization problems, it is necessary to decide the proper setup of the key parameters, i.e. the representation scheme (binary or decimal) and the population size. Moreover, it is also essential to prove the superiority of the new features added to the proposed algorithm. Therefore, prior to the performance assessment of the proposed algorithm and comparison with the existing algorithms, an empirical investigation is performed in the next section.

5. Empirical investigations

5.1. Parameter tuning

Determining parameter values that yield efficient performance on the problems at hand has been a challenging issue in the field of evolutionary optimization. As discussed earlier in Section 3.6, population size is one of the critical factors of FCA; it is therefore essential to determine a suitable population size. First of all, however, it is important to decide the proper representation scheme for numerical optimization problems. In the upcoming sections, experiments are performed to choose a proper representation scheme and an appropriate population size, respectively.

5.1.1. Representation scheme for benchmark functions

How to encode a solution of the problem into an antibody is a key issue when applying a CA (Gen and Cheng, 1997). Choosing an appropriate representation scheme among the available schemes requires careful analysis, to ensure a proper convergence rate and better-quality solutions. Among the various representation schemes, binary and decimal codings are the most used for constrained optimization problems.

In this section, empirical studies using binary and decimal coding have been carried out to evaluate the relative strengths and weaknesses of FCA over the first 23 benchmark functions, adopted from Yao and Liu (1999). In all experimental setups, the initial parameters have been kept the same for both schemes. The average results of 20 runs for the benchmark functions are summarized in Table 4.

From the table, it is evident that almost similar results are obtained in both cases. However, the major outcome of the above analysis relates mainly to the complexity of the problem inherent in the order of the building blocks (BBs) or the number of variables. It is therefore concluded that binary coding is suitable for problems involving lower-order BBs or a small number of variables, whereas decimal coding performs better for problems involving a large number of variables and complex multimodal functions. Therefore, in the upcoming sections all experiments have been performed using decimal coding, with the number of variables as provided in Yao and Liu (1999). Once the representation scheme is decided, the appropriate population size is determined in Section 5.1.2.

Table 4
Comparison between binary and decimal coding on 13 test functions

Binary coding (No. of variables: 2 and 30):

Function | n = 2: Mean, Std. Dev | n = 30: Mean, Std. Dev
1  | 0.0, 0.0 | 0.0, 0.0
2  | 0.0, 0.0 | 2.45×10^−6, 3.18×10^−6
3  | 0.0, 0.0 | 4.31×10^−4, 1.09×10^−5
4  | 0.0, 0.0 | 0.0318, 2.348×10^−3
5  | 0.14, 0.008 | 24.13, 10.56
6  | 0.0, 0.0 | 1.32×10^−2, 0.845
7  | 5.42×10^−11, 4.28×10^−12 | 3.817×10^−2, 4.238×10^−3
8  | −12569.49, 0.001 | −12532.59, 2.18
9  | 0.0, 0.0 | 1.78, 2.69
10 | 0.0, 0.0 | 7.71×10^−2, 1.03×10^−2
11 | 0.0, 0.0 | 3.29×10^−2, 4.65×10^−2
12 | 0.0, 0.0 | 4.37×10^−2, 3.93×10^−3
13 | 0.0, 0.0 | 1.06×10^−3, 2.45×10^−3

5.1.2. Population size

As discussed in Section 3.6, population size is one of the important factors determining the effectiveness of the algorithm. It therefore becomes necessary to determine the optimal population size, so as to avoid premature convergence as well as additional computation. The set of 23 benchmark functions has been used to analyze the effect of population size on the performance of the algorithm. As testing the algorithm for every population size is a time-consuming and cumbersome task, experiments have been performed with population sizes 10, 25, 50 and 100. The effect of population size on the benchmark functions is given in Table 5. From this study, only a slight discrepancy in the fitness value is observed as the population size increases. The reason for the futility of a large population may be that, although increasing the population size decreases the number of generations needed to reach the optimal value, approximately the same number of function evaluations is still required. Therefore, in this paper all experiments have been performed with a population size of 25 strings.

5.2. Analyses of proposed FCA

In this section, experiments are performed to test the superiority of the proposed strategies, viz. the new chaotic generator and the PM operator.

5.2.1. Chaotic generator

As pointed out in Section 2, owing to certain demerits of RNGs, the authors opted for chaotic generators, as they provide faster convergence by establishing a proper balance between exploration and exploitation (Caponetto et al., 2003). A comparative study has been carried out among the chaotic functions described earlier.
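As an illustration of the kind of generator being compared, a minimal logistic-map sequence in its fully chaotic form (r = 4) can be written as follows; the paper's proposed generator, which combines traits of several maps, is not reproduced here:

```python
def logistic_sequence(x0, n, r=4.0):
    """Generate n chaotic numbers in (0, 1) with the logistic map x <- r*x*(1-x)."""
    # 0.5 maps to 1 and then to 0; 0.25 and 0.75 lead to the fixed point 0.75.
    assert 0.0 < x0 < 1.0 and x0 not in (0.25, 0.5, 0.75)
    xs = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs
```

The sequence is fully deterministic for a given seed value x0, which is what distinguishes chaotic generators from ordinary RNGs while still covering the unit interval densely.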

Table 4 (continued)

Decimal coding (No. of variables: 2 and 30):

Function | n = 2: Mean, Std. Dev | n = 30: Mean, Std. Dev
1  | 0.0, 0.0 | 0.0, 0.0
2  | 0.0, 0.0 | 0.0, 0.0
3  | 0.0, 0.0 | 4.54×10^−7, 1.47×10^−7
4  | 0.0, 0.0 | 0.0, 0.0
5  | 0.19, 0.018 | 2.79, 2.12
6  | 0.0, 0.0 | 0.0, 0.0
7  | 1.122×10^−9, 3.762×10^−10 | 5.95×10^−6, 3.07×10^−6
8  | −12569.49, 0.001 | −12569.43, 0.016
9  | 0.0, 0.0 | 0.0, 0.0
10 | 0.0, 0.0 | 1.61×10^−7, 4.01×10^−7
11 | 0.0, 0.0 | 0.0, 0.0
12 | 0.0, 0.0 | 7.94×10^−11, 4.25×10^−12
13 | 0.0, 0.0 | 2.65×10^−14, 8.17×10^−14


Table 5
Comparison of proposed FCA with different population sizes (M: mean; S: std. dev.)

Fn | Pop 10 (M, S) | Pop 25 (M, S) | Pop 50 (M, S) | Pop 100 (M, S)
1  | 0.0, 0.0 | 3.18×10^−11, 2.67×10^−12 | 5.67×10^−9, 5.12×10^−9 | 2.93×10^−10, 3.73×10^−10
2  | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 3.74×10^−7, 1.74×10^−8
3  | 6.41×10^−5, 9.38×10^−6 | 4.53×10^−6, 1.52×10^−6 | 5.93×10^−5, 2.52×10^−5 | 1.39×10^−5, 2.79×10^−5
4  | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0
5  | 3.16, 2.97 | 2.74, 2.49 | 2.98, 2.12 | 2.86, 3.07
6  | 2.18×10^−8, 5.97×10^−9 | 0.0, 0.0 | 0.0, 0.0 | 4.53×10^−6, 1.52×10^−6
7  | 8.97×10^−6, 1.52×10^−6 | 5.95×10^−6, 3.07×10^−6 | 6.33×10^−6, 4.45×10^−6 | 9.87×10^−6, 6.47×10^−6
8  | −12568.91, 0.24 | −12569.43, 0.016 | −12569.01, 0.098 | −12568.41, 0.37
9  | 3.11×10^−3, 1.81×10^−3 | 0.0, 0.0 | 7.78×10^−4, 1.07×10^−4 | 0.0, 0.0
10 | 2.72×10^−6, 1.97×10^−6 | 5.05×10^−7, 2.15×10^−7 | 8.97×10^−7, 1.52×10^−8 | 6.09×10^−6, 2.34×10^−6
11 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 1.96×10^−3, 8.09×10^−4
12 | 4.81×10^−10, 1.88×10^−10 | 7.94×10^−11, 4.25×10^−12 | 1.39×10^−11, 6.63×10^−11 | 8.37×10^−10, 9.92×10^−11
13 | 1.59×10^−11, 3.90×10^−12 | 2.65×10^−14, 8.17×10^−14 | 6.21×10^−10, 3.37×10^−10 | 2.87×10^−12, 2.85×10^−12
14 | 1.12, 1.75×10^−3 | 1.13, 3.65×10^−3 | 1.14, 6.32×10^−3 | 1.16, 4.98×10^−2
15 | 3.21×10^−4, 7.42×10^−5 | 3.17×10^−4, 8.23×10^−6 | 3.17×10^−4, 7.28×10^−6 | 3.19×10^−4, 2.25×10^−3
16 | −1.03148, 8.96×10^−4 | −1.03156, 9.63×10^−6 | −1.0313, 1.12×10^−4 | −1.03151, 2.27×10^−5
17 | 0.411, 9.39×10^−3 | 0.407, 5.60×10^−3 | 0.424, 7.46×10^−2 | 0.451, 1.94×10^−2
18 | 3.181, 6.48×10^−3 | 3.0129, 7.73×10^−4 | 3.099, 5.13×10^−3 | 3.147, 6.02×10^−2
19 | −3.7554, 2.32×10^−5 | −3.7418, 2.61×10^−5 | −3.7422, 9.22×10^−4 | −3.6912, 6.41×10^−4
20 | −3.297, 8.51×10^−6 | −3.3119, 3.42×10^−6 | −3.301, 6.35×10^−5 | −3.314, 7.5×10^−6
21 | −9.8528, 0.0273 | −9.9178, 0.0283 | −9.9281, 0.0908 | −9.9528, 0.0218
22 | −9.9197, 0.01278 | −9.9314, 0.0644 | −9.8471, 0.0833 | −9.8197, 0.0384
23 | −9.9621, 0.0767 | −9.8623, 0.0106 | −9.8844, 0.0225 | −9.8911, 0.0169


The analysis of the chaotic functions has been carried out on all the benchmark functions belonging to the different categories. The comparative results for the proposed chaotic operator and the existing operators (i.e. tent, logistic and sinusoidal) are provided in Table 6.

From the above experiments, it is clear that the various chaotic generators perform differently on the different categories of function. To make the algorithm generic, a new generator comprising the traits of the existing chaotic generators is proposed in this paper; it performs better than any of the existing generators given in Table 1.

5.2.2. PM operator

The fundamental concepts of IAs hold that hypermutation introduces random changes into the antibodies as per their antigenic affinity. Taking this notion into consideration, we have proposed a PM operator based on the parallel action of two mutation operators, viz. GM and CM. As pointed out in Section 3.4, CM has a higher probability of making longer jumps than GM. This can be verified by observing the expected lengths of Gaussian and Cauchy jumps, calculated as follows

(Yao and Liu, 1999):

E(|x|)Gaussian = 2·∫₀^∞ x·(1/√(2π))·e^(−x²/2) dx = 2/√(2π) = 0.80,  (26)

E(|x|)Cauchy = 2·∫₀^∞ x·1/(π·(1 + x²)) dx = +∞  (27)

(i.e. the expectation does not exist). Thus, GM is more localized than CM. It is also clear from Fig. 6 that GM generates individuals near their parent, owing to the large hill toward the centre, whereas CM generates individuals far away from their parent, owing to its long tail (Yao and Liu, 1999; Gog and Dumitrescu, 2004). From the above, it is clear that CM is good at dealing with plateaus and many local optima. Therefore, CM is used to perform mutation on low-affinity antibodies: it helps in escaping from local optima and requires less computation time in exploiting the local neighborhood. GM, on the other hand, owing to its fine-tuning capability and small step size, helps the algorithm reach the global optimum. From the above discussion, it is clear that the proposed PM operator satisfies the necessary condition of hypermutation, i.e. mutation as per antigenic affinity.

For validation of the above analysis, experiments have been performed on the same functions belonging to the different categories. The analysis is shown in Table 7. From


Table 6
Comparison of different chaotic strategies (M: mean; S: std. dev.)

Fn | Logistic (M, S) | Tent (M, S) | Sinusoidal (M, S) | Proposed (M, S)
1  | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0
2  | 0.0, 0.0 | 4.57×10^−11, 8.97×10^−12 | 5.59×10^−10, 1.08×10^−10 | 0.0, 0.0
3  | 5.57×10^−6, 2.59×10^−6 | 6.97×10^−7, 1.87×10^−7 | 4.47×10^−5, 8.97×10^−6 | 4.72×10^−7, 1.19×10^−7
4  | 0.0, 0.0 | 0.0, 0.0 | 1.14×10^−4, 8.58×10^−5 | 2.25×10^−4, 3.54×10^−5
5  | 2.74, 2.05 | 2.99, 1.18 | 3.11, 2.45 | 2.81, 1.87
6  | 4.54×10^−8, 8.88×10^−9 | 4.94×10^−8, 7.15×10^−9 | 0.0, 0.0 | 0.0, 0.0
7  | 6.67×10^−5, 8.94×10^−6 | 4.58×10^−5, 7.28×10^−6 | 8.27×10^−6, 1.96×10^−6 | 8.18×10^−6, 5.05×10^−6
8  | −12566.45, 0.17 | −12568.49, 0.077 | −12567.14, 0.24 | −12568.97, 0.055
9  | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0
10 | 8.74×10^−7, 4.57×10^−7 | 2.15×10^−6, 1.07×10^−6 | 8.17×10^−6, 4.48×10^−6 | 3.22×10^−7, 1.28×10^−7
11 | 3.65×10^−3, 1.19×10^−3 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0
12 | 8.15×10^−11, 3.58×10^−11 | 9.21×10^−10, 1.84×10^−10 | 4.17×10^−10, 9.20×10^−11 | 7.78×10^−11, 4.14×10^−12
13 | 2.90×10^−13, 7.68×10^−14 | 7.33×10^−12, 5.84×10^−12 | 4.09×10^−12, 1.77×10^−12 | 1.21×10^−13, 9.27×10^−14
14 | 1.11, 7.06×10^−2 | 1.06, 4.16×10^−3 | 1.08, 8.75×10^−2 | 1.03, 6.04×10^−3
15 | 3.98×10^−4, 8.48×10^−5 | 3.13×10^−4, 5.04×10^−3 | 3.27×10^−4, 8.75×10^−6 | 3.15×10^−4, 4.13×10^−6
16 | −1.03181, 8.89×10^−3 | −1.02714, 5.40×10^−3 | −1.02514, 1.62×10^−2 | −1.03147, 2.10×10^−3
17 | 0.433, 8.89×10^−2 | 0.417, 5.54×10^−3 | 0.421, 2.67×10^−2 | 0.418, 4.44×10^−3
18 | 3.048, 7.20×10^−3 | 3.078, 5.15×10^−3 | 3.147, 9.97×10^−2 | 3.054, 2.64×10^−4
19 | −3.6418, 2.61×10^−3 | −3.6812, 6.64×10^−3 | −3.7001, 9.87×10^−3 | −3.7015, 3.44×10^−4
20 | −3.116, 9.72×10^−4 | −3.187, 6.29×10^−4 | −3.256, 5.13×10^−3 | −3.249, 7.59×10^−4
21 | −9.9281, 0.0273 | −9.9454, 0.0283 | −9.9381, 0.0315 | −9.9498, 0.0218
22 | −9.9171, 0.02781 | −9.9254, 0.01140 | −9.9215, 0.04312 | −9.9207, 0.0781
23 | −9.9601, 0.0617 | −9.9617, 0.0415 | −9.9548, 0.0344 | −9.9615, 0.0254

Table 7
Comparison of different mutation strategies (M: mean; S: std. dev.)

Fn | Gaussian (M, S) | Cauchy (M, S) | Point (M, S) | Proposed (M, S)
1  | 0.0, 0.0 | 8.17×10^−10, 6.37×10^−10 | 2.15×10^−8, 3.58×10^−10 | 0.0, 0.0
2  | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0
3  | 4.21×10^−6, 2.54×10^−6 | 6.97×10^−5, 2.36×10^−6 | 7.23×10^−5, 5.32×10^−5 | 5.89×10^−7, 7.89×10^−7
4  | 0.0, 0.0 | 0.0, 0.0 | 2.36×10^−4, 1.12×10^−5 | 0.0, 0.0
5  | 2.84, 2.34 | 2.74, 2.49 | 2.98, 2.12 | 2.76, 2.07
6  | 5.26×10^−8, 5.97×10^−9 | 1.94×10^−8, 6.46×10^−9 | 0.0, 0.0 | 0.0, 0.0
7  | 4.18×10^−5, 5.26×10^−6 | 1.12×10^−5, 6.04×10^−6 | 7.73×10^−6, 5.75×10^−6 | 9.18×10^−6, 8.43×10^−6
8  | −12568.94, 0.15 | −12568.19, 0.27 | −12568.27, 0.190 | −12569.1, 0.055
9  | 2.47×10^−3, 1.14×10^−3 | 0.0, 0.0 | 2.98×10^−2, 6.40×10^−3 | 0.0, 0.0
10 | 9.77×10^−7, 8.87×10^−8 | 9.05×10^−7, 8.28×10^−7 | 7.18×10^−6, 3.54×10^−6 | 3.56×10^−6, 2.14×10^−6
11 | 0.0, 0.0 | 0.0, 0.0 | 2.58×10^−3, 7.28×10^−4 | 0.0, 0.0
12 | 8.39×10^−11, 8.53×10^−11 | 8.21×10^−10, 2.48×10^−10 | 7.87×10^−10, 4.42×10^−11 | 8.79×10^−11, 2.65×10^−12
13 | 6.91×10^−11, 2.64×10^−12 | 4.97×10^−12, 2.58×10^−12 | 8.54×10^−10, 4.98×10^−10 | 4.36×10^−13, 7.75×10^−14
14 | 1.18, 4.71×10^−1 | 1.06, 1.95×10^−2 | 1.29, 6.72×10^−2 | 1.06, 3.83×10^−2
15 | 4.01×10^−4, 7.42×10^−5 | 3.33×10^−4, 2.25×10^−3 | 4.48×10^−4, 7.28×10^−5 | 3.19×10^−4, 8.23×10^−6
16 | −1.02118, 2.45×10^−3 | −1.02921, 9.51×10^−3 | −1.01141, 2.25×10^−2 | −1.03151, 4.15×10^−3
17 | 0.501, 7.46×10^−2 | 0.444, 9.39×10^−3 | 0.551, 1.94×10^−2 | 0.411, 5.60×10^−3
18 | 3.251, 8.59×10^−3 | 3.178, 4.58×10^−3 | 3.254, 3.39×10^−2 | 3.0119, 8.56×10^−4
19 | −3.2221, 1.87×10^−3 | −3.4418, 3.66×10^−3 | −3.1912, 8.70×10^−3 | −3.7448, 2.92×10^−4
20 | −2.982, 8.79×10^−4 | −3.117, 7.09×10^−4 | −2.687, 7.73×10^−3 | −3.277, 2.26×10^−4
21 | −9.8001, 0.0568 | −9.8599, 0.0687 | −9.754, 0.170 | −9.9045, 0.0109
22 | −9.3667, 0.0595 | −9.8541, 0.092 | −9.6617, 0.1177 | −9.9233, 0.0879
23 | −9.7821, 0.0247 | −9.8441, 0.0355 | −9.7459, 0.109 | −9.9115, 0.0306


this table, we can easily perceive the performance of the PM operator as applied to the various functions. It is observed that, in comparison with the CM and GM operators, the PM operator strikes a tradeoff between the two, reaching the optimal solution more speedily than CM and with a better outcome than GM.
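The contrast between the two jump distributions formalized in Eqs. (26) and (27) can also be checked numerically. In this illustrative sketch (the seed and sample size are arbitrary choices, not from the paper), the mean absolute Gaussian jump settles near 2/√(2π) ≈ 0.80, whereas the mean absolute Cauchy jump is dominated by rare huge jumps and does not settle:

```python
import math
import random

random.seed(42)
n = 200_000

# Mean |N(0, 1)| jump: converges to 2/sqrt(2*pi) ~= 0.7979 (Eq. (26)).
gauss_mean = sum(abs(random.gauss(0.0, 1.0)) for _ in range(n)) / n

# Mean |Cauchy| jump: the integral in Eq. (27) diverges, so the sample
# mean is inflated by rare huge jumps and does not converge.
cauchy_mean = sum(abs(math.tan(math.pi * (random.random() - 0.5)))
                  for _ in range(n)) / n

print(round(gauss_mean, 3))  # close to 0.80
```

Rerunning with different seeds leaves the Gaussian estimate essentially unchanged while the Cauchy estimate swings wildly, which is the practical signature of the divergent expectation.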


6. Performance assessment of FCA and comparison with other algorithms

The performance of the proposed FCA is tested by implementing it on the 23 benchmark functions and comparing the results with the existing IAs, i.e. SIA and CA. For each test function, 20 runs are performed and the average outcome is plotted over 10^5 function evaluations. The experimental results for the functions of the different categories are provided in the following subsections.

6.1. Unimodal functions

In this subsection, comparative analyses have been performed between the proposed FCA and the existing algorithms CA and SIA for the unimodal functions (f1-f7). The progress of the mean best solution (averaged over 20 runs) with the number of evaluations is shown in Fig. A1 (Appendix A). From the figure, it can easily be seen that FCA performs much better than CA and SIA in terms of convergence speed and quality of solution.

6.2. Multimodal functions

Multimodal functions have been introduced to check the performance of the algorithm in escaping from local optima and locating global optima. The basic reason for introducing these functions is to test the algorithm's characteristics in continuous, multimodal search spaces. The multimodal functions are divided into two categories on the basis of their functional behavior, described in the following subsections.

6.2.1. Multimodal functions with many local minima

The multimodal functions (f8-f13) included in our experiments are combinations of unimodal and multimodal functions; they share their characteristics with real-world optimization problems. A comparison for these functions among the proposed and existing algorithms is given in Fig. A2 (Appendix A). It is obvious from the graph that our algorithm performs much better than the existing algorithms, owing to the advanced mutation strategy and the elitist-based immune memory, which effectively explore and exploit the search space without depending on the silhouette of the search space.

6.2.2. Multimodal functions with few local minima

To evaluate the performance of our algorithm on complex surfaces, which require proper exploration and exploitation, certain functions with few local minima (f14-f23) are included in the experimentation. Fig. A3(a and b) (Appendix A) shows the comparison among the results obtained by solving these functions, averaged over 20 runs. These functions appear simple and similar to the unimodal functions, but a major difference is observed in their convergence trends: the convergence graphs contain long standstill points in comparison with the unimodal functions. Further, on comparing the trends of FCA and the existing algorithms, it is observed that CA and SIA remain stuck for a large number of generations, whereas FCA appears to converge at least at a linear rate.

Finally, it is observed that, in spite of certain crash points in the convergence trends, FCA outperformed CA and SIA in terms of convergence speed and quality of solution on all the functions taken into consideration.

6.3. Empirical evidence

To validate the above experiments empirically, the mean best and the standard deviation of the fitness values obtained in the last generation of the algorithm are provided in this section. Table 8 shows the experimental results over 20 independent runs and 10^5 evaluations. Here the proposed algorithm is compared with SIA, CA and opt-IMMALG. Opt-IMMALG is a recently developed IA that utilizes the concepts of a cloning operator, inversely proportional hypermutation, an aging operator and real coding of variables. It is evident from the table that FCA shows superior performance compared with SIA and CA; in the case of opt-IMMALG, except for a few test functions, the proposed FCA outperforms that algorithm.

Finally, after the above experiments it is apparent that the proposed algorithm gives superior performance on numerical optimization problems. In order to prove the robustness and reliability of the proposed algorithm for permutation coding, a combinatorial optimization problem is considered in this paper. In Section 7, a machine-loading problem of FMS is considered to

- test the proposed algorithm on real-world problems;
- test the efficiency of FCA using the permutation representation scheme.

7. Machine-loading problem

Machine loading is one of the important operational decisions in FMSs; it deals with the allocation of the operations of jobs, from a given pool of jobs, to alternative machines. The allocation should be done considering several machining and tooling constraints. In the literature, heuristic procedures have tackled the machine-loading problem by dividing it into three sub-problems (Shanker and Srinivasulu, 1989; Swarnkar and Tiwari, 2004; Tiwari et al., 2006): (1) job sequence determination, (2) operation allocation to machines, and (3) reallocation of jobs. The combination of these sub-problems makes the machine-loading problem NP-hard (Shanker and Srinivasulu, 1989) in nature. In order to reduce the complexity associated with


Table 8
Comparison of proposed algorithm and existing clonal algorithms (M: mean; S: std. dev.)

Fn | CA (M, S) | SIA (M, S) | FCA (M, S) | Opt-IMMALG (M, S) | Minimum (Yao and Liu, 1999)
1  | 7.36×10^−8, 5.15×10^−8 | 2.31×10^−8, 8.97×10^−9 | 0.0, 0.0 | 0.0, 0.0 | 0
2  | 0.0, 0.0 | 6.64×10^−8, 2.42×10^−8 | 0.0, 0.0 | 0.0, 0.0 | 0
3  | 6.34×10^−7, 2.49×10^−7 | 5.48×10^−7, 1.52×10^−7 | 4.53×10^−7, 1.52×10^−7 | 0.0, 0.0 | 0
4  | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0
5  | 14.17, 9.18 | 21.39, 16.39 | 2.74, 2.49 | 16.29, 13.96 | 0
6  | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0.0, 0.0 | 0
7  | 4.18×10^−5, 2.85×10^−5 | 2.31×10^−4, 5.18×10^−5 | 5.95×10^−6, 3.07×10^−6 | 1.99×10^−5, 2.35×10^−5 | 0
8  | −12528.94, 21.48 | −12518.19, 39.21 | −12569.46, 0.016 | −12535.15, 62.81 | −12569.5
9  | 0.18, 0.75 | 1.12, 6.214 | 0.0, 0.0 | 0.596, 4.178 | 0
10 | 7.15×10^−4, 1.27×10^−4 | 1.56×10^−3, 3.12×10^−4 | 1.56×10^−7, 3.12×10^−7 | 0.0, 0.0 | 0
11 | 4.18×10^−4, 2.02×10^−4 | 6.47×10^−3, 7.04×10^−4 | 0.0, 0.0 | 0.0, 0.0 | 0
12 | 3.27×10^−7, 8.54×10^−8 | 6.57×10^−3, 5.44×10^−3 | 7.94×10^−11, 4.25×10^−12 | 1.77×10^−21, 8.77×10^−24 | 0
13 | 5.06×10^−8, 2.81×10^−8 | 9.10×10^−5, 4.57×10^−7 | 2.65×10^−14, 8.17×10^−14 | 1.68×10^−21, 5.37×10^−21 | 0
14 | 1.21, 9.50×10^−1 | 1.34, 2.81×10^−1 | 1.04, 3.65×10^−2 | 0.998, 1.1×10^−3 | 1
15 | 3.31×10^−4, 4.28×10^−5 | 3.76×10^−4, 2.03×10^−4 | 3.17×10^−4, 8.23×10^−6 | 3.2×10^−4, 2.67×10^−5 | 0.0003075
16 | −1.02377, 1.58×10^−4 | −1.01281, 5.24×10^−2 | −1.03156, 8.36×10^−6 | −1.013, 2.21×10^−2 | −1.03163
17 | 0.501, 2.87×10^−3 | 0.527, 8.54×10^−1 | 0.401, 1.16×10^−3 | 0.423, 3.21×10^−2 | 0.398
18 | 3.8791, 6.86×10^−3 | 6.15813, 4.0252 | 3.0129, 2.15×10^−4 | 5.837, 3.742 | 3
19 | −3.1287, 1.29×10^−5 | −2.9871, 5.90×10^−1 | −3.7628, 1.29×10^−5 | −3.72, 7.84×10^−3 | −3.86
20 | −2.9451, 9.81×10^−1 | −2.5418, 1.07 | −3.3119, 6.5×10^−6 | −3.292, 3.09×10^−2 | −3.32
21 | −9.8817, 0.1521 | −9.2677, 0.2252 | −9.9244, 0.0452 | −10.153, 1.03×10^−7 | −10
22 | −9.2691, 0.2841 | −8.9342, 0.8972 | −9.9438, 0.0384 | −10.402, 1.03×10^−5 | −10
23 | −9.1528, 0.2987 | −8.6654, 1.1503 | −9.9622, 0.0503 | −10.536, 1.16×10^−3 | −10


the machine-loading problem, certain assumptions have been made, described as under:

- The processing requirements for all the jobs are known.
- The loading and unloading times of the jobs are negligible.
- Jobs are not split.
- Job routing is unique.
- Machine grouping, tool sharing and tool duplication are not considered.

7.1. Modeling the loading problem

In this paper, the objective functions are system unbalance and throughput. In computing the system unbalance, both overloading and underloading of machines have been considered. The following cases arise when a machine is subjected to overloading or underloading:

- Underloading increases the system unbalance with a reduction in throughput.
- Overloading increases the system unbalance and the throughput simultaneously.

Hence, the overall objective function should be a logical combination of both objectives. Before describing the objectives of the underlying problem, the notations used to describe it are given as follows:

A. Notations:

SU: system unbalance
TH: throughput
BSj: batch size of job j
SUmax: maximum system unbalance (1920 min)
TMmo: over-utilized time on machine m
Mjo: set of machines for performing the oth operation of job j
Oj: set of operations for job j
TSjoma: tool slots available on machine m for performing the oth operation of job j
TSjomr2: tool slots remaining on machine m after performing the oth operation of job j
TSjomr1: tool slots required on machine m for performing the oth operation of job j
TMmu: under-utilized time on machine m

B. Decision variables:

aj = 1 if job j is selected; 0 otherwise.


Table 9
Comparison of proposed FCA with heuristic procedures

Pr. No. | Batch size | A (SU, TH) | B (SU, TH) | C (SU, TH) | Proposed FCA (SU, TH)
1  | 8 | 253, 39 | 122, 42 | 76, 42 | 14, 48
2  | 6 | 388, 51 | 202, 63 | 234, 63 | 234, 63
3  | 5 | 288, 63 | 286, 79 | 152, 69 | 72, 69
4  | 5 | 819, 51 | 819, 51 | 819, 51 | 819, 51
5  | 6 | 467, 62 | 364, 76 | 264, 61 | 364, 76
6  | 6 | 548, 51 | 365, 62 | 314, 63 | 69, 64
7  | 6 | 189, 54 | 147, 66 | 996, 48 | 189, 63
8  | 7 | 459, 36 | 459, 36 | 158, 43 | 63, 48
9  | 7 | 462, 79 | 315, 88 | 309, 88 | 309, 88
10 | 6 | 518, 44 | 320, 56 | 166, 55 | 122, 56

A: Shanker and Srinivasulu (1989); B: Mukopadhyay et al. (1992); C: Tiwari et al. (1997).


b_{ojm} = \begin{cases} 1, & \text{if operation } o \text{ of job } j \text{ is assigned to machine } m, \\ 0, & \text{otherwise.} \end{cases}

C. Formulation of objective function and constraints:

In this paper, a bi-criteria approach has been applied to solve the underlying problem. The objectives are:

1. Minimization of system unbalance:

Min SU = \sum_{m=1}^{M} (TM_m^u + TM_m^o),

Max F_1 = \frac{SU_{max} - \sum_{m=1}^{M} (TM_m^u + TM_m^o)}{SU_{max}}.   (28)

The objective given in Eq. (28) is a measure of system unbalance. The minimization of system unbalance has been converted into a maximization problem by subtracting the system unbalance from the maximum system unbalance (SU_{max}); to normalize the resulting value, it is further divided by SU_{max}.

2. Maximization of throughput:

Max TH = \sum_{j=1}^{J} (BS_j \times a_j),

Max F_2 = \frac{\sum_{j=1}^{J} (BS_j \times a_j)}{\sum_{j=1}^{J} BS_j}.   (29)

The objective maximization of throughput is normalized to the range (0, 1) by dividing the resulting throughput by the maximum possible throughput (see Eq. (29)). The overall objective function is

Max F = \frac{(W_1 \times F_1) + (W_2 \times F_2)}{W_1 + W_2}
      = \frac{W_1 \frac{SU_{max} - \sum_{m=1}^{M} (TM_m^u + TM_m^o)}{SU_{max}} + W_2 \frac{\sum_{j=1}^{J} (BS_j \times a_j)}{\sum_{j=1}^{J} BS_j}}{W_1 + W_2}.   (30)

The overall objective shown in Eq. (30) is the weighted combination of the objective functions given in Eqs. (28) and (29). In Eq. (30), W_1 and W_2 denote the relative importance of system unbalance and throughput, respectively. Presently, equal weights (i.e. W_1 = W_2 = 1) have been assigned to both objectives. The afore-mentioned objective functions are subject to the following set of constraints.
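As a concrete illustration, Eqs. (28)–(30) can be evaluated for a candidate loading plan in a few lines of code. This is a sketch under assumed data structures (plain lists of per-machine times and per-job batch sizes and selections), not the authors' implementation:

```python
# Sketch of the bi-criteria objective of Eqs. (28)-(30); the function name
# and argument layout are illustrative assumptions.

SU_MAX = 1920.0  # maximum system unbalance in minutes, as per the notation

def combined_objective(tm_u, tm_o, batch_sizes, selected, w1=1.0, w2=1.0):
    """Return (F1, F2, F) for one candidate loading plan.

    tm_u[m], tm_o[m] : under-/over-utilized time on machine m
    batch_sizes[j]   : batch size BS_j of job j
    selected[j]      : decision variable a_j (1 if job j is selected)
    """
    su = sum(u + o for u, o in zip(tm_u, tm_o))   # raw system unbalance, Eq. (28)
    f1 = (SU_MAX - su) / SU_MAX                   # normalized, to be maximized
    throughput = sum(bs * a for bs, a in zip(batch_sizes, selected))
    f2 = throughput / sum(batch_sizes)            # normalized throughput, Eq. (29)
    f = (w1 * f1 + w2 * f2) / (w1 + w2)           # weighted combination, Eq. (30)
    return f1, f2, f
```

With equal weights, a plan that leaves 200 min of unbalance and ships 8 of 14 parts scores F = (F1 + F2)/2, matching Eq. (30) term by term.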

1. Machining constraints:

\sum_{m=1}^{M} (TM_m^u + TM_m^o) \ge 0.   (31)

Constraint (31) ensures that the sum of the remaining time (over-utilized and under-utilized time) after allocation of all jobs is non-negative; its minimum possible value is zero.

\sum_{o=1}^{O_j} TM_{jom}^{r1} b_{jom} \le TM_{jom}^{a}, \quad \forall j.   (32)

Constraint (32) implies that the time available on any machine must always be greater than or equal to the time required by that machine for performing the operations of the jobs.

\sum_{m \in M_{jo}} b_{jom} \le 1, \quad \forall j, \forall O_j.   (33)

Constraint (33) shows that only one machine is selected from the given set of machines for performing an optional operation of a job.

\sum_{m=1}^{M} \sum_{o=1}^{O_j} \sum_{j=1}^{J} TM_{jom}^{r2} b_{jom} \ge 0, \quad \forall j, \forall M, \forall O_j.   (34)

Constraint (34) implies that the sum of the remaining time on all the machines must be non-negative.

2. Tooling constraints:

\sum_{o=1}^{O_j} TS_{jom}^{r1} b_{jom} \le TS_{jom}^{a}.   (35)

Constraint (35) shows that the tool slots required by any machine for performing the operations of the jobs must be within the tool slots available on that machine.

TS_{jom}^{r2} b_{jom} \ge 0, \quad \forall j, \forall M, \forall O_j.   (36)

Constraint (36) expresses that the tool slots remaining on any machine after the completion of any operation of a job must not be less than zero.

3. Processing constraints:

\sum_{m=1}^{M} \sum_{o=1}^{O_j} b_{jom} = a_j \times O_j, \quad \forall j.   (37)


Constraint (37) implies that all the operations of a job are to be completed sequentially before considering the next job.
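A candidate assignment can be screened against constraints (31)–(37) before evaluating the objective. The sketch below assumes a simplified, hypothetical data layout (required times and tool slots keyed by (j, o, m), aggregate per-machine availabilities) and is not the paper's implementation:

```python
# Hedged sketch of a feasibility check for the loading constraints; the
# argument names and dictionary keys are illustrative assumptions.

def is_feasible(J, O, M_jo, a, b, tm_req, tm_avail, ts_req, ts_avail):
    """a[j] and b[(j, o, m)] are the 0/1 decision variables of Section 7.1."""
    for j in range(J):
        for o in range(O[j]):
            machines = M_jo[(j, o)]
            # Constraint (33): at most one machine per operation
            if sum(b.get((j, o, m), 0) for m in machines) > 1:
                return False
            for m in machines:
                if b.get((j, o, m), 0):
                    # Constraints (32)/(35): required time and tool slots
                    # must fit within what the machine has available
                    if tm_req[(j, o, m)] > tm_avail[m]:
                        return False
                    if ts_req[(j, o, m)] > ts_avail[m]:
                        return False
        # Constraint (37): a selected job has all its operations assigned
        assigned = sum(b.get((j, o, m), 0)
                       for o in range(O[j]) for m in M_jo[(j, o)])
        if assigned != a[j] * O[j]:
            return False
    return True
```

In an FCA run, such a check would reject infeasible antibodies before Eq. (30) is computed for the feasible ones.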

Table 9 depicts the results obtained by the proposed FCA for all 10 test problems. From the table, it is evident that the proposed FCA performs consistently better than the existing heuristic procedures for almost all the problems considered. Previous work on the machine-loading problem, along with detailed descriptions of all test problems, can be found in Shanker and Srinivasulu (1989) and Tiwari et al. (1997).

8. Conclusion

This paper presents a FCA with an advanced mutation strategy and elitism-based immune memory. The main focus of the present research is to find a more versatile and efficient methodology that maintains good memory and gives better-quality results with a fast convergence rate, and that is also capable of solving multimodal and combinatorial real-world optimization problems. The paper proposes a new model of applying a mutation operator based on the parallel action of GM (small-rate mutation) and CM (large-step mutation) that satisfies the necessary condition of hypermutation. Immune memory is used with an elitism mechanism to keep the best solution until, hopefully, a better solution is obtained. It also passes the current solution to the next generation when the solutions get entrapped in local minima.
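The parallel action of Gaussian and Cauchy mutation described above can be sketched as follows; the function names, step scale, and the elitist acceptance loop are illustrative assumptions, not the paper's exact PM operator:

```python
# Minimal sketch of parallel mutation (PM): each candidate is perturbed by
# a Gaussian step (small, local) and a Cauchy step (heavy-tailed, capable
# of long jumps) in parallel, and the fitter offspring survives.
import math
import random

def pm_mutate(x, fitness, scale=0.1):
    """Apply Gaussian and Cauchy mutation in parallel; keep the better child."""
    gauss_child = [xi + random.gauss(0.0, scale) for xi in x]
    # standard Cauchy deviate via the inverse-CDF method
    cauchy_child = [xi + scale * math.tan(math.pi * (random.random() - 0.5))
                    for xi in x]
    return min(gauss_child, cauchy_child, key=fitness)

# usage: minimize the sphere function from a fixed start
sphere = lambda v: sum(c * c for c in v)
x = [1.0, -2.0]
for _ in range(200):
    x = min(x, pm_mutate(x, sphere), key=sphere)  # elitist acceptance
```

The elitist acceptance step mirrors the immune-memory idea: the best antibody is never lost, so fitness is monotonically non-increasing over iterations.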

To enhance the efficacy of existing IAs, following the path laid out by Eiben et al. (1999), that is, ''I want to apply changing mutation rates, let me see how others did it'', the following modifications have been made to develop the proposed FCA:

- chaotic generation of antibodies;
- inclusion of the Roulette-wheel selection rule in the selection procedure;
- the PM operator;
- modified clone-generating and mutation-rate equations; and
- vital use of immune memory in preserving superior antibodies and passing their features to the next generation.

These attributes make the algorithm more robust and dynamic in nature. The effects of premature convergence, sub-optimal solutions and entrapment in local minima can be mitigated through parameters such as the power scale (α), the maximum number of clones (β), the updation bound (δ), the mutation rate, etc. The proposed algorithm was tested on standard functions taken from the literature. The results obtained exhibit better solutions and a higher convergence speed for the proposed approach. The comparative study with existing IAs, viz. SIA and CA, showed that the proposed algorithm outperforms both in terms of solution quality and convergence speed. The paper also analyses the advanced mutation strategy in depth and explains the aptness of the PM operator to the condition of hypermutation; this theoretical analysis is supported by experimental evidence in which the PM operator is compared with other existing mutation strategies. The paper further investigates various chaotic sequences and observes that a generator combining the traits of all the existing sequences performs better in terms of convergence and solution quality. Finally, the testing of FCA on the machine-loading problem confirms that it is applicable to real-life problems with prominent results. Hence, this paper proposes an algorithm that can search the solution space effectively and speedily without depending overly on the shape of the search/solution space (i.e. the difficulty of the problem).
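For illustration, the chaotic generation of antibodies mentioned in the first point can be sketched with a logistic map; the choice of map (r = 4, the fully chaotic regime) and the seed are assumptions here, since the paper combines the traits of several chaotic sequences:

```python
# Sketch of chaotic antibody initialization: a logistic map generates
# well-spread values in (0, 1) that are rescaled to the variable bounds.

def chaotic_antibodies(n, dim, lo, hi, seed=0.7):
    """Generate n antibodies in [lo, hi]^dim using the logistic map x <- 4x(1-x)."""
    x = seed  # seed must avoid the map's fixed/periodic points (0, 0.25, 0.5, 0.75, 1)
    pop = []
    for _ in range(n):
        antibody = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)              # logistic map iteration
            antibody.append(lo + (hi - lo) * x)  # rescale to [lo, hi]
        pop.append(antibody)
    return pop
```

Compared with uniform pseudo-random initialization, such chaotic sequences are deterministic yet non-repeating, which is the property the paper exploits to spread the initial population over the search space.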

Appendix A

The convergence plots of the unimodal functions (f1–f7) are shown in Fig. A1. Fig. A2 shows the convergence plots of the multimodal functions with many minima (f8–f13). Fig. A3 shows (a) the convergence plots of the multimodal functions with few minima (f14–f18) and (b) the convergence plots of the multimodal functions with few minima (f19–f23).

Fig. A1. Convergence plot of unimodal functions (f1–f7); each panel plots fitness value against number of evaluations for FCA, CA and SIA.

Fig. A2. Convergence plot of multimodal functions with many minima (f8–f13); each panel plots fitness value against number of evaluations for FCA, CA and SIA.

Fig. A3. (a) Convergence plot of multimodal functions with few minima (f14–f18). (b) Convergence plot of multimodal functions with few minima (f19–f23). Each panel plots fitness value against number of evaluations for FCA, CA and SIA.

Fig. A3. (Continued): panels f19–f23, fitness value against number of evaluations for FCA, CA and SIA.

References

Ahn, C.W., Ramakrishna, R.S., 2003. Elitism-based compact genetic algorithms. IEEE Transactions on Evolutionary Computation 7 (4), 367–385.

Back, T., Fogel, D., Michalewicz, Z., 1998. Handbook of Evolutionary Computation.

Back, T., Eiben, A.E., van der Vaart, N.A.L., 2000. An empirical study on GAs ''without parameters''. PPSN, 315–324.

Caponetto, R., Fortuna, L., Fazzino, S., Xibilia, G.M., 2003. Chaotic sequences to improve the performance of evolutionary algorithms. IEEE Transactions on Evolutionary Computation 7 (3), 289–304.

Chen, L., Aihara, K., 1995. Chaotic simulated annealing by a neural network model with transient chaos. Neural Networks 8, 915–930.

Cutello, V., Nicosia, G., 2002a. Multiple learning using immune algorithms. In: Proceedings of the Fourth International Conference on Recent Advances in Soft Computing, Nottingham, UK.

Cutello, V., Nicosia, G., 2002b. An immunological approach to combinatorial optimization problems. Lecture Notes in Computer Science 2527, 361–370.

Cutello, V., Morelli, G., Nicosia, G., Pavone, M., 2005. Immune algorithms with aging operators for the string folding problem and the protein folding problem. In: Proceedings of the Fifth European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP'05), Lecture Notes in Computer Science, vol. 3448, pp. 80–90.

Cutello, V., Nicosia, G., Pavone, M., 2006. Real coded clonal selection algorithm for unconstrained global optimization using a hybrid inversely proportional hypermutation operator. In: SAC'06, 23–27 April, Dijon, France.

Dasgupta, D., 1999. Artificial Immune Systems and their Applications. Springer, Berlin.

Dasgupta, D., Gonzalez, F., 2002. An immunity-based technique to characterize intrusions in computer networks. IEEE Transactions on Evolutionary Computation 6 (3).

DeCastro, L.N., Zuben, F.J.V., 2002. aiNet: an artificial immune network for data analysis. In: Abbas, H.A., Sarker, R.A., Newton, C.S. (Eds.), Data Mining: A Heuristic Approach, pp. 231–259.

DeCastro, L.N., Zuben, V., 2002. Learning and optimization using the clonal selection principle. IEEE Transactions on Evolutionary Computation, special issue on Artificial Immune Systems 6 (3), 239–251.

DeCastro, L.N., Zuben, V., 2005. Recent Developments in Biologically Inspired Computing. Idea Group Publishing, pp. 104–146.

Determan, J., Foster, A.J., 1999. Using chaos in genetic algorithms. In: Proceedings of the 1999 Congress on Evolutionary Computation, vol. 3. IEEE Press, Piscataway, NJ, pp. 2094–2101.

Eiben, A.E., Schoenauer, M., 2002. Evolutionary computing. Information Processing Letters 82, 1–6.

Eiben, A.E., Hinterding, R., Michalewicz, Z., 1999. Parameter control in evolutionary algorithms. IEEE Transactions on Evolutionary Computation 3 (2), 124–140.

Fogel, G.B., Corne, D.W., 2003. Evolutionary Computation in Bioinformatics. Morgan Kaufmann, San Francisco, CA.

Gen, M., Cheng, R., 1997. Genetic Algorithms and Engineering Design. Wiley, New York.

George, A.J.T., Grey, D., 1999. Receptor editing during affinity maturation. Immunology Today 20 (4).

Gog, A., Dumitrescu, D., 2004. Parallel mutation based genetic chromodynamics. Informatica XLIX (2).

Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA.

Harik, G.R., Lobo, F.G., Goldberg, D.E., 1999. The compact genetic algorithm. IEEE Transactions on Evolutionary Computation 3 (4), 287–297.

Hyun, D., Oh, S.Y., 2000. A new mutation rule for evolutionary programming motivated from backpropagation learning. IEEE Transactions on Evolutionary Computation 4 (2), 188–190.

Jong, K.D., 1975. An analysis of the behavior of a class of genetic adaptive systems. Ph.D. Thesis, University of Michigan, Ann Arbor, pp. 71–82.

Luo, C.Z., Shao, H.H., 2003. Evolutionary algorithms with chaotic perturbations. Control and Decision 15, 557–560.

Mukopadhyay, S.K., Midha, S., Krishna, V.A., 1992. A heuristic procedure for the loading problem in flexible manufacturing systems. International Journal of Production Research 30 (9), 2213–2228.

Muller, S.D., Marchetto, J., Airaghi, S., Koumoutsakos, P., 2002. Optimization based on bacterial chemotaxis. IEEE Transactions on Evolutionary Computation 6, 16–29.

Shanker, K., Srinivasulu, A., 1989. Some solution methodologies for a loading problem in a random FMS. International Journal of Production Research 27 (6), 1019–1034.

Swarnkar, R., Tiwari, M.K., 2004. Modeling machine loading problem of FMSs and its solution methodology using a hybrid tabu search and simulated annealing-based heuristic approach. Robotics and Computer-Integrated Manufacturing 20, 199–209.

Tiwari, M.K., Hazarika, B., Vidyarthi, N.K., Jaggi, P., Mukopadhyay, S.K., 1997. A heuristic solution approach to the machine loading problem of FMS and its Petri net model. International Journal of Production Research 35 (8), 2269–2284.

Tiwari, M.K., Kumar, S., Kumar, S., Prakash, Shankar, R., 2006. Solving part-type selection and operation allocation problems in an FMS: an approach using constraint-based fast simulated annealing algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part A 36 (6), 1170–1184.

Tiwari, M.K., Prakash, Kumar, A., Mileham, A.R., 2005. Determination of an optimal assembly sequence using the psychoclonal algorithm. Journal of Engineering Manufacture 219, 137–149.

Yao, X., Liu, Y., 1999. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation 3 (2), 82–102.