

This article was downloaded by: [McGill University Library]
On: 21 November 2014, At: 06:28
Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Cybernetics and Systems: An International Journal
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/ucbs20

PERFORMANCE EVALUATION OF GENETIC ALGORITHMS AND EVOLUTIONARY PROGRAMMING IN OPTIMIZATION AND MACHINE LEARNING
R. Abu-Zitar (a) & A. M. Al-Fahed Nuseirat (b)
(a) Computer Science Department, Al-Isra Private University, Amman, Jordan
(b) Faculty of Engineering, Al-Isra Private University, Amman, Jordan
Published online: 30 Nov 2010.

To cite this article: R. Abu-Zitar & A. M. Al-Fahed Nuseirat (2002) PERFORMANCE EVALUATION OF GENETIC ALGORITHMS AND EVOLUTIONARY PROGRAMMING IN OPTIMIZATION AND MACHINE LEARNING, Cybernetics and Systems: An International Journal, 33:3, 203-223, DOI: 10.1080/019697202753551611

To link to this article: http://dx.doi.org/10.1080/019697202753551611



PERFORMANCE EVALUATION OF GENETIC ALGORITHMS AND EVOLUTIONARY PROGRAMMING IN OPTIMIZATION AND MACHINE LEARNING

R. ABU-ZITAR

Computer Science Department, Al-Isra Private University, Amman, Jordan

A. M. AL-FAHED NUSEIRAT

Faculty of Engineering, Al-Isra Private University, Amman, Jordan

Genetic Algorithms (GAs) and Evolutionary Programming (EP) are investigated here in both optimization and machine learning. Adaptive and standard versions of the two algorithms are used to solve novel applications in search and rule extraction. Simulations and analysis show that, while both algorithms may look similar in many ways, their performance may differ for some applications. Mathematical modeling helps in gaining a better understanding of GA and EP applications. Proper tuning and loading is key to acceptable results. The ability to instantly adapt within an unpredictable and unstable search or learning environment is the most important feature of evolution-based techniques such as GAs and EP.

In recent years, both GA and EP have attracted many researchers from different orientations and interests. The strength of these evolution-based algorithms comes from their simplicity, flexibility, and applicability (Hinton and Nowlan 1997). GA and EP are very simplified models of how chromosomes and genes operate in living organisms. We know that nature's evolutionary algorithm has been working magnificently for

Address correspondence to A. M. Al-Fahed Nuseirat, Dean of Faculty of Engineering, Al-Isra Private University, P. O. Box 621286, 11162 Amman, Jordan.

Cybernetics and Systems: An International Journal, 33: 203-223, 2002. Copyright © 2002 Taylor & Francis. 0196-9722/02 $12.00 + .00


billions of years. All that we see around us, from all kinds of life and intelligence, is the result of this great evolution. Endless types, shapes, and forms of life are all reflections of what genetics carry. Through continuous adaptation to the surrounding environment, survival of the fittest is implemented, and only the best are allowed to reproduce. Offspring of the parents are expected to be more tolerant of the surrounding environment and, therefore, to adapt more easily. The fact that GAs and EP have been working successfully for billions of years puts a burden on our shoulders. Many questions need to be answered. How deep is our understanding, as computer scientists, of these algorithms? How efficiently can we mimic those algorithms on our computers? How can we utilize them in real-life applications and industry? At first, one may think that no work can be done without a thorough understanding of those algorithms. However, if we understand enough to deduce an efficient and useful search mechanism, then we are close to what we need. Simulating those algorithms on the computer requires a proper coding scheme, proper tuning of parameters, and efficient objective functions. A lot of work has been done regarding the aforementioned points (Davis 1991; Fogel 1991; Chin-Teng and Lee 1995). Many applications in global optimization have been solved with GA and EP (Fogel 1991; Rumelhart et al. 1986), in addition to a lot of literature investigating theoretical analysis and modeling (Holland 1986; Goldberg 1989).

Fraser (1957) and Bremermann (1962) were the first pioneers in simulating genetic systems and applying them to optimization. The GA, in its known form, was introduced by Holland (1975). His student, David Goldberg (Goldberg 1989; Booker, Goldberg, and Holland 1989), was one of the major contributors to the popularity of the GA in the AI community. His work made the GA available and accessible to readers at all levels. The GA in its simplest form consists of three basic operations: reproduction, crossover, and mutation. The basic building block in the GA is the "string": a sequence of bits representing variables of the search space. The bits themselves form the genotype and their decoded values are the phenotype. All GA operations are implemented over a finite population of strings. As the search process goes on, the average fitness of the population of strings is expected to increase. Fitness is measured using some objective function that is related to the criterion to be optimized. On the other hand, the EP suggested by Fogel (1991) uses a population of phenotype strings carrying the exact variables to be optimized. Reproduction is done after some


sorting phase, and offspring are generated using only mutation. The major differences between the GA and the EP are in (1) the encoding scheme, in which a unique encoding is not used for all problems, (2) reproduction, where a single offspring is generated for every parent in EP, and (3) the absence of crossover, with only a mutation operation used in EP. Next we show two demonstrations of each of the two algorithms in optimization and machine learning applications, but first we offer a brief description of GA and EP in their standard forms.

DESCRIPTION OF STANDARD GA AND STANDARD EP

The following flowcharts give brief descriptions of standard GA and EP (see Figure 1 and Figure 2, respectively). As mentioned earlier, the major differences, besides encoding and crossover, are in the number of offspring for each string. In standard EP each string reproduces one child after the reordering process, while in standard GA a chromosome may be selected for reproduction more than once and may reproduce different children. In advanced versions of EP, however, each string may reproduce an arbitrary number of offspring per parent.
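The contrast between the two reproduction schemes can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the list-of-floats population representation, and the survivor-selection details are our assumptions, not the paper's implementation.

```python
import random

def ep_generation(population, fitness, sigma=0.1):
    """One generation of a standard-EP-style step: every parent produces
    exactly one child by Gaussian mutation, the pool is reordered by
    fitness, and the best half survives."""
    children = [[x + random.gauss(0.0, sigma) for x in p] for p in population]
    pool = population + children
    pool.sort(key=fitness, reverse=True)   # the reordering (sorting) phase
    return pool[:len(population)]          # keep the original population size

def ga_selection(population, fitness):
    """GA-style fitness-proportionate selection: a fit string may be
    picked for reproduction more than once (unlike standard EP)."""
    weights = [fitness(p) for p in population]
    return random.choices(population, weights=weights, k=len(population))
```

Note how the EP step produces exactly one child per parent, while the GA selection step may sample the same chromosome several times.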

Standard GA Versus Standard EP in Optimization

A challenging maximization problem that is suitable as a benchmark for testing the standard GA and EP is maximizing the function shown below (Chin-Teng and Lee 1995):

f(x, y) = 0.5 - [sin^2((x^2 + y^2)^(1/2)) - 0.5] / [1.0 + 0.001(x^2 + y^2)]^2    (1)

over x in [-100, +100], y in [-100, +100]. This function has a wavy surface (Figure 3) and one global solution in a tiny area of the search space. We used 100 chromosomes with 44 bits each (44 bits are enough to provide an acceptable degree of accuracy in the decoded variables x and y). There is no systematic way to pick the exact number of bits for each chromosome; we did some experiments and relied on our previous experience to select that chromosome length. Crossover probability was chosen to be 0.65, mutation probability was 0.008, and the generation gap was equal to one (Davis 1991). The population converged gradually to an identical set of chromosomes after only four generations. The average


fitness of the population was initially around 0.3 and then moved up to around 0.99.
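Equation (1) and the binary decoding step translate directly into code. The sketch below is ours; in particular, splitting the 44-bit chromosome into 22 bits per variable is an assumption consistent with the two variables and ranges given.

```python
import math

def f(x, y):
    """Equation (1): a wavy surface whose single global maximum (f = 1.0)
    sits in a tiny region around the origin."""
    r2 = x * x + y * y
    return 0.5 - (math.sin(math.sqrt(r2)) ** 2 - 0.5) / (1.0 + 0.001 * r2) ** 2

def decode(bits, lo=-100.0, hi=100.0):
    """Decode one genotype sub-string (e.g., 22 bits of a 44-bit
    chromosome) into a phenotype value in [lo, hi]."""
    return lo + int(bits, 2) * (hi - lo) / (2 ** len(bits) - 1)
```

With 22 bits per variable, the decoding resolution over [-100, 100] is about 200 / (2^22 - 1), i.e., roughly 5e-5, which matches the paper's claim of acceptable accuracy.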

The same optimization problem was used for testing the standard EP. An initial population of 100 strings was used, each string carrying two parameters (x and y). Mutation is a random variable taken from a normal distribution with zero mean and a fixed standard deviation

Figure 1. The flowchart of a standard GA.


(around 10% of the x and y ranges). The standard EP first converged to a local maximum with fitness around 0.6. That took around five iterations; then, after around 16 iterations, it started to climb in fitness until it reached a fitness of 0.99 after 42 iterations. This simple case study shows the ability of both algorithms, in their standard and relatively simple forms, to solve deceiving optimization problems that hill-climbing

Figure 2. The flowchart of a standard EP.


techniques fail to solve (Hassoun 1995). However, EP could overcome the weakness in its performance when an adaptive mutation operator was used. In another attempt to solve the same problem, with a sort of adaptive EP, we used a mutation random variable taken from a normal distribution with zero mean and a standard deviation related linearly to the fitness. The scaling factor that related fitness to standard deviation was around 30. The EP strings converged after 12 iterations to an average fitness of around 0.99, showing an improvement in performance.
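One plausible reading of this adaptive mutation is sketched below. The text states only that the standard deviation is linearly related to fitness with a scaling factor of about 30; the exact mapping used here (the step size shrinking as fitness approaches its maximum of 1.0) is our assumption.

```python
import random

def adaptive_mutate(parent, fit, k=30.0):
    """Adaptive-EP-style mutation (illustrative sketch): the Gaussian
    step's standard deviation is a linear function of fitness, chosen
    here so that mutation becomes finer as fitness approaches 1.0."""
    sigma = k * (1.0 - fit)                       # assumed linear mapping
    return [x + random.gauss(0.0, sigma) for x in parent]
```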

To further test the two algorithms, we increased the number of variables of the function to be maximized. We used six variables (x, y, z, w, h, and k) instead of x and y only. Although the function is symmetric in the six variables, it is still a challenging problem to solve. We initialized the GA with the same crossover and mutation probabilities, the same initial population size, and the same generation gap, but a different chromosome length: we used 180 bits for each chromosome. The GA did not converge even after more than 1000 generations. It appeared to be trapped in some neighborhood of the search space from which it could not

Figure 3. The surface of the two-variable function f(x, y).


escape. We repeated the process, but this time with separate crossover points for each sub-string, in an attempt to pump more possible solutions into the search operation (i.e., we used multiple crossover points). The goal was to give all variables (genes) a chance to compete independently. The performance of the GA improved noticeably. The average fitness of the population converged to around 0.99 after 500 generations.
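A minimal sketch of this per-sub-string crossover follows. The bit-string representation and the helper's name are ours; the idea it demonstrates is the one described above: one independent crossover point inside each variable's sub-string.

```python
import random

def multipoint_crossover(a, b, n_vars):
    """Crossover with a separate crossover point inside each variable's
    sub-string, so every gene competes independently. The chromosome is
    a bit string made of n_vars equal-length sub-strings."""
    assert len(a) == len(b) and len(a) % n_vars == 0
    seg = len(a) // n_vars
    child1, child2 = [], []
    for v in range(n_vars):
        s, e = v * seg, (v + 1) * seg
        cut = random.randint(s, e)           # one point per sub-string
        child1.append(a[s:cut] + b[cut:e])
        child2.append(b[s:cut] + a[cut:e])
    return "".join(child1), "".join(child2)
```

For the six-variable experiment this would be called with n_vars=6 on 180-bit parents, i.e., one cut inside each 30-bit sub-string.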

On the other hand, EP was tested on the same six-variable function; we used the modified version with adaptive mutation. It took EP around 3000 iterations to get to a fitness of around 0.87. However, we increased the adaptability of the EP by using an adaptive scaling factor for the standard deviation of the normal distribution from which mutation is taken. We called the EP in that case adaptive-adaptive EP. The scaling factor (k) was around 200 when fitness was less than 0.1, 10 when fitness was between 0.1 and 0.6, and 5 when fitness was larger than 0.6. This modification resulted in some improvement, as it reached an average fitness of 0.934 in 3000 iterations (see Table 1). Figure 4 shows the average fitness of GA and EP populations versus the number of generations when optimizing the six-variable function, with a single crossover point GA, a multiple crossover point GA, adaptive EP, and adaptive-adaptive EP.
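The fitness-dependent schedule for the scaling factor k translates directly into code (the behavior at the exact boundary values 0.1 and 0.6 is not specified in the text; the choice below is ours):

```python
def scale_factor(fit):
    """The adaptive-adaptive EP schedule for the scaling factor k:
    coarse mutation steps while fitness is poor, fine steps near the
    optimum."""
    if fit < 0.1:
        return 200.0
    elif fit < 0.6:
        return 10.0
    return 5.0
```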

Figure 4. Fitness versus iterations when optimizing the six-variable function.


GA and EP in Machine Learning

Another application of the GA is rule extraction for machine learning systems (Wilson 1987; Abu Zitar and Hassoun 1995). One of the machine learning paradigms that uses the GA is classifier systems (Goldberg 1989). The GA has the ability to reproduce new rules using bits and pieces of selected good rules. Moreover, adding new operators like insertion and conditional mating may enhance GA performance and add richness to the solution space. REGAR, Rule Extraction with Genetic Assisted Reinforcement, is a genetic-based machine learning system invented by Abu Zitar (1993). It has many applications, especially in nonlinear control systems (Abu Zitar and Hassoun 1993a). REGAR consists of detectors, a classifiers pool (rules), a rule evolution mechanism (GA), a credit assignment mechanism, and an effector. Figure 5 shows the

Table 1. Comparison of the performance of GA and EP in solving the optimization problem described in section 2

                                 Number of convergence iterations (max. fitness)
Algorithm                        Two variables       Six variables
Standard GA (SGA)                4 (0.99)            divergence
Standard EP (SEP)                42 (0.99)           divergence
Adaptive EP (AEP)                not tried           5000 (0.87)
Adaptive-Adaptive EP (AAEP)      not tried           3000 (0.934)
Multi crossover GA (MCGA)        not tried           500 (0.99)

Figure 5. REGAR’s system architecture.


architecture of REGAR. We focus here particularly on the rule evolution mechanism, which is the GA, and show how it can be tuned, loaded, and utilized efficiently. Then we replace the GA with EP and examine the cost and performance differences when REGAR is tested on some application. The detectors in REGAR have the job of reading an analog input signal and converting it into a binary message that is passed to a window in the classifiers pool. The classifier is a simple binary condition-action rule with some initial strength; the condition is usually matched with the state variables of the input, and the action is consequently converted into the output of the machine learning system. The output is passed to the environment, and the environment reacts correspondingly, moving from one state to another. The credit assignment algorithm modifies the strength of each classifier according to some reward/penalty mechanism. If the state of the environment is getting closer to the required target, the classifier(s) responsible for the system output is rewarded; otherwise it is punished. Reward/penalty is done by increasing/decreasing the strength of classifiers by some calculated factor (Abu Zitar and Hassoun 1993b). A preset criterion reflects the measure of how close the environment state is to the desired goal state.

In REGAR, the GA has the following characteristics:

1. It is multiple crossover, since each variable in the classifier has a single crossover point.

2. It uses conditional marriage (Booker 1985), since only classifiers with similar actions are allowed to mate.

3. It uses an insertion operator. This operator works at the beginning of every generation; it removes a percentage of the lowest-strength classifiers from the pool and inserts new classifiers with conditions similar to the environment message posted at the window. It attaches random actions to them and gives them fitness equal to the average fitness of the classifier population.

4. A GA gap is used. It is the time interval between two consecutive calls to the GA.
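The insertion operator described in item 3 might be sketched as follows. The dictionary representation of a classifier, the 10% replacement fraction, and the action length are illustrative assumptions; only the mechanism (drop the weakest rules, insert message-matching rules with random actions and average fitness) is taken from the text.

```python
import random

def insertion(classifiers, message, frac=0.1, n_action_bits=3):
    """Insertion operator sketch: remove a fraction of the
    lowest-strength classifiers, then insert new classifiers whose
    condition matches the posted environment message, with random
    actions and strength equal to the population average."""
    classifiers.sort(key=lambda c: c["strength"])      # weakest first
    n_new = max(1, int(frac * len(classifiers)))
    avg = sum(c["strength"] for c in classifiers) / len(classifiers)
    survivors = classifiers[n_new:]                    # drop weakest rules
    for _ in range(n_new):
        survivors.append({
            "condition": message,                      # copy the message
            "action": "".join(random.choice("01") for _ in range(n_action_bits)),
            "strength": avg,                           # population average
        })
    return survivors
```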

When using the EP instead of the GA, the major difference was in the structure of the classifiers used. The structure of standard EP consists of real values; the detector's job is just to post the input environment state variables on the classifiers pool window. The classifier itself consists of a condition made of a sequence of real variables, and an action that is also


made of a sequence of real variables. For a fair comparison with the GA, mutation is taken from a uniform distribution, i.e., it is non-adaptive. An insertion operator working on real-valued classifiers is used. The effector's job here is also minimized, as it only delivers output to the environment. Finally, an EP gap is used similar to the one used with the GA. Our benchmark application was the trailer truck problem shown in Figure 6 (Widrow and Nguyen 1989). This problem has four inputs and one output. It is a challenging nonlinear control problem that has no analytical solution. The input state variables are the truck angle, the trailer angle, and the x, y locations of the truck (Figure 6). The goal is to back up the trailer truck from any given initial orientation to the loading dock. The backup speed is constant and the only output is the steering angle. The tricky thing about this problem is that any early decision of the controller during the backing-up process will affect the final state of the trailer truck system a few steps later. REGAR plays the role of the controller, as its output (the steering angle) affects the plant (the trailer truck). We start with the rule extraction stage, in which consecutive successful sequences of rules are saved in a retrieval file for the retrieval stage. A successful sequence of rules is a sequence used by REGAR that leads the trailer truck successfully to the loading dock. Then these rules are tested periodically to judge whether together they form a complete control surface for the problem. This periodic testing is done in retrieval mode. Figure 7 shows a flowchart of REGAR in learning, and Figure 8 shows REGAR in retrieval. In learning, the teamwork of the credit assignment algorithm and the rule evolution unit (GA/EP) will eventually result in successful sequences of rules (Sutton 1988).

Figure 6. Trailer-truck and parking lot.


Learning was done first using the GA as the evolution unit; 100 random initializations were generated, 67 of which ended up with successful sequences, while the rest failed by exceeding the maximum allowed number of iterations for every learning phase. Those initializations resulted in around 500 micro rules. Those rules were used in another, different 100 initializations for retrieval (testing); 90 of them resulted in a successful regulator control, and 10 failed. Those results were the best of tens of simulations in learning and retrieval in which different GA and credit assignment parameters were interactively optimized. On the other hand,

Figure 7. REGAR operation flowchart.


the learning process was repeated, replacing the GA by EP with a mutation operator taken from a uniform distribution. We used 100 random initializations; only 35 succeeded and 65 failed by exceeding the maximum allowed number of iterations for every learning phase. In retrieval, only 55 attempts succeeded out of 100 initializations. The total number of extracted rules was 200 (see Table 2 for a performance comparison).

MATHEMATICAL NOTATIONS

A classifier in REGAR is represented by a 3-ply C_j that has the following form

C_j = [s^j, y^j, u^j]    (1)

where s^j in R is the strength, y^j in {0, 1}^n is the condition, and u^j in {0, 1}^m is the action of the jth classifier, respectively.

Table 2. Comparison of the performance of GA and EP in machine learning for the trailer-truck application

Algorithm    Successful learning     Number of          Successful retrieval
             initializations         extracted rules    initializations
SEP          35                      200                53
MCGA         67                      500                90

Figure 8. REGAR implements a feedback controller during retrieval.


During learning and retrieval, REGAR interacts with the environment (plant), generating three sequences, S_x^j, S_u^j, and S_c^j:

S_x^j = {x^j(t_o^j), x^j(t_o^j + 1), ..., x^j(t_f^j)}
S_u^j = {u^j(t_o^j), u^j(t_o^j + 1), ..., u^j(t_f^j)}
S_c^j = {c^j(t_o^j), c^j(t_o^j + 1), ..., c^j(t_f^j)}

where j is the index of the current sequence of active classifiers, t_o is the time step at which the first classifier in sequence S_c^j released its action, t_f is the time step at which the last classifier in sequence j released its action, u(t) is the control signal supplied by the active classifier, x(t) are the environment state variables, and c(t) is the classifier selected at time t.

The fitness function f^i is evaluated at the end of every classifier sequence:

f^i = f(x(t^i), d(t^i))    (2)

where i is the active classifier index and d(t^i) is the desired goal state. In general,

f^i = 1 - [ sum_n sum_{k=t_o^i}^{t_f^i} b_n [e_n(k)]^2 ] / [ (t_f^i - t_o^i) sum_n [e_n^max]^2 ]    (3)

where e_n(k) = x_n(k) - d_n(k) is the calculated error, k is the index of the time step, n is the state-variable index, and b_n are positive weighting constants. The reward term is given by

dR^i = [ sum_{j in M} c_share * s^j ] / (t_f^i - t_o^i)    (4)

where M is the group of classifiers not in the active sequence, s^j is the strength of a classifier in the M group, and c_share is the percentage of strength each classifier from M pays to reward the classifiers in S_c^i. Finally, the penalty term is given by

dP^i = c_share * s^i    (5)

where s^i is the strength of the respective classifier in the sequence S_c^i.
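Equation (3) can be checked with a direct transcription. The sequence-as-list representation is ours; note that, as printed, the weights b_n appear only in the numerator.

```python
def sequence_fitness(x_seq, d_seq, b, e_max):
    """Equation (3): 1 minus the weighted, normalized squared tracking
    error accumulated over a classifier sequence.
    x_seq[k][n], d_seq[k][n]: state / goal at time step k, variable n;
    b[n]: positive weighting constants; e_max[n]: maximum error bounds."""
    steps = len(x_seq)                       # t_f - t_o
    n_vars = len(b)
    num = sum(b[n] * (x_seq[k][n] - d_seq[k][n]) ** 2
              for k in range(steps) for n in range(n_vars))
    den = steps * sum(em ** 2 for em in e_max)
    return 1.0 - num / den
```

Perfect tracking (x = d at every step) gives f = 1.0; a constant error equal to e_max with unit weights gives f = 0.0.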


A unified general model for both the GA and EP acting on a finite population can be represented by a Markovian transition matrix Q_ij. Q_ij is an N x N matrix; each element of Q_ij is the probability that population P_j will be produced from population P_i under standard GA or EP. The most important difference in this unified model is that the GA uses a finite population working on a finite search space (the space of discrete binary strings), while EP uses a finite population with an infinite search space (the space of real-numbered strings). To overcome this problem in our model, we divided the EP search space into a tiny, discrete, finite number of neighborhoods (y_i). In that sense, any string in an EP population is referred to by its unique neighborhood label, while in GA a string is referred to by its own label. If we refer to a string as Z_{k,j}, it means the string that has the kth pattern (or is from the kth neighborhood for EP) and forms part of population j. If the finite population has strings of length l, then there are 2^l possible patterns for every selection. If we have n selection-and-recombination steps, we have the following expression for all possible ways of forming a population P_j:

C(n, Z_{0,j}) C(n - Z_{0,j}, Z_{1,j}) ... C(n - Z_{0,j} - Z_{1,j} - ... - Z_{2^l - 2, j}, Z_{2^l - 1, j}) = n! / (Z_{0,j}! Z_{1,j}! ... Z_{2^l - 1, j}!)    (6)

where C(a, b) denotes the binomial coefficient "a choose b".

The probability that the correct number of occurrences of each string y (in population P_j) is produced (from population P_i) is

prod_{y=0}^{2^l - 1} [p_i(y)]^{Z_{y,j}}    (7)

The probability that population P_j is produced from population P_i is then the multinomial distribution

Q_{i,j} = [ n! / (Z_{0,j}! Z_{1,j}! ... Z_{2^l - 1, j}!) ] prod_{y=0}^{2^l - 1} [p_i(y)]^{Z_{y,j}} = n! prod_{y=0}^{2^l - 1} [p_i(y)]^{Z_{y,j}} / Z_{y,j}!    (8)


Here p_i(y) is the expected proportion of string y in the population produced from P_i. This value depends on the fitness of the string and on the average fitness of the whole population; for details see Nix and Vose (1991). This Q_ij matrix gives an exact model of both standard GA and standard EP acting on a finite population.
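Equation (8) is a standard multinomial probability and can be computed directly. The function name and argument layout below are ours; p[y] plays the role of p_i(y) and z[y] the role of Z_{y,j}.

```python
from math import factorial, prod

def transition_prob(p, z):
    """Equation (8): probability that population j, described by the
    string counts z[y], is produced from population i, given the
    expected proportions p[y] = p_i(y). n = sum(z) is the population
    size."""
    n = sum(z)
    coeff = factorial(n)                 # multinomial coefficient n! / prod z[y]!
    for zy in z:
        coeff //= factorial(zy)
    return coeff * prod(py ** zy for py, zy in zip(p, z))
```

Summing transition_prob over all count vectors z with sum(z) = n yields 1, as a row of a Markov transition matrix must.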

CONCLUSIONS AND DISCUSSIONS

The GA and the EP showed great ability to optimize multivariable functions with an infinite number of local optima. GA and EP, however, need proper tuning and careful selection of parameters (Davis 1989). By making the mutation adaptive in EP, and by converting crossover to multiple-point, the performance greatly improved. We set a limit of 1000 generations with a fitness of at least 0.5 to judge whether performance was acceptable. In all our simulations, the adaptive parameters used with the EP helped in tuning mutation properly. Raising the degree of adaptation of the EP resulted in even better performance, as shown in Figure 4. Keeping the crossover and mutation probabilities constant, the GA, on the other hand, could not overcome the increase in dimensions for the same function until we used multiple crossover points. When optimizing the two-variable function, the performance of the GA and the EP was nearly identical, with a slight advantage for the GA. After using adaptive mutation, the performance of the EP was nearly identical to that of the GA. When optimizing the six-variable function, both standard GA and EP failed within the allowed limit of iterations. Only by using multiple crossover points for the GA and adaptive or adaptive-adaptive EP were our criteria met. However, the GA in its best case showed faster convergence toward the global solution, and even steadier performance once it reached it. The ability of the GA to discretize the search space, by using a binary structure that is mapped to quantized real values, provides a better distribution of samples of the search space. Moreover, crossover itself is an excellent mechanism for searching the regions that lie within some limited Hamming distance of the selected strings. Mutation, the radical operator, can take the search to any point in the available space. Multiple crossover points have the effect of providing independent competition between the opposite sub-strings in selected chromosomes. In EP, if mutation is taken from a uniform distribution, or even from a normal distribution, then the algorithm will act like a sort of random search, and mutation will start to have


destructive effects in most of the cases. As we have shown, mutation was

turned into a constructive tool when ®tness was linearly used in con-

trolling the standard deviation of the normal random distribution.

Wolpert and Macready (1997), however, in their "No Free Lunch" (NFL) theorem, prove that crossover is not more powerful than mutation. The NFL theorem states that all algorithms perform the same, according to any performance measure, when averaged over all possible cost functions. Therefore, there may be another evolutionary algorithm that uses mutation only, or even uses neither mutation nor crossover, and outperforms our version(s) of the GA. The EP, for example, may outperform the GA for different cost functions. Our simulations, however, are based on the cost functions presented in equation (1) and equation (3).

All the results and conclusions we present are based on those cost functions and the pre-described algorithms. The ultimate goal of both crossover and mutation is to provide diversity in the population of strings without sabotaging previously gained knowledge. Diversity is the fuel of any search mechanism. Mutation has no limits on what it can alter; it can take the search to any point in the space, and it may be enough to fuel any search engine if it is used properly in an adaptive or annealing fashion. Crossover, on the other hand, is much more conservative than mutation: it generates offspring that carry the genetics of their parents. Crossover is a major part of the operation of living organisms; if we are going to simulate evolution and recombination, then we cannot ignore it. Spears and De Jong (1998) indicated that more disruptive crossover operators achieve higher levels of construction, which led to an NFL theorem for crossover operators with respect to survivability and construction. More disruptive mutation rates, on the other hand, yield lower levels of construction; thus, there is no general NFL for mutation.
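The multiple-point crossover and the binary-to-real mapping discussed earlier can be sketched as follows; the string length, the number of cut points, and the variable range in this example are illustrative assumptions:

```python
import random

def multipoint_crossover(parent_a, parent_b, n_points=3):
    """Exchange alternating segments between two bit strings.

    Cut points are drawn at random; n_points = 3 is an illustrative choice.
    """
    points = sorted(random.sample(range(1, len(parent_a)), n_points))
    child_a, child_b = list(parent_a), list(parent_b)
    swap, prev = False, 0
    for p in points + [len(parent_a)]:
        if swap:  # swap every other segment between the two children
            child_a[prev:p], child_b[prev:p] = child_b[prev:p], child_a[prev:p]
        swap, prev = not swap, p
    return "".join(child_a), "".join(child_b)

def decode(bits, lo, hi):
    """Map a binary string to a quantized real value in [lo, hi]."""
    levels = 2 ** len(bits) - 1
    return lo + int(bits, 2) * (hi - lo) / levels
```

Each cut point starts a new segment that competes independently, and the decoder spreads the 2^n bit patterns evenly over the variable's range.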

In machine learning, the GA with multiple crossover points was used first. The GA was called periodically to evolve the existing classifiers and to replace bad rules with good ones. Conditional marriage is used to allow distinct "species" to appear and to prevent excessive diversity in the population. A degree of "healthy" diversity is always kept in the population by the mutation and insertion operators. The insertion operator was essential for providing classifiers with conditions similar to the environment message, thereby significantly reducing the rule-mismatch problem that faces most rule-based systems. In machine learning systems such as REGAR, we have to be careful when using the GA. The goal here is to extract rules, refine them, and breed them with the help of the GA. The GA is not used to dig into the search space looking for some global point; that is why we used the GA gap, which allows us to invoke the GA periodically as needed. Figure 9 illustrates how the GA works on classifiers to converge to dominant classifiers in separate regions of the search space. The dominant classifier is the one with the highest strength among the classifiers of its neighborhood. For the sake of comparison, we replaced the GA with EP, using mutation drawn from a uniform distribution and a similar EP gap, and repeated the whole set of simulations, reaching the aforementioned result. The drawback of EP here is related to the fact that REGAR uses qualitative information to evaluate the performance of its classifiers, and it performs credit assignment according to this measure. There is no quantitative measure, such as error or fitness, that can be used to build an adaptive mutation mechanism. Even if we created such a measure, it would mean an additional building block in REGAR used only by EP. Moreover, REGAR would lose its merit of not being a supervised learning system, consuming only little pay-off information.

218 R. ABU-ZITAR AND A.M. AL-FAHED NUSEIRAT

During retrieval, REGAR was used as a closed-loop controller, as shown in Figure 9; retrieval simply consists of reading the input, matching, selecting, and firing the action, again and again. The GA here outperformed the EP in the quantity of good rules it could extract for the same number of initializations. This is reflected in the quality of the retrieval trajectories

Figure 9. GA applied to the population during rules generation.


shown in Figures 10 and 11. The trajectories of some successful back-ups are smoother for rules extracted by the GA than for those extracted by the EP. This is due to the higher number of rules available in the rule base of the GA controller; the control surface tends to be more continuous and, hence, smoother. On the other hand, if we describe the differences between the computational complexity of the two algorithms, the EP is definitely less complex. The GA requires a larger number of computation steps and more processing time during the encoding, decoding, crossover, and even mutation operations. However, performance is the issue for our machine learning system, especially with powerful hardware tools available. Even if we left both algorithms to run forever on any available powerful machine, once convergence is reached no significant improvement is usually expected. Any rotation of the function under optimization requires, at least partially, rebuilding the whole structure of the chromosome population and repeating the optimization process. This operation was tested on both the GA and the EP, and it holds for both.

Figure 10(a). Backing up sample trajectory with GA.

Figure 10(b). Backing up sample trajectory with EP.
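The retrieval cycle described earlier (read the input, match, select, fire) can be sketched as follows; the "#" wildcard condition format and the numeric strength field are our illustrative assumptions about the classifier representation, not REGAR's exact data structures:

```python
def matches(condition, message):
    """A condition matches when every non-wildcard bit equals the message bit."""
    return all(c in ("#", m) for c, m in zip(condition, message))

def retrieve(rules, message):
    """Return the action of the strongest matching classifier, or None.

    A None result corresponds to the rule-mismatch case: no classifier
    in the rule base covers the current environment message.
    """
    candidates = [r for r in rules if matches(r["cond"], message)]
    if not candidates:
        return None
    return max(candidates, key=lambda r: r["strength"])["action"]
```

In closed-loop operation this function is called on every control step, and the fired action becomes the controller's output.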

It is worth mentioning that choosing the optimum set of parameters for initializing the algorithms is not a straightforward job. There are no well-defined equations or functions that promptly give the proper parameter values. As with all heuristic techniques, some trial and error is needed before improvement is noticed and the criterion is reasonably met. Any previous experience in tuning the algorithms also helps. All the conclusions we present are based on hundreds of runs made before reaching the optimum set of results.
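The trial-and-error tuning described above can be organized as a simple random search over candidate parameter settings; the parameter names and ranges below are illustrative assumptions, not the values used in our runs:

```python
import random

def random_search(evaluate, n_trials=20, seed=0):
    """Sample parameter settings at random and keep the best one.

    `evaluate` runs the algorithm under a setting and returns a score
    (higher is better). Parameter names and ranges are illustrative.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "pop_size": rng.choice([50, 100, 200]),
            "crossover_rate": rng.uniform(0.5, 0.9),
            "mutation_rate": rng.uniform(0.001, 0.1),
        }
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Previous tuning experience enters by narrowing the sampled ranges around settings that worked before.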

REFERENCES

Abu Zitar, R. A. 1993. Machine learning with rule extraction by genetic assisted reinforcement (REGAR): Application to nonlinear control. Ph.D. diss., Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI.

Abu Zitar, R. A., and M. H. Hassoun. 1993a. Genetic and reinforcement-based rule extraction for regulator control. Proceedings of the 32nd IEEE Conference on Decision and Control, San Antonio, TX, 1258–1263.

Figure 11(a). Backing up sample trajectory with GA.

Figure 11(b). Backing up sample trajectory with EP.


Abu Zitar, R. A., and M. H. Hassoun. 1993b. Regulator control via genetic assisted reinforcement learning. Proceedings of the Fifth International Conference on Genetic Algorithms, Urbana-Champaign, IL, 251–260.

Abu Zitar, R. A., and M. H. Hassoun. 1995. Neurocontrollers trained with rules extracted by a genetic assisted reinforcement learning system. IEEE Trans. on Neural Networks 6(4):859–879.

Booker, L. B. 1985. Improving the performance of genetic algorithms in classifier systems. In J. Grefenstette (ed.), Proceedings of the First International Conference on Genetic Algorithms and their Applications, Pittsburgh, PA, 80–92.

Booker, L. B., D. E. Goldberg, and J. H. Holland. 1989. Classifier systems and genetic algorithms. Artif. Intell. 40:235–282.

Bremermann, H. J. 1962. Optimization through evolution and recombination. In M. C. Yovits, G. T. Jacobi, and G. D. Goldstein (eds.), Self-Organizing Systems 1962, Washington, DC: Spartan Books, 93–106.

Chin-Teng, L., and C. S. G. Lee. 1995. Neural Fuzzy Systems. Upper Saddle River, NJ: Prentice-Hall.

Davis, L. 1989. Adapting operator probabilities in genetic algorithms. In J. D. Schaffer (ed.), Proceedings of the Third International Conference on Genetic Algorithms, San Mateo, CA: Morgan Kaufmann, 60–69.

Davis, L. 1991. Handbook of Genetic Algorithms. New York: Van Nostrand Reinhold.

Fogel, D. B. 1991. System Identification through Simulated Evolution: A Machine Learning Approach to Modeling. Needham, MA: Ginn Press.

Fraser, A. S. 1957. Simulation of genetic systems by automatic digital computers. I. Introduction. Australian J. Biological Sciences 10:484–491.

Goldberg, D. E. 1989. Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA: Addison-Wesley.

Hassoun, M. H. 1995. Fundamentals of Artificial Neural Networks. Cambridge, MA: MIT Press.

Hinton, G. E., and S. J. Nowlan. 1997. How learning can guide evolution. Complex Systems 1:495–502.

Holland, J. H. 1975. Adaptation in Natural and Artificial Systems. Ann Arbor, MI: University of Michigan Press.

Holland, J. H. 1986. A mathematical framework for studying learning in classifier systems. Physica 22D:307–317.

Nix, A. E., and M. D. Vose. 1991. Modeling genetic algorithms with Markov chains. Annals of Mathematics and Artificial Intelligence 5:79–88.

Rumelhart, D. E., J. L. McClelland, and the PDP Research Group. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1. Cambridge, MA: MIT Press.

Spears, W., and K. De Jong. 1998. Dining with GAs: operator lunch theorem. Proc. of Foundations of Genetic Algorithms, Leiden, The Netherlands, 24–26. Springer-Verlag.


Sutton, R. S. 1988. Learning to predict by methods of temporal differences. Machine Learning 3:9–41.

Widrow, B., and D. Nguyen. 1989. The truck backer-upper: An example of self-learning in neural networks. Proceedings of the International Joint Conference on Neural Networks, Washington, DC, June 18–22, 357–363.

Wilson, S. W. 1987. Classifier systems and the ANIMAT problem. Machine Learning 2:199–228.

Wolpert, D., and W. Macready. 1997. No free lunch theorems for optimization. IEEE Trans. on Evolutionary Computation 1(1):67–82.
