
European Journal of Operational Research 168 (2006) 354–369

www.elsevier.com/locate/ejor

Continuous Optimization

An evolutionary programming algorithm for continuous global optimization

Yao Wen Yang *, Jian Feng Xu, Chee Kiong Soh 1

School of Civil and Environmental Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798, Singapore

Received 29 November 2002; accepted 12 May 2004. Available online 3 August 2004.

Abstract

Evolutionary computations are very effective at performing global search (in probability); however, the speed of convergence can be slow. This paper presents an evolutionary programming algorithm combined with macro-mutation (MM), local linear bisection search (LBS) and crossover operators for global optimization. The MM operator is designed to explore the whole search space and the LBS operator to exploit the neighborhood of the solution. Simulated annealing is adopted to prevent premature convergence. The performance of the proposed algorithm is assessed by numerical experiments on 12 benchmark problems. Combined with MM, the effectiveness of various local search operators is also studied.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Evolutionary computations; Evolutionary programming; Global optimization; Simulated annealing

0377-2217/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.ejor.2004.05.007
* Corresponding author. Tel.: +65 6790 4057; fax: +65 6791 0676.
E-mail addresses: [email protected] (Y.W. Yang), [email protected] (J.F. Xu), [email protected] (C.K. Soh).
1 Tel.: +65 6790 5306; fax: +65 6791 5093.

1. Introduction

Global optimization is an important issue in Operational Research. Over the last two decades, Evolutionary Computations (ECs), including Genetic Algorithms (GAs) [1,2], Genetic Programming (GP) [3], Evolutionary Programming (EP) [4,5] and Evolutionary Strategies (ES) [6,7], have developed into a group of powerful techniques for search and optimization [8-11]. ECs are best suited for generating approximate solutions to problems where no efficient algorithms for generating the optimal solution are known to exist. However, the speed of convergence could be slow for certain cases.

A large number of papers have been devoted to improving the search efficiency of ECs. Michalewicz [9] presented a GA with varying population size.


The concept of "age" of a chromosome was introduced to replace the concept of selection; thus the size of the population was influenced and varied at every stage of the process. A two-level ES was developed by Van Kemenade [12] to balance the global and local search. EP was originally proposed to solve prediction tasks, and was then extended to work on real-valued object variables based on normally distributed mutation [5]. Yao and Liu [13,14] and Yao et al. [15] showed that EP and ES could converge faster when using Cauchy mutation instead of Gaussian mutation. Based on these two operators, Chellapilla and Fogel [16] and Chellapilla [17] presented two new mutation operators, called the mean and the adaptive mean mutation operators (MMO and AMMO, respectively), consisting of linear combinations of Gaussian and Cauchy mutations for enhanced search and optimization. Inspired by neural network backpropagation learning, Choi and Oh [18] developed a new mutation operator to improve the performance of EP.

Some other researchers started from another aspect: they hybridized local search operators with ECs to improve their search efficiency. Hart [19] and Hart and Belew [20] studied the effect of local search operators in GAs. Chellapilla et al. [21] embedded the conjugate gradient method [22] and the Solis and Wets method [23] into EP to investigate the effectiveness of local search operators in EP. They concluded that "the local search operators can statistically and significantly enhance the performance of EP both in terms of the rate of convergence and the quality of the final solutions obtained".

A specially designed local search operator can find an optimum very quickly; however, the optimum found could be only a local one. For global optimization, the EP method should be carefully designed so that the algorithm is not trapped in any local optima. This paper presents an EP algorithm integrated with macro-mutation (MM), local linear bisection search (LBS) and crossover operators to find a balance between search effectiveness and efficiency. The integrated algorithm is called EP–MMLSX. The MM operator is used to explore the whole search space, while the LBS operator is used to exploit the neighborhood of the solution. Simulated annealing is employed to prevent the chromosomes in the early generations from being trapped in local optima.

2. Integrated EP–MMLSX algorithm

An optimization problem can generally be written as follows:

min f(x),  x = (x_1, ..., x_n) ∈ R^n,                                  (1)

where x ∈ S ∩ F, and S ⊆ R^n defines the search space, an n-dimensional space bounded by the parametric constraints

x_i^L ≤ x_i ≤ x_i^U,  i = 1, ..., n.                                   (2)

The feasible region F is defined by

F = { x ∈ R^n | g_j(x) ≤ 0, j = 1, ..., l;  h_k(x) = 0, k = 1, ..., m },   (3)

where g_j(x) and h_k(x) represent the inequality and equality constraints, respectively. The m equality constraints imply that m variables can be expressed in terms of the remaining variables and can thus be eliminated. Therefore, the original n-dimensional optimization problem with inequality and equality constraints is equivalent to an (n-m)-dimensional problem with only inequality constraints. The latter is considered hereafter.

The rest of this section gives a detailed description of the proposed integrated EP–MMLSX algorithm for continuous global optimization problems. First, the whole procedure of the algorithm is proposed. Then, the representation structure of the chromosome is introduced. After that, the two variation operators contributing to global and local search, respectively, as well as the crossover operator, are presented. The constraints are handled by applying a repairing scheme. Finally, the mechanism of selection is discussed.

2.1. Procedure of proposed algorithm

The procedure of the proposed algorithm is similar to that of other evolutionary computation algorithms, except that some new operators are introduced. It first randomly generates a pool of possible solutions. Each parent chromosome is assigned a random number uniformly distributed from 0 to 1. Depending on which range this number falls in, i.e., [0, pL], (pL, pL+pC) or [pL+pC, 1], the LBS, crossover or MM operator, respectively, is applied to the parent to generate a new chromosome, where pL and pC are the probabilities of the LBS and crossover operators. After this, tournament selection is performed on all parent and offspring chromosomes to form a new generation. The flowchart of the proposed algorithm is shown in Fig. 1.

Fig. 1. Flowchart of the proposed EP–MMLSX algorithm (initialization by applying MM to a randomly selected seed to generate a pool of feasible solutions; variation by the MM, LBS and crossover operators; fitness evaluation; tournament selection against opponents; termination check).
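To make the operator dispatch in this procedure concrete, the following Python sketch (our illustration, not the authors' code) forms one generation; the callables lbs, crossover, mm and select are assumptions standing in for the operators and the tournament selection described in Sections 2.3–2.5.

import random

def one_generation(pop, lbs, crossover, mm, select, p_L=0.2, p_C=0.2, q=10):
    # Each parent is varied by exactly one operator, chosen by a uniform
    # random number: [0, p_L] -> LBS, (p_L, p_L + p_C) -> crossover,
    # [p_L + p_C, 1] -> macro-mutation.
    offspring = []
    for parent in pop:
        r = random.random()
        if r <= p_L:
            offspring.append(lbs(parent))
        elif r < p_L + p_C:
            mate = random.choice(pop)
            offspring.extend(crossover(parent, mate))
        else:
            offspring.append(mm(parent))
    # Parents and offspring then compete in q-opponent tournament selection.
    return select(pop + offspring, len(pop), q)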

2.2. Representation structure

In the traditional EP method, each chromosome contains a vector of the objective variables x and a vector of strategy parameters. We also adopt such a representation structure in this paper. However, in our proposed method, the strategy parameter represents the mutation step instead of the deviation of a certain distribution as in traditional EP; we use s for identification. Thus, the chromosome is represented as (x, s) in our proposed algorithm.
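As a minimal illustration (an assumption of this write-up, not the authors' data structure), the (x, s) pair can be held in a small container such as:

from dataclasses import dataclass
import numpy as np

@dataclass
class Chromosome:
    x: np.ndarray  # object variables x_1, ..., x_{n-m}
    s: np.ndarray  # per-variable mutation steps s_1, ..., s_{n-m}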

2.3. Variation operators

Three kinds of variation operators, namely the local linear bisection search operator, the macro-mutation operator and the crossover operator, are employed.

2.3.1. Local linear bisection search (LBS) operator

The objective of LBS is to exploit the neighborhood of the parent chromosome. The LBS operator is designed as follows.

For the ith variable in the chromosome, a variation x'_i = x_i ± s_i is applied, where the sign is selected according to the gradient g_i. For example, for a maximization problem, if g_i is greater than zero, x'_i = x_i + s_i; otherwise, the negative sign is adopted. If the mutated chromosome is not better than its parent, s_i is reduced to s_i/2, until either a better solution is found or the mutation step s_i becomes less than a preset small number which depends on the expected accuracy. The mutation step is adjusted as shown in Fig. 2(a) for a 1-dimensional maximization problem.

Fig. 2. Mechanism of the LBS operator: (a) a 1-dimensional maximization problem: by adjusting the mutation step from s_i to s'_i, an offspring with better fitness is generated; (b) a 2-dimensional maximization problem: the individual climbs in the x1 and x2 directions continuously to approach the peak of the fitness landscape.

The LBS operator is applied to each variable in the chromosome. The process of local bisection search is similar to hill-climbing, as illustrated in Fig. 2(b) for a 2-dimensional problem. The search starts in the x1 direction to find the ridge of the fitness landscape, and then continues to "climb" in the x2 direction from the new point. This process iterates over all directions. It is expected that a point closer to a local optimum can be found by LBS. The bisection method is employed to adjust the mutation step because it requires no specific knowledge about the fitness landscape; thus, the LBS operator is a general method that can be applied to any optimization problem.

The first derivatives of the fitness function are needed for the LBS operator. However, in most cases the gradient g_i is not available. In our program, g_i is calculated by an approximate numerical method, i.e., g_i = (f(x + Δx_i) - f(x)) / Δx_i, where Δx_i is set as x_i · ε for x_i ≠ 0, or ε for x_i = 0, and ε is a small positive value.

The LBS operator can help the algorithm find a local optimum very quickly. This local optimum is not necessarily the global one. However, if a solution near the global optimum is found, the employment of LBS can significantly accelerate the convergence of the algorithm to the global optimum.
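A minimal sketch of the LBS operator is given below, assuming a single pass over the variables, a forward-difference gradient estimate as described above, and parameter names (eps, s_min) of our own choosing.

import numpy as np

def lbs(f, x, s, eps=1e-6, s_min=1e-8, maximize=True):
    """One pass of local linear bisection search over all variables (sketch)."""
    x = np.array(x, dtype=float)
    s = np.array(s, dtype=float)
    best = f(x)
    for i in range(len(x)):
        # approximate gradient g_i with a forward difference
        dx = x[i] * eps if x[i] != 0.0 else eps
        probe = x.copy()
        probe[i] += dx
        g = (f(probe) - best) / dx
        # move uphill for maximization, downhill for minimization
        direction = 1.0 if (g > 0) == maximize else -1.0
        step = s[i]
        while step >= s_min:
            trial = x.copy()
            trial[i] += direction * step
            val = f(trial)
            improved = val > best if maximize else val < best
            if improved:
                x, best = trial, val   # accept and go to the next variable
                break
            step /= 2.0                # bisect the mutation step and retry
        s[i] = step
    return x, s, best

For instance, lbs(lambda v: -np.sum(v**2), [0.8, -0.5], [0.5, 0.5]) moves each coordinate toward the maximum of this concave quadratic while halving the step whenever a trial does not improve the fitness.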

2.3.2. Macro-mutation (MM) operator

As stated before, the employment of the LBS operator can accelerate the convergence speed. However, the risk of being trapped in a local optimum also increases. To overcome this problem, the MM operator is designed and employed in the algorithm. The objective of the MM operator is to explore the search space and to escape from local optimum traps. The mechanism of MM is defined as follows.

For a parent chromosome x = (x_1, x_2, ..., x_{n-m}), a move is added to generate an offspring. The move is randomly selected within the search space S. Thus move_i = U(x_i^L, x_i^U), where U(x_i^L, x_i^U) is a random number uniformly distributed between x_i^L and x_i^U, the lower and upper bounds of the ith variable, respectively. The new chromosome is

x' = (x_1 + U(x_1^L, x_1^U), x_2 + U(x_2^L, x_2^U), ..., x_{n-m} + U(x_{n-m}^L, x_{n-m}^U)),

where the variable x'_i = x_i + U(x_i^L, x_i^U) is uniformly distributed in [x_i + x_i^L, x_i^U] or [x_i^L, x_i + x_i^U], depending on whether x_i ≥ 0 or x_i < 0. It is observed that only a very weak connection exists between the offspring and the parent, apart from the upper and lower bounds of the variable. Hence, the algorithm has a high probability of escaping from a local optimum or moving away from a plateau.

Since the distance from the offspring chromosome to the global optimum is unknown, the mutation step s is set to its upper bound: s_i = x_i^U - x_i^L.
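A sketch of the MM operator under these definitions is shown below (the rng argument and the array handling are our assumptions); an offspring that falls outside the bounds is handed to the repairing scheme of Section 2.4.

import numpy as np

def macro_mutation(x, lower, upper, rng=None):
    """Macro-mutation (sketch): add a uniform move drawn from the whole
    parametric range and reset the mutation step to its upper bound."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    move = rng.uniform(lower, upper)   # move_i ~ U(x_i^L, x_i^U)
    x_new = x + move                   # only weakly related to the parent
    s_new = upper - lower              # s_i = x_i^U - x_i^L
    return x_new, s_new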


Fig. 3. The repairing scheme for crossover.

2.3.3. Crossover

The mechanism of mutation–selection in traditional EP without crossover has received considerable criticism from researchers working in the field of GAs [8, p. 106]. In our algorithm, an arithmetical crossover [9] is employed to explore the search space and to maintain the diversity of the population. Two offspring are generated from the parents as follows:

x_i^{1'} = a_i x_i^1 + (1 - a_i) x_i^2,   x_i^{2'} = (1 - a_i) x_i^1 + a_i x_i^2,
s_i^{1'} = a_i s_i^1 + (1 - a_i) s_i^2,   s_i^{2'} = (1 - a_i) s_i^1 + a_i s_i^2,        (4)

where the superscripts 1 and 2 represent the first and second parent/offspring, respectively; the prime (') stands for the offspring chromosome; and a_i is a random number uniformly distributed in [0, 1].
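Eq. (4) translates almost directly into code; the sketch below assumes each parent is a pair (x, s) of NumPy arrays, as in the representation of Section 2.2.

import numpy as np

def arithmetical_crossover(parent1, parent2, rng=None):
    """Arithmetical crossover of Eq. (4) (sketch): component-wise convex
    combinations of the parents' variables and mutation steps."""
    rng = rng or np.random.default_rng()
    x1, s1 = (np.asarray(v, dtype=float) for v in parent1)
    x2, s2 = (np.asarray(v, dtype=float) for v in parent2)
    a = rng.uniform(0.0, 1.0, size=x1.shape)   # a_i ~ U[0, 1]
    child1 = (a * x1 + (1 - a) * x2, a * s1 + (1 - a) * s2)
    child2 = ((1 - a) * x1 + a * x2, (1 - a) * s1 + a * s2)
    return child1, child2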

2.4. Handling constraints

The basic idea for handling the constraints is to keep all of the modified chromosomes in the feasible region F. Generally, F is connected or is a disjoint union of connected components. For an interior point x = (x_1, x_2, ..., x_{n-m}) in F, there exists an ε > 0 such that any point x' with x_i - ε < x'_i < x_i + ε, i = 1, 2, ..., n-m, lies in F. According to the mechanism of the MM, LBS and crossover operators, the offspring is generated by applying a move to the parent:

x'_i = x_i + move_i.                                                    (5)

In case the new chromosome is beyond F, a repairing scheme is employed: the move is decreased by multiplying it by a random number between 0 and 1, i.e., move = U(0, 1) · move. Within a limited time, max(move_i) can be reduced to less than ε and a feasible offspring can be generated. In case a feasible solution cannot be obtained after max(move_i) has been reduced to less than ε, this indicates that the parent point is located at the boundary of F and the direction of move_i points out of F. Then, the direction of move_i is flipped. With this method, a feasible offspring can be ensured. Fig. 3 illustrates how the repairing scheme works for crossover, where F includes two regions F1 and F2. For a mother and father from different regions, the offspring generated by crossover will probably be located in F1 (offspring 1) or F2 (offspring 2) or beyond F (offspring 3). The offspring beyond F (offspring 3) can be repaired to a feasible solution (offspring 3') by applying the repairing scheme a limited number of times.

The emphasis of the repairing scheme is to ensure that a feasible solution can be obtained within limited time. It could be slow if the shape of the feasible region is very irregular. However, since no fitness evaluation is required in the repairing scheme, the computing cost is not high. An alternative is to use a deterministic repairing method, which might be faster in some cases where the feasible region is defined by inequality constraints given as explicit functional dependences, but it is less general than the presented repairing scheme.
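The repairing scheme can be sketched as follows; the feasibility test is passed in as a callable, and the cap on the number of shrink steps is our assumption standing in for the ε threshold described above.

import numpy as np

def repair_move(x, move, is_feasible, max_shrinks=50, rng=None):
    """Repairing scheme (sketch): shrink the move by a random factor in
    (0, 1) until the offspring is feasible; if that fails, flip the
    direction of the move and repeat."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    move = np.asarray(move, dtype=float)
    for m in (move, -move):            # original direction, then flipped
        trial_move = m
        for _ in range(max_shrinks):
            if is_feasible(x + trial_move):
                return x + trial_move
            trial_move = rng.uniform(0.0, 1.0) * trial_move   # move = U(0, 1) * move
    return x                           # fall back to the feasible parent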

2.5. Selection

Selection is based on the performance of the chromosomes. A tournament selection is carried out in the proposed EP method to choose the survivors for the next generation. Each chromosome obtains a tournament "weight" by competition with q randomly selected opponents. Simulated annealing is employed to prevent the chromosomes in the early generations from being trapped in a local optimum. The tournament weight W_i of the ith chromosome is initially set to zero and increased based on the competition as follows [24]:

W_i = W_i + 1         if E_i is better than or equal to E_n,
W_i = W_i + P(ΔE)     if E_i is worse than E_n,                          (6)

where E_i is the fitness of the ith chromosome, E_n is the fitness of a randomly selected opponent, and P(ΔE) is the probability given by a Gibbs distribution:

P(ΔE) = exp( -|E_n - E_i| / (T_0 b^{n_g}) ),                             (7)

where T_0 is the initial temperature, set at T_0 = (E_max - E_min)/2 so that |E_n - E_i|/T_0 lies within [0, 2]; n_g is the current generation index; and b is the cooling parameter, which is less than 1 to obtain an effect similar to annealing.
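Eqs. (6) and (7) can be read as the following scoring routine (a sketch for a maximization problem; following Eq. (6) literally, a lost comparison contributes the acceptance probability itself to the weight).

import math

def tournament_weight(E_i, opponents, T0, b, n_g, maximize=True):
    """Annealed tournament weight of Eqs. (6) and (7) (sketch)."""
    W = 0.0
    T = T0 * b ** n_g                      # temperature decays with generation n_g
    for E_n in opponents:                  # q randomly selected opponents
        wins = E_i >= E_n if maximize else E_i <= E_n
        if wins:
            W += 1.0
        else:
            W += math.exp(-abs(E_n - E_i) / T)   # P(dE) from the Gibbs distribution
    return W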

3. Case study and overall assessment of EP–MMLSX

3.1. Test functions

Twelve benchmark problems are selected from Refs. [9,15–18,25] to assess the performance of the proposed algorithm, as listed in Table 1.

These 12 problems have been studied by other researchers. f1 and f2 have been studied by Soh and Dong [25] with their improved EP method (SDEP). f3–f6 have been studied in [9] using a GA with varying population size (GAVaPS) and a GA for numerical optimization of constrained problems (GENOCOP). Choi and Oh [18] have studied f7–f9 using self-adaptive EP (SAEP), accelerated EP (AEP), and a conventional EP with a mutation operator inspired by neural network backpropagation learning (CEP/BLO). f9–f12 have been studied by Yao et al. [15] using conventional EP (CEP) and fast EP (FEP). Chellapilla [17] has also studied functions f10–f12 using conventional EP with the Gaussian mutation operator (CEP/GMO), Cauchy mutation operator (CEP/CMO), mean mutation operator (CEP/MMO) and adaptive mean mutation operator (CEP/AMMO). f10 with a larger parameter bound has been studied by Chellapilla and Fogel [16].

3.2. Parameters adopted

There are several parameters to be pre-determined by the user. Following typical implementations [16,17], the numerical experiments were carried out using a population size of 50 and a tournament size q = 10. The probabilities for LBS and crossover were set as pL = 0.2 and pC = 0.2, respectively. For direct comparison, the program was terminated when the number of function evaluations reached the number reported in the corresponding literature.

The simulated annealing scheme adjusts the selection pressure in the early generations. The simulated annealing is expected to take effect in the first N generations, which means that after N generations P(ΔE) is small enough to be neglected. If q opponents are selected for competition, P(ΔE) = 1/q can be viewed as a small value. Since (E_n - E_i)/T_0 ∈ [0, 2], we use the average value of 1.0, and from Eq. (7) the cooling parameter b is calculated as b = (1/ln q)^{1/N}.
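The resulting value is easy to verify numerically; the helper below is our transcription of that formula.

import math

def cooling_parameter(q=10, N=30):
    """b chosen so that exp(-1 / b**N) = 1/q, i.e. b = (1 / ln q) ** (1 / N)."""
    return (1.0 / math.log(q)) ** (1.0 / N)

# For q = 10: cooling_parameter(10, 10) ~ 0.9200 and cooling_parameter(10, 30) ~ 0.9726,
# matching the values quoted below for N = 10 and N = 30.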

The value of the cooling parameter was determined by experiments. Fig. 4 illustrates the influence of the value of b on the performance of the EP–MMLSX algorithm, using f1 as an example. Different values of b were selected, i.e., 0, 0.91998, 0.95916, 0.97258, 0.97937, 0.98346 and 0.98994, corresponding to N = 0, 10, 20, 30, 40, 50 and 60, respectively. When N = 0, the simulated annealing is not activated in the algorithm. Fig. 4 shows that the algorithm has better performance when N is between 30 and 50, which is consistent with the recommendations of [24]. In this study, b = 0.97258 is selected as the cooling parameter.


Table 1
Twelve benchmark problems

f1: max f1(x) = Σ_{i=1}^{n} x_i sin(i π x_i)
    Constraints: Σ_{i=1}^{n} x_i^2 ≤ 100.0

f2: max f2(x) = (3x1 + x2 - 2x3 + 0.8)/(2x1 - x2 + x3) + (4x1 - 2x2 + x3)/(7x1 + 3x2 - x3)
    Constraints: x1 + x2 - x3 ≤ 1, -x1 + x2 - x3 ≤ -1, 12x1 + 5x2 + 12x3 ≤ 34.8,
    12x1 + 12x2 + 7x3 ≤ 29.1, -6x1 + x2 + x3 ≤ -4.1, 0 ≤ xi, i = 1, 2, 3

f3: max f3(x) = -x sin(10πx) + 1
    Constraints: -2.0 ≤ x ≤ 1.0

f4: max f4(x, y) = 0.5 + (sin^2 √(x^2 + y^2) - 0.5) / (1 + 0.001(x^2 + y^2))^2
    Constraints: -100 ≤ x, y ≤ 100

f5: min f5(x, y) = -10.5x1 - 7.5x2 - 3.5x3 - 2.5x4 - 1.5x5 - 10y - 0.5 Σ_{i=1}^{5} x_i^2
    Constraints: 6x1 + 3x2 + 3x3 + 2x4 + x5 ≤ 6.5, 10x1 + 10x3 + y ≤ 20, 0 ≤ xi ≤ 1, 0 ≤ y

f6: min f6(x) = Σ_{i=1}^{10} x_i (c_i + ln(x_i / Σ_{j=1}^{10} x_j)),
    where c1 = -6.089, c2 = -17.164, c3 = -34.054, c4 = -5.914, c5 = -24.721,
    c6 = -14.986, c7 = -24.100, c8 = -10.708, c9 = -26.662, c10 = -22.179
    Constraints: x1 + 2x2 + 2x3 + x6 + x10 = 2, x4 + 2x5 + x6 + x7 = 1,
    x3 + x7 + x8 + 2x9 + x10 = 1, xi ≥ 0.000001, i = 1, 2, ..., 10

f7: min f7(x) = Σ_{i=1}^{n} x_i^2
    Constraints: n = 3, -6 ≤ xi ≤ 6, i = 1, ..., n

f8: min f8(x) = 100(x1^2 - x2)^2 + (1 - x1)^2
    Constraints: -5.12 ≤ xi ≤ 5.12, i = 1, 2

f9: min f9(x) = [ 1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i - a_ij)^6) ]^{-1} - 0.9980038,
    where (a_1j) = (-32, -16, 0, 16, 32, -32, -16, ..., 0, 16, 32),
          (a_2j) = (-32, -32, -32, -32, -32, -16, -16, ..., 32, 32, 32)
    Constraints: -65 ≤ xi ≤ 65, i = 1, 2

f10: min f10(x) = -20 exp(-0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) - exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e
     Constraints: n = 30, -30 ≤ xi ≤ 30, i = 1, ..., n

f11: min f11(x) = 10n + Σ_{i=1}^{n} (x_i^2 - 10 cos(2πx_i))
     Constraints: n = 30, -5.12 ≤ xi ≤ 5.12, i = 1, ..., n

f12: min f12(x) = Σ_{i=1}^{n} x_i^2/4000 + 1 - Π_{i=1}^{n} cos(x_i/√i)
     Constraints: n = 30, -600 ≤ xi ≤ 600, i = 1, ..., n
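For reference, the three unconstrained 30-dimensional problems f10–f12 can be transcribed directly from Table 1; the code below is our transcription, assuming x is a NumPy array.

import numpy as np

def f10(x):
    """Ackley-type function f10: n = 30, -30 <= x_i <= 30."""
    n = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

def f11(x):
    """Rastrigin-type function f11: n = 30, -5.12 <= x_i <= 5.12."""
    return 10.0 * len(x) + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def f12(x):
    """Griewank-type function f12: n = 30, -600 <= x_i <= 600."""
    i = np.arange(1, len(x) + 1)
    return np.sum(x ** 2) / 4000.0 + 1.0 - np.prod(np.cos(x / np.sqrt(i)))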

Fig. 4. Effect of simulated annealing on the performance of EP–MMLSX on function f1: mean best fitness versus number of function evaluations for N = 0, 10, 20, 30, 40, 50 and 60.

3.3. Results and comparison

For test function f1, the best known function values for the cases where the dimension of the function is 3, 5, 7 and 9 have been reported in [25]. Table 2 shows a comparison between the best results obtained by EP–MMLSX and SDEP. From Table 2, we can see that EP–MMLSX finds a better function value; the relative increase is more than 5% when the dimension is 9.

For the other 11 test functions f2–f12, 20 independent trial runs of EP–MMLSX were conducted on each of them. The average results were compared with those reported in the corresponding previous works for the same number of function evaluations, which takes into account the computing cost of both the macro and the local search.

Soh and Dong [25] have also studied the test function f2. They reported that a run of 16,000 function evaluations with their improved EP method obtained the known global optimum (2.471429), and that the convergence was faster than all the compared methods. With EP–MMLSX, not only the best trial but all of the 20 runs obtained the global optimum within 10,000 function evaluations.

Michalewicz [9] studied f3 and f4 using GAVaPS. After 3000 function evaluations, the best values for f3 and f4, obtained from 20 independent runs, were found to be 2.841 and 0.970, respectively. With EP–MMLSX, not only the best results but also the average results of the 20 independent runs are better, namely 2.950 and 0.987, respectively.

Michalewicz [9] also studied f5 and f6 using GENOCOP, and reported that GENOCOP found the known global optimum (-213.0) of f5 in 70,000 function evaluations.


Table 2
Comparison between the best results for test function f1 obtained by EP–MMLSX and SDEP [25]

Dimension  Function     Best result  Best result    Relative      Solution
           evaluations  (SDEP)       (EP–MMLSX)     increase (%)
3          150,000      17.2445      17.2445        0.00          (-6.4955, 5.2498, -5.49982)
3          300,000      17.2445      17.2445        0.00          (-6.4955, 5.2498, -5.49982)
5          150,000      22.1569      22.3194        0.73          (-4.50937, -4.25292, 4.83412, -4.62556, 4.10045)
5          300,000      22.1579      22.3194        0.73          (-4.50937, -4.25292, 4.83412, -4.62556, 4.10045)
7          150,000      25.8743      26.3133        1.70          (-4.49526, 3.24768, -3.49835, 4.12449, -3.69964, 3.7496, 3.49978)
7          300,000      25.9472      26.3136        1.41          (-4.49468, 3.24789, -3.49865, 4.12424, -3.69978, 3.74986, 3.4999)
9          150,000      27.9138      29.6309        6.15          (2.53722, 3.2389, 2.8306, 4.12285, 3.6988, 3.41647, -2.64187, 3.56221, 3.61106)
9          300,000      28.027       29.6407        5.76          (2.53066, 3.24205, 2.8315, 4.12359, 3.69924, 3.41629, -2.64261, 3.56176, 3.61089)


In comparison, EP–MMLSX obtained the global optimum in fewer than 30,000 function evaluations in all of the 20 trials. For the test function f6, the global optimum is unknown. Michalewicz reported -47.760765 as the best solution ever known; the number of function evaluations was not given. EP–MMLSX found a point with a better function value of -47.7611. This point is x = (0.040671, 0.147737, 0.783142, 0.001414, 0.485246, 0.000693, 0.027401, 0.017947, 0.037316, 0.096878). All of the 20 independent runs of EP–MMLSX converged to this point within 10,000 function evaluations.

f7–f9 have been studied by Choi and Oh [18] using SAEP, AEP and CEP/BLO. A comparison between these methods and EP–MMLSX is shown in Table 3, where MBF signifies the mean best fitness and NFE the number of function evaluations.

Table 3
Comparison between the performance of EP–MMLSX and SAEP, AEP and CEP/BLO on test functions f7–f9

Test       SAEP [18]          AEP [18]           CEP/BLO [18]       EP–MMLSX
function   NFE      MBF       NFE      MBF       NFE      MBF       NFE      MBF
f7         4200     1E-10     6000     1E-10     1200     1E-10     4178     5.4E-11
f8         45,000   9E-5      45,000   5E-6      5850     1E-10     45,000   5.9E-4
f9         45,000   5E0       45,000   1E-5      2000     4E-8      13,954   3.8E-8

Note: NFE means the number of function evaluations. MBF means the mean best fitness.

It should be mentioned that the data from [18] were read from their figures rather than from a table, so there may be some minor errors, but this does not affect the overall assessment. For the test function f7, all four methods converged very quickly and their performances were comparable. For the test function f8, EP–MMLSX is as good as SAEP and AEP, but CEP/BLO converged faster than all of them. Both EP–MMLSX and CEP/BLO have a much better performance than SAEP and AEP on the test function f9.

CEP, FEP, CEP/GMO, CEP/CMO, CEP/MMO and CEP/AMMO were employed to study the test functions f9–f12 in [15–17]. Table 4 shows a comparison between these methods and EP–MMLSX on these four test functions. With the same number of function evaluations, EP–MMLSX statistically found better results than CEP with its various operators. Compared with FEP, the EP–MMLSX algorithm has better performance on f9 and f12 but worse performance on f10 and f11.


Table 4
Comparison between the performance of EP–MMLSX and CEP, FEP, CEP/GMO, CEP/CMO, CEP/MMO and CEP/AMMO on test functions f9–f12

                   f9                  f10                 f11                  f12
                   NFE      MBF        NFE      MBF        NFE      MBF         NFE      MBF
CEP [15]           10,000   0.66       150,000  9.2        500,000  89.0        200,000  8.6E-2
FEP [15]           10,000   0.22       150,000  1.8E-2     500,000  4.6E-2      200,000  1.6E-2
CEP/GMO [16]       N/A      N/A        150,000  14.9973    N/A      N/A         N/A      N/A
CEP/CMO [16]       N/A      N/A        150,000  6.9304     N/A      N/A         N/A      N/A
CEP/MMO [16]       N/A      N/A        150,000  5.3377     N/A      N/A         N/A      N/A
CEP/AMMO [16]      N/A      N/A        150,000  6.0053     N/A      N/A         N/A      N/A
CEP/GMO [17]       N/A      N/A        150,000  10.47      250,000  113.72      250,000  6.05
CEP/CMO [17]       N/A      N/A        150,000  4.52       250,000  55.82       250,000  7.08
CEP/MMO [17]       N/A      N/A        150,000  3.75       250,000  46.96       250,000  3.84
CEP/AMMO [17]      N/A      N/A        150,000  3.41       250,000  52.59       250,000  1.85
EP–MMLSX           13,954   3.8E-8     149,868  2.14125    249,955  21.3419     249,561  1.045E-2

Note: NFE means the number of function evaluations. MBF means the mean best fitness. N/A means not available.

In summary, the EP–MMLSX algorithm obtained a better value and/or converged faster to the global optimum than SDEP, GAVaPS and GENOCOP on the test functions f1–f6. It also performed better than SAEP and AEP on test functions f7–f9. Moreover, it statistically obtained better values than conventional EP with various mutation rules on test functions f10–f12, and it is also better than FEP for f9 and f12. However, the EP–MMLSX algorithm is not as good as FEP for f10 and f11, and not as efficient as CEP/BLO for f8. In general, the numerical experiments on the 12 benchmark problems show that the EP–MMLSX algorithm has high effectiveness and efficiency for the majority of the test functions.

4. Influence of different operators

As shown above, the EP–MMLSX algorithm is able to find the global optimum effectively and efficiently. However, the influence of the different operators, i.e., MM, LBS and crossover, on the performance of the algorithm is not yet clear. In this section, their effects are investigated by enabling only certain operators; e.g., EP–MM means that only the MM operator is enabled in the algorithm. Considering that the LBS operator is designed to speed up convergence to local optima, it is not reasonable to compare the results of EP–LBS (the algorithm with only the LBS operator) and EP–MM for problems with a large number of local optima. The effect of the LBS operator is therefore analyzed by comparing the results of EP–MM and EP–MMLS (both MM and LBS operators enabled). Similarly, the effect of crossover is investigated by comparing the results of EP–MMLS and EP–MMLSX.

Moreover, some other local search operators developed by previous researchers are used to replace the proposed LBS operator for certain problems. Thus, in this section, the effectiveness of the (1+1) ES [12], the Solis and Wets method [23] and the Davidon–Fletcher–Powell method [26,27] is also investigated in combination with the MM operator. The corresponding algorithms are denoted as EP–MMES, EP–MMSW and EP–MMDFP, respectively. A summary of the abbreviations for the various versions of the algorithm is listed in Table 5. The objective of the comparison is to give some guidelines for selecting suitable local search operators for certain optimization problems. The selected local search operators are briefly described as follows:

1. (1+1) ES: The offspring is generated from the parent by Gaussian mutation. The better of the parent and the offspring is selected to survive.

2. Solis and Wets method: A new point is generated by adding normally distributed zero-mean deviates to the current point. If the function value at the new point is worse than that at the current point, the algorithm examines the opposite point, generated by taking a step in the opposite direction. If neither of these steps generates a better point, a new point is generated in the next iteration.


Fig. 5. Performances of the six algorithms on test function f1 (mean best fitness vs. number of function evaluations).

Fig. 6. Performances of the six algorithms on test function f2 (mean best fitness vs. number of function evaluations).

Table 5
Summary of abbreviations of the algorithms

Abbreviation in text   In tables/figures   Description
EP–MM                  M                   The algorithm employs the MM operator only
EP–MMDFP               D                   The algorithm employs the MM and DFP operators
EP–MMES                E                   The algorithm employs the MM and (1+1) ES operators
EP–MMLS                L                   The algorithm employs the MM and LBS operators
EP–MMLSX               LX                  The algorithm employs the MM, LBS and crossover operators
EP–MMSW                S                   The algorithm employs the MM and SW operators



3. Davidon–Fletcher–Powell (DFP) gradient search method: As a quasi-Newton method, it has quadratic termination, needs only first derivatives, and is thus economical in function evaluations. Starting from the trial solution x_0 and the initial matrix A_0, it repeats the following steps (see the sketch after this list):

   1. s_k = -A_k g_k;  x_{k+1} = x_k + s_k;
   2. y_k = g_{k+1} - g_k;  z_k = A_k y_k;
   3. A_{k+1} = A_k + (s_k s_k^T)/(s_k^T y_k) - (z_k z_k^T)/(y_k^T z_k),

   where the subscript k denotes the iteration number and g is the gradient vector at point x.
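The three steps above translate into the following sketch of one DFP iteration (our illustration; f_grad is assumed to return the gradient vector, and no line search is added beyond the plain step x_{k+1} = x_k + s_k listed above).

import numpy as np

def dfp_step(f_grad, x, A):
    """One Davidon-Fletcher-Powell iteration (sketch of the listed steps)."""
    g = f_grad(x)
    s = -A @ g                        # step direction scaled by A_k
    x_new = x + s
    y = f_grad(x_new) - g             # change in gradient
    z = A @ y
    A_new = (A + np.outer(s, s) / (s @ y)
               - np.outer(z, z) / (y @ z))   # rank-two DFP update
    return x_new, A_new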

In EP–MMES and EP–MMSW, the initial value of the strategy parameter is set at one-tenth of the parametric bound. The matrix A in EP–MMDFP is initially set to the identity matrix. The probabilities of the local search operators are set at 0.2, and the local search runs for five iterations in all of the algorithms. For simplicity, f1 is studied with three dimensions only. Twenty independent runs are performed for each algorithm on each problem. The performances of the algorithms on the 12 test functions are illustrated in Figs. 5–16. The comparison is based on the number of function evaluations, so that the computing cost of the local search is also taken into account.

An unambiguous rank order based on the results is shown in Table 6. The rank is based on the mean best function value obtained by the algorithm at the end of the trial. For cases where this value is the same, the speed of convergence is then considered, and the algorithm with the higher convergence rate is given the higher rank.


Fig. 7. Performances of the six algorithms on test function f3.

Fig. 8. Performances of the six algorithms on test function f4.

Fig. 9. Performances of the six algorithms on test function f5.

Fig. 10. Performances of the six algorithms on test function f6.

Fig. 11. Performances of the six algorithms on test function f7.

Fig. 12. Performances of the six algorithms on test function f8.

Fig. 13. Performances of the six algorithms on test function f9.

Fig. 14. Performances of the six algorithms on test function f10.

Fig. 15. Performances of the six algorithms on test function f11.

Fig. 16. Performances of the six algorithms on test function f12.

For example, the entry "M, (E, L, S, LX), D" for the results of f1 indicates that the algorithm EP–MM generates solutions with the best mean function value, and that these values are unambiguously, statistically and significantly better than those generated by the other algorithms. The keyword "unambiguously" means that the results of EP–MM are statistically and significantly better in all comparisons with the other algorithms. The parentheses grouping E, L, S and LX indicate that the results of EP–MMES, EP–MMLS, EP–MMSW and EP–MMLSX are not all statistically and/or significantly different from one another.

The following observations are obtained from Table 6 and Figs. 5–16:

(1) Effect of the LBS operator: It is observed that EP–MM performs better than or as well as EP–MMLS for test functions f1–f3. This is probably attributable to the small search spaces of these test functions: the number of independent variables is at most 3, and the variables are constrained to [-10, 10], [0, 3] and [-2, 1] for the three problems, respectively. Since a small search space can be explored effectively by the MM operator alone, the gain in convergence acceleration from LBS cannot balance the cost of its extra fitness evaluations. Thus, introducing LBS into the algorithm decreases the search efficiency for these three functions.


Table 6
Unambiguous ranking of the six algorithms

Test      Dimension  Function     Characteristics                                              Unambiguous rank
function             evaluations
f1        3          150,000      Multimodal                                                   M, (E, L, S, LX), D
f2        3          14,000       Linear function with linear constraints                      M, (S, E), (LX, L), D
f3        1          1500         Multimodal                                                   (M, S, E, L, LX, D)
f4        2          15,000       Multimodal                                                   (LX, E), L, S, M, D
f5        6          50,000       Nonlinear function with linear constraints                   L, LX, M, (S, E), D
f6        10         100,000      Multimodal                                                   (D, L, LX), M, E, S
f7        3          30,000       Strictly convex                                              D, LX, L, S, E, M
f8        2          40,000       Global optimum lies inside a long, narrow,                   E, D, LX, L, M, S
                                  parabolic-shaped flat valley
f9        2          20,000       Multimodal, with multiple deep local minima                  E, LX, L, M, S, D
f10       30         100,000      Multimodal                                                   LX, S, L, M, E, D
f11       30         250,000      Multimodal                                                   (LX, L), (E, S), D, M
f12       30         250,000      A relatively large convex domain exists in the               D, LX, L, S, E, M
                                  neighborhood of the global optimum


However, the EP algorithm with the LBS operator enabled outperforms EP–MM for all of the other nine test functions. This demonstrates that the LBS operator can improve the convergence rate of the EP search, especially for problems with a large search space.

(2) Effect of crossover: The effect of crossover can be observed by comparing the performance of EP–MMLS and EP–MMLSX. In all cases except f5, the results of EP–MMLSX are better than or at least as good as those of EP–MMLS. Figs. 8, 11–14 and 16 clearly show that when EP–MMLS stagnates in the later generations, EP–MMLSX can continue to find better solutions. This indicates that crossover can prevent the evolution from being trapped in local optima. It is therefore recommended that crossover be employed in this EP algorithm.

(3) Some guidelines for the selection of local search operators: Different local search operators have different advantages for solving various problems. For example, the DFP method is more suitable for f6, f7 and f12. Both EP–MMLS and EP–MMLSX have performance similar to that of EP–MMDFP on f6, and the EP–MMDFP algorithm has the best performance on functions f7 and f12. It is observed that f7 is a spherical function which is strictly convex, and f12 has a relatively large convex domain [-(π/2)√i, (π/2)√i] in the neighborhood of the global optimum. On the other hand, EP–MMLS performs better than EP–MMES and EP–MMSW on functions f5, f6, f7, f11 and f12; these functions have at least six variables, except f7, which is a strictly convex problem. Statistically, EP–MMES and EP–MMSW outperform EP–MMLS on problems with no more than three variables, i.e., f2, f4, f8 and f9, but they are defeated by EP–MMLS on problems with more variables.

Therefore, from the above observations, it is recommended that DFP be selected for problems with a large convex domain in the neighborhood of the global optimum; ES and SW are more suitable for problems with fewer variables; and LBS is the best choice for problems with more variables and numerous local optima.

It is worth mentioning that the control parameters adopted in this paper are the same as or close to those in the previous studies, so that the comparison results are meaningful. Since the performance of an EP algorithm depends heavily on the values of the control parameters, if they were tuned specially for the proposed algorithm it could be argued that the performance improvement is attributable to the parameter adjustment. Thus, in this study, no parameter adjustment has been done.


It is observed that the proposed EP–MMLSX algorithm outperforms most of the other algorithms even with these unadjusted parameters. Thus, it is reasonable to expect that the performance of the algorithm could be further improved by adjusting the control parameters.

5. Conclusions

In this paper, an EP algorithm integrated with macro-mutation, local search and crossover operators is presented for solving continuous global optimization problems. Numerical experiments on 12 benchmark problems showed that the proposed method can find the global optimum effectively and efficiently; thus it could serve as an alternative search and optimization algorithm.

The influence of the operators on the overall performance of the algorithm was also investigated. The employment of local search operators can improve the search efficiency of EP for problems with a large search space. Crossover has also demonstrated its effectiveness by preventing the evolution from being trapped in local optima. Moreover, some other local search operators were investigated, and some practical guidelines for choosing a suitable local search operator are provided: DFP is suitable for problems with a large convex domain in the neighborhood of the global optimum; ES and SW show their advantage for problems with fewer variables; and LBS is recommended for problems with more variables and a large number of local optima.

References

[1] J.H. Holland, Outline for a logical theory of adaptive systems, Journal of the Association for Computing Machinery 3 (1962) 297–314.
[2] J.H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, MI, 1975.
[3] J.R. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, MA, 1992.
[4] L.J. Fogel, Toward inductive inference automata, in: Proceedings of the International Federation for Information Processing Congress, Munich, Germany, 1962, pp. 395–399.
[5] D.B. Fogel, Evolving Artificial Intelligence, Ph.D. Thesis, University of California, San Diego, 1992.
[6] I. Rechenberg, Cybernetic solution path of an experimental problem, Royal Aircraft Establishment, Library Translation No. 1122, Farnborough, Hants, UK, 1965.
[7] H.-P. Schwefel, Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik, Diplomarbeit, Technische Universität, Berlin, 1965.
[8] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, New York, 1989.
[9] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, third ed., Springer, Berlin, 1996.
[10] D.B. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, Piscataway, NJ, 1995.
[11] M. Gen, R. Cheng, Genetic Algorithms and Engineering Design, Wiley, New York, 1997.
[12] C.H.M. Van Kemenade, A two-level evolution strategy: balancing global and local search, in: Proceedings of the 5th Belgian–Dutch Conference on Machine Learning, 1995, pp. 49–60.
[13] X. Yao, Y. Liu, Fast evolutionary programming, in: Proceedings of the 5th Annual Conference on Evolutionary Programming (EP'96), MIT Press, Cambridge, MA, 1996, pp. 451–460.
[14] X. Yao, Y. Liu, Fast evolution strategies, in: Evolutionary Programming VI: Proceedings of the 6th International Conference on Evolutionary Programming (EP'97), Springer, Berlin, 1997, pp. 151–161.
[15] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary Computation 3 (1999) 82–102.
[16] K. Chellapilla, D.B. Fogel, Two new mutation operators for enhanced search and optimization in evolutionary programming, in: Proceedings of SPIE's International Symposium on Optical Science, Engineering, and Instrumentation, Conference 3165: Applications of Soft Computing, SPIE, Bellingham, WA, 1997, pp. 260–269.
[17] K. Chellapilla, Combining mutation operators in evolutionary programming, IEEE Transactions on Evolutionary Computation 2 (1998) 91–96.
[18] D.-H. Choi, S.-Y. Oh, A new mutation rule for evolutionary programming motivated from backpropagation learning, IEEE Transactions on Evolutionary Computation 4 (2000) 188–190.
[19] W. Hart, Adaptive Global Optimization with Local Search, Ph.D. Thesis, University of California at San Diego, 1994.
[20] W. Hart, R.K. Belew, Optimization with genetic algorithm hybrids that use local search, in: R.K. Belew, M. Mitchell (Eds.), Adaptive Individuals in Evolving Populations, Addison Wesley, New York, 1996.
[21] K. Chellapilla, H. Birru, S.S. Rao, Effectiveness of local search operators in EP, in: Genetic Programming: Proceedings of the 3rd Annual Conference, Morgan Kaufmann Publishers, San Francisco, CA, pp. 753–761.
[22] W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, Cambridge University Press, Cambridge, 1992.
[23] F.J. Solis, R.J.-B. Wets, Minimization by random search techniques, Mathematics of Operations Research 6 (1981) 19–30.
[24] J.-B.H. Minster, N.P. Williams, T.G. Masters, J.F. Gilbert, J.S. Haase, Application of evolutionary programming to earthquake hypocenter determination, in: Proceedings of the 4th Annual Conference on Evolutionary Programming, MIT Press, Cambridge, MA, 1995, pp. 3–17.
[25] C.K. Soh, Y.X. Dong, Evolutionary programming for inverse problems in civil engineering, Journal of Computing in Civil Engineering, ASCE 15 (2) (2001) 144–150.
[26] W.C. Davidon, Variable metric method for minimization, Atomic Energy Commission Research and Development Report ANL-5990, 1959.
[27] R. Fletcher, M.J.D. Powell, A rapidly convergent descent method for minimization, Computer Journal 6 (1963) 163–168.