
2013 Sixth International Conference on Advanced Computational Intelligence, October 19-21, 2013, Hangzhou, China

    An Effective Improvement of JADE for Real-parameter Optimization

    Chunjiang Zhang and Liang Gao

Abstract-Although metaheuristics cannot guarantee finding the global optimum, they are indeed efficient, especially for problems that are very difficult to optimize with traditional optimization methods. The differential evolution (DE) algorithm is one of the most competitive metaheuristics, and the adaptive DE with optional external archive (JADE) is an excellent DE variant. Based on an analysis of the shortcomings of JADE, an effective improvement of JADE is put forward in this paper. Two parameters of JADE can be reinitialized, and two new mutation strategies are added in the improved JADE. The 28 benchmark problems of the competition on real-parameter single objective optimization at the 2013 IEEE Congress on Evolutionary Computation (CEC 2013) are used to test the performance of the proposed algorithm. The comparison with DE/rand/1 and the original JADE shows that the improvement is effective.

    Keywords-Differential evolution; Improved JADE; Real-parameter optimization

    I. INTRODUCTION

Differential evolution (DE) is a powerful and simple stochastic algorithm for real-parameter optimization. DE has become a bright star in the sky of nature-inspired metaheuristics since it was first put forward by R. Storn and K. V. Price in 1995 [1]. DE and DE-based algorithms have consistently ranked near the top in previous IEEE Congress on Evolutionary Computation (CEC) competitions. For example, classical DE ranked second and SaDE (self-adaptive DE) ranked third on the 10-D problems of the 2005 CEC competition on real-parameter optimization. The review paper [3] points out four reasons why researchers have regarded DE as an attractive optimization tool: 1) DE is simple; 2) DE is powerful; 3) DE has very few control parameters; 4) the space complexity of DE is low. For higher accuracy and efficiency, many DE variants, such as SaDE [4], jDE [5], DEGL (DE with global and local neighborhoods) [6], JADE (adaptive DE with optional external archive) [7], and CoDE (DE with composite trial vector generation strategies and control parameters) [8], have been proposed in recent years. Although these modified DE variants are more powerful than classical DE in some respects, they still have shortcomings and leave room for improvement. This paper focuses on JADE. After analyzing the drawbacks of JADE, a simple, straightforward and effective improvement is made (IJADE).

This research work is supported by the National Basic Research Program of China (973 Program) under Grant No. 2011CB706804 and the Natural Science Foundation of China (NSFC) under Grant No. 51121002.

The authors are with the State Key Laboratory of Digital Manufacturing Equipment & Technology, Huazhong University of Science and Technology, Wuhan 430074, PR China (e-mails: zh chj@gg.com; gaoliang@mail.hust.edu.cn).


The benchmark problems of the CEC 2013 competition on real-parameter single objective optimization are used to test the performance of our improved algorithm.

The remainder of this paper is organized as follows: DE and JADE are introduced in Section II. Section III provides the analysis of the drawbacks of JADE and the proposed improvement. Experimental results are presented and discussed in Section IV. Conclusions are drawn in Section V.

    II. DE AND JADE

    A. DE

Like other evolutionary algorithms, DE has a population of NP individuals. Each individual is a D-dimensional vector representing a candidate solution. The generations in DE are denoted by G = 0, 1, ..., Gmax. The ith individual of the population at the current generation is denoted as

$X_{i,G} = [x_{i,1,G}, x_{i,2,G}, \ldots, x_{i,D,G}]$ (1)

    At the beginning of DE, an initial population is generated by uniformly randomizing individuals within the feasible search space. For example, the jth component of the ith individual is generated at G = 0 as

$x_{i,j,0} = x_{j,\min} + \mathrm{rand}_{i,j}(0,1) \cdot (x_{j,\max} - x_{j,\min})$ (2)

where $x_{j,\min}$ and $x_{j,\max}$ are the minimum and maximum bounds of the jth dimension and $\mathrm{rand}_{i,j}(0,1)$ is a uniformly distributed random number between 0 and 1.
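As a concrete illustration of Equation (2), the following minimal NumPy sketch draws an initial population uniformly within the per-dimension bounds (the function and variable names here are ours, not part of the paper):

```python
import numpy as np

def initialize_population(pop_size, dim, x_min, x_max, rng=None):
    """Uniform initialization of Eq. (2): every component of every individual
    is drawn uniformly within its own [x_min[j], x_max[j]] interval."""
    rng = np.random.default_rng() if rng is None else rng
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    rand = rng.random((pop_size, dim))          # rand_{i,j}(0,1) for all i, j
    return x_min + rand * (x_max - x_min)

# Example: NP = 50 individuals on a 10-D problem bounded by [-100, 100]^10
population = initialize_population(50, 10, [-100.0] * 10, [100.0] * 10)
```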

After initialization, DE performs three steps at each generation G: mutation, crossover, and selection. In mutation, DE creates a mutant vector $V_{i,G} = (v_{i,1,G}, v_{i,2,G}, \ldots, v_{i,D,G})$ for each population member $X_{i,G}$. The five most frequently used mutation strategies are listed as follows.

    1) "DE/randl1" : v"G = X/l,G + F; -( X/2,G -Xr3,G ) (3)

    2) "DE/bestl1" : v"G = Xbesl,G + F; -(Xrl,G -Xr2,G) (4)

    3) "DE/current-to-bestl1":

    V"G =X"G +F;-(XbeS"G -x"G)+F;-(Xrl,G -X/2,G) (5)

    4) "DE/best/2":

    Vi,G = Xbesl,G + F; -(Xrl,G -X/2,G) + F; -(X/3,G -Xr4,G) (6)

  • 5) "DE/rand/2":

    V"e = Xrl,e + p, .(Xr2,e -Xr3,e) + p, .(Xr4,e -Xr5,e) (7)

In the above equations, r1, r2, r3, r4, and r5 are distinct integers randomly selected from the range [1, NP] and are also different from i. $X_{best,G}$ is the best individual in the current population. In classic DE, the parameter $F_i = F$ is a fixed positive value called the scaling factor, which amplifies the difference vectors such as $(X_{r1,G} - X_{r2,G})$. In many improved DE variants, JADE for example, each individual i has its own scaling factor $F_i$, as illustrated in the sketch below.
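To make the notation concrete, the following sketch implements Equations (3)-(7) with NumPy (the helper names are ours; minimization is assumed when picking the best individual):

```python
import numpy as np

def pick_distinct(rng, pop_size, exclude, count):
    """Pick `count` distinct indices from 0..NP-1, all different from `exclude`."""
    candidates = np.delete(np.arange(pop_size), exclude)
    return rng.choice(candidates, size=count, replace=False)

def mutate(pop, fitness, i, F, strategy, rng):
    """Build one mutant vector V_i according to the classic strategies (3)-(7).
    `pop` is an (NP, D) array and lower fitness values are better."""
    best = pop[np.argmin(fitness)]
    x_i = pop[i]
    if strategy == "rand/1":                     # Eq. (3)
        r1, r2, r3 = pick_distinct(rng, len(pop), i, 3)
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "best/1":                     # Eq. (4)
        r1, r2 = pick_distinct(rng, len(pop), i, 2)
        return best + F * (pop[r1] - pop[r2])
    if strategy == "current-to-best/1":          # Eq. (5)
        r1, r2 = pick_distinct(rng, len(pop), i, 2)
        return x_i + F * (best - x_i) + F * (pop[r1] - pop[r2])
    if strategy == "best/2":                     # Eq. (6)
        r1, r2, r3, r4 = pick_distinct(rng, len(pop), i, 4)
        return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    if strategy == "rand/2":                     # Eq. (7)
        r1, r2, r3, r4, r5 = pick_distinct(rng, len(pop), i, 5)
        return pop[r1] + F * (pop[r2] - pop[r3]) + F * (pop[r4] - pop[r5])
    raise ValueError("unknown strategy: " + strategy)
```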

After mutation, a crossover operator is applied to $X_{i,G}$ and $V_{i,G}$ to generate a trial vector $U_{i,G} = (u_{i,1,G}, u_{i,2,G}, \ldots, u_{i,D,G})$. The DE family uses two kinds of crossover schemes: exponential and binomial crossover. In this paper, only the binomial crossover is used. Under the binomial crossover scheme, the trial vector is obtained as

$u_{i,j,G} = \begin{cases} v_{i,j,G}, & \text{if } \mathrm{rand}_{i,j}(0,1) \le CR_i \text{ or } j = j_{rand} \\ x_{i,j,G}, & \text{otherwise} \end{cases}$ (8)

where $i = 1, 2, \ldots, NP$, $j = 1, 2, \ldots, D$, $j_{rand}$ is a randomly chosen integer in [1, D], and $\mathrm{rand}_{i,j}(0,1)$ is a uniformly distributed random number between 0 and 1. In many adaptive DE variants, a $CR_i$ is associated with each individual and may vary from generation to generation.
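A minimal sketch of the binomial crossover of Equation (8), using 0-based indices (function and variable names are ours):

```python
import numpy as np

def binomial_crossover(x_i, v_i, cr_i, rng):
    """Binomial crossover of Eq. (8): take the mutant component where
    rand_{i,j}(0,1) <= CR_i or j == j_rand; otherwise keep the target component."""
    dim = len(x_i)
    j_rand = rng.integers(dim)          # guarantees at least one component from the mutant
    take_mutant = rng.random(dim) <= cr_i
    take_mutant[j_rand] = True
    return np.where(take_mutant, v_i, x_i)

# Example usage
rng = np.random.default_rng(0)
u = binomial_crossover(np.zeros(5), np.ones(5), 0.9, rng)
```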

The selection operator determines whether the target or the trial vector survives to the next generation after crossover. For a minimization problem, it is expressed as follows:

    - {"G' if f(',G) f(',G) Xi,G+l = _

    X"G, otherwise (9 )

    The above three steps are repeated generation after generation until a termination criterion is satisfied.
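Putting the three steps together, the following self-contained sketch of a classic DE/rand/1/bin loop assembles Equations (2), (3), (8) and (9); the bound clipping of mutants and all identifiers are our own choices, not prescriptions of the paper:

```python
import numpy as np

def de_rand_1_bin(func, bounds, pop_size=50, F=0.5, CR=0.9, max_gen=1000, seed=None):
    """Classic DE/rand/1/bin for minimizing `func`.
    `bounds` is a list of (min, max) pairs, one per dimension."""
    rng = np.random.default_rng(seed)
    x_min, x_max = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = x_min + rng.random((pop_size, dim)) * (x_max - x_min)   # Eq. (2)
    fit = np.array([func(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(np.delete(np.arange(pop_size), i), 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])                 # Eq. (3), DE/rand/1
            v = np.clip(v, x_min, x_max)                          # keep mutants feasible
            j_rand = rng.integers(dim)
            mask = rng.random(dim) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])                         # Eq. (8), binomial crossover
            f_u = func(u)
            if f_u <= fit[i]:                                     # Eq. (9), greedy selection
                pop[i], fit[i] = u, f_u
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: minimize the sphere function on [-100, 100]^10
best_x, best_f = de_rand_1_bin(lambda x: float(np.sum(x * x)), [(-100, 100)] * 10, max_gen=200)
```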

    B. JADE

JADE is a DE variant that implements a new mutation strategy, "DE/current-to-pbest", with an optional external archive and updates its control parameters in an adaptive manner. Its earlier version was presented by Zhang and Sanderson at CEC 2007 [9]. The journal article on JADE was published in 2009 [7]. JADE is very competitive among the DE variants. In the article on another DE variant, CoDE [8], experimental results of five DE variants (JADE, jDE, SaDE, EPSDE and CoDE) on the CEC 2005 benchmark problems for real-parameter single objective optimization showed that JADE ranked second, only slightly inferior to CoDE. The key points of JADE are introduced as follows.

1) DE/current-to-pbest/1: In JADE, a new mutation strategy named DE/current-to-pbest/1 was put forward. DE/current-to-pbest/1 is a generalization of DE/current-to-best/1 intended to enhance its global exploration ability. The new mutation strategy has two schemes. In the first one, without the optional archive, a mutation vector is generated as

$V_{i,G} = X_{i,G} + F_i \cdot (X^{p}_{best,G} - X_{i,G}) + F_i \cdot (X_{r1,G} - X_{r2,G})$ (8)

where $X^{p}_{best,G}$ is randomly chosen from the top 100p% individuals, with $p \in (0, 1]$. Each individual i has its own $F_i$, which is updated at each generation through the adaptive mechanism introduced later. Comparing Equation (5) with Equation (8), the only difference is that $X_{best,G}$ carries a superscript p in Equation (8). If the top 100p% group contains only the single best individual, the two mutation strategies are identical.

In the second scheme, DE/current-to-pbest/1 with archive, a mutation vector is generated in the following manner:

$V_{i,G} = X_{i,G} + F_i \cdot (X^{p}_{best,G} - X_{i,G}) + F_i \cdot (X_{r1,G} - \tilde{X}_{r2,G})$ (9)

where $\tilde{X}_{r2,G}$ is randomly selected from the union of the current population P and the archive population A. The archive A is initialized to be empty. Then, after each generation, the parent solutions that fail in the selection process are added to the archive. If the size of the archive exceeds a given threshold, some individuals are randomly removed from it.
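A possible NumPy realization of this scheme, including the archive bookkeeping, is sketched below (minimization is assumed and all identifiers are ours):

```python
import numpy as np

def current_to_pbest_1(pop, fitness, archive, i, F_i, p, rng):
    """DE/current-to-pbest/1 with archive, Eq. (9): X^p_best is drawn from the
    top 100p% individuals, X_r1 from the population P, and X_r2 from P union A."""
    pop_size = len(pop)
    n_top = max(1, int(round(p * pop_size)))          # at least one candidate
    top = np.argsort(fitness)[:n_top]                 # indices of the top 100p%
    x_pbest = pop[rng.choice(top)]
    r1 = int(rng.choice(np.delete(np.arange(pop_size), i)))
    union = np.vstack([pop, np.asarray(archive)]) if len(archive) else pop
    while True:                                       # r2 from P union A, distinct from i and r1
        r2 = int(rng.integers(len(union)))
        if r2 != i and r2 != r1:
            break
    return pop[i] + F_i * (x_pbest - pop[i]) + F_i * (pop[r1] - union[r2])

def update_archive(archive, failed_parents, max_size, rng):
    """Add the parents that lost the selection; randomly trim the archive
    whenever it exceeds `max_size`."""
    archive = list(archive) + [np.array(x) for x in failed_parents]
    while len(archive) > max_size:
        archive.pop(int(rng.integers(len(archive))))
    return archive
```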

2) Parameter Adaptation: At each generation, the scaling factor $F_i$ of each individual $X_{i,G}$ is generated according to a Cauchy distribution with location parameter $\mu_F$ and scale parameter 0.1,

$F_i = \mathrm{randc}_i(\mu_F, 0.1)$ (10)

and then truncated to 1 if $F_i > 1$ or regenerated if $F_i \le 0$. $S_F$ denotes the set of successful scaling factors in each generation. $\mu_F$ is updated at the end of each generation as follows:

$\mu_F = (1 - c) \cdot \mu_F + c \cdot \mathrm{mean}_L(S_F)$ (11)

where $\mathrm{mean}_L(S_F)$ is the Lehmer mean

$\mathrm{mean}_L(S_F) = \dfrac{\sum_{F \in S_F} F^2}{\sum_{F \in S_F} F}$ (12)

The crossover probability $CR_i$ of each individual i is generated according to a normal distribution with mean $\mu_{CR}$ and standard deviation 0.1,

$CR_i = \mathrm{randn}_i(\mu_{CR}, 0.1)$ (13)
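The parameter adaptation can be sketched as follows. The update of $\mu_{CR}$ at the end of each generation is not shown in the excerpt above; the sketch follows the rule of the JADE journal paper [7] (arithmetic mean of the successful CR values), and all function names are ours:

```python
import numpy as np

def sample_F(mu_F, rng):
    """Eq. (10): draw F_i from a Cauchy distribution with location mu_F and
    scale 0.1; truncate to 1 if it exceeds 1, redraw if it is not positive."""
    while True:
        F = mu_F + 0.1 * rng.standard_cauchy()
        if F > 0.0:
            return min(F, 1.0)

def sample_CR(mu_CR, rng):
    """Eq. (13): draw CR_i from a normal distribution with mean mu_CR and
    standard deviation 0.1, clipped to [0, 1] as in JADE [7]."""
    return float(np.clip(rng.normal(mu_CR, 0.1), 0.0, 1.0))

def update_mu_F(mu_F, successful_F, c=0.1):
    """Eqs. (11)-(12): move mu_F toward the Lehmer mean of the successful F values."""
    if len(successful_F) == 0:
        return mu_F
    s = np.asarray(successful_F, dtype=float)
    lehmer_mean = np.sum(s ** 2) / np.sum(s)
    return (1.0 - c) * mu_F + c * lehmer_mean

def update_mu_CR(mu_CR, successful_CR, c=0.1):
    """Companion update for mu_CR (JADE [7]): arithmetic mean of S_CR."""
    if len(successful_CR) == 0:
        return mu_CR
    return (1.0 - c) * mu_CR + c * float(np.mean(successful_CR))
```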
