

2008 IEEE Swarm Intelligence Symposium

St. Louis, MO, USA, September 21-23, 2008

Velocity Self-Adaptation Made Particle Swarm Optimization Faster

    Guangming Lin, Lishan Kang, Yongsheng Liang, Yuping Chen

Abstract: Lognormal self-adaptation has been used extensively in evolutionary programming (EP) and evolution strategies (ES) to adjust the search step size for each objective variable. Particle swarm optimization (PSO) relies on two kinds of factors, the velocity and the position of particles, to generate better particles. In this paper, we propose Self-Adaptive Velocity PSO (SAVPSO), in which we first introduce lognormal self-adaptation strategies to efficiently control the velocity of PSO. Extensive empirical studies have been carried out to evaluate the performance of SAVPSO, standard PSO and some other improved versions of PSO. The experimental results on 7 widely used test functions show that SAVPSO outperforms standard PSO.

I. INTRODUCTION

Evolutionary algorithms (EAs) have been applied to many optimization problems successfully in recent years. They are population-based search algorithms with the generation-and-test feature [1, 2]. New offspring are generated by perturbations and tested to determine the acceptable individuals for the next generation. For large search spaces, the methods of EAs are more efficient than classical exhaustive methods; they are stochastic algorithms whose search methods model natural phenomena: genetic inheritance and Darwinian strife for survival. The best known techniques in the class of EAs are genetic algorithms (GA), evolution strategies (ES), evolutionary programming (EP) and genetic programming (GP). Particle swarm optimization (PSO) is also a stochastic search algorithm, first proposed by Kennedy and Eberhart [8, 9], which developed out of work simulating the movement of flocks of birds. PSO shares many features with EAs. It has been shown to be an efficient, robust and simple optimization algorithm, and it has been applied successfully to many different kinds of problems [18, 19].

Optimization using EAs and PSO can be explained by two major steps:

1. Generate new solutions from the current population, and
2. Select the next generation from the generated and the current solutions.

[Footnote: Manuscript received June 16, 2008. This work was supported in part by the National Natural Science Foundation of China (No. 60473081). Guangming Lin is with the Shenzhen Institute of Information Technology, No. 1068 West Niguang Road, Shenzhen 518029, China (corresponding author; phone: 86-755-25859105; e-mail: [email protected]). Lishan Kang is with the School of Computer Science, China University of Geosciences, Wuhan, China. Yongsheng Liang is with the Shenzhen Institute of Information Technology. Yuping Chen is with the School of Computer Science, China University of Geosciences, Wuhan, China.]


These two steps can be regarded as a population-based version of the classical generate-and-test method, where we use mutation (or velocity and position in PSO) to generate new solutions, and selection is used to test which of the newly generated solutions should survive to the next generation.
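As an illustration (not taken from the paper), the generate-and-test view can be written as one generic loop; `generate_and_test`, the toy `generate` and `select` routines, and the fitness function below are hypothetical placeholders standing in for mutation (EP/ES) or the velocity-and-position update (PSO) and for the survival test:

```python
import random

def generate_and_test(population, generate, select, fitness, generations):
    """Generic population-based generate-and-test loop.

    generate: produces new solutions from the current population
              (mutation in EP/ES, velocity/position updates in PSO).
    select:   tests which of the current and generated solutions
              survive into the next generation.
    """
    for _ in range(generations):
        offspring = generate(population)                       # step 1: generate
        population = select(population + offspring, fitness)   # step 2: test/select
    return min(population, key=fitness)

# Tiny usage example: minimize f(x) = x^2 with Gaussian perturbations.
pop = [random.uniform(-5.0, 5.0) for _ in range(10)]
best = generate_and_test(
    pop,
    generate=lambda p: [x + random.gauss(0.0, 0.5) for x in p],
    select=lambda cand, f: sorted(cand, key=f)[:10],   # keep the 10 best
    fitness=lambda x: x * x,
    generations=100,
)
```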

Formulating EAs as a special case of the generate-and-test method establishes a bridge between PSO and other search algorithms, such as EP, ES, GA, simulated annealing (SA), tabu search (TS) and others, and thus facilitates cross-fertilization amongst different research areas.

Standard PSO performs well in the early iterations, but it has problems reaching a near-optimal solution in some multi-modal optimization problems [8]. PSO can easily fall into local optima, because the particles quickly get close to the best particle. Both Eberhart [8] and Angeline [10] conclude that hybrid models of EAs and PSO could lead to further advances. Some research has been done to tackle this problem [18, 19]. In [18, 19], a method hybridizes Fast EP and PSO to form Fast PSO (FPSO), which uses a Cauchy mutation operator to mutate the best position of the particles, gbest. The hope is that the long jumps produced by Cauchy mutation can move the best position out of the local optimum into which it has fallen. FPSO focuses on the best particle position gbest. However, in the PSO procedure there is another important factor: the velocity of the particles. During the PSO search, the global best position gbest and the current best positions of the particles pbest indicate the search direction, while the velocity of a particle is its search step size. In [2] we analyzed how the step size affects the performance of EAs.

In this paper we focus on the velocity, i.e. the search step size, of PSO. We first introduce the lognormal self-adaptation strategy to control the velocity of PSO. According to the global optimization search strategy, in the early stages we should increase the step size to enhance the global search ability, and in the final stages we should decrease the step size to enhance the local search ability. The characteristics of the lognormal function fit this search strategy very well. We propose a new self-adaptive velocity PSO (SAVPSO) algorithm to efficiently control the global and local search of PSO. We use a suite of 7 functions to test PSO and SAVPSO, and we can see that SAVPSO significantly outperforms PSO on all the test functions.
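The exact SAVPSO velocity update is specified in Section 3. Purely as an illustrative sketch of the underlying mechanism, one lognormal self-adaptation step applied to a search step size could look like the following; the function name and the choice of factors (mirroring the EP/ES rule (2.2) below) are assumptions, not the paper's definition:

```python
import math
import random

def lognormal_adapt(step_size, n):
    """One lognormal self-adaptation update of a search step size.

    Mirrors the EP/ES rule (2.2) below with the usual factors
    tau = (sqrt(2*sqrt(n)))^-1 and tau' = (sqrt(2*n))^-1.  Whether
    SAVPSO applies exactly this rule to particle velocities is
    defined in Section 3, so treat this as an assumption.
    """
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # per-dimension factor
    tau_prime = 1.0 / math.sqrt(2.0 * n)        # global factor
    return step_size * math.exp(tau_prime * random.gauss(0, 1)
                                + tau * random.gauss(0, 1))
```

Because the multiplier is lognormal, repeated updates can both grow the step size (favoring global search early on) and shrink it toward zero (favoring local search later), with selection deciding which step sizes survive.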

The rest of the paper is organized as follows: Section 2 formulates the global optimization problem considered in this paper and describes the implementations of EP, FEP and PSO. Section 3 describes the implementation of the new SAVPSO algorithm. Section 4 lists the benchmark functions used in the experiments and gives the experimental settings. Section 5 presents and discusses the experimental results. Finally, Section 6 concludes with a summary and a few remarks.



    II. OPTIMIZATION USING EP, FEP AND PSO

A global minimization problem can be represented as a pair (S, f), where S ⊆ Rⁿ is a bounded set on Rⁿ and f: S → R is an n-dimensional real-valued function. The problem is to find a point x_min ∈ S such that f(x_min) is a global minimum on S. More specifically, it is required to find x_min ∈ S such that

$\forall x \in S: f(x_{\min}) \le f(x),$

where f does not need to be continuous but must be bounded. This paper considers only unconstrained optimization functions.
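For example, for the sphere function $f(x) = \sum_{j=1}^{n} x_j^2$ on S = [-100, 100]ⁿ, the global minimum is f(x_min) = 0 at x_min = (0, ..., 0).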

A. Classical Evolutionary Programming

Fogel [4] and Bäck and Schwefel [6] have indicated that CEP with self-adaptive mutation usually performs better than CEP without self-adaptive mutation on the functions they tested. For this reason, CEP with self-adaptive mutation is investigated in this paper. As described by Bäck and Schwefel [6], the CEP implemented in this study works as follows:

1. Generate the initial population of μ individuals, and set k = 1. Each individual is taken as a pair of real-valued vectors (x_i, η_i), ∀i ∈ {1, 2, ..., μ}, where the x_i's are the objective variables and the η_i's are standard deviations for Gaussian mutations (also known as strategy parameters in self-adaptive evolutionary algorithms).

2. Evaluate the fitness score for each individual (x_i, η_i), ∀i ∈ {1, 2, ..., μ}, of the population based on the objective function f(x_i).

3. Each parent (x_i, η_i), ∀i ∈ {1, 2, ..., μ}, creates a single offspring (x'_i, η'_i) by: for j = 1, 2, ..., n,

   $x_i'(j) = x_i(j) + \eta_i(j)\, N_j(0, 1)$   (2.1)

   $\eta_i'(j) = \eta_i(j)\, \exp\!\big(\tau' N(0, 1) + \tau N_j(0, 1)\big)$   (2.2)

   where x_i(j), x'_i(j), η_i(j) and η'_i(j) denote the j-th component of the vectors x_i, x'_i, η_i and η'_i, respectively. N(0, 1) denotes a normally distributed one-dimensional random number with mean 0 and standard deviation 1. N_j(0, 1) indicates that a new random number is generated for each value of j. The factors τ and τ' are commonly set to $(\sqrt{2\sqrt{n}})^{-1}$ and $(\sqrt{2n})^{-1}$, respectively [6].

4. Calculate the fitness of each offspring (x'_i, η'_i), ∀i ∈ {1, 2, ..., μ}.

5. Conduct pairwise comparison over the union of parents (x_i, η_i) and offspring (x'_i, η'_i), ∀i ∈ {1, 2, ..., μ}. For each individual, q opponents are chosen uniformly at random from all the parents and offspring. For each comparison, if the individual's fitness is better than its opponent's, it is the winner.

6. Select the μ individuals out of (x_i, η_i) and (x'_i, η'_i), ∀i ∈ {1, 2, ..., μ}, that are winners to be parents of the next generation.

7. Stop if the halting criterion is satisfied; otherwise, set k = k + 1 and go to Step 3.
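Steps 1-7 translate directly into code. The following is a minimal Python sketch of CEP with lognormal self-adaptive Gaussian mutation, equations (2.1) and (2.2), and q-tournament selection; the population size, q, the initial η and the generation count are illustrative choices, not the paper's experimental settings:

```python
import math
import random

def cep(f, n, bounds, mu=100, q=10, generations=1500):
    """Classical EP (steps 1-7) with lognormal self-adaptive mutation.

    f : objective function taking a list of n floats; lower is better.
    bounds : (low, high) box used for initialization only.
    mu, q, generations : illustrative settings, not the paper's.
    """
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(n))   # tau  = (sqrt(2*sqrt(n)))^-1
    tau_p = 1.0 / math.sqrt(2.0 * n)            # tau' = (sqrt(2*n))^-1
    low, high = bounds

    # Step 1: initial population of (x_i, eta_i) pairs.
    pop = [([random.uniform(low, high) for _ in range(n)], [3.0] * n)
           for _ in range(mu)]

    for _ in range(generations):
        # Step 3: each parent creates one offspring via (2.1) and (2.2).
        offspring = []
        for x, eta in pop:
            new_x = [xj + ej * random.gauss(0, 1) for xj, ej in zip(x, eta)]
            g = random.gauss(0, 1)              # global N(0,1), shared over j
            new_eta = [ej * math.exp(tau_p * g + tau * random.gauss(0, 1))
                       for ej in eta]
            offspring.append((new_x, new_eta))

        # Steps 2 and 4: fitness of parents and offspring.
        scored = [(f(x), (x, eta)) for x, eta in pop + offspring]

        # Step 5: q-tournament -- each individual meets q random opponents
        # and scores a win whenever its fitness is better (here: lower).
        ranked = []
        for fit, ind in scored:
            opponents = random.sample(scored, q)
            wins = sum(1 for opp_fit, _ in opponents if fit < opp_fit)
            ranked.append((wins, fit, ind))

        # Step 6: the mu individuals with the most wins become parents.
        ranked.sort(key=lambda r: (-r[0], r[1]))
        pop = [ind for _, _, ind in ranked[:mu]]
        # Step 7 corresponds to the fixed generation budget of this loop.

    best_x, _ = min(pop, key=lambda p: f(p[0]))
    return best_x, f(best_x)
```

Note that in rule (2.2) the τ'-term uses one N(0, 1) draw shared across all dimensions of an individual, while the τ-term draws a fresh N_j(0, 1) per dimension; the sketch preserves this distinction.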

B. The Standard Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm is a recent addition to the list of global search methods. It is a population-based stochastic optimization technique developed by Kennedy and Eberhart [8] in 1995, inspired by the social behavior of organisms such as fish schooling and bird flocking, and by swarm intelligence theory. PSO has been found to be robust in solving continuous nonlinear optimization problems. Recently, PSO has been successfully employed to solve non-smooth complex optimization problems. In the past several years, PSO has been widely applied in many research and application areas.

Particle swarm optimization simulates social behavior, such as a flock of birds searching for food. The behavior of each individual is impacted by the behaviors of its neighbors and of the swarm.

PSO is initialized with a population of random solutions of the objective function. It uses a population of individuals, called particles, with the initial population distributed randomly over the search space. It searches for the optimal value of a function by updating the population through a number of generations. Each new population is generated from the old population with a set of simple rules that have stochastic elements.

Each particle searches for the optimum position, like a bird searching for food, flying through the problem space by following the current optimal particles. The position of each particle is updated by a new velocity, calculated through equations (2.3) and (2.4), which is based on its previous velocity, the position at which the best solution so far has been achieved by the particle (pbest or pb), and the position at which the best solution so far has been achieved by the global population (gbest or gb):

$v(i+1) = \omega \times v(i) + c_1 \times r_1 \times (pb - x(i)) + c_2 \times r_2 \times (gb - x(i))$   (2.3)

$x(i+1) = x(i) + v(i+1)$   (2.4)

In equation (2.3), ω is the inertia weight, c₁ and c₂ are acceleration constants, and r₁ and r₂ are random numbers uniformly distributed in [0, 1].
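For concreteness, the following is a minimal Python sketch of standard PSO driven by the updates (2.3) and (2.4); the inertia weight ω = 0.729 and acceleration constants c₁ = c₂ = 1.49 are common textbook values, not necessarily the settings used in the paper:

```python
import random

def pso(f, n, bounds, swarm=30, generations=1000, w=0.729, c1=1.49, c2=1.49):
    """Standard PSO driven by the velocity and position updates (2.3)-(2.4)."""
    low, high = bounds
    x = [[random.uniform(low, high) for _ in range(n)] for _ in range(swarm)]
    v = [[0.0] * n for _ in range(swarm)]
    pb = [xi[:] for xi in x]            # pbest: each particle's best position
    pb_f = [f(xi) for xi in x]
    g = min(range(swarm), key=lambda i: pb_f[i])
    gb, gb_f = pb[g][:], pb_f[g]        # gbest: the swarm's best position

    for _ in range(generations):
        for i in range(swarm):
            for j in range(n):
                r1, r2 = random.random(), random.random()
                # (2.3): inertia + pull toward pbest + pull toward gbest
                v[i][j] = (w * v[i][j]
                           + c1 * r1 * (pb[i][j] - x[i][j])
                           + c2 * r2 * (gb[j] - x[i][j]))
                x[i][j] += v[i][j]      # (2.4): move the particle
            fi = f(x[i])
            if fi < pb_f[i]:            # update personal best
                pb[i], pb_f[i] = x[i][:], fi
                if fi < gb_f:           # update global best
                    gb, gb_f = x[i][:], fi
    return gb, gb_f
```

For example, `pso(lambda z: sum(t * t for t in z), n=30, bounds=(-100.0, 100.0))` minimizes the sphere function over [-100, 100]³⁰.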