JAISCR, 2014, Vol. 4, No. 3, pp. 189–204, DOI: 10.1515/jaiscr-2015-0008

REPULSIVE SELF-ADAPTIVE ACCELERATION PARTICLE SWARM OPTIMIZATION APPROACH

Simone A. Ludwig

Department of Computer Science, North Dakota State University, Fargo, ND, USA

Abstract

Adaptive Particle Swarm Optimization (PSO) variants have become popular in recent years. The main idea of these adaptive PSO variants is that they adaptively change their search behavior during the optimization process based on information gathered during the run. Adaptive PSO variants have been shown to solve a wide range of difficult optimization problems efficiently and effectively. In this paper we propose a Repulsive Self-adaptive Acceleration PSO (RSAPSO) variant that adaptively optimizes the velocity weights of every particle at every iteration. The velocity weights include the acceleration constants as well as the inertia weight, which are responsible for the balance between exploration and exploitation. Our proposed RSAPSO variant optimizes the velocity weights that are then used to search for the optimal solution of the problem (e.g., a benchmark function). We compare RSAPSO to four known adaptive PSO variants (decreasing weight PSO, time-varying acceleration coefficients PSO, guaranteed convergence PSO, and attractive and repulsive PSO) on twenty benchmark problems. The results show that RSAPSO achieves better results than the known PSO variants on difficult optimization problems that require large numbers of function evaluations.

1 Introduction

Particle Swarm Optimization (PSO) is one of the swarm intelligence methods [1]. The behavior of PSO is inspired by bird swarms searching for optimal food sources, where the direction in which a bird moves is influenced by its current movement, the best food source it ever experienced, and the best food source any bird in the swarm ever experienced. As for PSO, the movement of a particle is influenced by its inertia, its personal best position, and the global best position of the swarm.

PSO has several particles, and every particle maintains its current objective value, its position, its velocity, its personal best value, that is, the best objective value the particle ever experienced, and its personal best position, that is, the position at which the personal best value has been found. In addition, PSO maintains a global best value, that is, the best objective value any particle has ever experienced, and a global best position, that is, the position at which the global best value has been found. Basic PSO [1] uses the following equation to move the particles:

$$x^{(i)}(n+1) = x^{(i)}(n) + v^{(i)}(n+1), \quad n = 0, 1, 2, \ldots, \bar{n}-1, \qquad (1a)$$

where $x^{(i)}$ is the position of particle $i$, $n$ is the iteration number with $n = 0$ referring to the initialization, $\bar{n}$ is the total number of iterations, and $v^{(i)}$ is the velocity of particle $i$, $i = 1, 2, \ldots, n_p$, where $n_p$ is the number of particles. Basic PSO uses the following equation to update the particle velocities:

$$v^{(i)}(n+1) = w\,v^{(i)}(n) + c_1 r_1^{(i)}(n)\,[x_p^{(i)}(n) - x^{(i)}(n)] + c_2 r_2^{(i)}(n)\,[x_g(n) - x^{(i)}(n)], \quad n = 0, 1, 2, \ldots, \bar{n}-1, \qquad (1b)$$


where $x_p^{(i)}(n)$ is the personal best position of particle $i$, $x_g(n)$ is the global best position of the swarm, $w$ is the inertia weight and is set to 1, and the acceleration constants are $c_1$ and $c_2$. Both $r_1^{(i)}$ and $r_2^{(i)}$ are vectors with components having random values uniformly distributed between 0 and 1. The notation $r^{(i)}(n)$ denotes that a new random vector is generated for every particle $i$ at iteration $n$.
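To make the basic update rules concrete, the following minimal sketch implements Equations (1a) and (1b) in Python (this is our illustration, not code from the paper; the stable parameter choices w = 0.7 and c1 = c2 = 1.5 are ours, since w = 1 without velocity clamping can diverge):

```python
import numpy as np

def basic_pso(f, dim, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5,
              x_min=-10.0, x_max=10.0, seed=0):
    """Minimal basic PSO following Equations (1a) and (1b)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(x_min, x_max, (n_particles, dim))  # positions x^(i)(0)
    v = np.zeros((n_particles, dim))                   # velocities v^(i)(0)
    x_p = x.copy()                                     # personal best positions
    f_p = np.array([f(xi) for xi in x])                # personal best values
    g = int(np.argmin(f_p))
    x_g, f_g = x_p[g].copy(), f_p[g]                   # global best position/value
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))            # fresh r1^(i)(n) per particle
        r2 = rng.random((n_particles, dim))            # fresh r2^(i)(n) per particle
        v = w * v + c1 * r1 * (x_p - x) + c2 * r2 * (x_g - x)  # Eq. (1b)
        x = x + v                                      # Eq. (1a)
        fx = np.array([f(xi) for xi in x])
        better = fx < f_p                              # update personal bests
        x_p[better], f_p[better] = x[better], fx[better]
        g = int(np.argmin(f_p))
        if f_p[g] < f_g:                               # update global best
            x_g, f_g = x_p[g].copy(), f_p[g]
    return x_g, f_g

# Example: minimize the 5-dimensional Sphere function.
print(basic_pso(lambda xi: float(np.sum(xi ** 2)), dim=5))
```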

PSO can focus on either population/particle diversity or convergence of particles at any iteration. Diversity favors particles that are searching a large area coarsely, whereas convergence favors particles that are searching a small area intensively. A promising strategy is to promote diversity of the swarm in early iterations and convergence in late iterations [2, 3], or to assign attributes to individual particles that promote diversity or convergence [4].

Despite PSO's simplicity, the success of PSO largely depends on the selection of optimal values for the control parameters $w$, $c_1$, and $c_2$. Non-optimal values of the control parameters lead to suboptimal solutions, premature convergence, stagnation, or divergent or cyclic behaviour [5, 6]. However, the optimal setting of the control parameters is problem-dependent and might differ between the particles within the swarm. Since finding optimal control parameters manually is very time-consuming, related work has addressed this with PSO variants that adaptively change all or some of the control parameters. For example, decreasing weight PSO decreases the inertia weight $w(n)$ linearly over time [2], time-varying acceleration coefficients PSO changes not only the inertia weight but also $c_1(n)$ and $c_2(n)$ over time [2, 3], and guaranteed convergence PSO ensures that the global best particle searches within a dynamically adapted radius [7, 8].

Other variants include the linear reduction of the maximum velocity PSO [9] and the non-linearly adjusted inertia weight PSO [10]. PSO with dynamic adaptation [11] uses an evolutionary speed factor that measures personal best value changes and an aggregation degree that measures the relative position of particles in the objective space to calculate the inertia weight $w$.

The APSO in [12] adapts the inertia weight of every particle based on its objective value, the global best value, and the global worst value. The APSO introduced in [13] changes its inertia weight based on swarm diversity to reduce premature convergence and hence to increase overall convergence. The swarm diversity is calculated as a function of the positions. Different variations of the self-tuning APSO are discussed in [14, 15, 16].

Self-tuning APSO as described in [15] grants every particle its own personal best weight $c_1^i$ and global best weight $c_2^i$. Self-tuning APSO initializes the personal best weights $c_1^i$ and the global best weights $c_2^i$ randomly for every particle, and moves the personal and global best weights towards the values of the particle $i$ that yielded the most updates of the global best position, where the distances of the movement towards the personal best weight $c_1^i$ and the global best weight $c_2^i$ are based on the total number of iterations [14]. In an update of self-tuning APSO, the personal and global best weights are moved in ever smaller steps for increasing numbers of iterations [15].

Past research has shown that adapting the velocity weights improves the convergence speed of PSO compared to having fixed velocity weights. Therefore, our approach is inspired by other PSO variants that assign every particle its own velocity weights [15, 16]. These PSO variants usually adapt the velocity weights of a certain particle that is selected based on a measure of superior performance [16] and adopt these velocity weights for all other particles. This paper is an extension of the work published as a short paper in [17]. The organization of this paper is as follows: In Section 2, details of the four PSO variants against which we compare RSAPSO are given. Section 3 introduces and describes the proposed RSAPSO variant. In Section 4, the benchmark problems used to compare the variants with RSAPSO are outlined and the experimental results are presented. Section 5 lists the conclusions reached from this study.

2 Related Work and PSO Variants used for Comparison

We are interested in finding the global minimum of an objective function $f(x)$ in a $D$-dimensional search space of the form $[x_{\min}, x_{\max}]^D$. In order to assess the performance of RSAPSO, we utilize four related adaptive PSO variants: decreasing weight PSO, time-varying acceleration coefficients PSO, guaranteed convergence PSO, and attractive and repulsive PSO.


2.1 Decreasing Weight PSO (DWPSO)

DWPSO is similar to basic PSO, but the inertia weight $w(n)$ is decreased linearly over time [2]. Thus, DWPSO promotes diversity in early iterations and convergence in late iterations. DWPSO uses Equation (1b) to determine the velocities of the particles, whereby the inertia weight $w(n)$ is calculated using:

$$w(n) = w_s - (w_s - w_e)\,\frac{n}{\bar{n}-1}, \qquad (2)$$

where $w_s$ is the inertia weight for the first iteration, and $w_e$ is the inertia weight for the last iteration.
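As a sketch, the schedule of Equation (2) is a one-line function (the name is ours; the defaults $w_s = 0.9$ and $w_e = 0.4$ follow Section 4.2):

```python
def dwpso_inertia(n, n_total, w_s=0.9, w_e=0.4):
    """Linearly decreasing inertia weight, Equation (2)."""
    return w_s - (w_s - w_e) * n / (n_total - 1)  # w_s at n = 0, w_e at n = n_total - 1
```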

2.2 Time-Varying Acceleration Coefficients PSO (TVACPSO)

TVACPSO adapts the acceleration coefficients, i.e., the personal best weight $c_1(n)$ and the global best weight $c_2(n)$, over time, in addition to the inertia weight $w(n)$ [2, 3]. The idea is to have high diversity during early iterations and high convergence during late iterations. The inertia weight $w(n)$ is changed as in DWPSO using Equation (2). TVACPSO uses the following equation to determine the velocities:

$$v^{(i)}(n+1) = w(n)\,v^{(i)}(n) + c_1(n)\,r_1^{(i)}(n)\,[x_p^{(i)}(n) - x^{(i)}(n)] + c_2(n)\,r_2^{(i)}(n)\,[x_g(n) - x^{(i)}(n)], \quad n = 0, 1, 2, \ldots, \bar{n}-1, \qquad (3a)$$

where the personal best weight $c_1(n)$ and the global best weight $c_2(n)$ at iteration $n$ are calculated using:

$$c_1(n) = c_{1s} - (c_{1s} - c_{1e})\,\frac{n}{\bar{n}-1}, \qquad c_2(n) = c_{2s} - (c_{2s} - c_{2e})\,\frac{n}{\bar{n}-1}, \qquad (3b)$$

where $c_{1s}$ is the personal best weight for the first iteration, $c_{1e}$ is the personal best weight for the last iteration, $c_{2s}$ is the global best weight for the first iteration, and $c_{2e}$ is the global best weight for the last iteration.
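A corresponding sketch of Equation (3b), again with the start and end values from Section 4.2 as defaults (the function name is ours):

```python
def tvac_coefficients(n, n_total, c1s=2.5, c1e=0.5, c2s=0.5, c2e=2.5):
    """Time-varying acceleration coefficients, Equation (3b)."""
    c1 = c1s - (c1s - c1e) * n / (n_total - 1)  # personal best weight decreases
    c2 = c2s - (c2s - c2e) * n / (n_total - 1)  # global best weight increases
    return c1, c2
```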

2.3 Guaranteed Convergence PSO (GCPSO)

GCPSO guarantees that the global best particle searches within a dynamically adapted radius [7, 8]. This addresses the problem of stagnation and increases local convergence by using the global best particle to randomly search within an adaptively changing radius at every iteration [8]. GCPSO, as described in [2], uses the following equation to update the position:

$$x^{(i_g)}(n+1) = x_g(n) + w(n)\,v^{(i_g)}(n) + \rho(n)\,(1 - 2r_3(n)), \quad n = 0, 1, 2, \ldots, \bar{n}-1. \qquad (4a)$$

GCPSO uses Equation (1b) to determine the velocities $v^{(i)}(n)$. The personal best weight $c_1$ and the global best weight $c_2$ are held constant. GCPSO uses the following equation to update the velocity of the global best particle:

$$v^{(i_g)}(n+1) = -x^{(i_g)}(n) + x_g(n) + w(n)\,v^{(i_g)}(n) + \rho(n)\,(1 - 2r_3(n)), \quad n = 0, 1, 2, \ldots, \bar{n}-1, \qquad (4b)$$

where $i_g$ is the index of the particle that updated the global best value most recently. The expression $-x^{(i_g)}(n) + x_g(n)$ is used to reset the position of particle $i_g$ to the global best position. The $r_3(n)$ are random numbers uniformly distributed between 0 and 1. The search radius is controlled by the search radius parameter $\rho(n)$, which is calculated using:

$$\rho(n+1) = \begin{cases} 2\rho(n), & \text{if } \sigma(n+1) > \sigma_c, \\ \tfrac{1}{2}\rho(n), & \text{if } \phi(n+1) > \phi_c, \\ \rho(n), & \text{otherwise}, \end{cases} \qquad (4c)$$

where $\sigma_c$ is the consecutive success threshold, and $\phi_c$ is the consecutive failure threshold defined below. Success means that using Equations (1) and (4b) to update the particle positions results in an improved global best value and position, and failure means it does not. The numbers of consecutive successes $\sigma(n)$ and failures $\phi(n)$ are calculated using:


$$\sigma(n+1) = \begin{cases} 0, & \text{if } \phi(n+1) > \phi(n), \\ \sigma(n)+1, & \text{otherwise}, \end{cases} \qquad (4d)$$

$$\phi(n+1) = \begin{cases} 0, & \text{if } \sigma(n+1) > \sigma(n), \\ \phi(n)+1, & \text{otherwise}. \end{cases} \qquad (4e)$$
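A compact sketch of the bookkeeping in Equations (4c)-(4e), assuming `improved` encodes whether the last update improved the global best value; the threshold defaults are illustrative, since the paper does not list values for $\sigma_c$ and $\phi_c$:

```python
def gcpso_radius_update(improved, rho, sigma, phi, sigma_c=15, phi_c=5):
    """One step of GCPSO's success/failure bookkeeping, Eqs. (4c)-(4e)."""
    sigma = sigma + 1 if improved else 0   # Eq. (4d): consecutive successes
    phi = 0 if improved else phi + 1       # Eq. (4e): consecutive failures
    if sigma > sigma_c:
        rho = 2.0 * rho                    # Eq. (4c): widen the search radius
    elif phi > phi_c:
        rho = 0.5 * rho                    # Eq. (4c): shrink the search radius
    return rho, sigma, phi
```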

2.4 Attractive and Repulsive PSO (RPSO)

RPSO aims to overcome the problem of premature convergence [18]. It uses a diversity measure to control the swarm by alternating between phases of "attraction" and "repulsion". The attraction phase operates as basic PSO, with the particles attracting each other (see Equation (1b)). The repulsion phase is obtained by inverting the velocity-update equation of the particles as follows:

$$v^{(i)}(n+1) = w(n)\,v^{(i)}(n) - c_1 r_1^{(i)}(n)\,[x_p^{(i)}(n) - x^{(i)}(n)] - c_2 r_2^{(i)}(n)\,[x_g(n) - x^{(i)}(n)], \quad n = 0, 1, 2, \ldots, \bar{n}-1. \qquad (5a)$$

In the repulsion phase, the individual particle is no longer attracted to, but instead repelled by, the best known particle position and its own previous best position.

In the attraction phase, the swarm is contracting, and therefore the diversity decreases. Once the diversity drops below a lower bound, $d_{low}$, the algorithm switches to the repulsion phase, so that the swarm expands according to Equation (5a). When a diversity of $d_{high}$ is reached, the attraction phase is switched on again. Therefore, there is an alternation between phases of exploitation and exploration (attraction and repulsion).

Equation (5b) sets the sign variable $dir$ to either 1 or -1, depending on the diversity value computed in Equation (5c):

$$dir = \begin{cases} -1, & \text{if } \mathit{diversity}(S) < d_{low}, \\ \phantom{-}1, & \text{if } \mathit{diversity}(S) > d_{high}, \end{cases} \qquad (5b)$$

$$\mathit{diversity}(S) = \frac{1}{|S| \cdot |L|} \sum_{i=1}^{|S|} \sqrt{\sum_{j=1}^{N} (p_{ij} - \bar{p}_j)^2}, \qquad (5c)$$

where $S$ is the swarm, $|S|$ is the swarm size, $|L|$ is the length of the longest diagonal in the search space, $N$ is the dimensionality of the problem, $p_{ij}$ is the $j$th value of the $i$th particle, and $\bar{p}_j$ is the $j$th value of the average point $\bar{p}$. Finally, the velocity update of Equation (1b) is modified by multiplying the social and personal components by the sign variable $dir$, which decides whether the particles are attracted to or repelled by each other:

$$v^{(i)}(n+1) = w(n)\,v^{(i)}(n) + dir\left(c_1 r_1^{(i)}(n)\,[x_p^{(i)}(n) - x^{(i)}(n)] + c_2 r_2^{(i)}(n)\,[x_g(n) - x^{(i)}(n)]\right), \quad n = 0, 1, 2, \ldots, \bar{n}-1. \qquad (5d)$$
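The diversity measure and the phase switch translate directly into code; below is a sketch under the assumption that $dir$ is left unchanged between the two thresholds, as the phase description implies (function names and the threshold defaults are ours):

```python
import numpy as np

def diversity(positions, diag_len):
    """Swarm diversity, Equation (5c): mean distance to the average point,
    normalized by the length |L| of the longest search space diagonal."""
    p_bar = positions.mean(axis=0)                  # average point
    dists = np.sqrt(((positions - p_bar) ** 2).sum(axis=1))
    return dists.sum() / (len(positions) * diag_len)

def update_dir(dir_, positions, diag_len, d_low=5e-6, d_high=0.25):
    """Sign variable of Equation (5b): -1 repels, +1 attracts."""
    d = diversity(positions, diag_len)
    if d < d_low:
        return -1       # diversity collapsed: switch to repulsion
    if d > d_high:
        return 1        # swarm spread out again: switch to attraction
    return dir_         # between the thresholds: keep the current phase
```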

2.5 Dealing with Search Space Violations

If a particle attempts to leave the search space, our strategy is to return it along its proposed path through a series of correcting iterations. In particular, we use:

$$\breve{x}^{(i)}(n+1) = \breve{x}^{(i)}(n) - \breve{v}^{(i)}(n+1), \quad n = 0, 1, \ldots, \breve{n}-1, \qquad (6a)$$

where $\breve{x}^{(i)}(n+1)$ is the corrected position, $\breve{v}^{(i)}$ is the corrected velocity, $n$ is the counter for the correcting iterations, and $\breve{n}$ is the total number of correcting iterations. The initial corrected position $\breve{x}^{(i)}(0)$ is set to the position $x^{(i)}(n+1)$ that lies outside the search space. The corrected velocities $\breve{v}^{(i)}$ are calculated using:

$$\breve{v}^{(i)}(n+1) = \alpha\,\breve{v}^{(i)}(n), \quad n = 0, 1, \ldots, \breve{n}-1, \qquad (6b)$$

where $\alpha$ is the correction factor, and the initial corrected velocity $\breve{v}^{(i)}(0)$ is set to the velocity $v^{(i)}(n+1)$ that caused the particle to attempt to leave the search space. Equation (6a) is used until the corrected position $\breve{x}^{(i)}(n+1)$ is in the search space or the limit on the total number of correcting iterations $\breve{n}$ is reached. If $\breve{n}$ is reached, the components of $\breve{x}^{(i)}(\breve{n})$ still outside the search space are clamped to the boundary of the search space. Based on good performance in empirical experiments, the values chosen are $\alpha = 0.54$ and $\breve{n} = 4$.
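This back-tracking strategy is easy to express in code; here is a sketch of Equations (6a) and (6b) with the stated values $\alpha = 0.54$ and $\breve{n} = 4$ (the function name is ours):

```python
import numpy as np

def correct_position(x_off, v_off, x_min, x_max, alpha=0.54, n_corr=4):
    """Return a boundary-violating particle along its own path, Eqs. (6a)-(6b)."""
    x, v = x_off.copy(), v_off.copy()    # offending position and velocity
    for _ in range(n_corr):
        if np.all((x >= x_min) & (x <= x_max)):
            return x                     # back inside the search space
        v = alpha * v                    # Eq. (6b): shrink the corrected velocity
        x = x - v                        # Eq. (6a): step back along the path
    return np.clip(x, x_min, x_max)      # clamp any remaining violations

# Example: a particle that overshot the upper bound of [-10, 10].
print(correct_position(np.array([12.0]), np.array([5.0]), -10.0, 10.0))  # [9.3]
```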



3 Proposed Variant: Repulsive Self-adaptive Acceleration PSO (RSAPSO)

Our RSAPSO variant is inspired by other PSO variants that assign every particle its own velocity weights [15, 16]. These variants typically move the velocity weights of all particles toward the velocity weights of a certain particle that is selected based on a measure of superior performance [16].

For example, self-tuning APSO moves the velocity weights towards the settings of the particle that yielded the most updates of the global best position [15, 16]. Controlled APSO [19] adaptively changes the personal best weights $c_1^{(i)}(n)$ and the global best weights $c_2^{(i)}(n)$ based on the distance between the positions and the global best position. Inertia weight APSO [12] allows every particle its own inertia weight $w^{(i)}(n)$ that is changed using a function of the objective values and the global best value. Optimized PSO [20] uses multiple PSO sub-swarms, each having their own parameter settings, in an inner iteration to solve the original optimization problem. The parameter settings are then optimized in an outer iteration of PSO for a fixed number of iterations.

Inspired by the optimized PSO variant [20], we treat the problem of finding good velocity weights as an optimization problem. In RSAPSO every particle has its own velocity weights, i.e., its inertia weight $w^{(i)}$, personal best weight $c_1^{(i)}$, and global best weight $c_2^{(i)}$. A particular setting of the velocity weights is referred to as the position of the velocity weights. An objective function for the velocity weights is used to quantify how well the positions of the velocity weights perform for solving the overall optimization problem. Using the calculated objective values of the velocity weights, RSAPSO takes a step toward optimizing the velocity weights. The velocity weights are optimized in a fixed auxiliary search space.

Compared to optimized PSO [20], the RSAPSO approach of optimizing the velocity weights after every (outer) PSO iteration is more efficient, since only one additional PSO instance (for optimizing the velocity weights) is executed, and only for one (inner) iteration. An advantage of RSAPSO is that the velocity weights can adapt themselves to dynamic changes, e.g., different particle distributions at different iterations.

RSAPSO uses the following equation, with the notation used in Equation (1b), to update the velocities of the particles:

$$v^{(i)}(n+1) = w^{(i)}(n)\,v^{(i)}(n) + c_1^{(i)}(n)\,r_1^{(i)}(n)\,[x_p^{(i)}(n) - x^{(i)}(n)] + c_2^{(i)}(n)\,r_2^{(i)}(n)\,[x_g(n) - x^{(i)}(n)], \quad n = 0, 1, 2, \ldots, \bar{n}-1. \qquad (7a)$$

An auxiliary objective function is used to quantify the success of particles as a function of their velocity weights. There are several reliable and directly employable quantities that measure the success of particles. In particular, we use the improvement in the objective value of the particle [21], the number of updates of the global best position that the particle yielded [15, 16], and the number of updates of the personal best position that the particle yielded. We propose the following objective function for the velocity weights, selected based on good performance in empirical experiments:

$$\tilde{f}^{(i)}(n) = e^{(i)}(n)\left(1 + w_l\,u_l^{(i)}(n) + w_g\,u_g^{(i)}(n)\right), \quad n = 1, 2, \ldots, \bar{n}-1, \qquad (7b)$$

where $\tilde{f}^{(i)}(n)$ is the objective value of the velocity weights for particle $i$ at iteration $n$ (the tilde marks quantities belonging to the velocity weights), $e^{(i)}(n)$ is the normalized improvement described below, $u_l^{(i)}$ is the number of times particle $i$ updated its personal best position, $u_g^{(i)}$ is the number of times particle $i$ updated the global best position, $w_l$ is the local weight factor used to weigh the number of personal best updates $u_l^{(i)}$, and $w_g$ is the global weight factor used to weigh the number of global best updates $u_g^{(i)}$. The value of $w_g$ is usually set to a larger number than the value of $w_l$ because updates to the global best position are relatively more important. Equation (7b) is thus used to guide the evolution of the positions of the velocity weights towards optimal values. Alternative objective functions are possible, e.g., ones that use the normalized improvements $e^{(i)}(n)$, or the local and global best update counters individually. The normalized improvements $e^{(i)}(n)$ are calculated as follows, based on good performance in empirical experiments:

$$e^{(i)}(n) = \frac{\delta^{(i)}(n)}{\sigma(n)}, \qquad (7c)$$


where $\sigma(n)$ is the normalization sum (which has to be greater than zero), and $\delta^{(i)}(n)$ is the difference in the objective values calculated using:

$$\delta^{(i)}(n) = f^{(i)}(n) - f^{(i)}(n-1), \qquad (7d)$$

where $f^{(i)}$ is the objective value of particle $i$.

In practice, early iterations might yield large absolute values of $\delta^{(i)}$, whereas late iterations might only yield small absolute values of $\delta^{(i)}$. Therefore, we propose the following normalization to fairly account for the contribution of the velocity weights from late iterations:

$$\sigma(n) = \begin{cases} \sum_{i=1}^{n_p} -\delta^{(i)}(n), & \text{for } \delta^{(i)}(n) < 0, \\ 1, & \text{otherwise}. \end{cases} \qquad (7e)$$

In other words, the normalization sum $\sigma(n)$ makes objective values of the velocity weights comparable for different $n$. This normalization was chosen based on good performance in empirical experiments.
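Read as a minimization problem (an improvement makes $\delta^{(i)}(n)$ negative, so more negative objective values indicate better velocity weights), Equations (7b)-(7e) can be sketched as follows, vectorized over particles; the names and our reading of the case distinction in Equation (7e) are assumptions:

```python
import numpy as np

def weight_objective(f_now, f_prev, u_l, u_g, w_l=1.0, w_g=6.0):
    """Objective values of the velocity weights, Equations (7b)-(7e).

    f_now, f_prev: particle objective values at iterations n and n-1;
    u_l, u_g: per-particle counts of personal/global best updates;
    w_l = 1 and w_g = 6 follow Section 4.2.
    """
    delta = f_now - f_prev                                       # Eq. (7d)
    improved = delta < 0
    sigma = -delta[improved].sum() if improved.any() else 1.0    # Eq. (7e)
    e = delta / sigma                                            # Eq. (7c)
    return e * (1.0 + w_l * u_l + w_g * u_g)                     # Eq. (7b)
```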

The velocity weights are optimized using one step of PSO in an inner iteration, resulting in the following overall iteration to update the positions of the velocity weights:

$$\tilde{x}^{(i)}(n+1) = \tilde{x}^{(i)}(n) + \tilde{v}^{(i)}(n+1), \quad n = 1, 2, \ldots, \bar{n}-1, \qquad (7f)$$

$$\tilde{v}^{(i)}(n+1) = \tilde{w}(n)\,\tilde{v}^{(i)}(n) + \tilde{c}_1(n)\,\tilde{r}_1^{(i)}(n)\,[\tilde{x}_p^{(i)}(n) - \tilde{x}^{(i)}(n)] + \tilde{c}_2(n)\,\tilde{r}_2^{(i)}(n)\,[\tilde{x}_g(n) - \tilde{x}^{(i)}(n)], \quad n = 1, 2, \ldots, \bar{n}-1, \qquad (7g)$$

where $\tilde{x}^{(i)}(n)$ is the position of the velocity weights, $\tilde{v}^{(i)}(n)$ is the velocity of the velocity weights, $\tilde{x}_p^{(i)}(n)$ is the personal best position of the velocity weights, $\tilde{x}_g(n)$ is the global best position of the velocity weights, $\tilde{w}(n)$ is the inertia weight for optimizing the velocity weights, $\tilde{c}_1(n)$ is the personal best weight for optimizing the velocity weights, $\tilde{c}_2(n)$ is the global best weight for optimizing the velocity weights, and $\tilde{r}_1^{(i)}(n)$ and $\tilde{r}_2^{(i)}(n)$ are random vectors with components that are uniformly distributed between 0 and 1 for every particle $i$ and iteration $n$.

Equations (7f) and (7g) are used after Equations (1a) and (7a) have been used to update the positions of the particles, and the new objective values have been calculated. The first component of $\tilde{x}^{(i)}(n)$ is used as the inertia weight $w^{(i)}(n)$, the second component as the personal best weight $c_1^{(i)}(n)$, and the third component as the global best weight $c_2^{(i)}(n)$, as given in Equation (7a).
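One inner iteration of Equations (7f) and (7g) is an ordinary PSO step over three-dimensional weight vectors; a sketch follows (the inner-loop parameter defaults are illustrative, as the paper does not list values for them):

```python
import numpy as np

def inner_pso_step(xw, vw, xw_p, xw_g, w_t=0.7, c1_t=1.5, c2_t=1.5, rng=None):
    """One PSO step on the velocity-weight positions, Eqs. (7f)-(7g).

    xw has shape (n_particles, 3); its columns feed Equation (7a) as
    w^(i), c1^(i), and c2^(i).
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(xw.shape)
    r2 = rng.random(xw.shape)
    vw = w_t * vw + c1_t * r1 * (xw_p - xw) + c2_t * r2 * (xw_g - xw)  # Eq. (7g)
    xw = xw + vw                                                       # Eq. (7f)
    return xw, vw
```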

The proposed RSAPSO switches between phases based on the mean separation of the particles. If RSAPSO is in the attractive phase and converges, it switches to the repulsive phase once it has reached a small enough mean separation value. This can counter trapping in a local optimum. If RSAPSO is in the repulsive phase, it switches to the attractive phase once it has reached a large enough mean separation. Similarly, four-state APSO uses the mean separation to decide which of the four states it is in, as given in [22]. The attractive-repulsive PSO [18] switches between phases based on a diversity factor that is calculated similarly to the mean separation.
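The paper does not spell out the mean separation formula at this point; a common choice, consistent with the mean-distance measure of the four-state APSO [22], is the average Euclidean distance of each particle to all other particles. A sketch under that assumption:

```python
import numpy as np

def mean_separation(positions):
    """Per-particle mean separation s^(i)(n) and swarm mean s(n).

    Assumed here to be the mean Euclidean distance of particle i to all
    other particles; the paper does not state the exact formula.
    """
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))   # pairwise distance matrix
    n_p = len(positions)
    s_i = dists.sum(axis=1) / (n_p - 1)          # exclude the zero self-distance
    return s_i, float(s_i.mean())
```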

We propose the following objective function for the velocity weights, which adapts itself to the current phase:

$$\tilde{f}_a^{(i)}(n) = \begin{cases} \tilde{f}^{(i)}(n), & \text{if } a(n) = 1, \\ -s^{(i)}(n), & \text{if } a(n) = 2, \end{cases} \qquad (7h)$$

where $\tilde{f}_a^{(i)}(n)$ is the phase-adaptive objective value of the velocity weights, and $a(n)$ is the phase indicator. If RSAPSO is in the attractive phase, $a(n) = 1$, the objective value of the velocity weights is set to $\tilde{f}^{(i)}(n)$ as calculated in Equation (7b). If RSAPSO is in the repulsive phase, $a(n) = 2$, the objective value of the velocity weights is set to the negation of the mean separation $s^{(i)}(n)$. This objective function for the velocity weights was selected for RSAPSO since good performance of the velocity weights is indicated by $\tilde{f}^{(i)}(n)$ in the attractive phase and by $-s^{(i)}(n)$ in the repulsive phase. In particular, in the attractive phase we focus on convergence by rewarding good objective values of the velocity weights $\tilde{f}^{(i)}(n)$, and in the repulsive phase we focus on diversity by rewarding high mean separation values $s^{(i)}(n)$.

The attractive-repulsive PSO [18] switches to the repulsive phase if its diversity factor falls below an absolute lower threshold value and switches to the attractive phase if its diversity factor rises above an absolute upper threshold value. We use the same mechanism but replace the diversity factor with the mean separation. Specifically, we use the following equation to switch between the phases:


$$a(n+1) = \begin{cases} 1, & \text{if } a(n) = 2 \,\wedge\, s(n) > s_u(n), \\ 2, & \text{if } a(n) = 1 \,\wedge\, s(n) < s_l(n), \\ a(n), & \text{otherwise}, \end{cases} \qquad (7i)$$

where $s_l(n)$ is the mean separation absolute lower threshold, and $s_u(n)$ is the mean separation absolute upper threshold.

RSAPSO starts in the attractive phase $a(n) = 1$. If the mean separation $s(n)$ falls below the mean separation absolute lower threshold $s_l(n)$, RSAPSO changes from the attractive phase $a(n) = 1$ to the repulsive phase $a(n+1) = 2$. If the mean separation $s(n)$ rises above the mean separation absolute upper threshold $s_u(n)$, RSAPSO changes from the repulsive phase $a(n) = 2$ to the attractive phase $a(n+1) = 1$.

To the best of our knowledge, the adaptive change of the mean separation absolute lower threshold $s_l(n)$ and upper threshold $s_u(n)$ is novel. This concept allows for increased accuracy and convergence as the algorithm proceeds. Furthermore, it can be used if good values for the mean separation absolute lower and upper thresholds are not known. The thresholds $s_l(n)$ and $s_u(n)$ are adapted as follows:

$$s_l(n+1) = \begin{cases} s_l(n)/\hat{s}_l, & \text{if } a(n) = 2 \,\wedge\, s(n) > s_u(n), \\ s_l(n), & \text{otherwise}, \end{cases} \qquad (7j)$$

$$s_u(n+1) = \begin{cases} s_u(n)/\hat{s}_u, & \text{if } a(n) = 2 \,\wedge\, s(n) > s_u(n), \\ s_u(n), & \text{otherwise}, \end{cases} \qquad (7k)$$

where $\hat{s}_l$ is the mean separation absolute lower divisor and $\hat{s}_u$ is the mean separation absolute upper divisor. The lower threshold $s_l(n)$ is divided by $\hat{s}_l$, and the upper threshold $s_u(n)$ is divided by $\hat{s}_u$, if the algorithm switches from the repulsive phase to the attractive phase at iteration $n$. Both thresholds remain the same if the algorithm does not switch from the repulsive phase to the attractive phase; i.e., $s_l(n+1)$ and $s_u(n+1)$ are only changed after a full cycle through the attractive and repulsive states.
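The phase switch and the threshold adaptation of Equations (7i)-(7k) can be combined into one small function; a sketch follows (the divisor defaults are placeholders, since Section 4.2 does not explicitly name values for $\hat{s}_l$ and $\hat{s}_u$):

```python
def switch_phase(a, s, s_l, s_u, div_l=2.0, div_u=2.0):
    """Phase switch and threshold adaptation, Equations (7i)-(7k).

    a: phase (1 = attractive, 2 = repulsive); s: current mean separation;
    s_l, s_u: thresholds; div_l, div_u: divisors (illustrative defaults).
    """
    if a == 2 and s > s_u:                      # Eq. (7i): repulsive -> attractive
        return 1, s_l / div_l, s_u / div_u      # Eqs. (7j)-(7k): tighten thresholds
    if a == 1 and s < s_l:                      # Eq. (7i): attractive -> repulsive
        return 2, s_l, s_u
    return a, s_l, s_u                          # otherwise keep phase and thresholds
```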

Algorithm 1 outlines our RSAPSO variant. RSAPSO calculates the mean separation after the local and global best positions are updated; it requires the mean separation to decide whether a phase switch is needed. If so, the objective function for the velocity weights and the search space for the velocity weights are switched to their counterparts in the new phase. The search space for the velocity weights in the attractive phase must mainly yield positive velocity weights, and the search space for the velocity weights in the repulsive phase must mainly yield negative velocity weights. All velocity weights have to be reinitialized in the new search space for the velocity weights if a phase switch occurs. The personal best positions and values of the velocity weights and the global best position and value of the velocity weights are reset, since values discovered in the attractive phase cannot be used in the repulsive phase and vice versa. In case a switch from the repulsive to the attractive phase occurs, i.e., one phase cycle is finished, the mean separation absolute lower and upper thresholds are updated using Equations (7j) and (7k). If no phase switch occurs, RSAPSO follows the flow of optimizing the velocity weights; however, it uses Equation (7h) instead of Equation (7b) as the objective function for the velocity weights.


Algorithm 1. Description of RSAPSO.

initialize positions and velocities
initialize positions of velocity weights
calculate objective values
update local and global best positions and values
repeat
    update positions
    calculate objective values
    update local and global best positions and values
    calculate mean separation
    if phase changed then
        update mean separation absolute lower/upper thresholds if necessary
        switch objective function/search space for velocity weights
        reinitialize positions of velocity weights
        reset local/global best positions/values of velocity weights
    else
        calculate objective values of velocity weights
        update local/global best positions/values of velocity weights
        update positions of velocity weights
    end if
until maximum number of generations reached
report final results

4 Experiments and Results

4.1 Benchmark Problems

Twenty optimization benchmark problems are used to compare our RSAPSO algorithm with the chosen PSO variants. All the benchmark problems from the semi-continuous challenge [23] are used, including the Ackley, Alpine, Griewank, Parabola, Rosenbrock, and Tripod test problems. Some of the optimization problems from [24] have been selected based on their shapes to guarantee a diverse set of problems, including the Six-hump Camel Back, De Jong 5, Drop Wave, Easom, Goldstein–Price, Axis Parallel Hyper-ellipsoid, Michalewicz, and Shubert test problems [23]. We also use optimization problems from [25] to expand our benchmark set. These include the Generalized Penalized, Non-continuous Rastrigin, Sphere, Rastrigin, and Step test problems [25]. In addition, Schaffer's F6 test problem from [20] is used. For ease of comparison, we normalized the benchmark problems so that all have a global optimum of 0.0.

Table 1 lists the benchmark functions, their properties, the bounds on $x$, and the search space dimensions $D$.

Table 1. Description of Test Problems.

Function  Name                            [xmin, xmax]        D
F1        Ackley                          [-30, 30]           30
F2        Alpine                          [-10, 10]           10
F3        Six-hump Camel Back             [-2, 2]             2
F4        De Jong 5                       [-65.536, 65.536]   2
F5        Drop Wave                       [-5.12, 5.12]       2
F6        Easom                           [-100, 100]         2
F7        Generalized Penalized           [-50, 50]           30
F8        Griewank                        [-300, 300]         30
F9        Goldstein–Price                 [-2, 2]             2
F10       Axis Parallel Hyper-ellipsoid   [-5.12, 5.12]       100
F11       Michalewicz                     [0, π]              10
F12       Non-continuous Rastrigin        [-5.12, 5.12]       30
F13       Parabola                        [-20, 20]           200
F14       Rastrigin                       [-10, 10]           30
F15       Rosenbrock                      [-10, 10]           30
F16       Schaffer's F6                   [-100, 100]         2
F17       Shubert                         [-10, 10]           2
F18       Sphere                          [-100, 100]         100
F19       Step                            [-100, 100]         30
F20       Tripod                          [-100, 100]         2


4.2 Parameter Settings

The parameters are set to the following values:

– search space for inertia weights: [−0.5, 2.0];

– search space for personal best weights: [−1.0, 4.2];

– search space for global best weights: [−1.0, 4.2];

– $w_l = 1$;

– $w_g = 6$;

– $w_s = 0.9$;

– $w_e = 0.4$;

– $c_{1s} = 2.5$;

– $c_{1e} = 0.5$;

– $c_{2s} = 0.5$;

– $c_{2e} = 2.5$;

– percentage of particles selected for mutation of their velocity weights: 33;

– iterations before resetting best positions and velocity weights: 50;

– initialization space for inertia weights: [0.4, 0.9];

– initialization space for personal best weights: [0.5, 2.5];

– initialization space for global best weights: [0.5, 2.5];

– reinitialization space for inertia weights: [0.5, 0.8];

– reinitialization space for personal best weights: [0.6, 2.4];

– reinitialization space for global best weights: [0.6, 2.4];

– $\alpha = 0.5$;

– $m = 10$;

– $mu = 2.5$.

4.3 Experimental Setup

We compare the PSO variants in four experiments using four different numbers of function evaluations (FE), including initialization.

– The first experiment uses $n_p = 10$ particles and $\bar{n} = 100$ iterations, resulting in a total of 1,000 FE.

– The second experiment uses $n_p = 20$ particles and $\bar{n} = 500$ iterations, resulting in a total of 10,000 FE.

– The third experiment uses $n_p = 40$ particles and $\bar{n} = 2{,}500$ iterations, resulting in a total of 100,000 FE.

– The fourth experiment uses $n_p = 100$ particles and $\bar{n} = 10{,}000$ iterations, resulting in a total of 1,000,000 FE.

If the FE are the dominant expense, all the variants considered require approximately the same CPU time for a given number of FE. All calculations are performed in double precision. The reported results are the best, the mean, and the standard deviation over the 30 runs performed.


4.4 Results

Analyzing the results, shown in Tables 2 to 5 (best mean values are given in bold), reveals that RSAPSO improves with increasing numbers of FE, scoring best compared to the other variants for 100,000 and 1,000,000 FE. Figure 1 shows the results counting the number of wins, i.e., the number of times an algorithm scored best in terms of best mean value on the benchmark functions.

For 1,000 FE (Table 2), DWPSO, GCPSO, and RSAPSO score best on 2 benchmark functions, RPSO scores best on only 1 benchmark function, and TVACPSO outperforms the other algorithms, scoring best on 12 benchmark functions.

For 10,000 FE (Table 3), TVACPSO and RPSO score best on 5 benchmark functions, DWPSO and GCPSO score best on 7 benchmark functions, and RSAPSO scores best on 8 benchmark functions.

The results for 100,000 FE, shown in Table 4, reveal that TVACPSO and RPSO score best on 7 benchmark functions, DWPSO and GCPSO score best on 8 benchmark functions, and RSAPSO scores best on 10 benchmark functions.

Figure 1. Number of wins versus number of function evaluations.

For 1,000,000 FE (Table 5), DWPSO, TVACPSO, and GCPSO have the best mean value for 9 benchmark functions, RPSO for 11 benchmark functions, and RSAPSO scores best on 14 benchmark functions.

For 1,000,000 FE, the optimum value of 0.0 was achieved by all PSO variants on 7 benchmark functions, for 100,000 FE only on 6 benchmark functions, and for 10,000 FE only on 2 benchmark functions. This demonstrates that with increasing numbers of FE more benchmark functions are solved optimally.


In terms of the average values achieved, 0.0 was reached by the PSO variants 41 times for 1,000,000 FE, 28 times for 100,000 FE, and 16 times for 10,000 FE.

A Friedman ranking test [26] was applied to the average results for the four different FE settings. Table 6 shows the average ranks obtained by each PSO variant. The results for all four FE settings are not statistically significant at the 5% significance level; the post hoc procedures of Bonferroni-Dunn and Hochberg confirmed this. However, as the previous discussion outlined, our approach has the best rank for 1,000,000 FE, even though the results are not statistically significant.

Table 6. Average rankings of the algorithms (Friedman).

Algorithm   1,000 FE   10,000 FE   100,000 FE   1,000,000 FE
DWPSO       3.275      2.85        2.65         3.275
TVACPSO     2          2.7         3.15         3.475
GCPSO       2.975      2.7         2.75         2.8
RPSO        3.15       3.375       3.55         2.8
RSAPSO      3.6        3.375       2.9          2.65

Figure 2 shows the function value for 1,000, 10,000, 100,000, and 1,000,000 FE for benchmark functions F9, F11, F12, and F14. It confirms once more that our proposed RSAPSO performs poorly for 1,000 FE but shows improved values with increasing numbers of FE, scoring best for 100,000 and 1,000,000 FE.

Figure 2. Function value versus FE for different benchmark functions: (a) F9, (b) F11, (c) F12, (d) F14.

Overall, the experiments have shown that for increasing FE our proposed RSAPSO variant scores better than the other PSO variants. Looking at the particular benchmark functions that are multimodal, which are F4, F5, F7, F12, F16, and F17, RSAPSO as expected scored best on these functions, with the exception of F17. As mentioned in the literature [18], RPSO has been shown to work particularly well on multimodal functions, which is most likely due to the switching between attractive and repulsive phases. This allows the algorithm to adopt good velocity values, and together with the repulsive and attractive phases it helps to move the particles towards better solutions. The results on the benchmark functions confirm this, with the implemented RPSO as well as our proposed RSAPSO variant showing the better results.

5 Conclusion

We proposed a repulsive and adaptive PSO variant, named RSAPSO, for which every particle has its own velocity weights, i.e., inertia weight, personal best weight, and global best weight. An objective function for the velocity weights is used to measure the suitability of the velocity weights for solving the overall optimization problem. Based on the calculated objective values of the velocity weights, RSAPSO is able to improve the optimization process. In particular, the RSAPSO variant adapts the velocity weights before it optimizes the solution of the problem (e.g., a benchmark function). The advantage of RSAPSO is that the velocity weights adapt themselves to dynamic changes, e.g., different particle distributions at different iterations.





We evaluated our RSAPSO algorithm on twenty benchmark functions and compared it with four PSO variants, namely decreasing weight PSO, time-varying acceleration coefficient PSO, guaranteed convergence PSO, and attractive and repulsive PSO. Our RSAPSO variant achieves better results than the other variants for higher numbers of FE, in particular for 1,000,000 FE. A possible reason for RSAPSO's poorer performance for 1,000 and 10,000 FE is that the optimization of the velocity weights takes several iterations to have a beneficial effect, since more knowledge of the optimization problem has been acquired by then. In addition, RSAPSO has been shown to work particularly well on multimodal functions due to the incorporated attractive and repulsive phases for the optimization of the velocity weights.

Since RSAPSO has longer running times depending on the difficulty and the dimensionality of the problem, future work will parallelize the algorithm using Hadoop's MapReduce methodology in order to speed up the optimization process. Furthermore, we would like to extend RSAPSO to integrate the idea of moving bound behavior, which would allow expert knowledge about the search space for the velocity weights to be incorporated.



Table 2. Performance for benchmark F1 to F20 for 1,000 FE (Best, Mean, and Std for DWPSO, TVACPSO, GCPSO, RPSO, and RSAPSO).


Table 3. Performance for benchmark F1 to F20 for 10,000 FE (Best, Mean, and Std for DWPSO, TVACPSO, GCPSO, RPSO, and RSAPSO).

Table 4. Performance for benchmark F1 to F20 for 100,000 FE (Best, Mean, and Std for DWPSO, TVACPSO, GCPSO, RPSO, and RSAPSO).

Table 5. Performance for benchmark F1 to F20 for 1,000,000 FE (Best, Mean, and Std for DWPSO, TVACPSO, GCPSO, RPSO, and RSAPSO).


References

[1] J. Kennedy and R. Eberhart, Particle swarm optimization, Proceedings of IEEE International Conference on Neural Networks, 4:1942-1948, 1995.

[2] A. Engelbrecht, Computational Intelligence: An Introduction, 2nd Edition, Wiley, 2007.

[3] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation, 8(3):240-255, 2004.

[4] X. Li, H. Fu, and C. Zhang, A self-adaptive particle swarm optimization algorithm, Proceedings of the 2008 International Conference on Computer Science and Software Engineering, 186-189, 2008.

[5] I. C. Trelea, The particle swarm optimization algorithm: convergence analysis and parameter selection, Information Processing Letters, 85(6):317-325, 2003.

[6] F. Van den Bergh and A. P. Engelbrecht, A study of particle swarm optimization particle trajectories, Information Sciences, 176(8):937-971, 2006.

[7] C. K. Monson and K. D. Seppi, Exposing origin-seeking bias in PSO, Proceedings of GECCO'05, 241-248, 2005.

[8] F. Van den Bergh and A. P. Engelbrecht, A new locally convergent particle swarm optimiser, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 3:94-99, 2002.

[9] F. Schutte and A. A. Groenwold, A study of global optimization using particle swarms, Journal of Global Optimization, 31:93-108, 2005.

[10] A. Chatterjee and P. Siarry, Nonlinear inertial weight variation for dynamic adaptation in particle swarm optimization, Journal of Computer Operation Research, 33(3):859-871, 2004.

[11] X. Yang, J. Yuan, J. Yuan, and H. Mao, A modified particle swarm optimizer with dynamic adaptation, Applied Mathematics and Computation, 189(2):1205-1213, 2007.

[12] J. Zhu, J. Zhao, and X. Li, A new adaptive particle swarm optimization algorithm, International Workshop on Modelling, Simulation and Optimization, 456-458, 2008.

[13] Y. Bo, Z. Ding-Xue, and L. Rui-Quan, A modified particle swarm optimization algorithm with dynamic adaptive, 2007 Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing, 2:346-349, 2007.

[14] T. Yamaguchi and K. Yasuda, Adaptive particle swarm optimization; self-coordinating mechanism with updating information, IEEE International Conference on Systems, Man and Cybernetics, SMC '06, 3:2303-2308, 2006.

[15] T. Yamaguchi, N. Iwasaki, and K. Yasuda, Adaptive particle swarm optimization using information about global best, IEEE Transactions on Electronics, Information and Systems, 126:270-276, 2006.

[16] K. Yasuda, K. Yazawa, and M. Motoki, Particle swarm optimization with parameter self-adjusting mechanism, IEEE Transactions on Electrical and Electronic Engineering, 5(2):256-257, 2010.

[17] S. A. Ludwig, Towards a repulsive and adaptive particle swarm optimization algorithm, Proceedings of the Genetic and Evolutionary Computation Conference (ACM GECCO), Amsterdam, Netherlands, July 2013 (short paper).

[18] J. Riget and J. S. Vesterstrøm, A diversity-guided particle swarm optimizer - the ARPSO, EVALife Technical Report no. 2002-2, 2002.


[19] A. Ide and K. Yasuda, A basic study of adaptive particle swarm optimization, Denki Gakkai Ronbunshi, Electrical Engineering in Japan, 151(3):41-49, 2005.

[20] M. Meissner, M. Schmuker, and G. Schneider, Optimized particle swarm optimization (OPSO) and its application to artificial neural network training, BMC Bioinformatics, 7(1):125, 2006.

[21] X. Cai, Z. Cui, J. Zeng, and Y. Tan, Self-learning particle swarm optimization based on environmental feedback, Innovative Computing, Information and Control, ICICIC '07, page 570, 2007.

[22] Z. H. Zhan and J. Zhang, Proceedings of the 6th International Conference on Ant Colony Optimization and Swarm Intelligence, 227-234, 2008.

[23] M. Clerc, Semi-continuous challenge, http://clerc.maurice.free.fr/pso/semi-continuous challenge/, April 2004.

[24] M. Molga and C. Smutnicki, Test functions for optimization needs, http://zsd.iiar.pwr.wroc.pl/, 2005.

[25] Z. H. Zhan, J. Zhang, Y. Li, and H. S. H. Chung, Adaptive particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(6):1362-1381, 2009.

[26] M. Friedman, The use of ranks to avoid the assumption of normality implicit in the analysis of variance, Journal of the American Statistical Association, 32(200):675-701, 1937.