Hybrid learning particle swarm optimizer with genetic disturbance
Yanmin Liu a,b,⁎, Ben Niu c,d, Yuanfeng Luo a
a School of Mathematics and Computer Science, Zunyi Normal College, Zunyi 563002, China
b School of Economics and Management, Tongji University, Shanghai 200092, China
c College of Management, Shenzhen University, Shenzhen 518060, China
d Hefei Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China
Article info
Article history: Received 10 November 2013; Received in revised form 3 March 2014; Accepted 8 March 2014; Available online 4 November 2014
Keywords: Particle swarm optimizer; Hybrid learning; Genetic disturbance
Abstract
Particle swarm optimizer (PSO) is a population-based stochastic optimization technique which has already been successfully applied to engineering and other scientific fields. This paper presents a modification of PSO (hybrid learning PSO with genetic disturbance, HLPSO-GD for short) intended to combat the problem of premature convergence observed in many PSO variants. In HLPSO-GD, the swarm uses a hybrid learning strategy whereby all other particles' previous best information is adopted to update a particle's position. Additionally, to make better use of the excellent particles' information, a global external archive is introduced to store the best performing particle in the whole swarm. Furthermore, genetic disturbance (simulated binary crossover and polynomial mutation) is used to cross the corresponding particles in the external archive and generate new individuals, which improves the swarm's ability to escape from local optima. Experiments were conducted on a set of traditional multimodal test functions and the CEC 2013 benchmark functions. The results demonstrate the good performance of HLPSO-GD in solving multimodal problems when compared with other PSO variants.
© 2014 Elsevier B.V. All rights reserved.
1. Introduction
Particle swarm optimizer (PSO), which originated from the simulation of human and social animal behavior [1], has been successfully applied to engineering and other scientific fields. It has proven to be a powerful competitor to other evolutionary algorithms, such as genetic algorithms [2]. In its running mechanism, PSO simulates the social behavior of individuals, and each particle is "evolved" through cooperation and competition among the individuals over generations. In the swarm, the particles evaluate their positions relative to an objective function at each iteration, share the memories of their own flying experiences and the best experience of the swarm, and then use those memories to adjust their own velocities and positions. In the past decade, many researchers have proposed different variants of PSO, including parameter improvements, topology designs, hybrid strategies, and so on [3–29].
Most stochastic optimization algorithms (like PSO and GA) suffer from the 'curse of dimensionality', which implies that algorithm performance deteriorates as the dimensionality of the search space increases. Usually, a basic stochastic global search algorithm can generate a sample from a uniform distribution that covers the entire search space [18]. Given this idea, and combined with our previous work in [33,34], this paper introduces a variant of PSO (hybrid learning PSO with genetic disturbance, HLPSO-GD for short). In HLPSO-GD, a hybrid learning strategy is introduced, in which all other particles' historical best information is used to update a particle's velocity, and genetic disturbance (simulated binary crossover and polynomial mutation) is used to cross the corresponding particles in the external archive and generate new particles, which improves the swarm's ability to escape from local optima.
Additionally, in order to increase the information exchange among all particles, the neighborhood topology is not fixed but dynamically constructed. The strategies used in HLPSO-GD preserve the swarm's diversity against premature convergence, especially for complex multimodal problems. The experimental results demonstrate that the proposed HLPSO-GD is able to escape from local optima to some extent when solving complex multimodal problems.
The organization of this paper is as follows: Section 2 gives an overview of the basic version of PSO and some PSO variants; Section 3 introduces the proposed HLPSO-GD; Section 4 presents the experiments to be conducted; and the experimental results and conclusions are presented in Section 5.
2. Particle swarm optimizer
2.1. Basic particle swarm optimizer (BPSO)
PSO is a population-based optimization algorithm that starts with an initial population of randomly generated particles. Each
Neurocomputing
http://dx.doi.org/10.1016/j.neucom.2014.03.081 · 0925-2312/© 2014 Elsevier B.V. All rights reserved.
⁎ Corresponding author at: School of Mathematics and Computer Science, Zunyi Normal College, Zunyi 563002, China.
E-mail address: [email protected] (Y. Liu).
Neurocomputing 151 (2015) 1237–1247
particle is endowed with a historical memory that enables it to remember the best position it has found so far. Each individual is attracted by its own best experience and its neighbors' best experiences (the best positions found by the neighbors) as follows:
v_i(t) = w·v_i(t-1) + φ1·r1·(p_i - x_i(t-1)) + φ2·r2·(p_g - x_i(t-1))   (2.1)

x_i(t) = x_i(t-1) + v_i(t)   (2.2)
where v_i(t) is the velocity of the ith particle, x_i(t) is the position of the ith particle, p_i is the best previous position of the ith particle, and p_g is the global best position found by all particles so far. w is the inertia weight used to balance the local and global search abilities of a particle. φ1 and φ2 are two learning factors called acceleration coefficients, and r1 and r2 are two uniformly distributed random variables in the range (0, 1). Fig. 1 gives the pseudo code of the basic PSO.
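As a concrete illustration, the update rules (2.1) and (2.2) can be sketched for a single particle as follows. This is a minimal sketch, not the authors' code: the function name `bpso_step` is ours, drawing r1 and r2 per dimension is our convention (the equations also admit a per-particle reading), and the default coefficients w = 0.729 and φ1 = φ2 = 1.492 are the values used later in Section 4.1.

```python
import random

def bpso_step(x, v, pbest, gbest, w=0.729, phi1=1.492, phi2=1.492):
    """One BPSO update for a single particle, per Eqs. (2.1) and (2.2).

    x, v  : current position and velocity (lists of floats)
    pbest : this particle's best previous position p_i
    gbest : the swarm's best position p_g
    """
    # Eq. (2.1): inertia term plus cognitive and social attraction,
    # with fresh uniform random numbers r1, r2 per dimension.
    new_v = [w * vd
             + phi1 * random.random() * (pb - xd)
             + phi2 * random.random() * (gb - xd)
             for vd, xd, pb, gb in zip(v, x, pbest, gbest)]
    # Eq. (2.2): move the particle by its new velocity.
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v
```

With φ1 = φ2 = 0 and w = 1 the update degenerates to pure ballistic motion, which makes the two-equation structure easy to check.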
The basic PSO can be classified into two main types: the local version PSO model (LPSO) and the global version PSO model (GPSO). The difference between the two models lies in how the neighborhood is defined for each particle. In GPSO, a particle's neighborhood consists of all particles in the whole swarm except itself, which ensures information sharing among all particles (in graph-theoretic terms, the neighborhood topology is the all type, where every vertex is connected to every other). In LPSO, the neighborhood of a particle is defined by several fixed particles, as in the ring, four clusters, square and pyramid types. The two models produce different performances. Clerc and Kennedy [3] discussed their conclusions: GPSO has a faster convergence speed but also a higher probability of being trapped in local optima than LPSO; conversely, LPSO is less vulnerable to the attraction of local optima but has a slower convergence speed than GPSO. Given the relationship between GPSO and LPSO and the character of hybrid learning, GPSO is adopted in the proposed HLPSO-GD, and the experimental results demonstrate that it is more reliable on many test problems.
2.2. Some PSO variants
Due to its simplicity and effectiveness, PSO has been successfully applied to many science and engineering fields, and some improved variants have been proposed in the literature. The existing literature mainly discusses improvements of PSO in four aspects: population topology [4–8], diversity maintenance [9–14], particle learning strategy [15–20] and application of other strategies [21–29]. Some PSO variants are briefly reviewed hereinafter.
(1) Neighborhood topology: The topology type has a significant effect on PSO performance, as it determines how the particles in the swarm communicate and share important information with each other. Depending on how the population topology is constructed, two types arise: static and dynamic topologies. Kennedy et al. [5] tested several social-network topologies on five test functions. After that, Mendes et al. proposed a fully informed PSO (FIPS) [8], where a particle uses a stochastic average of the pbests (pbest: personal best position of a given particle so far) of all of its neighbors, instead of its own pbest and the gbest (gbest: position of the best particle of the entire swarm), when adjusting its flight direction. To make better use of the swarm information, Liang et al. [6] proposed a comprehensive learning PSO (CLPSO) for multimodal problems, where a particle uses different pbests to update its velocity and can potentially learn from another member for each dimension. In a recent study, Nai et al. [7] proposed an enhanced particle swarm optimizer incorporating a weighted particle (EPSOWP) to improve evolutionary performance. Apart from static topologies, Suganthan [4] proposed a dynamic topology PSO, where the search begins with LPSO and the neighborhood is gradually enlarged until GPSO is reached.
(2) Diversity maintenance: In [13], the author stated that a lack of population diversity leads PSO to premature convergence. Therefore, several diversity maintenance methods have been introduced into PSO to improve the swarm's ability to escape local optima. For example, in [9] the author manages diversity by preventing too many particles from crowding together in the search space. In [11], the author introduced a niching PSO where a cognitive-only PSO is incorporated to ensure convergence. In [12], the author proposed a speciation-based PSO where the population size is dynamically adjusted by constructing an ordered list of particles. In [10], the author proposed a modified PSO with a simulated annealing (SA) technique, which has a better ability to escape from a local optimum. In recent work, Yang et al. [14] proposed a clustering PSO where a hierarchical clustering method is used to produce multi-swarms. In [25], the author adopted grammatical differential evolution to improve the swarm diversity and conquer premature convergence, and the authors of [26] used Grammatical Swarm for the same purpose.
(3) Particle learning strategy: In [15], the author proposed the fitness-distance-ratio PSO (FDR-PSO), where a particle selects another particle's pbest which has a higher fitness and is the nearest to the particle being updated. In [16], the author proposed a randomly generated directed graph to define the neighborhood topology, adopting the 'random edge migration' and 'neighborhood re-structuring' methods. Janson et al. [17] also proposed a new learning strategy in which a hierarchical structure is arranged. In [18], the author proposed a clubs PSO (C-PSO) where club memberships are dynamically changed to avoid premature convergence. Recently, in [19] the author proposed DMS-PSO, where the swarm is divided into a number of sub-swarms, and each sub-swarm is regrouped frequently using various information exchanges in the whole swarm. Apart from sub-swarm construction, van den Bergh et al. [20] proposed a cooperative particle swarm optimizer (CPSO-H) where the search processes are integrated by a global swarm. Additionally, Omran et al. [27] and Han et al. [28] introduced opposition-based learning to enhance the performance of particle swarm optimization.
(4) Application of other strategies: incorporation with other evolutionary algorithms is a common method to improve PSO
Fig. 1. The pseudo code of BPSO.
performance. The first hybrid PSO was developed in [21], where a selection scheme is introduced to identify the learning exemplar. In [22], the author also proposed a hybrid PSO by incorporating a recombination operator and dynamic linkage discovery. Similar to [22], Wei et al. [23] modified fast evolutionary programming by replacing the Cauchy mutation. Apart from hybrid strategies, Zhan et al. [24] proposed an orthogonal strategy which can guide particles to fly in better directions by constructing a more promising and efficient exemplar.
3. Hybrid learning PSO with genetic disturbance
3.1. Dynamic neighborhood topology and gbest external archive
In the global neighborhood topology, information is exchanged faster than in the local neighborhood topology. Consequently, the choice of neighborhood topology determines how quickly diversity is lost in the swarm: in the global topology, information is transferred quickly, but the swarm diversity is also lost rapidly, whereas a small-size topology preserves the swarm diversity for a longer time.
In addition, at the early stage of the search, obtaining the global best position requires exploring as many positions as possible, so LPSO is chosen in this stage. The later search process demands exploitation of the best position found so far, so GPSO is chosen to meet this requirement. Thus, in order to balance search efficiency and diversity, a self-adaptive dynamic neighborhood topology is proposed, in which the neighborhood of each particle is chosen according to its closeness degree to the other particles. To take full advantage of each particle's neighborhood information, the topology is not rebuilt until the pbest of the current particle ceases improving for a certain number of generations (set to 6 in our approach). The pseudo code of the neighborhood choice of each particle is shown in Algorithm 1.
% e_distance() = function computing the Euclidean distance between any two points
(1) Function choice_neighborhood
(2)   for each particle (i = 1:ps)            % ps = population size
(3)     for each particle (j = 1:ps)
(4)       L(i, j) = e_distance(i, j);         % L(i,:) = the ith row of matrix L
(5)     next j
(6)     m = mean(L(i,:));                     % mean(v) = return the mean value of array v
(7)     Index = L(i,:);
(8)     for z = 1:ps
(9)       if Index(z) < m
(10)        neighbor(i) = z                   % neighbor(i) = the neighborhood of particle i
(11)      end if
(12)    next z
(13)  next i
(14) End Function

Algorithm 1. The neighborhood choice of each particle.
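Algorithm 1 can be sketched in Python as follows. This is an illustrative reading, not the authors' code: we interpret the below-mean test as collecting every particle j with L(i, j) < m into particle i's neighborhood, and the function name `choose_neighborhood` is ours.

```python
import math

def choose_neighborhood(positions):
    """For each particle i, pick as neighbours all particles whose
    Euclidean distance to i is below i's mean distance (Algorithm 1)."""
    ps = len(positions)  # ps = population size

    def e_distance(a, b):
        # Euclidean distance between two points
        return math.sqrt(sum((u - w) ** 2 for u, w in zip(a, b)))

    neighbors = []
    for i in range(ps):
        dists = [e_distance(positions[i], positions[j]) for j in range(ps)]
        m = sum(dists) / ps  # mean distance from particle i
        neighbors.append([j for j in range(ps) if dists[j] < m])
    return neighbors
```

Note that, as in the pseudo code, a particle's zero distance to itself places it inside its own neighborhood.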
Furthermore, in order to make the best use of the gbest information, an external archive is adopted to keep a historical record of the gbests obtained during the whole search process. Initially, this archive is empty; then, as the evolution progresses, the new gbest enters the archive at each generation. The gbests in the external archive undergo simulated binary crossover to improve the particles' ability to escape from local optima (see Section 3.2). However, since the size of the archive would otherwise grow quickly as the iterations elapse, it is restricted to a pre-specified value, the size of which is set according to real-world needs. In our proposed algorithm, the size is set to half of the iteration number.
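One possible realization of this bounded archive, assuming the stated cap of half the iteration number and a drop-oldest eviction policy (the paper does not specify which entries are discarded once the cap is reached); `make_archive` is our name:

```python
from collections import deque

def make_archive(max_iter):
    """gbest external archive capped at half the iteration number (Section 3.1)."""
    return deque(maxlen=max_iter // 2)

archive = make_archive(200)      # capacity 100 for a 200-iteration run
for gen in range(150):
    gbest = [float(gen)]         # stand-in for the gbest found at this generation
    archive.append(gbest)        # oldest entry is evicted automatically when full
```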
3.2. Disturbance
3.2.1. Polynomial mutation operator (PMO)

When a swarm stagnates, it cannot generate new solutions to guide itself out of this state; consequently, the entire swarm gets trapped in a local minimum. Since all the particles are attracted by the global best individual (gbest), it is possible to lead the swarm to escape from its current location by mutating a particle, if the mutated particle becomes the new global best one [30]. However, every coin has two sides: the mutation might also lead a particle far away from the global optimum. Here, instead of the usual mutation strategy (i.e., a particle mutates with a certain probability), the mutation operator is invoked when a particle's pbest ceases improving for a number of generations (4 in our experiments).
Here, the PMO [29,31] is employed, described by the following formulas:

x_i(t+1) = x_i(t) + (x_i^U(t) - x_i^L(t))·δ_i   (3.1)

where x_i^U(t) and x_i^L(t) are the upper and lower bounds of x_i(t), the position of the current particle i at iteration t, and δ_i is a small variation calculated from a polynomial distribution by (3.2) and (3.3):

δ_k = (2·r_k)^(1/(η_m+1)) - 1,          if r_k < 0.5   (3.2)
δ_k = 1 - [2·(1 - r_k)]^(1/(η_m+1)),    if r_k ≥ 0.5   (3.3)

where r_k is a uniformly distributed random number in the range [0,1], and η_m is the mutation distribution index, here set equal to the population size.
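A sketch of the PMO per (3.1)–(3.3); illustrative only: the name `polynomial_mutation` and the per-coordinate draw of r_k are our assumptions, and η_m would be set to the population size as stated above.

```python
import random

def polynomial_mutation(x, lower, upper, eta_m):
    """Perturb each coordinate of x by a polynomially distributed step,
    per Eqs. (3.1)-(3.3)."""
    y = []
    for xd, lo, hi in zip(x, lower, upper):
        r = random.random()  # r_k ~ U[0, 1]
        if r < 0.5:
            delta = (2.0 * r) ** (1.0 / (eta_m + 1.0)) - 1.0          # Eq. (3.2), in (-1, 0)
        else:
            delta = 1.0 - (2.0 * (1.0 - r)) ** (1.0 / (eta_m + 1.0))  # Eq. (3.3), in [0, 1)
        y.append(xd + (hi - lo) * delta)                              # Eq. (3.1)
    return y
```

Since δ lies in (-1, 1), the mutant always stays within one box-width of the original coordinate.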
3.2.2. Simulated binary crossover (SBC)

As mentioned above, once a swarm stagnates, the whole swarm is trapped in a local optimum. Thus, to strengthen the ability to escape from local optima, a bold and tempting idea is proposed: the simulated binary crossover (SBC) [32] operator is applied to the best performing particles (gbest) in the external archive. This strategy exchanges the corresponding components of different gbests according to the principle defined by SBC. The advantage of applying SBC has been shown in [32], and its formula is the following:
x_{1,k} = ½·[(1 - β_k)·x_{r,1,k} + (1 + β_k)·x_{r,2,k}]
x_{2,k} = ½·[(1 + β_k)·x_{r,1,k} + (1 - β_k)·x_{r,2,k}]   (3.4)

where x_{r,i,k} is a randomly selected particle in the external archive, x_{i,k} represents the ith newly generated particle after the crossover, and β_k (≥ 0) is generated by (3.5):

β(u) = (2u)^(1/(η_c+1)),           if u < 0.5
β(u) = [2(1 - u)]^(-1/(η_c+1)),    if u ≥ 0.5   (3.5)

where u is a uniformly distributed random number in the range [0,1], and η_c (the distribution index for crossover), which determines how well spread the newly generated particles will be, is set equal to the population size.
The whole process of SBC is presented below (N denotes the size of the current external archive):

Step 1: Randomly select two gbest from the current external archive.
Step 2: Compute the β value in terms of (3.5).
Step 3: Generate two new particles in terms of (3.4).
Step 4: Repeat Steps 1–3.
Step 5: Stop when the number of newly generated particles equals N.
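Steps 1–3 can be sketched as follows. This is a hedged sketch: drawing a fresh u per dimension is our choice (the paper does not state whether u is drawn once per pair or per dimension), and the function name is ours.

```python
import random

def simulated_binary_crossover(g1, g2, eta_c):
    """Recombine two archived gbest vectors per Eqs. (3.4) and (3.5)."""
    c1, c2 = [], []
    for a, b in zip(g1, g2):
        u = random.random()  # u ~ U[0, 1]
        if u < 0.5:
            beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))           # Eq. (3.5), u < 0.5
        else:
            beta = (2.0 * (1.0 - u)) ** (-1.0 / (eta_c + 1.0))  # Eq. (3.5), u >= 0.5
        c1.append(0.5 * ((1.0 - beta) * a + (1.0 + beta) * b))  # Eq. (3.4)
        c2.append(0.5 * ((1.0 + beta) * a + (1.0 - beta) * b))
    return c1, c2
```

A useful sanity check is that (3.4) preserves the parents' mean in every dimension: c1 + c2 = g1 + g2.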
3.2.3. The role of simulated binary crossover and PMO

To test the roles of the simulated binary crossover and polynomial mutation in HLPSO-GD, four PSO algorithms were run on five different ten-dimensional test functions: PSO with dynamic neighborhood (DPSO), PSO with dynamic neighborhood and PMO (PDPSO), PSO with dynamic neighborhood and SBC (SDPSO), and PSO with dynamic neighborhood, PMO and SBC (HLPSO-GD). Each test function is run 30 times, and at each run the maximum number of fitness evaluations (FEs) is set to 3×10^4. When analysing the experimental data, the raw results of the four algorithms cannot be combined directly across different functions. Thus, Mendes's method [8] is used to process the raw results: they are standardized to the same scale as follows:
X = (x_ij - μ_ij) / σ_ij   (3.6)
where x_ij, μ_ij and σ_ij denote the trial result, mean, and standard deviation on the ith test function under the jth algorithm, respectively. Note that the index j ranges over the above four algorithms in this experiment.
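Eq. (3.6) is an ordinary z-score; as a minimal sketch (function name ours):

```python
def standardize(trials, mean, std):
    """Z-score a list of trial results per Eq. (3.6): X = (x - mu) / sigma."""
    return [(x - mean) / std for x in trials]
```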
As all test functions are minimization problems, the smaller the standardized value, the better the performance. As shown in Fig. 2, the diversity of HLPSO-GD is clearly increased by the introduction of the dynamic neighborhood topology, crossover and mutation, which also gives HLPSO-GD the best standardized value compared with the other algorithms. PDPSO and SDPSO follow closely, while DPSO produces the worst result. Hence, we can conclude that the dynamic neighborhood topology and the crossover and mutation operators improve the swarm's ability to escape from local optima.
3.2.4. Hybrid learning strategy

Clerc [3] indicated that a constriction factor χ improves the convergence of PSO (CF-PSO), in which the velocity and position of the ith particle are updated by (3.7):

v_i(t) = χ{v_i(t-1) + φ1·r1·(p_i - x_i(t-1)) + φ2·r2·(p_g - x_i(t-1))}
x_i(t) = x_i(t-1) + v_i(t)   (3.7)

where χ = 2 / |2 - φ - √(φ² - 4φ)| and φ = φ1 + φ2, φ > 4.
In view of the merit of CF-PSO, the constriction factor χ is incorporated into our proposed hybrid learning PSO with genetic disturbance.
Additionally, in [6,20] the authors concluded that a particle which learns from all dimensions of the best performing particle in the swarm will produce the 'two steps forward, one step back' phenomenon, which degrades the efficiency of the algorithm and may even result in premature convergence. Therefore, a strategy that makes the best use of particle information is necessary. Based on our previously proposed comprehensive learning strategy [33,34], a constriction factor balancing the information of each dimension of a particle is introduced (here called the hybrid learning strategy). Namely, in HLPSO-GD, all particles' historical best information and the best performing particle's information in the swarm are used to update a particle's velocity, and a constriction factor is used to distribute the force of each learning part. The following updating equations of the velocity and position are employed in our algorithm:
v_i(t+1) = χ{v_i(t) + φ1·r1·(p_i(t) - x_i(t)) + φ2·r2·(p_g(t) - x_i(t)) + φ3·r3·(p_bin(i)(t) - x_i(t))}
x_i(t+1) = x_i(t) + v_i(t+1)   (3.8)

p_bin(i)d = arg max_{j ∈ neighbor_i} { (Fitness(p_j) - Fitness(p_i)) / |p_jd - x_jd| },  d ∈ {1,2,…,n}, i = 1,2,…,ps   (3.9)

where ps is the population size; neighbor_i represents the neighborhood of particle i; v_i(t+1) = (v_i1, v_i2, …, v_in) represents the velocity of the ith particle; x_i(t+1) = (x_i1, x_i2, …, x_in) represents the position of the ith particle at iteration t+1; p_g(t) = (p_g1, p_g2, …, p_gn) is the best position discovered by the whole population; p_bin(i)d denotes the corresponding dimension of particle i's p_bin(i); φ1, φ2 and φ3 denote the acceleration coefficients; χ denotes the constriction factor; r1, r2 and r3 are uniformly distributed random numbers in the range [0,1]; p_j = (p_j1, p_j2, …, p_jn), j ∈ neighbor_i, is the pbest of any member of the neighborhood of particle i; d indexes the particle's dimension; |·| denotes the absolute value; Fitness(p) represents the corresponding fitness value of an array p; and arg{v} is the function that finds the index of element v. To guarantee that a particle flies within the search space, its velocity on each dimension is limited to v_max, a constant specified by the user: if |v_i(t)| exceeds v_max, the ith particle's velocity is assigned sign(v_i(t))·v_max. Regarding the value rule for φ1, φ2 and φ3, the analysis process is shown below. First, (3.8) can be expressed using (3.10) and (3.11):

v_i(t+1) = χ·v_i(t) - (Σ_{j=1}^{3} χ·r_j·φ_j)·x_i(t) + χ·φ1·r1·p_i(t) + χ·φ2·r2·p_g(t) + χ·φ3·r3·p_bin(i)(t)
x_i(t+1) = χ·v_i(t) + (1 - Σ_{j=1}^{3} χ·r_j·φ_j)·x_i(t) + χ·φ1·r1·p_i(t) + χ·φ2·r2·p_g(t) + χ·φ3·r3·p_bin(i)(t)   (3.10)
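A minimal sketch of the hybrid-learning update (3.8), with the setting φ1 = φ2 = φ3 = 4/(3χ) chosen in Section 3.2.4. The exemplar `p_bin` is assumed to be supplied by Eq. (3.9); the function name and defaults are ours, and velocity clamping to v_max is omitted for brevity.

```python
import random

def hlpso_step(x, v, pbest, gbest, p_bin, chi=0.729):
    """One hybrid-learning velocity/position update per Eq. (3.8)."""
    phi = 4.0 / (3.0 * chi)  # phi1 = phi2 = phi3, Section 3.2.4
    new_v = [chi * (vd
                    + phi * random.random() * (pb - xd)    # pbest attraction
                    + phi * random.random() * (gb - xd)    # gbest attraction
                    + phi * random.random() * (pe - xd))   # exemplar p_bin(i) attraction
             for vd, xd, pb, gb, pe in zip(v, x, pbest, gbest, p_bin)]
    new_x = [xd + vd for xd, vd in zip(x, new_v)]
    return new_x, new_v
```

A particle at rest on all three attractors is a fixed point of (3.8), which gives a quick consistency check of the three-term structure.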
[Figure: two panels plotted against fitness evaluations (0 to 3×10^4): the left panel shows the standardized value and the right panel the diversity of the swarm, each for SDPSO, PDPSO, DPSO and HLPSO-GD.]

Fig. 2. Effect of dynamic neighborhood topology, crossover and mutation on the algorithm.
In matrix form,

(v_i(t+1), x_i(t+1))^T = A·(v_i(t), x_i(t))^T + B·(p_i(t), p_g(t), p_bin(i)(t))^T   (3.11)

Let

A = [ χ   -Σ_{j=1}^{3} χ·r_j·φ_j
      χ   1 - Σ_{j=1}^{3} χ·r_j·φ_j ],

B = χ·[ φ1·r1   φ2·r2   φ3·r3
        φ1·r1   φ2·r2   φ3·r3 ],

φ = Σ_{j=1}^{3} χ·r_j·φ_j,   q = (p_i(t), p_g(t), p_bin(i)(t))^T,   y(t) = (v_i(t), x_i(t))^T,

so (3.11) is equal to (3.12):

y(t+1) = A·y(t) + B·q   (3.12)
where (3.12) is an equation of a linear discrete dynamical system that describes the particle's state change from t to t+1; A, which decides the state of the particles' motion, is the coefficient matrix of the system; B is the input matrix; and q is the external input of the system.
(1) Begin
(2)   Initialize the positions and associated velocities of all particles.
(3)   Evaluate the fitness values of all particles.
(4)   Set the current position as pbest.
(5)   Set the particle with the best fitness value in the whole population as gbest.
(6)   Set Vmax = 0.25(Xmax - Xmin).
(7)   Mstaynum = zeros(1, ps). // the gap iterations for the mutation of particles
(8)   Nstaynum = zeros(1, ps). // the gap iterations for the neighborhood reconstruction of particles
(9)   While (fitcount < Max_FES) && (k < iteration)
(10)    For each particle (i = 1:ps)
(11)      If Mstaynum(i) ≥ 6
(12)        Mstaynum(i) = 0.
(13)        Reconstruct particle i's neighborhood
(14)      End If
(15)    End For
(16)    If Nstaynum(i) ≥ 4
(17)      Nstaynum(i) = 0
Fig. 3. Relationship between constriction factor and acceleration coefficients.
Table 1
Dimension, search space and global optimum of test functions.

Test function | n | Search space | f(x*)
Type I (BFZ), un-rotated:
Sphere (f1) | 30 | [-100,100]^30 | 0
Rosenbrock (f2) | 30 | [-2.05,2.05]^30 | 0
Ackley (f3) | 30 | [-32.8,32.8]^30 | 0
Griewanks (f4) | 30 | [-600,600]^30 | 0
Rastrigin (f5) | 30 | [-5.1,5.1]^30 | 0
Schewfel (f6) | 30 | [-500,500]^30 | 0
Type I (BFZ), rotated:
Ackley (f7) | 30 | [-32.77,32.77]^30 | 0
Griewanks (f8) | 30 | [-600,600]^30 | 0
Rastrigin (f9) | 30 | [-5.12,5.12]^30 | 0
Rastrigin-noncont (f10) | 30 | [-5.12,5.12]^30 | 0
Type II (CEC2013):
Rotated Bent Cigar (f11) | 30 | [-100,100]^30 | -1200
Different Powers (f12) | 30 | [-100,100]^30 | -1000
Rotated Weierstrass (f13) | 30 | [-100,100]^30 | -600
Rastrigin (f14) | 30 | [-100,100]^30 | -400
Composition Function 3 (n=3, Rotated) (f15) | 30 | [-100,100]^30 | 900
(18)      Ngbest begins the polynomial mutation
(19)    End If
(20)    For each particle (i = 1:ps)
(21)      Select an exemplar from the external archive
(22)      Construct particle i's neighborhood
(23)      Update the particle's velocity and position
(24)      Update pbest
(25)      Update Nstaynum, Mstaynum
(26)      Evaluate the fitness value of the current particle i
(27)      Update the gbest external archive // N = the number of members in the current external archive
(28)      For i = 1:N/2
(29)        Randomly choose 2 gbest in the external archive.
(30)        Simulated binary crossover
(31)      End For
(32)    End For
(33)    Increase the generation count
(34)  End While
(35)  Output the result.
(36) End Begin
Algorithm 2. The pseudo code of HLPSO-GD.
The final state of the particles' motion tends to the system stability of (3.12), which means that the algorithm converges at that moment [35]. The necessary and sufficient condition for the stability of system (3.12) is that the moduli of the eigenvalues of matrix A are less than 1. By calculation, the eigenvalues of matrix A are

λ_{1,2} = (2 - φ ± √(φ² - 4φ)) / 2   (3.13)

If 0 < φ < 4 (the moduli of λ_{1,2} are less than 1), the system described by (3.12) remains stable. Fig. 3 gives the relationship between the constriction factor and the acceleration coefficients. In HLPSO-GD, φ1 = φ2 = φ3 = 4/(3χ). The pseudo code of HLPSO-GD is shown in Algorithm 2.
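One way to recover (3.13): the characteristic polynomial of A is quadratic in λ, and setting χ = 1 (the un-constricted system) reproduces the stated roots; for χ < 1, the product of the complex-conjugate roots equals χ, so their common modulus is √χ < 1. A sketch of the computation, under that χ = 1 assumption:

```latex
\det(A-\lambda I)
  = (\chi-\lambda)(1-\phi-\lambda) + \phi\chi
  = \lambda^{2} - (\chi+1-\phi)\lambda + \chi = 0 .
% With \chi = 1 this becomes \lambda^{2} - (2-\phi)\lambda + 1 = 0, whose roots are
\lambda_{1,2}
  = \frac{(2-\phi) \pm \sqrt{(2-\phi)^{2}-4}}{2}
  = \frac{2-\phi \pm \sqrt{\phi^{2}-4\phi}}{2} .
```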
4. Simulated experiments
4.1. The comparative PSOs and test functions
In order to know how competitive HLPSO-GD is, experiments were conducted to compare six PSO algorithms, namely HLPSO-GD and five representatives of the state of the art: (i) the local version PSO (ring neighborhood topology, where every vertex is connected to two others) with constriction factor (CF-LPSO) [3]; (ii) the global version PSO (all neighborhood topology, where every vertex is connected to every other) with constriction factor (CF-GPSO) [3]; (iii) the fitness-distance-ratio-based PSO (FDR-PSO) [15]; (iv) the fully informed particle swarm optimization (FIPS) [8]; and (v) the cooperative particle swarm optimization (CPSO-Hk) [20], with k = 6 in this paper. φ1 = 1.492, φ2 = 1.492 and w = 0.729 are used in all PSOs except HLPSO-GD.
Additionally, to compare the performance of all the algorithms, two types of benchmark functions are used. Type I: a set of traditional multimodal test functions whose global optima are located at zero (BFZ for short): the Sphere, Rosenbrock, Ackley, Griewanks, Rastrigin, Schewfel and Rastrigin_noncont functions. Type II: several representative benchmark functions from CEC 2013: Unimodal Functions (Rotated Bent Cigar Function and Different Powers Function), Basic Multimodal Functions (Rotated Weierstrass Function and Rastrigin's Function) and

Table 2
Means after 3×10^4 function evaluations for the un-rotated test functions for type I (BFZ).

Algorithm | Sphere | Rosenbrock | Ackley | Griewanks | Rastrigin | Schewfel
CF-LPSO | 4.5502e-017 ± 1.3201e-017 | 2.4218e+001 ± 1.1236e+001 | 1.8978e+000 ± 1.1374e+000 | 4.6610e-002 ± 3.7631e-002 | 5.1738e+001 ± 2.5426e+001 | 4.9945e+003 ± 2.5123e+003
CF-GPSO | 4.7546e-014 ± 2.3421e-014 | 2.4078e+001 ± 1.7456e+001 | 1.3769e+001 ± 2.8168e+001 | 3.2886e-001 ± 1.4561e-001 | 8.2581e+001 ± 1.6346e+001 | 4.3474e+003 ± 1.9653e+003
FDR-PSO | 9.0614e-017 ± 2.1950e-017 | 2.3227e+001 ± 1.0211e+001 | 1.3796e-004 ± 2.1511e-004 | 1.3762e-006 ± 2.6731e-006 | 3.5819e+001 ± 2.5643e+001 | 4.1454e+003 ± 2.5184e+003
FIPS | 8.4298e-006 ± 2.4330e-006 | 2.6126e+001 ± 1.1184e+001 | 9.4723e-004 ± 3.1634e-004 | 1.2329e-002 ± 3.9534e-002 | 1.3481e+002 ± 1.3664e+002 | 3.0933e+002 ± 2.4271e+002
CPSO | 3.0583e-006 ± 3.2143e-006 | 2.4899e+000 ± 1.1361e+000 | 2.5444e-004 ± 1.2432e-004 | 2.2157e-002 ± 1.5632e-002 | 1.9938e+000 ± 1.7831e+000 | 1.0675e+003 ± 2.2351e+003
HLPSO-GD | 1.4976e-011 ± 1.4453e-011 | 9.1972e-001 ± 4.4931e-001 | 3.8439e-006 ± 1.7327e-006 | 3.9823e-008 ± 1.9983e-008 | 5.6271e-001 ± 2.7307e-001 | 9.9822e+002 ± 4.0286e+002
h | 0 | 1 | 1 | 1 | 1 | 0
Composition Functions (Composition Function 3 (n=3, Rotated)). Table 1 presents the properties of the BFZ functions, whose formulas are described in [6,20]; the CEC 2013 benchmark functions can be downloaded from the website: http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2013/CEC2013.htm.
The Sphere, Rosenbrock, Ackley, Griewanks, Rastrigin, Schewfel, Rastrigin_noncont, Rotated Bent Cigar, Different Powers and Rotated Weierstrass functions, Rastrigin's Function for CEC 2013, and Composition Function 3 (n=3, Rotated) are selected to test the convergence characteristics visually, and the Ackley, Griewanks and Rastrigin functions are rotated with Salomon's algorithm [36] to increase optimization difficulty and test algorithm robustness (the predefined thresholds are 0.05, 0.02 and 50, respectively).
4.2. Fixed-iteration results and discussions
In order to make these different algorithms comparable, for the type I test functions the population size is set to 30 for all PSOs, and each test function is run 30 times. At each run, the maximum number of fitness evaluations (FEs) is set to 3×10^4 for the un-rotated test functions and 6×10^4 for the rotated test functions. For the type II test functions, the population size is set to 100 for all PSOs, and each test function is run 30 times; at each run, the maximum number of FEs is fixed to 3×10^5.
Table 3
Means after 6×10^4 function evaluations for the rotated test functions for type I (BFZ).

Algorithm | Ackley | Griewanks | Rastrigin | Rastrigin_noncont
CF-LPSO | 1.3856e-013 ± 1.5367e-013 | 3.4467e-002 ± 1.4325e-002 | 1.1939e+001 ± 1.5453e+001 | 5.3000e+000 ± 1.2454e+000
CF-GPSO | 7.1054e-015 ± 3.5346e-015 | 9.8354e-002 ± 2.6572e-002 | 1.0945e+001 ± 1.4341e+001 | 7.0012e+000 ± 2.4325e+000
FDR-PSO | 1.1551e+000 ± 2.4744e+000 | 9.5999e-002 ± 4.2343e-002 | 9.9496e+000 ± 2.0171e+000 | 6.0001e+000 ± 2.5665e+000
FIPS | 3.5527e-015 ± 4.5425e-015 | 6.7777e-002 ± 2.7467e-002 | 9.8954e+000 ± 5.2542e+000 | 7.0650e+000 ± 1.5674e+000
CPSO | 1.1551e+000 ± 1.6366e+000 | 3.1611e-002 ± 5.5432e-002 | 1.0228e+001 ± 1.5412e+001 | 1.8000e+001 ± 3.0565e+001
HLPSO-GD | 4.8732e-019 ± 1.0981e-019 | 2.3157e-002 ± 1.1218e-002 | 4.3567e+000 ± 2.1462e+000 | 3.4591e+000 ± 2.3471e+000
h | 1 | 0 | 1 | 1
[Figure 4 panels omitted: six log-scale plots of Mean Function Value vs. Fitness Evaluation (0 to 3×10^4); legend: CF-LPSO, CF-GPSO, FDR-PSO, FIPS, CPSO, HLPSO-GD.]
Fig. 4. Convergence characteristics of the un-rotated type I test functions. (a) Sphere; (b) Rosenbrock; (c) Ackley; (d) Griewank; (e) Rastrigin; (f) Schwefel.
Table 4. Means ± 95% confidence intervals after 3×10^5 function evaluations on the type II (CEC 2013) test functions.

| Algorithm | Rotated Bent Cigar | Different Powers | Rotated Weierstrass | Rastrigin | Composition Function 3 |
|---|---|---|---|---|---|
| CF-LPSO | 3.7893e+003 ± 1.3568e+002 | −1.000e+003 ± 2.324e+001 | −5.9725e+002 ± 2.3102e+001 | −3.9901e+002 ± 1.0894e+001 | 1.4488e+003 ± 8.0146e+002 |
| CF-GPSO | 6.2850e+004 ± 3.5478e+002 | −1.0000e+003 ± 8.2564e+001 | −5.9773e+002 ± 1.2564e+001 | −3.9801e+002 ± 3.8749e+001 | 1.8163e+003 ± 9.5874e+002 |
| FDR-PSO | 1.6485e+004 ± 4.9241e+003 | −1.0000e+003 ± 9.5147e+001 | −5.9769e+002 ± 9.5287e+001 | −3.9973e+002 ± 4.7682e+001 | 1.2319e+003 ± 8.9654e+002 |
| FIPS | 1.0043e+004 ± 8.4215e+002 | −1.0000e+003 ± 2.6547e+001 | −5.9750e+002 ± 2.3179e+001 | −3.9640e+002 ± 1.0287e+001 | 2.3948e+003 ± 5.2479e+002 |
| CPSO | 5.5883e+006 ± 2.6874e+002 | −1.0000e+003 ± 9.5814e+001 | −5.9898e+002 ± 6.2471e+001 | −4.0000e+002 ± 3.2514e+001 | 2.4030e+003 ± 6.2534e+002 |
| HLPSO-GD | −1.1975e+003 ± 2.1478e+002 | −1.0000e+003 ± 1.0047e+001 | −5.9983e+002 ± 1.2147e+001 | −4.0000e+002 ± 1.3529e+001 | 1.0531e+003 ± 2.1473e+002 |
| h | 1 | 1 | 1 | 0 | 1 |
For the type I test functions, Tables 2 and 3 present the means and 95% confidence intervals after 3×10^4 and 6×10^4 function evaluations, respectively. For the type II test functions, Table 4 presents the means and 95% confidence intervals after 3×10^5 function evaluations. The best results obtained by the six PSOs are presented in bold. In addition, to determine whether the results obtained by HLPSO-GD are statistically different from those obtained by the other five PSO variants, Wilcoxon rank sum tests are conducted. An h value of one implies that the performances of the two algorithms are statistically different with 95% certainty, whereas an h value of zero indicates that they are not.
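The h values come from Wilcoxon rank-sum tests at the 95% level. As a minimal sketch of how such an h can be computed (normal approximation with tie-averaged ranks and no tie correction; the function name `ranksum_h` is illustrative, not the authors' code):

```python
import math

def ranksum_h(a, b, z_crit=1.96):
    """Wilcoxon rank-sum test, normal approximation.
    Returns h = 1 if samples a and b differ at the 95% level, else 0."""
    n1, n2 = len(a), len(b)
    combined = sorted((v, 0 if i < n1 else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # assign average ranks to tied values
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)  # rank sum of a
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return 1 if abs(z) > z_crit else 0
```

In practice the per-run final fitness values of two algorithms (30 values each here) would be passed as `a` and `b`; a production analysis would typically use a library routine such as `scipy.stats.ranksums` instead.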
The average convergence trends for the type I test functions are displayed in Figs. 4 and 5. Fig. 6 gives the function error value f_i(x) − f_i(x*) for the type II test functions, where f_i(x) is the corresponding function value and f_i(x*) is the global optimum shown in Table 1.
For the type I test functions, as shown in Figs. 4 and 5, on the un-rotated test functions HLPSO-GD exhibits almost the same convergence characteristics as the algorithm proposed in [20], which stems from
[Figure 5 panels omitted: four log-scale plots of Mean Function Value vs. Fitness Evaluation (0 to 6×10^4); legend: CF-LPSO, CF-GPSO, FDR-PSO, FIPS, CPSO, HLPSO-GD.]
Fig. 5. Convergence characteristics of the rotated type I test functions. (a) Ackley; (b) Griewank; (c) Rastrigin; (d) Rastrigin_noncont.
[Figure 6 panels omitted: five log-scale plots of Function Error Value vs. Fitness Evaluation (0 to 1×10^5); legend: CF-LPSO, CF-GPSO, FDR-PSO, FIPS, CPSO, HLPSO-GD.]
Fig. 6. Convergence characteristics of the type II (CEC 2013) test functions. (a) Rotated Bent Cigar Function; (b) Different Powers Function; (c) Rotated Weierstrass Function; (d) Rastrigin's Function; (e) Composition Function 3 (n=3, Rotated).
the use of the same crossover and mutation strategies in both algorithms. On the rotated test functions, however, HLPSO-GD achieves good performance because of its comprehensive learning strategy.
The Sphere function is easily optimized by all the PSO algorithms, but CF-LPSO obtains the best result. On the Rosenbrock function, whose global minimum lies in a curved valley, HLPSO-GD manages to avoid premature convergence, while the other five algorithms all get trapped in local optima. On the un-rotated Ackley function, HLPSO-GD takes the lead, and FDR-PSO performs better than the remaining variants. In the rotated case, HLPSO-GD is again among the performance leaders, and CF-GPSO, CF-LPSO and FIPS achieve better results than CPSO and FDR-PSO (FDR-PSO is trapped in local optima early on).
Similarly, on the Griewank function, HLPSO-GD achieves a better result than every other PSO algorithm in the un-rotated case. When the search space is rotated, only HLPSO-GD, CF-LPSO and CPSO belong to the cluster of performance leaders; the other three algorithms converge prematurely. The Rastrigin function presents a pattern similar to Ackley: in both the rotated and un-rotated cases HLPSO-GD performs very well, whereas the other five algorithms show nearly identical convergence characteristics, except that CPSO behaves much like HLPSO-GD in the rotated case. On the Schwefel function, FIPS stands out, followed by HLPSO-GD and CPSO, although there are no distinct differences among them. On the rastrigin_noncont function, all algorithms except CPSO present almost the same convergence track, but HLPSO-GD obtains the best result.
For all the type II (CEC 2013) test functions, HLPSO-GD shows better performance than CF-LPSO, CF-GPSO, FDR-PSO and FIPS; on the Rastrigin function, however, HLPSO-GD and CPSO share the same convergence trend.
Altogether, whether on the type I (BFZ) or the type II (CEC 2013) test functions, the above analysis shows that HLPSO-GD performs better than the other five algorithms on f2, f3, f4, f5, f7, f9, f10, f11, f12, f13 and f15. According to the Wilcoxon rank sum tests, the HLPSO-GD results and the best results achieved by the other five PSOs are statistically different with 95% certainty.
Furthermore, because simulated binary crossover, polynomial mutation and a dynamic neighborhood topology are introduced, HLPSO-GD achieves better performance than the other PSOs. However, it is essential to investigate whether these strategies increase the computational complexity. Here, MATLAB's tic and toc functions are used to measure the time needed by each PSO, and 30 independent runs are performed on the un-rotated and rotated test functions to obtain the run time until the stopping criterion is reached (3×10^4 and 6×10^4 function evaluations for the un-rotated and rotated test functions, respectively). From Table 5, it can be observed that the run time used by HLPSO-GD is of the same order of magnitude as that of the other algorithms, which implies that HLPSO-GD does not increase the computational complexity.
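The tic/toc-style measurement can be sketched in Python as follows; this is an illustration of the methodology, not the authors' MATLAB code, and `run_once` is a hypothetical callable wrapping one full optimizer run:

```python
import time

def time_algorithm(run_once, n_runs=30):
    """Mean wall-clock time over independent runs, analogous to MATLAB tic/toc.
    `run_once` is assumed to execute one optimization run to its stopping criterion."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()            # tic
        run_once()
        times.append(time.perf_counter() - t0)  # toc
    return sum(times) / len(times)
```

Averaging over 30 runs, as the paper does, smooths out scheduling noise in the wall-clock measurements.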
4.3. Robustness and discussions
Table 6 shows the results of the robustness analysis [34]. Here, "robustness" evaluates the search stability of the algorithms under different conditions (the rotated and un-rotated test functions) by a fixed criterion; for HLPSO-GD, reaching a specified threshold is used as the criterion. A robust algorithm is one that manages to reach the threshold consistently in both the rotated and the un-rotated cases. The "Success rate" column lists how often the tested algorithm reached the threshold over 60 runs. The "FEs" column gives the number of function evaluations at which the threshold was reached, and only data from successful runs are used to compute it.
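The two statistics in Table 6 can be computed from per-run convergence histories as sketched below (assuming each history lists the best fitness after every evaluation; the function name `robustness_stats` is illustrative):

```python
def robustness_stats(runs, threshold):
    """Success rate (%) and mean FEs to threshold over successful runs.
    `runs` is a list of best-fitness histories, one per independent run;
    a run succeeds when its best fitness first drops to the threshold."""
    successes = []
    for hist in runs:
        for fe, val in enumerate(hist, start=1):
            if val <= threshold:
                successes.append(fe)  # FEs at first success
                break
    rate = 100.0 * len(successes) / len(runs)
    mean_fes = sum(successes) / len(successes) if successes else float("nan")
    return rate, mean_fes
```

Note that the mean FEs is conditioned on success, so an algorithm that rarely succeeds can still show a small FEs value.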
As can be seen in Table 6, CF-GPSO performs unsatisfactorily on both the un-rotated and the rotated Ackley function, whereas CF-LPSO can reach the threshold in the un-rotated case. FDR-PSO and CPSO fail on the rotated Ackley function, while in the un-rotated case they consistently reach the given threshold. FIPS fails in both the un-rotated and the rotated cases. Only HLPSO-GD consistently reaches the threshold in every case.
For the majority of the algorithms, the Griewank function's predefined threshold is hard to reach, as Table 6 shows. Only HLPSO-GD consistently reached the threshold in both the un-rotated and rotated cases. CPSO also achieved a good result in the un-rotated case, but a disappointing one in the rotated case. Note that in the un-rotated case, the success rates of all PSOs except CF-GPSO and CF-LPSO are above 70%.
On the Rastrigin function, HLPSO-GD consistently reached the threshold in both the un-rotated and rotated cases. CPSO and FDR-PSO reached the threshold in the un-rotated case but failed in the rotated case. CF-LPSO, FIPS and CF-GPSO had difficulties in both cases.
Overall, where robustness is concerned, HLPSO-GD appears to be the winner. FDR-PSO, CPSO and FIPS reached the threshold consistently on most of the test functions, which indicates that these algorithms are only slightly less robust. CF-LPSO and CF-GPSO seemed unreliable on the multimodal benchmark functions.
Table 5. Run time until the stopping criterion is reached, for the type I test functions (unit: seconds). f1–f6 are the un-rotated test functions; f7–f10 are the rotated ones.

| Algorithm | f1 | f2 | f3 | f4 | f5 | f6 | f7 | f8 | f9 | f10 |
|---|---|---|---|---|---|---|---|---|---|---|
| CF-LPSO | 7 | 12 | 19 | 29 | 50 | 36 | 431 | 262 | 421 | 789 |
| CF-GPSO | 8 | 10 | 18 | 27 | 46 | 37 | 409 | 286 | 452 | 546 |
| FDR-PSO | 9 | 9.8 | 20 | 35 | 45 | 35 | 387 | 268 | 399 | 654 |
| FIPS | 9 | 8.9 | 16 | 40 | 70 | 32 | 374 | 363 | 488 | 653 |
| CPSO | 10 | 8.5 | 24 | 43 | 54 | 56 | 452 | 544 | 515 | 468 |
| HLPSO-GD | 9 | 11 | 21 | 31 | 49 | 47 | 359 | 243 | 386 | 481 |
Table 6. Robustness analysis of the algorithms for the type I test functions (* denotes the rotated version; SR = success rate).

| Algorithm | Ackley FEs | Ackley SR (%) | Griewank FEs | Griewank SR (%) | Rastrigin FEs | Rastrigin SR (%) | Ackley* FEs | Ackley* SR (%) | Griewank* FEs | Griewank* SR (%) | Rastrigin* FEs | Rastrigin* SR (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CF-LPSO | 21081 | 100 | 14337 | 43 | 13145 | 31 | 4270 | 20 | 12017 | 20 | 2670 | 26 |
| CF-GPSO | 20080 | 70 | 9288 | 40 | 5400 | 20 | 3331 | 19 | 6148 | 25 | 3200 | 20 |
| FDR-PSO | 19761 | 100 | 8648 | 70 | 13674 | 100 | 15525 | 45 | 24173 | 40 | 1980 | 90 |
| FIPS | 19832 | 90 | 8643 | 72 | 13654 | 94 | 10642 | 90 | 37202 | 39 | 3100 | 95 |
| CPSO | 19325 | 90 | 25109 | 86 | 919 | 100 | 2474 | 90 | 24162 | 41 | 3560 | 100 |
| HLPSO-GD | 975 | 100 | 16347 | 100 | 712 | 100 | 8240 | 100 | 20837 | 100 | 3201 | 99 |
5. Conclusions and future works
This paper presented an improved PSO with a hybrid learning strategy and genetic disturbance (HLPSO-GD). To enhance the frequency of information exchange among particles, the neighborhood topology is dynamically constructed in HLPSO-GD, and the velocity of each particle is updated based on all particles in its neighborhood, including itself. In this way the swarm's diversity can be maintained to some extent, which counteracts premature convergence. To strengthen the swarm's ability to escape from local optima, the genetic disturbance (simulated binary crossover and polynomial mutation) is introduced to generate a new gbest. The simulation results on the type I (BFZ) and type II (CEC 2013) test functions show that the HLPSO-GD results are statistically different from the second-best results. Additionally, the robustness analysis shows that HLPSO-GD is a stable algorithm under different optimization conditions. Given its convergence properties and robustness, HLPSO-GD can be regarded as an effective improvement in the PSO domain.
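The two genetic-disturbance operators named above follow Deb's standard formulations [31,32]; a minimal per-dimension sketch is given below. The function names and the application to gbest are illustrative, since the paper's exact implementation details are not reproduced here:

```python
import random

def sbx_crossover(p1, p2, eta=15):
    """Simulated binary crossover (Deb & Agrawal, 1995), per dimension.
    The spread factor beta is drawn so offspring stay centered on the parents."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2 * u) ** (1.0 / (eta + 1))
        else:
            beta = (1.0 / (2 * (1 - u))) ** (1.0 / (eta + 1))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2

def polynomial_mutation(x, lo, hi, eta=20, pm=0.1):
    """Polynomial mutation (Deb & Goyal, 1996); each gene mutates with prob pm
    and is clamped to the search bounds [lo, hi]."""
    y = []
    for xi in x:
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2 * u) ** (1.0 / (eta + 1)) - 1
            else:
                delta = 1 - (2 * (1 - u)) ** (1.0 / (eta + 1))
            xi = min(hi, max(lo, xi + delta * (hi - lo)))
        y.append(xi)
    return y
```

A useful property of SBX, visible in the formulas, is that the two children preserve the parents' mean in every dimension, so the disturbance explores around the archived elite rather than drifting away from it.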
In the future, our work will focus on (i) testing the effectiveness of the proposed algorithm on more complex multimodal test problems, (ii) applying HLPSO-GD to practical problems (such as supply chain models and portfolio optimization problems), and (iii) analyzing the robustness and computational complexity of HLPSO-GD on the CEC 2013 test problems.
Acknowledgments
This work is partially supported by the National Natural Science Foundation of China (Grant nos. 71461027, 71271140, 71001072, 71210107016), the Hong Kong Scholars Program 2012 (Grant no. G-YZ24), the Guizhou Province Science and Technology Fund (Grant nos. Qian Ke He J [2012]2340, Qian Ke He J [2012]2342, LKZS [2012]10, LKZS [2012]22), the China Postdoctoral Science Foundation (Grant nos. 2012M520936, 2013T60466, 20100480705, 2012T50584), the Shanghai Postdoctoral Science Foundation (Grant no. 12R21416000), the Natural Science Foundation of Guangdong Province (Grant no. S2012010008668), the Guizhou Province colleges and universities teaching quality and teaching reform project (no. Qianjiaogaofa [2013]446), and the key projects of the education department in Guizhou Province (no. Qianjiaohe [2014]295).
References
[1] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the 1995 IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[2] R. Eberhart, Y. Shi, Comparison between genetic algorithms and particle swarm optimization, in: Proceedings of the 7th Annual Conference on Evolutionary Programming, vol. 1, 1998, pp. 611–619.
[3] M. Clerc, J. Kennedy, The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[4] P.N. Suganthan, Particle swarm optimizer with neighborhood operator, in: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, 1999, pp. 1958–1962.
[5] J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 5, 2002, pp. 671–676.
[6] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. 10 (2006) 281–295.
[7] J.L. Nai, J.W. Wang, Enhanced particle swarm optimizer incorporating a weighted particle, Neurocomputing 24 (2014) 218–227.
[8] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204–210.
[9] T.M. Blackwell, P. Bentley, Don't push me! Collision-avoiding swarms, in: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, 2002, pp. 691–696.
[10] D. Yi, X.R. Ge, An improved PSO-based ANN with simulated annealing technique, Neurocomputing 63 (2005) 527–533.
[11] R. Brits, A. Engelbrecht, F. van den Bergh, A niching particle swarm optimizer, in: Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning, vol. 3, 2002, pp. 692–696.
[12] X. Li, Adaptively choosing neighborhood bests using species in a particle swarm optimizer for multimodal function optimization, in: Proceedings of the Genetic and Evolutionary Computation Conference, vol. 3, 2004, pp. 105–116.
[13] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (2004) 240–255.
[14] S. Yang, C. Li, A clustering particle swarm optimizer for locating and tracking multiple optima in dynamic environments, IEEE Trans. Evol. Comput. 14 (2010) 959–974.
[15] T. Peram, K. Veeramachaneni, Fitness-distance-ratio based particle swarm optimization, in: Proceedings of the IEEE Swarm Intelligence Symposium, vol. 2, 2003, pp. 174–181.
[16] A.S. Mohais, R. Mendes, C. Ward, C. Posthoff, Neighborhood re-structuring in particle swarm optimization, in: AI 2005: Advances in Artificial Intelligence, vol. 3809, 2005, pp. 776–785.
[17] S. Janson, M. Middendorf, A hierarchical particle swarm optimizer and its adaptive variant, IEEE Trans. Syst., Man, Cybern., Part B 35 (2005) 1272–1282.
[18] W. Elshamy, H.M. Emara, A. Bahgat, Clubs-based particle swarm optimization, in: Proceedings of the IEEE Swarm Intelligence Symposium (SIS'07), vol. 5, 2007, pp. 289–296.
[19] S.Z. Zhao, J.J. Liang, P.N. Suganthan, M.F. Tasgetiren, Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC'08), vol. 5, 2008, pp. 3845–3852.
[20] F. van den Bergh, A.P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput. 8 (2004) 225–239.
[21] P. Angeline, Evolutionary optimization versus particle swarm optimization: philosophy and performance differences, in: Proceedings of the 7th Annual Conference on Evolutionary Programming, vol. 4, 1998, pp. 601–610.
[22] Y. Chen, W. Peng, M. Jian, Particle swarm optimization with recombination and dynamic linkage discovery, IEEE Trans. Syst., Man, Cybern. – Part B: Cybern. 37 (2007) 1460–1470.
[23] C. Wei, Z. He, Y. Zhang, W. Pei, Swarm directions embedded in fast evolutionary programming, in: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 6, 2002, pp. 1278–1283.
[24] Z.H. Zhan, J. Zhang, Y. Li, Y.H. Shi, Orthogonal learning particle swarm optimization, IEEE Trans. Evol. Comput. 15 (2010) 832–847.
[25] T. Si, Grammatical differential evolution adaptable particle swarm optimization algorithm, Int. J. Electron. Commun. Comput. Eng. 3 (2012) 1319–1324.
[26] T. Si, A. De, A.K. Bhattacharjee, Grammatical swarm based-adaptable velocity update equations in particle swarm optimizer, in: Advances in Intelligent Systems and Computing, vol. 5, 2014, pp. 197–206.
[27] M.G.H. Omran, S.A. Sharhan, Using opposition-based learning to improve the performance of particle swarm optimization, in: Proceedings of the IEEE Swarm Intelligence Symposium, vol. 3, 2008, pp. 1–6.
[28] L. Han, X. He, A novel opposition-based particle swarm optimization for noisy problems, in: Proceedings of the Third International Conference on Natural Computation (ICNC 2007), vol. 5, 2007, pp. 624–629.
[29] N.D. Jana, T. Si, J. Sil, Particle swarm optimization with adaptive mutation in local best of particles, in: International Congress on Informatics, Environment, Energy and Applications, vol. 5, 2012, pp. 1–6.
[30] A. Stacey, M. Jancic, I. Grundy, Particle swarm optimization with mutation, in: Proceedings of the IEEE Congress on Evolutionary Computation, vol. 5, 2003, pp. 1425–1430.
[31] K. Deb, M. Goyal, A combined genetic adaptive search (GeneAS) for engineering design, Comput. Sci. Inform. 26 (1996) 30–45.
[32] K. Deb, R.B. Agrawal, Simulated binary crossover for continuous search space, Complex Syst. 9 (1995) 115–148.
[33] Y.M. Liu, Q.Z. Zhao, C.L. Sui, Particle swarm optimizer based on dynamic neighborhood topology and mutation operator, Control Decis. 25 (2010) 968–974.
[34] Y.M. Liu, B. Niu, A novel PSO model based on simulating human social communication behavior, Discret. Dyn. Nat. Soc. 8 (2012) 1–26.
[35] V. Kadirkamanathan, S. Kirusnapillai, Stability analysis of the particle dynamics in particle swarm optimizer, IEEE Trans. Evol. Comput. 10 (2006) 245–254.
[36] R. Salomon, Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions, BioSystems 39 (1996) 263–278.
Yanmin Liu received the B.S. degree in Applied Mathematics from Harbin Institute of Technology, Harbin, China, in 2001, the M.S. degree in Control Science and Engineering from Heilongjiang Bayi Agricultural University, Daqing, China, in 2006, and the Ph.D. degree in Decision Theory and Application from Shandong Normal University, Jinan, China, in 2011. He is presently a professor in the College of Mathematical and Computational Science, Zunyi Normal College. His main fields of research are Swarm Intelligence, Bio-inspired Computing, Multi-objective Optimization and their applications to supply chains.
Ben Niu received the B.S. degree from Hefei Union University, Hefei, China, in 2001, the M.S. degree from Anhui Agriculture University, Hefei, China, in 2004, and the Ph.D. degree from the Shenyang Institute of Automation of the Chinese Academy of Sciences, Shenyang, China, in 2008. He is presently an Associate Professor in the Department of Management Science, Shenzhen University. He is also currently a Postdoctoral Fellow at the Hefei Institute of Intelligent Machines, CAS, and at The Hong Kong Polytechnic University. His main fields of research are Swarm Intelligence, Bio-inspired Computing, and their applications to Supply Chain Optimization, Business Intelligence, and Portfolio Optimization.
Yuanfeng Luo received the B.S. degree in Applied Mathematics from the College of Mathematical and Computational Science, Zunyi Normal College, where he is presently a lecturer. His main fields of research are Swarm Intelligence and its applications to supply chains.