
Research Article
Particle Swarm Optimization Algorithm with Multiple Phases for Solving Continuous Optimization Problems

Jing Li,1 Yifei Sun,2 and Sicheng Hou3

1Department of Basic Course, Shaanxi Railway Institute, Weinan 714000, China
2School of Physics & Information Technology, Shaanxi Normal University, Xi'an 710119, China
3Graduate School of Information, Production and Systems, Waseda University, Kitakyushu 808-0135, Japan

Correspondence should be addressed to Yifei Sun; yifeis@snnu.edu.cn

Received 16 April 2021; Revised 1 June 2021; Accepted 18 June 2021; Published 28 June 2021

Academic Editor: Lianbo Ma

Copyright © 2021 Jing Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An algorithm with different parameter settings often performs differently on the same problem, and the parameter settings are difficult to determine before the optimization process. The variants of the particle swarm optimization (PSO) algorithm are studied as exemplars of swarm intelligence algorithms. Based on the concept of the building block thesis, a PSO algorithm with multiple phases is proposed to analyze the relation between search strategies and the solved problems. Two variants of the PSO algorithm, termed the PSO with fixed phases (PSOFP) algorithm and the PSO with dynamic phases (PSODP) algorithm, are compared with six variants of the standard PSO algorithm in the experimental study. A benchmark suite for single-objective numerical optimization, which includes 12 functions, is used in the experimental study in 50 and 100 dimensions, respectively. The experimental results verify the generalization ability of the proposed PSO variants.

1. Introduction

The particle swarm optimization (PSO) algorithm is a population-based stochastic algorithm modeled on the social behaviors observed in flocking birds [1, 2]. As in other well-known swarm intelligence algorithms, each particle, which represents a solution in the group, flies through the search space with a velocity that is dynamically adjusted according to its own and its companions' historical behaviors. The particles tend to fly toward better search areas throughout the search process [3].

Many swarm intelligence algorithms have been proposed to solve different kinds of problems, including single- or multiple-objective optimization and real-world applications. These algorithms include the particle swarm optimization algorithm [1, 2], the brain storm optimization (BSO) algorithm [4–6], and pigeon-inspired optimization (PIO) [7], to name a few. Usually, a swarm intelligence algorithm has many variants, which have different search strategies or parameters. Take the PSO algorithm as an example. For single-objective optimization, there are the adaptive PSO algorithm [8], the time-varying attractor PSO algorithm [9], the interswarm interactive learning PSO algorithm [10], the triple-archives PSO algorithm [11], the social learning PSO algorithm for scalable optimization [12], a PSO variant for mixed-variable optimization problems [13], etc. For multiobjective optimization, there are the adaptive gradient multiobjective PSO algorithm [14], the coevolutionary PSO algorithm with a bottleneck objective learning strategy [15], the normalized-ranking-based PSO algorithm for many-objective optimization [16], etc. Besides, the classical PSO algorithm and its variants have been used in various real-world applications [17], such as search-based data analytics problems [18]. Different variants of optimization algorithms have different components and strategies during the search process. The design of proper components and strategies is vital for a search algorithm before solving the problem.

The building block thesis could be embedded into the genetic algorithm [19]. Building blocks denote the components, at all levels, that are used to understand and recognize complex things or structures. An evolutionary computation algorithm could be divided into several components according to the building block thesis. In the PSO algorithm, there are several components. Different settings of the parameters, structures, and strategies of the PSO algorithm can perform differently on the same problem. Thus, a proper setting of the optimization algorithm is vital for the solved problem. However, there is no favorable method that can find the best setting before the optimization. Many approaches have been introduced to analyze the components of the PSO algorithm. For example, the setting of the population size in the PSO algorithm was surveyed and discussed in [20].

Hindawi, Discrete Dynamics in Nature and Society, Volume 2021, Article ID 8378579, 13 pages. https://doi.org/10.1155/2021/8378579

Based on the building block thesis, a PSO algorithm with multiple phases is proposed in this paper. This paper could be useful for understanding the search components in the PSO algorithm and for designing PSO algorithms for specific problems. There are two targets in this paper:

(1) For the theoretical analysis of the PSO algorithm, the effectiveness of different components of various PSO algorithms, such as the inertia weight, acceleration coefficients, and topology structures, is studied.

(2) For the application of the PSO algorithm, more effective PSO algorithms could be designed for solving different real-world applications.

The remainder of this paper is organized as follows. The basic PSO algorithm and topology structures are briefly introduced in Section 2. Section 3 gives an introduction to the proposed PSO algorithms with multiple phases. In Section 4, comprehensive experimental studies are conducted on 12 benchmark functions with 50 or 100 dimensions to verify the effectiveness of the proposed algorithms. Finally, Section 5 concludes with some remarks and future research directions.

2. Background

The PSO algorithm is a widely used swarm intelligence algorithm for optimization. It is easy to understand and implement. A potential solution, which is termed a particle in the PSO algorithm, is a search point in a solution space with D dimensions. Each particle is associated with two vectors, i.e., a velocity vector and a position vector. Normally, the particles are numbered from 1 to S, and the index of a particle (or solution) is denoted by i. The dimensions are numbered from 1 to D, and the index of a dimension is denoted by d. S is the total number of particles, and D is the total number of dimensions. The position of the ith particle is represented as x_i = [x_i1, x_i2, ..., x_id, ..., x_iD], where x_id is the value of the dth dimension of the ith solution, i = 1, 2, ..., S and d = 1, 2, ..., D. The velocity of a particle is denoted as v_i = [v_i1, v_i2, ..., v_id, ..., v_iD].
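As a concrete illustration of this bookkeeping, the positions and velocities can be stored as S-by-D arrays; the swarm size, dimension, and bounds below are illustrative values, not settings taken from the paper:

```python
import numpy as np

# Illustrative sizes and bounds (not the paper's settings): S particles, D dimensions.
S, D = 5, 3
lower, upper = -100.0, 100.0
rng = np.random.default_rng(0)

# Row i holds the position x_i = [x_i1, ..., x_iD]; likewise for the velocity v_i.
x = rng.uniform(lower, upper, size=(S, D))
v = rng.uniform(-(upper - lower), upper - lower, size=(S, D))
```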

2.1. Particle Swarm Optimization Algorithm. The basic process of the PSO algorithm is given in Algorithm 1. There are two update equations, for the velocity and the position of each particle, in the basic process of the PSO algorithm. The framework of the canonical PSO algorithm is shown in Figure 1. The fitness value is evaluated based on all dimensions, while the update of the velocity or position is performed dimension by dimension. The update equations for the velocity v_id and the position x_id are as follows [17, 21]:

v_id^(t+1) = w * v_id^t + c1 * rand() * (p_id^t - x_id^t) + c2 * rand() * (p_nd^t - x_id^t),   (1)

x_id^(t+1) = x_id^t + v_id^(t+1),   (2)
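A direct, vectorized transcription of the two update equations over all particles and dimensions (a sketch; the coefficient values are the SPSO-BK constants listed later in Section 4, and the function name is ours):

```python
import numpy as np

rng = np.random.default_rng(1)
w, c1, c2 = 0.72984, 1.496172, 1.496172  # inertia weight and acceleration constants

def pso_update(x, v, pbest, nbest):
    """One application of equations (1) and (2) for a whole swarm."""
    r1 = rng.random(x.shape)  # fresh rand() draws per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (nbest - x)  # equation (1)
    x_new = x + v_new                                              # equation (2)
    return x_new, v_new
```

Starting from zero velocity, the new position equals the new velocity, which is what equation (2) predicts.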

where w denotes the inertia weight, c1 and c2 are two positive acceleration constants, and rand() is a random function that generates uniformly distributed random numbers in the range [0, 1). p_i^t = [p_i1^t, p_i2^t, ..., p_id^t, ..., p_iD^t] is termed the personal best at the tth iteration, which refers to the best position found so far by the ith particle, and p_n^t = [p_n1^t, p_n2^t, ..., p_nd^t, ..., p_nD^t] is termed the local best, which refers to the position with the best fitness value found so far by the members of the ith particle's neighborhood. The parameter w was introduced to control the global search and local search abilities. The PSO algorithm with the inertia weight is termed the canonical/classical PSO.

The position of a particle is updated in the search space at each iteration. The velocity update equation, shown in equation (1), has three components: the previous velocity, the cognitive part, and the social part. The cognitive part indicates that a particle learns from its own search experience; correspondingly, the social part indicates that a particle can learn from other particles, or from the good solutions in its neighborhood.

2.2. Topology Structure. Different kinds of topology structures, such as the global star, local ring, four clusters, or von Neumann structure, could be utilized in PSO variants to solve various problems. A particle in a PSO with a different structure has a different number of particles in its neighborhood, with a different scope. Learning from a different neighbor means that a particle follows a different neighborhood (or local) best; in other words, the topology structure determines the connections among particles and the strategy used to propagate search information over the iterations.

A PSO algorithm's exploration and exploitation abilities can be affected by its topology structure; i.e., with a different structure, the algorithm's convergence speed and its ability to avoid premature convergence will vary on the same optimization problem, because the topology structure determines the speed and direction of search information sharing for each particle. The global star and the local ring are the two typical topology structures. A PSO with a global star structure, where all particles are connected, has the smallest average distance in the swarm; on the contrary, a PSO with a local ring structure, where every particle is connected to two nearby particles, has the largest average distance in the swarm [22].

The global star structure and the local ring structure, which are two commonly used structures, are analyzed in the experimental study. These two structures are shown in Figure 2. Each group of particles has 16 individuals. It should be noted that the nearby particles in the local structure are determined by the indices of the particles.
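The two neighborhoods can be written down directly; following the description above, the ring neighborhood is based purely on particle indices (the function names are ours):

```python
def ring_neighbors(i, swarm_size):
    # Local ring: the particle itself plus its two index-adjacent neighbors.
    return [(i - 1) % swarm_size, i, (i + 1) % swarm_size]

def star_neighbors(i, swarm_size):
    # Global star: every particle is connected to the whole swarm.
    return list(range(swarm_size))
```

With 16 particles, as in Figure 2, particle 0's ring neighborhood wraps around to particle 15.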

3. Particle Swarm Optimization with Multiple Phases

The search process could be divided into different phases, such as the exploration phase and the exploitation phase. A discussion has been given on the genetic diversity of an evolutionary algorithm in the exploration phase [23]. To enhance the generalization ability of the PSO algorithm, multiple phases could be combined into one algorithm. The PSO algorithm is a combination of several components, and each component could be seen as a building block of a PSO variant. Some changeable components in PSO algorithms are shown in Figure 3. The setting of a component is independent of the other components. Normally, the parameter settings, topology structure, and other strategies should be determined before the search process. There are three well-known standard PSO algorithms, namely, the standard PSO algorithm by Bratton and Kennedy (SPSO-BK) [24], the standard PSO algorithm by Clerc (SPSO-C) [25], and the canonical PSO (CPSO) algorithm by Shi and Eberhart [26].

Different search phases are emphasized by different PSO variants because each variant performs differently when solving different kinds of problems. Thus, to utilize the strengths of different variants, a PSO algorithm could be a combination of several PSO variants. For example, the global star structure could be used at the beginning of the search to obtain a good exploration ability, while the local ring structure could be used at the end of the search to promote the exploitation ability. The basic procedure of the PSO algorithm with the multiple phase strategy (PSOMP) is given in Algorithm 2. The setting of phases could be fixed or dynamically changed during the search. To validate the performance of different settings, two kinds of PSO algorithms with multiple phases are proposed.

3.1. Particle Swarm Optimization with Fixed Phases. One PSOMP variant is the PSO with fixed phases (PSOFP) algorithm. The process of the PSO algorithm with multiple fixed phases is shown in Figure 4. Two standard PSO variants, the CPSO with the star structure and the SPSO-BK with the ring structure, are combined in the PSOFP algorithm. In the experimental study, both variants perform the same number of iterations. The PSOFP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. All parameters are fixed for each phase during the search. The PSOFP algorithm switches from one variant to the other after a fixed number of iterations.
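The fixed schedule reduces to a single function; the half-and-half split below reflects the statement that both variants perform the same number of iterations (the phase names are ours):

```python
def pso_fixed_phase(iteration, max_iterations):
    # First half: CPSO with the star structure; second half: SPSO-BK with the ring.
    return "CPSO-star" if iteration < max_iterations // 2 else "SPSOBK-ring"
```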

3.2. Particle Swarm Optimization with Dynamic Phases. The other PSOMP variant is the PSO with dynamic phases (PSODP) algorithm. As the name indicates, the phase can be dynamically changed during the search process. As with the PSOFP algorithm, the PSODP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. However, the number of iterations per phase is not fixed during the search. The process of the PSO algorithm with multiple dynamic phases is shown in Figure 5. The global best (gbest) position is the best solution found so far. The gbest not changing for k iterations indicates that the algorithm may be stuck in a local optimum, and the PSO variant will be changed after this phenomenon occurs. For the setting of the PSODP algorithm, the search phase changes from the CPSO with the star structure to the SPSO-BK with the ring structure. The condition for the phase change is that the gbest is stuck in a local optimum, i.e., the gbest has not changed for 100 iterations.
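The stagnation trigger described above can be sketched as a small counter object (a sketch, with k = 100 matching the paper's setting; the class name is ours):

```python
class StagnationTrigger:
    """Fires when gbest has not improved for k consecutive iterations."""

    def __init__(self, k=100):
        self.k = k
        self.best = float("inf")
        self.stalled = 0

    def observe(self, gbest_value):
        if gbest_value < self.best:      # gbest improved: reset the counter
            self.best = gbest_value
            self.stalled = 0
        else:                            # no improvement this iteration
            self.stalled += 1
        return self.stalled >= self.k    # True -> switch the search phase
```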

(1) Initialize each particle's velocity and position with random numbers.
(2) while the maximum iteration is not reached and no satisfactory solution is found do
(3)   Calculate each solution's function value.
(4)   Compare the function value of the current position with that of the best position in history. For each particle, if the current position has a better function value than pbest, then update pbest to the current position.
(5)   Select the particle with the best fitness value from the current particle's neighborhood; this particle is termed the neighborhood best (nbest).
(6)   for each particle do
(7)     Update the particle's velocity according to equation (1).
(8)     Update the particle's position according to equation (2).

ALGORITHM 1: Basic process of the PSO algorithm.
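Algorithm 1 with the global star topology (where nbest is the swarm-wide best) can be condensed into a short routine. This is an illustrative sketch, not the paper's implementation: the sizes, iteration count, and the sphere function standing in for f are our choices, and the coefficients are the SPSO-BK constants from Section 4:

```python
import numpy as np

def pso(f, dim=10, swarm=20, iters=200, w=0.72984, c=1.496172, seed=0):
    """Minimal star-topology PSO following the steps of Algorithm 1."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-100, 100, (swarm, dim))   # step (1): random positions
    v = np.zeros((swarm, dim))                 # and zero initial velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    for _ in range(iters):                     # step (2): main loop
        fx = np.array([f(p) for p in x])       # step (3): evaluate
        better = fx < pval                     # step (4): update pbest
        pbest[better], pval[better] = x[better], fx[better]
        nbest = pbest[pval.argmin()]           # step (5): star topology -> gbest
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        v = w * v + c * r1 * (pbest - x) + c * r2 * (nbest - x)  # step (7)
        x = x + v                                                # step (8)
    return float(pval.min())

best = pso(lambda z: float((z ** 2).sum()))  # sphere function as a stand-in for f
```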

Figure 1: The framework of the canonical PSO algorithm (initialization, solution evaluation, update of pbest and nbest, update of velocity and position, and the stop condition).


4. Experimental Study

4.1. Benchmark Test Functions and Parameter Settings. Table 1 shows the benchmark functions that have been used in the experimental study. To illustrate the search ability of the proposed algorithms, a diverse set of 12 benchmark functions of different types is used to conduct the experiments. These benchmark functions can be classified into two groups: the first five functions, f0–f4, are unimodal, while the other seven functions are multimodal. All functions are run 50 times to ensure the reasonable statistical results necessary to compare the different approaches, and the value of the global optimum is shifted to a different fmin for each function. The function value of the best solution found by an algorithm in a run is denoted by f(x_best). The error of each run is denoted as error = f(x_best) - fmin.

Three kinds of standard or canonical PSO algorithms are tested in the experimental study. Each algorithm has two kinds of structure: the global star structure and the local ring structure. There are 50 particles in each group of PSO algorithms. Each algorithm runs 50 times, with 10,000 iterations in every run. The other settings for the three standard algorithms are as follows:

Figure 2: The two structures used in this paper: (a) the global star structure, where all particles are connected; (b) the local ring structure, where each particle is connected to two nearby neighbors.

Figure 3: Some changeable components in PSO algorithms: the parameters (the acceleration constants c1 and c2 and the inertia weight w) and the topology structure (the global star structure, the local ring structure, and other structures).

(1) Initialize each particle's velocity and position with random numbers.
(2) while the maximum iteration is not reached and no satisfactory solution is found do
(3)   Calculate each solution's function value.
(4)   Compare the function value of the current position with that of the best position in history (the personal best, termed pbest). For each particle, if the current position has a better function value than pbest, then update pbest to the current position.
(5)   Select the particle with the best fitness value from the current particle's neighborhood; this particle is termed the neighborhood best (nbest).
(6)   for each particle do
(7)     Update the particle's velocity according to equation (1).
(8)     Update the particle's position according to equation (2).
(9)   Change the search phase: update the structure and/or parameter settings.

ALGORITHM 2: Basic procedure of the PSO algorithm with the multiple phase strategy.
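Step (9) is the only difference from Algorithm 1; it can be modeled as an ordered table of phases plus a transition function (the phase entries and names here are illustrative, not a prescribed configuration from the paper):

```python
# Ordered phases: topology and acceleration constants per phase (illustrative values).
PHASES = [
    {"name": "CPSO-star", "topology": "star", "c1": 2.0, "c2": 2.0},
    {"name": "SPSOBK-ring", "topology": "ring", "c1": 1.496172, "c2": 1.496172},
]

def next_phase(current, switch_condition_met):
    # Step (9): advance to the next phase when the switching condition holds,
    # staying in the final phase once it is reached.
    return min(current + 1, len(PHASES) - 1) if switch_condition_met else current
```

In PSOFP the switching condition is the fixed iteration budget; in PSODP it is the gbest stagnation test.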


(i) SPSO-BK settings: w = 0.72984, φ1 = φ2 = 2.05, i.e., c1 = c2 = 1.496172 [24].
(ii) SPSO-C settings: w = 1/(2 ln(2)) ≈ 0.721, c1 = c2 = 0.5 + ln(2) ≈ 1.193 [25].
(iii) Canonical PSO (CPSO) settings: w = wmin + (wmax - wmin) × (Iteration - iter)/Iteration, i.e., w is linearly decreased during the search, with wmax = 0.9, wmin = 0.4, and c1 = c2 = 2 [26].
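The CPSO inertia-weight schedule in item (iii) can be checked numerically; this is a direct transcription, where `iter_t` is the current iteration and `total` is the iteration budget (argument names are ours):

```python
def cpso_inertia(iter_t, total, w_max=0.9, w_min=0.4):
    # w decreases linearly from w_max at iteration 0 to w_min at the final iteration.
    return w_min + (w_max - w_min) * (total - iter_t) / total
```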

4.2. Experimental Results. The experimental results for the functions with 50 dimensions are given in Tables 2 and 3: the results for the unimodal functions f0–f4 are given in Table 2, while the results for the multimodal functions f5–f11 are given in Tables 2 and 3. The aim of the experimental study is not to validate the search performance of the proposed algorithms but to discuss the combination of different search phases. This is a primary study on learning the characteristics of benchmark functions.

To validate the generalization ability of the PSO variants, the same experiments are conducted on the functions with 100 dimensions. All parameter settings are the same as for the functions with 50 dimensions. The results for the unimodal functions f0–f4 are given in Table 4, while the results for the multimodal functions f5–f11 are given in Tables 4 and 5. From the experimental results, it can be seen that the proposed algorithms obtain a good solution for most benchmark functions. Based on the combination of different phases, the proposed algorithms perform more robustly than the algorithms with a single phase.

The convergence graphs of the error values for each algorithm on all functions with 50 and 100 dimensions are shown in Figures 6 and 7, respectively. In these figures, each curve represents the variation of the mean error value over the iterations for each algorithm. The six PSO variants are tested together with the two proposed algorithms. The notations "S" and "R" indicate the star and ring structures in the PSO variants, respectively. For example, "SPSOBKS" and "SPSOBKR" indicate the SPSO-BK algorithm with the star and ring structures, respectively. The robustness of the proposed algorithms can also be observed from the convergence graphs.

4.3. Computational Time. Tables 6 and 7 give the total computing time (in seconds) of 50 runs in the experiments for the functions with 50 and 100 dimensions, respectively. Normally, the evaluation speed of the PSO algorithm with the ring structure is significantly slower than that of the same PSO algorithm with the star structure.

Figure 4: The process of the PSO algorithm with multiple fixed phases (the multiple fixed phases strategy is applied within the canonical PSO loop).

Figure 5: The process of the PSO algorithm with multiple dynamic phases (the multiple dynamic phases strategy is triggered when the gbest has not changed after k iterations).


Table 2: Experimental results for functions f0–f5 with 50 dimensions.

f0 parabolic / f1 Schwefel's P2.22
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | -450 | -450 | 4.52E-13 | -330 | -329.9999 | 1.35E-11
SPSO-BK (ring) | -449.9999 | -449.9999 | 2.11E-12 | -329.9999 | -329.9999 | 4.37E-12
SPSO-C (star) | -449.9999 | -449.9999 | 4.12E-09 | -329.9999 | -329.9999 | 3.08E-05
SPSO-C (ring) | -449.9999 | -449.9999 | 3.13E-12 | -329.9999 | -329.9999 | 6.01E-12
CPSO (star) | -450 | -450 | 5.90E-14 | -330 | -329.8 | 1.4
CPSO (ring) | -373.3774 | -191.0975 | 98.981 | -329.8322 | -325.9999 | 2.151
PSOFP | -450 | -450 | 1.47E-12 | -329.9999 | -329.9999 | 6.25E-09
PSODP | -450 | -450 | 1.91E-12 | -329.9999 | -329.9999 | 1.45E-06

f2 Schwefel's P1.2 / f3 step
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 450.0000 | 550.0000 | 699.9999 | 180 | 188.36 | 13.936
SPSO-BK (ring) | 455.3668 | 495.6626 | 38.833 | 266 | 360.46 | 50.708
SPSO-C (star) | 450.0000 | 450.3961 | 2.748 | 207 | 343.66 | 106.891
SPSO-C (ring) | 450.0958 | 451.3167 | 2.5006 | 373 | 714.3 | 167.178
CPSO (star) | 476.3968 | 602.0931 | 69.7433 | 180 | 180 | 0
CPSO (ring) | 3121.6019 | 4966.1359 | 769.950 | 395 | 534.64 | 70.468
PSOFP | 450.0476 | 454.3896 | 29.0223 | 180 | 180.4 | 2.107
PSODP | 450.0013 | 460.0571 | 34.158 | 180 | 187.54 | 26.872

f4 quartic noise / f5 Rosenbrock
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 120.0022 | 120.0044 | 0.001 | -449.9976 | -418.4438 | 24.245
SPSO-BK (ring) | 120.0281 | 120.0630 | 0.023 | -444.3137 | -415.0037 | 9.573
SPSO-C (star) | 120.0408 | 120.1508 | 0.070 | -449.9999 | -448.3351 | 2.378
SPSO-C (ring) | 120.0467 | 120.2094 | 0.085 | -443.4564 | -411.7220 | 17.957
CPSO (star) | 120.0037 | 120.0090 | 0.002 | -421.5111 | -389.7360 | 27.273
CPSO (ring) | 120.0194 | 120.0321 | 0.006 | -218.8776 | 193.7953 | 156.862
PSOFP | 120.0026 | 120.0061 | 0.0024 | -439.0752 | -395.1315 | 31.0671
PSODP | 120.0029 | 120.0086 | 0.0076 | -444.2963 | -405.6357 | 25.7581

Table 1: The benchmark functions used in our experimental study, where n is the dimension of each problem, x* is the global optimum, fmin is the minimum value of the function, and S ⊆ R^n.

Function | Test function | n | S | fmin
Parabolic | f0(x) = Σ_{i=1}^n x_i^2 | 100 | [-100, 100]^n | -450.0
Schwefel's P2.22 | f1(x) = Σ_{i=1}^n |x_i| + Π_{i=1}^n |x_i| | 100 | [-10, 10]^n | -330.0
Schwefel's P1.2 | f2(x) = Σ_{i=1}^n (Σ_{k=1}^i x_k)^2 | 100 | [-100, 100]^n | 450.0
Step | f3(x) = Σ_{i=1}^n (⌊x_i + 0.5⌋)^2 | 100 | [-100, 100]^n | 180.0
Quartic noise | f4(x) = Σ_{i=1}^n i·x_i^4 + random[0, 1) | 100 | [-1.28, 1.28]^n | 120.0
Rosenbrock | f5(x) = Σ_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2] | 100 | [-10, 10]^n | -450.0
Schwefel | f6(x) = Σ_{i=1}^n -x_i sin(√|x_i|) + 418.9829n | 100 | [-500, 500]^n | -330.0
Rastrigin | f7(x) = Σ_{i=1}^n [x_i^2 - 10 cos(2πx_i) + 10] | 100 | [-5.12, 5.12]^n | 450.0
Noncontinuous Rastrigin | f8(x) = Σ_{i=1}^n [y_i^2 - 10 cos(2πy_i) + 10], where y_i = x_i if |x_i| < 1/2 and y_i = round(2x_i)/2 if |x_i| ≥ 1/2 | 100 | [-5.12, 5.12]^n | 180.0
Ackley | f9(x) = -20 exp(-0.2 √((1/n) Σ_{i=1}^n x_i^2)) - exp((1/n) Σ_{i=1}^n cos(2πx_i)) + 20 + e | 100 | [-32, 32]^n | 120.0
Griewank | f10(x) = (1/4000) Σ_{i=1}^n x_i^2 - Π_{i=1}^n cos(x_i/√i) + 1 | 100 | [-600, 600]^n | -450.0
Generalized penalized | f11(x) = (π/n){10 sin^2(πy_1) + Σ_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n - 1)^2} + Σ_{i=1}^n u(x_i, 10, 100, 4), where y_i = 1 + (1/4)(x_i + 1) and u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a < x_i < a; k(-x_i - a)^m if x_i < -a | 100 | [-50, 50]^n | -330.0


Table 3: Experimental results for functions f6–f11 with 50 dimensions.

f6 Schwefel / f7 Rastrigin
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 6603.8424 | 9786.6971 | 1359.137 | 497.7579 | 523.1492 | 13.619
SPSO-BK (ring) | 16100.4628 | 17437.3017 | 673.960 | 497.7580 | 530.6957 | 14.832
SPSO-C (star) | 7684.6670 | 9898.3600 | 1218.2453 | 486.8134 | 520.5225 | 15.146
SPSO-C (ring) | 16299.0367 | 17523.4040 | 548.640 | 499.7950 | 530.1292 | 14.646
CPSO (star) | 6934.3602 | 8761.8199 | 854.741 | 495.7680 | 529.6818 | 22.117
CPSO (ring) | 15875.0546 | 17592.1651 | 661.051 | 578.8806 | 631.1157 | 16.836
PSOFP | 6144.7441 | 8472.2288 | 1171.500 | 504.7226 | 528.2633 | 13.507
PSODP | 5950.1707 | 8369.7773 | 1038.811 | 493.7781 | 525.2104 | 20.126

f8 noncontinuous Rastrigin / f9 Ackley
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 186 | 224.58 | 22.039 | 120 | 121.773 | 0.649
SPSO-BK (ring) | 232.0000 | 262.7717 | 14.463 | 120.0000 | 120.0213 | 0.149
SPSO-C (star) | 227 | 269.985 | 21.175 | 121.8030 | 124.3168 | 1.090
SPSO-C (ring) | 239.0000 | 277.1382 | 18.542 | 120.0000 | 120.3807 | 0.558
CPSO (star) | 211 | 240.6001 | 14.448 | 120 | 120 | 7.15E-14
CPSO (ring) | 279.2935 | 321.5721 | 17.248 | 122.7702 | 124.2613 | 0.529
PSOFP | 210 | 237.3073 | 19.334 | 120 | 120.0499 | 0.248
PSODP | 200 | 241.0076 | 20.067 | 120 | 120.0484 | 0.237

f10 Griewank / f11 generalized penalized
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | -450 | -449.9769 | 0.033 | -330 | -329.7654 | 0.330
SPSO-BK (ring) | -449.9999 | -449.9997 | 0.002 | -329.9999 | -329.9987 | 0.008
SPSO-C (star) | -449.9999 | -449.8900 | 0.228 | -329.9999 | -329.0989 | 1.123
SPSO-C (ring) | -449.9999 | -449.9989 | 0.004 | -329.9999 | -329.9912 | 0.039
CPSO (star) | -450 | -449.9902 | 0.012 | -330 | -329.9975 | 0.012
CPSO (ring) | -448.2587 | -445.2495 | 1.087 | -329.2803 | -327.6535 | 0.783
PSOFP | -450 | -449.9874 | 0.019 | -330 | -329.9763 | 0.051
PSODP | -450 | -449.9902 | 0.017 | -330 | -329.9763 | 0.055

Table 4: Experimental results for functions f0–f5 with 100 dimensions.

f0 parabolic / f1 Schwefel's P2.22
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | -450 | -449.9999 | 2.97E-10 | -329.9999 | -329.9999 | 1.90E-07
SPSO-BK (ring) | -449.9999 | -449.9999 | 2.72E-11 | -329.9999 | -329.9999 | 7.34E-11
SPSO-C (star) | -449.9999 | -449.9947 | 0.0369 | -329.9999 | -329.8556 | 0.4552
SPSO-C (ring) | -449.9999 | -449.9999 | 1.40E-11 | -329.9999 | -329.9999 | 3.09E-11
CPSO (star) | -449.9999 | -449.9999 | 2.13E-09 | -329.9999 | -329.5999 | 1.9595
CPSO (ring) | -149.6541 | 1344.4583 | 532.677 | -325.8224 | -309.5554 | 7.5546
PSOFP | -449.9999 | -449.9999 | 1.36E-07 | -329.9999 | -329.9999 | 1.36E-09
PSODP | -449.9999 | -449.9999 | 3.34E-07 | -329.9999 | -329.9999 | 0.0006

f2 Schwefel's P1.2 / f3 step
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 513.6355 | 1602.1760 | 300.6812 | 208 | 377.32 | 137.165
SPSO-BK (ring) | 2426.1604 | 5355.3297 | 156.5104 | 671 | 978.24 | 174.267
SPSO-C (star) | 555.4292 | 2998.8775 | 303.7685 | 1208 | 2683.48 | 755.401
SPSO-C (ring) | 1188.1057 | 2075.1955 | 76.9974 | 1585 | 2448.46 | 531.394
CPSO (star) | 5145.9671 | 10822.5242 | 342.3207 | 180 | 181.7 | 1.3
CPSO (ring) | 2078.2011 | 2791.7191 | 316.8315 | 1791 | 2506.12 | 287.989
PSOFP | 1399.1752 | 3987.6253 | 575.0961 | 182 | 199.76 | 28.642
PSODP | 989.7956 | 2861.3122 | 565.5407 | 185 | 215.36 | 115.588

f4 quartic noise / f5 Rosenbrock
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 120.0138 | 120.0278 | 0.0110 | -397.0471 | -329.9329 | 41.398
SPSO-BK (ring) | 120.1851 | 120.3265 | 0.1013 | -373.0928 | -346.4626 | 25.371
SPSO-C (star) | 120.5708 | 121.3554 | 0.4501 | -397.2747 | -304.3235 | 405.292
SPSO-C (ring) | 120.4383 | 120.9911 | 0.3064 | -371.9901 | -339.0829 | 28.481
CPSO (star) | 120.0379 | 120.0658 | 0.014 | -363.4721 | -294.8193 | 50.003
CPSO (ring) | 120.1338 | 120.2321 | 0.043 | 606.9444 | 3907.8062 | 1468.905
PSOFP | 120.0224 | 120.0438 | 0.014 | -380.2421 | -290.0060 | 47.549
PSODP | 120.0219 | 120.0518 | 0.036 | -368.7163 | -299.5108 | 43.518


Table 5: Experimental results for functions f6–f11 with 100 dimensions.

f6 Schwefel / f7 Rastrigin
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 16056.678 | 20638.485 | 2182.500 | 569.3949 | 611.7217 | 25.514
SPSO-BK (ring) | 34073.736 | 37171.637 | 944.548 | 560.5540 | 627.2546 | 25.699
SPSO-C (star) | 16879.295 | 20909.081 | 2451.469 | 552.4806 | 598.6068 | 23.697
SPSO-C (ring) | 34864.588 | 37164.686 | 841.829 | 569.4197 | 624.0863 | 22.440
CPSO (star) | 15445.328 | 19374.588 | 1691.740 | 585.3142 | 639.1086 | 30.012
CPSO (ring) | 34235.521 | 37141.853 | 835.129 | 943.5140 | 1021.6837 | 37.831
PSOFP | 14734.074 | 18723.674 | 1738.027 | 583.3243 | 648.9117 | 26.8680
PSODP | 14094.634 | 18899.269 | 1994.712 | 574.8307 | 639.9075 | 28.179

f8 noncontinuous Rastrigin / f9 Ackley
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | 212.0000 | 275.1500 | 30.6297 | 122.5793 | 123.5817 | 0.7107
SPSO-BK (ring) | 307.2505 | 358.3455 | 27.633 | 120.0000 | 121.8010 | 0.4814
SPSO-C (star) | 288.0000 | 374.4500 | 41.1362 | 126.3445 | 128.3848 | 1.1134
SPSO-C (ring) | 332.2500 | 395.7141 | 34.1028 | 120.0000 | 121.8958 | 0.5702
CPSO (star) | 322.0000 | 375.1452 | 31.873 | 120.0000 | 120.0000 | 0.0001
CPSO (ring) | 580.5314 | 660.6651 | 34.490 | 123.0676 | 126.4343 | 0.7101
PSOFP | 357 | 360.7510 | 38.267 | 120.0000 | 121.6460 | 0.485
PSODP | 280 | 355.1481 | 29.234 | 120.0000 | 121.6061 | 0.562

f10 Griewank / f11 generalized penalized
Algorithm | Best | Mean | StD | Best | Mean | StD
SPSO-BK (star) | -450 | -449.8481 | 0.2569 | -330 | -329.6599 | 0.3758
SPSO-BK (ring) | -449.9999 | -449.9994 | 0.0022 | -329.9999 | -329.9880 | 0.0334
SPSO-C (star) | -449.9999 | -449.4670 | 0.6696 | -329.9999 | -329.2211 | 0.8798
SPSO-C (ring) | -449.9999 | -449.9995 | 0.0024 | -329.9999 | -329.9526 | 0.0640
CPSO (star) | -449.9999 | -449.9933 | 0.0092 | -329.9999 | -329.9707 | 0.0432
CPSO (ring) | -437.0433 | -428.5081 | 2.0322 | -320.4856 | -312.0457 | 4.2756
PSOFP | -449.9999 | -449.9817 | 0.032 | -329.9999 | -329.9069 | 0.151
PSODP | -449.9999 | -449.9828 | 0.031 | -329.9999 | -329.9224 | 0.108

[Figure 6, panels (a)–(c): convergence curves of the error (log10 scale) over 10,000 iterations for f0 parabolic, f1 Schwefel's P2.22, and f2 Schwefel's P1.2. Legend: SPSOBKS, SPSOBKR, SPSOCS, SPSOCR, CPSOS, CPSOR, PSOFP, PSODP.]

The PSOFP and PSODP algorithms run slower than the traditional PSO algorithms on the benchmark functions. The reason is that some search resources are allocated to the changes of search phases: a PSO variant with multiple phases must determine whether to switch phases and when to switch.

[Figure 6, panels (d)–(l): convergence curves of the error (log10 scale) over 10,000 iterations for f3 step, f4 quartic noise, f5 Rosenbrock, f6 Schwefel, f7 Rastrigin, f8 noncontinuous Rastrigin, f9 Ackley, f10 Griewank, and f11 generalized penalized. Legend: SPSOBKS, SPSOBKR, SPSOCS, SPSOCR, CPSOS, CPSOR, PSOFP, PSODP.]

Figure 6: Convergence curves of the error values for functions with 50 dimensions: (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.


[Figure 7, panels (a)–(i): convergence curves of the error (log10 scale) over 10,000 iterations for f0 parabolic, f1 Schwefel's P2.22, f2 Schwefel's P1.2, f3 step, f4 quartic noise, f5 Rosenbrock, f6 Schwefel, f7 Rastrigin, and f8 noncontinuous Rastrigin with 100 dimensions. Legend: SPSOBKS, SPSOBKR, SPSOCS, SPSOCR, CPSOS, CPSOR, PSOFP, PSODP.]

[Figure 7, panels (j)–(l): convergence curves of the error (log10 scale) over 10,000 iterations for f9 Ackley, f10 Griewank, and f11 generalized penalized with 100 dimensions. Legend: SPSOBKS, SPSOBKR, SPSOCS, SPSOCR, CPSOS, CPSOR, PSOFP, PSODP.]

Figure 7: Convergence curves of the error values for functions with 100 dimensions: (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.

Table 6: Computational time (seconds) spent in 50 runs for the functions with 50 dimensions.

            SPSO-BK              SPSO-C               CPSO
Function    Star      Ring      Star      Ring       Star      Ring       PSOFP     PSODP
f0          1414753   11949538  1542324   13938880   1273165   7790309    6372494   4750059
f1          1496900   9773025   1563205   12035832   1382238   7332784    6681761   4968441
f2          2482647   6639135   2310108   3269424    2269095   7721789    2913018   3078365
f3          1478375   16921071  1521913   17374540   1382383   7162581    6652441   5444709
f4          1925499   7772614   1919068   8520861    1960920   4139658    4326319   4405604
f5          1318303   2036976   1373674   1463761    1378663   7669009    2520464   1975675
f6          2470545   5990113   2623676   6033503    2290871   4877298    2256345   2412337
f7          2113601   5688381   2121822   5068316    1886031   7616392    7467872   5938517
f8          2100288   11884555  2231706   11809223   2136926   7658513    6080168   5191364
f9          2136390   8375839   2032908   11794232   1820255   7555640    7641055   5421310
f10         2065978   12449973  2116536   14096724   1940266   8291956    7211344   5502234
f11         3416110   12300763  3401308   12956684   3275602   9198441    8640743   6978836

Table 7: Computational time (seconds) spent in 50 runs for the functions with 100 dimensions.

            SPSO-BK              SPSO-C               CPSO
Function    Star      Ring      Star      Ring       Star      Ring       PSOFP     PSODP
f0          2439461   13788133  2387528   19368303   2288345   14618672   6290300   7872999
f1          2631645   5947633   2784290   14200705   2558799   13713888   2605299   2597703
f2          6224394   15858146  6286117   9271755    6211243   16247289   8695341   8725241
f3          2780287   32980839  2920504   33031495   2611934   13673627   13038592  9797108
f4          3580799   14603739  3560119   15913160   3603514   7116922    7213646   6463844
f5          2365184   3113102   2482506   2493193    2374182   14567880   2515607   2592736
f6          4488993   11025489  4660795   11042448   4348893   8740973    4406255   4224402
f7          3415798   6637398   3443478   7308557    3370671   13815354   8914041   8854871
f8          3687186   13082763  4055594   11594827   3806574   13896927   10236844  8863957
f9          3451321   7325952   3582724   12292192   3369159   14677248   7434061   8147588
f10         3779289   15561002  3928346   21955736   3782778   16415258   8235398   8805851
f11         6193468   9502162   6356353   12206493   6482199   19206593   8620097   9624388


5. Conclusions

Swarm intelligence algorithms perform in varied manners on the same optimization problem. Even for one kind of optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to obtain the connection between an algorithm and a problem in advance. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building block thesis, a PSO algorithm with a multiple phase strategy was proposed in this paper to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phases (PSOFP) algorithm and the PSO with dynamic phases (PSODP) algorithm, were tested on the 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods. However, the number of phases could be increased, or the type of phases could be changed, in other PSO algorithms with multiple phases. Beyond the PSO algorithms, the building block thesis could be utilized in other swarm optimization algorithms. Based on the analysis of different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple phase strategy with other swarm algorithms is our future work.

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942–1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81–86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362–1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238–2251, 2016.

[11] X. Xia, L. Gui, F. Yu, et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862–4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43–60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang, et al., Eds., Springer International Publishing, New York, NY, USA, pp. 347–357, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227–239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373–391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050–1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120–127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945–1950, Washington, DC, USA, July 1999.


building block thesis. In the PSO algorithm, there are several components. Different settings of the parameters, structures, and strategies of the PSO algorithm could perform differently on the same problem. Thus, a proper setting of the optimization algorithm is vital for the solved problem. However, there is no favorable method that could find the best setting before the optimization. Many approaches have been introduced to analyze the components of the PSO algorithm; for example, the setting of the population size in the PSO algorithm was surveyed and discussed in [20].

Based on the building block thesis, a PSO algorithm with multiple phases was proposed in this paper. This paper could be useful for understanding the search components of the PSO algorithm and for designing PSO algorithms for specific problems. This paper has two targets:

(1) For the theoretical analysis of the PSO algorithm, the effectiveness of different components of various PSO algorithms, such as the inertia weight, acceleration coefficients, and topology structures, is studied.

(2) For the application of the PSO algorithm, more effective PSO algorithms could be designed for solving different real-world applications.

The remainder of this paper is organized as follows. The basic PSO algorithm and topology structures are briefly introduced in Section 2. Section 3 gives an introduction to the proposed PSO algorithms with multiple phases. In Section 4, comprehensive experimental studies are conducted on 12 benchmark functions with 50 and 100 dimensions to verify the effectiveness of the proposed algorithms. Finally, Section 5 concludes with some remarks and future research directions.

2. Background

The PSO algorithm is a widely used swarm intelligence algorithm for optimization. It is easy to understand and implement. A potential solution, which is termed a particle in the PSO algorithm, is a search point in the D-dimensional solution space. Each particle is associated with two vectors, i.e., the velocity vector and the position vector. The particles are indexed from 1 to S, with the index of a particle (or solution) represented by the notation i; the dimensions are indexed from 1 to D, with the index of a dimension represented by the notation d. S is the total number of particles, and D is the total number of dimensions. The position of the ith particle is represented as x_i = [x_{i1}, x_{i2}, …, x_{id}, …, x_{iD}], where x_{id} is the value of the dth dimension of the ith solution, i = 1, 2, …, S and d = 1, 2, …, D. The velocity of a particle is labeled as v_i = [v_{i1}, v_{i2}, …, v_{id}, …, v_{iD}].

2.1. Particle Swarm Optimization Algorithm. The basic process of the PSO algorithm is given in Algorithm 1. There are two update equations, one for the velocity and one for the position of each particle, in the basic process of the PSO algorithm. The framework of the canonical PSO algorithm is shown in Figure 1. The fitness value is evaluated over all dimensions, while the velocity and position updates are applied dimension by dimension. The update equations for the velocity v_{id} and the position x_{id} are as follows [17, 21]:

v_{id}^{t+1} = w\, v_{id}^{t} + c_1\,\mathrm{rand}()\,\bigl(p_{id}^{t} - x_{id}^{t}\bigr) + c_2\,\mathrm{rand}()\,\bigl(p_{nd}^{t} - x_{id}^{t}\bigr),    (1)

x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1},    (2)

where w denotes the inertia weight, c_1 and c_2 are two positive acceleration constants, and rand() is a random function that generates uniformly distributed random numbers in the range [0, 1). p_i^t = [p_{i1}^t, p_{i2}^t, …, p_{id}^t, …, p_{iD}^t] is termed the personal best at the tth iteration, which refers to the best position found so far by the ith particle, and p_n^t = [p_{n1}^t, p_{n2}^t, …, p_{nd}^t, …, p_{nD}^t] is termed the local best, which refers to the position found so far by the member of the ith particle's neighborhood that has the best fitness value. The parameter w was introduced to balance the global search and local search abilities. The PSO algorithm with the inertia weight is termed the canonical/classical PSO.

The position of a particle is updated in the search space at each iteration. The velocity update equation (1) has three components: the previous velocity, the cognitive part, and the social part. The cognitive part indicates that a particle learns from its own search experience; correspondingly, the social part indicates that a particle can learn from other particles, i.e., from the good solutions in its neighborhood.
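As a concrete illustration, equations (1) and (2) can be written as a short routine. This is a sketch under our own naming (the function name and the SPSO-BK-style default constants are illustrative), not the authors' implementation:

```python
import random

def update_particle(v, x, pbest, nbest, w=0.72984, c1=1.496172, c2=1.496172,
                    rng=random.random):
    """Apply equations (1) and (2) to one particle, dimension by dimension."""
    new_v = [w * v[d]
             + c1 * rng() * (pbest[d] - x[d])   # cognitive part
             + c2 * rng() * (nbest[d] - x[d])   # social part
             for d in range(len(x))]
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_v, new_x
```

Note that the two random coefficients are redrawn independently for every dimension, matching the dimension-wise update described above.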

2.2. Topology Structure. Different kinds of topology structures, such as the global star, local ring, four clusters, or von Neumann structure, could be utilized in PSO variants to solve various problems. A particle in a PSO with a different structure has a different number of particles in its neighborhood, with a different scope. Learning from a different neighbor means that a particle follows a different neighborhood (or local) best; in other words, the topology structure determines the connections among particles and the strategy used to propagate search information over the iterations.

A PSO algorithm's abilities of exploration and exploitation could be affected by its topology structure; i.e., with different structures, the algorithm's convergence speed and its ability to avoid premature convergence will vary on the same optimization problem, because the topology structure determines the speed and direction of search information sharing for each particle. The global star and the local ring are the two typical topology structures. A PSO with a global star structure, where all particles are connected, has the smallest average distance in the swarm; on the contrary, a PSO with a local ring structure, where every particle is connected to two near particles, has the largest average distance in the swarm [22].

The global star structure and the local ring structure, which are two commonly used structures, are analyzed in the experimental study. These two structures are shown in Figure 2. Each group of particles has 16 individuals. It should be noted that the nearby particles in the local structure are determined mostly by the indices of the particles.
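The two neighborhood definitions can be made concrete with a small helper. This is a sketch under our own naming, assuming minimization and the index-based ring just described:

```python
def neighborhood_best(i, fitness, topology="ring"):
    """Return the index of the best particle in particle i's neighborhood.

    Star: every particle is a neighbor, so nbest coincides with the global best.
    Ring: only the two index-adjacent particles (and i itself) are neighbors.
    """
    s = len(fitness)
    if topology == "star":
        neighbors = range(s)
    else:  # ring, indices wrap around at the ends
        neighbors = [(i - 1) % s, i, (i + 1) % s]
    return min(neighbors, key=lambda j: fitness[j])  # minimization
```

Under the star structure every particle follows the same nbest, which is why search information spreads fastest there; under the ring it must travel from neighbor to neighbor.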

3. Particle Swarm Optimization with Multiple Phases

The search process could be divided into different phases, such as the exploration phase and the exploitation phase. A discussion of the genetic diversity of an evolutionary algorithm in the exploration phase has been given in [23]. To enhance the generalization ability of the PSO algorithm, multiple phases could be combined into one algorithm. The PSO algorithm is a combination of several components, and each component could be seen as a building block of a PSO variant. Some changeable components of PSO algorithms are shown in Figure 3. The setting of a component is independent of the other components. Normally, the parameter settings, topology structure, and other strategies should be determined before the search process. There are three well-known standard PSO algorithms, namely, the standard PSO algorithm by Bratton and Kennedy (SPSO-BK) [24], the standard PSO algorithm by Clerc (SPSO-C) [25], and the canonical PSO (CPSO) algorithm by Shi and Eberhart [26].

Different search phases are emphasized by different PSO variants because each variant performs differently when solving different kinds of problems. Thus, to utilize the strengths of different variants, a PSO algorithm could be a combination of several PSO variants. For example, the global star structure could be used at the beginning of the search to obtain a good exploration ability, while the local ring structure could be used at the end of the search to promote the exploitation ability. The basic procedure of the PSO algorithm with the multiple phase strategy (PSOMP) is given in Algorithm 2. The setting of phases could be fixed or dynamically changed during the search. To validate the performance of different settings, two kinds of PSO algorithms with multiple phases are proposed.

3.1. Particle Swarm Optimization with Fixed Phases. One PSOMP variant is the PSO with fixed phases (PSOFP) algorithm. The process of the PSO algorithm with multiple fixed phases is shown in Figure 4. Two standard PSO variants, the CPSO with the star structure and the SPSO-BK with the ring structure, are combined in the PSOFP algorithm. In the experimental study, both variants perform the same number of iterations. The PSOFP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. All parameters are fixed for each phase during the search. The PSOFP algorithm switches from one variant to the other after a fixed number of iterations.

3.2. Particle Swarm Optimization with Dynamic Phases. The other PSO variant is the PSO with dynamic phases (PSODP) algorithm. As the name indicates, the phase of this algorithm can be dynamically changed during the search process. Like the PSOFP algorithm, the PSODP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. However, the number of iterations per phase is not fixed during the search. The process of the PSO algorithm with multiple dynamic phases is shown in Figure 5. The global best (gbest) position is the best solution found so far. The gbest not changing for k iterations indicates that the algorithm may be stuck in a local optimum. The PSO variant is changed after this phenomenon occurs. For the setting of the PSODP algorithm, the search phase changes from the CPSO with the star structure to the SPSO-BK with the ring structure; the condition of the phase change is the gbest getting stuck in a local optimum, i.e., the gbest not changing for 100 iterations.
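The switching rule just described (change phase once gbest has stalled for k iterations) can be sketched as a small controller. The class and phase names here are illustrative and not taken from the paper's code:

```python
class DynamicPhaseController:
    """Sketch of the PSODP switching rule: if gbest has not improved for
    `patience` iterations, move from the exploration phase (CPSO, star
    structure) to the exploitation phase (SPSO-BK, ring structure)."""

    def __init__(self, patience=100):
        self.patience = patience      # k in the text (100 in the paper)
        self.stall = 0                # iterations since last gbest improvement
        self.best = float("inf")
        self.phase = "cpso_star"

    def observe(self, gbest_value):
        """Call once per iteration with the current gbest fitness."""
        if gbest_value < self.best:
            self.best = gbest_value
            self.stall = 0
        else:
            self.stall += 1
            if self.stall >= self.patience and self.phase == "cpso_star":
                self.phase = "spso_bk_ring"
        return self.phase
```

In a PSOFP-style variant the same loop would instead switch at a fixed iteration count, independent of whether gbest is still improving.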

(1) Initialize each particle's velocity and position with random numbers.
(2) while not reaching the maximum iteration or not finding a satisfactory solution do
(3)   Calculate each solution's function value.
(4)   Compare the function value of the current position with that of the best position in history (personal best, pbest). For each particle, if the current position has a better function value than pbest, then update pbest as the current position.
(5)   Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best (nbest).
(6)   for each particle do
(7)     Update the particle's velocity according to equation (1).
(8)     Update the particle's position according to equation (2).

ALGORITHM 1: Basic process of the PSO algorithm.
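Putting Algorithm 1 together, a minimal self-contained PSO loop might look like the following sketch. It uses the star topology (so nbest coincides with the swarm-wide best pbest), SPSO-BK-style constants as illustrative defaults, and function and variable names of our own; it is not the authors' implementation:

```python
import random

def pso(f, dim, bounds, swarm=20, iters=200, w=0.72984, c=1.496172, seed=1):
    """Minimal PSO (star topology) minimizing f over bounds^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    # (1) random initial positions, zero initial velocities
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(swarm), key=lambda i: pval[i])  # star: nbest = gbest
    for _ in range(iters):                        # (2) main loop
        for i in range(swarm):
            for d in range(dim):                  # (7)-(8) per-dimension update
                v[i][d] = (w * v[i][d]
                           + c * rng.random() * (pbest[i][d] - x[i][d])
                           + c * rng.random() * (pbest[g][d] - x[i][d]))
                x[i][d] += v[i][d]
            fi = f(x[i])                          # (3) evaluate
            if fi < pval[i]:                      # (4) update pbest
                pval[i] = fi
                pbest[i] = x[i][:]
                if fi < pval[g]:                  # (5) track the swarm best
                    g = i
    return pbest[g], pval[g]
```

On a smooth unimodal function such as the sphere, this sketch converges quickly toward the optimum with the default constants.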

[Flowchart: Start → Initialization → Solution evaluation → Update pbest and nbest → Update velocity and position → stop condition? (No: loop back to solution evaluation; Yes: End).]

Figure 1: The framework of the canonical PSO algorithm.


4. Experimental Study

4.1. Benchmark Test Functions and Parameter Settings. Table 1 shows the benchmark functions used in the experimental study. To illustrate the search ability of the proposed algorithms, a diverse set of 12 benchmark functions of different types is used to conduct the experiments. These benchmark functions can be classified into two groups: the first five functions, f0–f4, are unimodal, while the other seven functions are multimodal. All functions are run 50 times to ensure a reasonable statistical result necessary to compare the different approaches,

and the value of the global optimum is shifted to a different f_min for each function. The function value of the best solution found by an algorithm in a run is denoted by f(x_best). The error of each run is denoted as error = f(x_best) − f_min.

Three kinds of standard or canonical PSO algorithms are tested in the experimental study. Each algorithm has two kinds of structure: the global star structure and the local ring structure. There are 50 particles in each group of PSO algorithms. Each algorithm runs 50 times, with 10,000 iterations in every run. The settings for the three standard algorithms are as follows:

Figure 2: Two structures used in this paper: (a) the global star structure, where all particles are connected; (b) the local ring structure, where each particle is connected to two nearby neighbors.

[Diagram: changeable components of PSO algorithms — parameters (acceleration constants c1, c2; inertia weight w) and topology structure (global star structure, local ring structure, other structures).]

Figure 3: Some changeable components in PSO algorithms.

(1) Initialize each particle's velocity and position with random numbers.
(2) while not reaching the maximum iteration or not finding a satisfactory solution do
(3)   Calculate each solution's function value.
(4)   Compare the function value of the current position with that of the best position in history (personal best, termed pbest). For each particle, if the current position has a better function value than pbest, then update pbest as the current position.
(5)   Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best (nbest).
(6)   for each particle do
(7)     Update the particle's velocity according to equation (1).
(8)     Update the particle's position according to equation (2).
(9)   Change the search phase: update the structure and/or the parameter settings.

ALGORITHM 2: Basic procedure of the PSO algorithm with the multiple phase strategy.


(i) SPSO-BK settings: w = 0.72984, φ1 = φ2 = 2.05, i.e., c1 = c2 = 1.496172 [24].

(ii) SPSO-C settings: w = 1/(2 ln 2) ≈ 0.721, c1 = c2 = 0.5 + ln 2 ≈ 1.193 [25].

(iii) Canonical PSO (CPSO) settings: w = w_min + (w_max − w_min) × (Iteration − iter)/Iteration, i.e., w is linearly decreased during the search, with w_max = 0.9, w_min = 0.4, and c1 = c2 = 2 [26].
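The CPSO inertia-weight schedule in (iii) can be written directly as a one-line function. A sketch using the w_max = 0.9 and w_min = 0.4 values above (the function name is ours):

```python
def cpso_inertia(iter_no, max_iter, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of the canonical PSO:
    w = w_min + (w_max - w_min) * (max_iter - iter_no) / max_iter."""
    return w_min + (w_max - w_min) * (max_iter - iter_no) / max_iter
```

At iteration 0 this gives w = 0.9 (strong exploration); by the final iteration it has fallen to w = 0.4, shifting the weight toward exploitation.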

4.2. Experimental Results. The experimental results for the functions with 50 dimensions are given in Tables 2 and 3. The results for the unimodal functions f0–f4 are given in Table 2, while the results for the multimodal functions f5–f11 are given in Tables 2 and 3. The aim of the experimental study is not to validate the search performance of the proposed algorithms but to discuss the combination of different search phases. This is a preliminary study on learning the characteristics of benchmark functions.

To validate the generalization ability of the PSO variants, the same experiments are conducted on the functions with 100 dimensions. All parameter settings are the same as for the functions with 50 dimensions. The results for the unimodal functions f0–f4 are given in Table 4, while the results for the multimodal functions f5–f11 are given in Tables 4 and 5. From the experimental results, it can be seen that the proposed algorithms obtain good solutions for most benchmark functions. Based on the combination of different phases, the proposed algorithms perform more robustly than the algorithms with a single phase.

The convergence graphs of the error values for each algorithm on all functions with 50 and 100 dimensions are shown in Figures 6 and 7, respectively. In these figures, each curve represents the variation of the mean error value over the iterations for one algorithm. The six PSO variants are compared with the two proposed algorithms. The notations "S" and "R" indicate the star and ring structures in the PSO variants, respectively; for example, "SPSOBKS" and "SPSOBKR" indicate the SPSO-BK algorithm with the star and ring structures, respectively. The robustness of the proposed algorithms can also be observed from the convergence graphs.

4.3. Computational Time. Tables 6 and 7 give the total computing time (seconds) of 50 runs in the experiments for the functions with 50 and 100 dimensions, respectively. Normally, the PSO algorithm with the ring structure evaluates significantly slower than the same PSO algorithm with the star structure.

[Flowchart: the PSOFP loop — Start → Initialization → Solution evaluation → Update pbest and nbest → Update velocity and position, with the multiple fixed phases strategy applied before the stop condition (No: loop back; Yes: End).]

Figure 4: The process of the PSO algorithm with multiple fixed phases.

[Flowchart: the PSODP loop — Start → Initialization → Solution evaluation → Update pbest and nbest → Update velocity and position, with the multiple dynamic phases strategy triggered when gbest has not changed for k iterations, before the stop condition (No: loop back; Yes: End).]

Figure 5: The process of the PSO algorithm with multiple dynamic phases.


Table 2: Experimental results for functions f0 ~ f5 with 50 dimensions.

                  f0 parabolic                            f1 Schwefel's P2.22
Algorithm         Best       Mean       StD               Best       Mean       StD
SPSO-BK (Star)    −450       −450       4.52E-13          −330       −3299999   1.35E-11
SPSO-BK (Ring)    −4499999   −4499999   2.11E-12          −3299999   −3299999   4.37E-12
SPSO-C (Star)     −4499999   −4499999   4.12E-09          −3299999   −3299999   3.08E-05
SPSO-C (Ring)     −4499999   −4499999   3.13E-12          −3299999   −3299999   6.01E-12
CPSO (Star)       −450       −450       5.90E-14          −330       −3298      14
CPSO (Ring)       −3733774   −1910975   98981             −3298322   −3259999   2151
PSOFP             −450       −450       1.47E-12          −3299999   −3299999   6.25E-09
PSODP             −450       −450       1.91E-12          −3299999   −3299999   1.45E-06

                  f2 Schwefel's P1.2                      f3 step
Algorithm         Best       Mean       StD               Best       Mean       StD
SPSO-BK (Star)    4500000    5500000    6999999           180        18836      13936
SPSO-BK (Ring)    4553668    4956626    38833             266        36046      50708
SPSO-C (Star)     4500000    4503961    2748              207        34366      106891
SPSO-C (Ring)     4500958    4513167    25006             373        7143       167178
CPSO (Star)       4763968    6020931    697433            180        180        0
CPSO (Ring)       31216019   49661359   769950            395        53464      70468
PSOFP             4500476    4543896    290223            180        1804       2107
PSODP             4500013    4600571    34158             180        18754      26872

                  f4 quartic noise                        f5 Rosenbrock
Algorithm         Best       Mean       StD               Best       Mean       StD
SPSO-BK (Star)    1200022    1200044    0.001             −4499976   −4184438   24245
SPSO-BK (Ring)    1200281    1200630    0.023             −4443137   −4150037   9573
SPSO-C (Star)     1200408    1201508    0.070             −4499999   −4483351   2378
SPSO-C (Ring)     1200467    1202094    0.085             −4434564   −4117220   17957
CPSO (Star)       1200037    1200090    0.002             −4215111   −3897360   27273
CPSO (Ring)       1200194    1200321    0.006             −2188776   1937953    156862
PSOFP             1200026    1200061    0.0024            −4390752   −3951315   310671
PSODP             1200029    1200086    0.0076            −4442963   −4056357   257581

Table 1: The benchmark functions used in our experimental study, where n is the dimension of each problem, the global optimum is $x^{*}$, $f_{\min}$ is the minimum value of the function, and $S \subseteq \mathbb{R}^n$.

| Function | Test function | n | S | $f_{\min}$ |
|---|---|---|---|---|
| Parabolic | $f_0(x)=\sum_{i=1}^{n} x_i^2$ | 100 | $[-100,100]^n$ | -450.0 |
| Schwefel's P2.22 | $f_1(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 100 | $[-10,10]^n$ | -330.0 |
| Schwefel's P1.2 | $f_2(x)=\sum_{i=1}^{n}(\sum_{k=1}^{i} x_k)^2$ | 100 | $[-100,100]^n$ | 450.0 |
| Step | $f_3(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 100 | $[-100,100]^n$ | 180.0 |
| Quartic noise | $f_4(x)=\sum_{i=1}^{n} i x_i^4+\mathrm{random}[0,1)$ | 100 | $[-1.28,1.28]^n$ | 120.0 |
| Rosenbrock | $f_5(x)=\sum_{i=1}^{n-1}[100(x_{i+1}-x_i^2)^2+(x_i-1)^2]$ | 100 | $[-10,10]^n$ | -450.0 |
| Schwefel | $f_6(x)=\sum_{i=1}^{n}-x_i\sin(\sqrt{\lvert x_i\rvert})+418.9829n$ | 100 | $[-500,500]^n$ | -330.0 |
| Rastrigin | $f_7(x)=\sum_{i=1}^{n}[x_i^2-10\cos(2\pi x_i)+10]$ | 100 | $[-5.12,5.12]^n$ | 450.0 |
| Noncontinuous Rastrigin | $f_8(x)=\sum_{i=1}^{n}[y_i^2-10\cos(2\pi y_i)+10]$, where $y_i=x_i$ if $\lvert x_i\rvert<1/2$ and $y_i=\mathrm{round}(2x_i)/2$ if $\lvert x_i\rvert\ge 1/2$ | 100 | $[-5.12,5.12]^n$ | 180.0 |
| Ackley | $f_9(x)=-20\exp(-0.2\sqrt{(1/n)\sum_{i=1}^{n}x_i^2})-\exp((1/n)\sum_{i=1}^{n}\cos(2\pi x_i))+20+e$ | 100 | $[-32,32]^n$ | 120.0 |
| Griewank | $f_{10}(x)=(1/4000)\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos(x_i/\sqrt{i})+1$ | 100 | $[-600,600]^n$ | -450.0 |
| Generalized penalized | $f_{11}(x)=(\pi/n)\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2[1+10\sin^2(\pi y_{i+1})]+(y_n-1)^2\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, with $y_i=1+(1/4)(x_i+1)$ | 100 | $[-50,50]^n$ | -330.0 |

where

$$u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a,\\ 0, & -a<x_i<a,\\ k(-x_i-a)^m, & x_i<-a.\end{cases}$$
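For concreteness, a few of the Table 1 definitions can be written out in code. The following Python sketch is illustrative rather than the authors' implementation; the shift constants follow the $f_{\min}$ column of Table 1, so each function returns its shifted minimum at the global optimum $x^{*}=0$.

```python
import math

def parabolic(x):
    # f0: sphere function, optimum value shifted to -450.0
    return sum(v * v for v in x) - 450.0

def rastrigin(x):
    # f7: optimum value shifted to 450.0
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x) + 450.0

def ackley(x):
    # f9: optimum value shifted to 120.0
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e + 120.0

# At the global optimum x* = 0, each function returns its shifted fmin.
x0 = [0.0] * 100
```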


Table 3: Experimental results for functions f6 ∼ f11 with 50 dimensions.

f6 (Schwefel) and f7 (Rastrigin):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | 66038424 | 97866971 | 1359137 | 4977579 | 5231492 | 13619 |
| SPSO-BK (ring) | 161004628 | 174373017 | 673960 | 4977580 | 5306957 | 14832 |
| SPSO-C (star) | 76846670 | 98983600 | 12182453 | 4868134 | 5205225 | 15146 |
| SPSO-C (ring) | 162990367 | 175234040 | 548640 | 4997950 | 5301292 | 14646 |
| CPSO (star) | 69343602 | 87618199 | 854741 | 4957680 | 5296818 | 22117 |
| CPSO (ring) | 158750546 | 175921651 | 661051 | 5788806 | 6311157 | 16836 |
| PSOFP | 61447441 | 84722288 | 1171500 | 5047226 | 5282633 | 13507 |
| PSODP | 59501707 | 83697773 | 1038811 | 4937781 | 5252104 | 20126 |

f8 (noncontinuous Rastrigin) and f9 (Ackley):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | 186 | 22458 | 22039 | 120 | 121773 | 0.649 |
| SPSO-BK (ring) | 2320000 | 2627717 | 14463 | 1200000 | 1200213 | 0.149 |
| SPSO-C (star) | 227 | 269985 | 21175 | 1218030 | 1243168 | 1090 |
| SPSO-C (ring) | 2390000 | 2771382 | 18542 | 1200000 | 1203807 | 0.558 |
| CPSO (star) | 211 | 2406001 | 14448 | 120 | 120 | 7.15E-14 |
| CPSO (ring) | 2792935 | 3215721 | 17248 | 1227702 | 1242613 | 0.529 |
| PSOFP | 210 | 2373073 | 19334 | 120 | 1200499 | 0.248 |
| PSODP | 200 | 2410076 | 20067 | 120 | 1200484 | 0.237 |

f10 (Griewank) and f11 (generalized penalized):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | -450 | -4499769 | 0.033 | -330 | -3297654 | 0.330 |
| SPSO-BK (ring) | -4499999 | -4499997 | 0.002 | -3299999 | -3299987 | 0.008 |
| SPSO-C (star) | -4499999 | -4498900 | 0.228 | -3299999 | -3290989 | 1123 |
| SPSO-C (ring) | -4499999 | -4499989 | 0.004 | -3299999 | -3299912 | 0.039 |
| CPSO (star) | -450 | -4499902 | 0.012 | -330 | -3299975 | 0.012 |
| CPSO (ring) | -4482587 | -4452495 | 1087 | -3292803 | -3276535 | 0.783 |
| PSOFP | -450 | -4499874 | 0.019 | -330 | -3299763 | 0.051 |
| PSODP | -450 | -4499902 | 0.017 | -330 | -3299763 | 0.055 |

Table 4: Experimental results for functions f0 ∼ f5 with 100 dimensions.

f0 (parabolic) and f1 (Schwefel's P2.22):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | -450 | -4499999 | 2.97E-10 | -3299999 | -3299999 | 1.90E-07 |
| SPSO-BK (ring) | -4499999 | -4499999 | 2.72E-11 | -3299999 | -3299999 | 7.34E-11 |
| SPSO-C (star) | -4499999 | -4499947 | 0.0369 | -3299999 | -3298556 | 0.4552 |
| SPSO-C (ring) | -4499999 | -4499999 | 1.40E-11 | -3299999 | -3299999 | 3.09E-11 |
| CPSO (star) | -4499999 | -4499999 | 2.13E-09 | -3299999 | -3295999 | 19595 |
| CPSO (ring) | -1496541 | 13444583 | 532677 | -3258224 | -3095554 | 75546 |
| PSOFP | -4499999 | -4499999 | 1.36E-07 | -3299999 | -3299999 | 1.36E-09 |
| PSODP | -4499999 | -4499999 | 3.34E-07 | -3299999 | -3299999 | 0.0006 |

f2 (Schwefel's P1.2) and f3 (step):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | 5136355 | 16021760 | 3006812 | 208 | 37732 | 137165 |
| SPSO-BK (ring) | 24261604 | 53553297 | 1565104 | 671 | 97824 | 174267 |
| SPSO-C (star) | 5554292 | 29988775 | 3037685 | 1208 | 268348 | 755401 |
| SPSO-C (ring) | 11881057 | 20751955 | 769974 | 1585 | 244846 | 531394 |
| CPSO (star) | 51459671 | 108225242 | 3423207 | 180 | 1817 | 13 |
| CPSO (ring) | 20782011 | 27917191 | 3168315 | 1791 | 250612 | 287989 |
| PSOFP | 13991752 | 39876253 | 5750961 | 182 | 19976 | 28642 |
| PSODP | 9897956 | 28613122 | 5655407 | 185 | 21536 | 115588 |

f4 (quartic noise) and f5 (Rosenbrock):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | 1200138 | 1200278 | 0.0110 | -3970471 | -3299329 | 41398 |
| SPSO-BK (ring) | 1201851 | 1203265 | 0.1013 | -3730928 | -3464626 | 25371 |
| SPSO-C (star) | 1205708 | 1213554 | 0.4501 | -3972747 | -3043235 | 405292 |
| SPSO-C (ring) | 1204383 | 1209911 | 0.3064 | -3719901 | -3390829 | 28481 |
| CPSO (star) | 1200379 | 1200658 | 0.014 | -3634721 | -2948193 | 50003 |
| CPSO (ring) | 1201338 | 1202321 | 0.043 | 6069444 | 39078062 | 1468905 |
| PSOFP | 1200224 | 1200438 | 0.014 | -3802421 | -2900060 | 47549 |
| PSODP | 1200219 | 1200518 | 0.036 | -3687163 | -2995108 | 43518 |


Table 5: Experimental results for functions f6 ∼ f11 with 100 dimensions.

f6 (Schwefel) and f7 (Rastrigin):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | 16056678 | 20638485 | 2182500 | 5693949 | 6117217 | 25514 |
| SPSO-BK (ring) | 34073736 | 37171637 | 944548 | 5605540 | 6272546 | 25699 |
| SPSO-C (star) | 16879295 | 20909081 | 2451469 | 5524806 | 5986068 | 23697 |
| SPSO-C (ring) | 34864588 | 37164686 | 841829 | 5694197 | 6240863 | 22440 |
| CPSO (star) | 15445328 | 19374588 | 1691740 | 5853142 | 6391086 | 30012 |
| CPSO (ring) | 34235521 | 37141853 | 835129 | 9435140 | 10216837 | 37831 |
| PSOFP | 14734074 | 18723674 | 1738027 | 5833243 | 6489117 | 268680 |
| PSODP | 14094634 | 18899269 | 1994712 | 5748307 | 6399075 | 28179 |

f8 (noncontinuous Rastrigin) and f9 (Ackley):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | 2120000 | 2751500 | 306297 | 1225793 | 1235817 | 0.7107 |
| SPSO-BK (ring) | 3072505 | 3583455 | 27633 | 1200000 | 1218010 | 0.4814 |
| SPSO-C (star) | 2880000 | 3744500 | 411362 | 1263445 | 1283848 | 11134 |
| SPSO-C (ring) | 3322500 | 3957141 | 341028 | 1200000 | 1218958 | 0.5702 |
| CPSO (star) | 3220000 | 3751452 | 31873 | 1200000 | 1200000 | 0.0001 |
| CPSO (ring) | 5805314 | 6606651 | 34490 | 1230676 | 1264343 | 0.7101 |
| PSOFP | 357 | 3607510 | 38267 | 1200000 | 1216460 | 0.485 |
| PSODP | 280 | 3551481 | 29234 | 1200000 | 1216061 | 0.562 |

f10 (Griewank) and f11 (generalized penalized):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|---|---|---|---|---|---|---|
| SPSO-BK (star) | -450 | -4498481 | 0.2569 | -330 | -3296599 | 0.3758 |
| SPSO-BK (ring) | -4499999 | -4499994 | 0.0022 | -3299999 | -3299880 | 0.0334 |
| SPSO-C (star) | -4499999 | -4494670 | 0.6696 | -3299999 | -3292211 | 0.8798 |
| SPSO-C (ring) | -4499999 | -4499995 | 0.0024 | -3299999 | -3299526 | 0.0640 |
| CPSO (star) | -4499999 | -4499933 | 0.0092 | -3299999 | -3299707 | 0.0432 |
| CPSO (ring) | -4370433 | -4285081 | 20322 | -3204856 | -3120457 | 42756 |
| PSOFP | -4499999 | -4499817 | 0.032 | -3299999 | -3299069 | 0.151 |
| PSODP | -4499999 | -4499828 | 0.031 | -3299999 | -3299224 | 0.108 |


The PSOFP and PSODP algorithms run slower than the traditional PSO algorithms on the benchmark functions. The reason is that some search resources are spent on managing the search phases: a PSO variant with multiple phases must determine both whether to switch the phase and when to switch it.

Figure 6: Convergence curves of the error value for functions with 50 dimensions: (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.


Figure 7: Convergence curves of the error value for functions with 100 dimensions: (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.

Table 6: Computational time (seconds) spent in 50 runs for the functions with 50 dimensions.

| Function | SPSO-BK star | SPSO-BK ring | SPSO-C star | SPSO-C ring | CPSO star | CPSO ring | PSOFP | PSODP |
|---|---|---|---|---|---|---|---|---|
| f0 | 1414.753 | 11949.538 | 1542.324 | 13938.880 | 1273.165 | 7790.309 | 6372.494 | 4750.059 |
| f1 | 1496.900 | 9773.025 | 1563.205 | 12035.832 | 1382.238 | 7332.784 | 6681.761 | 4968.441 |
| f2 | 2482.647 | 6639.135 | 2310.108 | 3269.424 | 2269.095 | 7721.789 | 2913.018 | 3078.365 |
| f3 | 1478.375 | 16921.071 | 1521.913 | 17374.540 | 1382.383 | 7162.581 | 6652.441 | 5444.709 |
| f4 | 1925.499 | 7772.614 | 1919.068 | 8520.861 | 1960.920 | 4139.658 | 4326.319 | 4405.604 |
| f5 | 1318.303 | 2036.976 | 1373.674 | 1463.761 | 1378.663 | 7669.009 | 2520.464 | 1975.675 |
| f6 | 2470.545 | 5990.113 | 2623.676 | 6033.503 | 2290.871 | 4877.298 | 2256.345 | 2412.337 |
| f7 | 2113.601 | 5688.381 | 2121.822 | 5068.316 | 1886.031 | 7616.392 | 7467.872 | 5938.517 |
| f8 | 2100.288 | 11884.555 | 2231.706 | 11809.223 | 2136.926 | 7658.513 | 6080.168 | 5191.364 |
| f9 | 2136.390 | 8375.839 | 2032.908 | 11794.232 | 1820.255 | 7555.640 | 7641.055 | 5421.310 |
| f10 | 2065.978 | 12449.973 | 2116.536 | 14096.724 | 1940.266 | 8291.956 | 7211.344 | 5502.234 |
| f11 | 3416.110 | 12300.763 | 3401.308 | 12956.684 | 3275.602 | 9198.441 | 8640.743 | 6978.836 |

Table 7: Computational time (seconds) spent in 50 runs for the functions with 100 dimensions.

| Function | SPSO-BK star | SPSO-BK ring | SPSO-C star | SPSO-C ring | CPSO star | CPSO ring | PSOFP | PSODP |
|---|---|---|---|---|---|---|---|---|
| f0 | 2439.461 | 13788.133 | 2387.528 | 19368.303 | 2288.345 | 14618.672 | 6290.300 | 7872.999 |
| f1 | 2631.645 | 5947.633 | 2784.290 | 14200.705 | 2558.799 | 13713.888 | 2605.299 | 2597.703 |
| f2 | 6224.394 | 15858.146 | 6286.117 | 9271.755 | 6211.243 | 16247.289 | 8695.341 | 8725.241 |
| f3 | 2780.287 | 32980.839 | 2920.504 | 33031.495 | 2611.934 | 13673.627 | 13038.592 | 9797.108 |
| f4 | 3580.799 | 14603.739 | 3560.119 | 15913.160 | 3603.514 | 7116.922 | 7213.646 | 6463.844 |
| f5 | 2365.184 | 3113.102 | 2482.506 | 2493.193 | 2374.182 | 14567.880 | 2515.607 | 2592.736 |
| f6 | 4488.993 | 11025.489 | 4660.795 | 11042.448 | 4348.893 | 8740.973 | 4406.255 | 4224.402 |
| f7 | 3415.798 | 6637.398 | 3443.478 | 7308.557 | 3370.671 | 13815.354 | 8914.041 | 8854.871 |
| f8 | 3687.186 | 13082.763 | 4055.594 | 11594.827 | 3806.574 | 13896.927 | 10236.844 | 8863.957 |
| f9 | 3451.321 | 7325.952 | 3582.724 | 12292.192 | 3369.159 | 14677.248 | 7434.061 | 8147.588 |
| f10 | 3779.289 | 15561.002 | 3928.346 | 21955.736 | 3782.778 | 16415.258 | 8235.398 | 8805.851 |
| f11 | 6193.468 | 9502.162 | 6356.353 | 12206.493 | 6482.199 | 19206.593 | 8620.097 | 9624.388 |


5. Conclusions

Swarm intelligence algorithms perform in varied manners on the same optimization problem. Even within one kind of optimization algorithm, a change of structure or parameters may lead to different results, and it is very difficult, if not impossible, to obtain the connection between an algorithm and a problem. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building block thesis, a PSO algorithm with a multiple phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phases (PSOFP) algorithm and the PSO with dynamic phases (PSODP) algorithm, were tested on the 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods; however, the number of phases could be increased, or the type of phases could be changed, in other PSO algorithms with multiple phases. Beyond the PSO algorithms, the building block thesis could be utilized in other swarm optimization algorithms. Based on the analysis of different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple phase strategy with other swarm algorithms is our future work.

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant No. KY2016-29) and the Natural Science Foundation of China (Grant No. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942-1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81-86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35-62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445-458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362-1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238-2251, 2016.

[11] X. Xia, L. Gui, F. Yu, et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862-4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43-60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067-3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587-602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang, et al., Eds., pp. 347-357, Springer International Publishing, New York, NY, USA, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227-239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373-391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204-210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050-1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120-127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945-1950, Washington, DC, USA, July 1999.


be noted that the nearby particle in the local structure is mostly based on the index of particles.
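The index-based ring neighborhood mentioned above can be sketched as follows (an illustrative Python helper, not the paper's code): each particle's neighborhood best is taken from itself and the two particles adjacent to it by index, with the indices wrapping around at the ends of the swarm.

```python
def ring_nbest_index(fitness, i):
    # Index of the best (minimum) fitness value among particle i and its
    # two index-based ring neighbors; indices wrap around the swarm.
    n = len(fitness)
    neighborhood = [(i - 1) % n, i, (i + 1) % n]
    return min(neighborhood, key=lambda j: fitness[j])
```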

3. Particle Swarm Optimization with Multiple Phases

The search process could be divided into different phases, such as the exploration phase and the exploitation phase. A discussion of the genetic diversity of an evolutionary algorithm in the exploration phase has been given in [23]. To enhance the generalization ability of the PSO algorithm, multiple phases could be combined into one algorithm. The PSO algorithm is a combination of several components, and each component could be seen as a building block of a PSO variant. Some changeable components of PSO algorithms are shown in Figure 3. The setting of each component is independent of the other components. Normally, the parameter settings, topology structure, and other strategies should be determined before the search process. There are three well-known standard PSO algorithms, namely, the standard PSO algorithm by Bratton and Kennedy (SPSO-BK) [24], the standard PSO algorithm by Clerc (SPSO-C) [25], and the canonical PSO (CPSO) algorithm by Shi and Eberhart [26].

Different search phases are emphasized by different PSO variants because each variant performs differently on different kinds of problems. Thus, to utilize the strengths of different variants, a PSO algorithm could be built as a combination of several PSO variants. For example, the global star structure could be used at the beginning of the search to obtain good exploration ability, while the local ring structure could be used at the end of the search to promote exploitation ability. The basic procedure of the PSO algorithm with the multiple phase strategy (PSOMP) is given in Algorithm 2. The setting of the phases could be fixed or dynamically changed during the search. To validate the performance of the different settings, two kinds of PSO algorithms with multiple phases are proposed.

3.1. Particle Swarm Optimization with Fixed Phases. One PSOMP variant is the PSO with fixed phases (PSOFP) algorithm. The process of the PSO algorithm with multiple fixed phases is shown in Figure 4. Two standard PSO variants, the CPSO with the star structure and the SPSO-BK with the ring structure, are combined in the PSOFP algorithm. In the experimental study, both variants perform the same number of iterations. The PSOFP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure. All parameters are fixed within each phase during the search, and the PSOFP algorithm switches from one variant to the other after a fixed number of iterations.
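Since the two phases receive equal iteration budgets, the fixed schedule can be sketched as follows. This is a hypothetical helper; the half-and-half split is an assumption that reflects the equal-iterations setting described above.

```python
def psofp_phase(iteration, max_iteration):
    # First half of the run: CPSO with the global star structure (exploration);
    # second half: SPSO-BK with the local ring structure (exploitation).
    if iteration < max_iteration // 2:
        return ("CPSO", "star")
    return ("SPSO-BK", "ring")
```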

3.2. Particle Swarm Optimization with Dynamic Phases. The other PSOMP variant is the PSO with dynamic phases (PSODP) algorithm. As the name indicates, the phase could be changed dynamically during the search process. As with the PSOFP algorithm, the PSODP algorithm starts with the CPSO with the star structure and ends with the SPSO-BK with the ring structure; however, the number of iterations in each phase is not fixed during the search. The process of the PSO algorithm with multiple dynamic phases is shown in Figure 5. The global best (gbest) position is the best solution found so far. If gbest does not change for k iterations, the algorithm may be stuck in a local optimum, and the PSO variant is changed after this phenomenon occurs several times. For the setting of the PSODP algorithm, the search phase changes from the CPSO with the star structure to the SPSO-BK with the ring structure. The condition for the phase change is that gbest is stuck in a local optimum, i.e., gbest has not changed for 100 iterations.
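The gbest-stagnation trigger can be sketched as follows. This is an illustrative reading of the rule above, assuming minimization and a check once per iteration; the exact bookkeeping in the paper may differ.

```python
class StagnationSwitch:
    """Signal a phase change once gbest has not improved for k
    consecutive iterations (k = 100 in the PSODP setting)."""

    def __init__(self, k=100):
        self.k = k
        self.best = float("inf")
        self.stale = 0

    def update(self, gbest_value):
        # Reset the counter on improvement; otherwise count stale iterations.
        if gbest_value < self.best:
            self.best = gbest_value
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.k  # True -> change the search phase
```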

(1) Initialize each particle's velocity and position with random numbers.
(2) while the maximum iteration is not reached and no satisfactory solution is found do
(3)   Calculate each solution's function value.
(4)   Compare the function value of the current position with that of the best position in history (pbest). For each particle, if the current position has a better function value than pbest, then update pbest to the current position.
(5)   Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best (nbest).
(6)   for each particle do
(7)     Update the particle's velocity according to equation (1).
(8)     Update the particle's position according to equation (2).

Algorithm 1: Basic process of the PSO algorithm.
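Equations (1) and (2) are not reproduced in this excerpt; assuming the standard inertia-weight form of the PSO update, steps (7) and (8) of Algorithm 1 can be sketched as below (an illustrative fragment, with SPSO-BK's coefficients used as default values).

```python
import random

def pso_step(x, v, pbest, nbest, w=0.72984, c1=1.496172, c2=1.496172):
    # One particle's update: velocity per the assumed equation (1),
    # position per the assumed equation (2), dimension by dimension.
    new_v, new_x = [], []
    for xd, vd, pd, nd in zip(x, v, pbest, nbest):
        r1, r2 = random.random(), random.random()
        vd = w * vd + c1 * r1 * (pd - xd) + c2 * r2 * (nd - xd)
        new_v.append(vd)
        new_x.append(xd + vd)
    return new_x, new_v
```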

Figure 1: The framework of the canonical PSO algorithm (initialization; then a loop of solution evaluation, pbest and nbest update, and velocity and position update until the stop condition is met).


4. Experimental Study

4.1. Benchmark Test Functions and Parameter Settings. Table 1 shows the benchmark functions used in the experimental study. To illustrate the search ability of the proposed algorithms, a diverse set of 12 benchmark functions of different types is used in the experiments. These benchmark functions can be classified into two groups: the first five functions f0 - f4 are unimodal, while the other seven functions are multimodal. All functions are run 50 times to obtain the statistical results necessary to compare the different approaches, and the value of the global optimum is shifted to a different fmin for each function. The function value of the best solution found by an algorithm in a run is denoted by f(x_best), and the error of each run is computed as error = f(x_best) - fmin.

Three kinds of standard or canonical PSO algorithms are tested in the experimental study. Each algorithm is run with two kinds of structure: the global star structure and the local ring structure. There are 50 particles in each group of PSO algorithms. Each algorithm runs 50 times, with 10000 iterations in every run. The other settings of the three standard algorithms are as follows:

Figure 2: Two structures used in this paper: (a) the global star structure, where all particles are connected; (b) the local ring structure, where each particle is connected to two nearby neighbors.

Figure 3: Some changeable components in PSO algorithms: the parameters (acceleration constants c1 and c2, inertia weight w) and the topology structure (global star structure, local ring structure, and other structures).

(1) Initialize each particle's velocity and position with random numbers.
(2) while the maximum iteration is not reached and no satisfactory solution is found do
(3)   Calculate each solution's function value.
(4)   Compare the function value of the current position with that of the best position in history (personal best, termed pbest). For each particle, if the current position has a better function value than pbest, then update pbest to the current position.
(5)   Select the particle that has the best fitness value in the current particle's neighborhood; this particle is termed the neighborhood best (nbest).
(6)   for each particle do
(7)     Update the particle's velocity according to equation (1).
(8)     Update the particle's position according to equation (2).
(9)   Change the search phase: update the structure and/or the parameter settings.

Algorithm 2: Basic procedure of the PSO algorithm with the multiple phase strategy.


(i) SPSO-BK settings: w = 0.72984, φ1 = φ2 = 2.05, i.e., c1 = c2 = 1.496172 [24].

(ii) SPSO-C settings: w = 1/(2 ln(2)) ≈ 0.721, c1 = c2 = 0.5 + ln(2) ≈ 1.193 [25].

(iii) Canonical PSO (CPSO) settings: w = w_min + (w_max - w_min) × (Iteration - iter)/Iteration, i.e., w is linearly decreased during the search, with w_max = 0.9, w_min = 0.4, and c1 = c2 = 2 [26].

4.2. Experimental Results. The experimental results for the functions with 50 dimensions are given in Tables 2 and 3. The results for the unimodal functions f0 - f4 are given in Table 2, while the results for the multimodal functions f5 - f11 are given in Tables 2 and 3. The aim of the experimental study is not to validate the search performance of the proposed algorithms but to discuss the combination of different search phases. This is a preliminary study on learning the characteristics of benchmark functions.

To validate the generalization ability of the PSO variants, the same experiments were conducted on the functions with 100 dimensions. All parameter settings are the same as those for the functions with 50 dimensions. The results for the unimodal functions f0 - f4 are given in Table 4, while the results for the multimodal functions f5 - f11 are given in Tables 4 and 5. From the experimental results, it can be seen that the proposed algorithms obtain good solutions for most benchmark functions. Owing to the combination of different phases, the proposed algorithms perform more robustly than the algorithms with a single phase.

The convergence graphs of the error values for each algorithm on all functions with 50 and 100 dimensions are shown in Figures 6 and 7, respectively. In these figures, each curve represents the variation of the mean error value over the iterations for one algorithm. The six PSO variants are compared with the two proposed algorithms. The suffixes "S" and "R" indicate the star and ring structures of the PSO variants, respectively; for example, "SPSOBKS" and "SPSOBKR" denote the SPSO-BK algorithm with the star and ring structure, respectively. The robustness of the proposed algorithms can also be observed from the convergence graphs.

4.3. Computational Time. Tables 6 and 7 give the total computational time (in seconds) of 50 runs in the experiments for the functions with 50 and 100 dimensions, respectively. Normally, the evaluation speed of a PSO algorithm with the ring structure is significantly slower than that of the same PSO algorithm with the star structure.

Figure 4: The process of the PSO algorithm with multiple fixed phases (the standard PSO loop of initialization, solution evaluation, velocity and position update, and pbest and nbest update, combined with the multiple fixed phases strategy).

Figure 5: The process of the PSO algorithm with multiple dynamic phases (the search phase changes when gbest has not been updated for k iterations).

Discrete Dynamics in Nature and Society 5

Table 2 Experimental results for functions f0 sim f5 with 50 dimensions

Algorithm Best Mean StD Best Mean StDf0 parabolic f1 Schwefelrsquos P222

SPSO-BK Star minus450 minus450 452E-13 minus330 minus3299999 135E-11Ring minus4499999 minus4499999 211E-12 minus3299999 minus3299999 437E-12

SPSO-C Star minus4499999 minus4499999 412E-09 minus3299999 minus3299999 308E-05Ring minus4499999 minus4499999 313E-12 minus3299999 minus3299999 601E-12

CPSO Star minus450 minus450 590E-14 minus330 minus3298 14Ring minus3733774 minus1910975 98981 minus3298322 minus3259999 2151

PSOFP minus450 minus450 147E-12 minus3299999 minus3299999 625E-09PSODP minus450 minus450 191E-12 minus3299999 minus3299999 145E-06

f2 Schwefelrsquos P12 f3 step

SPSO-BK Star 4500000 5500000 6999999 180 18836 13936Ring 4553668 4956626 38833 266 36046 50708

SPSO-C Star 4500000 4503961 2748 207 34366 106891Ring 4500958 4513167 25006 373 7143 167178

CPSO Star 4763968 6020931 697433 180 180 0Ring 31216019 49661359 769950 395 53464 70468

PSOFP 4500476 4543896 290223 180 1804 2107PSODP 4500013 4600571 34158 180 18754 26872

f4 quartic noise f5 Rosenbrock

SPSO-BK Star 1200022 1200044 0001 minus4499976 minus4184438 24245Ring 1200281 1200630 0023 minus4443137 minus4150037 9573

SPSO-C Star 1200408 1201508 0070 minus4499999 minus4483351 2378Ring 1200467 1202094 0085 minus4434564 minus4117220 17957

CPSO Star 1200037 1200090 0002 minus4215111 minus3897360 27273Ring 1200194 1200321 0006 minus2188776 1937953 156862

PSOFP 1200026 1200061 00024 minus4390752 minus3951315 310671PSODP 1200029 1200086 00076 minus4442963 minus4056357 257581

Table 1. The benchmark functions used in our experimental study, where n is the dimension of each problem, the global optimum is x*, fmin is the minimum value of the function, and S ⊆ R^n.

| Function | Test function | n | S | fmin |
| --- | --- | --- | --- | --- |
| Parabolic | $f_0(x) = \sum_{i=1}^{n} x_i^2$ | 100 | $[-100, 100]^n$ | −450.0 |
| Schwefel's P2.22 | $f_1(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 100 | $[-10, 10]^n$ | −330.0 |
| Schwefel's P1.2 | $f_2(x) = \sum_{i=1}^{n} (\sum_{k=1}^{i} x_k)^2$ | 100 | $[-100, 100]^n$ | 450.0 |
| Step | $f_3(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 100 | $[-100, 100]^n$ | 180.0 |
| Quartic noise | $f_4(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 100 | $[-1.28, 1.28]^n$ | 120.0 |
| Rosenbrock | $f_5(x) = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | 100 | $[-10, 10]^n$ | −450.0 |
| Schwefel | $f_6(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}) + 418.9829\,n$ | 100 | $[-500, 500]^n$ | −330.0 |
| Rastrigin | $f_7(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10]$ | 100 | $[-5.12, 5.12]^n$ | 450.0 |
| Noncontinuous Rastrigin | $f_8(x) = \sum_{i=1}^{n} [y_i^2 - 10\cos(2\pi y_i) + 10]$, with $y_i = x_i$ if $|x_i| < 1/2$ and $y_i = \mathrm{round}(2x_i)/2$ if $|x_i| \ge 1/2$ | 100 | $[-5.12, 5.12]^n$ | 180.0 |
| Ackley | $f_9(x) = -20\exp(-0.2\sqrt{(1/n)\sum_{i=1}^{n} x_i^2}) - \exp((1/n)\sum_{i=1}^{n} \cos(2\pi x_i)) + 20 + e$ | 100 | $[-32, 32]^n$ | 120.0 |
| Griewank | $f_{10}(x) = (1/4000)\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos(x_i/\sqrt{i}) + 1$ | 100 | $[-600, 600]^n$ | −450.0 |
| Generalized penalized | $f_{11}(x) = (\pi/n)\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10\sin^2(\pi y_{i+1})] + (y_n - 1)^2\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + (1/4)(x_i + 1)$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a < x_i < a$; $k(-x_i - a)^m$ if $x_i < -a$ | 100 | $[-50, 50]^n$ | −330.0 |
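A few of the entries in Table 1 can be written out directly from their formulas. In this sketch the shifted optimum is assumed to be a simple additive bias equal to the listed fmin (the table gives the shifted minimum but not the shifting mechanism), so the `bias` defaults are assumptions for illustration.

```python
import math

def rastrigin(x, bias=450.0):
    """f7 in Table 1; unshifted minimum 0 at x = 0, biased to fmin = 450.0 (assumed)."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x) + bias

def ackley(x, bias=120.0):
    """f9 in Table 1; unshifted minimum 0 at x = 0, biased to fmin = 120.0 (assumed)."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return (-20.0 * math.exp(-0.2 * math.sqrt(s1))
            - math.exp(s2) + 20.0 + math.e + bias)

def run_error(f_best, f_min):
    """Error of a run as used in the experiments: f(x_best) - fmin."""
    return f_best - f_min
```

At the origin both functions evaluate exactly to their bias, so the error of a perfect run is zero regardless of the shift.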


Table 3. Experimental results for functions f6–f11 with 50 dimensions.

f6 Schwefel / f7 Rastrigin

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | 66038424 | 97866971 | 1359137 | 4977579 | 5231492 | 13619 |
| SPSO-BK Ring | 161004628 | 174373017 | 673960 | 4977580 | 5306957 | 14832 |
| SPSO-C Star | 76846670 | 98983600 | 12182453 | 4868134 | 5205225 | 15146 |
| SPSO-C Ring | 162990367 | 175234040 | 548640 | 4997950 | 5301292 | 14646 |
| CPSO Star | 69343602 | 87618199 | 854741 | 4957680 | 5296818 | 22117 |
| CPSO Ring | 158750546 | 175921651 | 661051 | 5788806 | 6311157 | 16836 |
| PSOFP | 61447441 | 84722288 | 1171500 | 5047226 | 5282633 | 13507 |
| PSODP | 59501707 | 83697773 | 1038811 | 4937781 | 5252104 | 20126 |

f8 noncontinuous Rastrigin / f9 Ackley

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | 186 | 22458 | 22039 | 120 | 121773 | 0.649 |
| SPSO-BK Ring | 2320000 | 2627717 | 14463 | 1200000 | 1200213 | 0.149 |
| SPSO-C Star | 227 | 269985 | 21175 | 1218030 | 1243168 | 1090 |
| SPSO-C Ring | 2390000 | 2771382 | 18542 | 1200000 | 1203807 | 0.558 |
| CPSO Star | 211 | 2406001 | 14448 | 120 | 120 | 7.15E-14 |
| CPSO Ring | 2792935 | 3215721 | 17248 | 1227702 | 1242613 | 0.529 |
| PSOFP | 210 | 2373073 | 19334 | 120 | 1200499 | 0.248 |
| PSODP | 200 | 2410076 | 20067 | 120 | 1200484 | 0.237 |

f10 Griewank / f11 generalized penalized

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | −450 | −4499769 | 0.033 | −330 | −3297654 | 0.330 |
| SPSO-BK Ring | −4499999 | −4499997 | 0.002 | −3299999 | −3299987 | 0.008 |
| SPSO-C Star | −4499999 | −4498900 | 0.228 | −3299999 | −3290989 | 1123 |
| SPSO-C Ring | −4499999 | −4499989 | 0.004 | −3299999 | −3299912 | 0.039 |
| CPSO Star | −450 | −4499902 | 0.012 | −330 | −3299975 | 0.012 |
| CPSO Ring | −4482587 | −4452495 | 1087 | −3292803 | −3276535 | 0.783 |
| PSOFP | −450 | −4499874 | 0.019 | −330 | −3299763 | 0.051 |
| PSODP | −450 | −4499902 | 0.017 | −330 | −3299763 | 0.055 |

Table 4. Experimental results for functions f0–f5 with 100 dimensions.

f0 parabolic / f1 Schwefel's P2.22

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | −450 | −4499999 | 2.97E-10 | −3299999 | −3299999 | 1.90E-07 |
| SPSO-BK Ring | −4499999 | −4499999 | 2.72E-11 | −3299999 | −3299999 | 7.34E-11 |
| SPSO-C Star | −4499999 | −4499947 | 0.0369 | −3299999 | −3298556 | 0.4552 |
| SPSO-C Ring | −4499999 | −4499999 | 1.40E-11 | −3299999 | −3299999 | 3.09E-11 |
| CPSO Star | −4499999 | −4499999 | 2.13E-09 | −3299999 | −3295999 | 19595 |
| CPSO Ring | −1496541 | 13444583 | 532677 | −3258224 | −3095554 | 75546 |
| PSOFP | −4499999 | −4499999 | 1.36E-07 | −3299999 | −3299999 | 1.36E-09 |
| PSODP | −4499999 | −4499999 | 3.34E-07 | −3299999 | −3299999 | 0.0006 |

f2 Schwefel's P1.2 / f3 step

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | 5136355 | 16021760 | 3006812 | 208 | 37732 | 137165 |
| SPSO-BK Ring | 24261604 | 53553297 | 1565104 | 671 | 97824 | 174267 |
| SPSO-C Star | 5554292 | 29988775 | 3037685 | 1208 | 268348 | 755401 |
| SPSO-C Ring | 11881057 | 20751955 | 769974 | 1585 | 244846 | 531394 |
| CPSO Star | 51459671 | 108225242 | 3423207 | 180 | 1817 | 13 |
| CPSO Ring | 20782011 | 27917191 | 3168315 | 1791 | 250612 | 287989 |
| PSOFP | 13991752 | 39876253 | 5750961 | 182 | 19976 | 28642 |
| PSODP | 9897956 | 28613122 | 5655407 | 185 | 21536 | 115588 |

f4 quartic noise / f5 Rosenbrock

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | 1200138 | 1200278 | 0.0110 | −3970471 | −3299329 | 41398 |
| SPSO-BK Ring | 1201851 | 1203265 | 0.1013 | −3730928 | −3464626 | 25371 |
| SPSO-C Star | 1205708 | 1213554 | 0.4501 | −3972747 | −3043235 | 405292 |
| SPSO-C Ring | 1204383 | 1209911 | 0.3064 | −3719901 | −3390829 | 28481 |
| CPSO Star | 1200379 | 1200658 | 0.014 | −3634721 | −2948193 | 50003 |
| CPSO Ring | 1201338 | 1202321 | 0.043 | 6069444 | 39078062 | 1468905 |
| PSOFP | 1200224 | 1200438 | 0.014 | −3802421 | −2900060 | 47549 |
| PSODP | 1200219 | 1200518 | 0.036 | −3687163 | −2995108 | 43518 |


Table 5. Experimental results for functions f6–f11 with 100 dimensions.

f6 Schwefel / f7 Rastrigin

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | 16056678 | 20638485 | 2182500 | 5693949 | 6117217 | 25514 |
| SPSO-BK Ring | 34073736 | 37171637 | 944548 | 5605540 | 6272546 | 25699 |
| SPSO-C Star | 16879295 | 20909081 | 2451469 | 5524806 | 5986068 | 23697 |
| SPSO-C Ring | 34864588 | 37164686 | 841829 | 5694197 | 6240863 | 22440 |
| CPSO Star | 15445328 | 19374588 | 1691740 | 5853142 | 6391086 | 30012 |
| CPSO Ring | 34235521 | 37141853 | 835129 | 9435140 | 10216837 | 37831 |
| PSOFP | 14734074 | 18723674 | 1738027 | 5833243 | 6489117 | 268680 |
| PSODP | 14094634 | 18899269 | 1994712 | 5748307 | 6399075 | 28179 |

f8 noncontinuous Rastrigin / f9 Ackley

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | 2120000 | 2751500 | 306297 | 1225793 | 1235817 | 0.7107 |
| SPSO-BK Ring | 3072505 | 3583455 | 27633 | 1200000 | 1218010 | 0.4814 |
| SPSO-C Star | 2880000 | 3744500 | 411362 | 1263445 | 1283848 | 11134 |
| SPSO-C Ring | 3322500 | 3957141 | 341028 | 1200000 | 1218958 | 0.5702 |
| CPSO Star | 3220000 | 3751452 | 31873 | 1200000 | 1200000 | 0.0001 |
| CPSO Ring | 5805314 | 6606651 | 34490 | 1230676 | 1264343 | 0.7101 |
| PSOFP | 357 | 3607510 | 38267 | 1200000 | 1216460 | 0.485 |
| PSODP | 280 | 3551481 | 29234 | 1200000 | 1216061 | 0.562 |

f10 Griewank / f11 generalized penalized

| Algorithm | Best | Mean | StD | Best | Mean | StD |
| --- | --- | --- | --- | --- | --- | --- |
| SPSO-BK Star | −450 | −4498481 | 0.2569 | −330 | −3296599 | 0.3758 |
| SPSO-BK Ring | −4499999 | −4499994 | 0.0022 | −3299999 | −3299880 | 0.0334 |
| SPSO-C Star | −4499999 | −4494670 | 0.6696 | −3299999 | −3292211 | 0.8798 |
| SPSO-C Ring | −4499999 | −4499995 | 0.0024 | −3299999 | −3299526 | 0.0640 |
| CPSO Star | −4499999 | −4499933 | 0.0092 | −3299999 | −3299707 | 0.0432 |
| CPSO Ring | −4370433 | −4285081 | 20322 | −3204856 | −3120457 | 42756 |
| PSOFP | −4499999 | −4499817 | 0.032 | −3299999 | −3299069 | 0.151 |
| PSODP | −4499999 | −4499828 | 0.031 | −3299999 | −3299224 | 0.108 |


The PSOFP and PSODP algorithms run slower than the traditional PSO algorithms on the test benchmark functions. The reason is that some search resources are allocated to the changes of search phases: a PSO variant with multiple phases needs to determine whether to switch phases and when the switch should occur.
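The two switching decisions mentioned here can be stated as simple predicates; the fixed period and the stall threshold k below are illustrative values, since the text does not fix them at this point.

```python
def should_switch_fixed(iteration, period=1000):
    """PSOFP-style rule (assumed schedule): switch phases every `period` iterations."""
    return iteration > 0 and iteration % period == 0

def should_switch_dynamic(stall_count, k=100):
    """PSODP-style rule: switch once gbest has not improved for k iterations."""
    return stall_count >= k
```

The extra bookkeeping behind these checks (iteration counting, stall tracking, and the phase change itself) is the overhead referred to above.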


Figure 6. Convergence curves of the error value for functions with 50 dimensions: (a) f0; (b) f1; (c) f2; (d) f3; (e) f4; (f) f5; (g) f6; (h) f7; (i) f8; (j) f9; (k) f10; (l) f11.


Figure 7. Convergence curves of the error value for functions with 100 dimensions: (a) f0; (b) f1; (c) f2; (d) f3; (e) f4; (f) f5; (g) f6; (h) f7; (i) f8; (j) f9; (k) f10; (l) f11.

Table 6. Computational time (seconds) spent in 50 runs for the functions with 50 dimensions.

| Function | SPSO-BK Star | SPSO-BK Ring | SPSO-C Star | SPSO-C Ring | CPSO Star | CPSO Ring | PSOFP | PSODP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| f0 | 1414753 | 11949538 | 1542324 | 13938880 | 1273165 | 7790309 | 6372494 | 4750059 |
| f1 | 1496900 | 9773025 | 1563205 | 12035832 | 1382238 | 7332784 | 6681761 | 4968441 |
| f2 | 2482647 | 6639135 | 2310108 | 3269424 | 2269095 | 7721789 | 2913018 | 3078365 |
| f3 | 1478375 | 16921071 | 1521913 | 17374540 | 1382383 | 7162581 | 6652441 | 5444709 |
| f4 | 1925499 | 7772614 | 1919068 | 8520861 | 1960920 | 4139658 | 4326319 | 4405604 |
| f5 | 1318303 | 2036976 | 1373674 | 1463761 | 1378663 | 7669009 | 2520464 | 1975675 |
| f6 | 2470545 | 5990113 | 2623676 | 6033503 | 2290871 | 4877298 | 2256345 | 2412337 |
| f7 | 2113601 | 5688381 | 2121822 | 5068316 | 1886031 | 7616392 | 7467872 | 5938517 |
| f8 | 2100288 | 11884555 | 2231706 | 11809223 | 2136926 | 7658513 | 6080168 | 5191364 |
| f9 | 2136390 | 8375839 | 2032908 | 11794232 | 1820255 | 7555640 | 7641055 | 5421310 |
| f10 | 2065978 | 12449973 | 2116536 | 14096724 | 1940266 | 8291956 | 7211344 | 5502234 |
| f11 | 3416110 | 12300763 | 3401308 | 12956684 | 3275602 | 9198441 | 8640743 | 6978836 |

Table 7. Computational time (seconds) spent in 50 runs for the functions with 100 dimensions.

| Function | SPSO-BK Star | SPSO-BK Ring | SPSO-C Star | SPSO-C Ring | CPSO Star | CPSO Ring | PSOFP | PSODP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| f0 | 2439461 | 13788133 | 2387528 | 19368303 | 2288345 | 14618672 | 6290300 | 7872999 |
| f1 | 2631645 | 5947633 | 2784290 | 14200705 | 2558799 | 13713888 | 2605299 | 2597703 |
| f2 | 6224394 | 15858146 | 6286117 | 9271755 | 6211243 | 16247289 | 8695341 | 8725241 |
| f3 | 2780287 | 32980839 | 2920504 | 33031495 | 2611934 | 13673627 | 13038592 | 9797108 |
| f4 | 3580799 | 14603739 | 3560119 | 15913160 | 3603514 | 7116922 | 7213646 | 6463844 |
| f5 | 2365184 | 3113102 | 2482506 | 2493193 | 2374182 | 14567880 | 2515607 | 2592736 |
| f6 | 4488993 | 11025489 | 4660795 | 11042448 | 4348893 | 8740973 | 4406255 | 4224402 |
| f7 | 3415798 | 6637398 | 3443478 | 7308557 | 3370671 | 13815354 | 8914041 | 8854871 |
| f8 | 3687186 | 13082763 | 4055594 | 11594827 | 3806574 | 13896927 | 10236844 | 8863957 |
| f9 | 3451321 | 7325952 | 3582724 | 12292192 | 3369159 | 14677248 | 7434061 | 8147588 |
| f10 | 3779289 | 15561002 | 3928346 | 21955736 | 3782778 | 16415258 | 8235398 | 8805851 |
| f11 | 6193468 | 9502162 | 6356353 | 12206493 | 6482199 | 19206593 | 8620097 | 9624388 |


5. Conclusions

Swarm intelligence algorithms perform in a varied manner on the same optimization problem. Even for one kind of optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to obtain the connection between an algorithm and a problem. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building blocks thesis, a PSO algorithm with a multiple phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phase (PSOFP) algorithm and the PSO with dynamic phase (PSODP) algorithm, were tested on the 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods; however, the number of phases could be increased, or the type of phase could be changed, in other PSO algorithms with multiple phases. Besides the PSO algorithms, the building blocks thesis could be utilized in other swarm optimization algorithms. Based on the analysis of different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple phase strategy with other swarm algorithms is our future work.

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942–1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81–86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362–1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238–2251, 2016.

[11] X. Xia, L. Gui, F. Yu et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862–4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43–60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang et al., Eds., pp. 347–357, Springer International Publishing, New York, NY, USA, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227–239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373–391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050–1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120–127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945–1950, Washington, DC, USA, July 1999.

Page 4: ParticleSwarmOptimizationAlgorithmwithMultiplePhasesfor ... · 2021. 4. 16. · Received 16 April 2021; Revised 1 June 2021; Accepted 18 June 2021; Published 28 June 2021 ... (SPSO-BK)

4 Experimental Study

41 Benchmark Test Functions and Parameter SettingTable 1 shows the benchmark functions which have beenused in the experimental study To give an illustration of thesearch ability of the proposed algorithm a diverse set of 12benchmark functions with different types are used to con-duct the experiments (ese benchmark functions can beclassified into two groups(e first five functions f0 minus f4 areunimodal while the other seven functions are multimodalAll functions are run 50 times to ensure a reasonable sta-tistical result necessary to compare the different approaches

and the value of global optimum is shifted to different fminfor different functions (e function value of the best so-lution found by an algorithm in a run is denoted byf( x

rarrbest) (e error of each run is denoted as

error f( xrarr

best) minus fmin(ree kinds of standard or canonical PSO algorithms are

tested in the experimental study Each algorithm has twokinds of structure the global star structure and the local ringstructure (ere are 50 particles in each group of PSO al-gorithms Each algorithm runs 50 times 10000 iterations inevery run (e other settings for the three standard algo-rithms are as follows

(a) (b)

Figure 2 Two structures have been used in this paper (a) global star structure where all particles are connected (b) local ring structurewhere each particle is connected to two nearby neighbors

Particle swarm optimizationalgorithms

Parameters Topologystructure

Accelerationconstants c1 c2

Inertia weightw

Global starstructure

Otherstructures

Local ringstructure

Figure 3 Some changeable components in PSO algorithms

(1) Initialize each particlersquos velocity and position with random numbers(2) while not reaches the maximum iteration or not found the satisfied solution do(3) Calculate each solutionrsquos function value(4) Compare function value between the current position and the best position in history (personal best termed as pbest) For each

particle if current position has a better function value than pbest then update pbest as current position(5) Selection a particle which has the best fitness value among current particlersquos neighborhood this particle is termed as the

neighborhood best(6) for each particle do(7) Update the particlersquos velocity according equation (1)(8) Update the particlersquos position according equation (2)(9) Change the search phase update the structure andor parameter settings

ALGORITHM 2 Basic procedure of PSO algorithm with multiple phase strategy

4 Discrete Dynamics in Nature and Society

(i) SPSO-BK settings w 072984 φ1 φ2 205 iec1 c2 1496172 [24]

(ii) SPSO-C settings w (12 ln(2))≃0721 c1 c2

05 + ln(2)≃1193 [25](iii) Canonical PSO (CPSO) settings w

wmin + (wmax minus wmin) times (Iterationminus iterIteration)ie w is linearly decreased during the searchwmax 09 wmin 04 c1 c2 2 [26]

42 Experimental Results (e experimental results forfunctions with 50 dimensions are given in Tables 2 and 3(e results for unimodal functions f0 minus f4 are given in

Table 2 while results for multimodal functions f5 minus f11 aregiven in Tables 2 and 3 (e aim of the experimental study isnot to validate the search performance of the proposedalgorithm but to attempt to discuss the combination ofdifferent search phases (is is a primary study on thelearning of the characteristics of benchmark functions

To validate the generalization ability of PSO variants thesame experiments are conducted on the functions with 100dimensions All parametersrsquo settings are the same forfunctions with 50 dimensions (e results for unimodalfunctions f0 minus f4 are given in Table 4 while the results formultimodal functions f5 minus f11 are given in Tables 4 and 5From the experiment results it could be seen that theproposed algorithm could obtain a good solution for mostbenchmark functions Based on the combination of differentphases the proposed algorithm could be performed morerobustly than the algorithm with one phase

(e convergence graphs of error values for each algo-rithm on all functions with 50 and 100 dimensions areshown in Figures 6 and 7 respectively In these figures eachcurve represents the variation of themean value of error overthe iteration for each algorithm (e six PSO variants aretested with the proposed two algorithms(e notation of ldquoSrdquoand ldquoRrdquo indicates the star and ring structure in the PSOvariants respectively For example ldquoSPSOBKSrdquo andldquoSPSOBKRrdquo indicate the SPSO-BK algorithm with the starand ring structure respectively (e robustness of theproposed algorithm also could be validated from the con-vergence graph

43 Computational Time Tables 6 and 7 give the totalcomputing time (seconds) of 50 runs in the experiments forfunctions with 50 and 100 dimensions respectively Nor-mally the evaluation speed of the PSO algorithm with ringstructure is significantly slower than the same PSO algo-rithm with a star structure

End

Stop

Start

Solutionevaluation

InitializationUpdate velocityand position

Update pbestand nbest

YesNo

Multiple fixedphases strategy

Figure 4 (e process of PSO algorithm with multiple fixed phases

End

Stop

Start

Solution evaluation

Initialization

Update velocity and position

Update pbest and nbest

Yes

No

gbest not changed aer k

iteration

Multiple dynamic phases

strategy

Figure 5 (e process of PSO algorithm with multiple dynamicphases

Discrete Dynamics in Nature and Society 5

Table 2 Experimental results for functions f0 sim f5 with 50 dimensions

Algorithm Best Mean StD Best Mean StDf0 parabolic f1 Schwefelrsquos P222

SPSO-BK Star minus450 minus450 452E-13 minus330 minus3299999 135E-11Ring minus4499999 minus4499999 211E-12 minus3299999 minus3299999 437E-12

SPSO-C Star minus4499999 minus4499999 412E-09 minus3299999 minus3299999 308E-05Ring minus4499999 minus4499999 313E-12 minus3299999 minus3299999 601E-12

CPSO Star minus450 minus450 590E-14 minus330 minus3298 14Ring minus3733774 minus1910975 98981 minus3298322 minus3259999 2151

PSOFP minus450 minus450 147E-12 minus3299999 minus3299999 625E-09PSODP minus450 minus450 191E-12 minus3299999 minus3299999 145E-06

f2 Schwefelrsquos P12 f3 step

SPSO-BK Star 4500000 5500000 6999999 180 18836 13936Ring 4553668 4956626 38833 266 36046 50708

SPSO-C Star 4500000 4503961 2748 207 34366 106891Ring 4500958 4513167 25006 373 7143 167178

CPSO Star 4763968 6020931 697433 180 180 0Ring 31216019 49661359 769950 395 53464 70468

PSOFP 4500476 4543896 290223 180 1804 2107PSODP 4500013 4600571 34158 180 18754 26872

f4 quartic noise f5 Rosenbrock

SPSO-BK Star 1200022 1200044 0001 minus4499976 minus4184438 24245Ring 1200281 1200630 0023 minus4443137 minus4150037 9573

SPSO-C Star 1200408 1201508 0070 minus4499999 minus4483351 2378Ring 1200467 1202094 0085 minus4434564 minus4117220 17957

CPSO Star 1200037 1200090 0002 minus4215111 minus3897360 27273Ring 1200194 1200321 0006 minus2188776 1937953 156862

PSOFP 1200026 1200061 00024 minus4390752 minus3951315 310671PSODP 1200029 1200086 00076 minus4442963 minus4056357 257581

Table 1 (e benchmark functions used in our experimental study where n is the dimension of each problem global optimum is xlowast fmin isthe minimum value of the function and S sube Rn

Function Test function n S fmin

Parabolic f0(x) 1113936ni1 x2

i 100 [minus100 100]n minus4500Schwefelrsquos P222 f1(x) 1113936

ni1 |xi| + 1113937

ni1 |xi| 100 [minus10 10]n minus3300

Schwefelrsquos P12 f2(x) 1113936ni1 (1113936

ik1 xk)2 100 [minus100 100]n 4500

Step quartic noise f3(x) 1113936ni1 (1113860xi + 0511138611113857

2 100 [minus100 100]n 1800f4(x) 1113936

ni1 ix4

i + random[0 1) 100 [minus128 128]n 1200Rosenbrock f5(x) 1113936

nminus1i1 [100(xi+1 minus x2

i )2 + (xi minus 1)2] 100 [minus10 10]n minus4500Schwefel f6(x) 1113936

ni1 minusxi sin(

|xi|

1113968) + 4189829n 100 [minus500 500]n minus3300

Rastrigin f7(x) 1113936ni1[x2

i minus 10 cos(2πxi) + 10] 100 [minus512 512]n 4500

NoncontinuousRastrigin

f8(x) 1113936ni1[y2

i minus 10 cos(2πyi) + 10]

yi xi |xi|lt (12)

(round(2xi)2) |xi|ge (12)1113896

100 [minus512 512]n 1800

Ackley f9(x) minus20 exp(minus02(1n) 1113936

ni1 x2

i

1113969) minus exp((1n) 1113936

ni1 cos(2πxi)) + 20 + e 100 [minus32 32]n 1200

Griewank f10(x) (14000) 1113936ni1 x2

i minus 1113937ni1 cos(xi

i

radic) + 1 100 [minus600 600]n minus4500

Generalized penalized

f11(x) (πn) 10 sin2(πy1) + 1113944nminus1i1 (yi minus 1)

2times [1 + 10 sin2(πyi+1)] + (yn minus 1)

21113882 1113883

+ 1113944n

i1u(xi 10 100 4)

100 [minus50 50]n minus3300

yi 1 + (14)(xi + 1)

u(xi a k m)

k(xi minus a)m

xi gt a

0 minusaltxi lt a

k(minusxi minus a)m

xi lt minus a

⎧⎪⎨

⎪⎩

6 Discrete Dynamics in Nature and Society

Table 3 Experimental results for functions f6 sim f11 with 50 dimensions

Algorithm Best Mean StD Best Mean StDf6 Schwefel f7 Rastrigin

SPSO-BK Star 66038424 97866971 1359137 4977579 5231492 13619Ring 161004628 174373017 673960 4977580 5306957 14832

SPSO-C Star 76846670 98983600 12182453 4868134 5205225 15146Ring 162990367 175234040 548640 4997950 5301292 14646

CPSO Star 69343602 87618199 854741 4957680 5296818 22117Ring 158750546 175921651 661051 5788806 6311157 16836

PSOFP 61447441 84722288 1171500 5047226 5282633 13507PSODP 59501707 83697773 1038811 4937781 5252104 20126

f8 noncontinuous Rastrigin f9 Ackley

SPSO-BK Star 186 22458 22039 120 121773 0649Ring 2320000 2627717 14463 1200000 1200213 0149

SPSO-C Star 227 269985 21175 1218030 1243168 1090Ring 2390000 2771382 18542 1200000 1203807 0558

CPSO Star 211 2406001 14448 120 120 715E-14Ring 2792935 3215721 17248 1227702 1242613 0529

PSOFP 210 2373073 19334 120 1200499 0248PSODP 200 2410076 20067 120 1200484 0237

f10 Griewank f11 generalized penalized

SPSO-BK Star minus450 minus4499769 0033 minus330 minus3297654 0330Ring minus4499999 minus4499997 0002 minus3299999 minus3299987 0008

SPSO-C Star minus4499999 minus4498900 0228 minus3299999 minus3290989 1123Ring minus4499999 minus4499989 0004 minus3299999 minus3299912 0039

CPSO Star minus450 minus4499902 0012 minus330 minus3299975 0012Ring minus4482587 minus4452495 1087 minus3292803 minus3276535 0783

PSOFP minus450 minus4499874 0019 minus330 minus3299763 0051PSODP minus450 minus4499902 0017 minus330 minus3299763 0055

Table 4: Experimental results for functions f0-f5 with 100 dimensions (Best / Mean / StD per function).

f0 (parabolic); f1 (Schwefel's P2.22):
SPSO-BK (star): -450 / -449.9999 / 2.97E-10; -329.9999 / -329.9999 / 1.90E-07
SPSO-BK (ring): -449.9999 / -449.9999 / 2.72E-11; -329.9999 / -329.9999 / 7.34E-11
SPSO-C (star): -449.9999 / -449.9947 / 0.0369; -329.9999 / -329.8556 / 0.4552
SPSO-C (ring): -449.9999 / -449.9999 / 1.40E-11; -329.9999 / -329.9999 / 3.09E-11
CPSO (star): -449.9999 / -449.9999 / 2.13E-09; -329.9999 / -329.5999 / 1.9595
CPSO (ring): -149.6541 / 1344.4583 / 532.677; -325.8224 / -309.5554 / 7.5546
PSOFP: -449.9999 / -449.9999 / 1.36E-07; -329.9999 / -329.9999 / 1.36E-09
PSODP: -449.9999 / -449.9999 / 3.34E-07; -329.9999 / -329.9999 / 0.0006

f2 (Schwefel's P1.2); f3 (step):
SPSO-BK (star): 5136.355 / 16021.760 / 3006.812; 208 / 377.32 / 137.165
SPSO-BK (ring): 24261.604 / 53553.297 / 1565.104; 671 / 978.24 / 174.267
SPSO-C (star): 5554.292 / 29988.775 / 3037.685; 1208 / 2683.48 / 755.401
SPSO-C (ring): 11881.057 / 20751.955 / 769.974; 1585 / 2448.46 / 531.394
CPSO (star): 51459.671 / 108225.242 / 3423.207; 180 / 181.7 / 1.3
CPSO (ring): 20782.011 / 27917.191 / 3168.315; 1791 / 2506.12 / 287.989
PSOFP: 13991.752 / 39876.253 / 5750.961; 182 / 199.76 / 28.642
PSODP: 9897.956 / 28613.122 / 5655.407; 185 / 215.36 / 115.588

f4 (quartic noise); f5 (Rosenbrock):
SPSO-BK (star): 120.0138 / 120.0278 / 0.0110; -397.0471 / -329.9329 / 41.398
SPSO-BK (ring): 120.1851 / 120.3265 / 0.1013; -373.0928 / -346.4626 / 25.371
SPSO-C (star): 120.5708 / 121.3554 / 0.4501; -397.2747 / -304.3235 / 40.5292
SPSO-C (ring): 120.4383 / 120.9911 / 0.3064; -371.9901 / -339.0829 / 28.481
CPSO (star): 120.0379 / 120.0658 / 0.014; -363.4721 / -294.8193 / 50.003
CPSO (ring): 120.1338 / 120.2321 / 0.043; 6069.444 / 39078.062 / 1468.905
PSOFP: 120.0224 / 120.0438 / 0.014; -380.2421 / -290.0060 / 47.549
PSODP: 120.0219 / 120.0518 / 0.036; -368.7163 / -299.5108 / 43.518

Discrete Dynamics in Nature and Society 7

Table 5: Experimental results for functions f6-f11 with 100 dimensions (Best / Mean / StD per function).

f6 (Schwefel); f7 (Rastrigin):
SPSO-BK (star): 16056.678 / 20638.485 / 2182.500; 5693.949 / 6117.217 / 255.14
SPSO-BK (ring): 34073.736 / 37171.637 / 944.548; 5605.540 / 6272.546 / 256.99
SPSO-C (star): 16879.295 / 20909.081 / 2451.469; 5524.806 / 5986.068 / 236.97
SPSO-C (ring): 34864.588 / 37164.686 / 841.829; 5694.197 / 6240.863 / 224.40
CPSO (star): 15445.328 / 19374.588 / 1691.740; 5853.142 / 6391.086 / 300.12
CPSO (ring): 34235.521 / 37141.853 / 835.129; 9435.140 / 10216.837 / 378.31
PSOFP: 14734.074 / 18723.674 / 1738.027; 5833.243 / 6489.117 / 268.680
PSODP: 14094.634 / 18899.269 / 1994.712; 5748.307 / 6399.075 / 281.79

f8 (noncontinuous Rastrigin); f9 (Ackley):
SPSO-BK (star): 212.0000 / 275.1500 / 30.6297; 122.5793 / 123.5817 / 0.7107
SPSO-BK (ring): 307.2505 / 358.3455 / 27.633; 120.0000 / 121.8010 / 0.4814
SPSO-C (star): 288.0000 / 374.4500 / 41.1362; 126.3445 / 128.3848 / 1.1134
SPSO-C (ring): 332.2500 / 395.7141 / 34.1028; 120.0000 / 121.8958 / 0.5702
CPSO (star): 322.0000 / 375.1452 / 31.873; 120.0000 / 120.0000 / 0.0001
CPSO (ring): 580.5314 / 660.6651 / 34.490; 123.0676 / 126.4343 / 0.7101
PSOFP: 357 / 360.7510 / 38.267; 120.0000 / 121.6460 / 0.485
PSODP: 280 / 355.1481 / 29.234; 120.0000 / 121.6061 / 0.562

f10 (Griewank); f11 (generalized penalized):
SPSO-BK (star): -450 / -449.8481 / 0.2569; -330 / -329.6599 / 0.3758
SPSO-BK (ring): -449.9999 / -449.9994 / 0.0022; -329.9999 / -329.9880 / 0.0334
SPSO-C (star): -449.9999 / -449.4670 / 0.6696; -329.9999 / -329.2211 / 0.8798
SPSO-C (ring): -449.9999 / -449.9995 / 0.0024; -329.9999 / -329.9526 / 0.0640
CPSO (star): -449.9999 / -449.9933 / 0.0092; -329.9999 / -329.9707 / 0.0432
CPSO (ring): -437.0433 / -428.5081 / 2.0322; -320.4856 / -312.0457 / 4.2756
PSOFP: -449.9999 / -449.9817 / 0.032; -329.9999 / -329.9069 / 0.151
PSODP: -449.9999 / -449.9828 / 0.031; -329.9999 / -329.9224 / 0.108

[Figure 6, panels (a)-(c): convergence curves of the error value (log10 scale) versus iteration (0-10000) on f0 (parabolic), f1 (Schwefel's P2.22), and f2 (Schwefel's P1.2) with 50 dimensions, for SPSOBKS, SPSOBKR, SPSOCS, SPSOCR, CPSOS, CPSOR, PSOFP, and PSODP.]

The PSOFP and PSODP algorithms run more slowly than the traditional PSO algorithms on the tested benchmark functions. The reason is that some search resources are allocated to the changes of search phases: a PSO variant with multiple phases must determine whether to switch phases and when to switch.
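The dynamic switching rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the names `PHASES`, `k`, and `DynamicPhaseController` are ours, and the two phase labels are placeholders for whatever search strategies the phases use.

```python
# Hedged sketch of PSODP-style phase switching: if the global best (gbest)
# has not improved for k consecutive iterations, move to the next phase.
PHASES = ["exploration", "exploitation"]  # illustrative phase labels

class DynamicPhaseController:
    def __init__(self, k):
        self.k = k                  # stagnation threshold (iterations)
        self.stall = 0              # iterations since the last gbest improvement
        self.phase_idx = 0
        self.best = float("inf")

    def update(self, gbest_value):
        """Record the current gbest; switch phase after k stagnant iterations."""
        if gbest_value < self.best:
            self.best = gbest_value
            self.stall = 0
        else:
            self.stall += 1
            if self.stall >= self.k:
                self.phase_idx = (self.phase_idx + 1) % len(PHASES)
                self.stall = 0
        return PHASES[self.phase_idx]
```

The stagnation counter itself is the extra bookkeeping the paragraph above refers to: it costs a comparison per iteration, plus whatever reinitialization a phase change implies.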

[Figure 6, panels (d)-(l): convergence curves on f3 (step), f4 (quartic noise), f5 (Rosenbrock), f6 (Schwefel), f7 (Rastrigin), f8 (noncontinuous Rastrigin), f9 (Ackley), f10 (Griewank), and f11 (generalized penalized), same axes and legend as panels (a)-(c).]

Figure 6: Convergence curves of the error value for functions with 50 dimensions. (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.


[Figure 7, panels (a)-(i): convergence curves of the error value (log10 scale) versus iteration (0-10000) on f0-f8 with 100 dimensions, for the same eight algorithms as in Figure 6.]

[Figure 7, panels (j)-(l): convergence curves on f9 (Ackley), f10 (Griewank), and f11 (generalized penalized).]

Figure 7: Convergence curves of the error value for functions with 100 dimensions. (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.

Table 6: Computational time (seconds) spent in 50 runs for the functions with 50 dimensions.

Function: SPSO-BK (star, ring); SPSO-C (star, ring); CPSO (star, ring); PSOFP; PSODP
f0: 1414.753, 11949.538; 1542.324, 13938.880; 1273.165, 7790.309; 6372.494; 4750.059
f1: 1496.900, 9773.025; 1563.205, 12035.832; 1382.238, 7332.784; 6681.761; 4968.441
f2: 2482.647, 6639.135; 2310.108, 3269.424; 2269.095, 7721.789; 2913.018; 3078.365
f3: 1478.375, 16921.071; 1521.913, 17374.540; 1382.383, 7162.581; 6652.441; 5444.709
f4: 1925.499, 7772.614; 1919.068, 8520.861; 1960.920, 4139.658; 4326.319; 4405.604
f5: 1318.303, 2036.976; 1373.674, 1463.761; 1378.663, 7669.009; 2520.464; 1975.675
f6: 2470.545, 5990.113; 2623.676, 6033.503; 2290.871, 4877.298; 2256.345; 2412.337
f7: 2113.601, 5688.381; 2121.822, 5068.316; 1886.031, 7616.392; 7467.872; 5938.517
f8: 2100.288, 11884.555; 2231.706, 11809.223; 2136.926, 7658.513; 6080.168; 5191.364
f9: 2136.390, 8375.839; 2032.908, 11794.232; 1820.255, 7555.640; 7641.055; 5421.310
f10: 2065.978, 12449.973; 2116.536, 14096.724; 1940.266, 8291.956; 7211.344; 5502.234
f11: 3416.110, 12300.763; 3401.308, 12956.684; 3275.602, 9198.441; 8640.743; 6978.836

Table 7: Computational time (seconds) spent in 50 runs for the functions with 100 dimensions.

Function: SPSO-BK (star, ring); SPSO-C (star, ring); CPSO (star, ring); PSOFP; PSODP
f0: 2439.461, 13788.133; 2387.528, 19368.303; 2288.345, 14618.672; 6290.300; 7872.999
f1: 2631.645, 5947.633; 2784.290, 14200.705; 2558.799, 13713.888; 2605.299; 2597.703
f2: 6224.394, 15858.146; 6286.117, 9271.755; 6211.243, 16247.289; 8695.341; 8725.241
f3: 2780.287, 32980.839; 2920.504, 33031.495; 2611.934, 13673.627; 13038.592; 9797.108
f4: 3580.799, 14603.739; 3560.119, 15913.160; 3603.514, 7116.922; 7213.646; 6463.844
f5: 2365.184, 3113.102; 2482.506, 2493.193; 2374.182, 14567.880; 2515.607; 2592.736
f6: 4488.993, 11025.489; 4660.795, 11042.448; 4348.893, 8740.973; 4406.255; 4224.402
f7: 3415.798, 6637.398; 3443.478, 7308.557; 3370.671, 13815.354; 8914.041; 8854.871
f8: 3687.186, 13082.763; 4055.594, 11594.827; 3806.574, 13896.927; 10236.844; 8863.957
f9: 3451.321, 7325.952; 3582.724, 12292.192; 3369.159, 14677.248; 7434.061; 8147.588
f10: 3779.289, 15561.002; 3928.346, 21955.736; 3782.778, 16415.258; 8235.398; 8805.851
f11: 6193.468, 9502.162; 6356.353, 12206.493; 6482.199, 19206.593; 8620.097; 9624.388


5. Conclusions

Swarm intelligence algorithms perform in varied manners on the same optimization problem. Even within one kind of optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to obtain the connection between an algorithm and a problem. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building block thesis, a PSO algorithm with a multiple phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phase (PSOFP) algorithm and the PSO with dynamic phase (PSODP) algorithm, were tested on the 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods. However, the number of phases could be increased, or the type of phase could be changed, in other PSO algorithms with multiple phases. Beyond the PSO algorithms, the building block thesis could be utilized in other swarm optimization algorithms. Based on the analysis of different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple phase strategy in other swarm algorithms is our future work.

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39-43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942-1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81-86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35-62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445-458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362-1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238-2251, 2016.

[11] X. Xia, L. Gui, F. Yu, et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862-4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43-60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067-3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587-602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang, et al., Eds., pp. 347-357, Springer International Publishing, New York, NY, USA, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227-239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373-391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204-210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050-1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120-127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945-1950, Washington, DC, USA, July 1999.



(i) SPSO-BK settings: w = 0.72984, φ1 = φ2 = 2.05, i.e., c1 = c2 = 1.496172 [24].

(ii) SPSO-C settings: w = 1/(2 ln(2)) ≃ 0.721, c1 = c2 = 0.5 + ln(2) ≃ 1.193 [25].

(iii) Canonical PSO (CPSO) settings: w = w_min + (w_max − w_min) × (Iteration − iter)/Iteration, i.e., w is linearly decreased during the search, with w_max = 0.9, w_min = 0.4, and c1 = c2 = 2 [26].
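The three settings above can be written out directly. The sketch below (our naming, not the authors' code) computes the SPSO-C constants and the CPSO linearly decreasing inertia weight, and applies them in a one-dimensional form of the standard PSO velocity update:

```python
import math
import random

# Constants as listed in the settings above.
W_BK, C_BK = 0.72984, 1.496172          # SPSO-BK: w and c1 = c2
W_C = 1.0 / (2.0 * math.log(2.0))       # SPSO-C: w = 1/(2 ln 2) ~ 0.721
C_C = 0.5 + math.log(2.0)               # SPSO-C: c1 = c2 = 0.5 + ln 2 ~ 1.193

def cpso_weight(iteration, max_iteration, w_max=0.9, w_min=0.4):
    """CPSO inertia weight, linearly decreased from w_max to w_min."""
    return w_min + (w_max - w_min) * (max_iteration - iteration) / max_iteration

def update_velocity(v, x, pbest, nbest, w, c1, c2, rng=random):
    """One-dimensional standard PSO velocity update with inertia weight w."""
    return w * v + c1 * rng.random() * (pbest - x) + c2 * rng.random() * (nbest - x)
```

With this helper, `cpso_weight(0, 10000)` is 0.9 at the start of the run and decreases to 0.4 at the final iteration, matching the schedule described in (iii).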

4.2. Experimental Results. The experimental results for functions with 50 dimensions are given in Tables 2 and 3. The results for unimodal functions f0-f4 are given in Table 2, while the results for multimodal functions f5-f11 are given in Tables 2 and 3. The aim of the experimental study is not to validate the search performance of the proposed algorithms but to discuss the combination of different search phases. This is a preliminary study on learning the characteristics of benchmark functions.

To validate the generalization ability of the PSO variants, the same experiments were conducted on the functions with 100 dimensions. All parameter settings are the same as for the functions with 50 dimensions. The results for unimodal functions f0-f4 are given in Table 4, while the results for multimodal functions f5-f11 are given in Tables 4 and 5. From the experimental results, it can be seen that the proposed algorithms obtain good solutions on most benchmark functions. Based on the combination of different phases, the proposed algorithms perform more robustly than an algorithm with a single phase.

The convergence graphs of the error values for each algorithm on all functions with 50 and 100 dimensions are shown in Figures 6 and 7, respectively. In these figures, each curve represents the variation of the mean error value over the iterations for one algorithm. The six PSO variants are compared with the two proposed algorithms. The suffixes "S" and "R" indicate the star and ring structures of the PSO variants, respectively; for example, "SPSOBKS" and "SPSOBKR" denote the SPSO-BK algorithm with the star and ring structures. The robustness of the proposed algorithms can also be observed from the convergence graphs.
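The error value plotted in Figures 6 and 7 can be computed as sketched below, assuming (as is standard) that the error is the gap between the obtained fitness and the f_min of Table 1, and that each curve averages the 50 independent runs at every iteration; the function names are illustrative, not from the paper:

```python
# Sketch of the quantity behind the convergence curves: mean error per
# iteration, averaged over all independent runs of one algorithm.
def error_value(fitness, f_min):
    """Gap between an obtained fitness value and the known optimum."""
    return fitness - f_min

def mean_error_per_iteration(runs, f_min):
    """runs: list of per-run gbest-fitness histories, one value per iteration."""
    iterations = len(runs[0])
    return [sum(error_value(run[t], f_min) for run in runs) / len(runs)
            for t in range(iterations)]
```

Plotting these means on a log10 y-axis reproduces the shape of the curves in the figures.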

4.3. Computational Time. Tables 6 and 7 give the total computing time (in seconds) of 50 runs in the experiments for the functions with 50 and 100 dimensions, respectively. Normally, the evaluation speed of a PSO algorithm with the ring structure is significantly slower than that of the same PSO algorithm with a star structure.
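One plausible source of this gap, not stated explicitly in the paper, is the cost of determining the neighborhood best: under the star topology a single global best serves every particle, while under the ring topology a neighborhood best must be recomputed for each particle from its two neighbors. A minimal sketch under that assumption (function names are ours):

```python
# Star vs. ring neighborhood-best lookup: one scan for the whole swarm
# versus one small scan per particle.
def star_nbest(fitness):
    """Index of the single global best, shared by all particles."""
    return min(range(len(fitness)), key=fitness.__getitem__)

def ring_nbest(fitness, i):
    """Index of the best among particle i and its two ring neighbours."""
    n = len(fitness)
    candidates = [(i - 1) % n, i, (i + 1) % n]
    return min(candidates, key=fitness.__getitem__)
```

With a swarm of n particles, the star lookup is one O(n) scan per iteration, while the ring version performs n small lookups and maintains n distinct neighborhood bests, which adds per-particle overhead in a straightforward implementation.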

[Figure 4 flowchart: start → initialization → update velocity and position → solution evaluation → update pbest and nbest → multiple fixed phases strategy → stop condition (yes/no) → end.]

Figure 4: The process of the PSO algorithm with multiple fixed phases.

[Figure 5 flowchart: as in Figure 4, but the multiple dynamic phases strategy is triggered only when gbest has not changed after k iterations.]

Figure 5: The process of the PSO algorithm with multiple dynamic phases.


Table 2: Experimental results for functions f0-f5 with 50 dimensions (Best / Mean / StD per function).

f0 (parabolic); f1 (Schwefel's P2.22):
SPSO-BK (star): -450 / -450 / 4.52E-13; -330 / -329.9999 / 1.35E-11
SPSO-BK (ring): -449.9999 / -449.9999 / 2.11E-12; -329.9999 / -329.9999 / 4.37E-12
SPSO-C (star): -449.9999 / -449.9999 / 4.12E-09; -329.9999 / -329.9999 / 3.08E-05
SPSO-C (ring): -449.9999 / -449.9999 / 3.13E-12; -329.9999 / -329.9999 / 6.01E-12
CPSO (star): -450 / -450 / 5.90E-14; -330 / -329.8 / 1.4
CPSO (ring): -373.3774 / -191.0975 / 98.981; -329.8322 / -325.9999 / 2.151
PSOFP: -450 / -450 / 1.47E-12; -329.9999 / -329.9999 / 6.25E-09
PSODP: -450 / -450 / 1.91E-12; -329.9999 / -329.9999 / 1.45E-06

f2 (Schwefel's P1.2); f3 (step):
SPSO-BK (star): 450.0000 / 550.0000 / 699.9999; 180 / 188.36 / 13.936
SPSO-BK (ring): 455.3668 / 495.6626 / 38.833; 266 / 360.46 / 50.708
SPSO-C (star): 450.0000 / 450.3961 / 2.748; 207 / 343.66 / 106.891
SPSO-C (ring): 450.0958 / 451.3167 / 2.5006; 373 / 714.3 / 167.178
CPSO (star): 476.3968 / 602.0931 / 69.7433; 180 / 180 / 0
CPSO (ring): 3121.6019 / 4966.1359 / 76.9950; 395 / 534.64 / 70.468
PSOFP: 450.0476 / 454.3896 / 29.0223; 180 / 180.4 / 2.107
PSODP: 450.0013 / 460.0571 / 34.158; 180 / 187.54 / 26.872

f4 (quartic noise); f5 (Rosenbrock):
SPSO-BK (star): 120.0022 / 120.0044 / 0.001; -449.9976 / -418.4438 / 24.245
SPSO-BK (ring): 120.0281 / 120.0630 / 0.023; -444.3137 / -415.0037 / 9.573
SPSO-C (star): 120.0408 / 120.1508 / 0.070; -449.9999 / -448.3351 / 2.378
SPSO-C (ring): 120.0467 / 120.2094 / 0.085; -443.4564 / -411.7220 / 17.957
CPSO (star): 120.0037 / 120.0090 / 0.002; -421.5111 / -389.7360 / 27.273
CPSO (ring): 120.0194 / 120.0321 / 0.006; -218.8776 / 193.7953 / 156.862
PSOFP: 120.0026 / 120.0061 / 0.0024; -439.0752 / -395.1315 / 31.0671
PSODP: 120.0029 / 120.0086 / 0.0076; -444.2963 / -405.6357 / 25.7581

Table 1: The benchmark functions used in our experimental study, where n is the dimension of each problem, the global optimum is x*, f_min is the minimum value of the function, and S ⊆ R^n.

Parabolic: $f_0(x) = \sum_{i=1}^{n} x_i^2$, n = 100, S = [-100, 100]^n, f_min = -450
Schwefel's P2.22: $f_1(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$, n = 100, S = [-10, 10]^n, f_min = -330
Schwefel's P1.2: $f_2(x) = \sum_{i=1}^{n} (\sum_{k=1}^{i} x_k)^2$, n = 100, S = [-100, 100]^n, f_min = 450
Step: $f_3(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$, n = 100, S = [-100, 100]^n, f_min = 180
Quartic noise: $f_4(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$, n = 100, S = [-1.28, 1.28]^n, f_min = 120
Rosenbrock: $f_5(x) = \sum_{i=1}^{n-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$, n = 100, S = [-10, 10]^n, f_min = -450
Schwefel: $f_6(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}) + 418.9829 n$, n = 100, S = [-500, 500]^n, f_min = -330
Rastrigin: $f_7(x) = \sum_{i=1}^{n} [x_i^2 - 10 \cos(2\pi x_i) + 10]$, n = 100, S = [-5.12, 5.12]^n, f_min = 450
Noncontinuous Rastrigin: $f_8(x) = \sum_{i=1}^{n} [y_i^2 - 10 \cos(2\pi y_i) + 10]$, where $y_i = x_i$ if $|x_i| < 1/2$ and $y_i = \mathrm{round}(2 x_i)/2$ if $|x_i| \ge 1/2$, n = 100, S = [-5.12, 5.12]^n, f_min = 180
Ackley: $f_9(x) = -20 \exp(-0.2 \sqrt{(1/n) \sum_{i=1}^{n} x_i^2}) - \exp((1/n) \sum_{i=1}^{n} \cos(2\pi x_i)) + 20 + e$, n = 100, S = [-32, 32]^n, f_min = 120
Griewank: $f_{10}(x) = (1/4000) \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos(x_i/\sqrt{i}) + 1$, n = 100, S = [-600, 600]^n, f_min = -450
Generalized penalized: $f_{11}(x) = (\pi/n) \{10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_n - 1)^2\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + (1/4)(x_i + 1)$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k(-x_i - a)^m$ if $x_i < -a$. n = 100, S = [-50, 50]^n, f_min = -330
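For reference, three of the functions in Table 1 are sketched below in their standard textbook forms, without the constant offsets (such as the bias that shifts f_min to -450 or 120) used in the experiments; a hedged illustration, not the authors' code:

```python
import math

# Unbiased textbook forms of three Table 1 benchmarks: the experiments add a
# constant offset to each, which does not change the optimizer's behavior.
def parabolic(x):
    """f0, the sphere function: sum of squares."""
    return sum(v * v for v in x)

def rastrigin(x):
    """f7: highly multimodal, global minimum 0 at the origin."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):
    """f9: global minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```

Each returns 0 at the origin, so adding the bias from Table 1 (e.g., 450 for f7) reproduces the tabulated f_min values.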


Table 3: Experimental results for functions f6-f11 with 50 dimensions (Best / Mean / StD per function).

f6 (Schwefel); f7 (Rastrigin):
SPSO-BK (star): 6603.8424 / 9786.6971 / 1359.137; 497.7579 / 523.1492 / 13.619
SPSO-BK (ring): 16100.4628 / 17437.3017 / 673.960; 497.7580 / 530.6957 / 14.832
SPSO-C (star): 7684.6670 / 9898.3600 / 1218.2453; 486.8134 / 520.5225 / 15.146
SPSO-C (ring): 16299.0367 / 17523.4040 / 548.640; 499.7950 / 530.1292 / 14.646
CPSO (star): 6934.3602 / 8761.8199 / 854.741; 495.7680 / 529.6818 / 22.117
CPSO (ring): 15875.0546 / 17592.1651 / 661.051; 578.8806 / 631.1157 / 16.836
PSOFP: 6144.7441 / 8472.2288 / 1171.500; 504.7226 / 528.2633 / 13.507
PSODP: 5950.1707 / 8369.7773 / 1038.811; 493.7781 / 525.2104 / 20.126

f8 (noncontinuous Rastrigin); f9 (Ackley):
SPSO-BK (star): 186 / 224.58 / 22.039; 120 / 121.773 / 0.649
SPSO-BK (ring): 232.0000 / 262.7717 / 14.463; 120.0000 / 120.0213 / 0.149
SPSO-C (star): 227 / 269.985 / 21.175; 121.8030 / 124.3168 / 1.090
SPSO-C (ring): 239.0000 / 277.1382 / 18.542; 120.0000 / 120.3807 / 0.558
CPSO (star): 211 / 240.6001 / 14.448; 120 / 120 / 7.15E-14
CPSO (ring): 279.2935 / 321.5721 / 17.248; 122.7702 / 124.2613 / 0.529
PSOFP: 210 / 237.3073 / 19.334; 120 / 120.0499 / 0.248
PSODP: 200 / 241.0076 / 20.067; 120 / 120.0484 / 0.237

f10 (Griewank); f11 (generalized penalized):
SPSO-BK (star): -450 / -449.9769 / 0.033; -330 / -329.7654 / 0.330
SPSO-BK (ring): -449.9999 / -449.9997 / 0.002; -329.9999 / -329.9987 / 0.008
SPSO-C (star): -449.9999 / -449.8900 / 0.228; -329.9999 / -329.0989 / 1.123
SPSO-C (ring): -449.9999 / -449.9989 / 0.004; -329.9999 / -329.9912 / 0.039
CPSO (star): -450 / -449.9902 / 0.012; -330 / -329.9975 / 0.012
CPSO (ring): -448.2587 / -445.2495 / 1.087; -329.2803 / -327.6535 / 0.783
PSOFP: -450 / -449.9874 / 0.019; -330 / -329.9763 / 0.051
PSODP: -450 / -449.9902 / 0.017; -330 / -329.9763 / 0.055




Table 6 Computational time (seconds) spent in 50 runs for the function with 50 dimensions

FunctionSPSO-BK SPSO-C CPSO

PSOFP PSODPStar Ring Star Ring Star Ring

f0 1414753 11949538 1542324 13938880 1273165 7790309 6372494 4750059f1 1496900 9773025 1563205 12035832 1382238 7332784 6681761 4968441f2 2482647 6639135 2310108 3269424 2269095 7721789 2913018 3078365f3 1478375 16921071 1521913 17374540 1382383 7162581 6652441 5444709f4 1925499 7772614 1919068 8520861 1960920 4139658 4326319 4405604f5 1318303 2036976 1373674 1463761 1378663 7669009 2520464 1975675f6 2470545 5990113 2623676 6033503 2290871 4877298 2256345 2412337f7 2113601 5688381 2121822 5068316 1886031 7616392 7467872 5938517f8 2100288 11884555 2231706 11809223 2136926 7658513 6080168 5191364f9 2136390 8375839 2032908 11794232 1820255 7555640 7641055 5421310f10 2065978 12449973 2116536 14096724 1940266 8291956 7211344 5502234f11 3416110 12300763 3401308 12956684 3275602 9198441 8640743 6978836

Table 7: Computational time (seconds) spent in 50 runs for the functions with 100 dimensions.

| Function | SPSO-BK (Star) | SPSO-BK (Ring) | SPSO-C (Star) | SPSO-C (Ring) | CPSO (Star) | CPSO (Ring) | PSOFP | PSODP |
|----------|----------------|----------------|---------------|---------------|-------------|-------------|-------|-------|
| f0  | 243.9461 | 1378.8133 | 238.7528 | 1936.8303 | 228.8345 | 1461.8672 | 629.0300  | 787.2999 |
| f1  | 263.1645 | 594.7633  | 278.4290 | 1420.0705 | 255.8799 | 1371.3888 | 260.5299  | 259.7703 |
| f2  | 622.4394 | 1585.8146 | 628.6117 | 927.1755  | 621.1243 | 1624.7289 | 869.5341  | 872.5241 |
| f3  | 278.0287 | 3298.0839 | 292.0504 | 3303.1495 | 261.1934 | 1367.3627 | 1303.8592 | 979.7108 |
| f4  | 358.0799 | 1460.3739 | 356.0119 | 1591.3160 | 360.3514 | 711.6922  | 721.3646  | 646.3844 |
| f5  | 236.5184 | 311.3102  | 248.2506 | 249.3193  | 237.4182 | 1456.7880 | 251.5607  | 259.2736 |
| f6  | 448.8993 | 1102.5489 | 466.0795 | 1104.2448 | 434.8893 | 874.0973  | 440.6255  | 422.4402 |
| f7  | 341.5798 | 663.7398  | 344.3478 | 730.8557  | 337.0671 | 1381.5354 | 891.4041  | 885.4871 |
| f8  | 368.7186 | 1308.2763 | 405.5594 | 1159.4827 | 380.6574 | 1389.6927 | 1023.6844 | 886.3957 |
| f9  | 345.1321 | 732.5952  | 358.2724 | 1229.2192 | 336.9159 | 1467.7248 | 743.4061  | 814.7588 |
| f10 | 377.9289 | 1556.1002 | 392.8346 | 2195.5736 | 378.2778 | 1641.5258 | 823.5398  | 880.5851 |
| f11 | 619.3468 | 950.2162  | 635.6353 | 1220.6493 | 648.2199 | 1920.6593 | 862.0097  | 962.4388 |


5. Conclusions

Swarm intelligence algorithms perform in varied manners on the same optimization problem. Even for one kind of optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to obtain the connection between an algorithm and a problem. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building blocks thesis, a PSO algorithm with a multiple-phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phase (PSOFP) algorithm and the PSO with dynamic phase (PSODP) algorithm, were tested on the 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods; however, the number of phases could be increased, or the type of phase could be changed, for other PSO algorithms with multiple phases. Besides the PSO algorithms, the building block thesis could be utilized in other swarm optimization algorithms. Based on the analysis of different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple-phase strategy with other swarm algorithms is our future work.
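The two-phase idea described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: a global-best PSO whose inertia weight and acceleration coefficients (all assumed values) are swapped at a fixed iteration, in the spirit of PSOFP.

```python
import random

def sphere(x):
    """f0-style test function: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

# Two illustrative phase settings (assumed values, not taken from the paper):
# phase 0 favors exploration, phase 1 favors exploitation.
PHASES = [
    {"w": 0.9, "c1": 2.0, "c2": 1.0},
    {"w": 0.4, "c1": 1.0, "c2": 2.0},
]

def pso_fixed_phase(f, dim=10, n=20, iters=200, switch_at=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-100, 100) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        # Fixed-phase switch: parameters change once, at iteration `switch_at`.
        ph = PHASES[0] if t < switch_at else PHASES[1]
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (ph["w"] * vel[i][d]
                             + ph["c1"] * rng.random() * (pbest[i][d] - pos[i][d])
                             + ph["c2"] * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gval

print(pso_fixed_phase(sphere) < sphere([100.0] * 10))  # True: the swarm improves on a corner point
```

A dynamic-phase variant would replace the `t < switch_at` test with a data-driven trigger, for example a stagnation check on the best error.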

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942–1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81–86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362–1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238–2251, 2016.

[11] X. Xia, L. Gui, F. Yu, et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862–4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43–60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang, et al., Eds., pp. 347–357, Springer International Publishing, New York, NY, USA, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227–239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373–391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050–1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120–127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945–1950, Washington, DC, USA, July 1999.



Table 2: Experimental results for functions f0–f5 with 50 dimensions.

f0 (parabolic) and f1 (Schwefel's P2.22):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | -450      | -450      | 4.52E-13 | -330      | -329.9999 | 1.35E-11 |
| SPSO-BK (Ring) | -449.9999 | -449.9999 | 2.11E-12 | -329.9999 | -329.9999 | 4.37E-12 |
| SPSO-C (Star)  | -449.9999 | -449.9999 | 4.12E-09 | -329.9999 | -329.9999 | 3.08E-05 |
| SPSO-C (Ring)  | -449.9999 | -449.9999 | 3.13E-12 | -329.9999 | -329.9999 | 6.01E-12 |
| CPSO (Star)    | -450      | -450      | 5.90E-14 | -330      | -329.8    | 1.4      |
| CPSO (Ring)    | -373.3774 | -191.0975 | 98.981   | -329.8322 | -325.9999 | 2.151    |
| PSOFP          | -450      | -450      | 1.47E-12 | -329.9999 | -329.9999 | 6.25E-09 |
| PSODP          | -450      | -450      | 1.91E-12 | -329.9999 | -329.9999 | 1.45E-06 |

f2 (Schwefel's P1.2) and f3 (step):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 450.0000  | 550.0000  | 699.9999 | 180 | 188.36 | 13.936  |
| SPSO-BK (Ring) | 455.3668  | 495.6626  | 38.833   | 266 | 360.46 | 50.708  |
| SPSO-C (Star)  | 450.0000  | 450.3961  | 2.748    | 207 | 343.66 | 106.891 |
| SPSO-C (Ring)  | 450.0958  | 451.3167  | 2.5006   | 373 | 714.3  | 167.178 |
| CPSO (Star)    | 476.3968  | 602.0931  | 69.7433  | 180 | 180    | 0       |
| CPSO (Ring)    | 3121.6019 | 4966.1359 | 769.950  | 395 | 534.64 | 70.468  |
| PSOFP          | 450.0476  | 454.3896  | 29.0223  | 180 | 180.4  | 2.107   |
| PSODP          | 450.0013  | 460.0571  | 34.158   | 180 | 187.54 | 26.872  |

f4 (quartic noise) and f5 (Rosenbrock):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 120.0022 | 120.0044 | 0.001  | -449.9976 | -418.4438 | 24.245  |
| SPSO-BK (Ring) | 120.0281 | 120.0630 | 0.023  | -444.3137 | -415.0037 | 9.573   |
| SPSO-C (Star)  | 120.0408 | 120.1508 | 0.070  | -449.9999 | -448.3351 | 2.378   |
| SPSO-C (Ring)  | 120.0467 | 120.2094 | 0.085  | -443.4564 | -411.7220 | 17.957  |
| CPSO (Star)    | 120.0037 | 120.0090 | 0.002  | -421.5111 | -389.7360 | 27.273  |
| CPSO (Ring)    | 120.0194 | 120.0321 | 0.006  | -218.8776 | 193.7953  | 1568.62 |
| PSOFP          | 120.0026 | 120.0061 | 0.0024 | -439.0752 | -395.1315 | 31.0671 |
| PSODP          | 120.0029 | 120.0086 | 0.0076 | -444.2963 | -405.6357 | 25.7581 |

Table 1: The benchmark functions used in our experimental study, where n is the dimension of each problem, x* is the global optimum, fmin is the minimum value of the function, and S ⊆ R^n.

Parabolic: f_0(x) = \sum_{i=1}^{n} x_i^2, n = 100, S = [-100, 100]^n, fmin = -450.0

Schwefel's P2.22: f_1(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|, n = 100, S = [-10, 10]^n, fmin = -330.0

Schwefel's P1.2: f_2(x) = \sum_{i=1}^{n} (\sum_{k=1}^{i} x_k)^2, n = 100, S = [-100, 100]^n, fmin = 450.0

Step: f_3(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2, n = 100, S = [-100, 100]^n, fmin = 180.0

Quartic noise: f_4(x) = \sum_{i=1}^{n} i x_i^4 + random[0, 1), n = 100, S = [-1.28, 1.28]^n, fmin = 120.0

Rosenbrock: f_5(x) = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2], n = 100, S = [-10, 10]^n, fmin = -450.0

Schwefel: f_6(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}) + 418.9829 n, n = 100, S = [-500, 500]^n, fmin = -330.0

Rastrigin: f_7(x) = \sum_{i=1}^{n} [x_i^2 - 10\cos(2\pi x_i) + 10], n = 100, S = [-5.12, 5.12]^n, fmin = 450.0

Noncontinuous Rastrigin: f_8(x) = \sum_{i=1}^{n} [y_i^2 - 10\cos(2\pi y_i) + 10], where y_i = x_i if |x_i| < 1/2 and y_i = round(2x_i)/2 if |x_i| >= 1/2, n = 100, S = [-5.12, 5.12]^n, fmin = 180.0

Ackley: f_9(x) = -20\exp(-0.2\sqrt{(1/n)\sum_{i=1}^{n} x_i^2}) - \exp((1/n)\sum_{i=1}^{n} \cos(2\pi x_i)) + 20 + e, n = 100, S = [-32, 32]^n, fmin = 120.0

Griewank: f_10(x) = (1/4000)\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos(x_i / \sqrt{i}) + 1, n = 100, S = [-600, 600]^n, fmin = -450.0

Generalized penalized: f_11(x) = (\pi/n)\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10\sin^2(\pi y_{i+1})] + (y_n - 1)^2\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (1/4)(x_i + 1) and u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a < x_i < a; k(-x_i - a)^m if x_i < -a; n = 100, S = [-50, 50]^n, fmin = -330.0
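For reference, the functions in Table 1 translate directly into code. The sketch below implements three of them; it omits the bias terms that shift fmin to the tabulated values (e.g., 450.0 for Rastrigin), so each function here has its minimum 0 at the origin.

```python
import math

def rastrigin(x):
    # f7 in Table 1, without the bias that shifts the optimum to 450.0.
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def ackley(x):
    # f9 in Table 1, without the bias; the +20 + e cancels the two exponentials at 0.
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2 * math.pi * v) for v in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

def griewank(x):
    # f10 in Table 1, without the bias; note the 1-based index inside the cosine product.
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

origin = [0.0] * 100
print(rastrigin(origin), griewank(origin))  # 0.0 0.0
print(abs(ackley(origin)) < 1e-12)          # True
```

Evaluating at the origin is a quick sanity check that an implementation matches the formula before adding any shift or bias.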


Table 3: Experimental results for functions f6–f11 with 50 dimensions.

f6 (Schwefel) and f7 (Rastrigin):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 6603.8424  | 9786.6971  | 1359.137  | 497.7579 | 523.1492 | 13.619 |
| SPSO-BK (Ring) | 16100.4628 | 17437.3017 | 673.960   | 497.7580 | 530.6957 | 14.832 |
| SPSO-C (Star)  | 7684.6670  | 9898.3600  | 1218.2453 | 486.8134 | 520.5225 | 15.146 |
| SPSO-C (Ring)  | 16299.0367 | 17523.4040 | 548.640   | 499.7950 | 530.1292 | 14.646 |
| CPSO (Star)    | 6934.3602  | 8761.8199  | 854.741   | 495.7680 | 529.6818 | 22.117 |
| CPSO (Ring)    | 15875.0546 | 17592.1651 | 661.051   | 578.8806 | 631.1157 | 16.836 |
| PSOFP          | 6144.7441  | 8472.2288  | 1171.500  | 504.7226 | 528.2633 | 13.507 |
| PSODP          | 5950.1707  | 8369.7773  | 1038.811  | 493.7781 | 525.2104 | 20.126 |

f8 (noncontinuous Rastrigin) and f9 (Ackley):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 186      | 224.58   | 22.039 | 120      | 121.773  | 0.649    |
| SPSO-BK (Ring) | 232.0000 | 262.7717 | 14.463 | 120.0000 | 120.0213 | 0.149    |
| SPSO-C (Star)  | 227      | 269.985  | 21.175 | 121.8030 | 124.3168 | 1.090    |
| SPSO-C (Ring)  | 239.0000 | 277.1382 | 18.542 | 120.0000 | 120.3807 | 0.558    |
| CPSO (Star)    | 211      | 240.6001 | 14.448 | 120      | 120      | 7.15E-14 |
| CPSO (Ring)    | 279.2935 | 321.5721 | 17.248 | 122.7702 | 124.2613 | 0.529    |
| PSOFP          | 210      | 237.3073 | 19.334 | 120      | 120.0499 | 0.248    |
| PSODP          | 200      | 241.0076 | 20.067 | 120      | 120.0484 | 0.237    |

f10 (Griewank) and f11 (generalized penalized):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | -450      | -449.9769 | 0.033 | -330      | -329.7654 | 0.330 |
| SPSO-BK (Ring) | -449.9999 | -449.9997 | 0.002 | -329.9999 | -329.9987 | 0.008 |
| SPSO-C (Star)  | -449.9999 | -449.8900 | 0.228 | -329.9999 | -329.0989 | 1.123 |
| SPSO-C (Ring)  | -449.9999 | -449.9989 | 0.004 | -329.9999 | -329.9912 | 0.039 |
| CPSO (Star)    | -450      | -449.9902 | 0.012 | -330      | -329.9975 | 0.012 |
| CPSO (Ring)    | -448.2587 | -445.2495 | 1.087 | -329.2803 | -327.6535 | 0.783 |
| PSOFP          | -450      | -449.9874 | 0.019 | -330      | -329.9763 | 0.051 |
| PSODP          | -450      | -449.9902 | 0.017 | -330      | -329.9763 | 0.055 |

Table 4: Experimental results for functions f0–f5 with 100 dimensions.

f0 (parabolic) and f1 (Schwefel's P2.22):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | -450      | -449.9999 | 2.97E-10 | -329.9999 | -329.9999 | 1.90E-07 |
| SPSO-BK (Ring) | -449.9999 | -449.9999 | 2.72E-11 | -329.9999 | -329.9999 | 7.34E-11 |
| SPSO-C (Star)  | -449.9999 | -449.9947 | 0.0369   | -329.9999 | -329.8556 | 0.4552   |
| SPSO-C (Ring)  | -449.9999 | -449.9999 | 1.40E-11 | -329.9999 | -329.9999 | 3.09E-11 |
| CPSO (Star)    | -449.9999 | -449.9999 | 2.13E-09 | -329.9999 | -329.5999 | 1.9595   |
| CPSO (Ring)    | -149.6541 | 1344.4583 | 532.677  | -325.8224 | -309.5554 | 7.5546   |
| PSOFP          | -449.9999 | -449.9999 | 1.36E-07 | -329.9999 | -329.9999 | 1.36E-09 |
| PSODP          | -449.9999 | -449.9999 | 3.34E-07 | -329.9999 | -329.9999 | 0.0006   |

f2 (Schwefel's P1.2) and f3 (step):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 513.6355  | 1602.1760  | 300.6812 | 208  | 377.32  | 137.165 |
| SPSO-BK (Ring) | 2426.1604 | 5355.3297  | 1565.104 | 671  | 978.24  | 174.267 |
| SPSO-C (Star)  | 555.4292  | 2998.8775  | 3037.685 | 1208 | 2683.48 | 755.401 |
| SPSO-C (Ring)  | 1188.1057 | 2075.1955  | 769.974  | 1585 | 2448.46 | 531.394 |
| CPSO (Star)    | 5145.9671 | 10822.5242 | 3423.207 | 180  | 181.7   | 1.3     |
| CPSO (Ring)    | 2078.2011 | 2791.7191  | 316.8315 | 1791 | 2506.12 | 287.989 |
| PSOFP          | 1399.1752 | 3987.6253  | 575.0961 | 182  | 199.76  | 28.642  |
| PSODP          | 989.7956  | 2861.3122  | 565.5407 | 185  | 215.36  | 115.588 |

f4 (quartic noise) and f5 (Rosenbrock):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 120.0138 | 120.0278 | 0.0110 | -397.0471 | -329.9329 | 41.398   |
| SPSO-BK (Ring) | 120.1851 | 120.3265 | 0.1013 | -373.0928 | -346.4626 | 25.371   |
| SPSO-C (Star)  | 120.5708 | 121.3554 | 0.4501 | -397.2747 | -304.3235 | 40.5292  |
| SPSO-C (Ring)  | 120.4383 | 120.9911 | 0.3064 | -371.9901 | -339.0829 | 28.481   |
| CPSO (Star)    | 120.0379 | 120.0658 | 0.014  | -363.4721 | -294.8193 | 50.003   |
| CPSO (Ring)    | 120.1338 | 120.2321 | 0.043  | 606.9444  | 3907.8062 | 1468.905 |
| PSOFP          | 120.0224 | 120.0438 | 0.014  | -380.2421 | -290.0060 | 47.549   |
| PSODP          | 120.0219 | 120.0518 | 0.036  | -368.7163 | -299.5108 | 43.518   |


Table 5: Experimental results for functions f6–f11 with 100 dimensions.

f6 (Schwefel) and f7 (Rastrigin):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 16056.678 | 20638.485 | 2182.500 | 569.3949 | 611.7217  | 25.514  |
| SPSO-BK (Ring) | 34073.736 | 37171.637 | 944.548  | 560.5540 | 627.2546  | 25.699  |
| SPSO-C (Star)  | 16879.295 | 20909.081 | 2451.469 | 552.4806 | 598.6068  | 23.697  |
| SPSO-C (Ring)  | 34864.588 | 37164.686 | 841.829  | 569.4197 | 624.0863  | 22.440  |
| CPSO (Star)    | 15445.328 | 19374.588 | 1691.740 | 585.3142 | 639.1086  | 30.012  |
| CPSO (Ring)    | 34235.521 | 37141.853 | 835.129  | 943.5140 | 1021.6837 | 37.831  |
| PSOFP          | 14734.074 | 18723.674 | 1738.027 | 583.3243 | 648.9117  | 26.8680 |
| PSODP          | 14094.634 | 18899.269 | 1994.712 | 574.8307 | 639.9075  | 28.179  |

f8 (noncontinuous Rastrigin) and f9 (Ackley):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | 212.0000 | 275.1500 | 30.6297 | 122.5793 | 123.5817 | 0.7107 |
| SPSO-BK (Ring) | 307.2505 | 358.3455 | 27.633  | 120.0000 | 121.8010 | 0.4814 |
| SPSO-C (Star)  | 288.0000 | 374.4500 | 41.1362 | 126.3445 | 128.3848 | 1.1134 |
| SPSO-C (Ring)  | 332.2500 | 395.7141 | 34.1028 | 120.0000 | 121.8958 | 0.5702 |
| CPSO (Star)    | 322.0000 | 375.1452 | 31.873  | 120.0000 | 120.0000 | 0.0001 |
| CPSO (Ring)    | 580.5314 | 660.6651 | 34.490  | 123.0676 | 126.4343 | 0.7101 |
| PSOFP          | 357      | 360.7510 | 38.267  | 120.0000 | 121.6460 | 0.485  |
| PSODP          | 280      | 355.1481 | 29.234  | 120.0000 | 121.6061 | 0.562  |

f10 (Griewank) and f11 (generalized penalized):

| Algorithm | Best | Mean | StD | Best | Mean | StD |
|-----------|------|------|-----|------|------|-----|
| SPSO-BK (Star) | -450      | -449.8481 | 0.2569 | -330      | -329.6599 | 0.3758 |
| SPSO-BK (Ring) | -449.9999 | -449.9994 | 0.0022 | -329.9999 | -329.9880 | 0.0334 |
| SPSO-C (Star)  | -449.9999 | -449.4670 | 0.6696 | -329.9999 | -329.2211 | 0.8798 |
| SPSO-C (Ring)  | -449.9999 | -449.9995 | 0.0024 | -329.9999 | -329.9526 | 0.0640 |
| CPSO (Star)    | -449.9999 | -449.9933 | 0.0092 | -329.9999 | -329.9707 | 0.0432 |
| CPSO (Ring)    | -437.0433 | -428.5081 | 2.0322 | -320.4856 | -312.0457 | 4.2756 |
| PSOFP          | -449.9999 | -449.9817 | 0.032  | -329.9999 | -329.9069 | 0.151  |
| PSODP          | -449.9999 | -449.9828 | 0.031  | -329.9999 | -329.9224 | 0.108  |


The PSOFP and PSODP algorithms run slower than the traditional PSO algorithms on the test benchmark functions. The reason is that some search resources are allocated to the changes of search phases: a PSO variant with multiple phases needs to determine whether to switch phases and when the switch should happen.
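That per-iteration switching decision is the bookkeeping overhead in question. One plausible form of the test, shown here only as an illustration (the window size, tolerance, and rule are assumptions, not the paper's exact mechanism), is a stagnation check on the best-so-far error:

```python
def should_switch(history, window=50, tol=1e-8):
    """Dynamic-phase trigger (assumed rule): switch when the best error
    has improved by less than `tol` over the last `window` iterations."""
    if len(history) < window:
        return False  # not enough data yet to judge stagnation
    return history[-window] - history[-1] < tol

# A flat error history triggers a switch; a steadily improving one does not.
flat = [1.0] * 60
improving = [1.0 - 0.01 * i for i in range(60)]
print(should_switch(flat), should_switch(improving))  # True False
```

The check itself is cheap; the cost comes from maintaining the history and from re-initializing phase-specific state whenever a switch fires.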


Figure 6: Convergence curves of the error value (log10 scale) for functions with 50 dimensions: (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.

Discrete Dynamics in Nature and Society 9

f0 parabolic

2000 4000 6000 8000 100000

Iteration

10ndash15

10ndash10

10ndash5

100

105

1010

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(a)

f1 Schwefels P222

10ndash20

100

1020

1040

1060

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(b)

f2 Schwefels P12

2000 4000 6000 8000 100000

Iteration

103

104

105

106

107

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(c)f4 quartic noise

2000 4000 6000 8000 100000

Iteration

10ndash2

10ndash1

100

101

102

103

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(d)

f4 quartic noise

2000 4000 6000 8000 100000

Iteration

10ndash2

10ndash1

100

101

102

103

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(e)

f5 Rosenbrock

102

103

104

105

106

107

108

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(f )f6 Schwefeltimes104

2000 4000 6000 8000 100000

Iteration

2

22242628

3323436

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(g)

f7 Rastrigin

102

103

104

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(h)

f8 noncontinuous Rastrigin

2000 4000 6000 8000 100000

Iteration

101

102

103

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(i)

Figure 7 Continued

10 Discrete Dynamics in Nature and Society

f9 ackley

2000 4000 6000 8000 100000

Iteration

10ndash5

10ndash4

10ndash3

10ndash2

10ndash1

100

101

102

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(j)

f10 Griewank

2000 4000 6000 8000 100000

Iteration

10ndash4

10ndash2

100

102

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(k)

f11 generalized penalized

2000 4000 6000 8000 100000

Iteration

10ndash2

100

102

104

106

108

1010

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(l)

Figure 7 Convergence curves of error value for functions with 100 dimensions (a) f0 (b) f1 (c) f2 (d) f3 (e) f4 (f ) f5 (g) f6 (h) f7(i) f8 (j) f9 (k) f10 (l) f11

Table 6 Computational time (seconds) spent in 50 runs for the function with 50 dimensions

FunctionSPSO-BK SPSO-C CPSO

PSOFP PSODPStar Ring Star Ring Star Ring

f0 1414753 11949538 1542324 13938880 1273165 7790309 6372494 4750059f1 1496900 9773025 1563205 12035832 1382238 7332784 6681761 4968441f2 2482647 6639135 2310108 3269424 2269095 7721789 2913018 3078365f3 1478375 16921071 1521913 17374540 1382383 7162581 6652441 5444709f4 1925499 7772614 1919068 8520861 1960920 4139658 4326319 4405604f5 1318303 2036976 1373674 1463761 1378663 7669009 2520464 1975675f6 2470545 5990113 2623676 6033503 2290871 4877298 2256345 2412337f7 2113601 5688381 2121822 5068316 1886031 7616392 7467872 5938517f8 2100288 11884555 2231706 11809223 2136926 7658513 6080168 5191364f9 2136390 8375839 2032908 11794232 1820255 7555640 7641055 5421310f10 2065978 12449973 2116536 14096724 1940266 8291956 7211344 5502234f11 3416110 12300763 3401308 12956684 3275602 9198441 8640743 6978836

Table 7 Computational time (seconds) spent in 50 runs for the function with 100 dimensions

FunctionSPSO-BK SPSO-C CPSO

PSOFP PSODPStar Ring Star Ring Star Ring

f0 2439461 13788133 2387528 19368303 2288345 14618672 6290300 7872999f1 2631645 5947633 2784290 14200705 2558799 13713888 2605299 2597703f2 6224394 15858146 6286117 9271755 6211243 16247289 8695341 8725241f3 2780287 32980839 2920504 33031495 2611934 13673627 13038592 9797108f4 3580799 14603739 3560119 15913160 3603514 7116922 7213646 6463844f5 2365184 3113102 2482506 2493193 2374182 14567880 2515607 2592736f6 4488993 11025489 4660795 11042448 4348893 8740973 4406255 4224402f7 3415798 6637398 3443478 7308557 3370671 13815354 8914041 8854871f8 3687186 13082763 4055594 11594827 3806574 13896927 10236844 8863957f9 3451321 7325952 3582724 12292192 3369159 14677248 7434061 8147588f10 3779289 15561002 3928346 21955736 3782778 16415258 8235398 8805851f11 6193468 9502162 6356353 12206493 6482199 19206593 8620097 9624388

Discrete Dynamics in Nature and Society 11

5 Conclusions

(e swarm intelligence algorithms perform in variedmanner for the same optimization problem Even for a kindof optimization algorithm the change of structure or pa-rameters may lead to different results It is very difficult ifnot impossible to obtain the connection between an algo-rithm and a problem (us choosing a proper setting for analgorithm before the search is vital for the optimizationprocess In this paper we attempt to combine the strengthsof different settings for the PSO algorithm Based on thebuilding blocks thesis a PSO algorithm with a multiplephase strategy was proposed in this paper to solve single-objective numerical optimization problems Two variants ofthe PSO algorithm which were termed as the PSO with fixedphase (PSOFP) algorithm and PSO with dynamic phase(PSODP) algorithm were tested on the 12 benchmarkfunctions (e experimental results showed that the com-bination of different phases could enhance the robustness ofthe PSO algorithm

(ere are two phases in the proposed PSOFP andPSODP methods However the number of phases could beincreased or the type of phase could be changed for otherPSO algorithms with multiple phases Besides the PSO al-gorithms the building block thesis could be utilized in otherswarm optimization algorithms Based on the analysis ofdifferent components the strength and weaknesses of dif-ferent swarm optimization algorithms could be understoodUtilizing this multiple phase strategy with other swarmalgorithms is our future work

Data Availability

(e data and codes used to support the findings of this studyhave been deposited in the GitHub repository

Conflicts of Interest

(e authors declare that they have no conflicts of interest

Acknowledgments

(is work was partially supported by the Scientific ResearchFoundation of Shaanxi Railway Institute (Grant no KY2016-29) and the Natural Science Foundation of China (Grant no61703256)

References

[1] R Eberhart and J Kennedy ldquoA new optimizer using particleswarm theoryrdquo in Proceedings of the Sixth InternationalSymposium onMicro Machine and Human Science pp 39ndash43Nagoya Japan October 1995

[2] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquo inProceedings of IEEE International Conference on NeuralNetworks (ICNN) pp 1942ndash1948 Perth WA AustraliaNovember 1995

[3] R Eberhart and Y Shi ldquoParticle swarm optimization de-velopments applications and resourcesrdquo in Proceedings of the2001 Congress on Evolutionary Computation (CEC 2001)pp 81ndash86 Seoul Republic of Korea May 2001

[4] Y Shi ldquoAn optimization algorithm based on brainstormingprocessrdquo International Journal of Swarm Intelligence Re-search vol 2 no 4 pp 35ndash62 2011

[5] S Cheng Q Qin J Chen and Y Shi ldquoBrain storm opti-mization algorithm a reviewrdquo Artificial Intelligence Reviewvol 46 no 4 pp 445ndash458 2016

[6] L Ma S Cheng and Y Shi ldquoEnhancing learning efficiency ofbrain storm optimization via orthogonal learning designrdquoIEEE Transactions on Systems Man and Cybernetics Systems2020

[7] S Cheng X Lei H Lu Y Zhang and Y Shi ldquoGeneralizedpigeon-inspired optimization algorithmsrdquo Science ChinaInformation Sciences vol 62 2019

[8] Z-H Zhi-Hui Zhan J Jun Zhang Y Yun Li andH S-H Chung ldquoAdaptive particle swarm optimizationrdquoIEEE Transactions on Systems Man and Cybernetics Part B(Cybernetics) vol 39 no 6 pp 1362ndash1381 2009

[9] J Liu X Ma X Li M Liu T Shi and P Li ldquoRandomconvergence analysis of particle swarm optimization algo-rithm with time-varying attractorrdquo Swarm and EvolutionaryComputation vol 61 Article ID 100819 2021

[10] Q Qin S Cheng Q Zhang L Li and Y Shi ldquoParticle swarmoptimization with interswarm interactive learning strategyrdquoIEEE Transactions on Cybernetics vol 46 no 10 pp 2238ndash2251 2016

[11] X Xia L Gui F Yu et al ldquoTriple archives particle swarmoptimizationrdquo IEEE Transactions on Cybernetics vol 50no 12 pp 4862ndash4875 2020

[12] R Cheng and Y Jin ldquoA social learning particle swarm op-timization algorithm for scalable optimizationrdquo InformationSciences vol 291 pp 43ndash60 2015

[13] F Wang H Zhang and A Zhou ldquoA particle swarm opti-mization algorithm for mixed-variable optimization prob-lemsrdquo Swarm and Evolutionary Computation vol 60 ArticleID 100808 2021

[14] H Han W Lu L Zhang and J Qiao ldquoAdaptive gradientmultiobjective particle swarm optimizationrdquo IEEE Transac-tions on Cybernetics vol 48 no 11 pp 3067ndash3079 2018

[15] X-F Liu Z-H Zhan Y Gao J Zhang S Kwong andJ Zhang ldquoCoevolutionary particle swarm optimization withbottleneck objective learning strategy for many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 23 no 4 pp 587ndash602 2019

[16] S Cheng X Lei J Chen J Feng and Y Shi ldquoNormalizedranking based particle swarm optimizer for many objectiveoptimizationrdquo in Simulated Evolution and Learning (SEAL2017)) Y Shi K C Tan M Zhang et al Eds SpringerInternational Publishing Newyork NY USA pp 347ndash3572017

[17] S Cheng H Lu X Lei and Y Shi ldquoA quarter century ofparticle swarm optimizationrdquo Complex amp Intelligent Systemsvol 4 no 3 pp 227ndash239 2018

[18] S Cheng L Ma H Lu X Lei and Y Shi ldquoEvolutionarycomputation for solving search-based data analytics prob-lemsrdquo Artificial Intelligence Review vol 54 2021 In press

[19] J H Holland ldquoBuilding blocks cohort genetic algorithmsand hyperplane-defined functionsrdquo Evolutionary Computa-tion vol 8 no 4 pp 373ndash391 2000

[20] A P Piotrowski J J Napiorkowski and A E PiotrowskaldquoPopulation size in particle swarm optimizationrdquo Swarm andEvolutionary Computation vol 58 Article ID 100718 2020

[21] J Kennedy R Eberhart and Y Shi Swarm IntelligenceMorgan Kaufmann Publisher Burlington MA USA 2001

12 Discrete Dynamics in Nature and Society

[22] R Mendes J Kennedy and J Neves ldquo(e fully informedparticle swarm simpler maybe betterrdquo IEEE Transactions onEvolutionary Computation vol 8 no 3 pp 204ndash210 2004

[23] J Arabas and K Opara ldquoPopulation diversity of nonelitistevolutionary algorithms in the exploration phaserdquo IEEETransactions on Evolutionary Computation vol 24 no 6pp 1050ndash1062 2020

[24] D Bratton and J Kennedy ldquoDefining a standard for particleswarm optimizationrdquo in Proceedings of the 2007 IEEE SwarmIntelligence Symposium (SIS 2007) pp 120ndash127 HonoluluHI USA April 2007

[25] M Clerc Standard Particle Swarm Optimisation from 2006 to2011 Independent Consultant Jersey City NJ USA 2012

[26] Y Shi and R Eberhart ldquoEmpirical study of particle swarmoptimizationrdquo in Proceedings of the 1999 Congress on Evo-lutionary Computation (CEC1999) pp 1945ndash1950 Wash-ington DC USA July 1999

Discrete Dynamics in Nature and Society 13

Page 7: ParticleSwarmOptimizationAlgorithmwithMultiplePhasesfor ... · 2021. 4. 16. · Received 16 April 2021; Revised 1 June 2021; Accepted 18 June 2021; Published 28 June 2021 ... (SPSO-BK)

Table 3 Experimental results for functions f6 sim f11 with 50 dimensions

Algorithm Best Mean StD Best Mean StDf6 Schwefel f7 Rastrigin

SPSO-BK Star 66038424 97866971 1359137 4977579 5231492 13619Ring 161004628 174373017 673960 4977580 5306957 14832

SPSO-C Star 76846670 98983600 12182453 4868134 5205225 15146Ring 162990367 175234040 548640 4997950 5301292 14646

CPSO Star 69343602 87618199 854741 4957680 5296818 22117Ring 158750546 175921651 661051 5788806 6311157 16836

PSOFP 61447441 84722288 1171500 5047226 5282633 13507PSODP 59501707 83697773 1038811 4937781 5252104 20126

f8 noncontinuous Rastrigin f9 Ackley

SPSO-BK Star 186 22458 22039 120 121773 0649Ring 2320000 2627717 14463 1200000 1200213 0149

SPSO-C Star 227 269985 21175 1218030 1243168 1090Ring 2390000 2771382 18542 1200000 1203807 0558

CPSO Star 211 2406001 14448 120 120 715E-14Ring 2792935 3215721 17248 1227702 1242613 0529

PSOFP 210 2373073 19334 120 1200499 0248PSODP 200 2410076 20067 120 1200484 0237

f10 Griewank f11 generalized penalized

SPSO-BK Star minus450 minus4499769 0033 minus330 minus3297654 0330Ring minus4499999 minus4499997 0002 minus3299999 minus3299987 0008

SPSO-C Star minus4499999 minus4498900 0228 minus3299999 minus3290989 1123Ring minus4499999 minus4499989 0004 minus3299999 minus3299912 0039

CPSO Star minus450 minus4499902 0012 minus330 minus3299975 0012Ring minus4482587 minus4452495 1087 minus3292803 minus3276535 0783

PSOFP minus450 minus4499874 0019 minus330 minus3299763 0051PSODP minus450 minus4499902 0017 minus330 minus3299763 0055

Table 4: Experimental results for functions f0–f5 with 100 dimensions.

                  ------ f0 parabolic ------         -- f1 Schwefel's P2.22 --
Algorithm         Best        Mean        StD        Best       Mean       StD
SPSO-BK (Star)    −450        −4499999    297E-10    −3299999   −3299999   190E-07
SPSO-BK (Ring)    −4499999    −4499999    272E-11    −3299999   −3299999   734E-11
SPSO-C (Star)     −4499999    −4499947    00369      −3299999   −3298556   04552
SPSO-C (Ring)     −4499999    −4499999    140E-11    −3299999   −3299999   309E-11
CPSO (Star)       −4499999    −4499999    213E-09    −3299999   −3295999   19595
CPSO (Ring)       −1496541    13444583    532677     −3258224   −3095554   75546
PSOFP             −4499999    −4499999    136E-07    −3299999   −3299999   136E-09
PSODP             −4499999    −4499999    334E-07    −3299999   −3299999   00006

                  -- f2 Schwefel's P1.2 --           ------ f3 step ------
Algorithm         Best        Mean        StD        Best      Mean      StD
SPSO-BK (Star)    5136355     16021760    3006812    208       37732     137165
SPSO-BK (Ring)    24261604    53553297    1565104    671       97824     174267
SPSO-C (Star)     5554292     29988775    3037685    1208      268348    755401
SPSO-C (Ring)     11881057    20751955    769974     1585      244846    531394
CPSO (Star)       51459671    108225242   3423207    180       1817      13
CPSO (Ring)       20782011    27917191    3168315    1791      250612    287989
PSOFP             13991752    39876253    5750961    182       19976     28642
PSODP             9897956     28613122    5655407    185       21536     115588

                  -- f4 quartic noise --             ------ f5 Rosenbrock ------
Algorithm         Best        Mean        StD        Best       Mean       StD
SPSO-BK (Star)    1200138     1200278     00110      −3970471   −3299329   41398
SPSO-BK (Ring)    1201851     1203265     01013      −3730928   −3464626   25371
SPSO-C (Star)     1205708     1213554     04501      −3972747   −3043235   405292
SPSO-C (Ring)     1204383     1209911     03064      −3719901   −3390829   28481
CPSO (Star)       1200379     1200658     0014       −3634721   −2948193   50003
CPSO (Ring)       1201338     1202321     0043       6069444    39078062   1468905
PSOFP             1200224     1200438     0014       −3802421   −2900060   47549
PSODP             1200219     1200518     0036       −3687163   −2995108   43518

Discrete Dynamics in Nature and Society 7

Table 5: Experimental results for functions f6–f11 with 100 dimensions.

                  ------ f6 Schwefel ------          ------ f7 Rastrigin ------
Algorithm         Best        Mean        StD        Best      Mean       StD
SPSO-BK (Star)    16056678    20638485    2182500    5693949   6117217    25514
SPSO-BK (Ring)    34073736    37171637    944548     5605540   6272546    25699
SPSO-C (Star)     16879295    20909081    2451469    5524806   5986068    23697
SPSO-C (Ring)     34864588    37164686    841829     5694197   6240863    22440
CPSO (Star)       15445328    19374588    1691740    5853142   6391086    30012
CPSO (Ring)       34235521    37141853    835129     9435140   10216837   37831
PSOFP             14734074    18723674    1738027    5833243   6489117    268680
PSODP             14094634    18899269    1994712    5748307   6399075    28179

                  -- f8 noncontinuous Rastrigin --   ------ f9 Ackley ------
Algorithm         Best        Mean        StD        Best      Mean      StD
SPSO-BK (Star)    2120000     2751500     306297     1225793   1235817   07107
SPSO-BK (Ring)    3072505     3583455     27633      1200000   1218010   04814
SPSO-C (Star)     2880000     3744500     411362     1263445   1283848   11134
SPSO-C (Ring)     3322500     3957141     341028     1200000   1218958   05702
CPSO (Star)       3220000     3751452     31873      1200000   1200000   00001
CPSO (Ring)       5805314     6606651     34490      1230676   1264343   07101
PSOFP             357         3607510     38267      1200000   1216460   0485
PSODP             280         3551481     29234      1200000   1216061   0562

                  ------ f10 Griewank ------         -- f11 generalized penalized --
Algorithm         Best        Mean        StD        Best       Mean       StD
SPSO-BK (Star)    −450        −4498481    02569      −330       −3296599   03758
SPSO-BK (Ring)    −4499999    −4499994    00022      −3299999   −3299880   00334
SPSO-C (Star)     −4499999    −4494670    06696      −3299999   −3292211   08798
SPSO-C (Ring)     −4499999    −4499995    00024      −3299999   −3299526   00640
CPSO (Star)       −4499999    −4499933    00092      −3299999   −3299707   00432
CPSO (Ring)       −4370433    −4285081    20322      −3204856   −3120457   42756
PSOFP             −4499999    −4499817    0032       −3299999   −3299069   0151
PSODP             −4499999    −4499828    0031       −3299999   −3299224   0108

[Figure 6, panels (a)–(c): convergence curves of the error value (log10 scale) over 10,000 iterations for f0 parabolic, f1 Schwefel's P2.22, and f2 Schwefel's P1.2 with 50 dimensions; curves shown for SPSO-BK (star, ring), SPSO-C (star, ring), CPSO (star, ring), PSOFP, and PSODP.]

The PSOFP and PSODP algorithms run slower than the traditional PSO algorithms on the tested benchmark functions. The reason is that some search resources are allocated to the changes of search phases: a PSO variant with multiple phases must decide whether to switch phase and when the switch should happen.
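The switching decision described above can be made concrete with a minimal sketch. This is our own illustrative code, not the paper's exact procedure: a small PSO on the sphere function that, in the spirit of the dynamic-phase idea, moves to the next parameter "phase" whenever the global best stalls for a fixed number of iterations. All names, parameter values, and the stall rule are assumptions for illustration.

```python
import random

# Two hypothetical parameter phases: one exploration-leaning, one
# exploitation-leaning. The paper's actual phase definitions may differ.
PHASES = [
    {"w": 0.9, "c1": 2.0, "c2": 1.0},  # exploration-leaning settings
    {"w": 0.4, "c1": 1.0, "c2": 2.0},  # exploitation-leaning settings
]

def sphere(x):
    return sum(v * v for v in x)

def pso_dynamic_phase(dim=5, swarm=20, iters=300, stall_limit=15, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [list(x) for x in xs]
    pval = [sphere(x) for x in xs]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    phase, stall = 0, 0
    for _ in range(iters):
        p, improved = PHASES[phase], False
        for i in range(swarm):
            for d in range(dim):
                vs[i][d] = (p["w"] * vs[i][d]
                            + p["c1"] * rng.random() * (pbest[i][d] - xs[i][d])
                            + p["c2"] * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = sphere(xs[i])
            if f < pval[i]:
                pval[i], pbest[i] = f, list(xs[i])
                if f < gval:
                    gval, gbest, improved = f, list(xs[i]), True
        # The switching decision: change phase after `stall_limit`
        # consecutive iterations without improving the global best.
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:
            phase, stall = (phase + 1) % len(PHASES), 0
    return gval

print(pso_dynamic_phase())
```

The stall check itself is the extra bookkeeping mentioned above: it costs a comparison per iteration plus the occasional parameter swap, which is where the additional running time of the multi-phase variants comes from.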

[Figure 6, panels (d)–(l): convergence curves of the error value (log10 scale) over 10,000 iterations for f3 step, f4 quartic noise, f5 Rosenbrock, f6 Schwefel, f7 Rastrigin, f8 noncontinuous Rastrigin, f9 Ackley, f10 Griewank, and f11 generalized penalized with 50 dimensions; curves shown for SPSO-BK (star, ring), SPSO-C (star, ring), CPSO (star, ring), PSOFP, and PSODP.]

Figure 6: Convergence curves of error value for functions with 50 dimensions: (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.
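The curves in Figures 6 and 7 plot error on a log10 scale. As a hedged sketch of how such a trace could be produced (the function and names are ours; the optimum values, e.g. −450 for f0 in this shifted suite, are an assumption read off the "Best" columns of the tables):

```python
import math

# Convert a best-so-far fitness trace into log10 error values, the
# quantity plotted in the convergence figures. The floor keeps log10
# defined when the optimum is reached exactly.
def log10_error_trace(best_so_far, optimum, floor=1e-16):
    return [math.log10(max(f - optimum, floor)) for f in best_so_far]

trace = log10_error_trace([-300.0, -420.0, -449.0, -449.99], optimum=-450.0)
print([round(v, 3) for v in trace])  # a monotonically decreasing log-error
```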


[Figure 7, panels (a)–(i): convergence curves of the error value (log10 scale) over 10,000 iterations for f0 parabolic, f1 Schwefel's P2.22, f2 Schwefel's P1.2, f3 step, f4 quartic noise, f5 Rosenbrock, f6 Schwefel, f7 Rastrigin, and f8 noncontinuous Rastrigin with 100 dimensions; curves shown for SPSO-BK (star, ring), SPSO-C (star, ring), CPSO (star, ring), PSOFP, and PSODP.]

[Figure 7, panels (j)–(l): convergence curves of the error value (log10 scale) over 10,000 iterations for f9 Ackley, f10 Griewank, and f11 generalized penalized with 100 dimensions; curves shown for SPSO-BK (star, ring), SPSO-C (star, ring), CPSO (star, ring), PSOFP, and PSODP.]

Figure 7: Convergence curves of error value for functions with 100 dimensions: (a) f0, (b) f1, (c) f2, (d) f3, (e) f4, (f) f5, (g) f6, (h) f7, (i) f8, (j) f9, (k) f10, (l) f11.

Table 6: Computational time (seconds) spent in 50 runs for the functions with 50 dimensions.

Function   SPSO-BK(Star)  SPSO-BK(Ring)  SPSO-C(Star)  SPSO-C(Ring)  CPSO(Star)  CPSO(Ring)  PSOFP     PSODP
f0         1414753        11949538       1542324       13938880      1273165     7790309     6372494   4750059
f1         1496900        9773025        1563205       12035832      1382238     7332784     6681761   4968441
f2         2482647        6639135        2310108       3269424       2269095     7721789     2913018   3078365
f3         1478375        16921071       1521913       17374540      1382383     7162581     6652441   5444709
f4         1925499        7772614        1919068       8520861       1960920     4139658     4326319   4405604
f5         1318303        2036976        1373674       1463761       1378663     7669009     2520464   1975675
f6         2470545        5990113        2623676       6033503       2290871     4877298     2256345   2412337
f7         2113601        5688381        2121822       5068316       1886031     7616392     7467872   5938517
f8         2100288        11884555       2231706       11809223      2136926     7658513     6080168   5191364
f9         2136390        8375839        2032908       11794232      1820255     7555640     7641055   5421310
f10        2065978        12449973       2116536       14096724      1940266     8291956     7211344   5502234
f11        3416110        12300763       3401308       12956684      3275602     9198441     8640743   6978836

Table 7: Computational time (seconds) spent in 50 runs for the functions with 100 dimensions.

Function   SPSO-BK(Star)  SPSO-BK(Ring)  SPSO-C(Star)  SPSO-C(Ring)  CPSO(Star)  CPSO(Ring)  PSOFP      PSODP
f0         2439461        13788133       2387528       19368303      2288345     14618672    6290300    7872999
f1         2631645        5947633        2784290       14200705      2558799     13713888    2605299    2597703
f2         6224394        15858146       6286117       9271755       6211243     16247289    8695341    8725241
f3         2780287        32980839       2920504       33031495      2611934     13673627    13038592   9797108
f4         3580799        14603739       3560119       15913160      3603514     7116922     7213646    6463844
f5         2365184        3113102        2482506       2493193       2374182     14567880    2515607    2592736
f6         4488993        11025489       4660795       11042448      4348893     8740973     4406255    4224402
f7         3415798        6637398        3443478       7308557       3370671     13815354    8914041    8854871
f8         3687186        13082763       4055594       11594827      3806574     13896927    10236844   8863957
f9         3451321        7325952        3582724       12292192      3369159     14677248    7434061    8147588
f10        3779289        15561002       3928346       21955736      3782778     16415258    8235398    8805851
f11        6193468        9502162        6356353       12206493      6482199     19206593    8620097    9624388


5. Conclusions

Swarm intelligence algorithms perform in a varied manner on the same optimization problem. Even for one kind of optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to obtain the connection between an algorithm and a problem. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building blocks thesis, a PSO algorithm with a multiple-phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phase (PSOFP) algorithm and the PSO with dynamic phase (PSODP) algorithm, were tested on the 12 benchmark functions. The experimental results showed that the combination of different phases could enhance the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods. However, the number of phases could be increased, or the type of phase could be changed, in other PSO algorithms with multiple phases. Beyond the PSO algorithms, the building blocks thesis could be utilized in other swarm optimization algorithms. Based on the analysis of different components, the strengths and weaknesses of different swarm optimization algorithms could be understood. Utilizing this multiple-phase strategy with other swarm algorithms is our future work.
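The generalization suggested above, increasing the number of phases or changing their type, amounts to treating a phase as a pluggable block of search settings. A hedged sketch (all names are ours, not the paper's): a fixed-period scheduler in the spirit of PSOFP, where registering a third phase requires no change to the switching logic.

```python
from dataclasses import dataclass
from typing import List

# A "phase" is just a named block of PSO parameter settings; any swarm
# algorithm with analogous settings could reuse the same scheduler.
@dataclass
class Phase:
    name: str
    w: float   # inertia weight
    c1: float  # cognitive coefficient
    c2: float  # social coefficient

class FixedPhaseScheduler:
    """Cycle to the next registered phase every `period` iterations."""
    def __init__(self, phases: List[Phase], period: int):
        self.phases, self.period = phases, period

    def phase_at(self, iteration: int) -> Phase:
        return self.phases[(iteration // self.period) % len(self.phases)]

sched = FixedPhaseScheduler(
    [Phase("explore", 0.9, 2.0, 1.0),
     Phase("exploit", 0.4, 1.0, 2.0),
     Phase("balanced", 0.7, 1.5, 1.5)],  # a third phase drops in freely
    period=100,
)
print(sched.phase_at(0).name, sched.phase_at(150).name, sched.phase_at(250).name)
```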

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942–1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81–86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362–1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238–2251, 2016.

[11] X. Xia, L. Gui, F. Yu, et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862–4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43–60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang, et al., Eds., Springer International Publishing, New York, NY, USA, pp. 347–357, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227–239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373–391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050–1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120–127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945–1950, Washington, DC, USA, July 1999.


Page 8: ParticleSwarmOptimizationAlgorithmwithMultiplePhasesfor ... · 2021. 4. 16. · Received 16 April 2021; Revised 1 June 2021; Accepted 18 June 2021; Published 28 June 2021 ... (SPSO-BK)

Table 5 Experimental results for functions f6 sim f11 with 100 dimensions

Algorithm Best Mean StD Best Mean StDf6 Schwefel f7 Rastrigin

SPSO-BK Star 16056678 20638485 2182500 5693949 6117217 25514Ring 34073736 37171637 944548 5605540 6272546 25699

SPSO-C Star 16879295 20909081 2451469 5524806 5986068 23697Ring 34864588 37164686 841829 5694197 6240863 22440

CPSO Star 15445328 19374588 1691740 5853142 6391086 30012Ring 34235521 37141853 835129 9435140 10216837 37831

PSOFP 14734074 18723674 1738027 5833243 6489117 268680PSODP 14094634 18899269 1994712 5748307 6399075 28179

f8 noncontinuous Rastrigin f9 Ackley

SPSO-BK Star 2120000 2751500 306297 1225793 1235817 07107Ring 3072505 3583455 27633 1200000 1218010 04814

SPSO-C Star 2880000 3744500 411362 1263445 1283848 11134Ring 3322500 3957141 341028 1200000 1218958 05702

CPSO Star 3220000 3751452 31873 1200000 1200000 00001Ring 5805314 6606651 34490 1230676 1264343 07101

PSOFP 357 3607510 38267 1200000 1216460 0485PSODP 280 3551481 29234 1200000 1216061 0562

f10 Griewank f11 generalized penalized

SPSO-BK Star minus450 minus4498481 02569 minus330 minus3296599 03758Ring minus4499999 minus4499994 00022 minus3299999 minus3299880 00334

SPSO-C Star minus4499999 minus4494670 06696 minus3299999 minus3292211 08798Ring minus4499999 minus4499995 00024 minus3299999 minus3299526 00640

CPSO Star minus4499999 minus4499933 00092 minus3299999 minus3299707 00432Ring minus4370433 minus4285081 20322 minus3204856 minus3120457 42756

PSOFP minus4499999 minus4499817 0032 minus3299999 minus3299069 0151PSODP minus4499999 minus4499828 0031 minus3299999 minus3299224 0108

f0 parabolic

1000080002000 600040000

Iteration

10ndash15

10ndash10

10ndash5

100

105

1010

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(a)

f1 Schwefels P222

2000 4000 6000 8000 100000

Iteration

10ndash20

10ndash10

100

1010

1020

1030

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(b)

f2 Schwefels P12

10ndash1

100

101

102

103

104

105

106

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(c)

Figure 6 Continued

8 Discrete Dynamics in Nature and Society

(e PSOFP and PSODP algorithms perform slower thanthe traditional PSO algorithm on the test benchmarkfunctions (e reason is that some search resources are

allocated on the changes of search phases(e variant of PSOwith multiple phases needs to determine whether to switchphase and the time of switching phase

f3 step

2000 4000 6000 8000 100000

Iteration

10ndash2

100

102

104

106

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(d)

f4 quartic noise

2000 4000 6000 8000 100000

Iteration

10ndash3

10ndash2

10ndash1

100

101

102

103

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(e)

f5 Rosenbrock

2000 4000 6000 8000 100000

Iteration

100

101

102

103

104

105

106

107

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(f )f6 Schwefel

2000 4000 6000 8000 100000

Iteration

103

104

105

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(g)

f7 Rastrigin

101

102

103

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(h)

f8 noncontinuous Rastrigin

101

102

103

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(i)f9 ackley

10ndash15

10ndash10

10ndash5

100

105

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(j)

f10 Griewank

2000 4000 6000 8000 100000

Iteration

10ndash4

10ndash2

100

102

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(k)

f11 generalized penalized

2000 4000 6000 8000 100000

Iteration

10ndash4

10ndash2

100

102

104

106

108

1010

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(l)

Figure 6 Convergence curves of error value for functions with 50 dimensions (a) f0 (b) f1 (c) f2 (d) f3 (e) f4 (f) f5 (g) f6 (h) f7 (i) f8(j) f9 (k) f10 (l) f11

Discrete Dynamics in Nature and Society 9

f0 parabolic

2000 4000 6000 8000 100000

Iteration

10ndash15

10ndash10

10ndash5

100

105

1010

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(a)

f1 Schwefels P222

10ndash20

100

1020

1040

1060

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(b)

f2 Schwefels P12

2000 4000 6000 8000 100000

Iteration

103

104

105

106

107

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(c)f4 quartic noise

2000 4000 6000 8000 100000

Iteration

10ndash2

10ndash1

100

101

102

103

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(d)

f4 quartic noise

2000 4000 6000 8000 100000

Iteration

10ndash2

10ndash1

100

101

102

103

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(e)

f5 Rosenbrock

102

103

104

105

106

107

108

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(f )f6 Schwefeltimes104

2000 4000 6000 8000 100000

Iteration

2

22242628

3323436

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(g)

f7 Rastrigin

102

103

104

Erro

r (lo

g (1

0))

2000 4000 6000 8000 100000

Iteration

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(h)

f8 noncontinuous Rastrigin

2000 4000 6000 8000 100000

Iteration

101

102

103

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(i)

Figure 7 Continued

10 Discrete Dynamics in Nature and Society

f9 ackley

2000 4000 6000 8000 100000

Iteration

10ndash5

10ndash4

10ndash3

10ndash2

10ndash1

100

101

102

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(j)

f10 Griewank

2000 4000 6000 8000 100000

Iteration

10ndash4

10ndash2

100

102

104

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(k)

f11 generalized penalized

2000 4000 6000 8000 100000

Iteration

10ndash2

100

102

104

106

108

1010

Erro

r (lo

g (1

0))

SPSOBKSSPSOBKRSPSOCSSPSOCR

CPSOSCPSORPSOFPPSODP

(l)

Figure 7 Convergence curves of error value for functions with 100 dimensions (a) f0 (b) f1 (c) f2 (d) f3 (e) f4 (f ) f5 (g) f6 (h) f7(i) f8 (j) f9 (k) f10 (l) f11

Table 6 Computational time (seconds) spent in 50 runs for the function with 50 dimensions

FunctionSPSO-BK SPSO-C CPSO

PSOFP PSODPStar Ring Star Ring Star Ring

f0 1414753 11949538 1542324 13938880 1273165 7790309 6372494 4750059f1 1496900 9773025 1563205 12035832 1382238 7332784 6681761 4968441f2 2482647 6639135 2310108 3269424 2269095 7721789 2913018 3078365f3 1478375 16921071 1521913 17374540 1382383 7162581 6652441 5444709f4 1925499 7772614 1919068 8520861 1960920 4139658 4326319 4405604f5 1318303 2036976 1373674 1463761 1378663 7669009 2520464 1975675f6 2470545 5990113 2623676 6033503 2290871 4877298 2256345 2412337f7 2113601 5688381 2121822 5068316 1886031 7616392 7467872 5938517f8 2100288 11884555 2231706 11809223 2136926 7658513 6080168 5191364f9 2136390 8375839 2032908 11794232 1820255 7555640 7641055 5421310f10 2065978 12449973 2116536 14096724 1940266 8291956 7211344 5502234f11 3416110 12300763 3401308 12956684 3275602 9198441 8640743 6978836

Table 7 Computational time (seconds) spent in 50 runs for the function with 100 dimensions

FunctionSPSO-BK SPSO-C CPSO

PSOFP PSODPStar Ring Star Ring Star Ring

f0 2439461 13788133 2387528 19368303 2288345 14618672 6290300 7872999f1 2631645 5947633 2784290 14200705 2558799 13713888 2605299 2597703f2 6224394 15858146 6286117 9271755 6211243 16247289 8695341 8725241f3 2780287 32980839 2920504 33031495 2611934 13673627 13038592 9797108f4 3580799 14603739 3560119 15913160 3603514 7116922 7213646 6463844f5 2365184 3113102 2482506 2493193 2374182 14567880 2515607 2592736f6 4488993 11025489 4660795 11042448 4348893 8740973 4406255 4224402f7 3415798 6637398 3443478 7308557 3370671 13815354 8914041 8854871f8 3687186 13082763 4055594 11594827 3806574 13896927 10236844 8863957f9 3451321 7325952 3582724 12292192 3369159 14677248 7434061 8147588f10 3779289 15561002 3928346 21955736 3782778 16415258 8235398 8805851f11 6193468 9502162 6356353 12206493 6482199 19206593 8620097 9624388

Discrete Dynamics in Nature and Society 11

5 Conclusions

(e swarm intelligence algorithms perform in variedmanner for the same optimization problem Even for a kindof optimization algorithm the change of structure or pa-rameters may lead to different results It is very difficult ifnot impossible to obtain the connection between an algo-rithm and a problem (us choosing a proper setting for analgorithm before the search is vital for the optimizationprocess In this paper we attempt to combine the strengthsof different settings for the PSO algorithm Based on thebuilding blocks thesis a PSO algorithm with a multiplephase strategy was proposed in this paper to solve single-objective numerical optimization problems Two variants ofthe PSO algorithm which were termed as the PSO with fixedphase (PSOFP) algorithm and PSO with dynamic phase(PSODP) algorithm were tested on the 12 benchmarkfunctions (e experimental results showed that the com-bination of different phases could enhance the robustness ofthe PSO algorithm

(ere are two phases in the proposed PSOFP andPSODP methods However the number of phases could beincreased or the type of phase could be changed for otherPSO algorithms with multiple phases Besides the PSO al-gorithms the building block thesis could be utilized in otherswarm optimization algorithms Based on the analysis ofdifferent components the strength and weaknesses of dif-ferent swarm optimization algorithms could be understoodUtilizing this multiple phase strategy with other swarmalgorithms is our future work

Data Availability

(e data and codes used to support the findings of this studyhave been deposited in the GitHub repository

Conflicts of Interest

(e authors declare that they have no conflicts of interest

Acknowledgments

(is work was partially supported by the Scientific ResearchFoundation of Shaanxi Railway Institute (Grant no KY2016-29) and the Natural Science Foundation of China (Grant no61703256)

References

[1] R Eberhart and J Kennedy ldquoA new optimizer using particleswarm theoryrdquo in Proceedings of the Sixth InternationalSymposium onMicro Machine and Human Science pp 39ndash43Nagoya Japan October 1995

[2] J Kennedy and R Eberhart ldquoParticle swarm optimizationrdquo inProceedings of IEEE International Conference on NeuralNetworks (ICNN) pp 1942ndash1948 Perth WA AustraliaNovember 1995

[3] R Eberhart and Y Shi ldquoParticle swarm optimization de-velopments applications and resourcesrdquo in Proceedings of the2001 Congress on Evolutionary Computation (CEC 2001)pp 81ndash86 Seoul Republic of Korea May 2001

[4] Y Shi ldquoAn optimization algorithm based on brainstormingprocessrdquo International Journal of Swarm Intelligence Re-search vol 2 no 4 pp 35ndash62 2011

[5] S Cheng Q Qin J Chen and Y Shi ldquoBrain storm opti-mization algorithm a reviewrdquo Artificial Intelligence Reviewvol 46 no 4 pp 445ndash458 2016

[6] L Ma S Cheng and Y Shi ldquoEnhancing learning efficiency ofbrain storm optimization via orthogonal learning designrdquoIEEE Transactions on Systems Man and Cybernetics Systems2020

[7] S Cheng X Lei H Lu Y Zhang and Y Shi ldquoGeneralizedpigeon-inspired optimization algorithmsrdquo Science ChinaInformation Sciences vol 62 2019

[8] Z-H Zhi-Hui Zhan J Jun Zhang Y Yun Li andH S-H Chung ldquoAdaptive particle swarm optimizationrdquoIEEE Transactions on Systems Man and Cybernetics Part B(Cybernetics) vol 39 no 6 pp 1362ndash1381 2009

[9] J Liu X Ma X Li M Liu T Shi and P Li ldquoRandomconvergence analysis of particle swarm optimization algo-rithm with time-varying attractorrdquo Swarm and EvolutionaryComputation vol 61 Article ID 100819 2021

[10] Q Qin S Cheng Q Zhang L Li and Y Shi ldquoParticle swarmoptimization with interswarm interactive learning strategyrdquoIEEE Transactions on Cybernetics vol 46 no 10 pp 2238ndash2251 2016

[11] X Xia L Gui F Yu et al ldquoTriple archives particle swarmoptimizationrdquo IEEE Transactions on Cybernetics vol 50no 12 pp 4862ndash4875 2020

[12] R Cheng and Y Jin ldquoA social learning particle swarm op-timization algorithm for scalable optimizationrdquo InformationSciences vol 291 pp 43ndash60 2015

[13] F Wang H Zhang and A Zhou ldquoA particle swarm opti-mization algorithm for mixed-variable optimization prob-lemsrdquo Swarm and Evolutionary Computation vol 60 ArticleID 100808 2021

[14] H Han W Lu L Zhang and J Qiao ldquoAdaptive gradientmultiobjective particle swarm optimizationrdquo IEEE Transac-tions on Cybernetics vol 48 no 11 pp 3067ndash3079 2018

[15] X-F Liu Z-H Zhan Y Gao J Zhang S Kwong andJ Zhang ldquoCoevolutionary particle swarm optimization withbottleneck objective learning strategy for many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 23 no 4 pp 587ndash602 2019

[16] S Cheng X Lei J Chen J Feng and Y Shi ldquoNormalizedranking based particle swarm optimizer for many objectiveoptimizationrdquo in Simulated Evolution and Learning (SEAL2017)) Y Shi K C Tan M Zhang et al Eds SpringerInternational Publishing Newyork NY USA pp 347ndash3572017

[17] S Cheng H Lu X Lei and Y Shi ldquoA quarter century ofparticle swarm optimizationrdquo Complex amp Intelligent Systemsvol 4 no 3 pp 227ndash239 2018

[18] S Cheng L Ma H Lu X Lei and Y Shi ldquoEvolutionarycomputation for solving search-based data analytics prob-lemsrdquo Artificial Intelligence Review vol 54 2021 In press

[19] J H Holland ldquoBuilding blocks cohort genetic algorithmsand hyperplane-defined functionsrdquo Evolutionary Computa-tion vol 8 no 4 pp 373ndash391 2000

[20] A P Piotrowski J J Napiorkowski and A E PiotrowskaldquoPopulation size in particle swarm optimizationrdquo Swarm andEvolutionary Computation vol 58 Article ID 100718 2020

[21] J Kennedy R Eberhart and Y Shi Swarm IntelligenceMorgan Kaufmann Publisher Burlington MA USA 2001

12 Discrete Dynamics in Nature and Society


The PSOFP and PSODP algorithms run slower than the traditional PSO algorithms on the test benchmark functions. The reason is that some search resources are allocated to managing the changes of search phases: a PSO variant with multiple phases must decide both whether to switch phases and when to switch.
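The switching decision itself can be made cheaply. As an illustration only (the paper's exact trigger is not reproduced here; the window size and tolerance below are assumptions for the sketch), a stagnation-based rule switches phase when the best error has stopped improving:

```python
def should_switch(best_history, window=20, tol=1e-8):
    """Illustrative phase-switch trigger (an assumption, not the paper's
    exact rule): switch when the best error has improved by no more than
    `tol` over the last `window` iterations."""
    if len(best_history) <= window:
        return False  # not enough history collected yet
    return best_history[-window - 1] - best_history[-1] <= tol
```

A rule like this costs O(1) per iteration, but each switch discards the momentum of the current phase, which is one way the extra search resources mentioned above are consumed.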

[Figure 6, panels (d)–(l): convergence curves of the error value (log10 scale) over 10,000 iterations for f3 (step), f4 (quartic noise), f5 (Rosenbrock), f6 (Schwefel), f7 (Rastrigin), f8 (noncontinuous Rastrigin), f9 (Ackley), f10 (Griewank), and f11 (generalized penalized), comparing SPSO-BK-S, SPSO-BK-R, SPSO-C-S, SPSO-C-R, CPSO-S, CPSO-R, PSOFP, and PSODP.]

Figure 6: Convergence curves of the error value for functions with 50 dimensions: (a) f0; (b) f1; (c) f2; (d) f3; (e) f4; (f) f5; (g) f6; (h) f7; (i) f8; (j) f9; (k) f10; (l) f11.


[Figure 7, panels (a)–(l): convergence curves of the error value (log10 scale) over 10,000 iterations for f0 (parabolic), f1 (Schwefel's P2.22), f2 (Schwefel's P1.2), f3 (step), f4 (quartic noise), f5 (Rosenbrock), f6 (Schwefel), f7 (Rastrigin), f8 (noncontinuous Rastrigin), f9 (Ackley), f10 (Griewank), and f11 (generalized penalized), comparing the same eight algorithms as in Figure 6.]

Figure 7: Convergence curves of the error value for functions with 100 dimensions: (a) f0; (b) f1; (c) f2; (d) f3; (e) f4; (f) f5; (g) f6; (h) f7; (i) f8; (j) f9; (k) f10; (l) f11.

Table 6: Computational time (seconds) spent in 50 runs for the functions with 50 dimensions.

Function  SPSO-BK(Star)  SPSO-BK(Ring)  SPSO-C(Star)  SPSO-C(Ring)  CPSO(Star)  CPSO(Ring)  PSOFP    PSODP
f0        1414753        11949538       1542324       13938880      1273165     7790309     6372494  4750059
f1        1496900        9773025        1563205       12035832      1382238     7332784     6681761  4968441
f2        2482647        6639135        2310108       3269424       2269095     7721789     2913018  3078365
f3        1478375        16921071       1521913       17374540      1382383     7162581     6652441  5444709
f4        1925499        7772614        1919068       8520861       1960920     4139658     4326319  4405604
f5        1318303        2036976        1373674       1463761       1378663     7669009     2520464  1975675
f6        2470545        5990113        2623676       6033503       2290871     4877298     2256345  2412337
f7        2113601        5688381        2121822       5068316       1886031     7616392     7467872  5938517
f8        2100288        11884555       2231706       11809223      2136926     7658513     6080168  5191364
f9        2136390        8375839        2032908       11794232      1820255     7555640     7641055  5421310
f10       2065978        12449973       2116536       14096724      1940266     8291956     7211344  5502234
f11       3416110        12300763       3401308       12956684      3275602     9198441     8640743  6978836

Table 7: Computational time (seconds) spent in 50 runs for the functions with 100 dimensions.

Function  SPSO-BK(Star)  SPSO-BK(Ring)  SPSO-C(Star)  SPSO-C(Ring)  CPSO(Star)  CPSO(Ring)  PSOFP     PSODP
f0        2439461        13788133       2387528       19368303      2288345     14618672    6290300   7872999
f1        2631645        5947633        2784290       14200705      2558799     13713888    2605299   2597703
f2        6224394        15858146       6286117       9271755       6211243     16247289    8695341   8725241
f3        2780287        32980839       2920504       33031495      2611934     13673627    13038592  9797108
f4        3580799        14603739       3560119       15913160      3603514     7116922     7213646   6463844
f5        2365184        3113102        2482506       2493193       2374182     14567880    2515607   2592736
f6        4488993        11025489       4660795       11042448      4348893     8740973     4406255   4224402
f7        3415798        6637398        3443478       7308557       3370671     13815354    8914041   8854871
f8        3687186        13082763       4055594       11594827      3806574     13896927    10236844  8863957
f9        3451321        7325952        3582724       12292192      3369159     14677248    7434061   8147588
f10       3779289        15561002       3928346       21955736      3782778     16415258    8235398   8805851
f11       6193468        9502162        6356353       12206493      6482199     19206593    8620097   9624388


5. Conclusions

Swarm intelligence algorithms perform differently on the same optimization problem, and even for a single optimization algorithm, a change of structure or parameters may lead to different results. It is very difficult, if not impossible, to characterize the connection between an algorithm and a problem in advance; thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building block thesis, a PSO algorithm with a multiple-phase strategy was proposed to solve single-objective numerical optimization problems. Two variants, termed the PSO with fixed phase (PSOFP) algorithm and the PSO with dynamic phase (PSODP) algorithm, were tested on 12 benchmark functions. The experimental results showed that the combination of different phases enhances the robustness of the PSO algorithm.
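As a minimal sketch of the fixed-phase idea, the loop below alternates between a star-topology phase (every particle follows the global best) and a ring-topology phase (each particle follows its neighborhood best) at a fixed interval. The swarm size, coefficients, phase length, and test function are illustrative assumptions, not the settings used in the experiments reported above:

```python
import random

def sphere(x):
    """Simple unimodal test function: f(x) = sum(x_i^2), minimum 0 at origin."""
    return sum(v * v for v in x)

def psofp(f, dim=10, swarm=20, iters=300, phase_len=50, seed=0):
    """Fixed-phase PSO sketch: alternate star and ring phases every
    `phase_len` iterations. Coefficients are assumed, constriction-style."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]          # personal best positions
    pfit = [f(p) for p in pos]           # personal best fitness values
    w, c1, c2 = 0.72, 1.49, 1.49
    for t in range(iters):
        star_phase = (t // phase_len) % 2 == 0
        for i in range(swarm):
            if star_phase:
                # star topology: attractor is the swarm-wide best
                g = min(range(swarm), key=lambda j: pfit[j])
            else:
                # ring topology: attractor is the best of {left, self, right}
                neigh = [(i - 1) % swarm, i, (i + 1) % swarm]
                g = min(neigh, key=lambda j: pfit[j])
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (pbest[g][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = f(pos[i])
            if fit < pfit[i]:
                pfit[i], pbest[i] = fit, pos[i][:]
    return min(pfit)
```

Because every particle keeps its personal best across a switch, changing the phase only changes which neighborhood best attracts each particle, so no search history is lost at the boundary between phases.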

There are two phases in the proposed PSOFP and PSODP methods; however, the number of phases could be increased, or the type of each phase could be changed, in other PSO algorithms with multiple phases. Beyond the PSO algorithms, the building block thesis could also be utilized in other swarm optimization algorithms: by analyzing their different components, the strengths and weaknesses of different swarm optimization algorithms could be better understood. Utilizing this multiple-phase strategy with other swarm algorithms is our future work.

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942–1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81–86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362–1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238–2251, 2016.

[11] X. Xia, L. Gui, F. Yu, et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862–4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43–60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang, et al., Eds., Springer International Publishing, New York, NY, USA, pp. 347–357, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227–239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373–391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050–1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120–127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945–1950, Washington, DC, USA, July 1999.


Page 12: ParticleSwarmOptimizationAlgorithmwithMultiplePhasesfor ... · 2021. 4. 16. · Received 16 April 2021; Revised 1 June 2021; Accepted 18 June 2021; Published 28 June 2021 ... (SPSO-BK)

5. Conclusions

Swarm intelligence algorithms perform differently on the same optimization problem. Even within a single kind of algorithm, a change of structure or parameters may lead to different results, and it is very difficult, if not impossible, to establish the connection between an algorithm and a problem in advance. Thus, choosing a proper setting for an algorithm before the search is vital for the optimization process. In this paper, we attempted to combine the strengths of different settings of the PSO algorithm. Based on the building-block thesis, a PSO algorithm with a multiple-phase strategy was proposed to solve single-objective numerical optimization problems. Two variants of the PSO algorithm, termed the PSO with fixed phase (PSOFP) algorithm and the PSO with dynamic phase (PSODP) algorithm, were tested on 12 benchmark functions. The experimental results showed that the combination of different phases enhances the robustness of the PSO algorithm.

There are two phases in the proposed PSOFP and PSODP methods; however, the number of phases could be increased, or the type of phase could be changed, in other PSO algorithms with multiple phases. Beyond PSO, the building-block thesis could be applied to other swarm optimization algorithms: by analyzing their different components, the strengths and weaknesses of these algorithms could be better understood. Applying this multiple-phase strategy to other swarm algorithms is our future work.
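The fixed-phase idea can be illustrated as a standard inertia-weight PSO whose parameter triple (w, c1, c2) switches between an exploratory and an exploitative setting on a fixed schedule. This is a minimal sketch, not the authors' implementation: the two phase settings, the phase length, the search bounds, and the sphere test function are all assumptions chosen for demonstration.

```python
import numpy as np

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), with minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def pso_fixed_phases(f, dim=10, n_particles=30, n_iter=200, phase_len=50, seed=0):
    """Minimal PSO with a fixed-length phase schedule (PSOFP-style sketch).

    Every `phase_len` iterations, the swarm switches between an
    exploration-leaning and an exploitation-leaning (w, c1, c2) setting.
    """
    rng = np.random.default_rng(seed)
    # Assumed phase settings: (inertia w, cognitive c1, social c2).
    phases = [(0.9, 2.0, 1.0),   # exploration-leaning phase
              (0.4, 1.0, 2.0)]   # exploitation-leaning phase
    lo, hi = -5.0, 5.0           # assumed search bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g_idx = int(np.argmin(pbest_val))
    gbest, gbest_val = pbest[g_idx].copy(), float(pbest_val[g_idx])

    for t in range(n_iter):
        # Pick the phase for this iteration from the fixed schedule.
        w, c1, c2 = phases[(t // phase_len) % len(phases)]
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Classical velocity and position updates.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # Update personal and global bests.
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g_idx = int(np.argmin(pbest_val))
        if pbest_val[g_idx] < gbest_val:
            gbest, gbest_val = pbest[g_idx].copy(), float(pbest_val[g_idx])
    return gbest, gbest_val

best, best_val = pso_fixed_phases(sphere)
print(f"best value found: {best_val:.6f}")
```

A dynamic-phase (PSODP-style) variant would replace the fixed `t // phase_len` schedule with a switching rule driven by the search state, for example stagnation of the global best.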

Data Availability

The data and codes used to support the findings of this study have been deposited in the GitHub repository.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shaanxi Railway Institute (Grant no. KY2016-29) and the Natural Science Foundation of China (Grant no. 61703256).

References

[1] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, October 1995.

[2] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 1942–1948, Perth, WA, Australia, November 1995.

[3] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 81–86, Seoul, Republic of Korea, May 2001.

[4] Y. Shi, "An optimization algorithm based on brainstorming process," International Journal of Swarm Intelligence Research, vol. 2, no. 4, pp. 35–62, 2011.

[5] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.

[6] L. Ma, S. Cheng, and Y. Shi, "Enhancing learning efficiency of brain storm optimization via orthogonal learning design," IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2020.

[7] S. Cheng, X. Lei, H. Lu, Y. Zhang, and Y. Shi, "Generalized pigeon-inspired optimization algorithms," Science China Information Sciences, vol. 62, 2019.

[8] Z.-H. Zhan, J. Zhang, Y. Li, and H. S.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362–1381, 2009.

[9] J. Liu, X. Ma, X. Li, M. Liu, T. Shi, and P. Li, "Random convergence analysis of particle swarm optimization algorithm with time-varying attractor," Swarm and Evolutionary Computation, vol. 61, Article ID 100819, 2021.

[10] Q. Qin, S. Cheng, Q. Zhang, L. Li, and Y. Shi, "Particle swarm optimization with interswarm interactive learning strategy," IEEE Transactions on Cybernetics, vol. 46, no. 10, pp. 2238–2251, 2016.

[11] X. Xia, L. Gui, F. Yu et al., "Triple archives particle swarm optimization," IEEE Transactions on Cybernetics, vol. 50, no. 12, pp. 4862–4875, 2020.

[12] R. Cheng and Y. Jin, "A social learning particle swarm optimization algorithm for scalable optimization," Information Sciences, vol. 291, pp. 43–60, 2015.

[13] F. Wang, H. Zhang, and A. Zhou, "A particle swarm optimization algorithm for mixed-variable optimization problems," Swarm and Evolutionary Computation, vol. 60, Article ID 100808, 2021.

[14] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[15] X.-F. Liu, Z.-H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[16] S. Cheng, X. Lei, J. Chen, J. Feng, and Y. Shi, "Normalized ranking based particle swarm optimizer for many objective optimization," in Simulated Evolution and Learning (SEAL 2017), Y. Shi, K. C. Tan, M. Zhang et al., Eds., pp. 347–357, Springer International Publishing, New York, NY, USA, 2017.

[17] S. Cheng, H. Lu, X. Lei, and Y. Shi, "A quarter century of particle swarm optimization," Complex & Intelligent Systems, vol. 4, no. 3, pp. 227–239, 2018.

[18] S. Cheng, L. Ma, H. Lu, X. Lei, and Y. Shi, "Evolutionary computation for solving search-based data analytics problems," Artificial Intelligence Review, vol. 54, 2021, in press.

[19] J. H. Holland, "Building blocks, cohort genetic algorithms, and hyperplane-defined functions," Evolutionary Computation, vol. 8, no. 4, pp. 373–391, 2000.

[20] A. P. Piotrowski, J. J. Napiorkowski, and A. E. Piotrowska, "Population size in particle swarm optimization," Swarm and Evolutionary Computation, vol. 58, Article ID 100718, 2020.

[21] J. Kennedy, R. Eberhart, and Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, Burlington, MA, USA, 2001.

12 Discrete Dynamics in Nature and Society

[22] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204–210, 2004.

[23] J. Arabas and K. Opara, "Population diversity of nonelitist evolutionary algorithms in the exploration phase," IEEE Transactions on Evolutionary Computation, vol. 24, no. 6, pp. 1050–1062, 2020.

[24] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS 2007), pp. 120–127, Honolulu, HI, USA, April 2007.

[25] M. Clerc, Standard Particle Swarm Optimisation from 2006 to 2011, Independent Consultant, Jersey City, NJ, USA, 2012.

[26] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), pp. 1945–1950, Washington, DC, USA, July 1999.
