2011 Fourth International Workshop on Advanced Computational Intelligence (IWACI), Wuhan, Hubei, China, October 19-21, 2011

Multi-swarm Cooperation Optimization for Multi-modal Functions in Repulsive Potential Field

Qin Tang, Yi Shen, Chengyu Hu and Jianyou Zeng
Abstract—In this paper, an advanced PSO is proposed for multi-modal function optimization. Multiple swarms are used to search in parallel, and artificial repulsive potential fields are set up at local extreme points to prevent multiple swarms from converging to the same place and searching it repeatedly. In addition, this paper provides a theoretical basis for the multi-swarm parallel search strategy used in the proposed algorithm. Finally, the method is tested on several benchmark functions and the results show superior performance compared with other PSO variants.
I. INTRODUCTION
Particle Swarm Optimization (PSO) is a stochastic optimization method based on swarm intelligence. Owing to its fast convergence speed and few parameters, PSO has recently attracted much attention from scholars [1]. It is often used to solve nonlinear, non-smooth and multi-modal function optimization problems, and it has been widely applied in many scientific and engineering fields.
However, in practice, the rapid loss of population diversity during evolution easily causes PSO to sink into a local extremum and leads to premature convergence when solving some multi-modal functions. To address this problem, many scholars have proposed improvements, for example, increasing particle diversity through the introduction of an evolutionary selection mechanism [2] or through dynamic adjustment of spatial neighborhoods [3].
Multi-swarm cooperation can increase particle diversity and accelerate the convergence speed of PSO. Shi and Krohling proposed a co-evolutionary algorithm based on two PSOs to solve min-max problems [4]. Niu et al. suggested a new multi-swarm cooperative PSO to solve function optimization [5]. A cooperative model with a two-layer framework was proposed by Li [6].
Manuscript received June 23, 2011. The project was supported by the Special Fund for Basic Scientific Research of Central Colleges, China University of Geosciences (Wuhan), CUGW090206.
Qin Tang is with the Department of Control Science and Engineering, Huazhong University of Science and Technology, and works in the School of Mathematics and Physics, China University of Geosciences, Wuhan 430074, P. R. China (e-mail: [email protected]).
Yi Shen is with the Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, P. R. China (e-mail: [email protected]).
Chengyu Hu is with the School of Computer, China University of Geosciences, Wuhan 430074, P. R. China (phone: 86-15927160586; e-mail: [email protected]).
Jianyou Zeng is with the School of Arts and Communication, China University of Geosciences, Wuhan 430074, P. R. China (e-mail: [email protected]).
Wang et al. proposed a multi-swarm co-evolutionary algorithm and applied it to a neural network noise canceller and to the training of the structure and parameters of RBF neural networks [7], [8].
In addition, multi-swarm cooperative co-evolution is widely used to solve dynamic optimization problems. Blackwell proposed a multi-swarm PSO that keeps the population more diverse through anti-convergence, preventing the swarms from gathering at the same place [9]. Speciation-based PSO, proposed by Parrott, is used for the optimization of multi-modal functions and dynamic optimization problems [10]. Recently, the application of PSO to dynamic problems through multiple co-evolving swarms has also been explored [11], [12]. The paper [13] shows that the cooperative technique can also solve large-scale problems.
Though multiple swarms can increase population diversity and accelerate convergence speed, we cannot jump to the conclusion that multi-swarm parallel search is more efficient than single-swarm search; some quantitative mathematical proof still needs to be given. Additionally, in practice, multiple swarms are likely to repeatedly search the same partial area, which makes the algorithm inefficient. Therefore, in this paper a multi-swarm cooperative PSO in a repulsive potential field is put forward. The main idea of the algorithm is to set up artificial potential fields at local extreme points, preventing multiple swarms from repeating the search there and pushing them to explore new areas. At the same time, in order to ensure convergence precision, one swarm is left to exploit each discovered local extreme point.
II. MULTI-SWARM COOPERATION OPTIMIZATION
In the biosphere, there exists not only Darwin's law of natural evolution, "survival of the fittest", but also a communal evolution law under which multiple individuals or species co-evolve through mutual cooperation. Cooperative evolutionary algorithms originate from this idea.
Multi-swarm cooperation can be classified into competitive co-evolution and cooperative co-evolution, and the latter is adopted in this paper. Cooperative co-evolution is generally defined as multiple swarms (or sub-swarms) searching for a solution (serially or in parallel) and exchanging information during the search according to some communication strategy. Based on the exchanged information, an action is taken to effectively continue the search process. Cooperative co-evolution is a kind of macro-coevolution method.
The most important step of multi-swarm cooperation optimization is to decompose the problem or the search space. From the viewpoint of space division, cooperative co-evolution includes implicit and explicit space division. Explicit space decomposition divides the space directly according to the correlation of dimensions, for example, dividing an n-dimensional space into n subspaces, using n swarms to optimize them respectively, and finally putting the optimal individuals together to form a solution vector. Implicit space division means that multiple swarms search the whole space at the same time; in practice, each swarm searches different areas because of different initializations and parameters.
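To make the explicit decomposition concrete, the following minimal Python sketch (not from the paper) splits an n-dimensional problem into n one-dimensional sub-problems, optimizes each with a trivial random search standing in for one sub-swarm, and assembles the best coordinates into a full solution vector. The objective function and all parameter values are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Illustrative objective: sum of squares, global optimum at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def optimize_dimension(f, context, dim_index, low, high, evals=200, rng=None):
    """Stand-in for one sub-swarm: random search over a single coordinate,
    keeping the other coordinates fixed at the current context vector."""
    rng = rng or np.random.default_rng()
    best_x, best_f = context[dim_index], f(context)
    for _ in range(evals):
        trial = context.copy()
        trial[dim_index] = rng.uniform(low, high)
        ft = f(trial)
        if ft < best_f:
            best_x, best_f = trial[dim_index], ft
    return best_x

def explicit_decomposition(f, n_dims, low, high, rounds=3, seed=0):
    """Explicit space division: n one-dimensional searches whose best
    coordinates are assembled into a full solution vector."""
    rng = np.random.default_rng(seed)
    solution = rng.uniform(low, high, size=n_dims)   # shared context vector
    for _ in range(rounds):                          # a few cooperative sweeps
        for d in range(n_dims):
            solution[d] = optimize_dimension(f, solution, d, low, high, rng=rng)
    return solution, f(solution)

if __name__ == "__main__":
    x_best, f_best = explicit_decomposition(sphere, n_dims=5, low=-5.0, high=5.0)
    print(x_best, f_best)
```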
Several papers have shown through simulation that multi-swarm cooperation is better than single-swarm search [14], but no mathematical analysis has been given so far. This paper first presents some assumptions and then proves that, under these assumptions, multi-swarm cooperation is better than single-swarm search.
Assumption: the search space is a d-dimensional hyper-sphere with radius R containing K local extreme point areas with radii r_1, ..., r_K, denoted m_1, m_2, ..., m_K. One of these areas contains the global extreme point x*, and this area is denoted m*. Multi-swarm cooperative search adopts implicit space division.
Theorem: if the assumption holds, multiple co-evolving swarms have a higher probability of finding the global optimal solution than a single swarm does.
Proof: For single-swarm search, $P_s\{x \in m^*\}$ denotes the probability that the result of the single-swarm search is the optimal solution, and this probability is

$$P_s\{x \in m^*\} = \prod_{k=1}^{K} p_s\{x \in m_k\} \qquad (1)$$

The probability of reaching area $m_k$ is determined by the radius of this area:

$$p_s\{x \in m_k\} = \left(\frac{r_k}{R}\right)^{d} \qquad (2)$$

According to (1) and (2), the probability that the single-swarm search result is the global extreme point is

$$P_s\{x \in m^*\} = \prod_{k=1}^{K}\left(\frac{r_k}{R}\right)^{d} = \frac{(r_1 r_2 \cdots r_K)^{d}}{R^{dK}} \qquad (3)$$

For multi-swarm cooperative search with $K$ swarms, implicit space division is used and each swarm searches the corresponding local extreme point area. The maximum range of the $k$-th swarm's search is $R_k$, with $R_k < R$, that is

$$R_1, R_2, \ldots, R_K < R \qquad (4)$$

The probability that the multi-swarm search result is the global extreme point, $P_m\{x \in m^*\}$, is

$$P_m\{x \in m^*\} = \prod_{k=1}^{K} p\{x \in m_k\} = \left(\frac{r_1}{R_1}\right)^{d}\left(\frac{r_2}{R_2}\right)^{d}\cdots\left(\frac{r_K}{R_K}\right)^{d} = \frac{(r_1 r_2 \cdots r_K)^{d}}{(R_1 R_2 \cdots R_K)^{d}} \qquad (5)$$

So

$$\frac{P_m}{P_s} = \left(\frac{R^{K}}{R_1 R_2 \cdots R_K}\right)^{d} \qquad (6)$$

From (4) and $d \ge 1$ we know $\frac{P_m}{P_s} > 1$, namely

$$P_m\{x \in m^*\} > P_s\{x \in m^*\} \qquad (7)$$

To sum up, if the assumption holds, the probability that the optimal value is found by multi-swarm search is higher than that of single-swarm search.
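As a quick numerical illustration of the theorem, the following Python sketch evaluates (3), (5) and (6) for arbitrary example radii (the values below are not from the paper) and checks that the ratio P_m/P_s exceeds 1 whenever every R_k < R.

```python
import numpy as np

# Illustrative values (not from the paper): search radius R, per-swarm
# ranges R_k and local extreme area radii r_k, dimension d.
R = 10.0
R_k = np.array([4.0, 5.0, 3.0])   # maximum range of each of the K swarms, all < R
r_k = np.array([0.5, 0.8, 0.3])   # radii of the K local extreme areas
d = 10

P_s = np.prod((r_k / R) ** d)                  # eq. (3): single-swarm probability
P_m = np.prod((r_k / R_k) ** d)                # eq. (5): multi-swarm probability
ratio = (R ** len(R_k) / np.prod(R_k)) ** d    # eq. (6): P_m / P_s

print(P_s, P_m, ratio)
assert np.isclose(P_m / P_s, ratio) and ratio > 1.0
```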
III. MULTI-SWARM COOPERATION OPTIMIZATION IN REPULSIVE
POTENTIAL FIELD
In this paper, an advanced cooperative PSO is proposed. More than two swarms, referred to as groups, are used, and all of them perform the same PSO algorithm. Information is exchanged every fixed number of iterations, and the gbest of every group is shared. As described above, the search space is decomposed in an implicit way and every group searches in parallel. Unlike other cooperative PSOs, we set up repulsive points in the search area.
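A minimal sketch of the information-exchange step just described, assuming each group is stored as a dictionary holding its gbest position and fitness; this data layout and the exchange interval are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def share_gbests(groups, every, iteration):
    """Every `every` iterations, give each group a copy of all groups' gbests.

    `groups` is a list of dicts with 'gbest' (numpy array) and 'gbest_f'
    (float) entries; minimization is assumed.
    """
    if iteration % every != 0:
        return
    all_gbests = [(g["gbest_f"], g["gbest"].copy()) for g in groups]
    for g in groups:
        # each group can now exploit the shared information in its next update
        g["neighbors_gbest"] = all_gbests
```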
It is known that when multiple swarms are used to optimize a multimodal function in parallel, several swarms may plunge into the same local extreme point. One way to solve this problem is to rank, by fitness value, the swarms that have sunk into a local extreme point, keep the best swarm searching there, and randomly reinitialize the others. The drawback of this method is that the reinitialized swarms, or swarms that have not yet sunk into the local extreme point, have a high probability of searching the area of that local extreme point again, so the computing efficiency is greatly reduced.
In this paper, setting an artificial repulsive potential field at each local extreme point prevents the multiple swarms from repeatedly searching local extreme points that have already been discovered. The first step of the algorithm is to group the swarms randomly and let them search in parallel. If three or more swarms cluster together, the area is assumed to contain a local extreme point, and a repulsive potential field is set at that point. Within the repulsive potential field, the swarm with the best fitness value stays resident and performs a precise exploitation search; the other swarms are reinitialized randomly and explore the search space. The flowchart of the new algorithm is presented in Fig. 1.
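The clustering-and-repulsion step just described might be sketched as follows. The default aggregation radius of 0.01 follows the experimental settings in Section IV, while the group data layout and reinitialization details are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def detect_and_repel(groups, repulsive_points, low, high,
                     aggregation_radius=0.01, rng=None):
    """If three or more groups' gbests lie within `aggregation_radius`,
    record a repulsive point there, keep the best group to exploit it,
    and reinitialize the other clustered groups at random."""
    rng = rng or np.random.default_rng()
    n = len(groups)
    clustered = set()
    for i in range(n):
        close = [j for j in range(n)
                 if np.linalg.norm(groups[i]["gbest"] - groups[j]["gbest"])
                 < aggregation_radius]
        if len(close) >= 3:
            clustered.update(close)
    if not clustered:
        return
    best = min(clustered, key=lambda j: groups[j]["gbest_f"])
    repulsive_points.append(groups[best]["gbest"].copy())  # new repulsive centre
    for j in sorted(clustered):
        if j == best:
            continue                         # best group stays to exploit the extremum
        dim = groups[j]["gbest"].size
        groups[j]["positions"] = rng.uniform(low, high,
                                             size=groups[j]["positions"].shape)
        groups[j]["gbest"] = rng.uniform(low, high, size=dim)
        groups[j]["gbest_f"] = np.inf        # force re-evaluation at the next iteration
```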
The artificial repulsive potential field at a local extreme point is set up according to (8).
$$U_{re}(\rho) = \begin{cases} \dfrac{1}{2}\,\eta\left(\dfrac{1}{\rho} - \dfrac{1}{\rho_0}\right)^{2}, & \rho \le \rho_0 \\[4pt] 0, & \rho > \rho_0 \end{cases} \qquad (8)$$
Here $\eta$ is the position gain coefficient, $\rho$ is the distance from a swarm (other than the resident one) to the local extreme point, and $\rho_0$ is the repulsive radius, a fixed value defined beforehand for the hyper-sphere. The distance refers to the Euclidean distance.
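Equation (8) has the form of the standard repulsive potential used in artificial potential field methods and translates directly into a short Python routine. The gain $\eta$ is left as a free parameter (its value is not specified in the paper); the default $\rho_0 = 0.1$ follows the experimental setting in Section IV.

```python
import numpy as np

def repulsive_potential(x, extreme_point, eta=1.0, rho0=0.1):
    """U_re of eq. (8): repulsion felt at position x from a recorded local
    extreme point; zero outside the repulsive radius rho0."""
    rho = np.linalg.norm(np.asarray(x) - np.asarray(extreme_point))  # Euclidean distance
    if 0.0 < rho <= rho0:
        return 0.5 * eta * (1.0 / rho - 1.0 / rho0) ** 2
    return 0.0
```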
Fig. 1. The flowchart of RC_PSO. [Flowchart steps: swarm initialization; dividing and grouping; updating the speed and position of each group and computing fitness values; updating pbest and gbest according to fitness; checking for swarm aggregation and convergence; producing local extreme points and setting the repulsive potential field; swarms resident at local extreme points perform local exploitation search while the other swarms perform large-range exploration search with dynamically changing weights in the repulsive potential field; repeat until the stopping criterion is satisfied, then stop.]
IV. EXPERIMENTS
A. Experimental Frameworks
In order to verify the validity of the improved algorithm, the public benchmark functions Griewank, Ackley and Rastrigin are tested. These three benchmark functions are complicated, nonlinear, multimodal functions with a large number of extreme points. They can reflect the abilities of the new algorithm in keeping swarm diversity, searching, and escaping from local extreme points.
The three benchmark functions are:
Ackley function:

$$F_1 = 20 + e - 20\exp\!\left(-0.2\sqrt{\frac{1}{N}\sum_{n=1}^{N} x_n^{2}}\right) - \exp\!\left(\frac{1}{N}\sum_{n=1}^{N}\cos(2\pi x_n)\right) \qquad (9)$$

Griewank function:

$$F_2 = 1 + \frac{1}{4000}\sum_{n=1}^{N} x_n^{2} - \prod_{n=1}^{N}\cos\!\left(\frac{x_n}{\sqrt{n}}\right) \qquad (10)$$

Rastrigin function:

$$F_3 = \sum_{n=1}^{N}\left[x_n^{2} - 10\cos(2\pi x_n) + 10\right] \qquad (11)$$
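For reference, the three benchmark functions (9)-(11) translate directly into the following Python routines; these are plain transcriptions of the formulas, not code from the paper.

```python
import numpy as np

def ackley(x):
    """F1, eq. (9): global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return (20.0 + np.e
            - 20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x))))

def griewank(x):
    """F2, eq. (10): global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    idx = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(idx)))

def rastrigin(x):
    """F3, eq. (11): global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```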
The global optimal point of all three benchmark functions is $X^* = (0, 0, \ldots, 0)$ with $F(X^*) = 0$. The parameter settings of the benchmark functions are presented in Table I.
TABLE I. BENCHMARK FUNCTION PARAMETERS

Benchmark function   Range of search space   Max gen   Size   Dim
Ackley               [-32.768, 32.768]       3000      10     10
Griewank             [-600, 600]             3000      10     10
Rastrigin            [-5.12, 5.12]           3000      10     10
To better understand the performance of the improved algorithm, this paper compares RC_PSO with PSO_w, FDR_PSO, FIPS, UPSO and CLPSO. Each algorithm is run 20 times and the algorithm parameters are kept the same as in reference [15]. The inertia weight decreases from 0.9 to 0.4, the acceleration factors are C1 = C2 = 2, the swarm size is 10, the dimension of the search space is 10, the maximum number of generations is 3000 and the solution precision is 10^-40. The fixed value of the aggregation radius in RC_PSO is 0.01; this means that when the distance among three or more swarms falls below 0.01, a local extreme point is assumed to exist. The radius of the repulsive potential field is 0.1.
B. Simulation Results
The simulation results are presented in Table II.
TABLE II. SIMULATION RESULTS OF FIVE VARIANTS OF PSO

              FDR_PSO     FIPS        UPSO        CLPSO       RC_PSO
F1  best      7.105e-15   3.552e-15   3.552e-15   7.105e-15   3.552e-15
    mean      0           0           0.7796      0           0
    worst     7.105e-15   2.21e-14    2.0133      3.552e-15   3.552e-15
F2  best      0.0282      0.00939     0.022141    1.0396e-6   0
    mean      0.1208      0.0568      0.0876      0.0090      0.006
    worst     0.2653      0.13933     0.19231     0.014806    0.13389
F3  best      0           0.99496     3.9798      0           0
    mean      6.3677      3.1213      14.8683     0.0995      0
    worst     12.934      6.9647      31.839      0.99496     1.760e-11
From Table II we can see that the mean, best and worst values of RC_PSO are generally better than or equal to those of the other PSO variants. For the Ackley function, UPSO easily sinks into local extreme points; in the 20 runs, all the variants except UPSO can find the global extreme point. For the Griewank function, CLPSO and RC_PSO work better, and RC_PSO always finds the best value. For the Rastrigin function, the four classical variants have difficulty finding the optimal value, while the proposed RC_PSO performs best, which indicates that the improved algorithm has the ability to jump out of local extreme points and a fast convergence speed.
Figures 2-4 show the best-fitness curves of the five algorithms on the three benchmark functions.
[Figure: best fitness value (log scale) vs. gen; curves for FDR_PSO, FIPS, UPSO, CLPSO, RC_PSO.]
Fig. 2. Fitness value curves tested on Ackley function
[Figure: best fitness value (log scale) vs. gen; curves for FDR_PSO, FIPS, UPSO, CLPSO, RC_PSO.]
Fig. 3. Fitness value curves tested on Griewank function
[Figure: best fitness value (log scale) vs. gen; curves for FDR_PSO, FIPS, UPSO, CLPSO, RC_PSO.]
Fig. 4. Fitness value curves tested on Rastrigin function
From the fitness value curves in Figs. 2-4 we can conclude that the convergence speed of RC_PSO is in the middle of all the algorithms; on the Ackley and Griewank functions, the convergence speed of FIPS is the fastest. The proposed algorithm sacrifices some search speed in exchange for higher convergence precision. In general, the search ability of RC_PSO is better than that of the other algorithms, while its convergence speed is in the middle.
When the dimension of the search space is increased, RC_PSO still performs better than the other algorithms. In the following Figs. 5-7, the dimension of the test functions is 100; the parameter settings of the benchmark functions are presented in Table III.
TABLE III. BENCHMARK FUNCTION PARAMETERS (DIM = 100)

Benchmark function   Range of search space   Max gen   Size   Dim
Ackley               [-32.768, 32.768]       1000      30     100
Griewank             [-600, 600]             1000      30     100
Rastrigin            [-5.12, 5.12]           1000      30     100
[Figure: fitness value vs. FEs, Ackley dim = 100; curves for FDR_PSO, FIPS, UPSO, RC_PSO, CLPSO.]
Fig. 5. Fitness value curves tested on Ackley function as dim=100
[Figure: best fitness value vs. FEs, Griewank dim = 100; curves for FDR_PSO, FIPS, UPSO, RC_PSO, CLPSO.]
Fig. 6. Fitness value curves tested on Griewank function as dim=100
[Figure: best fitness value vs. FEs, Rastrigin dim = 100; curves for FDR_PSO, FIPS, UPSO, RC_PSO, CLPSO.]
Fig. 7. Fitness value curves tested on Rastrigin function as dim=100
From Figs. 5-7 we can draw the conclusion that when the dimension of the search space increases, the search ability of all the algorithms deteriorates, but compared with the other classical PSO variants, RC_PSO yields better optima, is more robust, and more effectively prevents premature convergence.
V. CONCLUSIONS AND FUTURE WORK
Based on the analysis of multi-swarm cooperation optimization for multi-modal functions, this paper tries to overcome its shortcomings by proposing multi-swarm cooperation optimization in a repulsive potential field. Even when the dimension of the search space increases, tests on three multi-modal benchmark functions show that the proposed RC_PSO outperforms the other four classical algorithms.
Overall, we can conclude that our approach is suitable for multi-modal functions, being robust and outperforming the other variants. However, there is some deficiency in how the aggregation radius and the repulsive radius of the potential field are set; this could perhaps be alleviated by making the radii self-adaptive, which is left for future work.
REFERENCES
[1] J. Kennedy, R. C. Eberhart, "Particle Swarm Optimization". In: Proceedings of IEEE International Conference on Neural Networks, Perth, Australia, pp. 1942-1948, 1995.
[2] M. Lovbjerg, TK. Rasmussen, T. Krink. “Hybrid Particle Swarm Optimizer with Breeding and Subpopulations”. In: Proceedings of the third Genetic and Evolutionary Computation Conference, 2001.
[3] P. N. Suganthan, "Particle Swarm Optimizer with Neighborhood Operator". In: Proceedings of the 1999 Congress on Evolutionary Computation, Piscataway, NJ: IEEE Service Center, pp. 1958-1962, 1999.
[4] Y. Shi, R. Krohling, "Co-evolutionary particle swarm optimization to solving min-max problems". In: Proc. IEEE Congress on Evolutionary Computation, Honolulu, Hawaii, pp. 1682-1687, 2002.
[5] B. Niu, L. Li, X. Chu, “Novel multi-swarm cooperative particle swarm optimization”. Computer Engineering and Applications. vol. 45, no. 3, pp. 28-34, 2009.
[6] A. Li, “Particle Swarms Cooperative Optimizer”. Journal of Fudan University, vol. 43, no. 5, pp. 923-925, 2004.
[7] J. Wang, Q. Shen, H. Shen, X. Ch. Zhou, "Evolutionary design of RBF neural network based on multi-species cooperative particle swarm optimizer". Control Theory & Applications, vol. 23, no. 2, pp. 251-255, 2006.
[8] J. Wang, Q. Shen, Y. Shen, X. Nian, "Adaptive Neural Network Noise Canceller Based on Cooperative Particle Swarm Optimization". Computer Engineering and Applications, vol. 41, no. 13, pp. 20-23, 2005.
[9] T. Blackwell, J. Branke, “Multi-swarms, exclusion and anti- convergence in dynamic environments”. IEEE Transactions on Evolutionary Computation, vol. 10, no. 4, pp. 459-472, 2006.
[10] D. Parrott, X. Li. “Locating and tracking multiple dynamic optima by a particle swarm model using speciation”, IEEE Transactions on Evolutionary Computation, vol. 10, no. 4, pp. 440-458, 2006.
[11] G. H. Wang, J. Chen, F. Pan. “Cooperative Multi-Swarms Particle Swarm Optimizer for Dynamic Environment Optimization”. In:Proceedings of the 27th Chinese Control Conference, Kunming, pp. 43-48, 2008.
[12] P. Gao, Z. Cai, L.Yu. “Multi-swarm based optimization algorithm in dynamic environments”, Journal of Central South University (Science and Technology), vol. 40, no. 3, pp. 732-736, 2009.
[13] X. Li, X. Yao, "Cooperatively Coevolving Particle Swarms for Large Scale Optimization", IEEE Transactions on Evolutionary Computation, pp. 1-15, 2011.
[14] F. van den Bergh, A. P. Engelbrecht. “A cooperative approach to particle swarm optimization”, In:IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225-239, 2004.
[15] Y. Lv, S. Li, S. Chen, W. Guo, C. Zhou, “Particle Swarm Optimization Based on Adaptive Diffusion and Hybrid Mutation”, Journal of Software, vol. 18, no.11, pp. 2740-2751, 2007.