TRANSCRIPT
IEEE CIS Outstanding Ph.D. Dissertation Award Nomination
Nominee Dr. Zhi-Hui Zhan South China University of Technology [email protected]
Nominator Prof. Qingfu Zhang City University of Hong Kong [email protected]
Referee Prof. Derong Liu University of Illinois at Chicago [email protected]
Prof. C. A. C. Coello CINVESTAV-IPN, Mexico [email protected]
Prof. Jun Wang City University of Hong Kong [email protected]
Prof. C. L. Philip Chen University of Macau [email protected]
Prof. Chin-Teng (CT) Lin National Chiao-Tung University [email protected]
Contents
Part I: Nomination Letter
Part II: Referee Letters
Part III: Ph. D. Dissertation
Part I: Nomination Letter
Nomination Letter Nominator: name, affiliation and email address of nominator
Name: Qingfu Zhang
Affiliation: City University of Hong Kong
Email Address: [email protected]
Nominee: name, affiliation, postal address and email address of
nominee
Name: Zhi-Hui Zhan
Affiliation: South China University of Technology
Postal Address: Room 515, School of Computer Science and
Engineering, South China University of
Technology, Da-Xue-Cheng, Guangzhou,
Guangdong, P. R. China, 510006
Email Address: [email protected]; [email protected]
Dissertation: title of the dissertation, institution in which the
degree was conferred
Title: Research into Machine Learning Aided Particle
Swarm Optimization and Its Engineering
Application
Institution: Sun Yat-sen University
Proposed Citation: provide suggestion for the complete, correct
and succinct citation. The Awards Committee reserves the right
to make any necessary change on the citation.
2017: Zhi-Hui Zhan, “Research into Machine Learning Aided Particle
Swarm Optimization and Its Engineering Application,” Sun Yat-sen
University, China, 2013.
List of Publications in journals and conference proceedings generated by the research reported in the Ph.D. dissertation:
The 13 most related papers include 9 IEEE Transactions papers;
3 of them are ESI Highly Cited Papers.
Publications Related to Chapter 2
[1]. Zhi-Hui Zhan, J. Zhang, Y. Li, and H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics--Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009. [Related to Chapter 2: Propose the adaptive PSO]
ESI Highly Cited Paper
Google Scholar Citation 1026 times, SCI Citation 511 times
The Top 2 cited paper of this journal in recent 10 years, since 2006
[2]. Y. L. Li, Zhi-Hui Zhan (Corresponding Author), Y. J. Gong, W. N. Chen, J. Zhang, and Y. Li, “Differential evolution with an evolution path: A DEEP evolutionary algorithm,” IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1798-1810, Sept. 2015. [Related to Chapter 2: Extend the adaptive idea to DE]
Google Scholar Citation 30 times, SCI Citation 12 times
Publications Related to Chapter 3
[3]. Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011. [Related to Chapter 3: Propose the orthogonal learning PSO]
ESI Hot Paper, ESI Highly Cited Paper
Google Scholar Citation 351 times, SCI Citation 196 times
The Top 3 cited paper of this journal in recent 5 years, since 2011
[4]. Y. H. Li, Zhi-Hui Zhan (Corresponding Author), S. Lin, J. Zhang, and X. N. Luo, “Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems,” Information Sciences, vol. 293, no. 1, pp. 370-382, 2015. [Related to Chapter 3: Extend the orthogonal learning strategy to competitive and cooperative strategy]
ESI Highly Cited Paper
Google Scholar Citation 47 times, SCI Citation 27 times
Publications Related to Chapter 4
[5]. Zhi-Hui Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. H. Shi, “Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, Apr. 2013. [Related to Chapter 4: Propose the co-evolutionary multiswarm PSO for MOP]
ESI Highly Cited Paper
Google Scholar Citation 98 times, SCI Citation 58 times
[6]. Y. L. Li, Y. R. Zhou, Zhi-Hui Zhan (Corresponding Author), and J. Zhang, “A primary theoretical study on decomposition-based multiobjective evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 4, pp. 563-576, Aug. 2016. [Related to Chapter 4: Theoretical study on multiobjective evolutionary algorithms]
[7]. H. H. Li, Z. G. Chen, Zhi-Hui Zhan (Corresponding Author), K. J. Du, and J. Zhang, “Renumber coevolutionary multiswarm particle swarm optimization for multi-objective workflow scheduling on cloud computing environment,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2015), Madrid, Spain, Jul. 2015, pp. 1419-1420. [Related to Chapter 4: Apply the CMPSO algorithm to cloud computing resources scheduling]
Publications Related to Chapter 5
[8]. Zhi-Hui Zhan and J. Zhang, “Orthogonal learning particle swarm optimization for power electronic circuit optimization with free search range,” in Proc. IEEE Congr. Evol. Comput. (CEC 2011), New Orleans, Jun. 2011, pp. 2563-2570. [Related to Chapter 5: Propose to use OLPSO to solve PEC]
[9]. M. Shen, Zhi-Hui Zhan (Corresponding Author), W. N. Chen, Y. J. Gong, J. Zhang, and Y. Li, “Bi-velocity discrete particle swarm optimization and its application to multicast routing problem in communication networks,” IEEE Transactions on Industrial Electronics, vol. 61, no. 12, pp. 7141-7151, Dec. 2014. [Related to Chapter 5: Engineering application of PSO]
Google Scholar Citation 43 times, SCI Citation 27 times
[10]. X. F. Liu, Zhi-Hui Zhan (Corresponding Author), D. Deng, Y. Li, T. L. Gu, and J. Zhang, “An energy efficient ant colony system for virtual machine placement in cloud computing,” IEEE Transactions on Evolutionary Computation, DOI: 10.1109/TEVC.2016.2623803. 2016. [Related to Chapter 5: Engineering application]
[11]. Y. L. Li, Zhi-Hui Zhan (Corresponding Author), Y. J. Gong, J. Zhang, Y. Li, and Q. Li, “Fast micro-differential evolution for topological active net optimization,” IEEE Transactions on Cybernetics, vol. 46, no. 6, pp. 1411-1423, Jun. 2016. [Related to Chapter 5: Engineering application]
Publications Related to Chapter 1&6
[12]. J. Zhang (Supervisor), Zhi-Hui Zhan, Y. Lin, N. Chen, Y. J. Gong, J. H. Zhong, H. S. H. Chung, Y. Li, and Y. H. Shi, “Evolutionary computation meets machine learning: A survey,” IEEE Computational Intelligence Magazine, vol. 6, no. 4, pp. 68-75, Nov. 2011. [Related to Chapters 1&6: Survey of EC&ML]
Google Scholar Citation 91 times, SCI Citation 57 times
[13]. Zhi-Hui Zhan, X. Liu, H. Zhang, Z. Yu, J. Weng, Y. Li, T. Gu, and J. Zhang, “Cloudde: A heterogeneous differential evolution algorithm and its distributed cloud version,” IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 3, pp. 704-716, March. 2017. [Related to Chapter 6: Future work]
Other information, if applicable, that can be used for the
evaluation of the PhD dissertation.
Abstract of the Ph.D. dissertation Particle swarm optimization (PSO) is a simple yet powerful optimization technique. Compared with other evolutionary computation (EC) algorithms such as the genetic algorithm (GA), PSO has a simpler algorithm structure, is easier to implement, and converges faster. Therefore, PSO has good application prospects in various science and engineering optimization problems, and it has attracted great interest and attention from researchers all over the world. During the almost two decades of development since PSO was invented in 1995, the following key problems have emerged and call for urgent solutions.
1) The performance of PSO strongly relies on the parameters and operators used in different evolutionary states. How to detect the evolutionary state and adaptively control the parameters and operators to obtain better algorithm performance is a hot yet difficult research topic in the PSO community.
2) Although PSO can quickly obtain a reasonable solution for various problems, its fast convergence makes it easy to become trapped in local optima, especially on complex multimodal optimization problems. How to develop a PSO variant with both fast convergence speed and strong global search ability is a significant yet challenging research topic in the PSO community.
3) When applying PSO to applications such as multi-objective optimization and engineering optimization problems in practice, how to exploit the problem characteristics so as to solve the practical problem efficiently remains a challenge in extending PSO to real-world applications.
In response to these issues, this dissertation carries out innovative research into PSO parameter adaptation control, operator orthogonal design, and population co-evolutionary interaction. To make this research effective, the dissertation points out that the population-based search and iteration-based evolution of PSO provide a mass of search data and historical data during the evolutionary process. As machine learning (ML) is a powerful tool for obtaining useful information from large amounts of data, using ML techniques to analyze, process, and utilize these data is of great significance in aiding PSO algorithm design and thereby improving algorithm performance. In view of this, the dissertation conducts research into ML-aided PSO and its engineering applications. The main work is to apply techniques and ideas from the ML field, such as statistical analysis, orthogonal design and prediction, and ensemble learning, to aid PSO design, improve algorithm performance, and extend its applications.
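The ML-aided variants summarized below all build on the canonical global-best PSO update. As background, a minimal textbook sketch (standard default parameter values; not the dissertation's exact implementation):

```python
import random

def pso(f, dim, bounds, swarm=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing f over [lo, hi]^dim."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                       # personal best positions
    pf = [f(x) for x in X]                      # personal best fitness
    g = min(range(swarm), key=lambda i: pf[i])  # index of global best
    G, gf = P[g][:], pf[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:                      # update personal best
                P[i], pf[i] = X[i][:], fx
                if fx < gf:                     # and global best
                    G, gf = X[i][:], fx
    return G, gf

# e.g. minimize the 5-dimensional sphere function
best, val = pso(lambda x: sum(t * t for t in x), dim=5, bounds=(-10, 10))
```

The three problems above map directly onto this loop: the fixed w, c1, c2 motivate adaptation (APSO), and the fixed pbest/gbest guidance motivates better exemplars (OLPSO).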
The main innovative contributions of this thesis are as follows: (1) Propose a statistical-analysis-based adaptive PSO (APSO) that enables the algorithm to act properly in different evolutionary states, enhancing its versatility.
The parameter and operator requirements of PSO differ across evolutionary states. Exploiting the strong ability of ML techniques to obtain useful information from mass data, this dissertation proposes to perform statistical analyses on the population distribution data and fitness data of PSO during the evolutionary process. This results in a novel evolutionary state estimation (ESE) method that can classify the different evolutionary states efficiently. Using the ML-aided ESE method, APSO adaptively controls the parameters and operators according to the different requirements of each state, improving PSO performance and enhancing PSO versatility in different search environments. (2) Propose an orthogonal-design-and-prediction-based orthogonal learning PSO (OLPSO) to enhance the algorithm's global search ability in complex optimization.
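The population distribution statistics behind ESE can be illustrated with a small sketch. The evolutionary factor below follows the published APSO formulation (the mean inter-particle distance of the globally best particle compared against the swarm extremes); the hard interval classification is a crude stand-in for the fuzzy membership functions of the original:

```python
import math

def evolutionary_factor(positions, best_index):
    """Evolutionary factor in [0, 1]: compares the mean distance of
    the globally best particle to all others against the swarm's
    extreme mean distances (small values suggest convergence,
    large values exploration)."""
    n = len(positions)
    def mean_dist(i):
        return sum(math.dist(positions[i], positions[j])
                   for j in range(n) if j != i) / (n - 1)
    d = [mean_dist(i) for i in range(n)]
    d_min, d_max = min(d), max(d)
    if d_max == d_min:   # degenerate swarm, treat as fully converged
        return 0.0
    return (d[best_index] - d_min) / (d_max - d_min)

def classify_state(f):
    """Crude interval mapping; the original APSO uses overlapping
    fuzzy membership functions instead of hard thresholds."""
    if f < 0.25:
        return "convergence"
    if f < 0.5:
        return "exploitation"
    if f < 0.75:
        return "exploration"
    return "jumping-out"
```

The estimated state then drives the parameter control, e.g. adjusting the inertia weight and acceleration coefficients differently in exploration versus convergence.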
Because the learning strategy in traditional PSO cannot sufficiently utilize the information in the personal experience and the neighborhood experience, this dissertation proposes a novel orthogonal learning (OL) strategy in which each particle constructs a promising exemplar to guide its flight. The OL strategy is based on the orthogonal experimental design technique in ML, which can efficiently discover useful information in the personal and neighborhood experiences and predict a promising combination of the two. Therefore, OLPSO obtains both fast convergence speed and strong global search ability, and its promising performance makes it an efficient tool for complex and multimodal optimization problems. (3) Propose a co-evolutionary multi-swarm PSO (CMPSO) inspired by the ensemble learning idea in ML, enhancing performance in multi-objective optimization.
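The core of the OL strategy, choosing per dimension whether the exemplar copies the personal best or the neighborhood best by testing the level combinations listed in an orthogonal array, can be sketched as follows. The parity-based array construction and the pick-best-tested-row step are simplifications; the dissertation additionally runs a factor analysis to predict a promising untested combination:

```python
def orthogonal_array(factors):
    """Two-level orthogonal array with 2**k rows for the given number
    of factors, built from parity (Walsh) functions: entry(r, c) is
    the parity of r & c, so every pair of columns is balanced."""
    k = factors.bit_length()  # smallest k with 2**k - 1 >= factors
    return [[bin(r & c).count("1") % 2 for c in range(1, factors + 1)]
            for r in range(1 << k)]

def ol_exemplar(pbest, gbest, f):
    """Build an OL exemplar: each orthogonal-array row says, per
    dimension, whether to copy gbest (level 1) or pbest (level 0);
    the best tested combination becomes the exemplar that guides
    the particle (minimization of f assumed)."""
    dim = len(pbest)
    best_vec, best_val = None, float("inf")
    for row in orthogonal_array(dim):
        trial = [gbest[d] if row[d] else pbest[d] for d in range(dim)]
        val = f(trial)
        if val < best_val:
            best_vec, best_val = trial, val
    return best_vec
```

The orthogonal array keeps the number of trials at O(dim) rather than the 2^dim of a full factorial, which is what makes the per-dimension combination search affordable inside an optimizer.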
Ensemble learning in ML uses multiple learners to enhance classification ability. Inspired by this multiple-learner idea, the dissertation designs a novel optimization framework, multiple populations for multiple objectives (MPMO), for using EC algorithms to solve multi-objective optimization problems (MOPs). Based on the MPMO framework, the CMPSO algorithm on the one hand avoids the fitness assignment problem caused by considering all the objectives together, and on the other hand searches sufficiently in different areas of the Pareto front (PF) under the guidance of each objective. Moreover, CMPSO uses a novel external shared archive for the communication and co-evolution of the different swarms, so that the non-dominated solutions cover the whole PF efficiently, enhancing performance on MOPs. (4) Apply OLPSO to the power electronic circuit (PEC) design problem, extending the engineering application fields of PSO.
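In the MPMO framework each swarm optimizes a single objective and exchanges information through the shared archive. A minimal sketch of the dominance-based archive update that every swarm would call (the archive size bound and the elitist learning step of the original CMPSO are omitted):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): never worse in any
    objective, strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, candidate):
    """Shared-archive update: reject a dominated (or duplicate)
    candidate, otherwise insert it and evict every member it
    dominates, so the archive always holds mutually non-dominated
    objective vectors."""
    if candidate in archive or any(dominates(m, candidate) for m in archive):
        return archive
    kept = [m for m in archive if not dominates(candidate, m)]
    kept.append(candidate)
    return kept
```

Because each swarm pushes its best solutions for one objective into the same archive, the non-dominated set naturally spreads toward both ends of the PF while the dominance test keeps it consistent.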
The PEC design problem is a complex engineering application problem because it involves many components, such as resistors, capacitors, and inductors, all of which need to be designed optimally to obtain good circuit performance. This dissertation on the one hand extends the traditional PEC optimization model by introducing a free search range for the components. Although this new model brings PEC much closer to real-world application, it poses great challenges to current optimization methods. Therefore, the dissertation on the other hand proposes to apply the powerful ML-aided OLPSO to optimize PEC with free search range. The success of OLPSO in the PEC application not only provides a powerful optimizer for PEC design, but also demonstrates that ML-aided PSO algorithms have great potential in real-world engineering optimization problems.

In summary, this thesis argues that ML techniques can acquire useful information from the mass of data produced by the PSO search, and therefore proposes to apply the statistical analysis, orthogonal design and prediction, and ensemble learning techniques and/or ideas of the ML field to aid PSO design, improving the convergence speed, solution accuracy, and application extensions. This is also an attempt to combine the techniques of EC and ML, which are two of the most significant research fields in computer science.

This Ph.D. dissertation was awarded the China Computer Federation (CCF) Outstanding Dissertation Award in 2013 and the Guangdong Province Outstanding Dissertation Award in 2014. The CCF is the largest academic community for computer science in China. The CCF Outstanding Dissertation Award is evaluated every year and covers Ph.D. dissertations across China in all fields related to computer science, such as artificial intelligence, networks, databases, security, vision, and software engineering.

[Award certificate] Dr. Zhi-Hui Zhan: Your doctoral dissertation “Research into Machine Learning Aided Particle Swarm Optimization and Its Engineering Application” is awarded Outstanding Doctoral Dissertation 2014 of Guangdong Province, P. R. China.

Some related publications produced by this Ph.D. dissertation are listed as Top Cited Papers in the corresponding IEEE Transactions journals:

Zhi-Hui Zhan, J. Zhang, Y. Li, and H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics--Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009. [Related to Chapter 2: Propose the adaptive PSO] (SCI Citation 511 times; Top 2 cited paper of this journal in recent 10 years, since 2006)

Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011. [Related to Chapter 3: Propose the orthogonal learning PSO] (SCI Citation 196 times; Top 3 cited paper of this journal in recent 5 years, since 2011)

4 papers (APSO, OLPSO, CMPSO, and CCPSO) produced by this dissertation are listed as ESI Highly Cited Papers, and the related research directions are listed as Research Fronts. The OLPSO paper related to Chapter 3 was once listed as an ESI Hot Paper (in Feb. 2014, there were only 62 Hot Papers in Computer Science at that time).

The Nominee (Zhi-Hui Zhan) is listed as one of the Top Authors, with 7 IEEE TEVC papers in the recent 5 years:
http://ieeexplore.ieee.org/search/searchresult.jsp?queryText=(%22Publication%20Title%22:IEEE%20Transactions%20on%20Evolutionary%20Computation)&refinements=4291944246&refinements=4291944245&refinements=undefined&ranges=2011_2017_Year&sortType=desc_p_Citation_Count&matchBoolean=true&searchField=Search_All
The Short CV of the Nominee Zhi-Hui Zhan (March 2017)
Zhi-Hui Zhan, Professor, Ph.D.
Research Interests
Computational Intelligence: Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Genetic Algorithm (GA), Differential Evolution (DE), Brain Storm Optimization (BSO)
Cloud Computing and Big Data: Large-Scale Resource Scheduling, Multiobjective Optimization, Dynamic Optimization
Intelligent Applications: Wireless Sensor Networks, Scheduling and Control, Intelligent Systems
Work and Education
01/2016 – Now: Professor, School of Computer Sci. and Eng., South China Univ. of Technology
01/2015 – 12/2015: Associate Professor, School of Advanced Computing, Sun Yat-sen University
07/2013 – 12/2014: Lecturer, School of Information Sci. and Technology, Sun Yat-sen University
09/2009 – 06/2013: Ph.D., Computer Application Technology, Sun Yat-sen University
09/2003 – 07/2007: Bachelor of Science, Computer Sci. and Techno., Sun Yat-sen University
Awards and Honors
2016, Pearl River Scholar Young Professor
2016, Elsevier Most Cited Chinese Researchers in Computer Science
2015, Elsevier Most Cited Chinese Researchers in Computer Science
2014, Elsevier Most Cited Chinese Researchers in Computer Science
2014, Natural Science Fund for Distinguished Young Scholars of Guangdong Province, China
2015, Pearl River New Star in Science and Technology
2014, Guangdong Province Outstanding Dissertation Award
2013, China Computer Federation (CCF) Outstanding Dissertation Award
Projects
2015-2017, National Natural Science Foundation of China (NSFC), PI, 270,000 RMB
2014-2018, NSF of Guangdong Province for Distinguished Young Scholars, PI, 1,000,000 RMB
2015-2017, Project for Pearl River New Star in Science and Technology, PI, 300,000 RMB
2015-2016, Fundamental Research Funds for the Central Universities, PI, 300,000 RMB
2013-2015, National High-Tech Research and Development Program (863) of China, Co-PI, 26,800,000 RMB
Academic Activities and Services
The Committee of Machine Learning in CAAI: Committee Member
The Committee of Artificial Intelligence and Pattern Recognition in CCF: Committee Member
The 7th Int. Conf. on Information Science and Technology (ICIST 2017): Special Sessions Co-Chair
IEEE World Congr. Comput. Intell. (WCCI 2014): Special Session Proposer
Int. Conf. Machine Learning & Cybern. (ICMLC 2013): Invited Session Organizer
Program Committee Member of 10+ International Conferences, Including:
The Thirty-First AAAI Conference on Artificial Intelligence (AAAI 2017)
Evolutionary Machine Learning (EML) track of the ACM GECCO 2017
International Conference on Swarm Intelligence (ICSI 2016/2015/2014)
Asia-Pacific Services Computing Conference (APSCC 2014)
Conf. Technologies and Appl. of Artificial Intell. (TAAI 2015/2014/2013)
Int. Conf. Software, Multimedia and Communication Engineering (SMCE 2015)
Regular Reviewer of 20+ International Journals, Including:
Since 2009, IEEE Transactions on Evolutionary Computation
Since 2012, IEEE Transactions on Cybernetics
Since 2009, IEEE Transactions on Industrial Electronics
Since 2012, IEEE Transactions on Computational Intelligence and AI in Games
Since 2012, IEEE Computational Intelligence Magazine
Since 2011, Information Sciences
Invited Talks
01/2017, Int. Workshop on Intell. Opt. and Social Computing (IWIOSC 2017), Changsha, China
10/2016, The 11th Int. Conf. on Bio-inspired Computing: Theor.&Appl. (BIC-TA 2016), Xian, China
08/2014, The First Young Scholar Forum of CAAI, Nanchang, China
06/2015, The 2nd Evolutionary Computation and Learning Forum (ECOLE 2015), Nanjing, China
Monograph
[1]. J. Zhang, Zhi-Hui Zhan, W. N. Chen, J. H. Zhong, N. Chen, Y. J. Gong, R. T. Xu, and Z. Guan, Computation Intelligence, Tsinghua University Press, November, 2011. (Chinese)
[2]. J. Zhang, W. N. Chen, X. M. Hu, Y. Lin, W. L. Zhong, Zhi-Hui Zhan, and T. Huang, Numerical Computing, Tsinghua University Press, July, 2008. (Chinese)
ESI Hot Paper
[1]. Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011. (IF=5.908; Citation: Google Scholar 351 times, SCI 196 times; Top 3 cited paper of this journal in recent 5 years, since 2011)
ESI Highly Cited Papers
[2]. Zhi-Hui Zhan, J. Zhang, Y. Li, and H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics--Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009. (IF=4.943; Citation: Google Scholar 1026 times, SCI 511 times; Top 2 cited paper of this journal in recent 10 years, since 2006)
[3]. Zhi-Hui Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. H. Shi, “Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, April. 2013. (IF=4.943; Citation: Google Scholar 98 times, SCI 58 times)
[4]. Y. H. Li, Zhi-Hui Zhan(Corresponding Author), S. Lin, J. Zhang, and X. N. Luo, “Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems,” Information Sciences, vol. 293, no. 1, pp. 370-382, 2015. (IF=3.364; Citation: Google Scholar 47 times, SCI 27 times)
[5]. W. Chen, J. Zhang, Y. Lin, N. Chen, Zhi-Hui Zhan, H. Chang, Y. Li, and Y. H. Shi, “Particle swarm optimization with an aging leader and challengers,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, April. 2013. (IF=5.908; Citation: Google Scholar 130 times, SCI 41 times)
Other Journal Papers
[6]. X. F. Liu, Zhi-Hui Zhan (Corresponding Author), D. Deng, Y. Li, T. L. Gu, and J. Zhang, “An energy efficient ant colony system for virtual machine placement in cloud computing,” IEEE Transactions on Evolutionary Computation, DOI: 10.1109/TEVC.2016.2623803, 2016. (IF=5.908)
[7]. Y. Li, Y. Zhou, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “A primary theoretical study on decomposition-based multiobjective evolutionary algorithms,” IEEE Trans. on Evolutionary Computation, vol. 20, no. 4, pp. 563-576, Aug. 2016. (IF=5.908)
[8]. Q. Lin, J. Chen, Zhi-Hui Zhan, W. Chen, C. Coello Coello, Y. Yin, C. Lim, and J. Zhang, “A hybrid evolutionary immune algorithm for multiobjective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 711-729, Oct. 2016. (IF=5.908)
[9]. X. Zhang, J. Zhang, Y. Gong, Zhi-Hui Zhan, W. Chen, and Y. Li, “Kuhn-munkres parallel genetic algorithm for the set cover problem and its application to large-scale wireless sensor networks,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 695-710, Oct. 2016. (IF=5.908)
[10]. Y. J. Gong, J. Zhang, H. Chung, W. N. Chen, Zhi-Hui Zhan, Y. Li, and Y. H. Shi, “An efficient resource allocation scheme using particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 6, pp. 801-816, Dec. 2012. (IF=5.908; Citation: Google Scholar 34 times, SCI 16 times)
[11]. Y. L. Li, Zhi-Hui Zhan(Corresponding Author), Y. J. Gong, J. Zhang, Y. Li, and Q. Li, “Fast micro-differential evolution for topological active net optimization,” IEEE Transactions on Cybernetics, vol. 46, no. 6, pp. 1411-1423, Jun. 2016. (IF=4.943; Citation: Google Scholar 3 times)
[12]. Y. L. Li, Zhi-Hui Zhan(Corresponding Author), Y. J. Gong, W. N. Chen, J. Zhang, and Y. Li, “Differential evolution with an evolution path: A DEEP evolutionary algorithm,” IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1798-1810, Sept. 2015. (IF=4.943; Citation: Google Scholar 13 times, SCI 1 times)
[13]. N. Chen, W. N. Chen, Y. J. Gong, Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. S. Tan, “An evolutionary algorithm with double-level archives for multiobjective optimization,” IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1851-1863, Sept. 2015. (IF=4.943; Citation: Google Scholar 8 times, SCI 1 times)
[14]. W. J. Yu, M. Shen, W. N. Chen, Zhi-Hui Zhan, Y. J. Gong, Y. Lin, O. Liu, and J. Zhang, “Differential evolution with two-level parameter adaptation,” IEEE Transactions on Cybernetics, vol. 44, no. 7, pp. 1080-1099, Jul. 2014. (IF=4.943; Citation: Google Scholar 31 times, SCI 12 times)
[15]. Zhi-Hui Zhan, X. Liu, H. Zhang, Z. Yu, J. Weng, Y. Li, T. Gu, and J. Zhang, “Cloudde: A heterogeneous differential evolution algorithm and its distributed cloud version,” IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 3, pp. 704-716, March. 2017. (IF=2.661)
[16]. Zhi-Hui Zhan, J. Zhang, Y. Li, O. Liu, S. K. Kwok, W. H. Ip, and O. Kaynak, “An efficient ant colony system based on receding horizon control for the aircraft arrival sequencing and scheduling problem,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 399-412, Jun. 2010. (IF=2.534; Citation: Google Scholar 78 times, SCI 33 times)
[17]. M. Shen, Zhi-Hui Zhan(Corresponding Author), W. N. Chen, Y. J. Gong, J. Zhang, and Y. Li, “Bi-velocity discrete particle swarm optimization and its application to multicast routing problem in communication networks,” IEEE Transactions on Industrial Electronics, vol. 61, no. 12, pp. 7141-7151, Dec. 2014. (IF=6.383; Citation: Google Scholar 33 times, SCI 11 times)
[18]. Y. J. Gong, M. Shen, J. Zhang, O. Kaynak, W. N. Chen, and Zhi-Hui Zhan, “Optimizing RFID network planning by using a particle swarm optimization algorithm with redundant reader elimination,” IEEE Transactions on Industrial Informatics, vol. 8, no. 4, pp. 900-912, Nov. 2012. (IF=4.708; Citation: Google Scholar 47 times, SCI 27 times)
[19]. Zhi-Hui Zhan, X. F. Liu, Y. J. Gong, J. Zhang, H. S. H. Chung, and Y. Li, “Cloud computing resource scheduling and a survey of its evolutionary approaches,” ACM Computing Surveys, vol. 47, no. 4, Article 63, pp. 1-33, Jul. 2015. (IF=5.243; Citation: Google Scholar 10 times, SCI 1 times)
[20]. Q. Liu, W. Wei, H. Yuan, Zhi-Hui Zhan(Corresponding Author), and Y. Li, “Topology selection for particle swarm optimization,” Information Sciences, vol. 363, no. 1, pp. 154-173, Oct. 2016. (IF=3.364)
[21]. J. Zhang, Zhi-Hui Zhan, Y. Lin, N. Chen, Y. J. Gong, J. H. Zhong, H. S. H. Chung, Y. Li, and Y. H. Shi, “Evolutionary computation meets machine learning: A survey,” IEEE Computational Intelligence Magazine, vol. 6, no. 4, pp. 68-75, Nov. 2011. (IF=3.647; Citation: Google Scholar 77 times, SCI 41 times)
[22]. Y. Gong, W. Chen, Zhi-Hui Zhan, J. Zhang, Y. Li, Q. Zhang, and J. Li, “Distributed evolutionary algorithms and their models: A survey of the state-of-the-art,” Applied Soft Computing, vol. 34, pp. 286-300, Sept. 2015. (IF=2.857; Citation: Google Scholar 10 times, SCI 2 times)
[23]. W. Yu, Zhi-Hui Zhan, and J. Zhang, “Artificial bee colony algorithm with an adaptive greedy position update strategy,” Soft Computing, DOI:10.1007/s00500-016-2334-4. 2016. (IF=1.630)
Selected Conference Papers (First/Corresponding Author, Most of Them Are CEC/GECCO/SSCI Papers)
[1]. Zhi-Hui Zhan, Z. J. Wang, Y. Lin, and J. Zhang, “Adaptive radius species-based particle swarm optimization for multimodal optimization problems,” in Proc. IEEE Congr. Evol. Comput. (CEC 2016), Vancouver, Canada, Jul. 2016, pp. 2043-2048.
[2]. Z. J. Wang, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Orthogonal learning particle swarm optimization with variable relocation for dynamic optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC 2016), Vancouver, Canada, Jul. 2016, pp. 594-600.
[3]. X. F. Liu, Zhi-Hui Zhan(Corresponding Author), J. H. Lin, and J. Zhang, “Parallel differential evolution on distributed computational resources for power electronic circuit optimization,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2016), 2016, pp. 117-118.
[4]. Z. J. Wang, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Parallel multi-strategy evolutionary algorithm using massage passing interface for many-objective optimization,” in Proc. IEEE Symposium Series on Computational Intelligence (SSCI 2016), Athens, Greece, Dec. 2016, pp. 1-8.
[5]. Z. G. Chen, Zhi-Hui Zhan(Corresponding Author), W. Shi, W. N. Chen, and J. Zhang, “When neural network computation meets evolutionary computation: A survey,” in Proc. International Symposium on Neural Networks (ISNN 2016), Saint Petersburg, Russia, Jul. 2016, pp. 603-612.
[6]. Y. F. Li, Zhi-Hui Zhan(Corresponding Author), Y. Lin, and J. Zhang, “Comparisons study of APSO OLPSO and CLPSO on CEC2005 and CEC2014 test suits,” in Proc. IEEE Congr. Evol. Comput. (CEC 2015), Sendai, Japan, 2015, pp. 3179-3185.
[7]. Z. G. Chen, K. J. Du, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Deadline constrained cloud computing resources scheduling for cost optimization based on dynamic objective genetic algorithm,” in Proc. IEEE Congr. Evol. Comput. (CEC 2015), Sendai, Japan, 2015, pp. 708-714.
[8]. H. H. Li, Y. W. Fu, Zhi-Hui Zhan(Corresponding Author), and J. J. Li, “Renumber strategy enhanced particle swarm optimization for cloud computing resource scheduling,” in Proc. IEEE Congr. Evol. Comput. (CEC 2015), Sendai, Japan, 2015, pp. 870-876.
[9]. X. F. Liu, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Dichotomy guided based parameter adaptation for differential evolution,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2015), Madrid, Spain, Jul. 2015, pp. 289-296.
[10]. H. H. Li, Z. G. Chen, Zhi-Hui Zhan(Corresponding Author), K. J. Du, and J. Zhang, “Renumber coevolutionary multiswarm particle swarm optimization for multi-objective workflow scheduling on cloud computing environment,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2015), Madrid, Spain, Jul. 2015, pp. 1419-1420.
[11]. Z. J. Wang, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “An improved method for comprehensive learning particle swarm optimization,” in Proc. IEEE Symposium Series on Computational Intelligence (SSCI 2015), Cape Town, South Africa, Dec. 2015, pp. 218-225.
[12]. Zhi-Hui Zhan and J. Zhang, “Differential evolution for power electronic circuit optimization,” in Proc. Conf. Technologies and Applications of Artificial Intelligence (TAAI 2015), Tainan, Taiwan, Nov. 2015, pp. 158-163.
[13]. Z. G. Chen, Zhi-Hui Zhan(Corresponding Author), H. H. Li, K. J. Du, J. H. Zhong, Y. W. Foo, Y. Li, and J. Zhang, “Deadline constrained cloud computing resources scheduling through an ant colony system approach,” in Proc. Int. Conf. Cloud Computing Research and Innovation (ICCCRI 2015), Singapore, Oct. 2015, pp. 112-119.
[14]. Zhi-Hui Zhan, J. J. Li, and J. Zhang, “Adaptive particle swarm optimization with variable relocation for dynamic optimization problems,” in Proc. IEEE Congr. Evol. Comput. (CEC 2014), Beijing, China, Jul. 2014, pp. 1565-1570.
[15]. X. F. Liu and Zhi-Hui Zhan(Corresponding Author), “Energy aware virtual machine placement scheduling in cloud computing based on ant colony optimization approach,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2014), Vancouver, Canada, Jul., 2014, pp. 41-47.
[16]. G. W Zhang and Zhi-Hui Zhan(Corresponding Author), “A normalization group brain storm optimization for power electronic circuit optimization,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2014), Vancouver, Canada, Jul., 2014, pp. 183-184.
[17]. Zhi-Hui Zhan, G. Y. Zhang, Y. J. Gong, and J. Zhang, “Load balance aware genetic algorithm for task scheduling in cloud computing,” in Proc. Simulated Evolution And Learning (SEAL 2014), Dec. 2014, pp. 644-655.
[18]. Meng-Dan Zhang, Zhi-Hui Zhan(Corresponding Author), J. J. Li, and J. Zhang, “Tournament selection based artificial bee colony algorithm with elitist strategy,” in Proc. Conf. Technologies and Applications of Artificial Intelligence (TAAI 2014), Taiwan, Nov. 2014, pp. 387-396.
[19]. Guang-Wei Zhang, Zhi-Hui Zhan(Corresponding Author), K. J. Du, Y. Lin, W. N. Chen, J. J. Li, and J. Zhang, “Parallel particle swarm optimization using message passing interface,” in Proc. The 18th Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES 2014), Singapore, Nov. 2014, pp. 55-64.
[20]. Y. L. Li, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Differential evolution enhanced with evolution path vector,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2013), Jul. 2013, pp. 123-124.
[21]. Zhi-Hui Zhan, W. N. Chen, Y. Lin, Y. J. Gong, Y. L. Li, and J. Zhang, “Parameter investigation in brain storm optimization,” in Proc. IEEE Symposium Series on Computational Intelligence (SSCI 2013), Singapore, Apr. 2013, pp. 103-110.
[22]. Zhi-Hui Zhan, J. Zhang, Y. H. Shi, and H. L. Liu, “A modified brain storm optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC 2012), Brisbane, Australia, Jun. 2012, pp. 1-8.
[23]. Zhi-Hui Zhan and J. Zhang, “Enhance differential evolution with random walk,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2012), Philadelphia, USA, Jul. 2012, pp. 1513-1514.
[24]. Zhi-Hui Zhan, K. J. Du, J. Zhang, and J. Xiao, “Extended binary particle swarm optimization approach for disjoint set covers problem in wireless sensor networks,” in Proc. Conf. Technologies and Applications of Artificial Intelligence (TAAI 2012), Tainan, Taiwan, 2012, pp. 327-331.
[25]. Zhi-Hui Zhan and J. Zhang, “Orthogonal learning particle swarm optimization for power electronic circuit optimization with free search range,” in Proc. IEEE Congr. Evol. Comput. (CEC 2011), New Orleans, Jun. 2011, pp. 2563-2570.
[26]. Zhi-Hui Zhan and J. Zhang, “Co-evolutionary differential evolution with dynamic population size and adaptive migration strategy,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2011), Dublin, Ireland, Jul. 2011, pp. 211-212.
[27]. Zhi-Hui Zhan and J. Zhang, “Self-adaptive differential evolution based on PSO learning strategy,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2010), Portland, USA, Jul. 2010, pp. 39-46.
[28]. Zhi-Hui Zhan and J. Zhang, “A parallel particle swarm optimization approach for multiobjective optimization problems,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2010), Portland, USA, Jul. 2010, pp. 81-82.
[29]. Zhi-Hui Zhan, J. Zhang, and Z. Fan, “Solving the optimal coverage problem in wireless sensor networks using evolutionary computation algorithms,” in Proc. Simulated Evolution And Learning (SEAL 2010), LNCS 6457, pp. 166–176, 2010.
[30]. Zhi-Hui Zhan, J. Zhang, and Y. H. Shi, “Experimental study on PSO diversity,” in Proc. 3rd Int. Workshop on Advanced Computational Intelligence (IWACI 2010), Suzhou, China, Aug. 2010, pp. 310-317.
[31]. Zhi-Hui Zhan, X. L. Feng, Y. J. Gong, and J. Zhang, “Solving the flight frequency programming problem with particle swarm optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC 2009), Trondheim, Norway, May 2009, pp. 1383-1390.
[32]. Zhi-Hui Zhan, J. Zhang, and R. Z. Huang, “Particle swarm optimization with information share mechanism,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2009), Montréal, Canada, Jul. 2009, pp. 1761-1762.
[33]. Zhi-Hui Zhan and J. Zhang, “Parallel particle swarm optimization with adaptive asynchronous migration strategy,” in Proc. The 9th Int. Conf. on Algorithms and Architectures for Parallel Processing (ICA3PP), Taipei, Taiwan, Jun. 2009, pp. 490-501.
[34]. Zhi-Hui Zhan and J. Zhang, “Discrete particle swarm optimization for multiple destination routing problems,” in Proc. EvoWorkshops 2009, LNCS 5484, Apr. 2009, pp. 117-122.
[35]. Zhi-Hui Zhan, J. Xiao, J. Zhang, and W. N. Chen, “Adaptive control of acceleration coefficients for particle swarm optimization based on clustering analysis,” in Proc. IEEE Congr. Evol. Comput. (CEC 2007), Singapore, Sept. 2007, pp. 3276-3282.
Authorized Patents
[1]. J. Zhang, Zhi-Hui Zhan, and T. Huang, Multicast Approach Based on Particle Swarm Optimization, Patent No. ZL200810220650.1.
Summary of Key Publications in SCI Journals
Journal Name | 5-Year IF | 2016 IF | JCR Category | 5-Year IF Rank | Papers
IEEE Trans. Evol. Comput. | 6.897 | 5.908 | Computer Science – Theory & Method | 1/105 | 7
IEEE Trans. SMC, Part B (CYB) | 4.978 | 4.943 | Computer Science – Cybernetics | 1/22 | 6
IEEE Trans. Ind. Electron. | 5.985 | 6.383 | Automation & Control System | 1/59 | 1
IEEE Trans. Intell. Transp. Syst. | 3.155 | 2.534 | Transportation Science & Technology | 6/33 | 1
IEEE Trans. Ind. Informatics | 4.880 | 4.708 | Automation & Control System | 3/59 | 1
IEEE Trans. Paral. Distr. Syst. | 2.749 | 2.661 | Computer Science – Theory & Method | 11/105 | 1
IEEE Comput. Intell. Mag. | 3.483 | 3.647 | Computer Science – Artificial Intelligence | 18/130 | 1
ACM Computing Surveys | 6.559 | 5.243 | Computer Science – Theory & Method | 2/105 | 1
Information Sciences | 3.683 | 3.364 | Computer Science – Information Systems | 10/144 | 2
Applied Soft Computing | 3.288 | 2.857 | Computer Science – Interdiscip. Appl. | 14/104 | 1
Soft Computing | 1.732 | 1.630 | Computer Science – Artificial Intelligence | 57/130 | 1
Total | | | | | 23
Part II: Referee Letters
Derong Liu Professor
Phone (312) 355-4475
Fax (312) 996-6465
[email protected] www.ece.uic.edu
March 20, 2017
Re: IEEE CIS Outstanding Ph.D. Dissertation Award
Dear Chair and Members of the IEEE CIS Award Committee:
I am glad to write this letter to support Dr. Zhi-Hui Zhan for the 2017 IEEE CIS Outstanding Ph.D.
Dissertation Award. The dissertation “Research into Machine Learning Aided Particle Swarm Optimization and
Its Engineering Application” addresses the challenges of introducing machine learning (ML) techniques into
evolutionary computation (EC) algorithms, especially particle swarm optimization (PSO), to enhance their
performance. The combination of ML and EC is an interesting topic and a new idea for EC algorithm
design. This is a nice dissertation that contributes to PSO development, mainly by addressing the following three
challenges:
1) To address the challenge that parameters are sensitive to different problems in PSO, this dissertation
designs an adaptive PSO (APSO) to dynamically adjust parameters based on statistical analysis
methods in ML. This work makes PSO less sensitive to the parameter settings and more efficient in a
wider range of applications;
2) To address the challenge in the global search capacity of PSO, this dissertation proposes an orthogonal
learning PSO (OLPSO) based on orthogonal experimental design and orthogonal prediction techniques
to construct a promising exemplar to guide the swarm evolution. This work enhances the global search
capacity and speeds up the algorithm convergence;
3) To address the fitness assignment problem in multiobjective optimization, this dissertation develops a
co-evolutionary multi-swarm PSO, inspired by ensemble learning in ML, to obtain well-distributed
solutions along the Pareto front.
In addition to general algorithm design studies, this dissertation also extends the ML aided PSO to
engineering application problems. The successful application to power electronic circuit optimization
demonstrates the efficiency and effectiveness of ML aided PSO.
The main content of the dissertation has been published in the 13 most related papers, including 9 papers in
IEEE Transactions, 1 paper in the IEEE Computational Intelligence Magazine, 1 paper in Information Sciences, and 2 papers in
the top conferences CEC and GECCO. The significance of the research is therefore evident from these
high-impact journal and conference publications produced from this PhD dissertation. Moreover,
3 of these papers are ESI Highly Cited Papers, such as the APSO paper in the IEEE Transactions on Systems, Man, and
Cybernetics - Part B and the OLPSO paper in the IEEE Transactions on Evolutionary Computation.
Significantly, based on the research work in this dissertation, Dr. Zhan has published 7 high
quality papers in TEVC in the last 5 years (since 2011), and is listed as one of its top authors by
number of papers. It impresses me greatly that a young scholar can achieve so much
during his Ph.D. study and in his early career after the Ph.D. degree. Owing to this excellent work, this Ph.D.
dissertation received the China Computer Federation (CCF) Outstanding Dissertation Award in 2014,
which is evaluated every year and covers the Ph.D. dissertations of the preceding 2 years from all over China in all fields
related to computer science, such as artificial intelligence, networks, databases, security, vision, software
engineering, and architecture.
In view of the above exceptional excellence and contributions of Dr. Zhi-Hui Zhan’s dissertation, I
strongly support the nomination of Dr. Zhi-Hui Zhan for the IEEE CIS Outstanding PhD Dissertation Award.
Sincerely, Derong Liu Professor [email protected] http://www.ece.uic.edu/~derong/
CENTRO DE INVESTIGACION Y DE ESTUDIOS AVANZADOS DEL I.P.N.
Av. Instituto Politécnico Nacional # 2508 Col. San Pedro Zacatenco México, D.F. C.P. 07360 Tel. 50-61-38-00 Fax: 57-47-38-02
Mexico City, Mexico, April 11th, 2017

To whom it may concern:

This letter is to express my support for the nomination of the PhD thesis entitled “Research into Machine Learning Aided Particle Swarm Optimization and Its Engineering Application”, written by Dr. Zhi-Hui Zhan, for the IEEE CIS Outstanding Ph.D. Dissertation Award. The PhD thesis of Dr. Zhan deals with a combination of machine learning techniques and evolutionary computation algorithms. This thesis contains 4 main contributions:

1) It proposes an adaptive particle swarm optimization (APSO) algorithm which uses statistical analysis to estimate the current state of the search during its execution. This scheme is used to control the parameters of APSO in an automated way when applying it to different types of problems, thus significantly improving the efficiency and generality of this particle swarm optimizer (PSO). The main paper derived from this work was published in the IEEE Transactions on Systems, Man and Cybernetics Part B. This paper is the second most highly cited in this journal in the last 10 years.

2) It proposes an orthogonal learning PSO (OLPSO), which uses orthogonal experimental design (OED) to discover useful search information in the currently available solutions, in order to enhance the global search ability of the algorithm. Using orthogonal prediction techniques, the author proposes a novel orthogonal learning strategy for the particles to construct a promising sample that is used to properly guide the search. As a result, OLPSO can attain fast convergence while having a strong global search ability. The main paper derived from this work was published in the IEEE Transactions on Evolutionary Computation. It is worth noting that this paper has obtained 351 citations in Google Scholar (200 in the ISI Web of Science).

3) It proposes a co-evolutionary multi-swarm PSO (CMPSO). This thesis proposes a novel optimization framework that adopts multiple populations for multiple objectives. This proposal is based on the idea of having multiple learners. Within this framework, CMPSO avoids the fitness assignment problem that arises when considering all the objectives together, while also producing a better search in different areas of the Pareto front by using the guidance of each objective. The main paper derived from this work was published in the IEEE Transactions on Cybernetics.

4) OLPSO is applied to the design of power electronic circuits. In this application, a novel contribution is that the author introduces a free search range for the electronic components. The main paper derived from this work was published in the IEEE Congress on Evolutionary Computation.
It is worth emphasizing that a total of 13 papers were derived from this PhD thesis: 3 in the IEEE Transactions on Evolutionary Computation, 4 in the IEEE Transactions on Systems, Man, and Cybernetics Part B (and the IEEE Transactions on Cybernetics), 1 in the IEEE Transactions on Industrial Electronics, 1 in the IEEE Transactions on Parallel and Distributed Systems, 1 in the IEEE Computational Intelligence Magazine, 1 in Information Sciences, 1 in the IEEE Congress on Evolutionary Computation, and 1 in the Genetic and Evolutionary Computation Conference. Based on the above, and considering that I am convinced that this PhD thesis makes a valuable contribution to the computational intelligence field, I strongly support it for the IEEE CIS Outstanding Ph.D. Dissertation Award. Please do not hesitate to contact me in case you have any questions about this matter. Sincerely,
Dr. Carlos A. Coello Coello IEEE Fellow Professor with Distinction (Investigador Cinvestav 3F) Computer Science Department CINVESTAV-IPN Av. IPN No. 2508 Col. San Pedro Zacatenco México, D.F. 07360, México Tel. +52 (55) 5747 3800 x 6564 Fax +52 (55) 5747 3757 email: [email protected] URL: http://delta.cs.cinvestav.mx/~ccoello
City University of Hong Kong
www.cityu.edu.hk
Tat Chee Avenue, Kowloon, Hong Kong
T (852) 3442 8580 F (852) 3442 0503
E [email protected]
Computer Science
Reference Letter to Support Dr. Zhi-Hui Zhan for IEEE CIS Outstanding
Ph. D. Dissertation Award
April 11, 2017
To the Award Committee:
I am writing this letter to support the Ph.D. dissertation by Dr. Zhi-Hui Zhan on
"Research into Machine Learning Aided Particle Swarm Optimization and Its
Engineering Application" for the 2017 IEEE CIS Outstanding Ph.D. Dissertation
Award. This is a nice dissertation with both general purpose algorithm development
studies and engineering application studies. More specifically, three new machine
learning (ML) aided PSO algorithms have been proposed in this dissertation: an
adaptive PSO algorithm aided by the statistical methods in ML, an orthogonal
learning PSO algorithm aided by the orthogonal experimental design technique in
ML, and a co-evolutionary multi-swarm PSO algorithm inspired by the ensemble
learning technique in ML. By studying these ML aided PSO variants and their
applications, this dissertation achieves a nice combination of evolutionary
computation (EC) and ML, which are two significant fields in computer science.
The main content of the dissertation has been published in the 13 most related papers,
including 10 IEEE Transactions papers, 4 of which are ESI Highly Cited Papers,
as follows:
[1]. Zhi-Hui Zhan, J. Zhang, Y. Li, and H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics - Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009. [Related to Chapter 2: on the adaptive PSO]
Google Scholar Citation 1028 times, SCI Citation 511 times
[2]. Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. H. Shi, "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011. [Related to Chapter 3: on the orthogonal learning PSO]
Google Scholar Citation 351 times, SCI Citation 196 times
[3]. Zhi-Hui Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. H. Shi, "Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, Apr. 2013. [Related to Chapter 4: Propose the co-evolutionary multiswarm PSO for MOP]
Google Scholar Citation 98 times, SCI Citation 58 times
[4]. Y. H. Li, Zhi-Hui Zhan (Corresponding Author), S. Lin, J. Zhang, and X. N. Luo, "Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems," Information Sciences, vol. 293, no. 1, pp. 370-382, 2015. [Related to Chapter 3: Extension of the orthogonal learning strategy to competitive and cooperative strategy]
Google Scholar Citation 47 times, SCI Citation 27 times
The rich output of this dissertation impresses me a lot. Moreover, the high
citations of these papers in both the ISI Web of Science and Google Scholar clearly show
their high impact and their large contributions to the field of EC. The adaptive PSO
work [1], published in the IEEE TSMCB (the journal that I am currently serving as EiC),
has even reached over 1000 citations in Google Scholar and over 500 citations in ISI.
Significantly, it is the second highest cited paper among all the papers published in
TSMCB and TCYB during the last 10 years (since 2006). Moreover, the
orthogonal learning PSO work [2] is the third highest cited paper among all the papers
published during the last 5 years (since 2011) in TEVC, which is the
leading journal in our CIS community.
Based on the above mentioned contributions of Dr. Zhi-Hui Zhan's dissertation,
from both the aspects of the large number of published IEEE Transactions papers and
the high citation counts of the papers, I strongly recommend his Ph.D. dissertation for the 2017
IEEE CIS Outstanding Ph.D. Dissertation Award.
Sincerely,
Jun Wang, PhD
Chair Professor of Computational Intelligence
Reference Letter for 2017 IEEE CIS Outstanding Ph. D. Dissertation
Award Application
To: IEEE CIS Outstanding PhD Dissertation Award Committee:
I am writing to provide a reference letter to support Dr. Zhi-Hui Zhan for the IEEE CIS Outstanding PhD Dissertation Award, for his PhD dissertation “Research into Machine Learning Aided Particle Swarm Optimization and Its Engineering Application”.
His dissertation relates to both the evolutionary computation (EC) field and the machine learning (ML) field. The population-based search and iterative evolution of EC algorithms generate a mass of search data during the evolutionary process. This makes it possible and promising to introduce ML techniques into EC algorithms to enhance algorithm performance. Therefore, this dissertation focuses on this direction and makes significant contributions to ML aided EC research and applications.
Specifically, this dissertation proposes three novel particle swarm optimization (PSO) variants aided by different ML techniques. First, it designs an adaptive PSO (APSO) based on the statistical analysis technique in ML to relieve the parameter sensitivity of PSO. Second, it proposes an orthogonal learning PSO (OLPSO) that uses the orthogonal experimental design technique in ML to enhance rapid global search capability. Third, it develops a co-evolutionary multi-swarm PSO (CMPSO) that introduces the ensemble learning technique in ML to enhance the search efficiency of PSO on multi-objective problems. The proposed algorithms demonstrate that ML aided PSO variants can significantly enhance PSO's ability to handle optimization problems. Furthermore, this dissertation extends the application of OLPSO to the challenging power electronic circuit design problem and demonstrates the effectiveness and efficiency of OLPSO in engineering applications.
The novelty and significance of this dissertation have been demonstrated by 13 key publications in reputable journals and conference proceedings generated by the research reported in the dissertation. Among these 13 papers, 7 are published in the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics (previously SMC Part B), both of which are reputable journals in CIS with very low acceptance rates.
Not only is the quantity impressive; the quality of the papers is also very impressive. These papers have been cited more than 1500 times in Google Scholar and about 1000 times in SCI. Moreover, 4 of them are ESI Highly Cited Papers. The OLPSO paper was even an ESI Hot Paper in Feb. 2014, being one of
only 62 ESI Hot Papers in the Computer Science field worldwide. Significantly, the APSO, OLPSO, and CMPSO research topics, related to Chapters 2, 3, and 4, respectively, of this dissertation, are listed as ESI research fronts, showing their broad impact in the EC and related communities.
The outcomes of Dr. Zhi-Hui Zhan's dissertation are absolutely impressive, not only for the large number of related high quality papers published in reputable CIS journals and conferences, but also for the significant impact of these works in the EC field and even across different fields in computer science.
Based on the above, I strongly believe that this dissertation is of more than sufficient quality for the IEEE CIS Outstanding PhD Dissertation Award. Therefore, I give my strongest recommendation in support of this dissertation for this award.
Best regards,
C. L. Philip Chen, Ph.D., FIEEE, FAAAS
EiC, IEEE Transactions on Systems, Man, and Cybernetics: Systems
Dean, Faculty of Science and Technology, University of Macau
Reference Letter for IEEE CIS Outstanding Ph. D. Dissertation Award
To the CIS Award Committee:
It is my pleasure to recommend Dr. Zhi-Hui Zhan for the IEEE CIS Outstanding Ph.D. Dissertation Award. The title of the dissertation is "Research into Machine Learning Aided Particle Swarm Optimization and Its Engineering Application", which relates to using machine learning (ML) techniques to enhance the search ability of evolutionary computation (EC) algorithms, e.g., particle swarm optimization (PSO). This dissertation has conducted systematic research on this cross field. Specifically, the dissertation proposes 3 novel ML aided PSO variants in Chapters 2, 3, and 4 respectively, and extends the algorithms to engineering applications.
Firstly, an adaptive PSO (APSO) is proposed in Chapter 2, which uses ML statistical methods to analyze the population distribution information so as to adaptively control the PSO parameters and operators. The key publication of this chapter is:
[1]. Zhi-Hui Zhan (First Author), “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics--Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009.
ESI Highly Cited Paper, ESI Research Front
Google Scholar Citation 1026 times, SCI Citation 511 times
The Top 2 cited paper of this journal in recent 10 years, since 2006
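As a hedged illustration of the adaptive mechanism described above: the sketch below implements the distance-based evolutionary factor and the sigmoid inertia-weight mapping reported in the APSO paper, with the surrounding PSO loop omitted and the helper names being my own:

```python
import numpy as np

def evolutionary_factor(positions, gbest_idx):
    """Evolutionary factor f in [0, 1]: compares the mean distance of the
    globally best particle to the min/max mean distances in the swarm."""
    n = len(positions)
    d = np.array([np.mean([np.linalg.norm(positions[i] - positions[j])
                           for j in range(n) if j != i]) for i in range(n)])
    return (d[gbest_idx] - d.min()) / (d.max() - d.min() + 1e-12)

def adaptive_inertia(f):
    """Sigmoid mapping w(f) = 1 / (1 + 1.5 e^(-2.6 f)), roughly [0.4, 0.9]:
    a small f (convergence state) gives a small w; a large f (exploration
    state) gives a large w."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))
```

In each generation, APSO computes f from the current swarm distribution and sets the inertia weight accordingly (the fuzzy classification of f that also tunes the acceleration coefficients is not sketched here).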
Secondly, an orthogonal learning PSO (OLPSO) is proposed in Chapter 3, which uses the orthogonal experimental design (OED) method from the ML field to construct more promising learning guidance for the particles, so as to enhance the global search ability. The key publications of this chapter are:
[2]. Zhi-Hui Zhan (First Author), "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011.
ESI Hot Paper, ESI Highly Cited Paper, ESI Research Front
Google Scholar Citation 351 times, SCI Citation 196 times
The Top 3 cited paper of this journal in recent 5 years, since 2011
[3]. Zhi-Hui Zhan (Corresponding Author), “Competitive and cooperative particle swarm optimization with
information sharing mechanism for global optimization problems,” Information Sciences, vol. 293, no. 1, pp. 370-382, 2015.
ESI Highly Cited Paper, ESI Research Front
Google Scholar Citation 47 times, SCI Citation 27 times
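To illustrate how OED can combine two guidance sources, the toy sketch below uses an L4(2^3) orthogonal array to mix pbest and gbest per dimension, evaluates the trial combinations, and predicts the best level per factor by factor analysis. This is a minimal three-factor sketch with invented helper names, not the exact OLPSO implementation:

```python
import numpy as np

# L4(2^3) orthogonal array: 4 trial rows for 3 two-level factors
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def orthogonal_exemplar(pbest, gbest, f):
    """Build a guidance exemplar by OED: level 0 of factor d takes pbest[d],
    level 1 takes gbest[d]; factor analysis picks the better level per
    dimension (minimisation assumed)."""
    sources = np.stack([pbest, gbest])
    trials = np.array([[sources[lvl][d] for d, lvl in enumerate(row)]
                       for row in L4])
    fit = np.array([f(t) for t in trials])
    # factor analysis: mean fitness of each level of each factor
    best_levels = [0 if fit[L4[:, d] == 0].mean() <= fit[L4[:, d] == 1].mean() else 1
                   for d in range(L4.shape[1])]
    exemplar = np.array([sources[lvl][d] for d, lvl in enumerate(best_levels)])
    # keep the better of the predicted exemplar and the best tested trial
    return exemplar if f(exemplar) <= fit.min() else trials[fit.argmin()]
```

For example, on the sphere function with pbest = [0, 3, 0] and gbest = [3, 0, 3], the predicted exemplar combines the good dimensions of both sources and lands at the optimum [0, 0, 0], even though neither source is optimal by itself.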
Thirdly, a co-evolutionary multi-swarm PSO (CMPSO) is proposed in Chapter 4, which designs a novel optimization framework for multi-objective optimization problems (MOPs). The new framework, "multiple populations for multiple objectives (MPMO)", is inspired by the idea of ensemble learning in the ML field. The key publication of this chapter is:
[4]. Zhi-Hui Zhan (First Author), “Multiple populations for multiple objectives: A coevolutionary technique
for solving multiobjective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, Apr. 2013.
ESI Highly Cited Paper, ESI Research Front
Google Scholar Citation 98 times, SCI Citation 58 times
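The MPMO idea (one swarm per objective, plus a shared archive of non-dominated solutions) can be sketched as follows. For brevity this toy version replaces the PSO velocity update with a simple mutate-and-accept step and uses invented names; it illustrates the framework only, not the actual CMPSO:

```python
import random

def dominates(u, v):
    """Pareto dominance on objective tuples (minimisation)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, x, objs):
    """Keep the archive mutually non-dominated after offering candidate x."""
    if x in archive:
        return archive
    fx = tuple(f(x) for f in objs)
    if any(dominates(tuple(f(a) for f in objs), fx) for a in archive):
        return archive
    return [a for a in archive if not dominates(fx, tuple(f(a) for f in objs))] + [x]

def mpmo(objs, iters=200, swarm_size=10, seed=1):
    """Multiple populations for multiple objectives: swarm k is driven only
    by objective k, while every candidate feeds the shared Pareto archive."""
    rng = random.Random(seed)
    swarms = [[rng.uniform(-5.0, 5.0) for _ in range(swarm_size)] for _ in objs]
    archive = []
    for _ in range(iters):
        for swarm, f in zip(swarms, objs):
            for i, x in enumerate(swarm):
                cand = x + rng.gauss(0.0, 0.3)   # stand-in for the PSO update
                if f(cand) < f(x):               # single-objective acceptance
                    swarm[i] = cand
                archive = update_archive(archive, swarm[i], objs)
    return archive
```

On a bi-objective toy problem such as f1(x) = x^2 and f2(x) = (x - 2)^2, the archive approximates the Pareto set (x in [0, 2]) without any fitness assignment over the combined objectives, which is the point of the framework.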
It impresses me greatly that this dissertation has produced so many high quality papers. The key papers on all the PSO variants have attracted great attention from the research community. All the above 3 research topics are listed as ESI Research Fronts, and all these 4 papers are ESI Highly Cited Papers.
Including the above, this dissertation has produced the 13 most related papers, 9 of which are published in IEEE Transactions, including the leading journals in our CIS community, e.g., the IEEE TEVC and IEEE TCYB (SMCB).
Based on the above-mentioned exceptional excellence of Dr. Zhi-Hui Zhan’s dissertation, I strongly support his Ph. D. dissertation for the 2017 IEEE CIS Outstanding Ph. D. Dissertation Award.
Sincerely yours,
Chin-Teng (CT) Lin, IEEE Fellow
Distinguished Professor, UTS
Part III: Ph. D. Dissertation
Research into Machine Learning Aided Particle Swarm
Optimization and Its Engineering Application
Major: Computer Science
Doctorate Applicant: Zhi-Hui Zhan Supervisor: Prof. Jun Zhang
Supervisory Committee Members:
Chair: Prof. Guo-Qiang Hang, South China University of Technology, China
Members: Prof. Yao-Chu Jin, University of Surrey, UK
Prof. Yong Tang, South China Normal University, China
Prof. Xiao-Nan Luo, Sun Yat-sen University, China
Prof. Xiao-La Lin, Sun Yat-sen University, China
Acknowledgments
I would like to take this opportunity to express my sincere gratitude to those who have helped
and supported me during my pursuit of the Ph.D. degree.
First of all, I would like to thank my supervisor Prof. Jun Zhang. He led me into
the academic world of evolutionary computation and particle swarm optimization. During my
Ph.D. study, he always inspired me and guided me to conduct deep research that keeps pace
with international developments. Without his encouragement and supervision, all
this work would not have been possible.
I would also like to thank Prof. Henry Chung, Prof. Yun Li, Prof. Y. H. Shi, and Prof. K.
C. Tan. They are all senior scholars in the academic community and have given me great help. It
is my great luck and honour that I could learn so much from them during their visits to our lab.
They showed great patience in discussing the problems that I faced during my research, and
provided lots of creative and valuable comments and suggestions on my research work.
Thanks also to my friends in the lab, especially Jing-Hui Zhong, Xiao-Min Hu,
Wei-Neng Chen, Ying Lin, Wei-Jie Yu, Yuan-Long Li, and Yue-Jiao Gong. They have
worked with me day and night, sharing both happiness and tears. Moreover, the
discussions with them have greatly enhanced the quality of my research work.
Last, but most importantly, I wish to thank my parents and my wife for their
unconditional love, support, and encouragement.
Declaration of Thesis Originality
I solemnly declare: the submitted thesis presents the achievements of my independent
research work under the guidance of my supervisor. Except for the references
annotated in context, this thesis does not contain achievements that any other person
or team has published or written. The persons or teams who made important
contributions to my research work have already been acknowledged in the thesis. I am fully
aware that I alone will bear the legal consequences of this statement.
Signature (Author): Zhi-Hui Zhan
Date: 18 May, 2013
Permission to Quote Copyrighted Material
I fully understand the rules on dissertation retention and use at Sun Yat-sen
University: the school has the right to retain and submit my dissertation in electronic or
printed format to the national competent authority or its designated agency; the school has
the right to make a few copies of my dissertation for non-profit purposes and to allow my
dissertation to be accessed in the school's library and college reference rooms; the school has the
right to index my dissertation in related databases for retrieval, and to save my dissertation
by photocopying, reduced-format copying, or other methods.

Signature (Author): Zhi-Hui Zhan    Signature (Supervisor): Jun Zhang
Date: 18 May, 2013    Date: 18 May, 2013
Brief CV & Publications
Work and Education
01/2016 – Now: Professor, School of Computer Science and Engineering, South China University of Technology
01/2015 – 12/2015: Associate Professor, School of Advanced Computing, Sun Yat-sen University
07/2013 – 12/2014: Lecturer, School of Information Science and Technology, Sun Yat-sen University
09/2009 – 06/2013: Ph.D., Computer Application Technology, Sun Yat-sen University
09/2003 – 07/2007: Bachelor of Science, Computer Science and Technology, Sun Yat-sen University
Awards and Honors
2016, Pearl River Scholar Young Professor
2015, Elsevier Most Cited Chinese Researchers in Computer Science
2014, Elsevier Most Cited Chinese Researchers in Computer Science
2014, Natural Science Found for Distinguished Young Scholars, GD, China
2015, Pearl River New Star in Science and Technology
2014, Guangdong Province Outstanding Dissertation Award
2013, China Computer Federation (CCF) Outstanding Dissertation Award
Research Interests
Computational Intelligence
Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Genetic Algorithm (GA), Differential Evolution (DE), Brain Storm Optimization (BSO)
Cloud Computing and Big Data
Large Scale Resources Scheduling and Management, Multiobjective Optimization, Dynamic Optimization, Multimodal Optimization
Intelligent Application
Wireless Sensor Network, Scheduling and Control, Intelligent Systems
Publications Related to the Thesis
Publications Related to Chapter 2
[1]. Zhi-Hui Zhan, J. Zhang, Y. Li, and H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics--Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009. [Related to Chapter 2: Propose the adaptive PSO]
ESI Highly Cited Paper
Google Scholar Citation 990 times, SCI Citation 502 times
The Top 3 cited paper of this journal in recent 10 years, since 2006
[2]. Y. L. Li, Zhi-Hui Zhan (Corresponding Author), Y. J. Gong, W. N. Chen, J. Zhang, and Y. Li, “Differential evolution with an evolution path: A DEEP evolutionary algorithm,” IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1798-1810, Sept. 2015. [Related to Chapter 2: Extend the adaptive idea to DE]
Google Scholar Citation 30 times, SCI Citation 12 times
Publications Related to Chapter 3
[3]. Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011. [Related to Chapter 3: Propose the orthogonal learning PSO]
ESI Hot Paper, ESI Highly Cited Paper
Google Scholar Citation 335 times, SCI Citation 196 times
The Top 3 cited paper of this journal in recent 5 years, since 2011
[4]. Y. H. Li, Zhi-Hui Zhan (Corresponding Author), S. Lin, J. Zhang, and X. N. Luo, “Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems,” Information Sciences, vol. 293, no. 1, pp. 370-382, 2015. [Related to Chapter 3: Extend the orthogonal learning strategy to competitive and cooperative strategy]
ESI Highly Cited Paper
Google Scholar Citation 46 times, SCI Citation 27 times
Publications Related to Chapter 4
[5]. Zhi-Hui Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. H. Shi, “Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, Apr. 2013. [Related to Chapter 4: Propose the co-evolutionary multiswarm PSO for MOP]
Google Scholar Citation 86 times, SCI Citation 58 times
[6]. Y. L. Li, Y. R. Zhou, Zhi-Hui Zhan (Corresponding Author), and J. Zhang, “A primary theoretical study on decomposition-based multiobjective evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 4, pp. 563-576, Aug. 2016. [Related to Chapter 4: Theoretical study on multiobjective evolutionary algorithms]
[7]. H. H. Li, Z. G. Chen, Zhi-Hui Zhan (Corresponding Author), K. J. Du, and J. Zhang, “Renumber coevolutionary multiswarm particle swarm optimization for multi-objective workflow scheduling on cloud computing environment,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2015), Madrid, Spain, Jul. 2015, pp. 1419-1420. [Related to Chapter 4: Apply the CMPSO algorithm to cloud computing resources scheduling]
Publications Related to Chapter 5
[8]. Zhi-Hui Zhan and J. Zhang, “Orthogonal learning particle swarm optimization for power electronic circuit optimization with free search range,” in Proc. IEEE Congr. Evol. Comput. (CEC 2011), New Orleans, Jun. 2011, pp. 2563-2570. [Related to Chapter 5: Propose to use OLPSO to solve PEC]
[9]. M. Shen, Zhi-Hui Zhan (Corresponding Author), W. N. Chen, Y. J. Gong, J. Zhang, and Y. Li, “Bi-velocity discrete particle swarm optimization and its application to multicast routing problem in communication networks,” IEEE Transactions on Industrial Electronics, vol. 61, no. 12, pp. 7141-7151, Dec. 2014. [Related to Chapter 5: Engineering application of PSO]
Google Scholar Citation 43 times, SCI Citation 27 times
[10]. X. F. Liu, Zhi-Hui Zhan (Corresponding Author), and J. Zhang, “An energy efficient ant colony system for virtual machine placement in cloud computing,” IEEE Transactions on Evolutionary Computation, DOI: 10.1109/TEVC.2016.2623803. 2016. [Related to Chapter 5: Engineering application]
[11]. Y. L. Li, Zhi-Hui Zhan (Corresponding Author), Y. J. Gong, J. Zhang, Y. Li, and Q. Li, “Fast micro-differential evolution for topological active net optimization,” IEEE Transactions on Cybernetics, vol. 46, no. 6, pp. 1411-1423, Jun. 2016. [Related to Chapter 5: Engineering application]
Publications Related to Chapters 1 & 6
[12]. J. Zhang (Supervisor), Zhi-Hui Zhan, Y. Lin, N. Chen, Y. J. Gong, J. H. Zhong, H. S. H. Chung, Y. Li, and Y. H. Shi, “Evolutionary computation meets machine learning: A survey,” IEEE Computational Intelligence Magazine, vol. 6, no. 4, pp. 68-75, Nov. 2011. [Related to Chapters 1&6: Survey of EC&ML]
Google Scholar Citation 91 times, SCI Citation 57 times
[13]. Zhi-Hui Zhan, X. Liu, H. Zhang, Z. Yu, J. Weng, Y. Li, T. Gu, and J. Zhang, “Cloudde: A heterogeneous differential evolution algorithm and its distributed cloud version,” IEEE Transactions on Parallel and Distributed Systems, DOI: 10.1109/TPDS.2016.2597826, 2016. [Related to Chapter 6: Future work]
Author’s Key Publications
Monograph
[1]. J. Zhang, Zhi-Hui Zhan, W. N. Chen, J. H. Zhong, N. Chen, Y. J. Gong, R. T. Xu, and Z. Guan, Computational Intelligence, Tsinghua University Press, November 2011.
[2]. J. Zhang, W. N. Chen, X. M. Hu, Y. Lin, W. L. Zhong, Zhi-Hui Zhan, and T. Huang, Numerical Computing, Tsinghua University Press, July 2008.
ESI Hot Paper
[1]. Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. H. Shi, “Orthogonal learning particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832-847, Dec. 2011. (IF=5.908; Citation: Google Scholar 335 times, SCI 196 times; among the top 3 cited papers of this journal in the recent 5 years, since 2011)
ESI Highly Cited Paper
[2]. Zhi-Hui Zhan, J. Zhang, Y. Li, and H. Chung, “Adaptive particle swarm optimization,” IEEE Transactions on Systems, Man, and Cybernetics--Part B, vol. 39, no. 6, pp. 1362-1381, Dec. 2009. (IF=4.943; Citation: Google Scholar 990 times, SCI 502 times; among the top 3 cited papers of this journal in the recent 10 years, since 2006)
[3]. Y. H. Li, Zhi-Hui Zhan(Corresponding Author), S. Lin, J. Zhang, and X. N. Luo, “Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems,” Information Sciences, vol. 293, no. 1, pp. 370-382, 2015. (IF=3.364; Citation: Google Scholar 26 times)
[4]. W. Chen, J. Zhang, Y. Lin, N. Chen, Zhi-Hui Zhan, H. Chung, Y. Li, and Y. H. Shi, “Particle swarm optimization with an aging leader and challengers,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, Apr. 2013. (IF=5.908; Citation: Google Scholar 130 times, SCI 41 times)
Other Journal Papers
[5]. X. F. Liu, Zhi-Hui Zhan(Corresponding Author), D. Deng, Y. Li, T. L. Gu, and J. Zhang, “An energy efficient ant colony system for virtual machine placement in cloud computing,” IEEE Transactions on Evolutionary Computation, DOI: 10.1109/TEVC.2016.2623803. 2016. (IF=5.908)
[6]. Y. Li, Y. Zhou, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “A primary theoretical study on decomposition-based multiobjective evolutionary algorithms,” IEEE Trans. on Evolutionary Computation, vol. 20, no. 4, pp. 563-576, Aug. 2016. (IF=5.908)
[7]. Q. Lin, J. Chen, Zhi-Hui Zhan, W. Chen, C. Coello Coello, Y. Yin, C. Lim, and J. Zhang, “A hybrid evolutionary immune algorithm for multiobjective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 711-729, Oct. 2016. (IF=5.908)
[8]. X. Zhang, J. Zhang, Y. Gong, Zhi-Hui Zhan, W. Chen, and Y. Li, “Kuhn-munkres parallel genetic algorithm for the set cover problem and its application to large-scale wireless sensor networks,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 695-710, Oct. 2016. (IF=5.908)
[9]. Y. J. Gong, J. Zhang, H. Chung, W. N. Chen, Zhi-Hui Zhan, Y. Li, and Y. H. Shi, “An efficient resource allocation scheme using particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, no. 6, pp. 801-816, Dec. 2012. (IF=5.908; Citation: Google Scholar 34 times, SCI 16 times)
[10]. Zhi-Hui Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. H. Shi, “Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems,” IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, Apr. 2013. (IF=4.943; Citation: Google Scholar 68 times, SCI 32 times)
[11]. Y. L. Li, Zhi-Hui Zhan(Corresponding Author), Y. J. Gong, J. Zhang, Y. Li, and Q. Li, “Fast micro-differential evolution for topological active net optimization,” IEEE Transactions on Cybernetics, vol. 46, no. 6, pp. 1411-1423, Jun. 2016. (IF=4.943; Citation: Google Scholar 3 times)
[12]. Y. L. Li, Zhi-Hui Zhan (Corresponding Author), Y. J. Gong, W. N. Chen, J. Zhang, and Y. Li, “Differential evolution with an evolution path: A DEEP evolutionary algorithm,” IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1798-1810, Sept. 2015. (IF=4.943; Citation: Google Scholar 13 times, SCI 1 time)
[13]. N. Chen, W. N. Chen, Y. J. Gong, Zhi-Hui Zhan, J. Zhang, Y. Li, and Y. S. Tan, “An evolutionary algorithm with double-level archives for multiobjective optimization,” IEEE Transactions on Cybernetics, vol. 45, no. 9, pp. 1851-1863, Sept. 2015. (IF=4.943; Citation: Google Scholar 8 times, SCI 1 time)
[14]. W. J. Yu, M. Shen, W. N. Chen, Zhi-Hui Zhan, Y. J. Gong, Y. Lin, O. Liu, and J. Zhang, “Differential evolution with two-level parameter adaptation,” IEEE Transactions on Cybernetics, vol. 44, no. 7, pp. 1080-1099, Jul. 2014. (IF=4.943; Citation: Google Scholar 31 times, SCI 12 times)
[15]. Zhi-Hui Zhan, X. Liu, H. Zhang, Z. Yu, J. Weng, Y. Li, T. Gu, and J. Zhang, “Cloudde: A heterogeneous differential evolution algorithm and its distributed cloud version,” IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 3, pp. 704-716, Mar. 2017. (IF=2.661)
[16]. Zhi-Hui Zhan, J. Zhang, Y. Li, O. Liu, S. K. Kwok, W. H. Ip, and O. Kaynak, “An efficient ant colony system based on receding horizon control for the aircraft arrival sequencing and scheduling problem,” IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 2, pp. 399-412, Jun. 2010. (IF=2.534; Citation: Google Scholar 78 times, SCI 33 times)
[17]. M. Shen, Zhi-Hui Zhan(Corresponding Author), W. N. Chen, Y. J. Gong, J. Zhang, and Y. Li, “Bi-velocity discrete particle swarm optimization and its application to multicast routing problem in communication networks,” IEEE Transactions on Industrial Electronics, vol. 61, no. 12, pp. 7141-7151, Dec. 2014. (IF=6.383; Citation: Google Scholar 33 times, SCI 11 times)
[18]. Y. J. Gong, M. Shen, J. Zhang, O. Kaynak, W. N. Chen, and Zhi-Hui Zhan, “Optimizing RFID network planning by using a particle swarm optimization algorithm with redundant reader elimination,” IEEE Transactions on Industrial Informatics, vol. 8, no. 4, pp. 900-912, Nov. 2012. (IF=4.708; Citation: Google Scholar 47 times, SCI 27 times)
[19]. Zhi-Hui Zhan, X. F. Liu, Y. J. Gong, J. Zhang, H. S. H. Chung, and Y. Li, “Cloud computing resource scheduling and a survey of its evolutionary approaches,” ACM Computing Surveys, vol. 47, no. 4, Article 63, pp. 1-33, Jul. 2015. (IF=5.243; Citation: Google Scholar 10 times, SCI 1 times)
[20]. Q. Liu, W. Wei, H. Yuan, Zhi-Hui Zhan(Corresponding Author), and Y. Li, “Topology selection for particle swarm optimization,” Information Sciences, vol. 363, no. 1, pp. 154-173, Oct. 2016. (IF=3.364)
[21]. J. Zhang, Zhi-Hui Zhan, Y. Lin, N. Chen, Y. J. Gong, J. H. Zhong, H. S. H. Chung, Y. Li, and Y. H. Shi, “Evolutionary computation meets machine learning: A survey,” IEEE Computational Intelligence Magazine, vol. 6, no. 4, pp. 68-75, Nov. 2011. (IF=3.647; Citation: Google Scholar 77 times, SCI 41 times)
[22]. Y. Gong, W. Chen, Zhi-Hui Zhan, J. Zhang, Y. Li, Q. Zhang, and J. Li, “Distributed evolutionary algorithms and their models: A survey of the state-of-the-art,” Applied Soft Computing, vol. 34, pp. 286-300, Sept. 2015. (IF=2.857; Citation: Google Scholar 10 times, SCI 2 times)
[23]. W. Yu, Zhi-Hui Zhan, and J. Zhang, “Artificial bee colony algorithm with an adaptive greedy position update strategy,” Soft Computing, DOI:10.1007/s00500-016-2334-4. 2016. (IF=1.630)
Selected Conference Papers (First/Corresponding Author, most are CEC/GECCO/SSCI papers)
[1]. Zhi-Hui Zhan, Z. J. Wang, Y. Lin, and J. Zhang, “Adaptive radius species-based particle swarm optimization for multimodal optimization problems,” in Proc. IEEE Congr. Evol. Comput. (CEC 2016), Vancouver, Canada, Jul. 2016, pp. 2043-2048.
[2]. Z. J. Wang, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Orthogonal learning particle swarm optimization with variable relocation for dynamic optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC 2016), Vancouver, Canada, Jul. 2016, pp. 594-600.
[3]. X. F. Liu, Zhi-Hui Zhan(Corresponding Author), J. H. Lin, and J. Zhang, “Parallel differential evolution on distributed computational resources for power electronic circuit optimization,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2016), 2016, pp. 117-118.
[4]. Z. J. Wang, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Parallel multi-strategy evolutionary algorithm using massage passing interface for many-objective optimization,” in Proc. IEEE Symposium Series on Computational Intelligence (SSCI 2016), Athens, Greece, Dec. 2016, pp. 1-8.
[5]. Z. G. Chen, Zhi-Hui Zhan(Corresponding Author), W. Shi, W. N. Chen, and J. Zhang, “When neural network computation meets evolutionary computation: A survey,” in Proc. International Symposium on Neural Networks (ISNN 2016), Saint Petersburg, Russia, Jul. 2016, pp. 603-612.
[6]. Y. F. Li, Zhi-Hui Zhan(Corresponding Author), Y. Lin, and J. Zhang, “Comparisons study of APSO OLPSO and CLPSO on CEC2005 and CEC2014 test suits,” in Proc. IEEE Congr. Evol. Comput. (CEC 2015), Sendai, Japan, 2015, pp. 3179-3185.
[7]. Z. G. Chen, K. J. Du, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Deadline constrained cloud computing resources scheduling for cost optimization based on dynamic objective genetic algorithm,” in Proc. IEEE Congr. Evol. Comput. (CEC 2015), Sendai, Japan, 2015, pp. 708-714.
[8]. H. H. Li, Y. W. Fu, Zhi-Hui Zhan(Corresponding Author), and J. J. Li, “Renumber strategy enhanced particle swarm optimization for cloud computing resource scheduling,” in Proc. IEEE Congr. Evol. Comput. (CEC 2015), Sendai, Japan, 2015, pp. 870-876.
[9]. X. F. Liu, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “Dichotomy guided based parameter adaptation for differential evolution,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2015), Madrid, Spain, Jul. 2015, pp. 289-296.
[10]. H. H. Li, Z. G. Chen, Zhi-Hui Zhan(Corresponding Author), K. J. Du, and J. Zhang, “Renumber coevolutionary multiswarm particle swarm optimization for multi-objective workflow scheduling on cloud computing environment,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2015), Madrid, Spain, Jul. 2015, pp. 1419-1420.
[11]. Z. J. Wang, Zhi-Hui Zhan(Corresponding Author), and J. Zhang, “An improved method for comprehensive learning particle swarm optimization,” in Proc. IEEE Symposium Series on Computational Intelligence (SSCI 2015), Cape Town, South Africa, Dec. 2015, pp. 218-225.
[12]. Zhi-Hui Zhan and J. Zhang, “Differential evolution for power electronic circuit optimization,” in Proc. Conf. Technologies and Applications of Artificial Intelligence (TAAI 2015), Tainan, Taiwan, Nov. 2015, pp. 158-163.
[13]. Z. G. Chen, Zhi-Hui Zhan(Corresponding Author), H. H. Li, K. J. Du, J. H. Zhong, Y. W. Foo, Y. Li, and J. Zhang, “Deadline constrained cloud computing resources scheduling through an ant colony system approach,” in Proc. Int. Conf. Cloud Computing Research and Innovation (ICCCRI 2015), Singapore, Oct. 2015, pp. 112-119.
[14]. Zhi-Hui Zhan, J. J. Li, and J. Zhang, “Adaptive particle swarm optimization with variable relocation for dynamic optimization problems,” in Proc. IEEE Congr. Evol. Comput. (CEC 2014), Beijing, China, Jul. 2014, pp. 1565-1570.
[15]. X. F. Liu and Zhi-Hui Zhan (Corresponding Author), “Energy aware virtual machine placement scheduling in cloud computing based on ant colony optimization approach,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2014), Vancouver, Canada, Jul. 2014, pp. 41-47.
[16]. G. W. Zhang and Zhi-Hui Zhan (Corresponding Author), “A normalization group brain storm optimization for power electronic circuit optimization,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2014), Vancouver, Canada, Jul. 2014, pp. 183-184.
[17]. Zhi-Hui Zhan, G. Y. Zhang, Y. J. Gong, and J. Zhang, “Load balance aware genetic algorithm for task scheduling in cloud computing,” in Proc. Simulated Evolution And Learning (SEAL 2014), Dec. 2014, pp. 644-655.
[18]. Meng-Dan Zhang, Zhi-Hui Zhan(Corresponding Author), J. J. Li, and J. Zhang, “Tournament selection based artificial bee colony algorithm with elitist strategy,” in Proc. Conf. Technologies and Applications of Artificial Intelligence (TAAI 2014), Taiwan, Nov. 2014, pp. 387-396.
[19]. Guang-Wei Zhang, Zhi-Hui Zhan(Corresponding Author), K. J. Du, Y. Lin, W. N. Chen, J. J. Li, and J. Zhang, “Parallel particle swarm optimization using message passing interface,” in Proc. The 18th Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES 2014), Singapore, Nov. 2014, pp. 55-64.
[20]. Y. L. Li, Zhi-Hui Zhan (Corresponding Author), and J. Zhang, “Differential evolution enhanced with evolution path vector,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2013), Jul. 2013, pp. 123-124.
[21]. Zhi-Hui Zhan, W. N. Chen, Y. Lin, Y. J. Gong, Y. L. Li, and J. Zhang, “Parameter investigation in brain storm optimization,” in Proc. IEEE Symposium Series on Computational Intelligence (SSCI 2013), Singapore, Apr. 2013, pp. 103-110.
[22]. Zhi-Hui Zhan, J. Zhang, Y. H. Shi, and H. L. Liu, “A modified brain storm optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC 2012), Brisbane, Australia, Jun. 2012, pp. 1-8.
[23]. Zhi-Hui Zhan and J. Zhang, “Enhance differential evolution with random walk,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2012), Philadelphia, USA, Jul. 2012, pp. 1513-1514.
[24]. Zhi-Hui Zhan, K. J. Du, J. Zhang, and J. Xiao, “Extended binary particle swarm optimization approach for disjoint set covers problem in wireless sensor networks,” in Proc. Conf. Technologies and Applications of Artificial Intelligence (TAAI 2012), Tainan, Taiwan, 2012, pp. 327-331.
[25]. Zhi-Hui Zhan and J. Zhang, “Orthogonal learning particle swarm optimization for power electronic circuit optimization with free search range,” in Proc. IEEE Congr. Evol. Comput. (CEC 2011), New Orleans, Jun. 2011, pp. 2563-2570.
[26]. Zhi-Hui Zhan and J. Zhang, “Co-evolutionary differential evolution with dynamic population size and adaptive migration strategy,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2011), Dublin, Ireland, Jul. 2011, pp. 211-212.
[27]. Zhi-Hui Zhan and J. Zhang, “Self-adaptive differential evolution based on PSO learning strategy,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2010), Portland, USA, Jul. 2010, pp. 39-46.
[28]. Zhi-Hui Zhan and J. Zhang, “A parallel particle swarm optimization approach for multiobjective optimization problems,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2010), Portland, USA, Jul. 2010, pp. 81-82.
[29]. Zhi-Hui Zhan, J. Zhang, and Z. Fan, “Solving the optimal coverage problem in wireless sensor networks using evolutionary computation algorithms,” in Proc. Simulated Evolution And Learning (SEAL 2010), LNCS 6457, pp. 166–176, 2010.
[30]. Zhi-Hui Zhan, J. Zhang, and Y. H. Shi, “Experimental study on PSO diversity,” in Proc. 3rd Int. Workshop on Advanced Computational Intelligence (IWACI 2010), Suzhou, China, Aug. 2010, pp. 310-317.
[31]. Zhi-Hui Zhan, X. L. Feng, Y. J. Gong, and J. Zhang, “Solving the flight frequency programming problem with particle swarm optimization,” in Proc. IEEE Congr. Evol. Comput. (CEC 2009), Trondheim, Norway, May 2009, pp. 1383-1390.
[32]. Zhi-Hui Zhan, J. Zhang, and R. Z. Huang, “Particle swarm optimization with information share mechanism,” in Proc. Genetic Evol. Comput. Conf. (GECCO 2009), Montréal, Canada, Jul. 2009, pp. 1761-1762.
[33]. Zhi-Hui Zhan and J. Zhang, “Parallel particle swarm optimization with adaptive asynchronous migration strategy,” in Proc. The 9th Int. Conf. on Algorithms and Architectures for Parallel Processing (ICA3PP), Taipei, Taiwan, Jun. 2009, pp. 490-501.
[34]. Zhi-Hui Zhan and J. Zhang, “Discrete particle swarm optimization for multiple destination routing problems,” in Proc. EvoWorkshops 2009, LNCS 5484, Apr. 2009, pp. 117-122.
[35]. Zhi-Hui Zhan, J. Xiao, J. Zhang, and W. N. Chen, “Adaptive control of acceleration coefficients for particle swarm optimization based on clustering analysis,” in Proc. IEEE Congr. Evol. Comput. (CEC 2007), Singapore, Sept. 2007, pp. 3276-3282.
Authorized Patents
[1]. J. Zhang, Zhi-Hui Zhan, and T. Huang, Multicast Approach Based on Particle Swarm Optimization, Patent No. ZL200810220650.1
Summary of Key Publications in SCI Journals
Journal Name                        5-Year IF   IF      JCR Category                                Rank in Category   Papers
IEEE Trans. Evol. Comput.           6.897       5.908   Computer Science – Theory & Method          1/105              7
IEEE Trans. SMC, Part B (CYB)       4.978       4.943   Computer Science – Cybernetics              1/22               6
IEEE Trans. Ind. Electron.          5.985       6.383   Automation & Control System                 1/59               1
IEEE Trans. Intell. Transp. Syst.   3.155       2.534   Transportation Science & Technology         6/33               1
IEEE Trans. Ind. Informatics        4.880       4.708   Automation & Control System                 3/59               1
IEEE Trans. Paral. Distr. Syst.     2.749       2.661   Computer Science – Theory & Method          11/105             1
IEEE Comput. Intell. Mag.           3.483       3.647   Computer Science – Artificial Intelligence  18/130             1
ACM Computing Surveys               6.559       5.243   Computer Science – Theory & Method          2/105              1
Information Sciences                3.683       3.364   Computer Science – Information Systems      10/144             2
Applied Soft Computing              3.288       2.857   Computer Science – Interdiscip. Appl.       14/104             1
Soft Computing                      1.732       1.630   Computer Science – Artificial Intelligence  57/130             1
Total                                                                                                                  23
Abstract
Particle swarm optimization (PSO) is a simple yet powerful optimization technique. Compared
with other evolutionary computation (EC) algorithms such as the genetic algorithm (GA), PSO
has a simpler algorithm structure, is easier to implement, and converges faster. PSO therefore
has promising applications in various science and engineering optimization problems, and it
has attracted great interest and attention from researchers all over the world. In the nearly
two decades since PSO was invented in 1995, the following key problems have emerged and call
for urgent solutions.
1) PSO performance relies strongly on the parameters and operators used in different
evolutionary states. How to recognize the evolutionary state and adaptively control the
parameters and operators to obtain better algorithm performance is a hot yet difficult research
topic in the PSO community.
2) Although PSO can quickly obtain a reasonable solution for various problems, its fast
convergence makes it prone to being trapped in local optima, especially on complex
multimodal optimization problems. How to develop a PSO variant with both fast convergence
speed and strong global search ability is a significant yet challenging research topic in the
PSO community.
3) When applying PSO to applications such as multi-objective optimization and engineering
optimization problems in practice, how to exploit the problem characteristics so as to solve
the practical problem efficiently remains a challenge in extending PSO to real-world
applications.
In response to these issues, this dissertation carries out innovative research into
PSO parameter adaptation control, operator orthogonal design, and population
co-evolutionary interaction. To make this research effective, the dissertation observes that
the population-based search and iteration-based evolution of PSO generate a wealth of search
data and historical data during the evolutionary process. Since machine learning (ML) is a
powerful tool for extracting useful information from large amounts of data, using ML
techniques to analyze, process, and utilize these data is of great significance in aiding PSO
algorithm design and thereby improving algorithm performance. In view of this, the
dissertation conducts research into ML-aided PSO and its engineering applications. The main
work is to apply techniques and ideas from the ML field, such as statistical analysis,
orthogonal design and prediction, and ensemble learning, to aid PSO design, improve
algorithm performance, and extend its applications.
The main innovative contributions of this thesis are as follows:
(1) Propose a statistical-analysis-based adaptive PSO (APSO) that enables the algorithm to
act properly in different evolutionary states, enhancing its versatility.
The parameter and operator requirements of PSO differ across evolutionary states. Exploiting
the strong ability of ML techniques to extract useful information from mass data, this
dissertation applies statistical analysis to the population distribution data and fitness data
of PSO during the evolutionary process. This yields a novel evolutionary state estimation
(ESE) method that can classify the evolutionary states efficiently. With the ML-aided ESE
method, APSO adaptively controls the parameters and operators according to the different
requirements of each state, improving PSO performance and enhancing its versatility in
different search environments.
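As an illustrative sketch only (function names are ours, and it shows just the evolutionary factor and the sigmoid inertia-weight mapping reported for APSO, not the full operator adaptation), the state-estimation idea can be expressed as:

```python
import numpy as np

def evolutionary_factor(positions, gbest_index):
    """Evolutionary factor f used by APSO's evolutionary state estimation.

    d_i is the mean Euclidean distance from particle i to all other
    particles; f compares the global-best particle's d_g with the
    population extremes, so f near 1 suggests exploration/jumping-out
    and f near 0 suggests convergence.
    """
    n = len(positions)
    d = np.array([
        np.mean([np.linalg.norm(positions[i] - positions[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    d_min, d_max = d.min(), d.max()
    if d_max == d_min:  # degenerate swarm: all particles coincide
        return 0.0
    return (d[gbest_index] - d_min) / (d_max - d_min)

def adaptive_inertia_weight(f):
    """Sigmoid mapping of f to the inertia weight, w in [0.4, 0.9)."""
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))
```

A converging swarm (small f) thus receives a small inertia weight for refinement, while an exploring swarm (large f) receives a large one for global search.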
(2) Propose an orthogonal-design-and-prediction-based orthogonal learning PSO (OLPSO) to
enhance the algorithm's global search ability in complex optimization.
As the learning strategy in traditional PSO cannot sufficiently utilize the information in
the personal and neighborhood experiences, this dissertation proposes a novel orthogonal
learning (OL) strategy in which each particle constructs a promising exemplar to guide its
flight. The OL strategy is based on the orthogonal experimental design technique in ML,
which can efficiently discover the useful information in the personal and neighborhood
experiences and predict a promising combination of the two. OLPSO thus obtains both fast
convergence speed and strong global search ability, and its promising performance makes it
an efficient tool for complex and multimodal optimization problems.
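A minimal sketch of the OL idea, assuming a two-level L4(2^3) orthogonal array over three dimensions and a minimization objective (the names and simplifications here are ours, not the dissertation's exact formulation):

```python
import numpy as np

# L4(2^3) orthogonal array: 4 balanced trial rows over 3 two-level factors.
# Level 0 -> take that dimension from pbest, level 1 -> from gbest.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def orthogonal_exemplar(pbest, gbest, fitness):
    """Combine pbest and gbest dimension-wise via orthogonal experimental
    design (minimization assumed) and return the best vector found among
    the OA trials and the factor-analysis prediction."""
    trials = np.where(L4 == 0, pbest, gbest)       # each row: one combination
    scores = np.array([fitness(t) for t in trials])
    # Factor analysis: per dimension, keep the level with the better mean score.
    best_levels = np.array([
        0 if scores[L4[:, d] == 0].mean() <= scores[L4[:, d] == 1].mean() else 1
        for d in range(L4.shape[1])
    ])
    predicted = np.where(best_levels == 0, pbest, gbest)
    candidates = np.vstack([trials, predicted])
    cand_scores = np.array([fitness(c) for c in candidates])
    return candidates[cand_scores.argmin()]
```

Only 4 trials (plus one predicted combination) are evaluated instead of all 2^3 = 8 combinations, which is the economy that orthogonal design brings to exemplar construction.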
(3) Propose a co-evolutionary multi-swarm PSO (CMPSO), inspired by the ensemble learning
idea in ML, to enhance performance in multi-objective optimization.
Ensemble learning in ML uses multiple classifiers to enhance classification ability. Inspired
by this multiple-learners idea, this dissertation designs a novel optimization framework,
multiple populations for multiple objectives (MPMO), for solving multi-objective optimization
problems (MOPs) with EC algorithms. Based on the MPMO framework, CMPSO on the one hand avoids
the fitness assignment problem caused by considering all the objectives together, and on the
other hand searches sufficiently in different areas of the Pareto front (PF) under the
guidance of each objective. Moreover, CMPSO uses a novel external shared archive for the
communication and co-evolution of the different swarms, so that the non-dominated solutions
cover the whole PF efficiently, enhancing performance on MOPs.
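The shared-archive co-evolution rests on Pareto dominance; a minimal sketch of the archive update (simplified from the dissertation's design, with names of our own choosing) might look like:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse on every objective, better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert a candidate objective vector into the shared archive iff it is
    non-dominated, removing any archive members it dominates. The archive is
    the communication channel through which CMPSO's per-objective swarms
    co-evolve."""
    if any(dominates(a, candidate) for a in archive):
        return archive                      # candidate is dominated: reject
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

Each swarm would submit its solutions to this archive every generation and draw guidance from it, so the non-dominated set gradually spreads along the whole PF.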
(4) Apply OLPSO to the power electronic circuit (PEC) design problem, extending the
engineering application fields of PSO.
The PEC design problem is a complex engineering application because it involves many
components, such as resistors, capacitors, and inductors, all of which must be optimally
designed to obtain good circuit performance. On the one hand, this dissertation extends the
traditional PEC optimization model by introducing a free search range for the components.
Although the new model brings PEC much closer to real-world application, it poses great
challenges to current optimization methods. On the other hand, therefore, this dissertation
applies the powerful ML-aided OLPSO to optimize PEC with the free search range. The success
of OLPSO in the PEC application not only provides PEC with a powerful optimizer, but also
demonstrates that ML-aided PSO algorithms have great potential in real-world engineering
optimization problems.
In summary, this thesis argues that ML techniques can extract useful information from the
data generated by PSO, and accordingly applies statistical analysis, orthogonal design and
prediction, and ensemble learning techniques and ideas from the ML field to aid PSO,
improving its convergence speed, solution accuracy, and range of applications. This work is
also an attempt to combine the techniques of EC and ML, two of the most significant research
fields in computer science.
Key Words: Particle Swarm Optimization, Machine Learning, Adaptation, Orthogonal
Design and Prediction, Multi-objective Optimization, Engineering Application
Contents
Acknowledgments .................................................................... III
Declaration of Thesis Originality .................................................. V
Permission to Quote Copyrighted Material ........................................... VI
Brief CV & Publications ............................................................ VII
Abstract ........................................................................... XV
Contents ........................................................................... XIX
List of Figures .................................................................... XXII
List of Tables ..................................................................... XXIII
Chapter 1 Introduction ............................................................. 1
    1.1 Motivation ................................................................. 1
    1.2 Overview of PSO ............................................................ 6
        1.2.1 Origin and flow structure of PSO ..................................... 6
        1.2.2 Theory analysis of PSO ............................................... 9
        1.2.3 Parameter control of PSO ............................................. 11
        1.2.4 Operator design of PSO ............................................... 13
        1.2.5 Population interaction of PSO ........................................ 15
        1.2.6 Discrete optimization of PSO ......................................... 16
        1.2.7 Practical application of PSO ......................................... 18
    1.3 ML Technique and PSO ....................................................... 20
        1.3.1 Overview of machine learning ......................................... 20
        1.3.2 When PSO meets ML .................................................... 21
        1.3.3 ML aided PSO ......................................................... 22
    1.4 Contributions of the Thesis ................................................ 23
    1.5 Organization of the Thesis ................................................. 25
Chapter 2 Adaptive Particle Swarm Optimization Based on Statistical Analysis Technique in Machine Learning ... 29
    2.1 Introduction ............................................................... 29
    2.2 Evolutionary State Estimation .............................................. 32
    2.3 Adaptive Particle Swarm Optimization ....................................... 35
        2.3.1 Adaptation of the Inertia Weight ..................................... 35
        2.3.2 Control of the Acceleration Coefficients ............................. 36
        2.3.3 Elitist Learning Strategy Adaptation ................................. 38
    2.4 Benchmark Tests and Comparisons ............................................ 39
        2.4.1 Benchmark Functions and Algorithm Configuration ...................... 39
        2.4.2 Comparisons on the Solution Accuracy ................................. 41
        2.4.3 Comparisons on the Convergence Speed ................................. 43
        2.4.4 Comparisons on the Algorithm Reliability ............................. 44
        2.4.5 Comparisons Using t-Tests ............................................ 46
    2.5 Further Analysis of APSO ................................................... 47
        2.5.1 Analysis of Parameter Adaptation and Elitist Learning ................ 47
        2.5.2 Search Behaviors of APSO and Parameter Evolution Analysis ............ 48
        2.5.3 Sensitivity of the Acceleration Rate ................................. 50
2.5.4 Sensitivity of the Elitist Learning Rate ................................................................. 51 2.6 Chapter Summary ......................................................................................................... 52
Chapter 3 Orthogonal Learning Particle Swarm Optimization Based on Orthogonal Experimental Design Technique in Machine Learning ................................................................................................. 53
3.1 Introduction ................................................................................................................... 53 3.2 Orthogonal Learning Particle Swarm Optimization ...................................................... 57
3.2.1 Orthogonal Experimental Design .......................................................................... 57 3.2.2 Orthogonal Learning Strategy ............................................................................... 59 3.2.3 Orthogonal Learning Particle Swarm Optimization .............................................. 61
3.3 Experimental Verification and Comparisons ................................................................. 62 3.3.1 Functions Tested .................................................................................................... 62 3.3.2 Compared Algorithm Configuration ..................................................................... 62 3.3.3 Solution Accuracy with Orthogonal Learning Strategy ........................................ 64 3.3.4 Convergence Speed with Orthogonal Learning Strategy ...................................... 67 3.3.5 Comparisons with Other PSOs .............................................................................. 69 3.3.6 Comparisons with Other Evolutionary Algorithms ............................................... 72 3.3.7 Parameter Analysis ................................................................................................ 73 3.3.8 Discussions ........................................................................................................... 74
3.4 Chapter Summary ......................................................................................................... 75 Chapter 4 Multiple Populations for Multiple Objectives: A Co-evolutionary Technique for Solving Multi-objective Optimization Problems Based on Ensemble Learning Technique in Machine Learning .............................................................................................................................................................. 77
4.1 Introduction ................................................................................................................... 77 4.2 Multi-objective Optimization Problem ......................................................................... 81
4.2.1 Related concept of MOP ....................................................................................... 81 4.2.2 Related work on MOP ........................................................................................... 81
4.3 CMPSO for MOP .......................................................................................................... 82 4.3.1 CMPSO Evolutionary Process .............................................................................. 82 4.3.2 CMPSO Archive Update ....................................................................................... 83 4.3.3 Complete CMPSO ................................................................................................. 86 4.3.4 Complexity Analysis of CMPSO .......................................................................... 87
4.4 Experimental Verification and Comparisons ................................................................ 88 4.4.1 Test Problems ........................................................................................................ 88 4.4.2 Performance Metric ............................................................................................... 89 4.4.3 Experimental Settings ........................................................................................... 89 4.4.4 Experimental Results on ZDT Problems ............................................................... 91 4.4.5 Experimental Results on DTLZ and WFG Problems ............................................ 92 4.4.6 Experimental Results on UF Problems ................................................................. 94 4.4.7 The Benefit of Shared Archive .............................................................................. 96 4.4.8 Impacts of Parameter Settings ............................................................................... 99
4.5 Chapter Summary ....................................................................................................... 102 Chapter 5 Orthogonal Learning Particle Swarm Optimization for Power Electronic Circuit Optimization with Free Search Range ................................................................................................ 103
5.1 Introduction ................................................................................................................. 103
5.2 Power Electronic Circuit ............................................................................................. 105 5.3.1 Particle Representation ....................................................................................... 106 5.3.2 Fitness Function .................................................................................................. 106
5.4 Experiments and Comparisons .................................................................................... 107 5.4.1 Circuit Configurations ......................................................................................... 107 5.4.2 Algorithm Configurations ................................................................................... 108 5.4.3 Comparisons on Fitness Quality ......................................................................... 109 5.4.4 Comparisons on Optimization Speed and Reliability ......................................... 110 5.4.5 Comparisons on Simulation Results ................................................................... 111 5.4.6 Comparisons on Discrete Search Space .............................................................. 114
5.5 Chapter Summary ....................................................................................................... 115 Chapter 6 Conclusion and Future Work .............................................................................................. 117
6.1 Conclusion .................................................................................................................. 117 6.2 Future work ................................................................................................................. 120
6.2.1 More ML techniques and EC algorithms ............................................................ 121 6.2.2 Dynamic ML aided EC algorithm ....................................................................... 121 6.2.3 Distributed ML aided EC algorithm .................................................................... 121 6.2.4 More Engineering Optimization Practice Tests .................................................. 122
References ........................................................................................................................................... 123
List of Figure&Table
- XXII -
List of Figures
Fig. 1-1 The journal papers related to PSO and GA in the IEEE Xplore database in recent 10 years..... 4 Fig. 1-2 The papers related to PSO and ACO in the Google Scholar database in recent 10 years.......... 4 Fig. 1-3 The basic flowchart and pseudo-code of PSO algorithm .......................................................... 8 Fig. 1-4 The topology structure of PSO algorithm ............................................................................... 13 Fig. 1-5 The interaction illustration between PSO algorithm and ML technique ................................. 21 Fig. 1-6 The organization structure and relationship illustration .......................................................... 26
Fig. 2-1 The population distributions of PSO during the evolutionary process. ................................... 30 Fig. 2-2 PSO population distribution information quantified by an evolutionary factor f. ................... 32 Fig. 2-3 Fuzzy membership functions for the four evolutionary states. ............................................... 33 Fig. 2-4 The relationship between inertia weight ω and evolutionary factor f. ..................................... 35 Fig. 2-5 The ideal variants of the acceleration coefficients c1 and c2. ................................................... 37 Fig. 2-6 Convergence performance of the 8 different PSOs on the 12 test functions. .......................... 42 Fig. 2-7 Cumulative percentages of the acceptable solutions obtained during the evolutionary process.
...................................................................................................................................................... 45 Fig. 2-8 Search behaviors of the APSO on Sphere function: (a) Mean value of ω during the run time
showing an adaptive momentum; (b) Mean values of c1 and c2 adapting to the evolutionary states. ...................................................................................................................................................... 49
Fig. 2-9 Search behaviors of PSOs on Rastrigin’s function: (a) Mean psd during the run time; (b) Plots of convergence during the minimization run; (c) Mean value of the ω during the run time showing an adaptive momentum; (d) Mean values of c1 and c2 adapting to the evolutionary states. ...................................................................................................................................................... 50
Fig. 3-1 The “oscillation” phenomenon caused by traditional PSO learning strategy. ......................... 54 Fig. 3-2 The “two steps forward, one step back” phenomenon caused by traditional PSO learning
strategy. ......................................................................................................................................... 55 Fig. 3-3 The flowchart of OLPSO. ....................................................................................................... 61 Fig. 3-4 Convergence progresses of PSOs with and without OL strategy on unimodal functions. ...... 65 Fig. 3-5 Convergence progresses of PSOs with and without OL strategy on multimodal functions. ... 66 Fig. 3-6 Convergence progresses of PSOs with and without OL strategy on rotated functions. .......... 67 Fig. 3-7 OLPSO performance with different values of G. (a) OLPSO-G. (b) OLPSO-L. .................... 74
Fig. 4-1 Framework of MPMO based algorithm for solving MOP. ...................................................... 79 Fig. 4-2 The archive update process. .................................................................................................... 85 Fig. 4-3 The complete flowchart of CMPSO. ....................................................................................... 86 Fig. 4-4 The final non-dominated solutions of the ZDT problems in all the 30 runs. ........................... 92 Fig. 4-5 The final non-dominated solutions of the WFG problems in all the 30 runs. ......................... 93 Fig. 4-6 The final non-dominated solutions of the UF problems in all the 30 runs. ............................. 95 Fig. 4-7 The final non-dominated solutions found by CMPSO and CMPSO-non-ELS in all the 30 runs.
...................................................................................................................................................... 97 Fig. 4-8 The mean IGD of CMPSO and CMPSO-non-aBest during the evolutionary process. ........... 98 Fig. 4-9 The final non-dominated solutions found by CMPSO and CMPSO-non-aBest after 1000 FEs.
...................................................................................................................................................... 98 Fig. 4-10 The mean IGD of CMPSO with different population size. ................................................... 99 Fig. 4-11 The final non-dominated solutions found by CMPSO and OMOPSO with different FEs on
ZDT4. .......................................................................................................................................... 100 Fig. 4-12 The mean IGD on DTLZ2 and UF1 of MOPSOs with different ω and different ci. ........... 101
Fig. 5-1 A block diagram of PEC. ....................................................................................................... 105 Fig. 5-2 Circuit schematics of the buck regulator with overcurrent protection. ................................. 108 Fig. 5-3 Mean convergence characteristics of different approaches in optimizing PEC..................... 111 Fig. 5-4 Simulated voltage responses from 0 ms to 90 ms. ................................................................ 112 Fig. 5-5 Simulated current responses from 0 ms to 90 ms. ................................................................. 113
Fig. 6-1 The summary. ........................................................................................................................ 120
List of Tables
Table 1-1 The Researches on PSO Theory ............................................................................................ 10 Table 1-2 The Rank of PSO Papers in Different IEEE Transactions According to SCI Database ........ 19
Table 2-1 The 12 Functions Used in The Comparisons ........................................................................ 39 Table 2-2 The PSO Algorithms Used in the Comparisons .................................................................... 40 Table 2-3 Results Comparisons on Solution Accuracy Among 8 PSOs on 12 Test Functions ............. 41 Table 2-4 Convergence Speed and Algorithm Reliability Comparisons ............................................... 43 Table 2-5 Comparisons Between the APSO and Other PSOs on t-Tests .............................................. 46 Table 2-6 Merits of Parameter Adaptation and Elitist Learning on Search Quality .............................. 47 Table 2-7 Effects of the Acceleration Rate on Global Search Quality .................................................. 51 Table 2-8 Effects of the Elitist Learning Rate on Global Search Quality ............................................. 51
Table 3-1 The Factors and Levels of the Chemical Experiment Example ............................................ 57 Table 3-2 Deciding the Best Combination Levels of the Chemical Experimental Factors Using an
Orthogonal Experimental Design Method .................................................................................... 58 Table 3-3 Sixteen Test Functions Used in the Comparison ................................................................... 63 Table 3-4 PSO Algorithms for Comparison .......................................................................................... 64 Table 3-5 Solutions Accuracy (Mean±Std) Comparisons Between PSOs With and Without OL
Strategy ......................................................................................................................................... 65 Table 3-6 Convergence Speed, Algorithm Reliability, and Success Performance Comparisons. ......... 68 Table 3-7 Search Result Comparisons of PSOs on 16 Global Optimization Functions ........................ 70 Table 3-8 Convergence Speed, Algorithm Reliability, and Success Performance Comparisons Among
Different PSO Variants .................................................................................................................. 71
Table 3-9 Result Comparisons of OLPSO-L and Some State of the Art Evolutionary Computation Algorithms With The Existing Results Reported in The Corresponding References ................... 73
Table 4-1 Characteristics of the Test Problems ..................................................................................... 88 Table 4-2 Parameters Settings of the Algorithms .................................................................................. 90 Table 4-3 Results Comparisons on the ZDT Problems ......................................................................... 91 Table 4-4 Results Comparisons on the DTLZ and WFG Problems ...................................................... 93 Table 4-5 Results Comparisons on the UF Problems ............................................................................ 94 Table 4-6 Comparisons Between CMPSO and Its Variants CMPSO-non-ELS (CMPSO without ELS
in the Archive Update) and CMPSO-non-aBest (CMPSO without Using Archive Information for Particle Update) ............................................................................................................................ 96
Chapter 1 Introduction
1.1 Motivation
Optimization is an appealing topic that has attracted human beings since ancient times. When engaging in productive practice, scientific research, and social activities, human behavior is always driven by specific goals [1][2]. Before the development of modern mathematics, people mainly searched for optimal solutions to problems based on their experience. In the 17th century, after Newton invented calculus, many real-world problems came to be modeled as optimization problems (OPs). As shown in (1-1) for a minimization OP, the objective is to find a feasible solution X in the search space D that minimizes the objective function f:

Min f(X), X∈D (1-1)

Newton's calculus provides an effective approach to solving extremum problems such as (1-1). For example, when f is twice differentiable, we can find its extrema by locating the positions where the gradient is zero via Newton's method. However, Newton's method, and related methods such as the Lagrange multiplier method and Cauchy's gradient method, all require the objective function to possess good mathematical characteristics such as being differentiable or twice differentiable, which greatly limits the utilization of these methods in practical applications. Moreover, these gradient-based methods are local optimization algorithms. Therefore, they are promising on unimodal functions but have difficulty avoiding local optima when dealing with multimodal functions.
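To make this local-optimum limitation concrete, the following sketch applies plain gradient descent to a one-dimensional Rastrigin function, a standard multimodal benchmark. The function, step size, and starting points here are illustrative assumptions, not taken from this dissertation: started inside the basin of a local minimum, the gradient-based method settles there and never reaches the global minimum at x = 0.

```python
import math

def rastrigin(x):
    # 1-D Rastrigin function: highly multimodal, global minimum f(0) = 0
    return 10 + x * x - 10 * math.cos(2 * math.pi * x)

def rastrigin_grad(x):
    # Analytical derivative of the function above
    return 2 * x + 20 * math.pi * math.sin(2 * math.pi * x)

def gradient_descent(x0, lr=0.001, steps=3000):
    """Plain gradient descent: follows -f'(x) to the nearest stationary point."""
    x = x0
    for _ in range(steps):
        x -= lr * rastrigin_grad(x)
    return x

# Started inside the basin of a local minimum, the method stays there:
x_local = gradient_descent(2.2)   # settles near x ≈ 2, where f ≈ 4, not 0
# Only a start already close to the global basin reaches the global minimum:
x_global = gradient_descent(0.3)  # settles at x ≈ 0, where f ≈ 0
```

Swapping the starting point is the only change between the two calls, yet the results differ qualitatively, which is exactly why such methods are unreliable on multimodal landscapes.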
With the fast development of modern technology, the optimization problems emerging in scientific research and engineering practice have become more and more complex. Especially with the wide utilization of computers, optimization problems in modern life have become multi-variable, multi-modal, multi-constrained, and multi-objective. Taking the optimal design of a power electronics circuit (PEC) for instance, engineers face a host of resistors, capacitors, and inductors whose component values must be carefully determined in order to design an efficient PEC with good performance. Such an
optimization problem not only involves multiple variables and multiple modes, but also poses the difficulty that it is hard, if not impossible, to describe the problem with an explicit objective function. Therefore, neither the classic Newton and gradient-based methods nor traditional optimization methods such as quadratic programming, the simplex method, and dynamic programming are adequate for the complex scientific and engineering optimization problems of modern times. How to solve global optimization problems that lack a good mathematical model and involve complex challenges such as multiple variables, multi-modality, multiple constraints, and multiple objectives has become a significant research topic in the optimization field.
Inspired by biological evolution and intelligent phenomena in nature, computer scientists invented a kind of optimization algorithm that emulates biological evolutionary mechanisms and social swarm behaviors. Such population-search-based, iteratively evolving optimization algorithms are called evolutionary algorithms (EAs). EA research can be dated back to the genetic algorithm (GA) proposed by the American scholar Holland [3][4]. In the 1960s, the American scholar Fogel proposed evolutionary programming (EP) [5], and the German scholar Rechenberg proposed the evolution strategy (ES) [6]. Because they do not depend on the mathematical model or characteristics of the problem being solved, possess potential global search ability, and can obtain optimal or near-optimal solutions in acceptable computational time, these EA approaches have attracted great attention and interest from researchers all over the world. EAs have quickly become a significant research branch in computer science and the artificial intelligence (AI) field [2].
Although EAs provide a significant and effective approach to solving optimization problems in various scientific and engineering applications, their reliance on population-based search and iterative evolution makes them computationally expensive. In the 1980s, computer scientists drew inspiration from intelligent phenomena in the physical annealing process and in human memory, and invented the simulated annealing (SA) algorithm [7] and the tabu search (TS) algorithm [8][9]. As SA and TS are based on single-point search, they relieve the expensive computational burden of EAs to some extent. However, the single-point search characteristic makes them prone to being trapped in local optima, and they are therefore not efficient enough for real-world applications. In this sense,
computer scientists still preferred population-search-based optimization approaches like EAs. In order to keep the population-search characteristic of EAs while accelerating optimization, computer scientists invented swarm intelligence (SI) algorithms [14][15] such as ant colony optimization (ACO) [10][11] and particle swarm optimization (PSO) [12][13]. In the literature, EA and SI algorithms are collectively named evolutionary computation (EC) algorithms [16][17].
SI algorithms emulate intelligent behaviors in nature such as ant foraging and bird flocking. They are a kind of optimization approach based on memory-guided search. The origin of PSO can be dated back to 1995, when Eberhart and Kennedy proposed an SI algorithm emulating bird flocking [12][13]. Compared with GA, PSO has a simpler algorithm flowchart, is easier to implement, and can converge to an optimal or near-optimal solution with the guidance of historical search information. Therefore, PSO can meet the fast optimization speed requirements of real-world applications and has been extensively studied since its origin in 1995. Nowadays, PSO has been successfully applied to various optimization problems in daily life and shows great potential for development and application [18]-[23]. In 2004, Eberhart, one of the inventors of PSO, together with his colleague Shi, organized a special issue on PSO in the IEEE Transactions on Evolutionary Computation, the leading international journal in the EC field [24]. In that special issue, seven high-quality research papers focusing on different aspects of PSO were published [25]-[31]. This special issue can be regarded as a milestone in the PSO field, because the PSO algorithm has been widely accepted by researchers since then. Fig. 1-1 presents the number of IEEE journal papers with "particle swarm" or "PSO" in the title in the IEEE Xplore database. It shows that there were only a few journal papers on PSO before 2004, while the number rapidly increases after 2004. By 2008, the number of PSO papers is close to the number of GA papers. This indicates that more and more researchers have turned their interest from traditional EC algorithms like GA to the new and efficient PSO algorithm. Especially for various engineering optimization problems, more and more publications report that PSO can obtain better performance than GA.
Fig. 1-1 The journal papers related to PSO and GA in the IEEE Xplore database in recent 10 years
The PSO algorithm, together with the other SI algorithms, has become one of the hottest research topics in the EC and computer science fields all over the world. In international conferences such as the IEEE Congress on Evolutionary Computation (CEC) sponsored by IEEE and the Genetic and Evolutionary Computation Conference (GECCO) sponsored by ACM, the theory, design, and application of PSO are always key directions and topics in the call for papers. Moreover, the emergence of international conferences such as the International Conference on Ant Colony Optimization and Swarm Intelligence (ANTS) and the International Conference on Swarm Intelligence (ICSI), which are based on ACO and PSO, also shows that PSO has received close attention in recent years. Fig. 1-2 presents the paper numbers over the recent 10 years, obtained from the Google Scholar database using "particle swarm" as the keyword. Similarly, the paper numbers on ACO over the same period, obtained using "ant colony" as the keyword, are also presented. From the figure we can see that 10 years ago PSO was not as hot as ACO. However, the number of PSO papers increased rapidly and surpassed the number of ACO papers in 2009. In 2012, the number of PSO papers exceeded 20,000.
Fig. 1-2 The papers related to PSO and ACO in the Google Scholar database in recent 10 years
The growing wealth of scientific research and active academic exchange in the PSO field bring important opportunities and challenges to the algorithm design and practical application of PSO. Because the different components of PSO, such as its parameters, operators, and population, all have significant influences on algorithm performance, research into PSO parameter control, operator design, and population interaction remains an open problem.

The research in this dissertation points out that, when using PSO to solve global optimization problems, the population-based and iterative characteristics of PSO provide a mass of search data and historical data during the run. How to sufficiently utilize these data, and to discover in them useful information that helps parameter control, operator design, and population interaction, is an important and efficient way to enhance PSO performance. Machine learning (ML) is a kind of approach that emulates human learning behaviors so as to obtain new knowledge and skills and to improve performance [32]. As the PSO algorithm provides a mass of search data and historical data during the run, using ML techniques to analyze and process these data can yield useful information for algorithm design and problem solving. Using ML techniques to aid PSO is thus a significant and promising research topic.
This thesis conducts research on ML-aided PSO and its application. I apply the statistical analysis technique, the orthogonal experimental design technique, and the ensemble learning technique from the ML field to aid the parameter control, operator design, and population interaction of PSO, respectively. Finally, I apply the orthogonal experimental design aided PSO algorithm to the PEC design and optimization problem, not only to evaluate the ML-aided PSO algorithm on a real-world problem, but also to extend the application of PSO in the engineering field.
1.2 Overview of PSO
1.2.1 Origin and flow structure of PSO
Swarm animals such as birds and fish show high organization and regularity in natural behaviors such as migration and foraging. Many biologists, zoologists, computer scientists, behaviorists, social psychologists, and other researchers have studied such social behaviors in depth, and indicate that swarm-intelligent behavior is a kind of optimization behavior: drawing on advantages, avoiding disadvantages, and adapting to the environment [33]. Especially regarding applications of swarm intelligence, the research of the social psychologist Wilson [34] demonstrates that, in theory at least, through the process of swarm foraging each individual in the swarm benefits from the discoveries and experience accumulated by all individuals in the process. To introduce the ideas of self-cognition, social influence, and swarm intelligence from social psychology into swarm behaviors with high organization and regularity, two researchers, Kennedy, a senior social psychology scholar at the United States Bureau of Labor Statistics, and Eberhart, a famous electrical engineer at Purdue University, collaborated to develop an optimization tool for engineering practice and finally invented PSO. Clearly, the two scholars gave full play to their professional knowledge and skills in the design of PSO. On the one hand, PSO integrates the population-based optimization process of evolutionary computation and swarm intelligence. On the other hand, the self-cognition and social-influence theory of social psychology is blended into the design of PSO. In 1995, two papers [12][13] related to PSO were published by Eberhart and Kennedy at international conferences, representing the birth of PSO as an important branch of EC.
The basic idea of PSO is to simulate the movement of organisms in a bird flock, finding the optimum or near-optimum by population search and iterative evolution. In a bird flock, each bird finds the position of the food through self-search and social cooperation. Consider the scene described in [2]: a group of dispersed birds fly randomly in search of food. They do not know the position of the food in advance, but each is able to sense the distance between itself and the food (by the strength of the food's smell or other cues). Thus, each bird will
continually record and update the position nearest to the food found during its flight. Meanwhile, through information exchange, the birds can compare all the positions found so far and obtain the best position in the current population. Thus, each bird has a guiding direction during flight and combines self-experience and population-experience to adjust its velocity and position. Flying closer to the food step by step, all the birds will finally gather around the position of the food.
In PSO, each bird in the flock is regarded as a “particle”. A group of particles is initialized randomly as feasible solutions in the problem search space, and then performs iterative search to find the optimum. Just like a bird, each particle has a velocity, a position, and a fitness value defined by a fitness function related to the problem. Through continued iteration, particles update their velocities and positions under the influence of the personal historical best solution and the current global best solution, explore and exploit the search space, and finally find the globally best solutions.
Following the idea described above, PSO assumes that a population of N particles searches for the optimum in a D-dimensional solution space. Each particle i (1≤i≤N) is equipped with a position vector Xi = [Xi1, Xi2, …, XiD] and a velocity vector Vi = [Vi1, Vi2, …, ViD] to represent its current state. Meanwhile, each particle records a vector called the personal historical best position pBesti = [Pi1, Pi2, …, PiD]. That is, during the evolutionary process, each particle records the best position (the position with the best fitness value) achieved so far in the vector pBesti, and pBesti is updated whenever the particle finds a better position. Similarly, the best of all the personal historical best positions pBesti is recorded as the globally best position gBest = [G1, G2, …, GD]. It should be noted that in this thesis gBest corresponds to the best one among all pBesti, and the symbol gBest is adopted to simplify the expression. Employing the above vector expressions, the basic flowchart and pseudo-code of PSO are shown in Fig. 1-3.
Fig. 1-3 The basic flowchart and pseudo-code of PSO algorithm
Step 1: Initialization. In the D-dimensional search space, each dimension Xid (1≤i≤N, 1≤d≤D) of each particle i is initialized randomly within the search range [Xdmin, Xdmax], and the corresponding velocity Vid is generated randomly within [–Vdmax, Vdmax], where Vdmax is the maximum velocity in dimension d and is generally set to 20% of the search range of that dimension [18]. After that, the fitness value of each particle is evaluated, pBesti is set to the current position Xi, and the globally best position gBest is set to the best position among all pBesti.
Step 2: In each iteration:
step 2.1) For each particle i, use pBesti and gBest to update its velocity and position according to Eqs. (1-2) and (1-3):
Vid = ωVid + c1r1d(Pid – Xid) + c2r2d(Gd – Xid) (1-2)
Xid = Xid + Vid (1-3)
step 2.2) Evaluate the new position of particle i; if its fitness value is better than that of pBesti, replace pBesti with the new position Xi; if the new pBesti is better than gBest, set gBest to pBesti;
step 2.3) If all particles have been updated, finish this iteration and go to Step 3; otherwise, go back to step 2.1 to update the next particle.
Step 3: If the termination condition is not met, go back to Step 2 for the next iteration; otherwise, output the globally best position gBest as the solution of the problem and terminate the algorithm.
In Eq. (1-2), ω is the inertia weight, c1 and c2 are acceleration coefficients, and r1d and r2d are two uniformly distributed random numbers independently generated within [0, 1] for the dth dimension. Note that if some components Vid of the updated velocity violate the velocity bounds [–Vdmax, Vdmax], the violating Vid should be revised. Although various initialization and revision methods have been proposed for PSO [35], this thesis follows a simple approach: the swarm is initialized randomly, and violating velocity components are set to the corresponding bound. On the other hand, some updated positions Xid obtained by Eq. (1-3) may violate the position bounds [Xdmin, Xdmax]; typical revision methods set the violating components to the corresponding bound or regenerate them randomly within the search range [36]. In this thesis, unless otherwise noted, a special method is used: once a position exceeds the boundary constraints, it is not evaluated, so that infeasible information does not spread. Since each particle flies under the guidance of feasible pBesti and gBest, it is expected to fly back into the search space.
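The three steps above can be sketched in Python. This is an illustrative minimal implementation, not the thesis code; the sphere test function, the random seed, and the parameter values ω = 0.7298 and c1 = c2 = 1.49618 are my assumptions for demonstration:

```python
import numpy as np

def pso(fitness, D=10, N=30, T=1000, xmin=-100.0, xmax=100.0,
        w=0.7298, c1=1.49618, c2=1.49618, seed=0):
    """Minimize `fitness` with the basic global PSO of Eqs. (1-2)/(1-3)."""
    rng = np.random.default_rng(seed)
    vmax = 0.2 * (xmax - xmin)               # 20% of the search range [18]
    # Step 1: random positions and velocities; pBest = X, gBest = best pBest
    X = rng.uniform(xmin, xmax, (N, D))
    V = rng.uniform(-vmax, vmax, (N, D))
    pbest = X.copy()
    pfit = np.array([fitness(x) for x in X])
    g = int(pfit.argmin())
    gbest, gfit = pbest[g].copy(), pfit[g]
    for _ in range(T):                       # Steps 2 and 3: iterate until T
        r1 = rng.random((N, D))
        r2 = rng.random((N, D))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. (1-2)
        V = np.clip(V, -vmax, vmax)          # revise violating velocity components
        X = X + V                            # Eq. (1-3)
        for i in range(N):
            # As in the thesis: positions outside [xmin, xmax] are not
            # evaluated, so infeasible information does not spread.
            if np.all((X[i] >= xmin) & (X[i] <= xmax)):
                f = fitness(X[i])
                if f < pfit[i]:
                    pfit[i], pbest[i] = f, X[i].copy()
                    if f < gfit:
                        gfit, gbest = f, X[i].copy()
    return gbest, gfit

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x * x))  # illustrative test function
    best, val = pso(sphere)
    print("best fitness:", val)              # very close to 0 on the sphere
```

Out-of-bounds positions are simply skipped at evaluation time, matching the boundary handling described above; the feasible pBesti and gBest then pull the particle back into the search range.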
1.2.2 Theoretical analysis of PSO
Owing to its simple implementation and effective performance, PSO has been explored and improved by many researchers and applied in an increasingly wide range of areas. To describe the development history and current research on PSO clearly, the following subsections first review studies on parameter control, operator design, and population interaction, and then introduce the theoretical analysis, discrete optimization, and practical applications of the algorithm.
The theoretical analysis of PSO was first attempted by Ender and Mohan in early 1999 [37]. Later, in 2002, the French mathematician Clerc and the PSO inventor Kennedy presented a
deeper description and analysis of the mathematical foundation and convergence mechanisms of PSO in [38]. In addition, Trelea [39] reported results on the convergence and stability of PSO, indicating that PSO converges to one position of the search space but with no guarantee of reaching the global optimum; in the worst case, the converged position may not even be a local optimum, and the algorithm stagnates at a current best position. In 2006, departing from the earlier static analyses, Li et al. [40], Kadirkamanathan et al. [41], and van den Bergh et al. [42] carried out dynamic systematic analyses in their mathematical studies of PSO. In 2011, Fernandez-Martinez and Garcia-Gonzalo [43] analyzed the stability of both the continuous and the discrete PSO models. Research on PSO theory is summarized in Table 1-1.
Table 1-1 Research on PSO Theory

Year | Authors | Characteristic | Reference
1999 | Ender and Mohan | Observed, tracked, and analyzed the sine-wave characteristics of particles in flight, and extended the analysis to multidimensional space | [37]
2002 | Clerc and Kennedy | Designed the constriction factor and used it to accelerate the convergence of PSO | [38]
2003 | Trelea | Showed steady convergence to a position in the search space, but no guarantee that PSO converges to the global optimum | [39]
2006 | Li et al. | Described the trajectory of a particle and analyzed convergence using differential equations and the Z-transform | [40]
2006 | Kadirkamanathan et al. | Studied the behavior of PSO in dynamic environments, deepening static analysis into dynamic analysis | [41]
2006 | van den Bergh et al. | Tracked the flight trajectory of PSO and performed dynamic systematic analysis and convergence study | [42]
2011 | Fernandez-Martinez and Garcia-Gonzalo | Analyzed the stability of continuous and discrete PSO using stochastic and deterministic difference equations | [43]
1.2.3 Parameter control of PSO
One important reason for the rapid popularity and wide acceptance of PSO is that it has fewer parameters than GA and other EC algorithms [44]. As Eq. (1-2) shows, the parameters of PSO mainly include the inertia weight ω and the acceleration coefficients c1 and c2.
The original PSO had no parameter ω in Eq. (1-2). In later research, however, Shi and Eberhart [45] found that introducing an inertia weight ω into the velocity update equation helps maintain the continuity of the search. They introduced ω into the velocity update equation for the first time and showed that a larger ω favors search over a wide range, while a smaller ω helps the algorithm converge to the optimum. Therefore, to give the algorithm stronger global search ability in the early stage and greater local search capability later, they proposed a model in which ω decreases linearly with the iteration count:
ω = ωmax – (ωmax – ωmin)t/T (1-4)
where ωmax = 0.9 and ωmin = 0.4 are the maximal and minimal inertia weights, respectively, and t and T are the current and maximal numbers of generations, respectively. Besides the linearly decreasing setting, fuzzy adaptive [46] and random [47][48] settings have also been proposed. To compare the advantages of the different models, Liu et al. analyzed various inertia weight settings in [49].
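Eq. (1-4) is straightforward to implement; a small sketch, with the defaults ωmax = 0.9 and ωmin = 0.4 taken from the values above:

```python
def inertia_weight(t, T, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight of Eq. (1-4)."""
    return w_max - (w_max - w_min) * t / T

# Large omega early (global search), small omega late (local refinement).
print(inertia_weight(0, 100))    # 0.9 at the start
print(inertia_weight(100, 100))  # 0.4 at the end
```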
After Shi and Eberhart introduced ω in 1998, Clerc introduced the constriction factor χ in 1999 to ensure the convergence of PSO [50], modifying the velocity update equation to Eqs. (1-5a) and (1-5b). The constriction factor greatly improves the performance of PSO on unimodal functions, but its benefit is less obvious on multimodal functions. According to Eberhart's study in [51], the constriction factor is mathematically equivalent to the inertia weight.
Vid = χ[Vid + c1r1d(Pid – Xid) + c2r2d(Gd – Xid)] (1-5a)
χ = 2 / |2 – ϕ – √(ϕ² – 4ϕ)| (1-5b)
ϕ = c1 + c2 (1-5c)
Since the inertia weight dynamically balances global and local search ability and performs better than the constriction factor on multimodal and complex problems, it is more widely accepted by researchers. The PSO version studied in this thesis is the one with inertia weight.
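Eq. (1-5b) can be evaluated directly. A small sketch, using c1 = c2 = 2.05; this particular value is a standard choice from the constriction-factor literature, not stated in the text above:

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc's constriction factor per Eqs. (1-5b)/(1-5c); valid for phi > 4."""
    phi = c1 + c2                                        # Eq. (1-5c)
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(round(constriction_factor(), 4))  # 0.7298, the widely used value
```

With c1 = c2 = 2.05, χ ≈ 0.7298 and χ·c1 = χ·c2 ≈ 1.496, which is why this combination is often quoted as equivalent to an inertia-weight setting of ω ≈ 0.73.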
In addition to the inertia weight and constriction factor, two other important parameters of PSO are the acceleration coefficients c1 and c2. In the early stage of PSO's development, c1 and c2 were both set to 2.0, because the expected value of 2.0 multiplied by a random number in [0, 1] is 1.0, which lets the algorithm search the solution space steadily. Kennedy [52] investigated the effect of the two coefficients by designing a cognition-only model and a social-only model, and reported that reasonable settings of c1 and c2 are important for the performance of PSO. Suganthan [53] also found that different optimization problems require different acceleration coefficients to obtain the optimum solution. Based on these observations, and to improve the adaptability of PSO to different problems, Ratnaweera et al. [28] proposed a linearly varying scheme: over the generations, c1 decreases linearly while c2 increases linearly, which is expected to increase swarm diversity in the early stage (learning more from personal experience and less from global information) and to accelerate convergence in the later stage (using more global information).
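The linearly varying coefficients of Ratnaweera et al. can be sketched as follows; the endpoint values 2.5 and 0.5 are commonly used settings and are my assumption here, since the text above only states the linear increase/decrease idea:

```python
def tvac(t, T, c1_init=2.5, c1_final=0.5, c2_init=0.5, c2_final=2.5):
    """Time-varying acceleration coefficients: c1 decreases and c2 increases
    linearly over the generations (endpoint values are assumed defaults)."""
    c1 = c1_init + (c1_final - c1_init) * t / T
    c2 = c2_init + (c2_final - c2_init) * t / T
    return c1, c2

print(tvac(0, 100))    # (2.5, 0.5): self-cognition dominates early
print(tvac(100, 100))  # (0.5, 2.5): social learning dominates late
```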
Much research shows that different parameters are suitable at different evolutionary stages of a PSO run. Although many setting methods have been proposed, how to perceive the evolutionary state of the algorithm and adaptively control the parameters according to that state remains a challenging and open problem. This thesis therefore takes it as one part of our research.
1.2.4 Operator design of PSO
The operators of PSO mainly comprise the velocity and position update operators, which are simple and given by Eqs. (1-2) and (1-3), respectively. Each particle adjusts its velocity and position under the guidance of these two operators and moves closer to the global best solution. Fig. 1-3 and Eq. (1-2) describe the global PSO (GPSO), which searches for the optimum under the guidance of the global best information gBest. Fig. 1-4(a) shows the structure of GPSO; the neighborhood of each particle is the entire population. Depending on how a particle's neighborhood is defined, PSO also has a local version, the local PSO (LPSO). An LPSO with a ring structure is shown in Fig. 1-4(b), in which each particle has two neighbors with indices (i – 1) and (i + 1). In another typical LPSO topology, illustrated in Fig. 1-4(c), each particle regards the four particles above, below, left, and right of it on a planar mesh as neighbors. This structure is known as the von Neumann structure and corresponds to the von Neumann PSO (VPSO).
In the local version of PSO, the velocity update of particle i is guided not by the global best solution of the entire population but by the best pBest in particle i's neighborhood (including all neighbors and itself). In contrast to the global best solution gBest, this guidance vector is called the neighborhood best position and is denoted nBesti = [Ni1, Ni2, …, NiD] in this thesis. The corresponding velocity update operator is:
Vid = ωVid + c1r1d(Pid – Xid) + c2r2d(Nid – Xid) (1-6)
Fig. 1-4 The topology structure of PSO algorithm
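Computing nBesti for the ring topology of Fig. 1-4(b) can be sketched as follows; minimization is assumed, and the helper name ring_nbest is mine:

```python
import numpy as np

def ring_nbest(pbest, pfit):
    """For each particle i, return the best pBest among the ring neighborhood
    {i-1, i, i+1}; Eq. (1-6) then uses this nBest in place of gBest."""
    N = len(pfit)
    nbest = np.empty_like(pbest)
    for i in range(N):
        hood = [(i - 1) % N, i, (i + 1) % N]          # wrap around the ring
        j = min(hood, key=lambda k: pfit[k])          # best (lowest) fitness
        nbest[i] = pbest[j]
    return nbest

pfit = np.array([3.0, 1.0, 2.0, 5.0])                 # toy fitness values
pbest = np.arange(8.0).reshape(4, 2)                  # toy 2-D pBest positions
print(ring_nbest(pbest, pfit))  # rows 0-2 copy pBest of particle 1; row 3 copies particle 2
```

Because a good solution must travel around the ring one hop per iteration, information spreads slowly, which is exactly why LPSO preserves diversity better than GPSO.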
In research on the update operators of PSO, researchers mainly focus on designing reasonable and effective velocity update operators that speed up the process of
finding the global best solution under better guidance. As early as 2002, the PSO inventor Kennedy published a paper at the IEEE Congress on Evolutionary Computation (CEC) [54] discussing the performance of different PSO topologies such as star, ring, castellation, pyramid, and von Neumann structures. In 2004, Kennedy and Mendes [55] tested these topologies on the fully informed PSO [25]. Much research has found that because different topologies affect how information is communicated and how fast it flows, topology has a great effect on the performance of PSO. In general, GPSO with the star structure converges rapidly and performs well on unimodal problems but is easily trapped in local optima on multimodal problems. LPSO with the ring structure maintains good diversity on complex multimodal problems and avoids premature trapping in local optima because information spreads slowly. Structures like VPSO, whose communication density lies between those of LPSO and GPSO, achieve a good balance between rapid convergence and population diversity.
Besides obtaining different performance through different topologies, researchers have made many other attempts to improve the velocity update operator of PSO. For example, Suganthan [53] proposed a dynamically extending neighborhood model for selecting the guidance vector, in which the neighborhood of each particle starts from itself and grows gradually to the entire population. Hu and Eberhart [56] defined the neighborhood by physical position, choosing as the guidance vector the best particle among the nearest particles measured by Euclidean distance. Liang and Suganthan [57] grouped particles randomly into dynamically varying neighborhoods, with the best particle in each group acting as that group's nBest. Kennedy [58] and Mendes et al. [25] used the combined personal experience of all particles in the neighborhood to drive a particle's flight. Moreover, the comprehensive learning PSO (CLPSO) proposed by Liang et al. in 2006 is a widely accepted improved algorithm and a typical representative of velocity update operator improvements. In CLPSO, a particle learns not only from its own and the swarm's search experience but also from other particles: to update the velocity of particle i, CLPSO combines the pBest of several particles, selecting one or more dimensions from each to construct a guidance vector. CLPSO demonstrates good performance on multimodal
problems. However, the stochastic combination increases population diversity but slows convergence, so CLPSO performs poorly on unimodal problems.
Much research shows that in the operator design of PSO, especially the design of the velocity update equation, how to use the search information produced during evolution to guide the particles quickly and accurately is a challenging hot topic and a hard problem. Thus, an important part of this thesis is to employ the orthogonal design and orthogonal prediction techniques of machine learning to discover the useful information hidden in the solutions found during evolution, and then to use that information to construct an efficient velocity update operator that enhances the algorithm's capacity for rapid global convergence.
1.2.5 Population interaction of PSO
The velocity update and other operators may fail to make full use of the population's information, causing stagnation at a local optimum; another important cause of being trapped in local optima is the lack of full interaction, which leads to premature convergence of the population. To avoid premature convergence, researchers have proposed various methods, such as hybrid algorithms and multi-population algorithms, to strengthen population interaction.
In hybrid algorithms, the PSO population interacts with other operators or algorithms to improve performance. For example, Angeline [60] may have been the first, in 1998, to combine PSO with the selection operator of GA. Later, Lovbjerg et al. [61], Chen et al. [62], and Liang et al. [63] combined PSO with crossover operators. Hybrids of PSO with mutation operators [64][65][66], local search [67][68], and double-point expression [69] have also been designed. In addition, the ageing mechanism in nature [70], chaotic mechanisms [71][72][73], quantum mechanisms [74][75][76], niching techniques [77][78], speciation pheromones [79], and other methods have been hybridized with PSO to improve its performance. Through interaction with these mechanisms, the composition and structure of the particle population change during communication, which is
able to prevent premature convergence. Furthermore, some researchers have hybridized PSO with other evolutionary computation algorithms, for example, with simulated annealing [80][81], GA [82], ACO [83][84], artificial immune algorithms [85][86], and differential evolution [87]. Such hybrids combine the rapid convergence of PSO with the search features of other algorithms to enhance the direct interaction between algorithms and improve performance.
Multi-population PSO algorithms can be interpreted as two classes. The first is based on population decomposition: the population is divided into multiple subpopulations, and each subpopulation runs the same PSO algorithm on the same optimization problem [88][89]. Subpopulations can interact through communication mechanisms or information sharing techniques [90][91]. The second class is based on problem decomposition: the problem is decomposed into subproblems, and different algorithms optimize different subproblems. Multiple populations require tightly coupled communication and information sharing mechanisms to ensure that the solution information obtained by different populations in different search spaces can spread rapidly [27].
Much research indicates that, in the population interaction process of PSO, a significant problem is how to implement cooperative communication and information sharing between populations. There has been related work on this problem, but how to design a simple yet effective method remains a hot topic and a hard problem for PSO [92][93]. Thus, one important part of this thesis is to study multi-population interaction and to design a cooperative communication and information sharing mechanism that enhances the interaction among multiple populations and improves the performance of multi-population PSO on multiobjective optimization problems.
1.2.6 Discrete optimization of PSO
PSO was originally proposed to solve continuous optimization problems. From Eqs. (1-2) and (1-3) we can see that the velocity and position update method, based on addition, subtraction, and multiplication, suits continuous optimization but is difficult to apply to discrete
optimization. To apply PSO to discrete optimization problems, researchers have worked on designing discrete versions of PSO.
The earliest discrete version of PSO is the binary PSO (BPSO) [94], proposed by the PSO inventors Kennedy and Eberhart in 1997. BPSO uses a 0/1 coding scheme for the position vector. Since BPSO still uses the velocity update equation (1-2), each dimension of the resulting velocity may lie far from 0 or 1. To address this, BPSO uses a sigmoid function to map each velocity dimension to the range [0, 1], which then represents the probability of taking the value 1. Later, researchers combined PSO with angle modulation [95] and proposed multi-phase PSO [96] for binary optimization. Binary PSO has been applied to resource scheduling [97], the optimal coverage problem [98] and the disjoint set covers problem [99] in wireless sensor networks, and the multi-destination routing problem in computer networks [100].
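The sigmoid-based position rule of BPSO can be sketched as follows; the demo velocity values and the seed are arbitrary:

```python
import math
import random

def bpso_position_update(V, rng=random):
    """Binary PSO position rule: each bit is set to 1 with probability
    sigmoid(V[d]), where V is the real-valued velocity from Eq. (1-2)."""
    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))
    return [1 if rng.random() < sigmoid(v) else 0 for v in V]

random.seed(0)
# A strongly negative velocity maps to probability ~0, a strongly positive
# one to probability ~1, and zero velocity to a fair coin flip.
print(bpso_position_update([-6.0, 0.0, 6.0]))  # [0, 0, 1] with this seed
```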
Integer-coded PSO is another typical discrete PSO. Salman et al. [101] adopted an integer encoding that rounds the real-valued positions of PSO to approximately equal but feasible discrete integers. Yoshida et al. [102] proposed a continuous space decomposition method in which each region is assigned a corresponding discrete integer. These two methods are widely accepted and have been successfully applied to a number of integer discrete optimization problems [103]. Their advantage is that no transformation of the velocity and position update equations is needed: PSO still optimizes in the continuous search space and merely maps the continuous position to the corresponding discrete value when evaluating fitness.
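The rounding scheme of Salman et al. can be sketched as follows; the clamping to a feasible range [lo, hi] is my addition to keep the rounded integers feasible:

```python
def to_integer_solution(x, lo, hi):
    """Map a continuous PSO position to a feasible integer solution at
    evaluation time: round each component, then clamp it into [lo, hi].
    The velocity/position updates themselves stay continuous."""
    return [min(hi, max(lo, round(xi))) for xi in x]

print(to_integer_solution([1.4, 2.6, -0.3, 9.7], lo=0, hi=5))  # [1, 3, 0, 5]
```

Fitness is evaluated on the mapped integer vector while the particle keeps flying in continuous space, which is why Eqs. (1-2) and (1-3) need no modification.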
However, the above methods have difficulty with some discrete combinatorial problems, so many researchers have considered how to modify the velocity and position update equations of the original PSO, or redefine the operators designed for continuous space, so that they operate in discrete space. Schoofs and Naudts [104] redefined addition, subtraction, and multiplication in PSO and proposed a discrete PSO that successfully solves constraint satisfaction problems. Hu et al. [105] defined velocity as the exchange probability of position variables to solve queen exchanges in the n-queens problem. Clerc [106] also modified and redefined the operations in the velocity and position update equations to implement a discrete
PSO to solve the traveling salesman problem.
Chen et al. [107] employed a set-based method to describe the solution space of discrete combinatorial optimization problems, redefined the relevant operations of the velocity and position update equations in the set space, and proposed a novel set-based PSO. The set-based PSO inherits the learning idea of the velocity and position updates in continuous space and successfully extends PSO to discrete space while retaining the rapid convergence of the original PSO. It demonstrates promising performance on the traveling salesman problem and the multidimensional knapsack problem. Gong et al. [108] used discrete PSO to optimize the vehicle routing problem and, on some instances, achieved better solutions than the best previously known. Zhu and Wang [109] also proposed relevance sorting and depth sorting methods to transform the solution space of PSO and applied them successfully to the multiobjective grid scheduling discrete optimization problem.
Furthermore, exploiting the characteristics of the problems being solved, some researchers have combined PSO with other methods designed for discrete optimization. Wang et al. [110] developed a discrete PSO based on estimation of distribution for terminal assignment problems. Tian and Liu [111] employed a hybrid of PSO and an iterated greedy algorithm for the permutation flow shop scheduling problem. Zhang et al. [112] combined PSO with simulated annealing for shop scheduling problems. AlRashidi and El-Hawary [113] presented a hybrid of PSO and Newton's method for the discrete OPF problem. Gao et al. [114] mixed PSO with the genetic operators of GA for the traveling salesman problem. For more work on discrete PSO and its applications, see the related review [115].
1.2.7 Practical application of PSO
With the continual improvement and refinement of PSO, it has been applied to more and more areas. As a continuous optimization method, PSO can address a wide range of practical application problems. In many areas where GA has been applied successfully, PSO can obtain better solutions with faster convergence, reduce program complexity, and
increase the efficiency of algorithm application.
PSO was first used to optimize the connection weights of neural networks [116] and applied to medical diagnosis. Present applications of PSO cover power systems [102], electromagnetics [118], economic dispatch [119][120], biomedical image registration [31], system design [121][122], machine learning and training [30][123], data mining and classification [124][125], pattern recognition [18], signal control [126], flow shop scheduling [127], and mass spectrometer optimization [128].

Table 1-2 The Rank of PSO Papers in Different IEEE Transactions According to the SCI Database

Rank | Publication | Records (of 217) | %
1 | IEEE Transactions on Power Systems | 35 | 16.129%
2 | IEEE Transactions on Evolutionary Computation | 31 | 14.286%
3 | IEEE Transactions on Magnetics | 24 | 11.060%
4 | IEEE Transactions on Antennas and Propagation | 23 | 10.599%
5 | IEEE Transactions on Systems, Man, and Cybernetics, Part B | 15 | 6.912%
6 | IEEE Transactions on Industrial Electronics | 12 | 5.530%
7 | IEEE Transactions on Energy Conversion | 7 | 3.226%
8 | IEEE Transactions on Geoscience and Remote Sensing | 7 | 3.226%
9 | IEEE Transactions on Systems, Man, and Cybernetics, Part A | 7 | 3.226%
10 | IEEE Transactions on Industrial Informatics | 6 | 2.765%
Table 1-2 lists the top 10 IEEE Transactions ranked by the SCI database, obtained with the database's analysis function on records retrieved with title keyword "particle swarm", publication keyword "IEEE Transactions on*", and time interval Jan. 1, 1995 to Dec. 31, 2012. The table shows that, apart from the PSO theory and improvement papers in the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Systems, Man, and Cybernetics, Part B/Part A, the applications in the other journals mainly concern power systems [129], electromagnetics [130], antennas and propagation [131], industrial electronics [132], energy conversion [121], geoscience and remote sensing [133], and industrial informatics [134].
One important part of this thesis is the application of PSO to power electronic circuit design: based on the features and challenges of this application, an improved PSO is used to solve the problem effectively and optimize the engineering application.
1.3 ML Technique and PSO
1.3.1 Overview of machine learning
Both machine learning (ML) and evolutionary computation (EC) are important research areas in artificial intelligence (AI) [135]. The origin of ML goes back to the philosophical question "Can machines think?" (in other words, can machines learn?) raised by the computer pioneer Alan Turing in his 1950 paper [136]. Over several decades of development, the theory and methods of ML have advanced substantially, and ML has become an important tool for information mining and learning problems [137]. The paper "Machine learning for science: State of the art and future prospects" [138], published in Science in 2001 by scientists at the NASA Jet Propulsion Laboratory, expressed high appreciation and expectations for the role of ML in scientific research. The renowned ML scholar Professor Zhi-Hua Zhou also gave a detailed introduction to the state of ML and data mining in 2007 [139].
Although ML has developed rapidly, there is still no uniform, strict definition of it. Taken literally, ML is the study of how to use machines to simulate human learning activities. According to Mitchell, the heart of ML is to acquire new knowledge and skills from accumulated experience so as to improve performance [32]. Thus, ML concerns how to gain new knowledge and skills and how to reorganize existing knowledge. Since experience and information are usually hidden in data, an important capability of ML is to extract useful information and knowledge through statistical analysis of massive data or other techniques, and to use them to guide subsequent work [140].
Thanks to its strong data processing ability and its capability to acquire useful information and knowledge, ML has been widely applied in many areas, such as DNA sequencing and medical diagnosis, Internet search engine design, credit card fraud detection in economics and finance, securities market analysis, speech and handwriting recognition, strategy games, and robotics [141][142]. ML is a promising research topic.
1.3.2 When PSO meets ML
ML is an efficient means of guiding performance improvement by learning from existing data. Because the PSO paradigm is based on population search and iterative evolution, it produces a large amount of search and historical data. Traditional PSO (and other EC algorithms) usually does not make full use of these data to guide the search and running of the algorithm. Yet these data are a good source of useful information, such as the search path of each individual, the evolutionary direction of the population, the population distribution, the current running state, the structural characteristics of the solutions found, the interaction within and among populations, and the current advantages of and challenges facing the algorithm. All this information, experience, and knowledge can be acquired by using ML to analyze, predict, and process the data created by PSO.

Therefore, as shown in Fig. 1-5, when PSO meets ML, the data created by PSO become a learning source for ML, and ML offers important assistance for the efficient search and running of PSO: ML analyzes and processes the data for PSO and uses them to aid the implementation of the algorithm and improve its performance.
Fig. 1-5 The interaction illustration between PSO algorithm and ML technique
The different features and advantages of ML and PSO thus make them complementary. Since both are important topics and research areas in AI, researchers in the two fields have largely focused on their own algorithm development, technical improvement, and applications. In recent years, however, some researchers have attempted to combine ML and PSO. This work falls into two categories. The first is to
use the optimization ability of PSO to turn PSO into a more effective ML technique or to improve the performance of existing ML techniques; for example, PSO has been designed into efficient game learning tools [30][123], data mining tools [143][144], and classifiers [145]. The second category is to introduce ML into the design of PSO to improve the performance of PSO. Many researchers do not consciously seek efficient means from ML, but they use ML techniques nonetheless, whether unintentionally or deliberately [146]. The following subsection gives a brief review of related ML-aided PSO.
1.3.3 ML aided PSO
In improving PSO, many researchers have applied ML techniques such as statistical analysis, orthogonal experimental design, opposition-based learning, and clustering.
Work based on statistical analysis mainly uses data statistics and analysis techniques to examine population position information [147] and flight velocity data [148] as feedback for controlling the running of the algorithm. However, the existing work does not use statistical analysis to predict and evaluate the current state of the algorithm, so the corresponding algorithms lack adaptability to different running states.
Researchers have employed orthogonal experimental design in many different ways to improve PSO. Sivanandam and Visalakshi [149] adopted orthogonal experimental design to improve the initialization of PSO and make the initialized solutions uniformly distributed in the search space. Ho et al. [150] and Liu and Chang [151] combined the personal best and population best information through orthogonal experimental design to generate a new solution with a better fitness value.
In addition, opposition-based learning has also been applied in the initialization of PSO and in solution mutation to produce more diverse new solutions [152][153][154].
Cluster analysis is the ML technique most often used to improve PSO. Janson and Merkle [155] used cluster analysis to help PSO retain diversity. They divided the population into multiple subpopulations and stated that each subpopulation needs to form a niche
to keep an independent search ability. Thus, they applied clustering within the different subpopulations to find and record the corresponding excellent solutions. In contrast, Pulido and Coello [156] argued that it is necessary to enhance the communication between different subpopulations to ensure the performance of the algorithm; they therefore collected the optimum solutions of the different subpopulations, used cluster analysis to classify them into different classes, and finally assigned the solutions back to the subpopulations, so that each subpopulation can learn from the search information and search experience of the others. Kennedy [58] employed cluster analysis to decompose the particles into several classes; each particle treated its containing class as its neighborhood and used the center of the class to guide its flight. Similarly, Mei and Zhou [157] and Alizadeh et al. [158] suggested adopting fuzzy clustering to classify the particles into classes and then using the class centers to lead the particles directly. Zhan et al. [159] analyzed the population distribution data with cluster analysis to achieve adaptive control of the parameters.
It follows that, although some research has used ML to improve PSO, no systematic framework has yet formed in this area, and it remains an open and challenging research topic.
1.4 Contributions of the Thesis
This thesis aims to enhance the universality and global search ability of PSO, and develops research on adaptive parameter control, orthogonal operator design, and collaborative population interaction. The population-based and iterative nature of the evolutionary process makes PSO create a great amount of search data and historical data. ML, which is able to acquire useful information and knowledge, can be used to analyze, process, and apply these data, feeding back into the design of the algorithm and the search of the solution space to further improve the algorithm. Based on this approach, this thesis carries out research into ML-aided PSO and its engineering applications, and introduces statistical analysis, orthogonal prediction, and ensemble learning into the design of PSO to improve its performance and extend its application fields.
The main research work and innovations of this thesis are summarized as follows:
(1) Based on the statistical analysis technology in ML, this thesis proposes an adaptive PSO (APSO) to improve the universality of the algorithm.
PSO requires different parameters and strategies in different running stages, which makes it challenging to set the optimum parameters in each evolutionary stage for different problems. This thesis uses the capability of ML techniques to discover useful information through statistical analysis, analyzes the population distribution data and fitness values, and proposes an evolutionary state estimation (ESE) method to implement adaptive control of the algorithm's parameters and strategies and improve the universality of PSO on different optimization problems.
(2) Based on the orthogonal prediction technique, this thesis proposes orthogonal learning PSO (OLPSO) to enhance the rapid global search ability of the algorithm.
To address the problem that the velocity update equation in traditional PSO cannot make full use of the personal and population experience, and inspired by orthogonal design and orthogonal prediction techniques, this thesis proposes a novel orthogonal learning (OL) method to modify the velocity update operator. The OL method discovers useful information at the least computational cost to predict and construct a guidance vector carrying the optimum search experience via an orthogonal combination of the personal best and population best experience. The OL method further increases the global search ability of the algorithm and makes OLPSO an effective and efficient tool for solving large-scale and multimodal optimization problems.
(3) Learning from the idea of ensemble learning in ML, this thesis proposes a coevolutionary multiswarm PSO (CMPSO) to increase the application effectiveness on multiobjective problems.
Inspired by the idea of ensemble learning, which uses multiple classifiers to improve classification quality, this thesis introduces multiple populations to solve multiobjective problems. Multiple populations for multiple objectives (MPMO) is an optimization framework that uses multiple populations to optimize multiple objectives, with each population solving one fixed objective. Based on the MPMO framework, CMPSO escapes from the
problem that it is difficult to assign a fitness value to each individual in the population when there are multiple objectives. On the other hand, each population evolves under its corresponding objective and is able to find good solutions in the corresponding objective space. Meanwhile, to avoid the undue impact of any single objective, an information sharing mechanism is designed to promote communication and collaborative evolution between the populations, make the solutions uniformly distributed along the whole Pareto front, and increase the application effectiveness.
(4) Apply the improved OLPSO to the power electronic circuit (PEC) design problem and extend its application in the engineering area.
The PEC design optimization problem is a complex engineering application problem. There are a number of resistors, inductors, and capacitors in a PEC, and how to set the values of these components is a critical part of circuit design. Traditionally, engineers obtain initial results from physical circuit equations based on experience, and then revise the circuit design step by step within a predefined small value range through trial-and-error methods. This approach not only requires professional knowledge but is also difficult to apply to complex circuits without an exact mathematical model. In order to find an effective method for PEC design, this thesis studies how to use PSO to optimize the PEC problem and proposes a PEC optimization method based on OLPSO. The algorithm not only provides a new solution but also extends the successful application of PSO in the engineering area. This thesis improves the PEC optimization model and proposes a "free searching range" model to address the difficulty of predicting the component value ranges in engineering practice. Although this model is closer to real practice, it brings challenges for optimization algorithms. Therefore, this thesis combines the global search ability of OLPSO with the "free searching range" feature in circuit optimization to solve the new PEC design problem and extend the application of the algorithm in the engineering area.
1.5 Organization of the Thesis
There are six chapters in this thesis, and the overall structure is organized as shown in
Fig. 1-6.
Fig. 1-6 The organization structure and relationship illustration
The first chapter is the introduction, which raises the existing problems in the development of the algorithms; the second, third, and fourth chapters present the algorithmic research that solves the problems raised in Chapter 1; the fifth chapter describes the application of the algorithms as a practical test; and the final chapter gives the summary and outlook. The main contents and organization of each chapter are described as follows:
Chapter 1 is the introduction. In this chapter, the origin and the basic flow of PSO are introduced first, followed by a survey of related work on the development history and current research status of PSO from multiple views, such as theoretical study, parameter setting, operator design, population interaction, discrete optimization, and practical application. To address the problems existing in parameter control, operator design, and population interaction, the possibility and advantages of using ML to aid PSO are described, and the current ML-aided PSO algorithms are summarized. After that, the main work and innovations of the ML-aided PSO proposed in this thesis and its application research are briefly introduced. Finally, the organization of the chapters of the thesis is presented.
Chapter 2 proposes the adaptive PSO based on statistical analysis in ML. Section 2.1 describes the background and motivation of this work, and indicates that the use of
statistical analysis is able to discover information and knowledge from the population distribution data and fitness values created while the algorithm runs. This information can be used to design an efficient evolutionary state estimation method and an effective adaptive control mechanism for parameters and strategies, improving the universality of the algorithm across different evolutionary states and different optimization problems. Section 2.2 gives a detailed introduction to the evolutionary state estimation based on statistical analysis. In Section 2.3, the adaptive PSO based on evolutionary state estimation is presented in detail. Section 2.4 reports the experimental verification and comparison. In Section 2.5, a further analysis of the search behavior of the adaptive PSO is given. Section 2.6 is the chapter summary.
Chapter 3 proposes orthogonal learning PSO based on orthogonal design and prediction techniques in ML. Section 3.1 introduces the background and motivation of this chapter, indicates that the traditional learning methods in PSO do not make full use of the personal and population search information, and analyzes the advantages of using orthogonal design and prediction techniques to discover and utilize the personal and population experience, predicting and constructing a learning vector that guides the particles to converge to the global optimum rapidly and improves the global search performance on complex multimodal problems. Section 3.2 gives a detailed introduction to the orthogonal learning PSO. Section 3.3 presents the experimental verification and comparison. Section 3.4 is the chapter summary.
Chapter 4 proposes a co-evolutionary multipopulation multiobjective (MPMO) PSO algorithm. Section 4.1 first describes the background and motivation, and indicates the possibility and advantages of adopting multiple populations to optimize multiple objectives, with one population corresponding to one objective. MPMO not only avoids the fitness assignment problem of traditional multiobjective algorithms, in which each individual evolves for all objectives, but also searches fully in each objective space. This helps obtain a uniform distribution of solutions along the Pareto front and an effective application of PSO to multiobjective problems. Section 4.2 briefly introduces the related concepts of and work on multiobjective problems. Section 4.3 gives a detailed description of the co-evolutionary multipopulation multiobjective PSO algorithm. Section 4.4 presents the experiments and comparisons. Section 4.5 gives the chapter summary.
Chapter 5 applies the OLPSO proposed in Chapter 3 to PEC design. Section 5.1 first describes the background and motivation, and indicates that existing PEC optimization models do not handle the condition that the ranges of industrial circuit components are unpredictable. A "free searching range" PEC optimization model is proposed; the rapid search ability of OLPSO benefits its application to the new PEC model, expanding the engineering application field of PSO and providing a more effective and efficient technique. Section 5.2 briefly introduces the related knowledge of PEC. Section 5.3 gives a detailed description of the PEC design method based on OLPSO. Experiments and comparisons are carried out in Section 5.4. Finally, Section 5.5 concludes the chapter.
Chapter 6 summarizes the research work of the whole thesis and gives an outlook on future work.
Chapter 2 Adaptive Particle Swarm Optimization Based on Statistical Analysis Techniques in Machine Learning
2.1 Introduction
Particle swarm optimization (PSO) has become an optimization algorithm widely accepted by science and engineering researchers in the less than 20 years since it was invented. As PSO is easy to implement, it has progressed rapidly in recent years, with many successful applications in solving real-world optimization problems. Similar to other EAs, PSO is a population-based iterative algorithm. Its simple concept makes PSO converge more rapidly than other EAs such as GA. However, how to further improve the convergence rate of PSO in practical applications and how to avoid being trapped in local optima on complex multimodal problems are still two hot and challenging problems.
Therefore, accelerating convergence speed and avoiding local optima have become the
two most important and appealing goals in PSO research. A number of variant PSO
algorithms have hence been proposed to achieve these two goals. As mentioned in Section
1.2.3, typical parameter control methods include the inertia weight ω linearly decreasing with the iterative generations, introduced by Shi and Eberhart [45], and the linearly time-varying acceleration coefficients proposed by Ratnaweera et al. [28]. However, since PSO is an optimization process with both guidance and randomness, these linearly varying methods do not match its nonlinear evolutionary process and are greatly limited in application. How to control
both the inertia weight and acceleration coefficients is crucial to improve the performance of
PSO. On the other hand, another active research trend in PSO is hybrid PSO, which combines
PSO with other evolutionary operation, such as selection [60], mutation [64], local search
[67], restart [160], and re-initialization [161]. These hybrid operations are usually
implemented in every generation or are controlled by adaptive strategies using stagnated
generations as a trigger. While these methods have brought improvements in PSO, the
performance may be further enhanced if the auxiliary operations are adaptively performed
with a systematic treatment according to the evolutionary state. For example, the mutation,
reset, and re-initialization operations can be more pertinent when the algorithm has
converged to a local optimum rather than when it is exploring.
Thus, to accelerate the convergence speed and avoid local optima through parameter control and combination with auxiliary techniques, one important problem is how to estimate the evolutionary state and adaptively control the parameters and execution strategies according to the different evolutionary states, which is a critical part of the systematic adaptive control design of the PSO algorithm.
PSO is a population-based iterative optimization algorithm, and a great amount of useful information is generated during the evolutionary process. During a PSO run, the population distribution characteristics vary not only with the generation number but also with the evolutionary state. To illustrate the dynamics of the particle distribution in the PSO process, herein we take the time-varying 2-D Sphere function

f_1(\mathbf{X}) = (x_1 - r)^2 + (x_2 - r)^2, \quad x_i \in [-10, 10], \; i = 1, 2 \qquad (2-1)
as an example, where r is initialized to -5 and shifts to 5 at the 50th generation in a 100
generation optimization process. That is, the theoretical minimum of f1 shifts from (–5, –5) to
(5, 5) half way in the search process. Using a GPSO [45] with 100 particles to solve this
minimization problem, the population distributions in various running phases were observed
as shown in Fig. 2-1.
[Fig. 2-1 shows six scatter plots of the particle positions (X1 versus X2, both in [-10, 10]), with the gBest particle marked, at: (a) Generation = 1, (b) Generation = 25, (c) Generation = 49, (d) Generation = 50, (e) Generation = 60, (f) Generation = 80.]
Fig. 2-1 The population distributions of PSO during the evolutionary process.
It can be seen in Fig. 2-1(a) that following the initialization, the particles start to explore
throughout the search space without an evident control center. Then the learning mechanisms
of the PSO pull many particles to swarm together towards the optimal region, as seen in Fig.
2-1(b). Then the population converges to the best particle, in Fig. 2-1(c). At the 50th
generation, the bottom of the sphere is shifted from (–5, –5) to (5, 5). It is seen in Fig. 2-1(d)
that a new leader quickly emerges somewhat far away from the current clustering swarm. It
leads the swarm to jump out of the previous optimal region to the new one (Fig. 2-1(e)),
forming a second convergence (Fig. 2-1(f)). From this simple investigation, it can be seen
that the population distribution information can vary significantly during the run time and that PSO has an ability to adapt to a time-varying environment. However, the ability to jump out of local optima should be enhanced when the algorithm is trapped in a local optimum, and the convergence rate towards the new globally optimal solution should be improved through parameter control methods.
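As a side note, the observation experiment above can be reproduced with a few dozen lines of code. The following is only a minimal sketch, not the exact configuration behind Fig. 2-1: it assumes a standard global-best PSO with a linearly decreasing inertia weight (0.9 to 0.4), c1 = c2 = 2.0, and a velocity clamp, and re-evaluates the personal-best memories when the optimum of Eq. (2-1) shifts at the 50th generation.

```python
import random

def f1(x, r):
    # Time-varying 2-D Sphere function of Eq. (2-1); minimum at (r, r).
    return (x[0] - r) ** 2 + (x[1] - r) ** 2

def gpso_time_varying(n=100, gens=100, seed=1):
    random.seed(seed)
    lo, hi, vmax = -10.0, 10.0, 4.0
    c1 = c2 = 2.0
    r = -5.0
    X = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    V = [[0.0, 0.0] for _ in range(n)]
    P = [x[:] for x in X]                      # personal best positions
    pf = [f1(p, r) for p in P]                 # personal best fitness values
    g = min(range(n), key=lambda i: pf[i])     # index of the globally best particle
    for gen in range(1, gens + 1):
        if gen == 50:                          # the optimum shifts from (-5,-5) to (5,5)
            r = 5.0
            pf = [f1(p, r) for p in P]         # re-evaluate the memories
            g = min(range(n), key=lambda i: pf[i])
        w = 0.9 - 0.5 * gen / gens             # linearly decreasing inertia weight
        for i in range(n):
            for d in range(2):
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (P[i][d] - X[i][d])
                           + c2 * random.random() * (P[g][d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))
                X[i][d] = max(lo, min(hi, X[i][d] + V[i][d]))
            fx = f1(X[i], r)
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < pf[g]:
                    g = i
    return P[g], pf[g]

best, val = gpso_time_varying()
```

Printing the population at the generations of Fig. 2-1 reproduces the qualitative behavior described above: the swarm converges near (-5, -5), then a new leader emerges after the shift and pulls the swarm towards (5, 5).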
Thus, PSO provides population distribution information and fitness data during the running process. In order to discover useful information and obtain more knowledge from these data to improve the performance of the algorithm, this chapter uses statistical analysis techniques from machine learning to analyze the search data and historical data, including the population distributions and fitness values, proposes an evolutionary state estimation (ESE) method based on statistical analysis to determine the evolutionary states, and adaptively controls the parameters and execution strategies to accelerate the convergence rate and enhance the global search ability.
With the help of the statistical analysis technique in machine learning, this chapter proposes an adaptive particle swarm optimization (APSO). The important contributions and innovations mainly include the following three aspects:
1) Introduce the statistical analysis technique into the PSO algorithm, and design a novel evolutionary state estimation (ESE) approach based on the population distribution information and fitness data, providing an effective basis for adaptive control.
2) Propose an adaptive parameter control strategy for PSO based on the ESE approach. The ESE method makes it possible to adaptively control the parameters according to the evolutionary state, balance the local exploitation and global exploration abilities, and accelerate the convergence rate of the algorithm.
3) Design an elitist learning strategy (ELS). The ELS is carried out adaptively under the control of ESE and increases the population diversity to avoid being trapped in local optima when the search is identified to be in a convergence state.
2.2 Evolutionary State Estimation
Based on the search behaviors and population distribution characteristics of the PSO, an
ESE approach is developed in this subsection. We define an evolutionary factor to describe
the population state. The distribution information in Fig. 2-1 can be formulated as an evolutionary factor, illustrated in Fig. 2-2, by calculating the mean distance of each particle to
all the other particles. It is reasonable to expect that the mean distance from the globally best
particle to other particles would be minimal in the convergence state since the global best
tends to be surrounded by the swarm. In contrast, this mean distance would be maximal in the
jumping-out state because the global best is likely to be away from the crowding swarm.
Therefore, the ESE approach will take into account the population distribution information in
every generation, as detailed in the following steps.
[Fig. 2-2 illustrates three swarm distributions, with d_g ≈ d_i, d_g << d_i (convergence), and d_g >> d_i (jumping out), where d_g is the mean distance from the globally best particle to the others, e.g. d_g = (d_{gp1} + d_{gp2} + d_{gp3} + d_{gp4} + d_{gp5})/5 for a five-particle swarm, and the evolutionary factor is f = (d_g - d_min)/(d_max - d_min).]
Fig. 2-2 PSO population distribution information quantified by an evolutionary factor f.
Step 1: At a current position, calculate the mean distance of each particle i to all the other
particles. For example, this mean distance can be measured using a Euclidean metric

d_i = \frac{1}{N-1} \sum_{j=1, j \neq i}^{N} \sqrt{ \sum_{k=1}^{D} \left( x_i^k - x_j^k \right)^2 } \qquad (2-2)

where N and D are the population size and the number of dimensions, respectively.
Step 2: Denote di of the globally best particle as dg. Compare all di’s and determine the
maximum and minimum distances dmax and dmin. Compute an ‘evolutionary factor’ f as
defined by

f = \frac{d_g - d_{\min}}{d_{\max} - d_{\min}} \in [0, 1] \qquad (2-3)
Step 3: Classify f into one of the four sets S1, S2, S3 and S4, representing the states of
exploration, exploitation, convergence, and jumping out, respectively. These sets could be simple crisp intervals for a rigid classification. As Fig. 2-1 and Fig. 2-2 show, the evolutionary factor f varies with the evolutionary state and is able to reflect the current evolutionary state. The state transitions can occur in order during a PSO run: for example, the algorithm can enter the exploitation state from the exploration state, then get into the convergence state, switch to the jumping-out state when it meets a local optimum, and then enter another loop of exploration, exploitation, and convergence. However, the state transition can be nondeterministic and fuzzy, and different algorithms or applications can exhibit different transition characteristics. From Fig. 2-1 and Fig. 2-2, we can also see that a large f may indicate either the exploration or the jumping-out state, while a small f may indicate either the exploitation or the convergence state. It is hence recommended that fuzzy classification be adopted. Combining the advantages of the evolutionary factor representation and fuzzy control, this chapter proposes an evolutionary state estimation method based on fuzzy classification, which assigns the running process to one of the four evolutionary states according to the fuzzy membership functions depicted in Fig. 2-3. The numerical formulation of the classification is as follows.
Fig. 2-3 Fuzzy membership functions for the four evolutionary states.
a) Exploration: A medium to large value of f represents S1, whose membership
function is defined as:
\mu_{S_1}(f) = \begin{cases} 0, & 0 \le f \le 0.4 \\ 5f - 2, & 0.4 < f \le 0.6 \\ 1, & 0.6 < f \le 0.7 \\ -10f + 8, & 0.7 < f \le 0.8 \\ 0, & 0.8 < f \le 1 \end{cases} \qquad (2-4a)
b) Exploitation: A medium to small value of f represents S2, whose membership function is defined as:
\mu_{S_2}(f) = \begin{cases} 0, & 0 \le f \le 0.2 \\ 10f - 2, & 0.2 < f \le 0.3 \\ 1, & 0.3 < f \le 0.4 \\ -5f + 3, & 0.4 < f \le 0.6 \\ 0, & 0.6 < f \le 1 \end{cases} \qquad (2-4b)
c) Convergence: A minimal value of f represents S3, whose membership function is
defined as:
\mu_{S_3}(f) = \begin{cases} 1, & 0 \le f \le 0.1 \\ -5f + 1.5, & 0.1 < f \le 0.3 \\ 0, & 0.3 < f \le 1 \end{cases} \qquad (2-4c)
d) Jumping out: When PSO is jumping out of a local optimum, the globally best particle is distinctly away from the swarming cluster, as shown in Fig. 2-2(c). Hence, the largest values of f reflect S4, whose membership function is thus defined as:
\mu_{S_4}(f) = \begin{cases} 0, & 0 \le f \le 0.7 \\ 5f - 3.5, & 0.7 < f \le 0.9 \\ 1, & 0.9 < f \le 1 \end{cases} \qquad (2-4d)
The state of the PSO is initialized to the exploration state S1. In each generation, the value of the evolutionary factor f is calculated, and the evolutionary state is then classified according to the following control rules:
Unique: If f has a positive degree of membership in only one membership function, then classify the evolutionary state to the corresponding state. For example, f is classified to S3 when f = 0.1, and to S1 when f = 0.65.
Stability and proximity: If f has degrees of membership in two membership functions, then we follow first the stability rule and then the proximity rule to classify the evolutionary state. For example, an f evaluated to 0.45 has one degree of membership for S1 and another for S2, indicating that the PSO is in a transitional period between S1 and S2. According to the stability rule, we first look at the previous state: if the previous state is S1 or S2, then the algorithm retains the previous state.
If the previous state is neither S1 nor S2, then we follow the proximity rule along the sequence S1 ⇒ S2 ⇒ S3 ⇒ S4 ⇒ S1 … to classify the evolutionary state: if the previous state is S3, then f is classified to S2; if the previous state is S4, then f is classified to S1. In this way, the algorithm avoids stochastic oscillation and ensures the coherence of the evolutionary state transitions.
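The ESE procedure of Steps 1-3 together with the classification rules can be sketched as follows. This is an illustrative implementation rather than the thesis code; in particular, the proximity rule is realized here as choosing, among the candidate states, the one nearest to the previous state in the cyclic order S1 ⇒ S2 ⇒ S3 ⇒ S4 ⇒ S1, an assumed reading that reproduces both examples given above.

```python
import math

# Steps 1 and 2: evolutionary factor f of Eqs. (2-2) and (2-3).
def evolutionary_factor(positions, g):
    N = len(positions)
    def mean_dist(i):
        return sum(math.dist(positions[i], positions[j])
                   for j in range(N) if j != i) / (N - 1)
    d = [mean_dist(i) for i in range(N)]
    dmin, dmax = min(d), max(d)
    if dmax == dmin:            # degenerate swarm: all mean distances equal
        return 0.0
    return (d[g] - dmin) / (dmax - dmin)

# Step 3: fuzzy membership functions of Eqs. (2-4a)-(2-4d).
def mu_s1(f):  # exploration
    if f <= 0.4: return 0.0
    if f <= 0.6: return 5.0 * f - 2.0
    if f <= 0.7: return 1.0
    if f <= 0.8: return -10.0 * f + 8.0
    return 0.0

def mu_s2(f):  # exploitation
    if f <= 0.2: return 0.0
    if f <= 0.3: return 10.0 * f - 2.0
    if f <= 0.4: return 1.0
    if f <= 0.6: return -5.0 * f + 3.0
    return 0.0

def mu_s3(f):  # convergence
    if f <= 0.1: return 1.0
    if f <= 0.3: return -5.0 * f + 1.5
    return 0.0

def mu_s4(f):  # jumping out
    if f <= 0.7: return 0.0
    if f <= 0.9: return 5.0 * f - 3.5
    return 1.0

MUS = {"S1": mu_s1, "S2": mu_s2, "S3": mu_s3, "S4": mu_s4}
IDX = {"S1": 0, "S2": 1, "S3": 2, "S4": 3}

def classify(f, prev):
    """Unique rule first, then the stability rule, then the proximity rule."""
    active = [s for s in MUS if MUS[s](f) > 0.0]
    if len(active) == 1:        # unique rule
        return active[0]
    if prev in active:          # stability rule: retain the previous state
        return prev
    # proximity rule: nearest candidate in the cycle S1 => S2 => S3 => S4 => S1
    def cyc_dist(a, b):
        d = abs(IDX[a] - IDX[b])
        return min(d, 4 - d)
    return min(active, key=lambda s: cyc_dist(prev, s))
```

For instance, classify(0.45, "S3") returns "S2" and classify(0.45, "S4") returns "S1", matching the two examples in the text.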
2.3 Adaptive Particle Swarm Optimization
2.3.1 Adaptation of the Inertia Weight
The inertia weight ω in PSO is used to balance the global and local search capabilities.
Many researchers have advocated that the value of ω should be large in the exploration state
and be small in the exploitation state. However, the state transition in PSO is nonlinear, nondeterministic, and fuzzy, so traditional parameter control methods based on linear change find it difficult to ensure good algorithm performance.
From the definition of the evolutionary factor f, we can see that the changing process of f describes the running state of the algorithm. In addition, f shares some characteristics with the inertia weight ω, in that f is also relatively large during the exploration state and becomes relatively small in the convergence state. Hence it would be beneficial to allow ω to follow the evolutionary states. Based on this observation, this chapter designs an adaptive transformation of ω using a sigmoid mapping

\omega(f) = \frac{1}{1 + 1.5 e^{-2.6 f}} \in [0.4, 0.9], \quad \forall f \in [0, 1] \qquad (2-5)
In this chapter, ω is initialized to 0.9. As ω is not necessarily monotonic with time, but is monotonic with f, ω will thus adapt to the search environment characterized by f. In a jumping-out or exploration state, the large f and ω benefit the global search, as discussed earlier. Conversely, when f is small, an exploitation or convergence state is detected, and hence ω decreases to benefit the local search. The relationship between ω and f is illustrated in
Fig. 2-4.
Fig. 2-4 The relationship between inertia weight ω and evolutionary factor f.
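Eq. (2-5) is a one-line mapping; the short sketch below simply evaluates it and confirms the endpoints of its range.

```python
import math

def inertia_weight(f):
    # Eq. (2-5): omega(f) = 1 / (1 + 1.5 * exp(-2.6 f)),
    # which maps f in [0, 1] monotonically into [0.4, 0.9].
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))
```

At f = 0 the mapping gives exactly 1/2.5 = 0.4, and at f = 1 it gives approximately 0.9, consistent with Fig. 2-4.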
2.3.2 Control of the Acceleration Coefficients
According to research work on the acceleration coefficients of PSO, the parameters c1 and c2 play different roles in the performance of the algorithm. Parameter c1 represents the
'self-cognition' that pulls the particle to its own historical best position, helping explore local
niches and maintaining the diversity of the swarm. Parameter c2 represents ‘social-influence’
that pushes the swarm to converge to the current globally best region, helping with fast
convergence. These are two different learning mechanisms and should be given different
treatments in different evolutionary states. In this chapter, the acceleration coefficients are both initialized to 2.0 and adaptively controlled according to the evolutionary state, with the strategies developed as follows.
Strategy 1: Increasing c1 and decreasing c2 in an exploration state. It is important to
explore as many optima as possible in the exploration state. Hence, increasing c1 and
decreasing c2 can help particles explore individually and achieve their own historical best
positions, rather than crowd around the current best particle that is likely to be associated
with a local optimum.
Strategy 2: Increasing c1 slightly and decreasing c2 slightly in an exploitation state. In
this state, the particles are making use of local information and grouping towards
possible local optimal niches indicated by the historical best position of each particle.
Hence, increasing c1 slowly and maintaining a relatively large value can emphasize the
search and exploitation around pBesti. In the meantime, the globally best particle does not always locate the globally optimal region at this stage. Therefore, decreasing c2 slowly and maintaining a small value can avoid the deception of a local optimum.
Further, an exploitation state is more likely to occur after an exploration state and before
a convergence state. Hence changing directions for c1 and c2 should be slightly altered
from the exploration state to the convergence state.
Strategy 3: Increasing c1 slightly and increasing c2 slightly in a convergence state. In the
convergence state, the swarm seems to find the globally optimal region and hence the
influence of c2 should be emphasized to lead other particles to the probable globally
optimal region. Thus the value of c2 should be increased. On the other hand, the value of
c1 should be decreased to let the swarm converge fast. However, such a strategy would
prematurely saturate the two parameters to their lower and upper bounds, respectively.
The consequence is that the swarm will be strongly attracted by the current best region,
causing premature convergence, which is harmful if the current best region is a local
optimum. In order to avoid this, both c1 and c2 are increased slightly.
Note that, slightly increasing both acceleration parameters will eventually have the
same desired effect as reducing c1 and increasing c2, because their values will be drawn
to around 2.0 due to an upper bound of 4.0 for the sum of c1 and c2 (refer to Eq. (2-7)
discussed in the following subsection).
Strategy 4: Decreasing c1 and increasing c2 in a jumping-out state. When the globally
best particle is jumping out of local optimum towards a better one, it is likely to be far
away from the crowding cluster. As soon as this new region is found by a particle, which
becomes the (possibly new) leader, others should follow it and fly to this new region as
fast as possible. A large c2 together with a relatively small c1 helps to obtain this goal.
It should be noted that the above adjustments of the acceleration coefficients should not be too abrupt. Hence, the maximum increment or decrement of ci (i = 1, 2) between two generations is bounded by

\left| c_i(g+1) - c_i(g) \right| \le \delta, \quad i = 1, 2 \qquad (2-6)

where δ is termed the 'acceleration rate' in this chapter. δ is a uniformly generated random value in the interval [0.05, 0.1] and is regenerated each time it is used. Note that we use 0.5δ in Strategies 2 and 3, where 'slight' changes are recommended.
Further, the interval [1.5, 2.5] is chosen to clamp both c1 and c2, according to the research
work in [38][159]. If the value obtained by Eq. (2-6) violates a bound, it is set to the
corresponding violated bound value. The interval [3.0, 4.0] is used to bound the sum of the
two parameters. If the sum is larger than 4.0, both c1 and c2 are normalized to

$c_i = \frac{c_i}{c_1 + c_2} \cdot 4.0, \quad i = 1, 2$    (2-7)
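The state-dependent update of c1 and c2 described by Strategies 1-4, together with the bounds of Eqs. (2-6) and (2-7), can be sketched as follows. This is an illustrative sketch only; the function name and the dictionary encoding of the four strategies are ours, not part of the dissertation.

```python
import random

C_MIN, C_MAX = 1.5, 2.5   # per-coefficient clamp interval
SUM_MAX = 4.0             # upper bound on c1 + c2, Eq. (2-7)

def adapt_coefficients(c1, c2, state):
    """One generation of the state-dependent update of c1 and c2.

    Illustrative sketch of Strategies 1-4 with the bounds of
    Eqs. (2-6) and (2-7).
    """
    delta = random.uniform(0.05, 0.1)            # acceleration rate, Eq. (2-6)
    # Signed steps for (c1, c2); 0.5*delta where 'slight' changes are used.
    steps = {
        'exploration':  ( delta,       -delta),        # Strategy 1
        'exploitation': ( 0.5 * delta, -0.5 * delta),  # Strategy 2
        'convergence':  ( 0.5 * delta,  0.5 * delta),  # Strategy 3
        'jumping-out':  (-delta,        delta),        # Strategy 4
    }
    d1, d2 = steps[state]
    c1 = min(max(c1 + d1, C_MIN), C_MAX)         # clamp to [1.5, 2.5]
    c2 = min(max(c2 + d2, C_MIN), C_MAX)
    if c1 + c2 > SUM_MAX:                        # normalization, Eq. (2-7)
        scale = SUM_MAX / (c1 + c2)
        c1, c2 = c1 * scale, c2 * scale
    return c1, c2
```

With c1 = c2 = 2.0 in a convergence state, the slight increases push the sum above 4.0 and the normalization of Eq. (2-7) draws both values back towards 2.0, matching the behavior described above.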
The entire process of the ESE enabled adaptive parameter control is illustrated in Fig.
2-5.
Fig. 2-5 The ideal variations of the acceleration coefficients c1 and c2.
2.3.3 Elitist Learning Strategy Adaptation
The parameter adaptation of PSO analyzes the information generated during the run
with a statistical analysis technique from machine learning, and then learns to adopt a
suitable and efficient strategy for adaptive parameter control. However, parameter adaptation
alone may mislead the algorithm towards local optima on multimodal optimization
problems. Hence, an 'elitist learning strategy' (ELS) is designed here and applied to the globally best
particle to help it jump out of local optimal regions when the search is identified to be in a
convergence state.
The reason PSO becomes trapped in local optima is the lack of a guidance vector for
the global best solution. Under the guidance of the global best solution, the whole population
converges to gBest and stagnates at the current best region. Unlike the other particles, the
global leader has no exemplar to follow; it needs fresh momentum to improve itself. Hence,
a perturbation-based ELS is developed to help gBest push itself out to a potentially better
region. If a better region is found, the rest of the swarm will follow the leader to jump
out and converge to the new region.
The process of ELS is described in detail as follows:
Step 1: Randomly choose one dimension of gBest's historical best position, denoted by
Gd for the dth dimension. Only one dimension is chosen because the local optimum is
likely to share some good structure with the global optimum, and this structure should be
protected. As every dimension has the same probability of being chosen, the ELS operation
can be regarded as acting on every dimension in a statistical sense. Similar to simulated
annealing and the mutation operation in evolutionary programming or evolution strategies,
the elitist learning is performed through a Gaussian perturbation

$G^d = G^d + (X^d_{\max} - X^d_{\min}) \cdot \mathrm{Gaussian}(\mu, \sigma^2)$    (2-8)

The search range $[X^d_{\min}, X^d_{\max}]$ is the same as the lower and upper bounds of the
problem. Gaussian(μ, σ²) is a random number drawn from a Gaussian distribution with zero
mean and standard deviation (SD) σ, termed the 'elitist learning rate'. Similar to some
time-varying neural network training schemes, it is suggested that σ decrease linearly
with the generation number, given by

$\sigma = \sigma_{\max} - (\sigma_{\max} - \sigma_{\min}) \cdot t / T$    (2-9)

where σmax = 1.0 and σmin = 0.1 are the upper and lower bounds of σ, representing the learning
scale to reach a new region.
Step 2: Determine whether the obtained Gd is in the range of [Xdmin, Xdmax]. If the value
violates the bound constraint, then Gd is set as the corresponding violated bound value.
Step 3: Evaluate the new gBest. In ELS, the new position will be accepted if and only if
its fitness is better than the current gBest. Otherwise, the new position is used to replace
the particle with the worst fitness in the swarm.
It should be noted that ELS is not performed in every generation; it is carried out only
when the algorithm is in a convergence state. In the other evolutionary states, the algorithm
retains global search ability and can avoid local optima on its own, so there is no need to
perform ELS. Only when the algorithm is identified to be in a convergence state, where no
vector can guide gBest out of the local optimum, is ELS applied to the globally best particle
to help it jump out of local optimal regions. The execution of ELS is therefore an adaptive
control process.
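Steps 1 and 2 of ELS can be sketched as follows (an illustrative sketch; the function and variable names are ours, and the accept/replace decision of Step 3 is left to the caller):

```python
import random

def elitist_learning(gbest, x_min, x_max, t, T, sigma_max=1.0, sigma_min=0.1):
    """Perturb one randomly chosen dimension of gBest's historical best
    position, following Eqs. (2-8) and (2-9).

    Returns a candidate position; the accept/replace decision of Step 3
    is left to the caller.
    """
    sigma = sigma_max - (sigma_max - sigma_min) * t / T   # Eq. (2-9)
    d = random.randrange(len(gbest))                      # Step 1: one dimension only
    candidate = list(gbest)                               # keep the rest of the structure
    candidate[d] += (x_max - x_min) * random.gauss(0.0, sigma)  # Eq. (2-8)
    candidate[d] = min(max(candidate[d], x_min), x_max)   # Step 2: clamp to bounds
    return candidate
```

Because only a single dimension is perturbed, the good structure that gBest has already found in the other dimensions is preserved, as argued in Step 1.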
2.4 Benchmark Tests and Comparisons
2.4.1 Benchmark Functions and Algorithm Configuration
Twelve benchmark functions, listed in Table 2-1, are used for the experimental tests here.
Seven existing PSO algorithms, detailed in Table 2-2, are compared with the APSO. The
first 6 functions f1-f6 are unimodal, while the remaining 6 functions f7-f12 are multimodal
[162]. The dimension of all functions is 30, and the global best fitness value of every
function is 0, except for the Schwefel function f7, whose optimum is -12569.5. The "Accept"
column in Table 2-1 indicates whether a solution found by an algorithm falls between the
acceptable value and the actual global optimum. Since not all algorithms can find the global
optimum, "Accept" can be used to measure the success rate under the predefined error and
the corresponding convergence rate of the algorithm.

Table 2-1 The 12 Functions Used in the Comparisons
Function | Search range | Accept | Name
Unimodal:
$f_1(x) = \sum_{i=1}^{D} x_i^2$ | [-100,100]^D | 0.01 | Sphere [162]
$f_2(x) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | [-10,10]^D | 0.01 | Schwefel's P2.22 [162]
$f_3(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} x_j \right)^2$ | [-100,100]^D | 100 | Quadric [162]
$f_4(x) = \sum_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | [-10,10]^D | 100 | Rosenbrock [162]
$f_5(x) = \sum_{i=1}^{D} (\lfloor x_i + 0.5 \rfloor)^2$ | [-100,100]^D | 0 | Step [162]
$f_6(x) = \sum_{i=1}^{D} i x_i^4 + random[0,1)$ | [-1.28,1.28]^D | 0.01 | Quadric Noise [162]
Multimodal:
$f_7(x) = -\sum_{i=1}^{D} x_i \sin(\sqrt{|x_i|})$ | [-500,500]^D | -10000 | Schwefel [162]
$f_8(x) = \sum_{i=1}^{D} [x_i^2 - 10\cos(2\pi x_i) + 10]$ | [-5.12,5.12]^D | 50 | Rastrigin [162]
$f_9(x) = \sum_{i=1}^{D} [y_i^2 - 10\cos(2\pi y_i) + 10]$, where $y_i = x_i$ if $|x_i| < 0.5$ and $y_i = round(2x_i)/2$ if $|x_i| \ge 0.5$ | [-5.12,5.12]^D | 50 | Noncontinuous Rastrigin [59]
$f_{10}(x) = -20\exp(-0.2\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}) - \exp(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)) + 20 + e$ | [-32,32]^D | 0.01 | Ackley [162]
$f_{11}(x) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos(x_i/\sqrt{i}) + 1$ | [-600,600]^D | 0.01 | Griewank [162]
$f_{12}(x) = \frac{\pi}{D}\{10\sin^2(\pi y_1) + \sum_{i=1}^{D-1}(y_i - 1)^2[1 + 10\sin^2(\pi y_{i+1})] + (y_D - 1)^2\} + \sum_{i=1}^{D} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{1}{4}(x_i + 1)$ and $u(x_i, a, k, m) = k(x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k(-x_i - a)^m$ if $x_i < -a$ | [-50,50]^D | 0.01 | Generalized Penalized [162]
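For concreteness, three of the Table 2-1 benchmarks can be written directly in Python. These are the standard definitions of the Sphere (f1), Rastrigin (f8) and Ackley (f10) functions; the Python function names are ours.

```python
import math

# Standard definitions of three Table 2-1 benchmarks; each has its
# global minimum value 0 at x = 0.

def sphere(x):
    return sum(v * v for v in x)

def rastrigin(x):
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / d
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```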
Table 2-2 The PSO Algorithms Used in the Comparisons
Algorithm | Topology | Parameter Settings | Reference
GPSO | Global Star | ω: 0.9-0.4, c1 = c2 = 2.0 | [45]
LPSO | Local Ring | ω: 0.9-0.4, c1 = c2 = 2.0 | [54]
VPSO | Local von Neumann | ω: 0.9-0.4, c1 = c2 = 2.0 | [54]
FIPS | Local URing | χ = 0.729, Σci = 4.1 | [25]
HPSO-TVAC | Global Star | ω: 0.9-0.4, c1: 2.5-0.5, c2: 0.5-2.5 | [28]
DMS-PSO | Dynamic Multi-swarm | ω: 0.9-0.2, c1 = c2 = 2.0, m = 3, R = 5 | [57]
CLPSO | Comprehensive Learning | ω: 0.9-0.4, c = 1.49445, m = 7 | [59]

The first three PSOs (GPSO, LPSO with ring neighborhood and VPSO with von
Neumann neighborhood) are regarded as standard PSOs and have been widely used in PSO
applications. The FIPS is a ‘fully informed’ PSO that uses all the neighbors to influence the
flying velocity. In FIPS, the URing topology structure is implemented with the wFIPS
algorithm for a higher success ratio. The HPSO-TVAC is a 'performance-improvement'
PSO that improves the acceleration parameters and incorporates a self-organizing technique.
The DMS-PSO is devoted to improving the topological structure in a dynamic way. Finally, in
Table 2-2, the CLPSO offers a comprehensive learning strategy, aiming at yielding better
performance for multimodal functions. The parameter configurations for these PSO variants
are also given in Table 2-2, according to their corresponding references. In the tests, the
algorithm configuration of the APSO is as follows. The inertia weight ω is initialized to 0.9
and c1 and c2 to 2.0, same as the common configuration in a standard PSO. These parameters
are then adaptively controlled during the run. Parameter δ in (2-6) is a random value
uniformly generated in the interval [0.05, 0.1], while parameters σ in (2-9) linearly decreases
from σmax = 1.0 to σmin = 0.1.
For a fair comparison among all the PSO algorithms, they are tested using the same
population size of 20, a value commonly adopted in PSO [163]. Further, all the
algorithms use the same budget of 2.0×10^5 FEs for each test function [159]. All the
experiments are carried out on the same machine, with a Celeron 2.26 GHz CPU, 256 MB of
memory and the Windows XP operating system. To reduce statistical error, each function
is run 30 times independently and the mean results are used in the comparison.
2.4.2 Comparisons on the Solution Accuracy
The solution accuracy of every PSO listed in Table 2-2 is compared with that of the
APSO. The results are shown in Table 2-3, in terms of the mean and standard
deviation of the solutions obtained in the 30 independent runs by each algorithm. Boldface in
the table indicates the best result among those obtained by all eight contenders. Fig. 2-6
presents the comparison graphically in terms of the convergence characteristics of the
evolutionary processes in solving the 12 different problems.

Table 2-3 Results Comparisons on Solution Accuracy Among 8 PSOs on 12 Test Functions
Function GPSO LPSO VPSO FIPS HPSO-TVAC DMS-PSO CLPSO APSO
f1 Mean 1.98×10-53 4.77×10-29 5.11×10-38 3.21×10-30 3.38×10-41 3.85×10-54 1.89×10-19 1.45×10-150
Std. Dev 7.08×10-53 1.13×10-28 1.91×10-37 3.60×10-30 8.50×10-41 1.75×10-53 1.49×10-19 5.73×10-150
f2 Mean 2.51×10-34 2.03×10-20 6.29×10-27 1.32×10-17 6.9×10-23 2.61×10-29 1.01×10-13 5.15×10-84
Std. Dev 5.84×10-34 2.89×10-20 8.68×10-27 7.86×10-18 6.89×10-23 6.6×10-29 6.51×10-14 1.44×10-83
f3 Mean 6.45×10-2 18.60 1.44 0.77 2.89×10-7 47.5 395 1.0×10-10
Std. Dev 9.46×10-2 30.71 1.55 0.86 2.97×10-7 56.4 142 2.13×10-10
f4 Mean 28.1 21.8627 37.6469 22.5387 13 32.3 11 2.84
Std. Dev 24.6 11.1593 24.9378 0.310182 16.5 24.1 14.5 3.27
f5 Mean 0 0 0 0 0 0 0 0
Std. Dev 0 0 0 0 0 0 0 0
f6 Mean 7.77×10-3 1.49×10-2 1.08×10-2 2.55×10-3 5.54×10-2 1.1×10-2 3.92×10-3 4.66×10-3
Std. Dev 2.42×10-3 5.66×10-3 3.24×10-3 6.25×10-4 2.08×10-2 3.94×10-3 1.14×10-3 1.7×10-3
f7 Mean -10090.16 -9628.35 -9845.27 -10113.8 -10868.57 -9593.33 -12557.65 -12569.5
Std. Dev 495 456.54 588.87 889.58 289 441 36.2 5.22×10-11
f8 Mean 30.7 34.90 34.09 29.98 2.39 28.1 2.57×10-11 5.8×10-15
Std. Dev 8.68 7.25 8.07 10.92 3.71 6.42 6.64×10-11 1.01×10-14
f9 Mean 15.5 30.40 21.33 35.91 1.83 32.8 0.167 4.14×10-16
Std. Dev 7.4 9.23 9.46 9.49 2.65 6.49 0.379 1.45×10-15
f10 Mean 1.15×10-14 1.85×10-14 1.4×10-14 7.69×10-15 2.06×10-10 8.52×10-15 2.01×10-12 1.11×10-14
Std. Dev 2.27×10-15 4.80×10-15 3.48×10-15 9.33×10-16 9.45×10-10 1.79×10-15 9.22×10-13 3.55×10-15
f11 Mean 2.37×10-2 1.10×10-2 1.31×10-2 9.04×10-4 1.07×10-2 1.31×10-2 6.45×10-13 1.67×10-2
Std. Dev 2.57×10-2 1.60×10-2 1.35×10-2 2.78×10-3 1.14×10-2 1.73×10-2 2.07×10-12 2.41×10-2
f12 Mean 1.04×10-2 2.18×10-30 3.46×10-3 1.22×10-31 7.07×10-30 2.05×10-32 1.59×10-21 3.76×10-31
Std. Dev 3.16×10-2 5.14×10-30 1.89×10-2 4.85×10-32 4.05×10-30 8.12×10-33 1.93×10-21 1.2×10-30
An interesting result is that all PSO algorithms reliably found the minimum of
f5. The optimum of f5 is a region rather than a single point, hence this problem is
relatively easy to solve with a 100% success rate. The comparisons in both Table 2-3 and Fig.
2-6 show that, when solving unimodal problems, the APSO offers the best performance on
most test functions. In particular, the APSO offers the highest accuracy on functions f1, f2, f3,
f4 and f5, and ranks third on f6.
Fig. 2-6 Convergence performance of the 8 different PSOs on the 12 test functions.
The APSO also achieves the global optimum on the optimization of complex
multimodal functions f7, f8, f9, f10 and f12. Although the CLPSO outperforms the APSO and
others on f11 (Griewank’s function), its mean solutions on other functions are worse than
those of the APSO. Further, the APSO can successfully jump out of local optima on most of
the multimodal functions and surpasses all other algorithms on functions f7, f8 and f9 where
the global optimum of f7 (Schwefel’s function) is far away from any of the local optima, and
the globally best solutions of f8 and f9 (continuous/noncontinuous Rastrigin's functions) are
surrounded by a large number of local optima. The ability to avoid being trapped in
local optima and to achieve globally optimal solutions to multimodal functions suggests that
the APSO indeed benefits from the ELS.
2.4.3 Comparisons on the Convergence Speed
The speed in obtaining the global optimum is also a salient yardstick for measuring
algorithm performance. Since PSO is a population-based iterative algorithm, it is common
to use the number of function evaluations (FEs) required to achieve an acceptable solution
under a given threshold to measure the convergence rate of the algorithm. Table 2-4 reports
the mean FEs and the mean CPU time needed to obtain an acceptable solution, given the
thresholds in Table 2-1, for the algorithms on the 12 functions over 30 runs. The success rate
of each algorithm is also given in Table 2-4. Note that the mean FEs and mean CPU time are
calculated over the successful runs only. For example, if, when solving a function, an
algorithm achieves acceptable solutions in only 27 out of 30 runs, then its success rate is
90%, and the mean FEs and mean CPU time are averaged over those 27 runs.

Table 2-4 Convergence Speed and Algorithm Reliability Comparisons
Function GPSO LPSO VPSO FIPS HPSO-TVAC DMS-PSO CLPSO APSO
f1  Mean FEs  105695 118197 112408 32561 30011 91496 72081 7074
    Time (sec)  0.96 1.12 1.04 0.36 0.29 0.85 0.48 0.11
    Ratio (%)  100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
f2  Mean FEs  103077 115441 109849 36322 31371 91354 66525 7900
    Time (sec)  1.02 1.19 1.10 0.44 0.32 0.91 0.62 0.17
    Ratio (%)  100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
f3  Mean FEs  137985 162196 147133 73790 102499 185588 - 21166
    Time (sec)  2.16 2.69 2.33 1.35 1.67 2.91 - 0.98
    Ratio (%)  100.0 96.7 100.0 100.0 100.0 86.7 0.0 100.0
f4  Mean FEs  101579 102259 103643 13301 33689 87518 74815 5334
    Time (sec)  0.99 1.05 1.05 0.16 0.35 0.86 0.55 0.09
    Ratio (%)  100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
f5  Mean FEs  93147 107315 100389 15056 64555 76975 39296 4902
    Time (sec)  1.18 1.41 1.41 0.23 0.85 0.98 0.41 0.09
    Ratio (%)  100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
f6  Mean FEs  165599 161784 170675 47637 - 180352 99795 78117
    Time (sec)  1.57 1.65 1.73 0.56 - 1.74 0.72 1.30
    Ratio (%)  80.0 26.7 43.3 100.0 0.0 40.0 100.0 100.0
f7  Mean FEs  90633 89067 91811 122210 44697 101829 23861 5159
    Time (sec)  2.22 1.92 2.02 2.38 0.72 2.21 0.43 0.12
    Ratio (%)  56.7 20.0 40.0 66.7 100.0 20.0 100.0 100.0
f8  Mean FEs  94379 99074 98742 87760 7829 127423 53416 3531
    Time (sec)  1.24 1.38 1.31 1.34 0.10 1.67 0.95 0.08
    Ratio (%)  96.7 96.7 100.0 93.3 100.0 100.0 100.0 100.0
f9  Mean FEs  104987 110115 99480 80260 8293 115247 47440 2905
    Time (sec)  1.74 1.99 1.67 1.52 0.14 1.91 0.69 0.07
    Ratio (%)  100.0 100.0 100.0 90.0 100.0 100.0 100.0 100.0
f10 Mean FEs  110844 125543 118926 38356 52516 100000 76646 40736
    Time (sec)  1.40 1.81 1.65 0.62 0.70 1.27 0.79 0.93
    Ratio (%)  100.0 100.0 100.0 100.0 100.0 100.0 100.0 100.0
f11 Mean FEs  111733 125777 117946 42604 34154 97213 81422 7568
    Time (sec)  1.46 1.86 1.68 0.72 0.48 1.29 0.95 0.16
    Ratio (%)  40.0 60.0 46.7 100.0 56.7 56.7 100.0 66.7
f12 Mean FEs  99541 107452 102779 19404 44491 95830 59160 21538
    Time (sec)  2.17 2.57 2.38 0.50 0.98 2.10 1.38 0.68
    Ratio (%)  90.0 100.0 96.7 100.0 100.0 100.0 100.0 100.0
Avg. Reliability 88.62% 83.34% 85.56% 95.83% 88.06% 83.62% 91.67% 97.23%
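The bookkeeping behind Table 2-4, a success ratio with mean FEs averaged over successful runs only, can be sketched as follows. The function name and the use of None to mark a failed run are ours.

```python
def convergence_stats(fes_per_run):
    """Success ratio (%) and mean FEs, averaged over successful runs only.

    Each entry of `fes_per_run` is the number of FEs a run needed to
    reach the acceptable accuracy, or None if the run never reached it.
    """
    successful = [fes for fes in fes_per_run if fes is not None]
    ratio = 100.0 * len(successful) / len(fes_per_run)
    mean_fes = sum(successful) / len(successful) if successful else None
    return ratio, mean_fes
```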
Owing to the adaptive control of the parameters and strategy based on its evolutionary
state, the algorithm can optimize the problem using suitable parameters during different
evolutionary stages and converge to the global optimum quickly. Table 2-4
reveals that the APSO generally offers a much higher speed, measured by either the mean
number of FEs or by the mean CPU time needed to reach an acceptable solution. The CPU
time is important to measure computational load, as many existing PSO variants have added
extra operations that cost computational time. Although the APSO needs to calculate the
mean distance between every pair of particles in the swarm, the calculation costs negligible
CPU time.
In solving real-world problems, the ‘function evaluation’ time overwhelms algorithm
overhead. Hence the mean number of FEs needed to reach the acceptable accuracy would be
much more interesting than the CPU time. Thus the mean FEs are also explicitly presented
and compared in Table 2-4. For example, tests on f1 show that the average numbers of FEs of
105695, 118197, 112408, 32561, 30011, 91496 and 72081 are needed by the GPSO, LPSO,
VPSO, FIPS, HPSO-TVAC, DMS-PSO and CLPSO algorithms, respectively, in order to
reach an acceptable solution. However, the APSO uses only 7074 FEs, while its CPU time of
0.11 second is also the shortest among the eight algorithms. In summary, the APSO uses the
least CPU time and the smallest number of FEs to reach acceptable solutions on 9 out of 12
test functions (f1, f2, f3, f4, f5, f7, f8, f9 and f11).
2.4.4 Comparisons on the Algorithm Reliability
Table 2-4 also reveals that the APSO generally offers the highest percentage of trials
reaching acceptable solutions and the highest reliability averaged over all the test functions.
The APSO reaches acceptable solutions with a success ratio of 100% on all the test
functions except f11. Note that the HPSO-TVAC and the CLPSO did not converge
on functions f6 and f3, respectively. Averaged over all the test functions, the APSO
offers the highest reliability of 97.23%, followed by FIPS, CLPSO, GPSO, HPSO-TVAC,
VPSO, DMS-PSO and LPSO.
According to the theorem of “no free lunch” [164], one algorithm cannot offer better
performance than all others on every aspect or on every kind of problems. This is also
observed in our experimental results. The GPSO outperforms local version PSOs, including
the LPSO, VPSO and FIPS with the U-Ring structure, on simple unimodal functions f1, f2,
and f3. However, on difficult unimodal functions (e.g., the Rosenbrock’s function, f4) and the
multimodal functions, the LPSO and FIPS offer better performance than GPSO. The FIPS
achieves the highest accuracy on function f10 while CLPSO and DMS-PSO perform best on
f11 and f12, respectively, but these global algorithms sacrifice performance on unimodal
functions. The APSO, however, performs strongly on both unimodal and multimodal
functions, owing to its adaptive parameters that deliver faster convergence and to its adaptive
elitist learning strategy that avoids local optima. Further, this performance has been achieved
with the highest success rate on all but Griewank's function (f11).
Fig. 2-7 Cumulative percentages of the acceptable solutions obtained during the evolutionary process.
In order to depict how fast the algorithms reach acceptable solutions, accumulative
percentages of the acceptable solutions obtained in each function evaluation are shown in Fig.
2-7. The figure includes the representative unimodal functions (f1, f4) and complex
multimodal functions (f7, f8). For example, Fig. 2-7(c) shows that while optimizing function f7,
(i) the APSO, the CLPSO and the HPSO-TVAC manage to obtain acceptable solutions in all
the trials, but the APSO is faster than the CLPSO and the HPSO-TVAC; (ii) only about
two-thirds of the trials of the GPSO and the FIPS obtain acceptable solutions (with a medium
convergence speed); (iii) the VPSO succeeds in about 40% of the trials; (iv) the DMS-PSO
and the LPSO converge slowest and succeed in only about one-sixth of the trials.
2.4.5 Comparisons Using t-Tests
For a thorough comparison, the t-test [162] has also been carried out. Table 2-5 presents
the t values and P values on every function of this two-tailed test with a significance level of
0.05 between the APSO and another PSO algorithm. Rows “1 (Better)”, “0 (Same)” and “–1
(Worse)” give the number of functions that the APSO performs significantly better than,
almost the same as, and significantly worse than the compared algorithm, respectively. Row
“General Merit” shows the difference between the number of 1’s and the number of –1’s,
which is used to give an overall comparison between the two algorithms. For example,
comparing the APSO with the GPSO, the former outperforms the latter significantly on 7
functions (f2, f3, f4, f6, f7, f8 and f9), performs as well as the latter on 5 functions (f1, f5, f10,
f11 and f12) and performs worse on none, yielding a "General Merit" value of 7 - 0 = 7,
indicating that the APSO generally outperforms the GPSO. Although it performed slightly
worse on some functions, the APSO in general offered much better performance than all
the PSOs compared, as confirmed by Table 2-5.

Table 2-5 Comparisons Between the APSO and Other PSOs on t-Tests
PSOs Function GPSO LPSO VPSO FIPS HPSO-TVAC DMS-PSO CLPSO
f1 t-value 1.52851 2.31098† 1.4671 4.88501† 2.17917† 1.20579 6.93676†
P-value 0.13182 0.02441 0.14775 0.00001 0.03339 0.23279 0.00000
f2 t-value 2.35366† 3.85389† 3.96641† 9.17296† 5.48133† 2.16682† 8.47224†
P-value 0.02200 0.00029 0.00020 0.00000 0.00000 0.03437 0.00000
f3 t-value 3.73355† 3.3183† 5.08843† 4.8526† 5.32372† 4.60526† 15.24494†
P-value 0.00043 0.00157 0.00000 0.00001 0.00000 0.00002 0.00000
f4 t-value 5.57538† 8.96094† 7.58036† 32.85535† 3.29589† 6.62263† 3.00317†
P-value 0.00000 0.00000 0.00000 0.00000 0.00168 0.00000 0.00394
f5 t-value 0 0 0 0 0 0 0
P-value 0 0 0 0 0 0 0
f6 t-value 5.76807† 9.46838† 9.26301† -6.35398† 13.34007† 8.08689† -1.98019
P-value 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.05243
f7 t-value 27.42668† 35.28576† 25.33892† 15.12005† 32.28794† 36.92784† 1.79505
P-value 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.07786
f8 t-value 19.3625† 26.38238† 23.13992† 15.03442† 3.52542† 24.02031† 2.12311
P-value 0.00000 0.00000 0.00000 0.00000 0.00083 0.00000 0.03802
f9 t-value 11.47452† 18.03982† 12.35315† 20.72115† 3.7845† 27.68977† 2.40832†
P-value 0.00000 0.00000 0.00000 0.00000 0.00037 0.00000 0.01923
f10 t-value 0.46159 6.73442† 3.78212† -5.12379† 1.19682 -3.58847† 11.89982†
P-value 0.64610 0.00000 0.00037 0.00000 0.23624 0.00068 0.00000
f11 t-value 1.08486 -1.07192 -0.70658 -3.56237† -1.23569 -0.6606 -3.79146†
P-value 0.28247 0.28820 0.48265 0.00074 0.22155 0.51148 0.00036
f12 t-value 1.79505 1.87465 1 -1.15597 8.68004† -1.61897 4.51261†
P-value 0.07786 0.06588 0.32146 0.25243 0.00000 0.11088 0.00003
1 (Better) 7 9 8 7 9 7 8
0 (Same) 5 3 4 2 3 4 3
-1 (Worse) 0 0 0 3 0 1 1
General Merit 7 9 8 4 9 6 7
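The per-function entries of Table 2-5 rest on a two-tailed two-sample t-test over the 30 runs of each algorithm pair. A minimal sketch of the test statistic for two equal-sized samples with pooled variance follows; the function name is ours, and in practice a library routine such as scipy.stats.ttest_ind gives both the statistic and the two-tailed P-value directly.

```python
import math

def two_sample_t(a, b):
    """t statistic for two equal-sized samples with pooled variance."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)  # sample variances
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    pooled_sd = math.sqrt((var_a + var_b) / 2.0)         # equal sample sizes
    return (mean_a - mean_b) / (pooled_sd * math.sqrt(2.0 / n))
```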
2.5 Further Analysis of APSO
2.5.1 Analysis of Parameter Adaptation and Elitist Learning
APSO operations involve an acceleration rate δ in Eq. (2-6) and an elitist learning
rate σ in Eq. (2-9). Are these new parameters sensitive in the operations? What impact do
the two operations, parameter adaptation and elitist learning, have on the performance of the
APSO? This section aims to answer these questions by further testing the APSO on 3
unimodal (f1, f2 and f4) and 3 multimodal (f7, f8 and f10) functions.
In order to quantify the significance of these two operations, the performance of the
APSO without parameter adaptation or elitist learning is tested under the same running
conditions as in Section 2.4.1. Results of the mean values on 30 independent trials are
presented in Table 2-6.

Table 2-6 Merits of Parameter Adaptation and Elitist Learning on Search Quality
Algorithm APSO With Both Adaptation & Learning
APSO With Only Adaptive Parameters
APSO With Only ELS
GPSO (PSO Without Either)
Function | Mean | Std. Dev | Mean | Std. Dev | Mean | Std. Dev | Mean | Std. Dev
f1 1.45×10-150 5.73×10-150 7.67×10-160 3.42×10-159 3.6×10-50 1.43×10-49 1.98×10-53 7.08×10-53
f2 5.15×10-84 1.44×10-83 6.58×10-88 2.34×10-87 2.41×10-32 9.98×10-32 2.51×10-34 5.84×10-34
f4 2.84 3.27 13.8879 14.6335 12.7464 18.1979 28.0972 24.5981
f7 -12569.5 5.22×10-11 -7367.77 681.983 -12569.5 3.34×10-12 -10090.16 495.135
f8 5.8×10-15 1.01×10-14 52.7327 15.0326 1.78×10-16 5.42×10-16 30.6779 8.6781
f10 1.11×10-14 3.55×10-15 1.0885 1.00363 1.12×10-14 2.64×10-15 1.15×10-14 2.27×10-15
It is clear from the results that with elitist learning alone and without adaptive control of
parameters, the APSO can still deliver good solutions to multimodal functions. However, the
APSO suffers from lower accuracy in solutions to unimodal functions. As algorithms can
easily locate the global optimal region of a unimodal function and then refine the solution, the
lower accuracy may be caused by the slower convergence speed to reach the global optimal
region. On the other hand, the APSO with parameters adaptation alone but without ELS can
hardly jump out of local optima and hence results in poor performance on multimodal
functions. However, it can still solve unimodal problems well.
Note that both reduced APSO variants generally outperform a standard PSO that
involves neither parameter adaptation nor elitist learning. However, the full APSO is the
most powerful and robust on every tested problem. This is most evident in the test results on
f4. These results confirm the hypothesis that parameter adaptation speeds up the convergence
of the algorithm while elitist learning helps the swarm jump out of local optima and find
better solutions.
2.5.2 Search Behaviors of APSO and Parameter Evolution Analysis
In order to further understand the running mechanisms of APSO, we first investigate its
search behaviors and parameter variations on the unimodal Sphere function f1 and the
multimodal Rastrigin function f8.
Firstly, the test is performed on the unimodal function f1. In a unimodal space, it is
important for an optimization or search algorithm to converge fast and to refine the solution
for a high accuracy. The inertia weight shown in Fig. 2-8(a) confirms that the APSO
maintains a large ω in the exploration phase (for about 50 generations) and then a rapidly
decreasing ω follows exploitation leading to convergence, as the unique global optimum
region is found by a leading particle and the swarm follows it.
Fig. 2-8(b) shows how ESE in the APSO has influenced the acceleration coefficients.
The curves for c1 and c2 somewhat show good agreement with the ones given in Fig. 2-5. It
can be seen that c1 increases whilst c2 decreases in the exploration and exploitation phases.
Then c1 and c2 reverse their directions when the swarm converges, eventually returning to
around 2.0. Then trials in elitist learning perturb the particle that leads the swarm, which is
reflected in the slight divergence between c1 and c2 that follows. The search behavior on the
unimodal function indicates that the proposed APSO algorithm has indeed identified the
evolutionary states and can adaptively control the parameters for improved performance.
Fig. 2-8 Search behaviors of the APSO on Sphere function: (a) Mean value of ω during the run time
showing an adaptive momentum; (b) Mean values of c1 and c2 adapting to the evolutionary states.
Secondly, a test is carried out on f8, to see how the APSO adapts
itself to a multimodal space. When solving multimodal functions, a search algorithm should
maintain the diversity of the population and search as many optimal regions as possible. The
search behavior of the APSO is investigated on the Rastrigin function (f8 in Table 2-1). In
order to compare the diversity in the search by the APSO and the traditional PSO, a yardstick
proposed in [165] is used here called the ‘population standard deviation’, denoted by psd
$psd = \sqrt{\sum_{i=1}^{N} \sum_{j=1}^{D} \left(x_i^j - \bar{x}^j\right)^2 / (N-1)}$    (2-10)
where N, D and $\bar{x}$ are the population size, the number of dimensions and the mean
position of all the particles, respectively.
The variations in psd can indicate the diversity level of the swarm. If psd is small, it
indicates that the population has converged closely to a certain region, and the diversity of the
population is low. A larger value of psd indicates that the population is of a higher diversity.
However, it does not necessarily mean that a larger psd is always better than a smaller one
because an algorithm which cannot converge may also present a large psd. Hence, the psd
needs to be considered together with the solution that the algorithm arrives at.
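The psd of Eq. (2-10) can be computed directly (an illustrative sketch; the function name is ours):

```python
import math

def psd(positions):
    """Population standard deviation of Eq. (2-10).

    `positions` is an N x D list of particle positions; the mean
    position is taken per dimension.
    """
    n, d = len(positions), len(positions[0])
    mean = [sum(p[j] for p in positions) / n for j in range(d)]
    total = sum((p[j] - mean[j]) ** 2 for p in positions for j in range(d))
    return math.sqrt(total / (n - 1))
```

A fully converged swarm, with all particles at the same position, gives psd = 0, while a swarm spread over the search space gives a large psd, matching the interpretation above.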
Results of psd comparisons are plotted in Fig. 2-9(a) and those of the evolutionary
processes in Fig. 2-9(b). It can be seen that the APSO has an ability to jump out of local
optima, reflected by the regained diversity of the population, as revealed in Fig. 2-9(a), with a
steady improvement in the solution, as shown in Fig. 2-9(b). Fig. 2-9(c) and (d) show the
inertia weight and the acceleration coefficients behaviors of the APSO, respectively. These
plots confirm that, in a multimodal space, the APSO can also find a potential optimal region
(maybe a local optimum) fast in an early phase and converge fast with a rapid decreasing
diversity, due to the adaptive parameters strategies. However, if a current optimal region is
local, the swarm can separate and jump out. Hence, the APSO can appropriately increase the
diversity of the population so as to explore for a better region, owing to the ELS in the
convergence state. This behavior of adaptive population diversity is valuable for a global
search algorithm, preventing it from being trapped in local optima and helping it find the
global optimum in a multimodal space.
Fig. 2-9 Search behaviors of PSOs on Rastrigin’s function: (a) Mean psd during the run time; (b) Plots of
convergence during the minimization run; (c) Mean value of the ω during the run time showing an
adaptive momentum; (d) Mean values of c1 and c2 adapting to the evolutionary states.
2.5.3 Sensitivity of the Acceleration Rate
APSO introduces two new parameters: δ in Eq. (2-6) and σ in Eq. (2-9). This section
and the next will analyze the sensitivity of these two parameters in turn.
The effect of the acceleration rate, reflected by its bound δ, on the performance of the
APSO is investigated here. For this, the learning rate σ is fixed (e.g., σmax = σmin = 0.5)
and the other parameters of the APSO remain the same as in Section 2.4.1. The investigation
consists of 6 test strategies for δ: the first 3 fix its value at 0.01, 0.05 and 0.1,
respectively, and the remaining 3 generate its value randomly from a uniform
distribution within [0.01, 0.05], [0.05, 0.1] and [0.01, 0.1], respectively. The results are
presented in Table 2-7, in terms of the mean of the solutions found in 30 independent trials.
Table 2-7 Effects of the Acceleration Rate on Global Search Quality

Value of δ          | f1           | f2          | f4      | f7       | f8          | f10
Fixed at 0.01       | 6.79×10^-151 | 1.23×10^-83 | 3.4159  | -12474.3 | 2.25×10^-2  | 1.08×10^-14
Fixed at 0.05       | 8.3×10^-149  | 2.81×10^-83 | 4.06522 | -12317   | 6.16×10^-15 | 1.14×10^-14
Fixed at 0.1        | 8.03×10^-149 | 6.05×10^-84 | 3.72976 | -12153.8 | 6.63×10^-2  | 1.14×10^-14
Random (0.01, 0.05) | 2×10^-148    | 2.15×10^-80 | 2.6106  | -12420.7 | 0.132661    | 1.14×10^-14
Random (0.05, 0.1)  | 2.62×10^-150 | 6.95×10^-82 | 1.79069 | -12475.2 | 8.11×10^-15 | 1.17×10^-14
Random (0.01, 0.1)  | 2.64×10^-149 | 3.12×10^-83 | 3.00886 | -12133.6 | 9.95×10^-2  | 1.11×10^-14
It can be seen that the APSO is not very sensitive to the acceleration rate δ: all six acceleration-rate settings offer good performance. This may be owing to the bounds on the acceleration coefficients and the saturation that restricts their sum by (2-7). Therefore, given the bounded values of c1 and c2 and their sum restricted by (2-7), an arbitrary value of δ within the range [0.05, 0.1] should be acceptable to the APSO algorithm.
2.5.4 Sensitivity of the Elitist Learning Rate
In order to assess the sensitivity of σ in elitist learning, six strategies for setting its value
are tested here, using 3 fixed values (0.1, 0.5 and 1.0) and 3 time-varying ones (from 1.0 to
0.5, from 0.5 to 0.1, and from 1.0 to 0.1). All other parameters of the APSO remain as those
in Section 2.4.1. The mean results of 30 independent trials are presented in Table 2-8.

Table 2-8 Effects of the Elitist Learning Rate on Global Search Quality

Value of σ      | f1           | f2          | f4      | f7       | f8          | f10
Fixed at 0.1    | 5.16×10^-152 | 1.62×10^-82 | 1.94812 | -11622   | 6.87×10^-15 | 1.10×10^-14
Fixed at 0.5    | 6.83×10^-148 | 7.02×10^-77 | 1.73717 | -12045.9 | 3.32×10^-2  | 1.05×10^-7
Fixed at 1.0    | 2.07×10^-149 | 3.39×10^-83 | 2.7744  | -12277.3 | 9.95×10^-2  | 1.12×10^-14
From 1.0 to 0.5 | 2.85×10^-148 | 5.21×10^-82 | 2.34211 | -12263.9 | 0.132661    | 1.34×10^-14
From 0.5 to 0.1 | 1.90×10^-148 | 8.83×10^-82 | 2.0236  | -12565.5 | 6.63×10^-2  | 1.21×10^-14
From 1.0 to 0.1 | 1.24×10^-152 | 8.71×10^-83 | 3.23075 | -12569.5 | 4.03×10^-15 | 1.12×10^-14
The results show that if σ is small (e.g., 0.1), the learning rate is not large enough for a long jump out of local optima, as is evident in the performance on f7. However, all other settings, which permit a larger σ, deliver almost the same excellent performance, especially the strategy with a time-varying σ decreasing from 1.0 to 0.1. A smaller σ contributes more to refining the leading particle, while a larger σ contributes more to moving the leader away from its existing position so as to jump out of local optima. This confirms the intuition that long jumps should be accommodated in an early phase to avoid local optima and premature convergence, whilst small perturbations in a later phase help refine global solutions, as recommended in this chapter.
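The time-varying σ can be sketched directly as code. Below is a minimal sketch of one elitist-learning step under the settings above; the helper name `elitist_learning` and the one-dimension Gaussian perturbation scaled by the search range are modeling assumptions consistent with the ELS described in this chapter, not a verbatim transcription of the implementation:

```python
import random

def elitist_learning(gbest, gen, max_gen, xmin, xmax,
                     sigma_max=1.0, sigma_min=0.1):
    """One elitist-learning (ELS) step: perturb a single randomly chosen
    dimension of the best particle with a Gaussian whose standard
    deviation decreases linearly from sigma_max to sigma_min."""
    sigma = sigma_max - (sigma_max - sigma_min) * gen / max_gen
    d = random.randrange(len(gbest))
    trial = list(gbest)
    trial[d] += (xmax - xmin) * random.gauss(0.0, sigma)
    trial[d] = min(max(trial[d], xmin), xmax)   # keep inside the search range
    return trial, sigma
```

Early in the run σ is near 1.0 and the leader makes long jumps; late in the run σ is near 0.1 and the same operator only refines the leader locally.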
2.6 Chapter Summary
This chapter introduced a statistical analysis technique into the particle swarm optimization algorithm and proposed the adaptive particle swarm optimization (APSO) algorithm. Owing to the online perception and analysis of the population distribution and fitness data enabled by the statistical analysis technique, APSO defines an evolutionary factor to describe the evolutionary state. With the identification and classification ability of the evolutionary factor, APSO adaptively controls its parameters and strategies. As shown in the benchmark tests, the adaptive control of the inertia weight and the acceleration coefficients makes the algorithm extremely efficient, offering a substantially improved convergence speed, in terms of both the number of FEs and the CPU time needed to reach acceptable solutions for both unimodal and multimodal functions.
The features and advantages of APSO are:
1) It combines the statistical analysis technique with the PSO algorithm. Through the perception and analysis of the population distribution information and relative particle fitness, this chapter proposed an adaptive PSO. Machine learning aided methods are an effective way to design adaptive PSO or other adaptive EAs.
2) Adaptive control enables the algorithm to evolve suitable strategies and parameter values as evolution progresses through different evolutionary states, delivering both rapid convergence and global searching ability.
3) APSO retains the simplicity, ease of implementation, and efficiency of the original PSO. The introduced evolutionary state estimation and the adaptive control of parameters and strategies, which are based on the simple and efficient statistical analysis technique, require little extra computation. The APSO is thus still simple and almost as easy to use as the standard PSO, whilst it brings substantially improved performance in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. Moreover, the acceleration rate and the learning rate have an insignificant impact on the performance, which enhances the accessibility of the APSO.
In conclusion, with the help of the statistical analysis technique, this chapter proposed an adaptive particle swarm optimization algorithm that accelerates convergence and enhances solution accuracy. It is an important and successful exploration of machine learning aided particle swarm optimization.
Chapter 3 Orthogonal Learning Particle Swarm Optimization Based on the Orthogonal Experimental Design Technique in Machine Learning
3.1 Introduction
The salient feature of PSO lies in its learning mechanism that distinguishes the
algorithm from other EC techniques. When searching for a global optimum in a hyperspace,
particles in a PSO fly in the search space according to guiding rules. It is the guiding rules
that make the search effective and efficient. In the traditional PSO, the rule is that each particle learns from its own best historical experience pBesti and its neighborhood's best historical experience nBesti. But how to make full use of this guidance information to bring better learning efficiency to PSO, and hence better global optimization performance, is an important and challenging problem.
As described in Section 1.2.4, according to the method of choosing the neighborhood’s
best historical experience, PSO algorithms are traditionally classified into global version PSO
(GPSO) and local version PSO (LPSO). Without loss of generality, this chapter aims at
improving the performance of both the GPSO and the LPSO with the ring structure, where a
particle takes its left and right particles (by particle index) as its neighbors. In both GPSO and
LPSO, the information of a particle’s best experience and its neighborhood’s best experience
is utilized in a simple way: the flight is adjusted by a simple summation of the two experiences, as given in Eq. (1-6). However, this is not necessarily an efficient way to make the best use of the search information in these two experiences. In one case, it may cause an ‘oscillation’ phenomenon, because the guidance of the two experiences may point in opposite directions. This impairs the search ability of the algorithm and slows convergence. In another case, the particle may suffer from the ‘two steps forward, one step back’ phenomenon, in which some components of the solution vector are improved by one exemplar but deteriorated by the other. This is because one exemplar may have good values on some dimensions of the solution vector while the other exemplar may have good values on some other dimensions. Hence, how to discover
more useful information embedded in the two exemplars and thus how to utilize the
information to construct an efficient and promising exemplar to guide the particle flying
steadily towards the global optimal region are important and challenging research issues that
PSO researchers need to pay attention to.
The ‘oscillation’ phenomenon is likely to be caused by the linear summation of the personal influence and the neighborhood influence. For clarity and ease of understanding, we first simplify Eq. (1-6) into Eq. (3-1) by removing the inertia weight component and the random values.
Vid = (Pid – Xid) + (Nid – Xid) (3-1)
Fig. 3-1 The “oscillation” phenomenon caused by the traditional PSO learning strategy.
In Eq. (3-1), consider the case where the current particle Xi lies between its personal best position pBesti and its neighborhood’s best position nBesti, as shown in Fig. 3-1. At first, the distance between nBesti and Xi may be larger than that between pBesti and Xi, as in Fig. 3-1 (a), so Xi will move towards nBesti because of its larger pull. However, as Xi moves towards nBesti, the distance between pBesti and Xi increases, as shown in Fig. 3-1 (b). In this case, the particle will move towards pBesti instead. Oscillation thus occurs and the particle is left undecided about where to settle. This oscillation phenomenon impairs the search ability of the algorithm and delays convergence.
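This oscillation is easy to reproduce numerically. A minimal 1-D sketch of Eq. (3-1), with the inertia term and random coefficients removed as above:

```python
def simplified_velocity(x, pbest, nbest):
    """Eq. (3-1): the two attractions are simply summed."""
    return (pbest - x) + (nbest - x)

# A 1-D particle starting between pbest = 0 and nbest = 10.
x, pbest, nbest = 4.0, 0.0, 10.0
trace = []
for _ in range(6):
    x = x + simplified_velocity(x, pbest, nbest)
    trace.append(x)

print(trace)  # → [6.0, 4.0, 6.0, 4.0, 6.0, 4.0]
```

The particle overshoots the midpoint 5.0 on every step and bounces between 4 and 6 forever, which is exactly the inefficiency described above.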
Another related phenomenon of the traditional learning mechanism is the ‘two steps forward, one step back’ phenomenon, as illustrated in Fig. 3-2. For example, consider the 3-dimensional Sphere function f(X) = x1^2 + x2^2 + x3^2, whose global minimum point is [0, 0, 0]. Suppose that the current position is Xi = [2, 5, 2], its personal best position is pBesti = [0, 2, 5], and its neighborhood’s best position is nBesti = [5, 0, 1]. The updated velocity is Vi = [1, –8, 2] according to Eq. (3-1), and thus the new position is Xi = Xi + Vi = [3, –3, 4], a position with a cost value of 34, which is worse than both Xi and pBesti. Therefore, the particle does not benefit from learning from pBesti and nBesti in this generation. However,
vectors pBesti and nBesti indeed possess good information in their structures. For example, if we can discover the good dimensions of the two vectors, we can combine them to form a new guidance vector oBesti = [0, 0, 1], where the first coordinate 0 comes from pBesti while the second and third coordinates 0 and 1 come from nBesti (on the corresponding dimensions). Given the guidance of oBesti, the updated velocity becomes Vi = oBesti – Xi = [0, 0, 1] – [2, 5, 2] = [–2, –5, –1]; thus the new position is Xi = Xi + Vi = [0, 0, 1], a new and better position with a cost f(Xi) = 1 that makes the particle fly faster towards the global optimum [0, 0, 0].
(a) Traditional PSO learning strategy:
  Xi = [2, 5, 2], pBesti = [0, 2, 5], nBesti = [5, 0, 1]
  Vi = (pBesti – Xi) + (nBesti – Xi) = [–2, –3, 3] + [3, –5, –1] = [1, –8, 2]
  Xi = Xi + Vi = [2, 5, 2] + [1, –8, 2] = [3, –3, 4]

(b) Orthogonal PSO learning strategy:
  Xi = [2, 5, 2], pBesti = [0, 2, 5], nBesti = [5, 0, 1]
  oBesti = pBesti ⊕ nBesti = [0, 2, 5] ⊕ [5, 0, 1] = [0, 0, 1]
  Vi = oBesti – Xi = [–2, –5, –1]
  Xi = Xi + Vi = [0, 0, 1]

Fig. 3-2 The “two steps forward, one step back” phenomenon caused by the traditional PSO learning strategy.
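The numbers in Fig. 3-2 can be verified in a few lines. A sketch of the two update rules on the Sphere example; the mask [0, 1, 1] encodes the combination the OED discovers in this case (0 means “take from pBest”, 1 means “take from nBest”):

```python
def sphere(x):
    return sum(v * v for v in x)

x     = [2, 5, 2]
pbest = [0, 2, 5]
nbest = [5, 0, 1]

# (a) Traditional learning, Eq. (3-1): sum of both attractions.
v_trad = [(p - xi) + (n - xi) for xi, p, n in zip(x, pbest, nbest)]
x_trad = [xi + vi for xi, vi in zip(x, v_trad)]

# (b) Orthogonal learning: each dimension copied from pbest or nbest.
mask  = [0, 1, 1]
obest = [n if m else p for m, p, n in zip(mask, pbest, nbest)]
v_ol  = [o - xi for xi, o in zip(x, obest)]
x_ol  = [xi + vi for xi, vi in zip(x, v_ol)]

print(x_trad, sphere(x_trad))  # → [3, -3, 4] 34
print(x_ol, sphere(x_ol))      # → [0, 0, 1] 1
```

The traditional update lands on a cost of 34, while the constructed exemplar drives the particle to a cost of 1 in one step.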
It follows that the traditional PSO fails to perform efficient global search not just because the particles cannot find high-quality positions, but because the traditional learning strategy cannot make full use of the information in the solutions found. Although the solutions obtained during the evolutionary process are not good enough, they generally have good structures. In particular, in the process of learning from its own best historical experience pBesti and its neighborhood’s best historical experience nBesti, how to find more useful information in these two learning exemplars and then construct a better learning exemplar to guide the flight of the particle is important. Machine learning techniques can discover useful information in the available data to guide the actions of the particle. In order to make full use of the search information of both pBesti and nBesti, this chapter introduces the orthogonal prediction technique in machine learning into PSO and constructs a learning vector to guide the particle to fly towards the globally optimal position more steadily.
Because orthogonal experimental design (OED) offers an ability to discover the best
combination levels for different factors with a reasonably small number of experimental
samples, in this chapter, we propose to use the OED method to construct a promising learning
exemplar. Here, OED is used to discover the best combination oBesti (Orthogonal Best) of a
particle’s best historical position pBesti and its neighborhood’s best historical position nBesti.
The orthogonal experimental factors are the dimensions of the problem and the levels of each
dimension (factor) are the two choices of a particle’s best position value and its
neighborhood’s best position value on the corresponding dimension. If we were to exhaustively test all the combinations of pBesti and nBesti for the best guidance vector oBesti, 2^D trials would be needed. OED has strong orthogonal testing and prediction abilities. By testing a few representative combinations with the typical orthogonal combination method and predicting the potentially best combination with a factor analysis (FA) method, OED is able to discover useful information in pBesti and nBesti. This way, the best combination of the two exemplars can be constructed to guide the particle to fly steadily, rather than oscillate, because only one constructed exemplar is used for the guidance. A particle is thus expected to fly more promisingly towards the global optimum because the constructed exemplar makes the best use of the search information of both the particle’s best position and its neighborhood’s best position.
Therefore, with the aid of the orthogonal prediction technique in machine learning, this chapter designs an orthogonal learning (OL) strategy and proposes orthogonal learning particle swarm optimization (OLPSO) based on the OL strategy. The important contributions and innovations include the following 3 aspects:
1) It introduces the orthogonal prediction technique in machine learning into PSO to discover, analyze, and predict the information of individuals and the population, and forms an orthogonal learning (OL) strategy that brings better learning efficiency to PSO and hence better global optimization performance;
2) Different from previous ideas of using OED to construct a better solution for an EA, such as using OED to design crossover operators [166], initialize the search space [167], or search local optima [168][169] in GA, and similar methods of using OED to find a better solution in SA [170][171], ACO [172], and PSO [150][151][149], the OL strategy in this chapter discovers, analyzes, and predicts the existing search information to construct a vector that guides the search. The OL strategy is therefore focused on designing a guidance exemplar with the ability to predict promising search directions towards the global optimum, rather than on finding a better solution;
3) The OL strategy designed in this chapter is a generic operator and can be applied to PSO with any kind of topology structure. It is applied to GPSO and LPSO to verify the effectiveness and efficiency of the OL strategy. The experimental results not only demonstrate the advantage of the OL strategy and the OLPSO algorithm, but also give a good indication of how to apply the OL strategy to other PSO versions.
3.2 Orthogonal Learning Particle Swarm Optimization
3.2.1 Orthogonal Experimental Design
In order to illustrate how to use the OED, a simple example arising from chemical experiments is shown in Table 3-1. In this example, the aim is to find the best level combination of the 3 factors involved so as to increase the conversion ratio. Table 3-1 shows the 3 factors that affect the experimental results: the temperature, time, and alkali, denoted as factors A, B, and C, respectively. Moreover, there are 3 levels (different choices) for each factor. For example, the temperature can be 80ºC, 85ºC, or 90ºC. Thus, there are in total 3^3 = 27 combinations of experimental designs. This is a combinatorial explosion problem: as the dimension increases, the number of possible combinations increases rapidly, so enumeration is not suitable. However, with the help of OED, one can obtain or predict the best combination by testing only a few representative experimental cases. Taking the example shown in Table 3-1 with the description in Table 3-2, we introduce the procedure of
the OED method as follows:

Table 3-1 The Factors and Levels of the Chemical Experiment Example

Level | A: Temperature (ºC) | B: Time (Min) | C: Alkali (%)
L1    | 80                  | 90            | 5
L2    | 85                  | 120           | 6
L3    | 90                  | 150           | 7
L9(3^4) =
  1 1 1 1
  1 2 2 2
  1 3 3 3
  2 1 2 3
  2 2 3 1
  2 3 1 2
  3 1 3 2
  3 2 1 3
  3 3 2 1        (3-2)
Step 1, Combination test based on an orthogonal array: The OED method works on a predefined table called an orthogonal array (OA). An OA with N factors and Q levels per factor is denoted by LM(Q^N), where L denotes the orthogonal array and M is the number of combinations of test cases. For the example shown in Table 3-1, the L9(3^4) OA given by Eq. (3-2) is suitable.
The OA in (3-2) has 4 columns, meaning that it is suitable for problems with at most 4 factors. Since any sub-columns of an OA also form an OA, we can use only the first 3 columns (or any 3 columns) of the array for the experiment. For example, the first three columns of the first row are [1, 1, 1], meaning that in this experiment the first factor (temperature), the second factor (time), and the third factor (alkali) are all set to the 1st level, that is, 80ºC, 90 minutes, and 5%, as given in Table 3-1. Similarly, the combination [1, 2, 2] is used in the second experiment, and so on. The total of 9 experiments specified by the L9(3^4) are
presented in Table 3-2.

Table 3-2 Deciding the Best Combination Levels of the Chemical Experimental Factors Using an Orthogonal Experimental Design Method

Combination | A: Temperature (ºC) | B: Time (Min) | C: Alkali (%) | Result
C1 | (1) 80 | (1) 90  | (1) 5 | F1=31
C2 | (1) 80 | (2) 120 | (2) 6 | F2=54
C3 | (1) 80 | (3) 150 | (3) 7 | F3=38
C4 | (2) 85 | (1) 90  | (2) 6 | F4=53
C5 | (2) 85 | (2) 120 | (3) 7 | F5=49
C6 | (2) 85 | (3) 150 | (1) 5 | F6=42
C7 | (3) 90 | (1) 90  | (3) 7 | F7=57
C8 | (3) 90 | (2) 120 | (1) 5 | F8=62
C9 | (3) 90 | (3) 150 | (2) 6 | F9=64

Level factor analysis:
L1 | (F1+F2+F3)/3=41 | (F1+F4+F7)/3=47 | (F1+F6+F8)/3=45
L2 | (F4+F5+F6)/3=48 | (F2+F5+F8)/3=55 | (F2+F4+F9)/3=57
L3 | (F7+F8+F9)/3=61 | (F3+F6+F9)/3=48 | (F3+F5+F7)/3=48

Result of OED: A3, B2, C2
Step 2, Prediction based on Factor Analysis: The ability to discover the best combination of levels comes from the factor analysis (FA). The FA is based on the experimental results of all M cases of the OA. The FA results are shown in Table 3-2 and the process is as follows. Let Fm denote the experimental result of the mth (1 ≤ m ≤ M) combination and Snq denote the effect of the qth (1 ≤ q ≤ Q) level of the nth (1 ≤ n ≤ N) factor. Snq is calculated by adding up all the Fm in which the level of the nth factor is q, and then dividing by the total count of zmnq, as shown in Eq. (3-3), where zmnq is 1
if the mth experimental test uses the qth level of the nth factor, and otherwise zmnq is 0:

Snq = ( Σ(m=1..M) Fm × zmnq ) / ( Σ(m=1..M) zmnq )        (3-3)
In this way, the effect of each level of each factor can be calculated and compared, as shown in Table 3-2. For example, when calculating the effect of level 1 of factor A (the entry A1), the experimental results of C1, C2, and C3 are summed in Eq. (3-3) because only these combinations involve level 1 of factor A. The sum is then divided by the number of such combinations (3 in this case) to yield Snq (SA1 in this case).
Step 3, Best combination: With all the Snq calculated, the best combination of levels can be determined by selecting, for each factor, the level that provides the best Snq. For a maximization problem, the larger the Snq, the better the qth level of factor n; for a minimization problem, the reverse holds. In the maximization example shown in Table 3-2, the best result is the combination of A3, B2, and C2. Although the combination (A3, B2, C2) itself is not among the 9 combinations tested, it is discovered by the FA process.
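Steps 1-3 can be checked mechanically. A small sketch that reproduces the factor analysis of Table 3-2; the OA rows are the first three columns of the L9(3^4) array in Eq. (3-2), and the function name is illustrative:

```python
# First three columns of L9(3^4), one tuple per combination C1..C9.
OA9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3), (2, 1, 2), (2, 2, 3),
       (2, 3, 1), (3, 1, 3), (3, 2, 1), (3, 3, 2)]
F   = [31, 54, 38, 53, 49, 42, 57, 62, 64]   # results F1..F9 from Table 3-2

def factor_analysis(oa, results, levels=3):
    """Compute S_nq of Eq. (3-3) and return, for each factor, the level
    with the largest mean result (maximization)."""
    best = []
    for n in range(len(oa[0])):
        s = []
        for q in range(1, levels + 1):
            rows = [f for row, f in zip(oa, results) if row[n] == q]
            s.append(sum(rows) / len(rows))   # S_nq: mean result at level q
        best.append(1 + max(range(levels), key=lambda q: s[q]))
    return best

print(factor_analysis(OA9, F))   # → [3, 2, 2], i.e. A3, B2, C2
```

The output matches the “Result of OED” row of Table 3-2 even though the combination (A3, B2, C2) itself was never tested.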
3.2.2 Orthogonal Learning Strategy
Using the OED method, the original PSO can be modified into an orthogonal learning PSO with an OL strategy that combines the information of pBesti and nBesti to form a better guidance vector oBesti = [Oi1, Oi2, …, OiD]. The particle’s flying velocity is thus changed to:
Vid = ωVid + c·rd(Oid – Xid) (3-4)
where ω is the same as in (1-4), linearly decreasing from 0.9 to 0.4, c is fixed at 2.0 (the same value as c1 and c2), and rd is a random value uniformly generated within the interval [0, 1].
The guidance vector oBesti is constructed from pBesti and nBesti as in (3-5):
oBesti = pBesti ⊕ nBesti (3-5)
where the symbol ⊕ stands for the OED operation. Therefore, the value oBestid comes from pBesti or nBesti as the construction result of the OED. With this efficient learning exemplar oBesti, particle i adjusts its flying velocity and position and updates its personal best position in every generation. In order to avoid the guidance changing direction frequently, the vector oBesti is used as the exemplar for a certain number of generations, until it can no longer lead the particle to a better position. Specifically, if the personal best position pBesti has not been improved for G generations, then particle i will reconstruct a new oBesti by using
pBesti and nBesti. On the other hand, as oBesti is used for some time until it fails to improve the position, one problem that should be addressed is how to use the information from pBesti and nBesti immediately after they move to better positions during the search process. In our implementation, the vector oBesti stores only indices into pBesti and nBesti, not copies of the actual position values. That is, oBestid only indicates whether the dth dimension is guided by pBesti or by nBesti; it does not store the current value of pBestid or nBestid. Thus, in the OLPSO algorithm, when pBesti or nBesti moves to a better position, the new information is used immediately by the particle through oBesti.
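The index-based storage can be sketched in a few lines, with 0 meaning “follow pBest” and 1 meaning “follow nBest” on that dimension; the helper name is illustrative:

```python
def obest_value(mask, pbest, nbest, d):
    """Current value of dimension d of the exemplar; nothing is copied,
    so any later improvement of pbest/nbest is visible immediately."""
    return pbest[d] if mask[d] == 0 else nbest[d]

mask  = [0, 1, 1]
pbest = [0, 2, 5]
nbest = [5, 0, 1]
print(obest_value(mask, pbest, nbest, 2))  # → 1
nbest[2] = -1   # the neighborhood best improves on dimension 2 ...
print(obest_value(mask, pbest, nbest, 2))  # → -1, used at once
```

Had oBesti stored copied values instead of indices, the exemplar would keep guiding the particle with the stale value until the next reconstruction.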
The construction process of oBesti is described in the following six steps:
Step 1: An OA LM(2^D) is generated, where M = 2^⌈log2(D+1)⌉, using the following procedure:
1) Determine the row number M = 2^⌈log2(D+1)⌉, the column number N = M–1, and the number of basic columns u = log2(M).
2) The elements in the basic columns are set as:
L[a][b] = ⌊(a–1) / 2^(u–k)⌋ mod 2 (3-6)
where a = 1, 2, …, M is the row index, b = 2^(k–1) is the basic column index, and k = 1, 2, …, u.
3) The elements in the other columns are set as:
L[a][b+s] = (L[a][s] + L[a][b]) mod 2 (3-7)
where a = 1, 2, …, M is the row index, b = 2^(k–1) is the basic column index, s = 1, 2, …, b–1, and k = 2, …, u.
4) For all the elements in the OA, transform level value 0 to 1 (the first level) and level value 1 to 2 (the second level).
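Step 1 above translates directly into code. A minimal sketch of the two-level OA construction of Eqs. (3-6) and (3-7); the function name is illustrative:

```python
import math

def build_oa(D):
    """Two-level OA L_M(2^N) with M = 2^ceil(log2(D+1)) rows and
    N = M-1 columns, returned with levels 1/2 (step 4)."""
    u = math.ceil(math.log2(D + 1))
    M = 2 ** u
    N = M - 1
    L = [[0] * (N + 1) for _ in range(M + 1)]   # 1-based indexing
    for k in range(1, u + 1):                   # basic columns, Eq. (3-6)
        b = 2 ** (k - 1)
        for a in range(1, M + 1):
            L[a][b] = ((a - 1) // 2 ** (u - k)) % 2
    for k in range(2, u + 1):                   # other columns, Eq. (3-7)
        b = 2 ** (k - 1)
        for s in range(1, b):
            for a in range(1, M + 1):
                L[a][b + s] = (L[a][s] + L[a][b]) % 2
    return [[L[a][b] + 1 for b in range(1, N + 1)] for a in range(1, M + 1)]

print(build_oa(3))  # → [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]
```

For D = 3 this yields the L4(2^3) array; for a 30-dimensional problem it yields the L32(2^31) array used later in this chapter, with each column containing the two levels equally often.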
Step 2: Construct M test solutions Xj (1≤j≤M) by selecting the corresponding values from pBesti or nBesti according to the OA. Here, if the level value in the OA is 1, then the corresponding factor (dimension) takes its value from pBesti; otherwise, from nBesti.
Step 3: Evaluate each test solution Xj (1≤j≤M), and record the solution Xb with the best fitness.
Step 4: Calculate the effect of each level on each factor using Eq. (3-3) and determine the best level for each factor.
Step 5: Form a predictive solution Xp with the levels determined in Step 4 and evaluate Xp.
Step 6: Compare f(Xb) and f(Xp); the level combination of the better solution is used to construct the vector oBesti.
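The six steps can be sketched end to end. This self-contained sketch (with its own copy of the Step-1 OA builder, and illustrative function names) reproduces the Sphere example of Fig. 3-2; for a 30-dimensional problem the same code would use an L32(2^31) array:

```python
import math

def build_oa(D):
    """Step 1: two-level OA with M = 2^ceil(log2(D+1)) rows, levels 1/2."""
    u = math.ceil(math.log2(D + 1)); M = 2 ** u; N = M - 1
    L = [[0] * (N + 1) for _ in range(M + 1)]
    for k in range(1, u + 1):
        b = 2 ** (k - 1)
        for a in range(1, M + 1):                       # Eq. (3-6)
            L[a][b] = ((a - 1) // 2 ** (u - k)) % 2
        for s in range(1, b):
            for a in range(1, M + 1):                   # Eq. (3-7)
                L[a][b + s] = (L[a][s] + L[a][b]) % 2
    return [[L[a][b] + 1 for b in range(1, N + 1)] for a in range(1, M + 1)]

def construct_obest(pbest, nbest, f):
    """Steps 2-6: return the guidance mask (0 = pBest, 1 = nBest),
    for a minimization problem f."""
    D = len(pbest)
    rows = [row[:D] for row in build_oa(D)]             # first D columns

    def solution(row):                                  # Step 2
        return [pbest[d] if row[d] == 1 else nbest[d] for d in range(D)]

    costs = [f(solution(row)) for row in rows]          # Step 3
    xb_row = rows[costs.index(min(costs))]
    xp_row = []                                         # Step 4: Eq. (3-3)
    for d in range(D):
        s1 = [c for row, c in zip(rows, costs) if row[d] == 1]
        s2 = [c for row, c in zip(rows, costs) if row[d] == 2]
        xp_row.append(1 if sum(s1) / len(s1) <= sum(s2) / len(s2) else 2)
    # Steps 5-6: keep whichever of Xb and the predicted Xp is better.
    best_row = xp_row if f(solution(xp_row)) <= min(costs) else xb_row
    return [0 if lev == 1 else 1 for lev in best_row]

sphere = lambda x: sum(v * v for v in x)
mask = construct_obest([0, 2, 5], [5, 0, 1], sphere)
print(mask)  # → [0, 1, 1]
```

The resulting mask selects pBest on the first dimension and nBest on the other two, i.e. the exemplar oBesti = [0, 0, 1] of the Fig. 3-2 example.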
3.2.3 Orthogonal Learning Particle Swarm Optimization
The OL strategy is a generic operator and can be applied to any kind of topology structure. If the OL is used in the global version PSO, then nBesti is gBest; if it is used in the local version PSO, then nBesti is lBest. In either version, when constructing the vector oBesti, if pBesti is the same as nBesti (e.g., for the globally best particle, pBesti and gBest are identical vectors), the OED makes no contribution. In such a case, OLPSO randomly selects another particle’s pBestr and constructs oBesti from the information of pBesti and pBestr through the OED.
The flowchart of OLPSO is shown in Fig. 3-3. After initializing Vi and Xi, calculating pBesti and nBesti, and constructing the learning exemplar oBesti for each particle, the algorithm iterates for GENERATION generations. In each generation, ω = 0.9 – 0.5·gen/GENERATION, and each particle updates vid = ωvid + c·rd(oBestid – xid) and xid = xid + vid on every dimension, is kept within [Xmin, Xmax], and is evaluated. If f(Xi) < f(pBesti), then pBesti = Xi and stagnatedi is reset to 0; otherwise stagnatedi is incremented. If f(pBesti) < f(nBesti), then nBesti = pBesti. Whenever stagnatedi > G, the learning exemplar oBesti is reconstructed from pBesti and nBesti and stagnatedi is reset to 0.

Fig. 3-3 The flowchart of OLPSO.
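The control flow of Fig. 3-3 can be sketched compactly. In the sketch below, for brevity the exemplar is rebuilt as a random per-dimension mix of pBest and gBest instead of the full OED construction, so only the loop structure (stagnation counter G, exemplar reconstruction, linearly decreasing ω) matches OLPSO; the function name and parameters are illustrative:

```python
import random

def olpso_sketch(f, D, size=40, max_gen=200, G=5, c=2.0,
                 xmin=-100.0, xmax=100.0, seed=0):
    """Control-flow sketch of Fig. 3-3 (global version: nBest = gBest),
    minimizing f. The OED exemplar construction is simplified to a
    random per-dimension choice between pBest and gBest."""
    rng = random.Random(seed)
    X = [[rng.uniform(xmin, xmax) for _ in range(D)] for _ in range(size)]
    V = [[0.0] * D for _ in range(size)]
    pbest = [x[:] for x in X]
    pcost = [f(x) for x in X]
    g = min(range(size), key=lambda i: pcost[i])
    stagnated = [0] * size

    def exemplar(i):        # simplified stand-in for pBest ⊕ nBest
        return [pbest[i][d] if rng.random() < 0.5 else pbest[g][d]
                for d in range(D)]

    obest = [exemplar(i) for i in range(size)]
    for gen in range(max_gen):
        w = 0.9 - 0.5 * gen / max_gen
        for i in range(size):
            for d in range(D):
                V[i][d] = w * V[i][d] + c * rng.random() * (obest[i][d] - X[i][d])
                X[i][d] = min(max(X[i][d] + V[i][d], xmin), xmax)
            cost = f(X[i])
            if cost < pcost[i]:
                pbest[i], pcost[i] = X[i][:], cost
                stagnated[i] = 0
                if cost < pcost[g]:
                    g = i
            else:
                stagnated[i] += 1
            if stagnated[i] > G:        # rebuild the exemplar, reset counter
                obest[i] = exemplar(i)
                stagnated[i] = 0
    return pbest[g], pcost[g]
```

Substituting the OED-based construction for `exemplar` turns this skeleton into the OLPSO-G of this chapter; using a ring neighborhood for nBest instead of gBest gives OLPSO-L.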
3.3 Experimental Verification and Comparisons
3.3.1 Functions Tested
Sixteen benchmark functions, listed in Table 3-3, are used in the experimental tests. These benchmark functions are widely adopted in benchmarking global optimization algorithms [59][162][163]. In this chapter, the functions are divided into three groups. The first group includes 4 unimodal functions, where f1 and f2 are simple unimodal functions, f3 (Rosenbrock) is unimodal in a 2-dimensional or 3-dimensional search space but can be treated as a multimodal function in high-dimensional cases [173], and f4 includes a noisy perturbation. The second group includes 6 complex multimodal functions with high dimensionality. The last group includes 4 rotated multimodal functions and 2 shifted functions defined in [163].
All functions are minimization problems with D=30 and a global optimal value of 0, except for f15 and f16 whose values are 390 and -330, respectively, due to the shift of the global optimum. Table 3-3 gives the global optimal solution (Column 5). Moreover, a biased initialization range (Column 4) and an ‘Accept’ value (Column 6) are also defined for each test function. If a solution found by an algorithm falls between the acceptable value and the actual global optimum fmin (Column 5), the run is judged to be successful. It should be noted that f3, f8, f11, f12, f13, f14 and f15 are coupling functions.
3.3.2 Compared Algorithm Configurations
Various PSO algorithms, as detailed in Table 3-4, are used for comparison. The parameter configurations are all based on the suggestions in the corresponding references. The first two are the traditional GPSO [45] and LPSO [54]. The third is the ‘fully informed’ PSO (FIPS) [25], which uses all the neighbors to influence the flying velocity. The fourth is a ‘performance-improvement’ PSO that improves the acceleration coefficients, namely the hierarchical PSO with time-varying acceleration coefficients (HPSO-TVAC) [28]. The fifth is the dynamic multi-swarm PSO (DMS-PSO) [57], which is designed to improve the topological structure in a dynamic way. The sixth, CLPSO [59], aims to offer better performance on multimodal functions by using a CL strategy. The seventh, the OPSO [150] algorithm, aims to improve the algorithm by using an OED to generate a better position, not by constructing a learning exemplar as proposed in this chapter. These PSO variants are used
for comparison because they are typical PSOs that are reported to perform well on their studied problems. Moreover, they span a wide time interval from 1998 to 2008, witnessing the development of PSO in various aspects. For the OLPSO developed in this chapter, we implement the OL strategy in both the global and the local version of PSO, resulting in two OLPSO algorithms, OLPSO-G and OLPSO-L, respectively. Both will be compared
with GPSO, LPSO, FIPS, HPSO-TVAC, DMS-PSO, CLPSO, and OPSO. Table 3-3 Sixteen Test Functions Used in the Comparison
Type Test function Search Range
Initialization Range
Global Opt. x* Accept Name
Uni
mod
al
∑ == D
i ixxf1
21 )( [-100,100]D [-100,50]D {0}D 1×10-6 Sphere[162]
∑ ∏= =+= D
i
D
i ii xxxf1 12 )( [-10,10]D [-10,5]D {0}D 1×10-6 Schwefel’sP2[162]
1 2 2 23 11( ) [100( ) ( 1) ]D
i i iif x x x x−
+== − + −∑ [-10,10]D [-10,10]D {1}D 100 Rosenbrock[162]†
∑ =+= D
i i randomixxf1
44 )1,0[)( [-1.28,1.28]D [-1.28,0.64]
D {0}D 0.01 Noise[162]
Mul
timod
al
∑ =−×= Di ixixDxf 1 )sin(9829.418)(5 [-500,500]D [-500,500]D {420.96}D 2000 Schwefel[162]
∑ =+−= D
i ii xxxf1
26 ]10)2cos(10[)( π [-5.12,5.12]D [-5.12,2]D {0}D 100 Rastrigin[162]
exD
xD
xfD
i iD
i i ++−−−= ∑∑ ==20)2cos1exp( )12.0exp(20)(
112
7 π [-32,32]D [-32,16]D {0}D 1×10-6 Ackley [162]
∑ ∏= =+−= D
i
D
i ii ixxxf1 1
28 1)/cos(4000/1)( [-600,600]D [-600,200]D {0}D 1×10-6 Griewank [162]
∑
∑
=
+−
=
+−+
+−+=
D
i iD
iD
i i
xuy
yyyD
xf
12
1221
112
9
)4,100,10,(})1(
)](sin101[)1()(sin10{)( πππ
⎪⎩
⎪⎨
⎧
−<−−
≤≤−>−
=++=
axaxk
axaaxaxk
mkaxuxy
im
i
i
im
i
iii
,)(
,0 ,)(
),,,( ),1(41
1 where
[-50,50]D [-50,25]D {0}D 1×10-6
Generalized Penalized[162]
∑
∑
=
+−
=
++−+
+−+=
D
i iDD
iD
i i
xuxx
xxxxf
122
1221
112
10
)4,100,5,()]}2(sin1[)1(
)]3(sin1[)1()3({sin101)(
π
ππ [-50,50]D [-50,25]D {0}D 1×10-6
Rot
ated
and
Shi
fted
matrix orthogonalan is M ),96.420(*Mwherer
,96.420,otherwise ,0
500|| if ),||sin( where
1 ,9828.418)(11
−=′
+′=⎪⎩
⎪⎨⎧ ≤
=
∑ =−×=
xy
iyiyiyiyiyiz
i izDyf D
[-500,500]D [-500,500]D {420.96}D 5000 Rotated Schwefel[59]
matrix orthogonalan is M ,*M where
]10)2cos(10[)(1
212
xy
yyyfD
i ii
=
+−=∑ =π
[-5.12,5.12]D [-5.12,2]D {0}D 100 Rotated Rastrigin[59]
matrix orthogonalan is M ,*M where
20)2cos1exp( )12.0exp(20)(11
213
xy
eyD
yD
yfD
i iD
i i
=
++−−−= ∑∑ ==π [-32,32]D [-32,16]D {0}D 1×10-6 Rotated
Ackley[59]
Chapter 3 Orthogonal Learning Particle Swarm Optimization Based on Orthogonal Experiments Design Techique in Machine Learning
64
f14 Rotated Griewank [59]: f14(x) = (1/4000)·Σ_{i=1}^{D} y_i^2 − Π_{i=1}^{D} cos(y_i/√i) + 1, where y = M*x, M an orthogonal matrix; [-600,600]^D; [-600,200]^D; {0}^D; 1×10^-6
f15 Shifted Rosenbrock [163]: f15(x) = Σ_{i=1}^{D-1} [100(z_i^2 − z_{i+1})^2 + (z_i − 1)^2] + f_bias, where z = x − o + 1 and o = [o_1, o_2, ..., o_D] is the shifted global optimum; [-100,100]^D; [-100,100]^D; o; 490
f16 Shifted Rastrigin [163]: f16(x) = Σ_{i=1}^{D} [z_i^2 − 10cos(2πz_i) + 10] + f_bias, where z = x − o and o = [o_1, o_2, ..., o_D] is the shifted global optimum; [-5,5]^D; [-100,100]^D; o; -230
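To make the rows above concrete, a few of these benchmarks can be sketched directly in Python (a minimal sketch; the function names are mine, and the rotated/shifted variants would additionally apply the orthogonal matrix M or the shift vector o before evaluation):

```python
import math
import random

def rosenbrock(x):
    # f3: sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2, optimum 0 at {1}^D
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def noise(x):
    # f4: sum of i*x_i^4 plus uniform random noise in [0, 1)
    return sum((i + 1) * xi ** 4 for i, xi in enumerate(x)) + random.random()

def schwefel(x):
    # f5: 418.9829*D - sum of x_i*sin(sqrt(|x_i|)), optimum near {420.96}^D
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)

def rastrigin(x):
    # f6: sum of x_i^2 - 10*cos(2*pi*x_i) + 10, optimum 0 at {0}^D
    return sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)

def ackley(x):
    # f7: -20*exp(-0.2*sqrt(mean(x_i^2))) - exp(mean(cos(2*pi*x_i))) + 20 + e
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / d
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```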
Table 3-4 PSO Algorithms for Comparison
PSO algorithm Parameter configurations References
GPSO ω: 0.9~0.4, c1=c2=2.0, VMAXd=0.2×Range [45]
LPSO ω: 0.9~0.4, c1=c2=2.0, VMAXd=0.2×Range [54]
FIPS χ=0.729, Σci=4.1, VMAXd=0.5×Range [25]
HPSO-TVAC ω: 0.9~0.4, c1: 2.5~0.5, c2: 0.5~2.5, VMAXd=0.5×Range [28]
DMS-PSO ω: 0.9~0.2, c1=c2=2.0, m=3, R=5, VMAXd=0.2×Range [57]
CLPSO ω: 0.9~0.4, c=1.49445, m=7, VMAXd=0.2×Range [59]
OPSO ω: 0.9~0.4, c1=c2=2.0, VMAXd=0.5×Range [150]
OLPSO ω: 0.9~0.4, c=2.0, G=5, VMAXd=0.2×Range –
For a fair comparison among all the PSOs, they are tested using the same population size
of 40. Furthermore, all the algorithms use the same maximum number of function evaluations
(FEs), 2×10^5, in each run for each test function, as suggested in [163]. Note that the FEs consumed during the construction of the guidance exemplar oBesti in OLPSO are included in this maximum allowed number of FEs. Note also that an L32(2^31) OA is suitable for all the test functions because they are all 30-dimensional. To reduce statistical error, each algorithm is run 25 times independently on every function, and the mean results are used in the comparison.
3.3.3 Solution Accuracy with Orthogonal Learning Strategy
The solutions obtained by OLPSOs are compared with the ones obtained by PSOs
without OL strategy in Table 3-5. Table 3-5 compares the mean values and the standard
deviations of the solutions found. The best results are marked in boldface. The t-test results
between OLPSO-G and GPSO, and OLPSO-L and LPSO are also given, respectively.
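The t-tests in Table 3-5 compare two groups of 25 runs, giving 25 + 25 − 2 = 48 degrees of freedom. A minimal sketch of the pooled-variance two-sample t statistic (the helper name is mine):

```python
import math

def pooled_t_statistic(xs, ys):
    """Two-sample t statistic with pooled variance.

    With 25 runs per algorithm the degrees of freedom are
    25 + 25 - 2 = 48, matching the footnote of Table 3-5.
    """
    nx, ny = len(xs), len(ys)
    mx = sum(xs) / nx
    my = sum(ys) / ny
    # Unbiased sample variances of the two groups
    vx = sum((v - mx) ** 2 for v in xs) / (nx - 1)
    vy = sum((v - my) ** 2 for v in ys) / (ny - 1)
    # Pooled variance weighted by degrees of freedom
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(pooled * (1.0 / nx + 1.0 / ny))
```

A |t| larger than the critical value (about 2.01 at α = 0.05, two-tailed, 48 degrees of freedom) marks a significant difference, indicated by † in the table.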
1) Unimodal Functions
For the four unimodal functions, the results show that OLPSOs generally outperform the
traditional PSOs. For example, OLPSO-G does better than GPSO on functions f1, f2, and f3
whilst OLPSO-L outperforms LPSO on functions f1, f2, f3, and f4. The experimental results
show that the OL strategy brings solutions of much higher accuracy. For the very simple unimodal functions f1 and f2, OLPSO-G provides solutions of the highest quality. However, as the problem becomes more complex, or even becomes multimodal in high dimensions, as with the Rosenbrock function (f3), the performance of OLPSO-L is much better. This coincides with the general observation that a local-version PSO does better than a global-version PSO on complex problems, because a local-version PSO draws experience from locally best particles, as opposed to the interim global best, and hence avoids premature convergence, although it may converge more slowly. As for the Noise function (f4), OLPSO-G does not show an advantage, perhaps because the effect of the OL strategy is largely canceled out by the random fluctuation.
Table 3-5 Solutions Accuracy (Mean±Std) Comparisons Between PSOs With and Without OL Strategy
Function GPSO OLPSO-G t-Test LPSO OLPSO-L t-Test
f1 2.05×10-32±3.56×10-32 4.12×10-54±6.34×10-54 2.88† 3.34×10-14±5.39×10-14 1.11×10-38±1.28×10-38 3.10†
f2 1.49×10-21±3.60×10-21 9.85×10-30±1.01×10-29 2.07† 1.70×10-10±1.39×10-10 7.67×10-22±5.63×10-22 6.12†
f3 40.70±32.19 21.52±29.92 2.18† 28.08±21.79 1.26±1.40 6.14†
f4 9.32×10-3±2.39×10-3 1.16×10-2±4.10×10-3 -2.38† 2.28×10-2±5.60×10-3 1.64×10-2±3.25×10-3 4.96†
f5 2.48×103±2.97×102 3.84×102±2.17×102 28.53† 3.16×103±4.06×102 3.82×10-4±0 38.95†
f6 26.03±7.27 1.07±0.99 17.00† 35.07±6.89 0±0 25.46†
f7 1.31×10-14±2.08×10-15 7.98×10-15±2.03×10-15 8.80† 8.20×10-08±6.73×10-08 4.14×10-15±0 6.09†
f8 2.12×10-2±2.18×10-2 4.83×10-3±8.63×10-3 3.50† 1.53×10-3±4.32×10-3 0±0 1.77
f9 2.23×10-31±7.07×10-31 1.59×10-32±1.03×10-33 1.46 8.10×10-16±1.07×10-15 1.57×10-32±2.79×10-48 3.80†
f10 1.32×10-3±3.64×10-3 4.39×10-4±2.20×10-3 1.03 3.26×10-13±3.70×10-13 1.35×10-32±5.59×10-48 4.41†
f11 4.61×103±6.21×102 4.00×103±6.08×102 3.51† 4.50×103±3.97×102 3.13×103±1.24×103 5.28†
f12 60.02±15.98 46.09±12.88 3.39† 53.36±13.99 53.35±13.35 0.00
f13 1.93±0.96 7.69×10-15±1.78×10-15 10.01† 1.55±0.45 4.28×10-15±7.11×10-16 17.44†
f14 1.80×10-2±2.41×10-2 1.68×10-3±4.13×10-3 3.33† 1.68×10-3±3.47×10-3 4.19×10-8±2.06×10-7 2.42†
f15 427.93±54.98 424.75±34.80 0.24 432.33±43.41 415.95±23.96 1.65
f16 -223.18±38.58 -328.57±1.04 13.65† -234.95±18.82 -330±1.64×10-14 25.36†
† The value of t with 48 degrees of freedom is significant at α=0.05 by a two-tailed test between the two algorithms.
Fig. 3-4 Convergence progresses of PSOs with and without OL strategy on unimodal functions.
The plots in Fig. 3-4 show the convergence progress of the mean solution values over the 25 trials for functions f1 and f3. It is apparent that the OLPSOs outperform the traditional PSOs in terms of both final solution and convergence speed: the OLPSOs with the OL strategy converge considerably faster than the traditional GPSO and LPSO without it.
2) Multimodal Functions
As the OL strategy provides PSO with the ability to discover, preserve, and utilize useful information from the learning exemplars, OLPSO can be expected to avoid local optima and deliver improved performance on multimodal functions. Indeed, the experimental results for functions f5 to f10 in Table 3-5 support this intuition. OLPSO-G surpasses GPSO on all six multimodal functions. OLPSO-L yields the best performance among the four PSOs on all six multimodal functions, in terms of both mean solutions and standard deviations. In comparison, GPSO can only reach the global optimum on functions f7 and f9, and LPSO only on functions f7, f9, and f10. Best of all, OLPSO-L is able to find the global optimum on all six functions, and only OLPSO-L reaches the global optimum 0 on the Rastrigin's function (f6) and the Griewank's function (f8). These experimental results verify that the OLPSOs with the OL strategy can avoid local optima and robustly obtain the global optimum on multimodal functions.
The evolutionary progress of the PSOs in optimizing the multimodal functions f5 and f6 is plotted in Fig. 3-5. It can be observed that the OLPSOs are able to improve their solutions steadily over a long period without being trapped in local optima. OLPSO-L appears to exhibit the strongest search ability and converges to the global optimum 0 in about 1.5×10^5 FEs on the Rastrigin's function. The convergence curves on the Schwefel's function (f5) also show that OLPSO-L has a strong global search ability to avoid local optima.
Fig. 3-5 Convergence progresses of PSOs with and without OL strategy on multimodal functions.
3) Rotated and Shifted Functions
Functions f11 to f14 are multimodal functions with coordinate rotation, while f15 and f16 are shifted functions. To avoid biases toward specific rotations in the tests, a new rotation is computed before each of the 25 independent runs according to the Salomon method in [174]. Experimental results for the four rotated multimodal functions are also given in Table 3-5, and the evolutionary progress on f13 and f14 is plotted in Fig. 3-6. It appears that all the PSO algorithms are affected by the coordinate rotation. However, it is interesting to observe that the OLPSO algorithms can still reach the global optima of the rotated Ackley's function (f13) and the rotated Griewank's function (f14). All the PSOs are trapped on the rotated Schwefel's function (f11) and the rotated Rastrigin's function (f12), as these become much more difficult after coordinate rotation. However, OLPSOs still perform better than the traditional PSOs on these two problems. The experimental results also show that OLPSO-G and OLPSO-L outperform GPSO and LPSO, respectively, on the two shifted functions f15 and f16. Moreover, only OLPSO-L obtains the global optimum -330 on the shifted Rastrigin's function (f16). Overall, even though affected by the rotation and the shift, the comparisons still indicate that the OL strategy is beneficial to PSO performance, and OLPSOs generally perform better than the traditional PSOs.
Fig. 3-6 Convergence progresses of PSOs with and without OL strategy on rotated functions.
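Each of the 25 runs above uses a freshly generated rotation y = M·x with M orthogonal. The dissertation follows Salomon's procedure [174]; as an illustrative stand-in (not the exact procedure of [174]), the sketch below builds a random orthogonal matrix by Gram–Schmidt on Gaussian vectors, which likewise yields M with MᵀM = I and therefore preserves distances (function names are mine):

```python
import random

def random_orthogonal_matrix(d, seed=None):
    """Random d-by-d orthogonal matrix via Gram-Schmidt on Gaussian vectors."""
    rng = random.Random(seed)
    basis = []
    for _ in range(d):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        for u in basis:  # remove components along earlier (unit) vectors
            dot = sum(a * b for a, b in zip(v, u))
            v = [a - dot * b for a, b in zip(v, u)]
        norm = sum(a * a for a in v) ** 0.5
        basis.append([a / norm for a in v])
    return basis  # rows form an orthonormal set

def rotate(M, x):
    # y = M * x, applied before evaluating the base benchmark function
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]
```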
3.3.4 Convergence Speed with Orthogonal Learning Strategy
As the OL strategy provides a promising guidance exemplar oBesti, it is natural to expect that OLPSO reaches more accurate solutions with a faster convergence speed. To verify this, further experimental results are compared in Table 3-6. The results given there are the average FEs needed to reach the acceptable-accuracy thresholds specified
in Table 3-3. In addition, the success rate (SR%) over the 25 independent runs for each function is also compared. Note that the average FEs are calculated only over the runs that were 'successful'. As some algorithms may not reach the acceptable solution in every run on some problems, the success performance (SP) metric, defined as SP=(Average FEs)/(SR%) [163], is also compared in Table 3-6.
Table 3-6 Convergence Speed, Algorithm Reliability, and Success Performance Comparisons
Function GPSO (FEs, SR%, SP) OLPSO-G (FEs, SR%, SP) LPSO (FEs, SR%, SP) OLPSO-L (FEs, SR%, SP)
f1 134561 100 134561 89247 100 89247 161985 100 161985 98337 100 98337
f2 141262 100 141262 101698 100 101698 171962 100 171962 114441 100 114441
f3 126343 100 126343 78749 100 78749 137934 100 137934 92233 100 92233
f4 171048 60 285080 150238 40 375595 × 0 × 186351 4 4658775
f5 117710 8 1471375 40533 100 40533 × 0 × 51498 100 51498
f6 75274 100 75274 37783 100 37783 76061 100 76061 43635 100 43635
f7 152659 100 152659 109627 100 109627 189154 100 189154 126571 100 126571
f8 137576 32 429925 93336 68 137258.8 171756 80 214695 107217 100 107217
f9 128474 100 128474 80761 100 80761 153943 100 153943 90610 100 90610
f10 135620 88 154113.6 86667 96 90278.13 168060 100 168060 97534 100 97534
f11 77083 76 101425 54901 92 59675 89029 88 101169.3 54097 96 56351.04
f12 100215 100 100215 66023 100 66023 107072 100 107072 68809 100 68809
f13 163356 16 1020975 111961 100 111961 × 0 × 129946 100 129946
f14 146446 32 457643.8 112053 84 133396.4 186771 68 274663.2 137850 96 143593.8
f15 37203 84 44289.29 101632 96 105866.7 42935 84 51113.1 113317 100 113317
f16 4758 56 8496.429 37143 100 37143 16999 60 28331.67 43393 100 43393
Ave. SR 72.00% 92.25% 73.75% 93.50%

It can be observed from the table that OLPSO-G and OLPSO-L are consistently faster
than GPSO and LPSO, respectively, on the tested functions. This shows the advantage of the OL strategy in constructing a promising exemplar to guide the flying direction for faster optimization. Moreover, consistent with the fact that GPSO is generally faster than LPSO, OLPSO-G is observed to be faster than OLPSO-L and is also the fastest algorithm among the four contenders. Even the slower OLPSO-L (compared with OLPSO-G) still converges faster than GPSO (global version, but without the OL strategy) on most of the functions. For example, in solving the Sphere function (f1), average numbers of
FEs 134561 and 161985 are needed by GPSO and LPSO respectively to reach the acceptable
accuracy 1×10-6. However, OLPSO-G uses only 89247 FEs, which indicates that it is the
fastest algorithm. OLPSO-L uses 98337 FEs to obtain the solution, which is faster not only
than LPSO, but also than GPSO.
The success rates shown in Table 3-6 also indicate that the OL strategy is very promising in bringing high reliability to PSO. The OLPSOs achieve higher algorithm reliability, with a 100% success rate on most of the test functions, while the traditional PSOs are sometimes trapped on the multimodal, rotated, or shifted problems. Overall, OLPSO-L yields the highest average success rate, 93.50%, over all 16 functions, followed by OLPSO-G, LPSO, and GPSO.
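The three quantities reported per algorithm and function can be sketched as follows (a hypothetical helper; the success rate is used as a fraction, so that, e.g., 171048 mean FEs at SR = 60% gives SP = 285080 as in Table 3-6):

```python
def success_metrics(fes_per_run):
    """Mean FEs over successful runs, success rate, and SP = FEs/SR [163].

    fes_per_run holds, for each independent run, the FEs used to reach
    the acceptance threshold, or None if the run never reached it (a
    function where every run fails corresponds to the 'x' table entries).
    """
    successes = [f for f in fes_per_run if f is not None]
    if not successes:
        return None, 0.0, None          # total failure: no FEs, no SP
    sr = len(successes) / len(fes_per_run)
    mean_fes = sum(successes) / len(successes)
    return mean_fes, sr, mean_fes / sr
```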
The experimental results have demonstrated that the OL strategy can indeed provide much better guidance for the particles, helping them fly to promising regions faster. The OLPSOs with the OL strategy are more robust and reliable in solving global optimization problems.
3.3.5 Comparisons with Other PSOs
To verify the effectiveness and efficiency of the proposed OLPSO, the OLPSOs are compared with several other improved PSO variants, namely FIPS, HPSO-TVAC, DMS-PSO, CLPSO, and OPSO. The mean and standard deviation (SD) of the final solutions are given and compared in Table 3-7.
It can be observed that OLPSOs achieve the best solutions on most of the functions. FIPS performs best on the Noise function (f4) and the rotated Griewank's function (f14). DMS-PSO yields the best solution on the rotated Rastrigin's function (f12). CLPSO does best on the shifted Rosenbrock function (f15) and obtains the same best mean solution as OLPSO-L on the Schwefel's function (f5) and the shifted Rastrigin function (f16). OLPSO-G performs best on f1 and f2. Overall, OLPSO-L performs best on f3, f5, f6, f7, f8, f9, f10, f11, f13, and f16, i.e., 10 out of the 16 functions.
On the unimodal functions, OLPSO-G offers superior performance among all the PSOs. On the multimodal functions, OLPSO-L generally outperforms all the other PSO variants. On the coordinate-rotated and shifted functions, OLPSOs also generally do better than the other PSOs. OLPSO-G can still obtain the global optimum of the rotated Ackley function (f13), while OLPSO-L can obtain the global optima of both the rotated Ackley (f13) and rotated Griewank (f14) functions. Like the other PSOs, the OLPSO algorithms fail on the rotated Schwefel (f11) and Rastrigin (f12) functions, as these become much harder after rotation. However, OLPSO-L is still the best algorithm on f11, and its results are comparable with DMS-PSO's on f12. Only FIPS, DMS-PSO, OPSO, and our OLPSOs achieve the global optimum on f13; only FIPS and OLPSO-L achieve it on f14; and only CLPSO and OLPSO-L achieve it on f16.
Table 3-7 also ranks the algorithms in terms of mean solution accuracy. The final ranks show that OLPSO-L offers the best overall performance, with OLPSO-G second, followed by CLPSO, FIPS, HPSO-TVAC, DMS-PSO, OPSO, GPSO, and LPSO. Both FIPS and HPSO-TVAC rank fourth.
Table 3-7 Search Result Comparisons of PSOs on 16 Global Optimization Functions
Function GPSO LPSO FIPS HPSO-TVAC DMS-PSO CLPSO OPSO OLPSO-G OLPSO-L
f1 Mean 2.05×10-32 3.34×10-14 2.42×10-13 2.83×10-33 2.65×10-31 1.58×10-12 6.45×10-18 4.12×10-54 1.11×10-38
SD 3.56×10-32 5.39×10-14 1.73×10-13 3.19×10-33 6.25×10-31 7.70×10-13 4.64×10-18 6.34×10-54 1.28×10-38
Rank 4 7 8 3 5 9 6 1 2
f2 Mean 1.49×10-21 1.70×10-10 2.76×10-8 9.03×10-20 1.57×10-18 2.51×10-8 1.26×10-10 9.85×10-30 7.67×10-22
SD 3.60×10-21 1.39×10-10 9.04×10-9 9.58×10-20 3.79×10-18 5.84×10-9 5.58×10-11 1.01×10-29 5.63×10-22
Rank 3 7 9 4 5 8 6 1 2
f3 Mean 40.70 28.08 25.12 23.91 41.58 11.36 49.61 21.52 1.26
SD 32.19 21.79 0.51 26.51 30.25 9.85 36.54 29.92 1.40
Rank 7 6 5 4 8 2 9 3 1
f4 Mean 9.32×10-3 2.28×10-2 4.24×10-3 9.82×10-2 1.45×10-2 5.85×10-3 5.50×10-2 1.16×10-2 1.64×10-2
SD 2.39×10-3 5.60×10-3 1.28×10-3 3.26×10-2 5.05×10-3 1.11×10-3 1.70×10-3 4.10×10-3 3.25×10-3
Rank 3 7 1 9 5 2 8 4 6
f5 Mean 2.48×103 3.16×103 9.93×102 1.59×103 3.21×103 3.82×10-4 2.93×103 3.84×102 3.82×10-4
SD 2.97×102 4.06×102 5.09×102 3.26×102 6.51×102 1.28×10-07 5.57×102 2.17×102 0
Rank 6 8 4 5 9 2 7 3 1
f6 Mean 26.03 35.07 65.10 9.43 27.15 9.09×10-5 6.97 1.07 0
SD 7.27 6.89 13.39 3.48 6.02 1.25×10-4 3.07 0.99 0
Rank 6 8 9 5 7 2 4 3 1
f7 Mean 1.31×10-14 8.20×10-8 2.33×10-7 7.29×10-14 1.84×10-14 3.66×10-7 6.23×10-9 7.98×10-15 4.14×10-15
SD 2.08×10-15 6.73×10-8 7.19×10-8 3.00×10-14 4.35×10-15 7.57×10-8 1.87×10-9 2.03×10-15 0
Rank 3 7 8 5 4 9 6 2 1
f8 Mean 2.12×10-2 1.53×10-3 9.01×10-12 9.75×10-3 6.21×10-3 9.02×10-9 2.29×10-3 4.83×10-3 0
SD 2.18×10-2 4.32×10-3 1.84×10-11 8.33×10-3 8.14×10-3 8.57×10-9 5.48×10-3 8.63×10-3 0
Rank 9 4 2 8 7 3 5 6 1
f9 Mean 2.23×10-31 8.10×10-16 1.96×10-15 2.71×10-29 2.51×10-30 6.45×10-14 1.56×10-19 1.59×10-32 1.57×10-32
SD 7.07×10-31 1.07×10-15 1.11×10-15 1.88×10-29 1.02×10-29 3.70×10-14 1.67×10-19 1.03×10-33 2.79×10-48
Rank 3 7 8 5 4 9 6 2 1
f10 Mean 1.32×10-3 3.26×10-13 2.70×10-14 2.79×10-28 2.64×10-3 1.25×10-12 1.46×10-18 4.39×10-4 1.35×10-32
SD 3.64×10-3 3.70×10-13 1.57×10-14 2.18×10-28 4.79×10-3 9.45×10-13 1.33×10-18 2.20×10-3 5.59×10-48
Rank 8 5 4 2 9 6 3 7 1
f11 Mean 4.61×103 4.50×103 4.41×103 5.32×103 4.04×103 4.39×103 4.48×103 4.00×103 3.13×103
SD 6.21×102 3.97×102 9.94×102 7.00×102 5.68×102 3.51×102 1.03×103 6.08×102 1.24×103
Rank 8 7 5 9 3 4 6 2 1
f12 Mean 60.02 53.36 1.50×102 52.90 41.97 87.14 63.78 46.09 53.35
SD 15.98 13.99 14.48 12.54 9.74 10.76 19.73 12.88 13.35
Rank 6 5 9 3 1 8 7 2 4
f13 Mean 1.93 1.55 3.16×10-7 9.29 2.42×10-14 5.91×10-5 1.49×10-8 7.69×10-15 4.28×10-15
SD 0.96 0.45 1.00×10-7 2.07 1.52×10-14 6.46×10-5 6.36×10-9 1.78×10-15 7.11×10-16
Rank 8 7 5 9 3 6 4 2 1
f14 Mean 1.80×10-2 1.68×10-3 1.28×10-8 9.26×10-3 1.02×10-2 7.96×10-5 1.28×10-3 1.68×10-3 4.19×10-8
SD 2.41×10-2 3.47×10-3 4.29×10-8 8.80×10-3 1.24×10-2 7.66×10-5 3.70×10-3 4.13×10-3 2.06×10-7
Rank 9 5 1 7 8 3 4 6 2
f15 Mean 427.93 432.33 424.83 494.20 502.51 403.07 2.45×107 424.75 415.94
SD 54.98 43.41 25.37 96.54 95.18 13.50 4.40×107 34.80 23.96
Rank 5 6 4 7 8 1 9 3 2
f16 Mean -223.18 -234.95 -245.77 -318.33 -303.17 -330 -284.11 -328.57 -330
SD 38.58 18.82 22.08 5.75 5.01 3.39×10-5 13.62 1.04 1.64×10-14
Rank 9 8 7 4 5 2 6 3 1
Total rank 97 104 89 89 91 76 96 50 28
Ave. rank 6.06 6.50 5.56 5.56 5.69 4.75 6.00 3.13 1.75
Final rank 8 9 4 4 6 3 7 2 1
Table 3-8 Convergence Speed, Algorithm Reliability, and Success Performance Comparisons Among
Different PSO Variants
Function GPSO LPSO FIPS HPSO-TVAC DMS-PSO CLPSO OPSO OLPSO-G OLPSO-L
f1 FEs 134561 161985 118306 63982 138103 139554 92908 89247 98337
SR% 100 100 100 100 100 100 100 100 100
SP 134561 161985 118306 63982 138103 139554 92908 89247 98337
f2 FEs 141262 171962 165502 78944 147644 172886 134640 101698 114441
SR% 100 100 100 100 100 100 100 100 100
SP 141262 171962 165502 78944 147644 172886 134640 101698 114441
f3 FEs 126343 137934 48456 50628 125573 108669 71654 78749 92233
SR% 100 100 100 100 100 100 100 100 100
SP 126343 137934 48456 50628 125573 108669 71654 78749 92233
f4 FEs 171048 × 91081 × 194220 133550 × 150238 186351
SR% 60 0 100 0 24 100 0 40 4
SP 285080 × 91081 × 809250 133550 × 375595 4658775
f5 FEs 117710 × 133646 56683 104422 65429 43200 40533 51498
SR% 8 0 100 92 4 100 4 100 100
SP 1471375 × 139214.6 61611.96 2610550 65429 1080000 40533 51498
f6 FEs 75274 76061 79421 6096 74803 44000 24768 37783 43635
SR% 100 100 100 100 100 100 100 100 100
SP 75274 76061 79421 6096 74803 44000 24768 37783 43635
f7 FEs 152659 189154 183341 102496 162400 190767 155088 109627 126571
SR% 100 100 100 100 100 100 100 100 100
SP 152659 189154 183341 102496 162400 190767 155088 109627 126571
f8 FEs 137576 171756 133787 66965 141489 167486 110232 93336 107217
SR% 32 80 100 28 56 100 80 68 100
SP 429925 214695 133787 239160.7 252658.9 167486 137790 137258.8 107217
f9 FEs 128474 153943 94368 74033 137909 124779 77587 80761 90610
SR% 100 100 100 100 100 100 100 100 100
SP 128474 153943 94368 74033 137909 124779 77587 80761 90610
f10 FEs 135620 168060 107315 75483 145063 138209 86716 86667 97534
SR% 88 100 100 100 76 100 100 96 100
SP 154113.6 168060 107315 75483 190872.4 138209 86716 90278.13 97534
f11 FEs 77083 89029 115196 66394 109220 128544 31920 54901 54097
SR% 76 88 68 28 96 100 60 92 96
SP 101425 101169.3 169405.9 237121.4 113770.8 128544 53200 59675 56351.04
f12 FEs 100215 107072 × 8208 88935 146299 107942 66023 68809
SR% 100 100 0 100 100 92 100 100 100
SP 100215 107072 × 8208 88935 159020.7 107942 66023 68809
f13 FEs 163356 × 187032 × 169314 × 161856 111961 129946
SR% 16 0 100 0 100 0 100 100 100
SP 1020975 × 187032 × 169314 × 161856 111961 129946
f14 FEs 146446 186771 150433 105910 163996 × 161083 112053 137850
SR% 32 68 100 32 36 0 88 84 96
SP 457643.8 274663.2 150433 330968.8 455544.4 × 183048.9 133396.4 143593.8
f15 FEs 37203 42935 75137 129660 140749 129159 75960 101632 113317
SR% 84 84 92 48 56 100 24 96 100
SP 44289.29 51113.1 81670.65 270125 251337.5 129159 316500 105866.7 113317
f16 FEs 4758 16999 98131 27875 57607 39619 25459 37143 43393
SR% 56 60 68 100 100 100 100 100 100
SP 8496.429 28331.67 144310.3 27875 57607 39619 25459 37143 43393
Ave. SR 72.00% 73.75% 89.00% 70.50% 78.00% 87.00% 78.50% 92.25% 93.50%
SR rank 8 7 3 9 6 4 5 2 1
In order to compare the convergence speed, algorithm reliability, and success
performance, Table 3-8 gives the mean FEs to reach the acceptable accuracy among the
successful runs, the success rate, and the success performance. The results show that FIPS and HPSO-TVAC converge very fast on most of the functions. In particular, on the unimodal functions, HPSO-TVAC is fastest on f1 and f2, and FIPS is fastest on f3 and f4. In addition, HPSO-TVAC converges fast on some multimodal functions, but the rapid convergence rate does not ensure that HPSO-TVAC achieves the global optimum. For example, HPSO-TVAC has a low success rate on some multimodal functions, which limits the practical value of the algorithm.
However, OLPSOs do much better in reaching the global optima robustly, as measured by the success rate. Even though the OLPSOs are sometimes slower than HPSO-TVAC, they are still generally faster than many other PSOs.
Moreover, the OLPSOs generally outperform the contenders with higher success rates. One interesting observation is the 'total failure' of some algorithms on some functions, that is, the algorithm fails to obtain an acceptable solution in any of the 25 runs. For example, the success rates of LPSO on f4, f5, and f13 are 0, FIPS fails on f4 and f13, CLPSO cannot achieve acceptable solutions on f13 and f14, and OPSO fails on f4. In contrast, OLPSO-G and OLPSO-L successfully obtain acceptable solutions on all functions. OLPSO-G achieves a 100% success rate on f1, f2, f3, f5, f6, f7, f9, f12, f13, and f16, i.e., 10 functions in total, while OLPSO-L achieves a 100% success rate on 13 functions: f1, f2, f3, f5, f6, f7, f8, f9, f10, f12, f13, f15, and f16. In conclusion, OLPSO-L has the highest average success rate of 93.50%, OLPSO-G has the second highest at 92.25%, followed by FIPS, CLPSO, OPSO, DMS-PSO, LPSO, GPSO, and HPSO-TVAC.
Overall, given OLPSO's strong performance in convergence speed, algorithm reliability, and success performance, we conclude that OLPSO is an effective and efficient global optimization algorithm.
3.3.6 Comparisons with Other Evolutionary Algorithms
The proposed OLPSOs are further compared with some state-of-the-art evolutionary algorithms (EAs) in Table 3-9. These include fast evolutionary programming (FEP) with Cauchy mutation (1999) [162], the orthogonal GA with quantization (OGA/Q) (2001) [167], the estimation of distribution algorithm with local search (EDA/L) (2004) [175], the evolution strategy with covariance matrix adaptation (CMA-ES) (2005) [176], and adaptive differential evolution (JADE) with optional external archive (2009) [177]. As OLPSO-L generally outperforms OLPSO-G in global optimization, we only
compare OLPSO-L with these algorithms in Table 3-9. The results of the compared
algorithms are all derived directly from their corresponding references except that the results
of CMA-ES are obtained by our independent experiments on these functions based on the
provided source code [176].
OGA/Q is an EA with orthogonal initialization and orthogonal crossover. It yields good performance and does best on 5 of the 10 functions, indicating the advantages of the OED method. Our OLPSO-L also does best on 5 of the 10 functions. Specifically, OGA/Q appears to be better than OLPSO-L on unimodal functions (e.g., f1 and f2), while OLPSO-L does better than OGA/Q on some of the multimodal functions (e.g., f9 and f10). CMA-ES does best on the Rosenbrock function (f3), and JADE does best on the Noise function (f4). The results show that OLPSO-L is very competitive with these state-of-the-art EAs, especially given its strong global search ability on multimodal functions. OLPSO-L works best on f5, f9, and f10, ties for best with OGA/Q, EDA/L, and JADE on f6, ties for best with OGA/Q and EDA/L on f8, and is second best on f7.
Table 3-9 Result Comparisons of OLPSO-L and Some State-of-the-Art Evolutionary Computation Algorithms, with the Existing Results Reported in the Corresponding References
Func FEP [162] OGA/Q [167] EDA/L [175]† CMA-ES [176]‡ JADE [177] OLPSO-L
f1 5.7×10-4±1.3×10-4 0±0 N/A 4.54×10-16±1.13×10-16 1.3×10-54±9.2×10-54 1.11×10-38±1.28×10-38
f2 8.1×10-3±7.7×10-4 0±0 N/A 2.32×10-3±9.51×10-3 3.9×10-22±2.7×10-21 7.67×10-22±5.63×10-22
f3 5.06±5.87 0.75±0.11 4.324×10-3 2.33×10-15±7.73×10-16 0.32±1.1 1.26±1.40
f4 7.6×10-3±2.6×10-3 6.30×10-3±4.07×10-4 N/A 5.92×10-2±1.73×10-2 6.8×10-4±2.5×10-4 1.64×10-2±3.25×10-3
f5 14.98±52.6♠ 3.03×10-2±6.447×10-4♠ 2.9×10-3♠ 3.15×103±5.79×102 7.1±28 3.82×10-4±0
f6 4.6×10-2±1.2×10-2 0±0 0 1.76×102±13.89 0±0 0±0
f7 1.8×10-2±2.1×10-3 4.440×10-16±3.989×10-17 4.141×10-15 12.12±9.28 4.4×10-15±0 4.14×10-15±0
f8 1.6×10-2±2.2×10-2 0±0 0±0 9.59×10-16±3.51×10-16 2.0×10-4±1.4×10-3 0±0
f9 9.2×10-6±3.6×10-6 6.019×10-6±1.159×10-6 3.654×10-21 1.63×10-15±4.93×10-16 1.6×10-32±5.5×10-48 1.57×10-32±2.79×10-48
f10 1.6×10-4±7.3×10-5 1.869×10-4±2.615×10-5 3.485×10-21 1.71×10-15±3.70×10-16 1.4×10-32±1.1×10-47 1.35×10-32±5.59×10-48
† The standard deviation is not available in [175]; N/A means the results are not available.
‡ The results of CMA-ES are obtained by our own independent experiments on these functions.
♠ The mean value of f5 has been offset by 418.9829×D so that the global optimal value equals 0.
3.3.7 Parameter Analysis
To investigate the influence of G on the performance of the OLPSO algorithm, empirical studies are carried out on representative functions, namely the Sphere, Rosenbrock, Schwefel, Rastrigin, Ackley, and Griewank functions, listed in Table 3-3 as f1, f3, f5, f6, f7, and f8, respectively. Parameter G controls the update frequency of the guidance vector oBesti. As discussed in the previous subsection, a particle uses the vector oBesti steadily as its learning exemplar and reconstructs oBesti only after pBesti has stagnated for G
generations. As can be imagined, if G is too small, the particles reconstruct the guidance exemplar oBesti frequently. This may waste computation on OED when reconstruction is not actually necessary, and the search direction will not be steady if oBesti changes frequently. On the other hand, if G is too large, the particles may waste much computation around local optima with an oBesti that is no longer effective. To further analyze the effect of G, values of G from 0 to 10 are tested on the two OLPSO versions based on a global topology (OLPSO-G) and a local topology (OLPSO-L). The results, averaged over 25 independent runs, are shown in Fig. 3-7(a) and Fig. 3-7(b) for OLPSO-G and OLPSO-L, respectively. The figures reveal that a value of G around 5 offers the best performance. This also indicates that OLPSO indeed benefits from the OL strategy through the steady guidance of a promising learning exemplar. Therefore, a reconstruction gap of G=5 is used in this chapter.
Fig. 3-7 OLPSO performance with different values of G. (a) OLPSO-G. (b) OLPSO-L.
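The stagnation-gap logic can be sketched as follows (a minimal sketch: the class and callback names are mine, velocity clamping and the inertia-weight schedule are omitted, and construct_obest stands for the OED-based reconstruction of oBesti; the velocity rule follows Eq. (3-4), v = ω·v + c·r·(oBesti − x)):

```python
import random

def ol_velocity_update(v, x, obest, w, c):
    # Eq. (3-4): a single guidance exemplar oBest replaces pBest and nBest
    return [w * vd + c * random.random() * (od - xd)
            for vd, xd, od in zip(v, x, obest)]

class OLParticle:
    def __init__(self, x, v, fitness, construct_obest, G=5):
        self.fitness, self.construct_obest = fitness, construct_obest
        self.x, self.v = list(x), list(v)
        self.pbest, self.pbest_f = list(x), fitness(x)
        self.G, self.stagnation = G, 0
        self.obest = construct_obest(self)   # initial guidance exemplar

    def step(self, w=0.7, c=2.0):
        self.v = ol_velocity_update(self.v, self.x, self.obest, w, c)
        self.x = [xd + vd for xd, vd in zip(self.x, self.v)]
        f = self.fitness(self.x)
        if f < self.pbest_f:                 # minimization: pBest improved
            self.pbest, self.pbest_f = list(self.x), f
            self.stagnation = 0
        else:
            self.stagnation += 1
        if self.stagnation >= self.G:        # pBest stagnated G generations:
            self.obest = self.construct_obest(self)  # rebuild oBest via OED
            self.stagnation = 0
```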
3.3.8 Discussions
Experimental results and comparisons verify that the OL strategy indeed helps the
OLPSOs perform better than the traditional PSOs and most existing improved PSO variants
on most of the test functions, in terms of solution accuracy, convergence speed, and
algorithm reliability. OLPSOs offer not only better performance in global optimization, but also a finer-grained search ability, owing to the OL strategy that can discover, preserve, and utilize useful information from the search experience.
The OPSO in [150] also uses the OED method to improve algorithm performance. The particle in OPSO applies OED to both the cognitive and social learning components to construct the position for the next move, and the particle velocity is obtained as the difference between the new position and the current position. In contrast, our
proposed OLPSO emphasizes the learning strategy and uses OED to design an OL strategy. The OL strategy uses OED to construct a promising and efficient exemplar to guide the particle's flight. OLPSO works within the framework of traditional PSO, except that a particle in OLPSO learns from its constructed guidance exemplar oBesti instead of pBesti and nBesti, i.e., it uses Eq. (3-4) instead of Eq. (3-5). Therefore, the useful information in pBesti and nBesti can be discovered and preserved through the OL strategy.
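The OED construction itself can be sketched with a tiny two-level orthogonal array (an illustrative L4(2^3) for D = 3; the dissertation uses L32(2^31) for the 30-dimensional benchmarks, and the helper names are mine). Level 1 takes a dimension from pBesti, level 2 from nBesti; factor analysis then predicts the better level per dimension:

```python
# L4(2^3): four trial combinations covering three two-level factors.
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

def construct_obest(pbest, nbest, fitness, oa=L4):
    def combine(row):
        # Level 1 -> take dimension from pBest, level 2 -> from nBest
        return [p if lvl == 1 else n
                for lvl, p, n in zip(row, pbest, nbest)]
    trials = [(fitness(combine(row)), row) for row in oa]
    # Factor analysis: per dimension, pick the level with the lower
    # mean fitness over the OA trials (minimization).
    best_levels = []
    for d in range(len(pbest)):
        m1 = [f for f, row in trials if row[d] == 1]
        m2 = [f for f, row in trials if row[d] == 2]
        best_levels.append(1 if sum(m1) / len(m1) <= sum(m2) / len(m2) else 2)
    predicted = combine(tuple(best_levels))
    # Keep the predicted combination unless a tested row was better.
    best_f, best_row = min(trials)
    return predicted if fitness(predicted) <= best_f else combine(best_row)
```

With pBest = [1, 9, 1] and nBest = [9, 1, 9] on the Sphere function, the four OA trials plus the factor analysis recover the exemplar [1, 1, 1], even though no single tested row contains it.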
OLPSO benefits from three advantages. First, since only one learning exemplar oBesti is used, the guidance is steadier, which weakens the 'oscillation' phenomenon. Second, as oBesti is constructed via OED on pBesti and nBesti, useful information can be discovered and preserved to predict a promising region for guiding the particle, easing the 'two steps forward, one step back' phenomenon. Third, as oBesti is used steadily as the learning exemplar until it fails to improve the particle's fitness for G generations, it can guide the particle towards the promising region steadily, resulting in better global search performance. The experimental results and comparisons support these advantages.
The comparisons between OLPSO-G and OLPSO-L show that on unimodal functions OLPSO-G outperforms OLPSO-L in both accuracy and speed, whilst on multimodal functions OLPSO-L achieves better final solution accuracy. This may be because OLPSO-L is based on the local-version PSO, which provides better diversity and avoids premature convergence. Nevertheless, OLPSO-L also does very well on unimodal functions and outperforms most of the existing PSOs. Hence, OLPSO-L is the recommended global optimizer here. Moreover, the comparisons with state-of-the-art EAs show that OLPSO-L is generally better than, or at least comparable to, these EA variants.
3.4 Chapter Summary
In this chapter, we introduced the orthogonal experiment design and orthogonal prediction techniques from machine learning into PSO and presented a new orthogonal learning particle swarm optimization algorithm by designing an OL strategy to discover useful information from a particle's personal best position pBesti and its neighborhood's best position nBesti. Comprehensive experiments have been conducted on 16 benchmarks including unimodal, multimodal, coordinate-rotated, and shifted functions. The experimental results demonstrate the high effectiveness and efficiency of the OL strategy and the OLPSO algorithms. The resulting OLPSO-G and OLPSO-L algorithms both significantly outperform
Chapter 3 Orthogonal Learning Particle Swarm Optimization Based on Orthogonal Experiment Design Technique in Machine Learning
other existing PSO algorithms on most of the functions tested, contributing to higher solution
accuracy, faster convergence speed, and stronger algorithm reliability. Comparisons are also
made with some state-of-the-art EAs, and the OLPSO algorithm shows very promising
performance.
The features and advantages of the proposed OL strategy and OLPSO algorithms are:
1) Only one guidance vector is used in the OL strategy to guide the flight of the particles, which avoids the 'oscillation' phenomenon in traditional PSO caused by the guidance of two vectors, provides sustained and stable guidance information, and ensures the convergence of the algorithm.
2) As an orthogonal prediction technique based on OED, the OL strategy helps a particle discover useful information from its personal best position pBesti and its neighborhood's best position nBesti and construct a more promising and efficient guidance exemplar oBesti to adjust its velocity and direction, which eases the 'two steps forward, one step back' phenomenon, offers rapid and correct directional guidance to the particles, and accelerates convergence to the global optimum.
3) OL is an operator that can be applied to PSO with any topological structure, such as the star (global version), the ring (local version), the wheel, and the von Neumann structures. Without loss of generality, we applied it to both the global and local versions of PSO, yielding the novel OLPSO-G and OLPSO-L algorithms, to verify that the OL strategy can discover useful information and use it to enhance solution accuracy, convergence rate, and algorithm reliability.
4) OL follows the same simple algorithmic framework as PSO and introduces only one parameter, G, which controls the update frequency of the guidance vector, so the algorithm remains easy to understand and implement. Thus, the OL strategy and the OLPSO algorithms not only retain the simplicity of traditional PSO but also greatly improve its performance.
In conclusion, this chapter introduced the orthogonal experiment design and orthogonal prediction techniques into PSO and designed an OL strategy and OLPSO algorithm that are easy to understand and implement. The OL strategy strongly promotes the rapid global convergence of PSO and improves its performance on complex multimodal optimization problems. OL and OLPSO are an important and successful exploration of machine-learning-aided PSO algorithms.
Chapter 4 Multiple Populations for Multiple Objectives: A
Co-evolutionary Technique for Solving Multi-objective
Optimization Problems based on Ensemble Learning Technique in
Machine Learning
4.1 Introduction
Particle swarm optimization (PSO) has been widely applied in many optimization areas due to its simple algorithmic procedure and effective performance. An active research trend in PSO is to extend it to multi-objective optimization problems (MOPs) [178].
Multi-objective optimization problems (MOPs) have received considerable attention over the past several decades because of their significance in a large number of real-world applications [179][180]. An MOP has multiple objectives that often conflict with each other, so the optima of an MOP form a group of solutions rather than a single point. Without any preference of the decision maker among the objectives, every solution in the group is optimal. For example, if solution A is better than solution B on the first objective while solution B is better than solution A on the second, the two solutions are incomparable. Thus, algorithms addressing an MOP should obtain a group of solutions covering the entire Pareto front (PF). As the population-based nature of evolutionary computation (EC) algorithms matches the requirement for a set of solutions, many researchers have extended EC algorithms to MOPs [181][182][183]. However, when using a multi-objective evolutionary algorithm (MOEA) to solve an MOP, a key problem is how to select good individuals for the next generation. Since the objectives often conflict, it is difficult to say whether one individual is better than another if it is better on one objective but worse on another. This is the fitness assignment problem encountered by MOEA researchers. As EC algorithms derive from the concept of "survival of the fittest" in Darwinian natural selection, the search becomes inefficient if the fitness assignment problem is not addressed. Therefore, one of the most significant research topics in the MOEA field is to design a suitable method to assign an individual's fitness [184][185].
To overcome the fitness assignment difficulty in MOEAs, various techniques have been proposed in the literature, such as the objective aggregation technique, the objective alternation technique, and Pareto-based techniques. The objective aggregation technique combines the objectives into a single objective via weights and then optimizes the resulting single objective. However, this technique requires the user to determine the weights for the different objectives, and only one solution can be obtained per run. The objective alternation technique sorts the objectives according to their importance and optimizes them alternately. However, the ordering of the objectives may affect the performance significantly, while determining the importance of different objectives is often problem dependent. The third popular technique applies Pareto dominance to rank the individuals and assign fitness to them. The domination rank technique may be helpful for approximating the Pareto front (PF). However, as Pareto dominance is a partial order, it is difficult to select individuals for the next generation, and the obtained solutions may still not spread along the whole PF if the selection operator cannot maintain sufficient diversity. Therefore, developing MOEAs that can assign an individual's fitness easily while keeping enough diversity to approximate the whole true PF remains a challenging research topic in the MOEA community.
A survey of the published literature shows that most multi-objective evolutionary algorithms (MOEAs) and multi-objective PSOs (MOPSOs) adopt the paradigm of a single population for multiple objectives, so each individual in the population must track all objectives, which leads to the fitness assignment difficulty mentioned above. As an MOP has multiple objectives, can we use multiple populations, instead of only one, to optimize it?
Ensemble learning (EL) is a popular technique in machine learning. Researchers have found that a single classifier often cannot efficiently classify training data of various types; the main idea of EL is therefore to use multiple classifiers to improve learning performance [186], and studies have reported that EL achieves higher efficiency than single-classifier methods [187]. Inspired by the idea of EL, this chapter asks: since an MOP has multiple objectives, and it is difficult to consider all of them as a whole in one population, can we treat them separately in different populations? Combining the idea of ensemble learning in machine learning with the design of MOEAs and MOPSOs, a novel technique termed multiple
populations for multiple objectives (MPMO) is proposed in this chapter. The idea of the MPMO technique is novel and differs from the techniques above: instead of tackling all the objectives together with a single population, MPMO uses multiple populations to deal with the multiple objectives, with each population corresponding to only one objective and all populations cooperating to approximate the whole PF.
Fig. 4-1 illustrates the framework of an MPMO-based algorithm with M populations for solving an MOP with M objectives. In every generation, the individuals in each population evaluate all the objective functions, as in traditional MOP algorithms. However, when executing evolutionary operators such as selection, the fitness of an individual in the mth population is assigned by the mth objective function of the MOP, where 1≤m≤M. In this way, the individuals are no longer confused by the different conflicting objectives but are guided by their corresponding objective to search different regions of the PF. However, as each population focuses on optimizing only one objective, MPMO may lead the individuals of each population to the margin of the corresponding objective, resulting in an inefficient approximation of the whole PF. Therefore, another feature of MPMO is that it requires the algorithm to design an information sharing strategy, as shown in Fig. 4-1, through which the different populations can share their search information and communicate with each other to approximate the whole PF efficiently.
Fig. 4-1 Framework of MPMO based algorithm for solving MOP.
The proposed MPMO framework, inspired by ensemble learning, is thus a basis for designing co-evolutionary multi-population algorithms for multiple objectives. Over the last two decades, the co-evolution concept has also been used by scientists in the EC community [188][189][190]. However, multiple populations for multiple objectives based on co-evolution has rarely been reported. Being a general technique, it is straightforward to
implement an MPMO algorithm, which can accommodate any existing single-objective optimization algorithm in each population. In this chapter, considering that particle swarm optimization (PSO) is a simple yet powerful global optimizer with very fast convergence, we adopt PSO for each swarm and design a co-evolutionary multi-swarm PSO (CMPSO) as an instantiation of MPMO for solving MOPs.
Based on the MPMO framework, CMPSO uses an external shared archive to implement the information sharing strategy, with two novel designs developed to enhance the algorithm's performance. The first is to modify the particle velocity update equation with information obtained from the externally shared archive. The shared archive stores the non-dominated solutions found by the different swarms and is updated every generation. The velocity and position of a particle are updated by considering not only its personal experience and its swarm's experience, but also the experience fetched from the archive. Therefore, all the swarms can share their search information thoroughly via the shared archive, which helps the algorithm accelerate the approximation of the whole PF. The second design is to utilize an elitist learning strategy (ELS) when updating the archive, in order to introduce sufficient diversity and avoid local PFs. This may be helpful for MOPs with multimodal objective functions or with complicated Pareto sets.
Inspired by the ensemble learning technique in machine learning, this chapter designs a framework termed multiple populations for multiple objectives and proposes a co-evolutionary multi-swarm PSO termed CMPSO. The innovations and advantages of the CMPSO algorithm are as follows.
1) Different from existing algorithms that treat an MOP as a whole by considering all the objectives together in one population, a framework termed multiple populations for multiple objectives (MPMO) is designed based on the idea of ensemble learning. Under the MPMO framework, each swarm is optimized by taking only one objective into account, and the different swarms then cooperate with each other to approximate the whole PF efficiently.
2) MPMO is a general technique applicable to many evolutionary algorithms. This chapter adopts PSO for each population, proposing a co-evolutionary multi-swarm PSO (CMPSO) and designing an external shared archive to implement the information sharing and co-evolution strategy among the populations.
3) CMPSO develops two novel designs: modifying the particle velocity update equation to accelerate convergence, and using an elitist learning strategy (ELS) in the archive update process to avoid local optima.
4.2 Multi-objective Optimization Problem
4.2.1 Related concepts of MOP
A minimization MOP can be described as follows:
Minimize F(X) = {F1(X), F2(X), …, FM(X)} (4-1)
where X = [X1, X2, …, XD] ∈ ℜD is a point in the D-dimensional decision space (search space) and F = [F1, F2, …, FM] ∈ ΩM is a point in the objective space with M minimization objectives. For a minimization MOP as in (4-1), some definitions are given as follows:
Definition 1: Pareto domination. Given two vectors U = [U1, U2, …, UM] ∈ ΩM and W = [W1, W2, …, WM] ∈ ΩM in the objective space, U is said to dominate (be better than) W if Um≤Wm for all m=1, 2, …, M, and U≠W.
Definition 2: Pareto optimal. Given a vector X = [X1, X2, …, XD] ∈ ℜD in the decision
space. We say that X is Pareto optimal if there is no X* ∈ ℜD such that F(X*) dominates
F(X).
Definition 3: Pareto set. The Pareto set PS is defined as
PS = {X ∈ ℜD and X is Pareto optimal} (4-2)
Definition 4: Pareto front. The Pareto front PF is defined as:
PF = {F(X) | X ∈ PS} (4-3)
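Definition 1 translates directly into code. The following minimal check (function name illustrative) implements Pareto domination for the minimization case:

```python
def dominates(u, w):
    """Pareto domination (Definition 1, minimization): U dominates W if
    Um <= Wm for every objective m and U != W."""
    return all(um <= wm for um, wm in zip(u, w)) and tuple(u) != tuple(w)

# (1, 2) dominates (2, 2): no worse on both objectives and better on one.
# (1, 3) and (2, 2) are incomparable: each is better on one objective,
# which is exactly why Pareto dominance is only a partial order.
```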
4.2.2 Related work on MOP
This section reviews the related work on MOPs, describing the design and applications of MOEAs and MOPSOs.
Many algorithms, such as MOEAs and MOPSOs, have been proposed for solving MOPs. Some researchers used an aggregation approach [191]: the multiple objectives are weighted and summed to form a single objective, and the resulting single-objective problem is optimized. However, the weights for the different objectives depend on the problem or on the decision makers and are therefore difficult to determine. Moreover, the aggregation approach obtains only one solution per run, which is insufficient in practical applications. To overcome these disadvantages, Zhang and Li [185] proposed decomposing the MOP into different scalar optimization sub-problems and applying different weights to these sub-problems to obtain a set of Pareto solutions in a single run. Some other
researchers proposed to optimize the objectives alternately when solving MOP [56]. However,
as the approach used in [56] involves using two objectives for determining the neighbors and
the neighborhood best particle respectively, it seems to be useful only in MOP with two
objectives. Moreover, determining the importance of different objectives is problem
dependent and the ordering of the objectives may affect the performance significantly.
Other researchers applied the concept of Pareto dominance to solve MOP. The
multi-objective genetic algorithm (GA) assigns a rank to each individual according to the
number of other individuals that dominate it [192]. Then the fitness is assigned to the
individual based on its rank. The niched Pareto GA compares pairs of individuals using the Pareto domination tournament strategy [193].
algorithm (NSGA-II) sorts all the individuals according to the Pareto domination relationship
and selects individuals with better ranks to form the next generation population [184].
Moreover, many MOPSOs adopt the Pareto domination concept when assigning the fitness
value of the individuals [29][194], e.g., when determining the personal historically best
position [29] or selecting the globally best position [194].
Recently, some new work has been reported to use new techniques to help solve MOP
more efficiently. In the studies of Rachmawati and Srinivasan [195], Karahan and Koksalan
[196], and Zitzler et al. [197], the preference information is used in MOEAs for better
selecting individuals for the next generation. In designing efficient MOEAs, Wang et al. [198] used a hybrid technique, Adra et al. [199] used a convergence acceleration operator, Avigad and Moshaiov [200] used an interactive concept, Lara et al. [201] used a hill climber with a sidestep local search strategy, Song and Kusiak [202] used a data mining process, and
Zhang et al. [203] used a Gaussian process model. Moreover, some multi-objective
optimization algorithms based on memetic algorithm (MA) [204], quantum GA [205],
differential evolution (DE) [206][207], and estimation of distribution algorithm (EDA) [208]
have also been proposed in recent years.
4.3 CMPSO for MOP
4.3.1 CMPSO Evolutionary Process
Suppose there are M objectives in the MOP; accordingly, M swarms work concurrently in CMPSO to optimize the MOP. The evolutionary process in each swarm is similar to that of a conventional PSO used to optimize a single-objective
problem. Without loss of generality, we consider only one of the M swarms, indexed by m, to describe the evolutionary process.
In the initialization, each particle i in the mth swarm randomly initializes its velocity Vi^m and position Xi^m, evaluates Xi^m, and sets pBesti^m equal to Xi^m. In every generation of the evolutionary process, each particle i in the mth swarm updates its velocity and position under the influence of its personal historically best position pBesti^m, the swarm's globally best position gBest^m, and an archive position Archivei^m = [Ai1^m, Ai2^m, …, AiD^m] selected from the archive by particle i. The velocity update is

Vid^m = ω·Vid^m + c1·r1d·(pBestid^m − Xid^m) + c2·r2d·(gBestd^m − Xid^m) + c3·r3d·(Aid^m − Xid^m) (4-4)

where d is the index of the dimension and r1d, r2d, and r3d are random values in the range [0, 1]. The position update is

Xid^m = Xid^m + Vid^m (4-5)

In the velocity update equation, the term c3·r3d·(Aid^m − Xid^m) is the sharing information
from the shared archive. With the help of solution information in the shared archive, the
particle can use the search information not only from its own swarm, but also from other
swarms. The particle is expected to search along the whole PF by using the whole search
information of all the swarms instead of being attracted to the margin only by the search
information of its own swarm. Therefore, the algorithm can approximate the whole PF quickly with the help of the archived information. Archivei^m is chosen by randomly selecting a solution from the shared archive. Random selection is fast, maintains high diversity, and has low computational cost, so it is adopted in this chapter. Note that if the archive is empty, Archivei^m is selected randomly from the gBest positions of the other M−1 swarms, excluding the mth swarm itself.
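A sketch of one particle's update under Eqs. (4-4) and (4-5) follows; the inertia weight and acceleration coefficients are illustrative assumptions, not values prescribed by the dissertation, and the function name is hypothetical:

```python
import random

def cmpso_update(V, X, pbest, gbest, arc, w=0.7298, c1=2.0, c2=2.0, c3=2.0):
    """Eq. (4-4)/(4-5): the usual pBest and gBest terms plus a third term
    pulling the particle toward a solution drawn from the shared archive."""
    for d in range(len(X)):
        r1, r2, r3 = random.random(), random.random(), random.random()
        V[d] = (w * V[d]
                + c1 * r1 * (pbest[d] - X[d])
                + c2 * r2 * (gbest[d] - X[d])
                + c3 * r3 * (arc[d] - X[d]))  # information-sharing term
        X[d] = X[d] + V[d]                    # Eq. (4-5)
    return V, X
```

If the archive is empty, `arc` would instead be a gBest taken from another swarm, as described above.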
4.3.2 CMPSO Archive Update
CMPSO uses an external archive to store the non-dominated solutions from all the swarms. This archive is used not only to store the non-dominated solutions reported at the end of the evolution, as in traditional MOP algorithms [209], but also for information sharing among the different swarms. Therefore, the particles in all the swarms can access the information in the archive and use it to guide their flight, as indicated in Eq. (4-4). Denoted as A, the archive is initialized to be empty and is updated in every generation.
Research shows that it is better to use an archive with a fixed maximal size because the
number of non-dominated solutions may increase very quickly. Therefore, in CMPSO the archive has a maximal size NA, and the size of the archive in the current generation is denoted na. At the end of every generation, the archive A is updated. The archive update process is as follows:
Step 1: A new set S is initialized to be empty. Then the pBest of each particle in each swarm is added to the set S.
Step 2: All the solutions in the old archive A are added into the set S.
Step 3: The ELS is performed on each solution in the archive A to form a new solution. All the new solutions are then added into the set S. After the above operations, the set S holds (N×M+2×na) solutions, where N and M are the population size of a swarm and the number of swarms (i.e., of objectives), respectively, and na is the number of solutions in the old archive A.
Step 4: A non-dominated solutions determining procedure is performed on the set S to
determine all the non-dominated solutions and store them in a set R.
Step 5: If the size of R (the number of non-dominated solutions) is not larger than NA, all the non-dominated solutions are stored in the new archive A and na is set to the size of R. Otherwise, all the non-dominated solutions are sorted according to density, and the NA least crowded ones are stored in the new archive A, with na set to NA.
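Steps 1-5 can be sketched as follows; solutions are represented by their objective vectors, and `els` and `crowding_sort` are placeholder names standing in for the ELS and density-based selection procedures, not identifiers from the dissertation:

```python
def dominates(u, w):
    # Minimization: no worse on every objective and not identical.
    return all(a <= b for a, b in zip(u, w)) and u != w

def update_archive(pbests, archive, NA, els, crowding_sort):
    """Archive update, Steps 1-5: pool the pBests, the old archive, and an
    ELS-perturbed copy of each archive member; keep the non-dominated ones;
    truncate to NA by density if necessary."""
    S = list(pbests) + list(archive) + [els(a) for a in archive]       # Steps 1-3
    R = [s for i, s in enumerate(S)
         if not any(dominates(t, s) for j, t in enumerate(S) if j != i)]  # Step 4
    if len(R) > NA:                                                    # Step 5
        R = crowding_sort(R)[:NA]                                      # least crowded first
    return R
```

The set S here has exactly N×M + 2×na entries, matching the count given in Step 3.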
The following parts give the details of the elitist learning strategy procedure, the
non-dominated solutions determining procedure, and the density based selection procedure.
1) Elitist Learning Strategy
The ELS was first introduced in the adaptive PSO of Chapter 2.3.3 to help the globally best particle jump out of possible local optima. In this chapter, we perform the ELS on all the solutions in the archive because they are all globally best solutions of CMPSO. The pseudo-code of the procedure is presented in Fig. 4-2(a) and is described as follows.
For each solution Ai in the archive A, the new solution Ei is first set equal to Ai; then a random dimension d is selected and perturbed with Gaussian noise as

Eid = Eid + (Xmax,d − Xmin,d)·Gaussian(0, 1) (4-6)

where Xmax,d and Xmin,d are the upper and lower bounds of the dth dimension, respectively, and Gaussian(0, 1) is a random value generated from a Gaussian distribution with mean 0 and standard deviation 1.
After the perturbation, Eid is checked and guaranteed to be in the search range [Xmin,d, Xmax,d]; if it falls outside, Eid is set to the corresponding bound. All the objectives of the new solution Ei are then calculated, and Ei is added into the set S.
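A minimal sketch of this ELS step (Eq. 4-6 plus the bound check); the function and variable names are illustrative:

```python
import random

def elitist_learning(a, x_min, x_max):
    """Copy archive solution a, perturb one random dimension with Gaussian
    noise scaled by the search range, and clamp the result to the bounds."""
    e = list(a)
    d = random.randrange(len(e))
    e[d] += (x_max[d] - x_min[d]) * random.gauss(0.0, 1.0)
    e[d] = min(max(e[d], x_min[d]), x_max[d])  # keep Eid inside [Xmin,d, Xmax,d]
    return e
```

Only one dimension is perturbed, which keeps the new solution close to the elite it came from while still injecting diversity into the archive.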
Fig. 4-2 The archive update process.
2) Non-dominated Solutions Determining
This procedure determines the non-dominated solutions in a given solution set S.
The procedure is described as follows and its pseudo-code is given in Fig. 4-2(b). A set R is
used to store the non-dominated solutions and it is initialized to be empty. Then for each
solution i in the set S, the procedure checks whether the solution i is dominated by any other
solution j. If solution i is not dominated by any other solution, then solution i is added into
the set R. This checking process is performed on all the solutions in the set S and all the
non-dominated solutions can be determined and stored in the set R.
3) Density Based Selection
This procedure is performed if the size of R is larger than the maximal archive size NA. Its function is to select the NA least crowded solutions into the new archive. The procedure is given in Fig. 4-2(c) and is described as
follows. Given a set of solutions R, the distance of each solution is initialized to be zero. Then
all the solutions are sorted according to each objective value, from the smallest to the largest.
For each objective, the boundary solutions, that is, the solutions with the smallest value and
the largest value on this objective are assigned an infinite distance value. The distance of the
other solutions is increased by the absolute normalized difference of the objective values
between the two adjacent solutions.
After the density estimation, every solution in the set R has been assigned a distance, and the NA solutions with the largest distances are selected into the new archive A.
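The density estimate described above (boundary solutions get infinite distance; interior ones accumulate the normalized gap between their two neighbours on each objective) can be sketched as follows, with an illustrative function name:

```python
def crowding_distances(R):
    """Density estimate for a list R of objective vectors (minimization)."""
    n, M = len(R), len(R[0])
    dist = [0.0] * n
    for m in range(M):
        order = sorted(range(n), key=lambda i: R[i][m])   # ascending on objective m
        lo, hi = R[order[0]][m], R[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float('inf')   # boundary solutions
        if hi > lo:
            for k in range(1, n - 1):
                # normalized difference between the two adjacent solutions
                dist[order[k]] += (R[order[k + 1]][m] - R[order[k - 1]][m]) / (hi - lo)
    return dist

# Selecting the NA least crowded indices:
#   sorted(range(len(R)), key=lambda i: -dist[i])[:NA]
```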
4.3.3 Complete CMPSO
Under the MPMO framework, CMPSO adopts PSO for each swarm and optimizes the M objectives using M co-evolutionary swarms to solve the MOP. The complete CMPSO algorithm is given in Fig. 4-3.
Fig. 4-3 The complete flowchart of CMPSO.
4.3.4 Complexity Analysis of CMPSO
The complexity analysis of CMPSO is as follows.
First, according to the flowchart in Fig. 4-3, the calculation process of CMPSO is similar to that of traditional PSO except that CMPSO uses M populations. Assuming the size of each population is N, the computational complexity of all particles in each generation is O1 = O(N×M).
Second, from Fig. 4-2, the computational complexity of the archive update operation in each generation comprises three parts:
A) Let |S| denote the size of the set S; the complexity of constructing S is O(|S|).
B) Another important step of the archive update is to determine the non-dominated solutions in the set S and store them in a set R, as shown in Fig. 4-2(b). The complexity of this step is O(|S|²).
C) The final step is the density-based selection shown in Fig. 4-2(c). As this operation is performed on R, let |R| denote the size of R; the complexity of this step is Oc = O(M×|R|²) + O(|R|²), where O(M×|R|²) and O(|R|²) are the complexities of the distance calculation and of the distance-based sorting, respectively.
Therefore, the complexity of the archive update is O2 = O(|S|) + O(|S|²) + O(M×|R|²) = O(|S|²) + O(M×|R|²). Since both |S| and |R| are linear in the archive size NA, O2 can be further written as O2 = O(M×NA²).
Overall, the complexity of CMPSO in each generation is OCMPSO = O1 + O2 = O(N×M) + O(M×NA²) = O(M×(N+NA²)). As the population size and the archive size are comparable, OCMPSO can be simplified to OCMPSO = O(M×(N+NA²)) = O(M×NA²). If the maximum number of generations of CMPSO is G, the overall complexity of CMPSO is OCMPSO = G×O(M×NA²) = O(G×M×NA²).
From the above analysis, the complexity of CMPSO is dominated by the density-based selection process, i.e., O(M×NA²). In other multi-objective algorithms such as NSGA-II, the complexity mainly lies in the non-dominated sorting process; hence the complexity of NSGA-II is ONSGA-II = G×O(M×N²) with M objectives and population size N [184].
From the complexity expressions of CMPSO and NSGA-II, the complexities of the two algorithms can both be expressed as O(G×M×NA²) or O(G×M×N²) due to the comparability of the population size N and the archive size NA. That is to say, although using
multiple populations, the complexity of CMPSO is no higher than that of traditional multi-objective evolutionary algorithms.
4.4 Experimental Verification and Comparisons
4.4.1 Test Problems
Various test problems have been proposed in the literature to evaluate multi-objective optimization algorithms. First, we adopt the most frequently used problems ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 from the ZDT test set [210]. However, some researchers argue that the ZDT problems lack characteristics such as variable linkages, multimodal objective functions, and varied PF shapes. Therefore, we also adopt DTLZ1 and DTLZ2 from the DTLZ test set [211], and WFG1, WFG2, WFG3, and WFG4 from the WFG test set [212], which have multimodal objective functions and non-separable variables. More recently, Zhang et al. [213] proposed the UF test set, whose problems have complicated Pareto sets. In this chapter, we further choose all the two-objective unconstrained problems UF1-UF7 to
evaluate the algorithm performance.

Table 4-1 Characteristics of the Test Problems

Name | Dim. | Search Range | Global Optima | Note (U: unimodal, M: multimodal)
ZDT1 | 30 | xi∈[0, 1], 1≤i≤D | x1∈[0, 1], xi=0, 2≤i≤D | Convex; F1 U, F2 U
ZDT2 | 30 | xi∈[0, 1], 1≤i≤D | x1∈[0, 1], xi=0, 2≤i≤D | Concave; F1 U, F2 U
ZDT3 | 30 | xi∈[0, 1], 1≤i≤D | x1∈[0, 1], xi=0, 2≤i≤D | Convex, disconnected; F1 U, F2 M
ZDT4 | 10 | x1∈[0, 1], xi∈[−5, 5], 2≤i≤D | x1∈[0, 1], xi=0, 2≤i≤D | Concave; F1 U, F2 M
ZDT6 | 10 | xi∈[0, 1], 1≤i≤D | x1∈[0, 1], xi=0, 2≤i≤D | Concave, non-uniform; F1 M, F2 M
DTLZ1 | 10 | xi∈[0, 1], 1≤i≤D | x1∈[0, 1], xi=0.5, 2≤i≤D | Linear; F1 M, F2 M
DTLZ2 | 10 | xi∈[0, 1], 1≤i≤D | x1∈[0, 1], xi=0.5, 2≤i≤D | Concave; F1 U, F2 U
WFG1 | 10 | zi∈[0, 2i], 1≤i≤D | zi∈[0, 2i], 1≤i≤k; zi=0.35, k+1≤i≤D | Convex, mixed; F1 U, F2 U
WFG2 | 10 | zi∈[0, 2i], 1≤i≤D | zi∈[0, 2i], 1≤i≤k; zi=0.35, k+1≤i≤D | Convex, disconnected; F1 U, F2 M
WFG3 | 10 | zi∈[0, 2i], 1≤i≤D | zi∈[0, 2i], 1≤i≤k; zi=0.35, k+1≤i≤D | Linear, degenerated; F1 U, F2 U
WFG4 | 10 | zi∈[0, 2i], 1≤i≤D | zi∈[0, 2i], 1≤i≤k; zi=0.35, k+1≤i≤D | Concave; F1 M, F2 M
UF1 | 30 | x1∈[0, 1], xi∈[−1, 1], 2≤i≤D | x1∈[0, 1], xi=sin(6πx1+iπ/D), 2≤i≤D | Convex; F1 M, F2 M
UF2 | 30 | x1∈[0, 1], xi∈[−1, 1], 2≤i≤D | x1∈[0, 1]; xi=(0.3x1²cos(24πx1+4iπ/D)+0.6x1)cos(6πx1+iπ/D) if i∈J1, (0.3x1²cos(24πx1+4iπ/D)+0.6x1)sin(6πx1+iπ/D) if i∈J2, 2≤i≤D | Convex; F1 M, F2 M
UF3 | 30 | xi∈[0, 1], 1≤i≤D | x1∈[0, 1], xi=x1^(0.5(1.0+3(i−2)/(D−2))), 2≤i≤D | Convex; F1 M, F2 M
UF4 | 30 | x1∈[0, 1], xi∈[−2, 2], 2≤i≤D | x1∈[0, 1], xi=sin(6πx1+iπ/D), 2≤i≤D | Concave; F1 M, F2 M
UF5 | 30 | x1∈[0, 1], xi∈[−1, 1], 2≤i≤D | (F1, F2)=(i/2N, 1−i/2N), 0≤i≤2N, N=10 | Scattered; F1 M, F2 M
UF6 | 30 | x1∈[0, 1], xi∈[−1, 1], 2≤i≤D | F1∈∪i=1..N [(2i−1)/2N, 2i/2N], F2=1−F1, N=2 | Linear, disconnected; F1 M, F2 M
UF7 | 30 | x1∈[0, 1], xi∈[−1, 1], 2≤i≤D | x1∈[0, 1], xi=sin(6πx1+iπ/D), 2≤i≤D | Linear; F1 M, F2 M
Chapter 4 Multiple Populations for Multiple Objectives: A Co-evolutionary Technique for Solving Multi-objective Optimization Problems based on Ensemble Learning Technique in Machine Learning
In total, 18 test problems (5 ZDT, 2 DTLZ, 4 WFG, and 7 UF) are used, and their characteristics are described in Table 4-1. The problems cover different dimensionalities (10 and 30), different objective functions (unimodal and multimodal), different PFs (scattered, linear, convex, concave, disconnected, non-uniform, and mixed), and different Pareto set shapes (simple and complicated). Therefore, they are comprehensive and useful for testing the algorithm's performance from different aspects. For more details of the problems, please refer to [210], [211], [212], and [213] for ZDT, DTLZ, WFG, and UF, respectively.
4.4.2 Performance Metric
The inverted generational distance (IGD) indicator is adopted in this chapter as the performance metric because it reflects both the convergence and the diversity of the obtained solutions with respect to the true PF. The indicator has been widely adopted and strongly recommended in the MOP community in recent years [213]. Assume that the set of non-dominated solutions obtained by an algorithm is A and that a set of solutions uniformly sampled along the true PF is P; then IGD(A, P) is calculated as

IGD(A, P) = (1/|P|) Σ_{i=1}^{|P|} d(Pi, A)    (4-7)
where |P| is the size of the set P and d(Pi, A) denotes the distance between the solution Pi and
the solution in the set A that is nearest to Pi, measured by the Euclidean distance in the
objective space. The IGD indicator assumes that the true PF is known. In this chapter, we sample 500 uniformly distributed points along the PF to form the set P for each problem. Intuitively, if the non-dominated solutions in the set A have a good spread along the true PF, then the IGD indicator will have a small value.
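As a concrete illustration, the IGD computation of Eq. (4-7) can be sketched in a few lines of Python (a minimal sketch; in practice P would be the 500 points sampled along the true PF, and the example fronts below are toy data):

```python
import math

def igd(P, A):
    """Inverted generational distance of Eq. (4-7): mean Euclidean distance
    from each reference point in P (sampled along the true PF) to its
    nearest solution in the obtained set A, in the objective space."""
    total = 0.0
    for p in P:
        total += min(math.dist(p, a) for a in A)
    return total / len(P)

# Toy example: a reference front sampled on f2 = 1 - f1
P = [(i / 10, 1 - i / 10) for i in range(11)]
A = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]  # obtained non-dominated set
print(igd(P, A))  # small value: A spreads reasonably well along the PF
```

A set A that covers the reference front exactly yields IGD = 0; both poor convergence and poor spread increase the value, which is why the indicator captures both aspects at once.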
4.4.3 Experimental Settings
In this chapter, we compare the results obtained by CMPSO with not only MOEAs, but
also MOPSOs. The MOEAs include NSGA-II [184], generalized DE 3 (GDE3) [206], and
MOEA with decomposition and DE operators (MOEA/D-DE) [207], while the MOPSOs
include multi-objective comprehensive learning PSO (MOCLPSO) [214], optimized MOPSO
(OMOPSO) [215], and VEPSO [216]. These algorithms are chosen because NSGA-II and
MOCLPSO are two state-of-the-art algorithms, GDE3 and MOEA/D-DE are two recently proposed, well-performing MOEAs, OMOPSO is a salient MOPSO according to a recent comparative study [217], and VEPSO is a MOPSO that also uses multiple populations. Therefore, these algorithms are representative and make the comparisons more comprehensive and convincing.
The parameters of the above algorithms are set according to the proposals in their corresponding references, as summarized in Table 4-2. For CMPSO, we adopt the common configuration in which the inertia weight in Eq. (4-4) linearly decreases from 0.9 to 0.4 and the acceleration coefficients c1, c2, and c3 are set to 4.0/3. We set this value for ci because the sum of the ci values is usually 4.0 in PSO. As CMPSO uses different swarms to optimize different objectives, we set a relatively small population size of 20 for each swarm. In order to make the comparisons fair, all seven algorithms have the same archive size of 100. The non-dominated solutions in the archive are updated and used to calculate the IGD value in every generation according to Eq. (4-7), and are reported at the end of the algorithm run.
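The particle update just described can be sketched as follows. Eq. (4-4) is not reproduced in this excerpt, so the sketch assumes the standard three-term PSO form, with the three acceleration terms pulling toward the personal best, the swarm's own best, and an archive solution aBest; the function name and the bound-clamping are illustrative assumptions:

```python
import random

W_START, W_END = 0.9, 0.4   # inertia weight linearly decreases 0.9 -> 0.4
C = 4.0 / 3                 # c1 = c2 = c3, so the coefficients sum to 4.0

def update_particle(x, v, pbest, gbest, abest, gen, max_gen, lo, hi):
    """One CMPSO-style update (assumed form of Eq. (4-4)): standard PSO
    terms toward pbest and the swarm's gbest, plus a third term toward an
    archive solution aBest that shares search information across swarms."""
    w = W_START - (W_START - W_END) * gen / max_gen  # linear decrease
    new_x, new_v = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + C * random.random() * (pbest[d] - x[d])
              + C * random.random() * (gbest[d] - x[d])
              + C * random.random() * (abest[d] - x[d]))
        new_v.append(vd)
        new_x.append(min(max(x[d] + vd, lo), hi))  # clamp to search range
    return new_x, new_v
```

Because each swarm optimizes a single objective, gbest here is that swarm's own best, while aBest injects information about the other objectives through the shared archive.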
Note that when solving different kinds of MOPs, different population sizes and different maximal numbers of function evaluations (FEs) are used [184][213]. The population sizes in Table 4-2 are used when solving the ZDT problems, and the maximal number of FEs is set to 25000 [184]. However, when solving the more difficult DTLZ and WFG problems, the population size is set to 200 for all the algorithms (except CMPSO, which still uses a population size of 20 for each swarm) and the maximal number of FEs is 1×105. When solving the complicated UF problems, all the algorithms (CMPSO still uses a population size of 20 for each swarm) use a population size of 300 and a maximal number of FEs of 3×105 [213]. The impact of population size on the CMPSO performance will be investigated in Section 4.4.8. Moreover, the experimental results are the average values of 30 independent runs. The best results are denoted in bold font. The results obtained by the different algorithms are compared with those of CMPSO by Wilcoxon's rank sum test with significance level α=0.05.
Table 4-2 Parameter Settings of the Algorithms
NSGA-II: N=100, px=0.9, pm=1/D, ηc=20, ηm=20
GDE3: N=100, CR=0.0, F=0.5
MOEA/D-DE: N=100, CR=1.0, F=0.5, η=20, pm=1/D, T=20, δ=0.9, nr=2
MOCLPSO: N=50, pc=0.1, pm=0.4, ω=0.9→0.2, c=2.0
OMOPSO: N=100, ω=rand(0.1, 0.5), c1=rand(1.5, 2.0), c2=rand(1.5, 2.0)
VEPSO: N=100, χ=0.729, c1=c2=2.05, M=6
CMPSO: N=20 per swarm, ω=0.9→0.4, c1=c2=c3=4.0/3
4.4.4 Experimental Results on ZDT Problems
Table 4-3 compares the results on the ZDT problems. The results show that CMPSO is promising in dealing with ZDT problems with both convex and concave PFs. It performs best on ZDT1, which has a convex PF, and on ZDT2, which has a concave PF. It also performs the second best on ZDT6 (only slightly worse than MOCLPSO), which has a non-uniform concave PF. Moreover, CMPSO is very promising (the third best) on ZDT3, whose PF is disconnected and convex.
Table 4-3 Results Comparisons on the ZDT Problems
Function NSGA-II GDE3 MOEA/D-DE MOCLPSO OMOPSO VEPSO CMPSO
ZDT1 Mean 5.00×10-3 1.27×10-2 0.16 4.80×10-3 7.02×10-3 0.31 4.13×10-3
Std 2.33×10-4 1.56×10-3 1.93×10-2 1.76×10-4 7.83×10-4 3.39×10-2 8.30×10-5
Rank 3 − 5 − 6 − 2 − 4 − 7 − 1
ZDT2 Mean 0.19 2.97×10-2 0.23 0.38 6.06×10-3 0.33 4.32×10-3
Std 0.28 1.82×10-2 3.07×10-2 0.30 3.81×10-4 0.11 1.03×10-4
Rank 4 − 3 − 5 − 7 − 2 − 6 − 1
ZDT3 Mean 1.54×10-2 1.16×10-2 0.23 5.49×10-3 2.30×10-2 0.60 1.39×10-2
Std 2.71×10-2 2.24×10-3 2.17×10-2 2.49×10-4 5.99×10-3 9.40×10-2 3.49×10-3
Rank 4 − 2 + 6 − 1 + 5 − 7 − 3
ZDT4 Mean 0.29 0.34 0.31 3.26 16.47 26.75 0.79
Std 0.40 0.37 0.23 1.35 4.12 6.06 0.26
Rank 1 + 3 + 2 + 5 − 6 − 7 − 4
ZDT6 Mean 6.22×10-3 7.36×10-2 1.54 3.69×10-3 4.61×10-3 0.41 3.72×10-3
Std 7.02×10-4 9.16×10-2 0.13 1.31×10-4 3.36×10-4 0.19 1.47×10-4
Rank 4 − 5 − 7 − 1 ≈ 3 − 6 − 2
Final Rank Total 16 18 26 16 20 33 11
Final 2 4 6 2 5 7 1
Better−Worse -3 -1 -3 -2 -5 -5
'+', '−', and '≈' indicate that the results of the algorithm are significantly better than, worse than, and similar to those of CMPSO by Wilcoxon's rank sum test with α=0.05.
As CMPSO has the best performance on ZDT1 and ZDT2, whose objectives are all
unimodal, this indicates that CMPSO has a strong convergence ability to approximate the PFs of MOPs with simple objectives. Table 4-3 also shows that all the MOPSOs are beaten by the MOEAs on ZDT4. This may be caused by the local PFs of ZDT4, which involves the multimodal Rastrigin function. However, CMPSO is still the best among the four MOPSOs, and only CMPSO obtains results comparable with the MOEAs. This indicates that CMPSO has the ability to avoid local PFs caused by complex objectives. Moreover, when the general performance over all the problems is considered, CMPSO is the winner because it has the best average rank among the seven contenders over all five problems. The Wilcoxon's rank sum tests also indicate that CMPSO significantly outperforms all six competitors on the ZDT set problems.
Fig. 4-4 visualizes the final non-dominated solutions obtained by the different algorithms in all the 30 runs when solving some of the ZDT problems. Notice that some algorithms have similar performance on the same problems; therefore, we use only one algorithm as the representative. For example, the solutions obtained by NSGA-II, GDE3, MOCLPSO, OMOPSO, and CMPSO on ZDT1 are not evidently different, so only the solutions obtained by CMPSO are plotted in Fig. 4-4(a) for clarity. The figures in Fig. 4-4 show that the solutions obtained by CMPSO not only approximate the whole PF well, but also form a good spread along it. Fig. 4-4(c) and Fig. 4-4(d) show the obtained solutions to ZDT4. As MOCLPSO, OMOPSO, and VEPSO perform very poorly on this problem, their solutions are not plotted in the figures.
Fig. 4-4 The final non-dominated solutions of the ZDT problems in all the 30 runs.
4.4.5 Experimental Results on DTLZ and WFG Problems
The results on the DTLZ and WFG problems are presented and compared in Table 4-4. The results show that CMPSO performs competitively with GDE3, NSGA-II, and MOEA/D-DE on DTLZ1 and performs the best on DTLZ2. Moreover, CMPSO yields the best IGD values on all four WFG problems. In general, CMPSO has the best average rank over all six DTLZ and WFG problems, followed by the other MOPSOs and then the MOEAs. The statistics from the Wilcoxon's rank sum tests also confirm that CMPSO performs significantly better than all six contenders. As these problems have mixed, disconnected, or degenerated PFs, the good performance of CMPSO indicates that it is promising not only on MOPs with simple PFs like the ZDT problems, but also on MOPs with complex PFs like the DTLZ and WFG problems.
Fig. 4-5 further confirms the advantages of CMPSO by plotting the obtained non-dominated solutions in the 30 runs. Although all the algorithms fail to obtain the true PF of WFG1, CMPSO gives the best approximation to the PF and the best diversity along it. For WFG2, WFG3, and WFG4, only CMPSO obtains solutions that approximate the PFs. CMPSO is observed to perform remarkably better than not only the other MOPSOs but also all the compared MOEAs.
Table 4-4 Results Comparisons on the DTLZ and WFG Problems
Function NSGA-II GDE3 MOEA/D-DE MOCLPSO OMOPSO VEPSO CMPSO
DTLZ1 Mean 2.75×10-3 2.42×10-3 2.75×10-3 38.75 43.04 91.56 5.67×10-2
Std 2.86×10-4 1.34×10-4 4.36×10-4 7.36 8.06 23.42 2.21×10-2
Rank 2 + 1 + 3 + 5 − 6 − 7 − 4
DTLZ2 Mean 5.81×10-3 5.49×10-3 3.33×10-2 8.79×10-3 6.65×10-3 6.07×10-2 4.62×10-3
Std 4.70×10-4 6.85×10-4 3.02×10-3 8.06×10-4 6.08×10-4 8.69×10-3 1.50×10-4
Rank 3 − 2 − 6 − 5 − 4 − 7 − 1
WFG1 Mean 1.80 2.58 2.57 1.37 1.38 1.64 1.23
Std 0.31 1.03×10-2 1.62×10-3 0.13 3.71×10-3 0.30 6.69×10-2
Rank 5 − 7 − 6 − 2 − 3 − 4 − 1
WFG2 Mean 0.46 1.14 0.96 0.38 0.40 0.41 0.11
Std 4.03×10-2 4.66×10-2 7.94×10-2 3.45×10-2 4.65×10-2 4.34×10-2 6.19×10-2
Rank 5 − 7 − 6 − 2 − 3 − 4 − 1
WFG3 Mean 0.36 1.66 0.78 0.30 0.30 0.33 1.47×10-2
Std 3.07×10-2 0.28 4.83×10-2 2.36×10-2 2.61×10-2 2.25×10-2 5.80×10-4
Rank 5 − 7 − 6 − 2 − 3 − 4 − 1
WFG4 Mean 0.35 1.09 0.65 0.22 0.22 0.27 1.37×10-2
Std 4.17×10-2 0.12 3.34×10-2 1.00×10-2 1.23×10-2 1.68×10-2 4.99×10-4
Rank 5 − 7 − 6 − 2 − 3 − 4 − 1
Final Rank Total 25 31 33 18 22 30 9
Final 4 6 7 2 3 5 1
Better − Worse -4 -4 -4 -6 -6 -6
Fig. 4-5 The final non-dominated solutions of the WFG problems in all the 30 runs.
4.4.6 Experimental Results on UF Problems
The above two sub-sections have demonstrated that CMPSO shows very good performance on the ZDT problems and that its advantages become more evident on the DTLZ and WFG problems as the objective functions and PFs become more complex. In this sub-section, the algorithms' performance is further compared on the UF problems, which are recently proposed problems with complicated Pareto sets.
The results compared in Table 4-5 show that CMPSO performs best on UF2, UF4, and UF5, while MOEA/D-DE performs best on UF3 and UF7, NSGA-II on UF6, and GDE3 on UF1. Even though GDE3 and MOEA/D-DE outperform CMPSO on UF1, the results are not significantly different according to the Wilcoxon test. In general, CMPSO has the best average rank over all seven UF problems, followed by GDE3 and MOEA/D-DE. By the Wilcoxon's rank sum tests, CMPSO is also the most promising algorithm on the UF set problems.
Table 4-5 Results Comparisons on the UF Problems
Function NSGA-II GDE3 MOEA/D-DE MOCLPSO OMOPSO VEPSO CMPSO
UF1 Mean 7.30×10-2 5.75×10-2 5.96×10-2 0.10 9.81×10-2 0.71 6.64×10-2
Std 2.46×10-2 2.48×10-2 2.15×10-2 7.17×10-3 7.91×10-3 0.15 1.99×10-2
Rank 4 ≈ 1 ≈ 2 ≈ 6 − 5 − 7 − 3
UF2 Mean 2.06×10-2 2.02×10-2 6.63×10-2 0.11 7.24×10-2 0.15 1.69×10-2
Std 3.67×10-3 3.81×10-3 1.32×10-2 3.39×10-3 3.54×10-3 1.29×10-2 3.37×10-3
Rank 3 − 2 − 4 − 6 − 5 − 7 − 1
UF3 Mean 6.95×10-2 0.16 3.89×10-2 0.48 0.37 0.58 9.80×10-2
Std 1.14×10-2 6.66×10-2 1.57×10-2 1.55×10-2 9.71×10-3 4.83×10-2 1.39×10-2
Rank 2 + 4 − 1 + 6 − 5 − 7 − 3
UF4 Mean 4.26×10-2 2.95×10-2 4.72×10-2 0.12 0.16 0.17 2.38×10-2
Std 4.46×10-4 1.03×10-3 1.59×10-3 1.10×10-2 1.39×10-2 6.22×10-3 1.90×10-3
Rank 3 − 2 − 4 − 5 − 6 − 7 − 1
UF5 Mean 0.32 0.21 0.33 0.51 0.74 3.25 0.20
Std 8.41×10-2 1.61×10-2 5.41×10-2 0.18 0.12 0.53 2.01×10-2
Rank 3 − 2 − 4 − 5 − 6 − 7 − 1
UF6 Mean 0.12 0.30 0.14 0.40 0.40 2.83 0.14
Std 1.93×10-2 1.72×10-2 9.05×10-2 4.30×10-2 3.40×10-2 0.78 2.04×10-2
Rank 1 + 4 − 3 − 6 − 5 − 7 − 2
UF7 Mean 0.16 2.97×10-2 8.34×10-3 0.19 0.22 0.69 0.12
Std 0.16 1.02×10-3 9.40×10-4 0.15 0.15 0.16 0.13
Rank 4 ≈ 2 + 1 + 5 − 6 − 7 − 3
Final Rank Total 20 17 19 39 38 49 14
Final 4 2 3 6 5 7 1
Better − Worse -1 -4 -2 -7 -7 -7
Fig. 4-6 The final non-dominated solutions of the UF problems in all the 30 runs.
Fig. 4-6 further compares MOEA/D-DE with CMPSO by plotting the 30 non-dominated fronts obtained by the two algorithms. MOEA/D-DE is observed to be weaker than CMPSO on UF1 and UF2 because it misses the solutions located in the middle of the PFs. On UF3, CMPSO has difficulty in obtaining solutions along the PF as good as those obtained by MOEA/D-DE. However, CMPSO seems to be more stable than MOEA/D-DE on UF4 because the solutions obtained by CMPSO lie closer to the true PF of UF4, as shown in Fig. 4-6(g) and Fig. 4-6(h).
4.4.7 The Benefit of Shared Archive
The CMPSO algorithm uses an external shared archive to let different swarms share
their search information and communicate with each other efficiently. In this sub-section, the
benefit of the shared archive is investigated, including the benefit of the ELS used in the archive update process and the benefit of using the archived solutions' information in the
particle update equation. The experimental results are given in Table 4-6.
Table 4-6 Comparisons Between CMPSO and Its Variants CMPSO-non-ELS (CMPSO Without ELS in the Archive Update) and CMPSO-non-aBest (CMPSO Without Using Archive Information for Particle Update)
Function CMPSO CMPSO-non-ELS CMPSO-non-aBest
ZDT1 4.13×10-3 0.30 1.09×10-2
ZDT2 4.32×10-3 0.84 1.81×10-2
ZDT3 1.39×10-2 0.47 2.42×10-2
ZDT4 0.79 26.09 0.78
ZDT6 3.72×10-3 0.18 6.63×10-2
DTLZ1 5.67×10-2 1.43×102 6.36×10-2
DTLZ2 4.62×10-3 0.12 4.64×10-3
WFG1 1.23 2.19 1.66
WFG2 0.11 0.57 0.10
WFG3 1.47×10-2 0.42 1.48×10-2
WFG4 1.37×10-2 0.31 1.37×10-2
UF1 6.64×10-2 0.38 5.36×10-2
UF2 1.69×10-2 0.16 1.61×10-2
UF3 8.90×10-2 0.57 7.76×10-2
UF4 2.38×10-2 0.19 2.35×10-2
UF5 0.20 2.53 0.19
UF6 0.14 1.20 0.16
UF7 0.12 0.46 0.12
As the ELS has been demonstrated to bring diversity and avoid entrapment in local optima when solving single-objective optimization problems, it is also expected to help CMPSO avoid local PFs in MOPs. The comparisons in Table 4-6 show that CMPSO significantly outperforms its variant without the ELS in the archive update (denoted CMPSO-non-ELS) on all 18 test problems. The advantages of CMPSO are more evident on MOPs with multimodal objective functions because CMPSO-non-ELS is easily trapped in local PFs. We further compare the results obtained by CMPSO and CMPSO-non-ELS on WFG2 (disconnected PF, one multimodal objective function), WFG4 (concave PF, two multimodal objective functions), and UF2 (convex PF, two multimodal objective functions) in Fig. 4-7. These figures clearly show that CMPSO can approximate the true PFs of these problems whilst CMPSO-non-ELS is trapped in local PFs.
Fig. 4-7 The final non-dominated solutions found by CMPSO and CMPSO-non-ELS in all the 30 runs.
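The ELS operator discussed above can be sketched as follows. In the author's related APSO work, an elitist learning strategy perturbs one randomly chosen dimension of an elite solution with Gaussian noise to jump out of local optima; the sketch assumes that form, and the function name, sigma value, and bound handling are illustrative assumptions rather than the dissertation's exact operator:

```python
import random

def elitist_learning(elite, lo, hi, sigma=0.1):
    """Perturb one randomly chosen dimension of an archived elite with
    Gaussian noise (scaled by the range width) so the archive can escape
    local PFs; the result is clamped back into the search range."""
    child = list(elite)
    d = random.randrange(len(child))
    child[d] += (hi - lo) * random.gauss(0.0, sigma)
    child[d] = min(max(child[d], lo), hi)
    return child
```

In the archive update, such a perturbed copy would be evaluated and kept only if it is non-dominated, so the ELS adds diversity without degrading the archive.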
The benefit of using the shared archive information to guide the particle update is also summarized in Table 4-6. The shared archive information is expected to accelerate the convergence speed in approximating the PF. For MOPs with unimodal objective functions, CMPSO is observed to remarkably outperform its variant without archived information in the particle update (denoted CMPSO-non-aBest), e.g., on most of the ZDT, DTLZ, and WFG problems. However, CMPSO-non-aBest seems to be better than CMPSO on some of the UF problems. The reason may be that these UF problems have complicated Pareto sets, so an algorithm with too fast a convergence speed may converge prematurely and fail to search the whole space sufficiently.
Fig. 4-8 compares the convergence characteristics of the IGD indicator on ZDT1 (with
uniform convex PF and unimodal objective functions), ZDT6 (with non-uniform concave PF
and multimodal objective functions), WFG1 (with mixed convex PF and unimodal objective
functions), and UF2 (with convex PF and multimodal objective functions) during the
CMPSO and CMPSO-non-aBest search processes. The figures further show that the
utilization of the archived information to guide the particle update remarkably accelerates the
convergence speed for the algorithm to approximate the PF, especially on problems with
unimodal objective functions. However, when the objective functions are multimodal, such as
Fig. 4-8(d) for UF2, CMPSO is faster in the early evolutionary phase but is overtaken by CMPSO-non-aBest in the late evolutionary phase. This is because too fast a convergence speed may cause premature convergence and prevent the algorithm from searching the whole space sufficiently when the Pareto sets are complicated. Therefore, in the late evolutionary phase, CMPSO may perform slightly worse than CMPSO-non-aBest does on some UF problems.
Fig. 4-8 The mean IGD of CMPSO and CMPSO-non-aBest during the evolutionary process.
In order to show more clearly how CMPSO approximates the PF faster than CMPSO-non-aBest does, we plot their final non-dominated solutions in Fig. 4-9 for UF1, UF2, and UF3. These solutions are obtained after 1000 FEs, taken from the run with the minimal IGD value among the 30 runs. The figures confirm that CMPSO approximates the PF faster than CMPSO-non-aBest in the early evolutionary phase.
Fig. 4-9 The final non-dominated solutions found by CMPSO and CMPSO-non-aBest after 1000 FEs.
4.4.8 Impacts of Parameter Settings
1) Population Size for Each Swarm
As each swarm optimizes only one objective in CMPSO, we set a relatively small population size of 20 particles for each swarm. Herein, the population size for each swarm is set to 40, 60, 80, and 100, respectively, to investigate the impact of population size on the algorithm's performance. The investigations are conducted on ZDT1, whose objective functions are unimodal, and on UF1, whose objective functions are multimodal. For CMPSO with different population sizes, the other parameters remain the same as in Section 4.4.3, and the maximal number of FEs is still 25000 for ZDT1 and 3×105 for UF1.
The average IGD values over 30 independent runs are compared in Fig. 4-10. The comparisons show that a small population size is efficient enough for CMPSO to obtain good performance, especially on simple MOPs. This may be due to the contribution of the MPMO technique in reducing the search complexity for each swarm, because each swarm optimizes only one objective. When the population size becomes large, CMPSO even performs worse, e.g., on the ZDT1 problem. This may be because, with a fixed maximal number of FEs, a larger population size reduces the number of generations and thus affects the algorithm's performance. However, when solving complicated MOPs, increasing the population size can increase the diversity of the algorithm and therefore lead to better results, e.g., on the UF1 problem. Nevertheless, a too large population size incurs a heavy computational burden in each generation and therefore may weaken the performance when the maximal number of FEs is fixed. Considering both the computational burden and the performance, this chapter adopts a population size of 20 for each swarm. In general, a population size of 20-60 for each swarm may be promising. That a small population size can bring good performance is an advantage of the CMPSO algorithm.
Fig. 4-10 The mean IGD of CMPSO with different population size.
Chapter 4 Multiple Populations for Multiple Objectives: A Co-evolutionary Technique for Solving Multi-objective Optimization Problems based on Ensemble Learning Techique in Machine Learning
100
2) Maximal Number of FEs
As shown in Section 4.4.4, CMPSO and the other MOPSOs perform poorly on ZDT4. This may be because ZDT4 has multimodal objective functions and CMPSO does not have a sufficient number of FEs to converge to the true PF. Herein, we keep the other parameters the same as in Section 4.4.3 and set different maximal numbers of FEs (e.g., 1×105, 2×105, and 3×105) for CMPSO to solve ZDT4. The final non-dominated solutions found with different maximal numbers of FEs are plotted in Fig. 4-11.
Compared with Fig. 4-4(c), it is clear that by increasing the maximal number of FEs, CMPSO can search the space sufficiently to approximate the true PF well. Moreover, we plot the solutions obtained by OMOPSO in all the 30 runs in Fig. 4-11. Compared with CMPSO, it is clear that CMPSO has a stronger global search ability than OMOPSO to approximate the PF. This indicates that the MPMO technique is helpful in enhancing the MOPSOs' performance.
Fig. 4-11 The final non-dominated solutions found by CMPSO and OMOPSO with different FEs on ZDT4.
3) Inertia Weight ω and Acceleration Coefficients ci
To investigate the impact of ω on the MOPSOs' performance, we test different values of ω (e.g., 0.1, 0.3, 0.5, 0.7, and 0.9) on CMPSO, MOCLPSO, and OMOPSO. The investigations are conducted on DTLZ2, whose objective functions are unimodal, and on UF1, whose objective functions are multimodal.
Fig. 4-12(a) and (b) show the mean IGD values on DTLZ2 and UF1 when the MOPSOs use different inertia weight values. For DTLZ2, whose objective functions are unimodal, a relatively small ω value is preferred, while for UF1, whose objective functions are multimodal, a relatively large ω value seems to be preferred. This may be because a large ω is helpful for global search while a small ω is beneficial for local fine-tuning. The results are also compared with those obtained by the MOPSOs using the ω values in Table 4-2. The figure shows that the parameters in Table 4-2 are promising. Moreover, it is evident from Fig. 4-12(a) and (b)
that CMPSO is less sensitive to the ω value, while the other two MOPSOs, especially MOCLPSO, are significantly affected by it. This is another advantage of the CMPSO algorithm.
The impact of the acceleration coefficients ci on the CMPSO performance is also investigated on DTLZ2 and UF1, with the results shown in Fig. 4-12(c) and (d). The results further confirm that the parameters in Table 4-2 are promising and that CMPSO is much less sensitive to the ci value than MOCLPSO and OMOPSO, again showing the advantage of CMPSO.
It follows that the performance of CMPSO is not strongly dependent on the parameters, and the parameter values used in this chapter are also widely adopted in PSO. Therefore, CMPSO retains the simplicity and ease of implementation of PSO while improving its performance on multi-objective optimization problems through a co-evolutionary technique with the multiple-populations-for-multiple-objectives framework, inspired by the idea of ensemble learning in machine learning.
(a) Different ω on DTLZ2 (b) Different ω on UF1
(c) Different ci on DTLZ2 (d) Different ci on UF1
Fig. 4-12 The mean IGD on DTLZ2 and UF1 of MOPSOs with different ω and different ci.
4.5 Chapter Summary
Inspired by ensemble learning in machine learning, this chapter proposed a novel co-evolutionary technique termed multiple populations for multiple objectives (MPMO).
The advantages and characteristics of the proposed MPMO technique and CMPSO algorithm are as follows:
1) As each swarm focuses on optimizing one objective, it can use conventional PSO or any improved PSO to solve a single-objective problem. Importantly, the difficulty of fitness assignment is avoided.
2) As an external shared archive is used to store the non-dominated solutions found by the different swarms, and the shared archive information is used to guide the particle update, the algorithm can use the whole search information to approximate the whole PF quickly.
3) As an ELS is performed on the archived solutions in the update process, the algorithm is able to avoid local PFs. This is helpful for MOPs with multimodal objective functions or with complicated Pareto sets.
4) As the experiments demonstrated that the parameters have a less significant impact on the performance of CMPSO than on the other MOPSO algorithms, CMPSO remains simple and easy to use. Hence, CMPSO has important application value and promising prospects.
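The dominance-based archive maintenance underlying the shared archive can be sketched as follows (a minimal illustration for minimization in objective space; the actual CMPSO archive is also truncated to a fixed size of 100 and refined by the ELS, both omitted here):

```python
def dominates(f1, f2):
    """f1 Pareto-dominates f2 (minimization): no worse in every
    objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def update_archive(archive, candidate):
    """Insert a candidate objective vector if it is non-dominated;
    drop any archive members the candidate dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive  # candidate is dominated: archive unchanged
    kept = [a for a in archive if not dominates(candidate, a)]
    kept.append(candidate)
    return kept

arch = []
for f in [(2.0, 2.0), (1.0, 3.0), (1.5, 1.5), (3.0, 3.0)]:
    arch = update_archive(arch, f)
print(sorted(arch))  # [(1.0, 3.0), (1.5, 1.5)]
```

Because every swarm submits its solutions to this one archive, each swarm benefits from trade-off information discovered while optimizing the other objectives.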
In conclusion, inspired by ensemble learning in machine learning, this chapter designed a new co-evolutionary technique named MPMO and proposed a co-evolutionary multi-swarm PSO with a shared archive based on the MPMO technique. Experimental results show the effectiveness and efficiency of the MPMO framework and the CMPSO algorithm. The design of the MPMO framework and the CMPSO algorithm for multi-objective problems is an important and successful exploration of machine-learning-aided PSO design.
Chapter 5 Orthogonal Learning Particle Swarm Optimization for
Power Electronic Circuit Optimization with Free Search Range
5.1 Introduction
Power electronics has developed quickly since the advent of power semiconductor devices in the 1950s and has become a significant technology for a variety of applications in the industrial, commercial, residential, aerospace, military, and utility areas [218]. The modeling, design, and analysis of power electronic circuits (PECs) are fundamental and significant research areas in power electronics. A PEC always consists of a number of components, such as resistors, capacitors, and inductors, whose values have to be optimized in order to obtain good circuit performance. Suitable component design and control parameter tuning of a PEC often challenge engineers because they require a systematic procedure. Traditional approaches include the state-space average method [219][220], the current injected equivalent circuit method [221][222], the sampled-data modeling method [223], and the state-plane analysis method [224]. However, these approaches are usually applicable only to specific circuits and require comprehensive knowledge of the circuit operation. Moreover, as these approaches are based on small-signal models, circuit designers may find it difficult to predict precisely the circuit responses under large-signal conditions.
With the rapid development of power electronics technology and the growing complexity of PECs, automatic design and optimization of PECs have become a great need. Since the 1970s, a variety of optimization approaches, such as heuristic methods [225], knowledge-based methods [226], gradient descent or hill-climbing methods [227], and simulated annealing [228], have been proposed for analog circuit design automation. However, these approaches are very sensitive to the initial solution. Moreover, they may be unable to search globally efficiently and are subject to being trapped in local optima when the problems are complex. Therefore, the obtained values for the circuit components may be sub-optimal, leading to low satisfaction when used in practical applications.
Since EAs do not need an accurate mathematical model, they are important tools for solving the PEC design problem. Zhang et al. [229] proposed a fitness function to evaluate the performance of a PEC and used GA to optimize the circuit in 2001. The fitness function in [229] was further adopted, and ACO [230] and PSO [231] have been successfully applied to
Chapter 5 Orthogonal Learning Particle Swarm Optimization for Power Electronic Circuit Optimization with Free Search Range
optimize the component values of PECs. This research indicates the good performance of
EAs on PEC design optimization. However, the GA and ACO approaches still have the
disadvantage that they consume a large amount of computation before obtaining good
component values, because of their slow convergence speed. In this chapter, an effective and
efficient particle swarm optimization (PSO) algorithm, named orthogonal learning PSO
(OLPSO), is adopted to optimize the PEC because of its faster optimization speed and stronger
global search ability. Moreover, previous studies using GA and ACO approaches always
optimize the circuit components within carefully pre-defined search ranges that are
determined by expert designers. For example, the search range for some resistors is from
470 Ω to 47 kΩ and the search range for some capacitors is from 0.33 μF to 33 μF. However,
such search ranges are difficult to define for different components in different PECs
in practical applications. Therefore, it is of practical value to develop an effective
and efficient approach that can optimize the components with free component configurations
set to commonly used ranges. Based on this problem, this chapter proposes a PEC
optimization model with a “free search range”. In this model, there is no need for experts
to predefine the range of each circuit component; only a free search range is provided
according to the practical industrial application and market supply. For example, the search
range of all resistors can be 100 Ω to 100 kΩ, and all capacitors can be valued from 0.33 μF to
33 μF [232].
This optimization model better fits practical application requirements, but it also creates
challenges for the algorithms. In order to effectively solve the PEC design optimization
problem with a free search range, this chapter applies the OLPSO algorithm proposed in Chapter 3 to
the problem. The successful application of OLPSO to the PEC optimization problem
verifies the extensibility of the OLPSO algorithm to the engineering area. The main
contributions and innovations include the following three points:
1) We point out that the expert-defined search ranges in the traditional PEC optimization
model limit its practical application, and hence propose a PEC optimization model with a
“free search range” defined according to the practical industrial application and market
supply, which meets practical application requirements.
2) We design a rapid global optimization approach based on OLPSO to efficiently solve the PEC
optimization problem with the “free search range”, and compare the algorithm with GA, PSO,
the well-performing CLPSO algorithm, and JADE to verify the effectiveness and
efficiency of OLPSO.
3) We extend PSO to an engineering application problem, PEC design optimization, and test
and verify the performance of the algorithm in both continuous and discrete search spaces,
which ensures that the obtained optimized components are available on the market and
that the algorithm can solve the problem efficiently.
5.2 Power Electronic Circuit
A PEC is a circuit that contains a number of components such as resistors, capacitors, and
inductors. Fig. 5-1 shows the basic block diagram of a PEC. The circuit can be
decoupled into two parts, where the first part is the power conversion stage (PCS) and
the second part is the feedback network (FN).
Fig. 5-1 A block diagram of PEC.
The function of the PCS is to transfer the power from the input source vin to the output load
RL. It consists of RP resistors, IP inductors, and CP capacitors. On the other hand, the FN is the
control part, which consists of RF resistors, IF inductors, and CF capacitors. A signal
conditioner H in the FN converts the PCS output voltage vo into a suitable form vo′,
which is compared with the reference voltage vref. Their difference vd is then sent to
an error amplifier to obtain an output ve. This output is combined with the feedback
signals Wp from the PCS to give an output control voltage vcon. Then vcon is modulated by
a pulse-width modulator to give a feedback voltage vg to the PCS.
In order to optimize the component values of the PEC, the components are coded as
variables and optimized through the optimization process. Although the components of the
PCS and the FN can be optimized together in a single process, this is computationally intensive
since the number of variables is considerably large. Moreover, as the interactions between the
two parts are relatively weak during the optimization process, the components in
the two parts can be optimized separately [229]. Therefore, this chapter adopts the technique
of considering the components of the PCS and the FN separately. This decoupled technique is not only
effective in reducing the computational effort but also helpful for obtaining better
component values. However, as the PCS always has static characteristics and its
component values are relatively stable, the components in the PCS are not optimized in this chapter;
instead, the components in the FN, which are crucial to the circuit performance, are optimized
by OLPSO.
5.3 OLPSO for PEC
5.3.1 Particle Representation
When using OLPSO to solve the PEC problem, the components in the FN can be represented by
a vector X(FN). Specifically, the representation of each particle for optimizing
the FN components is coded as:

$X(FN) = [R_F, I_F, C_F]$   (5-1)

where $R_F = [R_1, R_2, \ldots, R_{R_F}]$ are the resistors, $I_F = [I_1, I_2, \ldots, I_{I_F}]$ are the inductors, and
$C_F = [C_1, C_2, \ldots, C_{C_F}]$ are the capacitors.
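To make the coding in (5-1) concrete, the sketch below shows one way the FN components can be flattened into a single particle position vector. This is an illustrative sketch, not the dissertation's implementation; the component counts and example values are assumptions borrowed from the buck-regulator FN studied later in this chapter.

```python
# Illustrative sketch (not the dissertation's code): encode the FN
# components of (5-1) as one flat position vector for the swarm.
import numpy as np

def encode_fn(resistors, inductors, capacitors):
    """X(FN) = [R_F, I_F, C_F] as a single 1-D particle position."""
    return np.concatenate([resistors, inductors, capacitors])

def decode_fn(x, n_r, n_i, n_c):
    """Split a particle position back into its component groups."""
    assert len(x) == n_r + n_i + n_c
    return x[:n_r], x[n_r:n_r + n_i], x[n_r + n_i:]

# Hypothetical FN with 4 resistors, no inductors, and 3 capacitors
# (matching the buck-regulator FN optimized later in this chapter).
x = encode_fn([100.0, 13120.2, 1047.13, 11120.6], [], [1e-7, 1.11e-6, 1e-7])
r, i, c = decode_fn(x, 4, 0, 3)
```

Each particle position then carries all FN component values at once, so the standard PSO update rules apply unchanged.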
5.3.2 Fitness Function
The fitness function for the FN follows the proposal in [229], whose main
considerations include reducing the settling time and controlling the overshoot. The fitness
function is defined as:

$\Phi_{FN}(X) = \sum_{v_{in}=v_{in\_min},\,\delta v_{in}}^{v_{in\_max}} \sum_{R_L=R_{L\_min},\,\delta R_L}^{R_{L\_max}} [F_1(R_L, v_{in}, X) + F_2(R_L, v_{in}, X) + F_3(R_L, v_{in}, X)] + F_4(X)$   (5-2)
where RL_min and RL_max, vin_min and vin_max are the minimal and maximal values of RL and
vin, respectively, and δRL and δvin are the step lengths used in varying the values of RL and vin.
F1, F2, F3, and F4 are the four objective functions for the FN as designed in [229].
Specifically, F1 measures the steady-state error of the output voltage vo. A
cumulative variance E is defined to evaluate the proximity of the output voltage vo to the
reference voltage vref over $N_S = \lfloor (v_{in\_max} - v_{in\_min})/\delta v_{in} \rfloor$ simulation points:

$E = \sum_{m=1}^{N_S} [v_o(m) - v_{ref}]^2$   (5-3)

If the value of E is smaller, the steady-state error is smaller, so the value of F1
obtained by equation (5-4) is larger:

$F_1 = K_1 e^{-E/K_2}$   (5-4)

where K1 is the maximum reachable value of F1, and K2 is used to adjust the
sensitivity of the F1 value to E.
In addition, F2 measures the transient response of vd, including the maximum
overshoot and undershoot and the settling time; F3 controls the steady-state ripple
voltage on the output vo; F4 measures the dynamic behaviors during large-signal
changes. For more details of the fitness function definitions, refer to [229]. It should be noted
that larger values of F1, F2, F3, and F4 represent better circuit performance. Thus, the
fitness function defined in (5-2) poses a maximization problem.
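The aggregation in (5-2) together with (5-3) and (5-4) can be sketched as follows. This is a hedged illustration only: the real objectives F1 to F4 require a circuit simulator, so the objective functions below are stand-ins, and the constants K1 and K2 are assumed values rather than the dissertation's settings.

```python
# Hedged sketch of the fitness aggregation in (5-2)-(5-4).
import math

def f1_steady_state(vo_samples, v_ref, k1=15.0, k2=20.0):
    # (5-3): cumulative squared error E; (5-4): F1 = K1 * exp(-E / K2)
    e = sum((vo - v_ref) ** 2 for vo in vo_samples)
    return k1 * math.exp(-e / k2)

def fitness_fn(x, objectives, f4,
               vin_range=(20.0, 40.0), d_vin=5.0,
               rl_range=(5.0, 10.0), d_rl=1.0):
    """Phi_FN(X): sum F1+F2+F3 over a grid of (vin, RL), plus F4(X)."""
    total = 0.0
    vin = vin_range[0]
    while vin <= vin_range[1]:
        rl = rl_range[0]
        while rl <= rl_range[1]:
            total += sum(f(rl, vin, x) for f in objectives)
            rl += d_rl
        vin += d_vin
    return total + f4(x)

# Toy check with constant stand-in objectives (each returns 1.0):
# 5 vin points x 6 RL points x 3 objectives.
const = lambda rl, vin, x: 1.0
phi = fitness_fn(None, [const, const, const], f4=lambda x: 0.0)
```

In the real problem, each objective evaluation triggers a full circuit simulation at the given (vin, RL) operating point, which is why fitness evaluations dominate the optimization cost.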
5.4 Experiments and Comparisons
5.4.1 Circuit Configurations
In this section, the OLPSO algorithm is applied to the PEC design and optimization
problem. A buck regulator with overcurrent protection, as shown in Fig. 5-2, is used as the
simulation case. The performance of OLPSO in optimizing the PEC is evaluated and
compared with both the GA approach [229] and the PSO approach [231]. Since the PCS
always has static characteristics and the components L and C are relatively stable [229],
the PCS components are not optimized in this chapter; their values are set to 200 μH and
1000 μF for L and C, respectively, according to the proposals in [229] and considering the
component values available in industry. For the FN, all component values are required to be
optimized. That is, the components R1, R2, RC3, R4, C2, C3, and C4 in the FN are
optimized by OLPSO, with the fitness function as in (5-2) and the coding as in (5-1).
As mentioned in the introduction to this chapter, the components' search ranges are difficult
to define for different PECs. In this chapter, we set the component search ranges freely
according to commonly used ranges. That is, the search ranges for the resistors R1, R2, RC3, and R4
are all set to [100 Ω, 100 kΩ] and the search ranges for the capacitors C2, C3, and C4 are all set
to [0.1 μF, 100 μF].
Fig. 5-2 Circuit schematics of the buck regulator with overcurrent protection.
Moreover, the required specifications of the whole PEC are listed as follows:
Input voltage range vin: 20 ~ 40 V
Output load range RL: 5 ~ 10 Ω
Nominal output voltage: 5 V ± 1%
Switching frequency: 20 kHz
Maximum settling time: 20 ms
5.4.2 Algorithm Configurations
The performance of OLPSO in optimizing the PEC is evaluated and compared with both
the GA approach proposed in [229] and the PSO approach in [231], as well as CLPSO [59] and
JADE [177], which perform well in the function optimization area.
The parameters of GA and PSO are set according to the configurations in their
references. The crossover and mutation probabilities of the GA approach are set the same as
in [229], where px=0.85 and pm=0.25. Following the parameter suggestions for PSO in [231],
the inertia weight ω in PSO and OLPSO linearly decreases from 0.9 to 0.4, while the
acceleration coefficients c1 and c2 in PSO and the acceleration coefficient c in OLPSO are all
set to 2.0. Moreover, the parameter G in OLPSO is set to 5. The parameters of
CLPSO and JADE are set as the configurations in [59] and [177], respectively. The
population size and the maximal generation number are set to 30 and 500,
respectively, in both GA and PSO, as proposed in [229] and [231]. The
population sizes of CLPSO and JADE are set to N=40 and N=30, as suggested in [59] and
[177], respectively. However, in order to make a fair comparison, all the algorithms use the
same maximal number of fitness evaluations (FEs), 1.5×10^4 (i.e., 30×500), as the termination
criterion. As the evaluation of the fitness function is usually the most computationally
expensive part of the PEC optimization, the execution times of different algorithms are
almost the same if they use the same number of FEs.
It should be noted that, according to the experimental results of OLPSO in Chapter 3,
the local version OLPSO-L performs better than the global version OLPSO-G,
so we adopt OLPSO-L to solve the PEC design optimization problem and refer to it simply as
OLPSO in the remainder of this chapter. The population size of OLPSO is set to N=40. In order to
compare the algorithms in a statistical sense, the experiment is carried out 30 times
independently with each approach and the average results are used for comparison.
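The fair-comparison protocol described above, a shared budget of 1.5×10^4 FEs and statistics over 30 independent runs, can be sketched as follows. Random search stands in for the actual GA/PSO/CLPSO/JADE/OLPSO optimizers, which are not reproduced here.

```python
# Sketch of the comparison protocol used in this section: every
# algorithm stops after the same number of fitness evaluations,
# and statistics are taken over 30 independent runs.
# random_search is only a stand-in for GA/PSO/OLPSO, not their code.
import random, statistics

MAX_FES = 15_000          # 30 x 500, shared by all algorithms
N_RUNS = 30

def random_search(fitness, bounds, max_fes, rng):
    best = float("-inf")
    for _ in range(max_fes):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        best = max(best, fitness(x))   # maximization, as in (5-2)
    return best

def run_experiment(fitness, bounds, seed=1):
    rng = random.Random(seed)
    results = sorted(random_search(fitness, bounds, MAX_FES, rng)
                     for _ in range(N_RUNS))
    return {"mean": statistics.mean(results),
            "std": statistics.stdev(results),
            "best": results[-1], "median": results[N_RUNS // 2],
            "worst": results[0]}

# Toy objective on the free search range of a single resistor:
stats = run_experiment(lambda x: -abs(x[0] - 1000.0), [(100.0, 100_000.0)])
```

Fixing the FE budget rather than the generation count keeps the comparison fair across algorithms with different population sizes, since the fitness evaluation dominates the cost.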
5.4.3 Comparisons on Fitness Quality
The PEC optimization is a maximization problem. The results of GA, PSO, CLPSO,
JADE, and OLPSO are compared in Table 5-1, where "Mean" stands for the average
fitness value of the 30 independent runs and "Std. Dev" is the standard deviation. Moreover,
the "Best" fitness values among the 30 runs are given and compared in the table. It can be
observed from the table that OLPSO achieves the best results when measured by the mean fitness
value. Therefore, OLPSO is capable of obtaining good solutions consistently. The Wilcoxon test
further confirms that OLPSO significantly outperforms the other algorithms and that the results
obtained by OLPSO are remarkably better. Moreover, OLPSO obtains the highest "Best"
fitness solution among all the approaches, indicating its strong global search ability.
The component values obtained in the "Best" fitness solution optimized by the different
approaches are presented in Table 5-2. Moreover, OLPSO also obtains the highest "Worst"
and "Median" solutions, which indicates that OLPSO can obtain high-quality solutions in
most cases.
It should be noted that the results in Table 5-1 are obtained in the newly
configured large search space. The total failure of GA indicates that this approach is not
efficient enough to search the large space sufficiently to find good solutions, even
though it is promising within the carefully pre-defined search range [229]. Moreover, the large
search space challenges the search abilities of the traditional PSO, CLPSO, and JADE. The
OLPSO algorithm is still promising and its results are demonstrated to be much better than
the others. Table 5-1 Experimental Result Comparisons of Different Approaches
Algorithm Mean Std Wilcoxon Test Best Medium Worst Mean
FEs Success
GA 109.636 9.0058 Z=6.64578† 127.494 109.502 97.191 × 0 PSO 152.110 21.2900 Z=5.31537† 192.304 137.879 137.699 3817 8
CLPSO 155.902 13.9257 Z=5.31515† 191.638 149.147 138.919 8588 14 JADE 135.711 25.1273 Z=6.02482† 183.208 135.361 95.973 8681 8
OLPSO 183.749 14.1892 NA 192.962 192.425 137.924 3675 28 †The difference is significant at α=0.05 by Wilcoxon test.
Mean FEs indictes the mean FEs needed to find a solution with fitness value larger than 150. Success indicates the number of runs that the algorithm finds a solution whose fitness value is larger than 150.
Table 5-2 Optimized Component Values in the Best Run with Different Approaches

Components      GA           PSO          CLPSO        JADE         OLPSO
R1              356.099 Ω    100 Ω        100 Ω        100 Ω        100 Ω
R2              60.4418 kΩ   71.7442 kΩ   36.4732 kΩ   82.223 kΩ    13.1202 kΩ
RC3             98.6189 kΩ   831.532 Ω    960.373 Ω    136.366 Ω    1.04713 kΩ
R4              2.07867 kΩ   11.4945 kΩ   115.497 Ω    100 Ω        11.1206 kΩ
C2              19.6276 μF   0.1 μF       0.1 μF       0.1 μF       0.1 μF
C3              28.0941 μF   1.72671 μF   1.47557 μF   6.5245 μF    1.11032 μF
C4              3.38356 μF   0.1 μF       9.85044 μF   14.1778 μF   0.1 μF
Fitness value   127.494      192.304      191.638      183.208      192.962
5.4.4 Comparisons on Optimization Speed and Reliability
Besides the high solution quality of OLPSO, its fast optimization speed and strong
reliability are also supported by the comparisons in Table 5-1. Given an
acceptable fitness value of 150, OLPSO successfully obtains final solutions with
fitness values larger than 150 in 28 out of the 30 runs, twice the figure of
CLPSO, whilst PSO and JADE succeed in only 8 runs each and GA totally fails to obtain
solutions with fitness values larger than 150. Thus, OLPSO is the most reliable algorithm for
solving the PEC optimization problem. Moreover, among the successful runs of each algorithm, the
mean number of FEs needed to reach the acceptable value of 150 in Table 5-1 further shows that
OLPSO is the fastest of all the contenders.
The mean convergence characteristics of the different approaches are plotted in Fig. 5-3.
The curves show that both GA and JADE fall into poor local optima quite early, whilst
OLPSO is able to obtain a very high fitness in the early stage and to improve the fitness value
steadily for a long time. Although PSO and CLPSO can find good solutions within a certain range,
their results are worse than those of OLPSO. OLPSO has strong global search ability to
avoid local optima and significantly improves the fitness value. Moreover, the curves
indicate that OLPSO is faster than the other algorithms in optimizing the component values.
That is, for a fixed fitness value, OLPSO is observed to use many fewer FEs to reach this
specific value than the GA or PSO algorithm. OLPSO is an effective and efficient algorithm for
solving the PEC optimization problem.
Fig. 5-3 Mean convergence characteristics of different approaches in optimizing PEC.
5.4.5 Comparisons on Simulation Results
In the simulations, the component values of the PEC are set to the median optimized
results of the 30 runs obtained by GA, PSO, CLPSO, JADE, and OLPSO, respectively, as given
in Table 5-2. In the simulation results comparison, Fig. 5-4 gives the voltage results and
Fig. 5-5 gives the current results.
The simulation lasts for 90 milliseconds (ms). The input voltage vin is 20 V and the
output load RL is 5 Ω. The simulated startup transients can be compared in the first 30 ms of
the figures. It is observed that the circuit with OLPSO-optimized component values has better
performance, giving a faster settling time. The buck regulator with component values optimized by
OLPSO takes only about 5 ms to reach the steady state, while the one with component values
optimized by GA takes about 10 ms. A high voltage impulse appears during startup in the
JADE-optimized circuit, which limits the practical application of the circuit. Moreover, the output
ripple voltage of the OLPSO-optimized circuit is less than 1%, satisfying the required
specification very well.
Fig. 5-4 and Fig. 5-5 also show the simulated transient responses under large-signal
disturbances. At 30 ms, when the regulator is in the steady state, the input voltage is
suddenly changed from 20 V to 40 V, with the load still fixed at 5 Ω. In response to this
change, the output voltage vo, the control voltage vcon, and the inductor current iL are all
disturbed. However, the circuit optimized by OLPSO has a much smaller disturbance and a
shorter response time than those optimized by GA, PSO, CLPSO, and JADE (2 ms vs. 12 ms,
5 ms, 10 ms, and 3 ms), confirming the advantages of the OLPSO algorithm. In addition, the
transient overshoots of the output voltage in the GA-optimized and PSO-optimized circuits are
so large that they are unsuitable for practical application.
(a) GA (b) PSO (c) CLPSO (d) JADE (e) OLPSO
Fig. 5-4 Simulated voltage responses (vo and vcon) from 0 ms to 90 ms.
(a) GA (b) PSO (c) CLPSO (d) JADE (e) OLPSO
Fig. 5-5 Simulated current responses from 0 ms to 90 ms.
Similar tests on load disturbances are also studied once the system has returned to a steady
state with vin equal to 40 V and RL equal to 5 Ω. In this disturbance, RL is suddenly changed from
5 Ω to 10 Ω at 60 ms, with vin still fixed at 40 V. The simulation results in the
figures also show that the OLPSO-optimized circuit has a smaller disturbance response to the
change and takes a shorter time to return to the steady state. Therefore, the OLPSO algorithm can
optimize the circuit component values so that the circuit exhibits better dynamic
performance.
5.4.6 Comparisons on Discrete Search Space
The comparisons on fitness quality, optimization speed, and simulation results have
demonstrated that OLPSO performs much better than the GA, PSO, CLPSO, and JADE
approaches. However, all these results are based on the continuous search space, and therefore
the optimized component values are sometimes not readily available from manufacturers
without post-fabrication. For example, the results in Table 5-2 show that the optimized value
for the resistor R2 is 13.1202 kΩ and the optimized value for the capacitor C3 is 1.11032 μF.
These values would need to be composed by connecting several resistors or capacitors in
series and/or parallel, which makes them inconvenient to use in practical applications. As
resistors and capacitors are always manufactured with discrete values [232], we test the
search abilities of the different algorithms in obtaining optimized component values in the
discrete search space.

Table 5-3 Experimental Result Comparisons on Discrete Search Space

Algorithm   Mean      Std       Wilcoxon Test   Best      Median    Worst     Mean FEs   Success
GA          104.229   7.0487    Z=6.64846†      128.089   102.851   96.9369   ×          0
PSO         156.287   26.8560   Z=3.24948†      191.842   146.778   111.801   2063       13
CLPSO       155.919   12.8958   Z=4.28188†      188.885   160.215   133.374   10868      18
JADE        144.281   19.5665   Z=5.11015†      187.7     142.572   95.9546   8455       7
OLPSO       176.550   19.5284   NA              192.199   182.996   134.021   4072       25

†The difference is significant at α=0.05 by the Wilcoxon test.
Mean FEs indicates the mean number of FEs needed to find a solution with a fitness value larger than 150. Success indicates the number of runs in which the algorithm finds a solution whose fitness value is larger than 150.
Table 5-4 Optimized Component Values in the Best Run of Different Approaches in the Discrete Search Space

Components      GA        PSO       CLPSO     JADE      OLPSO
R1              200 Ω     100 Ω     100 Ω     100 Ω     100 Ω
R2              91 kΩ     30 kΩ     43 kΩ     56 kΩ     10 kΩ
RC3             100 kΩ    620 Ω     5.1 kΩ    510 Ω     1.1 kΩ
R4              4.7 kΩ    100 Ω     11 kΩ     100 Ω     100 Ω
C2              43 μF     0.1 μF    0.1 μF    0.1 μF    0.1 μF
C3              36 μF     2.2 μF    1.1 μF    2.7 μF    0.91 μF
C4              1.8 μF    12 μF     0.1 μF    11 μF     11 μF
Fitness value   128.089   191.842   188.885   187.7     192.199
In the following experiments in this sub-section, the optimization algorithms work the
same as if they were searching in a continuous search space, except that when evaluating the
fitness value of an individual, the variables are first rounded to the nearest readily available
values. For example, in the E24 series of resistors and capacitors, the resistor values are among
{…, 180 Ω, 200 Ω, 220 Ω, …, 12 kΩ, 13 kΩ, 15 kΩ, …} and the capacitor values are among
{…, 0.33 μF, 0.36 μF, 0.39 μF, …, 1.1 μF, 1.2 μF, 1.3 μF, …} [232]. Therefore, if the algorithm
finds a resistor value of 13.1202 kΩ, it is rounded to 13 kΩ, and if the capacitor value is
1.11032 μF, it is rounded to 1.1 μF.
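The rounding step described above can be sketched as follows. The E24 mantissas are the standard preferred values; the decade span covered by the sketch is an assumption chosen to include the free search ranges used in this chapter.

```python
# Sketch of the rounding step: before each fitness evaluation, every
# component value is snapped to the nearest E24 preferred value.
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

def round_to_e24(value, min_decade=-8, max_decade=5):
    """Snap a positive component value to the nearest E24 value."""
    candidates = [m * 10.0 ** d
                  for d in range(min_decade, max_decade + 1)
                  for m in E24]
    return min(candidates, key=lambda c: abs(c - value))

# The examples from the text:
r = round_to_e24(13120.2)       # 13.1202 kOhm -> 13 kOhm
c = round_to_e24(1.11032e-6)    # 1.11032 uF -> 1.1 uF
```

Because the rounding happens only inside the fitness evaluation, the particles themselves keep moving in the continuous space, so the PSO update equations need no modification.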
We apply this strategy to the GA, PSO, CLPSO, JADE, and OLPSO algorithms to
test their search abilities in the discrete space, whose ranges are still the same as the free
search ranges given in Section 5.4.1. The experimental results are compared in Table 5-3, and
the component values obtained in the "Best" fitness solution by the different approaches are
presented in Table 5-4. The results show that GA still totally fails in the discrete search space,
JADE and PSO succeed in fewer than 50% of the runs (7 and 13 times out of 30 trials,
respectively), and CLPSO succeeds 18 times. OLPSO performs better and succeeds 25
times. The mean fitness obtained by OLPSO is much better than those of GA, PSO, CLPSO, and
JADE, as indicated by the Wilcoxon test. Therefore, OLPSO has advantages in optimizing
the PEC not only when the search space is continuous but also when it is
discrete, especially on the PEC optimization problem with the free search range.
5.5 Chapter Summary
This chapter presents the orthogonal learning PSO proposed in Chapter 3 for optimizing
the component values in PEC design. The challenge of the problem is that the components
interact with each other, making the search space complex. Moreover, the PEC optimization
model is difficult to describe with an accurate mathematical model. Evolutionary algorithms
such as PSO do not need such a model and possess global search ability, which makes them
promising for application to the PEC optimization problem. Previous studies using GA, ACO, and PSO
approaches have been reported, but their solution accuracy and convergence rate still
need to be improved. Moreover, in the previously studied optimization model, the carefully
pre-defined search ranges of the circuit components are always determined by expert designers, which
strongly influences the performance of the algorithms on practical PEC optimization
problems.
To improve both the algorithm and the PEC optimization model, this chapter extended the PEC
optimization model and proposed a free-search-range model to meet the requirement of the
unpredictable component ranges in practical engineering applications. However, the
complex search space of the PEC challenges the efficiency of traditional approaches and
requires an optimization approach with strong global search ability. Therefore,
the orthogonal learning PSO (OLPSO) proposed in Chapter 3 is adopted to optimize the PEC.
Combining the faster optimization speed and stronger global search ability of OLPSO with the
free-search-range characteristic of the PEC, this chapter solves the new PEC optimization
problem successfully and extends the application of the algorithm in engineering.
The effectiveness and efficiency of the OLPSO algorithm in optimally designing a PEC
have been evaluated on the design of a buck regulator with overcurrent protection. In order
to demonstrate the advantages of the proposed OLPSO algorithm, the results obtained by GA,
PSO with the traditional learning strategy, CLPSO, and JADE are compared with those
obtained by OLPSO. The results show that OLPSO outperforms the other algorithms not only
with higher-quality fitness values but also with a faster optimization speed and stronger
reliability. Moreover, simulation results on the circuits demonstrate the
advantages of the OLPSO algorithm by showing that the circuit optimized by OLPSO
exhibits both a shorter startup time and a shorter settling time in the transient responses. The
experimental results demonstrate that the machine learning aided OLPSO performs well
not only on benchmark functions but also on practical engineering optimization problems.
The good performance of OLPSO on the PEC design optimization problem
not only indicates that the design of machine learning aided PSO algorithms has significant
impact, but also shows that OLPSO is a powerful approach for solving complex multimodal
optimization problems.
Chapter 6 Conclusion and Future Work
6.1 Conclusion
As an emergent global optimization method, PSO has attracted wide attention in
academia and engineering because of its simple implementation, high operational efficiency,
and faster convergence rate than other traditional evolutionary algorithms such as GA on most
problems. Although PSO has made remarkable advances in the two decades since 1995,
researchers often ignore the value of the large amount of data created during the run of the
algorithm when considering how to improve the algorithm or its applications. Like other
evolutionary algorithms, PSO is a population-based, iterative optimization method. Thus, a
great amount of search data and historical data is produced while PSO runs, and these data
imply useful information such as individual search paths, population evolution trends,
population distribution, the current running state, structural features of the solutions found,
interactions within and between populations, the current advantages of the algorithm, and
the challenges posed by the problems. Analyzing and processing these data and using them
to aid the execution of the algorithm is an important means of improving the performance of
the algorithm. ML techniques acquire new knowledge and skills by having computers simulate
human learning behaviors to improve performance, and they have the ability to extract useful
information from large amounts of data. Therefore, this thesis focuses on innovative research
into ML aided PSO and its engineering applications. It is expected to combine ML and EC,
two important areas of computer science, and to make important attempts at using ML aided
PSO and ML assisted EC methods for algorithm design and application.
For ML aided PSO algorithm design and its engineering applications, the main work in
this thesis uses statistical analysis, orthogonal prediction techniques, and ensemble learning
to improve the algorithm at three levels (parameter control, operator design, and population
interaction), and successfully applies the resulting effective PSO algorithms to a practical
engineering application problem, power electronic circuit design. The specific contributions
are summarized as follows:
Firstly, based on the statistical analysis technique in ML, this thesis proposes the
adaptive PSO (APSO) to improve the universality of the algorithm.
APSO uses statistical analysis techniques to analyze and exploit the population data and
fitness value data created during the run of PSO to implement evolutionary state estimation
and partitioning. The algorithm then adaptively controls the parameters and strategies
according to the current evolutionary state, which increases the convergence rate of the
algorithm and helps it avoid falling into local optima. Experiments on 12 unimodal and
multimodal benchmark functions are carried out to evaluate the performance of the algorithm.
The results show that APSO is able to adaptively adjust the parameters according to the
optimization environment under different running states, and so achieves a rapid convergence
rate on both unimodal and multimodal problems. Meanwhile, since the elite learning strategy
(ELS) in APSO can be carried out adaptively once the algorithm has converged, it enhances
the ability to jump out of local optima and allows the algorithm to converge to the global
optimum on multimodal problems. The experimental results and comparative analysis show
that the ML aided PSO, APSO, has a good convergence rate, strong global search ability, and
stable reliability, and is an important and successful exploration of ML aided PSO design.
Secondly, based on the orthogonal prediction technique in ML, this thesis proposes the
orthogonal learning particle swarm optimization (OLPSO) to increase the rapid global
search ability of PSO.
To address the problem that the traditional learning strategies in PSO cannot make full use of the personal and swarm experience, and inspired by orthogonal design and orthogonal prediction, ML techniques that can discover useful information and provide effective prediction, OLPSO proposes a new orthogonal learning (OL) method that modifies the velocity update operator of PSO. The OL method uses an orthogonal combination technique to combine the personal historical best experience with the population historical best experience, discovering useful information and constructing a guidance vector with the best search experience to guide the flight of the particles. Because each particle is led by a single learning vector with the correct guidance direction, the method alleviates both the "local fluctuation" problem and the "two steps forward, one step back" problem. Experiments on 16 unimodal, multimodal, rotated, and shifted functions, together with comparisons against traditional PSO, improved PSO variants, and other well-performing evolutionary computation methods, verify the contribution of the OL strategy to convergence rate and solution accuracy. They demonstrate that OLPSO has rapid global search ability, represents a successful and significant attempt at ML aided PSO algorithm design, and is an efficient tool for complex multimodal global optimization problems.
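The orthogonal combination underlying the OL strategy can be illustrated for a 3-dimensional problem with an L4(2^3) two-level orthogonal array, where level 0 takes a dimension from the personal best and level 1 from the exemplar (e.g., a neighbor's best). This is a minimal sketch of the idea under a minimization assumption, not the full OLPSO implementation, which uses larger orthogonal arrays matched to the problem dimension; the function names are illustrative.

```python
import numpy as np

# L4(2^3): a two-level orthogonal array for up to 3 factors (dimensions).
# Entry 0 -> take the component from pbest, 1 -> from gbest.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

def orthogonal_guidance(pbest, gbest, cost):
    """Build an OL-style guidance vector for a 3-D problem (minimization).

    Each row of the orthogonal array defines one trial combination of
    pbest/gbest components; factor analysis then predicts the best level
    for each dimension, the predicted combination is also evaluated, and
    the best vector found is returned.
    """
    trials = np.where(L4 == 0, pbest, gbest)        # 4 trial vectors
    costs = np.array([cost(t) for t in trials])
    # factor analysis: per dimension, compare mean cost of level 0 vs level 1
    best_levels = np.array([
        0 if costs[L4[:, d] == 0].mean() <= costs[L4[:, d] == 1].mean() else 1
        for d in range(L4.shape[1])
    ])
    predicted = np.where(best_levels == 0, pbest, gbest)
    candidates = np.vstack([trials, predicted])
    cand_costs = np.append(costs, cost(predicted))
    return candidates[np.argmin(cand_costs)]
```

For a sphere cost with pbest = (0, 5, 5) and gbest = (5, 0, 0), the factor analysis predicts the combination (0, 0, 0), a guidance vector better than either parent vector alone.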
Thirdly, drawing on the idea of ensemble learning in ML, this thesis proposes co-evolutionary multipopulation multiobjective particle swarm optimization (CMPSO) to improve the effectiveness of the algorithm in the multiobjective domain.
CMPSO borrows the idea, from ensemble learning in ML, of combining multiple classifiers to improve classification performance, and adopts a multi-population coevolution technique in PSO to solve multiobjective optimization problems. Analogous to ensemble learning, where one classifier corresponds to one classification task, the multiple populations for multiple objectives (MPMO) framework uses multiple populations to optimize multiple objectives, with one population optimizing one objective. Based on the MPMO framework, CMPSO proposes a shared external archive to implement information sharing and coevolution among the populations, modifies the velocity update operator to enhance the convergence rate, and applies an elitist learning strategy (ELS) to the archive to avoid being trapped in local optima. Experimental results on 18 multiobjective benchmark problems with different characteristics demonstrate that CMPSO can find a set of nondominated solutions uniformly distributed along the Pareto front and performs well on multiobjective problems. As an algorithm inspired by an ML technique, CMPSO is a successful and important exploration of ML aided PSO algorithm design.
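The information-sharing mechanism of the MPMO framework, in which every population deposits its candidates into one external archive of nondominated solutions, can be sketched as a simple nondominated-set update. Minimization of all objectives is assumed, and the fragment is illustrative rather than the full CMPSO:

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, candidate_obj):
    """Insert a candidate objective vector into the shared external archive,
    keeping only mutually nondominated vectors (the MPMO sharing mechanism)."""
    if any(dominates(a, candidate_obj) for a in archive):
        return archive                                  # dominated: reject
    kept = [a for a in archive if not dominates(candidate_obj, a)]
    kept.append(candidate_obj)
    return kept
```

Each of the per-objective populations would call `update_archive` with the objective vectors of its particles, so that improvements found for one objective become visible to the populations optimizing the others.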
Finally, OLPSO is applied to power electronic circuit (PEC) design optimization to improve the PEC model and extend the application area of PSO.
PEC design is a complex practical engineering problem. A PEC consists of a great number of capacitors, resistors, and inductors, and how to set the values of these circuit components is critical to designing a highly stable circuit. Traditionally, engineers solve physical equations to obtain an initial result based on their experience, and then tune the component values to revise the circuit design by trial and error. However, this approach requires professional knowledge and struggles with increasingly complex circuit optimization problems that lack accurate mathematical models. Experiments show that GA cannot obtain feasible solutions on the new PEC optimization model. Therefore, this chapter applies OLPSO, exploiting its global search ability and "free search range" characteristic, to address the new PEC optimization problem. The good performance of OLPSO on PEC design optimization illustrates that improving PSO with the aid of ML techniques is significant and effective, and that OLPSO is an efficient tool for complex multimodal optimization problems.
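For illustration, a particle in such a circuit design problem can directly encode the component values, with an additive penalty steering infeasible candidates back into range. The component list, bounds, and the simulator-style cost function below are hypothetical placeholders rather than the actual PEC model of this thesis:

```python
import numpy as np

# Hypothetical component bounds: [inductance (H), capacitance (F), load (ohm)]
LOWER = np.array([1e-6, 1e-6, 10.0])
UPPER = np.array([1e-3, 1e-3, 1e4])

def penalized_fitness(x, circuit_cost):
    """Evaluate a candidate component vector (minimization).

    Out-of-range values are clipped before calling the (hypothetical)
    circuit simulator `circuit_cost`, and the amount of violation is
    added as a large penalty so the swarm is steered back into the
    feasible box.
    """
    violation = np.sum(np.maximum(LOWER - x, 0.0) +
                       np.maximum(x - UPPER, 0.0))
    return circuit_cost(np.clip(x, LOWER, UPPER)) + 1e6 * violation
```

Any continuous PSO variant, including OLPSO, can then optimize `penalized_fitness` over the component vector without modification.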
In summary, this thesis studies ML aided PSO design and its engineering applications, introducing ML techniques and ideas such as statistical analysis, orthogonal prediction, and ensemble learning into PSO to develop innovations at three levels: adaptive parameter control, orthogonal design of the update operator, and multi-population coevolutionary interaction. The three proposed ML aided PSO algorithms are tested on benchmark functions to verify their effectiveness and efficiency. Furthermore, this thesis applies the proposed OLPSO to an engineering application problem as an example to demonstrate the practicality of the algorithms.
Thus, as illustrated in Fig. 6-1, this thesis identifies problems in the process of algorithm application, proposes three ML aided algorithms to improve performance, and finally verifies the effectiveness and efficiency of the proposed algorithms. The whole procedure is a research-and-application process that starts from practice and returns to practice.
Fig. 6-1 The summary.
6.2 Future work
This thesis develops a series of innovative research works on ML aided PSO algorithm design and its engineering application. The research in this thesis is an important attempt to combine machine learning and evolutionary computation, two important research areas in computer science. Based on the results in this thesis, our future work will proceed along several directions:
6.2.1 More ML techniques and EC algorithms
This thesis focuses on improving a typical EC algorithm, PSO, and developing its applications. Chapters 2, 3, and 4 use statistical analysis, orthogonal design and prediction, and ensemble learning techniques, respectively, to help PSO improve its performance. In fact, many other powerful ML techniques, such as cluster analysis, support vector machines, and deep learning, can also be introduced into PSO design, and our future work will employ more of them. Meanwhile, these ML techniques not only perform well on PSO but can also improve other algorithms, for example, the relatively new differential evolution algorithm [233][234] and the brain storm optimization algorithm [235]. Therefore, our future work will include introducing more ML techniques into more EC algorithms to improve their performance and extend their application areas.
6.2.2 Dynamic ML aided EC algorithm
With the increasing complexity of engineering application problems, dynamic environments will be an important challenge for optimization algorithms. In a dynamic optimization environment, the problem variables and objectives change over time, but the changes usually relate to the current or past states. ML can learn and generalize from past and present experience and predict future changes. Therefore, one important part of our future work is to design ML aided EC algorithms that perform well on dynamic optimization problems.
6.2.3 Distributed ML aided EC algorithm
Most current research on EC algorithms is based on a centralized computation framework. However, with the development and increasing complexity of engineering practice problems, especially in the Internet of Things, cloud computing, big data, and other emerging areas, the objects to be optimized are often inherently distributed: for example, sensor nodes in the Internet of Things, distributed computing resources and user requirements in cloud computing, and multi-source heterogeneous data from different levels and regions in big data. Such problems require distributed optimization. How to use ML techniques to design distributed EC algorithms that improve performance and application effectiveness will be an important part of our future work.
6.2.4 More Engineering Optimization Practice Tests
This thesis applies PSO algorithms (especially OLPSO) to power electronic circuit design, a typical engineering optimization problem, to verify the efficiency of the algorithms. However, newly emerging engineering practice problems still call for EC algorithms to solve them. Thus, one part of our future work is to use ML techniques to analyze the characteristics of these engineering problems and to propose well-focused solutions that exploit those characteristics to improve performance. Future engineering optimization practice will focus on emerging problems in the Internet of Things, cloud computing, and big data, such as coverage optimization in sensor networks, positioning and tracking optimization, resource scheduling, intelligent user management, and modeling optimization and intelligent management in big data. Experiments on more engineering optimization problems will not only provide effective and efficient solutions to those problems, but also extend the application areas of the algorithms.
References
[1]. Shi G Y, Dong J L. Optimization Methods [M]. Higher Education Press, 2002.
[2]. Zhang J, Zhan Z H, Chen W N, Zhong J H, Chen N, Gong Y J, Xu R T, Guan Z. Computational Intelligence [M]. Beijing: Tsinghua University Press, 2009.
[3]. Holland J H. Concerning efficient adaptive systems [M]. In Yovits, M.C., Eds., Self-Organizing Systems, 1962.
[4]. Holland J H. Adaptation in natural and artificial systems [M], University of Michigan Press, Ann Arbor, 1975.
[5]. Fogel L J, Owens A J, Walsh M J. Artificial intelligence through simulated evolution [M]. New York: John Wiley, 1966.
[6]. Rechenberg I. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution [M]. Stuttgart: Frommann-Holzboog Verlag, 1973.
[7]. Kirkpatrick S, Gelatt Jr C D, Vecchi M P. Optimization by simulated annealing [J]. Science, 1983, 220: 671-680.
[8]. Glover F. Tabu search: part I [J]. ORSA Journal on Computing, 1989 (1): 190-206.
[9]. Glover F. Tabu search: part II [J]. ORSA Journal on Computing, 1990 (2): 4-32.
[10]. Dorigo M, Gambardella L M. Ant colony system: A cooperative learning approach to the traveling salesman problem [J]. IEEE Transactions on Evolutionary Computation, 1997, 1(1): 53-66.
[11]. Zhan Z H, Zhang J, Li Y, Liu O, Kwok S K, Ip W H, Kaynak O. An efficient ant colony system based on receding horizon control for the aircraft arrival sequencing and scheduling problem [J]. IEEE Transactions on Intelligent Transportation Systems, 2010, 11(2): 399-412.
[12]. Kennedy J, Eberhart R C. Particle swarm optimization [C]. //in Proc. IEEE Int. Conf. Neural Networks, 1995: 1942–1948.
[13]. Eberhart R C, Kennedy J. A new optimizer using particle swarm theory [C]. //in Proc. 6th Int. Symp. Micro Machine and Human Science, 1995: 39-43.
[14]. Kennedy J, Eberhart R C, Shi Y H. Swarm Intelligence [M]. San Mateo, CA: Morgan Kaufmann, 2001.
[15]. Engelbrecht A P. Fundamentals of Computational Swarm Intelligence [M]. Hoboken, NJ: John Wiley & Sons, 2005.
[16]. Back T, Hammel U, Schwefel H P. Evolutionary computation: Comments on the history and current state [J]. IEEE Trans. Evolutionary Computation, 1997, 1(1): 3–17.
[17]. Jong K D. Evolutionary computation: a unified approach [C]. //Proc Genetic and Evolutionary Computation Conference (GECCO 2012), New York: ACM Press, 2012: 737–750.
[18]. Eberhart R C, Shi Y H. Particle swarm optimization: developments, applications and resources [C]. //in Proc. IEEE Congr. Evol. Comput., 2001: 81-86.
[19]. Hu X H, Shi Y H, Eberhart R C. Recent advances in particle swarm [C]. //in Proc. Congr. Evol. Comput., 2004: 90–97.
[20]. Li X D, Engelbrecht A P. Particle swarm optimization: an introduction and its recent developments [C]. //in Proc. Genetic Evol. Comput. Conf., 2007: 3391-3414.
[21]. Banks A, Vincent J, Anyakoha C. A review of particle swarm optimization. Part I: background and development [J]. Natural Computing, 2007, 6(4): 467-484.
[22]. del Valle Y, Venayagamoorthy G K, Mohagheghi S, Hernandez J C, Harley R G. Particle swarm optimization: Basic concepts, variants and applications in power system [J]. IEEE Transactions on Evolutionary Computation, 2008, 12(2): 171-195.
[23]. AlRashidi M R, El-Hawary M E. A survey of particle swarm optimization applications in electric power system [J]. IEEE Transactions on Evolutionary Computation, 2009, 13(4): 913-918.
[24]. Eberhart R C, Shi Y H. Guest editorial: special issue on particle swarm optimization [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 201-203.
[25]. Mendes R, Kennedy J, Neves J. The fully informed particle swarm: Simpler, maybe better [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 204-210.
[26]. Parsopoulos K E, Vrahatis M N. On the computation of all global minimizers through particle swarm optimization [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 211–224.
[27]. van den Bergh F, Engelbrecht A P. A cooperative approach to particle swarm optimization [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 225–239.
[28]. Ratnaweera A, Halgamuge S, Watson H. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 240-255.
[29]. Coello C A C, Pulido G T, Lechuga M S. Handling multiple objectives with particle swarm optimization [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 256-279.
[30]. Messerschmidt L, Engelbrecht A P. Learning to play games using a PSO-based competitive learning approach [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 280-288.
[31]. Wachowiak M P, Smolikova R, Zheng Y F, Zurada J M, Elmaghraby A. S. An approach to multimodal biomedical image registration utilizing particle swarm optimization [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(3): 289-301.
[32]. Mitchell T M. Machine Learning [M]. New York: McGraw-Hill, 1997.
[33]. Reynolds C W. Flocks, herds and schools: a distributed and behavioral model [J]. ACM Computer Graphics, 1987, 21(4): 25-34.
[34]. Wilson E O. Sociobiology: The New Synthesis [M]. Cambridge, MA: Belknap Press. 1975.
[35]. Engelbrecht A. Particle swarm optimization: Velocity initialization [C]. //Proc IEEE World Congress on Computational Intelligence (WCCI 2012), Piscataway: IEEE Press, 2012: 1-8.
[36]. Helwig S, Branke J, Mostaghim S M. Experimental analysis of bound handling techniques in particle swarm optimization [J]. IEEE Transactions on Evolutionary Computation, 2013, 17(2): 259-271.
[37]. Ender O, Mohan C K. Particle swarm optimization: Surfing the waves [C]. //in Proc. 1999 Congress on Evolutionary Computation, Washington DC, USA, 1999: 1939-1944.
[38]. Clerc M, Kennedy J. The particle swarm-explosion, stability and convergence in a multidimensional complex space [J]. IEEE Transactions on Evolutionary Computation, 2002, 6(2): 58-73.
[39]. Trelea I C. The particle swarm optimization algorithm: Convergence analysis and parameter selection [J]. Information Processing Letters, 2003, 85: 317-325.
[40]. Li N, Sun D B, Chu T, Qin Y Q, Wei Y. An analysis for a particle’s trajectory of PSO based on difference equation [J]. Chinese Journal of Computers, 2006, 29(11): 2052–2069.
[41]. Kadirkamanathan V, Selvarajah K, Fleming P J. Stability analysis of the particle dynamics in particle swarm optimizer [J]. IEEE Transactions on Evolutionary Computation, 2006, 10(3): 245-255.
[42]. van den Bergh F, Engelbrecht A P. A study of particle swarm optimization particle trajectories [J]. Information Sciences, 2006, 176(8): 937-971.
[43]. Fernandez-Martinez J L, Garcia-Gonzalo E. Stochastic stability analysis of the linear continuous and discrete PSO models [J]. IEEE Transactions on Evolutionary Computation, 2011, 15(3): 405-423.
[44]. Shi Y H, Eberhart R C. Comparison between genetic algorithms and particle swarm optimization [C] //in Proc. 7th Int. Conf. Evolutionary Programming, 1998: 611–616.
[45]. Shi Y H, Eberhart R C. A modified particle swarm optimizer [C]. //in Proc. IEEE World Congr. Comput. Intell, 1998: 69–73.
[46]. Shi Y H, Eberhart R C. Fuzzy adaptive particle swarm optimization [C]. //in Proc. IEEE Int. Congr. Evolutionary Computation, 2001: 101–106.
[47]. Eberhart R C, Shi Y H. Tracking and optimizing dynamic systems with particle swarms [C]. //in Proc. IEEE Congr. Evolutionary Computation, 2001: 94-97.
[48]. Huang X, Zhang J, Zhan Z H. Faster particle swarm optimization algorithm with random inertia weight [J]. Computer Engineering and Design. 2009, 30(3): 647-650.
[49]. Liu Y, Tian X F, Zhan Z H. Research on inertia weight control approaches in particle swarm optimization [J]. Journal of Nanjing University (Natural Sciences). 2011, 47(4): 364-371
[50]. Clerc M. The swarm and the queen: Toward a deterministic and adaptive particle swarm optimization [C]. //in Proc. IEEE Int. Conf. Evol. Computation, 1999: 1951-1957.
[51]. Eberhart R C, Shi Y H. Comparing inertia weights and constriction factors in particle swarm optimization [C]. //In Proc. of the Congr. on Evolu. Comp., 2000: 84-88.
[52]. Kennedy J. The particle swarm: social adaptation of knowledge [C]. //in Proc. IEEE Congr. Evol. Comput. 1997: 303-308.
[53]. Suganthan P N. Particle swarm optimizer with neighborhood operator [C]. //in Proc. IEEE Congr. Evol. Comput., 1999: 1958-1962.
[54]. Kennedy J, Mendes R. Population structure and particle swarm performance [C]. //in Proc. IEEE Congr. Evol. Comput., 2002: 1671-1676.
[55]. Kennedy J, Mendes R. Neighborhood topologies in fully informed and best-of-neighborhood particle swarms [J]. IEEE Trans. Syst., Man, Cybern., C, 2006, 36(4): 515-519.
[56]. Hu X, Eberhart R C. Multiobjective optimization using dynamic neighborhood particle swarm optimization [C]. //in Proc. IEEE Congr. Evol. Comput., 2002: 1677-1681.
[57]. Liang J J, Suganthan P N. Dynamic multi-swarm particle swarm optimizer [C]. //in Proc. Swarm Intelligence Symp., 2005: 124-129.
[58]. Kennedy J. Stereotyping: Improving particle swarm performance with cluster analysis [C]. //in Proc. IEEE Congr. Evol. Comput., 2000: 1507-1512.
[59]. Liang J J, Qin A K, Suganthan P N, Baskar S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions [J]. IEEE Trans. Evol. Comput., 2006, 10(3): 281-295.
[60]. Angeline P J. Using selection to improve particle swarm optimization [C]. //in Proc. IEEE Congr. Evol. Comput., 1998: 84-89.
[61]. Lovbjerg M, Rasmussen T K, Krink T. Hybrid particle swarm optimizer with breeding and subpopulations [C]. //in Proc. Genetic Evol. Comput. Conf., 2001: 469-476.
[62]. Chen Y P, Peng W C, Jian M C. Particle swarm optimization with recombination and dynamic linkage discovery [J]. IEEE Trans. Syst., Man. Cybern., B, 2007, 37(6): 1460-1470.
[63]. Liang J J, Song H, Qu B Y, Mao X B. Path planning based on dynamic multi-swarm particle swarm optimizer with crossover [C]. //Proc International Conference on Intelligent Computing, 2012: 159-166
[64]. Andrews P S. An investigation into mutation operators for particle swarm optimization [C]. //in Proc. IEEE Congr. Evol. Comput., 2006: 1044-1051.
[65]. Lu Z S, Hou Z R. Particle swarm optimization with adaptive mutation [J]. Acta Electronica Sinica, 2004, 32(3): 416-420.
[66]. Pehlivanoglu Y V. A new particle swarm optimization method enhanced with a periodic mutation strategy and neural networks [J]. IEEE Transactions on Evolutionary Computation, 2013, 17(3): 436-452.
[67]. Liang J J, Suganthan P N. Dynamic multi-swarm particle swarm optimizer with local search [C]. //in Proc. IEEE Congr. Evol. Comput., 2005: 522-528.
[68]. Qu B Y, Suganthan P N, Das S. A distance-based locally informed particle swarm model for multimodal optimization [J]. IEEE Transactions on Evolutionary Computation, 2013, 17(3): 387-402.
[69]. Gong Y J, Zhang J, Chung H, Chen W N, Zhan Z H, Li Y, Shi Y H. An efficient resource allocation scheme using particle swarm optimization [J]. IEEE Transactions on Evolutionary Computation, 2012, 16(6): 801-816.
[70]. Chen W N, Zhang J, Lin Y, Chen N, Zhan Z H, Chung H, Li Y, Shi Y H. Particle swarm optimization with an aging leader and challengers [J]. IEEE Transactions on Evolutionary Computation, 2013, 17(2): 241-258.
[71]. Chen G Q, Wang Y P. Overlapping community detection of complex networks based on discrete particle swarm algorithm [J]. Journal of Xi'an Jiaotong University, 2013, 47(1): 1-9.
[72]. dos Santos Coelho L, Herrera B M. Fuzzy identification based on a chaotic particle swarm optimization approach applied to a nonlinear Yo-yo motion system [J]. IEEE Trans. Industrial Electronics, 2007, 54(6): 3234-3245.
[73]. Liu B, Wang L, Jin Y H, Tang F, Huang D X. Improved particle swarm optimization combined with chaos [J]. Chaos, Solitons Fractals, 2005, 25(5): 1261-1271.
[74]. Sun J, Feng B, Xu W B. Particle swarm optimization with particles having quantum behavior [C]. //in Proc. IEEE Congr. Evol. Comput., 2004: 325-331.
[75]. Mikki S M, Kishk A A. Quantum particle swarm optimization for electromagnetics [J]. IEEE Trans. Antennas and Propagation, 2006, 54(10): 2764- 2775.
[76]. Yang S Y, Wang M, Jiao L C. A quantum particle swarm optimization [C]. //in Proc. IEEE Congr. Evol. Comput., 2004: 320-324.
[77]. Brits R, Engelbrecht A P, van den Bergh F. A niching particle swarm optimizer [C]. //in Proc. 4th Asia-Pacific Conf. Simulated Evolutionary Learning, 2002: 692-696.
[78]. Brits R, Engelbrecht A P, van den Bergh F. Locating multiple optima using particle swarm optimization [J]. Applied Mathematics and Computation, 2007, 189(2): 1859-1883.
[79]. Parrott D, Li X D. Locating and tracking multiple dynamic optima by a particle swarm model using speciation [J]. IEEE Trans. Evol. Comput., 2006, 10(4): 440-458.
[80]. Dou Q S, Zhou C G, Ma M. Two improvement strategies for particle swarm optimization [J]. Journal of Computer Research and Development, 2005, 42(5): 897-904.
[81]. Bao Q J B, Jiang J Q, Song C Y, Liang Y C. Optimal stock cutting based on particle swarm optimization and simulated annealing [J]. Computer Engineering and Applications, 2008, 44(26): 246-248.
[82]. Huang C F. A hybrid of genetic algorithm and particle swarm optimization for recurrent network design [J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2004, 34(2): 997-1006
[83]. Sun K, Wu H X, Wang H, Ding J D. Hybrid ant colony and particle swarm algorithm for solving TSP [J]. Computer Engineering and Applications, 2012, 48(34): 60-63.
[84]. Shelokar P S, Siarry P, Jayaraman V K, Kullkarni B D. Particle swarm and ant colony algorithms hybridized for improving continuous optimization [J]. Applied Mathematics and Computation, 2007, 188(1): 129-142.
[85]. Gao Y, Xie S L. Particle swarm optimization with immunity [J]. Computer Engineering and Applications, 2004, 3: 4-6.
[86]. Cong L, Jiao L C, Sha Y H. An orthogonal immune clone particle swarm algorithm with quantization for numerical optimization [J]. Pattern Recognition and Artificial Intelligence, 2007, 20(5): 583-592.
[87]. Zhang W J, Xie X F. DEPSO: Hybrid particle swarm with differential evolution operator [C]. //Proc IEEE International Conference on Systems, Man and Cybernetics, 2003, 3816-3821
[88]. Chang J F, Chu S C, Roddick J F, Pan J S. A parallel particle swarm optimization algorithm with communication strategies [J]. Journal of Information Science and Engineering, 2005, 21: 809-818.
[89]. Chu S C, Pan J S. Intelligent parallel particle swarm optimization algorithms [J]. Studies in Computational Intelligence, 2006, 22: 159-175.
[90]. Leong W F, Yen G G. PSO-based multiobjective optimization with dynamic population size and adaptive local archives [J]. IEEE Trans. Syst., Man, Cybern. B, 2008, 38(5): 1270–1293.
[91]. Yen G G, Leong W F. Dynamic multiple swarms in multiobjective particle swarm optimization [J]. IEEE Trans. Syst., Man, Cybern. A, 2009, 39(4): 890–911.
[92]. Krohling R A, dos Santos Coelho L. Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems [J]. IEEE Trans. Syst., Man. Cybern., B, 2006, 36(6): 1407-1416.
[93]. Zhan Z H, Zhang J. Parallel particle swarm optimization with adaptive asynchronous migration strategy [C]. //Proc The 9th International Conference on Algorithms and Architectures for Parallel Processing (ICA3PP 2009), Heidelberg: Springer Press, 2009: 490–501.
[94]. Kennedy J, Eberhart R C. A discrete binary version of the particle swarm algorithm [C]. //in Proc. IEEE Int Conf on Computational Cybernetics and Simulation, 1997: 4104-4108.
[95]. Pampara G, Franken N, Engelbrecht A P. Combining particle swarm optimization with angle modulation to solve binary problems [C]. //Proc IEEE Congress on Evolutionary Computation, 2005: 89-96
[96]. Al-kazemi B, Mohan C K. Discrete multi-phase particle swarm optimization [C]. //Information Processing with Evolutionary Algorithms, Springer: Berlin Heidelberg, 2006: 306-326
[97]. Pedrasa M A A, Spooner T D, MacGill I. Scheduling of demand side resources using binary particle swarm optimization [J]. IEEE Transactions on Power Systems, 2009, 24(3): 1173-1181.
[98]. Zhan Z H, Zhang J, Fan Z. Solving the optimal coverage problem in wireless sensor networks using evolutionary computation algorithms [C]. //Proc Simulated Evolution And Learning (SEAL 2010), Heidelberg: Springer Press, 2010: 166–176.
[99]. Zhan Z H, Du K J, Zhang J, Xiao J. Extended binary particle swarm optimization approach for disjoint set covers problem in wireless sensor networks [C]. //Proc Conference on Technologies and Applications of Artificial Intelligence (TAAI 2012), Piscataway: IEEE Press, 2012: 327-331.
[100]. Zhan Z H, Zhang J. Discrete particle swarm optimization for multiple destination routing problems [C]. //Proc EvoWorkshops, Heidelberg: Springer Press, 2009: 117-122.
[101]. Salman A, Ahmad I, Al-Madani S. Particle swarm optimization for task assignment problem [J]. Microprocessors and Microsystems, 2002, 26(8): 363-371.
[102]. Yoshida H, Kawata K, Fukuyama Y, Takayama S, Nakanishi Y. A particle swarm optimization for reactive power and voltage control considering voltage security assessment [J]. IEEE Trans. Power Syst., 2000, 15(4): 1232–1239.
[103]. Zhan Z H, Feng X L, Gong Y J, Zhang J. Solving the flight frequency programming problem with particle swarm optimization [C]. //Proc IEEE Congress on Evolutionary Computation (CEC 2009), Piscataway: IEEE Press, 2009: 1383-1390.
[104]. Schoofs L, Naudts B. Swarm intelligence on the binary constraint satisfaction problem [C]. //in Proc. IEEE Congr. Evol. Comput., 2002: 1444-1449.
[105]. Hu X, Eberhart R C, Shi Y H. Swarm intelligence for permutation optimization: A case study on n-Queen problem [C]. //in Proc. IEEE Swarm Intelligence Symposium, 2003: 243-246.
[106]. Clerc M. Discrete particle swarm optimization illustrated by the traveling salesman problem [C]. //in New Optimization Techniques in Engineering, 2004: 219-239.
[107]. Chen W N, Zhang J, Chung H, Zhong W L, Wu W G, Shi Y H. A novel set-based particle swarm optimization method for discrete optimization problem [J]. IEEE Transactions on Evolutionary Computation. 2010, 14(2): 278-300.
[108]. Gong Y J, Zhang J, Liu O, Huang R Z, Chung H, Shi Y H. Optimizing the vehicle routing problem with time windows: a discrete particle swarm optimization approach [J]. IEEE Transactions on Systems, Man, and Cybernetics--Part C: Applications and Reviews. 2012, 42(2): 254-267.
[109]. Zhu H, Wang Y P. Integration of security grid dependent tasks scheduling double-objective optimization model and algorithm [J]. Journal of Software, 2011, 22(11): 2729-2748.
[110]. Wang J, Cai Y, Zhou Y, Wang R, Li C. Discrete particle swarm optimization based on estimation of distribution for terminal assignment problems [J]. Computers & Industrial Engineering, 2011, 60(4): 566-575.
[111]. Tian Y. Liu D Y. A hybrid particle swarm optimization method for flow shop scheduling problem [J]. Acta Electronica Sinica, 2011, 39(5): 1087-1093.
[112]. Sun C S, Sun J G, Yang Q Y, Zheng L H. A hybrid algorithm for flowshop scheduling problem [J]. Acta Automatica sinica, 2009, 35(3): 332-336.
[113]. AlRashidi M R, El-Hawary M E. Hybrid particle swarm optimization approach for solving the discrete OPF problem considering the valve loading effects [J]. IEEE Transactions on Power Electronics, 2007, 22(4): 2030-2038.
[114]. Gao H B, Zhou C, Gao L. General particle swarm optimization model [J]. Chinese Journal of Computers, 2005, 28(2): 1980-1987.
[115]. Guo W Z, Chen G L, Chen Z. Survey on discrete particle swarm optimization algorithm [J]. Journal of Fuzhou University (Natural Science Edition), 2011, 39(5): 631-638.
[116]. Eberhart R C, Shi Y H. Evolving artificial neural networks [C]. //in Proc. Int’l. Conf. on Neural Networks and Brain, Beijing, 1998: 84-89.
[117]. Eberhart R C, Hu X. Human tremor analysis using particle swarm optimization [C]. //in Proc. IEEE Congr. Evol. Comput., 1999: 1927-1930.
[118]. Ciuprina G, Ioan D, Munteanu I. Use of intelligent-particle swarm optimization in electromagnetism [J]. IEEE Trans. Magn., 2002, 38(2): 1037-1040.
[119]. Gaing Z L. Particle swarm optimization to solving the economic dispatch considering the generator constraints [J]. IEEE Trans. Power Syst., 2003, 18(3): 1187-1195.
[120]. Victoire T A A, Jeyakumar A E. Reserve constrained dynamic dispatch of units with valve-point effects [J]. IEEE Trans. Power Syst., 2005, 20(3): 1273–1282.
[121]. Abido M A. Optimal design of power-system stabilizers using particle swarm optimization [J]. IEEE Trans. Energy Conversion, 2002, 17(3): 406-413.
[122]. Gaing Z L. A particle swarm optimization approach for optimum design of PID controller in AVR system [J]. IEEE Trans. Energy Conversion, 2004, 19(2): 384-391.
[123]. Franken N, Engelbrecht A P. Particle swarm optimization approaches to coevolve strategies for the iterated prisoner’s dilemma [J]. IEEE Trans. Evol. Comput., 2005, 9(6): 562-579.
[124]. Sousa T, Silva A, Neves A. A particle swarm data miner [C]. //in Lecture Notes in Computer Science (LNCS), 2003: 43-53.
[125]. Sousa T, Silva A, Neves A. Particle swarm based data mining algorithms for classification tasks [J]. Parallel Computing, 2004, (30): 767-783.
[126]. Donelli M, Azaro R, De Natale F G B, Massa A. An innovative computational approach based on a particle swarm strategy for adaptive phased-arrays control [J]. IEEE Trans. Antennas and Propagation, 2006, 54(3): 888-898.
[127]. Liu B, Wang L, Jin Y H. An effective PSO-based memetic algorithm for flow shop scheduling [J]. IEEE Trans. Syst., Man, and Cybern. B, 2007, 37(1): 18-27.
[128]. Bieler A, Altwegg K, Hofer L. Optimization of mass spectrometers using the adaptive particle swarm algorithm [J]. Journal of Mass Spectrometry, 2011, 46(11): 1143-1151.
[129]. Zhang J R, Wang J, Yue C Y. Small population-based particle swarm optimization for short-term hydrothermal scheduling [J]. IEEE Transactions on Power Systems, 2012, 27(1): 142-152.
[130]. Pham M T, Zhang D H, Koh C S. Multi-guider and cross-searching approach in multi-objective particle swarm optimization for electromagnetic problems [J]. IEEE Transactions on Magnetics, 2012, 48(2): 539-542.
[131]. Boeringer D W, Werner D H. Particle swarm optimization versus genetic algorithms for phased array synthesis [J]. IEEE Trans. Antennas and Propagation, 2004, 52(3): 771-779.
[132]. Li Q, Chen W R, Wang Y G, Liu S K, Jia J B. Parameter identification for PEM fuel-cell mechanism model based on effective informed adaptive particle swarm optimization [J]. IEEE Transactions on Industrial Electronics, 2011, 58(6): 2410-2419.
[133]. Zhang B C, Sun X, Gao L R, Yang L. Endmember extraction of hyperspectral remote sensing images based on the discrete particle swarm optimization algorithm [J]. IEEE Transactions on Geoscience and Remote Sensing, 2011, 49(11): 4173-4176.
[134]. Gong Y J, Shen M, Zhang J, Kaynak O, Chen W N, Zhan Z H. Optimizing RFID network planning by using a particle swarm optimization algorithm with redundant reader elimination [J]. IEEE Transactions on Industrial Informatics, 2012, 8(4): 900-912.
[135]. Michalski R S, Carbonell J G, Mitchell T M. Machine Learning: An Artificial Intelligence Approach [M]. Morgan Kaufmann, 1986.
[136]. Turing A M. Computing machinery and intelligence [J]. Mind, New Series, 1950, 59(236): 433-460.
[137]. Dietterich T G. Machine learning research: Four current directions [J]. AI Magazine, 1997, 18(4): 97-136.
[138]. Mjolsness E, DeCoste D. Machine learning for science: State of the art and future prospects [J]. Science, 2001, 293(5537): 2051-2055.
[139]. Zhou Z H. Machine learning and data mining [J]. 2007: 35-44.
[140]. Nilsson N. Introduction to Machine Learning, http://ai.stanford.edu/~nilsson/mlbook.html [OL], 2010, accessed March 2013.
[141]. Wang Y, Shi C Y. Machine learning [J]. Journal of Guangxi Normal University (Natural Science Edition), 2003, 21(2): 1-15.
[142]. Lin W Y, Hu Y H, Tsai C F. Machine learning in financial crisis prediction: A survey [J]. IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2012, 42(4): 421-436.
[143]. Omran M G, Engelbrecht A P, Salman A. Particle swarm optimization method for image clustering [J]. International Journal of Pattern Recognition and Artificial Intelligence, 2005, 19(3): 297-322.
[144]. Omran M G, Engelbrecht A P, Salman A. A color image quantization algorithm based on particle swarm optimization [J]. Informatica, 2005, 29(3): 261-269.
[145]. Xu R, Anagnostopoulos G C, Wunsch D C. Multiclass cancer classification using semisupervised ellipsoid ARTMAP and particle swarm optimization with gene expression data [J]. IEEE/ACM Trans. Computational Biology and Bioinformatics, 2007, 4(1): 65-77.
[146]. Zhang J, Zhan Z H, Lin Y, Chen N, Gong Y J, Zhong J H, Chung H, Li Y, Shi Y H. Evolutionary computation meets machine learning: A survey [J]. IEEE Computational Intelligence Magazine, 2011, 6(4): 68-75.
[147]. Chen D B, Zhao C X. Particle swarm optimization with adaptive population size and its application [J]. Applied Soft Comput., 2009, 9: 39-48.
[148]. Iwasaki N, Yasuda K. Adaptive particle swarm optimization using velocity feedback [C]. //in Proc. Annual Int. Conf. Symposium on Stochastic Syst. Theory and Its Applications, 2004: 369-380.
[149]. Sivanandam S N, Visalakshi P. Dynamic task scheduling with load balancing using parallel orthogonal particle swarm optimization [J]. Int. J. Bio-Inspired Computation, 2009, 1(4): 276-286.
[150]. Ho S Y, Lin H S, Liauh W H, Ho S J. OPSO: Orthogonal particle swarm optimization and its application to task assignment problems [J]. IEEE Trans. Syst., Man, Cybern. A, 2008, 38(2): 288–298.
[151]. Liu J L, Chang C C. Novel orthogonal momentum-type particle swarm optimization applied to solve large parameter optimization problems [J]. J. Artif. Evol. Appl., 2008: 1-16.
[152]. Han L, He X. A novel opposition-based particle swarm optimization for noisy problems [C]. //in Proc. 3rd Int. Conf. Natural Comput., 2007: 624-629.
[153]. Omran M G H, AL-Sharhan S. Using opposition-based learning to improve the performance of particle swarm optimization [C]. //in Proc. IEEE Symp. Swarm Intell., 2008: 1-6.
[154]. Wu Z, Ni Z, Zhang C, Gu L. Opposition based comprehensive learning particle swarm optimization [C]. //in Proc. Int. Conf. 3rd Intell. Syst. Knowledge Engineering, 2008: 1013-1019.
[155]. Janson S, Merkle D. A new multi-objective particle swarm optimization algorithm using clustering applied to automated docking [C]. //in Proceedings of Hybrid Metaheuristics, 2005: 128-141.
[156]. Pulido G T, Coello C A C. Using clustering techniques to improve the performance of a multi-objective particle swarm optimizer [C]. //in Proc. Genetic Evol. Comput. Conf., 2004: 225-237.
[157]. Mei C, Zhou D. An improved particle swarm optimization with fuzzy c-means clustering algorithm [C]. //in Proc. Int. Conf. Intelligent Human-Machine Systems and Cybernetics, 2009: 118-122.
[158]. Alizadeh M, Fotoohi E, Roshanaei V, Safavieh E. Clustering based fuzzy particle swarm optimization [C]. //in Proc. 28th North American Fuzzy Information Processing Society Annual Conf., 2009: 1-6.
[159]. Zhan Z H, Xiao J, Zhang J, Chen W N. Adaptive control of acceleration coefficients for particle swarm optimization based on clustering analysis [C]. //Proc IEEE Congress on Evolutionary Computation (CEC 2007), Piscataway: IEEE Press, 2007: 3276-3282.
[160]. De Oca M, Stützle T, Van den Enden K, Dorigo M. Incremental social learning in particle swarms [J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2011, 41(2): 368-384.
[161]. Rout N K, Das D P, Panda G R. Particle swarm optimization based active noise control algorithm without secondary path identification [J]. IEEE Transactions on Instrumentation and Measurement, 2012, 61(2): 554-563.
[162]. Yao X, Liu Y, Lin G. Evolutionary programming made faster [J]. IEEE Transactions on Evolutionary Computation, 1999, 3(2): 82-102.
[163]. Suganthan P N, Hansen N, Liang J J, Deb K, Chen Y P, Auger A, Tiwari S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization [R]. Nanyang Technol. Univ., Singapore, IIT Kanpur, India, KanGAL Rep. 2005005, May 2005.
[164]. Wolpert D H, Macready W G. No free lunch theorems for optimization [J]. IEEE Transactions on Evolutionary Computation, 1997, 1(1): 67-82.
[165]. Deb K, Beyer H G. Self-adaptive genetic algorithms with simulated binary crossover [J]. Evol. Comput., 2001, 9(2): 197-221.
[166]. Zhang Q, Leung Y W. An orthogonal genetic algorithm for multimedia multicast routing [J]. IEEE Transactions on Evolutionary Computation, 1999, 3(1): 53–62.
[167]. Leung Y W, Wang Y. An orthogonal genetic algorithm with quantization for global numerical optimization [J]. IEEE Transactions on Evolutionary Computation, 2001, 5(1): 41–53.
[168]. Hu X M, Zhang J, Zhong J H. An enhanced genetic algorithm with orthogonal design [C]. //in Proc. IEEE Congr. Evol. Comput., 2006: 3174-3181.
[169]. Ho S Y, Ho S J, Lin Y K, Chu W C C. An orthogonal simulated annealing algorithm for large floorplanning problems [J]. IEEE Trans. Very Large Scale Integr. Syst., 2004, 12(8): 874–877.
[170]. Ho S J, Ho S Y, Shu L S. OSA: Orthogonal simulated annealing algorithm and its application to designing mixed H2/H∞ optimal controllers [J]. IEEE Trans. Syst., Man, Cybern. A, 2004, 34(5): 588-600.
[171]. Ho S Y, Shu L S, Chen J H. Intelligent evolutionary algorithms for large parameter optimization problems [J]. IEEE Transactions on Evolutionary Computation, 2004, 8(6): 522-541.
[172]. Hu X M, Zhang J. Orthogonal methods based ant colony search for solving continuous optimization problems [J]. Journal of Computer Science and Technology, 2008, 23(1): 2-18.
[173]. Shang Y W, Qiu Y H. A note on the extended Rosenbrock function [J]. Evol. Comput., 2006, 14(1): 119-126.
[174]. Salomon R. Reevaluating genetic algorithm performance under coordinate rotation of benchmark functions [J]. BioSystems, 1996, 39: 263–278.
[175]. Zhang Q, Sun J, Tsang E, Ford J. Hybrid estimation of distribution algorithm for global optimization [J]. Eng. Comput., 2004, 21(1): 91–107.
[176]. Auger A, Hansen N. Performance evaluation of an advanced local search evolutionary algorithm [C]. //In Proc. IEEE Congr. Evol. Comput., 2005: 1777-1784.
[177]. Zhang J Q, Sanderson A C. JADE: Adaptive differential evolution with optional external archive [J]. IEEE Transactions on Evolutionary Computation, 2009, 13(5): 945-958.
[178]. Reyes-Sierra M, Coello C A C. Multi-objective particle swarm optimizers: A survey of the state-of-the-art [J]. International Journal of Computational Intelligence Research, 2006, 2(3): 287-308.
[179]. Tang L X, Wang X P. A hybrid multiobjective evolutionary algorithm for multiobjective optimization problems [J]. IEEE Transactions on Evolutionary Computation, 2013, 17(1): 20-45.
[180]. Saxena D K, Duro J A, Tiwari A, Deb K, Zhang Q F. Objective reduction in many-objective optimization: Linear and nonlinear algorithms [J]. IEEE Transactions on Evolutionary Computation, 2013, 17(1): 77-99.
[181]. Masazade E, Rajagopalan R, Varshney P K, Mohan C K, Sendur G K, Keskinoz M. A multiobjective optimization approach to obtain decision thresholds for distributed detection in wireless sensor networks [J]. IEEE Trans. Syst., Man. Cybern., B, 2010, 40(2): 444-457.
[182]. Gong M G, Jiao L C, Yang D D, Ma W P. Research on evolutionary multi-objective optimization algorithms [J]. Journal of Software, 2009, 20(2): 271-289.
[183]. Ting C K, Lee C N, Chang H C, Wu J S. Wireless heterogeneous transmitter placement using multiobjective variable-length genetic algorithm [J]. IEEE Trans. Syst., Man. Cybern., B, 2009, 39(4): 945-958.
[184]. Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II [J]. IEEE Transactions on Evolutionary Computation, 2002, 6(2): 182-197.
[185]. Zhang Q, Li H. MOEA/D: A multi-objective evolutionary algorithm based on decomposition [J]. IEEE Transactions on Evolutionary Computation, 2007, 11(6): 712–731.
[186]. Zhou Z H. When Semi-supervised learning meets ensemble learning [J]. Multiple Classifier Systems, 2009, LNCS(5519): 529-538.
[187]. Zhou Z H. Ensemble Methods: Foundations and Algorithms [M]. Chapman & Hall/CRC, 2012.
[188]. Goh C K, Tan K C, Liu D S, Chiam S C. A competitive and cooperative co-evolutionary approach to multi-objective particle swarm optimization algorithm design [J]. European Journal of Operational Research, 2010, 202(1): 42-54.
[189]. Li X D, Yao X. Cooperatively coevolving particle swarms for large scale optimization [J]. IEEE Transactions on Evolutionary Computation, 2012, 16(2): 210-224.
[190]. Zhan Z H, Zhang J. Co-evolutionary differential evolution with dynamic population size and adaptive migration strategy [C]. //in Proc. Genetic Evol. Comput. Conf., 2011: 211-212.
[191]. Parsopoulos K E, Vrahatis M N. Particle swarm optimization method in multiobjective problems [C]. //in Proc. ACM Symp. Applied Computing, 2002: 603–607.
[192]. Fonseca C M, Fleming P J. Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization [C]. //In Proc. the 5th Int. Conf. Genetic Algorithms, 1993: 416-423.
[193]. Horn J, Nafpliotis N, Goldberg D E. A niched Pareto genetic algorithm for multiobjective optimization [C]. //in Proc. the 1st IEEE Conf. Evol. Comput., 1994: 82-87.
[194]. Li X. A non-dominated sorting particle swarm optimizer for multiobjective optimization [C]. //in Proc. Genetic Evol. Comput. Conf., 2003: 37–48.
[195]. Rachmawati L, Srinivasan D. Incorporating the notion of relative importance of objectives in evolutionary multiobjective optimization [J]. IEEE Transactions on Evolutionary Computation, 2010, 14(4): 530-546.
[196]. Karahan I, Koksalan M. A territory defining multiobjective evolutionary algorithm and preference incorporation [J]. IEEE Transactions on Evolutionary Computation, 2010, 14(4): 636-664.
[197]. Zitzler E, Thiele L, Bader J. On set-based multiobjective optimization [J]. IEEE Transactions on Evolutionary Computation, 2010, 14(1): 58-79.
[198]. Wang Y, Cai Z X, Guo G Q, Zhou Y R. Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems [J]. IEEE Trans. Syst., Man, Cybern. B, 2007, 37(3): 560–575.
[199]. Adra S F, Dodd T J, Griffin I A, Fleming P J. Convergence acceleration operator for multiobjective optimization [J]. IEEE Transactions on Evolutionary Computation, 2009, 13(4): 825-847.
[200]. Avigad G, Moshaiov A. Interactive evolutionary multiobjective search and optimization of set-based concepts [J]. IEEE Trans. Syst., Man, Cybern. B, 2009, 39(4): 1013–1027.
[201]. Lara A, Sanchez G, Coello C A C, Schütze O. HCS: A new local search strategy for memetic multiobjective evolutionary algorithms [J]. IEEE Transactions on Evolutionary Computation, 2010, 14(1): 112-132.
[202]. Song Z, Kusiak A. Multiobjective optimization of temporal processes [J]. IEEE Trans. Syst., Man, Cybern. B, 2010, 40(3): 845–856.
[203]. Zhang Q, Liu W, Tsang E, Virginas B. Expensive multiobjective optimization by MOEA/D with Gaussian process model [J]. IEEE Transactions on Evolutionary Computation, 2010, 14(3): 456-474.
[204]. Liu D S, Tan K C, Goh C K, Ho W K. A multiobjective memetic algorithm based on particle swarm optimization [J]. IEEE Trans. Syst., Man, Cybern. B, 2007, 37(1): 42–50.
[205]. Li B B, Wang L. A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling [J]. IEEE Trans. Syst., Man, Cybern. B, 2007, 37(3): 576-591.
[206]. Kukkonen S, Lampinen J. Performance assessment of generalized differential evolution 3 with a given set of constrained multi-objective test problems [C]. //in Proc. IEEE Congr. Evol. Comput., 2009: 1943-1950.
[207]. Li H, Zhang Q. Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II [J]. IEEE Transactions on Evolutionary Computation, 2009, 13(2): 284-302.
[208]. Zhang Q, Zhou A, Jin Y. RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm [J]. IEEE Transactions on Evolutionary Computation, 2008, 12(1): 41–63.
[209]. Knowles J D, Corne D W. Properties of an adaptive archiving algorithm for storing nondominated vectors [J]. IEEE Transactions on Evolutionary Computation, 2003, 7(2): 100-116.
[210]. Zitzler E, Deb K, Thiele L. Comparison of multiobjective evolutionary algorithms: Empirical results [J]. Evol. Comput., 2000, 8(2): 173-195.
[211]. Deb K, Thiele L, Laumanns M, Zitzler E. Scalable multi-objective optimization test problems [C]. //in Proc. IEEE Congr. Evol. Comput., 2002: 825-830.
[212]. Huband S, Barone L, While L, Hingston P. A scalable multiobjective test problem toolkit [C]. //in Lecture Notes in Computer Science. 2005: 280–295.
[213]. Zhang Q, Zhou A, Zhao S Z, Suganthan P N, Liu W, Tiwari S. Multiobjective optimization test instances for the CEC 2009 special session and competition [C]. //in Proc. IEEE Congr. Evol. Comput., 2009: 1-30.
[214]. Huang V L, Suganthan P N, Liang J J. Comprehensive learning particle swarm optimizer for solving multiobjective optimization problems [J]. International Journal of Intelligent Systems, 2006, 21: 209–226.
[215]. Sierra M R, Coello C A C. Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance [C]. //in Lecture Notes in Computer Science, 2005: 505-519.
[216]. Parsopoulos K E, Tasoulis D K, Vrahatis M N. Multiobjective optimization using parallel vector evaluated particle swarm optimization [C]. //In Proc. Int. Conf. on Artificial Intelligence and Applications, 2004: 823–828.
[217]. Durillo J J, García-Nieto J, Nebro A J, Coello C A C, Luna F, Alba E. Multi-objective particle swarm optimizers: An experimental comparison [C]. //in Proc. 5th Int. Conf. Evolutionary Multi-Criterion Optimization, 2009: 495-509.
[218]. Bose B K. Energy, environment, and advances in power electronics [J]. IEEE Trans. Power Electron., 2000, 15(4): 688-701.
[219]. Sanders S R, Noworolski J M, Liu X Z, Verghese G C. Generalized averaging method for power conversion circuits [J]. IEEE Trans. Power Electron., 1991, 6: 251–259.
[220]. Emadi A. Modeling of power electronic loads in AC distribution systems using the generalized state-space averaging method [J]. IEEE Trans. Ind. Electron., 2004, 51(5): 992-1000.
[221]. Chetty P R K. Current injected equivalent circuit approach to modeling switching dc-dc converters [J]. IEEE Trans. Aerosp. Electron. Syst., 1981, (6): 802-808.
[222]. Lee J Y, Kim J S, Jung N S, Cho B H. The current injection method for AC plasma display panel (PDP) sustainer [J]. IEEE Trans. Ind. Electron., 2004, 51(3): 615-624.
[223]. Verghese G C, Elbuluk M E, Kassakian J G. A general approach to sampled-data modeling for power electronic circuits [J]. IEEE Trans. Power Electronics, 1986: 76-89.
[224]. Oruganti R, Lee F C. State-plane analysis of parallel resonant converter [J]. IEEE PESC Record, 1985: 56-73.
[225]. Sussman G J, Stallman R M. Heuristic techniques in computer aided circuit analysis [J]. IEEE Trans. Circuits Syst., 1975, (11): 857-865.
[226]. Harjani R, Rutenbar R A, Carley L R. OASYS: A framework for analog circuit synthesis [J]. IEEE Trans. Comput.-Aided Design, 1989, 8(6): 1247–1266.
[227]. Huelsman L P. Optimization—A powerful tool for analysis and design [J]. IEEE Trans. Circuits Syst. I, Reg. Papers, 1993, 40(7): 431-439.
[228]. Massara R E. Optimization Methods in Electronic Circuit Design [M]. New York: Longman, 2000.
[229]. Zhang J, Chung H, Lo W L, Hui S Y R, Wu A. Implementation of a decoupled optimization technique for design of switching regulators using genetic algorithm [J]. IEEE Trans. Power Electron., 2001, 16(6): 752–763.
[230]. Zhang J, Chung H, Lo W L, Huang T. Extended ant colony optimization algorithm for power electronic circuit design [J]. IEEE Trans. Power Electron., 2009, 24(1): 147-162.
[231]. Zhang J, Shi Y, Zhan Z H. Power electronic circuits design: A particle swarm optimization approach [C]. //In Proc. The 7th International Conference on Simulated Evolution And Learning, 2008: 605–614.
[232]. Electronics 2000, http://www.electronics2000.co.uk/data/itemsmr/res_val.php [OL], (http://www.electronics2000.co.uk/) accessed March 2013.
[233]. Zhan Z H, Zhang J. Self-adaptive differential evolution based on PSO learning strategy [C]. //Proc. Genetic Evol. Comput. Conf. 2010 (GECCO 2010), 2010: 39-46.
[234]. Zhan Z H, Zhang J. Enhance differential evolution with random walk [C]. //Proc Genetic and Evolutionary Computation Conference (GECCO 2012), New York: ACM Press, 2012: 1513–1514.
[235]. Zhan Z H, Zhang J, Shi Y H, Liu H L. A modified brain storm optimization [C]. //Proc IEEE World Congress on Computational Intelligence (WCCI 2012), Piscataway: IEEE Press, 2012: 1-8.