
Applied Mathematics and Computation 181 (2006) 908–919

www.elsevier.com/locate/amc

Particle swarm optimization for function optimization in noisy environment

Hui Pan a, Ling Wang a,b,*, Bo Liu a

a Department of Automation, Tsinghua University, Beijing 100084, China
b School of Information Engineering, Shandong University at Weihai, Weihai 264209, China
* Corresponding author. E-mail address: [email protected] (L. Wang).

Abstract

As a novel evolutionary searching technique, particle swarm optimization (PSO) has gained wide research interest and effective applications in the field of function optimization. However, to the best of our knowledge, most studies based on PSO are aimed at deterministic optimization problems. In this paper, the performance of PSO for function optimization in noisy environment is investigated, and an effective hybrid PSO approach named PSOOHT is proposed. In the PSOOHT, the population-based search mechanism of PSO is applied for effective exploration and exploitation, and the optimal computing budget allocation (OCBA) technique is used to allocate limited sampling budgets to provide reliable evaluation and identification of good particles. Meanwhile, hypothesis test (HT) is also applied in the hybrid approach to reserve good particles and to maintain the diversity of the swarm. Numerical simulations based on several well-known function benchmarks with noise are carried out, and the effect of noise magnitude is investigated as well. The results and comparisons demonstrate the superiority of PSOOHT in terms of searching quality and robustness.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Particle swarm optimization; Hypothesis test; Optimal computing budget allocation; Function optimization; Noisy environment

1. Introduction

Particle swarm optimization (PSO) [1–3], as an alternative to the genetic algorithm (GA) [4], was originally proposed for unconstrained continuous optimization problems. Compared with GA, PSO has some attractive characteristics. It has memory, so knowledge of good solutions is retained by all particles, whereas in GA previous knowledge of the problem is destroyed once the population changes. It also features constructive cooperation: particles in the swarm share information with each other. Due to its simple concept, easy implementation and quick convergence, PSO has nowadays gained much attention and wide application in different fields [1–3]. However, to the best of our knowledge, research on PSO has mainly addressed deterministic optimization problems, while there is almost no research on PSO for optimization problems in noisy environment.


As we know, many real-world optimization problems include uncertainty that has to be taken into account. Since the uncertainty is often structureless, the performance can often only be estimated by simulation with multiple sampling. Meanwhile, due to the huge search space and the existence of many local optima, it is very hard to obtain the optimum in a global sense. Currently, the study of evolutionary algorithms in uncertain environments has become a hot topic in the international academic community, especially the research on designing effective and robust algorithms [5].

In this paper, we study the performance of PSO for function optimization in noisy environment and propose an effective hybrid PSO approach named PSOOHT. The features of the proposed hybrid PSO can be summarized as follows. Firstly, the population-based search mechanism of PSO is applied for effective exploration and exploitation of the solution space. Secondly, the optimal computing budget allocation (OCBA) technique is used to allocate limited sampling budgets to provide reliable evaluation and identification of good particles. Thirdly, hypothesis test (HT) is applied to reserve good particles for the new swarm and to maintain the diversity of the swarm. Numerical simulation results and comparisons based on several well-known function benchmarks with noise demonstrate the superiority of PSOOHT in terms of searching quality and robustness. Moreover, the effect of noise magnitude is investigated as well.

The remainder of this paper is organized as follows. In Section 2, we provide a brief review of PSO. In Section 3 we discuss a simple implementation of PSO for optimization in noisy environment. Then, in Section 4 we present the hybrid PSO approach after briefly introducing the hypothesis test and the optimal computing budget allocation technique. Simulation results and comparisons based on several noisy functions, together with an investigation of the effect of noise magnitude, are provided in Section 5. Finally, we end with some conclusions in Section 6.

2. Particle swarm optimization

PSO is an evolutionary computation technique based on individual improvement plus population cooperation and competition, inspired by simplified social models such as bird flocking, fish schooling, and swarming theory [1]. The theoretical framework of PSO is very simple, and PSO is easy to code and implement on a computer [1]. Besides, it is computationally inexpensive in terms of memory requirements and CPU time. Thus, PSO has nowadays gained much attention and wide application in various fields [2,3].

PSO starts with the random initialization of a swarm of particles in the search space and works on the social behavior of the particles in the swarm. It searches for the global best solution by adjusting the trajectory of each individual towards its own best location and towards the best particle of the swarm at each time step. The trajectory of each individual in the search space is adjusted by dynamically altering the velocity of each particle, according to its own flying experience and the flying experience of the other particles in the search space.

The position and velocity of the $i$th particle in the $d$-dimensional search space can be represented as $X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,d}]$ and $V_i = [v_{i,1}, v_{i,2}, \ldots, v_{i,d}]$, respectively. Each particle has its own best position (pbest) $p_i = (p_{i,1}, p_{i,2}, \ldots, p_{i,d})$, corresponding to the best personal objective value obtained so far at time $t$. The global best particle (gbest) is denoted by $p_g = (p_{g,1}, p_{g,2}, \ldots, p_{g,d})$, which represents the best particle found so far at time $t$ in the entire swarm. The new velocity of each particle is calculated as follows:

$$v_{i,j}(t+1) = w \, v_{i,j}(t) + c_1 r_1 \left( p_{i,j} - x_{i,j}(t) \right) + c_2 r_2 \left( p_{g,j} - x_{i,j}(t) \right), \quad j = 1, 2, \ldots, d, \qquad (1)$$

where $c_1$ and $c_2$ are constants called acceleration coefficients, $w$ is called the inertia factor, and $r_1$ and $r_2$ are two independent random numbers uniformly distributed in the range [0, 1].

Thus, the position of each particle is updated in each generation according to the following equation:

$$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1), \quad j = 1, 2, \ldots, d. \qquad (2)$$

In Eq. (1), the inertia weight factor $w$ provides the necessary diversity to the swarm by changing the momentum of particles and hence avoids the stagnation of particles at local optima. Usually, a maximum velocity is defined for each component of the velocity vector, often set as the upper limit of the corresponding component of the position vector. This helps to control unnecessary excessive roaming of particles outside the predefined search space.


In [6], it was pointed out that use of a constriction factor may be necessary to ensure convergence of PSO, in which case the maximum velocity is not needed. In particular, the $j$th velocity component of the $i$th particle is calculated according to Eq. (3):

$$v_{i,j}(k+1) = \chi \left[ v_{i,j}(k) + c_1 r_1 \left( p_{i,j}(k) - x_{i,j}(k) \right) + c_2 r_2 \left( p_{g,j}(k) - x_{i,j}(k) \right) \right], \qquad (3)$$

where $\chi = \dfrac{2}{\left| 2 - \varphi - \sqrt{\varphi^2 - 4\varphi} \right|}$, $\varphi = c_1 + c_2$, $\varphi > 4$. Typically, $c_1$ and $c_2$ are both set to 2.05, so that $\varphi = 4.1$ and $\chi \approx 0.729$.

The procedure of the standard PSO can be summarized as follows.

Step 1: Initialize a population of particles with random positions and velocities, where each particle contains d variables.
Step 2: Evaluate the objective values of all particles; let the pbest of each particle and its objective value equal its current position and objective value, and let gbest and its objective value equal the position and objective value of the best initial particle.
Step 3: Update the velocity and position of each particle according to Eqs. (3) and (2).
Step 4: Evaluate the objective values of all particles.
Step 5: For each particle, compare its current objective value with the objective value of its pbest. If the current value is better, update pbest and its objective value with the current position and objective value. Furthermore, determine the best particle of the current swarm; if its objective value is better than the objective value of gbest, update gbest and its objective value with the position and objective value of the current best particle.
Step 6: If a predefined stopping criterion is met, output gbest and its objective value; otherwise go back to Step 3.
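To make the procedure concrete, the following Python sketch (our own illustration, not the authors' code; the function name `pso`, the box clipping and all default values are assumptions) implements the constriction-factor variant of Eqs. (2) and (3):

```python
import random

def pso(objective, dim, bounds, swarm_size=30, max_gen=100,
        c1=2.05, c2=2.05, chi=0.729):
    """Minimal constriction-factor PSO (Eqs. (2) and (3)); a sketch only.
    `bounds` is a list of (low, high) pairs, one per dimension."""
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    # Step 1: random positions and zero initial velocities
    x = [[random.uniform(lo[j], hi[j]) for j in range(dim)] for _ in range(swarm_size)]
    v = [[0.0] * dim for _ in range(swarm_size)]
    # Step 2: initial evaluation, pbest and gbest
    fx = [objective(p) for p in x]
    pbest, fpbest = [p[:] for p in x], fx[:]
    g = min(range(swarm_size), key=lambda i: fpbest[i])
    gbest, fgbest = pbest[g][:], fpbest[g]
    for _ in range(max_gen):
        for i in range(swarm_size):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (3): constriction-factor velocity update
                v[i][j] = chi * (v[i][j]
                                 + c1 * r1 * (pbest[i][j] - x[i][j])
                                 + c2 * r2 * (gbest[j] - x[i][j]))
                # Eq. (2): position update, clipped to the search box
                x[i][j] = min(max(x[i][j] + v[i][j], lo[j]), hi[j])
            # Step 5: update pbest and gbest
            f = objective(x[i])
            if f < fpbest[i]:
                pbest[i], fpbest[i] = x[i][:], f
                if f < fgbest:
                    gbest, fgbest = x[i][:], f
    return gbest, fgbest
```

For example, `pso(lambda p: sum(q * q for q in p), dim=2, bounds=[(-2, 2)] * 2)` minimizes a simple quadratic over the same box used later by the GP benchmark.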

3. Simple implementation of PSO for noisy optimization

Generally, uncertain function optimization problems can be described as follows:

$$\min\; J(X) = E[L(X, \xi)], \quad X = [x_1, \ldots, x_d], \qquad \text{s.t. } x_i \in [a_i, b_i],\; i = 1, 2, \ldots, d, \qquad (4)$$

where $X$ is the decision vector consisting of $d$ variables, $\xi$ denotes the noise, and $L(X, \xi)$ and $J(X)$ are the sample performance and the expected performance, respectively.

Ideally, the optimization algorithm is expected to work on the expected value $J(X)$ without being misled by the noise. Since the expected value $J(X)$ cannot be estimated precisely with limited evaluations, in practice multiple samplings are used, and the following mean of a number of random samples $L(X, \xi)$ is taken as a replacement for $J(X)$:

$$\bar{L}(X) = \frac{1}{n} \sum_{i=1}^{n} L(X, \xi_i), \qquad (5)$$

where $n$ is the sample size (evaluation number).

By using $\bar{L}(X)$ as the objective function, PSO can be applied to the uncertain optimization problem mentioned above. Obviously, it is worth studying the optimization performance and robustness of PSO when uncertainty of different magnitudes is present and only a limited sampling number is allowed. In the following sections, these issues will be investigated and a hybrid PSO-based approach will be proposed to improve the optimization performance.
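As an illustration of Eq. (5), a minimal sketch (ours; `noisy_L` stands for any hypothetical single-sample evaluator of $L(X, \xi)$) of the sample-mean surrogate is:

```python
def estimate_objective(noisy_L, X, n):
    """Sample-mean surrogate of Eq. (5): average n independent noisy
    evaluations L(X, xi_i) to approximate J(X) = E[L(X, xi)]."""
    return sum(noisy_L(X) for _ in range(n)) / n
```

The simple implementation of PSO discussed in this section then simply optimizes `lambda X: estimate_objective(noisy_L, X, n)` with the same fixed n for every particle.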

4. Hybrid PSO for function optimization in noisy environment

In this section, we propose a hybrid PSO approach named PSOOHT by combining PSO with the hypothesis test (HT) and the optimal computing budget allocation (OCBA) technique for function optimization in noisy environments. We first provide brief introductions to HT and OCBA.


4.1. Hypothesis test

The hypothesis test (HT) is an important statistical technique used to test a predefined hypothesis using experimental data [7,8]. To perform HT on two different decision solutions when solving uncertain optimization problems, multiple independent evaluations are usually needed to provide suitable performance estimations. If $n_i$ independent simulations are carried out for solution $X_i$, then its unbiased estimated mean value $\bar{J}_i$ and variance $s_i^2$ can be calculated as follows:

$$\bar{J}_i = \bar{L}(X_i) = \frac{1}{n_i} \sum_{j=1}^{n_i} L(X_i, \xi_j), \qquad (6)$$

$$s_i^2 = \frac{1}{n_i - 1} \sum_{j=1}^{n_i} \left[ L(X_i, \xi_j) - \bar{J}_i \right]^2. \qquad (7)$$

Consider two different solutions $X_1$ and $X_2$, whose estimated performances $\hat{J}(X_1)$ and $\hat{J}(X_2)$ are two independent random variables. According to the law of large numbers and the central limit theorem, the estimation $\hat{J}(X_i)$ approximately follows $N(\bar{J}_i, s_i^2 / n_i)$ as $n_i$ approaches infinity. Suppose $\hat{J}(X_1) \sim N(\mu_1, \sigma_1^2)$ and $\hat{J}(X_2) \sim N(\mu_2, \sigma_2^2)$, where the unbiased estimates of $\mu_1$, $\mu_2$ and $s_1^2$, $s_2^2$ are given by Eqs. (6) and (7). Let the null hypothesis $H_0$ be "$\mu_1 = \mu_2$" and the alternative hypothesis $H_1$ be "$\mu_1 \neq \mu_2$". If $\sigma_1^2 = \sigma_2^2 = \sigma^2$ and $\sigma^2$ is unknown, then the critical region of $H_0$ is described as follows:

$$\left| \bar{J}_1 - \bar{J}_2 \right| \geq t_{\alpha/2}(n_1 + n_2 - 2) \cdot \sqrt{\frac{n_1 + n_2}{n_1 n_2}} \cdot \sqrt{\frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}} = s. \qquad (8)$$

Thus, if $|\bar{J}_1 - \bar{J}_2| < s$, i.e., the null hypothesis holds, then the performances of the two solutions are regarded as having no significant difference in the statistical sense; otherwise they are significantly different. Furthermore, for an uncertain minimization problem, $X_2$ is regarded as better than $X_1$ if $\bar{J}_1 - \bar{J}_2 \geq s$, while $X_1$ is better than $X_2$ if $\bar{J}_1 - \bar{J}_2 \leq -s$. In addition, for a specific problem it is often supposed that the theoretical performance variances of all solutions are the same [7,8], so the hypothesis test can be made according to Eq. (8). For a multi-modal stochastic optimization problem, repeated search can be avoided to some extent by applying HT to the search process, but pure blind search with comparison under the hypothesis test can often be trapped in local optima. Since PSO performs well on continuous deterministic optimization problems, this motivates us to investigate the performance of PSO combined with HT for uncertain optimization.
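The following sketch (our own; it assumes the pooled-variance t-test of Eq. (8) with a user-chosen significance level `alpha`, and uses SciPy only for the t quantile) turns Eq. (8) into a three-way comparison for minimization:

```python
import math
from scipy import stats

def compare(mean1, var1, n1, mean2, var2, n2, alpha=0.05):
    """Eq. (8) comparison of two solutions (minimization).
    mean_i, var_i are the estimates of Eqs. (6)-(7) from n_i samples.
    Returns -1 if solution 1 is significantly better, +1 if solution 2 is
    significantly better, 0 if the difference is not significant."""
    pooled = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
    s = (stats.t.ppf(1 - alpha / 2, n1 + n2 - 2)
         * math.sqrt((n1 + n2) / (n1 * n2)) * math.sqrt(pooled))  # threshold of Eq. (8)
    d = mean1 - mean2
    if d >= s:
        return 1    # X2 has a significantly smaller mean
    if d <= -s:
        return -1   # X1 has a significantly smaller mean
    return 0        # statistically indistinguishable
```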

4.2. Ordinal optimization and OCBA

Due to the stochastic nature of the uncertain problem, multiple independent simulations (i.e. Monte Carlo experiments) are usually required to estimate the expected objective value $J_i$ of $X_i$ with $\bar{J}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} L(X_i, \xi_j)$, where $n_i$ is the number of independent simulations. Its variance is $\sigma_i^2 = \mathrm{Var}[L(X_i, \xi)]$, which can be approximated by the sample variance when $\sigma_i^2$ is unknown beforehand. As $n_i$ increases, $\bar{J}_i$ becomes a better approximation to $J_i$ in the sense that its corresponding confidence interval becomes narrower. But the ultimate accuracy of this estimation cannot improve faster than $1/\sqrt{n_i}$, so a large number of simulations for all of the solutions is required to achieve precise estimations, which may be very time-consuming.

Based on the ideas of order comparison and goal softening, ordinal optimization (OO) was proposed to deal with the optimization of discrete event dynamic systems, which are always difficult to handle because of the lack of structure, the large search space, the stochastic nature of such systems and the time-consuming simulation evaluation [9]. In OO, the qualities of potential solutions are determined by their relative order rather than by the exact difference of their performance values, i.e. order comparison. Meanwhile, as a departure from the traditional optimization philosophy, OO aims to obtain good-enough solutions with high probability instead of seeking the exactly optimal solution, i.e. goal softening. It has been shown that the alignment probability of ordinal comparison converges to 1.0 exponentially in most cases [10], and computing effort can be significantly reduced by aiming for good-enough solutions instead of the best one [11].


The alignment probability, also called the probability of correct selection or P(CS), is defined as the probability that the observed good-enough solutions selected via order comparison are indeed the true good-enough solutions [10]. For a particular problem, estimating P(CS) by Monte Carlo simulation is very time-consuming [12]. Chen et al. [12] provided an approximate probability of correct selection, APCS, which is easy to compute as a lower bound of P(CS), as follows:

$$\begin{aligned}
P(\mathrm{CS}) &= P\{\text{Solution } b \text{ is actually the best one}\} \\
&= P\{J_b < J_i,\; i \neq b \mid L(X_i, \xi_{ij}),\; j = 1, 2, \ldots, n_i,\; i = 1, 2, \ldots, S\} = P\left\{ \bigcap_{i=1, i \neq b}^{S} \left( \tilde{J}_b < \tilde{J}_i \right) \right\} \\
&\geq 1 - \sum_{i=1, i \neq b}^{S} P\{\tilde{J}_b > \tilde{J}_i\} = 1 - \sum_{i=1, i \neq b}^{S} \int_{-\delta_{b,i}/\sigma_{b,i}}^{\infty} \frac{e^{-t^2/2}}{\sqrt{2\pi}}\, \mathrm{d}t = \mathrm{APCS}, \qquad (9)
\end{aligned}$$

where $\tilde{J}_i \sim N(\bar{J}_i, \sigma_i^2 / n_i)$, $\bar{J}_b \leq \min_i \bar{J}_i$, $\delta_{b,i} = \bar{J}_b - \bar{J}_i$, $\sigma_{b,i}^2 = \sigma_b^2 / n_b + \sigma_i^2 / n_i$, and $S$ is the total number of potential solutions.
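A small sketch of the APCS bound in Eq. (9) (our own illustration; it identifies the observed best design b internally and assumes a minimization problem) is:

```python
import math

def apcs(means, variances, counts):
    """APCS of Eq. (9) for S designs; means, variances, counts hold the
    sample mean, sample variance and replication count of each design."""
    b = min(range(len(means)), key=lambda i: means[i])   # observed best design
    total = 0.0
    for i in range(len(means)):
        if i == b:
            continue
        delta = means[b] - means[i]                       # delta_{b,i} <= 0
        sigma = math.sqrt(variances[b] / counts[b] + variances[i] / counts[i])
        # P{J~_b > J~_i} = integral from -delta/sigma to infinity of the N(0,1) pdf
        total += 0.5 * math.erfc((-delta / sigma) / math.sqrt(2.0))
    return 1.0 - total
```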

Although OO can significantly reduce the computational cost of identifying good-enough solutions, there is still room for further improvement by intelligently distributing the simulation replications among different solutions instead of simulating all solutions equally. Intuitively, to ensure a high P(CS) or APCS, a large portion of the computing budget should be allocated to those solutions that are potentially good ones. In other words, a large number of simulations must be conducted for those critical solutions in order to reduce the estimator variance. On the other hand, limited computational effort should be expended on non-critical solutions that have little effect on identifying the good ones, even if they have large variances. Hence, the overall simulation efficiency can be improved. Based on this motivation, Chen et al. [12] proposed the OCBA technique to optimally choose the number of simulations for all of the solutions in order to maximize simulation efficiency under a given computing budget.

In particular, OCBA is used to solve the problem described as follows:

$$\text{Maximize } P(\mathrm{CS}) \text{ or APCS} \quad \text{such that} \quad n_1 + n_2 + \cdots + n_S = T, \qquad (10)$$

where T is the total computing budget constraint. Moreover, Chen et al. [12] offered an asymptotic solution as follows:

Theorem 1. Given a total number of simulations $T$ to be allocated to $S$ competing solutions whose performances are depicted by random variables with means $J(X_1), J(X_2), \ldots, J(X_S)$ and finite variances $\sigma_1^2, \sigma_2^2, \ldots, \sigma_S^2$, respectively, as $T \to \infty$, if

$$n_b = \sigma_b \left( \sum_{i=1, i \neq b}^{S} \frac{n_i^2}{\sigma_i^2} \right)^{1/2} \quad \text{and} \quad \frac{n_i}{n_j} = \left( \frac{\sigma_i / \delta_{b,i}}{\sigma_j / \delta_{b,j}} \right)^2, \quad i, j \in \{1, 2, \ldots, S\},\; i \neq j \neq b,$$

where $\delta_{b,i} = \bar{J}_b - \bar{J}_i$ and $\bar{J}_b \leq \min_i \bar{J}_i$, then APCS is asymptotically maximized.

According to the theorem, a sequential algorithm for OCBA [12] can be designed as follows:

Step 0: Let $k = 0$, and perform $n_0$ simulation replications for each design, i.e. $n_i^k = n_0$, $i = 1, 2, \ldots, S$.
Step 1: If $\sum_{i=1}^{S} n_i^k \geq T$, stop the algorithm.
Step 2: Increase the number of simulation replications by $\Delta$, and compute the new budget allocation $n_i^{k+1}$, $i = 1, 2, \ldots, S$, according to Theorem 1.
Step 3: Perform additional $\max\{0, n_i^{k+1} - n_i^k\}$ simulation replications for design $X_i$, $i = 1, 2, \ldots, S$. Let $k = k + 1$ and go back to Step 1.
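The following Python sketch (our own illustration; `simulate(i)` is a hypothetical function returning one noisy performance sample of design i, and the small-value guards and stall guard are ours) implements the allocation of Theorem 1 and the sequential Steps 0–3 above:

```python
import math
import statistics

def ocba_allocate(means, stds, total_budget):
    """Target replication counts of Theorem 1 for S designs (minimization)."""
    S = len(means)
    b = min(range(S), key=lambda i: means[i])        # observed best design
    ref = 0 if b != 0 else 1                         # any non-best reference design
    ratio = [0.0] * S
    ratio[ref] = 1.0
    for i in range(S):
        if i in (b, ref):
            continue
        # n_i / n_ref = (sigma_i / delta_{b,i})^2 / (sigma_ref / delta_{b,ref})^2
        d_i = (means[b] - means[i]) or 1e-12
        d_ref = (means[b] - means[ref]) or 1e-12
        ratio[i] = ((stds[i] / d_i) / (stds[ref] / d_ref)) ** 2
    # n_b = sigma_b * sqrt( sum_{i != b} n_i^2 / sigma_i^2 )
    ratio[b] = stds[b] * math.sqrt(sum((ratio[i] / max(stds[i], 1e-12)) ** 2
                                       for i in range(S) if i != b))
    scale = total_budget / sum(ratio)
    return [max(1, round(r * scale)) for r in ratio]

def ocba_run(simulate, S, n0=10, delta=10, T=2000):
    """Sequential OCBA (Steps 0-3): spend about T samples over S designs."""
    samples = [[simulate(i) for _ in range(n0)] for i in range(S)]      # Step 0
    while sum(len(s) for s in samples) < T:                             # Step 1
        means = [statistics.mean(s) for s in samples]
        stds = [max(statistics.stdev(s), 1e-12) for s in samples]
        spent = sum(len(s) for s in samples)
        target = ocba_allocate(means, stds, spent + delta)              # Step 2
        added = 0
        for i in range(S):                                              # Step 3
            for _ in range(max(0, target[i] - len(samples[i]))):
                samples[i].append(simulate(i))
                added += 1
        if added == 0:   # rounding can stall the allocation; stop defensively
            break
    return samples
```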

Obviously, $n_0$ cannot be too small, since the estimates of the mean and the variance may otherwise be very poor, resulting in premature termination of the comparison. Also, a large $\Delta$ can waste computational time by reaching an unnecessarily high confidence level, whereas a small $\Delta$ requires Step 2 to be repeated many times. According to [12], the suggested value of $n_0$ is between 5 and 20, and the suggested value of $\Delta$ is between $S/10$ and $S/5$. In [13], OCBA was reasonably combined with GA to effectively solve stochastic flow shop scheduling, which motivates us to study the combination of OCBA with PSO for uncertain function optimization.


4.3. The framework of hybrid PSO in noisy environment

Based on the above description, the population-based search mechanism of PSO can be applied for effective exploration and exploitation of the solution space, and the OCBA technique can be used to allocate limited sampling budgets to provide reliable evaluation and identification of good particles. Meanwhile, HT can be applied to reserve good particles for the new swarm and to reduce repeated search, so as to maintain the quality and diversity of the new swarm. Thus, we propose a hybrid PSO approach, namely PSOOHT, for function optimization in noisy environment, whose optimization procedure is described as follows.

Step 1: Initialize a population of N particles with random positions and velocities.
Step 2: Use OCBA to allocate T sampling budgets among all particles of the current swarm while estimating the objective values of all particles. Then, determine the pbest of each particle and the gbest of the swarm.
Step 3: Update the velocity and position of each particle according to Eqs. (3) and (2).
Step 4: Use OCBA to allocate T sampling budgets among all particles with new positions while estimating their objective values. Then, update the pbest of each particle.
Step 5: Use HT to form the new swarm:
Step 5.1: Order all particles, both with the old positions and with the new positions, from the best to the worst, and denote them sequentially by $h_1, h_2, \ldots, h_{2N}$. Let m = 1, j = 2, and put $h_1$ into the next swarm, denoting it $h_m$.
Step 5.2: Perform the hypothesis test between $h_j$ and $h_m$, which is already in the next swarm. If the null hypothesis holds, i.e. Eq. (8) does not hold, then $h_j$ is discarded; otherwise, $h_j$ is put into the next swarm, denoted $h_{m+1}$, and let m = m + 1.
Step 5.3: If m < N and j < 2N, then let j = j + 1 and go to Step 5.2; otherwise go to Step 5.4.
Step 5.4: If m = N, the new swarm has been formed; otherwise, generate N − m new particles randomly and evaluate them (each with T/N samples) to complete the new swarm.
Step 5.5: Update the gbest of the swarm if necessary.
Step 6: If a predefined stopping criterion is met, then output the gbest together with its estimated objective value and its expected objective value obtained by a large number of simulations; otherwise go back to Step 3 to perform the PSO-based search.
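A sketch of the HT-based selection in Step 5 (our own illustration, not the authors' code; each candidate is assumed to carry its OCBA-based statistics, and `compare` can be the Eq. (8) test sketched in Section 4.1) might look as follows:

```python
def form_new_swarm(candidates, N, compare):
    """Step 5: candidates are the 2N particles (old and new positions), each a
    tuple (position, mean, variance, n_samples) from the OCBA-based evaluation.
    A candidate statistically indistinguishable from the last accepted particle
    is discarded, which preserves diversity and limits repeated search."""
    ordered = sorted(candidates, key=lambda c: c[1])     # Step 5.1: best to worst
    swarm = [ordered[0]]
    for cand in ordered[1:]:                             # Steps 5.2-5.3
        if len(swarm) >= N:
            break
        last = swarm[-1]
        if compare(last[1], last[2], last[3], cand[1], cand[2], cand[3]) != 0:
            swarm.append(cand)
    # Step 5.4: the caller tops up with N - len(swarm) random particles if needed
    return swarm
```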

It can be seen from the above procedure that PSOOHT inherits the fundamental population-based search framework of PSO. Secondly, when evaluating the particles with limited simulation budgets, OCBA is applied to identify the good-enough solutions by intelligently determining the number of simulation replications for different solutions instead of simulating all solutions equally, as the simple implementation of PSO in Section 3 does. Thirdly, after the OCBA-based evaluation of particles, HT reserves the best solutions and maintains the diversity of the next swarm by deleting particles with similar performances, which also helps reduce repeated search to some extent. In a word, both the search element and the evaluation element are simultaneously considered in PSOOHT for optimization problems in noisy environments.

In addition, the above PSOOHT can be reduced to some simpler approaches. For example, if HT is not used in Step 5 and instead the best N particles are simply selected from all the particles with both the old and the new positions, we denote such a simplified approach as PSOO, that is, PSO with OCBA. On the other hand, if OCBA is not used for evaluation in Steps 2 and 4 but T/N samples are allocated equally to each particle while HT is still applied, we denote such a simplified approach as PSOHT. Obviously, if both HT and OCBA are omitted, PSOOHT degenerates to the simple implementation of PSO described in Section 3. In the next section, we will investigate the performance of these four approaches for function optimization in noisy environment.

5. Numerical simulation and comparisons

In this paper, function optimization problems in noisy environments are constructed based on six well-known deterministic problems described in Appendix A [14]. In particular, let f(X) be the deterministic function; the uncertain problem considered here is formulated as in Eq. (11) below.


Table 1
Simulation results of different methods on different noisy problems when g = 0.5

Method   Index     GP       BR       HN3       HN6       RA        SH
PSOOHT   J_a(x*)   3.2168   0.4008   -3.4789   -2.9203   -1.9833   -186.5962
         L_a(x*)   3.1708   0.3997   -3.5116   -3.0135   -1.9992   -193.5933
PSOHT    J_a(x*)   3.5088   0.4013   -3.1904   -2.7352   -1.9760   -184.6536
         L_a(x*)   3.3834   0.3999   -3.3472   -2.8903   -1.9993   -197.6524
PSOO     J_a(x*)   3.4284   0.4017   -3.2156   -2.5333   -1.9762   -183.3029
         L_a(x*)   3.3526   0.4007   -3.3312   -2.6281   -1.9885   -191.5923
PSO      J_a(x*)   4.4995   0.4025   -3.1836   -2.4191   -1.9688   -182.0559
         L_a(x*)   4.2919   0.4008   -3.3272   -2.6295   -1.9914   -196.1946

Table 2
Effect of g on different methods (GP problem)

Method   Index     g = 0.01   g = 0.1   g = 0.2   g = 0.5   g = 1.0   g = 2.0
PSOOHT   J_a(x*)   3.0013     3.0693    3.1470    3.2168    3.5623    3.6213
         L_a(x*)   3.0013     3.0610    3.1383    3.1708    3.3807    3.3114
         Ns        50         50        50        50        50        47
PSOHT    J_a(x*)   3.0020     3.1053    3.2880    3.5088    3.8631    4.4034
         L_a(x*)   3.0017     3.0857    3.1365    3.3834    3.6210    4.1843
         Ns        50         50        50        48        46        41
PSOO     J_a(x*)   3.0033     3.1033    3.5241    3.4284    4.0743    5.2913
         L_a(x*)   3.0032     3.0968    3.4810    3.3526    3.9254    4.9532
         Ns        50         50        47        45        41        38
PSO      J_a(x*)   3.0015     3.1083    4.0427    4.4995    5.8513    6.9257
         L_a(x*)   3.0010     3.0757    3.9333    4.2919    5.4430    6.3431
         Ns        50         50        45        42        34        31

Fig. 1. Searching results of 50 random runs by PSOOHT when g = 0.5 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).


$$\min\; J = E[L(X, \xi)] = E[f(X) + g \cdot F \cdot \xi], \qquad (11)$$

where $\xi$ is noise subject to the Gaussian distribution $N(0, 1)$, $g$ denotes the noise magnitude, and $F$ is a scale factor related to the function $f(X)$. For the six problems, the values of $F$ are set to 3, 0.398, 3.86, 3.32, 2 and 186, respectively, in this paper.
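For example, the noisy GP instance used below can be sketched as follows (our own code, not from the paper; `random.gauss` supplies the N(0, 1) noise, and g = 0.5, F = 3 follow the settings of this section):

```python
import random

def goldstein_price(x):
    """Deterministic GP function of Appendix A (global minimum 3 at (0, -1))."""
    x1, x2 = x
    a = 1 + (x1 + x2 + 1) ** 2 * (19 - 14 * x1 + 3 * x1 ** 2
                                  - 14 * x2 + 6 * x1 * x2 + 3 * x2 ** 2)
    b = 30 + (2 * x1 - 3 * x2) ** 2 * (18 - 32 * x1 + 12 * x1 ** 2
                                       + 48 * x2 - 36 * x1 * x2 + 27 * x2 ** 2)
    return a * b

def noisy(f, g, F):
    """Eq. (11): one noisy sample L(X, xi) = f(X) + g * F * xi, xi ~ N(0, 1)."""
    return lambda X: f(X) + g * F * random.gauss(0.0, 1.0)

# g = 0.5 and F = 3 for the GP problem
L_gp = noisy(goldstein_price, 0.5, 3.0)
```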

Firstly, let g = 0.5 for all six problems. The parameters of PSOOHT are set as follows: swarm size N = 100, $c_1 = 3.85$, $c_2 = 0.25$, $\chi = 0.729$, T = 2000, $n_0 = 10$, $\Delta = 10$, and a maximum generation number of 100 is used as the stopping condition. We carry out 50 independent runs of each approach on each problem on a PC with an AMD Athlon 2800+ CPU and 1 GB RAM using MATLAB 7.04. The statistical results are summarized in Table 1, where $X^*$ denotes the optimal solution obtained in a random run, and $L_a(X^*)$ and $J_a(X^*)$ denote the average estimated performance and the average expected performance of all $X^*$ over the 50 runs, respectively, for a certain approach.

Secondly, we investigate the influence of g on the performance of the four algorithms when solving the GP problem in noisy environment. Each approach is again run independently 50 times with the same parameters as before.

Fig. 2. Searching results of 50 random runs by PSOHT when g = 0.5 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).

Fig. 3. Searching results of 50 random runs by PSOO when g = 0.5 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).


The statistical results are summarized in Table 2, where Ns denotes the number of obtained satisfactory solutions, i.e. solutions whose distance from the theoretical optimum is less than 0.1.

In addition, we provide the distribution charts of the 50 search results of each approach for the GP problem. When g = 0.5, the distribution charts of the four approaches are illustrated in Figs. 1–4, respectively; when g = 1.0, they are illustrated in Figs. 5–8, respectively.

From Table 1, it can be seen that the results of PSO for function optimization in noisy environment are often not very satisfactory. By incorporating either HT or OCBA into PSO, the performance can be improved, but only slightly. If HT and OCBA are both incorporated into PSO, that is, when PSOOHT is used, the optimization performance is improved greatly. For PSOOHT, not only is the value of $J_a(x^*)$ close to the expected optimal value, but the values of $J_a(x^*)$ and $L_a(x^*)$ are also very close to each other. So it can be concluded that, compared with PSO, PSOOHT is clearly superior in terms of searching quality and robustness.

From Table 2, it can be seen that PSO also performs well when the noise is very small, for example when g = 0.01. However, as the noise increases, the performance of PSO deteriorates rapidly.

Fig. 4. Searching results of 50 random runs by PSO when g = 0.5 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).

Fig. 5. Searching results of 50 random runs by PSOOHT when g = 1.0 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).


Fig. 6. Searching results of 50 random runs by PSOHT when g = 1.0 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).

Fig. 7. Searching results of 50 random runs by PSOO when g = 1.0 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).


Compared with PSO, the performances of PSOO and PSOHT are better, but they are also very sensitive to the magnitude of the noise. After combining PSO with both HT and OCBA, it can be seen from Table 2 that the performance of PSOOHT is not very sensitive to the noise, and PSOOHT can obtain solutions very close to the true global optima with fairly high probability. So it is concluded that PSOOHT is very robust to noise.

From Figs. 1–4, it can be seen that the solutions obtained by PSOOHT in 50 random runs are all very close to the true globally optimal solution. However, the solutions obtained by PSO, PSOO and PSOHT are sometimes far away from the true globally optimal solution, especially those of the pure PSO. From Figs. 5–8, when the noise magnitude becomes larger (g = 1.0), the solutions obtained by PSOOHT in 50 random runs are still all very close to the true globally optimal solution, whereas the solutions obtained by PSO are often far away from it. Once again, the effectiveness and robustness of PSOOHT are demonstrated.

In a word, both the search element and the evaluation element are considered in PSOOHT. PSOOHT not only applies the population-based search framework of PSO for exploration and exploitation of the solution space, but also applies OCBA to evaluate the swarm and identify the good-enough solutions by intelligently determining the number of simulation replications allocated to each particle within the limited budget.


Fig. 8. Searching results of 50 random runs by PSO when g = 1.0 (solutions plotted in the (x1, x2) plane over [-2, 2] x [-2, 2]).


Besides, PSOOHT applies HT to reserve the best solutions and maintain the diversity of the next swarm by deleting particles with similar performances, which also reduces repeated search to some extent. Thus, PSOOHT achieves good searching quality and robustness, and the proposed PSOOHT is an effective approach for function optimization in noisy environments.

6. Conclusion

In this paper, we proposed an effective hybrid approach (PSOOHT) that combines PSO, OCBA and HT for function optimization in noisy environment. It was shown that PSO is effective for deterministic optimization problems but may not be effective for uncertain optimization problems, especially when large noise is present. Fortunately, with the help of OCBA to allocate limited sampling budgets for reliable evaluation and identification of good particles, and with the help of HT to reserve good particles and maintain the diversity of the swarm, PSOOHT delivers superior performance for function optimization in noisy environments. Future work includes applying PSOOHT to real-world problems and developing effective PSO for combinatorial optimization in noisy environments.

Acknowledgements

This paper is partially supported by the National Natural Science Foundation of China (Grant Nos. 60204008, 60374060 and 60574072) and the 973 Program under Grant 2002CB312200.

Appendix A. Six deterministic optimization problems

(1) GP – Goldstein–Price (n = 2):

$$f_G(x) = \left[ 1 + (x_1 + x_2 + 1)^2 \left( 19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2 \right) \right] \times \left[ 30 + (2x_1 - 3x_2)^2 \left( 18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2 \right) \right], \quad -2 \leq x_i \leq 2,\; i = 1, 2.$$

The global minimum is equal to 3 and the minimum point is (0, -1). There are four local minima in the minimization region.


(2) BR – Branin (n = 2):

$$f_B(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10, \quad -5 \leq x_1 \leq 10,\; 0 \leq x_2 \leq 15.$$

The global minimum is approximately 0.398 and it is reached at the three points (-3.142, 12.275), (3.142, 2.275) and (9.425, 2.475).

(3, 4) HN – Hartman (n = 3, 6):

$$f_H(x) = -\sum_{i=1}^{4} c_i \exp\left[ -\sum_{j=1}^{n} a_{ij} (x_j - p_{ij})^2 \right], \quad 0 \leq x_i \leq 1,\; i = 1, \ldots, n.$$

For n = 3, the global minimum is equal to -3.86 and it is reached at the point (0.114, 0.556, 0.852). For n = 6, the minimum is -3.32 at the point (0.201, 0.150, 0.477, 0.275, 0.311, 0.657). We refer to [14] for the values of the parameters $c_i$, $a_{ij}$ and $p_{ij}$.

(5) RA – Rastrigin (n = 2):

$$f_R(x) = x_1^2 + x_2^2 - \cos 18x_1 - \cos 18x_2, \quad -1 \leq x_i \leq 1,\; i = 1, 2.$$

The global minimum is equal to -2 and the minimum point is (0, 0). There are about 50 local minima arranged in a lattice configuration.

(6) SH – Shubert (n = 2):

$$f_S(x) = \left\{ \sum_{i=1}^{5} i \cos[(i+1)x_1 + i] \right\} \left\{ \sum_{i=1}^{5} i \cos[(i+1)x_2 + i] \right\}, \quad -10 \leq x_1, x_2 \leq 10.$$

The function has 760 local minima, 18 of which are global with $f_S = -186.7309$.
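For quick reference, direct Python transcriptions of two of these benchmarks (our own sketches of the formulas above; the function names are arbitrary) are:

```python
import math

def branin(x):
    """BR: global minimum approx. 0.398 at (-3.142, 12.275), (3.142, 2.275), (9.425, 2.475)."""
    x1, x2 = x
    return ((x2 - 5.1 / (4 * math.pi ** 2) * x1 ** 2 + 5 / math.pi * x1 - 6) ** 2
            + 10 * (1 - 1 / (8 * math.pi)) * math.cos(x1) + 10)

def rastrigin2(x):
    """RA (2-D variant used here): global minimum -2 at (0, 0)."""
    x1, x2 = x
    return x1 ** 2 + x2 ** 2 - math.cos(18 * x1) - math.cos(18 * x2)
```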

References

[1] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. IEEE Int. Conf. on Neural Networks, WA, Australia, 1995, pp. 1942–1948.
[2] B. Liu, L. Wang, Y.H. Jin, D.X. Huang, Advances in particle swarm optimization algorithm, Control Instrum. Chem. Ind. 32 (3) (2005) 1–6.
[3] B. Liu, L. Wang, Y.H. Jin, D.X. Huang, Designing neural networks using hybrid particle swarm optimization, Lect. Notes Comput. Sci. 3496 (2005) 391–397.
[4] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, MA, 1989.
[5] Y.C. Jin, J. Branke, Evolutionary optimization in uncertain environments - a survey, IEEE Trans. Evolution. Comput. 9 (3) (2005) 303–317.
[6] M. Clerc, J. Kennedy, The particle swarm: explosion, stability, and convergence in a multi-dimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73.
[7] V.S. Pugachev, Probability Theory and Mathematical Statistics for Engineers, Pergamon Press, NY, 1984.
[8] L. Wang, L. Zhang, D.Z. Zheng, A class of hypothesis-test based genetic algorithm for flow shop scheduling with stochastic processing time, Int. J. Adv. Manufactur. Technol. 25 (11–12) (2005) 1157–1163.
[9] Y.C. Ho, R. Sreenivas, P. Vakili, Ordinal optimization of discrete event dynamic systems, Discrete Event Dyn. Syst. 2 (2) (1992) 61–88.
[10] L. Dai, Convergence properties of ordinal comparison in the simulation of discrete event dynamic systems, J. Optim. Theory Appl. 91 (2) (1996) 363–388.
[11] L.H. Lee, T.W.E. Lau, Y.C. Ho, Explanation of goal softening in ordinal optimization, IEEE Trans. Automatic Control 44 (1) (1999) 94–99.
[12] C.H. Chen, J. Lin, E. Yucesan, S.E. Chick, Simulation budget allocation for further enhancing the efficiency of ordinal optimization, Discrete Event Dyn. Syst. 10 (2000) 251–270.
[13] L. Wang, L. Zhang, D.Z. Zheng, Genetic ordinal optimization for stochastic flow shop scheduling, Int. J. Adv. Manufactur. Technol. 27 (1–2) (2005) 166–173.
[14] L. Wang, Intelligent Optimization Algorithms with Applications, Tsinghua University & Springer Press, Beijing, 2001.