IPESA-II: Improved Pareto Envelope-Based Selection Algorithm II

Miqing Li1, Shengxiang Yang2, Xiaohui Liu1, and Kang Wang3

1 Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex, UB8 3PH, United Kingdom

{miqing.li, xiaohui.liu}@brunel.ac.uk
2 School of Computer Science and Informatics, De Montfort University,

Leicester LE1 9BH, United Kingdom
[email protected]

3 Institute of Information Engineering, Xiangtan University, Xiangtan, 411105, China
[email protected]

Abstract. The Pareto envelope-based selection algorithm II (PESA-II) is a classic evolutionary multiobjective optimization (EMO) algorithm that has been widely applied in many fields. One attractive characteristic of PESA-II is its grid-based fitness assignment strategy in environmental selection. In this paper, we propose an improved version of PESA-II, called IPESA-II. By introducing three improvements in environmental selection, the proposed algorithm attempts to enhance PESA-II in three performance aspects: convergence, uniformity, and extensity. From a series of experiments on two sets of well-known test problems, IPESA-II is found to significantly outperform PESA-II, and also to be very competitive against five other representative EMO algorithms.

Key words: Evolutionary multiobjective optimization, PESA-II algorithm, convergence, uniformity, extensity

1 Introduction

Many real-world problems involve multiple competing objectives that should be considered simultaneously. Because of the conflicting nature of the objectives, in these multiobjective optimization problems (MOPs) there is usually no single optimal solution but rather a set of Pareto optimal solutions (called the Pareto front in the objective space). Evolutionary algorithms (EAs), which use natural selection as their search engine, are well suited to MOPs because their population-based nature allows them to achieve an approximation of the Pareto front in a single run. Over the past two decades, a number of state-of-the-art evolutionary multiobjective optimization (EMO) algorithms have been proposed [6], [2]. Generally speaking, these algorithms share three common goals: minimizing the distance from the resulting solutions to the Pareto front (i.e., convergence), maintaining a uniform distribution of the solutions (i.e., uniformity), and maximizing the distribution range of the solutions along the Pareto front (i.e., extensity).

The Pareto envelope-based selection algorithm II (PESA-II) [3] and its predecessor PESA [5], proposed by Corne et al. at the turn of the century, are classic EMO algorithms [2] that have been widely applied in many fields [25], [15]. Their most attractive characteristic is the grid-based fitness assignment mechanism that maintains diversity in both environmental selection and mating selection. In comparative studies, they have been found to be competitive on artificial test functions [3], [12] and real-world problems [10]; additionally, for problems with a high number of objectives, they have been shown to significantly outperform their contemporary competitors (e.g., the nondominated sorting genetic algorithm II (NSGA-II) [9] and the strength Pareto evolutionary algorithm 2 (SPEA2) [31]) regarding the ability to search towards the Pareto front [17].

Despite the obvious usefulness of PESA-II and its predecessor, there is some room for improvement in these algorithms. The first issue concerns distribution uniformity. Since the grid environment of the archive set may need to be adjusted after the entry of each individual into the archive, the uniformity of the final population may be negatively affected to some extent (a detailed explanation is given in Section 3.1). The second issue relates to distribution extensity. Unlike some other classic EMO algorithms, such as NSGA-II and SPEA2, which explicitly or implicitly assign boundary solutions better fitness than internal solutions, PESA-II and its predecessor have no boundary solution preservation mechanism, which largely reduces the distribution range of their final solution set [31], [20]. The last issue concerns convergence. Although PESA-II and its predecessor appear competitive against their contemporary algorithms, they are beaten by some recent state-of-the-art algorithms, such as the indicator-based evolutionary algorithm (IBEA) [30], the ϵ-dominance [19] based multiobjective evolutionary algorithm (ϵ-MOEA) [7], the S metric selection evolutionary multiobjective optimization algorithm (SMS-EMOA) [1], the decomposition-based multiobjective evolutionary algorithm (MOEA/D) [28], and the territory defining evolutionary algorithm (TDEA) [16].

In this paper, an improved PESA-II, called IPESA-II, is presented. IPESA-II attempts to address the above issues by introducing three simple but effective improvements. The remainder of this paper is organized as follows. Section 2 reviews the original PESA-II. Section 3 contains a detailed description of the proposed improvements. Experimental results and discussions are given in Section 4. Finally, Section 5 presents our conclusions and notes for further work.

2 PESA-II

PESA-II follows the standard principles of an EA, maintaining two populations: an internal population of fixed size, and an external population (i.e., an archive set) of non-fixed but limited size. The internal population stores the new solutions generated from the archive set by variation operations, and the archive set only contains the nondominated solutions discovered during the search. A grid division of the objective space is introduced to maintain diversity in the algorithm. The number of solutions within a hyperbox is referred to as the density of the hyperbox, and is used to distinguish solutions in two crucial processes of an EMO algorithm: mating selection and environmental selection.
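To make the grid mechanism concrete, the sketch below (in Python, with illustrative names; it is not taken from the original implementation) computes the hyperbox index of each archive member and the resulting densities, assuming minimization and a grid whose bounds are derived from the archive itself.

```python
from collections import Counter

def grid_key(f, lower, width, divisions):
    """Index of the hyperbox that contains objective vector f."""
    return tuple(min(int((fi - lo) / w), divisions - 1)
                 for fi, lo, w in zip(f, lower, width))

def hyperbox_densities(archive, divisions):
    """Count how many archive members fall into each occupied hyperbox.

    The grid spans the extreme objective values of the archive and is split
    into `divisions` equal-width intervals per objective, mirroring the
    adaptive grid of PESA-II, whose bounds are recomputed from the archive.
    """
    dim = len(archive[0])
    lower = [min(f[i] for f in archive) for i in range(dim)]
    upper = [max(f[i] for f in archive) for i in range(dim)]
    width = [(u - l) / divisions or 1.0 for l, u in zip(lower, upper)]
    return Counter(grid_key(f, lower, width, divisions) for f in archive)

# A small bi-objective archive: two members share a hyperbox, two do not.
archive = [(0.10, 0.90), (0.15, 0.85), (0.50, 0.50), (0.90, 0.10)]
print(hyperbox_densities(archive, divisions=4))
```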

Unlike most EMO algorithms (including its predecessor PESA), the mating selection process of PESA-II is implemented in a region-based manner rather than in an individual-based manner. That is, a hyperbox is first selected, and the individual used for genetic operations is then randomly chosen from the selected hyperbox; thus, highly crowded hyperboxes do not contribute more individuals than less crowded ones.
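A minimal sketch of this two-stage (region-then-individual) choice is given below; the binary tournament between hyperboxes, decided by their densities, is an illustrative assumption kept deliberately simple.

```python
import random

def region_based_select(boxes):
    """Pick one parent in two stages, in the spirit of PESA-II's
    region-based mating selection.

    `boxes` maps a hyperbox key to the list of archive members it contains.
    A binary tournament between two randomly chosen hyperboxes prefers the
    less crowded one; the parent is then drawn uniformly from the winning
    box, so a densely populated box does not contribute more parents than
    a sparse one.
    """
    keys = list(boxes)
    b1, b2 = random.choice(keys), random.choice(keys)
    winner = b1 if len(boxes[b1]) <= len(boxes[b2]) else b2
    return random.choice(boxes[winner])
```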

In the environmental selection process, the candidate individuals in the internal population are inserted into the archive set one by one, and thus the grid environment is updated step by step. A candidate may enter the archive if it is nondominated within the internal population and is not dominated by any current member of the archive. Once a candidate has entered the archive, a corresponding adjustment of the archive and grid environment is carried out. Firstly, the members of the archive which the candidate dominates are removed, to ensure that only nondominated individuals exist in the archive. Secondly, the grid environment is checked to see whether its boundaries have changed with the addition or removal of individuals in the archive (in PESA-II, the grid environment is set adaptively according to the archive set, so that it just envelops all the individuals in the archive set [5]). Finally, if the addition of a candidate renders the archive overfull, an arbitrary individual in the most crowded hyperbox is removed.

3 IPESA-II

IPESA-II is proposed as an enhanced version of PESA-II that introduces three simple but effective improvements in the algorithm's environmental selection:

– Maintaining the archive after all individuals in the internal population have entered it, instead of doing so step by step.

– Extending the distribution range of the solution set by keeping the boundary individuals.

– Improving the convergence of the solution set by removing the worst-performing individual in the most crowded hyperbox.

These three improvements, which focus on the performance of the solution set in terms of uniformity, extensity, and convergence, respectively, are explained in the following sections.

3.1 Uniformity Improvement

As stated in the previous section, PESA-II adopts an incremental update mode in its environmental selection process. Once a candidate has entered the archive, the grid environment needs to be checked to see whether it has changed. Any excess of the grid boundary or of the upper limit of the archive size leads to an adjustment (or even a reconstruction) of the grid environment. This not only causes extra time consumption, but also affects the uniformity of the final archive set, since different sequences in which the candidates enter the archive result in different distributions. Figure 1 gives an example to illustrate this issue.

Fig. 1. An example of environmental selection in PESA-II (panels (a)–(d)), where individuals A–E are the current members of the archive set (the size of the archive set is five), and individuals X–Z are the candidates to be archived. Panels (b)–(d) show the archiving process of the candidates in the order Z, Y, and X. Black points correspond to the current individuals in the archive, hollow points stand for the candidates, and gray points denote the individuals removed from the archive.

Suppose that individuals A–E are the current members of the archive set, and individuals X–Z are the candidates to be archived (cf. Fig. 1(a)). Figures 1(b)–(d) show the entry of the candidates into the archive in the order Z, Y, and X. Here the archive size is set to five. Firstly, candidate Z enters the archive and the grid environment is reconstructed. Accordingly, either B or C will be deleted, since they reside in the most crowded hyperbox. Figure 1(b) shows the archiving result, assuming that individual C is removed. Then, individuals E and A are eliminated in turn, since they are dominated by Y and Z, respectively (cf. Figs. 1(c) and (d)). The final archive set is formed by X, B, D, Y, and Z.

Note that the archive obtained above is not an ideal distribution result. A better archive would preserve individual C and remove either D or Y, which, in fact, is the result of the entry of the candidates into the archive in the order X, Y, and Z. In addition, if the entry order is Y, Z, and X, both of the above results may occur with equal probability.

The above case clearly indicates that different entry orders of individuals into the archive may lead to different distribution results. In fact, it may be unreasonable to adjust the grid environment to maintain diversity whenever a candidate has entered the archive, because the locations of the later candidates are unknown. Here, we make a simple improvement by only adjusting the grid environment after all candidates have entered the archive. Specifically, we first put all candidates into the archive, then delete the dominated individuals in the archive and construct the grid environment according to all individuals in the archive, and finally remove individuals located in the currently most crowded hyperbox one by one until the upper limit of the archive size is reached. Let us revisit the example of Fig. 1 using the improved approach. Firstly, the archive set contains all eight individuals. Then, A and E are eliminated, and the grid environment is constructed from the remaining individuals. Finally, either D or Y is removed, considering that they are located in the most crowded hyperbox.
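The following sketch shows how this batch-style maintenance could be organized (illustrative names, objective vectors represented as plain tuples; the boundary preservation and distance rules of Sections 3.2 and 3.3 are omitted here).

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def grid_key(f, lower, width, divisions):
    """Hyperbox index of objective vector f (as in the Section 2 sketch)."""
    return tuple(min(int((fi - lo) / w), divisions - 1)
                 for fi, lo, w in zip(f, lower, width))

def batch_environmental_selection(points, divisions, capacity):
    """IPESA-II-style archive maintenance, applied once per generation.

    `points` holds the objective vectors (tuples) of the old archive plus
    all candidates from the internal population.  The grid is built only
    after everything has entered, so the result does not depend on the
    order in which the candidates arrive.
    """
    # 1. Keep only the nondominated members of the merged set.
    nondom = [p for p in points
              if not any(dominates(q, p) for q in points if q is not p)]
    # 2. Construct the grid environment once, over all remaining members.
    dim = len(nondom[0])
    lower = [min(f[i] for f in nondom) for i in range(dim)]
    width = [(max(f[i] for f in nondom) - lo) / divisions or 1.0
             for i, lo in enumerate(lower)]
    boxes = {}
    for p in nondom:
        boxes.setdefault(grid_key(p, lower, width, divisions), []).append(p)
    # 3. Truncate: repeatedly drop one member of the currently most crowded box.
    while len(nondom) > capacity:
        most_crowded = max(boxes.values(), key=len)
        victim = random.choice(most_crowded)  # Sections 3.2 and 3.3 refine this choice
        most_crowded.remove(victim)
        nondom.remove(victim)
    return nondom
```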

3.2 Extensity Improvement

Extreme solutions (or boundary solutions) of a solution set refer to those solutions that have a maximum or minimum value on at least one objective of an MOP. Extreme solutions of a nondominated solution set are very important since they determine its distribution range. Many EMO algorithms preserve extreme solutions by explicitly or implicitly assigning them better fitness than internal solutions. However, in PESA-II, extreme solutions do not get special treatment: they can be eliminated with the same probability as other solutions, which leads the solution set of PESA-II to have a poorer distribution range than that of other algorithms [21].

In this paper, we preserve the extreme solutions of the archive set in environmental selection. To be specific, extreme solutions are not removed unless they are the only solution (or solutions) in the selected hyperbox. In other words, if a hyperbox contains both extreme and internal solutions, the extreme solutions will always be eliminated after the internal ones.
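One way to realize this preference, sketched below under the conventions of the earlier snippets (objective vectors as tuples, minimization; names are illustrative), is to expose extreme members of a hyperbox to removal only when the hyperbox contains nothing else.

```python
def is_extreme(f, archive):
    """True if f attains the minimum or the maximum of at least one
    objective over the whole archive, i.e. it is a boundary solution."""
    for i in range(len(f)):
        column = [g[i] for g in archive]
        if f[i] == min(column) or f[i] == max(column):
            return True
    return False

def removal_candidates(box, archive):
    """Members of a hyperbox that may be removed: internal solutions first,
    extreme solutions only when the box contains nothing else."""
    internal = [p for p in box if not is_extreme(p, archive)]
    return internal if internal else box
```

The victim is then picked from the returned list, for example with the distance rule introduced in Section 3.3.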

3.3 Convergence Improvement

In the environmental selection of PESA-II, when the archive set is overfull, a nondominated solution is randomly eliminated from the most crowded hyperbox. Although this elimination strategy seems reasonable, it still leaves room for improvement in the context of convergence. The Pareto dominance relation is a qualitative criterion for distinguishing individuals, which fails to capture the quantitative difference in objective values among individuals. That is, two individuals are incomparable even if one is largely superior to the other in most of the objectives but only slightly inferior in one or a few objectives.
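For instance, under the minimization convention the two vectors below are mutually nondominated, even though the first is far better on three of the four objectives (the values are purely illustrative):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

a = (1.0, 1.0, 1.0, 5.1)   # far better on the first three objectives
b = (5.0, 5.0, 5.0, 5.0)   # slightly better on the last objective
print(dominates(a, b), dominates(b, a))   # False False: a and b are incomparable
```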

Inspired by ϵ-MOEA [7], we adopt a distance-based elimination strategy in IPESA-II instead of the random elimination strategy of PESA-II. For the most crowded hyperbox, the proposed strategy removes the individual that is farthest away from the utopia point of the hyperbox (i.e., the best corner of the hyperbox) among all the individuals in the hyperbox. For example, consider individuals D and Y in Fig. 1. If one individual needs to be eliminated from the hyperbox where they are located, D will be removed since it is farther from the lower-left corner of the hyperbox.

It is worth pointing out that the distance calculation is based on the difference in objective values between an individual and the corresponding utopia point, normalized by the size of a hyperbox, which is consistent with the adaptive grid environment. This means that the distance-based comparison strategy is unaffected by differences in the ranges of the objective functions.
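A sketch of this elimination rule is given below: the utopia point of a hyperbox is taken as its best corner, each objective is scaled by the hyperbox width in that objective, and the member of the most crowded box that lies farthest from that corner is the one removed (names are illustrative).

```python
import math

def normalized_utopia_distance(f, box_key, lower, width):
    """Distance from f to the best (utopia) corner of its hyperbox, with
    each objective scaled by the hyperbox width in that objective, so that
    differently scaled objectives are treated alike."""
    return math.sqrt(sum(((fi - (lo + k * w)) / w) ** 2
                         for fi, lo, w, k in zip(f, lower, width, box_key)))

def pick_victim(box, box_key, lower, width):
    """IPESA-II's rule: from the most crowded hyperbox, remove the member
    farthest from the hyperbox's utopia point, instead of a random one."""
    return max(box, key=lambda f: normalized_utopia_distance(f, box_key, lower, width))
```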

4 Experimental Results

This section validates the performance of IPESA-II by comparing it with PESA-II and five other well-known EMO algorithms: NSGA-II [9], SPEA2 [31], IBEA [30], ϵ-MOEA [7], and TDEA [16]. First, we briefly introduce the performance metrics and test problems used in the experiments. Second, the general experimental setting for the seven algorithms is given. Then, a comparative study between IPESA-II and PESA-II is presented in terms of convergence, uniformity, and extensity. Finally, we further compare the comprehensive performance of IPESA-II and the other five algorithms.

4.1 Performance Metrics and Test Problems

This paper considers four performance metrics to examine the proposed algorithm: Generational Distance (GD) [26], Spacing (SP) [24], Maximum Spread (MS) [11], and Hypervolume (HV) [32]. The first three metrics are used to assess the convergence, uniformity, and spread of a solution set, respectively. The last one assesses the comprehensive performance regarding all three aspects.

The convergence metric GD calculates the average distance of the obtained solution set from the Pareto front; a lower value is better. The uniformity metric SP measures the standard deviation of the distance from each solution to its closest neighbor in the obtained set; again, a lower value is preferable. The spread metric MS is an improved version of the original Maximum Spread [32] that takes the distribution range of the Pareto front into account. The original MS, which measures the length of the diagonal of the hypercube formed by the extreme objective values in the obtained set, may be influenced heavily by the convergence of an algorithm. To ease this effect, the new MS compares the extreme values of the obtained solutions with the boundaries of the Pareto front. It takes a value between zero and one, and a larger value means a broader range of the set. The HV metric is a very popular quality metric due to its good properties.


Table 1. div setting in PESA-II and IPESA-II, and ϵ and τ settings in ϵ-MOEA and TDEA, respectively.

        ZDT1    ZDT2    ZDT3    ZDT4    ZDT6    DTLZ1
div     50      50      70      50      50      9
ϵ       0.0076  0.0076  0.0030  0.0075  0.0065  0.0340
τ       0.0090  0.0090  0.0070  0.0075  0.0060  0.0600

        DTLZ2   DTLZ3   DTLZ4   DTLZ5   DTLZ6   DTLZ7
div     8       8       7       50      9       10
ϵ       0.0630  0.0630  0.0150  0.0050  0.0300  0.0500
τ       0.1050  0.0200  0.0400  0.0110  0.0250  0.0600

One of these properties is that it can assess the comprehensive performance on convergence, uniformity, and extensity. HV calculates the volume of the objective space between the obtained set and a given reference point, and a larger HV value is preferable. More details of these metrics can be found in their original papers.
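As a rough reference for how the GD and SP metrics described above are commonly computed, a sketch under the usual definitions is given below; the exact formulations in the original papers differ in details such as the distance norm and the averaging.

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def generational_distance(solutions, front_sample):
    """GD: average distance from each obtained solution to its nearest point
    in a dense sample of the true Pareto front (one common form; the original
    definition in [26] uses a root-mean-square variant)."""
    return sum(min(euclid(s, p) for p in front_sample) for s in solutions) / len(solutions)

def spacing(solutions):
    """SP: standard deviation of the nearest-neighbour distances within the
    obtained set (Euclidean distance is used here; Schott's original
    formulation [24] measures neighbours with the L1 distance)."""
    d = [min(euclid(s, t) for t in solutions if t is not s) for s in solutions]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))
```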

To benchmark the performance of the seven algorithms, two commonly used test problem suites, the ZDT problem family [29] and the DTLZ problem family [8], are invoked. The former, including ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6, is used to test the algorithms on bi-objective problems, and the latter, including DTLZ1 to DTLZ7, is employed to test the algorithms on tri-objective problems. All the test problems are configured as in the original papers where they are described.
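To give a flavor of these benchmarks, the bi-objective ZDT1 problem, in its usual 30-variable configuration, can be written as follows (a sketch of the standard definition from [29]).

```python
import math

def zdt1(x):
    """ZDT1 with the usual n = 30 decision variables in [0, 1]; its Pareto
    front is f2 = 1 - sqrt(f1), reached when g(x) = 1."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```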

4.2 General Experimental Setting

All compared algorithms use real-valued decision variables. A crossover probability pc = 1.0 and a mutation probability pm = 1/l (where l is the number of decision variables) are used. The crossover and mutation operators are simulated binary crossover (SBX) and polynomial mutation, both with distribution index 20 [6].
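For reference, a compact single-variable sketch of the two operators in their basic forms is shown below, following the standard formulation in [6]; production implementations usually add bound handling for SBX and per-variable application probabilities.

```python
import random

ETA_C, ETA_M = 20.0, 20.0   # distribution indices used in the paper

def sbx_pair(p1, p2, eta=ETA_C):
    """Simulated binary crossover for one real-valued variable
    (basic unbounded form of the operator described in [6])."""
    u = random.random()
    beta = (2.0 * u) ** (1.0 / (eta + 1.0)) if u <= 0.5 \
        else (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutate(x, low, high, eta=ETA_M):
    """Polynomial mutation for one variable (basic form, clipped to bounds)."""
    u = random.random()
    delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0 if u < 0.5 \
        else 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    return min(max(x + delta * (high - low), low), high)
```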

We independently run each algorithm 30 times for each test problem. The termination criterion of the algorithms is a predefined number of evaluations. For bi-objective problems, the evaluation number is set to 25,000, and for problems with three objectives, the evaluation number is 30,000. The population size for the generational algorithms IPESA-II, PESA-II, NSGA-II, SPEA2, and IBEA is set to 100, and the archive, where one exists, is maintained with the same size. For the steady-state algorithms ϵ-MOEA and TDEA, the regular population size is set to 100.

In the calculation of HV, similar to [18], we select as the reference point of a problem the integer point slightly larger than the worst value of each objective on its Pareto front. As a consequence, the reference points for the ZDT and DTLZ problems are (2, 2) and (2, 2, 2), respectively, except (1, 1, 1) for DTLZ1 and (2, 2, 7) for DTLZ7.

PESA-II and IPESA-II require the user to set a grid division parameter (called div here). Based on some trial-and-error experiments, both algorithms work well under the settings in Table 1.


Table 2. Comparison results between PESA-II and IPESA-II regarding GD, SP, and MS, where the top and bottom values in each cell correspond to PESA-II and IPESA-II, respectively, and the better mean is highlighted in boldface.

Problem   GD                  SP                  MS
ZDT1      2.048e-4 (1.4e-4)†  6.771e-3 (5.7e-4)†  9.762e-1 (1.9e-2)
          4.930e-5 (3.3e-5)   4.468e-3 (5.2e-4)   9.997e-1 (4.7e-4)
ZDT2      2.040e-4 (8.1e-5)†  6.793e-3 (6.2e-4)†  9.779e-1 (1.5e-2)
          4.733e-5 (2.7e-5)   4.268e-3 (5.3e-4)   9.994e-1 (3.0e-4)
ZDT3      1.216e-4 (4.5e-5)†  8.181e-3 (1.1e-3)   9.817e-1 (1.0e-2)†
          5.120e-5 (1.9e-5)   8.415e-3 (7.7e-4)   9.995e-1 (5.7e-4)
ZDT4      4.628e-2 (8.7e-2)†  3.669e-1 (7.5e-1)†  9.726e-1 (3.6e-2)
          2.805e-4 (1.5e-4)   6.202e-3 (7.0e-4)   9.964e-1 (1.2e-3)
ZDT6      1.014e-2 (9.5e-3)†  7.236e-2 (8.0e-2)†  9.930e-1 (3.8e-3)
          3.472e-4 (7.7e-4)   5.112e-3 (7.3e-3)   9.978e-1 (2.9e-4)
DTLZ1     6.746e-2 (1.0e-1)†  1.463e-1 (2.0e-1)†  9.862e-1 (2.3e-2)
          2.293e-2 (3.5e-2)   5.997e-2 (6.4e-2)   9.963e-1 (6.4e-3)
DTLZ2     1.500e-3 (2.0e-4)†  3.674e-2 (3.4e-3)†  9.901e-1 (5.4e-3)
          8.186e-4 (2.3e-4)   2.905e-2 (4.6e-3)   9.998e-1 (4.9e-4)
DTLZ3     1.039e+0 (8.0e-1)†  3.065e+0 (5.0e+0)   9.999e-1 (1.8e-4)†
          5.153e-1 (6.2e-1)   1.172e+0 (2.6e+0)   1.000e+0 (0.0e+0)
DTLZ4     1.209e-3 (5.7e-4)†  3.460e-2 (1.6e-2)   8.313e-1 (3.7e-1)
          7.622e-4 (2.7e-4)   3.489e-2 (1.6e-2)   9.031e-1 (2.0e-1)
DTLZ5     5.703e-4 (4.4e-5)†  7.385e-3 (9.3e-4)†  9.894e-1 (1.1e-2)
          4.577e-4 (3.5e-5)   6.232e-3 (6.2e-4)   9.989e-1 (1.8e-3)
DTLZ6     7.980e-2 (1.9e-2)†  8.172e-2 (4.5e-2)†  9.889e-1 (3.0e-2)
          4.961e-2 (1.1e-2)   5.577e-2 (3.1e-2)   1.000e+0 (1.5e-6)
DTLZ7     3.315e-3 (1.1e-3)†  4.206e-2 (1.4e-2)   8.688e-1 (1.4e-1)†
          2.077e-3 (6.8e-4)   4.643e-2 (7.5e-3)   9.979e-1 (2.4e-3)

"†" indicates that the difference between the two algorithms is statistically significant at the 0.05 level according to a two-tailed t-test with 58 degrees of freedom.


Fig. 2. The final solution sets of PESA-II and IPESA-II on ZDT6 (two panels plotting f2 against f1, each showing the Pareto front together with the solutions obtained by PESA-II and IPESA-II, respectively).

In addition, the algorithms ϵ-MOEA and TDEA require the user to set the size of a hyperbox in the grid (i.e., ϵ and τ, respectively). In order to guarantee a fair comparison, we set them so that the archives of the two algorithms are approximately of the same size as those of the other algorithms (the settings are also given in Table 1).

For each problem, we have executed 30 independent runs. The values reported in the result tables are the mean and standard deviation. By the central limit theorem, we assume that the sample means are normally distributed. We then test the following hypotheses at the 0.05 significance level to check whether there exists a statistically significant difference between the performance of IPESA-II (denoted as I) and that of its competitors (denoted as C):

H0 : µI = µC (1)

H1 : µI ≠ µC (2)

In the tables, the symbol "†" indicates that the difference is statistically significant at the 0.05 level according to a two-tailed t-test with 58 degrees of freedom.
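With 30 runs per algorithm the two-sample t-statistic has 30 + 30 - 2 = 58 degrees of freedom; assuming SciPy is available, the significance check amounts to something like the following sketch.

```python
from scipy import stats

def significantly_different(sample_a, sample_b, alpha=0.05):
    """Two-tailed two-sample t-test on two sets of 30 metric values; a '†'
    entry corresponds to a p-value below alpha."""
    t_stat, p_value = stats.ttest_ind(sample_a, sample_b)
    return p_value < alpha
```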

4.3 Comparison with PESA-II

Table 2 shows the comparison results between IPESA-II and PESA-II in terms of convergence, uniformity, and extensity. The values in the table correspond to the mean and standard deviation. It is clear that IPESA-II performs significantly better than PESA-II on all the considered metrics. For the convergence metric GD, IPESA-II always achieves a lower value, with statistical significance, on all the test problems; on ZDT4 and ZDT6 in particular, the result is improved by around two orders of magnitude. Figure 2 shows a typical distribution of the final solution sets of the two algorithms on ZDT6. Clearly, among the solutions obtained by PESA-II, a boundary solution fails to approximate the Pareto front.

Concerning the uniformity metric SP, IPESA-II performs better on 9 out of the 12 problems; PESA-II has a better SP value on ZDT3, DTLZ4, and DTLZ7. Moreover, on most of the test problems where IPESA-II performs better than PESA-II, the difference is statistically significant (8 out of the 9 problems), whereas on all problems where PESA-II outperforms IPESA-II, the difference is not statistically significant. In fact, for the problems DTLZ4 and DTLZ7, the solution set obtained by IPESA-II has better uniformity than that obtained by PESA-II (the typical distribution of the final solutions of the two algorithms on DTLZ7 is shown in Fig. 3). The misleading SP results are due to the influence of the extensity of a solution set on this uniformity metric: a solution set with a smaller distribution range may reach a lower SP value than another set even if the former is actually worse in terms of uniformity [23]. Therefore, the solution set obtained by PESA-II, which fails to cover the whole Pareto front (cf. the MS results of PESA-II on DTLZ4 and DTLZ7 in the table), may achieve better SP values.

Fig. 3. The final solution set of (a) PESA-II and (b) IPESA-II on DTLZ7.

Regarding the extensity of solution sets, IPESA-II also performs significantly better than PESA-II, with statistical significance on all problems except one (DTLZ4). The MS results of IPESA-II in Table 2 are very close to the ideal value (i.e., MS = 1) on most of the tested problems. This means that the proposed algorithm can effectively extend the distribution range of a solution set.

4.4 Comparison with Other EMO Algorithms

Table 3 shows the HV comparison results between IPESA-II and the other five algorithms, NSGA-II, SPEA2, IBEA, ϵ-MOEA, and TDEA. Apparently, IPESA-II is a very competitive algorithm, achieving the best result on 6 out of the 12 problems; TDEA, IBEA, ϵ-MOEA, and SPEA2 perform the best on 2, 2, 1, and 1 of the 12 problems, respectively. Concerning the statistical significance of the results between IPESA-II and the other five algorithms, the number of problems on which the proposed algorithm outperforms NSGA-II, SPEA2, IBEA, ϵ-MOEA, and TDEA with statistical significance is 8, 7, 9, 8, and 7, respectively; the number of problems on which IPESA-II performs worse with statistical significance is 1, 2, 3, 3, and 1, respectively.


Table 3. HV comparison of the six EMO algorithms. The best mean for each problem is highlighted in boldface.

Problem  NSGA-II             SPEA2               IBEA                ϵ-MOEA              TDEA                IPESA-II
ZDT1     3.659e+0 (4.1e-4)†  3.659e+0 (4.7e-4)†  3.659e+0 (8.0e-4)†  3.648e+0 (1.7e-3)†  3.657e+0 (1.8e-3)†  3.661e+0 (2.5e-4)
ZDT2     3.325e+0 (5.8e-4)†  3.325e+0 (8.9e-4)†  3.324e+0 (2.8e-4)†  3.323e+0 (1.6e-3)†  3.319e+0 (3.1e-3)†  3.327e+0 (5.6e-4)
ZDT3     4.812e+0 (4.7e-4)†  4.812e+0 (5.1e-4)†  4.806e+0 (2.1e-4)†  4.809e+0 (1.1e-3)†  4.804e+0 (3.6e-3)†  4.813e+0 (6.6e-4)
ZDT4     3.651e+0 (7.8e-3)   3.650e+0 (8.6e-3)   2.482e+0 (2.1e-1)†  3.635e+0 (2.1e-2)†  3.631e+0 (4.1e-2)†  3.652e+0 (9.9e-3)
ZDT6     3.022e+0 (2.7e-3)†  3.023e+0 (2.1e-3)†  3.037e+0 (5.7e-4)†  3.028e+0 (2.2e-3)†  3.024e+0 (2.7e-3)†  3.035e+0 (8.4e-4)
DTLZ1    9.673e-1 (5.3e-4)†  9.721e-1 (1.0e-3)†  8.977e-1 (1.0e-2)†  9.561e-1 (1.2e-2)†  9.707e-1 (5.1e-4)   9.695e-1 (2.8e-3)
DTLZ2    7.351e+0 (1.9e-2)†  7.391e+0 (6.9e-3)†  5.702e+0 (2.9e+0)†  7.389e+0 (9.0e-3)†  7.401e+0 (1.2e-2)   7.398e+0 (1.0e-2)
DTLZ3    6.919e-1 (2.3e+0)   6.369e-1 (1.7e+0)   6.228e+0 (2.1e+0)†  6.326e+0 (2.0e+0)†  9.825e-1 (2.2e+0)   1.668e+0 (2.6e+0)
DTLZ4    6.888e+0 (5.9e-1)   6.912e+0 (5.5e-1)   6.411e+0 (1.2e+0)†  7.013e+0 (4.0e-1)   6.942e+0 (4.5e-1)   7.128e+0 (6.3e-1)
DTLZ5    6.099e+0 (6.8e-4)†  6.100e+0 (7.2e-4)†  6.088e+0 (1.1e-3)†  6.100e+0 (2.1e-3)†  6.101e+0 (1.2e-3)†  6.097e+0 (6.4e-3)
DTLZ6    4.098e+0 (2.6e-1)†  4.044e+0 (2.6e-1)†  5.974e+0 (9.7e-2)†  5.292e+0 (1.2e-1)†  5.255e+0 (1.1e-1)†  5.101e+0 (1.3e-1)
DTLZ7    1.326e+1 (5.7e-2)†  1.333e+1 (2.0e-1)†  1.026e+1 (2.7e+0)†  1.312e+1 (2.0e-2)†  1.328e+1 (4.8e-2)†  1.340e+1 (5.0e-2)

"†" indicates that the difference from IPESA-II is statistically significant at the 0.05 level according to a two-tailed t-test with 58 degrees of freedom.

This means that IPESA-II can provide a good tradeoff among convergence, uniformity, and extensity on most of the problems.

Considering the test problems with different numbers of objectives, IPESA-II performs significantly better than the other algorithms on bi-objective problems: its HV value is the best on all of them except ZDT6, where IBEA performs the best and IPESA-II takes second place. For tri-objective problems, the advantage of IPESA-II over the other algorithms is not as clear as for bi-objective problems; the proposed algorithm obtains the best values on 2 out of the 7 problems. On some difficult problems, such as DTLZ3 and DTLZ6, the algorithms IBEA and ϵ-MOEA perform clearly better than IPESA-II. One of the reasons is that, in contrast to the Pareto dominance relation, the hypervolume indicator used in IBEA and the ϵ-dominance relation used in ϵ-MOEA can provide more selection pressure towards the Pareto front.

In addition, it is worth pointing out that IPESA-II is capable of dealing with MOPs with many objectives. Although most grid-based EMO algorithms encounter difficulties (e.g., an exponential increase in computational cost) when the number of objectives scales up [4], some effective measures can be adopted [22], [27]. Interested readers are referred to the three measures in [27], which make grid-based algorithms suitable for many-objective optimization.


5 Conclusion

In this paper, an enhanced version of PESA-II, called IPESA-II, is presented. IPESA-II introduces three operations to improve the performance of PESA-II in terms of convergence, uniformity, and extensity, respectively. Simulation experiments have been carried out, providing a detailed comparison with PESA-II and five well-known EMO algorithms: NSGA-II, SPEA2, IBEA, ϵ-MOEA, and TDEA. The results reveal that IPESA-II has a clear advantage over PESA-II in finding a near-optimal, uniformly distributed, and well-extended solution set. Additionally, IPESA-II is also found to be very competitive with the other five algorithms, considering that it performs better than them on the majority of the tested problems.

Future work includes the investigation of IPESA-II on more test problems, e.g., on some MOPs with a high number of objectives [14], [13]. Moreover, a deeper understanding of the algorithm's behavior may be obtained. In this context, the setting of the grid division parameter div and the separate contributions of the three simple improvements will be investigated first.

References

1. Beume, N., Naujoks, B., Emmerich, M.: SMS-EMOA: Multiobjective selection based on dominated hypervolume. European Journal of Operational Research, 181(3), 1653–1669 (2007)

2. Coello, C. A. C., Veldhuizen, D. A. V., Lamont, G. B.: Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd ed. Berlin, Heidelberg: Springer-Verlag (2007)

3. Corne, D. W., Jerram, N. R., Knowles, J. D., Oates, M. J.: PESA-II: Region-based selection in evolutionary multiobjective optimization. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2001), Morgan Kaufmann, San Mateo, CA, pp. 283–290 (2001)

4. Corne, D. W., Knowles, J. D.: Techniques for highly multiobjective optimisation: some nondominated points are better than others. In: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation (GECCO 2007), pp. 773–780 (2007)

5. Corne, D. W., Knowles, J. D., Oates, M. J.: The Pareto envelope-based selection algorithm for multiobjective optimization. In: Parallel Problem Solving from Nature (PPSN VI), pp. 839–848 (2000)

6. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. New York: John Wiley (2001)

7. Deb, K., Mohan, M., Mishra, S.: Evaluating the ϵ-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions. Evolutionary Computation, 13(4), 501–525 (2005)

8. Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable test problems for evolutionary multi-objective optimization. In: Abraham, A., Jain, L., Goldberg, R. (eds.) Evolutionary Multiobjective Optimization: Theoretical Advances and Applications, pp. 105–145 (2005)

9. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)


10. Diosan, L.: A multi-objective evolutionary approach to the portfolio optimization problem. In: Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation, pp. 183–187 (2005)

11. Goh, C. K., Tan, K. C.: An Investigation on Noisy Environments in Evolutionary Multiobjective Optimization. IEEE Trans. Evol. Comput. 11(3), 354–381 (2007)

12. Hu, J., Seo, K., Fan, Z., Rosenberg, R. C., Goodman, E. D.: HEMO: A Sustainable Multi-objective Evolutionary Optimization Framework. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2003), Part I, pp. 1029–1040 (2003)

13. Hughes, E. J.: Radar Waveform Optimisation as a Many-Objective Application Benchmark. In: Evolutionary Multi-Criterion Optimization, 4th International Conference (EMO 2007), pp. 700–714 (2007)

14. Ishibuchi, H., Tsukamoto, N., Nojima, Y.: Evolutionary many-objective optimization: A short review. In: Proc. IEEE Congr. on Evol. Comput., pp. 2419–2426 (2008)

15. Jarvis, R. M., Rowe, W., Yaffe, N. R., O'Connor, R., Knowles, J. D., Blanch, E. W., Goodacre, R.: Multiobjective evolutionary optimisation for surface-enhanced Raman scattering. Analytical and Bioanalytical Chemistry, 397(5), 1893–1901 (2010)

16. Karahan, I., Koksalan, M.: A territory defining multiobjective evolutionary algorithm and preference incorporation. IEEE Transactions on Evolutionary Computation, 14(4), 636–664 (2010)

17. Khare, V., Yao, X., Deb, K.: Performance scaling of multi-objective evolutionary algorithms. In: Proc. 2nd Int. Conf. Evol. Multi-Criterion Optim., pp. 376–390 (2003)

18. Kukkonen, S., Deb, K.: A fast and effective method for pruning of non-dominated solutions in many-objective problems. In: Parallel Problem Solving from Nature (PPSN IX), Reykjavik, Iceland, pp. 553–562 (2006)

19. Laumanns, M., Thiele, L., Deb, K., Zitzler, E.: Combining convergence and diversity in evolutionary multi-objective optimization. Evolutionary Computation, 10(3), 263–282 (2002)

20. Li, M., Yang, S., Zheng, J., Liu, X.: ETEA: A Euclidean minimum spanning tree-based evolutionary algorithm for multiobjective optimization. Evolutionary Computation, in press (2012)

21. Li, M., Zheng, J.: Spread Assessment for Evolutionary Multi-Objective Optimization. In: Evolutionary Multi-Criterion Optimization, 5th International Conference (EMO 2009), pp. 216–230. Springer, Heidelberg (2009)

22. Li, M., Zheng, J., Shen, R., Li, K., Yuan, Q.: A grid-based fitness strategy for evolutionary many-objective optimization. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2010), pp. 463–470 (2010)

23. Li, M., Zheng, J., Xiao, G.: Uniformity Assessment for Evolutionary Multi-Objective Optimization. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2008), Hong Kong, pp. 625–632 (2008)

24. Schott, J. R.: Fault tolerant design using single and multicriteria genetic algorithm optimization. Master's thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology (1995)

25. Talaslioglu, T.: Multi-objective Design Optimization of Grillage Systems According to LRFDAISC. Advances in Civil Engineering, in press (2011)

26. Van Veldhuizen, D. A., Lamont, G. B.: Evolutionary computation and convergence to a Pareto front. In: Koza, J. R. (ed.) Late Breaking Papers at the Genetic Programming Conference, pp. 221–228 (1998)


27. Yang, S., Li, M., Liu, X., Zheng, J.: A grid-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation, in press (2012)

28. Zhang, Q., Li, H.: MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation, 11(6), 712–731 (2007)

29. Zitzler, E., Deb, K., Thiele, L.: Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2), 173–195 (2000)

30. Zitzler, E., Künzli, S.: Indicator-based selection in multiobjective search. In: Parallel Problem Solving from Nature (PPSN VIII), pp. 832–842. Springer, Berlin, Heidelberg (2004)

31. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization. In: Evolutionary Methods for Design, Optimisation and Control, CIMNE, Barcelona, Spain, pp. 95–100 (2002)

32. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4), 257–271 (1999)