A mesh adaptive direct search algorithm for multiobjective optimization




European Journal of Operational Research 204 (2010) 545–556


Decision Support

A mesh adaptive direct search algorithm for multiobjective optimization

Charles Audet a,b,*, Gilles Savard a,b, Walid Zghal b

a GERAD, HEC Montréal, 3000, chemin de la côte Sainte-Catherine, Montréal (Québec), Canada H3T 2A7
b Département de Mathématique et de Génie Industriel, École Polytechnique de Montréal, C.P. 6079, succ. Centre-ville, Montréal (Québec), Canada H3C 3A7

Article info

Article history: Received 23 February 2009; Accepted 7 November 2009; Available online 17 November 2009

Keywords: Multiobjective optimization; Mesh adaptive direct search (MADS); Convergence analysis

0377-2217/$ - see front matter © 2009 Elsevier B.V. All rights reserved. doi:10.1016/j.ejor.2009.11.010

Work of the first author was supported by FCAR grant NC72792, NSERC grant 239436-0, AFOSR FA9550-07-1-0302 and ExxonMobil Upstream Research Company.

* Corresponding author. Address: GERAD, HEC Montréal, 3000, chemin de la côte Sainte-Catherine, Montréal (Québec), Canada H3T 2A7.
E-mail addresses: [email protected], [email protected] (C. Audet), [email protected] (G. Savard), [email protected] (W. Zghal).
URL: http://www.gerad.ca/Charles.Audet (C. Audet).

Abstract

This work studies multiobjective optimization (MOP) of nonsmooth functions subject to general constraints. We first present definitions and optimality conditions as well as some single-objective formulations of MOP, parameterized with respect to some reference point in the space of objective functions. Next, we propose a new algorithm called MULTIMADS (multiobjective mesh adaptive direct search) for MOP. MULTIMADS generates an approximation of the Pareto front by solving a series of single-objective formulations of MOP generated using the NBI (natural boundary intersection) framework. These single-objective problems are solved using the MADS (mesh adaptive direct search) algorithm for constrained nonsmooth optimization. The Pareto front approximation is shown to satisfy some first-order necessary optimality conditions based on the Clarke calculus. MULTIMADS is then tested on problems from the literature with different Pareto front landscapes and on a styrene production process simulation problem from chemical engineering.

© 2009 Elsevier B.V. All rights reserved.

1. Introduction

A common goal in multiobjective optimization is to identify the best trade-off solutions between different criteria. In this paper, we consider the multiobjective problem under general constraints, which may be stated as

$$\text{MOP}: \quad \min_{x \in X} \; F(x) = (f_1(x), f_2(x), \ldots, f_p(x))$$

with $F : \mathbb{R}^n \to (\mathbb{R} \cup \{+\infty\})^p$ and $X \neq \emptyset \subseteq \mathbb{R}^n$,

where $X$ is a nonempty subset of $\mathbb{R}^n$, $n$ is the number of variables, and $p$ is the number of objective functions. In the context of nonsmooth optimization, the set $X$ is typically defined through blackbox constraints given by an oracle: for instance, a computer code that may only return a yes or a no indicating whether or not a specified trial point is feasible. Examples of such blackboxes in the single-objective case are described in [6,19]. There is usually no single solution $x \in X$ that simultaneously minimizes each of the $p$ objective functions, and a notion of trade-off between solutions is required. Pareto dominance [25] is used to define the set of optimal trade-off solutions of a MOP problem:

Definition 1.1. Let $u, v \in X$ be two decision vectors. Then,

- $u \preceq v$ ($u$ weakly dominates $v$) if and only if $f_i(u) \le f_i(v)$ for all $i \in \{1, 2, \ldots, p\}$.
- $u \prec v$ ($u$ dominates $v$) if and only if $u \preceq v$ and $f_j(u) < f_j(v)$ for at least one $j \in \{1, 2, \ldots, p\}$.
- $u \sim v$ ($u$ is indifferent to $v$) if and only if $u$ does not dominate $v$, and $v$ does not dominate $u$.
- $u \in X$ is called Pareto optimal if there is no $w \in X$ that dominates $u$.

The set of Pareto optimal solutions is called the Pareto optimal set and is denoted by $X_P$. The image of $X_P$ under the mapping $F$ defines the solution of MOP and is called the Pareto front, denoted by $Y_P \subseteq \mathbb{R}^p$. It is not easy to get an exact description of these sets, especially for real-world problems where the structure of the functions either cannot be exploited or is absent. In the absence of an exact description of the Pareto front, different approaches have been proposed. The most common one is the pointwise approximation, where the Pareto front is approximated by a discrete set of solutions [21].
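The dominance relations of Definition 1.1 translate directly into code. The sketch below is illustrative: the helper names (`dominates`, `pareto_filter`) are not from the paper, and objective vectors are assumed to be plain tuples of floats.

```python
def weakly_dominates(fu, fv):
    # u weakly dominates v: f_i(u) <= f_i(v) for every objective i
    return all(a <= b for a, b in zip(fu, fv))

def dominates(fu, fv):
    # u dominates v: weak dominance plus strict improvement in at least one objective
    return weakly_dominates(fu, fv) and any(a < b for a, b in zip(fu, fv))

def pareto_filter(points):
    # keep only the nondominated objective vectors: a discrete Pareto front approximation
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, `pareto_filter([(1, 2), (2, 1), (2, 2)])` drops `(2, 2)`, which is dominated by `(1, 2)`.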

The aim of this paper is to propose an algorithm for multiobjective blackbox optimization. The algorithm relies on the recent mesh adaptive direct search (MADS) algorithm for single-objective optimization [5], and on the natural boundary intersection method of [13] to handle the multiple objectives. The convergence of the proposed


method is analyzed, and numerical results are presented on five test problems.

The paper is organized as follows. An overview of pointwise approximation methods is presented in Section 2. In particular, two methods are described: the natural boundary intersection (NBI) method [13] for multiobjective optimization and the biobjective mesh adaptive direct search (BIMADS) algorithm [8] for biobjective optimization. Then, new single-objective formulations are introduced and the algorithm MULTIMADS is detailed in Section 3. MULTIMADS combines techniques from both NBI and BIMADS. MULTIMADS extends BIMADS in terms of scalability to any number of objectives. It is applicable to a wider class of problems than NBI, since it allows blackbox functions, whereas NBI requires problems whose objectives and constraints are twice continuously differentiable. In Section 4, MULTIMADS is tested on problems from [15,14] with various complexities and landscapes designed to highlight difficulties which may be encountered in real-world problems. Some test problems are scalable to an arbitrary number of variables. Finally, the algorithm is tested on a tri-objective problem from chemical engineering with eight variables and nine general constraints.

2. Pointwise approximation methods

This section describes two different pointwise approximation methods. The NBI method [13] for multiobjective optimization is presented in Section 2.1, and the BIMADS algorithm [8] for biobjective optimization is presented in Section 2.2. Ideas from both methods are combined in the new algorithm proposed in the present paper.

2.1. The NBI method

The NBI approach of Das and Dennis [13] relies on the Convex Hull of Individual Minima (CHIM), defined as follows:

Definition 2.1. Let $x_i^*$ be a minimizer of $f_i$ and let $F_i^* = F(x_i^*)$ for $i = 1, 2, \ldots, p$. $F^* \in \mathbb{R}^p$ is said to be the shadow minimum [13], composed of the individual minima of the objectives, i.e., $F^* = (f_1^*, f_2^*, \ldots, f_p^*)$. Let $\Phi$ be the $p \times p$ matrix whose $i$th column is $F_i^* - F^*$, known as the shifted pay-off matrix. Then, the set $\{F^* + \Phi\beta : \beta \in B\}$ is referred to as the Convex Hull of Individual Minima, where $B = \{\beta \in \mathbb{R}^p : \sum_{i=1}^p \beta_i = 1, \; \beta_i \ge 0\}$ is the set of convex combination vectors.

Fig. 1. The NBI method generates evenly spaced points to approximate the Pareto front.

The shadow minimum is used as a reference point to produce feasible points on the boundary of $F(X)$ closest to $F^*$. Das and Dennis [13] show that these points are nondominated. NBI uses the CHIM to produce an approximation of the Pareto front by solving a series of single-objective optimization problems $NBI_\beta$, parameterized with respect to some convex combination vector $\beta \in B$, in which additional nonlinear equality constraints tying the objective function values are added:

$$NBI_\beta: \quad \max_{x \in \mathbb{R}^n,\; t \in \mathbb{R}} \; t \quad \text{s.t.} \quad F(x) - t\hat{n} = F^* + \Phi\beta, \quad x \in X,$$

where $\hat{n}$ denotes a unit vector normal to the CHIM simplex. In practice, $\hat{n}$ is chosen to be the sum of the columns of $\Phi$. Let $(x_\beta, t_\beta)$ denote a solution of $NBI_\beta$ for a given value of $\beta \in B$. Then $x_\beta$ lies at the intersection of the normal to the CHIM and the boundary of $F(X)$ closest to the shadow minimum. Solving $NBI_\beta$ for various values of $\beta$ produces an approximation of the Pareto front. Note that this approach may be impracticable in the blackbox optimization context due to the presence of equality constraints. Das and Dennis [13] propose a strategy to generate an even spread of combination vectors

$\beta \in B$. In this strategy, the values of each component of $\beta$ are simply $0, \delta, 2\delta, \ldots, 1$, where $\delta \le 1$ is a fixed stepsize such that $\delta^{-1}$ is a nonnegative integer, i.e., $\delta$ is the uniform spacing between two consecutive values of $\beta_j$. Thus, the smaller the value of $\delta$, the better the coverage of the Pareto front, at the expense of more function evaluations. Note that the number of NBI subproblems is equal to

$$\binom{p + \delta^{-1} - 1}{\delta^{-1}}.$$
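The even-spread strategy amounts to enumerating all vectors whose components are nonnegative multiples of $\delta$ summing to 1. A minimal sketch (function names are illustrative, not from the paper):

```python
from math import comb

def even_spread(p, delta):
    # Enumerate all beta in B whose components are multiples of delta;
    # equivalently, all compositions of m = 1/delta into p nonnegative parts.
    m = round(1 / delta)
    def parts(k, total):
        if k == 1:
            yield (total,)
            return
        for first in range(total + 1):
            for rest in parts(k - 1, total - first):
                yield (first,) + rest
    return [tuple(c * delta for c in beta) for beta in parts(p, m)]

betas = even_spread(p=3, delta=0.5)
assert len(betas) == comb(3 + 2 - 1, 2)  # (p + 1/delta - 1) choose (1/delta) = 6
```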

This even spread of $\beta$ generates evenly spaced points on the CHIM. The intersections of the normals to the CHIM emanating from $F^* + \Phi\beta$ with the boundary of $F(X)$ give a uniform pointwise approximation of the Pareto front. The NBI approach is illustrated in Fig. 1 for a biobjective problem. The CHIM is represented by a dotted line segment, and the arrows are used for normals to the CHIM. The approximation of the Pareto front consists of the six circles in the figure.

Despite the uniformity in the distribution of points generated by NBI, the method has limitations recognized by Das and Dennis [13] and illustrated by Kim et al. [18]. In particular, the NBI method may fail to cover the entire Pareto front for problems whose number of objectives exceeds 2 when two individual minimizers coincide. As a simple tri-objective example, suppose that the Pareto front is the simplex $\{y \in \mathbb{R}^3_+ : \sum_{i=1}^3 y_i = 1\}$, represented in Fig. 2 as the convex hull of the triangle ABC. Each point on the line segment AB minimizes $f_1$, BC minimizes $f_2$, and AC minimizes $f_3$. Hence, for different objective functions, NBI may find the same individual minimum. For example, NBI may identify the point A that simultaneously minimizes $f_2$ and $f_3$, and the point C that minimizes $f_1$. In this case, the CHIM consists only of the line segment AC instead of $\text{conv}\{A, B, C\}$, a two-dimensional polytope. Hence, only nondominated points in AC may be generated by the algorithm NBI. In Section 3.2, we propose to correct the lack of coverage by substituting another simplex in the objective space for the CHIM.

2.2. The BIMADS algorithm

BIMADS [8] is an iterative algorithm that generates an approximation of the Pareto front of a biobjective optimization problem (BOP) by only using function values. It solves a series of single-objective formulations of BOP using the MADS (mesh adaptive direct search) algorithm [5]. The series of formulations is constructed in a way to attempt a uniform coverage of the Pareto front, even in the case where the Pareto front is nonconvex or disjoint. The convergence of the method is studied using the Clarke calculus for nonsmooth functions. For a locally Lipschitz function $f : \mathbb{R}^n \to \mathbb{R}$,


Fig. 2. The NBI method fails to cover the entire Pareto front when two individual minimizers coincide.


Clarke [11] defines the generalized directional derivative of $f$ at $\tilde{x} \in X$ in the direction $v \in \mathbb{R}^n$ as

$$f^\circ(\tilde{x}; v) = \limsup_{y \to \tilde{x},\; t \downarrow 0} \frac{f(y + tv) - f(y)}{t}. \qquad (1)$$

BIMADS is applied in [26] for the selection of optimal portfolios.

At each iteration of BIMADS, the set of nondominated points (with respect to all points generated so far) is denoted by $X_L$. The image of $X_L$ under the mapping $F = (f_1, f_2)$ is denoted by $Y_L \subseteq \mathbb{R}^2$. $Y_L$ gives an approximation of the Pareto front $Y_P$. At the initialization step, the algorithm solves the two single-objective problems $\min_{x \in X} f_i(x)$, for $i \in \{1, 2\}$, using MADS from a starting point $x^0 \in X$ provided by the user. A first list $X_L$ of nondominated points is obtained from the set of all points generated by the two runs of MADS. The sets $X_L$ and $Y_L$ are sorted in ascending value of $f_1$ and descending value of $f_2$.

The ordering property gives a simple way to measure the gaps between nondominated points by evaluating Euclidean distances between successive solutions of $Y_L$ in the objective space. This strategy allows the evaluation of the solution coverage along $Y_L$ in order to determine a reference point from which the largest gap in the objective space is covered. Each iteration of BIMADS consists of three steps.

First, the ordered list $Y_L$ is used to identify a reference point $r$ in the space of objectives. The second step consists of solving a particular single-objective formulation $R_r$ of BOP. $R_r$ relies on a reference point $r$ in the objective space $\mathbb{R}^p$ and must satisfy the requirements presented in the following definition:

Definition 2.2. Consider the single-objective optimization problem

$$R_r: \quad \min_{x \in X} \psi_r(x) \quad \text{with} \quad \psi_r(x) = \phi_r(F(x)),$$

where $\phi_r : \mathbb{R}^p \to \mathbb{R}$ is parameterized with respect to some reference point $r \in \mathbb{R}^p$. Then $R_r$ is called a single-objective formulation at $r$ of MOP if the following conditions hold:

- If $F$ is Lipschitz near some $\tilde{x} \in X$, then $\psi_r$ is also Lipschitz near $\tilde{x} \in X$.
- If $F$ is Lipschitz near some $\tilde{x} \in X$ with $F(\tilde{x}) < r$ componentwise, and if $d \in T_X(\tilde{x})$ is such that $f_i^\circ(\tilde{x}; d) < 0$ for $i = 1, 2, \ldots, p$, then $\psi_r^\circ(\tilde{x}; d) < 0$.

The first condition ensures that the formulation preserves local Lipschitz continuity, while the second involves Clarke descent directions for all $f_i$'s and $\psi_r$. Assuming more smoothness on the function $\phi_r$ leads to the following theorem.

Theorem 2.3. Let $R_r$ be a single-objective formulation at $r \in \mathbb{R}^p$ of MOP. If $F$ and $\psi_r$ are strictly differentiable at some $\tilde{x} \in X$ with $F(\tilde{x}) < r$ componentwise, and if $d \in T_X(\tilde{x})$ is such that $\nabla f_i(\tilde{x})^T d < 0$ for $i = 1, 2, \ldots, p$, then $\nabla \psi_r(\tilde{x})^T d < 0$.

Two single-objective formulations are proposed: the single-objective normalized formulation $\hat{R}_r$ and the single-objective product formulation $\tilde{R}_r$. Let $r \in \mathbb{R}^p$ be a reference point in the objective space. The single-objective normalized formulation is defined as:

$$\hat{R}_r: \quad \min_{x \in X} \hat{\psi}_r(x) \quad \text{with} \quad \hat{\psi}_r(x) = \hat{\phi}_r(f_1(x), f_2(x), \ldots, f_p(x)) = \max_{i \in \{1, 2, \ldots, p\}} \frac{f_i(x) - r_i}{s_i}.$$

The single-objective product formulation is defined as:

$$\tilde{R}_r: \quad \min_{x \in X} \tilde{\psi}_r(x) \quad \text{with} \quad \tilde{\psi}_r(x) = \tilde{\phi}_r(f_1(x), f_2(x), \ldots, f_p(x)) = -\prod_{i=1}^{p} \left( (r_i - f_i(x))_+ \right)^2,$$

where $(r_i - f_i(x))_+ = \max\{r_i - f_i(x), 0\}$ for $i = 1, 2, \ldots, p$.

An advantage of $\tilde{R}_r$ over $\hat{R}_r$ is that the function of $p$ variables $\tilde{\phi}_r$ is continuously differentiable in the entire space, and therefore the formulation preserves the differentiability of the original problem when viewed as a function of $x$. However, $\tilde{R}_r$ restricts the choice of reference point: the dominance zone relative to $r$, $D = \{x \in \mathbb{R}^n : f_i(x) \le r_i \text{ for } i = 1, 2, \ldots, p\}$, should be nonempty, since $\tilde{\psi}_r(x) = 0$ whenever $x \notin D$. The reader is invited to consult [8] for the specific details of the single-objective formulation $R_r$.
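Both scalarizations are simple functions of the objective vector. A sketch, with `psi_hat` and `psi_tilde` as illustrative names for $\hat{\psi}_r$ and $\tilde{\psi}_r$:

```python
def psi_hat(f, r, s):
    # normalized formulation: max_i (f_i - r_i) / s_i
    return max((fi - ri) / si for fi, ri, si in zip(f, r, s))

def psi_tilde(f, r):
    # product formulation: -prod_i ((r_i - f_i)_+)^2, which is 0 outside D
    prod = 1.0
    for fi, ri in zip(f, r):
        prod *= max(ri - fi, 0.0) ** 2
    return -prod
```

For instance, with $F(x) = (1, 2)$, $r = (3, 3)$ and $s = (1, 1)$, `psi_hat` returns -1 and `psi_tilde` returns -4, while any point outside the dominance zone gets `psi_tilde` equal to 0.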

The single-objective reformulation $R_r$ is solved using the MADS algorithm [5]. Since the function values are negative for $x \in D$ and nonnegative otherwise, and since MADS is a descent method, the images of the trial points produced by this algorithm will most likely lie in the dominance zone $D$.

Finally, in the third step, the set of nondominated points $X_L$ is updated: new nondominated points are added and dominated ones are removed. These three steps are iterated by BIMADS. In practice, termination [8] is set either to a fixed number of iterations, or to the point when the gap between two consecutive nondominated points drops below a pre-determined threshold. The first alternative is widely used in blackbox optimization [5], since the optimizer is usually restricted by a budget expressed in terms of the number of objective evaluations.
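The largest-gap step of BIMADS can be sketched as below. The exact reference-point rule is given in [8]; taking the componentwise worst corner of the pair bounding the largest gap is an assumption of this sketch.

```python
import math

def reference_point(YL):
    # YL: nondominated points sorted by ascending f1 (hence descending f2)
    gaps = [math.dist(YL[i], YL[i + 1]) for i in range(len(YL) - 1)]
    i = max(range(len(gaps)), key=gaps.__getitem__)   # index of the largest gap
    a, b = YL[i], YL[i + 1]
    return (max(a[0], b[0]), max(a[1], b[1]))          # componentwise worst corner
```

Example: for `YL = [(0.0, 3.0), (1.0, 2.9), (3.0, 0.0)]` the largest Euclidean gap lies between the last two points, giving the reference point `(3.0, 2.9)`, whose dominance zone covers that gap.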

3. The MULTIMADS algorithm for multiobjective optimization

The property that Pareto points may be ordered in BOP is exploited by BIMADS to define the reference points. This strategy cannot be easily implemented in MOP due to the lack of an ordering property. Hence, we propose in this section another strategy to generate the reference point from an alternate simplex to the Convex Hull of Individual Minima (CHIM). Section 3.1 proposes a new single-objective formulation, and the general scheme of MULTIMADS is presented in Section 3.2.

3.1. An alternate formulation for multiobjective optimization

Let $r \in \mathbb{R}^p$ be a reference point in the objective space, $D$ the image of the dominance zone relative to $r$, and $\partial D$ the boundary of $D$. The order of nondominated points is used in BIMADS to identify a reference point such that the dominance zone is nonempty. Due to the lack of an ordering property in MOP, we propose an alternate formulation that does not restrict the choice of the reference point. The single-objective distance formulation is defined as:

$$\bar{R}_r: \quad \min_{x \in X} \bar{\psi}_r(x) \quad \text{with} \quad \bar{\psi}_r(x) = \bar{\phi}_r(F(x)) = \begin{cases} -\text{dist}^2(\partial D, F(x)) & \text{if } F(x) \in D, \\ \phantom{-}\text{dist}^2(\partial D, F(x)) & \text{otherwise,} \end{cases}$$

where $\text{dist}(\partial D, F(x))$ is the distance in the objective space from $F(x)$ to the boundary $\partial D$ of the dominance zone relative to $r$ in the objective space $\mathbb{R}^p$. The level sets of the $\bar{R}_r$ formulation using different distance norms for a biobjective problem are depicted in Fig. 3. The distance norms used in $\bar{R}_r$ are the infinity norm $L_\infty$, the Euclidean norm $L_2$ and the taxicab norm $L_1$. For $\bar{\phi}_r > 0$, observe that the level sets of $L_2$ are smooth while those of the other norms are not. The level sets only differ in the zone dominated by the reference point $r$.
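A minimal sketch of $\bar{\psi}_r$. Inside the dominance zone, the proof of Theorem 3.1 shows that the distance to $\partial D$ reduces to the smallest coordinate slack regardless of the norm; outside the zone, this sketch assumes the $L_2$ norm, so the distance is the norm of the constraint violations.

```python
def psi_bar(f, r):
    # distance formulation: -dist^2 to the boundary of D inside the zone,
    # +dist^2 outside (here computed under the L2 norm)
    slacks = [ri - fi for fi, ri in zip(f, r)]
    if all(s >= 0 for s in slacks):                # F(x) lies in the dominance zone D
        return -min(slacks) ** 2                   # nearest boundary facet
    return sum(min(s, 0.0) ** 2 for s in slacks)   # squared L2 violation
```

With $r = (3, 3)$: $F(x) = (1, 2)$ gives -1 (inside, nearest facet one unit away), while $F(x) = (4, 2)$ gives +1.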

The next theorem shows that $\bar{R}_r$ qualifies as a single-objective formulation of MOP.

Theorem 3.1. $\bar{R}_r$ is a single-objective formulation at $r$ of MOP in the sense of Definition 2.2.

Proof. Let $F$ be Lipschitz near a point $x \in X$ satisfying $F(x) \in D$. It follows from [11, Proposition 2.4.1] that $\bar{\psi}_r$ is Lipschitz near $x$.

To show that the second condition of Definition 2.2 holds, we must first show that when $F(x) \in D$, the value of the reformulation is independent of the norm used. Let $i^* \in \arg\min\{|f_i(x) - r_i| : i \in \{1, 2, \ldots, p\}\}$ and construct $y \in \partial D$ as follows:

$$y_i = \begin{cases} f_i(x) & \text{if } i \ne i^*, \\ r_i & \text{if } i = i^*, \end{cases} \quad \text{for } i \in \{1, 2, \ldots, p\}.$$

It follows that

$$\|y - F(x)\|_1 = \sum_{i=1}^{p} |y_i - f_i(x)| = |y_{i^*} - f_{i^*}(x)|,$$
$$\|y - F(x)\|_2 = \left( \sum_{i=1}^{p} |y_i - f_i(x)|^2 \right)^{1/2} = |y_{i^*} - f_{i^*}(x)|,$$
$$\|y - F(x)\|_\infty = \max\{|y_i - f_i(x)| : i \in \{1, 2, \ldots, p\}\} = |y_{i^*} - f_{i^*}(x)|.$$

Therefore, regardless of the norm used, $\bar{\psi}_r(x) = \bar{\phi}_r(F(x)) = -\|y - F(x)\|^2$, and the second condition of Definition 2.2 follows directly from [11, Proposition 2.3.12]. □

Fig. 3. Level sets in the objective space of the single-objective distance formulation $\bar{R}_r$ for a biobjective optimization problem using the $L_1$, $L_2$ and $L_\infty$ norms.

The main advantage of $\bar{R}_r$ over $\tilde{R}_r$ is that it provides a more flexible optimality condition, as shown next in Proposition 3.2. The $\bar{R}_r$ formulation generalizes the $\hat{R}_r$ formulation, since the function values $\bar{\psi}_r(x)$ are not restricted to negative values. Therefore, the reference point can be selected anywhere in the objective function space.

Proposition 3.2. If $\tilde{x}$ is the unique optimal solution of $\bar{R}_r$ for some reference point $r \in \mathbb{R}^p$, then $\tilde{x}$ is Pareto optimal for MOP.

Proof. Let $\tilde{x}$ be the unique optimal solution of $\bar{R}_r$ and let $x \in X$, $x \ne \tilde{x}$. Then $\bar{\psi}_r(\tilde{x}) < \bar{\psi}_r(x)$ and, consequently, there exists some index $j \in \{1, 2, \ldots, p\}$ for which $f_j(\tilde{x}) < f_j(x)$, and thus $x$ does not dominate $\tilde{x}$. Hence, $\tilde{x}$ is Pareto optimal. □

3.2. The MULTIMADS algorithm

MULTIMADS is an iterative algorithm that constructs a set of points approximating the Pareto optimal set $X_P$. We use the same notation as in [8]. At each iteration, the set of nondominated points (with respect to all points generated so far) is denoted by $X_L$. The image of $X_L$ under the mapping $F$ is denoted by $Y_L \subseteq \mathbb{R}^p$. $Y_L$ gives an approximation of the Pareto front $Y_P$.

At the initialization step, the shadow minimum $F^*$ is approximated by consecutively solving the $p$ single-objective problems $\min_{x \in X} f_i(x)$ for $i = 1, 2, \ldots, p$. As in [13], we assume that the objective functions have been defined with the shadow minimum shifted to the origin of the objective space. Hence, for all $x \in X$, $f_i(x)$, for $i = 1, 2, \ldots, p$, is redefined as:

$$f_i(x) \leftarrow f_i(x) - f_i^*.$$

Recall that the example illustrated in Fig. 2 exposed a possible flaw when using the CHIM simplex in an NBI framework. We next present two strategies for developing an alternate simplex $S$ to the CHIM. One strategy attempts to position $S$ close to the Pareto front. For this purpose, we select $S$ from a hyperplane $H$ tangent to our approximation of the Pareto front. This is done by minimizing $z = \sum_{i=1}^p s_i f_i(x)$ over $x \in X$, where $s_i$ is a positive fixed rescaling factor for $i = 1, \ldots, p$. Let $z^*$ be the optimal value of $z$; then the hyperplane $H$ is defined to be the set $\{y \in \mathbb{R}^p : \sum_{i=1}^p y_i = z^*\}$. The second strategy consists of choosing $p$ vertices in $H$ close to the Pareto front. The vertices may be obtained using the $p$ projections of the shadow minimum onto $H$ along the coordinate axes in the objective space. Hence, the components of the simplex vertices are simply the columns of the following matrix: $F^* + z^* I_p$, where $I_p$ is the $p \times p$ identity matrix. The alternate to the CHIM, the Tangent Hull (TH), is defined as follows:


Page 5: A mesh adaptive direct search algorithm for multiobjective optimization

C. Audet et al. / European Journal of Operational Research 204 (2010) 545–556 549

Definition 3.3. Let $z^*$ be the minimum value of $z = \sum_{i=1}^p s_i f_i(x)$, where $s_i$ is a positive scaling factor for $i = 1, 2, \ldots, p$, and let $B = \{\beta \in \mathbb{R}^p : \sum_{i=1}^p \beta_i = 1, \; \beta_i \ge 0\}$. Then, the set $\{F^* + z^* I_p \beta : \beta \in B\}$, where $I_p$ denotes the $p \times p$ identity matrix, is referred to as the Tangent Hull (TH).

The TH is represented by a dotted line segment in Fig. 4 for a biobjective problem. An advantage of the TH over the CHIM is that the vertices of the TH are distinct, whereas the vertices of the CHIM may coincide. As shown in Fig. 2, the CHIM may consist only of the line segment AC, whereas the TH is the convex hull of the triangle ABC.
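Under Definition 3.3, a reference point on the TH is $r = F^* + z^* I_p \beta$, i.e. componentwise $F_i^* + z^* \beta_i$; with the shadow minimum shifted to the origin, this reduces to $z^* \beta$. A sketch with illustrative names:

```python
def th_reference_point(z_star, beta, f_star=None):
    # r = F* + z* * I_p * beta, i.e. componentwise F*_i + z* * beta_i
    if f_star is None:
        f_star = [0.0] * len(beta)   # shadow minimum shifted to the origin
    return tuple(fs + z_star * b for fs, b in zip(f_star, beta))
```

E.g., with $z^* = 2$ and $\beta = (0.5, 0.5, 0)$, the reference point is $(1.0, 1.0, 0.0)$.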

Each iteration of MULTIMADS consists of two phases. First, a convex combination vector $\beta \in B$ is generated to select a reference point $r$ from the TH. We use the strategy presented in Section 2.1 to generate an even spread of combination vectors $\beta \in B$ with equal stepsize [13]. The second phase consists of solving a single-objective formulation $\bar{R}_r$ parameterized with respect to $r$. Finally, a list $X_L$ of nondominated points is obtained from the set of all trial mesh points generated by MADS in the previous steps. A general description of MULTIMADS is presented in Fig. 5.
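Putting the pieces together, one MULTIMADS sweep over the reference points can be sketched as the loop below. `solve_with_mads` is a stand-in for a MADS run on the distance formulation (here it may be any callable returning the objective vectors of the trial points), and the archive update mirrors the nondominated filtering step.

```python
def update_archive(archive, candidates):
    # merge new trial points into the nondominated archive X_L
    pool = archive + candidates
    def dominates(fu, fv):
        return all(a <= b for a, b in zip(fu, fv)) and any(a < b for a, b in zip(fu, fv))
    return [p for p in pool if not any(dominates(q, p) for q in pool if q is not p)]

def multimads_pass(reference_points, solve_with_mads):
    # one sweep: for each reference point r on the TH, collect the trial
    # points produced for the corresponding distance formulation
    archive = []
    for r in reference_points:
        archive = update_archive(archive, solve_with_mads(r))
    return archive
```

In a real run, `solve_with_mads` would invoke a budgeted MADS solve of the distance formulation at `r`; here it is a hypothetical hook so the bookkeeping can be tested in isolation.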

3.3. Convergence analysis

The single-objective optimization problems generated by MULTIMADS are solved by MADS. The MADS convergence analysis in [5] gives conditions under which it generates Clarke stationary points and KKT points. The convergence analysis is hierarchical: it starts out without any assumptions on the objective function, then adds local Lipschitz continuity, and culminates with local strict differentiability.

Fig. 4. The dotted line represents the TH of a biobjective problem.

Fig. 5. Scheme of the MULTIMADS algorithm for multiobjective programming.

The MULTIMADS convergence analysis relies on that of MADS. The following theorem gives conditions under which MULTIMADS produces points satisfying the necessary optimality conditions for multiobjective optimization [12]. We denote by $R_r$ a single-objective formulation of MOP at some reference point $r \in \mathbb{R}^p$, as defined in Definition 2.2.

Theorem 3.4. Let $f_i$, for $i = 1, 2, \ldots, p$, be Lipschitz near a Clarke stationary point $x \in X$ generated by MADS applied to a single-objective distance formulation $\bar{R}_r$ of MOP at some reference point $r \in \mathbb{R}^p$. If the hypertangent cone to $X$ at $x$ is nonempty, then

for any $d \in T_X^{Cl}(x)$, there exists $j \in \{1, 2, \ldots, p\}$ such that $f_j^\circ(x; d) \ge 0$,

where $T_X^{Cl}(x)$ is the Clarke tangent cone to $X$ at $x$.

Proof. The Clarke stationary point $x$ produced by MADS satisfies $\bar{\psi}_r^\circ(x; d) \ge 0$ for all $d \in T_X^{Cl}(x)$. The second condition appearing in the definition of a single-objective formulation (Definition 2.2) ensures that for any $d \in T_X^{Cl}(x)$, there exists an index $j \in \{1, 2, \ldots, p\}$ for which $f_j^\circ(x; d) \ge 0$. □

A corollary to this result is derived when all $f_i$ are strictly differentiable at $x$, for $i = 1, 2, \ldots, p$.

Corollary 3.5. Let $f_1, f_2, \ldots, f_p$ and $\psi_r$ be strictly differentiable at a KKT point $x \in X$ generated by MADS applied to a single-objective formulation $\bar{R}_r$ of MOP at some reference point $r \in \mathbb{R}^p$. If the hypertangent cone to $X$ at $x$ is nonempty, then $x$ satisfies:

for any $d \in T_X^{Cl}(x)$, there exists $j \in \{1, 2, \ldots, p\}$ such that $\nabla f_j(x)^T d \ge 0$,

where $T_X^{Cl}(x)$ is the Clarke tangent cone to $X$ at $x$.



Proof. Let $d \in T_X^{Cl}(x)$. According to [5], $x$ is a KKT stationary point of $\bar{R}_r$ on $X$. Hence, $\nabla \psi_r(x)^T d \ge 0$. The second condition appearing in the definition of a single-objective formulation (Definition 2.2) ensures that for any $d \in T_X^{Cl}(x)$, there exists an index $j \in \{1, 2, \ldots, p\}$ for which $\nabla f_j(x)^T d \ge 0$. □

4. Numerical results

The performance of MULTIMADS is evaluated using test problems from Deb et al. [15], on a problem with a discontinuous objective and a discontinuous Pareto front, and finally on a problem from chemical engineering. The simplicity of construction, the scalability to any number of variables and objectives, the a priori knowledge of the Pareto front, and the introduction of various complexities are the main features of the test suite [15]. Indeed, the problems test the ability of an algorithm to overcome different landscape complexities, and they introduce specific difficulties both in converging to the Pareto front and in maintaining a well-spread distribution of optimal solutions.

Fig. 6. Hyperplane global Pareto front: MULTIMADS with 30,000 function evaluations.

We test MULTIMADS on three bound-constrained problems from [15] with different Pareto front landscapes: a hyperplane, a sphere, and a one-dimensional curve in $\mathbb{R}^3$. For clarity, we illustrate our approach using three objective functions. The number of decision variables is set to 12 for the problems from [15] unless indicated explicitly, is set to 2 for the problem with a discontinuous Pareto front, and equals 8 in the chemical engineering problem. The test problems solved in [8] have only two decision variables.


C. Audet et al. / European Journal of Operational Research 204 (2010) 545–556 551

The MULTIMADS algorithm requires very few user-defined parameters. In all runs, the default MADS parameters of the NOMAD 2.0 software [1] are used, including an initial mesh size parameter of 0.1, one tenth of the variable range. In order to diversify the starting point selection for the single-objective optimizations, a 50-point Latin hypercube sampling is performed in the initialization step. This strategy generates trial points spread across the feasible region. For all problems, the scaling factors $s_i$ are fixed at 1 since the objective function values share the same magnitude. For each test problem, MULTIMADS calls MADS 29 times, including 4 calls during the initialization step: three times to compute the shadow minimum and once to construct the TH. In addition, 25 different single-objective distance formulations are solved using MADS. Each run of MADS terminates after 1000 evaluations, except in the initialization step, where 2000 evaluations are used in order to construct the TH. Using this strategy, MULTIMADS performs a total of 30,000 objective function evaluations (the budget suggested by Deb et al. [15] to solve the test problems using evolutionary algorithms).

Fig. 7. Spherical global Pareto front: MULTIMADS with 30,000 function evaluations.
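The evaluation budget above can be checked with a line of arithmetic. This is our own sketch of the bookkeeping described in the text; the variable names are not part of MULTIMADS.

```python
# Budget accounting for one MULTIMADS run, as described above:
# 29 MADS calls = 3 shadow-minimum runs + 1 TH-construction run
#               + 25 single-objective distance formulations.
# All runs are capped at 1000 evaluations except the TH-construction
# run, which uses 2000.
runs_at_1000 = 3 + 25        # shadow minima + distance formulations
th_run = 2000                # initialization run that constructs the TH
total = runs_at_1000 * 1000 + th_run
print(total)                 # 30000, the budget suggested in [15]
```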

For each test problem except the problem with a discontinuous Pareto front, a figure containing twelve graphs is presented, three for each distance norm: the infinity norm $L_\infty$, the Euclidean norm $L_2$ and the taxicab norm $L_1$, and four of them for


different graph views in the $(f_1, f_2, f_3)$, $(f_1, f_2)$, $(f_1, f_3)$ and $(f_2, f_3)$ spaces. Each graph represents the function values of all trial points generated by MULTIMADS using the symbol 'o'. For the problem with a discontinuous Pareto front, a figure containing one graph for each distance norm is presented.

4.1. A problem with a planar Pareto front

The problem with a planar Pareto front is presented as follows:

\[
\mathrm{HP}:\ \min_{x \in X}
\begin{cases}
f_1(x) = \tfrac{1}{2}\, x_1 x_2 (1 + g(x)), \\
f_2(x) = \tfrac{1}{2}\, x_1 (1 - x_2)(1 + g(x)), \\
f_3(x) = \tfrac{1}{2}\, (1 - x_1)(1 + g(x)),
\end{cases}
\]

Fig. 8. One-dimensional Pareto front: MULTIMADS with 30,000 function evaluations.

where $g(x) = 100\left(10 + \sum_{i=3}^{12}\left((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\right)\right)$ and $X = \{x \in \mathbb{R}^{12} : 0 \leq x_i \leq 1 \text{ for } i = 1, 2, \ldots, 12\}$. The set of Pareto solutions is
\[
X_P = \{x \in \mathbb{R}^{12} : 0 \leq x_1 \leq 1,\ 0 \leq x_2 \leq 1,\ x_i = 0.5 \text{ for all } i = 3, 4, \ldots, 12\}.
\]
The Pareto front is the simplex $\{y \geq 0 : \sum_{i=1}^{3} y_i = 0.5\}$. The multimodal function $g$ makes it difficult to converge to the Pareto front: the search space contains $(11^{10} - 1)$ local Pareto-optimal fronts. Results obtained by applying MULTIMADS are illustrated in Fig. 6 with four different graph views of the Pareto front approximation, using the three norms in the single-objective reformulation.
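The structure of HP can be illustrated with a short script. This is our own sketch, not part of the MULTIMADS implementation: on the Pareto set $X_P$ the multimodal term $g$ vanishes, so the image of any such point lies on the simplex $f_1 + f_2 + f_3 = 0.5$.

```python
import math

def g(x):
    # multimodal term of problem HP, as defined above
    return 100 * (10 + sum((xi - 0.5) ** 2 - math.cos(20 * math.pi * (xi - 0.5))
                           for xi in x[2:]))

def F_HP(x):
    gx = g(x)
    return (0.5 * x[0] * x[1] * (1 + gx),
            0.5 * x[0] * (1 - x[1]) * (1 + gx),
            0.5 * (1 - x[0]) * (1 + gx))

# On the Pareto set X_P (x_i = 0.5 for i >= 3), g vanishes and
# f1 + f2 + f3 = 0.5 for any values of x1, x2.
x = [0.3, 0.7] + [0.5] * 10
f = F_HP(x)
print(g(x), sum(f))   # g(x) = 0.0, sum(f) ≈ 0.5
```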


The results under the three distance norms are comparable. Indeed, all nondominated points $Y_L$ found by MULTIMADS are close to the global Pareto front $Y_P$. Furthermore, MULTIMADS achieves a well spread set of nondominated points on the global Pareto front for each formulation.

4.2. A problem with Pareto front on a sphere

The problem with a Pareto front on a sphere is presented as follows:

\[
\mathrm{SP}:\ \min_{x \in X}
\begin{cases}
f_1(x) = \cos\!\left(\tfrac{x_1 \pi}{2}\right)\cos\!\left(\tfrac{x_2 \pi}{2}\right)(1 + g(x)), \\
f_2(x) = \cos\!\left(\tfrac{x_1 \pi}{2}\right)\sin\!\left(\tfrac{x_2 \pi}{2}\right)(1 + g(x)), \\
f_3(x) = \sin\!\left(\tfrac{x_1 \pi}{2}\right)(1 + g(x)),
\end{cases}
\]
where $g(x) = 100\left(10 + \sum_{i=3}^{12}\left((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\right)\right)$ and $X = \{x \in \mathbb{R}^{12} : 0 \leq x_i \leq 1 \text{ for } i = 1, 2, \ldots, 12\}$. The set of Pareto solutions is

\[
X_P = \{x \in \mathbb{R}^{12} : 0 \leq x_1 \leq 1,\ 0 \leq x_2 \leq 1,\ x_i = 0.5 \text{ for all } i = 3, 4, \ldots, 12\}.
\]
The Pareto front is the first octant of the unit sphere $\sum_{i=1}^{3} y_i^2 = 1$. As in the hyperplane problem of Section 4.1, the multimodal function $g$ implies that the search space contains $(11^{10} - 1)$ local Pareto-optimal fronts. Results obtained by applying MULTIMADS are illustrated in Fig. 7 with four different graph views of the Pareto front approximation, using the three norms in the single-objective reformulation. The best results are obtained using the $L_\infty$ norm. With this formulation, MULTIMADS generates a large set of well spread nondominated points $Y_L$ close to the global Pareto front $Y_P$.
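The spherical geometry can be checked numerically with our own short sketch (not part of MULTIMADS): on $X_P$ the term $g$ vanishes, and the image of any Pareto solution satisfies $f_1^2 + f_2^2 + f_3^2 = 1$.

```python
import math

def g(x):
    # same multimodal term as in problem HP
    return 100 * (10 + sum((xi - 0.5) ** 2 - math.cos(20 * math.pi * (xi - 0.5))
                           for xi in x[2:]))

def F_SP(x):
    c = 1 + g(x)
    a, b = x[0] * math.pi / 2, x[1] * math.pi / 2
    return (math.cos(a) * math.cos(b) * c,
            math.cos(a) * math.sin(b) * c,
            math.sin(a) * c)

# On X_P, g = 0 and the image lies on the first octant of the unit sphere.
f1, f2, f3 = F_SP([0.25, 0.6] + [0.5] * 10)
print(f1 * f1 + f2 * f2 + f3 * f3)   # ≈ 1.0
```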

Fig. 9. One-dimensional Pareto front with different numbers of decision variables.

Fig. 10. Discontinuous Pareto front: MULTIMADS with 30,000 function evaluations.

4.3. A problem with a one-dimensional Pareto front

The problem with a one-dimensional Pareto front is presented as follows:

\[
\mathrm{CP}:\ \min_{x \in X}
\begin{cases}
f_1(x) = \cos\!\left(\tfrac{x_1 \pi}{2}\right)\cos\!\left(\tfrac{\pi}{4(1 + g(x))}\left(1 + 2 g(x) x_2\right)\right)(1 + g(x)), \\
f_2(x) = \cos\!\left(\tfrac{x_1 \pi}{2}\right)\sin\!\left(\tfrac{\pi}{4(1 + g(x))}\left(1 + 2 g(x) x_2\right)\right)(1 + g(x)), \\
f_3(x) = \sin\!\left(\tfrac{x_1 \pi}{2}\right)(1 + g(x)),
\end{cases}
\]
where $g(x) = \sum_{i=3}^{12} (x_i - 0.5)^2$ and $X = \{x \in \mathbb{R}^{12} : 0 \leq x_i \leq 1 \text{ for } i = 1, 2, \ldots, 12\}$. The set of Pareto solutions is

\[
X_P = \{x \in \mathbb{R}^{12} : 0 \leq x_1 \leq 1,\ 0 \leq x_2 \leq 1,\ x_i = 0.5 \text{ for all } i = 3, 4, \ldots, 12\}.
\]

The Pareto front $Y_P$ is a curve belonging to the sphere $\sum_{i=1}^{3} y_i^2 = 1$. The problem provides a simple way to verify the accuracy of the results generated by an algorithm. Indeed, a two-dimensional plot of $Y_P$ with $f_3$ and either $f_1$ or $f_2$ traces an elliptical curve, whereas the plot of $Y_P$ in the $(f_1, f_2)$ space shows a straight line. Results obtained by applying MULTIMADS are illustrated in Fig. 8 with four different views of the Pareto front approximation, using the three norms in the single-objective reformulation. The figure shows that all nondominated points $Y_L$ found by MULTIMADS are close to the global Pareto front $Y_P$. With the exception of the $L_\infty$ norm, MULTIMADS achieves a well spread set of nondominated points on the global Pareto front.
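As with the previous problems, the claimed geometry can be checked with our own short sketch: on $X_P$ we have $g(x) = 0$, so the inner angle reduces to $\pi/4$, giving $f_1 = f_2$ (the straight line in the $(f_1, f_2)$ space) while the image stays on the unit sphere.

```python
import math

def F_CP(x):
    # objectives of problem CP as defined above
    g = sum((xi - 0.5) ** 2 for xi in x[2:])
    t = math.pi / (4 * (1 + g)) * (1 + 2 * g * x[1])
    c = 1 + g
    return (math.cos(x[0] * math.pi / 2) * math.cos(t) * c,
            math.cos(x[0] * math.pi / 2) * math.sin(t) * c,
            math.sin(x[0] * math.pi / 2) * c)

# On X_P, g = 0, hence t = pi/4: f1 = f2 and f1^2 + f2^2 + f3^2 = 1.
f1, f2, f3 = F_CP([0.4, 0.9] + [0.5] * 10)
print(abs(f1 - f2) < 1e-9, abs(f1 ** 2 + f2 ** 2 + f3 ** 2 - 1) < 1e-9)
```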

In order to study the sensitivity of the algorithm to the problem size, we vary the number of decision variables. Results obtained by applying MULTIMADS with 6, 12 and 18 variables are illustrated in Fig. 9 under the $L_2$ norm. The plots show the approximation of the one-dimensional Pareto front in $\mathbb{R}^3$. As expected, the best results are obtained with the smallest number of variables.


4.4. A problem with a discontinuous Pareto front

The problem with a discontinuous Pareto front is presented as follows:

\[
\mathrm{DP}:\ \min_{x \in X}
\begin{cases}
f_1(x) = x_1, \\
f_2(x) = g(x_2)\, r(x_1, x_2), \\
f_3(x) = \begin{cases} 0 & \text{if } x_1 \leq 0.5, \\ 1 & \text{otherwise,} \end{cases}
\end{cases}
\]
where $g(x_2) = 1 + 10 x_2$, $r(x_1, x_2) = 1 - \left(\tfrac{x_1}{g(x_2)}\right)^2 - \tfrac{x_1}{g(x_2)} \sin(8\pi x_2)$ and $X = \{x \in \mathbb{R}^2 : 0 \leq x_i \leq 1 \text{ for } i = 1, 2\}$. DP is built from the discontinuous biobjective problem proposed in [14] by adding the objective function $f_3$.
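A minimal sketch of DP (our own code, not part of MULTIMADS) makes the source of the discontinuity visible: $f_3$ is a step function of $x_1$.

```python
import math

def F_DP(x1, x2):
    # objectives of problem DP as defined above
    g = 1 + 10 * x2
    r = 1 - (x1 / g) ** 2 - (x1 / g) * math.sin(8 * math.pi * x2)
    return (x1, g * r, 0.0 if x1 <= 0.5 else 1.0)

# f3 jumps from 0 to 1 as x1 crosses 0.5, which splits the image of the
# feasible set across two hyperplanes in the objective space.
print(F_DP(0.5, 0.0)[2], F_DP(0.6, 0.0)[2])   # 0.0 1.0
```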

The Pareto front $Y_P$ consists of four disjoint regions located on two distinct hyperplanes:

\[
y_2 = y_1 - y_1^2 - y_1 \sin(8\pi y_1), \qquad
y_3 = \begin{cases} 0 & \text{if } y_1 \leq 0.5, \\ 1 & \text{otherwise.} \end{cases}
\]
The difficulty illustrated in Fig. 2 is also present in this example. Indeed, the feasible point that minimizes $f_1$ also minimizes $f_3$, and therefore the CHIM has two of its three points coinciding. Furthermore, one can see that the equality constraints added in NBIb will not lead to Pareto points.

Fig. 11. Pareto front of the styrene problem: MULTIMADS with 30,000 MADS evaluations using the $L_1$ norm.

Table 1
Objectives and constraints of the styrene problem.

Objective $-f_1$: Maximize the net present value of the project.
Objective $-f_2$: Maximize the purity of produced styrene.
Objective $-f_3$: Maximize the overall ethylbenzene conversion into styrene.
Chemical process constraints: Five industrial constraints that depend on the blocks structure and environmental regulations.
Economic constraints: Four constraints relating to payout time, cashflow, investment and annual costs.

Results obtained by applying MULTIMADS to DP are illustrated in Fig. 10. The thin curves represent the image of the functions on the domain X (including dominated points) and the dots form the Pareto front approximation using the three norms in the single-objective reformulation. Both the $L_2$ and $L_1$ norms successfully generated points in each of the four disjoint regions forming the Pareto front. The best coverage is obtained by the $L_2$ norm. Results using the $L_\infty$ norm are not satisfactory. This is because, in this example, the third function returns only two values, 0 and 1. Therefore, for a


reference point $r = (r_1, r_2, r_3)$, the value $\mathrm{dist}(\partial D, F(x)) = \min\{|f_i(x) - r_i| : i \in \{1, 2, 3\}\}$ is very often equal to $|f_3(x) - r_3| \approx \tfrac{1}{2}$ when $r_3 \approx \tfrac{1}{2}$. It follows that when $r_3$ is close to $\tfrac{1}{2}$, the objective function value of the single-objective reformulation is often driven solely by $r_3$, leaving no discrimination for the values of the two other objective functions and for $x$. This explains why the infinity norm was unsuccessful in generating Pareto points in the central area.
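The degeneracy can be reproduced with a toy computation. This is our own sketch; the reference point and image values below are hypothetical, chosen only to mimic the situation described above.

```python
# With the infinity norm, the single-objective value is
# dist = min_i |f_i - r_i|. Since f3 only takes the values 0 and 1,
# a reference point with r3 = 1/2 frequently makes the minimum equal
# to |f3 - r3| = 1/2, regardless of f1 and f2.
def dist_to_reference(f, r):
    return min(abs(fi - ri) for fi, ri in zip(f, r))

r = (0.625, 0.75, 0.5)       # hypothetical reference point, r3 = 1/2
f_a = (0.0, 0.125, 0.0)      # two images that differ in f1 and f2 ...
f_b = (0.125, 0.0, 0.0)
# ... yet receive the same value: the reformulation cannot tell them apart.
print(dist_to_reference(f_a, r), dist_to_reference(f_b, r))   # 0.5 0.5
```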

4.5. An engineering problem: Styrene process optimization

The test examples presented in the previous sections do not belong to the class of problems that our method targets. Nonetheless, we have presented numerical results because the solution produced by the algorithm can be compared to the actual Pareto front. As mentioned in the introduction, our method targets blackbox optimization problems, such as the one presented in this section.

We consider a styrene production process simulation optimization problem from chemical engineering. The problem was presented in [3] and recently analyzed in [2]. Optimization of chemical processes has previously been studied in [10,17]. The process is composed of four steps: reactant preparation (pressure rise and evaporation), catalytic reactions [23], styrene recovery (first distillation), and benzene recovery (second distillation). The simulator for this process is based on the Sequential Modular Simulation (SMS) paradigm, widely used to model chemical processes [16,22,24]. The process is divided into several blocks. The simulator defines the blackbox for our algorithm MULTIMADS: it returns the function values of the problem using classical numerical methods. Consequently, the constraints defining the feasible region and the objective functions are nonsmooth. Objectives and constraints are given in Table 1.

The algorithm in [3] incorporates the VNS metaheuristic [20] in the search step of MADS to solve the single-objective optimization problem in which $f_1$ is the objective function and $f_2$ and $f_3$ are treated as constraints through upper bounds. That approach requires more than 10,000 function evaluations. In addition, it uses surrogate functions [4,9] to guide the exploration of the space of variables.

Table 2
Pareto front approximation for the styrene problem. Solutions 1-22 were generated by MULTIMADS; the last row is the solution presented in [3].

      f1 × 10^7    f2            f3
  1   -2.72399     -0.00884075   -0.601
  2   -2.7436      -0.00904706   -0.595667
  3   -2.75015     -0.00904684   -0.595333
  4   -2.75986     -0.00863101   -0.601
  5   -2.82753     -0.00863009   -0.6
  6   -2.83072     -0.00841996   -0.599667
  7   -2.87616     -0.00904596   -0.594
  8   -2.89155     -0.00862948   -0.599333
  9   -2.89642     -0.00862917   -0.599
 10   -2.90846     -0.00883838   -0.598
 11   -2.92797     -0.00904552   -0.593333
 12   -2.96815     -0.00883548   -0.594333
 13   -2.97364     -0.00862856   -0.598333
 14   -3.01728     -0.00841751   -0.597333
 15   -3.03039     -0.00862487   -0.594333
 16   -3.07219     -0.00753947   -0.573333
 17   -3.09835     -0.0075373    -0.572
 18   -3.1046      -0.00732483   -0.572333
 19   -3.1046      -0.00732483   -0.572333
 20   -3.1046      -0.00732483   -0.572333
 21   -3.1046      -0.00732483   -0.572333
 22   -3.10636     -0.00732425   -0.572
[3]   -3.35391     -0.00670357   -0.580333

We propose to generate a wide range of Pareto points in order to present the decision maker with a set of trade-off solutions that have different characteristics. The decision maker can then choose among these options the one that best matches the weights assigned to each criterion. Results using the $L_1$ norm are given in Fig. 11 in the $(f_1, f_2, f_3)$, $(f_1, f_2)$, $(f_1, f_3)$ and $(f_2, f_3)$ objective function spaces. The top plot of Fig. 11 represents the 22 nondominated solutions generated by MULTIMADS in the objective space.

The approximation of the Pareto front and the single-objective solution found in [3] are listed in Table 2. Observe that the nondominated solutions appear clustered into two subsets. The first subset consists of the first 15 entries of the table. Their values of $f_2$ range between -0.00905 and -0.00842, and their values of $f_3$ range between -0.601 and -0.593. The spread of the values in $f_1$ is much wider. The second subset is composed of solutions 16-22. The variation in the function values is much smaller than in the first subset. The solutions belonging to the second subset have low values of $f_1$ at the expense of high values of $f_2$ and $f_3$. The data also reveal a gap between the two subsets: no nondominated solution has a value of $f_2$ between -0.00842 and -0.00754, nor a value of $f_3$ between -0.593 and -0.573. In summary, the plots suggest that the Pareto front is composed of two regions. The first region favors $f_2$ and $f_3$, and the second region favors $f_1$.
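The clustering and the gap can be verified directly from the Table 2 data. The sketch below uses our own helper `dominates`; it checks that the 22 solutions are mutually nondominated and recovers the $f_2$ gap between the two subsets.

```python
# (f1 as listed, f2, f3) from Table 2, rows 1-22 (all objectives minimized).
sols = [
    (-2.72399, -0.00884075, -0.601),    (-2.7436,  -0.00904706, -0.595667),
    (-2.75015, -0.00904684, -0.595333), (-2.75986, -0.00863101, -0.601),
    (-2.82753, -0.00863009, -0.6),      (-2.83072, -0.00841996, -0.599667),
    (-2.87616, -0.00904596, -0.594),    (-2.89155, -0.00862948, -0.599333),
    (-2.89642, -0.00862917, -0.599),    (-2.90846, -0.00883838, -0.598),
    (-2.92797, -0.00904552, -0.593333), (-2.96815, -0.00883548, -0.594333),
    (-2.97364, -0.00862856, -0.598333), (-3.01728, -0.00841751, -0.597333),
    (-3.03039, -0.00862487, -0.594333), (-3.07219, -0.00753947, -0.573333),
    (-3.09835, -0.0075373,  -0.572),    (-3.1046,  -0.00732483, -0.572333),
    (-3.1046,  -0.00732483, -0.572333), (-3.1046,  -0.00732483, -0.572333),
    (-3.1046,  -0.00732483, -0.572333), (-3.10636, -0.00732425, -0.572),
]

def dominates(a, b):
    # a dominates b (minimization): no worse in every objective and
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# No solution in the list dominates another one.
assert not any(dominates(a, b) for a in sols for b in sols)

# The f2 gap between the first subset (rows 1-15) and the second (16-22).
gap = (max(s[1] for s in sols[:15]), min(s[1] for s in sols[15:]))
print(gap)   # roughly (-0.00842, -0.00754): no f2 value falls in between
```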

Moreover, the objective value $f_1$ of the solution proposed in [3] is slightly better than those generated by MULTIMADS, whereas the objective values $f_2$ and $f_3$ of that solution are greater than the values attained by most of the solutions provided by MULTIMADS.

5. Discussion

We proposed a new solution approach for the constrained multiobjective optimization problem MOP that ensures some first-order necessary optimality conditions for nonsmooth functions. Our algorithm MULTIMADS combines strategies from both the NBI and BIMADS algorithms. The approach is not limited to blackbox optimization, but in the present work we chose to measure its performance on such problems using our MADS single-objective optimization method. We believe that applying the approach to smooth problems would benefit from an exploitation of their smoothness properties.

In the present work, MULTIMADS was applied to three problems from the literature with different Pareto front landscapes. MULTIMADS was also applied to a blackbox styrene production process simulation engineering problem.

The test problems reveal that the three distance norms used in MULTIMADS are comparable. Nevertheless, results using the $L_2$ norm are better on the problem with a discontinuous Pareto front, whereas results using the $L_\infty$ norm are slightly better on the problem having a spherical Pareto front, and worse than the $L_1$ and $L_2$ norms on the problem with a one-dimensional Pareto front.

In future work, we plan to test other direct search algorithms to solve the single-objective formulations. In particular, we will use the recent deterministic algorithm ORTHOMADS [2] instead of LTMADS, which has a random component. Another research avenue consists of extending our algorithm to larger problems using the parallel space decomposition approach proposed in [7].

References

[1] M.A. Abramson, C. Audet, G. Couture, J.E. Dennis Jr., S. Le Digabel, The NOMAD project. Software available at <http://www.gerad.ca/nomad>.
[2] M.A. Abramson, C. Audet, J.E. Dennis Jr., S. Le Digabel, OrthoMADS: A deterministic MADS instance with orthogonal directions, SIAM Journal on Optimization 20 (2) (2009) 948-966.


[3] C. Audet, V. Béchard, S. Le Digabel, Nonsmooth optimization through mesh adaptive direct search and variable neighborhood search, Journal of Global Optimization 41 (2) (2008) 299-318.
[4] C. Audet, A.J. Booker, J.E. Dennis Jr., P.D. Frank, D.W. Moore, A surrogate-model-based method for constrained optimization, presented at the 8th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 2000.
[5] C. Audet, J.E. Dennis Jr., Mesh adaptive direct search algorithms for constrained optimization, SIAM Journal on Optimization 17 (1) (2006) 188-217.
[6] C. Audet, J.E. Dennis Jr., S. Le Digabel, Globalization strategies for mesh adaptive direct search, Les Cahiers du GERAD G-2008-74, GERAD, 2008; Computational Optimization and Applications, in press, doi:10.1007/s10589-009-9266-1.
[7] C. Audet, J.E. Dennis Jr., S. Le Digabel, Parallel space decomposition of the mesh adaptive direct search algorithm, SIAM Journal on Optimization 19 (3) (2008) 1150-1170.
[8] C. Audet, G. Savard, W. Zghal, Multiobjective optimization through a series of single-objective formulations, SIAM Journal on Optimization 19 (1) (2008) 188-210.
[9] A.J. Booker, J.E. Dennis Jr., P.D. Frank, D.B. Serafini, V. Torczon, M.W. Trosset, A rigorous framework for optimization of expensive functions by surrogates, Structural and Multidisciplinary Optimization 17 (1998) 1-13.
[10] R.O. Bowden, J.D. Hall, Simulation optimization research and development, in: Simulation Conference, 1998, pp. 1693-1698.
[11] F.H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, 1983. Reissued in 1990 by SIAM Publications, Philadelphia, as vol. 5 of the series Classics in Applied Mathematics.
[12] H.W. Corley, Optimality conditions for maximizations of set-valued functions, Journal of Optimization Theory and Applications 58 (1) (1988) 1-10.
[13] I. Das, J.E. Dennis Jr., Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems, SIAM Journal on Optimization 8 (3) (1998) 631-657.
[14] K. Deb, Nonlinear goal programming using multiobjective genetic algorithms, Journal of the Operational Research Society 52 (2001) 291-302.
[15] K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable test problems for evolutionary multiobjective optimization, in: A. Abraham, R. Jain, R. Goldberg (Eds.), Evolutionary Multiobjective Optimization: Theoretical Advances and Applications, Springer, 2005, pp. 105-145 (Chapter 6).
[16] J.M. Douglas, Conceptual Design of Chemical Processes, McGraw-Hill, New York, 1988.
[17] D.M. Himmelblau, T.F. Edgar, L.S. Lasdon, Optimization of Chemical Processes, McGraw-Hill, New York, 2003.
[18] J. Kim, S.K. Kim, A CHIM-based interactive Tchebycheff procedure for multiple objective decision making, Computers and Operations Research 33 (2006) 1557-1574.
[19] A.L. Marsden, M. Wang, J.E. Dennis Jr., P. Moin, Optimal aeroacoustic shape design using the surrogate management framework, Optimization and Engineering 5 (2) (2004) 235-262.
[20] P. Hansen, N. Mladenović, Variable neighborhood search: Principles and applications, European Journal of Operational Research 130 (3) (2001) 449-467.
[21] S. Ruzika, M.M. Wiecek, A survey of approximation methods in multiobjective programming, Technical Report 90, Department of Mathematics, University of Kaiserslautern, 2003.
[22] J.D. Seader, W.D. Seider, D.R. Lewin, Process Design Principles: Synthesis, Analysis and Evaluation, John Wiley and Sons, New York, 1999.
[23] J.D. Snyder, B. Subramaniam, A novel reverse flow strategy for ethylbenzene dehydrogenation in a packed-bed reactor, Chemical Engineering Science 49 (1994) 5585-5601.
[24] K.D. Timmerhaus, M.S. Peters, R.E. West, Plant Design and Economics for Chemical Engineers, fifth ed., McGraw-Hill, New York, 2003.
[25] P.L. Yu, Cone convexity, cone extreme points and nondominated solutions in decision problems with multiobjectives, Journal of Optimization Theory and Applications 14 (1974) 319-377.
[26] W. Zghal, C. Audet, G. Savard, A new multi-objective approach for the portfolio selection problem with skewness, Technical Report, Les Cahiers du GERAD G-2007-86, 2007.