A penalty-free method with line search for nonlinear equality constrained optimization

Applied Mathematical Modelling 37 (2013) 9934–9949. http://dx.doi.org/10.1016/j.apm.2013.05.037

Hengwu Ge, Zhongwen Chen*

School of Mathematics Science, Soochow University, Suzhou 215006, PR China

* Corresponding author. Tel.: +86 0512 68096057. E-mail address: [email protected] (Z. Chen). This work was supported by Chinese NSF Grant 11171247.


Article history: Received 9 January 2012; Received in revised form 18 March 2013; Accepted 31 May 2013; Available online 18 June 2013.

Keywords: Equality constraints; Line search; Penalty function; Filter; Convergence analysis and rate of convergence.

Abstract

A new line search method is introduced for solving nonlinear equality constrained optimization problems. It does not use any penalty function or a filter. At each iteration, the trial step is determined such that either the value of the objective function or the measure of the constraint violation is sufficiently reduced. Under usual assumptions, it is shown that every limit point of the sequence of iterates generated by the algorithm is feasible, and there exists at least one limit point that is a stationary point for the problem. A simple modification of the algorithm by introducing second order correction steps is presented. It is shown that the modified method does not suffer from the Maratos effect, so that it converges superlinearly. Preliminary numerical results are reported.

© 2013 Elsevier Inc. All rights reserved.

1. Introduction

In many optimization problems, the variables are interrelated by physical laws like the conservation of mass or energy, Kirchhoff's voltage and current laws, and other system equalities or inequalities that must be satisfied; see [1,2,11–13,17,22,25,29]. In this paper, we consider the following nonlinear programming problem with general nonlinear equality constraints:

$$\min\ f(x), \quad \text{s.t. } c(x) = 0, \qquad (1.1)$$

where $x \in \mathbb{R}^n$ and $f: \mathbb{R}^n \to \mathbb{R}$, $c: \mathbb{R}^n \to \mathbb{R}^m$ are twice continuously differentiable.

Many efficient penalty function methods exist for solving problem (1.1), for example, sequential unconstrained optimization methods based on various penalty functions, and sequential quadratic programming (SQP) methods that use either line search or trust-region strategies [24]. The effectiveness of these so-called penalty-type methods hinges on how well the initial penalty parameter is chosen and how "intelligently" it is updated during the course of minimization. To avoid the selection of a penalty parameter, some authors have studied techniques that dispense with the penalty function; see, for example, [4,7,8,14–16,20,26–28,30,31]. Methods that do not use any penalty function are called penalty-free-type methods. Filter methods are an important category of penalty-free-type methods; see, for example, [4,7,14–16,21,28,30,31].

Filter methods, which were introduced by Fletcher and Leyffer [15] in 1997, have been well studied and proven to be successful for solving constrained optimization problems. For example, Gould et al. [18] used filter methods to solve nonlinear equations and nonlinear least squares problems. Filter techniques have also been used to solve unconstrained problems [19]; in contrast to classical trust-region methods, they generate nonmonotone iterates with respect to the value of the objective function, and they have produced good numerical results. Chen [4,5], Gould [20], Ulbrich [27] and others presented penalty-free-type methods without a filter and proved their global convergence.

Penalty-free-type methods, which do not need any penalty function, have become one of the hot topics in nonlinear optimization. The underlying idea of this class of methods is that two goals determine whether a trial point is accepted: one is improving feasibility, and the other is reducing the value of the objective function. In order to improve these two goals proportionally, one must build a relation between reducing the value of the objective function and improving feasibility.

This paper gives a new method without a penalty function or a filter for the solution of (1.1), which belongs to the class of line search Newton–Lagrange methods for constrained optimization. The algorithm generates new iterates by solving linear systems of equations and performing line searches. The new method is motivated by Wächter et al. [30,31], who presented line search filter methods for nonlinear programming and analyzed the global and local convergence of their method. The main contribution of this paper is that the method presented here also uses a line search but does not use any penalty function or a filter; thus it does not need to store a filter set at every iteration. A trust-region framework is adopted in Ref. [5], and the acceptance criteria of the two algorithms are different. Under mild conditions, we analyze the global and local convergence of the algorithm presented.

Chin [8], Fletcher [15], Wächter [31] and others have observed that the filter technique, like $\ell_1$ penalty function methods, can suffer from the Maratos effect when the iterates are near a local solution. As a remedy, Fletcher and Leyffer propose to improve the search direction, if the full step is rejected, by means of a second order correction which aims to further reduce infeasibility. In order to prevent the Maratos effect, we also employ second order correction steps, but without using a penalty function or a filter. Under mild conditions, we analyze the local convergence of the algorithm with second order correction. The preliminary numerical results show that the algorithm is robust and efficient.

This paper is organized as follows. In Section 2, the formal algorithm is described. In Section 3, we prove that, under mild conditions, every limit point of the sequence of iterates generated by the algorithm is feasible, and there exists at least one limit point which is a stationary point for the problem. In Section 4, we study the local superlinear convergence of the algorithm with second order correction. Some numerical results for problems from [6] are reported in Section 5.

Throughout the paper, $\|\cdot\|$ denotes the Euclidean norm $\|\cdot\|_2$. For simplicity, we also use subscripts to denote functions evaluated at iterates, for example, $f_k = f(x_k)$, $c_k = c(x_k)$. Moreover, we denote

$$g(x) = \nabla f(x) \in \mathbb{R}^n, \qquad A(x) = (\nabla c_1(x), \nabla c_2(x), \ldots, \nabla c_m(x)) \in \mathbb{R}^{n \times m}.$$

2. The algorithm

The Karush–Kuhn–Tucker (KKT) conditions for the problem (1.1) are given by

$$g(x) + A(x)\lambda = 0, \qquad c(x) = 0, \qquad (2.1)$$

with the Lagrangian multipliers $\lambda$. Under linear independence of the constraint gradients $\nabla c_i(x)$, these are the first order optimality conditions for (1.1).

Given a starting point $x_0$, the proposed line search algorithm generates a sequence of improved estimates $x_k$ of the solution of problem (1.1). At the current iterate $x_k$, a search direction $d_k$ is computed from the linearization of the KKT conditions (2.1),

$$\begin{pmatrix} H_k & A_k \\ A_k^T & 0 \end{pmatrix} \begin{pmatrix} d_k \\ \lambda_k^+ \end{pmatrix} = -\begin{pmatrix} g_k \\ c_k \end{pmatrix}. \qquad (2.2)$$

Here, the symmetric matrix $H_k$ denotes the Hessian $\nabla^2_{xx} L(x_k, \lambda_k)$ of the Lagrangian function

$$L(x, \lambda) = f(x) + \lambda^T c(x)$$

of the problem (1.1), or an approximation to this Hessian. After a search direction $d_k$ has been computed, a step size $\alpha_k \in (0,1]$ is determined by a line search method, and then we obtain the next iterate $x_{k+1} = x_k + \alpha_k d_k$.

Now we decompose the step $d_k$ into orthogonal components

$$d_k = p_k + q_k,$$

where

$$q_k = Y_k \bar{q}_k, \qquad p_k = Z_k \bar{p}_k, \qquad (2.3)$$

and $Y_k$, $Z_k$ are obtained from a QR-factorization of the matrix $A_k$, i.e.,

$$A_k = [Y_k\ Z_k] \begin{pmatrix} R_k \\ 0 \end{pmatrix},$$

where $[Y_k\ Z_k] \in \mathbb{R}^{n \times n}$ is an orthogonal matrix and $R_k \in \mathbb{R}^{m \times m}$ is an upper triangular matrix. If the matrix $A_k$ has full rank, then it follows from (2.2) that

$$\bar{q}_k = -[A_k^T Y_k]^{-1} c_k, \qquad \bar{p}_k = -[Z_k^T H_k Z_k]^{-1} Z_k^T (g_k + H_k q_k). \qquad (2.4)$$
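For concreteness, the step computation (2.2)–(2.4) can be sketched in a few lines of Python with NumPy. This is only an illustrative sketch under the assumptions of this section (dense data, $A_k$ of full rank); the function and variable names are ours, not from the paper.

import numpy as np

def kkt_step(H, A, g, c):
    # Solve the linear system (2.2) for the search direction d_k and
    # the multiplier estimate lambda_k^+.
    n, m = A.shape
    K = np.block([[H, A], [A.T, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c]))
    return sol[:n], sol[n:]

def decompose_step(H, A, g, c):
    # Decomposition (2.3)-(2.4) of d_k via a full QR factorization
    # A_k = [Y_k Z_k] [R_k; 0].
    n, m = A.shape
    Q, _ = np.linalg.qr(A, mode='complete')
    Y, Z = Q[:, :m], Q[:, m:]
    q_bar = -np.linalg.solve(A.T @ Y, c)        # normal component, (2.4)
    q = Y @ q_bar
    p_bar = -np.linalg.solve(Z.T @ H @ Z, Z.T @ (g + H @ q))
    p = Z @ p_bar                               # tangential component
    return p, q, p_bar

With these quantities at hand, the two measures defined next are immediate: $h(x_k) = \|c(x_k)\|$ and $v(x_k) = \|\bar{p}_k\|$.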

We define the infeasibility measure and the optimality measure as follows:

$$h(x_k) = \|c(x_k)\|, \qquad v(x_k) = \|\bar{p}_k\|.$$

Then $h(x_k) + v(x_k) = 0$ implies that $x_k$ is a KKT point of the problem (1.1). At each iterate $x_k$, we hope that a trial step size $\alpha$ along the direction $d_k$ improves feasibility or provides sufficient reduction of the objective function. In other words, we require that $\alpha$ satisfies

$$h(x_k + \alpha d_k) \le (1 - \sigma_h) h(x_k), \quad \sigma_h \in (0,1), \qquad (2.5)$$

or, if

$$m_k(\alpha) < 0 \quad \text{and} \quad [-m_k(\alpha)]^{s_f} [\alpha]^{1 - s_f} > \delta [h(x_k)]^{s_h} \qquad (2.6)$$

holds, then

$$f(x_k + \alpha d_k) \le f(x_k) + \eta_f m_k(\alpha), \qquad (2.7)$$

where

$$m_k(\alpha) = \alpha g_k^T d_k \qquad (2.8)$$

is the linear model of the objective function $f$ in the direction $d_k$, and $\delta > 0$, $s_h > 1$, $s_f \ge 1$, $\eta_f \in (0, \tfrac{1}{2})$ are all fixed constants.

In some cases it is not possible to find a trial step size $\alpha_{k,l}$ that satisfies the above criteria. Hence, we define (see also [31])

$$\alpha_k^{\min} = \sigma_\alpha \cdot \begin{cases} \min\left\{ \sigma_h,\ \dfrac{\delta [h(x_k)]^{s_h}}{[-g_k^T d_k]^{s_f}} \right\}, & \text{if } g_k^T d_k < 0, \\ \sigma_h, & \text{otherwise}, \end{cases} \qquad (2.9)$$

where $\sigma_\alpha \in (0,1]$ is a fixed constant. If a trial step size satisfies $\alpha_{k,l} < \alpha_k^{\min}$, the algorithm goes to a feasibility restoration phase, in which it tries to find a new iterate $x_{k+1}$ that satisfies (2.5) by reducing the constraint violation with some iterative method. Note that the feasibility restoration phase may produce a u-stationary point $\tilde{x}$ ([4]), that is, $c(\tilde{x}) \ne 0$, $A(\tilde{x}) c(\tilde{x}) = 0$, which is a stationary point of the constraint violation, indicating to the user that the problem seems (at least locally) infeasible.

The detailed description of the algorithm is given as follows.

Algorithm 2.1

Initialization. Given $x_0 \in \mathbb{R}^n$ and constants $\epsilon, \sigma_h \in (0,1)$, $\beta \in (1 - \sigma_h, 1)$, $\delta > 0$, $\bar{\delta} > 1$, $s_h > 1$, $s_f \ge 1$, $\eta_f \in (0, \tfrac{1}{2})$, $0 < T_1 \le T_2 < \sigma_\alpha \le 1$. Calculate $f_0, c_0, h_0, v_0, A_0, g_0, H_0$. Set $F_0 = \bar{\delta} h_0 + 1$, $k := 0$.

while $h_k + v_k > \epsilon$ do
  Solve the linear system (2.2) to get the search direction $d_k$.
  Compute $\alpha_k^{\min}$. Set $\alpha_{k,0} = 1$, $l := 0$.
  while $l \ge 0$ do
    $x_k(\alpha_{k,l}) = x_k + \alpha_{k,l} d_k$.
    if $h(x_k(\alpha_{k,l})) > F_k$ then
      Choose $\alpha_{k,l+1} \in [T_1 \alpha_{k,l}, T_2 \alpha_{k,l}]$, $l := l + 1$. Continue.
    end if
    if $\alpha_{k,l} < \alpha_k^{\min}$ then
      Feasibility restoration phase: compute a new iterate $x_{k+1}$ by decreasing the infeasibility measure $h$ such that $x_{k+1}$ satisfies $h(x_{k+1}) \le (1 - \sigma_h) h_k$. Break.
    else
      if (2.6) and (2.7) both hold, or (2.6) does not hold but (2.5) holds then
        $\alpha_k = \alpha_{k,l}$, $x_{k+1} = x_k + \alpha_k d_k$. Break.
      else
        Choose $\alpha_{k,l+1} \in [T_1 \alpha_{k,l}, T_2 \alpha_{k,l}]$, $l := l + 1$. Continue.
      end if
    end if
  end while
  if (2.6) does not hold or the iteration comes from the restoration phase then
    $F_{k+1} = \min\{\bar{\delta} h_k, \beta F_k\}$.
  end if
  Update $H_k$ to $H_{k+1}$. $k := k + 1$.
end while
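The following Python sketch condenses one iteration of Algorithm 2.1 (the step computation, the minimum step size (2.9), the acceptance tests (2.5)–(2.7), and the update of $F_k$). It reuses kkt_step from the earlier sketch; restore stands in for an unspecified feasibility restoration procedure, and all names and default values (the latter mirroring Table 5.1) are our own illustrative choices, not the paper's implementation.

def one_iteration(x, F, f, c, grad_f, jac_c, H, restore,
                  sigma_h=0.01, sigma_a=0.8, beta=0.5, delta=0.01,
                  dbar=10.0, s_h=1.5, s_f=1.0, eta_f=0.01, T1=0.6):
    # One iteration of (a simplified) Algorithm 2.1; returns (x_{k+1}, F_{k+1}).
    # The update of H_k is omitted here.
    import numpy as np
    h = lambda y: np.linalg.norm(c(y))
    g, A = grad_f(x), jac_c(x)
    d, _ = kkt_step(H, A, g, c(x))               # direction from (2.2)
    gtd = float(g @ d)
    if gtd < 0:                                  # minimum step size (2.9)
        a_min = sigma_a * min(sigma_h, delta * h(x)**s_h / (-gtd)**s_f)
    else:
        a_min = sigma_a * sigma_h
    a = 1.0
    while True:
        if a < a_min:                            # give up: restoration phase
            x_new = restore(x, (1 - sigma_h) * h(x))
            return x_new, min(dbar * h(x), beta * F)
        x_t = x + a * d
        if h(x_t) <= F:                          # trial infeasibility capped by F_k
            m = a * gtd                          # linear model (2.8)
            switch = m < 0 and (-m)**s_f * a**(1 - s_f) > delta * h(x)**s_h  # (2.6)
            if switch and f(x_t) <= f(x) + eta_f * m:          # f-type step, (2.7)
                return x_t, F
            if not switch and h(x_t) <= (1 - sigma_h) * h(x):  # c-type step, (2.5)
                return x_t, min(dbar * h(x), beta * F)
        a *= T1                                  # backtrack (T1 = T2 here)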

Remark. Regarding the implementation of Algorithm 2.1, if $A_k$ is not of full rank, the search direction $d_k$ does not exist. In that case the algorithm goes to the feasibility restoration phase, in which it either obtains a new iterate $x_{k+1}$ that satisfies (2.5) or terminates at a u-stationary point (see also Section 5). To ease the understanding of Algorithm 2.1, a flow diagram is given in Fig. 1.

3. Global convergence

We now consider the convergence properties of Algorithm 2.1, supposing that it does not terminate after a finite number of iterations. It will be proved that Algorithm 2.1 is globally convergent under suitable assumptions. We state below some standard assumptions about the problem to be solved.

Assumption A.

(A1) $f(x)$ and $c_i(x)$ $(i = 1, 2, \ldots, m)$ are twice continuously differentiable for all $x \in \mathbb{R}^n$.
(A2) There exists a bounded closed convex set $\Omega$ such that $x_k \in \Omega$ for all $k$.
(A3) The matrix $A(x_k)$ has full rank for all $k$.
(A4) The matrix sequence $\{H_k\}$ is uniformly bounded and $\lambda_{\min}(Z_k^T H_k Z_k) \ge M_H > 0$, where the columns of $Z_k \in \mathbb{R}^{n \times (n-m)}$ form an orthonormal basis of the null space of $A_k^T$, and $\lambda_{\min}(Z_k^T H_k Z_k)$ denotes the smallest eigenvalue.

For convenience of statement, we define

$$\mathcal{F} = \{k \mid F_k = F_{k-1}\}, \qquad \mathcal{C} = \{k \mid F_k \ne F_{k-1}\}$$

and

$$\mathcal{R} = \{k \mid \text{the iterate } x_k \text{ is produced by the feasibility restoration phase}\}.$$

An iteration with $k \in \mathcal{F}$ is called an f-type iteration, which mainly improves the measure of optimality, whereas an iteration with $k \in \mathcal{C}$ is called a c-type iteration, which mainly reduces the infeasibility.

Fig. 1. A penalty-free algorithm.


Lemma 3.1. Under Assumption A, there exist constants $M_d, M_\lambda, M_m > 0$ such that

$$\|d_k\| \le M_d, \qquad \|\lambda_k^+\| \le M_\lambda, \qquad |m_k(\alpha)| \le \alpha M_m \qquad (3.1)$$

hold for all $k$, where $\alpha \in (0,1]$.

Proof. It follows from Assumptions (A3), (A4) and (2.2) that

$$\begin{pmatrix} d_k \\ \lambda_k^+ \end{pmatrix} = \begin{pmatrix} H_k & A_k \\ A_k^T & 0 \end{pmatrix}^{-1} \begin{pmatrix} -g_k \\ -c_k \end{pmatrix}.$$

By Assumptions (A1) and (A2), there exist constants $M_d, M_\lambda > 0$ such that $\|d_k\| \le M_d$ and $\|\lambda_k^+\| \le M_\lambda$. Moreover, there exists $M_m > 0$ such that

$$|m_k(\alpha)|/\alpha = |g_k^T d_k| \le M_m.$$

Thus, the result is proved. $\Box$

Lemma 3.2. Under Assumption A, if $\{x_{k_i}\}$ is an infinite subsequence of iterates which satisfies $v(x_{k_i}) \ge \epsilon$ with a constant $\epsilon > 0$ independent of $k_i$, then there exist constants $\epsilon_1, \epsilon_2 > 0$ such that, on the iterations that satisfy the condition $h(x_{k_i}) \le \epsilon_1$, we have the inequality

$$m_{k_i}(\alpha) \le -\alpha \epsilon_2, \quad \alpha \in (0,1]. \qquad (3.2)$$

Proof. It follows from (2.8), (2.3) and (2.4) that

$$\frac{m_{k_i}(\alpha)}{\alpha} = g_{k_i}^T d_{k_i} = g_{k_i}^T Z_{k_i} \bar{p}_{k_i} + g_{k_i}^T q_{k_i} = -\bar{p}_{k_i}^T Z_{k_i}^T H_{k_i} Z_{k_i} \bar{p}_{k_i} - \bar{p}_{k_i}^T Z_{k_i}^T H_{k_i} q_{k_i} + g_{k_i}^T q_{k_i}$$
$$\le -\beta_1 \|\bar{p}_{k_i}\|^2 + \beta_2 \|\bar{p}_{k_i}\| \|c_{k_i}\| + \beta_3 \|c_{k_i}\| \le v(x_{k_i}) \left( -\beta_1 \epsilon + \beta_2 h(x_{k_i}) + \frac{\beta_3 h(x_{k_i})}{\epsilon} \right),$$

where $\beta_1, \beta_2, \beta_3 > 0$ are constants independent of $k_i$. Let

$$\epsilon_1 = \frac{\beta_1 \epsilon^2}{2(\beta_3 + \beta_2 \epsilon)}.$$

Then, for $h(x_{k_i}) \le \epsilon_1$,

$$\frac{m_{k_i}(\alpha)}{\alpha} \le -\frac{1}{2} \beta_1 \epsilon\, v(x_{k_i}) \le -\frac{1}{2} \beta_1 \epsilon^2.$$

Let $\epsilon_2 = \frac{1}{2} \beta_1 \epsilon^2$. Then $m_{k_i}(\alpha) \le -\alpha \epsilon_2$. $\Box$

Lemma 3.3. Under Assumption A, there exist constants $C_h, C_f > 0$ independent of $k$ such that

$$|h(x_k + \alpha d_k) - (1 - \alpha) h_k| \le \alpha^2 C_h \|d_k\|^2, \qquad (3.3)$$
$$|f(x_k + \alpha d_k) - f(x_k) - m_k(\alpha)| \le \alpha^2 C_f \|d_k\|^2, \qquad (3.4)$$

hold for all $\alpha \in (0,1]$.

Proof. We have

$$c(x_k + \alpha d_k) = c_k + \alpha A_k^T d_k + O(\|\alpha d_k\|^2).$$

It follows from Assumptions (A1)–(A4) that there exists $C_h > 0$ such that

$$\|c_k + \alpha A_k^T d_k\| - \alpha^2 C_h \|d_k\|^2 \le h(x_k + \alpha d_k) = \|c(x_k + \alpha d_k)\| \le \|c_k + \alpha A_k^T d_k\| + \alpha^2 C_h \|d_k\|^2.$$

Since $A_k^T d_k = -c_k$ by (2.2), we have $\|c_k + \alpha A_k^T d_k\| = (1 - \alpha) h_k$, so that

$$(1 - \alpha) h_k - \alpha^2 C_h \|d_k\|^2 \le h(x_k + \alpha d_k) \le (1 - \alpha) h_k + \alpha^2 C_h \|d_k\|^2.$$

Therefore, (3.3) holds. Similarly, (3.4) holds. The lemma is proved. $\Box$

Lemma 3.4. Under Assumption A, if $h_k = 0$ and $F_k > 0$, then $F_{k+1} = F_k$.


Proof. Since Algorithm 2.1 does not stop when $h_k = 0$, we have $v(x_k) > \epsilon$. By Lemma 3.2, there exists $\epsilon_2 > 0$ such that $m_k(\alpha) \le -\alpha \epsilon_2$. By Algorithm 2.1, $\alpha_k^{\min} = 0$, which implies that Algorithm 2.1 does not enter the feasibility restoration phase at the current iterate $x_k$. Hence

$$[-m_k(\alpha_{k,l})]^{s_f} [\alpha_{k,l}]^{1-s_f} \ge \alpha_{k,l} \epsilon_2^{s_f} > \delta [h_k]^{s_h} = 0,$$

which implies that (2.6) holds.

By (3.4) and Lemmas 3.1 and 3.2, we have

$$f(x_k + \alpha_{k,l} d_k) \le f(x_k) + m_k(\alpha_{k,l}) + \alpha_{k,l}^2 C_f M_d^2.$$

When $\alpha_{k,l} \le (1 - \eta_f)\epsilon_2 / (2 C_f M_d^2)$, we have

$$f(x_k + \alpha_{k,l} d_k) \le f(x_k) + \eta_f m_k(\alpha_{k,l}),$$

i.e., (2.7) holds.

It follows from (3.3) and $h_k = 0$ that

$$h(x_k + \alpha d_k) \le \alpha^2 C_h M_d^2,$$

which together with $F_k > 0$ implies that $h(x_k + \alpha_{k,l} d_k) \le F_k$ holds for all sufficiently small $\alpha_{k,l} > 0$. By Algorithm 2.1, $F_{k+1} = F_k$. So the result is true. $\Box$

Lemma 3.5. $F_k > 0$ holds for all $k$, and the sequence $\{F_k\}$ is monotonically decreasing.

Proof. By the initialization, $F_0 > 0$. Suppose that $F_k > 0$; we deduce that $F_{k+1} > 0$. If $F_{k+1} = F_k$ or $F_{k+1} = \beta F_k$, then $F_{k+1} > 0$. If $F_{k+1} = \bar{\delta} h_k$, then it follows from Lemma 3.4 that $h_k > 0$, which also implies that $F_{k+1} > 0$. Algorithm 2.1 ensures that the sequence $\{F_k\}$ is monotonically decreasing. So the lemma is true. $\Box$

Lemma 3.6. Under Assumption A, $F_k \ge h_k$ holds for all $k$.

Proof. The proof is by induction. First, $F_0 \ge h_0$ holds obviously. Now suppose that $F_k \ge h_k$; we deduce that $F_{k+1} \ge h_{k+1}$.

If $F_{k+1} = F_k$, then the mechanism of Algorithm 2.1 gives $h_{k+1} \le F_k$, which implies that the conclusion is true.

If $F_{k+1} = \bar{\delta} h_k$, then Algorithm 2.1 implies that (2.5) holds or that the algorithm has entered the feasibility restoration phase. Therefore,

$$h_{k+1} \le (1 - \sigma_h) h_k \le \bar{\delta} h_k = F_{k+1},$$

so the result is true.

If $F_{k+1} = \beta F_k$, then again (2.5) holds or the algorithm has entered the feasibility restoration phase. Since $\beta \in (1 - \sigma_h, 1)$, we have

$$h_{k+1} \le (1 - \sigma_h) h_k \le (1 - \sigma_h) F_k \le \beta F_k = F_{k+1}.$$

So the result holds. $\Box$

Lemma 3.7. Under Assumption A, if $|\mathcal{C}| < +\infty$, then $\lim_{k\to\infty} h_k = 0$.

Proof. It follows from $|\mathcal{C}| < +\infty$ that there exists an integer $k_0$ such that $F_k = F_{k_0}$ for all $k \ge k_0$. By Algorithm 2.1, (2.6) and (2.7) both hold for all such $k$. By Lemma 3.1 and (2.6), we have

$$\delta [h_k]^{s_h} < [-m_k(\alpha_k)]^{s_f} [\alpha_k]^{1-s_f} \qquad (3.5)$$
$$\le \alpha_k M_m^{s_f}. \qquad (3.6)$$

It follows from (3.6) that $\alpha_k \ge \delta [h_k]^{s_h} / M_m^{s_f}$. By (3.5), we have

$$[-m_k(\alpha_k)]^{s_f} > [\alpha_k]^{s_f - 1} \delta [h_k]^{s_h},$$

that is,

$$m_k(\alpha_k) < -[\alpha_k]^{1 - \frac{1}{s_f}} \delta^{\frac{1}{s_f}} [h_k]^{\frac{s_h}{s_f}} \le \frac{-\delta [h_k]^{s_h}}{[M_m]^{s_f - 1}}. \qquad (3.7)$$

By (2.7) and (3.7), we have

$$f(x_k) - f(x_{k_0}) = \sum_{j=k_0}^{k-1} [f(x_{j+1}) - f(x_j)] \le \sum_{j=k_0}^{k-1} \eta_f m_j(\alpha_j) \le -\eta_f \sum_{j=k_0}^{k-1} \beta_4 [h_j]^{s_h},$$

where $\beta_4 = \delta / [M_m]^{s_f - 1}$. The left-hand side of the inequality above is bounded below, which implies that the conclusion is true. $\Box$

Lemma 3.8. Under Assumption A, if $|\mathcal{C}| = +\infty$, then $\lim_{k\to\infty} h_k = 0$.

Proof. Let $\mathcal{C} = \{k_i\}$. Algorithm 2.1 implies that

$$F_{k_i} = \min\{\bar{\delta} h_{k_i - 1}, \beta F_{k_i - 1}\} \le \beta F_{k_i - 1} \le \beta F_{k_{i-1}} \le \cdots \le \beta^{i-1} F_{k_1},$$

hence $F_{k_i} \to 0$ as $i \to \infty$. Moreover, by Lemma 3.5, $\lim_{k\to\infty} F_k = 0$. By Lemma 3.6, $0 \le h_k \le F_k$. Therefore, $\lim_{k\to\infty} h_k = 0$. $\Box$

Theorem 3.9. Under Assumption A, $\lim_{k\to\infty} h_k = 0$.

Lemma 3.10. Under Assumption A, if $\{x_{k_i}\}$ is an infinite subsequence of iterates which satisfies $m_{k_i}(\alpha) \le -\alpha \epsilon_2$ with a constant $\epsilon_2 > 0$ independent of $k_i$, then there exists a constant $\bar{\alpha} > 0$ such that

$$f(x_{k_i} + \alpha d_{k_i}) - f(x_{k_i}) \le \eta_f m_{k_i}(\alpha) \qquad (3.8)$$

holds for all $\alpha \in (0, \bar{\alpha}]$.

Proof. By Lemma 3.1 and (3.4), we have

$$f(x_{k_i} + \alpha d_{k_i}) \le f(x_{k_i}) + m_{k_i}(\alpha) + \alpha^2 C_f M_d^2.$$

Thus, the lemma is proven with $\bar{\alpha} = (1 - \eta_f)\epsilon_2 / (C_f M_d^2)$. $\Box$

Lemma 3.11. Under Assumption A, if $\{x_{k_i}\}$ is an infinite subsequence of iterates which satisfies $v(x_{k_i}) \ge \epsilon > 0$, then there exists an integer $\bar{k}$ such that $F_{k_i + 1} = F_{k_i}$ holds for all $k_i \ge \bar{k}$.

Proof. By Theorem 3.9 and Lemma 3.2, there exist constants $\epsilon_1, \epsilon_2 > 0$ and an integer $k_0$ such that $h_{k_i} < \epsilon_1$ and $m_{k_i}(\alpha) \le -\alpha \epsilon_2$, $\alpha \in (0,1]$, hold for all $k_i \ge k_0$. If $h_{k_i} = 0$, it follows from Lemmas 3.4 and 3.5 that the result is true.

Now we suppose that $h_{k_i} > 0$ and $k_i \ge k_0$. By (3.3) and Lemma 3.1, we have

$$h(x_{k_i} + \alpha d_{k_i}) \le (1 - \alpha) h(x_{k_i}) + \alpha^2 C_h M_d^2.$$

If

$$\alpha \le \frac{h(x_{k_i})}{2 C_h M_d^2} = \beta_5 h(x_{k_i}), \qquad \beta_5 = \frac{1}{2 C_h M_d^2}, \qquad (3.9)$$

then

$$h(x_{k_i} + \alpha d_{k_i}) \le \left(1 - \tfrac{1}{2}\alpha\right) h(x_{k_i}) \le h(x_{k_i}) \le F_{k_i},$$

which implies that Algorithm 2.1 reaches the acceptance test. Note that

$$[-m_{k_i}(\alpha)]^{s_f} \alpha^{1 - s_f} \ge \alpha \epsilon_2^{s_f}.$$

Therefore,

$$[-m_{k_i}(\alpha)]^{s_f} \alpha^{1 - s_f} \ge \alpha \epsilon_2^{s_f} \ge \delta [h_{k_i}]^{s_h}$$

holds as long as

$$\alpha \ge \frac{\delta [h_{k_i}]^{s_h}}{\epsilon_2^{s_f}} = \beta_6 [h(x_{k_i})]^{s_h}, \qquad \beta_6 = \frac{\delta}{\epsilon_2^{s_f}}, \qquad (3.10)$$

which implies that (2.6) holds. By Lemma 3.10, (2.7) holds as long as

$$\alpha \le \bar{\alpha} = \frac{(1 - \eta_f)\epsilon_2}{C_f M_d^2}. \qquad (3.11)$$

It follows from Theorem 3.9 and $s_h > 1$ that there exists an integer $k_0''$ such that

$$\beta_6 [h(x_{k_i})]^{s_h} \le \beta_5 h(x_{k_i}) \le \bar{\alpha}$$

holds for all $k_i \ge k_0''$. Let $\bar{k} = \max\{k_0, k_0''\}$. For $k_i \ge \bar{k}$, we have

$$\alpha_k^{\min} \le \beta_6 [h(x_{k_i})]^{s_h} \le \alpha \le \beta_5 h(x_{k_i}) \le \bar{\alpha},$$

which implies that (3.9), (3.10) and (3.11) all hold. By Algorithm 2.1, $F_{k_i + 1} = F_{k_i}$. So the result is true. $\Box$

Lemma 3.12. Under Assumption A, if $|\mathcal{C}| < +\infty$, then $\lim_{k\to\infty} v(x_k) = 0$.

Proof. It follows from $|\mathcal{C}| < +\infty$ that there exists an integer $k_0$ such that $k \in \mathcal{F}$ for all $k \ge k_0$. By Algorithm 2.1 and Assumption A, (2.6) and (2.7) both hold for all $k \ge k_0$. Moreover, the sequence $\{f(x_k)\}$ decreases monotonically and is bounded below. We suppose, by contradiction, that there exist a constant $\epsilon > 0$ and an infinite subsequence of iterates $\{x_{k_i}\}$ such that $v(x_{k_i}) \ge \epsilon$, $i = 1, 2, \ldots$. By Theorem 3.9 and Lemma 3.2, there exist an integer $k_0''$ and constants $\epsilon_1, \epsilon_2 > 0$ such that $m_{k_i}(\alpha) \le -\alpha \epsilon_2$ holds for all $k_i \ge \max\{k_0, k_0''\}$. It follows from $k_i \in \mathcal{F}$ and (2.7) that

$$f(x_{k_{i+1}}) - f(x_{k_i}) \le f(x_{k_i + 1}) - f(x_{k_i}) \le \eta_f m_{k_i}(\alpha_{k_i}) \le -\eta_f \epsilon_2 \alpha_{k_i},$$

which implies that $\lim_{i\to\infty} \alpha_{k_i} = 0$.

Since $F_{k_i} = F_{k_0} > 0$ holds for all $k_i \ge \max\{k_0, k_0''\}$, by Theorem 3.9 and Lemmas 3.3 and 3.1, we have

$$h(x_{k_i} + \alpha d_{k_i}) \le (1 - \alpha) h(x_{k_i}) + \alpha^2 C_h M_d^2 \le \frac{1}{2} F_{k_0} + \alpha^2 C_h M_d^2$$

for all sufficiently large $k_i$. Therefore, $h(x_{k_i} + \alpha d_{k_i}) \le F_{k_i}$ holds when $\alpha \le \alpha_0 = (F_{k_0} / (2 C_h M_d^2))^{1/2}$.

By the algorithm, the last rejected step size satisfies $\alpha_{k_i, l_i} \in [\alpha_{k_i}/T_2, \alpha_{k_i}/T_1]$, and $\alpha_{k_i, l_i} \le \alpha_0$ for sufficiently large $k_i$. Moreover, by the minimum step size rule in Algorithm 2.1,

$$\alpha_{k_i} \ge \alpha_k^{\min} = \sigma_\alpha \frac{\delta [h(x_{k_i})]^{s_h}}{[-g_k^T d_k]^{s_f}},$$

which implies that

$$\alpha_{k_i, l_i} \ge \frac{\alpha_{k_i}}{T_2} \ge \frac{\sigma_\alpha}{T_2} \cdot \frac{\delta [h(x_{k_i})]^{s_h}}{[-g_k^T d_k]^{s_f}} \ge \frac{\delta [h(x_{k_i})]^{s_h}}{[-g_k^T d_k]^{s_f}},$$

i.e., (2.6) always holds at $\alpha_{k_i, l_i}$. Therefore, the reason that $\alpha_{k_i, l_i}$ is rejected is that (2.7) does not hold, that is,

$$f(x_{k_i} + \alpha_{k_i, l_i} d_{k_i}) - f(x_{k_i}) > \eta_f m_{k_i}(\alpha_{k_i, l_i}). \qquad (3.12)$$

By Lemma 3.10, (3.8) holds if $\alpha_{k_i, l_i} \le \bar{\alpha}$. Since $\alpha_{k_i, l_i} \le \alpha_{k_i}/T_1 \to 0$, (3.8) holds for all sufficiently large $k_i$, which contradicts (3.12). So the result is true. $\Box$

Theorem 3.13. Under Assumption A, $\lim_{k\to\infty} h_k = 0$ and $\liminf_{k\to\infty} v(x_k) = 0$.

Proof. By Theorem 3.9, $\lim_{k\to\infty} h_k = 0$. Now we prove that $\liminf_{k\to\infty} v(x_k) = 0$.

If $|\mathcal{C}| < +\infty$, it follows from Lemma 3.12 that the result is true. If $|\mathcal{C}| = +\infty$, then there exists an infinite subsequence of iterates $\{x_{k_i}\}$ with $k_i \in \mathcal{C}$. Suppose, by contradiction and without loss of generality, that $v(x_{k_i - 1}) \ge \epsilon$ for all $k_i$. By Lemma 3.11, $F_{k_i} = F_{k_i - 1}$, i.e., $k_i \in \mathcal{F}$, which contradicts $k_i \in \mathcal{C}$. So the result is true. $\Box$

4. Local superlinear convergence

Now we analyze the local convergence of the line search method without a penalty function or a filter. In order to obtain locally rapid convergence, we need to overcome the so-called Maratos effect, a phenomenon arising in many methods for nonlinear programming where the full superlinear step is rejected near the solution. Thus, we employ a second-order correction technique to maintain fast local convergence properties [15].

A second-order correction step $d_k^{soc}$ aims to reduce infeasibility by applying an additional Newton-type step for the constraints at the point $x_k + d_k$. There is a wide range of options to compute such a step. Here, we assume that it is obtained from the solution of the linear system

$$\begin{pmatrix} H_k & A_k \\ A_k^T & 0 \end{pmatrix} \begin{pmatrix} d_k^{soc} \\ \lambda_k^{soc} \end{pmatrix} = -\begin{pmatrix} 0 \\ c(x_k + d_k) \end{pmatrix}. \qquad (4.1)$$
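Since (4.1) reuses the KKT matrix of (2.2) with a new right-hand side, the correction is cheap to compute once that matrix has been factorized. A minimal NumPy sketch in the style of the earlier ones (the names are ours, not the paper's):

def soc_step(H, A, c_trial):
    # Second-order correction (4.1): same coefficient matrix as (2.2),
    # right-hand side (0, -c(x_k + d_k)).
    import numpy as np
    n, m = A.shape
    K = np.block([[H, A], [A.T, np.zeros((m, m))]])
    sol = np.linalg.solve(K, -np.concatenate([np.zeros(n), c_trial]))
    return sol[:n], sol[n:]        # d_k^soc and lambda_k^soc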

Algorithm 4.1

Initialization. Given an initial point $x_0$ and constants $\epsilon, \sigma_h \in (0,1)$, $\sigma_\alpha \in (0,1]$, $\beta \in (1 - \sigma_h, 1)$, $\delta > 0$, $\bar{\delta} > 1$, $s_h > 1$, $s_f = 1$, $\eta_f \in (0, 1/2)$, $\nu \in (2,3]$, $0 < T_1 \le T_2 < 1$. Compute $h_0, v_0$. Set $F_0 = \bar{\delta} h_0 + 1$, $k := 0$.

while $h_k + v_k > \epsilon$ do
  Solve the linear system (2.2) to get the search direction $d_k$.
  Compute $\alpha_k^{\min}$. Set $\alpha_{k,0} = 1$, $l := 0$.
  if $l == 0$ then
    if $h(x_k(\alpha_{k,0})) \le F_k$ and either (2.6) and (2.7) both hold, or (2.6) does not hold but (2.5) holds, then
      $\alpha_k = 1$, $x_{k+1} = x_k + d_k$.
    else
      Solve (4.1) to get $d_k^{soc}$. Set $\bar{x}_{k+1} = x_k + d_k + d_k^{soc}$.
      if $h(\bar{x}_{k+1}) \le F_k$, (2.6) holds and

      $$f(\bar{x}_{k+1}) \le f(x_k) + \eta_f g_k^T d_k \qquad (4.2)$$

      holds, or $h(\bar{x}_{k+1}) \le F_k$ and

      $$h(\bar{x}_{k+1}) \le (1 - \sigma_h) h(x_k) \quad \text{and} \quad h(x_k) \ge \|d_k\|^\nu \qquad (4.3)$$

      hold, then
        $x_{k+1} = x_k + d_k + d_k^{soc}$.
      else
        Choose $\alpha_{k,l+1} \in [T_1 \alpha_{k,l}, T_2 \alpha_{k,l}]$, $l := l + 1$.
      end if
    end if
  end if
  while $l > 0$ do
    $x_k(\alpha_{k,l}) = x_k + \alpha_{k,l} d_k$.
    if $h(x_k(\alpha_{k,l})) > F_k$ then
      Choose $\alpha_{k,l+1} \in [T_1 \alpha_{k,l}, T_2 \alpha_{k,l}]$, $l := l + 1$. Continue.
    end if
    if $\alpha_{k,l} < \alpha_k^{\min}$ then
      Feasibility restoration phase: compute a new iterate $x_{k+1}$ by decreasing the infeasibility measure $h$ such that $x_{k+1}$ satisfies $h(x_{k+1}) \le (1 - \sigma_h) h_k$. Break.
    else
      if (2.6) and (2.7) both hold, or (2.6) does not hold but (2.5) holds then
        $\alpha_k = \alpha_{k,l}$, $x_{k+1} = x_k + \alpha_k d_k$. Break.
      else
        Choose $\alpha_{k,l+1} \in [T_1 \alpha_{k,l}, T_2 \alpha_{k,l}]$, $l := l + 1$. Continue.
      end if
    end if
  end while
  if (2.6) does not hold or the iteration comes from the restoration phase then
    $F_{k+1} = \min\{\bar{\delta} h_k, \beta F_k\}$.
  end if
  Update $H_k$ to $H_{k+1}$. $k := k + 1$.
end while
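The distinctive part of Algorithm 4.1 is the treatment of the first trial step ($l = 0$): if the full step $x_k + d_k$ is rejected, the corrected point $\bar{x}_{k+1} = x_k + d_k + d_k^{soc}$ is tested against (4.2) and (4.3) before any backtracking takes place. A hedged Python sketch of that decision, reusing soc_step from above (all names and the fallback convention are our own):

def try_full_step(x, d, F, f, c, g, H, A,
                  sigma_h=0.01, delta=0.01, s_h=1.5, eta_f=0.01, nu=2.1):
    # First-trial logic of Algorithm 4.1 (sketch). Returns the accepted
    # next iterate, or None to signal a fallback to plain backtracking.
    import numpy as np
    h = lambda y: np.linalg.norm(c(y))
    m = float(g @ d)                           # m_k(1) = g_k^T d_k (s_f = 1)
    switch = m < 0 and -m > delta * h(x)**s_h  # switching condition (2.6) at alpha = 1
    x_full = x + d
    if h(x_full) <= F and ((switch and f(x_full) <= f(x) + eta_f * m) or
                           (not switch and h(x_full) <= (1 - sigma_h) * h(x))):
        return x_full                          # plain full step accepted
    d_soc, _ = soc_step(H, A, c(x_full))       # correction step from (4.1)
    x_bar = x + d + d_soc
    if h(x_bar) <= F and switch and f(x_bar) <= f(x) + eta_f * m:    # (4.2)
        return x_bar
    if (h(x_bar) <= F and h(x_bar) <= (1 - sigma_h) * h(x)
            and h(x) >= np.linalg.norm(d)**nu):                      # (4.3)
        return x_bar
    return None                                # reject: backtrack as in Algorithm 2.1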

In order to analyze the local convergence properties of our algorithm, we give the following additional assumptions, which are similar to Assumptions (L2)–(L5) in [31].

Assumption B.

(B1) $\lim_k x_k = x^*$, where $x^*$ is a KKT point of the problem (1.1), and $\lambda^* \in \mathbb{R}^m$ is an associated Lagrangian multiplier vector.
(B2) $(W_k - H_k) d_k = o(\|d_k\|)$, where

$$W_k = \nabla^2_{xx} L(x_k, \lambda^*) = \nabla^2 f(x_k) + \sum_{i=1}^m \lambda_i^* \nabla^2 c_i(x_k). \qquad (4.4)$$

Assumption (B2) is reminiscent of the Dennis–Moré characterization of superlinear convergence [10], but it is stronger than the condition necessary for superlinear convergence [3], which requires only that $Z_k^T (W_k - H_k) d_k = o(\|d_k\|)$, where $Z_k$ is a null space matrix for $A_k^T$. However, if multiplier estimates $\lambda_k$ based on $\lambda_k^+$ from (2.2) and exact second derivatives are used to obtain $H_k$ close to $x^*$, i.e., if $H_k = \nabla^2_{xx} L(x_k, \lambda_k)$, then Assumption (B2) is satisfied, since $\lim_k H_k = W(x^*)$ in that case. We begin with the following lemma, which shows that $\lim_k \|d_k\| = 0$.

Lemma 4.1. Under Assumptions A and B, we have

$$\lim_{k\to\infty} \|d_k\| = 0. \qquad (4.5)$$

Proof. By (2.2), we have

$$\begin{pmatrix} d_k \\ \lambda_k^+ - \lambda_k \end{pmatrix} = -\begin{pmatrix} H_k & A_k \\ A_k^T & 0 \end{pmatrix}^{-1} \begin{pmatrix} g_k + A_k \lambda_k \\ c_k \end{pmatrix}.$$

The right-hand side tends to zero as $k \to \infty$. So the lemma is true. $\Box$

Lemma 4.2. Under Assumptions A and B, let $d_k^{soc}$ be the solution to (4.1). Then

$$d_k^{soc} = o(\|d_k\|), \qquad (4.6)$$
$$c(x_k + d_k + d_k^{soc}) = o(\|d_k\|^2). \qquad (4.7)$$

Proof. By (2.2),

$$c(x_k + d_k) = c(x_k) + A_k^T d_k + o(\|d_k\|) = o(\|d_k\|),$$

and it follows from (4.1) that

$$\begin{pmatrix} d_k^{soc} \\ \lambda_k^{soc} \end{pmatrix} = \begin{pmatrix} H_k & A_k \\ A_k^T & 0 \end{pmatrix}^{-1} \begin{pmatrix} 0 \\ -c(x_k + d_k) \end{pmatrix} = o(\|d_k\|).$$

So (4.6) is true. Moreover, since $c(x_k + d_k) + A_k^T d_k^{soc} = 0$ from (4.1), we have

$$c(x_k + d_k + d_k^{soc}) = c(x_k + d_k) + A(x_k + d_k)^T d_k^{soc} + O(\|d_k^{soc}\|^2) = c(x_k + d_k) + A_k^T d_k^{soc} + O(\|d_k\| \|d_k^{soc}\|) + O(\|d_k^{soc}\|^2) = o(\|d_k\|^2).$$

Thus, the result is proved. $\Box$

Lemma 4.3. Under Assumptions A and B, if $h(x_k) = O(\|d_k\|^2)$, then there exists a constant $\beta_7 > 0$ such that $\beta_7 \|d_k\| \le \|\bar{p}_k\| \le \|d_k\|$ for all sufficiently large $k$.

Proof. By (2.3) and (2.4), we have

$$\|d_k\|^2 = \|p_k\|^2 + \|q_k\|^2 = \|p_k\|^2 + \|Y_k [A_k^T Y_k]^{-1} c_k\|^2 = \|\bar{p}_k\|^2 + O(\|d_k\|^4),$$

from which the result follows. $\Box$

In order to prove our local convergence result, we make use of two results established in [31] regarding the effect of second order correction steps on the exact penalty function

$$\phi_\rho(x) = f(x) + \rho h(x). \qquad (4.8)$$

Note that we employ the exact penalty function $\phi_\rho(x)$ only as a technical device; the algorithm never refers to it. We also use the following model of the penalty function:

$$q_\rho(x_k, d) = f(x_k) + g_k^T d + \frac{1}{2} d^T H_k d + \rho \|c_k + A_k^T d\|. \qquad (4.9)$$

The reduction in $q_\rho(x_k, d)$ from $d = 0$ to $d = d_k$ satisfies the following result.
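It may help to record the closed form of this reduction, which is used implicitly in the proofs below. Since $A_k^T d_k = -c_k$ by (2.2), the term $\rho \|c_k + A_k^T d\|$ vanishes at $d = d_k$, so

$$q_\rho(x_k, 0) - q_\rho(x_k, d_k) = \rho \|c_k\| - g_k^T d_k - \frac{1}{2} d_k^T H_k d_k.$$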

Lemma 4.4. Under Assumptions A and B, let $(d_k, \lambda_k^+)$ be the solution to (2.2), and let $\rho > \|\lambda^*\|$. Then there is a constant $\beta_8 > 0$ for which

$$q_\rho(x_k, 0) - q_\rho(x_k, d_k) \ge \beta_8 \|x_k - x^*\|^2.$$

Proof. By (4.9), we have

$$q_\rho(x_k, 0) - q_\rho(x_k, d_k) = (f_k + \rho \|c_k\|) - \left( f_k + g_k^T d_k + \frac{1}{2} d_k^T H_k d_k + \rho \|c_k + A_k^T d_k\| \right)$$
$$= (f_k + \rho \|c_k\|) - \left( f_k + g_k^T d_k + \frac{1}{2} d_k^T \nabla^2_{xx} L(x_k, \lambda_k) d_k + \rho \|c_k + A_k^T d_k\| \right) + \frac{1}{2} d_k^T (\nabla^2_{xx} L(x_k, \lambda_k) - H_k) d_k.$$

It follows from Assumption (B2) and Theorem 13.5 in [9] that

$$q_\rho(x_k, 0) - q_\rho(x_k, d_k) = \phi_\rho(x_k) - \phi_\rho(x^*) + o(\|x_k - x^*\|^2) + o(\|d_k\|^2). \qquad (4.10)$$

Moreover, Assumption B implies that $\|x_k + d_k - x^*\| = o(\|x_k - x^*\|)$, so that $\|d_k\| = O(\|x_k - x^*\|)$. It follows from (4.10) and Theorem 13.4 in [9] that the result is true. $\Box$

We now show that, near the solution, the decrease in the merit function is accurately predicted by the decrease in the model (4.9) following the second order correction. The following result is Theorem 13.6 in [9].

Lemma 4.5. Under Assumptions A and B, let $\rho > \|\lambda^*\|$. Then

$$\lim_{k\to\infty} \frac{\phi_\rho(x_k) - \phi_\rho(x_k + d_k + d_k^{soc})}{q_\rho(x_k, 0) - q_\rho(x_k, d_k)} = 1. \qquad (4.11)$$

Lemma 4.6. Under Assumptions A and B, there exists a constant $\beta_9 > 0$ such that

$$-g_k^T d_k - \delta h_k > 0$$

holds when $h(x_k) \le \beta_9 \|d_k\|^2$.

Proof. It follows from (2.4) that $\|q_k\| = \|\bar{q}_k\| = O(\|c_k\|)$. By (2.2) and Lemmas 3.1 and 4.3, we have

$$-g_k^T d_k - \delta h_k = d_k^T H_k d_k + d_k^T A_k \lambda_k^+ - \delta h_k = (p_k + q_k)^T H_k (p_k + q_k) - c_k^T \lambda_k^+ - \delta h_k$$
$$= \bar{p}_k^T Z_k^T H_k Z_k \bar{p}_k + 2 q_k^T H_k p_k + q_k^T H_k q_k - c_k^T \lambda_k^+ - \delta h_k$$
$$\ge M_H \|\bar{p}_k\|^2 - (M_\lambda + \delta) h_k - 2 \|H_k\| \|p_k\| \|q_k\| - \|H_k\| \|q_k\|^2.$$

Since $\|d_k\|^2 = \|p_k\|^2 + \|q_k\|^2$ and $\|q_k\| = O(\|c_k\|)$, we have

$$-g_k^T d_k - \delta h_k \ge M_H \|d_k\|^2 - \beta_9 (M_\lambda + \delta) \|d_k\|^2 - O(\|d_k\|^3) - O(\|d_k\|^4)$$

when $h(x_k) \le \beta_9 \|d_k\|^2$. Let $\beta_9 = M_H / (2(M_\lambda + \delta))$; then

$$-g_k^T d_k - \delta h_k \ge \frac{1}{2} M_H \|d_k\|^2 - O(\|d_k\|^3) \qquad (4.12)$$

holds for all sufficiently large $k$. So the result is proved. $\Box$


Lemma 4.7. Under Assumptions A and B, there exists a constant $\beta_{10} > 0$ such that

$$f(\bar{x}_{k+1}) \le f(x_k) + \eta_f g_k^T d_k$$

holds when $h(x_k) \le \beta_{10} \|d_k\|^2$.

Proof. By (4.8), we have

$$f(x_k) - f(x_k + d_k + d_k^{soc}) = \phi_\rho(x_k) - \phi_\rho(x_k + d_k + d_k^{soc}) - \rho (h_k - h(x_k + d_k + d_k^{soc}))$$
$$\ge \left( \frac{1}{2} + \eta_f \right) (q_\rho(x_k, 0) - q_\rho(x_k, d_k)) - \rho h_k + o(\|d_k\|^2) \quad \text{(by (4.11))}$$
$$\ge \left( \frac{1}{2} + \eta_f \right) \left( -g_k^T d_k - \frac{1}{2} d_k^T H_k d_k \right) - \rho h_k + o(\|d_k\|^2) \quad \text{(by } c_k + A_k^T d_k = 0\text{)}$$
$$= -\eta_f g_k^T d_k - \frac{1}{2} g_k^T d_k - \left( \frac{1}{4} + \frac{1}{2}\eta_f \right) d_k^T H_k d_k - \rho h_k + o(\|d_k\|^2)$$
$$= -\eta_f g_k^T d_k + \left( -\frac{1}{2} + \frac{1}{4} + \frac{1}{2}\eta_f \right) g_k^T d_k - \left( \frac{1}{4} + \frac{1}{2}\eta_f \right) c_k^T \lambda_k^+ - \rho h_k + o(\|d_k\|^2) \quad \text{(by (2.2))}$$
$$\ge -\eta_f g_k^T d_k + \frac{M_H}{6} \left( \frac{1}{2} - \eta_f \right) \|d_k\|^2 - \left( \frac{1}{2} M_\lambda + \rho \right) h_k + o(\|d_k\|^2) \quad \text{(by (4.12))} \qquad (4.13)$$

holds. Let

$$\beta_{10} = (1 - 2\eta_f) M_H / (12 (M_\lambda + 2\rho)).$$

It follows from (4.13) and $h_k \le \beta_{10} \|d_k\|^2$ that

$$f(x_k) - f(x_k + d_k + d_k^{soc}) \ge -\eta_f g_k^T d_k + \frac{1}{12} \left( \frac{1}{2} - \eta_f \right) M_H \|d_k\|^2 + o(\|d_k\|^2) \ge -\eta_f g_k^T d_k$$

holds for all sufficiently large $k$, which implies that the conclusion is true. $\Box$

Lemma 4.8. Under Assumptions A and B, if $x_{k+1} = x_k + d_k + r_k d_k^{soc}$, where $r_k = 0$ or $1$, then we have

$$d_{k+1} = o(\|d_k\|). \qquad (4.14)$$

Proof. By (2.2), (4.1) and Lemma 4.2, we have

$$g_{k+1} + A_{k+1} \lambda_{k+1} = g(x_k + d_k + r_k d_k^{soc}) + A(x_k + d_k + r_k d_k^{soc}) \lambda_{k+1}$$
$$= g(x_k) + \nabla^2 f(x_k)(d_k + r_k d_k^{soc}) + A_k \lambda_{k+1} + \sum_{i=1}^m \lambda_{k+1}^{(i)} \nabla^2 c_i(x_k)(d_k + r_k d_k^{soc}) + o(\|d_k + r_k d_k^{soc}\|)$$
$$= -H_k d_k + \left( \nabla^2 f(x_k) + \sum_{i=1}^m \lambda_i^* \nabla^2 c_i(x_k) \right) d_k + o(\|d_k\|)$$
$$= (W_k - H_k) d_k + o(\|d_k\|) = o(\|d_k\|). \qquad (4.15)$$

Similarly, we also have

$$c(x_k + d_k + r_k d_k^{soc}) = c(x_k) + A_k^T (d_k + r_k d_k^{soc}) + o(\|d_k + r_k d_k^{soc}\|) = c(x_k) + A_k^T d_k + o(\|d_k\|) = o(\|d_k\|). \qquad (4.16)$$

By (2.2), we have

$$\begin{pmatrix} H_{k+1} & A_{k+1} \\ A_{k+1}^T & 0 \end{pmatrix} \begin{pmatrix} d_{k+1} \\ \Delta \lambda_{k+1} \end{pmatrix} = -\begin{pmatrix} g_{k+1} + A_{k+1} \lambda_{k+1} \\ c_{k+1} \end{pmatrix}, \qquad (4.17)$$

where $\Delta \lambda_k = \lambda_{k+1} - \lambda_k$. Assumptions (A1)–(A4) guarantee that the inverse of the matrix

$$\begin{pmatrix} H_{k+1} & A_{k+1} \\ A_{k+1}^T & 0 \end{pmatrix}$$

exists and is uniformly bounded for all $k$. By (4.15) and (4.16), the right-hand side of (4.17) is $o(\|d_k\|)$. So the claim (4.14) follows. $\Box$

Lemma 4.9. Under Assumptions A and B, we have

$$F_k \ge \min\{\beta_9, \beta_{10}\} \|d_k\|^2, \qquad (4.18)$$

where $\beta_9$ and $\beta_{10}$ are the constants from Lemmas 4.6 and 4.7.

Proof. Let $\beta_{11} = \min\{\beta_9, \beta_{10}\}$. First, we prove that there exists an infinite index set $K$ such that

$$F_k \ge \beta_{11} \|d_k\|^2 \quad \text{for all } k \in K.$$

Suppose, by contradiction, that there exists an integer $k_1 > 0$ such that $F_k < \beta_{11} \|d_k\|^2$ for all $k \ge k_1$; it then follows from Lemma 4.1 that $\lim_k F_k = 0$. By Lemma 3.6, $h_k < \beta_{11} \|d_k\|^2$ for all $k \ge k_1$. It follows from Lemmas 4.6 and 4.7 that (2.6) and (4.2) both hold. Therefore, Algorithm 4.1 does not enter the feasibility restoration phase at any iterate $x_k$ with $k \ge k_1$. By the update rule for $F_k$ in Algorithm 4.1, we have $F_{k+1} = F_k = F_{k_1}$ for all $k \ge k_1$, which is a contradiction.

Now we prove (4.18). Let

$$K' = \{k \mid F_k \ge \beta_{11} \|d_k\|^2 \text{ and } F_{k+1} < \beta_{11} \|d_{k+1}\|^2\}.$$

Suppose, by contradiction, that the set $K'$ is an infinite index set. We consider two cases for $k \in K'$.

Case 1: $h(x_k) \le \beta_{11} \|d_k\|^2$, $k \in K'$. It follows from $h(x_k) \le \beta_{11} \|d_k\|^2$ and Lemma 4.2 that $h(x_k + d_k + d_k^{soc}) \le F_k$. By Lemmas 4.6 and 4.7, (2.6) and (4.2) both hold. Hence $F_{k+1} = F_k \ge \beta_{11} \|d_k\|^2$ and $x_{k+1} = x_k + d_k$ or $x_{k+1} = x_k + d_k + d_k^{soc}$. By Lemma 4.8, we have $F_{k+1} \ge \beta_{11} \|d_k\|^2 > \beta_{11} \|d_{k+1}\|^2$, which contradicts $k \in K'$.

Case 2: $h(x_k) \ge \beta_{11} \|d_k\|^2$, $k \in K'$. By Lemma 4.2, $h(x_k + d_k + d_k^{soc}) = o(\|d_k\|^2)$. It follows from Lemma 3.6 and $h(x_k) \ge \beta_{11} \|d_k\|^2$ that $h(x_k + d_k + d_k^{soc}) \le F_k$ and

$$h(x_k + d_k + d_k^{soc}) \le (1 - \sigma_h) \beta_{11} \|d_k\|^2 \le (1 - \sigma_h) h(x_k),$$

which implies that $x_{k+1} = x_k + d_k$ or $x_{k+1} = x_k + d_k + d_k^{soc}$. If $F_{k+1} = F_k$, then it follows from $F_k \ge \beta_{11} \|d_k\|^2$ and Lemma 4.8 that $F_{k+1} \ge \beta_{11} \|d_{k+1}\|^2$. If $F_{k+1} = \min\{\bar{\delta} h_k, \beta F_k\}$, then $F_{k+1} \ge \min\{\bar{\delta}, \beta\} \beta_{11} \|d_k\|^2 > \beta_{11} \|d_{k+1}\|^2$. This contradicts $k \in K'$.

Therefore, the result is proved. $\Box$

Theorem 4.10. Under Assumptions A and B, full steps of the form $x_{k+1} = x_k + d_k$ or $x_{k+1} = x_k + d_k + d_k^{soc}$ are taken for all sufficiently large $k$, and $x_k$ converges to $x^*$ superlinearly.

Proof. We consider two cases.

Case 1: $h(x_k) \le \beta_{11} \|d_k\|^2$. By Lemmas 4.2 and 4.9, we have $h(x_k + d_k + d_k^{soc}) \le F_k$. By Lemmas 4.6 and 4.7, (2.6) and (4.2) both hold. Hence $x_{k+1} = x_k + d_k + r_k d_k^{soc}$, where $r_k = 0$ or $1$.

Case 2: $h(x_k) \ge \beta_{11} \|d_k\|^2$. By Lemmas 4.2 and 4.9, we have $h(x_k + d_k + d_k^{soc}) \le F_k$ and

$$h(x_k + d_k + d_k^{soc}) \le (1 - \sigma_h) \beta_{11} \|d_k\|^2 \le (1 - \sigma_h) h(x_k).$$

It follows from Lemma 4.1 and $h(x_k) \ge \beta_{11} \|d_k\|^2$ that $h(x_k) \ge \|d_k\|^\nu$. Therefore, Algorithm 4.1 implies that $x_{k+1} = x_k + d_k + r_k d_k^{soc}$, where $r_k = 0$ or $1$.

Finally, that $\{x_k\}$ converges to $x^*$ at a superlinear rate follows from Theorem 18.5 in [23]. $\Box$

Table 5.1
Parameter settings.

$\sigma_h$ = 0.01, $\sigma_\alpha$ = 0.8, $\beta$ = 0.5, $\nu$ = 2.1, $T$ = 0.6, $s_h$ = 1.5, $s_f$ = 1, $\eta_f$ = 0.01, $\delta$ = 0.01, $\bar{\delta}$ = 10.

Table 5.2
Results for CUTEr problems. The first two result columns give nf/nc and ngf/ngc for Algorithm 4.1, the next two the same counts for SLPSQP, and the last two the dimensions n and m.

Problem    | Alg. 4.1 nf/nc | Alg. 4.1 ngf/ngc | SLPSQP nf/nc | SLPSQP ngf/ngc | n    | m
AIRCRFTA   | 3       | 3       | 4         | 3   | 8    | 5
ARGTRIG    | 8       | 5       | 6         | 5   | 10   | 10
ARGTRIG    | 13      | 6       | 5         | 4   | 100  | 100
ARTIF      | 7       | 7       | R         | R   | 12   | 10
ARTIF      | 6/10    | 6/10    | R         | R   | 1002 | 1000
BDVALUE    | 2       | 2       | 4         | 3   | 502  | 500
BT1        | 10      | 7       | 8/15      | 6   | 2    | 1
BT2        | 19      | 18      | 19/20     | 12  | 3    | 1
BT4        | 34      | 11      | 50/56     | 12  | 3    | 2
BT5        | 8       | 8       | 9         | 6   | 3    | 2
BT6        | 17      | 14      | 79        | 10  | 5    | 2
BT7        | 85/91   | 27/33   | 56        | 8   | 5    | 3
BT8        | 10      | 10      | 153/154   | 18  | 2    | 2
BT9        | 13      | 13      | 143/144   | 20  | 4    | 2
BT10       | 7       | 7       | 8         | 7   | 2    | 2
BT11       | 27      | 14      | 71/74     | 8   | 5    | 3
BT12       | 9       | 8       | 5         | 4   | 5    | 3
BYRDSPHR   | 18/26   | 8/14    | 23/51     | 11  | 3    | 2
CBRATU2D   | 2       | 2       | 5         | 4   | 512  | 392
CBRATU3D   | 2       | 2       | 5         | 4   | 686  | 250
CLUSTER    | 7/10    | 7/10    | 13        | 9   | 2    | 2
GOTTFR     | 9       | 6       | 10/22     | 6   | 2    | 2
HATFLDG    | 15/20   | 5/9     | 11/34     | 9   | 25   | 25
HEART6     | 31/967  | 8/549   | 311/4079  | 155 | 6    | 6
HEART8     | 29/82   | 11/43   | 28/68     | 12  | 8    | 8
HIMMELBC   | 7       | 6       | 8/9       | 6   | 2    | 2
HIMMELBE   | 3       | 3       | 4         | 2   | 3    | 3
HS6        | 13      | 10      | 11        | 4   | 2    | 1
HS7        | 12      | 11      | 58        | 9   | 2    | 1
HS8        | 6       | 5       | 6         | 5   | 2    | 2
HS26       | 28      | 20      | 36        | 22  | 3    | 1
HS27       | 31      | 22      | 105/106   | 25  | 3    | 1
HS39       | 13      | 13      | 143/144   | 20  | 4    | 2
HS40       | 6       | 6       | 29        | 4   | 4    | 3
HS42       | 12      | 9       | 27/38     | 9   | 4    | 2
HS46       | 32      | 27      | 30        | 29  | 5    | 2
HS47       | 32      | 20      | 39        | 22  | 5    | 3
HS56       | 23      | 13      | 51        | 7   | 7    | 4
HS61       | I       | I       | 20/34     | 7   | 3    | 2
HS77       | 18      | 13      | 61        | 9   | 5    | 2
HS78       | 7       | 7       | 22        | 5   | 5    | 3
HS79       | 11      | 10      | 6         | 5   | 5    | 3
HS100LNP   | 47      | 21      | 77        | 11  | 7    | 2
HS111LNP   | 133/136 | 56/59   | 30        | 13  | 10   | 3
HYDCAR6    | 6       | 5       | 9/14      | 8   | 29   | 29
HYDCAR20   | 85/5661 | 12/5588 | 32/117    | 20  | 99   | 99
HYPCIR     | 6       | 5       | 8/9       | 5   | 2    | 2
MARATOS    | 4       | 4       | 17        | 4   | 2    | 1
METHANB8   | 3       | 3       | 4         | 3   | 31   | 31
METHANL8   | 5       | 5       | 7         | 6   | 31   | 31
MWRIGHT    | 32      | 20      | 88/92     | 12  | 5    | 3
ORTHREGB   | 6       | 6       | 149       | 14  | 27   | 6
POWELLBS   | 102/505 | 21/424  | 1199/6082 | 282 | 2    | 2
POWELLSQ   | 39/47   | 19/25   | R         | R   | 2    | 2
S316-322   | I       | I       | 10/11     | 6   | 2    | 1
TRIGGER    | 8       | 8       | 5         | 3   | 7    | 6


5. Numerical results

In this section, a preliminary implementation of Algorithm 4.1 is described; the experiments were run on a Lenovo IdeaPad Y450. A Matlab code (Version R2008a) was written corresponding to this implementation. The approximate Hessian matrix $H_k$ in (2.2) is updated by means of Powell's damped BFGS update procedure [24].
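For readers unfamiliar with that update, the following Python sketch shows the standard damped BFGS formula (cf. [24]); the paper's Matlab implementation is not available to us, so the variable names and the damping threshold 0.2 are the textbook choices, with s the iterate difference and y the corresponding difference of Lagrangian gradients.

import numpy as np

def damped_bfgs(H, s, y):
    # Powell's damped BFGS update (cf. [24]): replace y by a convex
    # combination r = theta*y + (1 - theta)*H s chosen so that s^T r is
    # sufficiently positive, which keeps the updated matrix positive definite.
    Hs = H @ s
    sHs = float(s @ Hs)
    sy = float(s @ y)
    theta = 1.0 if sy >= 0.2 * sHs else 0.8 * sHs / (sHs - sy)
    r = theta * y + (1.0 - theta) * Hs
    return H - np.outer(Hs, Hs) / sHs + np.outer(r, r) / float(s @ r)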

For the numerical tests, we have used the parameter settings given in Table 5.1. We stop if $res = \max\{h(x_k), v(x_k)\} \le \epsilon = 1.0\mathrm{e}{-5}$.

For comparison, a set of problems in CUTE was selected from reference [6], and we have included the corresponding results obtained by SLPSQP in [6]. Numerical results for Algorithm 4.1 and SLPSQP are listed in Table 5.2. The notations nf, ngf, nc and ngc represent the numbers of evaluations of $f(x)$, $\nabla f(x)$, $c(x)$ and $\nabla c(x)$, respectively. If the constraint gradients are not of full rank at the current iterate, then Algorithm 4.1 goes to the feasibility restoration phase. Moreover, "I" means that Algorithm 4.1 stops because the current iterate is a u-stationary point, which implies that the nonlinear constraints were found to be locally infeasible. "R" means that SLPSQP stops because the trust region radius $\rho < \epsilon$. Some other problems were not included since Algorithm 4.1 fails when it calls the feasibility restoration procedure, such as problems CATENA, CATENARY, EIGENA2, EIGENB2 and HATFLDF.

6. Conclusions

A new line search method is proposed for solving nonlinear equality constrained optimization problems. At each iteration, the search direction is obtained by solving a linear system only. The new method does not use any penalty function or a filter. The trial step is accepted if the value of the objective function or the measure of the constraint violation is sufficiently reduced. Moreover, the constraint violation is not allowed to exceed a decreasing limit, which ensures that cycling cannot happen and that global convergence is achieved. Similar to filter methods, the new algorithm allows for nonmonotonicity of both the constraint violation and the objective function value; unlike filter methods, it does not need to store a filter set at each iteration. Under usual assumptions, it is shown that every limit point of the sequence of iterates generated by the algorithm is feasible, and there exists at least one limit point that is a stationary point for the problem. A simple modification of the algorithm by introducing second order correction steps is presented. It is shown that the modified method does not suffer from the Maratos effect, so that it converges superlinearly. We have tested a set of small and medium scale problems from [6] and compared the results with the SLPSQP package. The preliminary numerical results show that the new method is viable and effective. Future work will be concerned with the performance of the algorithm: more numerical experiments, especially for large scale problems, should be done, and the feasibility restoration phase should be studied further. The idea of the new algorithm is also worth extending to inequality constrained optimization problems.

Acknowledgements

The authors are grateful to the anonymous referees for their very careful reading and their very helpful comments on the manuscript.

References

[1] A. Antoniou, W.S. Lu, Practical Optimization: Algorithms and Engineering Applications, Springer, 2007.
[2] A. Bhattacharya, P. Vasant, Soft-sensing of level of satisfaction in TOC product-mix decision heuristic using robust fuzzy-LP, Eur. J. Oper. Res. 177 (1) (2007) 55–70.
[3] P.T. Boggs, J.W. Tolle, Sequential quadratic programming, Acta Numer. 4 (1996) 1–51.
[4] Z.W. Chen, A penalty-free-type nonmonotone trust-region method for nonlinear constrained optimization, Appl. Math. Comput. 173 (2006) 1014–1046.
[5] Z.W. Chen, S.Q. Qiu, Y.J. Jiao, A penalty-free method for equality constrained optimization, J. Ind. Manage. Optim. 9 (2013) 391–409.
[6] C.M. Chin, Numerical results of SLPSQP, filterSQP and LANCELOT on selected CUTE test problems, Numerical Analysis Report NA/203, Department of Mathematics, University of Dundee, Scotland, 2001.
[7] C.M. Chin, R. Fletcher, On the global convergence of an SLP-filter algorithm that takes EQP steps, Math. Prog. 96 (2003) 161–177.
[8] C.M. Chin, A.H.A. Rashid, K.M. Nor, Global and local convergence of a filter line search method for nonlinear programming, Optim. Methods Softw. 22 (3) (2007) 365–390.
[9] A.R. Conn, N.I.M. Gould, Ph.L. Toint, Trust Region Methods, MPS/SIAM Ser. Optim., SIAM, Philadelphia, 2000.
[10] J.E. Dennis, J.J. Moré, Quasi-Newton methods, motivation and theory, SIAM Rev. 19 (1977) 46–89.
[11] I. Elamvazuthi, T. Ganesan, P. Vasant, J.F. Webb, Application of a fuzzy programming technique to production planning in the textile industry, Int. J. Comput. Sci. Inf. Secur. 6 (3) (2009) 238–243.
[12] I. Elamvazuthi, T. Ganesan, P. Vasant, A comparative study of HNN and hybrid HNN–PSO techniques in the optimization of distributed generation power systems, in: Proceedings of the 2011 International Conference on Advanced Computer Science and Information Systems (ICACSIS'11), Jakarta, Indonesia, 2011, pp. 195–199. ISBN 978-979-1421-11-9.
[13] I. Elamvazuthi, P. Vasant, T. Ganesan, Integration of fuzzy logic techniques into DSS for profitability quantification in a manufacturing environment, in: M. Khan, A. Ansari (Eds.), Handbook of Research on Industrial Informatics and Manufacturing Intelligence: Innovations and Solutions, 2012, pp. 171–192. http://dx.doi.org/10.4018/978-1-4666-0294-6.ch007.
[14] R. Fletcher, N.I.M. Gould, S. Leyffer, A. Wächter, Ph.L. Toint, Global convergence of trust-region SQP-filter algorithms for general nonlinear programming, SIAM J. Optim. 13 (2003) 635–659.
[15] R. Fletcher, S. Leyffer, Nonlinear programming without a penalty function, Math. Prog. 91 (2002) 239–270.
[16] R. Fletcher, S. Leyffer, Ph.L. Toint, On the global convergence of a filter-SQP algorithm, SIAM J. Optim. 13 (2002) 44–59.
[17] T. Ganesan, P. Vasant, I. Elamvazuthi, Optimization of nonlinear geological structure mapping using hybrid neuro-genetic techniques, Math. Comput. Model. 54 (2011) 2913–2922.
[18] N.I.M. Gould, S. Leyffer, Ph.L. Toint, A multidimensional filter algorithm for nonlinear equations and nonlinear least-squares, SIAM J. Optim. 15 (2004) 17–38.
[19] N.I.M. Gould, C. Sainvitu, Ph.L. Toint, A filter-trust-region method for unconstrained optimization, SIAM J. Optim. 16 (2005) 341–357.
[20] N.I.M. Gould, Ph.L. Toint, Nonlinear programming without a penalty function or a filter, Math. Prog. 122 (2010) 155–196.
[21] X.W. Liu, Y.X. Yuan, A sequential quadratic programming method without a penalty function or a filter for nonlinear equality constrained optimization, SIAM J. Optim. 21 (2011) 545–571.
[22] M.D. Madronero, D. Peidro, P. Vasant, Vendor selection problem by using an interactive fuzzy multi-objective approach with modified S-curve membership functions, Comput. Math. Appl. 60 (2010) 1038–1048.
[23] J. Nocedal, M.L. Overton, Projected Hessian updating algorithms for nonlinearly constrained optimization, SIAM J. Numer. Anal. 22 (1985) 821–850.
[24] J. Nocedal, S.J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
[25] D. Peidro, P. Vasant, Transportation planning with modified S-curve membership functions using an interactive fuzzy multi-objective approach, Appl. Soft Comput. 11 (2011) 2656–2663.
[26] S. Ulbrich, On the superlinear local convergence of a filter-SQP method, Math. Prog. 100 (2004) 217–245.
[27] S. Ulbrich, M. Ulbrich, Nonmonotone trust region methods for nonlinear equality constrained optimization without a penalty function, Math. Prog. 95 (1) (2003) 103–135.
[28] M. Ulbrich, S. Ulbrich, L.N. Vicente, A globally convergent primal-dual interior-point filter method for nonlinear programming, Math. Prog. 100 (2004) 379–410.
[29] P. Vasant, T. Ganesan, I. Elamvazuthi, Fuzzy linear programming using modified logistic membership function, J. Eng. Appl. Sci. 5 (3) (2010) 239–245.
[30] A. Wächter, L.T. Biegler, Line search filter methods for nonlinear programming: motivation and global convergence, SIAM J. Optim. 16 (2005) 1–31.
[31] A. Wächter, L.T. Biegler, Line search filter methods for nonlinear programming: local convergence, SIAM J. Optim. 16 (2005) 32–48.