This article was downloaded by: [University of Nebraska, Lincoln] on 10 October 2014, at 01:31.
Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number 1072954; registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.

Numerical Functional Analysis and Optimization. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/lnfa20

To cite this article: X. Q. Yang, Z. Q. Meng, X. X. Huang & G. T. Y. Pong (2003). Smoothing Nonlinear Penalty Functions for Constrained Optimization Problems. Numerical Functional Analysis and Optimization, 24:3-4, 351-364. DOI: 10.1081/NFA-120022928
To link to this article: http://dx.doi.org/10.1081/NFA-120022928

Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content. This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions



©2003 Marcel Dekker, Inc. All rights reserved. This material may not be used or reproduced in any form without the express written permission of Marcel Dekker, Inc.

MARCEL DEKKER, INC. • 270 MADISON AVENUE • NEW YORK, NY 10016

NUMERICAL FUNCTIONAL ANALYSIS AND OPTIMIZATION

Vol. 24, Nos. 3 & 4, pp. 351–364, 2003

Smoothing Nonlinear Penalty Functions for Constrained Optimization Problems

X. Q. Yang,1,* Z. Q. Meng,2 X. X. Huang,1 and G. T. Y. Pong1

1Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, P.R. China
2School of Economics and Management, Xidian University, Xi'an, P.R. China

ABSTRACT

In this article, we discuss a nondifferentiable nonlinear penalty method for an optimization problem with inequality constraints. A smoothing method is proposed for the nonsmooth nonlinear penalty function. Error estimates are obtained among the optimal value of the smoothed penalty problem, the optimal value of the nonsmooth nonlinear penalty problem, and that of the original constrained optimization problem. We give an algorithm for the constrained optimization problem based on the smoothed nonlinear penalty method and prove the convergence of the algorithm. The efficiency of the smoothed nonlinear penalty method is illustrated with a numerical example.

Key Words: Constrained optimization; Nonlinear penalty function; Smoothing method; $\varepsilon$-feasible solution; Optimal solution.

*Correspondence: X. Q. Yang, Associate Professor, Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, P.R. China; E-mail: mengzhiqing@xtu.edu.cn.

DOI: 10.1081/NFA-120022928; 0163-0563 (Print); 1532-2467 (Online)
Copyright © 2003 by Marcel Dekker, Inc. www.dekker.com



1. INTRODUCTION

Consider the following constrained optimization problem (P):

$$\min f_0(x) \quad \text{s.t.} \quad x \in X, \ f_i(x) \le 0, \ i = 1, 2, \ldots, m,$$

where $X \subseteq \mathbb{R}^n$ is a subset and $f_i\ (i = 0, 1, \ldots, m): X \to \mathbb{R}$ are real-valued functions. Unconstrained optimization methods have been well studied in the literature; see Bertsekas (1982), Conn et al. (2000), Fiacco and McCormick (1990), and Fletcher (1987). In particular, the penalty method is popular in engineering and economics applications. Extensive theoretical study of exact penalty functions and convergence analysis of penalty methods has been given in Auslender et al. (1997), Fiacco and McCormick (1990), and Rosenberg (1984). However, it is known that the classical penalty method requires very large penalty parameters in order to obtain good approximate solutions, and that too large parameters cause numerical instability in implementation. Recently, nonlinear penalty functions have been studied in Rubinov et al. (1999), Yang and Huang (2001), and the references therein. In particular, the following $k$-th power nonlinear penalty function is considered:

$$\Big( f_0^k(x) + \rho \sum_{i=1}^{m} \big[ \max\{ f_i(x), 0 \} \big]^k \Big)^{1/k}.$$

A promising feature of the $k$-th power nonlinear penalty function is that a smaller exact penalty parameter than that of the classical penalty function (i.e., $k = 1$) can be guaranteed when $k$ is sufficiently small.
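This gain can be seen on a tiny instance. The sketch below is our own construction, not taken from the paper: the functions $f_0(x) = x + 1$, $f_1(x) = 1 - x$, the set $X = [0, 2]$, and the parameter $\rho = 0.5$ are illustrative assumptions. Minimizing the (inner) penalty $f_0^k(x) + \rho \max\{f_1(x), 0\}^k$ by grid search shows that $\rho = 0.5$ already recovers the constrained optimum $x^* = 1$ for $k = 1/2$, while the classical case $k = 1$ still returns the infeasible point $x = 0$:

```python
# Toy comparison (hypothetical instance, our own choice of f0, f1, rho):
#   min f0(x) = x + 1  s.t.  f1(x) = 1 - x <= 0,  X = [0, 2],
# whose constrained optimum is x* = 1 with f0(x*) = 2.

def F(x, rho, k):
    """Inner k-th power penalty f0^k + rho * max(f1, 0)^k (the 1/k-th
    root is monotone, so it does not change the minimizer)."""
    f0 = x + 1.0
    viol = max(1.0 - x, 0.0)
    return f0**k + rho * viol**k

def argmin_on_grid(fun, lo=0.0, hi=2.0, n=20001):
    # brute-force grid search; adequate for this one-dimensional sketch
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=fun)

x_classical = argmin_on_grid(lambda x: F(x, rho=0.5, k=1.0))
x_power = argmin_on_grid(lambda x: F(x, rho=0.5, k=0.5))
print(x_classical)  # 0.0 -- infeasible: rho = 0.5 is below the exact threshold for k = 1
print(x_power)      # 1.0 -- already exact for k = 1/2 at the same rho
```

For this instance one can check by hand that the classical penalty needs $\rho \ge 1$ to be exact, while the $k = 1/2$ penalty only needs $\rho \ge \sqrt{2} - 1 \approx 0.41$.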

It is noted that when $k < 1$, the above $k$-th power nonlinear penalty function is not Lipschitz. Thus the minimization of the $k$-th power nonlinear penalty function is not an easy task. However, smoothing methods have been investigated for minimizing nonsmooth penalty functions, e.g., in Bertsekas (1982), Pinar and Zenios (1994), and Yang (1994). Error estimates of the optimal value of the original penalty function and that of the smoothed penalty function are obtained. In particular, extensive numerical testing is given in Pinar and Zenios (1994) to show the efficiency of the smoothing method of penalty functions for solving convex network optimization problems.

With the promising feature of a small exact penalty parameter for the $k$-th power nonlinear penalty function in mind, the aim of this article is to apply a smoothing method to the minimization of the $k$-th power nonlinear penalty function. We will establish an error analysis of the optimal values for the exact $k$-th power nonlinear penalty function, a smoothed penalty function, and the original constrained optimization problem (P) for the cases $0 < k \le 1$ and $1 \le k < +\infty$, respectively. This analysis is carried out under the assumption that a $k$-th power exact nonlinear penalty function exists. An algorithm is also proposed based on the smoothed penalty problems. We show that any limit point of the sequence of optimal solutions of the smoothed penalty functions satisfies the Kuhn-Tucker necessary optimality condition.




2. A SMOOTHING FUNCTION

Consider the function $p_k: \mathbb{R} \to \mathbb{R}$:

$$p_k(t) = \begin{cases} 0, & \text{if } t \le 0, \\ t^k, & \text{if } t \ge 0, \end{cases} \qquad (1)$$

where $0 < k < +\infty$. Clearly, $p_k(t)$ is not $C^1$ on $\mathbb{R}$ for $0 < k \le 1$, but it is $C^1$ for $k > 1$. It was shown in Bertsekas (1982) and Pinar and Zenios (1994) that the function $p_k(t)$ is useful in defining exact penalty functions for nonlinear programming problems. In order to smooth the function $p_k(t)$, we define $p_{k,\varepsilon}: \mathbb{R} \to \mathbb{R}$:

$$p_{k,\varepsilon}(t) = \begin{cases} 0, & \text{if } t \le 0, \\ \frac{1}{2}\, t^{2k} \varepsilon^{-k}, & \text{if } 0 \le t \le \varepsilon, \\ t^k - \frac{1}{2}\varepsilon^k, & \text{if } t \ge \varepsilon, \end{cases} \qquad (2)$$

where $0 < k < +\infty$ and $\varepsilon > 0$. It is clear that $\lim_{\varepsilon \to 0} p_{k,\varepsilon}(t) = p_k(t)$.
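The two functions in Eqs. (1) and (2) are straightforward to transcribe. The sketch below (our own, with arbitrary sample values of $k$ and $\varepsilon$) checks numerically that the smoothing error is one-sided and never exceeds $\frac{1}{2}\varepsilon^k$, the bound exploited in Section 3:

```python
# Direct transcription of p_k (Eq. 1) and p_{k,eps} (Eq. 2), plus a
# numerical check that 0 <= p_k(t) - p_{k,eps}(t) <= (1/2) eps^k.

def p(t, k):
    # Eq. (1): zero on the nonpositive axis, t^k otherwise
    return t**k if t > 0 else 0.0

def p_smooth(t, k, eps):
    # Eq. (2): quadratic-type bridge on [0, eps], shifted t^k beyond eps
    if t <= 0:
        return 0.0
    if t <= eps:
        return 0.5 * t**(2 * k) * eps**(-k)
    return t**k - 0.5 * eps**k

k, eps = 0.75, 0.1  # sample values; any 0 < k, eps > 0 behave the same way
gaps = [p(t, k) - p_smooth(t, k, eps)
        for t in (i * 0.001 for i in range(-500, 2001))]
print(max(gaps) <= 0.5 * eps**k + 1e-12)  # True: gap capped by (1/2) eps^k
print(min(gaps) >= -1e-12)                # True: the smoothing never overshoots
```

The gap is zero for $t \le 0$, grows monotonically on $[0, \varepsilon]$, and equals exactly $\frac{1}{2}\varepsilon^k$ for every $t \ge \varepsilon$.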

Lemma 2.1. Let $1/2 < k < 1$ and $\varepsilon > 0$. Then $p_{k,\varepsilon}(t)$ is $C^1$.

Proof. Let $p_1(t) = 0$ if $t \le 0$, $p_2(t) = \frac{1}{2}\, t^{2k} \varepsilon^{-k}$ if $0 \le t \le \varepsilon$, and $p_3(t) = t^k - \frac{1}{2}\varepsilon^k$ if $t \ge \varepsilon$. We have

$$p_{k,\varepsilon}(t) = \begin{cases} p_1(t), & \text{if } t \le 0, \\ p_2(t), & \text{if } 0 \le t \le \varepsilon, \\ p_3(t), & \text{if } t \ge \varepsilon. \end{cases}$$

Then, for $1/2 < k < 1$ we obtain

$$\nabla p_{k,\varepsilon}(t) = \begin{cases} \nabla p_1(t) = 0, & \text{if } t \le 0, \\ \nabla p_2(t) = k \varepsilon^{-k} t^{2k-1}, & \text{if } 0 \le t \le \varepsilon, \\ \nabla p_3(t) = k t^{k-1}, & \text{if } t \ge \varepsilon. \end{cases} \qquad (3)$$

In particular, $\nabla p_1(0) = 0 = \nabla p_2(0)$ and $\nabla p_2(\varepsilon) = k \varepsilon^{k-1} = \nabla p_3(\varepsilon)$. Therefore, $p_{k,\varepsilon}(t)$ is $C^1$ at any $t \in \mathbb{R}$ by Eq. (3).

Lemma 2.2. Let $1 \le k < +\infty$ and $\varepsilon > 0$. Then $p_{k,\varepsilon}(t)$ is $C^{1,1}$, i.e., $\nabla p_{k,\varepsilon}(t)$ exists and is locally Lipschitz.

Proof. By Eq. (3), it is clear that $p_{k,\varepsilon}(t)$ is $C^1$ for $k \ge 1$. We show that $\nabla p_{k,\varepsilon}(t)$ is locally Lipschitz. It is obvious that $\nabla p_{k,\varepsilon}(t)$ is locally Lipschitz if $t \ne \varepsilon$. We need only prove that $\nabla p_{k,\varepsilon}(t)$ is locally Lipschitz at $t = \varepsilon$. Consider the following functions:

$$h(t) = \nabla p_2(t) = k \varepsilon^{-k} t^{2k-1}, \quad 0 \le t \le \varepsilon,$$
$$q(t) = \nabla p_3(t) = k t^{k-1}, \quad t \ge \varepsilon.$$




When $t = \varepsilon$, $h(\varepsilon) = q(\varepsilon) = k \varepsilon^{k-1}$. Since $h(t)$ and $q(t)$ are obviously locally Lipschitz at $t = \varepsilon$, there exist $\delta > 0$ and $K > 0$ such that

$$|h(\varepsilon) - h(t_1)| \le K |\varepsilon - t_1| \qquad (4)$$

for $t_1 \in [\varepsilon - \delta, \varepsilon]$, and

$$|q(t_2) - q(\varepsilon)| \le K |t_2 - \varepsilon| \qquad (5)$$

for $t_2 \in [\varepsilon, \varepsilon + \delta]$. From Eqs. (4) and (5), we have

$$-K(\varepsilon - t_1) \le h(\varepsilon) - h(t_1) \le K(\varepsilon - t_1), \qquad (6)$$
$$-K(t_2 - \varepsilon) \le q(t_2) - q(\varepsilon) \le K(t_2 - \varepsilon). \qquad (7)$$

By Eqs. (6) and (7), we obtain

$$|q(t_2) - h(t_1)| \le K |t_2 - t_1| \quad \text{for } |t_2 - t_1| \le \delta.$$

Hence, $\nabla p_{k,\varepsilon}(t)$ is locally Lipschitz.

Remark 2.1. If $0 < k < 1/2$, then $p_{k,\varepsilon}(t)$ is differentiable when $t \ne 0$, but it is not locally Lipschitz at $t = 0$.
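Remark 2.1 can be seen numerically. In the sketch below (our own illustration, with an arbitrary $\varepsilon$), the difference quotients $p_{k,\varepsilon}(t)/t$ near $t = 0$ blow up for $k = 0.3 < 1/2$, so no local Lipschitz constant can exist at $0$, whereas at the boundary value $k = 1/2$ they stay constant:

```python
# Difference quotients of p_{k,eps} at 0: near t = 0+ the middle branch
# gives p(t)/t = (1/2) t^(2k-1) eps^(-k), which is unbounded iff 2k < 1.

def p_smooth(t, k, eps):   # Eq. (2)
    if t <= 0:
        return 0.0
    if t <= eps:
        return 0.5 * t**(2 * k) * eps**(-k)
    return t**k - 0.5 * eps**k

eps = 0.1  # sample value
for k in (0.3, 0.5):
    quotients = [p_smooth(10.0**(-e), k, eps) / 10.0**(-e) for e in (2, 4, 6, 8)]
    print(k, quotients)  # k = 0.3: growing without bound; k = 0.5: constant
```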

3. NONLINEAR PENALTY FUNCTIONS

Let the functions $f_i: \mathbb{R}^n \to \mathbb{R}$, $i \in \{0\} \cup I$, be $C^{1,1}$, where $I = \{1, 2, \ldots, m\}$. By Lemmas 2.1 and 2.2, $p_{k,\varepsilon}(f_i(x))\ (i \in \{0\} \cup I)$ is $C^1$ for $1/2 < k < 1$ and $C^{1,1}$ for $k \ge 1$. Denote $f_i^+(x) = \max\{0, f_i(x)\}\ (i \in I)$. In this article, we always assume that $f_0$ is positive on the set $X$.

Let

$$I_0(x) = \{i \in I \mid f_i(x) = 0\},$$
$$I_+(x) = \{i \in I \mid f_i(x) > 0\},$$
$$I_-(x) = \{i \in I \mid f_i(x) < 0\}.$$

Consider the following constrained optimization problem:

(P): $\min f_0(x)$ s.t. $x \in X_0$,

where $X_0 = \{x \in X \mid f_i(x) \le 0, \ i = 1, 2, \ldots, m\}$, and the nonlinear penalty functions for (P):

$$F(x, \rho) = f_0^k(x) + \rho \sum_{i \in I} p_k(f_i(x)),$$
$$F(x, \rho, \varepsilon) = f_0^k(x) + \rho \sum_{i \in I} p_{k,\varepsilon}(f_i(x)),$$




where $\rho > 0$ and $0 < k < +\infty$. $F(x, \rho, \varepsilon)$ is a smooth approximation of $F(x, \rho)$, which is nonsmooth when $1/2 < k \le 1$. Accordingly, we have the following two penalty problems:

$$(\mathrm{EP}_\rho): \quad \min F(x, \rho) \quad \text{s.t. } x \in X,$$
$$(\mathrm{SEP}_\rho): \quad \min F(x, \rho, \varepsilon) \quad \text{s.t. } x \in X.$$

By Theorem 3.1 of Bertsekas (1982), if $X_0$ is compact and $\lim_{\|x\| \to \infty,\, x \in X} f_0(x) = +\infty$, then we have

$$\sup_{\rho \in \mathbb{R}_+} \min_{x \in X} F(x, \rho) = \min_{x \in X_0} f_0(x)^k.$$

It was also shown in Pinar and Zenios (1994) that it is possible to solve (P) by solving a sequence of penalty problems $(\mathrm{EP}_\rho)$. Moreover, under stronger conditions, there exists some $\rho > 0$ such that if $x^*$ is an optimal solution of (P), then $x^*$ is also an optimal solution of $(\mathrm{EP}_\rho)$ (see Rosenberg (1984)). As $F(x, \rho)$ is nonsmooth when $0 < k \le 1$, we expect to solve the smooth optimization problem $(\mathrm{SEP}_\rho)$ in order to obtain an approximate solution to (P). Since $\lim_{\varepsilon \to 0} F(x, \rho, \varepsilon) = F(x, \rho)$ for all $\rho$, we will first study some relationships between $(\mathrm{EP}_\rho)$ and $(\mathrm{SEP}_\rho)$.

Proposition 3.1. For any $x \in X$, $\rho > 0$, and $\varepsilon > 0$, we have

$$0 \le F(x, \rho) - F(x, \rho, \varepsilon) \le \frac{1}{2} m \rho \varepsilon^k, \quad 0 < k < +\infty. \qquad (8)$$

Proof. By the definition of $p_{k,\varepsilon}(t)$, we have

$$0 \le p_k(t) - p_{k,\varepsilon}(t) \le \frac{1}{2}\varepsilon^k.$$

As a result,

$$0 \le p_k(f_i(x)) - p_{k,\varepsilon}(f_i(x)) \le \frac{1}{2}\varepsilon^k \quad \forall x \in X, \ i = 1, 2, \ldots, m.$$

Adding up over all $i$, we obtain

$$0 \le \sum_{i \in I} p_k(f_i(x)) - \sum_{i \in I} p_{k,\varepsilon}(f_i(x)) \le \frac{m}{2}\varepsilon^k.$$

Hence,

$$0 \le F(x, \rho) - F(x, \rho, \varepsilon) \le \frac{1}{2} m \rho \varepsilon^k.$$

Corollary 3.1. Let $\{\varepsilon_j\} \to 0$ be a sequence of positive numbers, and assume that $x^j$ is a solution to $\min_{x \in X} F(x, \rho, \varepsilon_j)$ for some $\rho > 0$. Let $\bar{x}$ be an accumulation point of the sequence $\{x^j\}$. Then $\bar{x}$ is an optimal solution to $\min_{x \in X} F(x, \rho)$.

Definition 3.1. A vector $\bar{x} \in X$ is $\varepsilon$-feasible for (P) if

$$f_i(\bar{x}) \le \varepsilon, \quad \forall i \in I.$$




Theorem 3.1. Let $x^*$ be an optimal solution of $(\mathrm{EP}_\rho)$ and $\bar{x} \in X$ an optimal solution of $(\mathrm{SEP}_\rho)$. Then

$$0 \le F(x^*, \rho) - F(\bar{x}, \rho, \varepsilon) \le \frac{1}{2} m \rho \varepsilon^k, \quad 0 < k < +\infty. \qquad (9)$$

Proof. From Proposition 3.1 we have

$$F(x, \rho) \le F(x, \rho, \varepsilon) + \frac{1}{2} m \rho \varepsilon^k \quad \forall x \in X, \ 0 < k < +\infty.$$

Consequently,

$$\inf_{x \in X} F(x, \rho) \le \inf_{x \in X} F(x, \rho, \varepsilon) + \frac{1}{2} m \rho \varepsilon^k,$$

which proves the right-hand inequality of Eq. (9). The left-hand inequality of Eq. (9) can be proved similarly.

Theorem 3.2. Let $x^*$ be an optimal solution of $(\mathrm{EP}_\rho)$ and $\bar{x} \in X$ be an optimal solution of $(\mathrm{SEP}_\rho)$. Furthermore, let $x^*$ be feasible for (P) and $\bar{x}$ be $\varepsilon$-feasible for (P). Then

$$0 \le f_0(x^*)^k - f_0(\bar{x})^k \le m \rho \varepsilon^k, \quad 0 < k < +\infty. \qquad (10)$$

Proof. Since $\bar{x}$ is $\varepsilon$-feasible for (P), it follows from Eq. (2) that

$$\sum_{i \in I} p_{k,\varepsilon}(f_i(\bar{x})) \le \frac{1}{2} m \varepsilon^k.$$

As $x^*$ is feasible for (P), we have

$$\sum_{i \in I} p_k(f_i(x^*)) = 0.$$

By Theorem 3.1, we get

$$0 \le \Big( f_0(x^*)^k + \rho \sum_{i \in I} p_k(f_i(x^*)) \Big) - \Big( f_0(\bar{x})^k + \rho \sum_{i \in I} p_{k,\varepsilon}(f_i(\bar{x})) \Big) \le \frac{1}{2} m \rho \varepsilon^k,$$

which implies $0 \le f_0(x^*)^k - f_0(\bar{x})^k \le m \rho \varepsilon^k$ for $0 < k < +\infty$.

Remark 3.1. From the assumptions of Theorem 3.2, we can see that $x^*$ is actually an optimal solution of (P). Thus, if all the conditions of Theorem 3.2 hold, then Eq. (10) essentially gives an error estimate between the optimal value of $(\mathrm{SEP}_\rho)$ and that of (P).

Definition 3.2. Let $x^* \in \mathbb{R}^n$. A vector $y^* \in \mathbb{R}^m$ is called a Lagrange multiplier vector associated with $x^*$ for problem (P) if $x^*$ and $y^*$ satisfy




$$\nabla f_0(x^*) = -\sum_{i \in I} y_i^* \nabla f_i(x^*), \qquad (11)$$

$$y_i^* f_i(x^*) = 0, \quad y_i^* \ge 0, \quad f_i(x^*) \le 0, \quad i = 1, 2, \ldots, m. \qquad (12)$$

Theorem 3.3. Let $0 < k \le 1$. Let $f_i\ (i = 0, 1, \ldots, m)$ be convex. Let $x^*$ be an optimal solution of (P) and $y^* \in \mathbb{R}^m$ a Lagrange multiplier vector associated with $x^*$ for problem (P). Then

$$F(x^*, \rho) - F(x, \rho, \varepsilon) \le \frac{1}{2} m \rho \varepsilon^k \quad \forall x \in X, \qquad (13)$$

provided that $\rho \ge (m\beta)^k$, where $\beta = \max\{y_i^*, i = 1, \ldots, m\}$.

Proof. By the convexity of $f_i$, $i = 0, 1, \ldots, m$, we have

$$f_i(x) \ge f_i(x^*) + \nabla f_i(x^*)^T (x - x^*), \quad x \in X. \qquad (14)$$

Since $x^*$ is an optimal solution of (P) and $y^*$ is a Lagrange multiplier vector, Eqs. (11) and (12) hold. By Eqs. (11), (12), and (14), we obtain

$$f_0(x) \ge f_0(x^*) + \nabla f_0(x^*)^T (x - x^*) = f_0(x^*) - \sum_{i \in I} y_i^* \nabla f_i(x^*)^T (x - x^*) \ge f_0(x^*) - \sum_{i \in I} y_i^* \big( f_i(x) - f_i(x^*) \big) \ge f_0(x^*) - \sum_{i \in I} y_i^* f_i(x).$$

Recall $I_+(x) = \{i \in I \mid f_i(x) > 0\}$. Then we have

$$f_0(x) \ge f_0(x^*) - \sum_{i \in I_+(x)} y_i^* f_i(x). \qquad (16)$$

Set $\beta = \max\{y_i^*, i = 1, \ldots, m\}$. Then $y_i^* f_i(x) \le \beta f_i(x)$ for $i \in I_+(x)$. From Eq. (16), we deduce

$$f_0(x) \ge f_0(x^*) - \beta \sum_{i \in I_+(x)} f_i(x). \qquad (17)$$

Let $a = f_0(x^*)$ and $b = \max\{f_i(x), i = 1, 2, \ldots, m\}$. Then by Eq. (17), we have

$$f_0(x) \ge f_0(x^*) - \sum_{i \in I_+(x)} y_i^* f_i(x) \ge a - m\beta b. \qquad (18)$$

Now we show that Eq. (13) is true for $0 < k \le 1$. If $b \le 0$ in Eq. (18), we have $f_0(x) \ge f_0(x^*)$, which implies

$$F(x, \rho) \ge f_0(x^*)^k.$$

If $b > 0$, we show that for $0 < k \le 1$,




$$f_0(x)^k \ge a^k - (m\beta b)^k. \qquad (19)$$

When $a \le m\beta b$, we have $a^k \le (m\beta b)^k$, which gives Eq. (19) because $f_0(x) \ge 0$. When $a \ge m\beta b \ge 0$, let $t = m\beta b/a$; then $0 \le t \le 1$. Consider the function $g(t) = t^k + (1-t)^k$, $0 \le t \le 1$. We have

$$g'(t) = k t^{k-1} - k (1-t)^{k-1}, \quad 0 \le t \le 1.$$

We see that $g'(t) \ge 0$ for $0 < t \le 1/2$ and $g'(t) \le 0$ for $1/2 \le t < 1$. Moreover, $g(0) = g(1) = 1$. It follows that $g(t)$ attains its minimum on $[0, 1]$ at $t = 0, 1$. Hence,

$$g(t) \ge 1, \quad 0 \le t \le 1,$$

i.e., $(1-t)^k \ge 1 - t^k$. Therefore, from Eq. (18) we obtain

$$f_0(x)^k \ge (a - m\beta b)^k \ge a^k \Big( 1 - \Big( \frac{m\beta b}{a} \Big)^k \Big) = a^k - (m\beta b)^k.$$

So we have

$$F(x, \rho) \ge f_0^k(x^*) - (m\beta b)^k + \rho \sum_{i \in I_+(x)} f_i^+(x)^k \ge f_0^k(x^*) - (m\beta b)^k + \rho b^k.$$

When $\rho \ge (m\beta)^k$, we obtain $F(x, \rho) \ge f_0(x^*)^k$. Thus, when $0 < k \le 1$ and $\rho \ge (m\beta)^k$, we always have $f_0(x^*)^k - F(x, \rho) \le 0$. By Proposition 3.1, for $0 < k \le 1$ we obtain Eq. (13).

Corollary 3.2. Let $f_i\ (i = 0, 1, \ldots, m)$ be convex. Let $x^*$ be an optimal solution of (P) and $y^* \in \mathbb{R}^m$ a Lagrange multiplier vector associated with $x^*$ for problem (P). Suppose that $x_\rho^*$ is an optimal solution of $(\mathrm{EP}_\rho)$. Then $f_0(x^*) = (F(x_\rho^*, \rho))^{1/k}$ for $0 < k < 1$, provided that $\rho \ge (m\beta)^k$, where $\beta = \max\{y_i^*, i = 1, \ldots, m\}$.

Theorem 3.4. Let $1 \le k < +\infty$. Let $f_i\ (i = 0, 1, \ldots, m)$ be convex. Let $x^*$ be an optimal solution of (P) and $y^* \in \mathbb{R}^m$ a Lagrange multiplier vector associated with $x^*$ for problem (P). Let $x_\rho^*$ be an optimal solution of $(\mathrm{EP}_\rho)$ and $b(x_\rho^*) = \max\{f_i(x_\rho^*), i = 1, 2, \ldots, m\}$. Then

(i) $f_0(x^*) = (F(x_\rho^*, \rho))^{1/k}$ and

$$F(x_\rho^*, \rho) - F(x, \rho, \varepsilon) \le \frac{1}{2} m \rho \varepsilon^k \quad \forall x \in X, \qquad (20)$$

when $b(x_\rho^*) \le 0$;

(ii) $f_0(x^*) = (F(x_\rho^*, \rho))^{1/k}$ and Eq. (20) is true when $b(x_\rho^*) > 0$, if the relation $\rho \ge (m\beta)^k + (1 - 2^{1-k}) f_0^k(x^*)\, b(x_\rho^*)^{-k}$ holds, where $\beta = \max\{y_i^*, i = 1, \ldots, m\}$.

Proof.

(i) By assumption, we have




$$f_0(x) \ge f_0(x^*) - \beta \sum_{i \in I_+(x)} f_i(x). \qquad (21)$$

Let us take $a = f_0(x^*)$ and $b(x_\rho^*) = \max\{f_i(x_\rho^*), i = 1, 2, \ldots, m\}$. Then by Eq. (21), we have

$$f_0(x_\rho^*) \ge f_0(x^*) - \sum_{i \in I_+(x_\rho^*)} y_i^* f_i(x_\rho^*) \ge a - m\beta b(x_\rho^*). \qquad (22)$$

We show that Eq. (20) is true when $1 \le k < +\infty$. If $b(x_\rho^*) \le 0$, by Eq. (22) we have $f_0(x_\rho^*) \ge f_0(x^*)$, which implies

$$f_0(x^*)^k \le F(x_\rho^*, \rho) \le f_0(x^*)^k,$$

and Eq. (20) follows by Proposition 3.1 and the definition of $F(x, \rho)$.

(ii) Now, if $b(x_\rho^*) > 0$ and $a \le m\beta b(x_\rho^*)$, it is clear from Eq. (22) that

$$f_0(x_\rho^*)^k \ge a^k - (m\beta b(x_\rho^*))^k.$$

This implies that

$$F(x_\rho^*, \rho) \ge f_0^k(x^*) - (m\beta b(x_\rho^*))^k + \rho \sum_{i \in I_+(x_\rho^*)} f_i^+(x_\rho^*)^k \ge f_0^k(x^*) - (m\beta b(x_\rho^*))^k + \rho\, b(x_\rho^*)^k.$$

When $\rho \ge (m\beta)^k$, we obtain $F(x_\rho^*, \rho) \ge f_0(x^*)^k$. On the other hand, when $a > m\beta b(x_\rho^*) \ge 0$, let $t = m\beta b(x_\rho^*)/a$; then $0 \le t \le 1$. Let $g(t) = t^k + (1-t)^k$ for $0 \le t \le 1$. Then

$$g'(t) = k t^{k-1} - k (1-t)^{k-1}, \quad 0 \le t \le 1.$$

Obviously, $g'(1/2) = 0$, $g'(t) \le 0$ for $0 \le t \le 1/2$, and $g'(t) > 0$ for $1/2 < t \le 1$. Thus $t = 1/2$ is the minimum point of $g$ on $[0, 1]$. So we have

$$g(t) \ge \Big(\frac{1}{2}\Big)^k + \Big(\frac{1}{2}\Big)^k = 2^{1-k},$$

i.e., $(1-t)^k \ge 2^{1-k} - t^k$. Therefore, from Eq. (22) we obtain $f_0(x_\rho^*)^k \ge 2^{1-k} a^k - (m\beta b(x_\rho^*))^k$. This further implies that

$$F(x_\rho^*, \rho) \ge 2^{1-k} f_0^k(x^*) - (m\beta b(x_\rho^*))^k + \rho \sum_{i \in I_+(x_\rho^*)} f_i^+(x_\rho^*)^k \ge 2^{1-k} f_0^k(x^*) - (m\beta b(x_\rho^*))^k + \rho\, b(x_\rho^*)^k.$$

So if the relation $\rho \ge (m\beta)^k + (1 - 2^{1-k}) f_0^k(x^*)\, b(x_\rho^*)^{-k}$ holds, then we obtain $F(x_\rho^*, \rho) \ge f_0(x^*)^k$. By Proposition 3.1, for $k \ge 1$ we obtain

$$f_0(x^*)^k = F(x^*, \rho) = F(x_\rho^*, \rho) \le F(x, \rho, \varepsilon) + \frac{1}{2} m \rho \varepsilon^k.$$

Let $\varepsilon > 0$, $x \in X$, and define




$$I_\varepsilon^-(x) = \{i \mid f_i(x) \le \varepsilon\},$$
$$I_\varepsilon^+(x) = \{i \mid f_i(x) > \varepsilon\}.$$

Theorem 3.5. Let $X$ and $f_i\ (i = 0, 1, \ldots, m)$ be convex. Let $0 < k \le 1$. Suppose that $x^*$ is an optimal solution of (P) and $y^* \in \mathbb{R}^m$ is a Lagrange multiplier vector associated with $x^*$ for problem (P). Let $\bar{x} \in X$ be an optimal solution of $(\mathrm{SEP}_\rho)$, where $\rho \ge (2 + m)\beta^k$ and $\beta = \max\{y_i^*, i = 1, \ldots, m\}$. Then $\bar{x}$ is $\varepsilon$-feasible for (P).

Proof. Suppose to the contrary that $\bar{x}$ is not $\varepsilon$-feasible for (P), i.e., $I_\varepsilon^+(\bar{x}) \ne \emptyset$. From Eq. (2) and the definition of $F(x, \rho, \varepsilon)$, we have

$$F(\bar{x}, \rho, \varepsilon) = f_0^k(\bar{x}) + \rho \sum_{i \in I_\varepsilon^+(\bar{x})} \Big( f_i^+(\bar{x})^k - \frac{1}{2}\varepsilon^k \Big) + \rho \sum_{i \in I_\varepsilon^-(\bar{x})} \frac{1}{2}\varepsilon^{-k} f_i^+(\bar{x})^{2k}. \qquad (23)$$

When $\rho \ge (2+m)\beta^k$, we have

$$\rho \sum_{i \in I_\varepsilon^+(\bar{x})} \Big( f_i^+(\bar{x})^k - \frac{1}{2}\varepsilon^k \Big) - \beta^k \sum_{i \in I_\varepsilon^+(\bar{x})} f_i^+(\bar{x})^k \ge \beta^k \sum_{i \in I_\varepsilon^+(\bar{x})} \Big( (2+m) \Big( f_i^+(\bar{x})^k - \frac{1}{2}\varepsilon^k \Big) - f_i^+(\bar{x})^k \Big) > \beta^k \sum_{i \in I_\varepsilon^+(\bar{x})} \frac{1}{2} m \varepsilon^k \quad \big(\text{using } f_i^+(\bar{x})^k > \varepsilon^k\big) \ \ge \frac{1}{2} m \beta^k \varepsilon^k,$$

i.e.,

$$\rho \sum_{i \in I_\varepsilon^+(\bar{x})} \Big( f_i^+(\bar{x})^k - \frac{1}{2}\varepsilon^k \Big) > \beta^k \sum_{i \in I_\varepsilon^+(\bar{x})} f_i^+(\bar{x})^k + \frac{1}{2} m \beta^k \varepsilon^k. \qquad (24)$$

Moreover, when $\rho \ge (2+m)\beta^k$, we have

$$\rho \sum_{i \in I_\varepsilon^-(\bar{x})} \frac{1}{2}\varepsilon^{-k} f_i^+(\bar{x})^{2k} - \beta^k \sum_{i \in I_\varepsilon^-(\bar{x})} f_i^+(\bar{x})^k \ge \beta^k \sum_{i \in I_\varepsilon^-(\bar{x})} \Big( \frac{1}{2}\varepsilon^{-k} f_i^+(\bar{x})^{2k} - f_i^+(\bar{x})^k \Big).$$

Let $g(t) = \frac{1}{2}\varepsilon^{-k} t^2 - t$, $0 \le t \le \varepsilon^k$. Since $g'(t) = \varepsilon^{-k} t - 1 \le 0$ for $0 \le t \le \varepsilon^k$, we have $g(t) \ge g(\varepsilon^k) = -\frac{1}{2}\varepsilon^k$ for $0 \le t \le \varepsilon^k$. Therefore,

$$g\big(f_i^+(\bar{x})^k\big) = \frac{1}{2}\varepsilon^{-k} f_i^+(\bar{x})^{2k} - f_i^+(\bar{x})^k \ge -\frac{1}{2}\varepsilon^k, \quad \forall i \in I_\varepsilon^-(\bar{x}).$$

So we have

$$\rho \sum_{i \in I_\varepsilon^-(\bar{x})} \frac{1}{2}\varepsilon^{-k} f_i^+(\bar{x})^{2k} \ge \beta^k \sum_{i \in I_\varepsilon^-(\bar{x})} f_i^+(\bar{x})^k - \frac{1}{2} m \beta^k \varepsilon^k. \qquad (25)$$




From Eqs. (23)-(25), we deduce that when $\rho \ge (2+m)\beta^k$, there holds

$$F(\bar{x}, \rho, \varepsilon) > f_0^k(\bar{x}) + \beta^k \sum_{i \in I_\varepsilon^+(\bar{x})} f_i^+(\bar{x})^k + \beta^k \sum_{i \in I_\varepsilon^-(\bar{x})} f_i^+(\bar{x})^k = f_0^k(\bar{x}) + \beta^k \sum_{i \in I} f_i^+(\bar{x})^k.$$

Since $0 < k \le 1$, we have

$$F(\bar{x}, \rho, \varepsilon)^{1/k} > \Big( f_0^k(\bar{x}) + \beta^k \sum_{i \in I} f_i^+(\bar{x})^k \Big)^{1/k} \ge f_0(\bar{x}) + \beta \sum_{i \in I} f_i^+(\bar{x}) \ge f_0(\bar{x}) + \sum_{i \in I} y_i^* f_i^+(\bar{x}) \ge f_0(\bar{x}) + \sum_{i \in I} y_i^* f_i(\bar{x}) \ge f_0(x^*) + \sum_{i \in I} y_i^* f_i(x^*) = f_0(x^*).$$

Therefore, we obtain

$$F(\bar{x}, \rho, \varepsilon) > f_0(x^*)^k + \rho \sum_{i \in I} p_{k,\varepsilon}(f_i(x^*)),$$

which contradicts the fact that $\bar{x}$ is an optimal solution of $(\mathrm{SEP}_\rho)$. Hence, $I_\varepsilon^+(\bar{x}) = \emptyset$.

By Theorems 3.2 and 3.5, we have the following corollary.

Corollary 3.3. Let $X$ and $f_i\ (i = 0, 1, \ldots, m)$ be convex. Let $0 < k \le 1$. Suppose that $x^*$ is an optimal solution of (P) and $y^* \in \mathbb{R}^m$ is a Lagrange multiplier vector associated with $x^*$ for problem (P). Let $\rho \ge \max\{(m\beta)^k, (2+m)\beta^k\}$, where $\beta = \max\{y_i^*, i = 1, \ldots, m\}$. Suppose that $\bar{x} \in X$ is an optimal solution of $(\mathrm{SEP}_\rho)$. Then

$$0 \le f_0(x^*)^k - f_0(\bar{x})^k \le m \rho \varepsilon^k.$$

Proof. By Theorem 3.5, we know that $\bar{x}$ is $\varepsilon$-feasible for (P). Moreover, $x^*$ is also an optimal solution of $(\mathrm{EP}_\rho)$ for $\rho \ge (m\beta)^k$ by Corollary 3.2. Hence, the conclusion follows from Theorem 3.2.

4. THE NLPA ALGORITHM AND A NUMERICAL EXAMPLE

In this section we give a nonlinear penalty function algorithm (NLPA) for the optimization problem (P). In order to solve (P), we wish to solve its smoothed penalty problem: $\min_{x \in X} F(x, \rho, \varepsilon)$.

We propose the following algorithm.




Algorithm NLPA.

Step 1. Given $x^s$, $\varepsilon > 0$, $\rho_0 > 0$, $\varepsilon_0 > 0$, $0 < \eta < 1$, and $N > 1$. Let $j = 0$.

Step 2. Solve the smoothed penalty problem $\min_{x \in X} F(x, \rho_j, \varepsilon_j)$ with the starting point $x^s$. Let $x^j$ be the optimal solution.

Step 3. If $x^j$ is $\varepsilon$-feasible for (P) and $\varepsilon_j < \varepsilon$, then stop and take $x^j$ as an approximate solution of (P). Otherwise, let $\rho_{j+1} = N \rho_j$ and $\varepsilon_{j+1} = \eta \varepsilon_j$, set $j := j + 1$ and $x^s := x^j$, and go to Step 2.

Remark 4.1. Since $0 < \eta < 1$ and $N > 1$, the sequence $\{\varepsilon_j\}$ decreases to $0$ and the sequence $\{\rho_j\}$ increases to $+\infty$ as $j \to +\infty$.
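The steps above can be sketched in a few lines of code. The following is our own runnable illustration, not the paper's numerical example: the toy problem $\min (x+1)^2 + 1$ s.t. $-x \le 0$ on $X = [-3, 3]$ (optimum $x^* = 0$, $f_0(x^*) = 2$), the parameter values, and the inner solver are all assumptions. In particular, Step 2's subproblem solver is replaced by a crude grid-refinement search; any unconstrained smooth optimizer could be used instead.

```python
# A runnable sketch of Algorithm NLPA on a one-dimensional toy problem
# (hypothetical instance and parameters, chosen for illustration only).

def p_smooth(t, k, eps):                       # Eq. (2)
    if t <= 0:
        return 0.0
    if t <= eps:
        return 0.5 * t**(2 * k) * eps**(-k)
    return t**k - 0.5 * eps**k

def nlpa(f0, fs, k=0.75, eps_target=1e-3, rho0=1.0, eps0=0.5, eta=0.5, N=4.0,
         lo=-3.0, hi=3.0):
    def F(x, rho, eps):                        # smoothed penalty function
        return f0(x)**k + rho * sum(p_smooth(fi(x), k, eps) for fi in fs)

    def solve_subproblem(rho, eps, a, b):
        # crude stand-in for Step 2: repeated grid refinement on [a, b]
        for _ in range(40):
            xs = [a + (b - a) * i / 200 for i in range(201)]
            x = min(xs, key=lambda v: F(v, rho, eps))
            h = (b - a) / 200
            a, b = max(lo, x - h), min(hi, x + h)
        return x

    rho, eps = rho0, eps0                      # Step 1
    while True:
        x = solve_subproblem(rho, eps, lo, hi)  # Step 2
        if eps < eps_target and all(fi(x) <= eps_target for fi in fs):
            return x                           # Step 3: eps-feasible, stop
        rho, eps = N * rho, eta * eps          # Step 3: update and repeat

x = nlpa(lambda x: (x + 1.0)**2 + 1.0, [lambda x: -x])
print(round(x, 3))  # close to the constrained optimum x* = 0
```

The returned point is $\varepsilon$-feasible by construction of the stopping test, in line with Theorem 3.5's guarantee for the exact subproblem solutions.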

Theorem 4.1. Let $k > 1/2$. Assume that $\lim_{\|x\| \to \infty,\, x \in X} f_0(x) = +\infty$. Let the sequence $\{x^j\}$ be generated by the NLPA. Suppose that the sequence $\{F(x^j, \rho_j, \varepsilon_j)\}$ is bounded. Then $\{x^j\}$ is bounded, and any limit point $x^*$ of $\{x^j\}$ belongs to $X_0$ and satisfies

$$\lambda \nabla f_0(x^*) + \sum_{i \in I_0(x^*)} \mu_i \nabla f_i(x^*) = 0, \qquad (26)$$

where $\lambda, \mu_i \ge 0$, $i = 1, 2, \ldots, m$, and they are not all zero.

Proof. By assumption, there is some number $L$ such that

$$L > F(x^j, \rho_j, \varepsilon_j), \quad j = 0, 1, 2, \ldots \qquad (27)$$

Suppose to the contrary that $\{x^j\}$ is unbounded. Assume without loss of generality that $\|x^j\| \to \infty$ as $j \to +\infty$. Then, from Eq. (27), we have

$$L > (f_0(x^j))^k, \quad j = 0, 1, 2, \ldots,$$

which results in a contradiction since $\lim_{\|x\| \to \infty,\, x \in X} f_0(x) = +\infty$.

Now we show that any limit point of $\{x^j\}$ belongs to $X_0$. Without loss of generality, we assume that $x^j \to x^*$. Suppose to the contrary that $x^* \notin X_0$. Then there exists some $i$ such that $p_k(f_i(x^*)) > 0$. As $f_i\ (i \in I)$ is continuous, so is $F(x^j, \rho_j, \varepsilon_j)\ (j = 1, 2, \ldots)$. By assumption, we have

$$L > F(x^j, \rho_j, \varepsilon_j) = (f_0(x^j))^k + \rho_j \sum_{i \in I_{\varepsilon_j}^+(x^j)} \Big( f_i^+(x^j)^k - \frac{1}{2}\varepsilon_j^k \Big) + \rho_j \sum_{i \in I_{\varepsilon_j}^-(x^j)} \frac{1}{2}\varepsilon_j^{-k} f_i^+(x^j)^{2k}. \qquad (28)$$

Clearly, $\rho_j \to +\infty$ and $\varepsilon_j \to 0$ as $j \to +\infty$. Taking the limit in Eq. (28) as $j \to +\infty$, we have $L > F(x^j, \rho_j, \varepsilon_j) \to +\infty$, which leads to a contradiction.

Finally, we show that Eq. (26) holds. By Lemmas 2.1 and 2.2 and Step 2, $\nabla F(x^j, \rho_j, \varepsilon_j) = 0$; that is,




$$k f_0(x^j)^{k-1} \nabla f_0(x^j) + \rho_j \sum_{i \in I_{\varepsilon_j}^+(x^j)} k f_i^+(x^j)^{k-1} \nabla f_i(x^j) + \rho_j \sum_{i \in I_{\varepsilon_j}^-(x^j)} k \varepsilon_j^{-k} f_i^+(x^j)^{2k-1} \nabla f_i(x^j) = 0. \qquad (29)$$

Let

$$\Lambda_j = k f_0(x^j)^{k-1} + \sum_{i \in I_{\varepsilon_j}^+(x^j)} \rho_j k f_i^+(x^j)^{k-1} + \sum_{i \in I_{\varepsilon_j}^-(x^j)} \rho_j k \varepsilon_j^{-k} f_i^+(x^j)^{2k-1}, \quad j = 1, 2, \ldots$$

Then $\Lambda_j > 0$ for all $j$. Dividing Eq. (29) by $\Lambda_j$, we have

$$\frac{k f_0(x^j)^{k-1}}{\Lambda_j} \nabla f_0(x^j) + \sum_{i \in I_{\varepsilon_j}^+(x^j)} \frac{\rho_j k f_i^+(x^j)^{k-1}}{\Lambda_j} \nabla f_i(x^j) + \sum_{i \in I_{\varepsilon_j}^-(x^j)} \frac{\rho_j k \varepsilon_j^{-k} f_i^+(x^j)^{2k-1}}{\Lambda_j} \nabla f_i(x^j) = 0. \qquad (30)$$

Set

$$\lambda_j = \frac{k f_0(x^j)^{k-1}}{\Lambda_j},$$
$$\mu_i^j = \frac{\rho_j k f_i^+(x^j)^{k-1}}{\Lambda_j}, \quad i \in I_{\varepsilon_j}^+(x^j),$$
$$\mu_i^j = \frac{\rho_j k \varepsilon_j^{-k} f_i^+(x^j)^{2k-1}}{\Lambda_j}, \quad i \in I_{\varepsilon_j}^-(x^j),$$
$$\mu_i^j = 0, \quad i \in I \setminus \big( I_{\varepsilon_j}^+(x^j) \cup I_{\varepsilon_j}^-(x^j) \big).$$

Then

$$\lambda_j + \sum_{i \in I} \mu_i^j = 1 \quad \forall j, \qquad (31)$$

and $\mu_i^j \ge 0$, $i \in I$, for all $j$. Obviously, we can assume without loss of generality that

$$\lambda_j \to \lambda \ge 0, \qquad \mu_i^j \to \mu_i \ge 0, \quad \forall i \in I.$$

Taking the limit in Eqs. (30) and (31) as $j \to +\infty$, we get

$$\lambda \nabla f_0(x^*) + \sum_{i \in I} \mu_i \nabla f_i(x^*) = 0, \qquad \lambda + \sum_{i \in I} \mu_i = 1.$$




Note that when $i \in I_-(x^*)$, we have $\mu_i^j = 0$ when $j$ is sufficiently large. As a result, $\mu_i = 0$ for all $i \in I_-(x^*)$. Hence Eq. (26) holds.

REFERENCES

Auslender, A., Cominetti, R., Haddou, M. (1997). Asymptotic analysis for penalty and barrier methods in convex and linear programming. Mathematics of Operations Research 22:43-62.

Bertsekas, D. (1982). Constrained Optimization and Lagrange Multiplier Methods. New York: Academic Press.

Conn, A. R., Gould, N. I. M., Toint, Ph. L. (2000). Trust-Region Methods. Philadelphia: SIAM.

Di Pillo, G., Grippo, L. (1988). On the exactness of a class of nondifferentiable penalty functions. Journal of Optimization Theory and Applications 57:399-410.

Fiacco, A. V., McCormick, G. P. (1990). Nonlinear Programming. Philadelphia: SIAM.

Fletcher, R. (1987). Practical Methods of Optimization. New York: Wiley.

Pinar, M. C., Zenios, S. A. (1994). On smoothing exact penalty functions for convex constrained optimization. SIAM Journal on Optimization 4:486-511.

Rosenberg, E. (1984). Exact penalty functions and stability in locally Lipschitz programming. Mathematical Programming 30:340-356.

Rubinov, A. M., Glover, B. M., Yang, X. Q. (1999). Extended Lagrange and penalty functions in continuous optimization. Optimization 46:327-351.

Rubinov, A. M., Glover, B. M., Yang, X. Q. (1999). Decreasing functions with applications to penalization. SIAM Journal on Optimization 10:289-313.

Yang, X. Q. (1994). An exterior point method for computing points that satisfy second-order necessary conditions for a C^{1,1} optimization problem. Journal of Mathematical Analysis and Applications 187:118-133.

Yang, X. Q. (1994). Smoothing approximations to nonsmooth optimization problems. Journal of the Australian Mathematical Society, Series B 36:274-285.

Yang, X. Q., Huang, X. X. (2001). A nonlinear Lagrangian approach to constrained optimization problems. SIAM Journal on Optimization 11:1119-1144.

