This article was downloaded by: [Eindhoven Technical University] On: 21 November 2014, At: 07:56 Publisher: Taylor & Francis Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK International Journal of Computer Mathematics Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/gcom20 A method for nonsmooth optimization problems Gianfranco Corradi a Faculty of Economics , University of Rome ‘La Sapienza” , Via del Castro Laurenziano, 9 00161, Rome, Italy Published online: 25 Jan 2007. To cite this article: Gianfranco Corradi (2004) A method for nonsmooth optimization problems, International Journal of Computer Mathematics, 81:6, 693-705, DOI: 10.1080/0020716031000148197 To link to this article: http://dx.doi.org/10.1080/0020716031000148197 PLEASE SCROLL DOWN FOR ARTICLE Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content. This article may be used for research, teaching, and private study purposes. 
Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms- and-conditions




International Journal of Computer Mathematics, Vol. 81, No. 6, June 2004, pp. 693–705

A METHOD FOR NONSMOOTH OPTIMIZATION PROBLEMS

GIANFRANCO CORRADI∗

Faculty of Economics, University of Rome "La Sapienza", Via del Castro Laurenziano 9, 00161 Rome, Italy

(Received 7 January 2003)

We introduce a method for solving a nonsmooth optimization problem. The algorithm constructs a sequence {x_k} where x_{k+1} = x̄ and x̄ is a solution of a variational inequality problem and satisfies an Armijo-type condition. A convergence theorem for the algorithm is established and some numerical results are reported.

Keywords: Nonsmooth optimization; Unconstrained optimization; Variational inequality

C.R. Categories: G.1.6

1 INTRODUCTION

We consider in this paper an algorithm to solve

min_x {f(x) | x ∈ R^n}, (1)

where f: R^n → R^1 is a convex, not necessarily differentiable, function. Our algorithm is based on the solution of a variational inequality problem. The algorithm constructs a sequence {x_k} in R^n, and at the kth iteration a point x̄ is considered, where x̄ is a solution of the variational inequality and satisfies an Armijo-type condition, that is,

f(x_k) − f(x̄) − ε_k + σv_k ≥ 0,

where ε_k > 0, lim_{k→∞} ε_k = 0, and v_k ≤ 0, lim_{k→∞} v_k = 0. Throughout the paper ⟨·, ·⟩ and |·| denote the scalar product and the Euclidean norm in R^n, respectively; further, ∂f(x) denotes the subdifferential of f at x and each element g(x) ∈ ∂f(x) denotes a subgradient. We use x^i to denote the ith component of the vector x ∈ R^n. We also define |y|²_G = ⟨y, Gy⟩, where y ∈ R^n, G ∈ R^{n×n}, and G is a symmetric and positive definite matrix. Finally, if x ∈ R^n and y ∈ R^m, then we also denote the stacked column vector by (x, y)^T.
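As a small illustration of the weighted-norm notation (our sketch, not from the paper), |y|²_G can be computed directly from its definition ⟨y, Gy⟩; the matrix and vector below are arbitrary example data:

```python
# Illustration of |y|_G^2 = <y, G y> for a symmetric positive definite G.
# Pure Python lists; no external libraries.

def weighted_norm_sq(y, G):
    """Return <y, G y>, with y a list and G a list of rows."""
    Gy = [sum(G[i][j] * y[j] for j in range(len(y))) for i in range(len(y))]
    return sum(y[i] * Gy[i] for i in range(len(y)))

G = [[2.0, 0.0], [0.0, 3.0]]   # diagonal, hence symmetric positive definite
y = [1.0, 2.0]
print(weighted_norm_sq(y, G))  # 2*1^2 + 3*2^2 = 14.0
```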

∗ E-mail: [email protected]

ISSN 0020-7160 print; ISSN 1029-0265 online © 2004 Taylor & Francis Ltd. DOI: 10.1080/0020716031000148197


DEFINITION 1 Let f: C ⊆ R^n → R^n. The variational inequality problem is to determine a vector x̄ ∈ C such that

⟨f(x̄), x − x̄⟩ ≥ 0 for every x ∈ C, (2)

where f is a given continuous function and C is a given closed convex set. We denote problem (2) by VI(f, C).

PROPOSITION 1 Let x̄ be a solution to the optimization problem

min_x {f(x) | x ∈ C},

where f: R^n → R^1 is continuously differentiable and C is a closed and convex set. Then x̄ is a solution of the variational inequality problem

⟨∇f(x̄), x − x̄⟩ ≥ 0 for every x ∈ C,

where ∇f(·) is the gradient of f(·).

PROPOSITION 2 Let f: R^n → R^1 be a convex function and let x̄ be a solution to VI(∇f, C). Then x̄ is a solution of the optimization problem min_x {f(x) | x ∈ C}.
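Propositions 1 and 2 can be checked on a one-dimensional example (our illustration, not from the paper): for f(x) = (x − 2)² over C = [0, 1] the minimizer is x̄ = 1, and the variational inequality ⟨∇f(x̄), x − x̄⟩ ≥ 0 holds for every x ∈ C:

```python
# Numerical check of Proposition 1 on a toy smooth problem (illustrative only):
# min (x - 2)^2 over C = [0, 1]; the minimizer xbar = 1 solves VI(grad f, C).

def fprime(x):
    return 2.0 * (x - 2.0)          # gradient of f(x) = (x - 2)^2

xbar = 1.0
# <f'(xbar), x - xbar> >= 0 for all x in C, tested on a grid of C = [0, 1]
vi_holds = all(fprime(xbar) * (t / 100.0 - xbar) >= 0.0 for t in range(101))
print(vi_holds)  # True: f'(1) = -2 and x - 1 <= 0 on [0, 1]
```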

2 AN ALGORITHM FOR NONSMOOTH UNCONSTRAINED PROBLEMS

We now introduce an algorithm for solving problem (1). We assume that at each y ∈ R^n we can compute f(y) and an arbitrary subgradient g(y) ∈ ∂f(y) that defines the linearization of f(·) at y:

f̄(x; y) = f(y) + ⟨g(y), x − y⟩ for every x ∈ R^n. (3)

Our algorithm constructs a sequence {x_k} in R^n, and at the kth iteration the following polyhedral approximation to f(·) is considered:

f_k(x) = max{f̄(x; x_i) | i = 1, …, k} = max{f(x_i) + ⟨g(x_i), x − x_i⟩ | i = 1, …, k}. (4)

It results that f(x_i) = f_k(x_i) and f(x) ≥ f_k(x) for every x. Then we consider a solution x̄ to the following problem:

min {f_k(x) + (1/2)w⟨x − x_k, M(x − x_k)⟩ | x ∈ R^n}, (5)

where 0 ≤ w ∈ R^1 and M ∈ R^{n×n} is a symmetric positive semidefinite matrix. The above subproblem may be interpreted as a local approximation to problem (1). We only note that the penalty term (1/2)w⟨x − x_k, M(x − x_k)⟩ is introduced since the problem min{f_k(x) | x ∈ R^n} may have no solution. We note that problem (5) is equivalent to the following:

min_{x,u} (1/2)w⟨x − x_k, M(x − x_k)⟩ + u
s.t. f(x_i) + ⟨g(x_i), x − x_i⟩ ≤ u, i = 1, …, k. (6)
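A minimal sketch (ours, not the authors' Fortran implementation) of the polyhedral model (4) and the regularized objective of (5), with cuts stored as triples and a diagonal M:

```python
# Cutting-plane model f_k of (4) and the regularized objective of (5),
# using plain Python lists; toy data in R^1 for f(x) = |x|.

def model_fk(x, cuts):
    """cuts: list of (fi, gi, xi) with fi = f(xi), gi a subgradient at xi.
    Returns max_i fi + <gi, x - xi>, the polyhedral lower model of f."""
    return max(fi + sum(g * (a - b) for g, a, b in zip(gi, x, xi))
               for fi, gi, xi in cuts)

def regularized_objective(x, xk, cuts, w, M):
    """f_k(x) + (w/2) <x - xk, M (x - xk)>, with M given as a diagonal."""
    quad = sum(m * (a - b) ** 2 for m, a, b in zip(M, x, xk))
    return model_fk(x, cuts) + 0.5 * w * quad

# cuts of f(x) = |x| generated at x = -1 and x = 1
cuts = [(1.0, [-1.0], [-1.0]), (1.0, [1.0], [1.0])]
print(model_fk([0.0], cuts))                                  # 0.0
print(regularized_objective([0.0], [1.0], cuts, 1.0, [1.0]))  # 0.0 + 0.5 = 0.5
```

At x = 0 both cuts evaluate to 0, so the model already matches f there; the quadratic term then penalizes the distance from the current iterate x_k = 1.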


Remark 1 Let x̄, ū be a solution to (6). We note that if f(x̄) < f(x_k), then the vector d = x̄ − x_k is a descent direction of f(·) at x_k. In fact, since f(·) is a convex function, for all t ∈ [0, 1] it follows that

f[(1 − t)x_k + t x̄] = f(x_k + td)
≤ (1 − t)f(x_k) + tf(x_k + d)
= f(x_k) + t[f(x_k + d) − f(x_k)]
= f(x_k) + t[f(x̄) − f(x_k)].

It follows that f(x_k + td) < f(x_k) for all t ∈ (0, 1]. In particular we are interested in finding a point x̄

which satisfies an Armijo-type condition, that is,

f(x̄) ≤ f(x_k) − ε_k − σ{f(x_k) − f_k(x̄)}, (7)

where ε_k > 0, lim_{k→∞} ε_k = 0, and σ > 0 is sufficiently small. We now assume that there exists a value w̄ of w such that the solution x̄ to (6) also satisfies (7).

Remark 2 Let C = {(x, u)^T | f(x_i) + ⟨g(x_i), x − x_i⟩ ≤ u, i = 1, …, k}. By Proposition 1 it follows that, if C is a closed and convex set, then problem (6) is equivalent to finding a point (x̄, ū)^T ∈ C such that

⟨(wM(x̄ − x_k), 1)^T, (x − x̄, u − ū)^T⟩ ≥ 0 ∀(x, u)^T ∈ C ⊂ R^{n+1}. (8)

Since w̄ is a value for which (7) is satisfied, we are interested in the problem of finding a vector (x̄, ū, w̄)^T ∈ C × D, where D = {w ∈ R^1 | w ≥ w_min}, w_min ≥ 0, such that

⟨(w̄M(x̄ − x_k), 1, f(x_k) − f(x̄) − ε_k − σ(f(x_k) − f_k(x̄)))^T, (x − x̄, u − ū, w − w̄)^T⟩ ≥ 0 (9)

∀(x, u, w)^T ∈ C × D, where f_k(x̄) = max{f(x_i) + ⟨g(x_i), x̄ − x_i⟩ | i = 1, …, k}. We note that if we set

F(x, u, w) = (wM(x − x_k), 1, f(x_k) − f(x) − ε_k − σ(f(x_k) − f_k(x)))^T, (10)

then from (9) we have

⟨F(x̄, ū, w̄), (x − x̄, u − ū, w − w̄)^T⟩ ≥ 0 ∀(x, u, w)^T ∈ C × D.

Note that if we set x = x̄, u = ū, w ∈ D, then from (9) it follows that

[f(x_k) − f(x̄) − ε_k − σ(f(x_k) − f_k(x̄))](w − w̄) ≥ 0 ∀w ∈ D,

from which

f(x_k) − f(x̄) − ε_k − σ(f(x_k) − f_k(x̄)) ≥ 0, (11)


that is, (7). On the other hand, if we set w = w̄ and (x, u)^T ∈ C, from (9) we obtain (8). Finally, we note that the variational inequality problem (9) is crucial for our algorithm.

Remark 2 suggests the following main algorithm for solving problem (1).

ALGORITHM 1

Step 1: Select x_1 ∈ R^n, u_1 ∈ R^1, w_1 ∈ R^1, {ε_k} ⊂ R^1 such that 0 < ε_{k+1} < ε_k, lim_{k→∞} ε_k = 0, and 0 < σ ∈ R^1.
Step 2: Set k = 1.
Step 3: Select g_k = g(x_k) ∈ ∂f(x_k).
Step 4: Solve the following variational inequality: find (x̄_k, ū_k, w̄_k)^T ∈ C × D ⊂ R^{n+2} such that

⟨(w̄_k M(x̄_k − x_k), 1, f(x_k) − f(x̄_k) − ε_k − σ(f(x_k) − f_k(x̄_k)))^T, (x − x̄_k, u − ū_k, w − w̄_k)^T⟩ ≥ 0

∀(x, u, w)^T ∈ C × D. Comment: we note that for solving the problem in Step 4 we have considered, for our numerical results, the initial point (x_k, u_k, w_k)^T.
Step 5: Set x_{k+1} = x̄_k, u_{k+1} = ū_k, w_{k+1} = w̄_k.
Step 6: Set k = k + 1 and go to Step 3.
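The outer loop above can be sketched as follows (a schematic illustration, not the authors' code, under simplifying assumptions: R^1, the termination test (38), and a grid-search stand-in for Step 4 via the equivalent optimization form given below in Remark 3; the names `toy_solve_vi` and `algorithm1` and all default parameters are ours):

```python
# Sketch of Algorithm 1 on a toy 1-D problem, f(x) = |x - 3|.

def model(x, cuts):
    # polyhedral model f_k of (4): cuts are (f(xi), g(xi), xi) in R^1
    return max(fi + gi * (x - xi) for fi, gi, xi in cuts)

def toy_solve_vi(xk, cuts, epsk, sigma, w=1.0):
    # Stand-in for Step 4: grid-search the regularized model of (5).
    # (This toy ignores epsk and sigma; a faithful Step 4 also adjusts w
    # so that the Armijo-type condition (7) holds.)
    grid = [xk + (t - 500) / 100.0 for t in range(1001)]
    xbar = min(grid, key=lambda x: model(x, cuts) + 0.5 * w * (x - xk) ** 2)
    return xbar, model(xbar, cuts), w

def algorithm1(x1, f, subgrad, solve_vi, eps1=1.0, beta=0.5, sigma=0.3,
               eps0=1e-6, max_iter=100):
    xk, epsk, cuts = x1, eps1, []
    for _ in range(max_iter):
        cuts.append((f(xk), subgrad(xk), xk))               # Step 3: new cut
        xbar, ubar, wbar = solve_vi(xk, cuts, epsk, sigma)  # Step 4
        if abs(f(xbar) - f(xk)) <= eps0:                    # test (38)
            return xbar
        xk, epsk = xbar, beta * epsk                        # Step 5
    return xk

f = lambda x: abs(x - 3.0)                # toy convex nonsmooth objective
g = lambda x: 1.0 if x >= 3.0 else -1.0   # a subgradient of f
xstar = algorithm1(0.0, f, g, toy_solve_vi)
print(xstar)  # 3.0
```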

Remark 3 Note that we can replace Step 4 of Algorithm 1 by Step 4′.

Step 4′: Solve the following optimization problem:

min_{x,u,w} (1/2)w⟨x − x_k, M(x − x_k)⟩ + u
s.t. f(x_i) + ⟨g(x_i), x − x_i⟩ ≤ u, i = 1, …, k,
f(x) ≤ f(x_k) − ε_k − σ(f(x_k) − f_k(x)).

Note that this is a nonsmooth optimization problem; we have therefore replaced it by a variational inequality problem, which we solve by making use of a projection method.

Remark 4 Note that when the iteration index k is sufficiently large, it becomes prohibitive to solve the variational inequality problem. So, to limit storage, we may adopt a fairly standard technique [1]. When k reaches a maximal value k_max, one constraint is deleted from the constraints f(x_i) + ⟨g(x_i), x − x_i⟩ ≤ u, i = 1, …, k_max, and one is introduced. Among those that are not active at (x_{k+1}, u_{k+1})^T, we delete the constraint that has the largest error at x_{k+1}, given by

max_i {f(x_{k+1}) − [f(x_i) + ⟨g(x_i), x_{k+1} − x_i⟩]}.
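The deletion rule above can be sketched as follows (our illustration in R^1; the helper name and example data are ours, not the paper's):

```python
# Storage-limiting rule of Remark 4: among the cuts NOT active at
# (x_{k+1}, u_{k+1}), delete the one with the largest error
# f(x_{k+1}) - [f(x_i) + <g(x_i), x_{k+1} - x_i>].

def drop_worst_cut(cuts, x_next, f_next, active):
    """cuts: list of (fi, gi, xi) in R^1; active: indices that must be kept."""
    def error(i):
        fi, gi, xi = cuts[i]
        return f_next - (fi + gi * (x_next - xi))
    candidates = [i for i in range(len(cuts)) if i not in active]
    worst = max(candidates, key=error)
    return [c for i, c in enumerate(cuts) if i != worst]

# cuts of f(x) = |x| taken at x = -1, 2, 0; errors evaluated at x_{k+1} = 0.5
cuts = [(1.0, -1.0, -1.0), (2.0, 1.0, 2.0), (0.0, 1.0, 0.0)]
kept = drop_worst_cut(cuts, x_next=0.5, f_next=0.5, active={2})
print(len(kept), cuts[0] in kept)  # 2 False: the cut at x = -1 had error 1.0
```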

We now prove the convergence of the sequence {xk} generated by Algorithm 1.

LEMMA 1 Let {x_k} be the sequence generated by Algorithm 1. If we set

v_k = f_k(x_{k+1}) − f(x_k),

then v_k ≤ 0 for all k.


Proof If we set w = w̄_k and (x, u)^T ∈ C, from Step 4 of Algorithm 1 it follows that

⟨(w̄_k M(x̄_k − x_k), 1)^T, (x − x̄_k, u − ū_k)^T⟩ ≥ 0 (12)

for all (x, u)^T ∈ C. By Proposition 1 it follows that, if C is a closed and convex set, then (12) is equivalent to the following problem:

min_{x,u} (1/2)w̄_k⟨x − x_k, M(x − x_k)⟩ + u
s.t. f(x_i) + ⟨g(x_i), x − x_i⟩ ≤ u, i = 1, …, k,

and this is equivalent to the problem

min_{x∈R^n} {f_k(x) + (1/2)w̄_k⟨x − x_k, M(x − x_k)⟩}.

Since x_{k+1} = x̄_k and x̄_k is a solution of (12), x_{k+1} is a solution of the above problem, for which

0 ∈ ∂f_k(x_{k+1}) + w̄_k M(x_{k+1} − x_k).

Then we can assume that there exists p_k ∈ ∂f_k(x_{k+1}) such that

0 = p_k + w̄_k M(x_{k+1} − x_k).

If we set d_k = x_{k+1} − x_k, it follows that

p_k + w̄_k M d_k = 0. (13)

Now we set

f̄_k(x) = f_k(x_{k+1}) + ⟨p_k, x − x_{k+1}⟩; (14)

it results that

f̄_k(x_k) = f_k(x_{k+1}) + ⟨p_k, x_k − x_{k+1}⟩,

from which

f̄_k(x_k) − f(x_k) = f_k(x_{k+1}) − f(x_k) + ⟨p_k, x_k − x_{k+1}⟩.

If we set

α_k^p = f(x_k) − f̄_k(x_k), (15)

then

−α_k^p = v_k + ⟨p_k, x_k − x_{k+1}⟩ = v_k − ⟨p_k, d_k⟩,

and, by (13), if we assume M is a symmetric and positive definite matrix,

−α_k^p = v_k + (1/w̄_k)⟨p_k, M^{-1}p_k⟩ = v_k + (1/w̄_k)|p_k|²_{M^{-1}};


it follows that

v_k = −(1/w̄_k)|p_k|²_{M^{-1}} − α_k^p = −((1/w̄_k)|p_k|²_{M^{-1}} + α_k^p). (16)

On the other hand, since p_k ∈ ∂f_k(x_{k+1}), it follows that

f_k(x) ≥ f_k(x_{k+1}) + ⟨p_k, x − x_{k+1}⟩ = f̄_k(x); (17)

by (17) we have

f(x) ≥ f_k(x) ≥ f̄_k(x) for all x ∈ R^n, (18)

from which

0 ≤ f(x) − f_k(x) ≤ f(x) − f̄_k(x);

hence

α_k^p = f(x_k) − f̄_k(x_k) ≥ 0 (19)

and, from (16),

v_k ≤ 0 for all k. (20)

LEMMA 2 Assume that x̂ is an accumulation point of {x_k}, where {x_k} is the sequence constructed by Algorithm 1. Then lim_{k→+∞} v_k = 0.

Proof From Step 4 of Algorithm 1 it follows that the sequence {f(x_k)} is monotone decreasing. By continuity of f(·),

lim_{k→+∞, k∈K} f(x_k) = f(x̂), where K ⊂ {1, 2, 3, …};

hence f(x̂) is an accumulation point of {f(x_k)}. It follows that

lim_{k→+∞} f(x_k) = f(x̂), for which lim_{k→+∞} [f(x_k) − f(x_{k+1})] = 0.

From Remark 2 (see (11)) we have

f(x_k) − f(x_{k+1}) ≥ ε_k − σv_k > 0,

from which lim_{k→+∞} v_k = 0.

LEMMA 3 Let {x_k} be the sequence generated by Algorithm 1. Then

f(x) ≥ f(x_k) + ⟨p_k, x − x_k⟩ − 2α_k^p for all x ∈ R^n, (21)

where p_k and α_k^p are defined in (13) and (15).


Proof We have, by (15),

f(x_k) + ⟨p_k, x − x_k⟩ − α_k^p = f(x_k) + ⟨p_k, x − x_k⟩ − f(x_k) + f̄_k(x_k)
= f̄_k(x_k) + ⟨p_k, x − x_k⟩
≤ f_k(x_k) + ⟨p_k, x − x_k⟩ [by (18)]
= f_k(x_{k+1}) − f_k(x_{k+1}) + f_k(x_k) + ⟨p_k, x − x_{k+1} + x_{k+1} − x_k⟩
= f_k(x_{k+1}) + ⟨p_k, x − x_{k+1}⟩ + f_k(x_k) − [f_k(x_{k+1}) + ⟨p_k, x_k − x_{k+1}⟩]
≤ f_k(x) + f_k(x_k) − [f_k(x_{k+1}) + ⟨p_k, x_k − x_{k+1}⟩] [by (17)]
≤ f(x) + f_k(x_k) − [f_k(x_{k+1}) + ⟨p_k, x_k − x_{k+1}⟩] [by (18)].

Hence

f(x_k) + ⟨p_k, x − x_k⟩ − α_k^p ≤ f(x) + f_k(x_k) − [f_k(x_{k+1}) + ⟨p_k, x_k − x_{k+1}⟩]. (22)

If we set

ᾱ_k^p = f_k(x_k) − [f_k(x_{k+1}) + ⟨p_k, x_k − x_{k+1}⟩],

from (22) it follows that

f(x) ≥ f(x_k) + ⟨p_k, x − x_k⟩ − α_k^p − ᾱ_k^p; (23)

on the other hand, by (14) and (18),

ᾱ_k^p = f_k(x_k) − f̄_k(x_k) ≤ f(x_k) − f̄_k(x_k) = α_k^p,

for which

ᾱ_k^p ≤ α_k^p. (24)

By (23) and (24) it follows that

f(x) ≥ f(x_k) + ⟨p_k, x − x_k⟩ − 2α_k^p.

LEMMA 4 Let {x_k} be the sequence constructed by Algorithm 1. Then

f(x) ≥ f(x_k) − (|w̄_k v_k|^{1/2}/c_2^{1/2})|x − x_k| + 2v_k for all x ∈ R^n, (25)

where c_2 > 0 is a constant.

Proof By (16) and (21) we have

f(x) ≥ f(x_k) + ⟨p_k, x − x_k⟩ + 2v_k + (2/w̄_k)|p_k|²_{M^{-1}},


and if we assume w̄_k ≥ w_min > 0, it follows that

f(x) ≥ f(x_k) + ⟨p_k, x − x_k⟩ + 2v_k.

On the other hand, |⟨p_k, x − x_k⟩| ≤ |p_k||x − x_k|, hence

−|p_k||x − x_k| ≤ ⟨p_k, x − x_k⟩ ≤ |p_k||x − x_k|,

for which

f(x) ≥ f(x_k) − |p_k||x − x_k| + 2v_k. (26)

Since (see (16))

−|p_k|²_{M^{-1}} = w̄_k v_k + w̄_k α_k^p ≥ w̄_k v_k,

then

|p_k|²_{M^{-1}} ≤ −w̄_k v_k = |w̄_k v_k|,

hence

|p_k|_{M^{-1}} ≤ |w̄_k v_k|^{1/2}. (27)

Furthermore,

c_2|p_k|² ≤ ⟨p_k, M^{-1}p_k⟩ ≤ c_1|p_k|²,

where c_1 > 0 and c_2 > 0 are two constants. Hence

|p_k|²_{M^{-1}} = ⟨p_k, M^{-1}p_k⟩ ≥ c_2|p_k|², (28)

and, by (27) and (28),

|p_k| ≤ |p_k|_{M^{-1}}/c_2^{1/2} ≤ |w̄_k v_k|^{1/2}/c_2^{1/2}. (29)

From (26) and (29) it follows that

f(x) ≥ f(x_k) − (|w̄_k v_k|^{1/2}/c_2^{1/2})|x − x_k| + 2v_k.

THEOREM 1 Assume that the convex function f(·) has a nonempty bounded set of minima X. Then any accumulation point x̂ of {x_k} minimizes f(·).

Proof By (25) and Lemma 2 we have

lim_{k∈K, k→+∞} f(x) ≥ lim_{k∈K, k→+∞} {f(x_k) − (|w̄_k v_k|^{1/2}/c_2^{1/2})|x − x_k| + 2v_k},

where K ⊂ {1, 2, 3, …}. Since {w̄_k} is bounded, it follows that

f(x) ≥ f(x̂) for all x ∈ R^n.

LEMMA 5 There exists a constant c_5 > 0 such that

|M^{-1}p_k|² ≤ c_5|p_k|²_{M^{-1}}.


Proof We have

c_3|p_k|² ≤ ⟨p_k, M^{-1}p_k⟩, c_3 > 0, (30)
⟨M^{-1}p_k, M^{-1}p_k⟩ = ⟨p_k, M^{-2}p_k⟩ ≤ c_4|p_k|², c_4 > 0. (31)

Let c_5 > 0 be such that c_4 ≤ c_3c_5; then, by (30) and (31),

|M^{-1}p_k|² ≤ c_4|p_k|² ≤ c_3c_5|p_k|² ≤ c_5⟨p_k, M^{-1}p_k⟩ = c_5|p_k|²_{M^{-1}}.

LEMMA 6 Let {x_k} be the sequence constructed by Algorithm 1. Assume that

f(x_k) ≥ f(x̃) for some fixed x̃ ∈ R^n and all k. (32)

Then

Σ_{k=1}^∞ |v_k| ≤ (f(x_1) − f(x̃))/σ. (33)

Proof By Steps 4 and 5 of Algorithm 1 and Remark 2 it follows that

σ|v_k| ≤ f(x_k) − f(x_{k+1}),

from which

σ Σ_{i=1}^k |v_i| ≤ f(x_1) − f(x_{k+1}) ≤ f(x_1) − f(x̃).

Thus, in the limit, we obtain (33).

LEMMA 7 Let {x_k} be the sequence generated by Algorithm 1. Assume that (32) holds and that there exists a constant c_6 > 0 such that

⟨M^{-1}p_k, x̃ − x_k⟩ ≤ 2c_6α_k^p. (34)

Then the sequence {x_k} is bounded.

Proof We have

|x̃ − x_{k+1}|² = |x̃ − x_k|² − 2⟨x̃ − x_k, x_{k+1} − x_k⟩ + |x_{k+1} − x_k|²
= |x̃ − x_k|² − 2⟨x̃ − x_k, d_k⟩ + |x_{k+1} − x_k|²
= |x̃ − x_k|² + 2⟨x̃ − x_k, M^{-1}p_k/w̄_k⟩ + |x_{k+1} − x_k|²
≤ |x̃ − x_k|² + |x_{k+1} − x_k|² + (4c_6/w̄_k)α_k^p,

then

|x̃ − x_{k+1}|² ≤ |x̃ − x_k|² + |x_{k+1} − x_k|² + (4c_6/w̄_k)α_k^p. (35)


On the other hand,

Σ_{k=1}^∞ {|x_{k+1} − x_k|² + (4c_6/w̄_k)α_k^p}
= Σ_{k=1}^∞ {|M^{-1}p_k|²/w̄_k² + (4c_6/w̄_k)α_k^p}
≤ Σ_{k=1}^∞ {c_5|p_k|²_{M^{-1}}/w̄_k² + (4c_6/w̄_k)α_k^p} (by Lemma 5)
≤ Σ_{k=1}^∞ {c_5|p_k|²_{M^{-1}}/(w̄_k w_min) + (4c_6/w_min)α_k^p} (if w̄_k ≥ w_min > 0)
≤ (c_7/w_min) Σ_{k=1}^∞ {|p_k|²_{M^{-1}}/w̄_k + α_k^p} (with c_7 = max{c_5, 4c_6})
= (c_7/w_min) Σ_{k=1}^∞ |v_k| < ∞ [by (16)].

The last relation follows from Lemma 6, for which

Σ_{k=1}^∞ {|x_{k+1} − x_k|² + (4c_6/w̄_k)α_k^p} < ∞. (36)

By (35) it follows that, for all n ≥ 1,

|x̃ − x_{k+n}|² ≤ |x̃ − x_k|² + Σ_{i=k}^{k+n−1} {|x_{i+1} − x_i|² + (4c_6/w̄_i)α_i^p}, (37)

and from (36) and (37) it follows that |x̃ − x_{k+n}|² is bounded for all n ≥ 1; hence {x_k} is bounded.

THEOREM 2 Let X = ∅. Then each sequence {x_k} constructed by Algorithm 1 is minimizing, that is,

lim_{k→+∞} f(x_k) = inf{f(x) | x ∈ R^n}.

Proof Let {z_i} be a minimizing sequence, that is, f(z_i) → inf{f(x) | x ∈ R^n} and f(z_i) > f(z_{i+1}) for all i. To obtain a contradiction, assume that for some fixed index i, f(z_i) ≤ f(x_k) for all k. Then by Lemma 7 the sequence {x_k} is bounded. If x̂ is an accumulation point of {x_k}, then by Theorem 1 x̂ minimizes f on R^n, for which X ≠ ∅, and this is a contradiction; hence {x_k} is a minimizing sequence.

3 NUMERICAL RESULTS

We have implemented Algorithm 1 in Fortran 77 and have obtained our numerical results using an IBM PC X240 Pentium III and the g77 compiler. We have considered three test problems:

1. Shor's minimax problem [2] with

f(x) = max{b_i Σ_{j=1}^n (x_j − a_ij)² | i = 1, …, 10},

n = 5, optimal value f(x̂) = 22.60016, starting point x_1 = (0, 0, 0, 0, 1)^T.


2. Lemarechal's minimax problem (maxquad) [3] with

f(x) = max{⟨x, A_i x⟩ − ⟨x, b_i⟩ | i = 1, …, 5},

n = 10, optimal value f(x̂) = −0.8414083 [4], starting point x_1^i = 1, i = 1, …, n.

3. Goffin's polyhedral problem [5] with

f(x) = n max{x^i | i = 1, …, n} − Σ_{i=1}^n x^i,

n = 50, optimal value f(x̂) = 0, starting point x_1^i = i − (n + 1)/2, i = 1, …, n.
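Goffin's problem is fully specified in the text, so it can be coded directly (our sketch; the function names are ours):

```python
# Goffin's polyhedral test function: f(x) = n * max_i x_i - sum_i x_i,
# which attains its optimal value 0 exactly when all components are equal.

def goffin_f(x):
    n = len(x)
    return n * max(x) - sum(x)

def goffin_subgrad(x):
    # gradient of the active piece n*x_j - sum(x): n*e_j - (1, ..., 1)
    n = len(x)
    j = max(range(n), key=lambda i: x[i])
    return [n * (i == j) - 1.0 for i in range(n)]

n = 50
x1 = [(i + 1) - (n + 1) / 2.0 for i in range(n)]  # the paper's starting point
print(goffin_f(x1))         # 50 * 24.5 - 0.0 = 1225.0
print(goffin_f([0.0] * n))  # 0.0, the optimal value reported in the text
```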

Remark 5 We note that from Remark 2 it follows that the variational problem of Step 4 of Algorithm 1, for every fixed k, is equivalent to finding ȳ for which ⟨F(ȳ), y − ȳ⟩ ≥ 0 for all y ∈ C × D such that Ay ≤ c, where A ∈ R^{(k+1)×(n+2)}, y = (x, u, w)^T, ȳ = (x̄, ū, w̄)^T. For solving the variational problem we make use of the following algorithm [6] (projection algorithm).

Let 0 < γ ∈ R^1 be sufficiently small, let G ∈ R^{(n+2)×(n+2)} be a symmetric and positive definite matrix, and let y_1 ∈ R^{n+2}.

ALGORITHM 2

Step 1: Set t = 1.
Step 2: Compute y_{t+1}, the solution of the following quadratic problem:

min_{Ay ≤ c} (1/2)⟨y, Gy⟩ + ⟨γF(y_t) − Gy_t, y⟩.

Step 3: If |y_{t+1} − y_t| ≤ ε, stop; else set t = t + 1 and go to Step 2.
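Under simplifying assumptions (G = I and a box feasible set, so that Step 2's quadratic subproblem reduces to a projected step y_{t+1} = Proj_C(y_t − γF(y_t))), Algorithm 2 can be sketched as follows; the toy operator F and all names are ours, not the paper's:

```python
# Projection method for a variational inequality <F(ybar), y - ybar> >= 0
# over a box C = [lo, hi]^n, with G = I so Step 2 is a projected step.

def project_box(y, lo, hi):
    return [min(max(v, lo), hi) for v in y]

def projection_method(F, y0, lo, hi, gamma=0.5, eps=1e-10, tmax=10000):
    y = y0
    for _ in range(tmax):
        step = [a - gamma * b for a, b in zip(y, F(y))]
        y_next = project_box(step, lo, hi)
        if max(abs(a - b) for a, b in zip(y_next, y)) <= eps:  # Step 3 test
            return y_next
        y = y_next
    return y

# toy VI: F(y) = y - b; its solution over the box is the projection of b
b = [2.0, -1.0, 0.3]
y = projection_method(lambda v: [a - c for a, c in zip(v, b)], [0.0] * 3,
                      lo=0.0, hi=1.0)
print([round(v, 6) for v in y])  # [1.0, 0.0, 0.3]
```

For this monotone F the iteration is a contraction toward the projection of b onto the box, which is exactly the VI solution.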

Remark 6 We now consider some remarks on Algorithm 2. Let {x_k} be the sequence constructed by Algorithm 1. We note that if the stopping criterion |y_{t+1} − y_t| ≤ ε is satisfied and f(y_{t+1}) < f(x_k), we consider a serious step, setting x_{k+1} = y_{t+1}. If the condition |y_{t+1} − y_t| ≤ ε is not satisfied after a fixed maximum number of iterations, say t_max, then, if f(y_{t_max}) < f(x_k), we again consider a serious step; else (f(y_{t_max}) ≥ f(x_k)) we consider a null step, setting x_{k+1} = x_k. In this last case we consider the point y_{t_max} and update the set C (see Remark 4).

Remark 7 We note that the termination criterion for Algorithm 1 is

|x_{k+1} − x_k| ≤ ε_0 or |f(x_{k+1}) − f(x_k)| ≤ ε_0. (38)

In Tables I–III we report our numerical results for the test problems and for different choices of parameters. In these tables, ε_1 and β define the sequence {ε_k} (we set ε_{k+1} = βε_k), σ is defined in Algorithm 1, ε_0 is defined in (38), ε is defined in Remark 6, γ is defined in Algorithm 2, k_max is defined in Remark 4, iter is the final number of iterations, and f is the objective function value


TABLE I Problem 1.

ε1  β    kmax  σ    γ      ε      ε0     iter  f         c9
50  0.5  50    0.3  10^-6  10^-4  10^-6  40    22.60018  10^-3
50  0.5  45    0.2  10^-6  10^-4  10^-6  38    22.60021  10^-3
10  0.3  30    0.4  10^-6  10^-5  10^-6  27    22.60086  0.05
1   0.2  30    1.5  10^-6  10^-5  10^-6  25    22.60083  0.05

Note: For this problem always c8 = 1.

TABLE II Problem 2.

ε1  β    kmax  σ      γ      ε      ε0     iter  f            c8     c9
1   0.5  40    0.4    10^-7  10^-5  10^-6  81    -0.841396    0.5    5 × 10^-5
20  0.3  35    0.3    10^-8  10^-5  10^-6  121   -0.84140759  1      10^-5
1   0.5  35    10^-3  10^-7  10^-5  10^-6  95    -0.841404    0.5    5 × 10^-5

TABLE III Problem 3.

ε1  β    kmax  σ      γ      ε      ε0     iter  f             c8     c9
10  0.3  115   10^-5  10^-8  10^-5  10^-8  105   0.82 × 10^-9  1      10^-6
50  0.1  115   10^-5  10^-8  10^-5  10^-8  99    0.36 × 10^-8  0.001  10^-6

at termination. For our numerical results we have considered M = diag(v) (see (5)), where v^i = c_8 for all i and c_8 > 0 is a constant, and G = diag(g) (see Algorithm 2), where g^i = c_9 for all i and c_9 > 0 is a constant.

Remark 8 We note that our method performs quite well and can compete with other methods [4, 7] in the number of iterations. Our numerical results also show that the approach presented in this note is promising. A drawback of our method is that in Step 4 of Algorithm 1 we solve a variational inequality, so, for every fixed iteration k, we compute the objective function value several times. On the other hand, our method computes the variable w such that an Armijo-type condition is satisfied in an automatic way, unlike other methods [7, 8], in which particular procedures for computing the parameter w are defined. We also note that, since a proper range of values of the parameter w may be unknown in a practical situation, an empirical procedure for computing w may be inadequate.

4 CONCLUSIONS

Further improvement of our method is expected from more sophisticated implementations of the variational problem. Also, the variational inequality may be modified by considering not only an Armijo-type condition but further conditions, for which we may consider a model with several variables w_i instead of a model with one parameter w.

Dow

nloa

ded

by [

Ein

dhov

en T

echn

ical

Uni

vers

ity]

at 0

7:56

21

Nov

embe

r 20

14

Page 14: A method for nonsmooth optimization problems

METHOD FOR NONSMOOTH OPTIMIZATION PROBLEMS 705

References

[1] Lemarechal, C. and Sagastizabal, C. (1997). Variable metric bundle methods: from conceptual to implementable forms. Mathematical Programming, 76, 393–410.
[2] Shor, N. Z. (1985). Minimization Methods for Non-Differentiable Functions. Springer-Verlag, Berlin, p. 137.
[3] Lemarechal, C. and Mifflin, R. (1978). Nonsmooth Optimization. Pergamon Press, Oxford, pp. 151–153.
[4] Vlcek, J. and Luksan, L. (2001). Globally convergent variable metric method for nonconvex nondifferentiable unconstrained minimization. JOTA, 111(2), 407–430.
[5] Kiwiel, K. C. (1990). Proximity control in bundle methods for convex nondifferentiable minimization. Mathematical Programming, 46, 105–122.
[6] Nagurney, A. (1993). Network Economics. Kluwer Academic Publishers, London, pp. 41–44.
[7] Fukushima, M. (1984). A descent algorithm for nonsmooth convex optimization. Mathematical Programming, 30, 163–175.
[8] Kiwiel, K. C. (1985). Methods of Descent for Nondifferentiable Optimization. Lecture Notes in Mathematics 1133, Springer, Berlin.
