Inexact SQP Methods for Equality Constrained Optimization


Inexact SQP Methods for Equality Constrained Optimization

Frank Edward Curtis
Department of IE/MS, Northwestern University

with Richard Byrd and Jorge Nocedal

November 6, 2006

INFORMS Annual Meeting 2006

Outline

• Introduction
  – Problem formulation
  – Motivation for inexactness
  – Unconstrained optimization and nonlinear equations
• Algorithm Development
  – Step computation
  – Step acceptance
• Global Analysis
  – Merit function and sufficient decrease
  – Satisfying first-order conditions
• Conclusions/Final remarks


Equality constrained optimization

Goal: solve the problem

  \min_x \; f(x) \quad \text{s.t.} \quad c(x) = 0

Define: the Lagrangian

  \mathcal{L}(x, \lambda) := f(x) + \lambda^T c(x)

Define: the derivatives

  g(x) := \nabla f(x), \qquad A(x) := \nabla c(x), \qquad W(x, \lambda) := \nabla^2_{xx} \mathcal{L}(x, \lambda)

Goal: solve the KKT conditions

  g(x) + A(x)^T \lambda = 0, \qquad c(x) = 0
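As a concrete check of these definitions, here is a minimal NumPy sketch; the quadratic objective and linear constraint are a hypothetical example, not from the talk.

```python
import numpy as np

# Toy instance of  min f(x) s.t. c(x) = 0  (hypothetical example):
#   f(x) = x1^2 + x2^2,  c(x) = x1 + x2 - 2
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] + x[1] - 2.0])
g = lambda x: np.array([2 * x[0], 2 * x[1]])   # g(x) := grad f(x)
A = lambda x: np.array([[1.0, 1.0]])           # A(x) := constraint Jacobian

def kkt_residual(x, lam):
    """Norm of the KKT residual (g + A^T lam, c)."""
    return np.linalg.norm(np.concatenate([g(x) + A(x).T @ lam, c(x)]))

# (1, 1) with multiplier lam = -2 satisfies the KKT conditions exactly
x_star, lam_star = np.array([1.0, 1.0]), np.array([-2.0])
print(kkt_residual(x_star, lam_star))
```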

Equality constrained optimization

Two “equivalent” step computation techniques

Algorithm: Newton’s method

  \begin{bmatrix} W & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} g + A^T \lambda \\ c \end{bmatrix}

Algorithm: the SQP subproblem

  \min_d \; g^T d + \tfrac{1}{2} d^T W d \quad \text{s.t.} \quad c + A d = 0
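The equivalence can be checked numerically: solving the Newton-KKT system yields a step that satisfies the linearized constraints and the stationarity condition of the QP. A sketch with assumed toy data:

```python
import numpy as np

# Newton-KKT step: solve
#   [W  A^T] [d    ]    [g + A^T lam]
#   [A   0 ] [delta] = -[c          ]
# Toy data (assumed, not from the talk):
W = np.array([[2.0, 0.0], [0.0, 4.0]])   # Hessian of the Lagrangian
A = np.array([[1.0, 1.0]])               # constraint Jacobian
g = np.array([1.0, -1.0])                # objective gradient
c = np.array([0.5])                      # constraint value
lam = np.array([0.0])                    # current multiplier estimate

K = np.block([[W, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, -np.concatenate([g + A.T @ lam, c]))
d, delta = sol[:2], sol[2:]

# d lands on the linearized constraints and is stationary for the QP
print(np.linalg.norm(c + A @ d))
print(np.linalg.norm(W @ d + g + A.T @ (lam + delta)))
```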

Equality constrained optimization

Two “equivalent” step computation techniques, but:

KKT matrix \begin{bmatrix} W & A^T \\ A & 0 \end{bmatrix}
• Cannot be formed
• Cannot be factored

Linear system solve
• Iterative method
• Inexactness

Unconstrained optimization

Goal: minimize a nonlinear objective

  \min_x \; f(x)

Algorithm: Newton’s method (CG)

  \nabla^2 f(x_k) \, d_k = -\nabla f(x_k)

Note: solving this system with (truncated) CG and choosing any intermediate CG iterate as the step ensures global convergence to a local solution of the NLP (Steihaug, 1983)

Note: more generally, choosing any step satisfying suitable descent and length conditions ensures global convergence
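The truncated-CG idea can be sketched as follows: a hand-rolled CG loop on an assumed positive definite Hessian, where stopping after even one inner iteration still yields a descent direction.

```python
import numpy as np

# Truncated Newton-CG sketch: run CG on  H d = -g  and stop early.
# H and g below are assumed toy data (H symmetric positive definite).
def truncated_cg(H, g, max_iter=2, tol=1e-10):
    d = np.zeros_like(g)
    r = -g - H @ d          # residual of  H d = -g
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        alpha = (r @ r) / (p @ H @ p)
        d = d + alpha * p
        r_new = r - alpha * (H @ p)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d

H = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
g = np.array([1.0, -2.0, 0.5])
d = truncated_cg(H, g, max_iter=1)   # stop after a single CG iteration
print(g @ d)                          # negative: still a descent direction
```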

Nonlinear equations

Goal: solve a nonlinear system

  F(x) = 0

Algorithm: Newton’s method

  \nabla F(x_k) \, d_k = -F(x_k)

Inexact variant: accept any step whose residual r_k satisfies

  \nabla F(x_k) \, d_k = -F(x_k) + r_k, \qquad \|r_k\| \le \kappa \|F(x_k)\|, \quad \kappa \in (0, 1)

(Dembo, Eisenstat, and Steihaug, 1982)
(Eisenstat and Walker, 1994)
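A minimal sketch of the inexact Newton condition, assuming a toy 2×2 system and a simple Richardson inner iteration standing in for a Krylov solver:

```python
import numpy as np

# Inexact Newton: solve J d = -F only until ||J d + F|| <= kappa ||F||.
# The system F(x) = 0 below is an assumed toy example with root (1, 2).
def F(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def J(x):
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

def inexact_newton(x, kappa=0.5, tol=1e-10, max_outer=50):
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        Jx, d = J(x), np.zeros(2)
        # inner loop: damped Richardson iteration on J d = -F, stopped as
        # soon as the inexactness condition holds
        for _inner in range(10000):
            if np.linalg.norm(Jx @ d + Fx) <= kappa * np.linalg.norm(Fx):
                break
            d = d + 0.05 * (-Fx - Jx @ d)
        x = x + d
    return x

x = inexact_newton(np.array([1.2, 1.8]))
print(np.linalg.norm(F(x)))   # small: converged despite inexact solves
```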

Outline

• Introduction/Motivation
  – Unconstrained optimization
  – Nonlinear equations
  – Constrained optimization
• Algorithm Development
  – Step computation
  – Step acceptance
• Global Analysis
  – Merit function and sufficient decrease
  – Satisfying first-order conditions
• Conclusions/Final remarks

Equality constrained optimization

Two “equivalent” step computation techniques

Algorithm: Newton’s method

  \begin{bmatrix} W & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} g + A^T \lambda \\ c \end{bmatrix}

Algorithm: the SQP subproblem

  \min_d \; g^T d + \tfrac{1}{2} d^T W d \quad \text{s.t.} \quad c + A d = 0

Question: from the current iterate (x_k, \lambda_k), can we ensure convergence to a local solution by choosing any step into the ball (i.e., any sufficiently accurate inexact step)?

Globalization strategy: exact merit function

  \phi(x) = f(x) + \pi \|c(x)\|

… with Armijo line search condition

  \phi(x_k + \alpha d_k) \le \phi(x_k) + \eta \alpha D\phi(d_k)

Step computation: inexact SQP step

  \begin{bmatrix} W & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} g + A^T \lambda \\ c \end{bmatrix} + \begin{bmatrix} \rho \\ r \end{bmatrix}

First attempt

Proposition: a sufficiently small residual,

  \left\| (\rho, r) \right\| \le \kappa \left\| (g + A^T \lambda, \, c) \right\|, \quad \kappa \in (0, 1)
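The merit function and Armijo backtracking can be sketched as follows; the toy problem, penalty parameter π, and constant η are assumptions, and Dφ(d) is replaced by the upper bound g^T d − π(‖c‖ − ‖c + Ad‖).

```python
import numpy as np

# Backtracking Armijo line search on the exact penalty merit function
#   phi(x) = f(x) + pi * ||c(x)||   (toy problem; pi, eta are assumptions)
f = lambda x: x[0]**2 + x[1]**2
c = lambda x: np.array([x[0] + x[1] - 2.0])
g = lambda x: np.array([2 * x[0], 2 * x[1]])
A = lambda x: np.array([[1.0, 1.0]])

def merit(x, pi):
    return f(x) + pi * np.linalg.norm(c(x))

def armijo(x, d, pi, eta=1e-4, backtrack=0.5):
    # upper bound on the directional derivative of the merit function
    Dphi = g(x) @ d - pi * (np.linalg.norm(c(x))
                            - np.linalg.norm(c(x) + A(x) @ d))
    alpha = 1.0
    while merit(x + alpha * d, pi) > merit(x, pi) + eta * alpha * Dphi:
        alpha *= backtrack
    return alpha

x = np.array([3.0, 0.0])
d = np.array([-2.0, 1.0])     # step toward feasibility and optimality
print(armijo(x, d, pi=2.0))   # full step accepted here
```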

Test: 61 problems from the CUTEr test set

  κ        1e-8  1e-7  1e-6  1e-5  1e-4  1e-3  1e-2  1e-1
  Success  100%  100%   97%   97%   90%   85%   72%   38%
  Failure    0%    0%    3%    3%   10%   15%   28%   62%

First attempt… not robust

A sufficiently small residual,

  \left\| (\rho, r) \right\| \le \kappa \left\| (g + A^T \lambda, \, c) \right\|, \quad \kappa \in (0, 1),

… is not enough for complete robustness:
• We have multiple goals (feasibility and optimality)
• Lagrange multipliers may be completely off

Second attempt

Recall the line search condition

  \phi(x_k + \alpha d_k) \le \phi(x_k) + \eta \alpha D\phi(d_k)

Step computation: inexact SQP step

  \begin{bmatrix} W & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} g + A^T \lambda \\ c \end{bmatrix} + \begin{bmatrix} \rho \\ r \end{bmatrix}

We can show

  D\phi(d) \le g^T d - \pi \left( \|c\| - \|r\| \right)

… but how negative should this be?

Quadratic/linear model of merit function

Create model

  m(d) = f + g^T d + \tfrac{1}{2} d^T W d + \pi \|c + A d\|

Quantify reduction obtained from step

  \Delta m(d) = m(0) - m(d) = -g^T d - \tfrac{1}{2} d^T W d + \pi \left( \|c\| - \|c + A d\| \right)

where d (and the residual r = c + A d) come from the inexact SQP step

  \begin{bmatrix} W & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} g + A^T \lambda \\ c \end{bmatrix} + \begin{bmatrix} \rho \\ r \end{bmatrix}
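The model reduction Δm(d) translates directly into code; the toy data below are assumptions.

```python
import numpy as np

# Model of the merit function and the reduction it assigns to a step:
#   m(d)       = f + g^T d + 1/2 d^T W d + pi * ||c + A d||
#   Delta m(d) = m(0) - m(d)
#              = -g^T d - 1/2 d^T W d + pi * (||c|| - ||c + A d||)
def model_reduction(d, g, W, c, A, pi):
    return (-(g @ d) - 0.5 * d @ W @ d
            + pi * (np.linalg.norm(c) - np.linalg.norm(c + A @ d)))

# Assumed toy data
W = np.eye(2)
g = np.array([1.0, 0.0])
A = np.array([[1.0, 1.0]])
c = np.array([1.0])
d = np.array([-0.5, -0.5])     # moves onto the linearized constraints
print(model_reduction(d, g, W, c, A, pi=1.0))
```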

Exact case

  \Delta m(d) = -g^T d - \tfrac{1}{2} d^T W d + \pi \left( \|c\| - \|c + A d\| \right) = -g^T d - \tfrac{1}{2} d^T W d + \pi \|c\| \qquad (\text{since } c + A d = 0)

The exact step minimizes the objective on the linearized constraints… which may lead to an increase in the objective (but that’s ok).

Inexact case

Option #1: current penalty parameter

Require a sufficient model reduction,

  \Delta m(d) \ge \sigma \pi \|c\|, \quad \sigma \in (0, 1)

Step is acceptable if, for \kappa \in (0, 1), this holds together with

  \left\| (\rho, r) \right\| \le \kappa \left\| (g + A^T \lambda, \, c) \right\|

Option #2: new penalty parameter

Require the same model reduction,

  \Delta m(d) \ge \sigma \pi \|c\|, \quad \sigma \in (0, 1)

Step is acceptable if, for \varepsilon \in (0, 1) and \tau \in (0, 1), the residual satisfies

  \|r\| \le \varepsilon \|c\|

and the penalty parameter is increased so that

  \pi \ge \frac{g^T d + \tfrac{1}{2} d^T W d}{(1 - \tau)\left( \|c\| - \|r\| \right)}
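The penalty-parameter rule can be sketched as a small update function; the increase margin delta and the toy data are assumptions.

```python
import numpy as np

# Option #2 sketch: raise pi just enough that
#   pi >= (g^T d + 1/2 d^T W d) / ((1 - tau) * (||c|| - ||r||))
# (requires ||r|| < ||c||; tau, delta and the data below are assumptions)
def update_penalty(pi, d, g, W, c, r, tau=0.1, delta=1e-4):
    denom = (1.0 - tau) * (np.linalg.norm(c) - np.linalg.norm(r))
    trial = (g @ d + 0.5 * d @ W @ d) / denom
    return max(pi, trial + delta)

W = np.eye(2)
g = np.array([1.0, 1.0])
c = np.array([2.0])
r = np.array([0.2])            # linearized-constraint residual, ||r|| < ||c||
d = np.array([1.0, 0.0])
pi_new = update_penalty(pi=0.5, d=d, g=g, W=W, c=c, r=r)
print(pi_new)                  # increased to satisfy the inequality
```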

Algorithm outline

for k = 0, 1, 2, …

  Iteratively solve

    \begin{bmatrix} W & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} d \\ \delta \end{bmatrix} = -\begin{bmatrix} g + A^T \lambda \\ c \end{bmatrix} + \begin{bmatrix} \rho \\ r \end{bmatrix}

  until, for \kappa, \sigma, \varepsilon, \tau as above, either

    \left\| (\rho, r) \right\| \le \kappa \left\| (g + A^T \lambda, \, c) \right\| \quad \text{and} \quad \Delta m(d) \ge \sigma \pi \|c\|

  or

    \|r\| \le \varepsilon \|c\| \quad \text{and} \quad \pi \ge \frac{g^T d + \tfrac{1}{2} d^T W d}{(1 - \tau)\left( \|c\| - \|r\| \right)}

  Update penalty parameter
  Perform backtracking line search
  Update iterate
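The outline above can be sketched end to end on a toy problem. For simplicity this sketch solves the KKT system exactly (so the residual r is zero and the acceptance test holds trivially), where the algorithm proper would use an iterative solver; all tolerances and the toy problem are assumptions.

```python
import numpy as np

# End-to-end sketch of the outlined method on  min x1^2 + 2 x2^2
# s.t. x1 + x2 = 1  (assumed toy problem; solution (2/3, 1/3)).
f = lambda x: x[0]**2 + 2 * x[1]**2
c = lambda x: np.array([x[0] + x[1] - 1.0])
g = lambda x: np.array([2 * x[0], 4 * x[1]])
A = lambda x: np.array([[1.0, 1.0]])
W = lambda x, lam: np.array([[2.0, 0.0], [0.0, 4.0]])  # Lagrangian Hessian

def merit(x, pi):
    return f(x) + pi * np.linalg.norm(c(x))

x, lam, pi = np.array([2.0, 2.0]), np.array([0.0]), 1.0
for k in range(50):
    gx, Ax, cx = g(x), A(x), c(x)
    if np.linalg.norm(gx + Ax.T @ lam) <= 1e-8 and np.linalg.norm(cx) <= 1e-8:
        break  # terminate once the KKT conditions hold
    K = np.block([[W(x, lam), Ax.T], [Ax, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([gx + Ax.T @ lam, cx]))
    d, delta = sol[:2], sol[2:]
    # model reduction with (here) zero residual r = c + A d
    mred = -(gx @ d) - 0.5 * d @ W(x, lam) @ d + pi * np.linalg.norm(cx)
    if mred < 0.1 * pi * np.linalg.norm(cx):   # reduction test fails:
        pi = 2.0 * pi                          # raise penalty (simplified)
    # backtracking Armijo line search on the merit function (r = 0)
    Dphi = gx @ d - pi * np.linalg.norm(cx)
    alpha = 1.0
    while merit(x + alpha * d, pi) > merit(x, pi) + 1e-4 * alpha * Dphi:
        alpha *= 0.5
    x, lam = x + alpha * d, lam + delta

print(np.linalg.norm(c(x)), np.linalg.norm(g(x) + A(x).T @ lam))
```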

Termination test

Observe KKT conditions: terminate when

  \|g + A^T \lambda\| \le \epsilon_{\mathrm{opt}} \max\left( 1, \|g\| \right), \quad \epsilon_{\mathrm{opt}} \in (0, 1)

  \|c\| \le \epsilon_{\mathrm{feas}} \max\left( 1, \|c(x_0)\| \right), \quad \epsilon_{\mathrm{feas}} \in (0, 1)


Assumptions

The sequence of iterates is contained in a convex set over which the following hold:
• the objective function is bounded below
• the objective and constraint functions and their first and second derivatives are uniformly bounded in norm
• the constraint Jacobian has full row rank and its smallest singular value is bounded below by a positive constant
• the Hessian of the Lagrangian is positive definite with smallest eigenvalue bounded below by a positive constant

Sufficient reduction to sufficient decrease

A Taylor expansion of the merit function yields

  D\phi(d) \le g^T d - \pi \left( \|c\| - \|r\| \right)

An accepted step satisfies

  \Delta m(d) = -g^T d - \tfrac{1}{2} d^T W d + \pi \left( \|c\| - \|r\| \right) \ge \sigma \pi \|c\|

and therefore

  D\phi(d) \le -\tfrac{1}{2} d^T W d - \sigma \pi \|c\|

Intermediate results

Under either step acceptance test:
• the steplength \alpha from the backtracking line search is bounded below by a positive constant
• the penalty parameter \pi is bounded above
• the step norm \|d\| is bounded above

Sufficient decrease in merit function

For some \gamma > 0,

  D\phi(x_k; d_k) \le -\gamma \left( \|d_k\|^2 + \|c_k\| \right)

so the accepted steps give

  \phi(x_k; \pi) - \phi(x_k + \alpha_k d_k; \pi) \ge \eta \alpha_k \gamma \left( \|d_k\|^2 + \|c_k\| \right)

and therefore

  \lim_{k \to \infty} \left( \|d_k\|^2 + \|c_k\| \right) = 0, \qquad \lim_{k \to \infty} \|Z_k^T g_k\| = 0

Step in dual space

For the multiplier step \delta we can show

  \|g + A^T (\lambda + \delta)\| \le \kappa \|g + A^T \lambda\|, \quad \kappa \in (0, 1)

(for sufficiently small \|c\| and \|d\|)

Therefore,

  \lim_{k \to \infty} \|c_k\| = 0, \qquad \lim_{k \to \infty} \|g_k + A_k^T \lambda_k\| = 0

We converge to an optimal primal solution, and the multipliers satisfy the first-order (dual) conditions in the limit.

Outline Introduction/Motivation

Unconstrained optimization Nonlinear equations Constrained optimization

Algorithm Development Step computation Step acceptance

Global Analysis Merit function and sufficient decrease Satisfying first-order conditions

Conclusions/Final remarks

Conclusion/Final remarks

Review
• Defined a globally convergent inexact SQP algorithm
• Requires only inexact solutions of the KKT system
• Requires only matrix-vector products involving objective and constraint function derivatives
• Results also apply when only the reduced Hessian of the Lagrangian is assumed to be positive definite

Future challenges
• Implementation and appropriate parameter values
• Nearly-singular constraint Jacobian
• Inexact derivative information
• Negative curvature
• etc., etc., etc….
