
Page 1: Interior Point Methods

Interior Point Methods
twenty years after

Florian A. Potra
[email protected]

Department of Mathematics and Statistics
University of Maryland Baltimore County

Baltimore, MD 21250, USA

NIST, September 30, 2003

Interior Point Methods – p.1/30

Page 2: Interior Point Methods

Overview

Interior-point methods in mathematical programming have been the largest and most dramatic area of research in optimization since the development of the simplex method...

Interior-point methods have permanently changed the landscape of mathematical programming theory, practice and computation... (Freund & Mizuno 1996).

Major impacts on

The linear programming problem (LP)

The quadratic programming problem (QP)

The linear complementarity problem (LCP)

The semi-definite programming problem (SDP)

Some classes of convex programming problems

Interior Point Methods – p.2/30

Page 3: Interior Point Methods

The linear programming problem

min_x c^T x   s.t.   Ax = b,  x ≥ 0.

Dantzig (1947–1951): the simplex method
– good practical performance
– exponential worst-case complexity (Klee and Minty (1972))

Question: Is (LP) solvable in polynomial time?
(in terms of L = bit length of the data and n = dim(x))

Answer: YES! Khachiyan 1979.

Proof: the ellipsoid method (an interior point method)

Interior Point Methods – p.3/30
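The standard-form LP above can be handed to a modern interior-point solver directly. A minimal sketch using SciPy's HiGHS interior-point method; the data c, A, b below are hypothetical toy values, not from the talk:

```python
import numpy as np
from scipy.optimize import linprog

# Toy standard-form LP:  min c^T x  s.t.  Ax = b, x >= 0.
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])

# "highs-ipm" forces the HiGHS interior-point solver; the default
# method "highs" picks between simplex and IPM automatically.
res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3,
              method="highs-ipm")
print(res.x, res.fun)   # optimum at x = (3, 0, 0), objective 3
```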

Page 7: Interior Point Methods

Complexity

Khachiyan (1979–1980): the ellipsoid method
– iteration complexity: O(n^2 L)
– computational complexity: O(n^4 L)
Note: bad practical performance; the observed complexity matches the worst case.

Karmarkar (1984): projective scaling
– iteration complexity: O(nL)
– computational complexity: O(n^3.5 L)
Note: a variant of the algorithm was implemented in KORBX (1989); practical performance is better than the theoretical complexity.

State of the art:
– iteration complexity: O(√n L)
– computational complexity: O(n^3 L)   (Anstreicher (1999): O((n^3 / log n) L))

Note: excellent software packages: CPLEX, LOQO, Mosek, OSL, PCx.

Interior Point Methods – p.4/30

Page 10: Interior Point Methods - UMBCpotra//talk0930.pdfInterior-point methods in mathematical programming have been the largest and most dramatic area of research in optimization since the

UMBCUMBCUMBC J � I

Complexity

Khachyan (1979–1980): the ellipsoid method– iteration complexity: O(n2L)

– computational complexity: O(n4L)

Note: Bad practical performance.Observed complexity same as worst case complexity.

Karmarkar (1984): projective scaling– iteration complexity: O(nL)

– computational complexity: O(n3.5L)

Note: A variant of the algorithm implemented in KORBX (1989).Practical performance better than theoretical complexity.

State of the Art:– iteration complexity: O(

√nL)

– computational complexity: O(n3L)

(Anstreicher (1999) O( n3

log nL))

Note: Excellent software packages: CPLEX, LOQO, Mosek, OSL, PCx

Interior Point Methods – p.4/30

Page 11: Interior Point Methods - UMBCpotra//talk0930.pdfInterior-point methods in mathematical programming have been the largest and most dramatic area of research in optimization since the

UMBCUMBCUMBC J � I

How is polynomiality proved?

Primal-dual algorithms: obtain a sequence of points with duality gap μ_k → 0. The best complexity is obtained for path-following methods, where (μ_k) is Q-linearly convergent with Q-factor (1 − ν/√n):

μ_{k+1} ≤ (1 − ν/√n) μ_k,   k = 0, 1, ...

or for potential reduction methods, where (μ_k) is R-linearly convergent with R-factor (1 − γ/√n):

μ_k ≤ χ (1 − γ/√n)^k,   k = 0, 1, ...

In both cases we have

μ_k ≤ ε  for  k = O(√n log(μ_0/ε)).

If ε ≤ 2^{−2L} then (x^k, y^k, s^k) can be rounded to an exact solution in O(n^3) arithmetic operations; hence the O(√n L) iteration complexity.

Interior Point Methods – p.5/30
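The iteration bound can be made concrete by iterating the Q-linear recursion until the gap drops below ε and comparing the count against √n·log(μ_0/ε)/ν. The values of ν, n, μ_0 and ε below are illustrative, not from the talk:

```python
import math

# Illustrative parameters for the Q-linear recursion.
nu, n, mu0, eps = 0.1, 10_000, 1.0, 1e-8
rate = 1 - nu / math.sqrt(n)     # Q-factor 1 - ν/√n

mu, k = mu0, 0
while mu > eps:
    mu *= rate
    k += 1

# Since -log(rate) ≈ ν/√n, the count satisfies
# k ≈ (√n/ν) · log(μ0/ε) = O(√n log(μ0/ε)).
bound = math.sqrt(n) / nu * math.log(mu0 / eps)
print(k, math.ceil(bound))
```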

Page 12: Interior Point Methods

Superlinear convergence

Polynomiality is proved by showing that μ_k converges linearly. Efficient algorithms have superlinear convergence:

μ_{k+1} ≤ α_k μ_k,   k = 0, 1, ...,   α_k → 0

Q-quadratic convergence: μ_{k+1} ≤ α μ_k^2

Superlinear convergence of Q-order ω: μ_{k+1} ≤ α μ_k^ω

Zhang, Tapia and Dennis (1992): sufficient conditions for superlinear convergence of path-following methods for LP;

Zhang, Tapia and P. (1993): generalization to QP and LCP;

Ye, Güler, Tapia and Zhang (1993): the Mizuno-Todd-Ye predictor-corrector method has O(√n L) complexity and Q-quadratic convergence under general conditions;

Ye and Anstreicher (1993): generalization to LCP under strict complementarity.

Interior Point Methods – p.6/30
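The gap between the two rates is easy to see numerically. A toy comparison with made-up factors (α = 0.5 for the linear rate, μ_{k+1} = μ_k^2 for the Q-quadratic one):

```python
# Linear decrease gains a fixed number of accurate bits per step;
# Q-quadratic decrease doubles the number of accurate bits per step.
mu_lin, mu_quad = 0.5, 0.5
for _ in range(5):
    mu_lin *= 0.5            # μ_{k+1} = 0.5 μ_k
    mu_quad = mu_quad ** 2   # μ_{k+1} = μ_k^2
print(mu_lin, mu_quad)       # 0.5^6 versus 0.5^32
```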

Page 16: Interior Point Methods

Benefits of superlinear convergence

Convergence is faster than indicated by worst case complexity.

The conditioning of the linear systems solved at each iteration worsens as μ_k decreases. Superlinearly convergent algorithms need only a couple of iterations with small μ_k.

Superlinear convergence vs. finite termination:
Mehrotra and Ye (1993): if μ_k is small enough, a projection can be used to find the exact solution. Does superlinear convergence set in (much) before such a projection works?

Superlinear convergence is even more important for SDP, since:

no analogue of the simplex method

interior point methods are the only efficient solvers

no finite termination schemes

the conditioning of the linear systems is more critical

Interior Point Methods – p.7/30

Page 19: Interior Point Methods

Convergence of the iterates

μ_k → 0. Is the sequence of iterates convergent?

Tapia, Zhang and Ye (1995): sufficient conditions for convergence of the iterates. Can they be satisfied while preserving polynomial complexity?

Gonzaga and Tapia (1997): the iterates of MTY converge to the analytic center of the solution set.
MTY: 2 matrix factorizations + 2 backsolves, Q(μ_k) = 2.
Simplified MTY: asymptotically one factorization + 2 backsolves; the iterates converge, but not to the analytic center.

Bonnans and Gonzaga (1996), Bonnans and P. (1997): general convergence theory.

P. (2001): established superlinear convergence of the iterates for several interior point methods. The simplified MTY is Q-quadratically convergent; MTY appears not to be Q-superlinearly convergent.

Interior Point Methods – p.8/30

Page 21: Interior Point Methods

Worst case vs. probabilistic analysis

The simplex method has exponential worst-case complexity but good practical performance. Why?

The expected number of pivots is O(n^2)!

(Adler, Borgwardt, Megiddo, Shamir, Smale, Todd; see Shamir (1992).) This is a strongly polynomial probabilistic complexity result (no dependence on L).

Are there corresponding results for interior point methods?

Anstreicher, Ji, P. and Ye (1999): the expected number of iterations needed for an infeasible start interior point method of MTY type to find an exact solution of the LP, or to determine that the LP is infeasible, is at most O(√n ln n)!

Numerical experience with a very large number of problems shows that interior point methods need 30–50 iterations to converge. Can the above probabilistic results be improved? Polylog probabilistic complexity?

Interior Point Methods – p.9/30

Page 25: Interior Point Methods

Wide neighborhoods of the central path

Primal-dual path-following algorithms acting in wide neighborhoods of the central path are the most efficient interior point methods. Paradoxically, the best complexity results were obtained for algorithms acting in narrow neighborhoods. Recent work closes this gap (P., Math. Prog. 2003).

The horizontal linear complementarity problem (HLCP):

xs = 0
Qx + Rs = b
x, s ≥ 0.

HLCP is monotone iff Qu + Rv = 0 ⇒ u^T v ≥ 0, ∀ u, v ∈ R^n.

HLCP is skew-symmetric iff Qu + Rv = 0 ⇒ u^T v = 0, ∀ u, v ∈ R^n.

QP reduces to a monotone HLCP. LP reduces to a skew-symmetric HLCP.

Interior Point Methods – p.10/30
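These definitions can be checked numerically on a toy instance. The sketch below (hypothetical data) uses the fact that the standard LCP s = Mx + q is an HLCP with Q = M, R = −I, b = −q, and evaluates u^T v on a basis of the null space of [Q R]:

```python
import numpy as np
from scipy.linalg import null_space

# Skew-symmetric M makes the induced HLCP skew-symmetric:
# Qu + Rv = 0 means v = Mu, so u^T v = u^T M u = 0.
M = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
Q, R = M, -np.eye(2)

# Qu + Rv = 0 is exactly: w = (u, v) lies in the null space of [Q R].
W = null_space(np.hstack([Q, R]))
for w in W.T:
    u, v = w[:2], w[2:]
    print(round(abs(float(u @ v)), 12))   # 0.0 on each basis vector
```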

Page 30: Interior Point Methods

The Central Path

The set of all feasible points is denoted by

F = { z = (x, s) ∈ R^{2n}_+ : Qx + Rs = b },

where (x, s) denotes the vector [x^T, s^T]^T. If the relative interior of F,

F^0 = F ∩ R^{2n}_{++},

is not empty, then the nonlinear system

F_τ(z) := [ xs − τe ;  Qx + Rs − b ] = 0

has a unique positive solution for any τ > 0. The set of all such solutions defines the central path C of the HLCP, i.e.,

C = { z ∈ R^{2n}_{++} : F_τ(z) = 0, τ > 0 }.

Interior Point Methods – p.11/30

Page 31: Interior Point Methods

Central Path Neighborhoods

Proximity measures:

δ_2(z) := ‖ xs/μ(z) − e ‖_2,    δ_∞(z) := ‖ xs/μ(z) − e ‖_∞,

δ_∞^−(z) := ‖ [ xs/μ(z) − e ]^− ‖_∞,    μ(z) = x^T s / n.

Corresponding neighborhoods:

N_2(α) = { z ∈ F^0 : δ_2(z) ≤ α },
N_∞(α) = { z ∈ F^0 : δ_∞(z) ≤ α },
N_∞^−(α) = { z ∈ F^0 : δ_∞^−(z) ≤ α }.

Relation between the neighborhoods:

N_2(α) ⊂ N_∞(α) ⊂ N_∞^−(α),  and  lim_{α↑1} N_∞^−(α) = F.

Interior Point Methods – p.12/30
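The three measures are cheap to evaluate: only the products xs and μ(z) enter. A sketch on a hypothetical strictly feasible point (x, s chosen arbitrarily), which also illustrates the inclusion N_2(α) ⊂ N_∞(α) ⊂ N_∞^−(α) through the ordering δ_2 ≥ δ_∞ ≥ δ_∞^−:

```python
import numpy as np

# Hypothetical strictly feasible point z = (x, s).
x = np.array([1.0, 2.0, 0.5])
s = np.array([1.0, 0.4, 1.6])
n = x.size

mu = x @ s / n                  # μ(z) = x^T s / n
d = x * s / mu - 1.0            # the vector xs/μ(z) − e

delta_2 = np.linalg.norm(d)                              # δ2(z)
delta_inf = np.linalg.norm(d, np.inf)                    # δ∞(z)
delta_minf = np.linalg.norm(np.minimum(d, 0.0), np.inf)  # δ∞^-(z)

# The ordering of the measures mirrors N2(α) ⊂ N∞(α) ⊂ N∞^-(α).
print(delta_2 >= delta_inf >= delta_minf)   # True
```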

Page 32: Interior Point Methods

First order predictor

Input: z ∈ D(β) = { z ∈ F^0 : xs ≥ βμ(z) } = N_∞^−(1 − β).

z(θ) = z + θw,

where

w = (u, v) = −F_0'(z)^{−1} F_0(z)

is the Newton direction of F_0 at z (the affine scaling direction), i.e.,

su + xv = −xs
Qu + Rv = 0.

γ := 2(1 − β) / ( 2(1 − β) + n + √( 4(1 − β)(n + 1) + n^2 ) ),

θ̄ = argmin { μ(θ) : z(θ) ∈ D((1 − γ)β) }.

Output: z̄ = z(θ̄) ∈ D((1 − γ)β).

Interior Point Methods – p.13/30

Page 33: Interior Point Methods

First order corrector

Input: z ∈ D((1 − γ)β).

z(θ) = z + θw,

where

w = (u, v) = −F_{μ(z)}'(z)^{−1} F_{μ(z)}(z)

is the centering direction at z, i.e.,

su + xv = μ(z)e − xs
Qu + Rv = 0.

Output: z_+ = z(θ_+) ∈ D(β).

First order predictor-corrector.
Input: z ∈ D(β); Output: z_+ ∈ D(β).

Iterative algorithm: given z^0 ∈ D(β).
For k = 0, 1, ...: z ← z^k;  z^{k+1} ← z_+,  μ_{k+1} ← μ(z_+),  k ← k + 1.

Interior Point Methods – p.14/30
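Both directions solve the same Newton system with different right-hand sides, so one iteration is two solves. A compact numerical sketch on a 2×2 monotone HLCP; the data are hypothetical, and a fixed damped step θ = 0.5 stands in for the exact line search over D((1 − γ)β):

```python
import numpy as np

# Newton system: diag(s) u + diag(x) v = rhs,  Qu + Rv = 0.
# Predictor rhs = -xs;  corrector rhs = μ(z)e - xs.
def direction(x, s, Q, R, rhs):
    n = x.size
    K = np.block([[np.diag(s), np.diag(x)], [Q, R]])
    w = np.linalg.solve(K, np.concatenate([rhs, np.zeros(n)]))
    return w[:n], w[n:]

Q = np.array([[2.0, 1.0], [1.0, 2.0]])  # positive definite => monotone
R = -np.eye(2)                          # LCP form: Qx - s = b
x = np.array([1.0, 2.0])
s = Q @ x                               # strictly feasible for b = 0
mu0 = x @ s / 2

# Predictor (affine scaling direction), damped step θ = 0.5.
u, v = direction(x, s, Q, R, -x * s)
x, s = x + 0.5 * u, s + 0.5 * v
mu1 = x @ s / 2                         # duality gap reduced

# Corrector (centering direction), full step; restores centrality
# while changing the gap only by the small amount u^T v >= 0.
u, v = direction(x, s, Q, R, mu1 - x * s)
x, s = x + u, s + v
print(mu0, mu1, x @ s / 2)
```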

Page 34: Interior Point Methods

Convergence results

Theorem If HLCP is monotone then the algorithm is well defined and

μ_{k+1} ≤ ( 1 − (2/9) √((1 − β)β) / (16(n + 2)) ) μ_k,   k = 0, 1, ...

Corollary O(nL) iteration complexity.

Theorem If the HLCP has a strictly complementary solution, then the sequence {μ_k} generated by Algorithm 1 converges quadratically to zero, in the sense that

μ_{k+1} = O(μ_k^2).

Comments: same complexity as Gonzaga (1999), plus quadratic convergence. In Gonzaga's algorithm a predictor is followed by an a priori unknown number of correctors; the complexity result is proved by showing that the total number of correctors is at most O(nL). The structure of Gonzaga's algorithm makes it very difficult to analyze the asymptotic convergence properties of the duality gap, and no superlinear convergence results have been obtained so far for his method.

Interior Point Methods – p.15/30

Page 35: Interior Point Methods

A higher order predictor

Input: z ∈ D(β).

z(θ) = z + Σ_{i=1}^m w^i θ^i,

where the w^i = (u^i, v^i) are given by

su^1 + xv^1 = −xs
Qu^1 + Rv^1 = 0,

su^i + xv^i = −Σ_{j=1}^{i−1} u^j v^{i−j}
Qu^i + Rv^i = 0,   i = 2, 3, ..., m

(one factorization + m backsolves: O(n^3) + m·O(n^2) arithmetic operations).

θ̄ = argmin { μ(θ) : z(θ) ∈ D((1 − γ)β) }.

Output: z̄ = z(θ̄) ∈ D((1 − γ)β).

Interior Point Methods – p.16/30
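The "one factorization + m backsolves" structure is the whole point: each order reuses the same LU factors with a new right-hand side. A sketch with SciPy's LU routines on hypothetical 2×2 HLCP data:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Directions w^i = (u^i, v^i): factor the Newton matrix once,
# then do one backsolve per order.
def higher_order_directions(x, s, Q, R, m):
    n = x.size
    K = np.block([[np.diag(s), np.diag(x)], [Q, R]])
    lu = lu_factor(K)                        # O(n^3), done once
    U, V, rhs = [], [], -x * s               # order 1: affine scaling rhs
    for i in range(m):
        w = lu_solve(lu, np.concatenate([rhs, np.zeros(n)]))  # O(n^2)
        U.append(w[:n]); V.append(w[n:])
        # rhs for the next order: -sum u^j * v^(next order - j)
        rhs = -sum(U[j] * V[i - j] for j in range(i + 1))
    return U, V

Q = np.array([[2.0, 1.0], [1.0, 2.0]]); R = -np.eye(2)
x = np.array([1.0, 2.0]); s = Q @ x          # strictly feasible, b = 0
U, V = higher_order_directions(x, s, Q, R, m=3)

# Evaluate the predicted point z(θ) for an illustrative θ = 0.5.
theta = 0.5
x_t = x + sum(theta**(i + 1) * U[i] for i in range(3))
s_t = s + sum(theta**(i + 1) * V[i] for i in range(3))
print(x @ s, x_t @ s_t)   # the duality gap shrinks along z(θ)
```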

Page 36: Interior Point Methods

The higher order predictor corrector

Given z^0 ∈ D(β).
For k = 0, 1, ...: z ← z^k;
Obtain z̄ by the higher order predictor;
Obtain z_+ by the first order corrector;
z^{k+1} ← z_+,  μ_{k+1} ← μ(z_+),  k ← k + 1.

Theorem If HLCP is monotone then the algorithm is well defined and

μ_{k+1} ≤ ( 1 − 0.16 √β · (1 − β)^{1/3} / ( √n · (n + 2)^{1/(m+1)} ) ) μ_k,   k = 0, 1, ...

Corollary O(n^{1/2 + 1/(m+1)} L) iteration complexity.

Corollary If m = ⌈(n + 2)^ω − 1⌉ for some ω > 0, then O(√n L) iteration complexity.

(lim_{n→∞} n^{1/n^ω} = 1 and n^{1/n^ω} ≤ e^{1/(ωe)} for all n, so (n + 2)^{1/(m+1)} remains bounded.)

Interior Point Methods – p.17/30

Page 38: Interior Point Methods

The higher order predictor corrector – continued

Theorem We have

µk+1 = O(µm+1k ) , if HLCP is nondegenerate ,

and

µk+1 = O(µ(m+1)/2k ) , if HLCP is degenerate .

Conclusion The first algorithm with O (√

nL) -iteration complexity andsuperlinear convergence for degenerate LCP in the wide neighborhoodof the central path.

Remark If we take ω = 0.1 then the values of m = ⌈(n + 2)^ω − 1⌉ corresponding to n = 10^6, n = 10^7, n = 10^8, and n = 10^9 are 3, 5, 6, and 7, respectively. This corresponds with efficient practical implementations of interior point methods, where the same factorization is used from 3 to 7 times.
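The values quoted in the Remark are easy to reproduce with a one-line computation of m = ⌈(n + 2)^ω − 1⌉:

```python
import math

def corrector_order(n: int, omega: float = 0.1) -> int:
    """m = ceil((n + 2)**omega - 1), as in the Remark."""
    return math.ceil((n + 2) ** omega - 1)

orders = [corrector_order(10 ** k) for k in (6, 7, 8, 9)]
# orders == [3, 5, 6, 7], matching the Remark for omega = 0.1
```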


Semidefinite programming (SDP)

(Primal)

minimize C • X

subject to Ai • X = bi, i = 1, . . . , m, X ⪰ 0.

(Dual)

maximize b^T y

subject to ∑_{i=1}^m yi Ai + S = C, S ⪰ 0.

data: C, Ai – n × n symmetric matrices; b = (b1, . . . , bm)^T ∈ R^m.
primal variable: X, symmetric & p.s.d.
dual variables: y ∈ R^m; S, symmetric & p.s.d.
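Feasible primal and dual points always satisfy weak duality, C • X − b^T y = S • X ≥ 0. A tiny hand-built 2 × 2 instance (the data below are illustrative, not from the talk) checks this identity in pure Python:

```python
def dot(A, B):
    """Trace inner product A • B = sum_ij A_ij * B_ij (symmetric matrices)."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# One constraint (m = 1): A1 • X = b1 with A1 = I, i.e. trace(X) = 2.
C  = [[2.0, 0.0], [0.0, 3.0]]
A1 = [[1.0, 0.0], [0.0, 1.0]]
b1 = 2.0

X  = [[1.0, 0.0], [0.0, 1.0]]   # primal feasible: trace(X) = 2, X p.s.d.
y1 = 1.0
S  = [[1.0, 0.0], [0.0, 2.0]]   # dual feasible: y1*A1 + S = C, S p.s.d.

assert dot(A1, X) == b1
gap = dot(C, X) - b1 * y1        # duality gap C • X - b^T y
assert gap == dot(S, X) >= 0     # equals S • X, and is nonnegative
```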


SDP central path

The primal-dual SDP system:

Ai • X = bi, i = 1, . . . , m,
∑_{i=1}^m yi Ai + S = C,
XS = 0, X ⪰ 0, S ⪰ 0.

The primal-dual SDP central path:

Ai • X = bi, i = 1, . . . , m,
∑_{i=1}^m yi Ai + S = C,
XS = µI, X ≻ 0, S ≻ 0.

Problem: On the central path XS = SX, but this is not true outside the central path.
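A 2 × 2 example (matrices chosen for illustration) shows the difficulty: two symmetric positive definite matrices need not commute, so off the central path XS is in general not even symmetric, and the equation XS = µI cannot be linearized as a symmetric system directly:

```python
# Two symmetric positive definite matrices whose product is not symmetric.
X = [[2.0, 1.0], [1.0, 1.0]]   # symmetric, eigenvalues > 0
S = [[1.0, 0.0], [0.0, 2.0]]   # symmetric, positive diagonal

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

XS = matmul(X, S)   # [[2, 2], [1, 2]]
SX = matmul(S, X)   # [[2, 1], [2, 2]]
assert XS != SX                 # X and S do not commute
assert XS[0][1] != XS[1][0]     # so XS is not symmetric
```

This is exactly what the symmetrization operator on the next slide is designed to repair.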


Search directions

MZ search direction (∆X, ∆y, ∆S):

HP(X∆S + ∆XS) = σµI − HP(XS),
Ai • ∆X = ri, i = 1, . . . , m,
∑_{i=1}^m ∆yi Ai + ∆S = Rd.

Symmetrization operator:

HP(M) = (PMP^(−1) + [PMP^(−1)]^T)/2.

P = I : AHO
P = S^(1/2) : HKM (HRVW/KSH/M)
P such that P^T P = X^(−1/2)[X^(1/2)SX^(1/2)]^(1/2)X^(−1/2) : NT
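A minimal pure-Python sketch of the symmetrization operator (2 × 2 only; the matrices P and M below are illustrative). It checks two basic facts: with P = I, HP reduces to plain symmetrization (M + M^T)/2 (the AHO choice), and HP(µI) = µI for any invertible P, so all members of the MZ family agree on the central path:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(P):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = P
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def H(P, M):
    """H_P(M) = (P M P^{-1} + [P M P^{-1}]^T) / 2, for 2x2 matrices."""
    PMPinv = matmul(matmul(P, M), inv2(P))
    T = transpose(PMPinv)
    return [[(PMPinv[i][j] + T[i][j]) / 2 for j in range(2)] for i in range(2)]

# P = I: plain symmetrization (M + M^T)/2.
M = [[1.0, 4.0], [2.0, 3.0]]
assert H([[1.0, 0.0], [0.0, 1.0]], M) == [[1.0, 3.0], [3.0, 3.0]]

# On the central path XS = mu*I, and H_P(mu*I) = mu*I for ANY invertible P.
P = [[3.0, 1.0], [2.0, 5.0]]            # arbitrary invertible P (illustrative)
muI = [[0.7, 0.0], [0.0, 0.7]]
out = H(P, muI)
assert all(abs(out[i][j] - muI[i][j]) < 1e-12 for i in range(2) for j in range(2))
```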

Twenty search directions are analyzed and tested by Todd (1999).

MTY with some directions has O(√n ln(ε0/ε)) iteration complexity.


Superlinear Convergence for SDP

Kojima, Shida and Shindoh (1998): MTY predictor-corrector with the HKM search direction has superlinear convergence if:

(A) SDP has a strictly complementary solution;

(B) SDP is nondegenerate (nonsingular Jacobian)

(C) the iterates converge tangentially to the central path, in the sense that the size of the neighborhood containing the iterates must approach zero, namely,

‖(Xk)^(1/2) Sk (Xk)^(1/2) − (Xk • Sk/n) I‖_F / (Xk • Sk) → 0

Assumptions (B) and (C) are not required for LP.


Superlinear Convergence for SDP – continued

P. and Sheng (1998): Superlinear convergence + polynomiality if

(A) SDP has a strictly complementary solution;

(D) Xk Sk / √(Xk • Sk) → 0.

Note that (B) and (C) imply (D).

Both (C) and (D) can be enforced by the algorithm; the practical efficiency of such an approach is questionable.

If several corrector steps are used, the algorithm has polynomial complexity and is superlinearly convergent under assumption (A) only.

MTY with the HKM direction for the predictor and the AHO direction for the corrector has polynomial complexity and superlinear convergence of Q-order 1.5 under (A) and (B).


Superlinear Convergence for SDP – wanted

Kojima, Shida and Shindoh (1998):

an example suggesting that interior point algorithms for SDP based on the HKM direction are unlikely to be superlinearly convergent without (C).

MTY with AHO is quadratically convergent under (A). Global convergence, but no polynomial complexity.

Ji, P. and Sheng (1999): MTY using the MZ-family.

polynomial complexity (Monteiro).

(A) + (D)⇒ superlinear convergence.

(A) + (B) + scaling matrices in the corrector step have bounded condition number ⇒ Q-order 1.5.

(A) + (B) + scaling matrices in both predictor and corrector steps have bounded condition number ⇒ Q-quadratic convergence.

WANTED: A superlinearly convergent algorithm with O(√n ln(ε0/ε)) iteration complexity in a wide neighborhood of the central path.


References

Detailed expositions of the results obtained in interior point methods prior to 1997 are contained in the monographs of Roos, Vial and Terlaky (1997), Wright (1997), and Ye (1997). For a short survey of the results obtained prior to 2000 see P. and Wright (2000). In what follows we give complete references for the results cited in this talk:

Anst99 K. M. Anstreicher. Linear programming in O([n^3/ln n] L) operations. CORE Discussion Paper 9746, Universite Catholique de Louvain, Louvain-la-Neuve, Belgium, January 1999. To appear in SIAM Journal on Optimization.

BoGo96 J. F. Bonnans and C. C. Gonzaga. Convergence of interior point algorithms for the monotone linear complementarity problem. Mathematics of Operations Research, 21:1–25, 1996.

BoPo97 J. F. Bonnans and F. A. Potra. On the convergence of the iteration sequence of infeasible path following algorithms for linear complementarity problems. Mathematics of Operations Research, 22(2):378–407, 1997.


References–continued

FrMi96 R. M. Freund and S. Mizuno. Interior point methods: Current status and future directions. Optima, 51:1–9, October 1996.

Gonz99 C. C. Gonzaga. Complexity of predictor-corrector algorithms for LCP based on a large neighborhood of the central path. SIAM J. Optim., 10(1):183–194 (electronic), 1999.

GoTa97a C. C. Gonzaga and R. A. Tapia. On the convergence of the Mizuno-Todd-Ye algorithm to the analytic center of the solution set. SIAM Journal on Optimization, 7(1):47–65, 1997.

GoTa97b C. C. Gonzaga and R. A. Tapia. On the quadratic convergence of the simplified Mizuno-Todd-Ye algorithm for linear programming. SIAM Journal on Optimization, 7(1):66–85, 1997.

JiPS99 J. Ji, F. A. Potra, and R. Sheng. On the local convergence of a predictor-corrector method for semidefinite programming. SIAM Journal on Optimization, 10:195–210, 1999.

Karm84 N. K. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4:373–395, 1984.


References–continued

Khac79 L. G. Khachiyan. A polynomial algorithm for linear programming. Soviet Mathematics Doklady, 20:191–194, 1979.

KlMi72 V. Klee and G. Minty. How good is the simplex algorithm? In O. Shisha, editor, Inequalities III. Academic Press, New York, NY, 1972.

KoSS98 Masakazu Kojima, Masayuki Shida, and Susumu Shindoh. Local convergence of predictor-corrector infeasible-interior-point algorithms for SDPs and SDLCPs. Math. Programming, 80(2, Ser. A):129–160, 1998.

MeYe93 S. Mehrotra and Y. Ye. Finding an interior point in the optimal face of linear programs. Math. Programming, 62(3, Ser. A):497–515, 1993.

Potr01 F. A. Potra. Q-superlinear convergence of the iterates in primal-dual interior-point methods. Math. Programming, 91(Ser. A):99–115, 2001.


References–continued

Potr03 F. A. Potra. A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with O(√n L)-iteration complexity. Math. Programming, Ser. A (electronic), DOI 10.1007/s10107-003-0472-0, 2003.

PoSg98b F. A. Potra and R. Sheng. Superlinear convergence of interior-point algorithms for semidefinite programming. Journal of Optimization Theory and Applications, 99(1):103–119, 1998.

PoSg98c F. A. Potra and R. Sheng. A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming. SIAM Journal on Optimization, 8(4):1007–1028, 1998.

PoWr00 F. A. Potra and S. J. Wright. Interior-point methods. Journal of Computational and Applied Mathematics, 124:281–302, 2000.

RoVT97 C. Roos, J.-Ph. Vial, and T. Terlaky. Theory and Algorithms for Linear Optimization: An Interior Point Approach. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley and Sons, 1997.


References–continued

Sham92 R. Shamir. Probabilistic analysis in linear programming.

TaZY95 R. A. Tapia, Y. Zhang, and Y. Ye. On the convergence of the iteration sequence in primal-dual interior-point methods. Mathematical Programming, 68(2):141–154, 1995.

Todd99 M. J. Todd. A study of search directions in primal-dual interior-point methods for semidefinite programming. Optim. Methods Softw., 11/12(1-4):1–46, 1999.

Wrig97 S. J. Wright. Primal-Dual Interior-Point Methods. SIAM Publications, Philadelphia, 1997.

Ye97 Y. Ye. Interior Point Algorithms: Theory and Analysis. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley and Sons, 1997.

YeAn93 Y. Ye and K. Anstreicher. On quadratic and O(√n L) convergence of a predictor-corrector algorithm for LCP. Mathematical Programming, 62(3):537–551, 1993.


References–continued

YGTZ93 Y. Ye, O. Guler, R. A. Tapia, and Y. Zhang. A quadratically convergent O(√n L)-iteration algorithm for linear programming. Mathematical Programming, 59(2):151–162, 1993.

ZhTD92 Y. Zhang, R. A. Tapia, and J. E. Dennis. On the superlinear and quadratic convergence of primal-dual interior point linear programming algorithms. SIAM Journal on Optimization, 2(2):304–324, 1992.

ZhTP93 Y. Zhang, R. A. Tapia, and F. A. Potra. On the superlinear convergence of interior point algorithms for a general class of problems. SIAM J. on Optimization, 3(2):413–422, 1993.
