
Page 1: Interior Point Methods

Interior Point Methods

Florian A. Potra

University of Maryland Baltimore County

Florian A. Potra, UMBC Interior Point Methods 1/34

Page 2: Interior point methods

Interior point methods

Interior-point methods in mathematical programming have been the largest and most dramatic area of research in optimization since the development of the simplex method... Interior-point methods have permanently changed the landscape of mathematical programming theory, practice and computation... (Freund & Mizuno 1996).

Margaret H. Wright: The interior-point revolution in optimization: history, recent developments, and lasting consequences, Bull. Amer. Math. Soc. (N.S.), 42, 2005, 39–56.

Major impacts on

The linear programming problem (LP)

The quadratic programming problem (QP)

The linear complementarity problem (LCP)

The semi-definite programming problem (SDP)

Nonlinear Programming


Page 3: The linear programming problem

The linear programming problem

min_x c^T x   s.t.   Ax = b,   x ≥ 0.

Dantzig (1947–1951): the simplex method
– good practical performance
– exponential worst-case complexity (Klee and Minty (1972))

Question: Is (LP) solvable in polynomial time?
(in terms of L = bit length of the data and n = dim(x))

Answer: YES! Khachiyan 1979.

Proof: The ellipsoid method (an interior point method)
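In practice, an (LP) in the standard form above can be handed directly to a modern solver. A minimal sketch using SciPy's linprog (the data c, A, b are made up for illustration; method="highs" selects the HiGHS backend):

```python
import numpy as np
from scipy.optimize import linprog

# min c^T x  s.t.  Ax = b, x >= 0   (toy data, for illustration only)
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(res.x, res.fun)   # optimal vertex x = (1, 0), objective value 1.0
```

The unique optimum here sits at the vertex x = (1, 0), since x1 is the cheaper variable.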


Page 7: Complexity

Complexity

Khachiyan (1979–1980): the ellipsoid method
– iteration complexity: O(n^2 L)
– computational complexity: O(n^4 L)
Note: Poor practical performance; the observed complexity matches the worst-case complexity.

Karmarkar (1984): projective scaling
– iteration complexity: O(n L)
– computational complexity: O(n^3.5 L)
Note: A variant of the algorithm was implemented in KORBX (1989). Practical performance is better than the theoretical complexity suggests.

State of the Art:
– iteration complexity: O(√n L)
– computational complexity: O(n^3 L)   (Anstreicher (1999): O((n^3 / log n) L))

Note: Excellent software packages: CPLEX, LOQO, Mosek, OSL, PCx


Page 10: How is polynomiality proved?

How is polynomiality proved?

Primal-dual algorithms: obtain a sequence of points with duality gap µ_k → 0.
The best complexity is obtained for path-following methods, where (µ_k) is Q-linearly convergent with Q-factor (1 − ν/√n):

µ_{k+1} ≤ (1 − ν/√n) µ_k ,   k = 0, 1, . . .

or for potential reduction methods, where (µ_k) is R-linearly convergent with R-factor (1 − ν/√n):

µ_k ≤ χ (1 − ν/√n)^k ,   k = 0, 1, . . .

In both cases we have

µ_k ≤ ε   for   k = O(√n log(µ_0/ε)).

If ε ≤ 2^(−2L), then (x^k, y^k, s^k) can be rounded to an exact solution in O(n^3) arithmetic operations. Hence the O(√n L)-iteration complexity.
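The bound k = O(√n log(µ_0/ε)) is easy to check numerically. A small sketch (ν = 0.1 is an arbitrary illustrative constant, not one from the slides):

```python
import math

def iters_to_eps(n, mu0, eps, nu=0.1):
    """Iterations of mu_{k+1} = (1 - nu/sqrt(n)) * mu_k until mu_k <= eps."""
    mu, k = mu0, 0
    while mu > eps:
        mu *= 1.0 - nu / math.sqrt(n)
        k += 1
    return k

for n in (10, 100, 1000):
    k = iters_to_eps(n, mu0=1.0, eps=1e-8)
    bound = math.sqrt(n) / 0.1 * math.log(1.0 / 1e-8)
    print(n, k, round(bound))   # k stays below the sqrt(n)/nu * log(mu0/eps) bound
```

Multiplying n by 100 multiplies the observed iteration count by roughly 10, the √n scaling.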


Page 11: Superlinear convergence

Superlinear convergence

Polynomiality is proved by showing that µ_k converges linearly. Efficient algorithms have:

superlinear convergence: µ_{k+1} ≤ α_k µ_k ,  k = 0, 1, . . . ,  α_k → 0

Q-quadratic convergence: µ_{k+1} ≤ α µ_k^2

superlinear convergence of Q-order ω: µ_{k+1} ≤ α µ_k^ω

Zhang, Tapia and Dennis (1992): sufficient conditions for superlinear convergence for path-following methods for LP;

Zhang, Tapia and P. (1993): generalization for QP and LCP;

Ye, Güler, Tapia and Zhang (1993): the Mizuno-Todd-Ye predictor-corrector method has O(√n L) complexity and Q-quadratic convergence under general conditions;

Ye and Anstreicher (1993): generalization for LCP under strict complementarity.
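To see why Q-quadratic convergence matters, compare the decay of two illustrative gap sequences (the starting value 0.1 and the factor 0.5 are made up for the demonstration):

```python
# Q-linear vs. Q-quadratic decay of the duality gap mu_k (illustrative numbers)
mu_lin, mu_quad = 0.1, 0.1
for k in range(7):
    print(k, f"{mu_lin:.1e}", f"{mu_quad:.1e}")
    mu_lin *= 0.5            # mu_{k+1} = alpha * mu_k   with alpha = 0.5
    mu_quad = mu_quad ** 2   # mu_{k+1} = mu_k^2: correct digits roughly double per step
```

After six steps the quadratic sequence has already dropped to 10^(-64), while the linear one is still above 10^(-3).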


Page 16: Benefits of superlinear convergence

Benefits of superlinear convergence

Convergence is faster than indicated by worst case complexity.

The conditioning of the linear systems to be solved at each iteration worsens as µ_k decreases. Superlinearly convergent algorithms need only a couple of iterations with small µ_k.

Superlinear convergence vs. finite termination
Mehrotra and Ye (1993): if µ_k is small enough, then a projection can be used to find the exact solution. Does superlinear convergence set in (much) before such a projection works?

Superlinear convergence is even more important for SDP, since:

no analogue of the simplex method

interior point methods are the only efficient solvers

no finite termination schemes

the conditioning of the linear systems is more critical


Page 23: Convergence of the iterates

Convergence of the iterates

µ_k → 0. Is the sequence of iterates convergent?

Tapia, Zhang and Ye (1995): sufficient conditions for convergence of the iterates. Can they be satisfied while preserving polynomial complexity?

Gonzaga and Tapia (1997): the iterates of MTY converge to the analytic center of the solution set.
MTY: 2 matrix factorizations + 2 backsolves, Q(µ_k) = 2.
Simplified MTY: asymptotically one factorization + 2 backsolves; the iterates converge, but not to the analytic center.

Bonnans and Gonzaga (1996), Bonnans and P. (1997): general convergence theory.

P. (2001): established superlinear convergence of the iterates for several interior point methods.
The simplified MTY is Q-quadratically convergent.
MTY appears not to be Q-superlinearly convergent.


Page 25: Worst case vs. probabilistic analysis

Worst case vs. probabilistic analysis

The simplex method has exponential worst-case complexity but good practical performance. Why?

The expected number of pivots is O(n^2).

(Adler, Borgwardt, Megiddo, Shamir, Smale, Todd; see Shamir (1992))
This is a strongly polynomial probabilistic complexity result (no dependence on L).

Are there corresponding results for interior point methods?

Anstreicher, Ji, P. and Ye (1999): the expected number of iterations needed for an infeasible-start interior point method of MTY type to find an exact solution of the LP, or to determine that the LP is infeasible, is at most O(n ln n).

The probabilistic complexity of finding an ε-approximate solution is analyzed in Ji and P. (2008).

Numerical experience shows that only 30–50 iterations are needed. Can the probabilistic complexity be improved? Polylog probabilistic complexity?


Page 29: Wide neighborhoods of the central path

Wide neighborhoods of the central path

Primal-dual path-following algorithms acting in wide neighborhoods of the central path are the most efficient interior point methods. Paradoxically, the best complexity results were obtained for algorithms acting in narrow neighborhoods.

Recent work closes this gap (P. 2003–2013, Ai-Zhang 2005, Peng-Terlaky-Zhao 2005).

The horizontal linear complementarity problem (HLCP)

xs = 0
Qx + Rs = b
x, s ≥ 0.

HLCP is monotone iff Qu + Rv = 0 ⇒ u^T v ≥ 0, ∀u, v ∈ IR^n.

HLCP is skew-symmetric iff Qu + Rv = 0 ⇒ u^T v = 0, ∀u, v ∈ IR^n.

QP reduces to a monotone HLCP. LP reduces to a skew-symmetric HLCP.
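The monotonicity condition can be checked numerically for a given pair (Q, R). For instance, the standard LCP s = Mx + q is the HLCP with Q = −M, R = I, b = q, and it is monotone exactly when M is positive semidefinite. A sketch (the random PSD matrix M and the sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
M = B @ B.T                       # positive semidefinite by construction
Q, R = -M, np.eye(n)              # the LCP  s = Mx + q  written as an HLCP

# Orthonormal basis of ker([Q R]) from the SVD of the n x 2n block matrix
_, sv, Vt = np.linalg.svd(np.hstack([Q, R]))
rank = int((sv > 1e-10).sum())
N = Vt[rank:]                     # each row is a pair (u, v) with Qu + Rv = 0

# Sample many directions in the null space and test u^T v >= 0
W = rng.standard_normal((200, N.shape[0])) @ N
for w in W:
    u, v = w[:n], w[n:]
    assert u @ v >= -1e-9         # monotone: Qu + Rv = 0  =>  u^T v >= 0
print("u^T v >= 0 on all sampled null-space directions")
```

Here Qu + Rv = 0 forces v = Mu, so u^T v = u^T M u ≥ 0, which is exactly what the sampled checks confirm.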


Page 34: The Central Path

The Central Path

The set of all feasible points is denoted by

F = { z = (x, s) ∈ IR^2n_+ : Qx + Rs = b },

where (x, s) denotes the vector [x^T, s^T]^T. If the relative interior of F,

F^0 = F ∩ IR^2n_++,

is not empty, then the nonlinear system

F_τ(z) := [ xs − τe ; Qx + Rs − b ] = 0

has a unique positive solution for any τ > 0. The set of all such solutions defines the central path C of the HLCP, i.e.,

C = { z ∈ IR^2n_++ : F_τ(z) = 0, τ > 0 }.
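For a small monotone HLCP, a point on the central path can be computed by applying Newton's method directly to F_τ(z) = 0. A minimal sketch, with fixed damping to keep (x, s) > 0 and no globalization; the toy data Q = −I, R = I, b = e gives s = x + e, so for τ = 2 the exact central path point is x = e, s = 2e:

```python
import numpy as np

def central_path_point(Q, R, b, tau, iters=30):
    """Newton's method on F_tau(z) = (x*s - tau*e, Qx + Rs - b)."""
    n = Q.shape[0]
    x, s = np.ones(n), np.ones(n)
    for _ in range(iters):
        F = np.r_[x * s - tau, Q @ x + R @ s - b]
        J = np.block([[np.diag(s), np.diag(x)], [Q, R]])
        d = np.linalg.solve(J, -F)
        u, v = d[:n], d[n:]
        m = np.max(np.r_[-u / x, -v / s])          # largest relative decrease
        t = 1.0 if m <= 0 else min(1.0, 0.9 / m)   # damp to keep x, s > 0
        x, s = x + t * u, s + t * v
    return x, s

n = 3
x, s = central_path_point(-np.eye(n), np.eye(n), np.ones(n), tau=2.0)
print(x, s)   # x = (1, 1, 1), s = (2, 2, 2):  xs = 2e and s - x = e
```

The constraint Qx + Rs = b is linear, so Newton recovers it in one step; the remaining iterations drive xs toward τe.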


Page 35: Central Path Neighborhoods

Central Path Neighborhoods

Proximity measures:

δ_2(z) := ‖ xs/µ(z) − e ‖_2 ,   δ_∞(z) := ‖ xs/µ(z) − e ‖_∞ ,

δ_∞^−(z) := ‖ [ xs/µ(z) − e ]^− ‖_∞ ,   where µ(z) = x^T s / n.

Corresponding neighborhoods:

N_2(α) = { z ∈ F^0 : δ_2(z) ≤ α } ,

N_∞(α) = { z ∈ F^0 : δ_∞(z) ≤ α } ,

N_∞^−(α) = { z ∈ F^0 : δ_∞^−(z) ≤ α } .

Relation between neighborhoods:

N_2(α) ⊂ N_∞(α) ⊂ N_∞^−(α), and lim_{α↑1} N_∞^−(α) = F.
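The proximity measures are straightforward to compute, taking [w]^− componentwise as min(w, 0); a sketch with made-up points:

```python
import numpy as np

def proximities(x, s):
    """delta_2, delta_inf and delta_inf^- of z = (x, s), as defined above."""
    mu = x @ s / len(x)
    w = x * s / mu - 1.0                 # xs / mu(z) - e
    return (np.linalg.norm(w),                         # delta_2
            np.linalg.norm(w, np.inf),                 # delta_inf
            np.linalg.norm(np.minimum(w, 0), np.inf))  # delta_inf^-

# On the central path (xs proportional to e) all three measures vanish:
x = np.array([2.0, 0.5, 1.0])
s = 1.0 / x                              # x * s = e, so mu = 1 and w = 0
print(proximities(x, s))                 # all three measures are 0.0
```

Since δ_∞^− penalizes only the components with x_i s_i below µ(z), it is the weakest measure, which is why N_∞^−(α) is the widest neighborhood.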


Page 38: Interior Point Methods · Interior point methods Interior-point methods in mathematical programming have been the largest and most dramatic area of research in optimization since

First order predictor

Input: z ∈ D(β) = {z ∈ F0 : xs ≥ βµ(z) } = N−∞(1− β).

z(θ) = z + θw ,

wherew = d u, v c = −F ′0(z)−1F0(z)

is the Newton direction of F0 at z (the affine scaling direction), i.e.,

su + xv = −xsQu + Rv = 0 .

γ :=2 (1− β)

2(1− β) + n +√

4(1− β)(n + 1) + n2,

θ = argmin {µ(θ) : z(θ) ∈ D((1− γ)β) } ,

Output: z = z(θ ) ∈ D((1− γ)β) .
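The affine scaling direction is obtained from one linear solve. A minimal sketch with hypothetical data, which also checks the identity behind the predictor: since su + xv = −xs, a full step leaves only the second-order term, (x+u)(s+v) = uv.

```python
import numpy as np

# Solve the Newton system  s*u + x*v = -x*s,  Qu + Rv = 0  for w = (u, v).
rng = np.random.default_rng(1)
n = 3
B = rng.standard_normal((n, n))
Q, R = -(B @ B.T), np.eye(n)               # a monotone pair (hypothetical)
x = rng.uniform(0.5, 1.5, n)
s = rng.uniform(0.5, 1.5, n)

J = np.block([[np.diag(s), np.diag(x)], [Q, R]])
w = np.linalg.solve(J, np.concatenate([-x * s, np.zeros(n)]))
u, v = w[:n], w[n:]

# (x+u)(s+v) = xs + (su + xv) + uv = xs - xs + uv = uv.
print(np.allclose((x + u) * (s + v), u * v))   # True
```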


First order corrector

Input: z̄ ∈ D((1 − γ)β).

    z(θ) = z̄ + θw ,

where
    w = ⟨u, v⟩ = −F_{µ(z̄)}'(z̄)^{−1} F_{µ(z̄)}(z̄)

is the centering direction at z̄, i.e.,

    s̄u + x̄v = µ(z̄)e − x̄s̄
    Qu + Rv = 0 .

Output: z⁺ = z(θ⁺) ∈ D(β).

First order predictor–corrector. Input: z ∈ D(β); Output: z⁺ ∈ D(β).

Iterative algorithm: Given z⁰ ∈ D(β).
For k = 0, 1, …:  z ← zᵏ;  zᵏ⁺¹ ← z⁺,  µ_{k+1} ← µ(z⁺),  k ← k + 1.


Convergence results

Theorem (P. 2003). If the HLCP is monotone, then

    µ_{k+1} ≤ ( 1 − (2/9) √((1 − β)β) / (16(n + 2)) ) µ_k ,   k = 0, 1, …

Corollary. O(nL)-iteration complexity.

Theorem. If the HLCP has a strictly complementary solution, then the sequence {µ_k} generated by Algorithm 1 converges quadratically to zero, in the sense that

    µ_{k+1} = O(µ_k²) .

Comments: Same complexity as Gonzaga (1999), plus quadratic convergence. In Gonzaga's algorithm a predictor is followed by an a priori unknown number of correctors. The complexity result is proved by showing that the total number of correctors is at most O(nL). The structure of Gonzaga's algorithm makes it very difficult to analyze the asymptotic convergence properties of the duality gap, and no superlinear convergence results have been obtained so far for his method.


A higher order predictor

Input: z ∈ D(β).

    z(θ) = z + Σ_{i=1}^{m} w^i θ^i ,

where the w^i = ⟨u^i, v^i⟩ are given by

    su¹ + xv¹ = −xs          su^i + xv^i = −Σ_{j=1}^{i−1} u^j v^{i−j}
    Qu¹ + Rv¹ = 0,           Qu^i + Rv^i = 0,   i = 2, 3, …, m

(one factorization + m backsolves: O(n³) + m·O(n²) arithmetic operations)

    θ̄ = argmin { µ(θ) : z(θ) ∈ D((1 − γ)β) } .

Output: z̄ = z(θ̄) ∈ D((1 − γ)β).
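All m directions share the same coefficient matrix, so one factorization serves every right-hand side. A sketch with hypothetical data (an explicit inverse stands in for the reused factorization), which also verifies that along z(θ) the complementarity behaves like (1 − θ)xs up to terms of degree greater than m:

```python
import numpy as np

# Order-m predictor directions w^i = (u^i, v^i): factor once, backsolve m times.
rng = np.random.default_rng(2)
n, m = 3, 3
B = rng.standard_normal((n, n))
Q, R = -(B @ B.T), np.eye(n)               # a monotone pair (hypothetical)
x = rng.uniform(0.5, 1.5, n)
s = rng.uniform(0.5, 1.5, n)

J = np.block([[np.diag(s), np.diag(x)], [Q, R]])
Jinv = np.linalg.inv(J)                    # "factor once"
U, V = [], []
for i in range(1, m + 1):
    if i == 1:
        top = -x * s
    else:                                  # -sum_{j=1}^{i-1} u^j v^{i-j}
        top = -sum(U[j] * V[i - 2 - j] for j in range(i - 1))
    w = Jinv @ np.concatenate([top, np.zeros(n)])   # "backsolve"
    U.append(w[:n])
    V.append(w[n:])

# By construction x(theta)*s(theta) = (1-theta)*x*s + terms of degree > m:
theta = 0.1
x_t = x + sum(theta**(i + 1) * U[i] for i in range(m))
s_t = s + sum(theta**(i + 1) * V[i] for i in range(m))
tail = sum(theta**(i + j + 2) * U[i] * V[j]
           for i in range(m) for j in range(m) if i + j + 2 > m)
print(np.allclose(x_t * s_t, (1 - theta) * x * s + tail))   # True
```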


The higher order predictor–corrector

Given z⁰ ∈ D(β).
For k = 0, 1, …:
    z ← zᵏ;
    obtain z̄ by the higher order predictor;
    obtain z⁺ by the first order corrector;
    zᵏ⁺¹ ← z⁺,  µ_{k+1} ← µ(z⁺),  k ← k + 1.

Theorem (P. 2003). If the HLCP is monotone, then the algorithm is well defined and

    µ_{k+1} ≤ ( 1 − 0.16 √β ³√(1 − β) / ( √n · (n + 2)^{1/(m+1)} ) ) µ_k ,   k = 0, 1, …

Corollary. O( n^{1/2 + 1/(m+1)} L )-iteration complexity.

Corollary. If m = ⌈(n + 2)^ω − 1⌉ for some ω > 0, then O(√n L)-iteration complexity, since

    lim_{n→∞} n^{1/n^ω} = 1,   n^{1/n^ω} ≤ e^{1/(ωe)}  for all n.


The higher order predictor–corrector – continued

Theorem (P. 2003). We have

    µ_{k+1} = O(µ_k^{m+1})        if the HLCP is nondegenerate,
and
    µ_{k+1} = O(µ_k^{(m+1)/2})    if the HLCP is degenerate.

Conclusion: The first algorithm with O(√n L)-iteration complexity and superlinear convergence for degenerate LCP in the wide neighborhood of the central path.

Remark: If we take ω = 0.1, then the values of m = ⌈(n + 2)^ω − 1⌉ corresponding to n = 10⁶, 10⁷, 10⁸, and 10⁹ are 3, 5, 6, and 7, respectively. This corresponds to efficient practical implementations of interior point methods, where the same factorization is used from 3 to 7 times.
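The values of m in the remark are easy to check directly:

```python
import math

# m = ceil((n+2)**omega - 1) for omega = 0.1 at the problem sizes in the remark.
omega = 0.1
values = {n: math.ceil((n + 2) ** omega - 1)
          for n in (10**6, 10**7, 10**8, 10**9)}
print(values)   # {1000000: 3, 10000000: 5, 100000000: 6, 1000000000: 7}
```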


Classical Predictor-Corrector Methods

A classical predictor-corrector method operates in two neighborhoods of the central path. At each iteration, a predictor step is used to decrease the duality gap while keeping the point inside the outer neighborhood; a corrector step then follows to increase centrality by bringing the iterate back into the inner neighborhood.


Our Corrector-Predictor Method

Our corrector-predictor method (P., 2004, 2008) operates in one wide neighborhood D(β) of the central path. At each iteration a corrector step is used to increase centrality and optimality simultaneously. After the corrector step we obtain z̄ ∈ D(β_c), where β_c > β varies from iteration to iteration. A predictor step then follows to increase optimality while keeping the resulting point in D(β).

Since only one neighborhood is used, the method extends to sufficient HLCP.


P∗(κ) and P∗ HLCP

P∗(κ) HLCP, κ ≥ 0:

    Qu + Rv = 0  ⇒  (1 + 4κ) Σ_{i∈I₊} u_i v_i + Σ_{i∈I₋} u_i v_i ≥ 0,  for all u, v ∈ IRⁿ,

where I₊ = {i : u_i v_i > 0} and I₋ = {i : u_i v_i < 0}. If the above condition is satisfied, we say that (Q, R) is a P∗(κ) pair.

The monotone HLCP is the special case of the P∗(κ) HLCP with κ = 0.

P∗ HLCP:

    P∗ = ∪_{κ≥0} P∗(κ) ,

in which case (Q, R) is called a P∗ pair.
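The P∗(0) (monotone) case can be illustrated by sampling: for the hypothetical pair Q = −M, R = I with M positive semidefinite, any (u, v) with Qu + Rv = 0 has v = Mu, so the condition reduces to uᵀv = uᵀMu ≥ 0.

```python
import numpy as np

# Sample the P*(kappa) condition with kappa = 0 for a monotone pair.
rng = np.random.default_rng(3)
n = 5
B = rng.standard_normal((n, n))
M = B @ B.T                                # positive semidefinite
kappa = 0.0

ok = True
for _ in range(1000):
    u = rng.standard_normal(n)
    v = M @ u                              # then Qu + Rv = -Mu + Mu = 0
    uv = u * v
    lhs = (1 + 4 * kappa) * uv[uv > 0].sum() + uv[uv < 0].sum()
    ok = ok and lhs >= 0                   # with kappa = 0: just u^T v >= 0
print(ok)   # True
```

For a genuinely P∗(κ) pair with κ > 0, the negative products u_i v_i may outweigh the positive ones by up to the factor 1 + 4κ.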


Sufficient Matrices and Sufficient LCP

A matrix M ∈ R^{n×n} is called
    column sufficient, if z_i(Mz)_i ≤ 0 for all i ⇒ z_i(Mz)_i = 0 for all i;
    row sufficient, if its transpose is column sufficient;
    sufficient, if it is both column and row sufficient.

M is column sufficient iff for each q ∈ Rⁿ the LCP(q, M) has a (possibly empty) convex solution set.

M is row sufficient iff for each q ∈ Rⁿ, whenever (x, u) is a KKT pair of the quadratic program

    min_x  x^T (Mx + q)
    s.t.   Mx + q ≥ 0,  x ≥ 0 ,

then x solves LCP(q, M).

(Väliaho, 1996) "P∗-matrices are just sufficient".


Sufficient Pairs and Sufficient HLCP

Let Φ = Null([Q R]). The pair (Q, R) is called
    column sufficient, if ∀⟨u, v⟩ ∈ Φ:  u_i v_i ≤ 0 for all i ⇒ u_i v_i = 0 for all i;
    row sufficient, if ∀⟨u, v⟩ ∈ Φ^⊥:  u_i v_i ≥ 0 for all i ⇒ u_i v_i = 0 for all i;
    sufficient, if the pair is both column and row sufficient;

where ⟨u, v⟩ denotes the vector [u^T, v^T]^T.

For an HLCP, the following statements are equivalent:
    It is a P∗ HLCP.
    It is a sufficient HLCP.
    Its solution set is convex and every KKT point of

        min_{x,s}  x^T s
        s.t.       Qx + Rs = b,  x, s ≥ 0

    is a solution of the HLCP.


Interior Point Methods for P∗(κ) and sufficient LCP

Methods Using Small Neighborhoods

(Kojima et al., 1991) Potential reduction algorithm for solving sufficient LCP; O((1 + κ)√n L) iteration complexity; no superlinear convergence.

(Miao, 1995) Generalization of MTY to P∗(κ) LCP; O((1 + κ)√n L) iteration complexity and quadratic convergence (under the nondegeneracy assumption). The algorithm depends on κ.

(P. and Sheng, 1997) Generalization of MTY to sufficient LCP; O((1 + κ)√n L) iteration complexity and superlinear convergence (even in the degenerate case). The algorithm does not depend on κ.

(Stoer, Wechs, Mizuno, 1998) High order methods for sufficient HLCP. Superlinear convergence even in the degenerate case.

(Stoer, Wechs, 1998) O((1 + κ)√n L) iteration complexity in the small neighborhood of the central path. The algorithm depends on κ.

(Gurtuna, Petra, P., Shevchenko, Vancea, 2011) O((1 + κ)√n L) iteration complexity in the small neighborhood of the central path. The algorithm does not depend on κ. Extension to the infeasible case.


Interior Point Methods for P∗(κ) and sufficient LCP – continued

Methods Using Wide Neighborhoods

(Stoer, 2001) Superlinear convergence in the large neighborhood of the central path. No complexity result.

(Liu and P., 2006) A corrector-predictor method for sufficient HLCP in a wide neighborhood D(β) of the central path:
    It has O((1 + κ)√n L) iteration complexity.
    It is superlinearly convergent (even in the degenerate case).
    It does not depend on κ, so it can be used for sufficient HLCP.
    The cost of implementing one iteration is O(n³) arithmetic operations.


Efficiency Index

Ostrowski's efficiency index:

    eff = ω^{1/c} ,   ω = convergence order,  c = cost per iteration.

MTY: eff = √2.

Asymptotic efficiency results

(Wright 1996) Safe step – fast step: eff = 2 (QP).
(Wright and Zhang 1996) eff = m + 1, with m backsolves (LCP).
(P. and Sheng 1997) eff = m + 1 (sufficient LCP).
(Gonzaga and Tapia 1997) Simplified MTY: eff = 2.

Same cost at each step

(P. 2008) eff = m + 1 (nondegenerate), eff = (m + 1)/2 (degenerate);
    affine scaling direction at each iteration;
    O(√n L) complexity in the large neighborhood.
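The index is simple to evaluate; measuring the cost c in factorizations per iteration reproduces the figures quoted above (a small sketch):

```python
# Ostrowski's efficiency index eff = omega**(1/c): convergence order omega
# achieved at cost c per iteration (here c counts factorizations).
def eff(omega, c):
    return omega ** (1.0 / c)

# MTY: order 2 with 2 factorizations per iteration -> eff = sqrt(2).
print(eff(2, 2))
# One factorization plus m backsolves with order m + 1 -> eff = m + 1.
m = 4
print(eff(m + 1, 1))   # 5.0
```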


Efficiency index – continued

(P. and Stoer 2009)

    eff = m + 1 (nondegenerate),  eff = (m + 1)/2 (degenerate);
    uses (high order) infeasible affine scaling and the "right" amount of centering at each iteration; large neighborhood;
    for sufficient LCP; algorithm independent of κ;
    if the starting point is feasible (or "almost" feasible),

        O( (1 + κ)(1 + log ᵐ√(1 + κ)) √n L )

    iteration complexity; if the starting point is "large enough",

        O( (1 + κ)^{2+1/m} (1 + log ᵐ√(1 + κ)) n L )

    iteration complexity.


Extensions

Convex Programming (Nesterov and Nemirovski):
    self-concordant barriers;
    a universal barrier exists, but has no explicit expression.

LP and LCP over cones (Nesterov, Todd, …)

Relation to Jordan algebras (Güler, Faybusovich)

Infinite dimensional problems (Faybusovich, Ulbrich, Heinkenschloss)

"Cone-free" methods (Nemirovski and Tunçel)

Nonlinear programming (Byrd–Nocedal–Waltz: KNITRO; Wächter: IPOPT)


Semidefinite programming (SDP)

(Primal)
    minimize    C • X
    subject to  A_i • X = b_i,  i = 1, …, m,   X ⪰ 0.

(Dual)
    maximize    b^T y
    subject to  Σ_{i=1}^{m} y_i A_i + S = C,   S ⪰ 0.

Data: C and the A_i are n × n symmetric matrices; b = (b_1, …, b_m)^T ∈ R^m.
Primal variable: X, symmetric and positive semidefinite.
Dual variables: y ∈ R^m and S, symmetric and positive semidefinite.


SDP central path

The primal-dual SDP system:

    A_i • X = b_i,  i = 1, …, m,
    Σ_{i=1}^{m} y_i A_i + S = C,
    XS = 0,   X ⪰ 0,  S ⪰ 0.

The primal-dual SDP central path:

    A_i • X = b_i,  i = 1, …, m,
    Σ_{i=1}^{m} y_i A_i + S = C,
    XS = µI,  X ≻ 0,  S ≻ 0.

Problem: On the central path XS = SX, but this is not true outside the central path.
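The loss of symmetry off the central path is easy to see numerically (hypothetical data): two random positive definite matrices almost never commute, whereas any S with XS = µI does.

```python
import numpy as np

# Off the central path XS is generally nonsymmetric; on it, XS = mu*I = SX.
rng = np.random.default_rng(4)
n = 3
def rand_spd():
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

X, S = rand_spd(), rand_spd()
print(np.allclose(X @ S, (X @ S).T))       # generally False off the path

mu = np.trace(X @ S) / n
S_c = mu * np.linalg.inv(X)                # chosen so that X S_c = mu*I
print(np.allclose(X @ S_c, S_c @ X))       # True
```

This is why a plain Newton step on XS = µI leaves the space of symmetric matrices, and why the symmetrized search directions of the next slide are needed.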


Search directions

MZ search direction (∆X, ∆y, ∆S):

    H_P(X∆S + ∆X S) = σµI − H_P(XS),
    A_i • ∆X = r_i,  i = 1, …, m,
    Σ_{i=1}^{m} ∆y_i A_i + ∆S = R_d.

Symmetrization operator:

    H_P(M) = ( PMP^{−1} + [PMP^{−1}]^T ) / 2.

    P = I :  AHO
    P = S^{1/2} :  HKM (HRVW/KSH/M)
    P such that P^T P = X^{−1/2}[X^{1/2}SX^{1/2}]^{1/2}X^{−1/2} :  NT

Twenty search directions are analyzed and tested by Todd (1999).

MTY with some of these directions has O(√n ln(ε₀/ε)) iteration complexity.


Superlinear Convergence for SDP

Kojima, Shida and Shindoh (1998): the MTY predictor-corrector with the HKM search direction has superlinear convergence if:

(A) the SDP has a strictly complementary solution;

(B) the SDP is nondegenerate (nonsingular Jacobian);

(C) the iterates converge tangentially to the central path, in the sense that the size of the neighborhood containing the iterates must approach zero, namely

    ‖ (X^k)^{1/2} S^k (X^k)^{1/2} − (X^k • S^k / n) I ‖_F / (X^k • S^k) → 0.

Assumptions (B) and (C) are not required for LP (Ax = b, x ≥ 0).


Superlinear Convergence for SDP – continued

P. and Sheng (1998): superlinear convergence + polynomiality if

(A) the SDP has a strictly complementary solution;

(D)  X^k S^k / √(X^k • S^k) → 0.

Lu and Monteiro (2004) proved that (D) ⇔ [(X^k)^{1/2} S^k (X^k)^{1/2}]_{BN} / µ_k → 0.

Note that (B) and (C) imply (D).

Both (C) and (D) can be enforced by the algorithm; the practical efficiency of such an approach is questionable.

If several corrector steps are used, the algorithm has polynomial complexity and superlinear convergence under assumption (A) only.

MTY with the HKM direction for the predictor and the AHO direction for the corrector has polynomial complexity and superlinear convergence of Q-order 1.5 under (A) and (B).


Superlinear Convergence for SDP – wanted

Kojima, Shida and Shindoh (1998):

    an example suggesting that interior point algorithms for SDP based on the HKM direction are unlikely to be superlinearly convergent without (C);
    MTY with AHO is quadratically convergent under (A); global convergence, but no polynomial complexity.

Ji, P. and Sheng (1999): MTY using the MZ-family. Polynomial complexity.

    (A) + (D) ⇒ superlinear convergence.
    (A) + (B) + scaling matrices in the corrector step have bounded condition number ⇒ Q-order 1.5.
    (A) + (B) + scaling matrices in both predictor and corrector steps have bounded condition number ⇒ Q-quadratic convergence.

Lu and Monteiro (2007) proved that αµ_k I ⪯ X^k S^k + S^k X^k ⪯ βµ_k I ⇒ (D).

WANTED: A superlinearly convergent algorithm with O(√n ln(ε₀/ε)) iteration complexity in a wide neighborhood of the central path.
