
Method of approximate centers for semi-definite programming

Report 96-27

B. He, E. de Klerk

C. Roos, T. Terlaky

Technische Universiteit Delft / Delft University of Technology

Faculteit der Technische Wiskunde en Informatica / Faculty of Technical Mathematics and Informatics

ISSN 0922-5641

Copyright © 1996 by the Faculty of Technical Mathematics and Informatics, Delft, The Netherlands. No part of this Journal may be reproduced in any form, by print, photoprint, microfilm, or any other means without permission from the Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands.

Copies of these reports may be obtained from the bureau of the Faculty of Technical Mathematics and Informatics, Julianalaan 132, 2628 BL Delft, phone +31 15 2784568. A selection of these reports is available in PostScript form at the Faculty's anonymous ftp-site. They are located in the directory /pub/publications/tech-reports at ftp.twi.tudelft.nl

DELFT UNIVERSITY OF TECHNOLOGY

REPORT Nr. 96-27
Method of approximate centers for semi-definite programming
B. He, E. de Klerk, C. Roos, T. Terlaky

ISSN 0922-5641
Reports of the Faculty of Technical Mathematics and Informatics Nr. 96-27
Delft, February 1, 1996

B. He, Nanjing University, Nanjing, China.
E. de Klerk, C. Roos and T. Terlaky, Faculty of Technical Mathematics and Informatics, Delft University of Technology, P.O. Box 5031, 2600 GA Delft, The Netherlands.
e-mail: [email protected], [email protected], [email protected]
The first author visited the Delft University of Technology from April 1, 1995 to May 31, 1995.

Copyright © 1995 by the Faculty of Technical Mathematics and Informatics, Delft, The Netherlands. No part of this Journal may be reproduced in any form, by print, photoprint, microfilm or any other means without written permission from the Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands.

Abstract

The success of interior point algorithms for large-scale linear programming has prompted researchers to extend these algorithms to the semi-definite programming (SDP) case. In this paper, the method of approximate centers of Roos and Vial [13] is extended to SDP.

Key words: Semi-definite programming, path-following algorithm, approximate centers

Running title: Method of approximate centers for semi-definite programming.


1 Introduction

In recent years, a great revival of interest in semi-definite programming (SDP) has taken place. The reason is basically twofold. Firstly, important applications in control theory [4], structural optimization [3], and combinatorial optimization [1], to name but a few, have been formulated as SDP problems. (A review of applications may be found in [15].) Secondly, it has become clear that most interior point algorithms for linear programming (LP) may be extended to semi-definite programming. The polynomial complexity of these methods and their successful application to large-scale LP make them suitable candidates for semi-definite programming algorithms.

This is currently an active research area, as is clear from the large number of recent papers. The first algorithms to be extended were potential reduction methods [11, 16], and recently much work has been done on primal-dual central path following algorithms [9, 10, 14, 12, 7].

In this paper we extend the so-called method of approximate centers of Roos and Vial [13] from LP to SDP. This is a primal central path following method, but feasible dual iterates are available at each iteration. As such it is a theoretically insightful algorithm which illustrates important concepts, while the extension from LP to SDP is surprisingly straightforward.

We will work with the following standard SDP formulation of the primal problem (P):

$$\min \ \mathrm{Tr}(CX) \quad \text{subject to} \quad \mathrm{Tr}(A_iX) = b_i, \ i = 1,\dots,m, \quad X \succeq 0,$$

and of the dual problem (D):

$$\max \ b^Ty \quad \text{subject to} \quad \sum_{i=1}^m y_iA_i + Z = C, \quad Z \succeq 0,$$

where $C$, $X$, $Z$ and the matrices $A_i$ are symmetric $n \times n$ matrices, $b, y \in \mathbb{R}^m$, and '$X \succeq 0$' means $X$ is positive semi-definite. The matrices $A_i$ are further assumed to be linearly independent. The class of symmetric $n \times n$ matrices is denoted by $\mathcal{S}$. The sets of primal and dual feasible solutions will be denoted by $\mathcal{P}$ and $\mathcal{D}$ respectively.

The optimality conditions for the pair of dual problems are

$$\mathrm{Tr}(A_iX) = b_i, \ i = 1,\dots,m, \qquad \sum_{i=1}^m y_iA_i + Z = C, \qquad XZ = 0, \qquad X \succeq 0, \ Z \succeq 0.$$

We will assume that a strictly feasible pair $(X, Z)$ exists. This ensures a zero duality gap ($\mathrm{Tr}(XZ) = 0$) at an optimal primal-dual pair [16].

We denote the unique solution of the system of relaxed optimality conditions

$$\mathrm{Tr}(A_iX) = b_i, \ i = 1,\dots,m, \qquad \sum_{i=1}^m y_iA_i + Z = C, \qquad XZ = \mu I, \qquad X \succeq 0, \ Z \succeq 0,$$

by $\{X(\mu), y(\mu), Z(\mu)\}$. This solution gives a parametric representation of the central path as a function of $\mu$. The existence and uniqueness of the solution follows from the fact that it corresponds to the unique minimum of the strictly convex primal-dual barrier function

$$f(X, Z; \mu) = \frac{1}{\mu}\mathrm{Tr}(XZ) - \ln\det(XZ)$$

defined on the primal-dual feasible region $\mathcal{P} \times \mathcal{D}$. The primal-dual barrier $f$ is easily shown to be the sum of the primal and dual barrier functions, defined respectively on $\mathcal{P}$ and $\mathcal{D}$ by

$$f_p(X; \mu) = \frac{1}{\mu}\mathrm{Tr}(CX) - \ln\det X$$

and

$$f_d(y, Z; \mu) = -\frac{1}{\mu}b^Ty - \ln\det Z.$$

The primal central path corresponds to the minimizers $X(\mu)$ of $f_p(X; \mu)$.
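As a small illustration of these definitions (my own sketch, not from the paper), the barrier functions are straightforward to evaluate numerically; `slogdet` handles the $\ln\det$ terms in a stable way.

```python
import numpy as np

def primal_barrier(X, C, mu):
    """f_p(X; mu) = (1/mu) Tr(CX) - ln det X, for strictly feasible X (X > 0)."""
    sign, logdet = np.linalg.slogdet(X)
    assert sign > 0, "X must be positive definite"
    return np.trace(C @ X) / mu - logdet

def dual_barrier(y, Z, b, mu):
    """f_d(y, Z; mu) = -(1/mu) b^T y - ln det Z, for strictly feasible Z (Z > 0)."""
    sign, logdet = np.linalg.slogdet(Z)
    assert sign > 0, "Z must be positive definite"
    return -b @ y / mu - logdet
```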

The algorithm to be presented follows the primal central path, and the search direction $\Delta X$ is simply the projected Newton direction of the primal barrier. The algorithm for (P) takes the form:

Method of approximate centers for (P)

Input: a pair $(X^0, \mu^0)$ such that $X^0$ is strictly feasible and sufficiently centered, $\mu^0 > 0$, and an accuracy parameter $\varepsilon > 0$.

begin
    $\theta := \frac{1}{4\sqrt{n}+2}$; $X := X^0$; $\mu := \mu^0$;
    while $n\mu > \varepsilon$ do
    begin
        $X := X + \Delta X$;
        $\mu := (1-\theta)\mu$;
    end
end

We prove that the above algorithm converges in $O(\sqrt{n})$ iterations (more precisely, in at most $6\sqrt{n}\,\ln\frac{n\mu^0}{\varepsilon}$ steps, cf. Theorem 6.1) from a sufficiently centered starting point. Moreover, we discuss how to embed a general primal-dual problem pair in a self-dual problem which has the same form as (P). This embedding has a known central feasible point. If one applies the algorithm presented here to the embedding problem, it becomes an infeasible-start primal-dual method.

After the first draft of this paper was completed it became known that many of the tools used in the analysis have previously been developed by Faybusovich [6], and later independently by Anstreicher [2]. In particular, the distance measure used to quantify 'sufficient proximity' to the central path was introduced in [6]. This paper therefore focuses more on the extension of the method of approximate centers as such, with simplified proofs. In this way a simple and elegant extension of the original LP algorithm of Roos and Vial is presented. It also gives motivation for the implementation of such primal methods, if used in the framework of the abovementioned embedding.

The paper is structured as follows: Section 2 discusses the distance measure, which is then related to the primal search direction in Section 3. The behaviour of the primal step near the central path is analysed in Section 4. The analysis of a centering parameter update in Section 5 allows the complexity analysis of Section 6. The dual algorithm is briefly discussed in Section 7, and the self-dual embedding in Section 8.
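Before turning to the notation, here is a minimal code sketch of the algorithm displayed above (my own, not from the paper). The helper `newton_direction` is a hypothetical name for a routine returning the projected Newton direction $\Delta X$ of Section 3; a concrete sketch of it appears there.

```python
import numpy as np

def approximate_centers(X0, mu0, eps, C, A_list, newton_direction):
    """Primal method of approximate centers (sketch).

    X0 must be strictly feasible and sufficiently centered for mu0 > 0;
    newton_direction(X, mu, C, A_list) is assumed to return the projected
    Newton direction Delta X of the primal barrier (see Section 3)."""
    n = X0.shape[0]
    theta = 1.0 / (4.0 * np.sqrt(n) + 2.0)          # default update parameter
    X, mu = X0.copy(), mu0
    while n * mu > eps:
        X = X + newton_direction(X, mu, C, A_list)  # Newton step towards X(mu)
        mu = (1.0 - theta) * mu                     # shrink the centering parameter
    return X, mu
```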

Notation: We will use the inner product

$$\langle A, B\rangle := \mathrm{Tr}(A^TB)$$

defined on the class of real $n \times n$ matrices, which induces the Frobenius norm

$$\|A\| := \left(\mathrm{Tr}(A^TA)\right)^{1/2} = \left(\sum_{i=1}^n\sum_{j=1}^n A_{ij}^2\right)^{1/2}.$$

In the case of a symmetric matrix $A = A^T$ one has

$$\|A\|^2 = \sum_{i=1}^n \lambda_i^2(A),$$

where $\lambda_i(A)$ denotes the $i$-th eigenvalue of $A$.

The following Kronecker product notation will be used:

- $A \otimes B$ indicates the $n^2 \times n^2$ matrix consisting of the $n^2$ blocks $[A_{ij}B]$, $i, j = 1,\dots,n$.
- $\mathrm{vec}(A)$ is the $n^2$-dimensional vector obtained by stacking the $n$ columns of $A$ on top of one another. Note that $\mathrm{vec}(A)^T\mathrm{vec}(B) = \mathrm{Tr}(A^TB)$.

2 Measure of centrality

For primal feasible $X$, we define

$$Z(X, \mu) := \arg\min_{Z \in \mathcal{S}} \left\{ \left\| \frac{X^{1/2}ZX^{1/2}}{\mu} - I \right\| \ : \ \sum_{i=1}^m y_iA_i + Z = C \right\}. \qquad (1)$$

In other words, $Z(X, \mu)$ satisfies the dual feasibility constraints with the semi-definiteness condition relaxed, and minimizes the deviation of the pair $(X, Z(X, \mu))$ from centrality, where the deviation is quantified using the measure

$$\delta(X, \mu) := \left\| \frac{X^{1/2}Z(X, \mu)X^{1/2}}{\mu} - I \right\|.$$

Note that one has $\delta(X, \mu) = 0 \iff X = X(\mu)$. We will refer to $X$ as being sufficiently centered if $\delta(X, \mu)$ is smaller than some prescribed tolerance.

The matrix $Z(X, \mu)$ plays an important role in the following analysis. In particular, the search direction of the algorithm can be expressed in terms of it, as is shown in the next section.

3 The primal step

The projected Newton direction for the primal barrier $f_p(X; \mu) = \frac{1}{\mu}\mathrm{Tr}(CX) - \ln\det X$ at a given pair $(X, \mu)$ is defined as [8]

$$\Delta X = \arg\min_{\Delta X}\ \mathrm{vec}(\Delta X)^T\mathrm{vec}(\nabla f_p(X; \mu)) + \tfrac{1}{2}\mathrm{vec}(\Delta X)^T\nabla^2 f_p(X; \mu)\,\mathrm{vec}(\Delta X) \qquad (2)$$

subject to the feasibility conditions

$$\mathrm{Tr}(A_i\Delta X) = 0, \quad i = 1,\dots,m,$$

where $\nabla f_p$ and $\nabla^2 f_p$ denote the gradient and Hessian of $f_p$ respectively. In other words, the projected Newton step minimizes the quadratic Taylor approximation to $f_p$ subject to feasibility of the step direction.

As in the LP case, an explicit expression for $\Delta X$ may be obtained. To this end, note that for symmetric $X$,

$$\nabla f_p(X; \mu) = \frac{1}{\mu}C - X^{-1}$$

and

$$\nabla^2 f_p(X; \mu) = \frac{\partial\,\mathrm{vec}(\nabla f_p(X; \mu))}{\partial\,\mathrm{vec}\,X} = -\frac{\partial\,\mathrm{vec}\,X^{-1}}{\partial\,\mathrm{vec}\,X} = X^{-1} \otimes X^{-1}.$$
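A quick numerical check (my own, not from the paper) of the vec/Kronecker identities used here, with vec stacking columns as in the notation above:

```python
import numpy as np

rng = np.random.default_rng(0)
vec = lambda M: M.flatten(order='F')              # stack the columns of M

A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
assert np.isclose(vec(A) @ vec(B), np.trace(A.T @ B))

# Hessian identity: (X^{-1} (x) X^{-1}) vec(D) = vec(X^{-1} D X^{-1}) for symmetric X
X = A @ A.T + 4 * np.eye(4)                        # a positive definite X
D = B + B.T                                        # a symmetric direction
Xinv = np.linalg.inv(X)
assert np.allclose(np.kron(Xinv, Xinv) @ vec(D), vec(Xinv @ D @ Xinv))
```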

Substitution in (2) and using

$$\mathrm{vec}(\Delta X)^T\left(X^{-1} \otimes X^{-1}\right)\mathrm{vec}(\Delta X) = \mathrm{vec}(\Delta X)^T\mathrm{vec}(X^{-1}\Delta XX^{-1}) = \mathrm{Tr}\left(X^{-1}\Delta X\right)^2$$

yields

$$\Delta X = \arg\min\ \left[\frac{\mathrm{Tr}(C\Delta X)}{\mu} - \mathrm{Tr}(X^{-1}\Delta X) + \frac{1}{2}\mathrm{Tr}\left(X^{-1}\Delta X\right)^2\right] \quad \text{subject to} \quad \mathrm{Tr}(A_i\Delta X) = 0, \ i = 1,\dots,m.$$

The first order optimality conditions for this problem are

$$\frac{1}{\mu}C - X^{-1} + X^{-1}\Delta XX^{-1} + \sum_{i=1}^m y_iA_i = 0, \qquad \mathrm{Tr}(A_i\Delta X) = 0, \ i = 1,\dots,m.$$

Multiplying from both sides with $X$ yields

$$\Delta X = -\frac{1}{\mu}XCX + X - \sum_{j=1}^m y_jXA_jX, \qquad (3)$$

and using $\mathrm{Tr}(A_i\Delta X) = 0$, $i = 1,\dots,m$, one obtains

$$\sum_{j=1}^m y_j\mathrm{Tr}(A_iXA_jX) = \mathrm{Tr}\left(A_i\left(X - \frac{1}{\mu}XCX\right)\right), \quad i = 1,\dots,m. \qquad (4)$$

Eq. (4) may be rewritten as

$$\sum_{j=1}^m y_j\,\mathrm{vec}\left(X^{1/2}A_jX^{1/2}\right)^T\mathrm{vec}\left(X^{1/2}A_iX^{1/2}\right) = \mathrm{vec}\left(X^{1/2}A_iX^{1/2}\right)^T\mathrm{vec}(I) - \mathrm{vec}\left(X^{1/2}A_iX^{1/2}\right)^T\mathrm{vec}\left(\frac{1}{\mu}X^{1/2}CX^{1/2}\right), \quad i = 1,\dots,m.$$

If we consider the vectors $\mathrm{vec}\left(X^{1/2}A_jX^{1/2}\right)$ as row vectors of an $m \times n^2$ matrix $\mathcal{A}$, say, then the last equation becomes

$$\mathcal{A}\mathcal{A}^Ty = \mathcal{A}\left[\mathrm{vec}(I) - \mathrm{vec}\left(\frac{1}{\mu}X^{1/2}CX^{1/2}\right)\right]$$

or

$$y = \left(\mathcal{A}\mathcal{A}^T\right)^{-1}\mathcal{A}\left[\mathrm{vec}(I) - \mathrm{vec}\left(\frac{1}{\mu}X^{1/2}CX^{1/2}\right)\right]. \qquad (5)$$

Noting that (3) may be written as the vector equation

$$\mathrm{vec}\left(X^{-1/2}\Delta XX^{-1/2}\right) = -\mathrm{vec}\left(\frac{1}{\mu}X^{1/2}CX^{1/2}\right) + \mathrm{vec}(I) - \sum_{j=1}^m y_j\mathrm{vec}\left(X^{1/2}A_jX^{1/2}\right) = -\mathrm{vec}\left(\frac{1}{\mu}X^{1/2}CX^{1/2}\right) + \mathrm{vec}(I) - \mathcal{A}^Ty,$$

and substituting $y$ from (5) yields

$$\mathrm{vec}\left(X^{-1/2}\Delta XX^{-1/2}\right) = -\left[I - \mathcal{A}^T\left(\mathcal{A}\mathcal{A}^T\right)^{-1}\mathcal{A}\right]\mathrm{vec}\left(\frac{1}{\mu}X^{1/2}CX^{1/2} - I\right).$$

The last expression is simply the orthogonal projection of the vector $\mathrm{vec}\left(\frac{1}{\mu}X^{1/2}CX^{1/2} - I\right)$ onto the nullspace of $\mathcal{A}$. Note that the row space of $\mathcal{A}$ is given by

$$\mathrm{span}\left\{\mathrm{vec}\left(X^{1/2}A_1X^{1/2}\right), \dots, \mathrm{vec}\left(X^{1/2}A_mX^{1/2}\right)\right\}$$

and the nullspace is the orthogonal complement of this space.

Reverting to the space of matrices $\mathcal{S}$, it is clear that the search direction $\Delta X$ is obtained via a projection of the matrix $\left[\frac{1}{\mu}X^{1/2}CX^{1/2} - I\right]$ onto the orthogonal complement of

$$\mathrm{span}\left\{X^{1/2}A_1X^{1/2}, \dots, X^{1/2}A_mX^{1/2}\right\}.$$

The relevant projection operator $P_{\mathcal{A}} : \mathcal{S} \mapsto \mathcal{S}$ is given by

$$P_{\mathcal{A}}(M) := \arg\min_{W \in \mathcal{S}}\left\{\|W - M\| \ : \ \mathrm{Tr}(X^{1/2}A_iX^{1/2}W) = 0, \ i = 1,\dots,m\right\}. \qquad (6)$$

The search direction $\Delta X$ can be written in terms of $Z(X, \mu)$ as follows:

Lemma 3.1

$$\Delta X = -X^{1/2}\left(P_{\mathcal{A}}\left(\frac{X^{1/2}CX^{1/2}}{\mu} - I\right)\right)X^{1/2} = -\left(\frac{XZ(X, \mu)X}{\mu} - X\right).$$

Proof: Note that the first-order optimality conditions for

$$\min_{W \in \mathcal{S}}\left\{\left\|W - \left(\frac{X^{1/2}CX^{1/2}}{\mu} - I\right)\right\| \ : \ \mathrm{Tr}(X^{1/2}A_iX^{1/2}W) = 0, \ i = 1,\dots,m\right\}$$

are

$$\text{(I)} \qquad W - \left(\frac{X^{1/2}CX^{1/2}}{\mu} - I\right) + \sum_{i=1}^m \lambda_iX^{1/2}A_iX^{1/2} = 0, \qquad \mathrm{Tr}(A_iX^{1/2}WX^{1/2}) = 0, \ i = 1,\dots,m.$$

The first-order optimality conditions for

$$\min_{y \in \mathbb{R}^m,\ Z \in \mathcal{S}}\left\{\left\|\frac{X^{1/2}ZX^{1/2}}{\mu} - I\right\| \ : \ \sum_{i=1}^m y_iA_i + Z = C\right\}$$

can be written as

$$\text{(II)} \qquad \frac{XZX}{\mu^2} - Q = \frac{X}{\mu}, \qquad \mathrm{Tr}(A_iQ) = 0, \ i = 1,\dots,m, \qquad \sum_{i=1}^m y_iA_i + Z = C.$$

If we denote the solution of System (II) by $(y(X, \mu), Z(X, \mu), Q(X, \mu))$, it follows that

$$\lambda(X, \mu) := \frac{1}{\mu}y(X, \mu) \qquad \text{and} \qquad W(X, \mu) := \mu X^{-1/2}Q(X, \mu)X^{-1/2}$$

satisfy the first equation of System (I). It follows from the second equation of System (II) that

$$\mathrm{Tr}(A_iX^{1/2}W(X, \mu)X^{1/2}) = 0, \quad i = 1,\dots,m,$$

which completes the proof. □

In practice the optimality conditions of System (II) may be solved by rewriting them as

$$\sum_{i=1}^m y_i\mathrm{Tr}(XA_iXA_j) = \mathrm{Tr}(XA_jXC) - \mu\mathrm{Tr}(A_jX), \quad j = 1,\dots,m. \qquad (7)$$

The solution of this $m \times m$ linear system yields $y(X, \mu)$. The coefficient matrix $[\mathrm{Tr}(XA_iXA_j)]$ of the linear system (7) is positive definite, as the matrices $A_i$, $i = 1,\dots,m$, are linearly independent [6]. Letting $Z(X, \mu) = C - \sum_{i=1}^m y_i(X, \mu)A_i$, so that dual feasibility $\sum_{i=1}^m y_iA_i + Z = C$ holds, the search direction is calculated from

$$\Delta X = -\frac{1}{\mu}XZ(X, \mu)X + X.$$
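In code, the recipe just described amounts to one small linear solve plus a few matrix products. A minimal NumPy sketch (function and variable names are my own): it solves system (7) for $y(X,\mu)$, forms the dual-feasible $Z(X,\mu)$, the proximity measure $\delta(X,\mu)$ of Section 2, and the search direction of Lemma 3.1.

```python
import numpy as np

def primal_step(X, mu, C, A_list):
    """Solve (7) for y(X, mu); return (Delta X, Z(X, mu), y(X, mu), delta(X, mu))."""
    m = len(A_list)
    XAX = [X @ A @ X for A in A_list]                       # the matrices X A_j X
    # coefficient matrix [Tr(X A_i X A_j)] and right-hand side of system (7)
    M = np.array([[np.trace(A_list[i] @ XAX[j]) for j in range(m)] for i in range(m)])
    rhs = np.array([np.trace(XAX[j] @ C) - mu * np.trace(A_list[j] @ X) for j in range(m)])
    y = np.linalg.solve(M, rhs)                             # M is positive definite
    Z = C - sum(y[i] * A_list[i] for i in range(m))         # dual feasibility: sum_i y_i A_i + Z = C
    # delta(X, mu) = || X^{1/2} Z X^{1/2} / mu - I ||; with X = L L^T, the matrix L^T Z L
    # is symmetric and has the same eigenvalues as X^{1/2} Z X^{1/2}, so the norm is unchanged
    L = np.linalg.cholesky(X)
    delta = np.linalg.norm(L.T @ Z @ L / mu - np.eye(X.shape[0]), 'fro')
    dX = X - (X @ Z @ X) / mu                               # Lemma 3.1: Delta X = -(XZX/mu - X)
    return dX, Z, y, delta
```

Whenever $\delta(X,\mu) < 1$, the pair $(y, Z)$ returned here is dual feasible with $Z \succ 0$ (Lemma 4.1 below), so the duality gap $\mathrm{Tr}(XZ)$ can be monitored at every iteration.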

4 Behaviour near the central path

Consider a primal update $X^+ := X + \Delta X = 2X - \frac{1}{\mu}XZ(X, \mu)X$. The pair $(X^+, Z(X, \mu))$ now satisfies primal-dual feasibility with the semi-definiteness requirements relaxed. The next two lemmas show that the semi-definiteness requirement is also satisfied if $X$ is sufficiently centered.

Lemma 4.1 If $X \succ 0$ and $\delta(X, \mu) < 1$, then $Z(X, \mu) \succ 0$.

Proof: By definition

$$\delta(X, \mu)^2 = \left\|\frac{X^{1/2}Z(X, \mu)X^{1/2}}{\mu} - I\right\|^2 = \mathrm{Tr}\left(\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2} - I\right)^2 = \sum_{i=1}^n\left[\lambda_i\left(\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2}\right) - 1\right]^2.$$

Using $\delta(X, \mu) < 1$, we have

$$\sum_{i=1}^n\left[\lambda_i\left(\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2}\right) - 1\right]^2 < 1,$$

which shows that $\lambda_i\left(\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2}\right) > 0$, and thus $Z(X, \mu) \succ 0$. □

Lemma 4.2 Let $X^+ = X + \Delta X = 2X - \frac{1}{\mu}XZ(X, \mu)X$. If $X \succ 0$ and $\delta(X, \mu) < 1$, then $X^+ \succ 0$.

Proof: Note that $X^+$ may be written as

$$X^+ = X^{1/2}\left(2I - X^{1/2}\frac{Z(X, \mu)}{\mu}X^{1/2}\right)X^{1/2}.$$

Because $\delta(X, \mu) < 1$, i.e. $\left\|\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2} - I\right\| < 1$, it follows that

$$\sum_{i=1}^n \lambda_i^2\left(\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2} - I\right) < 1.$$

Thus we have

$$\lambda_i\left(\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2}\right) \in (0, 2), \quad i = 1,\dots,n,$$

which implies

$$\lambda_i\left(2I - X^{1/2}\frac{Z(X, \mu)}{\mu}X^{1/2}\right) \in (0, 2), \quad i = 1,\dots,n,$$

and consequently $X^+ \succ 0$. □

One also has quadratic convergence of the primal iterate to the central path:

Lemma 4.3 A primal update $X^+ = 2X - \frac{1}{\mu}XZ(X, \mu)X$ satisfies $\delta(X^+, \mu) \le \delta^2(X, \mu)$.

Proof: By definition

$$\delta(X^+, \mu)^2 = \left\|\frac{(X^+)^{1/2}Z(X^+, \mu)(X^+)^{1/2}}{\mu} - I\right\|^2 \le \left\|\frac{(X^+)^{1/2}Z(X, \mu)(X^+)^{1/2}}{\mu} - I\right\|^2 = \mathrm{Tr}\left(\frac{1}{\mu}Z(X, \mu)X^+ - I\right)^2$$
$$= \mathrm{Tr}\left(\frac{1}{\mu}Z(X, \mu)\left[2X - \frac{1}{\mu}XZ(X, \mu)X\right] - I\right)^2 = \mathrm{Tr}\left(\frac{1}{\mu}Z(X, \mu)X - I\right)^4 \le \left\|\frac{1}{\mu}X^{1/2}Z(X, \mu)X^{1/2} - I\right\|^4 = \delta^4(X, \mu),$$

which is the required result. □

This last result has also been found recently by Faybusovich [6] and Anstreicher [2].
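To see Lemma 4.3 at work, one can run repeated Newton steps at a fixed $\mu$ on a small synthetic instance. The helper below is entirely my own construction (it reuses the `primal_step` sketch from Section 3) and builds data for which $X^0 = I$ is strictly feasible:

```python
import numpy as np

def random_instance(n=6, m=4, seed=0):
    """Hypothetical test instance of (P) for which X0 = I is strictly feasible."""
    rng = np.random.default_rng(seed)
    A_list = [(M + M.T) / 2 for M in rng.standard_normal((m, n, n))]
    X0 = np.eye(n)
    b = np.array([np.trace(A) for A in A_list])                 # b_i = Tr(A_i X0)
    y0 = rng.standard_normal(m)
    C = sum(y0[i] * A_list[i] for i in range(m)) + np.eye(n)    # so (y0, I) is dual feasible
    return C, A_list, b, X0

C, A_list, b, X = random_instance()
mu = 1.3                                  # X = I is exactly centered for mu = 1, so delta(I, 1.3) < 1
for _ in range(4):
    dX, Z, y, delta = primal_step(X, mu, C, A_list)
    print(delta)                          # decreases at least quadratically (Lemma 4.3)
    X = X + dX
```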

5 Updating the centering parameter

Once the primal iterate $X$ is sufficiently centered, i.e. $\delta(X, \mu) \le \tau$ for some tolerance $\tau$, the parameter $\mu$ can be reduced. To fix our ideas, we update the target parameter in such a way that we will still have $\delta(X, \mu^+) \le \frac{1}{2}$ after a target update $\mu \to \mu^+$. The following Newton step will then yield a feasible $X^+$ satisfying $\delta(X^+, \mu^+) \le \frac{1}{4}$, by Lemma 4.3.

In order to realise these ideas, the effect of a $\mu$-update on the proximity measure must be analysed:

Lemma 5.1 Define a $\mu$-update by $\mu^+ := (1-\theta)\mu$, with $0 < \theta < 1$. It then follows that

$$\delta(X, \mu^+) \le \frac{1}{1-\theta}\left(\delta(X, \mu) + \theta\sqrt{n}\right).$$

Proof: Using the definition of $Z(X, \mu^+)$ we may write

$$\delta(X, \mu^+) = \left\|\frac{X^{1/2}Z(X, \mu^+)X^{1/2}}{(1-\theta)\mu} - I\right\| \le \left\|\frac{X^{1/2}Z(X, \mu)X^{1/2}}{(1-\theta)\mu} - I\right\| \le \frac{1}{1-\theta}\left(\left\|\frac{X^{1/2}Z(X, \mu)X^{1/2}}{\mu} - I\right\| + \theta\|I\|\right) = \frac{1}{1-\theta}\left(\delta(X, \mu) + \theta\sqrt{n}\right),$$

where the second inequality follows from the triangle inequality. □

The above result enables us to choose an updating parameter $\theta$ which guarantees that the primal iterate remains sufficiently centered with respect to the new parameter $\mu^+$.

Lemma 5.2 Let $\delta(X, \mu) \le \frac{1}{2}$. If $\theta = 1/(4\sqrt{n}+2)$, $X^+ = X + \Delta X$ and $\mu^+ = (1-\theta)\mu$, then $\delta(X^+, \mu^+) \le \frac{1}{2}$.

Proof: Using Lemma 5.1 and Lemma 4.3 successively we get

$$\delta(X^+, \mu^+) \le \frac{1}{1-\theta}\left(\delta(X^+, \mu) + \theta\sqrt{n}\right) \le \frac{1}{1-\theta}\left(\delta^2(X, \mu) + \theta\sqrt{n}\right) \le \frac{4\sqrt{n}+2}{4\sqrt{n}+1}\left(\frac{1}{4} + \frac{\sqrt{n}}{4\sqrt{n}+2}\right) = \frac{1}{2}. \qquad \Box$$

It is easily verified that the dynamic update

$$\theta = \frac{\frac{1}{2} - \delta(X, \mu)}{\sqrt{n} + \frac{1}{2}} \ \ge\ \frac{1}{4\sqrt{n}+2}$$

may also be used.
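For concreteness, the two update rules compare as follows (a small sketch of my own); since $\delta(X,\mu) \le \frac{1}{4}$ after a full iteration, the dynamic rule never falls below the default one.

```python
import numpy as np

def theta_default(n):
    return 1.0 / (4.0 * np.sqrt(n) + 2.0)

def theta_dynamic(n, delta):
    # larger reduction of mu when the iterate is well centered (small delta)
    return (0.5 - delta) / (np.sqrt(n) + 0.5)

# e.g. n = 100: default ~= 0.0238, dynamic with delta = 0.1 ~= 0.0381
```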

6 Complexity analysis

To prove the polynomial complexity of the algorithm, we need the following lemma which bounds the duality gap in terms of the proximity measure $\delta$.

Lemma 6.1 If $\delta(X, \mu) \le 1$ then

$$\mu\left(n - \delta(X, \mu)\sqrt{n}\right) \le \mathrm{Tr}(CX) - b^Ty(X, \mu) \le \mu\left(n + \delta(X, \mu)\sqrt{n}\right).$$

Proof: Note that

$$\mathrm{Tr}(CX) - b^Ty(X, \mu) = \mathrm{Tr}(XZ(X, \mu)) = \mathrm{Tr}(X^{1/2}Z(X, \mu)X^{1/2}).$$

Using the Cauchy-Schwarz inequality yields

$$\delta(X, \mu)\sqrt{n} = \left\|\frac{X^{1/2}Z(X, \mu)X^{1/2}}{\mu} - I\right\|\,\|I\| \ \ge\ \left|\frac{\mathrm{Tr}(XZ(X, \mu))}{\mu} - n\right|,$$

which implies that

$$n - \delta(X, \mu)\sqrt{n} \ \le\ \frac{\mathrm{Tr}(XZ(X, \mu))}{\mu} \ \le\ n + \delta(X, \mu)\sqrt{n},$$

which in turn gives the required result. □

We can now derive the worst case complexity bound of the algorithm.

Theorem 6.1 Let $\varepsilon > 0$ be an accuracy parameter and $\mu^0 > 0$. Let $X^0 \succ 0$ be a strictly feasible starting point such that $\delta(X^0, \mu^0) \le \frac{1}{2}$. The algorithm stops after at most

$$6\sqrt{n}\,\ln\frac{n\mu^0}{\varepsilon}$$

steps, the last generated points $X$ and $Z(X, \mu)$ are strictly feasible, and the duality gap is bounded by

$$\mathrm{Tr}(XZ(X, \mu)) \le \frac{3}{2}\varepsilon.$$

Proof: After each iteration of the algorithm $X$ will be strictly feasible, and $\delta(X, \mu) \le 1/2$, due to Lemma 5.2. After the $k$-th iteration one has $\mu = (1-\theta)^k\mu^0$. The algorithm stops if $k$ is such that

$$n\mu^0(1-\theta)^k < \varepsilon.$$

Taking logarithms on both sides, this inequality reduces to

$$-k\ln(1-\theta) > \ln\frac{n\mu^0}{\varepsilon}.$$

Since $-\ln(1-\theta) > \theta$, this will certainly hold if $k\theta > \ln\frac{n\mu^0}{\varepsilon}$, and hence if $k > 6\sqrt{n}\,\ln\frac{n\mu^0}{\varepsilon}$, since $1/\theta = 4\sqrt{n}+2 \le 6\sqrt{n}$ for the default setting $\theta := \frac{1}{4\sqrt{n}+2}$. This proves the first statement in the theorem. Now let $X$ be the last generated point; then it follows from Lemma 4.1 that $Z(X, \mu) \succ 0$. Moreover, the duality gap is then bounded by

$$\mathrm{Tr}(XZ(X, \mu)) \ \le\ n\mu\left[1 + \frac{\delta(X, \mu)}{\sqrt{n}}\right] \ \le\ \varepsilon\left(1 + \frac{\delta(X, \mu)}{\sqrt{n}}\right) \ \le\ \frac{3}{2}\varepsilon,$$

where the first inequality follows from Lemma 6.1. This completes the proof. □
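As a worked example of the bound in Theorem 6.1 (numbers of my own choosing, for illustration only):

```python
import numpy as np

def iteration_bound(n, mu0, eps):
    """Upper bound of Theorem 6.1 on the number of iterations."""
    return int(np.ceil(6 * np.sqrt(n) * np.log(n * mu0 / eps)))

print(iteration_bound(n=50, mu0=1.0, eps=1e-6))   # about 753 iterations
```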

7 The dual algorithm

The algorithm for the dual problem is perfectly analogous to that of the primal problem. If one defines

$$X(Z, \mu) := \arg\min_{X \in \mathcal{S}}\left\{\left\|\frac{Z^{1/2}XZ^{1/2}}{\mu} - I\right\| \ : \ \mathrm{Tr}(A_iX) = b_i, \ i = 1,\dots,m\right\}$$

for a strictly feasible dual variable $Z \succ 0$, then the first-order optimality conditions which yield $X(Z, \mu)$ are

$$\frac{Z}{\mu}\left[\frac{XZ}{\mu} - I\right] + \sum_{i=1}^m y_iA_i = 0, \qquad \mathrm{Tr}(A_iX) = b_i.$$

If we now define

$$\delta(Z, \mu) := \left\|\frac{Z^{1/2}X(Z, \mu)Z^{1/2}}{\mu} - I\right\|,$$

then we can repeat the analysis for the primal algorithm, but with the roles of $X$ and $Z$ interchanged. This results in the following algorithm:

Algorithm for (D)

Input: a strictly feasible dual pair $(Z^0, y^0)$, a parameter $\mu^0 > 0$ such that $\delta(Z^0, \mu^0) \le \frac{1}{2}$, and an accuracy parameter $\varepsilon > 0$.

begin
    $\theta := \frac{1}{4\sqrt{n}+2}$; $Z := Z^0$; $\mu := \mu^0$;
    while $n\mu > \varepsilon$ do
    begin
        $Z := 2Z - \frac{1}{\mu}ZX(Z, \mu)Z$;
        $\mu := (1-\theta)\mu$;
    end
end

Due to the symmetry in the analysis, the dual algorithm has the same complexity bound as the primal algorithm.

8 Initialization

Up to now, we have treated the method of approximate centers as a purely primal (or dual) algorithm which requires a sufficiently centered feasible starting point. These conditions seem restrictive, but the algorithm may be used in an infeasible-start, primal-dual framework.

To see this, consider the pair of primal-dual SDP problems in symmetric form:

$$(\hat{\mathrm{P}}): \quad \min_X\ \{\mathrm{Tr}(CX) \ : \ \mathrm{Tr}(A_iX) \ge b_i, \ i = 1,\dots,m, \ X \succeq 0\},$$

$$(\hat{\mathrm{D}}): \quad \max_{y,S}\ \left\{b^Ty \ : \ \sum_{i=1}^m y_iA_i + S = C, \ S \succeq 0, \ y \ge 0\right\}.$$

The pair of problems $(\hat{\mathrm{P}})$ and $(\hat{\mathrm{D}})$ may be embedded in a self-dual SDP problem with known central starting point [5]. This self-dual embedding for $(\hat{\mathrm{P}})$ and $(\hat{\mathrm{D}})$ takes the form:

$$\min_{y, X, \tau, \theta, z, S, \kappa, \nu}\ \beta\theta$$

subject to

$$\begin{aligned}
\mathrm{Tr}(A_iX) - \tau b_i + \theta\bar{b}_i - z_i &= 0, \quad i = 1,\dots,m,\\
-\sum_{i=1}^m y_iA_i + \tau C - \theta\bar{C} - S &= 0,\\
b^Ty - \mathrm{Tr}(CX) + \theta\alpha - \kappa &= 0,\\
-\bar{b}^Ty + \mathrm{Tr}(\bar{C}X) - \alpha\tau - \nu &= -\beta,\\
z \ge 0, \ X \succeq 0, \ y \ge 0, \ S \succeq 0, \ \kappa \ge 0,& \ \tau \ge 0, \ \theta \ge 0, \ \nu \ge 0,
\end{aligned}$$

where

$$\bar{b}_i := b_i + 1 - \mathrm{Tr}(A_i), \qquad \bar{C} := I + \sum_{i=1}^m A_i - C, \qquad \alpha := 1 + \mathrm{Tr}(C) - b^Te, \qquad \beta := 2n + 2.$$

It is straightforward to verify that a feasible interior starting solution is given by the centered point $y^0 = z^0 = e$, $X^0 = S^0 = I$, $\tau^0 = \theta^0 = \kappa^0 = \nu^0 = 1$, where $e \in \mathbb{R}^m$ denotes the all-one vector.

If one defines the new matrix variable

$$\bar{X} = \mathrm{diag}\left(\mathrm{diag}(y),\ X,\ \tau,\ \theta,\ \mathrm{diag}(z),\ S,\ \kappa,\ \nu\right) \succeq 0,$$

i.e. the block-diagonal matrix with blocks $\mathrm{diag}(y)$, $X$, $\tau$, $\theta$, $\mathrm{diag}(z)$, $S$, $\kappa$ and $\nu$,

then the self-dual problem may easily be cast in the standard primal form (P). The resulting problem has $2m + 2n^2 + 4$ variables and the same number of constraints. It may be solved with the primal algorithm described in this paper. Note that the algorithm functions as a primal-dual, infeasible-start algorithm in this case, while the worst case complexity bound becomes $O(\sqrt{n+m})$ iterations. This is the usual worst-case complexity bound for the solution of problems in symmetric form, and not a result of the embedding.
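A sketch of my own, following the definitions printed above (the report [5] should be consulted for the definitive sign conventions of the embedding), that assembles the embedding parameters and the centered starting point:

```python
import numpy as np

def embedding_parameters(C, A_list, b):
    """The quantities b_bar, C_bar and alpha as defined above (sketch)."""
    n = C.shape[0]
    m = len(A_list)
    b_bar = np.array([b[i] + 1.0 - np.trace(A_list[i]) for i in range(m)])
    C_bar = np.eye(n) + sum(A_list) - C
    alpha = 1.0 + np.trace(C) - b.sum()
    return b_bar, C_bar, alpha

def centered_starting_point(n, m):
    """y0 = z0 = e, X0 = S0 = I, tau0 = theta0 = kappa0 = nu0 = 1; stacked as the
    block-diagonal matrix X_bar of order 2m + 2n + 4, this is simply the identity."""
    return np.eye(2 * m + 2 * n + 4)
```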

The solution of the embedding problem yields one of the following alternatives [5]:

- an optimal solution pair to $(\hat{\mathrm{P}})$ and $(\hat{\mathrm{D}})$ with zero duality gap;
- a certificate that the optimal duality gap is strictly positive;
- a ray (certificate of unboundedness) for either $(\hat{\mathrm{P}})$ or $(\hat{\mathrm{D}})$.

References

[1] F. Alizadeh. Combinatorial optimization with interior point methods and semi-definite matrices. PhD thesis, University of Minnesota, Minneapolis, USA, 1991.

[2] K.M. Anstreicher and M. Fampa. A long-step path following algorithm for semidefinite programming problems. Working Paper, Department of Management Sciences, University of Iowa, Iowa City, USA, 1996.

[3] A. Ben-Tal and A.S. Nemirovskii. Stable truss topology design via semidefinite programming. Working Paper, Faculty of Industrial Engineering and Management, Technion-Israel Institute of Technology, Haifa, Israel, 1995.

[4] S.E. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear matrix inequalities in system and control theory. SIAM Studies in Applied Mathematics, Vol. 15. SIAM, Philadelphia, USA, 1994.

[5] E. de Klerk, C. Roos, and T. Terlaky. Initialization in semidefinite programming via a self-dual, skew-symmetric embedding. Technical Report 96-10, Faculty of Technical Mathematics and Computer Science, Delft University of Technology, Delft, The Netherlands, 1996.

[6] L. Faybusovich. On a matrix generalization of affine-scaling vector fields. SIAM J. Matrix Anal. Appl., 16:886-897, 1995.

[7] L. Faybusovich. Semi-definite programming: a path-following algorithm for a linear-quadratic functional. Technical Report, Dept. of Mathematics, University of Notre Dame, Notre Dame, IN, USA, 1995. (To appear in SIAM J. Optimization.)

[8] P.E. Gill, W. Murray, M.A. Saunders, J.A. Tomlin, and M.H. Wright. On projected Newton barrier methods for linear programming and an equivalence to Karmarkar's projective method. Mathematical Programming, 36:183-209, 1986.

[9] M. Kojima, M. Shida, and S. Shindoh. Global and local convergence of predictor-corrector infeasible-interior-point algorithms for semidefinite programs. Technical Report B-305, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, Tokyo, Japan, 1995.

[10] R.D.C. Monteiro. Primal-dual algorithms for semidefinite programming. Working Paper, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, USA, 1995.

[11] Y. Nesterov and A.S. Nemirovskii. Interior point polynomial algorithms in convex programming. SIAM Studies in Applied Mathematics, Vol. 13. SIAM, Philadelphia, USA, 1994.

[12] F.A. Potra and R. Sheng. A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming. Reports on Computational Mathematics 78, Dept. of Mathematics, The University of Iowa, Iowa City, USA, 1995.

[13] C. Roos and J.-Ph. Vial. A polynomial method of approximate centers for linear programming. Mathematical Programming, 54:295-305, 1992.

[14] J.F. Sturm and S. Zhang. Symmetric primal-dual path following algorithms for semidefinite programming. Technical Report 9554/A, Tinbergen Institute, Erasmus University Rotterdam, 1995.

[15] L. Vandenberghe. Semidefinite programming. Technical Report, Information Systems Laboratory, Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA, 1994. (To be published in SIAM Review.)

[16] L. Vandenberghe and S. Boyd. A primal-dual potential reduction algorithm for problems involving matrix inequalities. Mathematical Programming, 69:205-236, 1995.
