ISSN 0249-6399

INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
Rapport de recherche

Semidefinite relaxations and Lagrangian duality with application to combinatorial optimization

Claude Lemaréchal, François Oustry

Thème 4 : Simulation et optimisation de systèmes complexes
Projet Numopt

Rapport de recherche n° 3710, Juin 1999, 40 pages

Abstract: We show that it is fruitful to dualize the integrality constraints in a combinatorial optimization problem. First, this reproduces the known SDP relaxations of the max-cut and max-stable problems. Then we apply the approach to general combinatorial problems. We show that the resulting duality gap is smaller than with the classical Lagrangian relaxation; we also show that linear constraints need a special treatment.

Key-words: Combinatorial optimization, Lagrangian relaxation, SDP relaxation, duality, quadratic constraints, linear matrix inequalities


Unité de recherche INRIA Rhône-Alpes, 655 avenue de l'Europe, 38330 Montbonnot St Martin (France)
Téléphone : +33 4 76 61 52 00 ; Télécopie : +33 4 76 61 52 52



1 Introduction

The aim of this paper is twofold:

(i) to show that some of the well-known SDP relaxations of combinatorial problems (max-cut, max-stable) are particular instances of a general technique: Lagrangian duality;

(ii) to apply this latter technique to rather arbitrary combinatorial problems, thereby proposing relaxation schemes which turn out to be finer than the classical ones.

A third aspect of this paper is clarification: we present Lagrangian and SDP duality in a common framework, as simple as possible, and we demonstrate how to apply the "recipe" of [16].

Lagrangian duality, in various situations (nonconvex, quadratic, SDP), is a central tool used in this paper. For the reader's convenience, its main points are explained in an accessible language in Sections 2 to 4. These sections could be skipped by a reader already expert in Lagrangian duality and linear matrix inequalities. However, Sections 2.2, 3.2, 4.2 and 4.5 do contain definitely new material and should not be skipped. Indeed, they deal with two examples from combinatorial optimization, max-cut and max-stable, which we use to illustrate the course of our development. The results mentioned in (i) above are given there. More precisely, 0-1 constraints can be expressed as x_i^2 − x_i = 0; applying Lagrangian duality to them gives birth to an SDP problem, and applying duality again to this problem reproduces the previously known and popular SDP relaxations.

Application of this scheme to these two examples opens the way to rather new dualization schemes for general 0-1 programming problems: here again, just dualize the 0-1 constraints, and dualize the result once more; this is our point (ii) above. Such a technique had already been proposed here and there in the literature, see for example [10], [21]; but it has been generally overlooked so far by the community. In §5, we develop the technique in some detail, showing in particular that some care should be exercised when treating the coupling constraints Ax = a.


2 A review of Lagrangian duality

Lagrangian duality will be applied in §4 to quadratically constrained problems, which in turn are useful for the combinatorial applications considered in this paper. The present section demonstrates the duality mechanism as a simple yet general tool, covering conic duality (in particular SDP duality) as well as duality for nonconvex quadratic programs.

2.1 Basic principles

Consider first an optimization problem put in the form

    inf f(x),  x ∈ X,  g_j(x) = 0, j = 1, …, m.    (1)

We introduce the Lagrangian, a function of the primal variable x and of the dual variable (or multipliers) u ∈ R^m:

    X × R^m ∋ (x, u) ↦ L(x, u) := f(x) + Σ_{j=1}^m u_j g_j(x) = f(x) + u^T g(x),    (2)

where g(x) ∈ R^m denotes the vector of constraint values. The dual function is then a function of u alone, defined by

    R^m ∋ u ↦ θ(u) := inf_{x∈X} L(x, u),    (3)

and the dual problem is

    sup θ(u),  u ∈ R^m.    (4)

If some of the constraints in (1) are inequalities g_j(x) ≤ 0, then the corresponding dual variables have a sign constraint: u_j ≥ 0. The dual function is always concave and upper semicontinuous, independently of the data X, f and g. The dual problem is therefore always "easy", insofar as (3) is easy to solve. For a general introduction and motivation of this technique, see for example [9, Chap. XII]; or also [5], [17, Chap. 6] for its application to combinatorial problems.
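As a toy illustration of (1)-(4) (our own example, not from the paper), take inf x² subject to x − 1 = 0 with X = R; the dual function is available in closed form, and its maximum equals the primal value since the problem is convex:

```python
# Toy instance of (1)-(4): inf x^2  s.t.  x - 1 = 0, with X = R.
# L(x, u) = x^2 + u(x - 1); the inner inf over x is attained at x = -u/2,
# giving theta(u) = -u^2/4 - u, maximized at u = -2 with value 1 = val(1).
theta = lambda u: -u * u / 4 - u

# Maximize theta on a fine grid (a check of the closed form, not a real solver).
u_best = max((k / 1000 for k in range(-10000, 10001)), key=theta)

assert u_best == -2.0
assert theta(u_best) == 1.0   # no duality gap: val(4) = val(1) = 1
```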

We will denote by val(·) the optimal value of an optimization problem (·). A crucial connection between (1) and (4) is then the duality gap val(1) − val(4),


which is always a nonnegative number. In fact, the dual problem amounts to solving a certain convex relaxation of (1), which can be obtained by applying duality again to (4); see for example [3, §16], [4], [11].

Lagrangian duality is a very versatile technique: in a constrained optimization problem giving birth to the formulation (1), all possible combinations are allowed to select the "hard" constraints (those defining X) and the "soft" constraints (those defining g, which are then dualized in L). In view of the above arguments, this selection can be oriented according to two criteria:

– How easily can the dual function be computed in (3)? For this, the function x ↦ L(x, u) should enjoy some structure not appearing directly in (1).

– How efficient is the dual problem in terms of (1)? In other words: how accurate is the convexification of (1)? The duality gap val(1) − val(4) quantifies this accuracy.

Section 5, precisely, will review various dualization schemes when the feasible domain in (1) is described by Ax = a, x ∈ {0, 1}^n.

A very relevant question, then, is whether the duality gap is 0. As is well known, this question heavily relies on convexity, but there are also some technicalities dealing with compactness. A crucial object for this is the subset of R^m made up of all possible constraint values:

    G := {g(x) : x ∈ X} = {γ ∈ R^m : γ = g(x) for some x ∈ X},    (5)

or rather its convex hull. Along this line, the following preliminary result is useful. Its proof in the present general setting is rather arid and we do not give it. The convex case will suffice for our purpose, and this will be Proposition 3.3 below.

Lemma 2.1 Suppose there is some ū ∈ R^m such that the dual function θ of (2), (3) is finite: inf_x L(x, ū) > −∞. Then the following two statements are equivalent:

(i) The convex hull of G defined in (5) contains 0 as an interior point;

(ii) θ(u) → −∞ for ‖u‖ → +∞.


2.2 First example: a dual of Max-Cut

Consider the max-cut problem, written in a form suitable for our purpose:

    inf Σ_{i,j=1}^n Q_ij x_i x_j = x^T Q x,  x_i^2 = 1, i = 1, …, n.    (6)

Needless to say, the constraints x_i^2 = 1 just express that x_i = ±1. The symmetric matrix Q has nonnegative coefficients, null if (i, j) is not an edge of the graph under study; but our development to come is valid for an arbitrary Q.

The above problem has the form (1), with X = R^n and g_j(x) = x_j^2 − 1. To apply duality, we form the Lagrangian

    L(x, u) := x^T Q x + Σ_{i=1}^n u_i (x_i^2 − 1) = x^T (Q + D(u)) x − e^T u.    (7)

Here D(u) denotes the diagonal matrix constructed from the vector u, and e ∈ R^n is the vector of all ones. Minimizing this Lagrangian with respect to x (on the whole of R^n!) is a trivial operation: if Q + D(u) is not positive semidefinite, we obtain −∞; otherwise, the best we can do is to take x = 0. This validates the following result:

Theorem 2.2 The dual function associated with the max-cut problem (6), (7) is

    θ(u) = −e^T u if Q + D(u) ≽ 0,  −∞ otherwise.

The dual problem sup θ(u) is therefore

    sup −e^T u,  subject to Q + D(u) ≽ 0,    (8)

which has a nonempty compact set of optimal solutions.

Proof. The expressions of the dual function and problem are clear. Now observe the following facts:

– taking u big enough gives a positive semidefinite Q + D(u);


– when x_i describes R, the constraint value x_i^2 − 1 describes [−1, +∞): the set G of (5) is [−1, +∞)^m, which is convex by itself and contains 0 as an interior point;

– in (8), we certainly have Q_ii + u_i ≥ 0, while −e^T u → −∞ if some u_i → +∞: the dual problem (8) is "sup-compact".

All this illustrates Lemma 2.1, and proves at the same time that (8) has a nonempty compact solution set. □
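The dual function of Theorem 2.2 is straightforward to evaluate numerically; a minimal sketch on a toy 3-node instance (our own hypothetical data, assuming NumPy is available):

```python
import numpy as np

def maxcut_dual(Q, u):
    """Dual function theta(u) of (6): -e^T u if Q + D(u) is PSD, else -inf."""
    lam_min = np.linalg.eigvalsh(Q + np.diag(u))[0]   # smallest eigenvalue
    return -u.sum() if lam_min >= -1e-12 else -np.inf

# Toy 3-node cycle with unit weights (hypothetical data).
Q = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

u_feas = np.array([2., 2., 2.])        # Q + 2I has eigenvalues 4, 1, 1: PSD
assert maxcut_dual(Q, u_feas) == -6.0  # theta(u) = -e^T u on the feasible set
assert maxcut_dual(Q, np.zeros(3)) == -np.inf   # Q alone is indefinite
```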

2.3 Conic duality

Here we extend the previous development to the case where constraints take their values in a cone. The corresponding theory is by no means new: it can be found for example in [14, Chap. 8] or [15, Chap. 6]; see also [9, §XII.5.3(a)]. Our aim is rather to demonstrate in simple terms how this duality scheme can be developed, giving birth among others to SDP duality, the subject of §3.1.

Consider instead of (1):

    inf f(x),  x ∈ X,  g(x) ∈ K,    (9)

where K is some subset of R^m. The same dualization scheme as in §2.1 can be used: introduce a "slack variable" y ∈ R^m and redefine X to X × K; (9) can be rewritten as

    inf f(x),  x ∈ X,  y ∈ K,  g(x) = y.

For (x, y, u) ∈ X × K × R^m, form the "slackened Lagrangian"

    L'(x, y, u) = f(x) + u^T (g(x) − y) = L(x, u) − u^T y,

where L is the original Lagrangian (2). Remembering the general relation inf_{x,y} = inf_x inf_y, we can obtain the corresponding dual function in two steps, minimizing with respect to x ∈ X the value

    inf_{y∈K} L'(x, y, u) = L(x, u) + inf_{y∈K} (−u^T y).

Thus, the dual function associated with (9) is θ(u) − σ_K(u), where

    σ_K(u) := sup_{y∈K} u^T y


is the so-called support function of the set K (note: the support function of a set coincides with that of its closed convex hull; K can therefore be assumed closed and convex).

The case of interest for our purpose is when K is a closed convex cone of R^m. Then define the polar cone of K by

    K° := {u ∈ R^m : u^T y ≤ 0 for all y ∈ K}.

It is easy to see that the corresponding support function is in this case

    σ_K(u) = 0 if u ∈ K°,  +∞ otherwise

(if u ∈ K°, σ_K(u) = u^T 0; if u ∉ K°, there is ȳ ∈ K such that u^T ȳ > 0, and then u^T(tȳ) goes to +∞ when t → +∞).

Thus, when K is a (closed convex) cone, the dual problem of (9) is

    sup θ(u),  u ∈ K°.    (10)

Compared with (1), the introduction of K amounts to chopping off the complement of K° from the dual space. Needless to say, (1) corresponds to K = {0} and {0}° = R^m.

Remark 2.3 It can be seen that Lemma 2.1(i) takes here the following form: there are points x_1, …, x_{m+1} in X and convex multipliers α_1, …, α_{m+1} such that the corresponding convex combination Σ α_k g(x_k) lies in the interior of K. This is called a Slater condition, which will be used in Theorem 3.4. □

3 A review of Linear Matrix Inequalities

An instance of (9) which is relevant for our forthcoming development is semidefinite programming, in which the constraint space is S_n, the space of symmetric n × n matrices, and K = S_n^+ is the set of positive semidefinite matrices. The latter set is obviously a cone. The resulting SDP duality is well developed in [23], where the starting tool is Legendre-Fenchel conjugacy; see also [1]. The present section is closer to the development of A. Shapiro in [19, Chap. 3]. We revisit SDP duality as a particular case of Lagrangian/conic duality, and we have several motivations for this:


– to use a unique theoretical framework, which will apply also to nonconvex quadratic problems;

– to introduce the reader to some elementary facts about linear matrix inequalities, which will also be instrumental for our development;

– to prepare a forthcoming work in a more general framework, where constraints take their values in a space of operators.

3.1 SDP duality

A convenient scalar product on our constraint space S_n is the ordinary dot product in R^{n²}:

    (S_n × S_n) ∋ (X, U) ↦ ⟨X, U⟩ := tr XU = Σ_{i,j=1}^n X_ij U_ij,

also called the trace product. The reason it is so convenient is the following relation: for all X ∈ S_n and u ∈ R^n,

    [Σ_{i,j=1}^n X_ij u_i u_j =]  u^T X u = ⟨X, uu^T⟩,    (11)

which can be checked in a straightforward way (note: (uu^T)_ij = u_i u_j); it will be used continually.

Note that S_n^+ = {X ∈ S_n : λ_n(X) ≥ 0}, where the smallest eigenvalue λ_n(X) = min_{‖u‖=1} u^T X u appears as a concave continuous function of the matrix X; this shows that the cone K = S_n^+ is convex and closed. For the same reason, its interior is the cone of positive definite matrices. In what follows, we will use the notation X ≻ 0 [resp. X ≽ 0] to express that the (symmetric) matrix X is positive [resp. semi-]definite. Also, the norm of a matrix in S_n has a useful expression:

    ‖X‖² = tr(X²) = Σ_{i=1}^n λ_i(X²) = Σ_{i=1}^n λ_i²(X).    (12)
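Identities (11) and (12) are easy to confirm numerically; a minimal sketch, assuming NumPy is available (random data of our own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
X = (A + A.T) / 2                      # a symmetric matrix in S_n
u = rng.standard_normal(n)

# (11): u^T X u equals the trace product <X, uu^T>.
assert np.isclose(u @ X @ u, np.trace(X @ np.outer(u, u)))

# (12): the squared norm tr(X^2) equals the sum of squared eigenvalues.
lam = np.linalg.eigvalsh(X)
assert np.isclose(np.trace(X @ X), np.sum(lam**2))
```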

The polar cone of S_n^+ with respect to our scalar product is easy to obtain:

Lemma 3.1 The cone K = S_n^+ satisfies K° = −K.


Proof. [K° ⊂ −K] Take X ∈ K° and assume for contradiction that X ∉ −K: there is u ∈ R^n such that u^T X u > 0. Then, in view of (11), U := uu^T (∈ K!) satisfies ⟨X, U⟩ = u^T X u > 0, which contradicts X ∈ K°.

[−K ⊂ K°] Let X ∈ −K, i.e., X is negative semidefinite. Then write the spectral decomposition X = Σ_{i=1}^n λ_i e_i e_i^T, where the family {e_i}_{i=1,…,n} forms an orthonormal basis of R^n and λ_n ≤ … ≤ λ_1 are the non-positive eigenvalues of X. For all U ∈ K, we have

    ⟨X, U⟩ = Σ λ_i ⟨U, e_i e_i^T⟩    [use (11)]
           = Σ λ_i e_i^T U e_i ≤ 0    [λ_i ≤ 0 and e_i^T U e_i ≥ 0].  □

Now, a linear optimization problem over the cone of positive semidefinite matrices (a linear SDP problem) is the following instance of (9):

    inf f(x) := b^T x,  x ∈ X := R^m,  g(x) := A_0 + Σ_{j=1}^m x_j A_j ≽ 0,    (13)

where b ∈ R^m is given, as well as each A_j in S_n. Writing the Lagrangian dual (10) of this conic problem is an easy exercise:

– Form the Lagrangian at (x, U) ∈ R^m × K° = R^m × S_n^−:

    L(x, U) = b^T x + ⟨g(x), U⟩ = Σ_{j=1}^m (b_j + ⟨A_j, U⟩) x_j + ⟨A_0, U⟩.    (14)

– Calculate the dual function: for U ≼ 0,

    θ(U) = ⟨A_0, U⟩ if b_j + ⟨A_j, U⟩ = 0 for j = 1, …, m,  −∞ otherwise.

– Remember that K° = −K = −S_n^+ (Lemma 3.1).

– Eliminate all dual points U at which θ(U) = −∞ (they play no role in the maximization of θ).

– Altogether, the dual of (13) comes naturally as

    sup ⟨A_0, U⟩,  U ≼ 0,  b_j + ⟨A_j, U⟩ = 0, j = 1, …, m.    (15)


Another easy exercise is to compute the dual of (15). We form the Lagrangian

    S_n^− × R^m ∋ (U, x) ↦ L'(U, x) := ⟨A_0, U⟩ + Σ_{j=1}^m x_j (b_j + ⟨A_j, U⟩),

in which we recognize (14), with the same domain of definition. The dual

    inf_{x∈R^m} sup_{U≼0} ( b^T x + ⟨A_0 + Σ_{j=1}^m x_j A_j, U⟩ )

of (15) is thus nothing but (13), which thus appears as its own bidual.

3.2 Example: a bidual of Max-Cut

The example of §2.2 can be used again to illustrate our development so far: indeed the dual obtained in (8) is an SDP problem, to which (conic) duality can in turn be applied, using a ("bi")dual variable X ∈ S_n. Some signs must be changed because (8) is a maximization problem; we take X ∈ −K° = S_n^+ (see (10) and Lemma 3.1) and form the ("2nd") Lagrangian

    L'(u, X) = −e^T u + ⟨X, Q + D(u)⟩.    (16)

Call d(X) the vector of R^n constructed from the diagonal of the matrix X ∈ S_n. Realizing that ⟨X, D(u)⟩ = d(X)^T u (in fact, the operators d and D are adjoint to each other), we have

    L'(u, X) = (−e + d(X))^T u + ⟨X, Q⟩,

whose supremum with respect to u is certainly +∞ if d(X) − e ≠ 0.

Theorem 3.2 The dual problem associated with (8), (16) is

    inf ⟨Q, X⟩,  d(X) = e,  X ≽ 0,    (17)

which has a nonempty compact set of optimal solutions.

Proof. From the discussion above, it is clear enough that the dual function θ'(X) = sup_u L'(u, X) is ⟨X, Q⟩ if d(X) = e, and +∞ if not. The resulting dual problem inf_{X≽0} θ'(X) is just (17).


Now the feasible set in (17) is obviously nonempty; the identity matrix is feasible (and positive definite). Besides, all feasible matrices X have nonnegative eigenvalues which sum up to n (the trace of X); in view of (12), they are bounded. □

Note that this result could be anticipated from Theorem 2.2: (8) satisfies both conditions in Lemma 2.1, hence its dual (17) should satisfy them as well.

The interesting point in the above result is that (17) is just the relaxed problem of [6]. This suggests that the dualization (7), (8) is "gap-equivalent" to the relaxation of Goemans and Williamson. For this, we must check that there is no duality gap between (8) and (17); but this will be a consequence of Theorem 3.4 below.
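Weak duality between (8) and (17) can be observed on a toy instance (our own hypothetical data, assuming NumPy): any u feasible in (8) gives a lower bound −e^T u on the value ⟨Q, X⟩ at any X feasible in (17):

```python
import numpy as np

# Toy 3-node cycle with unit weights (hypothetical data).
Q = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
n = Q.shape[0]

u = np.ones(n)                           # Q + D(u) = Q + I, eigenvalues 3, 0, 0: PSD
assert np.linalg.eigvalsh(Q + np.diag(u))[0] >= -1e-12
lower = -u.sum()                         # value of (8) at u

X = np.eye(n)                            # d(X) = e and X PSD: feasible in (17)
upper = np.trace(Q @ X)                  # value of (17) at X

assert lower <= upper                    # weak duality: -3 <= 0 here
```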

3.3 Primal-dual equivalence

We now study the relationship between our primal-dual pair of SDP problems (13) and (15). The duality gap will be the object of Theorem 3.4; first we prove the convex counterpart of Lemma 2.1. Even though we restrict ourselves to SDP duality, we use the notation of §2.1, which is lighter.

Proposition 3.3 Consider the convex problem (13) and its dual (15). Assume that the feasible domain of (15) is nonempty. Then the following two properties are equivalent:

(i) (13) satisfies the Slater condition: there is x̄ ∈ R^m with g(x̄) positive definite;

(ii) the objective function θ(U) in (15) tends to −∞ when ‖U‖ → +∞ while staying feasible.

Proof. [(i) ⇒ (ii)] Take x̄ as stated; because positive definite matrices form an open set, there is ε > 0 such that

    g(x̄) + P ∈ K for all P ∈ S_n with ‖P‖ ≤ ε.    (18)

Then take an arbitrary nonzero U ∈ K°; setting P_U := εU/‖U‖, we have g(x̄) + P_U ∈ K, therefore

    0 ≥ ⟨U, g(x̄) + P_U⟩ = ⟨U, g(x̄)⟩ + ε‖U‖.


Thus,

    θ(U) ≤ L(x̄, U) = f(x̄) + ⟨U, g(x̄)⟩ ≤ f(x̄) − ε‖U‖

for all U ≼ 0; this implies (ii).

[(ii) ⇒ (i)] For the converse, observe that, when x describes X = R^m, g(x) describes a convex set (an affine subspace) in S_n. Then "not (i)" means that this set does not meet the interior of K, so it is separated from K by some hyperplane in S_n (see [18, Thm. 11.3] for example). There is a nonzero M̄ ∈ S_n and a number r̄ such that

    ⟨U, M̄⟩ ≤ r̄ ≤ ⟨M̄, g(x)⟩ for all U ≽ 0 and x ∈ R^m

(here we have used the fact that positive definite matrices are dense in S_n^+). Because 0 ∈ K, we certainly have r̄ ≥ 0; and because K is a cone, we may just assume r̄ = 0 (⟨U, M̄⟩ > 0 would imply t⟨U, M̄⟩ → +∞ for t → +∞, contradicting ⟨tU, M̄⟩ ≤ r̄). In other words we have

    ⟨U, M̄⟩ ≤ 0 ≤ ⟨M̄, g(x)⟩ for all U ∈ K and x ∈ R^m.

It follows in particular that M̄ ∈ K°. Now take Ū ∈ K° such that θ(Ū) > −∞. We have for all t ≥ 0 and x ∈ R^m

    L(x, Ū + tM̄) = L(x, Ū) + t⟨M̄, g(x)⟩ ≥ L(x, Ū) ≥ θ(Ū),

hence θ(Ū + tM̄) ≥ θ(Ū). Observe that Ū + tM̄ ∈ K° and let t → +∞ to contradict (ii). □

With respect to Lemma 2.1, convexity of G makes the proof a lot easier, mainly for the (i) ⇒ (ii) part. Without such convexity, points of the form g(x_k) need to be considered (see Remark 2.3), for which the corresponding values f(x_k) need to be controlled; and then it is linearity of f that helps (or at least its boundedness from above).

It is important to understand that the particular form of Slater condition used here is relevant only when a full-dimensional cone K is involved in the constraints. For a problem (1) containing only "natural" equalities, one should return to the general form used in Lemma 2.1(i); incidentally, this form would make the proof below easier. Finally we mention that the concept of interior point is not completely satisfactory (for example, it does not cover a simple linear program where g(x) = Ax − a, with a degenerate matrix A); actually, it is the relative interior that should be used.

Theorem 3.4 Under the conditions of Proposition 3.3, there is no duality gap: val(15) = val(13).

Proof. Perturb the constraints in (13); the result is the so-called perturbation function

    S_n ∋ P ↦ v(P) := inf { f(x) : x ∈ R^m, g(x) + P ≽ 0 }.

Convexity of the data allows us to establish convexity of this function (see [9, §IV.2.4] for example). Besides, the perturbed feasible domain is nonempty for P small enough: see (18). Thus, we have a convex function v, well-defined in a neighborhood of 0: it has a nonempty subdifferential at 0 (see [9, Prop. IV.1.2.1] for example), i.e., there exists U ∈ S_n such that

    v(P) ≥ v(0) + ⟨U, P⟩ for all P ∈ S_n.

By definition of v, this means

    f(x) ≥ val(13) + ⟨U, P⟩ for all (x, P) ∈ R^m × S_n such that g(x) + P ≽ 0.

For given x, the set of P such that g(x) + P ≽ 0 is −g(x) + K. Thus, we have

    f(x) + ⟨U, g(x)⟩ ≥ val(13) + ⟨U, Y⟩ for all x ∈ R^m and Y ∈ K.

When Y varies freely in K, the right-hand side must be bounded from above: U is certainly in K°. Then, taking Y = 0, we obtain

    val(15) ≥ θ(U) = inf_x f(x) + ⟨U, g(x)⟩ ≥ val(13). □

By symmetry (see the end of §3.1), the above results can be applied to (15) and its dual (13):

Corollary 3.5 Assume the feasible domain of (13) is nonempty. Then there is no duality gap under any of the following equivalent conditions:

(i) there is Ū ≺ 0 feasible in (15);

(ii) the objective function b^T x in (13) tends to +∞ when ‖x‖ tends to +∞ while staying feasible.


3.4 Infimum of a quadratic function

Technically, the essential content of this paper consists in applying Lagrangian duality to optimization problems involving quadratic functions. As it happens, linear matrix inequalities and their accompanying SDP duality appear naturally in this context, and the aim of this section is to demonstrate the connection. Given Q ∈ S_p, b ∈ R^p and c ∈ R, consider the function

    R^p ∋ y ↦ q(y) := y^T Q y + b^T y + c.

The following result is elementary.

Lemma 3.6 A necessary and sufficient condition for q to have a finite lower bound over R^p (inf q > −∞) is

    Q ≽ 0 and b ∈ R(Q).    (19)

Proof. [Necessity] If Q is not ≽ 0, i.e., q is not convex, there exists y ∈ R^p such that y^T Q y < 0. Then q(ty) → −∞ when t → +∞.

On the other hand, b ∉ R(Q) means that b has a nonzero component b_N on (R(Q))^⊥ = (R(Q^T))^⊥ = N(Q). Then

    q(−t b_N) = b^T(−t b_N) + c = −t‖b_N‖² + c,

which tends to −∞ when t → +∞.

[Sufficiency] Conversely, the property b ∈ R(Q) means that the linear system 2Qy = −b (i.e., ∇q(y) = 0) has at least one solution, and this characterizes the minimum points of the convex function q. □

When (19) holds, the minimization of q can be carried out explicitly in terms of the spectral decomposition Q = Σ_{i=1}^{p−k} λ_i q_i q_i^T. Here 0 ≤ k ≤ p is the corank of Q (so p − k is its rank), λ_1 ≥ … ≥ λ_{p−k} > 0, and q_1, …, q_{p−k} are orthonormal (eigen)vectors forming a basis of R(Q). The pseudo-inverse of Q is then

    Q† := Σ_{i=1}^{p−k} λ_i^{−1} q_i q_i^T.


Lemma 3.7 When (19) holds and using the above notation, ȳ := −(1/2) Q† b minimizes q and the corresponding infimal value is

    inf_{y∈R^p} q(y) = q(ȳ) = c − (1/4) b^T Q† b.

Proof. We have

    Q Q† = Σ_{i,j=1}^{p−k} λ_i λ_j^{−1} q_i (q_i^T q_j) q_j^T = Σ_{i=1}^{p−k} q_i q_i^T.

Because the q_i's form an orthonormal basis of R(Q), we can write b = Σ β_i q_i, with β_i = q_i^T b. This gives

    Q Q† b = Σ_{i,j=1}^{p−k} q_i q_i^T q_j q_j^T b = Σ_{i=1}^{p−k} q_i q_i^T b = b

(actually, Q Q† characterizes the orthogonal projection onto R(Q)). Then Q ȳ = −(1/2) Q Q† b = −(1/2) b, i.e., ȳ minimizes q. The computation of q(ȳ) is then straightforward. □
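Lemma 3.7 is easy to check numerically; the sketch below (toy data of our own, assuming NumPy, whose `linalg.pinv` computes the Moore-Penrose pseudo-inverse Q†) verifies both the minimizer and the infimal value:

```python
import numpy as np

# q(y) = y^T Q y + b^T y + c with Q PSD and b in R(Q) (hypothetical data).
Q = np.array([[2., 0., 0.],
              [0., 1., 0.],
              [0., 0., 0.]])           # PSD, rank 2
b = np.array([4., -2., 0.])            # lies in R(Q): last coordinate is 0
c = 3.0

q = lambda y: y @ Q @ y + b @ y + c
ybar = -0.5 * np.linalg.pinv(Q) @ b                  # Lemma 3.7 minimizer
inf_q = c - 0.25 * b @ np.linalg.pinv(Q) @ b         # Lemma 3.7 value

assert np.isclose(q(ybar), inf_q)
assert np.allclose(2 * Q @ ybar + b, 0)   # stationarity: grad q(ybar) = 0
```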

These elementary results can be used to prove a lemma very useful in the analysis of linear matrix inequalities. It is everyday bread in control theory [2, Chap. 2], and will be useful for us as well. Here we give a simple proof.

Lemma 3.8 (Schur's Lemma) For Q ∈ S_p, P ∈ S_n and S = [s_1, …, s_n] ∈ R^{p×n}, the following two statements are equivalent:

(i) The matrix

    [ P  S^T ]
    [ S  Q   ]

is positive [resp. semi-]definite.

(ii) The matrices Q and P − S^T Q† S are both positive [resp. semi-]definite, and s_i ∈ R(Q) for i = 1, …, n; these last range conditions are automatically satisfied when Q ≻ 0 (then R(Q) = R^p).

Proof. For (x, y) ∈ R^n × R^p, define the function

    q_x(y) := [x; y]^T [ P  S^T ; S  Q ] [x; y] = x^T P x + 2 (Sx)^T y + y^T Q y


and consider Lemma 3.6 with b = 2Sx, c = x^T P x; observe that (i) means exactly q_x(y) > 0 [resp. ≥ 0] for all nonzero (x, y) ∈ R^n × R^p.

[(i) ⇒ (ii)] Under (i), we always have Sx ∈ R(Q).

– Take x = 0 to see that (i) implies Q ≻ 0 [resp. ≽ 0].

– Then take successively x = e_i, i = 1, …, n, so that Sx = s_i, which lies in R(Q).

Finally use Lemma 3.7 to write

    inf_{y∈R^p} q_x(y) = x^T P x − (Sx)^T Q† Sx = x^T (P − S^T Q† S) x.

This must be positive [resp. nonnegative] for all nonzero x ∈ R^n, which was the last property we wanted.

[(ii) ⇒ (i)] Under (ii), we can use Lemma 3.7 again to write, for all (x, y) ∈ R^n × R^p,

    q_x(y) ≥ inf_{y∈R^p} q_x(y) = x^T (P − S^T Q† S) x.

To say that this number is positive [resp. nonnegative] is just (i). □
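The semidefinite case of Lemma 3.8 can be illustrated numerically; a minimal sketch with NumPy on hypothetical block data:

```python
import numpy as np

# Lemma 3.8: M = [[P, S^T], [S, Q]] is PSD iff Q is PSD, the columns of S
# lie in R(Q), and the Schur complement P - S^T Q† S is PSD (toy data).
P = np.array([[2., 0.],
              [0., 2.]])
Q = np.array([[1., 0.],
              [0., 1.]])
S = np.array([[1., 0.],
              [0., 1.]])               # columns trivially in R(Q) = R^2

M = np.block([[P, S.T], [S, Q]])
schur = P - S.T @ np.linalg.pinv(Q) @ S

psd = lambda A: np.linalg.eigvalsh(A)[0] >= -1e-12
assert psd(M) and psd(Q) and psd(schur)   # both sides of the equivalence hold
```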

We now give two applications of Schur's Lemma.

Lemma 3.9 (Rank-one constraints) The set of (X, x) ∈ S_n × R^n satisfying X ≽ xx^T is closed and convex. Actually,

    X − xx^T ≽ 0 [resp. ≻ 0]  ⟺  [ X  x ; x^T  1 ] ≽ 0 [resp. ≻ 0].

Proof. The set of (X, x)'s described by the right part of the equivalence is the pre-image of the closed [resp. open] convex set S_{n+1}^+ [resp. S_{n+1}^{++}] by an affine operator; as such, it is closed [resp. open] and convex. To prove the equivalence, apply Lemma 3.8 with p = 1, P = X, S = x^T and Q = 1. □

Lemma 3.10 (Debreu's Lemma) Let A ∈ R^{m×n} and H ∈ S_n be positive definite on N(A) (x^T H x > 0 whenever Ax = 0 and x ≠ 0). Then H + ρ A^T A is positive definite for all ρ > 0 large enough.


Proof. Calling m′ the rank of A, let Z_N and Z_R be respectively n × (n − m′) and n × m′ matrices whose columns form orthonormal bases of N(A) and N(A)^⊥. The columns of the n × n matrix B := [Z_R Z_N] are orthonormal: B is nonsingular, B^{−1} = B^T, and a matrix M is positive definite if and only if B^T M B is. Using the block structure of B, we have

    B^T M B = [ Z_R^T ; Z_N^T ] M [Z_R Z_N] = [ Z_R^T M Z_R  Z_R^T M Z_N ; Z_N^T M Z_R  Z_N^T M Z_N ].

Since A Z_N = 0, we obtain in particular

    B^T [H + ρ A^T A] B = [ Z_R^T H Z_R + ρ Z_R^T A^T A Z_R  Z_R^T H Z_N ; Z_N^T H Z_R  Z_N^T H Z_N ] =: [ P  S^T ; S  Q ].

Now apply Lemma 3.8. Observe that Q = Z_N^T H Z_N is positive definite (by assumption); it is therefore "onto", and the only thing we need is P − S^T Q† S ≻ 0, i.e.,

    Z_R^T H Z_R + ρ Z_R^T A^T A Z_R ≻ Z_R^T H Z_N (Z_N^T H Z_N)^{−1} Z_N^T H Z_R.    (20)

We claim that Z_R^T A^T A Z_R, which obviously lies in S_{m′}^+, is actually positive definite: indeed, for all v ∈ R^{m′}, the property v^T Z_R^T A^T A Z_R v = ‖A Z_R v‖² = 0 implies

    Z_R v ∈ N(A) ∩ N(A)^⊥ = {0},

and hence v = 0.

Thus, with simplified notation, we have to prove that a certain matrix ρM − N is positive definite. Calling R (a symmetric positive definite matrix) the square root of M = Z_R^T A^T A Z_R ≻ 0, we have for all nonzero v ∈ R^{m′}:

    v^T (ρM − N) v = ρ v^T R R v − v^T N v = ρ ‖Rv‖² − v^T R R^{−1} N R^{−1} R v.

Setting w = Rv (≠ 0), this is equal to ρ‖w‖² − w^T R^{−1} N R^{−1} w, which is a positive number for ρ large enough, namely for ρ > λ_1(R^{−1} N R^{−1}). □
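Debreu's Lemma can likewise be illustrated numerically (toy data of our own, assuming NumPy): H below is positive definite on N(A) only, yet adding ρ A^T A makes it positive definite on all of R^3.

```python
import numpy as np

A = np.array([[1., 0., 0.]])           # N(A) = span{e2, e3}
H = np.array([[-1., 0., 0.],
              [ 0., 1., 0.],
              [ 0., 0., 1.]])           # PD on N(A), but indefinite on R^3

min_eig = lambda M: np.linalg.eigvalsh(M)[0]
assert min_eig(H) < 0                            # H alone is not PD
assert min_eig(H + 2.0 * A.T @ A) > 0            # rho = 2 already suffices here
```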

4 Lagrangian duality for quadratic programs

The material introduced in Sections 2 and 3 is just what is needed to apply Lagrangian duality to quadratic optimization problems.

INRIA


Duality and SDP relaxations 19

4.1 Dualizing a quadratic problem

Let there be given m + 1 quadratic functions defined on R^n:

q_j(x) := x^T Q_j x + b_j^T x + c_j, j = 0, …, m,

where the Q_j's lie in S_n, the b_j's in R^n and the c_j's in R; we assume c₀ = 0. With these data, consider the quadratic problem

inf q₀(x), x ∈ R^n,
q_j(x) = 0, j = 1, …, m.   (21)

The associated Lagrangian (2) of §2.1, with 𝒳 := R^n, f := q₀ and g_j := q_j, j = 1, …, m, is then

L(x, u) = x^T Q(u) x + b(u)^T x + c(u),   (22)

where Q(u) := Q₀ + ∑_{j=1}^m u_j Q_j, b(u) := b₀ + ∑_{j=1}^m u_j b_j and c(u) := ∑_{j=1}^m u_j c_j = c^T u. Just as in §2.1, we denote by

R^m ∋ u ↦ θ(u) := inf_{x∈R^n} L(x, u)   (23)

the dual function, and the dual problem is to maximize θ. In fact the dual function can be computed explicitly:

Proposition 4.1

θ(u) = c(u) − ¼ b(u)^T Q(u)† b(u) if Q(u) ⪰ 0 and b(u) ∈ R(Q(u)),
θ(u) = −∞ otherwise.

Proof. Just apply Lemma 3.6 and its extension Lemma 3.7. □

Then Schur's Lemma enables a "PD-representation" of the epigraph of the dual function (23) (terminology appearing in [15, §6.4.3]), and thereby an SDP formulation of the dual problem:

Corollary 4.2 The dual of (21), (22) is equivalent to the SDP problem with variables u ∈ R^m and r ∈ R:

sup r,
[ c(u) − r    ½ b(u)^T
  ½ b(u)      Q(u)     ] ⪰ 0.   (24)


In other words, sup_{u∈R^m} θ(u) is the optimal value r̄ of (24) (a number in [−∞, +∞]).

Proof. Using an extra variable r ∈ R, the dual problem consists in maximizing r subject to θ(u) ≥ r. In view of Proposition 4.1, this last property exactly means

c(u) − r − ¼ b(u)^T Q(u)† b(u) ≥ 0, Q(u) ⪰ 0, b(u) ∈ R(Q(u)),

and from Lemma 3.8, this is equivalent to the SDP constraint in (24). □

Thus, even though the dual problem is posed in R^m, it involves intermediate variables in S_n, and even in S_{n+1}. This is somehow made necessary by the requirement Q(u) ⪰ 0, unavoidable in (22), (23).
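When Q(u) is positive definite, the formula of Proposition 4.1 is the usual closed form for the minimum of a strictly convex quadratic. A quick numpy sanity check on a random instance (an illustration added in this transcription; Q, b, c stand for Q(u), b(u), c(u) at some fixed u):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)        # positive definite, plays the role of Q(u)
b = rng.standard_normal(n)         # plays the role of b(u)
c = 2.0                            # plays the role of c(u)

# Closed form of Proposition 4.1 (the pseudo-inverse is the inverse here):
theta = c - 0.25 * b @ np.linalg.pinv(Q) @ b

# Direct minimization: the gradient 2 Q x + b vanishes at the minimizer
x_star = np.linalg.solve(2.0 * Q, -b)
q_min = x_star @ Q @ x_star + b @ x_star + c

print(abs(theta - q_min))          # essentially zero
```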

4.2 Second example: Max-Stable

We have applied Lagrangian duality to the max-cut problem to illustrate how this general strategy recovers a known and successful particular relaxation technique. The same phenomenon occurs with another problem having a popular relaxation: the max-stable problem, which we formulate as follows:

max w^T x, x_i x_j = 0, (i, j) ∈ E, x_i² = x_i, i = 1, …, n;   (25)

here we have used again the trick t² − t = 0 ⇔ t ∈ {0, 1}. Of course w > 0 (if some w_i ≤ 0, just set the corresponding x_i to 0; typically w = e); but this assumption is not needed.

We use the dualization technique of x4.1. The notation is slightly com-plicated, though. Of course we change signs because we maximize; but also,there are now two sets of dual variables:

(i) those assigned to the constraints xixj = 0; it is convenient to considerin (25) only those constraints with i < j (E contains no (i; i)!) andto call �1

2Uij the corresponding multiplier; then the contribution to the

Lagrangian of these terms will be

�1

2

X(i;j)2Ei<j

Uijxixj = �X

(i;j)2E

Uijxixj ;


(ii) those for the constraints x_i² − x_i = 0, as in §2.2; we call −U_ii these multipliers, which produce the contribution −∑ U_ii x_i² + ∑ U_ii x_i.

Thus, the dual variables altogether form a symmetric matrix U, in which we impose U_ij = 0 if the constraint (i, j) is not present in (25). In summary, we take the Lagrangian

L(x, U) = w^T x − ∑_{i,j=1}^n U_ij x_i x_j + ∑_{i=1}^n U_ii x_i = −x^T U x + (d(U) + w)^T x,   (26)

with the dual domain

𝒰 := {U ∈ S_n : U_ij = 0 if i ≠ j and (i, j) ∉ E}.

Maximizing this Lagrangian (with respect to x unconstrained) can be done with the tools of §4.1, yielding an SDP representation of the dual epigraph.

Theorem 4.3 The dual value inf θ(U) associated with (25), (26) is the optimal value of

inf r, (U, r) ∈ 𝒰 × R,
[ r                  −½ (d(U) + w)^T
  −½ (d(U) + w)      U               ] ⪰ 0.   (27)

Proof. The function −L(x, U) of (26) is an instance of (22), with adapted notation and sign. Then the dual problem (24) from §4.1 becomes (27). □

Slater and compactness properties could be checked on this problem, but it will be simpler to use the bidual for this.

4.3 Bidualizing a quadratic problem

Now, duality can again be applied to the SDP problem (24). This bidualization of the starting primal problem (21) is actually a convexification of it.

Theorem 4.4 The dual of (24), i.e., the bidual of (21), is

(SDP)
inf ⟨Q₀, X⟩ + b₀^T x, X ∈ S_n, x ∈ R^n,
⟨Q_j, X⟩ + b_j^T x + c_j = 0, j = 1, …, m,
[ 1    x^T
  x    X   ] ⪰ 0.


Proof. The result comes mechanically from SDP duality (§3.1). One can also detail the calculations, according to §2.3, in the cone K = S_{n+1}^+. We take the (bi)dual variable

Y = [ t    x^T
      x    X   ] ∈ −K° = S_{n+1}^+

(see Lemma 3.1) and form the Lagrangian associated with (24):

L(u, r, Y) = r + ⟨ [ t  x^T ; x  X ], [ c(u) − r  ½ b(u)^T ; ½ b(u)  Q(u) ] ⟩
= (1 − t) r + t c(u) + x^T b(u) + ⟨X, Q(u)⟩
= (1 − t) r + ⟨X, Q₀⟩ + x^T b₀ + ∑_{j=1}^m u_j (⟨X, Q_j⟩ + x^T b_j + t c_j).

Its supremum with respect to (u, r) (the (bi)dual function, to be minimized) is +∞ unless t = 1 and the coefficient of each u_j is zero. Then the only remaining term is ⟨X, Q₀⟩ + x^T b₀. Altogether, we recognize (SDP). □

Another exercise is left to the reader: applying once more SDP duality to (SDP), one obtains (24) again.

Observe that the bidual (SDP) of (21) appears as a convex problem; this is normal: a dual problem is always convex (and (24) is convex as well). Applying weak duality twice, we see that val(21) ≥ val(24) ≤ val(SDP). Comparing val(21) with val(SDP) requires one more argument, which we give now.

4.4 The lifting procedure

Looking at (SDP) does reveal a strong similarity with (21); basically, these two problems just differ by the introduction of the new variable X. The similarity is made even more blatant with the help of Lemma 3.9: (SDP) can also be written

inf ⟨Q₀, X⟩ + b₀^T x, X ∈ S_n, x ∈ R^n,
⟨Q_j, X⟩ + b_j^T x + c_j = 0, j = 1, …, m,
X ⪰ x x^T.   (28)

In addition, this writing allows a direct interpretation of (SDP), without passing through the intermediate problem (24):


– In view of (11), a quadratic form x^T Q x can equivalently be written as ⟨Q, x x^T⟩.

– Setting X := x x^T, (21) can therefore be written as

inf ⟨Q₀, X⟩ + ⟨b₀, x⟩, X ∈ S_n, x ∈ R^n,
⟨Q_j, X⟩ + ⟨b_j, x⟩ + c_j = 0, j = 1, …, m,
X = x x^T.

– In this last problem, everything is linear except the last constraint, which is nonconvex.

– Relax this last constraint X = x x^T to X ⪰ x x^T, which is convex with respect to (x, X) (Lemma 3.9). Needless to say, this widens the feasible domain.

The convexification mechanism is now clear. The above "recipe" is called the lifting procedure in [13], where it was introduced in the context of 0-1 programming. This procedure thus appears as fairly general indeed: it is applicable to any optimization problem involving quadratic functions; and any polynomial problem can be reduced to a quadratic one, via appropriate additions of variables (see for example [21], [22]). As a result, the lifting procedure can be considered as a convexification process for polynomial optimization.
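The two ingredients of the recipe are easy to check numerically (an illustration added in this transcription): the lifting identity x^T Q x = ⟨Q, x x^T⟩, and the fact that X ⪰ x x^T is exactly positive semidefiniteness of the bordered matrix [1 x^T; x X] (Lemma 3.9, via the Schur complement):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2.0                      # symmetric Q
x = rng.standard_normal(n)

# (11): after lifting, the quadratic form becomes linear in X
X = np.outer(x, x)                       # the exact lifting X := x x^T
print(abs(np.trace(Q @ X) - x @ Q @ x))  # essentially zero

# Lemma 3.9: X >= x x^T  <=>  [[1, x^T], [x, X]] is positive semidefinite
def schur_ok(x, X):
    Y = np.block([[np.ones((1, 1)), x[None, :]], [x[:, None], X]])
    return np.linalg.eigvalsh(Y).min() > -1e-10

print(schur_ok(x, X))                    # exact lifting: feasible (boundary)
print(schur_ok(x, X + np.eye(n)))        # relaxed points stay feasible
print(schur_ok(x, X - 0.1 * np.eye(n)))  # violates X >= x x^T: infeasible
```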

Let us summarize the relative merits of the convex relaxations introduced in this section, in terms of duality gaps:

Proposition 4.5 There holds val(21) ≥ val(28) = val(SDP) ≥ val(24).

Proof. The last inequality is weak duality between (24) and its dual (SDP) (beware of the change of signs!). The equality is Lemma 3.9. For the first inequality, take any x feasible in (21) (if there is none, val(21) = +∞ and there is nothing to prove); then (x, X := x x^T) is feasible in (28) and has the same objective value q₀(x) (use (11), in particular). Thus, the feasible domain in (28) is "larger" than in (21). □

It is worth mentioning that the last inequality in Proposition 4.5 often holds as an equality, namely in every "normal" situation where there is no duality gap between the pair of dual problems (24) and (SDP) (see Theorem 3.4).


Remark 4.6 We emphasize the difference between the two dualization schemes resulting in the lifting procedure:

– one – (21)→(24) – dualizes m constraints, producing a dual problem in R^m;

– the second – (24)→(28) – dualizes O(n²) constraints, producing (28) with O(n²) variables.

These two dualization schemes have little to do with each other. As a result, (28) is an "exotic bidual" of (21); the two problems are not even posed in the same space. □

4.5 Examples: Max-Stable and Max-Cut

Just as Lagrangian duality reproduced the Goemans-Williamson relaxation for the max-cut problem, it turns out to reproduce for the max-stable problem one of the forms of Lovász's number [12] (see [8]). Such a result was already mentioned in [22]. Before proving it, we state separately a result which will also be used later.

Lemma 4.7 The set of matrices X ∈ S_n satisfying X ⪰ d(X) d(X)^T is compact.

Proof. First, this set is obviously closed. The assumption certainly implies that X ⪰ 0, hence X_ii ≥ 0, and also that the diagonal of X − d(X) d(X)^T is nonnegative: X_ii − X_ii² ≥ 0; altogether, 0 ≤ X_ii ≤ 1. It follows that tr X ≤ n: each eigenvalue of X is bounded (by n). In view of (12), X is bounded. □

Theorem 4.8 Problem (27) and

sup w^T d(X), X ∈ S_n,
X_ij = 0, (i, j) ∈ E,
X − d(X) d(X)^T ⪰ 0   (29)

have the same optimal value. Furthermore, both problems have a nonempty compact set of optimal solutions.


Proof. Write (25) as an instance of (21): the objective function is q₀(x) := −w^T x; the constraints are q_ij(x) := x_i x_j and q_i(x) := x_i² − x_i. Then apply Theorem 4.4: the dual of (27) – the bidual of (25) – is

sup w^T x,
X_ij = 0, (i, j) ∈ E,
X_ii − x_i = 0, i = 1, …, n,
[ 1    x^T
  x    X   ] ⪰ 0.   (30)

This implies d(X) = x and, using Lemma 3.9, we obtain (29).

Now take the matrix X = εI with 0 < ε < 1/n, so that X − d(X) d(X)^T = εI − ε² e e^T ≻ 0 (its eigenvalues are ε − nε² > 0 and ε): it satisfies Slater's condition in (29); besides, the feasible domain in (29) is compact (Lemma 4.7). Altogether, there is no duality gap between the pair of dual problems (27) and (29), and both have a nonempty compact set of optimal solutions (Theorem 3.4). □

The lifting procedure of §4.4 could have been applied to (25), thus producing (29) directly. This procedure could also be applied to (6); we would obtain the relaxation

inf ⟨Q, X⟩, d(X) = e, X ⪰ x x^T.

However, the variable x is superfluous: the best to do is to take x = 0, which gives (17) indeed.

5 Application to 0-1 programming

Consider now a rather general combinatorial optimization problem:

inf b^T x, Ax = a, x_i² − x_i = 0, i = 1, …, n.   (31)

By contrast with the examples seen so far, nothing is quadratic except the Boolean constraints; but linear (equality) constraints do exist. As a result, even without assuming any structure in the matrix A, this problem has at least three possible dualizations following the pattern of §2.1 or §4.1.


5.1 Dualization L

The particular choice g = Ax − a ∈ R^m, 𝒳 = {0,1}^n produces the Lagrangian

L_L(x, u) := b^T x + u^T (Ax − a).

Its minimization over 𝒳 = {0,1}^n is trivial, resulting in a piecewise linear dual function:

θ_L(u) := ∑_{i=1}^n min {0, (b + A^T u)_i} − a^T u.   (32)

Maximizing θ_L is therefore a nonsmooth optimization problem, which can be done by a number of methods, say those of [7], of [20], or of Chapters XII and XV in [9]. As far as duality gaps are concerned, the resulting dual problem amounts to replacing 𝒳 = {0,1}^n in (31) by its convex hull [0,1]^n.
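Dual function (32) is cheap to evaluate. The sketch below (an illustration added in this transcription) computes it and confirms it against brute-force minimization of L_L over {0,1}^n on a tiny random instance:

```python
import numpy as np

def theta_L(u, A, a, b):
    """Dual function (32): inf of b^T x + u^T (Ax - a) over x in {0,1}^n."""
    r = b + A.T @ u                      # reduced costs
    return np.minimum(0.0, r).sum() - a @ u

# sanity check against brute force on a tiny instance
rng = np.random.default_rng(3)
m, n = 2, 4
A = rng.standard_normal((m, n))
a = rng.standard_normal(m)
b = rng.standard_normal(n)
u = rng.standard_normal(m)

brute = min(b @ x + u @ (A @ x - a)
            for k in range(2 ** n)
            for x in [np.array([(k >> i) & 1 for i in range(n)], dtype=float)])
print(theta_L(u, A, a, b), brute)        # the two values coincide
```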

Remark 5.1 This last property can for example be obtained by an easy exercise: form an LP equivalent to the problem sup θ_L and dualize it. As illustrated by the previous sections, duality generates a convexification, which can be characterized by a bi-dualization. It is interesting to note that the present convexification is performed in the primal space R^n itself; remember Remark 4.6. □

This scheme is quite popular. It can even be said that the expression "Lagrangian relaxation" of (31) in the combinatorial community tacitly means the present dualization L_L (possibly moving some of the constraints Ax = a into the definition of 𝒳). We will call it dualization L(inear).

5.2 Dualization C

Another possible dualization of (31) consists in taking 𝒳 = R^n and in dualizing all the constraints: the linear as well as the Boolean ones. This produces another Lagrangian, which now depends on two dual variables (u, v) ∈ R^m × R^n:

L_C(x, u, v) = b^T x + u^T (Ax − a) + ∑_{i=1}^n v_i (x_i² − x_i)
= (b + A^T u − v)^T x + x^T D(v) x − a^T u.


Computing the associated dual function θ_C(u, v) can be done with the tools of §4.1, or via elementary calculations: θ_C(u, v) = −a^T u + ∑_{i=1}^n θ_i(u, v_i), where

θ_i(u, v_i) := −(b + A^T u − v)_i² / (4 v_i) if v_i > 0,
θ_i(u, v_i) := 0 if v_i = 0 and (b + A^T u)_i = 0,
θ_i(u, v_i) := −∞ otherwise.   (33)

Theorem 5.2 The dualizations L and C produce the same duality gap. More precisely:

θ_L(u) = sup_{v∈R^n} θ_C(u, v).   (34)

Proof. Clearly, sup_v θ_C(u, v) = −a^T u + ∑_i sup_{v_i} θ_i(u, v_i). Now the maximization of each θ_i(u, ·) is an easy exercise illustrating §4.3: setting β := (b + A^T u)_i, we obtain 0 for β = 0, while for β ≠ 0 (which forces v_i > 0):

sup_{t∈R} θ_i(u, t) = sup_{t>0} θ_i(u, t) = ½ β − ½ |β| = min {0, β}.

Comparing with (32), this proves (34). Then the C-dual value is

sup_{u,v} θ_C(u, v) = sup_u ( sup_v θ_C(u, v) ) = sup_u θ_L(u),

which is just the L-dual value. □
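The one-dimensional maximization used in this proof can be replayed numerically (an illustration added in this transcription): for β ≠ 0, the maximizer of t ↦ −(β − t)²/(4t) over t > 0 is t = |β|, with optimal value min{0, β}.

```python
import numpy as np

def theta_i(beta, t):
    # theta_i of (33) as a function of t = v_i > 0, with beta = (b + A^T u)_i
    return -(beta - t) ** 2 / (4.0 * t)

ts = np.linspace(1e-3, 20.0, 200_001)          # grid over t > 0
for beta in (-3.0, -0.5, 0.7, 2.0):
    val = theta_i(beta, abs(beta))             # candidate maximizer t = |beta|
    assert abs(val - min(0.0, beta)) < 1e-12   # value is min{0, beta}
    assert theta_i(beta, ts).max() <= val + 1e-9   # no grid point does better
print("sup_t theta_i = min{0, beta} verified")
```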

Remark 5.3 The real reason underlying this result is worth meditating. In fact, θ_L(u) is the optimal value of the problem

inf L_L(x, u), x_i² = x_i, i = 1, …, n.

Suppose we solve it via duality: we form a Lagrangian, which is nothing else than L_C(x, u, v). For lack of compactness, the minimization of L_C(·, u, v) requires v ≥ 0; said otherwise, the constraints x_i² = x_i can equivalently be taken as inequalities x_i² ≤ x_i. The present dual approach ignores the nonconvexity of the original problem. Since, in addition, a Slater condition holds (when x_i describes R, the constraint value x_i² − x_i describes [−¼, +∞[, which contains 0 as an interior point), there is no duality gap: (34) holds. □


This dualization, which we will call C(omplete), is therefore of moderate interest as compared to L of §5.1: it increases the number of dual variables and it produces a more complicated dual function, all this without improving the duality gap. Note also that maximizing θ_L can somehow be seen as maximizing θ_C with priority given to v. By contrast, dualization B of §5.3 below gives priority to u – with a significant refinement, though.

Remark 5.4 Applying the lifting procedure of §4.4 to (31), we obtain

inf b^T d(X), X ∈ S_n,
A d(X) = a,
X − d(X) d(X)^T ⪰ 0.

Despite appearances, this is an ordinary LP in R^n. Actually, X plays a role only through its diagonal: we can set X_ij = x_i x_j for i ≠ j and replace d(X) by x ∈ R^n. The SDP constraint then reduces to the diagonal conditions x_i − x_i² ≥ 0, i = 1, …, n, which just mean 0 ≤ x ≤ 1. □

5.3 Dualization B

A third possibility for (31) is to put the linear constraints in 𝒳 and dualize the Boolean constraints. In other words, we do just the opposite of L: we take the Lagrangian

L_B(x, v) := (b − v)^T x + x^T D(v) x   (35)

and the dual function is then the optimal value of the problem

θ_B(v) = inf_{Ax=a} x^T D(v) x + (b − v)^T x.   (36)

We will call B(oolean) this dualization scheme. Computing θ_B is still possible with the tools of §4. However, before developing the calculations, we check that this complication is worth the effort.

Theorem 5.5 There holds sup θ_B ≥ sup θ_C = sup θ_L.

Proof. The equality is Theorem 5.2. The inequality is a general result: when more constraints are dualized, the duality gap can only increase (see for example [11]). □


Actually, a more careful comparison of dualizations B and C explains why B improves the duality gap:

Lemma 5.6 If v_i ≥ 0, i = 1, …, n, then θ_B(v) = sup_u θ_C(u, v). As a result, the C-dual value is given by

sup_{u,v} θ_C(u, v) = sup_{v∈R^n_+} θ_B(v).

Proof. When v ∈ R^n_+, the B-Lagrangian problem (36) is a standard linear-quadratic program, for which there is no duality gap:

θ_B(v) = sup_{u∈R^m} inf_{x∈R^n} [L_B(x, v) + u^T (Ax − a)] = sup_{u∈R^m} θ_C(u, v).

Now, θ_C(u, v) ≡ −∞ if v ∉ R^n_+ (see (33)): to compute the C-dual value, we may as well insert the constraint v ≥ 0:

sup_{u,v} θ_C(u, v) = sup {θ_C(u, v) : u ∈ R^m, v ≥ 0} = sup_{v≥0} θ_B(v). □

Thus, dualizations C and L consist in maximizing θ_B over R^n_+. By contrast, dualization B maximizes θ_B over a possibly larger set. In particular, θ_B assumes finite values on

V̄ := {v ∈ R^n : D(v) ≻ 0 on N(A)}.   (37)

Even more: for v ∈ V̄, (36) has a strictly convex objective function, and therefore a unique optimal solution x(v), which varies continuously with v. This has important consequences:

Proposition 5.7 With the above notation, assume:

(i) θ_C has some maximum point (ū, v̄) with v̄ ∈ V̄;

(ii) the C-duality gap is nonzero.

Then the B-duality gap is strictly smaller than the L-duality gap.


Proof. Use the continuity on V̄ of the unique solution x(v): for v in a neighborhood of v̄ (ensuring in particular v ∈ V̄), some extra constraint ‖x − x(v̄)‖ ≤ ε can be inserted in (36) without changing its solution x(v). Then the so-called filling property XII.2.3.1 of [9] holds: θ_B is differentiable in this neighborhood, with derivatives x_i²(v) − x_i(v), i = 1, …, n.

Now use Lemma 5.6: if dualization B does not improve the duality gap, then sup θ_B = θ_B(v̄). This implies ∇θ_B(v̄) = 0: x(v̄) is feasible in (31). Since x(v̄) minimizes L_C(·, ū, v̄), we have a C-primal-dual solution: dualization C – hence L – has no duality gap. □

Assumption (i) means that the domain of θ_B contains a useful point, not lying in the v-part of the domain of θ_C. This assumption is guaranteed to hold for example when V̄ ⊃ R^n_+. Mutatis mutandis, the efficiency of dualization B will increase when V̄ increases, i.e. when N(A) decreases, i.e. when there are more constraints.

We now turn to the actual computation of θ_B. The case a ∉ R(A) (i.e. Ax = a has no solution in R^n) is uninteresting: then val(31) is +∞ and θ_B ≡ +∞; there is no "duality gap". We therefore assume that a = Ax̄ for some x̄ ∈ R^n. Besides, calling m₀ the rank of A, let Z be a matrix whose n − m₀ columns form a basis of N(A). Thus:

Ax = a with x ∈ R^n ⟺ x = x̄ + Zy with y ∈ R^{n−m₀},

so that (31) is equivalent to the problem in y

inf b^T Zy + b^T x̄,
y^T Z_i^T Z_i y + (2x̄_i − 1) Z_i y + x̄_i² − x̄_i = 0, i = 1, …, n.   (38)

Here Z_i ∈ R^{n−m₀} denotes the i-th row of Z, viewed as a row vector.

Theorem 5.8 The dual value sup_v θ_B(v) associated with (31), (35) is the value of the SDP problem

sup r,
[ c(v) − r    ½ b(v)^T
  ½ b(v)      Q(v)     ] ⪰ 0,


where
c(v) := x̄^T D(v) x̄ + (b − v)^T x̄,
b(v) := Z^T [2 D(v) x̄ + b − v],
Q(v) := Z^T D(v) Z.

Proof. Plug the change of variable x = x̄ + Zy into (36) and use the above notation c(v), b(v) and Q(v):

θ_B(v) = inf_{y∈R^{n−m₀}} y^T Q(v) y + b(v)^T y + c(v).

Not unexpectedly, we recognize in the infimand the Lagrangian associated with (38): we are in the situation of §4.1 and the above SDP problem is just the corresponding instance of (24). □

Naturally, a convexified form of (31) will be obtained as before by bidualization. However, the computations turn out to be lighter if a simplified but equivalent form of dualization B is used; we explain it now.

5.4 A simplified form of B

As suggested in [16], another dualization is possible, which avoids the heavy algebra implied by Theorem 5.8, without worsening the B-duality gap. The idea is to solve (36) by the so-called augmented Lagrangian technique: add a penalty term ρ ‖Ax − a‖² (which is 0 for x feasible!) to the objective function L_B and use duality. In other words, taking ρ ≥ 0 and setting

L^ρ(x, u, v) := L_B(x, v) + ρ ‖Ax − a‖² + u^T (Ax − a),   (39)

replace (36) by the problem

sup_{u∈R^m} θ^ρ(u, v), where θ^ρ(u, v) := inf_{x∈R^n} L^ρ(x, u, v).

The following result demonstrates the validity of the idea (cf. Lemma 5.6).

Lemma 5.9 If v ∈ V̄ of (37), then θ_B(v) = sup_u θ^ρ(u, v) for ρ large enough. It follows that

θ_B(v) = sup_{(u,ρ)∈R^{m+1}} θ^ρ(u, v) if v ∈ V̄.


Proof. Clearly, the additional term ρ ‖Ax − a‖² in the infimand of (36) does not change the optimal value θ_B(v). Now invoke Debreu's Lemma 3.10: if v ∈ V̄ then, for ρ large enough, the penalized (36) has a positive definite Hessian D(v) + ρ A^T A. This problem becomes an ordinary linear-quadratic program, to which we can apply duality theory:

θ_B(v) = sup_{u∈R^m} inf_{x∈R^n} [L_B(x, v) + u^T (Ax − a) + ρ ‖Ax − a‖²],

and our first claim is proved. Then observe that the function ρ ↦ L^ρ(x, u, v) is nondecreasing. This property is transmitted to the infimum in x (θ^ρ(u, v) increases with ρ), and then to the supremum in u; our second claim is proved. □

The equivalence between the present dualization and B will be proved if the restriction v ∈ V̄ can be removed in the above lemma. Such is indeed the case if, at the same time, we maximize with respect to v – but this is precisely what we want.

Theorem 5.10 There holds

sup_{v∈R^n} θ_B(v) = sup {θ^ρ(u, v) : u ∈ R^m, v ∈ R^n, ρ ≥ 0}.

Proof. The key argument is the fact that the infimum of a convex function f is not changed if it is restricted to the relative interior of its domain:

[−f*(0) =] inf_{v∈R^n} f(v) = inf_{v∈ri dom f} f(v).   (40)

This is because, if a function f is changed while leaving its closed convex hull f** intact, then the conjugate f* is not changed; see [9, Cor. X.1.3.6] for example.

All we have to do is therefore show that ri dom θ_B ⊂ V̄: in view of Lemma 5.9, the B-dual optimal value will then be

sup_{v∈R^n} θ_B(v) = sup_{v∈V̄} θ_B(v) = sup_{v∈V̄} ( sup_{u,ρ} θ^ρ(u, v) ).

Clearly from Lemma 3.6,

V̄ ⊂ dom θ_B ⊂ Ṽ, where Ṽ := {v ∈ R^n : D(v) ⪰ 0 on N(A)}.


Incidentally, dom θ_B is full-dimensional (it contains R^n_{++}); and, with the notation of Theorem 5.8, we have the equivalent definitions

V̄ = {v ∈ R^n : Z^T D(v) Z ≻ 0}, Ṽ = {v ∈ R^n : Z^T D(v) Z ⪰ 0}.

Since the eigenvalues of Z^T D(v) Z are continuous in v, it is clear that V̄ and Ṽ are the interior and closure of each other. The required property follows. □

The dual and bidual problems are rather simple to write: the approach amounts to writing (31) as

inf b^T x, Ax = a, x_i² − x_i = 0, i = 1, …, n,
‖Ax − a‖² = 0.

Assigning a multiplier ρ to the redundant constraint and forming the C-Lagrangian, we obtain nothing else than L^ρ(x, u, v) of (39). The resulting dual problem is then given by §4.1: sup_v θ_B(v) = sup_{u,v,ρ} θ^ρ(u, v) is the optimal value of

sup r,
[ ρ‖a‖² − a^T u − r                 ½ (b + A^T(u − 2ρa) − v)^T
  ½ (b + A^T(u − 2ρa) − v)          D(v) + ρ A^T A              ] ⪰ 0.

As for the bidual, it is given by §4.3 or the recipe of §4.4:

inf b^T d(X), X ∈ S_n,
A d(X) = a,
X − d(X) d(X)^T ⪰ 0,
⟨A^T A, X⟩ = ‖a‖².   (41)

Remark 5.11 As already mentioned, the present §5.4 has its roots in [16]. Actually, it was proposed there to avoid dualization B by mere penalty, replacing (36) by

lim_{ρ→+∞} inf_{x∈R^n} L_B(x, v) + ρ ‖Ax − a‖².

The reason why we prefer the augmented Lagrangian is that there are serious technical difficulties in proving that simple penalty does not worsen the B-duality gap. □


Let us summarize our development so far. Dualization of (31) can be done in two ways:

– One gives birth to the classical LP relaxation

min b^T x, Ax = a, x_i ∈ [0, 1].

– Viewed with the SDP glasses of dualization C, this can be written

min b^T d(X), A d(X) = a, X ⪰ d(X) d(X)^T.

Indeed, the matrix X in this last formulation plays a role only through its diagonal; the extra-diagonal terms can be eliminated, which reproduces the LP formulation; see Remark 5.4.

– The second relaxation starts from the previous formulation and appends to it the extra constraint ⟨A^T A, X⟩ = ‖a‖². This gives a chance of improving the gap, in a way described by Proposition 5.7.

Appending redundant constraints is actually a classical idea in nonconvex duality. This is the basis of [13] (append the constraints x_i(Ax − a) = 0 and (1 − x_i)(Ax − a) = 0). Here, we can also observe that x_i x_j ∈ [0, 1], hence the constraints X_ij ∈ [0, 1] can be appended to the B-relaxation (41).

One more comment: needless to say, dualization B accepts quadratic objectives and/or constraints in (31). This was already the case for max-cut and max-stable. These two problems had no linear constraints, hence there was no ambiguity: only dualization B could be applied.

5.5 Inequality constraints; the knapsack problem

Suppose that (31) contains inequality constraints, say

inf b^T x, Ax ≤ a, x_i ∈ {0, 1}, i = 1, …, n,   (42)

or even a mixture of equalities and inequalities.

Dualization L is not really affected by this new situation. The only change is that the dual problem now has positivity constraints and becomes sup_{u≥0} θ_L(u); but the duality gap still corresponds to replacing {0, 1} by [0, 1].

Dualization C is also unaffected. The u-part of the dual variables is again constrained to R^m_+, and we again obtain a scheme equivalent to L:


Theorem 5.12 For (42), dualizations L and C produce the same duality gap.

Proof. The statement and the proof of Theorem 5.2 remain valid under the constraint u ≥ 0. □

By contrast, dualization B breaks down: the dual function becomes

inf_{Ax≤a} x^T D(v) x + (b − v)^T x.

Since D(v) has no reason to be positive semidefinite, the above problem is nonconvex and NP-hard.

The conclusion is: dualization B can treat equality constraints only; as for inequalities, they have to be treated by a C strategy. However, the case m = 1 is somewhat special. Indeed, consider the knapsack problem

min b^T x, ℓ^T x ≤ a, x_i² − x_i = 0, i = 1, …, n.   (43)

The three possible dualizations are here, for u ≥ 0 and v ∈ R^n:

L_L(x, u) = (b + uℓ)^T x − au for x ∈ {0, 1}^n,
L_C(x, u, v) = x^T D(v) x + (b + uℓ − v)^T x − au for x ∈ R^n,
L_B(x, v) = x^T D(v) x + (b − v)^T x for ℓ^T x ≤ a.

With its single inequality constraint, B becomes a constructive dualization:

Proposition 5.13 Suppose ℓ ≠ 0. Then the B-dual function θ_B(v) of (43) is −∞ if v ∉ R^n_+.

Proof. If some v_i < 0, call e_i the corresponding canonical basis vector; let x̄ satisfy ℓ^T x̄ ≤ a (such an x̄ exists since ℓ ≠ 0). Depending on the sign of ℓ_i, x̄ + t e_i or x̄ − t e_i remains feasible for t → +∞, while L_B(x̄ ± t e_i, v) → −∞. □

Thus, the computation of θ_B(v) has to be done only for v ≥ 0, and then it reduces to an easy linear-quadratic problem. A consequence of this property is that the versatility of Lagrangian dualization disappears:

Theorem 5.14 For (43), dualizations B and C produce the same duality gap. More precisely:

θ_B(v) = sup_{u≥0} θ_C(u, v).

As a result, the three dualizations L, C and B produce the same duality gap.


Proof. In view of Proposition 5.13, we may assume v ≥ 0 (otherwise we have the trivial relation −∞ = sup_{u≥0} −∞). Then we can apply standard duality theory to the linear-quadratic problem inf_{ℓ^T x ≤ a} L_B(x, v):

θ_B(v) = sup_{u≥0} inf_{x∈R^n} [L_B(x, v) + u(ℓ^T x − a)] = sup_{u≥0} inf_{x∈R^n} L_C(x, u, v),

and our first claim is proved. The second claim then comes with Theorem 5.12. □

We mention that the same results hold if (43) also contains linear equality constraints Ax = a: just reduce the space to N(A).

We conclude this section with an illustrative example. Mutatis mutandis, the B-duality gap has chances to be smaller when there are more (equality) constraints. From this point of view, a problem with only one constraint is the least favourable. Indeed, consider a problem in R²:

min c^T x, x₂ = 2x₁, x_i ∈ {0, 1}, i = 1, 2.

Eliminating x₂, the feasible x₁'s are those satisfying x₁² = x₁ and 2x₁² = x₁. The B-lifting procedure consists in

– introducing the extra variable X ∈ S₁ = R,

– "linearizing" the quadratic constraints to X = x₁ and 2X = x₁, and

– introducing the conic constraint X ≥ x₁².

The only feasible point is clearly X = x₁ = 0 (= x₂): the relaxed feasible domain is not larger than the original one, and the B-gap is 0 (in fact, it can be proved that the B-gap is 0 for n ≤ 2).

By contrast, the feasible domain for the C = L = LP relaxation is the segment with endpoints 0 and (1/2, 1), which may introduce a duality gap, depending on the position of the vector c in R².

6 Conclusions

Relaxation (i.e. enlargement) of the feasible domain is a central tool in combinatorial optimization. SDP relaxation, which embeds the primal space R^n into


the space of symmetric matrices S(Rn), appears to be a possible technique; ithas demonstrated its e�ciency on some applications.

The motivation for this paper was first to show that this technique actually has its roots in a general tool: Lagrangian duality, which itself should be understood in a broader sense than usual in combinatorial optimization. Among other things, the way is thus open to SDP relaxation as a general tool, applicable to a large class of combinatorial problems.

Besides, we have also shown that SDP relaxation can often be more accurate than the classical Lagrangian relaxation. Our analysis suggests the way to construct an SDP relaxation so as to keep duality gaps as small as possible. For example, we are currently studying its application to problems such as graph coloring or quadratic assignment.

Finally, it should be mentioned that Lagrangian duality corresponds to a convexification in the primal space itself, instead of squaring the number of variables, as is done in SDP relaxation. An interesting question is therefore to study this narrower convexification: comparing the two should give fruitful interpretations for either one. This is currently under study.

A limitation of our approach is that it is clumsy in a situation particularly favourable to classical Lagrangian relaxation: when the original problem (31) contains two groups of constraints, say

\[
\min b^\top x \;;\qquad A_1 x = a_1\,,\quad A_2 x = a_2\,,\quad x \in \{0,1\}^n\,.
\]

If the constraints indexed by 1 [resp. 2] are "simple" [resp. "complicating"], a standard technique is to dualize the complicating constraints, forming the dual function

\[
\min_x \; (b + A_2^\top u)^\top x - u^\top a_2 \;;\qquad A_1 x = a_1\,,\quad x \in \{0,1\}^n\,.
\]

This may very well result in a better duality gap than the B-dualization. We believe that SDP relaxation might be better suited to problems where there are two groups of variables, instead of constraints; but more work is necessary to invent appropriate schemes.
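To make the classical scheme concrete, here is a small sketch we add for illustration (the instance data, step sizes, and helper names are our own, not from the paper): the inner minimization over the "simple" set is done by brute-force enumeration, and the multiplier u of the complicating constraints is updated by a standard subgradient step.

```python
import itertools
import numpy as np

# Hypothetical toy instance: n = 4 binary variables,
# "simple" constraint A1 x = a1 kept inside the inner problem,
# "complicating" constraint A2 x = a2 dualized with multiplier u.
b  = np.array([1.0, 2.0, -1.0, 3.0])
A1 = np.array([[1.0, 1.0, 0.0, 0.0]]); a1 = np.array([1.0])  # x1 + x2 = 1
A2 = np.array([[0.0, 1.0, 1.0, 1.0]]); a2 = np.array([2.0])  # x2 + x3 + x4 = 2

# Points of {0,1}^4 satisfying the simple constraint (enumeration stands in
# for the "easy" inner solver; in practice this subproblem has structure).
simple_feasible = [np.array(x, dtype=float)
                   for x in itertools.product([0, 1], repeat=4)
                   if np.allclose(A1 @ np.array(x, dtype=float), a1)]

def dual_function(u):
    """theta(u) = min over the simple set of (b + A2^T u)^T x - u^T a2."""
    vals = [(b + A2.T @ u) @ x - u @ a2 for x in simple_feasible]
    k = int(np.argmin(vals))
    return vals[k], simple_feasible[k]

# Subgradient ascent on u: a subgradient of theta at u is A2 x* - a2.
u = np.zeros(1)
for k in range(200):
    theta, x_star = dual_function(u)
    u = u + (1.0 / (k + 1)) * (A2 @ x_star - a2)

primal_opt = min(b @ x for x in simple_feasible if np.allclose(A2 @ x, a2))
theta, _ = dual_function(u)
print(theta, primal_opt)  # weak duality: theta <= primal_opt (they coincide here)
```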

