Convex Problems
Daniel P. Palomar
Hong Kong University of Science and Technology (HKUST)
ELEC5470 - Convex Optimization
Fall 2017-18, HKUST, Hong Kong
Outline of Lecture
• Optimization problems
• Convex optimization (def., optimality conditions, reformulations)
• Quasi-convex optimization
• Classes of convex problems: LP, QP, SOCP, SDP.
• Multicriterion optimization (Pareto optimality)
(Acknowledgement to Stephen Boyd for material for this lecture.)
Daniel P. Palomar 1
Optimization Problem in Standard Form
      minimize_x    f0(x)
      subject to    fi(x) ≤ 0,   i = 1, …, m
                    hi(x) = 0,   i = 1, …, p
where
• x ∈ Rn is the optimization variable
• f0 : Rn → R is the objective function
• fi : Rn → R, i = 1, …, m, are the inequality constraint functions
• hi : Rn → R, i = 1, …, p, are the equality constraint functions.
• Feasibility:
– a point x ∈ dom f0 is feasible if it satisfies all the constraints and
infeasible otherwise
– a problem is feasible if it has at least one feasible point and
infeasible otherwise.
• Optimal value:
      p⋆ = inf {f0(x) | fi(x) ≤ 0, i = 1, …, m, hi(x) = 0, i = 1, …, p}
– p⋆ = ∞ if the problem is infeasible (no x satisfies the constraints)
– p⋆ = −∞ if the problem is unbounded below.
• Optimal solution: x⋆ such that f0(x⋆) = p⋆ (and x⋆ feasible).
Global and Local Optimality
• A feasible x is optimal if f0(x) = p⋆; Xopt is the set of optimal points.
• A feasible x is locally optimal if it is optimal within a ball, i.e., there is an R > 0 such that x is optimal for
      minimize_z    f0(z)
      subject to    fi(z) ≤ 0, i = 1, …, m,   hi(z) = 0, i = 1, …, p
                    ‖z − x‖2 ≤ R
Examples:
• f0(x) = 1/x, dom f0 = R++: p⋆ = 0, no optimal point
• f0(x) = x³ − 3x: p⋆ = −∞, local optimum at x = 1.
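The second example is easy to check numerically; a minimal sketch in pure Python (the probe points ±ε and −100 are arbitrary choices for illustration):

```python
# f0(x) = x^3 - 3x: stationary points at x = ±1 (f0'(x) = 3x^2 - 3),
# with a local (not global) minimum at x = 1.
def f0(x):
    return x**3 - 3*x

eps = 1e-3
local_value = f0(1.0)                        # value at the local optimum: -2
neighbours = (f0(1.0 - eps), f0(1.0 + eps))  # both larger than f0(1)
far_left = f0(-100.0)                        # f0 is unbounded below: p* = -inf
```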
Implicit Constraints
• The standard form optimization problem has an implicit constraint:
      x ∈ D = (dom f0 ∩ ⋯ ∩ dom fm) ∩ (dom h1 ∩ ⋯ ∩ dom hp)
• D is the domain of the problem
• The constraints fi (x) ≤ 0, hi (x) = 0 are the explicit constraints
• A problem is unconstrained if it has no explicit constraints
• Example:
      minimize_x    −log(b − aᵀx)
  is an unconstrained problem with the implicit constraint b > aᵀx.
Feasibility Problem
• Sometimes, we don’t really want to minimize any objective, just to
find a feasible point:
      find          x
      subject to    fi(x) ≤ 0,   i = 1, …, m
                    hi(x) = 0,   i = 1, …, p
• This feasibility problem can be considered as a special case of a general problem:
      minimize_x    0
      subject to    fi(x) ≤ 0,   i = 1, …, m
                    hi(x) = 0,   i = 1, …, p
  where p⋆ = 0 if the constraints are feasible and p⋆ = ∞ otherwise.
Convex Optimization Problem
• Convex optimization problem in standard form:
      minimize_x    f0(x)
      subject to    fi(x) ≤ 0,   i = 1, …, m
                    Ax = b
  where f0, f1, …, fm are convex and the equality constraints are affine.
• Local and global optima: any locally optimal point of a convex
problem is globally optimal.
• Most problems are not convex as initially formulated.
• Reformulating a problem in convex form is an art; there is no systematic way.
Example
• The following problem is nonconvex as stated (why?):
      minimize_x    x1² + x2²
      subject to    x1/(1 + x2²) ≤ 0
                    (x1 + x2)² = 0
• The objective is convex.
• The equality constraint function is not affine; however, we can rewrite it as x1 = −x2, which is a linear equality constraint.
• The inequality constraint function is not convex; however, we can rewrite it as x1 ≤ 0, which again is linear.
• We can thus rewrite the problem as
      minimize_x    x1² + x2²
      subject to    x1 ≤ 0
                    x1 = −x2
Global and Local Optimality
Any locally optimal point of a convex problem is globally optimal.
Proof: Suppose x is locally optimal (within a ball of radius R) and some feasible y satisfies f0(y) < f0(x). We will show this cannot be.
Take the segment from x to y: z = θy + (1 − θ)x. Since f0(y) < f0(x), the chord value strictly decreases along the segment:
      θf0(y) + (1 − θ)f0(x) < f0(x),   θ ∈ (0, 1].
Using now the convexity of the function, we can write
      f0(θy + (1 − θ)x) ≤ θf0(y) + (1 − θ)f0(x) < f0(x),   θ ∈ (0, 1].
Finally, just choose θ sufficiently small such that the point z (which is feasible, since the feasible set is convex) lies in the ball of local optimality of x, arriving at a contradiction.
Optimality Criterion for Differentiable f0
Minimum principle: a feasible point x is optimal if and only if
      ∇f0(x)ᵀ(y − x) ≥ 0   for all feasible y.
• Unconstrained problem: x is optimal iff
      x ∈ dom f0,   ∇f0(x) = 0
• Equality constrained problem: min_x f0(x) s.t. Ax = b.
  x is optimal iff there exists ν such that
      x ∈ dom f0,   Ax = b,   ∇f0(x) + Aᵀν = 0
• Minimization over the nonnegative orthant: min_x f0(x) s.t. x ≥ 0.
  x is optimal iff
      x ∈ dom f0,   x ≥ 0,   ∇if0(x) ≥ 0 if xi = 0,   ∇if0(x) = 0 if xi > 0
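The componentwise conditions for the nonnegative orthant are easy to verify numerically. A sketch with NumPy, using the illustrative objective f0(x) = (1/2)‖x − a‖² (not from the slides), whose minimizer over x ≥ 0 is x⋆ = max(a, 0):

```python
import numpy as np

# Illustrative objective f0(x) = 0.5*||x - a||^2 with gradient x - a;
# over the nonnegative orthant its minimizer is x_star = max(a, 0).
a = np.array([1.5, -2.0, 0.7, -0.1])
x_star = np.maximum(a, 0.0)
grad = x_star - a              # gradient of f0 at x_star

# Check the componentwise optimality conditions:
# grad_i >= 0 where x_i = 0, and grad_i == 0 where x_i > 0.
cond_zero = bool(np.all(grad[x_star == 0] >= 0))
cond_pos = bool(np.all(np.isclose(grad[x_star > 0], 0.0)))
```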
Equivalent Reformulations
• Eliminating/introducing equality constraints:
      minimize_x    f0(x)
      subject to    fi(x) ≤ 0,   i = 1, …, m
                    Ax = b
  is equivalent to
      minimize_z    f0(Fz + x0)
      subject to    fi(Fz + x0) ≤ 0,   i = 1, …, m
  where F and x0 are such that Ax = b ⟺ x = Fz + x0 for some z.
• Introducing slack variables for linear inequalities:
      minimize_x    f0(x)
      subject to    aiᵀx ≤ bi,   i = 1, …, m
  is equivalent to
      minimize_{x,s}    f0(x)
      subject to        aiᵀx + si = bi,   i = 1, …, m
                        si ≥ 0
• Epigraph form: a standard form convex problem is equivalent to
      minimize_{x,t}    t
      subject to        f0(x) − t ≤ 0
                        fi(x) ≤ 0,   i = 1, …, m
                        Ax = b
• Minimizing over some variables:
      minimize_{x,y}    f0(x, y)
      subject to        fi(x) ≤ 0,   i = 1, …, m
  is equivalent to
      minimize_x    f̃0(x)
      subject to    fi(x) ≤ 0,   i = 1, …, m
  where f̃0(x) = inf_y f0(x, y).
Quasiconvex Optimization
      minimize_x    f0(x)
      subject to    fi(x) ≤ 0,   i = 1, …, m
                    Ax = b
where f0 : Rn → R is quasiconvex and f1, …, fm are convex.
• Observe that it can have locally optimal points that are not (globally) optimal.
• Convex representation of the sublevel sets of a quasiconvex function f0: there exists a family of convex functions φt(x), indexed by t, such that
      f0(x) ≤ t ⟺ φt(x) ≤ 0.
• Example:
      f0(x) = p(x)/q(x)
  with p convex, q concave, and p(x) ≥ 0, q(x) > 0 on dom f0. We can choose
      φt(x) = p(x) − t·q(x)
  – for t ≥ 0, φt(x) is convex in x
  – p(x)/q(x) ≤ t if and only if φt(x) ≤ 0.
Solving a quasiconvex problem via convex feasibility problems: the idea is to solve the epigraph form of the problem with a sandwich technique in t:
• for fixed t, the epigraph form of the original problem reduces to a convex feasibility problem
      φt(x) ≤ 0,   fi(x) ≤ 0 ∀i,   Ax = b
• if t is too small, the feasibility problem is infeasible
• if t is too large, the feasibility problem is feasible
• start with upper and lower bounds on t (termed u and l, resp.) and use a sandwich technique (bisection method): at each iteration use t = (l + u)/2 and update the bounds according to the feasibility/infeasibility of the problem.
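The bisection loop can be sketched in pure Python. The instance below is made up: minimize f0(x) = (x² + 1)/(x + 2) over x ∈ [0, 4], with p(x) = x² + 1 convex and q(x) = x + 2 affine (hence concave) and positive; the convex feasibility check on φt is approximated by a grid search, which suffices in one dimension:

```python
def p(x): return x**2 + 1.0        # convex, p >= 0
def q(x): return x + 2.0           # affine (hence concave), q > 0 on [0, 4]

grid = [i * 1e-4 for i in range(40001)]   # discretized feasible interval [0, 4]

def feasible(t):
    # Is phi_t(x) = p(x) - t*q(x) <= 0 for some feasible x?
    return any(p(x) - t * q(x) <= 0 for x in grid)

# Initial bounds: f0 >= 0 everywhere, and any achieved value is an upper bound.
l, u = 0.0, max(p(x) / q(x) for x in (0.0, 4.0))
for _ in range(50):
    t = 0.5 * (l + u)
    if feasible(t):
        u = t          # t is achievable: tighten the upper bound
    else:
        l = t          # t is too small: raise the lower bound
p_star = u             # approximate optimal value (exact value: 2*sqrt(5) - 4)
```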
Classes of Convex Problems
Linear Programming (LP)
      minimize_x    cᵀx + d
      subject to    Gx ≤ h
                    Ax = b
• Convex problem: affine objective and constraint functions.
• The feasible set is a polyhedron.
`1- and `∞- Norm Problems as LPs
• ℓ∞-norm minimization:
      minimize_x    ‖x‖∞
      subject to    Gx ≤ h
                    Ax = b
  is equivalent to the LP
      minimize_{t,x}    t
      subject to        −t1 ≤ x ≤ t1
                        Gx ≤ h
                        Ax = b.
• ℓ1-norm minimization:
      minimize_x    ‖x‖1
      subject to    Gx ≤ h
                    Ax = b
  is equivalent to the LP
      minimize_{t,x}    ∑i ti
      subject to        −t ≤ x ≤ t
                        Gx ≤ h
                        Ax = b.
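As a sketch, the reformulated LP can be handed to any LP solver. Here scipy.optimize.linprog is used on a tiny made-up instance (minimize ‖x‖1 subject to x1 + 2x2 = 2, no inequality constraints); scipy is an assumption, not part of the slides:

```python
import numpy as np
from scipy.optimize import linprog

# Variables z = (x1, x2, t1, t2); minimize t1 + t2 subject to -t <= x <= t
# and the equality constraint x1 + 2*x2 = 2.
c = np.array([0.0, 0.0, 1.0, 1.0])
A_ub = np.array([
    [ 1.0,  0.0, -1.0,  0.0],   #  x1 - t1 <= 0
    [-1.0,  0.0, -1.0,  0.0],   # -x1 - t1 <= 0
    [ 0.0,  1.0,  0.0, -1.0],   #  x2 - t2 <= 0
    [ 0.0, -1.0,  0.0, -1.0],   # -x2 - t2 <= 0
])
b_ub = np.zeros(4)
A_eq = np.array([[1.0, 2.0, 0.0, 0.0]])
b_eq = np.array([2.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * 4)
x = res.x[:2]        # expected sparse solution (0, 1) with ||x||_1 = 1
```

Note the characteristic sparsity of the ℓ1 solution: all the weight lands on the variable with the largest coefficient in the equality constraint.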
Example: Chebyshev Center of a Polyhedron
• The Chebyshev center of a polyhedron P = {x | aiᵀx ≤ bi, i = 1, …, m} is the center of the largest inscribed ball B = {xc + u | ‖u‖ ≤ r}.
• Let's solve the problem:
      maximize_{r,xc}    r
      subject to         xc + u ∈ P for all ‖u‖ ≤ r
• Observe that aiᵀx ≤ bi for all x ∈ B if and only if
      sup_u {aiᵀ(xc + u) | ‖u‖ ≤ r} ≤ bi.
• Using the Cauchy–Schwarz inequality, the supremum condition can be rewritten as
      aiᵀxc + r‖ai‖2 ≤ bi.
• Hence, the Chebyshev center can be obtained by solving
      maximize_{r,xc}    r
      subject to         aiᵀxc + r‖ai‖2 ≤ bi,   i = 1, …, m
  which is an LP.
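A minimal sketch of this LP with scipy.optimize.linprog (an assumption, not part of the slides), for the unit square {x | 0 ≤ x ≤ 1} written as aiᵀx ≤ bi; its Chebyshev center is (0.5, 0.5) with radius 0.5:

```python
import numpy as np
from scipy.optimize import linprog

# Unit square as a_i^T x <= b_i
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

# Variables (xc1, xc2, r): maximize r  <=>  minimize -r,
# subject to a_i^T xc + r*||a_i||_2 <= b_i.
norms = np.linalg.norm(A, axis=1)
c = np.array([0.0, 0.0, -1.0])
A_ub = np.hstack([A, norms[:, None]])
res = linprog(c, A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
xc, r = res.x[:2], res.x[2]
```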
Linear-Fractional Programming
      minimize_x    (cᵀx + d)/(eᵀx + f)
      subject to    Gx ≤ h
                    Ax = b
with dom f0 = {x | eᵀx + f > 0}.
• It is a quasiconvex optimization problem (solved by bisection).
• Interestingly, the following LP is equivalent:
      minimize_{y,z}    cᵀy + dz
      subject to        Gy ≤ hz
                        Ay = bz
                        eᵀy + fz = 1
                        z ≥ 0
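A sketch of this transformation (often called the Charnes-Cooper transform) on a made-up one-dimensional instance, minimize (x + 1)/(x + 3) over 0 ≤ x ≤ 2, solved with scipy.optimize.linprog (both the instance and the use of scipy are assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Instance: c = 1, d = 1, e = 1, f = 3, G = [[1], [-1]], h = (2, 0).
# Transformed LP in variables (y, z):
#   minimize  c*y + d*z
#   s.t.      G*y <= h*z,  e*y + f*z = 1,  z >= 0
cvec = np.array([1.0, 1.0])
A_ub = np.array([[ 1.0, -2.0],     #  y - 2z <= 0   (i.e. x <= 2)
                 [-1.0,  0.0]])    # -y      <= 0   (i.e. x >= 0)
b_ub = np.zeros(2)
A_eq = np.array([[1.0, 3.0]])      # e*y + f*z = 1
b_eq = np.array([1.0])
res = linprog(cvec, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None), (0, None)])
y, z = res.x
x = y / z                          # recover the original variable
```

Since (x + 1)/(x + 3) is increasing on the feasible interval, the minimum is attained at x = 0 with value 1/3, which the LP reproduces.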
Quadratic Programming (QP)
      minimize_x    (1/2)xᵀPx + qᵀx + r
      subject to    Gx ≤ h
                    Ax = b
• Convex problem (assuming P ∈ Sn, P ⪰ 0): convex quadratic objective and affine constraint functions.
• Minimization of a convex quadratic function over a polyhedron.
Quadratically Constrained QP (QCQP)
      minimize_x    (1/2)xᵀP0x + q0ᵀx + r0
      subject to    (1/2)xᵀPix + qiᵀx + ri ≤ 0,   i = 1, …, m
                    Ax = b
• Convex problem (assuming Pi ∈ Sn, Pi ⪰ 0): convex quadratic objective and constraint functions.
Second-Order Cone Programming (SOCP)
      minimize_x    fᵀx
      subject to    ‖Aix + bi‖ ≤ ciᵀx + di,   i = 1, …, m
                    Fx = g
• Convex problem: linear objective and second-order cone constraints.
• When each Ai is a row vector, the norm reduces to an absolute value and each constraint splits into two linear inequalities, so the problem reduces to an LP.
• For ci = 0, it reduces to a QCQP.
• It is therefore more general than QCQP and LP.
Robust LP as an SOCP
• Sometimes, we don’t know exactly the parameters of an optimization
problem.
• Consider the robust LP:
      minimize_x    cᵀx
      subject to    aiᵀx ≤ bi  ∀ai ∈ Ei,   i = 1, …, m
  where Ei = {ai + Piu | ‖u‖ ≤ 1}.
• It can be rewritten as the SOCP:
      minimize_x    cᵀx
      subject to    aiᵀx + ‖Piᵀx‖2 ≤ bi,   i = 1, …, m.
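The rewriting rests on the identity sup{(a + Pu)ᵀx | ‖u‖ ≤ 1} = aᵀx + ‖Pᵀx‖2, attained at u⋆ = Pᵀx/‖Pᵀx‖2 (Cauchy–Schwarz). A quick numeric check with NumPy on made-up random data:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(4)
P = rng.standard_normal((4, 3))
x = rng.standard_normal(4)

# Worst case of (a + P u)^T x over ||u|| <= 1 equals a^T x + ||P^T x||_2,
# attained at u* = P^T x / ||P^T x||_2.
closed_form = a @ x + np.linalg.norm(P.T @ x)
u_star = P.T @ x / np.linalg.norm(P.T @ x)
attained = (a + P @ u_star) @ x

# Random u in the unit ball never exceeds the closed form.
sampled = []
for _ in range(1000):
    u = rng.standard_normal(3)
    u /= max(1.0, np.linalg.norm(u))   # project into the unit ball
    sampled.append((a + P @ u) @ x)
sampled_max = max(sampled)
```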
Generalized Inequality Constraints
• Convex problem with generalized inequality constraints:
      minimize_x    f0(x)
      subject to    fi(x) ⪯_{Ki} 0,   i = 1, …, m
                    Ax = b
  where f0 is convex and each fi is Ki-convex w.r.t. the proper cone Ki.
• It has the same properties as a standard convex problem.
• Conic form problem: special case with affine objective and constraints:
      minimize_x    cᵀx
      subject to    Fx + g ⪯_K 0
                    Ax = b
Semidefinite Programming (SDP)
      minimize_x    cᵀx
      subject to    x1F1 + x2F2 + ⋯ + xnFn ⪯ G
                    Ax = b
• The inequality constraint is called a linear matrix inequality (LMI).
• Convex problem: linear objective and LMI constraints.
• Observe that multiple LMI constraints can always be written as a single one.
• LP and its equivalent SDP:
      minimize_x    cᵀx
      subject to    Ax ≤ b
  is equivalent to
      minimize_x    cᵀx
      subject to    diag(Ax − b) ⪯ 0
• SOCP and its equivalent SDP:
      minimize_x    fᵀx
      subject to    ‖Aix + bi‖ ≤ ciᵀx + di,   i = 1, …, m
  is equivalent to
      minimize_x    fᵀx
      subject to    [ (ciᵀx + di)I    Aix + bi  ]
                    [ (Aix + bi)ᵀ     ciᵀx + di ]  ⪰ 0,   i = 1, …, m
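The block matrix above is the Schur-complement form of the second-order cone constraint, so positive semidefiniteness of the block holds exactly when ‖Aix + bi‖ ≤ ciᵀx + di. A numeric check with NumPy on made-up random data (samples too close to the boundary are skipped to avoid floating-point ambiguity):

```python
import numpy as np

def lmi_block(A, b, c, d, x):
    # Build [[ (c^T x + d) I , A x + b ], [ (A x + b)^T , c^T x + d ]]
    s = c @ x + d
    v = A @ x + b
    top = np.hstack([s * np.eye(v.size), v[:, None]])
    bottom = np.hstack([v[None, :], np.array([[s]])])
    return np.vstack([top, bottom])

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
b = rng.standard_normal(3)
c = rng.standard_normal(2)

mismatches, tested = 0, 0
for _ in range(200):
    x = rng.standard_normal(2)
    d = rng.standard_normal()
    margin = (c @ x + d) - np.linalg.norm(A @ x + b)
    if abs(margin) < 1e-6:
        continue                      # skip numerically ambiguous boundary cases
    soc_ok = margin > 0
    lmi_ok = np.linalg.eigvalsh(lmi_block(A, b, c, d, x)).min() > -1e-9
    tested += 1
    mismatches += (soc_ok != lmi_ok)
```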
• Eigenvalue minimization:
      minimize_x    λmax(A(x))
  where A(x) = A0 + x1A1 + ⋯ + xnAn, is equivalent to the SDP
      minimize_{x,t}    t
      subject to        A(x) ⪯ tI
• It follows from
      λmax(A(x)) ≤ t ⟺ A(x) ⪯ tI.
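The equivalence is easy to check numerically for a fixed symmetric matrix (a made-up stand-in for A(x) at some fixed x): tI − A is PSD exactly when t ≥ λmax(A).

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # a symmetric stand-in for A(x)
lam_max = np.linalg.eigvalsh(A).max()

# lambda_max(A) <= t  <=>  A <= t*I  (i.e. t*I - A is PSD)
def lmi_holds(t):
    return np.linalg.eigvalsh(t * np.eye(4) - A).min() >= 0

holds_above = lmi_holds(lam_max + 0.1)   # just above lambda_max: LMI holds
holds_below = lmi_holds(lam_max - 0.1)   # just below: LMI fails
```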
Vector Optimization
• General vector optimization problem:
      minimize_x (w.r.t. K)    f0(x)
      subject to               fi(x) ≤ 0,   i = 1, …, m
                               hi(x) = 0,   i = 1, …, p
  where the vector objective f0 : Rn → Rq is minimized w.r.t. the proper cone K ⊆ Rq.
• Convex vector optimization problem:
      minimize_x (w.r.t. K)    f0(x)
      subject to               fi(x) ≤ 0,   i = 1, …, m
                               Ax = b
  where f0 is K-convex and f1, …, fm are convex.
Pareto Optimality
• Set of achievable objective values:
      O = {f0(x) | x is feasible}.
• A feasible x is Pareto optimal if f0(x) is a minimal value of O.
• A minimal value x of O satisfies:
      y ∈ O, y ⪯_K x ⟹ y = x
  (in words: no other achievable value y can lie in the cone of points better than x).
Multicriterion Optimization
• If we now choose the proper cone K = Rq+ (nonnegative orthant),
then the vector optimization becomes a multicriterion optimization
with q different objectives:
      f0(x) = (F1(x), …, Fq(x)).
• A feasible point xpo is Pareto optimal if
      y feasible, f0(y) ≤ f0(xpo) ⟹ f0(xpo) = f0(y).
• If there are multiple Pareto optimal values, there is a trade-off
among the objectives.
Scalarization for Multicriterion Problems
• To find Pareto optimal points, minimize the positive weighted sum:
      λᵀf0(x) = λ1F1(x) + ⋯ + λqFq(x).
• Example: regularized least-squares:
      minimize_x    ‖Ax − b‖2² + γ‖x‖2².
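The regularized least-squares scalarization has the closed-form minimizer x = (AᵀA + γI)⁻¹Aᵀb, and sweeping the weight γ > 0 traces the Pareto trade-off between the two objectives. A sketch with made-up data, verified through the vanishing gradient:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
gamma = 0.5

# Closed-form minimizer of ||Ax - b||^2 + gamma*||x||^2:
# x = (A^T A + gamma*I)^{-1} A^T b
x = np.linalg.solve(A.T @ A + gamma * np.eye(5), A.T @ b)

# Optimality check: the gradient 2 A^T (A x - b) + 2 gamma x vanishes.
grad = 2 * A.T @ (A @ x - b) + 2 * gamma * x
```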
Summary
• Thus far, we have seen the basic definitions of convex sets and convex
functions with examples and operations that preserve convexity.
• We have then considered convex problems in a variety of forms
including quasiconvex problems, vector optimization, Pareto opti-
mality, etc.
• We have also overviewed different classes of convex problems such
as LP, QP, QCQP, SOCP, and SDP.
• We can say we have acquired the vocabulary of convex optimization.
References
Chapter 4 of
• Stephen Boyd and Lieven Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge University Press, 2004. http://www.stanford.edu/~boyd/cvxbook/