S. Boyd EE364
Lecture 4
Convex optimization problems
• optimization problem in standard form
• convex optimization problem
• standard form with generalized inequalities
• multicriterion optimization
Optimization problem: standard form
minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, . . . , m
            hi(x) = 0,  i = 1, . . . , p
where fi, hi : Rn → R
• x is optimization variable
• f0 is objective or cost function; fi(x) ≤ 0 are the inequality constraints; hi(x) = 0 are the equality constraints
• x is feasible if it satisfies the constraints; the feasible set C is the set of all feasible points; problem is feasible if there are feasible points
• problem is unconstrained if m = p = 0
• optimal value is f⋆ = infx∈C f0(x) (can be −∞; convention: f⋆ = +∞ if infeasible); optimal point: x ∈ C s.t. f0(x) = f⋆; optimal set: Xopt = {x ∈ C | f0(x) = f⋆}
Convex optimization problems 4–2
example:
minimize    x1 + x2
subject to  −x1 ≤ 0
            −x2 ≤ 0
            1 − x1x2 ≤ 0        (1)
• feasible set C is half-hyperboloid
• optimal value is f⋆ = 2
• only optimal point is x⋆ = (1, 1)
[figure: feasible set C in the (x1, x2) plane (axes 0 to 5); C is the region x1 > 0, x2 > 0, x1x2 ≥ 1]
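A quick numeric sanity check of these claims (a brute-force sketch, not part of the slides): on the feasible set, decreasing x2 down to 1/x1 never increases the objective, so the optimum lies on the boundary x2 = 1/x1, and we can scan the one-variable function x1 + 1/x1.

```python
# On C = {x : x1, x2 >= 0, x1*x2 >= 1}, lowering x2 to 1/x1 never hurts
# the objective x1 + x2, so the optimum lies on the boundary x2 = 1/x1;
# scan g(x1) = x1 + 1/x1 over a grid (brute force, for illustration only).
best_x1, best_val = None, float("inf")
for k in range(1, 5001):
    x1 = k / 1000.0            # grid over (0, 5]
    val = x1 + 1.0 / x1        # objective restricted to the boundary
    if val < best_val:
        best_x1, best_val = x1, val
print(best_x1, best_val)       # -> 1.0 2.0, matching f* = 2 at x* = (1, 1)
```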
Implicit and explicit constraints
explicit constraints: fi(x) ≤ 0, hi(x) = 0
implicit constraint: x ∈ dom fi, x ∈ dom hi

D = dom f0 ∩ · · · ∩ dom fm ∩ dom h1 ∩ · · · ∩ dom hp

is called the domain of the problem
example
minimize − log x1 − log x2
subject to x1 + x2 − 1 ≤ 0
has an implicit constraint
x ∈ D = {x ∈ R2 | x1 > 0, x2 > 0}
Feasibility problem
suppose objective f0 = 0, so
f⋆ = { 0    if C ≠ ∅
     { +∞   if C = ∅
thus, problem is really to
• either find x ∈ C
• or determine that C = ∅
i.e., solve the inequality / equality system
fi(x) ≤ 0,  i = 1, . . . , m
hi(x) = 0,  i = 1, . . . , p
or determine that it is inconsistent
Convex optimization problem
convex optimization problem in standard form:
minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, . . . , m
            aiT x − bi = 0,  i = 1, . . . , p
• f0, f1, . . . , fm convex
• affine equality constraints
• feasible set is convex
often written as
minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, . . . , m
            Ax = b
where A ∈ Rp×n
example. problem (1)
• has convex objective and feasible set
• is not a standard form convex optimization problem, since f3(x) = 1 − x1x2 is not convex

can easily be cast as a standard form convex optimization problem:
minimize    x1 + x2
subject to  −x1 ≤ 0
            −x2 ≤ 0
            1 − √(x1x2) ≤ 0

(1 − √(x1x2) is convex on R2+)
many different ways, e.g.,
minimize    x1 + x2
subject to  −x1 ≤ 0
            −x2 ≤ 0
            − log x1 − log x2 ≤ 0
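A spot check (not from the slides) that the three formulations of the coupling constraint really cut out the same feasible set on x1, x2 > 0:

```python
import math

# The constraint of problem (1) and its two convex reformulations:
# 1 - x1*x2 <= 0, 1 - sqrt(x1*x2) <= 0, -log(x1) - log(x2) <= 0.
# All three are equivalent to x1*x2 >= 1 when x1, x2 > 0.
def product_form(x1, x2):
    return 1 - x1 * x2 <= 0

def sqrt_form(x1, x2):
    return 1 - math.sqrt(x1 * x2) <= 0

def log_form(x1, x2):
    return -math.log(x1) - math.log(x2) <= 0

pts = [(0.5, 0.5), (0.5, 2.0), (1.0, 1.0), (2.0, 3.0), (0.1, 5.0)]
for x1, x2 in pts:
    assert product_form(x1, x2) == sqrt_form(x1, x2) == log_form(x1, x2)
print("all three constraint forms agree on the sample points")
```

The point of the reformulations is not the feasible set (which is unchanged) but that the left-hand side becomes a convex function of x.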
example. fi all affine yields linear program
minimize    c0T x + d0
subject to  ciT x + di ≤ 0,  i = 1, . . . , m
            Ax = b
which is a convex optimization problem
example. minimum norm approximation with limits on variables

minimize    ‖Ax − b‖
subject to  li ≤ xi ≤ ui,  i = 1, . . . , n
is convex
example. maximum entropy with linear equality constraints

minimize    ∑i xi log xi
subject to  xi ≥ 0,  i = 1, . . . , n
            ∑i xi = 1
            Ax = b
is convex
(more on these later)
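The maximum entropy example can be sketched in a special case that is solvable by hand (a hedged sketch assuming a single moment constraint aᵀx = b in place of the general Ax = b; the data a, b below are hypothetical). The optimality conditions give xi proportional to exp(−λ ai), so it suffices to bisect on the scalar multiplier λ:

```python
import math

# Maximum entropy: minimize sum_i x_i log x_i over the simplex with one
# extra moment constraint a^T x = b (the general Ax = b case needs one
# multiplier per row). Stationarity gives x_i proportional to exp(-lam*a_i);
# find lam by bisection so that the moment a^T x(lam) matches b.
a = [1.0, 2.0, 3.0]          # hypothetical data
b = 2.5                      # must lie strictly between min(a) and max(a)

def x_of(lam):
    w = [math.exp(-lam * ai) for ai in a]
    s = sum(w)
    return [wi / s for wi in w]

def moment(lam):
    return sum(ai * xi for ai, xi in zip(a, x_of(lam)))

lo, hi = -50.0, 50.0         # moment(lam) is strictly decreasing in lam
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if moment(mid) > b:
        lo = mid             # moment too large: increase lam
    else:
        hi = mid
x = x_of(0.5 * (lo + hi))
print([round(xi, 4) for xi in x])   # entries sum to 1 and satisfy a^T x = b
```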
Local and global optimality
x ∈ C is locally optimal if it satisfies
y ∈ C, ‖y − x‖ ≤ R =⇒ f0(y) ≥ f0(x)
for some R > 0
c.f. (globally) optimal, which means x ∈ C,
y ∈ C =⇒ f0(y) ≥ f0(x)
for cvx opt problems, any local solution is also global
proof:
• suppose x is locally optimal, but there is y ∈ C with f0(y) < f0(x)
• take a small step from x towards y, i.e., z = λy + (1 − λ)x with λ > 0 small
• z is near x, and by convexity f0(z) ≤ λf0(y) + (1 − λ)f0(x) < f0(x); contradicts local optimality
An optimality criterion
suppose f0 is differentiable in convex problem
then x ∈ C is optimal iff
y ∈ C =⇒ ∇f0(x)T (y − x) ≥ 0
• −∇f0(x) defines supporting hyperplane for C at x
• if you move from x towards any feasible y, f0 does not decrease
[figure: contour lines of f0, with −∇f0(x) normal to a supporting hyperplane for C at the point x]
• hence x ∈ C, ∇f0(x) = 0 implies x optimal
• for unconstr. problems, x is optimal iff ∇f0(x) = 0
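The criterion can be illustrated numerically on a small instance (a hypothetical box-constrained problem, not from the slides): minimize f0(x) = (x1 − 2)² + (x2 − 2)² over the box 0 ≤ xi ≤ 1, whose optimum is x⋆ = (1, 1).

```python
import itertools

# Check grad f0(x*)^T (y - x*) >= 0 for feasible y on a grid over the box.
xstar = (1.0, 1.0)                                 # closest box point to (2, 2)
grad = (2 * (xstar[0] - 2), 2 * (xstar[1] - 2))    # grad f0(x*) = (-2, -2)
ok = True
for i, j in itertools.product(range(11), repeat=2):
    y = (i / 10.0, j / 10.0)                       # feasible grid point
    ip = grad[0] * (y[0] - xstar[0]) + grad[1] * (y[1] - xstar[1])
    ok = ok and ip >= 0
print(ok)   # -> True: f0 cannot decrease to first order in any feasible direction
```

Note that ∇f0(x⋆) ≠ 0 here: at a constrained optimum the gradient need not vanish, it only has to make a nonnegative inner product with every feasible direction.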
Quasiconvex optimization problem
quasiconvex optimization problem in standard form:
minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, . . . , m
            Ax = b
• f0 is quasiconvex
• f1, . . . , fm are convex
• affine equality constraints
• feasible set, all sublevel sets are convex
example. linear-fractional programming
minimize    (aT x + b)/(cT x + d)
subject to  Ax = b,  Fx ⪯ g,  cT x + d > 0
Solving quasiconvex problems via bisection
minimize    f0(x)
subject to  fi(x) ≤ 0,  i = 1, . . . , m
            Ax = b
fi convex, f0 quasiconvex
idea: express sublevel set f0(x) ≤ t as sublevel set of a convex function:

f0(x) ≤ t  ⇔  φt(x) ≤ 0

where φt : Rn → R is convex in x for each t
now solve quasiconvex problem by bisection on t, solving the convex feasibility problem
φt(x) ≤ 0, fi(x) ≤ 0, i = 1, . . . , m, Ax = b
(with variable x) at each iteration
bisection method for quasiconvex problem:
given l < p⋆; feasible x; ε > 0
u := f0(x)
repeat
    t := (u + l)/2
    solve convex feasibility problem
        φt(x) ≤ 0,  fi(x) ≤ 0,  Ax = b
    if feasible,
        u := t;  x := any solution of feas. problem
    else l := t
until u − l ≤ ε
• reduces quasiconvex problem to a sequence of convex feasibility problems
• finds ε-suboptimal solution in ⌈log2((u0 − l0)/ε)⌉ iterations, where u0 − l0 is the initial interval width (the interval halves at each step)
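The bisection loop can be sketched on a tiny quasiconvex instance (a hypothetical 1-D linear-fractional problem, not from the slides), where the convex feasibility subproblem reduces to checking the endpoints of an interval:

```python
# minimize f0(x) = (x + 2)/(x + 1) over the interval 0 <= x <= 3.
# Since x + 1 > 0 on the feasible set, f0(x) <= t  <=>  phi_t(x) <= 0 with
# phi_t(x) = (x + 2) - t*(x + 1), which is linear (hence convex) in x.
def feasible(t):
    # phi_t is linear in x, so its minimum over [0, 3] is at an endpoint;
    # return a feasible point if one exists, else None
    for x in (0.0, 3.0):
        if (x + 2) - t * (x + 1) <= 0:
            return x
    return None

l, u = 0.0, 2.0    # l = 0 is a lower bound on p*; u = f0(0) = 2 is achievable
x, eps = 0.0, 1e-9
while u - l > eps:
    t = 0.5 * (u + l)
    z = feasible(t)
    if z is not None:
        u, x = t, z          # found x with f0(x) <= t: shrink from above
    else:
        l = t                # no point with f0 <= t: p* > t
print(round(x, 6), round(u, 6))   # -> 3.0 1.25  (p* = 5/4 attained at x = 3)
```

In a realistic problem the endpoint check would be replaced by a convex feasibility solver; the bisection shell around it is unchanged.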
Epigraph form
write standard form problem as
minimize    t
subject to  f0(x) − t ≤ 0
            fi(x) ≤ 0,  i = 1, . . . , m
            hi(x) = 0,  i = 1, . . . , p
• variables are (x, t)
• m + 1 inequality constraints
• objective is linear: t = en+1T (x, t)
• if original problem is cvx, so is epigraph form
[figure: epigraph of f0 over C; minimizing the linear objective t means moving as far as possible in direction −en+1]
linear objective is ‘universal’ for convex optimization
Standard form with generalized inequalities

convex optimization problem in standard form with generalized inequalities:

minimize    f0(x)
subject to  fi(x) ⪯Ki 0,  i = 1, . . . , L
            Ax = b
where:
• f0 : Rn → R convex
• ⪯Ki are generalized inequalities on Rmi
• fi : Rn → Rmi are Ki-convex
example. semidefinite programming
minimize    cT x
subject to  A0 + x1A1 + · · · + xnAn ⪯ 0

where Ai = AiT ∈ Rp×p
How fi, hi are described
analytical form
functions can have analytical form, e.g.,
f(x) = xTPx + 2qTx + r
f is specified by giving the problem data, coefficients, or parameters, e.g.
P = PT ∈ Rn×n, q ∈ Rn, r ∈ R
oracle form
functions can be given by an oracle or subroutine that, given x, computes f(x) (and maybe ∇f(x), ∇2f(x), . . . )

• oracle model can be useful even if f has analytic form, e.g., linear but sparse
• how f is given affects choice of algorithm, storage required to specify problem, etc.
Some hard problems
‘slight’ modification of a convex problem can be very hard
• convex maximization, concave minimization, e.g.
maximize    ‖x‖
subject to  Ax ⪯ b
• nonlinear equality constraints, e.g.
minimize    cT x
subject to  xT Pi x + qiT x + ri = 0,  i = 1, . . . , K
• minimizing over non-convex sets, e.g., Boolean variables

find         x
such that    Ax ⪯ b,  xi ∈ {0, 1}
Multicriterion optimization
vector objective
F(x) = (F1(x), . . . , FN(x))
F1, . . . , FN : Rn → R
(can include equality, inequality constraints)

Fi called objective functions: roughly speaking, want all Fi small
family of specifications indexed by t ∈ RN :
F(x) ⪯ t

i.e., Fi(x) ≤ ti, i = 1, . . . , N.

achievable specification: t s.t. F(x) ⪯ t is feasible
Achievable specifications
set of achievable objectives:
A = {t ∈ RN | ∃x s.t. F(x) ⪯ t}
[figure: achievable set A (achievable specs) shaded in the (t1, t2) plane]
if Fi are convex then A is convex
boundary of A is called (optimal) tradeoff surface
Pareto optimality
x dominates (is better than) x̃ if F(x) ⪯ F(x̃) and F(x) ≠ F(x̃)

i.e., x is no worse than x̃ in any objective, and better in at least one
x0 is Pareto optimal if no x dominates it
[figure: achievable specs in the (t1, t2) plane, with F(x0) on the boundary and the set of specs tighter than F(x0) outside A]
roughly, x0 Pareto optimal means F(x0) is on the tradeoff surface (F(x0) ∈ ∂A)
Pareto problem: find Pareto-optimal x
real (but more vague) engineering problem: search/explore/characterize tradeoff surface, e.g.:

• ‘can reduce F5 below 0.1, but only at huge cost in F4 and F2’
• ‘can pretty much minimize F3 independently of other objectives’
• ‘F1 and F2 tradeoff strongly for F1 ≤ 1, F2 ≤ 2’
[figure: tradeoff curves in the (t1, t2) plane illustrating a weak tradeoff and a strong tradeoff]
Scalarization
multicriterion problem with F1, . . . , FN
minimize weighted sum of objectives: choose weights wi > 0, and solve

minimize    ∑i wi Fi(x)
which is the same as
minimize    wT t
subject to  F(x) ⪯ t  (i.e., t ∈ A)
[figure: achievable specs in the (t1, t2) plane; the hyperplane ∑i wi ti = constant, with normal w, supports A at F(x0)]
• solution x0 is Pareto optimal
• for many cvx problems, all Pareto optimal points can be found this way, as weights vary
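Scalarization can be demonstrated on a toy bi-criterion problem (illustrative, not from the slides): F1(x) = (x − 1)², F2(x) = (x + 1)² on R. The weighted sum has a closed-form minimizer, and sweeping the weights traces out the tradeoff curve.

```python
# Weighted-sum scalarization of F1(x) = (x - 1)^2, F2(x) = (x + 1)^2.
def scalarized_min(w1, w2):
    # argmin of w1*(x-1)^2 + w2*(x+1)^2: setting the derivative
    # 2*w1*(x-1) + 2*w2*(x+1) to zero gives x = (w1 - w2)/(w1 + w2)
    return (w1 - w2) / (w1 + w2)

f1s, f2s = [], []
for k in range(1, 10):
    w1, w2 = float(k), float(10 - k)   # positive weights
    x = scalarized_min(w1, w2)
    f1s.append((x - 1) ** 2)
    f2s.append((x + 1) ** 2)

# along the sweep F1 strictly decreases while F2 strictly increases:
# each solution is Pareto optimal, and together they trace the tradeoff curve
dec = all(a > b for a, b in zip(f1s, f1s[1:]))
inc = all(a < b for a, b in zip(f2s, f2s[1:]))
print(dec, inc)   # -> True True
```

Since both objectives here are convex, every Pareto optimal point is recovered by some choice of positive weights, matching the last bullet above.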