
Linear Matrix Inequalities in Control

Carsten Scherer and Siep Weiland

7th Elgersburg School on Mathematical Systems Theory

Class 1

Carsten Scherer and Siep Weiland (Elgersburg) Linear Matrix Inequalities in Control Class 1 1 / 65

Part I

Course organization


Outline of Part I

1 Organization of the course
   Topics
   Software


course topics

• Facts from convex analysis. LMI's: history, algorithms and software.

• The role of LMI's in dissipativity. Stability and nominal performance. Analysis results.

• From analysis to synthesis. State-feedback, estimation and output-feedback synthesis.

• From nominal to robust stability, robust performance and robust synthesis.

• IQC's and multipliers. Relations to classical tests and to μ-theory.

• Mixed control problems and parameter-varying systems and control design.


software issues

Install Yalmip

• Modelling language for solving convex optimization problems

• Consistent with Matlab syntax; free (!)

• Efficient for implementing algorithms

• Developed by Johan Lofberg

Install SDP solver

• Small to medium-size LP and QP problems: SeDuMi or SDPT3

• Large-scale LP and QP problems: GUROBI

• Generic package for academic use: MOSEK

• Many more commercial and non-commercial solvers . . .

to Yalmip website


Part II

Let’s start . . .


Outline of Part II

2 What to expect?

3 Convex sets and convex functions
   Convex sets
   Convex functions

4 Why is convexity important?
   Examples
   Ellipsoidal algorithm
   Duality and convex programs

5 Linear Matrix Inequalities
   Definitions
   LMI's and convexity
   LMI's in control

6 A design example



Merging control and optimization

Classical optimal control paradigm (LQG, H2, H∞, L1, MPC) is restricted:

• Performance specs in terms of the complete closed-loop transfer matrix.

• One measure of performance only. Often multiple specs are imposed on the controlled system.

• Cannot incorporate structured time-varying/nonlinear uncertainties.

• Can only design LTI controllers.

• Can only synthesize one controller in one architecture.

Control vs. optimization

View the control input and/or the feedback controller as the decision variable of an optimization problem. Desired specifications are imposed as constraints on the controlled system.


Major goals for control and optimization

• Distinguish easy from difficult problems. (Convexity is key!)

• What are consequences of convexity in optimization?

• What is robust optimization?

• How to check robust stability by convex optimization?

• Which performance measures can be dealt with?

• How can controller synthesis be convexified?

• What are limits for the synthesis of robust controllers?

• How can we perform systematic gain scheduling?



Optimization problems

Casting optimization problems in mathematics requires

• X : decision set

• S ⊆ X : feasible decisions

• f : S → R: cost function

f assigns to each decision x ∈ S a cost f(x) ∈ R.

Wish to select the decision x ∈ S that minimizes the cost f(x)


Optimization problems

1 What is the least possible cost? Compute the optimal value

fopt := inf_{x∈S} f(x) = inf{f(x) | x ∈ S} ≥ −∞

Convention: if S = ∅ then fopt = +∞.
Convention: if fopt = −∞ the problem is said to be unbounded.

2 How to determine almost optimal solutions? For arbitrary ε > 0 find

xε ∈ S with fopt ≤ f(xε) ≤ fopt + ε.

3 Is there an optimal solution (or minimizer)? Does there exist

xopt ∈ S with fopt = f(xopt)?

We write f(xopt) = min_{x∈S} f(x).

4 Can we calculate all optimal solutions? (Non)-uniqueness:

arg min f(x) := {x ∈ S | fopt = f(x)}


Recap: infimum and minimum of functions

Infimum of a function
Any f : S → R has an infimum L ∈ R ∪ {−∞}, denoted inf_{x∈S} f(x). It is defined by the properties

• L ≤ f(x) for all x ∈ S
• L finite: for all ε > 0 there exists x ∈ S with f(x) < L + ε
  L infinite: for all ε > 0 there exists x ∈ S with f(x) < −1/ε

Minimum of a function
If there exists x0 ∈ S with f(x0) = inf_{x∈S} f(x), we say that f attains its minimum on S and write L = min_{x∈S} f(x). If it exists, the minimum is uniquely defined by the properties

• L ≤ f(x) for all x ∈ S
• There exists some x0 ∈ S with f(x0) = L


A classical result

Theorem (Weierstrass)

If f : S → R is continuous and S is a compact subset of the normed linear space X, then there exist x_min, x_max ∈ S such that for all x ∈ S

inf_{x∈S} f(x) = f(x_min) ≤ f(x) ≤ f(x_max) = sup_{x∈S} f(x)

Comments:

• Answers problem 3 for “special” S and f

• No clue on how to find xmin, xmax

• No answer to uniqueness issue

• S is compact if every sequence xn ∈ S has a subsequence x_{n_m} which converges to a point x ∈ S

• Continuity and compactness overly restrictive!


Convex sets

Definition

A set S in a linear vector space X is convex if

x1, x2 ∈ S =⇒ αx1 + (1− α)x2 ∈ S for all α ∈ (0, 1)

The point αx1 + (1 − α)x2 with α ∈ (0, 1) is a convex combination of x1 and x2.

Definition

The point x ∈ X is a convex combination of x1, . . . , xn ∈ X if

x := Σ_{i=1}^n αi xi,   αi ≥ 0,   Σ_{i=1}^n αi = 1

• Note: set of all convex combinations of x1, . . . , xn is convex.


Examples of convex sets

[Figure: examples of convex and non-convex sets]


Basic properties of convex sets

Theorem

Let S and T be convex. Then

• αS := {x | x = αs, s ∈ S} is convex

• S + T := {x | x = s+ t, s ∈ S, t ∈ T } is convex

• closure of S and interior of S are convex

• S ∩ T := {x | x ∈ S and x ∈ T } is convex.

• x ∈ S is an interior point of S if there exists ε > 0 such that all y with ‖x − y‖ ≤ ε belong to S

• x ∈ X is a closure point of S ⊆ X if for all ε > 0 there exists y ∈ S with ‖x − y‖ ≤ ε


Examples of convex sets

• With a ∈ Rn\{0} and b ∈ R, the hyperplane

H = {x ∈ Rn | aᵀx = b}

and the half-space

H− = {x ∈ Rn | aᵀx ≤ b}

are convex.

• The intersection of finitely many hyperplanes and half-spaces is a polyhedron. Any polyhedron is convex and can be described as

{x ∈ Rn | Ax ≤ b, Dx = e}

for suitable matrices A and D and vectors b, e.

• A compact polyhedron is a polytope.


The convex hull

Definition

The convex hull of a set S ⊂ X is

conv(S) := ∩{T | T is convex and S ⊆ T }

• conv(S) is convex for any set S
• conv(S) is the set of all convex combinations of points of S
• The convex hull of finitely many points conv(x1, . . . , xn) is a polytope. Moreover, any polytope can be represented in this way!!

The latter property allows an explicit representation of polytopes. For example, {x ∈ Rn | a ≤ x ≤ b} is described by 2n inequalities but requires 2^n generators for its representation as a convex hull!


Convex functions

Definition

A function f : S → R is convex if

• S is convex and

• for all x1, x2 ∈ S, α ∈ (0, 1) there holds

f(αx1 + (1− α)x2) ≤ αf(x1) + (1− α)f(x2)

• We have

f : S → R convex =⇒ the sublevel sets {x ∈ S | f(x) ≤ γ} are convex for all γ ∈ R

This derives convex sets from convex functions.

• The converse implication is not true!

• f is strictly convex if < holds instead of ≤
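As a quick numerical illustration (not part of the original slides), the defining inequality can be checked on random samples for a function that is known to be convex, such as f(x) = x²:

```python
import random

def f(x):
    return x * x  # a known convex function

random.seed(0)
violations = 0
for _ in range(1000):
    x1 = random.uniform(-10, 10)
    x2 = random.uniform(-10, 10)
    a = random.random()
    lhs = f(a * x1 + (1 - a) * x2)
    rhs = a * f(x1) + (1 - a) * f(x2)
    if lhs > rhs + 1e-9:  # convexity requires lhs <= rhs
        violations += 1
print(violations)
```

Such a sampling check can refute convexity (one violation suffices) but of course never proves it.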

Examples of convex functions

Convex functions

• f(x) = ax² + bx + c, convex if a > 0
• f(x) = |x|
• f(x) = ‖x‖
• f(x) = sin x on [π, 2π]

Non-convex functions

• f(x) = x³ on R
• f(x) = −|x|
• f(x) = √x on R+
• f(x) = sin x on [0, π]


Convex matrix-valued functions

An interesting generalization:

• Define H and S to be the sets of Hermitian and symmetric matrices, i.e., sets of square matrices A for which

A = A* or A = Aᵀ

• Define the partial orderings on H and S

A1 ≺ A2,  A1 ≼ A2,  A1 ≻ A2,  A1 ≽ A2

by requiring x*(A1 − A2)x to be negative, non-positive, positive or non-negative for all x ≠ 0.

Definition

A matrix-valued function F : S → H is convex if S is a convex set and

F(αx1 + (1 − α)x2) ≼ αF(x1) + (1 − α)F(x2)

for all x1, x2 ∈ S and 0 < α < 1.


Recap: Hermitian and symmetric matrices

Definition

For a real or complex matrix A the inequality A ≺ 0 means that A is Hermitian and negative definite.

• A is Hermitian if A = A* (the complex conjugate transpose); if A is real this amounts to A = Aᵀ and we call A symmetric.

Sets of n × n Hermitian or symmetric matrices: Hn and Sn.

• All eigenvalues of Hermitian and symmetric matrices are real.

• By definition a Hermitian matrix A is negative definite if

u*Au < 0 for all complex vectors u ≠ 0

A is negative definite if and only if all its eigenvalues are negative.

• A ≼ B, A ≻ B and A ≽ B are defined and characterized analogously.


Affine sets

Definition

A subset S of a linear vector space is affine if x = αx1 + (1 − α)x2 belongs to S for every x1, x2 ∈ S and α ∈ R.

• Geometric idea: line through any two points belongs to set

• Every affine set is convex

• S affine if and only if there exists x0 such that

S = {x | x = x0 +m, m ∈M}

with M a linear subspace


Affine functions

Definition

A function f : S → T is affine if

f(αx1 + (1− α)x2) = αf(x1) + (1− α)f(x2)

for all x1, x2 ∈ S and for all α ∈ R

Theorem

If S and T are finite dimensional, then f : S → T is affine if and only if

f(x) = f0 + T (x)

where f0 ∈ T and T : S → T is a linear map (a matrix).


Cones and convexity

Definition

A convex cone is a set K ⊂ Rn with the property that

x1, x2 ∈ K =⇒ α1x1 + α2x2 ∈ K for all α1, α2 ≥ 0.

• Since x1, x2 ∈ K implies αx1 + (1 − α)x2 ∈ K for all α ∈ (0, 1), a convex cone is convex.

• If S ⊂ X is an arbitrary set, then

K := {y ∈ Rn | 〈x, y〉 ≥ 0 for all x ∈ S}

is a cone. It is also denoted K = S* and called the dual cone of S.


Cones and convexity

• Cones are unbounded sets.

• If K1 and K2 are convex cones, then so are

αK1, K1 ∩ K2, K1 + K2

for any α ∈ R.

• The intersection of a cone with a hyperplane is a conic section.



Why is convexity interesting ???

Reason 1: absence of local minima

Definition

Let f : S → H. Then x0 ∈ S is a

• local minimum if there exists ε > 0 such that

f(x0) ≼ f(x) for all x ∈ S with ‖x − x0‖ ≤ ε

• global minimum if f(x0) ≼ f(x) for all x ∈ S

Theorem

If f : S → H is convex then every local minimum x0 is a global minimum of f. If f is strictly convex, then the global minimum x0 is unique.


Why is convexity interesting ???

Reason 2: uniform bounds

Theorem

Suppose S = conv(S0) and f : S → H is convex. Then the following are equivalent:

• f(x) ≼ γ for all x ∈ S
• f(x) ≼ γ for all x ∈ S0

Very interesting if S0 consists of a finite number of points, i.e., S0 = {x1, . . . , xn}. A finite test!!
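A small pure-Python sanity check of this finite test (illustrative only; the function and the generating points are arbitrary choices, not from the slides): a bound γ verified on the generators S0 then holds at random convex combinations as well.

```python
import random

def f(x):
    return abs(x[0]) + abs(x[1])  # a convex function on R^2

S0 = [(1.0, 2.0), (-3.0, 0.5), (2.0, -1.0), (0.0, 4.0)]
gamma = max(f(v) for v in S0)  # bound checked on the generators only

random.seed(1)
ok = True
for _ in range(1000):
    w = [random.random() for _ in S0]
    total = sum(w)
    w = [wi / total for wi in w]  # nonnegative weights summing to 1
    x = (sum(wi * v[0] for wi, v in zip(w, S0)),
         sum(wi * v[1] for wi, v in zip(w, S0)))
    if f(x) > gamma + 1e-9:  # would contradict the theorem
        ok = False
print(ok)
```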


Why is convexity interesting ???

Reason 3: subgradients

Definition

A vector g = g(x0) ∈ Rn is a subgradient of f at x0 if

f(x) ≥ f(x0) + ⟨g, x − x0⟩

for all x ∈ S.

Geometric idea: the graph of the affine function x ↦ f(x0) + ⟨g, x − x0⟩ is tangent to the graph of f at (x0, f(x0)).

Theorem

A convex function f : S → R has a subgradient at every interior point x0 of S.


Examples and properties of subgradients

• If f is differentiable, then g = g(x0) = ∇f(x0) is a subgradient. So, for differentiable functions, every gradient is a subgradient.

• The non-differentiable function f(x) = |x| has any real number g ∈ [−1, 1] as its subgradient at x0 = 0.

• f(x0) is the global minimum of f if and only if 0 is a subgradient of f at x0.

• Since

⟨g, x − x0⟩ > 0 =⇒ f(x) > f(x0),

all points in the half-space H := {x | ⟨g, x − x0⟩ > 0} can be discarded in searching for the minimum of f.

Used explicitly in the ellipsoidal algorithm.
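For instance, the subgradient inequality for f(x) = |x| at x0 = 0 can be verified directly in a few lines of Python (an illustration, not from the slides): any g ∈ [−1, 1] works, while a value such as g = 1.5 fails.

```python
def f(x):
    return abs(x)

x0 = 0.0
test_points = [-2.0, -0.1, 0.0, 0.5, 1.0, 3.0]

# every g in [-1, 1] satisfies f(x) >= f(x0) + g*(x - x0) for all x
ok = all(f(x) >= f(x0) + g * (x - x0)
         for g in [-1.0, -0.5, 0.0, 0.3, 1.0]
         for x in test_points)

# g = 1.5 is not a subgradient at 0: at x = 1 the inequality fails
violated = f(1.0) < f(x0) + 1.5 * (1.0 - x0)
print(ok, violated)
```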


Ellipsoidal algorithm

Aim: minimize a convex function f : Rn → R.

Step 0 Let x0 ∈ Rn and P0 ≻ 0 be such that all minimizers of f are located in the ellipsoid

E0 := {x ∈ Rn | (x − x0)ᵀ P0⁻¹ (x − x0) ≤ 1}.

Set k = 0.

Step 1 Compute a subgradient gk of f at xk. If gk = 0 then stop, otherwise proceed to Step 2.

Step 2 All minimizers are contained in

Hk := Ek ∩ {x | ⟨gk, x − xk⟩ ≤ 0}.

Step 3 Compute xk+1 ∈ Rn and Pk+1 ≻ 0 with minimal determinant det Pk+1 such that

Ek+1 := {x ∈ Rn | (x − xk+1)ᵀ Pk+1⁻¹ (x − xk+1) ≤ 1}

contains Hk.

Step 4 Set k to k + 1 and return to Step 1.


Ellipsoidal algorithm

Remarks on ellipsoidal algorithm:

• Convergence: f(xk) → inf_x f(x).

• There exist explicit formulas for xk, Pk, Ek such that the volume of Ek decreases by a factor e^(−1/(2n)) at each step. (See lecture notes.)

• Simple, robust, reliable, easy to implement, but slow convergence.


Why is convexity interesting ???

Reason 4: Duality and convex programs
The set of feasible decisions is often described by equality and inequality constraints:

S = {x ∈ X | gk(x) ≤ 0, k = 1, . . . , K, h_ℓ(x) = 0, ℓ = 1, . . . , L}

Primal optimization:

Popt = inf_{x∈S} f(x)

• One of index sets K or L infinite: semi-infinite optimization

• Both index sets K and L finite: nonlinear program


Why is convexity interesting ???

• Examples: saturation constraints, safety margins, constitutive and balance equations all assume the form S = {x | g(x) ≼ 0, h(x) = 0}.

• semi-definite program:

minimize f(x) subject to g(x) ≼ 0, h(x) = 0

with f, g and h affine.

• quadratic program:

minimize xᵀQx + 2sᵀx + r subject to g(x) ≼ 0, h(x) = 0

with g and h affine.

• quadratically constrained quadratic program:

f(x) = xᵀQx + 2sᵀx + r,  gj(x) = xᵀQj x + 2sjᵀx + rj,  h(x) = h0 + Hx


Upper and lower bounds for convex programs

Primal optimization problem

Popt = inf_{x∈X} f(x) subject to g(x) ≼ 0, h(x) = 0

Can we obtain bounds on optimal value Popt ?

• Upper bound on the optimal value
For any x0 ∈ S we have

Popt ≤ f(x0)

which defines an upper bound on Popt.


Upper and lower bounds for convex programs

• Lower bound on the optimal value
Let x ∈ S. Then for arbitrary y ≽ 0 and z we have

L(x, y, z) := f(x) + ⟨y, g(x)⟩ + ⟨z, h(x)⟩ ≤ f(x)

and, in particular,

ℓ(y, z) := inf_{x∈X} L(x, y, z) ≤ inf_{x∈S} L(x, y, z) ≤ inf_{x∈S} f(x) = Popt,

so that

Dopt := sup_{y≽0, z} ℓ(y, z) = sup_{y≽0, z} inf_{x∈X} L(x, y, z) ≤ Popt

defines a lower bound for Popt.


Duality and convex programs

Some terminology:

• Lagrange function: L(x, y, z)

• Lagrange dual cost: ℓ(y, z)

• Lagrange dual optimization problem:

Dopt := sup_{y≽0, z} ℓ(y, z)

Remarks:

• ℓ(y, z) is computed by solving an unconstrained optimization problem. It is a concave function.

• The dual problem is a concave maximization problem. Its constraints are simpler than those of the primal problem.

• Main question: when is Dopt = Popt?


Example of duality

• Primal Linear Program

Popt = inf_x cᵀx subject to x ≥ 0, b − Ax = 0

• Lagrange dual cost

ℓ(y, z) = inf_x [ cᵀx − yᵀx + zᵀ(b − Ax) ] = bᵀz if c − Aᵀz − y = 0, and −∞ otherwise

• Dual Linear Program

Dopt = sup_z bᵀz subject to y = c − Aᵀz ≥ 0
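These formulas can be checked on a tiny made-up instance (illustrative Python, not from the slides): with A = [1 1], b = 1 and c = (1, 2), the primal and dual optima coincide, and weak duality holds at every sampled feasible pair.

```python
import random

# Data for the toy LP: A = [1 1], b = 1, c = (1, 2)
c = [1.0, 2.0]

# Primal: minimize c'x subject to x >= 0, x1 + x2 = 1.
# The feasible set is a segment with vertices (1,0) and (0,1);
# a linear cost attains its minimum at a vertex.
Popt = min(c[0], c[1])

# Dual: maximize b'z = z subject to y = c - A'z >= 0, i.e. z <= min(c).
Dopt = min(c)

# Weak duality: every dual-feasible z bounds every primal-feasible cost.
random.seed(2)
weak = True
for _ in range(100):
    t = random.random()
    x = [t, 1.0 - t]                  # primal feasible point
    z = random.uniform(-5.0, min(c))  # dual feasible point
    if z > c[0] * x[0] + c[1] * x[1] + 1e-12:
        weak = False

print(Popt, Dopt, weak)
```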


Karush-Kuhn-Tucker and duality

We need the following property:

Definition

Suppose f, g are convex and h is affine. (g, h) satisfy the constraint qualification if there exists x0 in the interior of X with g(x0) ≼ 0, h(x0) = 0 such that gj(x0) < 0 for all component functions gj that are not affine.

Example: (g, h) satisfies constraint qualification if g and h are affine.


Karush-Kuhn-Tucker and duality

Theorem (Karush-Kuhn-Tucker)

If (g, h) satisfies the constraint qualification, then we have strong duality:

Dopt = Popt.

There exist yopt ≽ 0 and zopt such that Dopt = ℓ(yopt, zopt). Moreover, xopt is an optimal solution of the primal optimization problem and (yopt, zopt) is an optimal solution of the dual optimization problem if and only if

1 g(xopt) ≼ 0, h(xopt) = 0,

2 yopt ≽ 0 and xopt minimizes L(x, yopt, zopt) over all x ∈ X, and

3 ⟨yopt, g(xopt)⟩ = 0.


Karush-Kuhn-Tucker and duality

Remarks:

• Very general result, strong tool in convex optimization

• The dual problem is often simpler to solve; (yopt, zopt) is called a Kuhn-Tucker point.

• The triple (xopt, yopt, zopt) exists if and only if it defines a saddle point of the Lagrangian L in the sense that

L(xopt, y, z) ≤ L(xopt, yopt, zopt) ≤ L(x, yopt, zopt), with L(xopt, yopt, zopt) = Popt = Dopt,

for all x, y ≽ 0 and z.



Linear Matrix Inequalities

Definition

A linear matrix inequality (LMI) is an expression

F (x) = F0 + x1F1 + . . .+ xnFn ≺ 0

where

• x = col(x1, . . . , xn) is a vector of real decision variables,

• Fi = Fiᵀ are real symmetric matrices and

• ≺ 0 means negative definite, i.e.,

F(x) ≺ 0 ⇔ uᵀF(x)u < 0 for all u ≠ 0
       ⇔ all eigenvalues of F(x) are negative
       ⇔ λmax(F(x)) < 0

• F is an affine function of the decision variables


Simple examples of LMI’s

• 1 + x < 0

• 1 + x1 + 2x2 < 0

• [ 1 0 ; 0 1 ] + x1 [ 2 −1 ; −1 2 ] + x2 [ 1 0 ; 0 0 ] ≺ 0

All the same with ≻ 0, ≼ 0 and ≽ 0.

Only very simple cases can be treated analytically.

Need to resort to numerical techniques!
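As an illustration (not from the slides), feasibility of the third example can be probed numerically: for a symmetric 2 × 2 matrix, λmax has a closed form, and F(x) ≺ 0 simply means λmax(F(x)) < 0. The particular test point (−1, −0.5) is an arbitrary choice.

```python
import math

def F(x1, x2):
    # third example: I + x1*[2 -1; -1 2] + x2*[1 0; 0 0]
    return [[1 + 2 * x1 + x2, -x1],
            [-x1, 1 + 2 * x1]]

def lam_max(M):
    # largest eigenvalue of a symmetric 2x2 matrix via trace/determinant
    t = M[0][0] + M[1][1]
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (t + math.sqrt(t * t - 4 * d)) / 2

infeasible_at_origin = lam_max(F(0.0, 0.0)) >= 0   # F(0) = I is not < 0
feasible_point = lam_max(F(-1.0, -0.5)) < 0        # so the LMI is feasible
print(infeasible_at_origin, feasible_point)
```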


Main LMI problems

1 LMI feasibility problem: Test whether there exist x1, . . . , xn such that F(x) ≺ 0.

2 LMI optimization problem: Minimize f(x) over all x for which the LMI F(x) ≺ 0 is satisfied.

How is this solved? F(x) ≺ 0 is feasible iff min_x λmax(F(x)) < 0, which involves minimizing the function

x ↦ λmax(F(x))

• Possible because this function is convex!

• There exist efficient algorithms (Interior point, ellipsoid).


Linear Matrix Inequalities

Definition

(More generally:) A linear matrix inequality is an inequality

F (X) ≺ 0

where F is an affine function mapping a finite-dimensional vector space X to the set H of Hermitian matrices.

• Allows defining matrix-valued LMI's.

• F affine means F (X) = F0 + T (X) with T a linear map (a matrix).

• With Ej a basis of X, any X ∈ X can be expanded as X = Σ_{j=1}^n xj Ej, so that

F(X) = F0 + T(X) = F0 + Σ_{j=1}^n xj Fj with Fj = T(Ej)

which is the standard form.


Why are LMI’s interesting?

• Reason 1: LMI's define convex constraints on x, i.e., S := {x | F(x) ≺ 0} is convex.
Indeed, F(αx1 + (1 − α)x2) = αF(x1) + (1 − α)F(x2) ≺ 0.

• Reason 2: The solution set of multiple LMI's

F1(x) ≺ 0, . . . , Fk(x) ≺ 0

is convex and representable as one single LMI

F(x) = diag(F1(x), . . . , Fk(x)) ≺ 0

Allows to combine LMI's!

• Reason 3: Incorporate affine constraints such as
F(x) ≺ 0 and Ax = b
F(x) ≺ 0 and x = Ay + b for some y
F(x) ≺ 0 and x ∈ S with S an affine set.


Why are LMI’s interesting?

Reason 3: An affinely constrained LMI is again an LMI

Combine the LMI constraint F(x) ≺ 0 with x ∈ S, S affine:

• Write S = x0 + M with M a linear subspace of dimension k
• Let {ej}, j = 1, . . . , k, be a basis for M
• Write F(x) = F0 + T(x) with T linear

• Then for any x ∈ S, writing x = x0 + Σ_{j=1}^k x̃j ej in this basis:

F(x) = F0 + T(x0 + Σ_{j=1}^k x̃j ej) = F0 + T(x0) + Σ_{j=1}^k x̃j T(ej) = G0 + x̃1G1 + . . . + x̃kGk = G(x̃)

where F0 + T(x0) is constant and the sum is linear in x̃.

Result: x̃ is unconstrained and has lower dimension than x!!


Why are LMI’s interesting?

Reason 4: conversion of nonlinear constraints to linear ones

Theorem (Schur complement)

Let F be an affine function with

F(x) = [ F11(x) F12(x) ; F21(x) F22(x) ], where F11(x) is square.

Then

F(x) ≺ 0 ⇐⇒ F11(x) ≺ 0 and F22(x) − F21(x)[F11(x)]⁻¹F12(x) ≺ 0
       ⇐⇒ F22(x) ≺ 0 and F11(x) − F12(x)[F22(x)]⁻¹F21(x) ≺ 0
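A numerical spot check of the theorem with scalar blocks (illustrative Python, not from the slides; the test matrices are arbitrary): for a symmetric 2 × 2 matrix, negative definiteness can be tested via trace and determinant, and the result agrees with the Schur complement test in every case.

```python
def neg_def_2x2(M):
    # a symmetric 2x2 matrix is negative definite iff trace < 0 and det > 0
    return M[0][0] + M[1][1] < 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0

def schur_test(f11, f12, f21, f22):
    # scalar blocks: F < 0 iff F11 < 0 and F22 - F21 * F11^{-1} * F12 < 0
    return f11 < 0 and f22 - f21 * (1.0 / f11) * f12 < 0

cases = [(-2.0, 1.0, 1.0, -1.0),   # negative definite
         (-2.0, 1.0, 1.0, 0.5),    # indefinite
         (-1.0, 2.0, 2.0, -3.0)]   # indefinite (Schur complement is +1)
results = [neg_def_2x2([[f11, f12], [f21, f22]]) == schur_test(f11, f12, f21, f22)
           for f11, f12, f21, f22 in cases]
print(results)
```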


First examples in control

Example 1: Stability

Consider the autonomous system

ẋ = Ax.

Verify stability through feasibility. We have:

ẋ = Ax asymptotically stable ⇐⇒ [ −X 0 ; 0 AᵀX + XA ] ≺ 0 feasible

Here X = Xᵀ defines a Lyapunov function

V(x) := xᵀXx

for the flow ẋ = Ax.
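A concrete sketch in pure Python (the matrix A and the right-hand side −I are arbitrary choices for illustration): for a Hurwitz A, solving AᵀX + XA = −I entrywise yields X ≻ 0, so the LMI above is feasible.

```python
# A has eigenvalues -1 and -2, so dx/dt = Ax is asymptotically stable
A = [[0.0, 1.0], [-2.0, -3.0]]

# Writing A'X + XA = -I entrywise for X = [[a, b], [b, c]] and this A gives
#   -4b = -1,   a - 3b - 2c = 0,   2b - 6c = -1
b = 0.25
c = (2 * b + 1) / 6
a = 3 * b + 2 * c
X = [[a, b], [b, c]]

# X > 0: positive leading principal minors (Sylvester's criterion)
pos_def = X[0][0] > 0 and X[0][0] * X[1][1] - X[0][1] * X[1][0] > 0

# residual of A'X + XA + I, computed entrywise from the same equations
residual = max(abs(-4 * b + 1), abs(a - 3 * b - 2 * c), abs(2 * b - 6 * c + 1))
print(pos_def, residual)
```

So V(x) = xᵀXx with this X decreases along every trajectory, as the theorem predicts.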


First examples in control

Example 2: Joint stabilization

Given (A1, B1), . . . , (Ak, Bk), find F such that A1 + B1F, . . . , Ak + BkF are asymptotically stable.

Equivalent to finding F, X1, . . . , Xk such that for j = 1, . . . , k:

[ −Xj 0 ; 0 (Aj + BjF)Xj + Xj(Aj + BjF)ᵀ ] ≺ 0   (not an LMI!!)

Sufficient condition: X = X1 = . . . = Xk, K = FX, yields

[ −X 0 ; 0 AjX + XAjᵀ + BjK + KᵀBjᵀ ] ≺ 0   (an LMI!!)

Set the feedback F = KX⁻¹.


First examples in control

Example 3: Eigenvalue problem
Given F : V → S affine, minimize over all x

f(x) = λmax(F(x)).

Observe that, with γ > 0, and using the Schur complement:

λmax(Fᵀ(x)F(x)) < γ² ⇔ (1/γ)Fᵀ(x)F(x) − γI ≺ 0 ⇔ [ −γI Fᵀ(x) ; F(x) −γI ] ≺ 0

We can define

y := (x, γ),  G(y) := [ −γI Fᵀ(x) ; F(x) −γI ],  g(y) := γ,

then G is affine in y and min_x f(x) = min_{G(y)≺0} g(y). This is an LP!



Truss topology design

[Figure: a truss structure]


Trusses

• Trusses consist of straight members (‘bars’) connected at joints.

• One distinguishes free and fixed joints.

• Connections at the joints can rotate.

• The loads (or the weights) are assumed to be applied at the free joints.

• This implies that all internal forces are directed along the members (so no bending forces occur).

• The construction reacts according to the principle of statics: the sum of the forces in any direction, and the moments of the forces about any joint, are zero.

• This results in a displacement of the joints and a new tension distribution in the truss.

Many applications (roofs, cranes, bridges, space structures, . . . ) !!

Design your own bridge


Truss topology design

Problem features:

• Connect nodes by N bars of lengths ℓ = col(ℓ1, . . . , ℓN) (fixed) and cross sections s = col(s1, . . . , sN) (to be designed).

• Impose bounds on the cross sections, ak ≤ sk ≤ bk, and on the total volume, ℓᵀs ≤ v (and hence an upper bound on the total weight of the truss). Let a = col(a1, . . . , aN) and b = col(b1, . . . , bN).

• Distinguish fixed and free nodes.

• Apply external forces f = col(f1, . . . , fM) at some free nodes. These result in node displacements d = col(d1, . . . , dM).

The mechanical model defines the relation A(s)d = f, where A(s) ≽ 0 is the stiffness matrix, which depends linearly on s.

Goal:

Maximize stiffness or, equivalently, minimize the elastic energy fᵀd


Truss topology design

Problem

Find s ∈ RN which minimizes the elastic energy fᵀd subject to the constraints

A(s) ≻ 0, A(s)d = f, a ≤ s ≤ b, ℓᵀs ≤ v

• Data: total volume v > 0, node forces f, bounds a, b, lengths ℓ, and symmetric matrices A1, . . . , AN that define the linear stiffness matrix A(s) = s1A1 + . . . + sNAN.

• Decision variables: cross sections s and displacements d (both vectors).

• Cost function: stored elastic energy d ↦ fᵀd.

• Constraints:
  • Semi-definite constraint: A(s) ≻ 0
  • Non-linear equality constraint: A(s)d = f
  • Linear inequality constraints: a ≤ s ≤ b and ℓᵀs ≤ v.


From truss topology design to LMI’s

• First eliminate the equality constraint A(s)d = f:

minimize fᵀ(A(s))⁻¹f
subject to A(s) ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

• Push the objective into the constraints with an auxiliary variable γ:

minimize γ
subject to γ > fᵀ(A(s))⁻¹f, A(s) ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

• Apply the Schur lemma to linearize:

minimize γ
subject to [ γ fᵀ ; f A(s) ] ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

Note that the latter is an LMI optimization problem, as all constraints on s are formulated as LMI's!!


Yalmip coding for LMI optimization problem

Equivalent LMI optimization problem:

minimize γ
subject to [ γ fᵀ ; f A(s) ] ≻ 0, ℓᵀs ≤ v, a ≤ s ≤ b

The following YALMIP code solves this problem:

gamma=sdpvar(1,1); x=sdpvar(N,1,'full');   % decision variables
lmi=set([gamma f'; f A*diag(x)*A']>0);     % Schur-complement LMI
lmi=lmi+set(l'*x<=v);                      % total volume bound
lmi=lmi+set(a<=x<=b);                      % cross-section bounds
options=sdpsettings('solver','csdp');
solvesdp(lmi,gamma,options);               % minimize gamma subject to lmi
s=double(x);                               % extract the optimal cross sections


Result: optimal truss

[Figure: the resulting optimal truss]


Useful software:

General purpose MATLAB interface Yalmip

• Free code developed by J. Lofberg accessible here

Get Yalmip now

Run yalmipdemo.m for a comprehensive introduction. Run yalmiptest.m to test your settings.

• Yalmip uses the usual Matlab syntax to define optimization problems. Basic commands: sdpvar, set, sdpsettings and solvesdp. Truly easy to use!!!

• Yalmip needs to be connected to a solver for semi-definite programming. There exist many solvers:

SeDuMi, PENOPT, OOQP, DSDP, CSDP, MOSEK

• Alternative: Matlab's LMI toolbox for dedicated control applications.


Gallery

Joseph-Louis Lagrange (1736), Aleksandr Mikhailovich Lyapunov (1857)

to next class
