
ECE5580: Multivariable Control in the Frequency Domain 3–1

Matrix Theory and Norms

- In this chapter, we will review some important fundamentals of matrix algebra necessary for the development of multivariable frequency-response techniques

- Importantly, we'll focus on the geometric constructs of the:

  - eigenvalue-eigenvector decomposition
  - singular value decomposition

- Additionally, we'll discuss the notion of norms

Basic Concepts

Complex numbers

- Define the complex scalar quantity $c = \alpha + j\beta$, where $\alpha = \mathrm{Re}\{c\}$ and $\beta = \mathrm{Im}\{c\}$

- We may express $c$ in polar form, $c = |c|\,\angle c$, where $|c|$ denotes the magnitude (or modulus) of $c$ and $\angle c$ its phase angle:

  $$|c| = \sqrt{\bar{c}\,c} = \sqrt{\alpha^2 + \beta^2}, \qquad \angle c = \arctan\left(\frac{\beta}{\alpha}\right)$$

  Here, the $\bar{c}$ notation indicates complex conjugation.
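The polar-form relations above can be sketched numerically; a minimal check in Python, using an illustrative value $c = 3 + j4$ (not from the notes):

```python
import math

c = 3 + 4j                                        # alpha = 3, beta = 4 (illustrative)
magnitude = abs(c)                                # |c| = sqrt(alpha^2 + beta^2)
conj_check = math.sqrt((c.conjugate() * c).real)  # |c| = sqrt(cbar * c)
phase = math.atan2(c.imag, c.real)                # phase angle, in radians

print(magnitude, conj_check)   # 5.0 5.0
print(round(phase, 4))         # 0.9273
```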

Lecture notes prepared by M. Scott Trimboli. Copyright © 2016, M. Scott Trimboli


Complex vectors and matrices

- A complex column vector $a$ with $m$ components may be written

  $$a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix}$$

  where the $a_i$ are complex scalars

- We define the following transpose operations:

  $$a^T = \begin{bmatrix} a_1 & a_2 & \cdots & a_m \end{bmatrix}, \qquad a^* = \begin{bmatrix} \bar{a}_1 & \bar{a}_2 & \cdots & \bar{a}_m \end{bmatrix}$$

  - The latter of these is often called the complex-conjugate or Hermitian transpose

- Similarly, we may define a complex $\ell \times m$ matrix $A$,

  $$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{\ell 1} & a_{\ell 2} & \cdots & a_{\ell m} \end{bmatrix}$$

  with elements $a_{ij} = \mathrm{Re}\{a_{ij}\} + j\,\mathrm{Im}\{a_{ij}\}$

- Here, $\ell$ denotes the number of rows (outputs), and $m$ the number of columns (inputs)

- Similar to the vector case, we define the following operations:

  - $A^T$ = transpose of $A$
  - $\bar{A}$ = conjugate of $A$
  - $A^*$ = conjugate transpose of $A$

- MATRIX INVERSE:

  $$A^{-1} = \frac{\mathrm{adj}\,A}{\det A}$$

  where $\mathrm{adj}\,A$ is the classical adjoint ("adjugate") of $A$, computed as the transpose of the matrix of cofactors $c_{ij}$ of $A$,

  $$c_{ij} = [\mathrm{adj}\,A]_{ji} \triangleq (-1)^{i+j}\det A_{ij}$$

  Here $A_{ij}$ is the submatrix formed by deleting row $i$ and column $j$ of $A$.

Example 3.1

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad \det A = a_{11}a_{22} - a_{12}a_{21}$$

$$A^{-1} = \frac{1}{\det A}\begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}$$
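The $2 \times 2$ adjugate formula of Example 3.1 is easy to check numerically; a short Python sketch with illustrative entries (not from the notes):

```python
def inv2x2(a11, a12, a21, a22):
    """Invert a 2x2 matrix via the adjugate formula of Example 3.1."""
    det = a11 * a22 - a12 * a21
    # adj A = transpose of the cofactor matrix = [[a22, -a12], [-a21, a11]]
    return [[a22 / det, -a12 / det],
            [-a21 / det, a11 / det]]

Ainv = inv2x2(4.0, 7.0, 2.0, 6.0)   # det = 24 - 14 = 10
print(Ainv)  # [[0.6, -0.7], [-0.2, 0.4]]
```

Multiplying back, $[\,4\ 7;\ 2\ 6\,] \cdot A^{-1}$ gives the identity, confirming the formula.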

Some Determinant Identities

- $\det A = \sum_{i=1}^{n} a_{ij} c_{ij}$ (expansion along column $j$)

- $\det A = \sum_{j=1}^{n} a_{ij} c_{ij}$ (expansion along row $i$)

- Let $c$ be a complex scalar and $A$ an $n \times n$ matrix; then

  $$\det(cA) = c^n \det A$$


- Let $A$ be a nonsingular square matrix; then

  $$\det A^{-1} = \frac{1}{\det A}$$

- Let $A$ and $B$ be matrices of compatible dimension such that $AB$ and $BA$ are square; then

  $$\det(I + AB) = \det(I + BA)$$

- For block-triangular matrices,

  $$\det\begin{bmatrix} A & B \\ 0 & D \end{bmatrix} = \det\begin{bmatrix} A & 0 \\ C & D \end{bmatrix} = \det(A)\,\det(D)$$

- Schur's formula for the determinant of a partitioned matrix:

  $$\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A)\,\det\left(D - CA^{-1}B\right) = \det(D)\,\det\left(A - BD^{-1}C\right)$$

  where it is assumed that $A$ and/or $D$ is nonsingular.
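The two identities above are easy to spot-check numerically; a numpy sketch on small illustrative matrices (values are my own, not from the notes):

```python
import numpy as np

# det(I + AB) = det(I + BA) for non-square A, B of compatible dimension
A = np.array([[1.0, 2.0], [0.5, 1.0], [2.0, 0.0]])   # 3 x 2
B = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 1.0]])     # 2 x 3
lhs = np.linalg.det(np.eye(3) + A @ B)
rhs = np.linalg.det(np.eye(2) + B @ A)
print(np.isclose(lhs, rhs))   # True

# Schur's formula on a partitioned 4x4 matrix [[P, Q], [R, S]], P nonsingular
P = np.array([[2.0, 0.0], [1.0, 3.0]])
Q = np.eye(2)
R = np.array([[0.0, 1.0], [2.0, 0.0]])
S = np.array([[4.0, 0.0], [0.0, 5.0]])
M = np.block([[P, Q], [R, S]])
schur = np.linalg.det(P) * np.linalg.det(S - R @ np.linalg.inv(P) @ Q)
print(np.isclose(np.linalg.det(M), schur))   # True
```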

Vector Spaces

- We'll start with the simplest example of a vector space: the Euclidean $n$-dimensional space $\mathbb{R}^n$, e.g.,

  $$u = [u_1, u_2, \ldots, u_n] \quad\text{with } u_i \in \mathbb{R} \text{ for } i = 1, 2, \ldots, n$$

- $\mathbb{R}^n$ exhibits some special properties that lead to a general definition of a vector space:

  - $\mathbb{R}^n$ is an additive abelian group
  - Any scaled $n$-tuple $ku$ is also in $\mathbb{R}^n$


  - Given any $k \in \mathbb{R}$ and $u \in \mathbb{R}^n$, we can obtain by scalar multiplication the unique vector $ku$

In abstract algebra, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written (the axiom of commutativity). Abelian groups generalize the arithmetic of addition of integers. [Jacobson, Nathan, Basic Algebra I, 2nd ed., 2009]

- Some further results:

  - $k(u + v) = ku + kv$ (i.e., scalar multiplication is distributive over addition)
  - $(k + l)u = ku + lu$
  - $(kl)u = k(lu)$
  - $1 \cdot u = u$

- To define a vector space in general, we need the following four conditions:

  1. A set $V$ which forms an additive abelian group
  2. A set $F$ which forms a field
  3. A scalar multiplication of $k \in F$ with $u \in V$ generating a unique $ku \in V$
  4. The four scalar-multiplication axioms listed above

- Then we say that $V$ is a vector space over $F$.


Examples:

- The set of all ordered $n$-tuples $u = [u_1, u_2, \ldots, u_n]$ with $u_i \in \mathbb{C}$ forms a vector space $\mathbb{C}^n$ over $\mathbb{C}$

- The set of all $n \times m$ real matrices $\mathbb{R}^{n \times m}$ is a vector space over $\mathbb{R}$

Linear Spans, Spanning Sets and Bases

- Let $S$ be a set of vectors, $S = \{u_1, u_2, \ldots, u_m\} \subset V$, where $V$ is a vector space over some field $F$

- Define $L(S) = \{v \in V : v = a_1u_1 + a_2u_2 + \cdots + a_mu_m \text{ with } a_i \in F\}$

- Thus $L(S)$ is the set of all linear combinations of the vectors $u_i$

- Then,

  - $L(S)$ is a subspace of $V$ (note, it is closed under vector addition and scalar multiplication)
  - $L(S)$ is the smallest subspace containing $S$

- We say that $L(S)$ is the linear span of $S$, or that $L(S)$ is spanned by $S$

- Conversely, $S$ is called a spanning set for $L(S)$

IMPORTANT: A spanning set $v_1, v_2, \ldots, v_m$ of $V$ is a basis if the $v_i$ are linearly independent.

NOTE: If $\dim(V) = m$, then any set $v_1, v_2, \ldots, v_n \in V$ with $n > m$ must be linearly dependent.


Linear Transformations, Matrix Representation & Matrix Algebra

Linear Mappings

- Let $X, Y$ be vector spaces over $F$ and let $T$ be a mapping of $X$ into $Y$,

  $$T : X \to Y$$

- Then $T \in L(X, Y)$, the set of linear mappings (or transformations) of $X$ into $Y$, if

  $$T(u + v) = T(u) + T(v)$$

  and

  $$T(ku) = k\,T(u), \quad \text{for all } u, v \in X \text{ and } k \in F$$

  $\Rightarrow$ linear transformation

Matrix Representation of Linear Mappings

- Linear mappings can be made concrete and conveniently represented by matrix multiplications...

  - once $x \in X$ and $y \in Y$ are both expressed in terms of their coordinates relative to appropriate basis sets

- We can now show that $A$ is a matrix representation of the linear transformation $T$:

  - $y = T(x)$ is equivalent to $y = Ax$

- Note that when dealing with actual vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$, an all-time favorite basis set is the standard basis, which consists of vectors $e_i$ that are zero everywhere except for a "1" in the $i$th position


Change of Basis

- Let $e'_1, e'_2, \ldots, e'_m$ be a new basis for $X$

- Then each $e'_i$ is in $X$ and can be written as a linear combination of the $e_i$:

  $$e'_i = \sum_{j=1}^{n} p_{ij} e_j \quad\Longleftrightarrow\quad E' = EP$$

  where $P$ is a square, invertible matrix with elements $p_{ji}$

- Thus we can write

  $$E = E'P^{-1}$$

  meaning that $P^{-1}$ transforms the old coordinates into the new coordinates.

- We define a similarity transformation as

  $$T' = P^{-1}TP$$

Image, Kernel, Rank & Nullity

Isomorphism. A linear transformation is an isomorphism if the mapping is "one-to-one".

Image. The image of a linear transformation $T$ is $\mathrm{Image}(T) = \{y \in Y : T(x) = y,\ x \in X\}$

Kernel. The kernel of a linear transformation $T$ is $\ker(T) = \{x \in X : T(x) = 0\}$

- $\mathrm{Image}(T)$ is a subspace of $Y$ (the co-domain) and $\ker(T)$ is a subspace of $X$ (the domain)


Example 3.2

$$A : \mathbb{R}^3 \to \mathbb{R}^2 \quad\text{with}\quad A = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$$

- It is clear that the matrix $A$ transforms a $3 \times 1$ input vector into a $2 \times 1$ output vector; two is the dimension of the image of $A$

- The vector

  $$n = \alpha \begin{bmatrix} -1 \\ -1 \\ 1 \end{bmatrix}$$

  spans the null space of $A$
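Example 3.2 can be confirmed with a couple of lines of numpy: the given vector is annihilated by $A$, and the image has dimension 2.

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
n = np.array([-1.0, -1.0, 1.0])

An = A @ n                             # A n = 0  ->  n lies in ker(A)
r = np.linalg.matrix_rank(A)           # dim Image(A)
print(An)   # [0. 0.]
print(r)    # 2
```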

Rank of a Matrix

- The rank of a matrix is defined as the number of linearly independent columns (or, equivalently, the number of linearly independent rows) contained in the matrix

- Note that in the case of $T \in L(X, Y)$ defined by $x \to y = Ax$, the dimension of $\mathrm{Image}(T)$ is the dimension of the linear span of the columns of $A$, which equals the number of linearly independent column vectors

- An alternative but equivalent way to define the rank of a square matrix is by way of minors:

  - If an $n \times n$ matrix $A$ has a nonzero $r \times r$ minor, while every minor of order higher than $r$ is zero, then $A$ has rank $r$.


Example 3.3

$$A = \begin{bmatrix} 1 & 1 & 0 \\ 0 & -1 & 1 \\ 1 & 1 & 0 \\ 3 & 2 & 1 \end{bmatrix}$$

- Since the dimension of $A$ is $4 \times 3$, its rank can be no larger than 3

- But since column 2 plus column 3 equals column 1, the rank reduces to 2

- All four $3 \times 3$ minors of $A$ equal 0, but $A$ has nonzero minors of dimension $2 \times 2$, indicating that $A$ has rank 2

Corollary: A square $n \times n$ matrix is full rank (nonsingular) if and only if its determinant is nonzero.

Some additional properties of determinants:

Let $A, B \in \mathbb{R}^{n \times n}$.

1. $\det(A) = 0$ if and only if the columns (rows) of $A$ are linearly dependent

   (a) If a column (row) of $A$ is zero, then $\det(A) = 0$
   (b) $\det(A) = \det(A^T)$
   (c) $\det(AB) = \det(A)\,\det(B)$
   (d) Swapping two rows (columns) changes the sign of $\det(A)$.

       i. Scaling a row (column) by $k$ scales the determinant by $k$.
       ii. Adding a multiple of one row (column) onto another does not affect the determinant.


   (e) If $A = \mathrm{diag}\{a_{ii}\}$, or $A$ is lower or upper triangular with diagonal elements $a_{ii}$, then $\det(A) = a_{11}\,a_{22} \cdots a_{nn}$

Eigenvalues and Eigenvectors

What is an Eigenvalue?

- Often in engineering we deal with matrix functions, e.g.,

  $$G(s) = C(sI - A)^{-1}B$$

- Here, $A$ is a square real matrix, and $G(s)$ is the transfer-function matrix of a dynamic system with many inputs and many outputs

- Furthermore,

  $$y = e^{At}x_0$$

  is the form of the solution of $n$ first-order ordinary differential equations

- For scalar systems, we know how to handle terms like $\frac{1}{s - a}$, whose inverse Laplace transform is just $e^{at}$

- But how do we handle the matrix $(sI - A)^{-1}$? And how do we evaluate the matrix exponential $e^{At}$?

  - Remember that an $n \times n$ matrix $A$ can be thought of as a matrix representation of a linear transformation on $\mathbb{R}^n$, relative to some basis (e.g., the standard basis)
  - Yet we know that a change of basis, described in matrix form by $E' = EP$, will transform $A$ to $A' = P^{-1}AP$

- Suppose that it were possible to choose our new basis such that $P^{-1}AP$ were a diagonal matrix $D$


  - This transforms a general matrix problem into a diagonal (scalar) one: it now consists of a collection of simple elements, $1/(s - d_i)$ and $e^{d_i t}$, which we can easily solve

- Such a basis exists for almost all matrices $A$ and is defined by the set of eigenvectors of $A$

- The corresponding diagonal elements $d_i$ of $D$ turn out to be the eigenvalues of $A$

Eigenvalue-Eigenvector Relationship

- A linear transformation $T$ maps $x \to y = Ax$, where in general $x$ and $y$ have different directions

- There exist special directions $w$, however, which are invariant under the mapping:

  $$Aw = \lambda w, \quad\text{where } \lambda \text{ is a scalar}$$

- Such $A$-invariant directions are called eigenvectors of $A$, and the corresponding $\lambda$'s are called eigenvalues of $A$; eigen in German means "own" or "self"

- The above equation can be rearranged as

  $$(\lambda I - A)\,w = 0$$

  and thus implies that $w$ lies in the kernel of $(\lambda I - A)$

- A nontrivial solution for $w$ can be obtained if and only if $\ker(\lambda I - A) \neq \{0\}$, which is only possible if $(\lambda I - A)$ is singular, i.e.,

  $$\det(\lambda I - A) = 0$$


- This equation is called the characteristic equation of $A$, and it defines all possible values of the eigenvalues $\lambda$

- $\lambda$ may assume any one of the $n$ roots $\lambda_i$ of the characteristic equation; these are thus called characteristic roots or simply the eigenvalues of $A$

- Interesting and important properties of the eigenvalues of a matrix $A$:

  - $\mathrm{trace}(A) \equiv$ sum of the diagonal elements of $A = \sum_{i=1}^{n} \lambda_i =$ sum of the eigenvalues
  - $\det(A) \equiv$ determinant of $A = \prod_{i=1}^{n} \lambda_i =$ product of the eigenvalues

- Once the values of $\lambda$ have been determined, the corresponding eigenvectors may be calculated by solving the set of homogeneous equations:

  $$(\lambda I - A)\begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{pmatrix} = 0, \quad\text{where } w_i \text{ denotes the } i\text{th element of } w$$

NOTE: If $Aw = \lambda w$, then $A(kw) = kAw = k\lambda w = \lambda(kw)$, so that if $w$ is an eigenvector of $A$, then so also will be any scalar multiple of $w$; the important property of $w$ is its direction, not its scaling or length.

Example 3.4:

$$A = \begin{bmatrix} 1 & -3 & 5 \\ 0 & 1 & 2 \\ 0 & -1 & 4 \end{bmatrix}$$


- Solving $\det(\lambda I - A) = 0$ gives

  $$\det(\lambda I - A) = \lambda^3 - 6\lambda^2 + 11\lambda - 6 = 0$$

  and thus, $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 3$

- Solving for the eigenvectors, we obtain

  $$w_1 = \alpha_1\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \qquad w_2 = \alpha_2\begin{bmatrix} 1 \\ -2 \\ -1 \end{bmatrix}, \qquad w_3 = \alpha_3\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$

  where the $\alpha$'s are arbitrary scaling factors.
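Example 3.4 can be reproduced with numpy's eigensolver; note that numpy normalizes each eigenvector to unit length, which is just one particular choice of the scaling $\alpha_i$:

```python
import numpy as np

A = np.array([[1.0, -3.0, 5.0],
              [0.0,  1.0, 2.0],
              [0.0, -1.0, 4.0]])
evals, W = np.linalg.eig(A)        # eigenvectors are the columns of W
print(np.sort(evals.real))          # approximately [1. 2. 3.]

# Each column satisfies A w = lambda w to machine precision
for k in range(3):
    assert np.allclose(A @ W[:, k], evals[k] * W[:, k])
```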


IMPORTANT EIGENVALUE/EIGENVECTOR PROPERTIES:

1. The eigenvalues of a diagonal or triangular matrix are equal to the diagonal elements of the matrix.

2. The eigenvectors of a diagonal matrix (with distinct diagonal elements) are the standard basis vectors.

3. The eigenvalues of the identity matrix are all "1" and the eigenvectors are arbitrary.

4. The eigenvalues of a scalar matrix $kI$ are all $k$ and the eigenvectors are arbitrary.

5. Multiplying a matrix by a scalar $k$ has the effect of multiplying all its eigenvalues by $k$, but leaves the eigenvectors unaltered.

6. (Shift Theorem) Adding a scalar matrix $kI$ onto a matrix has the effect of adding $k$ to all its eigenvalues, but leaves the eigenvectors unaltered:

   $$\lambda_i(A + kI) = \lambda_i(A) + k$$

7. The eigenvalues of the inverse of a matrix are given by:

   $$\lambda_i\left(A^{-1}\right) = \frac{1}{\lambda_i(A)}$$

8. The eigenvalues of a matrix and its transpose are the same.

9. The left eigenvectors of a matrix $A$ are the right eigenvectors of $A^*$.

10. The eigenvalues/eigenvectors of a real matrix are real or appear in complex-conjugate pairs.

11. $\rho(A) \triangleq \max_i |\lambda_i(A)|$ is called the spectral radius of $A$.

12. Eigenvalues are invariant under similarity transformations.


Gershgorin’s Theorem

- The eigenvalues of the $n \times n$ matrix $A$ lie in the union of $n$ circles in the complex plane, each with center $a_{ii}$ and radius

  $$r_i = \sum_{j \neq i} \left|a_{ij}\right|$$

  - The radius equals the sum of the magnitudes of the off-diagonal elements in row $i$

- They also lie in the union of $n$ circles, each with center $a_{ii}$ and radius

  $$r_i = \sum_{j \neq i} \left|a_{ji}\right|$$

  - The radius equals the sum of the magnitudes of the off-diagonal elements in column $i$

Example 3.5

- Consider the matrix,

  $$A = \begin{bmatrix} -5 & 1 & 2 \\ 4 & 2 & -1 \\ -3 & -2 & 6 \end{bmatrix}$$

- The computed eigenvalues are

  $$\lambda = \begin{bmatrix} -5.1866 \\ 5.6081 \\ 2.5785 \end{bmatrix}$$

- Gershgorin bands for both row and column sums are depicted in the plots below (overlaid with the actual eigenvalues)
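A short numpy check of the theorem on the matrix of Example 3.5: every eigenvalue falls inside at least one of the row discs.

```python
import numpy as np

A = np.array([[-5.0,  1.0,  2.0],
              [ 4.0,  2.0, -1.0],
              [-3.0, -2.0,  6.0]])
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # row-sum disc radii

evals = np.linalg.eigvals(A)
in_disc = [any(abs(lam - c) <= r for c, r in zip(centers, radii))
           for lam in evals]
print(in_disc)   # [True, True, True]
```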


Eigen-Decomposition

- By convention, we collect all the eigenvectors of an $n \times n$ square matrix $A$ as the column vectors of the eigenvector matrix $W$:

  $$W = [w_1, w_2, \cdots, w_n]$$

- Upon inversion we can obtain the dual set of left eigenvectors as the row vectors of the dual eigenvector matrix $V = W^{-1}$:

  $$V = W^{-1} = \begin{bmatrix} v_1^T \\ v_2^T \\ \vdots \\ v_n^T \end{bmatrix}$$

- The left eigenvectors $v_i^T$ satisfy

  $$v_i^T(A - \lambda_i I) = 0 \quad\Longleftrightarrow\quad v_i^T A = \lambda_i v_i^T$$

- Combining the eigenvector and dual eigenvector matrices together, we get the eigenvalue/eigenvector decomposition (or characteristic decomposition):

  $$AW = W\Lambda$$

  where $\Lambda$ is a diagonal matrix containing the eigenvalues $\lambda_i$ of $A$.


  - Post-multiplying the above by $W^{-1}$, we obtain:

    $$A = W\Lambda W^{-1} = W\Lambda V$$

    $\Rightarrow$ This is also known as the spectral decomposition of $A$.
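The decomposition is easy to verify numerically; a numpy sketch reusing the matrix of Example 3.4:

```python
import numpy as np

A = np.array([[1.0, -3.0, 5.0],
              [0.0,  1.0, 2.0],
              [0.0, -1.0, 4.0]])
evals, W = np.linalg.eig(A)     # right eigenvectors as columns of W
Lam = np.diag(evals)
V = np.linalg.inv(W)            # rows of V are the left eigenvectors

print(np.allclose(A @ W, W @ Lam))      # True:  A W = W Lambda
print(np.allclose(W @ Lam @ V, A))      # True:  A = W Lambda W^{-1}
print(np.allclose(V[0] @ A, evals[0] * V[0]))   # True: left-eigenvector relation
```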

- The eigen-decomposition can be ill-conditioned with respect to its computation, as shown in the following example.

Example 3.6

$$A = \begin{bmatrix} 1 & \epsilon \\ 0 & 1.001 \end{bmatrix}$$

where by inspection, $\lambda_1 = 1$ and $\lambda_2 = 1.001$

- It is easy to see that

  $$\begin{bmatrix} 1 & \epsilon \\ 0 & 1.001 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1 \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

- Hence the first eigenvector is $w_1 = \begin{bmatrix} 1 & 0 \end{bmatrix}^T$

- From

  $$\begin{bmatrix} 1 & \epsilon \\ 0 & 1.001 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a + \epsilon b \\ 1.001\,b \end{bmatrix} = (1.001)\begin{bmatrix} a \\ b \end{bmatrix}$$

  we get

  $$\epsilon b = 0.001\,a, \qquad 1.001\,b = 1.001\,b$$

  which gives $a = 1000\,\epsilon b$


- Letting $b = 1$, the second eigenvector is $w_2 = \begin{bmatrix} 1000\epsilon & 1 \end{bmatrix}^T$

- The $w_2$ eigenvector has a component which varies 1000 times faster than $\epsilon$ does in the $A$ matrix

- In general, eigenvectors can be extremely sensitive to small changes in matrix elements when the eigenvalues are clustered closely together
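The sensitivity in Example 3.6 is visible numerically: a thousand-fold change in $\epsilon$ swings the second eigenvector from nearly parallel to $w_1$ to $45°$ away from it.

```python
import numpy as np

def w2(eps):
    """Eigenvector of [[1, eps], [0, 1.001]] for lambda = 1.001, rescaled so b = 1."""
    A = np.array([[1.0, eps], [0.0, 1.001]])
    evals, W = np.linalg.eig(A)
    v = W[:, np.argmax(evals)]      # pick the eigenvector for the larger eigenvalue
    return v / v[1]                 # normalize the second component to b = 1

print(w2(1e-6))   # approximately [0.001, 1]  (a = 1000 * eps)
print(w2(1e-3))   # approximately [1, 1]
```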

Some Special Matrices

- REAL SYMMETRIC MATRIX: $S^T = S$

  - Here $S$ has real eigenvalues $\lambda_i$, and hence real eigenvectors $w_i$, where the eigenvectors are orthonormal (i.e., $w_j^T w_i = 0$ for all $i \neq j$). Therefore its eigenvalue/eigenvector decomposition becomes:

    $$S = R\Lambda R^T,$$

    where $R^T R = RR^T = I$.

Corollary: Any quadratic form $\sum a_{ij}x_ix_j$ can be written as $x^T S x$ and is positive definite (i.e., positive for $x \neq 0$) if and only if $\lambda_i(S) > 0$ for all $i$.

- NORMAL MATRIX: $N^*N = NN^*$

  - If $(\lambda_i, w_i)$ is an eigen-pair of $N$, then $(\bar{\lambda}_i, w_i)$ is an eigen-pair of $N^*$
  - The eigenvectors of $N$ are orthogonal.

- HERMITIAN MATRIX: $H^* = H$

  - $H$ has real eigenvalues and orthogonal eigenvectors.


- UNITARY MATRIX: $U^*U = UU^* = I$

  - The eigenvalues of $U$ have unit modulus and the eigenvectors are orthogonal.

Jordan Form

- If the $n \times n$ matrix $A$ cannot be diagonalized, then it can always be brought into Jordan form via a similarity transformation,

  $$T^{-1}AT = J = \begin{bmatrix} J_1 & & 0 \\ & \ddots & \\ 0 & & J_q \end{bmatrix}$$

  where

  $$J_i = \begin{bmatrix} \lambda_i & 1 & & 0 \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda_i \end{bmatrix} \in \mathbb{R}^{n_i \times n_i}$$

  is called a Jordan block of size $n_i$ with eigenvalue $\lambda_i$. Note that $J$ is block-diagonal and upper bidiagonal.

Singular Value Decomposition

- Despite its extreme usefulness, the eigenvalue/eigenvector decomposition suffers from two main drawbacks:

  1. It can only handle square matrices
  2. Its computation may be sensitive to even small errors in the elements of a matrix

- To overcome both of these we can instead use the singular value decomposition (when appropriate).


- Let $M$ be any matrix of dimension $p \times m$. It can then be factorized as:

  $$M = U\Sigma V^*$$

  where $UU^* = I_p$ and $VV^* = I_m$. Here, $\Sigma$ is a rectangular matrix of dimension $p \times m$ which has nonzero entries only on its leading diagonal:

  $$\Sigma = \begin{bmatrix} \sigma_1 & 0 & \cdots & 0 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_N & 0 & \cdots & 0 \end{bmatrix}$$

  and $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_N \geq 0$.

- This factorization is called the singular value decomposition (SVD).

- The $\sigma_i$'s are called the singular values of $M$.

- The number of positive (nonzero) singular values is equal to the rank of the matrix $M$.

- The SVD can be computed very reliably, even for very large matrices, although it is computationally heavy.

  - In MATLAB it can be obtained by the function svd: [U,S,V] = svd(M).

- Matrices $U$ and $V$ are unitary matrices, i.e., they satisfy

  $$U^* = U^{-1} \qquad\text{and}\qquad |\lambda_i(U)| = 1 \quad \forall i$$

- The largest singular value of $M$, $\sigma_1$ (also denoted $\bar{\sigma}$), is an induced norm of the matrix $M$:

  $$\sigma_1 = \sup_{x \neq 0} \frac{\|Mx\|}{\|x\|}$$

- It is important to note that

  $$MM^* = U\Sigma^2 U^* \qquad\text{and}\qquad M^*M = V\Sigma^2 V^*$$

- These are eigen-decompositions, since $U^*U = I$ and $V^*V = I$ with $\Sigma$ diagonal.

SOME PROPERTIES

- $\sigma_i = \sqrt{\lambda_i(M^*M)} = \sqrt{\lambda_i(MM^*)}$ = $i$th singular value of $M$

- $v_i = w_i(M^*M)$ = $i$th input principal direction of $M$

- $u_i = w_i(MM^*)$ = $i$th output principal direction of $M$

Example 3.7 (Golub & Van Loan):

$$A = \begin{bmatrix} 0.96 & 1.72 \\ 2.28 & 0.96 \end{bmatrix} = U\Sigma V^T = \begin{bmatrix} 0.6 & 0.8 \\ 0.8 & -0.6 \end{bmatrix}\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 0.8 & -0.6 \\ 0.6 & 0.8 \end{bmatrix}^T$$

- Any real matrix, viewed geometrically, maps a unit-radius (hyper)sphere into a (hyper)ellipsoid.

- The singular values $\sigma_i$ give the lengths of the major semi-axes of the ellipsoid.
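Example 3.7 can be reproduced directly with numpy's svd (the numpy factors may differ from the ones printed above by sign flips of paired columns, which is an equally valid SVD):

```python
import numpy as np

A = np.array([[0.96, 1.72],
              [2.28, 0.96]])
U, s, Vt = np.linalg.svd(A)

print(s)   # approximately [3. 1.]
print(np.allclose(U @ np.diag(s) @ Vt, A))     # True: A = U Sigma V^T
print(np.allclose(U.T @ U, np.eye(2)))         # True: U is orthogonal
print(np.linalg.matrix_rank(A) == np.sum(s > 1e-12))  # True: rank = # nonzero sigmas
```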


- The output principal directions $u_i$ give the mutually orthogonal directions of these major axes.

- The input principal directions $v_i$ are mapped into the $u_i$ vectors with gain $\sigma_i$, such that $Av_i = \sigma_i u_i$.

[Figure: the unit circle of input directions $v_1$, $v_2$ (domain of the matrix, input) is mapped to an ellipse with semi-axes $\sigma_1 u_1$ and $\sigma_2 u_2$ (image of the matrix, output); here the gain along $u_1$ is 3.]

- Numerically, the SVD can be computed much more accurately than the eigenvectors, since only orthogonal transformations are used in the computation.

Singular value inequalities

- $\underline{\sigma}(A) \leq |\lambda_i(A)| \leq \bar{\sigma}(A)$

- $\bar{\sigma}(A^*) = \bar{\sigma}(A)$ and $\bar{\sigma}(A^T) = \bar{\sigma}(A)$

- $\bar{\sigma}(AB) \leq \bar{\sigma}(A)\,\bar{\sigma}(B)$

- $\underline{\sigma}(A)\,\bar{\sigma}(B) \leq \bar{\sigma}(AB)$ or $\bar{\sigma}(A)\,\underline{\sigma}(B) \leq \bar{\sigma}(AB)$

- $\underline{\sigma}(A)\,\underline{\sigma}(B) \leq \underline{\sigma}(AB)$

- $\max\left\{\bar{\sigma}(A),\ \bar{\sigma}(B)\right\} \leq \bar{\sigma}\begin{bmatrix} A \\ B \end{bmatrix} \leq \sqrt{2}\,\max\left\{\bar{\sigma}(A),\ \bar{\sigma}(B)\right\}$


- $\bar{\sigma}\begin{bmatrix} A \\ B \end{bmatrix} \leq \bar{\sigma}(A) + \bar{\sigma}(B)$

- $\bar{\sigma}\begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} = \max\left\{\bar{\sigma}(A),\ \bar{\sigma}(B)\right\}$

- $\sigma_i(A) - \bar{\sigma}(B) \leq \sigma_i(A + B) \leq \sigma_i(A) + \bar{\sigma}(B)$

Condition number

- The condition number of a matrix $A$ is defined as the ratio of the maximum to the minimum singular values,

  $$\kappa(A) = \frac{\bar{\sigma}(A)}{\underline{\sigma}(A)}$$
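A quick numpy check, reusing the matrix of Example 3.7 (for which $\bar{\sigma} = 3$ and $\underline{\sigma} = 1$); `np.linalg.cond` computes exactly this ratio by default:

```python
import numpy as np

A = np.array([[0.96, 1.72],
              [2.28, 0.96]])
s = np.linalg.svd(A, compute_uv=False)   # singular values, descending
kappa = s[0] / s[-1]                     # sigma_max / sigma_min

print(kappa)                              # approximately 3.0
print(np.isclose(kappa, np.linalg.cond(A)))   # True
```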


Vector & Matrix Norms

Vector Norms

- A vector norm on $\mathbb{C}^n$ is a function $f : \mathbb{C}^n \to \mathbb{R}$ with the following properties:

  - $f(x) \geq 0 \quad \forall\, x \in \mathbb{C}^n$
  - $f(x) = 0 \Leftrightarrow x = 0$
  - $f(x + y) \leq f(x) + f(y) \quad \forall\, x, y \in \mathbb{C}^n$
  - $f(\alpha x) = |\alpha|\,f(x) \quad \forall\, \alpha \in \mathbb{C},\ x \in \mathbb{C}^n$

- The p-norms are defined by

  $$\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}$$

  where $p \geq 1$

- Three norms of particular interest to control theory are:

  - VECTOR 1-NORM: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$

  - VECTOR 2-NORM (EUCLIDEAN NORM): $\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2} = \sqrt{x^*x}$

  - VECTOR $\infty$-NORM: $\|x\|_\infty = \max_i |x_i|$

- The following relationships hold between the 1-, 2-, and $\infty$-norms of vectors:


  - $\|x\|_2 \leq \|x\|_1 \leq \sqrt{n}\,\|x\|_2$
  - $\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\,\|x\|_\infty$
  - $\|x\|_\infty \leq \|x\|_1 \leq n\,\|x\|_\infty$
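These relations can be spot-checked on any vector; a numpy sketch with an arbitrary illustrative $x$:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])
n = len(x)
n1 = np.linalg.norm(x, 1)          # 8.0
n2 = np.linalg.norm(x, 2)          # sqrt(26) ~ 5.099
ninf = np.linalg.norm(x, np.inf)   # 4.0

print(n2 <= n1 <= np.sqrt(n) * n2)        # True
print(ninf <= n2 <= np.sqrt(n) * ninf)    # True
print(ninf <= n1 <= n * ninf)             # True
```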

- The following plot shows contours for the vector p-norms defined above ($n = 2$)

  [Figure: unit-norm contours on the $(x_1, x_2)$ plane over $[-1, 1] \times [-1, 1]$: a diamond for $p = 1$, a circle for $p = 2$, and a square for $p = \infty$.]

Matrix Norms

- Matrix norms satisfy the same properties outlined above for vector norms

- The matrix norms normally used in control theory are likewise the 1-norm, 2-norm, and $\infty$-norm

- Another useful norm is the Frobenius norm, defined as follows:

  $$\|A\|_F = \left(\sum_{i=1}^{m}\sum_{j=1}^{n} |a_{ij}|^2\right)^{1/2} \quad \forall\, A \in \mathbb{C}^{m \times n}$$

  Here $a_{ij}$ denotes the element of $A$ in the $i$th row, $j$th column.


- The p-norms for matrices are defined in terms of induced norms; that is, they are induced by the p-norms on vectors.

- One way to think of the norm $\|A\|_p$ is as the maximum gain of the matrix $A$, as measured by the p-norms of vectors acting as inputs to the matrix $A$:

  $$\|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p} \quad \forall\, A \in \mathbb{C}^{m \times n}$$

  - Norms defined in this manner are known as induced norms
  - NOTE: the supremum is used here instead of the maximum, since the maximum value may not be achieved

- The matrix norms are computed by the following expressions:

  $$\|A\|_1 = \max_j \sum_{i=1}^{m} |a_{ij}| \quad\text{(maximum column sum)}$$

  $$\|A\|_2 = \bar{\sigma}(A)$$

  $$\|A\|_\infty = \max_i \sum_{j=1}^{n} |a_{ij}| \quad\text{(maximum row sum)}$$
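The three expressions can be checked against numpy's built-in matrix norms; a short sketch with an illustrative $2 \times 2$ matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

one_norm = np.abs(A).sum(axis=0).max()             # max column sum = 6
inf_norm = np.abs(A).sum(axis=1).max()             # max row sum = 7
two_norm = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value

print(one_norm == np.linalg.norm(A, 1))            # True
print(inf_norm == np.linalg.norm(A, np.inf))       # True
print(np.isclose(two_norm, np.linalg.norm(A, 2)))  # True
```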

SOME USEFUL MATRIX NORM RELATIONSHIPS:

- $\|AB\|_p \leq \|A\|_p\,\|B\|_p$ (multiplicative property)

- $\|A\|_2 \leq \|A\|_F \leq \sqrt{n}\,\|A\|_2$

- $\max_{i,j} |a_{ij}| \leq \|A\|_2 \leq \sqrt{mn}\,\max_{i,j} |a_{ij}|$


- $\|A\|_2 \leq \sqrt{\|A\|_1\,\|A\|_\infty}$

- $\frac{1}{\sqrt{n}}\,\|A\|_\infty \leq \|A\|_2 \leq \sqrt{m}\,\|A\|_\infty$

- $\frac{1}{\sqrt{m}}\,\|A\|_1 \leq \|A\|_2 \leq \sqrt{n}\,\|A\|_1$

Spectral Radius

- The spectral radius was defined earlier as

  $$\rho(A) = \max_i |\lambda_i(A)|$$

- Note: the spectral radius is not a norm!

Example 3.8

- Consider the matrices

  $$A_1 = \begin{bmatrix} 1 & 0 \\ 10 & 1 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 1 & 10 \\ 0 & 1 \end{bmatrix}$$

- It is obvious that $\rho(A_1) = 1$ and $\rho(A_2) = 1$

- But,

  $$\rho(A_1 + A_2) = 12, \qquad \rho(A_1 A_2) = 101.99$$

- Hence the spectral radius does not satisfy the multiplicative property, i.e.,

  $$\rho(A_1 A_2) \nleq \rho(A_1)\,\rho(A_2)$$
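Example 3.8 takes only a few lines to reproduce in numpy:

```python
import numpy as np

def rho(A):
    """Spectral radius: largest eigenvalue magnitude."""
    return np.max(np.abs(np.linalg.eigvals(A)))

A1 = np.array([[1.0, 0.0], [10.0, 1.0]])
A2 = np.array([[1.0, 10.0], [0.0, 1.0]])

print(rho(A1), rho(A2))             # approximately 1.0 and 1.0
print(rho(A1 + A2))                 # approximately 12.0
print(round(rho(A1 @ A2), 2))       # approximately 101.99
print(rho(A1 @ A2) > rho(A1) * rho(A2))   # True: not submultiplicative
```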


System Norms

- Consider the input-output system below, with input $w$, system $G$, and output $z$

  [Figure: block diagram $w \to G \to z$.]

  - Given information about allowable input signals $w(t)$, how large can the outputs $z(t)$ become?
  - To answer this question, we must examine relevant system norms.

We shall evaluate the output signals in terms of the signal 2-norm, defined by

$$\|z(t)\|_2 = \sqrt{\sum_i \int_{-\infty}^{\infty} |z_i(\tau)|^2\, d\tau}$$

and consider three inputs:

1. $w(t)$ is a series of unit impulses
2. $w(t)$ is any signal such that $\|w(t)\|_2 = 1$
3. $w(t)$ is any signal such that $\|w(t)\|_2 = 1$, but $w(t) = 0$ for $t \geq 0$, and we only measure $z(t)$ for $t \geq 0$

- The relevant system norms are the $H_2$, $H_\infty$, and Hankel norms, respectively

H2 norm

- Consider a strictly proper system $G(s)$ (i.e., $D = 0$)

- For the $H_2$ norm we use the Frobenius norm spatially (for the matrix) and integrate over frequency, i.e.,

Lecture notes prepared by M. Scott Trimboli. Copyright c" 2016, M. Scott Trimboli

Page 30: Matrix Theory and Normsmocha-java.uccs.edu/ECE5580/ECE5580_CH3_20Feb16.pdf · ECE5580, Matrix Theory and Norms 3–9 Example 3.2 A W R3! R2 withA D " 101 011 #! It is clear to see

ECE5580, Matrix Theory and Norms 3–30

kG .s/k2 ,s

1

2(

Z 1

&1tr$G .j!/$ G .j!/

%d!

– G .s/ must be strictly proper, or else the H2 norm is infinite

! By Parseval's theorem we have

‖G(s)‖₂ = ‖g(t)‖₂ ≜ √( ∫₀^∞ tr[ gᵀ(τ) g(τ) ] dτ )

where g(t) is the matrix of impulse responses.

! Note we can change the order of integration and summation to get

‖G(s)‖₂ = ‖g(t)‖₂ = √( Σ_{ij} ∫₀^∞ |g_ij(τ)|² dτ )

where g_ij(t) is the ij-th element of the impulse response matrix.

! Thus, the H₂ norm is the 2-norm of the output resulting from applying unit impulses δ_j(t) to each input

Numerical computation of the H₂ norm

! Consider G(s) = C(sI − A)⁻¹B

! Then,

‖G(s)‖₂ = √( tr(BᵀQB) )   or   ‖G(s)‖₂ = √( tr(CPCᵀ) )

where Q is the observability Gramian and P is the controllability Gramian.
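A sketch of this computation in Python using scipy's Lyapunov solver; the state-space matrices below are a hypothetical stable system chosen only for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable test system (values chosen only for illustration)
A = np.array([[-1.0, 0.5],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability Gramian:  A P + P A^T = -B B^T
P = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian:    A^T Q + Q A = -C^T C
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

h2_from_Q = np.sqrt(np.trace(B.T @ Q @ B))
h2_from_P = np.sqrt(np.trace(C @ P @ C.T))
print(h2_from_Q, h2_from_P)    # the two formulas agree
```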


H∞ norm

! Consider a proper, stable system G(s) (for this case, nonzero D is allowed)

! For the H∞ norm, we use the singular value (induced 2-norm) spatially for the matrix and pick out the peak value as a function of frequency:

‖G(s)‖_∞ ≜ max_ω σ̄(G(jω))

– The H∞ norm is interpreted as the peak transfer function "magnitude"

! Time Domain Interpretation:

– Worst-case steady-state gain for sinusoidal inputs at any frequency

– Induced (worst-case) 2-norm in the time domain:

‖G(s)‖_∞ = max_{w(t)≠0} ‖z(t)‖₂ / ‖w(t)‖₂ = max_{‖w(t)‖₂=1} ‖z(t)‖₂

– NOTE: at a given frequency ω, the gain depends on the direction of w(ω)

Numerical computation of the H∞ norm

! Computation of the H∞ norm is non-trivial

– It can be approximated via the definition ‖G(s)‖_∞ ≜ max_ω σ̄(G(jω)) by evaluating σ̄ over a grid of frequencies

– But in practice, it's normally computed via its state-space representation as shown below

! Consider the system represented by

G(s) = C(sI − A)⁻¹B + D

! The H∞ norm is the smallest value of γ such that the Hamiltonian matrix H has no eigenvalues on the imaginary axis,

H = [ A + BR⁻¹DᵀC            BR⁻¹Bᵀ         ]
    [ −Cᵀ(I + DR⁻¹Dᵀ)C   −(A + BR⁻¹DᵀC)ᵀ ]

where R = γ²I − DᵀD

– The derivation of this expression will not be addressed here
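A bisection search based on this eigenvalue test can be sketched as follows. This assumes numpy is available; the initial bracket, tolerance, and the 1e-8 imaginary-axis threshold are arbitrary illustrative choices, not part of the source.

```python
import numpy as np

def hamiltonian(A, B, C, D, gam):
    """Build H(gamma) with R = gamma^2 I - D^T D."""
    R = gam**2 * np.eye(D.shape[1]) - D.T @ D
    Ri = np.linalg.inv(R)
    Ac = A + B @ Ri @ D.T @ C
    return np.block([
        [Ac, B @ Ri @ B.T],
        [-C.T @ (np.eye(D.shape[0]) + D @ Ri @ D.T) @ C, -Ac.T],
    ])

def hinf_norm(A, B, C, D, tol=1e-8):
    """Bisection on gamma: gamma exceeds the H-infinity norm exactly when
    H(gamma) has no eigenvalues on the imaginary axis (a sketch, not a
    production-grade implementation)."""
    def no_imag(g):
        eigs = np.linalg.eigvals(hamiltonian(A, B, C, D, g))
        return not np.any(np.abs(eigs.real) < 1e-8)
    lo, hi = 1e-6, 1.0
    while not no_imag(hi):            # expand until hi is an upper bound
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:              # shrink the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if no_imag(mid) else (mid, hi)
    return hi

# First-order plant 1/(s + a): its peak gain is 1/a, attained at omega = 0
a = 2.0
A = np.array([[-a]]); B = np.array([[1.0]])
C = np.array([[1.0]]); D = np.array([[0.0]])
print(hinf_norm(A, B, C, D))   # ~0.5
```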

Comparison of H₂ and H∞ norms

! We first recognize that we can write the Frobenius norm in terms of singular values,

‖G(s)‖₂ = √( (1/2π) ∫_{−∞}^{∞} Σᵢ σᵢ²(G(jω)) dω )

! Minimizing the H∞ norm essentially "pushes down the peak" of the largest singular value

! Minimizing the H₂ norm "pushes down the entire function" of singular values over all frequencies

Example 3.9

! Consider the plant

G(s) = 1/(s + a)

! Computing the H₂ norm,

‖G(s)‖₂ = ( (1/2π) ∫_{−∞}^{∞} |G(jω)|² dω )^{1/2}
        = ( (1/2πa) [ tan⁻¹(ω/a) ]_{−∞}^{∞} )^{1/2}
        = √( 1/(2a) )

! Alternatively, consider the impulse response

g(t) = L⁻¹{ 1/(s + a) } = e^{−at},  t ≥ 0

– which gives

‖g(t)‖₂ = √( ∫₀^∞ (e^{−at})² dt ) = √( 1/(2a) )

! Computing the H∞ norm,

‖G(s)‖_∞ = max_ω |G(jω)| = max_ω 1/(ω² + a²)^{1/2} = 1/a

! Note that there is no general relationship between the H₂ and H∞ norms
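These closed-form results can be cross-checked numerically; in the sketch below a = 2 is an arbitrary choice, so both norms should come out near 0.5.

```python
import numpy as np
from scipy.integrate import quad

a = 2.0   # arbitrary illustrative value

# H2 norm from the frequency-domain integral
val, _ = quad(lambda w: 1.0 / (w**2 + a**2), -np.inf, np.inf)
h2_freq = np.sqrt(val / (2.0 * np.pi))

# H2 norm from the impulse response e^{-at}
val_t, _ = quad(lambda t: np.exp(-2.0 * a * t), 0.0, np.inf)
h2_time = np.sqrt(val_t)

# H-infinity norm: peak of |G(jw)| over a frequency grid
w = np.linspace(0.0, 100.0, 100001)
hinf = (1.0 / np.sqrt(w**2 + a**2)).max()

print(h2_freq, h2_time, np.sqrt(1.0 / (2.0 * a)))   # all ≈ 0.5
print(hinf, 1.0 / a)                                # both ≈ 0.5
```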

Hankel norm

! A special norm in systems theory is the Hankel norm, which relates the future output of a system to its past input:

‖G(s)‖_H ≜ max_{w(t)} √( ∫₀^∞ ‖z(τ)‖₂² dτ ) / √( ∫_{−∞}^0 ‖w(τ)‖₂² dτ )

! It can be shown that the Hankel norm is equal to

‖G(s)‖_H = √( ρ(PQ) )

where ρ is the spectral radius and P and Q are the controllability and observability Gramians, respectively

! The name "Hankel norm" comes from the underlying Hankel operator, which maps past inputs to future outputs and whose matrix representation has Hankel structure

! The Hankel singular values are given by

σ_{H,i} = √( λᵢ(PQ) )

! The Hankel and H∞ norms are related by

‖G(s)‖_H = σ̄_H ≤ ‖G(s)‖_∞ ≤ 2 Σ_{i=1}^{n} σ_{H,i}

Example 3.10

! Let's compute the various norms for the system of Example 3.9 using state-space formulations.

! For the plant G(s) = 1/(s + a) we have the following state-space realization:

[ A  B ; C  D ] = [ −a  1 ; 1  0 ]

! The controllability Gramian P is the solution of the Lyapunov equation

AP + PAᵀ = −BBᵀ
−aP − aP = −1
P = 1/(2a)

! Similarly, for the observability Gramian,

Q = 1/(2a)

! So for the H₂ norm we have

‖G(s)‖₂ = √( tr(BᵀQB) ) = √( 1/(2a) )

! Computing the H∞ norm, we first find the eigenvalues of the Hamiltonian matrix,

λ(H) = λ( [ −a  1/γ² ; −1  a ] ) = ±√( a² − 1/γ² )

– Clearly, H has no imaginary eigenvalues for γ > 1/a, so

‖G(s)‖_∞ = 1/a

! Finally, PQ = 1/(4a²), so the Hankel norm is

‖G(s)‖_H = √( ρ(PQ) ) = 1/(2a)


Signal Norms

Example 3.11

! Consider the following two time signals

[Figure: the two signals e₁(t) and e₂(t) plotted against t; the vertical scale is marked at 1]

! For these two signals, we compute the following values for their respective norms:

‖e₁(t)‖_∞ = 1,   ‖e₁(t)‖₂ = ∞
‖e₂(t)‖_∞ = ∞,   ‖e₂(t)‖₂ = 1

Computation of signal norms

1. "Sum up" the channels at a given time or frequency using a vector norm (for a scalar signal, take the absolute value)

2. “Sum up” in time or frequency using a temporal norm

[Figure: a scalar signal e(t) versus t; the area under |e(t)| is ‖e‖₁ and the peak value is ‖e‖_∞]

! We define the temporal p-norm of a time-varying vector as

‖e(t)‖_p = ( ∫_{−∞}^{∞} Σᵢ |eᵢ(τ)|ᵖ dτ )^{1/p}

! The following temporal norms are commonly used:

– 1-norm in time

‖e(t)‖₁ = ∫_{−∞}^{∞} Σᵢ |eᵢ(τ)| dτ

– 2-norm in time

‖e(t)‖₂ = √( ∫_{−∞}^{∞} Σᵢ |eᵢ(τ)|² dτ )

– ∞-norm in time

‖e(t)‖_∞ = sup_τ ( maxᵢ |eᵢ(τ)| )
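The two-step recipe (vector norm across channels, then temporal norm) can be sketched numerically on a sampled signal. The two-channel signal below is hypothetical, and the Riemann-sum quadrature is deliberately crude.

```python
import numpy as np

# Hypothetical two-channel signal sampled on a grid (illustration only)
t = np.linspace(0.0, 10.0, 10001)
dt = t[1] - t[0]
e = np.vstack([np.exp(-t),
               np.exp(-2.0 * t) * np.sin(5.0 * t)])   # shape (channels, N)

# Step 1 sums over the channels i; step 2 integrates over time (Riemann sum)
norm1 = np.sum(np.abs(e)) * dt                 # 1-norm in time
norm2 = np.sqrt(np.sum(np.abs(e) ** 2) * dt)   # 2-norm in time
norminf = np.abs(e).max()                      # sup over t of max over i

print(norm1, norm2, norminf)
```

For this particular signal the peak is e₁(0) = 1, so the ∞-norm comes out as 1 exactly.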


Additional Topics

Toeplitz & Hankel Matrix Forms

! Systems theory makes extensive use of Toeplitz and Hankel matrices to simplify much of the algebra.

! Consider the polynomial n(z):

n(z) = n₀ + n₁z⁻¹ + ⋯ + n_m z⁻ᵐ

! We define the Toeplitz matrices Γ_n, C_n for n(z) from the following matrix form (C_n stacked above M_n):

Γ_n = [ C_n ]
      [ M_n ]

where

C_n = [ n₀    0       0       ⋯  0 ]
      [ n₁    n₀      0       ⋯  0 ]
      [ n₂    n₁      n₀      ⋯  0 ]
      [ ⋮     ⋮       ⋮          ⋮ ]
      [ n_m   n_{m−1} n_{m−2} ⋯    ]

and

M_n = [ 0  n_m  n_{m−1}  ⋯      ]
      [ ⋮   ⋮     ⋮           ⋮ ]
      [ 0   0     0      ⋯  n₀ ]

! Define the Hankel matrix H_n as

H_n = [ n₁       n₂   n₃  ⋯  n_{m−1}  n_m ]
      [ n₂       n₃   n₄  ⋯  n_m      0   ]
      [ n₃       n₄   n₅  ⋯  0        0   ]
      [ ⋮        ⋮    ⋮      ⋮        ⋮   ]
      [ n_{m−1}  n_m  0   ⋯  0        0   ]
      [ n_m      0    0   ⋯  0        0   ]
      [ 0        0    0   ⋯  0        0   ]
      [ ⋮        ⋮    ⋮      ⋮        ⋮   ]

! NOTE: The dimensions of the matrices Γ_n, C_n, H_n are not defined here, as they are implicit in the context in which they are used.

! Toeplitz matrices defined in this manner are often used for changing polynomial convolution into a matrix-vector multiplication
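The convolution property is easy to demonstrate with scipy's `toeplitz` constructor; the polynomial coefficients below are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import toeplitz

# Coefficients of n(z) = 1 + 2 z^-1 + 3 z^-2 and d(z) = 4 + 5 z^-1
n = np.array([1.0, 2.0, 3.0])
d = np.array([4.0, 5.0])

# Lower-triangular Toeplitz matrix built from n: its first column is n
# padded with zeros, its first row is n0 followed by zeros
rows = len(n) + len(d) - 1
Cn = toeplitz(np.r_[n, np.zeros(rows - len(n))], np.zeros(len(d)))

prod = Cn @ d                   # coefficients of the product n(z) d(z)
print(prod)                     # [ 4. 13. 22. 15.]
print(np.convolve(n, d))        # same result
```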

Analytic Functions of Matrices

! Let A be an n × n matrix with eigenvalues λ₁, λ₂, …, λ_n, and let p(x) be an infinite series in a scalar variable x,

p(x) = a₀ + a₁x + a₂x² + ⋯ + aₖxᵏ + ⋯

– Such an infinite series may converge or diverge depending on the value of x.

EXAMPLES

1 + x + x² + x³ + ⋯ + xᵏ + ⋯

! This geometric series converges for all |x| < 1.


1 + x + x²/2! + x³/3! + ⋯ + xᵏ/k! + ⋯

! This series converges for all values of x and is, in fact, eˣ.

! These results have analogies for matrices.

– Let A be an n × n matrix with eigenvalues λᵢ. If the infinite series p(x) defined above is convergent for each of the n values λᵢ, then the corresponding matrix infinite series

p(A) = a₀I + a₁A + a₂A² + ⋯ + aₖAᵏ + ⋯ = Σ_{k=0}^{∞} aₖAᵏ

converges.

! DEFINITION: A single-valued function f(z), with z a complex scalar, is analytic at a point z₀ if and only if its derivative exists at every point in some neighborhood of z₀. Points at which the function is not analytic are called singular points.

! RESULT: If a function f(z) is analytic at every point in some circle Γ in the complex plane, then f(z) can be represented as a convergent power series (Taylor series) at every point z inside Γ.

! IMPORTANT RESULT: If f(z) is any function which is analytic within a circle in the complex plane which contains all the eigenvalues λᵢ of A, then a corresponding matrix function f(A) can be defined by a convergent power series.


EXAMPLE 3.12

! The function e^{αx} is analytic for all values of x; therefore it has a convergent series representation:

e^{αx} = 1 + αx + α²x²/2! + α³x³/3! + ⋯ + αᵏxᵏ/k! + ⋯

– The corresponding matrix function is

e^{αA} = I + αA + α²A²/2! + α³A³/3! + ⋯ + αᵏAᵏ/k! + ⋯

EXAMPLE 3.13

e^{At} e^{Bt} = e^{(A+B)t}

if and only if AB = BA.

OTHER USEFUL REPRESENTATIONS:

sin A = A − A³/3! + A⁵/5! − ⋯

cos A = I − A²/2! + A⁴/4! − ⋯

sinh A = A + A³/3! + A⁵/5! + ⋯

cosh A = I + A²/2! + A⁴/4! + ⋯

sin²A + cos²A = I

sin A = (e^{jA} − e^{−jA})/(2j)   and   cos A = (e^{jA} + e^{−jA})/2
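scipy exposes several of these matrix functions directly (`expm`, `sinm`, `cosm`), which lets us spot-check the identities above, including the commutativity requirement of Example 3.13; the test matrices are arbitrary.

```python
import numpy as np
from scipy.linalg import expm, sinm, cosm

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # arbitrary symmetric test matrix

# Identity check: sin^2(A) + cos^2(A) = I
lhs = sinm(A) @ sinm(A) + cosm(A) @ cosm(A)
print(np.allclose(lhs, np.eye(2)))       # True

# Example 3.13: e^{A}e^{B} = e^{A+B} only when AB = BA
B_comm = 2.0 * A                              # commutes with A
B_nc = np.array([[0.0, 1.0], [0.0, 0.0]])     # does not commute with A
print(np.allclose(expm(A) @ expm(B_comm), expm(A + B_comm)))  # True
print(np.allclose(expm(A) @ expm(B_nc), expm(A + B_nc)))      # False
```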


Cayley-Hamilton Theorem

! If the characteristic polynomial of a matrix A is

|λI − A| = λⁿ + c_{n−1}λ^{n−1} + c_{n−2}λ^{n−2} + ⋯ + c₁λ + c₀ = Δ(λ)

then the corresponding matrix polynomial is

Δ(A) = Aⁿ + c_{n−1}A^{n−1} + c_{n−2}A^{n−2} + ⋯ + c₁A + c₀I

CAYLEY-HAMILTON THEOREM: Every matrix satisfies its own characteristic equation,

Δ(A) = 0

! For diagonalizable A, the proof follows by applying the diagonalizing similarity transformation to A and substituting into Δ(A); the general case follows by a continuity argument.

EXAMPLE 3.14

A = [ 3  1 ; 1  2 ]

|λI − A| = (λ − 3)(λ − 2) − 1 = λ² − 5λ + 5

Δ(A) = A² − 5A + 5I = [ 10  5 ; 5  5 ] − 5 [ 3  1 ; 1  2 ] + 5 [ 1  0 ; 0  1 ] = [ 0  0 ; 0  0 ]
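A numerical check of the Cayley-Hamilton theorem for this matrix; `np.poly` returns the characteristic polynomial coefficients, highest power first.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])

# Characteristic polynomial coefficients, highest power first:
c = np.poly(A)
print(c)                                  # ≈ [ 1. -5.  5.]

# Delta(A) = A^2 - 5A + 5I should be the zero matrix
Delta = c[0] * (A @ A) + c[1] * A + c[2] * np.eye(2)
print(np.allclose(Delta, np.zeros((2, 2))))   # True
```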
