Jordan Canonical Form: Application to Differential Equations



Jordan Canonical Form: Application to Differential Equations


Copyright © 2008 by Morgan & Claypool

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopy, recording, or any other), except for brief quotations in printed reviews, without the prior permission of the publisher.

Jordan Canonical Form: Application to Differential Equations

Steven H. Weintraub

www.morganclaypool.com

ISBN: 9781598298048 (paperback)
ISBN: 9781598298055 (ebook)

DOI 10.2200/S00146ED1V01Y200808MAS002

A Publication in the Morgan & Claypool Publishers series
SYNTHESIS LECTURES ON MATHEMATICS AND STATISTICS

Lecture #2
Series Editor: Steven G. Krantz, Washington University, St. Louis

Series ISSN
Synthesis Lectures on Mathematics and Statistics: ISSN pending.


Jordan Canonical Form: Application to Differential Equations

Steven H. Weintraub
Lehigh University

SYNTHESIS LECTURES ON MATHEMATICS AND STATISTICS #2

Morgan & Claypool Publishers


ABSTRACT
Jordan Canonical Form (JCF) is one of the most important, and useful, concepts in linear algebra. In this book we develop JCF and show how to apply it to solving systems of differential equations. We first develop JCF, including the concepts involved in it: eigenvalues, eigenvectors, and chains of generalized eigenvectors. We begin with the diagonalizable case and then proceed to the general case, but we do not present a complete proof. Indeed, our interest here is not in JCF per se, but in one of its important applications. We devote the bulk of our attention in this book to showing how to apply JCF to solve systems of constant-coefficient first-order differential equations, where it is a very effective tool. We cover all situations: homogeneous and inhomogeneous systems; real and complex eigenvalues. We also treat the closely related topic of the matrix exponential. Our discussion is mostly confined to the 2-by-2 and 3-by-3 cases, and we present a wealth of examples that illustrate all the possibilities in these cases (and, of course, a wealth of exercises for the reader).

KEYWORDS
Jordan Canonical Form, linear algebra, differential equations, eigenvalues, eigenvectors, generalized eigenvectors, matrix exponential


Contents

Preface

1 Jordan Canonical Form
1.1 The Diagonalizable Case
1.2 The General Case

2 Solving Systems of Linear Differential Equations
2.1 Homogeneous Systems with Constant Coefficients
2.2 Homogeneous Systems with Complex Eigenvalues
2.3 Inhomogeneous Systems with Constant Coefficients
2.4 The Matrix Exponential

A Background Results
A.1 Bases, Coordinates, and Matrices
A.2 Properties of the Complex Exponential

B Answers to Odd-Numbered Exercises

Index


Preface

Jordan Canonical Form (JCF) is one of the most important, and useful, concepts in linear algebra. In this book, we develop JCF and show how to apply it to solving systems of differential equations.

In Chapter 1, we develop JCF. We do not prove the existence of JCF in general, but we present the ideas that go into it: eigenvalues and (chains of generalized) eigenvectors. In Section 1.1, we treat the diagonalizable case, and in Section 1.2, we treat the general case. We develop all possibilities for 2-by-2 and 3-by-3 matrices, and illustrate these by examples.

In Chapter 2, we apply JCF. We show how to use JCF to solve systems Y′ = AY + G(x) of constant-coefficient first-order linear differential equations. In Section 2.1, we consider homogeneous systems Y′ = AY. In Section 2.2, we consider homogeneous systems when the characteristic polynomial of A has complex roots (in which case an additional step is necessary). In Section 2.3, we consider inhomogeneous systems Y′ = AY + G(x) with G(x) nonzero. In Section 2.4, we develop the matrix exponential e^(Ax) and relate it to solutions of these systems. Also in this chapter we provide examples that illustrate all the possibilities in the 2-by-2 and 3-by-3 cases.

Appendix A has background material. Section A.1 gives background on coordinates for vectors and matrices for linear transformations. Section A.2 derives the basic properties of the complex exponential function. This material is relegated to the Appendix so that readers who are unfamiliar with these notions, or who are willing to take them on faith, can skip it and still understand the material in Chapters 1 and 2.

Our numbering system for results is fairly standard: Theorem 2.1, for example, is the first Theorem found in Section 2 of Chapter 1.

As is customary in textbooks, we provide the answers to the odd-numbered exercises here. Instructors may contact me at [email protected] and I will supply the answers to all of the exercises.

Steven H. Weintraub
Lehigh University
Bethlehem, PA, USA
July 2008


C H A P T E R 1

Jordan Canonical Form

1.1 THE DIAGONALIZABLE CASE

Although, for simplicity, most of our examples will be over the real numbers (and indeed over the rational numbers), we will consider that all of our vectors and matrices are defined over the complex numbers C. It is only with this assumption that the theory of Jordan Canonical Form (JCF) works completely. See Remark 1.9 for the key reason why.

Definition 1.1. If v ≠ 0 is a vector such that, for some λ,

Av = λv ,

then v is an eigenvector of A associated to the eigenvalue λ.

Example 1.2. Let A be the matrix

A = [ 5  -7 ]
    [ 2  -4 ] .

Then, as you can check, if v1 = (7, 2)^T, then Av1 = 3v1, so v1 is an eigenvector of A with associated eigenvalue 3, and if v2 = (1, 1)^T, then Av2 = -2v2, so v2 is an eigenvector of A with associated eigenvalue -2.

We note that the definition of an eigenvalue/eigenvector can be expressed in an alternate form. Here I denotes the identity matrix:

Av = λv

Av = λIv

(A− λI)v = 0 .

For an eigenvalue λ of A, we let Eλ denote the eigenspace of λ,

Eλ = {v | Av = λv} = {v | (A − λI)v = 0} = Ker(A − λI) .

(The kernel Ker(A − λI) is also known as the nullspace NS(A − λI).)

We also note that this alternate formulation helps us find eigenvalues and eigenvectors. For if (A − λI)v = 0 for a nonzero vector v, the matrix A − λI must be singular, and hence its determinant must be 0. This leads us to the following definition.

Definition 1.3. The characteristic polynomial of a matrix A is the polynomial det(λI − A).


Remark 1.4. This is the customary definition of the characteristic polynomial. But note that, if A is an n-by-n matrix, then the matrix λI − A is obtained from the matrix A − λI by multiplying each of its n rows by −1, and hence det(λI − A) = (−1)^n det(A − λI). In practice, it is most convenient to work with A − λI in finding eigenvectors (this minimizes arithmetic), and when we come to find chains of generalized eigenvectors in Section 1.2, it is (almost) essential to use A − λI, as using λI − A would introduce lots of spurious minus signs.

Example 1.5. Returning to the matrix

A = [ 5  -7 ]
    [ 2  -4 ]

of Example 1.2, we compute that det(λI − A) = λ^2 − λ − 6 = (λ − 3)(λ + 2), so A has eigenvalues 3 and −2. Computation then shows that the eigenspace E3 = Ker(A − 3I) has basis {(7, 2)^T}, and that the eigenspace E−2 = Ker(A − (−2)I) has basis {(1, 1)^T}.
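The computations of Examples 1.2 and 1.5 are easy to check by machine. Here is a minimal sketch using Python's sympy library (the code is an illustration of ours, not part of the text):

```python
import sympy as sp

# The matrix of Examples 1.2 and 1.5
A = sp.Matrix([[5, -7],
               [2, -4]])

# Characteristic polynomial det(lambda*I - A) = lambda^2 - lambda - 6
lam = sp.symbols('lambda')
p = A.charpoly(lam).as_expr()
assert sp.expand(p - (lam - 3) * (lam + 2)) == 0

# Eigenvalues with their algebraic multiplicities
assert A.eigenvals() == {3: 1, -2: 1}

# The claimed basis vectors of E_3 and E_{-2} really are eigenvectors
v1 = sp.Matrix([7, 2])
v2 = sp.Matrix([1, 1])
assert A * v1 == 3 * v1
assert A * v2 == -2 * v2
```

Exact (rational) arithmetic is used throughout, so these checks confirm the hand computation rather than approximate it.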

We now introduce two important quantities associated to an eigenvalue of a matrix A.

Definition 1.6. Let a be an eigenvalue of a matrix A. The algebraic multiplicity of the eigenvalue a is alg-mult(a) = the multiplicity of a as a root of the characteristic polynomial det(λI − A). The geometric multiplicity of the eigenvalue a is geom-mult(a) = the dimension of the eigenspace Ea.

It is common practice to use the word multiplicity (without a qualifier) to mean algebraic multiplicity.

We have the following relationship between these two multiplicities.

Lemma 1.7. Let a be an eigenvalue of a matrix A. Then

1 ≤ geom-mult(a) ≤ alg-mult(a) .

Proof. By the definition of an eigenvalue, there is at least one eigenvector v with eigenvalue a, and so Ea contains the nonzero vector v, and hence dim(Ea) ≥ 1.

For the proof that geom-mult(a) ≤ alg-mult(a), see Lemma 1.12 in Appendix A. □

Corollary 1.8. Let a be an eigenvalue of A and suppose that a has algebraic multiplicity 1. Then a also has geometric multiplicity 1.

Proof. In this case, applying Lemma 1.7, we have

1 ≤ geom-mult(a) ≤ alg-mult(a) = 1 ,

so geom-mult(a) = 1. □


Remark 1.9. Let A be an n-by-n matrix. Then its characteristic polynomial det(λI − A) has degree n. Since we are considering A to be defined over the complex numbers, we may apply the Fundamental Theorem of Algebra, which states that an nth degree polynomial has n roots, counting multiplicities. Hence, we see that, for any n-by-n matrix A, the sum of the algebraic multiplicities of the eigenvalues of A is equal to n.

Lemma 1.10. Let A be an n-by-n matrix. The following are equivalent:

(1) For each eigenvalue a of A, geom-mult(a) = alg-mult(a).

(2) The sum of the geometric multiplicities of the eigenvalues of A is equal to n.

Proof. Let A have eigenvalues a1, a2, . . . , am. For each i between 1 and m, let si = geom-mult(ai) and ti = alg-mult(ai). Then, by Lemma 1.7, si ≤ ti for each i, and by Remark 1.9, t1 + t2 + · · · + tm = n. Thus, if si = ti for each i, then s1 + s2 + · · · + sm = n, while if si < ti for some i, then s1 + s2 + · · · + sm < n. □
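The two multiplicities can be computed directly from their definitions. A short sympy sketch (the matrix here is an illustrative choice of ours, not one from the text): for A = [[2, 1], [0, 2]], the eigenvalue 2 has algebraic multiplicity 2 but geometric multiplicity 1, so the inequality of Lemma 1.7 can be strict.

```python
import sympy as sp

# An illustrative matrix (not from the text): a single 2-by-2 Jordan block
A = sp.Matrix([[2, 1],
               [0, 2]])

# Algebraic multiplicity: multiplicity of 2 as a root of det(lambda*I - A)
alg_mult = A.eigenvals()[2]

# Geometric multiplicity: dimension of the eigenspace E_2 = Ker(A - 2I)
geom_mult = len((A - 2 * sp.eye(2)).nullspace())

assert alg_mult == 2
assert geom_mult == 1   # strictly smaller, as Lemma 1.7 allows
```
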

Proposition 1.11. (1) Let a1, a2, . . . , am be distinct eigenvalues of A (i.e., ai ≠ aj for i ≠ j). For each i between 1 and m, let vi be an associated eigenvector. Then {v1, v2, . . . , vm} is a linearly independent set of vectors.

(2) More generally, let a1, a2, . . . , am be distinct eigenvalues of A. For each i between 1 and m, let Si be a linearly independent set of eigenvectors associated to ai. Then S = S1 ∪ · · · ∪ Sm is a linearly independent set of vectors.

Proof. (1) Suppose we have a linear combination 0 = c1v1 + c2v2 + · · · + cmvm. We need to show that ci = 0 for each i. To do this, we begin with an observation: If v is an eigenvector of A associated to the eigenvalue a, and b is any scalar, then (A − bI)v = Av − bv = av − bv = (a − b)v. (Note that this answer is 0 if a = b and nonzero if a ≠ b.)

We now go to work, multiplying our original relation by (A − amI). Of course, (A − amI)0 = 0, so:

0 = (A − amI)(c1v1 + c2v2 + · · · + cm−1vm−1 + cmvm)
  = c1(A − amI)v1 + c2(A − amI)v2 + · · · + cm−1(A − amI)vm−1 + cm(A − amI)vm
  = c1(a1 − am)v1 + c2(a2 − am)v2 + · · · + cm−1(am−1 − am)vm−1 .


We now multiply this relation by (A − am−1I). Again, (A − am−1I)0 = 0, so:

0 = (A − am−1I)(c1(a1 − am)v1 + c2(a2 − am)v2 + · · · + cm−1(am−1 − am)vm−1)
  = c1(a1 − am)(A − am−1I)v1 + c2(a2 − am)(A − am−1I)v2 + · · · + cm−1(am−1 − am)(A − am−1I)vm−1
  = c1(a1 − am)(a1 − am−1)v1 + c2(a2 − am)(a2 − am−1)v2 + · · · + cm−2(am−2 − am)(am−2 − am−1)vm−2 .

Proceed in this way, until at the last step we multiply by (A − a2I). We then obtain:

0 = c1(a1 − a2) · · · (a1 − am−1)(a1 − am)v1 .

But v1 ≠ 0, as by definition an eigenvector is nonzero. Also, the product (a1 − a2) · · · (a1 − am−1)(a1 − am) is a product of nonzero numbers and is hence nonzero. Thus, we must have c1 = 0.

Proceeding in the same way, multiplying our original relation by (A − amI), (A − am−1I), . . . , (A − a3I), and finally by (A − a1I), we obtain c2 = 0, and, proceeding in this vein, we obtain ci = 0 for all i, and so the set {v1, v2, . . . , vm} is linearly independent.

(2) To avoid complicated notation, we will simply prove this when m = 2 (which illustrates the general case). Thus, let m = 2, let S1 = {v1,1, . . . , v1,i1} be a linearly independent set of eigenvectors associated to the eigenvalue a1 of A, and let S2 = {v2,1, . . . , v2,i2} be a linearly independent set of eigenvectors associated to the eigenvalue a2 of A. Then S = {v1,1, . . . , v1,i1, v2,1, . . . , v2,i2}. We want to show that S is a linearly independent set. Suppose we have a linear combination 0 = c1,1v1,1 + · · · + c1,i1v1,i1 + c2,1v2,1 + · · · + c2,i2v2,i2. Then:

0 = c1,1v1,1 + · · · + c1,i1v1,i1 + c2,1v2,1 + · · · + c2,i2v2,i2
  = (c1,1v1,1 + · · · + c1,i1v1,i1) + (c2,1v2,1 + · · · + c2,i2v2,i2)
  = v1 + v2

where v1 = c1,1v1,1 + · · · + c1,i1v1,i1 and v2 = c2,1v2,1 + · · · + c2,i2v2,i2. But v1 is a vector in Ea1, so Av1 = a1v1; similarly, v2 is a vector in Ea2, so Av2 = a2v2. Then, as in the proof of part (1),

0 = (A − a2I)0 = (A − a2I)(v1 + v2) = (A − a2I)v1 + (A − a2I)v2
  = (a1 − a2)v1 + 0 = (a1 − a2)v1

so 0 = v1; similarly, 0 = v2. But 0 = v1 = c1,1v1,1 + · · · + c1,i1v1,i1 implies c1,1 = · · · = c1,i1 = 0, as, by hypothesis, {v1,1, . . . , v1,i1} is a linearly independent set; similarly, 0 = v2 implies c2,1 = · · · = c2,i2 = 0. Thus, c1,1 = · · · = c1,i1 = c2,1 = · · · = c2,i2 = 0 and S is linearly independent, as claimed. □

Definition 1.12. Two square matrices A and B are similar if there is an invertible matrix P with A = PBP^(-1).


Definition 1.13. A square matrix A is diagonalizable if A is similar to a diagonal matrix.

Here is the main result of this section.

Theorem 1.14. Let A be an n-by-n matrix over the complex numbers. Then A is diagonalizable if and only if, for each eigenvalue a of A, geom-mult(a) = alg-mult(a). In that case, A = PJP^(-1) where J is a diagonal matrix whose entries are the eigenvalues of A, each appearing according to its algebraic multiplicity, and P is a matrix whose columns are eigenvectors forming bases for the associated eigenspaces.

Proof. We give a proof by direct computation here. For a more conceptual proof, see Theorem 1.10 in Appendix A.

First let us suppose that for each eigenvalue a of A, geom-mult(a) = alg-mult(a).

Let A have eigenvalues a1, a2, . . . , an. Here we do not insist that the ai's are distinct; rather, each eigenvalue appears the same number of times as its algebraic multiplicity. Then J is the diagonal matrix

J = [ j1 | j2 | . . . | jn ]

and we see that ji, the ith column of J, is the vector with ai in the ith position and 0 elsewhere. We have

P = [ v1 | v2 | . . . | vn ] ,

a matrix whose columns are eigenvectors forming bases for the associated eigenspaces. By hypothesis, geom-mult(a) = alg-mult(a) for each eigenvalue a of A, so there are as many columns of P that are eigenvectors for the eigenvalue a as there are diagonal entries of J that are equal to a. Furthermore, by Lemma 1.10, the matrix P indeed has n columns.

We first show by direct computation that AP = PJ. Now

AP = A [ v1 | v2 | . . . | vn ]


so the ith column of AP is Avi. But

Avi = aivi

as vi is an eigenvector of A with associated eigenvalue ai. On the other hand,

PJ = [ v1 | v2 | . . . | vn ] J

and the ith column of PJ is

Pji = [ v1 | v2 | . . . | vn ] ji .

Remembering what the vector ji is, and multiplying, we see that

Pji = aivi

as well. Thus, every column of AP is equal to the corresponding column of PJ, so

AP = PJ .

By Proposition 1.11, the columns of the square matrix P are linearly independent, so P is invertible. Multiplying on the right by P^(-1), we see that

A = PJP^(-1) ,

completing the proof of this half of the Theorem.

Now let us suppose that A is diagonalizable, A = PJP^(-1). Then AP = PJ. We use the same notation for P and J as in the first half of the proof. Then, as in the first half of the proof, we compute AP and PJ column-by-column, and we see that the ith column of AP is Avi and that the ith column of PJ is aivi, for each i. Hence, Avi = aivi for each i, and so vi is an eigenvector of A with associated eigenvalue ai.

For each eigenvalue a of A, there are as many columns of P that are eigenvectors for a as there are diagonal entries of J that are equal to a, and these vectors form a basis for the eigenspace associated to the eigenvalue a, so we see that for each eigenvalue a of A, geom-mult(a) = alg-mult(a), completing the proof. □

For a general matrix A, the condition in Theorem 1.14 may or may not be satisfied, i.e., some but not all matrices are diagonalizable. But there is one important case when this condition is automatic.

Corollary 1.15. Let A be an n-by-n matrix over the complex numbers all of whose eigenvalues are distinct (i.e., whose characteristic polynomial has no repeated roots). Then A is diagonalizable.


Proof. By hypothesis, for each eigenvalue a of A, alg-mult(a) = 1. But then, by Corollary 1.8, for each eigenvalue a of A, geom-mult(a) = alg-mult(a), so the hypothesis of Theorem 1.14 is satisfied. □

Example 1.16. Let A be the matrix

A = [ 5  -7 ]
    [ 2  -4 ]

of Examples 1.2 and 1.5. Then, referring to Example 1.5, we see that A = PJP^(-1), where

P = [ 7  1 ]        J = [ 3   0 ]
    [ 2  1 ]   and      [ 0  -2 ] .
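The factorization of Example 1.16 is easy to verify by machine; a minimal sympy sketch (our illustration, not the book's code):

```python
import sympy as sp

A = sp.Matrix([[5, -7],
               [2, -4]])
P = sp.Matrix([[7, 1],
               [2, 1]])     # columns: eigenvectors for 3 and -2
J = sp.diag(3, -2)          # eigenvalues on the diagonal

# Check A = P J P^(-1), exactly as in Example 1.16
assert P * J * P.inv() == A
```
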

As we have indicated, we have developed this theory over the complex numbers, as JCF works best over them. But there is an analog of our results over the real numbers; we just have to require that all the eigenvalues of A are real. Here is the basic result on diagonalizability.

Theorem 1.17. Let A be an n-by-n real matrix. Then A is diagonalizable if and only if all the eigenvalues of A are real numbers, and, for each eigenvalue a of A, geom-mult(a) = alg-mult(a). In that case, A = PJP^(-1) where J is a diagonal matrix whose entries are the eigenvalues of A, each appearing according to its algebraic multiplicity (and hence is a real matrix), and P is a real matrix whose columns are eigenvectors forming bases for the associated eigenspaces.

1.2 THE GENERAL CASE

Let us begin this section by describing what a matrix in JCF looks like. A matrix in JCF is composed of “Jordan blocks,” so we first see what a single Jordan block looks like.

Definition 2.1. A k-by-k Jordan block associated to the eigenvalue λ is a k-by-k matrix of the form

J = [ λ  1               ]
    [    λ  1            ]
    [       .  .         ]
    [          .  .      ]
    [             λ  1   ]
    [                λ   ] .


In other words, a Jordan block is a matrix with all the diagonal entries equal to each other, all the entries immediately above the diagonal equal to 1, and all the other entries equal to 0.

Definition 2.2. A matrix J in Jordan Canonical Form (JCF) is a block diagonal matrix

J = [ J1               ]
    [    J2            ]
    [       J3         ]
    [          .       ]
    [            Jℓ    ]

with each Ji a Jordan block.

Remark 2.3. Note that every diagonal matrix is a matrix in JCF, with each Jordan block a 1-by-1block.
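Definitions 2.1 and 2.2 translate directly into code. A minimal sympy sketch (the helper name jordan_block is our own, not from the text):

```python
import sympy as sp

def jordan_block(lam, k):
    """k-by-k Jordan block: lam on the diagonal, 1 just above it, 0 elsewhere."""
    J = lam * sp.eye(k)
    for i in range(k - 1):
        J[i, i + 1] = 1
    return J

# A matrix in JCF is block diagonal with Jordan blocks on the diagonal;
# sp.diag assembles matrix arguments as diagonal blocks.
J = sp.diag(jordan_block(2, 2), jordan_block(5, 1))

assert J == sp.Matrix([[2, 1, 0],
                       [0, 2, 0],
                       [0, 0, 5]])
```

Note that the second block here is 1-by-1, illustrating Remark 2.3: a diagonal matrix is exactly a JCF matrix all of whose blocks are 1-by-1.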

In order to understand and be able to use JCF, we must introduce a new concept, that of a generalized eigenvector.

Definition 2.4. If v ≠ 0 is a vector such that, for some λ,

(A − λI)^k (v) = 0

for some positive integer k, then v is a generalized eigenvector of A associated to the eigenvalue λ. The smallest k with (A − λI)^k (v) = 0 is the index of the generalized eigenvector v.

Let us note that if v is a generalized eigenvector of index 1, then

(A − λI)(v) = 0
         Av = (λI)v
         Av = λv

and so v is an (ordinary) eigenvector.

Recall that, for an eigenvalue λ of A, Eλ denotes the eigenspace of λ,

Eλ = {v | Av = λv} = {v | (A − λI)v = 0} .

We let Ẽλ denote the generalized eigenspace of λ,

Ẽλ = {v | (A − λI)^k (v) = 0 for some k} .

It is easy to check that Ẽλ is a subspace.


Since every eigenvector is a generalized eigenvector, we see that

Eλ ⊆ Ẽλ .

The following result (which we shall not prove) is an important fact about generalized eigenspaces.

Proposition 2.5. Let λ be an eigenvalue of the n-by-n matrix A of algebraic multiplicity m. Then Ẽλ is a subspace of C^n of dimension m.

Example 2.6. Let A be the matrix

A = [  0  1 ]
    [ -4  4 ] .

Then, as you can check, if u = (1, 2)^T, then (A − 2I)u = 0, so u is an eigenvector of A with associated eigenvalue 2 (and hence a generalized eigenvector of index 1 of A with associated eigenvalue 2). On the other hand, if v = (1, 0)^T, then (A − 2I)^2 v = 0 but (A − 2I)v ≠ 0, so v is a generalized eigenvector of index 2 of A with associated eigenvalue 2.

In this case, as you can check, the vector u is a basis for the eigenspace E2, so E2 = {cu | c ∈ C} is one dimensional.

On the other hand, u and v are both generalized eigenvectors associated to the eigenvalue 2, and are linearly independent (the equation c1u + c2v = 0 only has the solution c1 = c2 = 0, as you can readily check), so Ẽ2 has dimension at least 2. Since Ẽ2 is a subspace of C^2, it must have dimension exactly 2, and Ẽ2 = C^2 (and {u, v} is indeed a basis for C^2).

Let us next consider a generalized eigenvector vk of index k associated to an eigenvalue λ, and set

vk−1 = (A − λI)vk .

We claim that vk−1 is a generalized eigenvector of index k − 1 associated to the eigenvalue λ. To see this, note that

(A − λI)^(k−1) vk−1 = (A − λI)^(k−1) (A − λI)vk = (A − λI)^k vk = 0

but

(A − λI)^(k−2) vk−1 = (A − λI)^(k−2) (A − λI)vk = (A − λI)^(k−1) vk ≠ 0 .

Proceeding in this way, we may set

vk−2 = (A − λI)vk−1 = (A − λI)^2 vk
vk−3 = (A − λI)vk−2 = (A − λI)^2 vk−1 = (A − λI)^3 vk
  ...
v1 = (A − λI)v2 = · · · = (A − λI)^(k−1) vk


and note that each vi is a generalized eigenvector of index i associated to the eigenvalue λ. A collection of generalized eigenvectors obtained in this way gets a special name:

Definition 2.7. If {v1, . . . , vk} is a set of generalized eigenvectors associated to the eigenvalue λ of A, such that vk is a generalized eigenvector of index k and also

vk−1 = (A − λI)vk, vk−2 = (A − λI)vk−1, vk−3 = (A − λI)vk−2, . . . , v2 = (A − λI)v3, v1 = (A − λI)v2 ,

then {v1, . . . , vk} is called a chain of generalized eigenvectors of length k. The vector vk is called the top of the chain and the vector v1 (which is an ordinary eigenvector) is called the bottom of the chain.

Example 2.8. Let us return to Example 2.6. We saw there that v = (1, 0)^T is a generalized eigenvector of index 2 of

A = [  0  1 ]
    [ -4  4 ]

associated to the eigenvalue 2. Let us set v2 = v = (1, 0)^T. Then

v1 = (A − 2I)v2 = (-2, -4)^T

is a generalized eigenvector of index 1 (i.e., an ordinary eigenvector), and {v1, v2} is a chain of length 2.
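The chain of Example 2.8 can be reproduced mechanically, since the whole chain is determined by its top (a point Remark 2.9 makes explicit). A sympy sketch, again our own illustration:

```python
import sympy as sp

A = sp.Matrix([[0, 1],
               [-4, 4]])
N = A - 2 * sp.eye(2)       # A - 2I

v2 = sp.Matrix([1, 0])      # top of the chain: index 2
v1 = N * v2                 # bottom of the chain: an ordinary eigenvector

assert v1 == sp.Matrix([-2, -4])
assert N * v1 == sp.zeros(2, 1)   # (A - 2I)v1 = 0, so v1 has index 1
assert N * v2 != sp.zeros(2, 1)   # (A - 2I)v2 != 0, so v2 has index exactly 2
```
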

Remark 2.9. It is important to note that a chain of generalized eigenvectors {v1, . . . , vk} is entirely determined by the vector vk at the top of the chain. For once we have chosen vk, there are no other choices to be made: the vector vk−1 is determined by the equation vk−1 = (A − λI)vk; then the vector vk−2 is determined by the equation vk−2 = (A − λI)vk−1, etc.

With this concept in hand, let us return to JCF. As we have seen, a matrix J in JCF has a number of blocks J1, J2, . . . , Jℓ, called Jordan blocks, along the diagonal. Let us begin our analysis with the case when J consists of a single Jordan block. So suppose J is a k-by-k matrix

J = [ λ  1               ]
    [    λ  1            ]
    [       .  .         ]
    [          .  .      ]
    [             λ  1   ]
    [                λ   ] .


Then,

J − λI = [ 0  1               ]
         [    0  1            ]
         [       .  .         ]
         [          .  .      ]
         [             0  1   ]
         [                0   ] .

Let e1 = (1, 0, 0, . . . , 0)^T, e2 = (0, 1, 0, . . . , 0)^T, e3 = (0, 0, 1, . . . , 0)^T, . . . , ek = (0, 0, 0, . . . , 1)^T. Then direct calculation shows:

(J − λI)ek = ek−1
(J − λI)ek−1 = ek−2
  ...
(J − λI)e2 = e1
(J − λI)e1 = 0

and so we see that {e1, . . . , ek} is a chain of generalized eigenvectors. We also note that {e1, . . . , ek} is a basis for C^k, and so

Ẽλ = C^k .

We first see that the situation is very analogous when we consider any k-by-k matrix with a single chain of generalized eigenvectors of length k.

Proposition 2.10. Let {v1, . . . , vk} be a chain of generalized eigenvectors of length k associated to the eigenvalue λ of a matrix A. Then {v1, . . . , vk} is linearly independent.

Proof. Suppose we have a linear combination

c1v1 + c2v2 + · · · + ck−1vk−1 + ckvk = 0 .

We must show each ci = 0.

By the definition of a chain, vk−i = (A − λI)^i vk for each i, so we may write this equation as

c1(A − λI)^(k−1) vk + c2(A − λI)^(k−2) vk + · · · + ck−1(A − λI)vk + ckvk = 0 .


Now let us multiply this equation on the left by (A − λI)^(k−1). Then we obtain the equation

c1(A − λI)^(2k−2) vk + c2(A − λI)^(2k−3) vk + · · · + ck−1(A − λI)^k vk + ck(A − λI)^(k−1) vk = 0 .

Now (A − λI)^(k−1) vk = v1 ≠ 0. However, (A − λI)^k vk = 0, and then also (A − λI)^(k+1) vk = (A − λI)(A − λI)^k vk = (A − λI)(0) = 0, and then similarly (A − λI)^(k+2) vk = 0, . . . , (A − λI)^(2k−2) vk = 0, so every term except the last one is zero and this equation becomes

ckv1 = 0 .

Since v1 ≠ 0, this shows ck = 0, so our linear combination is

c1v1 + c2v2 + · · · + ck−1vk−1 = 0 .

Repeat the same argument, this time multiplying by (A − λI)^(k−2) instead of (A − λI)^(k−1). Then we obtain the equation

ck−1v1 = 0 ,

and, since v1 ≠ 0, this shows that ck−1 = 0 as well. Keep going to get

c1 = c2 = · · · = ck−1 = ck = 0 ,

so {v1, . . . , vk} is linearly independent. □

Theorem 2.11. Let A be a k-by-k matrix and suppose that C^k has a basis {v1, . . . , vk} consisting of a single chain of generalized eigenvectors of length k associated to an eigenvalue a. Then

A = PJP^(-1)

where

J = [ a  1               ]
    [    a  1            ]
    [       .  .         ]
    [          .  .      ]
    [             a  1   ]
    [                a   ]

is a matrix consisting of a single Jordan block and

P = [ v1 | v2 | . . . | vk ]

is a matrix whose columns are generalized eigenvectors forming a chain.


Proof. We give a proof by direct computation here. (Note the similarity of this proof to the proof of Theorem 1.14.) For a more conceptual proof, see Theorem 1.11 in Appendix A.

Let P be the given matrix. We will first show by direct computation that AP = PJ. It will be convenient to write

J = [ j1 | j2 | . . . | jk ]

and we see that ji, the ith column of J, is the vector with 1 in the (i − 1)st position, a in the ith position, and 0 elsewhere.

We show that AP = PJ by showing that their corresponding columns are equal. Now

AP = A [ v1 | v2 | . . . | vk ]

so the ith column of AP is Avi. But

Avi = (A − aI + aI)vi
    = (A − aI)vi + aIvi
    = vi−1 + avi for i > 1,  = avi for i = 1 .

On the other hand,

PJ = [ v1 | v2 | . . . | vk ] J

and the ith column of PJ is

Pji = [ v1 | v2 | . . . | vk ] ji .

Remembering what the vector ji is, and multiplying, we see that

Pji = vi−1 + avi for i > 1,  = avi for i = 1

as well.


Thus, every column of AP is equal to the corresponding column of PJ, so

AP = PJ .

But Proposition 2.10 shows that the columns of P are linearly independent, so P is invertible. Multiplying on the right by P^(-1), we see that

A = PJP^(-1) . □

Example 2.12. Applying Theorem 2.11 to the matrix

A = [  0  1 ]
    [ -4  4 ]

of Examples 2.6 and 2.8, we see that A = PJP^(-1), where

P = [ -2  1 ]        J = [ 2  1 ]
    [ -4  0 ]   and      [ 0  2 ] .
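Again the factorization can be checked directly; a sympy sketch of Example 2.12 (our illustration, not the book's code):

```python
import sympy as sp

A = sp.Matrix([[0, 1],
               [-4, 4]])
P = sp.Matrix([[-2, 1],
               [-4, 0]])    # columns: the chain v1, v2 from Example 2.8
J = sp.Matrix([[2, 1],
               [0, 2]])     # a single 2-by-2 Jordan block for the eigenvalue 2

assert P * J * P.inv() == A
```
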

Here is the key theorem to which we have been heading. This theorem is one of the most important (and useful) theorems in linear algebra.

Theorem 2.13. Let A be any square matrix defined over the complex numbers. Then A is similar to a matrix in Jordan Canonical Form. More precisely, A = PJP^(-1), for some matrix J in Jordan Canonical Form. The diagonal entries of J consist of eigenvalues of A, and P is an invertible matrix whose columns are chains of generalized eigenvectors of A.

Proof. (Rough outline) In general, the JCF of a matrix A does not consist of a single block, but will have a number of blocks, of varying sizes and associated to varying eigenvalues.

But in this situation we merely have to “assemble” the various blocks (to get the matrix J) and the various chains of generalized eigenvectors (to get a basis and hence the matrix P). Actually, the word “merely” is a bit misleading, as the argument that we can do so is, in fact, a subtle one, and we shall not give it here. □

In lieu of proving Theorem 2.13, we shall give a number of examples that illustrate the situation. In fact, in order to avoid complicated notation we shall merely illustrate the situation for 2-by-2 and 3-by-3 matrices.

Theorem 2.14. Let A be a 2-by-2 matrix. Then one of the following situations applies:


(i) A has two eigenvalues, a and b, each of algebraic multiplicity 1. Let u be an eigenvector associated to the eigenvalue a and let v be an eigenvector associated to the eigenvalue b. Then A = PJP^(-1) with

J = [ a  0 ]
    [ 0  b ]   and   P = [ u | v ] .

(Note, in this case, A is diagonalizable.)

(ii) A has a single eigenvalue a of algebraic multiplicity 2.

(a) A has two linearly independent eigenvectors u and v. Then A = PJP^(-1) with

J = [ a  0 ]
    [ 0  a ]   and   P = [ u | v ] .

(Note, in this case, A is diagonalizable. In fact, in this case Ea = C^2 and A itself is the matrix

[ a  0 ]
[ 0  a ] .)

(b) A has a single chain {v1, v2} of generalized eigenvectors. Then A = PJP^(-1) with

J = [ a  1 ]
    [ 0  a ]   and   P = [ v1 | v2 ] .

Theorem 2.15. Let A be a 3-by-3 matrix. Then one of the following situations applies:

(i) A has three eigenvalues, a, b, and c, each of algebraic multiplicity 1. Let u be an eigenvector associated to the eigenvalue a, v be an eigenvector associated to the eigenvalue b, and w be an eigenvector associated to the eigenvalue c. Then A = PJP^{-1} with

J = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} u \mid v \mid w \end{bmatrix}.

(Note, in this case, A is diagonalizable.)

(ii) A has an eigenvalue a of algebraic multiplicity 2 and an eigenvalue b of algebraic multiplicity 1.

(a) A has two independent eigenvectors, u and v, associated to the eigenvalue a. Let w be an eigenvector associated to the eigenvalue b. Then A = PJP^{-1} with

J = \begin{bmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & b \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} u \mid v \mid w \end{bmatrix}.

(Note, in this case, A is diagonalizable.)


(b) A has a single chain {u_1, u_2} of generalized eigenvectors associated to the eigenvalue a. Let v be an eigenvector associated to the eigenvalue b. Then A = PJP^{-1} with

J = \begin{bmatrix} a & 1 & 0 \\ 0 & a & 0 \\ 0 & 0 & b \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} u_1 \mid u_2 \mid v \end{bmatrix}.

(iii) A has a single eigenvalue a of algebraic multiplicity 3.

(a) A has three linearly independent eigenvectors, u, v, and w. Then A = PJP^{-1} with

J = \begin{bmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} u \mid v \mid w \end{bmatrix}.

(Note, in this case, A is diagonalizable. In fact, in this case E_a = C^3 and A itself is the matrix \begin{bmatrix} a & 0 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \end{bmatrix}.)

(b) A has a chain {u_1, u_2} of generalized eigenvectors and an eigenvector v with {u_1, u_2, v} linearly independent. Then A = PJP^{-1} with

J = \begin{bmatrix} a & 1 & 0 \\ 0 & a & 0 \\ 0 & 0 & a \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} u_1 \mid u_2 \mid v \end{bmatrix}.

(c) A has a single chain {u_1, u_2, u_3} of generalized eigenvectors. Then A = PJP^{-1} with

J = \begin{bmatrix} a & 1 & 0 \\ 0 & a & 1 \\ 0 & 0 & a \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} u_1 \mid u_2 \mid u_3 \end{bmatrix}.

Remark 2.16. Suppose that A has JCF J = aI, a scalar multiple of the identity matrix. Then A = PJP^{-1} = P(aI)P^{-1} = a(PIP^{-1}) = aI = J. This justifies the parenthetical remark in Theorems 2.14 (ii) (a) and 2.15 (iii) (a).

Remark 2.17. Note that Theorems 2.14 (i), 2.14 (ii) (a), 2.15 (i), 2.15 (ii) (a), and 2.15 (iii) (a) are all special cases of Theorem 1.14, and in fact Theorems 2.14 (i) and 2.15 (i) are both special cases of Corollary 1.15.


Now we would like to apply Theorems 2.14 and 2.15. In order to do so, we need to have an effective method to determine which of the cases we are in, and we give that here (without proof).

Definition 2.18. Let λ be an eigenvalue of A. Then for any positive integer i,

E_\lambda^i = \{v \mid (A - \lambda I)^i v = 0\} = \mathrm{Ker}((A - \lambda I)^i).

Note that E_\lambda^i consists of generalized eigenvectors of index at most i (and the 0 vector), and is a subspace. Note also that

E_\lambda = E_\lambda^1 \subseteq E_\lambda^2 \subseteq \cdots .

In general, the JCF of A is determined by the dimensions of all the spaces E_\lambda^i, but this determination can be a bit complicated. For eigenvalues of multiplicity at most 3, however, the situation is simpler: we need only consider the eigenspaces E_\lambda. This is a consequence of the following general result.

Proposition 2.19. Let λ be an eigenvalue of A. Then the number of blocks in the JCF of A corresponding to λ is equal to dim E_λ, i.e., to the geometric multiplicity of λ.

Proof. (Outline) Suppose there are ℓ such blocks. Since each block corresponds to a chain of generalized eigenvectors, there are ℓ such chains. Now the bottom of each chain is an (ordinary) eigenvector, so we get ℓ eigenvectors in this way. It can be shown that these ℓ eigenvectors are always linearly independent and that they always span E_λ, i.e., that they are a basis of E_λ. Thus, E_λ has a basis consisting of ℓ vectors, so dim E_λ = ℓ. □

We can now determine the JCF of 1-by-1, 2-by-2, and 3-by-3 matrices, using the following consequences of this proposition.

Corollary 2.20. Let λ be an eigenvalue of A of algebraic multiplicity 1. Then dim E_λ^1 = 1, i.e., λ has geometric multiplicity 1, and the submatrix of the JCF of A corresponding to the eigenvalue λ is a single 1-by-1 block.

Corollary 2.21. Let λ be an eigenvalue of A of algebraic multiplicity 2. Then there are the following possibilities:

(a) dim E_λ^1 = 2, i.e., λ has geometric multiplicity 2. In this case, the submatrix of the JCF of A corresponding to the eigenvalue λ consists of two 1-by-1 blocks.


(b) dim E_λ^1 = 1, i.e., λ has geometric multiplicity 1. Also, dim E_λ^2 = 2. In this case, the submatrix of the JCF of A corresponding to the eigenvalue λ consists of a single 2-by-2 block.

Corollary 2.22. Let λ be an eigenvalue of A of algebraic multiplicity 3. Then there are the following possibilities:

(a) dim E_λ^1 = 3, i.e., λ has geometric multiplicity 3. In this case, the submatrix of the JCF of A corresponding to the eigenvalue λ consists of three 1-by-1 blocks.

(b) dim E_λ^1 = 2, i.e., λ has geometric multiplicity 2. Also, dim E_λ^2 = 3. In this case, the submatrix of the Jordan Canonical Form of A corresponding to the eigenvalue λ consists of a 2-by-2 block and a 1-by-1 block.

(c) dim E_λ^1 = 1, i.e., λ has geometric multiplicity 1. Also, dim E_λ^2 = 2, and dim E_λ^3 = 3. In this case, the submatrix of the Jordan Canonical Form of A corresponding to the eigenvalue λ consists of a single 3-by-3 block.
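These corollaries reduce the determination of the JCF to computing the numbers dim E_λ^i. Since dim Ker(M) = n − rank(M) for an n-by-n matrix M, these dimensions are easy to compute by machine; here is a short sketch using numpy (an addition, not part of the text), applied to the matrix of Example 2.27 below:

```python
import numpy as np

A = np.array([[5.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [-7.0, 1.0, 0.0]])   # the matrix of Example 2.27 below
lam = 2.0                          # its only eigenvalue
n = A.shape[0]
M = A - lam * np.eye(n)

# dim E^i_lambda = n - rank((A - lambda I)^i)
dims = [int(n - np.linalg.matrix_rank(np.linalg.matrix_power(M, i)))
        for i in (1, 2, 3)]
print(dims)  # [1, 2, 3]: geometric multiplicity 1, so a single 3-by-3 block
```

The first entry being 1 places us in case (c) of Corollary 2.22, exactly as the hand computation in Example 2.27 will show.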

Now we shall do several examples.

Example 2.23. A = \begin{bmatrix} 2 & -3 & -3 \\ 2 & -2 & -2 \\ -2 & 1 & 1 \end{bmatrix}.

A has characteristic polynomial det(λI − A) = (λ + 1)(λ)(λ − 2). Thus, A has eigenvalues −1, 0, and 2, each of multiplicity one, and so we are in the situation of Theorem 2.15 (i). Computation shows that the eigenspace E_{−1} = Ker(A − (−I)) has basis {(1, 0, 1)^T}, the eigenspace E_0 = Ker(A) has basis {(0, 1, −1)^T}, and the eigenspace E_2 = Ker(A − 2I) has basis {(−1, −1, 1)^T}. Hence, we see that

\begin{bmatrix} 2 & -3 & -3 \\ 2 & -2 & -2 \\ -2 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix}^{-1}.
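As a numerical sanity check (an addition, not part of the text), the factorization just displayed can be verified by multiplying out P J P^{-1} with numpy:

```python
import numpy as np

A = np.array([[2, -3, -3], [2, -2, -2], [-2, 1, 1]], dtype=float)
P = np.array([[1, 0, -1], [0, -1, -1], [1, 1, 1]], dtype=float)  # eigenvector columns
J = np.diag([-1.0, 0.0, 2.0])

# A = P J P^{-1} should hold (up to floating-point roundoff)
assert np.allclose(P @ J @ np.linalg.inv(P), A)
print("A = P J P^{-1} confirmed")
```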

Example 2.24. A = \begin{bmatrix} 3 & 1 & 1 \\ 2 & 4 & 2 \\ 1 & 1 & 3 \end{bmatrix}.


A has characteristic polynomial det(λI − A) = (λ − 2)^2(λ − 6). Thus, A has an eigenvalue 2 of multiplicity 2 and an eigenvalue 6 of multiplicity 1. Computation shows that the eigenspace E_2 = Ker(A − 2I) has basis {(−1, 1, 0)^T, (−1, 0, 1)^T}, so dim E_2 = 2 and we are in the situation of Corollary 2.21 (a). Further computation shows that the eigenspace E_6 = Ker(A − 6I) has basis {(1, 2, 1)^T}. Hence, we see that

\begin{bmatrix} 3 & 1 & 1 \\ 2 & 4 & 2 \\ 1 & 1 & 3 \end{bmatrix} = \begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 6 \end{bmatrix} \begin{bmatrix} -1 & -1 & 1 \\ 1 & 0 & 2 \\ 0 & 1 & 1 \end{bmatrix}^{-1}.

Example 2.25. A = \begin{bmatrix} 2 & 1 & 1 \\ 2 & 1 & -2 \\ -1 & 0 & -2 \end{bmatrix}.

A has characteristic polynomial det(λI − A) = (λ + 1)^2(λ − 3). Thus, A has an eigenvalue −1 of multiplicity 2 and an eigenvalue 3 of multiplicity 1. Computation shows that the eigenspace E_{−1} = Ker(A − (−I)) has basis {(−1, 2, 1)^T}, so dim E_{−1} = 1 and we are in the situation of Corollary 2.21 (b). Then we further compute that E_{−1}^2 = Ker((A − (−I))^2) has basis {(−1, 2, 0)^T, (0, 0, 1)^T}, and therefore is two-dimensional, as we expect. More to the point, we may choose any generalized eigenvector of index 2, i.e., any vector in E_{−1}^2 that is not in E_{−1}^1, as the top of a chain. We choose u_2 = (0, 0, 1)^T, and then we have u_1 = (A − (−I))u_2 = (1, −2, −1)^T, and {u_1, u_2} form a chain.

We also compute that, for the eigenvalue 3, the eigenspace E_3 has basis {v = (−5, −6, 1)^T}. Hence, we see that

\begin{bmatrix} 2 & 1 & 1 \\ 2 & 1 & -2 \\ -1 & 0 & -2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & -5 \\ -2 & 0 & -6 \\ -1 & 1 & 1 \end{bmatrix} \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} 1 & 0 & -5 \\ -2 & 0 & -6 \\ -1 & 1 & 1 \end{bmatrix}^{-1}.
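The chain construction just carried out can be checked mechanically; here is a sketch with numpy (an addition, not from the text):

```python
import numpy as np

A = np.array([[2, 1, 1], [2, 1, -2], [-1, 0, -2]], dtype=float)
N = A + np.eye(3)                 # N = A - (-1)I
u2 = np.array([0.0, 0.0, 1.0])    # chosen top of the chain
u1 = N @ u2                       # bottom of the chain

assert np.allclose(u1, [1, -2, -1])   # matches the u1 computed above
assert np.allclose(N @ u1, 0)         # u1 is an ordinary eigenvector for -1
print("chain {u1, u2} verified")
```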


Example 2.26. A = \begin{bmatrix} 2 & 1 & 1 \\ -2 & -1 & -2 \\ 1 & 1 & 2 \end{bmatrix}.

A has characteristic polynomial det(λI − A) = (λ − 1)^3, so A has one eigenvalue 1 of multiplicity three. Computation shows that E_1 = Ker(A − I) has basis {(−1, 0, 1)^T, (−1, 1, 0)^T}, so dim E_1 = 2 and we are in the situation of Corollary 2.22 (b). Computation then shows that dim E_1^2 = 3 (i.e., (A − I)^2 = 0 and E_1^2 is all of C^3) with basis {(1, 0, 0)^T, (0, 1, 0)^T, (0, 0, 1)^T}. We may choose u_2 to be any vector in E_1^2 that is not in E_1^1, and we shall choose u_2 = (1, 0, 0)^T. Then u_1 = (A − I)u_2 = (1, −2, 1)^T, and {u_1, u_2} form a chain. For the third vector, v, we may choose any vector in E_1 such that {u_1, v} is linearly independent. We choose v = (−1, 0, 1)^T. Hence, we see that

\begin{bmatrix} 2 & 1 & 1 \\ -2 & -1 & -2 \\ 1 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 1 & -1 \\ -2 & 0 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & -1 \\ -2 & 0 & 0 \\ 1 & 0 & 1 \end{bmatrix}^{-1}.

Example 2.27. A = \begin{bmatrix} 5 & 0 & 1 \\ 1 & 1 & 0 \\ -7 & 1 & 0 \end{bmatrix}.

A has characteristic polynomial det(λI − A) = (λ − 2)^3, so A has one eigenvalue 2 of multiplicity three. Computation shows that E_2 = Ker(A − 2I) has basis {(−1, −1, 3)^T}, so dim E_2^1 = 1 and we are in the situation of Corollary 2.22 (c). Then computation shows that E_2^2 = Ker((A − 2I)^2) has basis {(−1, 0, 2)^T, (−1, 2, 0)^T}. (Note that (−1, −1, 3)^T = (3/2)(−1, 0, 2)^T − (1/2)(−1, 2, 0)^T.) Computation then


shows that dim E_2^3 = 3 (i.e., (A − 2I)^3 = 0 and E_2^3 is all of C^3) with basis {(1, 0, 0)^T, (0, 1, 0)^T, (0, 0, 1)^T}.

We may choose u_3 to be any vector in C^3 that is not in E_2^2, and we shall choose u_3 = (1, 0, 0)^T. Then u_2 = (A − 2I)u_3 = (3, 1, −7)^T and u_1 = (A − 2I)u_2 = (2, 2, −6)^T, and then {u_1, u_2, u_3} form a chain.

Hence, we see that

\begin{bmatrix} 5 & 0 & 1 \\ 1 & 1 & 0 \\ -7 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 3 & 1 \\ 2 & 1 & 0 \\ -6 & -7 & 0 \end{bmatrix} \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} 2 & 3 & 1 \\ 2 & 1 & 0 \\ -6 & -7 & 0 \end{bmatrix}^{-1}.

As we have mentioned, we need to work over the complex numbers in order for the theory of JCF to fully apply. But there is an analog over the real numbers, and we conclude this section by stating it.

Theorem 2.28. Let A be a real square matrix (i.e., a square matrix with all entries real numbers), and suppose that all of the eigenvalues of A are real numbers. Then A is similar to a real matrix in Jordan Canonical Form. More precisely, A = PJP^{-1} with P and J real matrices, for some matrix J in Jordan Canonical Form. The diagonal entries of J consist of eigenvalues of A, and P is an invertible matrix whose columns are chains of generalized eigenvectors of A.
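For checking answers to the exercises below, it may help to know that a computer algebra system can produce P and J directly; a sketch using sympy (an addition to the text), applied to Exercise 1:

```python
import sympy as sp

A = sp.Matrix([[75, 56], [-90, -67]])   # Exercise 1
P, J = A.jordan_form()                  # sympy returns P, J with A = P J P^{-1}

assert A == P * J * P.inv()
assert sorted(J[i, i] for i in range(2)) == [3, 5]   # eigenvalues 3 and 5
print(J)
```

Since sympy may order the eigenvalues (and scale the eigenvectors) differently than your hand computation, compare J block-by-block rather than entry-by-entry.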

EXERCISES FOR CHAPTER 1

For each matrix A, write A = PJP^{-1} with P an invertible matrix and J a matrix in JCF.

1. A = \begin{bmatrix} 75 & 56 \\ -90 & -67 \end{bmatrix}, det(λI − A) = (λ − 3)(λ − 5).

2. A = \begin{bmatrix} -50 & 99 \\ -20 & 39 \end{bmatrix}, det(λI − A) = (λ + 6)(λ + 5).

3. A = \begin{bmatrix} -18 & 9 \\ -49 & 24 \end{bmatrix}, det(λI − A) = (λ − 3)^2.


4. A = \begin{bmatrix} 1 & 1 \\ -16 & 9 \end{bmatrix}, det(λI − A) = (λ − 5)^2.

5. A = \begin{bmatrix} 2 & 1 \\ -25 & 12 \end{bmatrix}, det(λI − A) = (λ − 7)^2.

6. A = \begin{bmatrix} -15 & 9 \\ -25 & 15 \end{bmatrix}, det(λI − A) = λ^2.

7. A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & -3 \\ 1 & -1 & 0 \end{bmatrix}, det(λI − A) = (λ + 1)(λ − 1)(λ − 3).

8. A = \begin{bmatrix} 3 & 0 & 2 \\ 1 & 3 & 1 \\ 0 & 1 & 1 \end{bmatrix}, det(λI − A) = (λ − 1)(λ − 2)(λ − 4).

9. A = \begin{bmatrix} 5 & 8 & 16 \\ 4 & 1 & 8 \\ -4 & -4 & -11 \end{bmatrix}, det(λI − A) = (λ + 3)^2(λ − 1).

10. A = \begin{bmatrix} 4 & 2 & 3 \\ -1 & 1 & -3 \\ 2 & 4 & 9 \end{bmatrix}, det(λI − A) = (λ − 3)^2(λ − 8).

11. A = \begin{bmatrix} 5 & 2 & 1 \\ -1 & 2 & -1 \\ -1 & -2 & 3 \end{bmatrix}, det(λI − A) = (λ − 4)^2(λ − 2).

12. A = \begin{bmatrix} 8 & -3 & -3 \\ 4 & 0 & -2 \\ -2 & 1 & 3 \end{bmatrix}, det(λI − A) = (λ − 2)^2(λ − 7).

13. A = \begin{bmatrix} -3 & 1 & -1 \\ -7 & 5 & -1 \\ -6 & 6 & -2 \end{bmatrix}, det(λI − A) = (λ + 2)^2(λ − 4).

14. A = \begin{bmatrix} 3 & 0 & 0 \\ 9 & -5 & -18 \\ -4 & 4 & 12 \end{bmatrix}, det(λI − A) = (λ − 3)^2(λ − 4).


15. A = \begin{bmatrix} -6 & 9 & 0 \\ -6 & 6 & -2 \\ 9 & -9 & 3 \end{bmatrix}, det(λI − A) = λ^2(λ − 3).

16. A = \begin{bmatrix} -18 & 42 & 168 \\ 1 & -7 & -40 \\ -2 & 6 & 27 \end{bmatrix}, det(λI − A) = (λ − 3)^2(λ + 4).

17. A = \begin{bmatrix} -1 & 1 & -1 \\ -10 & 6 & -5 \\ -6 & 3 & -2 \end{bmatrix}, det(λI − A) = (λ − 1)^3.

18. A = \begin{bmatrix} 0 & -4 & 1 \\ 2 & -6 & 1 \\ 4 & -8 & 0 \end{bmatrix}, det(λI − A) = (λ + 2)^3.

19. A = \begin{bmatrix} -4 & 1 & 2 \\ -5 & 1 & 3 \\ -7 & 2 & 3 \end{bmatrix}, det(λI − A) = λ^3.

20. A = \begin{bmatrix} -4 & -2 & 5 \\ -1 & -1 & 1 \\ -2 & -1 & 2 \end{bmatrix}, det(λI − A) = (λ + 1)^3.


CHAPTER 2

Solving Systems of Linear Differential Equations

2.1 HOMOGENEOUS SYSTEMS WITH CONSTANT COEFFICIENTS

We will now see how to use Jordan Canonical Form (JCF) to solve systems Y′ = AY. We begin by describing the strategy we will follow throughout this section.

Consider the matrix system Y′ = AY.

Step 1. Write A = PJP^{-1} with J in JCF, so the system becomes

Y′ = (PJP^{-1})Y
Y′ = PJ(P^{-1}Y)
P^{-1}Y′ = J(P^{-1}Y)
(P^{-1}Y)′ = J(P^{-1}Y).

(Note that, since P^{-1} is a constant matrix, we have that (P^{-1}Y)′ = P^{-1}Y′.)

Step 2. Set Z = P−1Y , so this system becomes

Z′ = JZ

and solve this system for Z.

Step 3. Since Z = P−1Y , we have that

Y = PZ

is the solution to our original system.

Examining this strategy, we see that we already know how to carry out Step 1, and also that Step 3 is very easy: it is just matrix multiplication. Thus, the key to success here is being able to carry out Step 2. This is where JCF comes in. As we shall see, it is (relatively) easy to solve Z′ = JZ when J is a matrix in JCF.
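The three steps can be carried out end to end by machine; here is a sketch with sympy (an addition, not from the text), where Step 2 is done with the matrix exponential M_Z = e^{Jx}, which is precisely the fundamental matrix this section constructs by hand. The matrix used is that of Example 1.2 below.

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
A = sp.Matrix([[5, -7], [2, -4]])   # the matrix of Example 1.2 below
P, J = A.jordan_form()              # Step 1: A = P J P^{-1}
MZ = (J * x).exp()                  # Step 2: M_Z solves Z' = J Z
Y = P * MZ * sp.Matrix([c1, c2])    # Step 3: Y = P Z

# The assembled Y really does satisfy Y' = A Y.
assert sp.simplify(Y.diff(x) - A * Y) == sp.zeros(2, 1)
print("Y' = A Y verified symbolically")
```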


You will note that throughout this section, in solving Z′ = JZ, we write the solution as Z = M_Z C, where M_Z is a matrix of functions, called the fundamental matrix of the system, and C is a vector of arbitrary constants. The reason for this will become clear later. (See Remarks 1.12 and 1.14.)

Although it is not logically necessary (we may regard a diagonal matrix as a matrix in JCF in which all the Jordan blocks are 1-by-1 blocks), it is illuminating to handle the case when J is diagonal first. Here the solution is very easy.

Theorem 1.1. Let J be a k-by-k diagonal matrix,

J = \begin{bmatrix} a_1 & & & 0 \\ & a_2 & & \\ & & \ddots & \\ 0 & & & a_k \end{bmatrix}.

Then the system Z′ = JZ has the solution

Z = \begin{bmatrix} e^{a_1 x} & & & 0 \\ & e^{a_2 x} & & \\ & & \ddots & \\ 0 & & & e^{a_k x} \end{bmatrix} C = M_Z C

where C = (c_1, c_2, \ldots, c_k)^T is a vector of arbitrary constants c_1, c_2, \ldots, c_k.

Proof. Multiplying out, we see that the system Z′ = JZ is just the system

\begin{bmatrix} z_1′ \\ z_2′ \\ \vdots \\ z_k′ \end{bmatrix} = \begin{bmatrix} a_1 z_1 \\ a_2 z_2 \\ \vdots \\ a_k z_k \end{bmatrix}.


But this system is "uncoupled", i.e., the equation for z_i′ only involves z_i and none of the other functions. Now this equation is very familiar. In general, the differential equation z′ = az has solution z = ce^{ax}, and applying that here we find that Z′ = JZ has solution

Z = \begin{bmatrix} c_1 e^{a_1 x} \\ c_2 e^{a_2 x} \\ \vdots \\ c_k e^{a_k x} \end{bmatrix},

which is exactly the above product M_Z C. □
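Theorem 1.1 can also be confirmed symbolically for, say, k = 2 with arbitrary a_1 and a_2; a sketch using sympy (an addition, not part of the text):

```python
import sympy as sp

x, c1, c2, a1, a2 = sp.symbols('x c1 c2 a1 a2')
J = sp.diag(a1, a2)
MZ = sp.diag(sp.exp(a1 * x), sp.exp(a2 * x))
Z = MZ * sp.Matrix([c1, c2])

# Each component satisfies z_i' = a_i z_i, i.e., Z' = J Z.
assert sp.simplify(Z.diff(x) - J * Z) == sp.zeros(2, 1)
print("Z' = J Z holds for arbitrary a1, a2, c1, c2")
```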

Example 1.2. Consider the system

Y′ = AY where A = \begin{bmatrix} 5 & -7 \\ 2 & -4 \end{bmatrix}.

We saw in Example 1.16 in Chapter 1 that A = PJP^{-1} with

P = \begin{bmatrix} 7 & 1 \\ 2 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}.

Then Z′ = JZ has solution

Z = \begin{bmatrix} e^{3x} & 0 \\ 0 & e^{-2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = M_Z C = \begin{bmatrix} c_1 e^{3x} \\ c_2 e^{-2x} \end{bmatrix}

and so Y = PZ = PM_Z C, i.e.,

Y = \begin{bmatrix} 7 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} e^{3x} & 0 \\ 0 & e^{-2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 7e^{3x} & e^{-2x} \\ 2e^{3x} & e^{-2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 7c_1 e^{3x} + c_2 e^{-2x} \\ 2c_1 e^{3x} + c_2 e^{-2x} \end{bmatrix}.

Example 1.3. Consider the system

Y′ = AY where A = \begin{bmatrix} 2 & -3 & -3 \\ 2 & -2 & -2 \\ -2 & 1 & 1 \end{bmatrix}.


We saw in Example 2.23 in Chapter 1 that A = PJP^{-1} with

P = \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{bmatrix}.

Then Z′ = JZ has solution

Z = \begin{bmatrix} e^{-x} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = M_Z C

and so Y = PZ = PM_Z C, i.e.,

Y = \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-x} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} e^{-x} & 0 & -e^{2x} \\ 0 & -1 & -e^{2x} \\ e^{-x} & 1 & e^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} c_1 e^{-x} - c_3 e^{2x} \\ -c_2 - c_3 e^{2x} \\ c_1 e^{-x} + c_2 + c_3 e^{2x} \end{bmatrix}.

We now see how to use JCF to solve systems Y′ = AY where the coefficient matrix A is not diagonalizable.

The key to understanding systems is to investigate a system Z′ = JZ where J is a matrix consisting of a single Jordan block. Here the solution is not as easy as in Theorem 1.1, but it is still not too hard.

Theorem 1.4. Let J be a k-by-k Jordan block with eigenvalue a,

J = \begin{bmatrix} a & 1 & & & 0 \\ & a & 1 & & \\ & & \ddots & \ddots & \\ & & & a & 1 \\ 0 & & & & a \end{bmatrix}.


Then the system Z′ = JZ has the solution

Z = e^{ax} \begin{bmatrix} 1 & x & x^2/2! & x^3/3! & \cdots & x^{k-1}/(k-1)! \\ & 1 & x & x^2/2! & \cdots & x^{k-2}/(k-2)! \\ & & 1 & x & \cdots & x^{k-3}/(k-3)! \\ & & & \ddots & & \vdots \\ & & & & & x \\ & & & & & 1 \end{bmatrix} C = M_Z C

where C = (c_1, c_2, \ldots, c_k)^T is a vector of arbitrary constants c_1, c_2, \ldots, c_k.

Proof. We will prove this in the cases k = 1, 2, and 3, which illustrate the pattern. As you will see, the proof is a simple application of the standard technique for solving first-order linear differential equations.

The case k = 1: Here we are considering the system

[z_1′] = [a][z_1],

which is nothing other than the differential equation

z_1′ = az_1.

This differential equation has solution

z_1 = c_1 e^{ax},

which we can certainly write as

[z_1] = e^{ax}[1][c_1].

The case k = 2: Here we are considering the system

\begin{bmatrix} z_1′ \\ z_2′ \end{bmatrix} = \begin{bmatrix} a & 1 \\ 0 & a \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix},

which is nothing other than the pair of differential equations

z_1′ = az_1 + z_2
z_2′ = az_2.


We recognize the second equation as having the solution

z_2 = c_2 e^{ax}

and we substitute this into the first equation to get

z_1′ = az_1 + c_2 e^{ax}.

To solve this, we rewrite this as

z_1′ − az_1 = c_2 e^{ax}

and recognize that this differential equation has integrating factor e^{-ax}. Multiplying by this factor, we find

e^{-ax}(z_1′ − az_1) = c_2
(e^{-ax} z_1)′ = c_2
e^{-ax} z_1 = \int c_2 \, dx = c_1 + c_2 x

so

z_1 = e^{ax}(c_1 + c_2 x).

Thus, our solution is

z_1 = e^{ax}(c_1 + c_2 x)
z_2 = e^{ax} c_2,

which we see we can rewrite as

\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = e^{ax} \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}.

The case k = 3: Here we are considering the system

\begin{bmatrix} z_1′ \\ z_2′ \\ z_3′ \end{bmatrix} = \begin{bmatrix} a & 1 & 0 \\ 0 & a & 1 \\ 0 & 0 & a \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix},

which is nothing other than the triple of differential equations

z_1′ = az_1 + z_2
z_2′ = az_2 + z_3
z_3′ = az_3.


If we just concentrate on the last two equations, we see we are in the k = 2 case. Referring to that case, we see that our solution is

z_2 = e^{ax}(c_2 + c_3 x)
z_3 = e^{ax} c_3.

Substituting the value of z_2 into the equation for z_1, we obtain

z_1′ = az_1 + e^{ax}(c_2 + c_3 x).

To solve this, we rewrite this as

z_1′ − az_1 = e^{ax}(c_2 + c_3 x)

and recognize that this differential equation has integrating factor e^{-ax}. Multiplying by this factor, we find

e^{-ax}(z_1′ − az_1) = c_2 + c_3 x
(e^{-ax} z_1)′ = c_2 + c_3 x
e^{-ax} z_1 = \int (c_2 + c_3 x) \, dx = c_1 + c_2 x + c_3 (x^2/2)

so

z_1 = e^{ax}(c_1 + c_2 x + c_3 (x^2/2)).

Thus, our solution is

z_1 = e^{ax}(c_1 + c_2 x + c_3 (x^2/2))
z_2 = e^{ax}(c_2 + c_3 x)
z_3 = e^{ax} c_3,

which we see we can rewrite as

\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} = e^{ax} \begin{bmatrix} 1 & x & x^2/2 \\ 0 & 1 & x \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}. □

Remark 1.5. Suppose that Z′ = JZ where J is a matrix in JCF but one consisting of several blocks, not just one block. We can see that this system decomposes into several systems, one corresponding to each block, and that these systems are uncoupled, so we may solve them each separately, using Theorem 1.4, and then simply assemble these individual solutions together to obtain a solution of the general system.


We now illustrate this (confining our illustrations to the case that A is not diagonalizable, as we have already illustrated the diagonalizable case).

Example 1.6. Consider the system

Y′ = AY where A = \begin{bmatrix} 0 & 1 \\ -4 & 4 \end{bmatrix}.

We saw in Example 2.12 in Chapter 1 that A = PJP^{-1} with

P = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}.

Then Z′ = JZ has solution

Z = e^{2x} \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} e^{2x} & xe^{2x} \\ 0 & e^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = M_Z C = \begin{bmatrix} c_1 e^{2x} + c_2 xe^{2x} \\ c_2 e^{2x} \end{bmatrix}

and so Y = PZ = PM_Z C, i.e.,

Y = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} \begin{bmatrix} e^{2x} & xe^{2x} \\ 0 & e^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} -2e^{2x} & -2xe^{2x} + e^{2x} \\ -4e^{2x} & -4xe^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} (-2c_1 + c_2)e^{2x} - 2c_2 xe^{2x} \\ -4c_1 e^{2x} - 4c_2 xe^{2x} \end{bmatrix}.

Example 1.7. Consider the system

Y′ = AY where A = \begin{bmatrix} 2 & 1 & 1 \\ 2 & 1 & -2 \\ -1 & 0 & -2 \end{bmatrix}.

We saw in Example 2.25 in Chapter 1 that A = PJP^{-1} with

P = \begin{bmatrix} 1 & 0 & -5 \\ -2 & 0 & -6 \\ -1 & 1 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 3 \end{bmatrix}.


Then Z′ = JZ has solution

Z = \begin{bmatrix} e^{-x} & xe^{-x} & 0 \\ 0 & e^{-x} & 0 \\ 0 & 0 & e^{3x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = M_Z C

and so Y = PZ = PM_Z C, i.e.,

Y = \begin{bmatrix} 1 & 0 & -5 \\ -2 & 0 & -6 \\ -1 & 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-x} & xe^{-x} & 0 \\ 0 & e^{-x} & 0 \\ 0 & 0 & e^{3x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} e^{-x} & xe^{-x} & -5e^{3x} \\ -2e^{-x} & -2xe^{-x} & -6e^{3x} \\ -e^{-x} & -xe^{-x} + e^{-x} & e^{3x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} c_1 e^{-x} + c_2 xe^{-x} - 5c_3 e^{3x} \\ -2c_1 e^{-x} - 2c_2 xe^{-x} - 6c_3 e^{3x} \\ (-c_1 + c_2)e^{-x} - c_2 xe^{-x} + c_3 e^{3x} \end{bmatrix}.

Example 1.8. Consider the system

Y′ = AY where A = \begin{bmatrix} 2 & 1 & 1 \\ -2 & -1 & -2 \\ 1 & 1 & 2 \end{bmatrix}.

We saw in Example 2.26 in Chapter 1 that A = PJP^{-1} with

P = \begin{bmatrix} 1 & 1 & -1 \\ -2 & 0 & 0 \\ 1 & 0 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.

Then Z′ = JZ has solution

Z = \begin{bmatrix} e^{x} & xe^{x} & 0 \\ 0 & e^{x} & 0 \\ 0 & 0 & e^{x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = M_Z C

and so Y = PZ = PM_Z C, i.e.,

Y = \begin{bmatrix} 1 & 1 & -1 \\ -2 & 0 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} e^{x} & xe^{x} & 0 \\ 0 & e^{x} & 0 \\ 0 & 0 & e^{x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}


= \begin{bmatrix} e^{x} & xe^{x} + e^{x} & -e^{x} \\ -2e^{x} & -2xe^{x} & 0 \\ e^{x} & xe^{x} & e^{x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} (c_1 + c_2 - c_3)e^{x} + c_2 xe^{x} \\ -2c_1 e^{x} - 2c_2 xe^{x} \\ (c_1 + c_3)e^{x} + c_2 xe^{x} \end{bmatrix}.

Example 1.9. Consider the system

Y′ = AY where A = \begin{bmatrix} 5 & 0 & 1 \\ 1 & 1 & 0 \\ -7 & 1 & 0 \end{bmatrix}.

We saw in Example 2.27 in Chapter 1 that A = PJP^{-1} with

P = \begin{bmatrix} 2 & 3 & 1 \\ 2 & 1 & 0 \\ -6 & -7 & 0 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{bmatrix}.

Then Z′ = JZ has solution

Z = \begin{bmatrix} e^{2x} & xe^{2x} & (x^2/2)e^{2x} \\ 0 & e^{2x} & xe^{2x} \\ 0 & 0 & e^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = M_Z C

and so Y = PZ = PM_Z C, i.e.,

Y = \begin{bmatrix} 2 & 3 & 1 \\ 2 & 1 & 0 \\ -6 & -7 & 0 \end{bmatrix} \begin{bmatrix} e^{2x} & xe^{2x} & (x^2/2)e^{2x} \\ 0 & e^{2x} & xe^{2x} \\ 0 & 0 & e^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 2e^{2x} & 2xe^{2x} + 3e^{2x} & x^2 e^{2x} + 3xe^{2x} + e^{2x} \\ 2e^{2x} & 2xe^{2x} + e^{2x} & x^2 e^{2x} + xe^{2x} \\ -6e^{2x} & -6xe^{2x} - 7e^{2x} & -3x^2 e^{2x} - 7xe^{2x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} (2c_1 + 3c_2 + c_3)e^{2x} + (2c_2 + 3c_3)xe^{2x} + c_3 x^2 e^{2x} \\ (2c_1 + c_2)e^{2x} + (2c_2 + c_3)xe^{2x} + c_3 x^2 e^{2x} \\ (-6c_1 - 7c_2)e^{2x} + (-6c_2 - 7c_3)xe^{2x} - 3c_3 x^2 e^{2x} \end{bmatrix}.

Wait: here the displayed middle matrix uses x^2 e^{2x} = 2 \cdot (x^2/2) e^{2x}, so each column agrees with P times the corresponding column of M_Z.


We conclude this section by showing how to solve initial value problems. This is just one more step, given what we have already done.

Example 1.10. Consider the initial value problem

Y′ = AY where A = \begin{bmatrix} 0 & 1 \\ -4 & 4 \end{bmatrix}, and Y(0) = \begin{bmatrix} 3 \\ -8 \end{bmatrix}.

In Example 1.6, we saw that this system has the general solution

Y = \begin{bmatrix} (-2c_1 + c_2)e^{2x} - 2c_2 xe^{2x} \\ -4c_1 e^{2x} - 4c_2 xe^{2x} \end{bmatrix}.

Applying the initial condition (i.e., substituting x = 0 in this matrix) gives

\begin{bmatrix} 3 \\ -8 \end{bmatrix} = Y(0) = \begin{bmatrix} -2c_1 + c_2 \\ -4c_1 \end{bmatrix}

with solution

\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 7 \end{bmatrix}.

Substituting these values in the above matrix gives

Y = \begin{bmatrix} 3e^{2x} - 14xe^{2x} \\ -8e^{2x} - 28xe^{2x} \end{bmatrix}.

Example 1.11. Consider the initial value problem

Y′ = AY where A = \begin{bmatrix} 2 & 1 & 1 \\ 2 & 1 & -2 \\ -1 & 0 & -2 \end{bmatrix}, and Y(0) = \begin{bmatrix} 8 \\ 32 \\ 5 \end{bmatrix}.

In Example 1.7, we saw that this system has the general solution

Y = \begin{bmatrix} c_1 e^{-x} + c_2 xe^{-x} - 5c_3 e^{3x} \\ -2c_1 e^{-x} - 2c_2 xe^{-x} - 6c_3 e^{3x} \\ (-c_1 + c_2)e^{-x} - c_2 xe^{-x} + c_3 e^{3x} \end{bmatrix}.

Applying the initial condition (i.e., substituting x = 0 in this matrix) gives

\begin{bmatrix} 8 \\ 32 \\ 5 \end{bmatrix} = Y(0) = \begin{bmatrix} c_1 - 5c_3 \\ -2c_1 - 6c_3 \\ -c_1 + c_2 + c_3 \end{bmatrix}


with solution

\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} -7 \\ 1 \\ -3 \end{bmatrix}.

Substituting these values in the above matrix gives

Y = \begin{bmatrix} -7e^{-x} + xe^{-x} + 15e^{3x} \\ 14e^{-x} - 2xe^{-x} + 18e^{3x} \\ 8e^{-x} - xe^{-x} - 3e^{3x} \end{bmatrix}.

Remark 1.12. There is a variant on our method of solving systems or initial value problems.

We have written our solution of Z′ = JZ as Z = M_Z C. Let us be more explicit here and write this solution as

Z(x) = M_Z(x) C.

This notation reminds us that Z(x) is a vector of functions, M_Z(x) is a matrix of functions, and C is a vector of constants. The key observation is that M_Z(0) = I, the identity matrix. Thus, if we wish to solve the initial value problem

Z′ = JZ, Z(0) = Z_0,

we find that, in general,

Z(x) = M_Z(x) C

and, in particular,

Z_0 = Z(0) = M_Z(0) C = IC = C,

so the solution to this initial value problem is

Z(x) = M_Z(x) Z_0.

Now suppose we wish to solve the system Y′ = AY. Then, if A = PJP^{-1}, we have seen that this system has solution Y = PZ = PM_Z C. Let us manipulate this a bit:

Y = PM_Z C = PM_Z IC = PM_Z (P^{-1}P) C = (PM_Z P^{-1})(PC).

Now let us set M_Y = PM_Z P^{-1}, and also let us set Γ = PC. Note that M_Y is still a matrix of functions, and that Γ is still a vector of arbitrary constants (since P is an invertible constant matrix and C is a vector of arbitrary constants). Thus, with this notation, we see that

Y′ = AY has solution Y = M_Y Γ.


Now suppose we wish to solve the initial value problem

Y′ = AY, Y(0) = Y_0.

Rewriting the above solution of Y′ = AY to explicitly include the independent variable, we see that we have

Y(x) = M_Y(x) Γ

and, in particular,

Y_0 = Y(0) = M_Y(0) Γ = PM_Z(0)P^{-1} Γ = PIP^{-1} Γ = Γ,

so we see that

Y′ = AY, Y(0) = Y_0 has solution Y(x) = M_Y(x) Y_0.

This variant method has pros and cons. It is actually less effective than our original method for solving a single initial value problem (as it requires us to compute P^{-1} and do some extra matrix multiplication), but it has the advantage of expressing the solution directly in terms of the initial conditions. This makes it more effective if the same system Y′ = AY is to be solved for a variety of initial conditions. Also, as we see from Remark 1.14 below, it is of considerable theoretical importance.

Let us now apply this variant method.

Example 1.13. Consider the initial value problem

Y′ = AY where A = \begin{bmatrix} 0 & 1 \\ -4 & 4 \end{bmatrix}, and Y(0) = \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}.

As we have seen in Example 1.6, A = PJP^{-1} with P = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} and J = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}. Then M_Z(x) = \begin{bmatrix} e^{2x} & xe^{2x} \\ 0 & e^{2x} \end{bmatrix} and

M_Y(x) = PM_Z(x)P^{-1} = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} \begin{bmatrix} e^{2x} & xe^{2x} \\ 0 & e^{2x} \end{bmatrix} \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix}^{-1} = \begin{bmatrix} e^{2x} - 2xe^{2x} & xe^{2x} \\ -4xe^{2x} & e^{2x} + 2xe^{2x} \end{bmatrix}

so

Y(x) = M_Y(x) \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} e^{2x} - 2xe^{2x} & xe^{2x} \\ -4xe^{2x} & e^{2x} + 2xe^{2x} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} a_1 e^{2x} + (-2a_1 + a_2)xe^{2x} \\ a_2 e^{2x} + (-4a_1 + 2a_2)xe^{2x} \end{bmatrix}.


In particular, if Y(0) = \begin{bmatrix} 3 \\ -8 \end{bmatrix}, then Y(x) = \begin{bmatrix} 3e^{2x} - 14xe^{2x} \\ -8e^{2x} - 28xe^{2x} \end{bmatrix}, recovering the result of Example 1.10. But also, if Y(0) = \begin{bmatrix} 2 \\ 5 \end{bmatrix}, then Y(x) = \begin{bmatrix} 2e^{2x} + xe^{2x} \\ 5e^{2x} + 2xe^{2x} \end{bmatrix}, and if Y(0) = \begin{bmatrix} -4 \\ 15 \end{bmatrix}, then Y(x) = \begin{bmatrix} -4e^{2x} + 23xe^{2x} \\ 15e^{2x} + 46xe^{2x} \end{bmatrix}, etc.

Remark 1.14. In Section 2.4 we will define the matrix exponential, and, with this definition, M_Z(x) = e^{Jx} and M_Y(x) = PM_Z(x)P^{-1} = e^{Ax}.

EXERCISES FOR SECTION 2.1

For each exercise, see the corresponding exercise in Chapter 1. In each exercise:

(a) Solve the system Y ′ = AY .

(b) Solve the initial value problem Y ′ = AY , Y (0) = Y0.

1. A = \begin{bmatrix} 75 & 56 \\ -90 & -67 \end{bmatrix} and Y_0 = (1, -1)^T.

2. A = \begin{bmatrix} -50 & 99 \\ -20 & 39 \end{bmatrix} and Y_0 = (7, 3)^T.

3. A = \begin{bmatrix} -18 & 9 \\ -49 & 24 \end{bmatrix} and Y_0 = (41, 98)^T.

4. A = \begin{bmatrix} 1 & 1 \\ -16 & 9 \end{bmatrix} and Y_0 = (7, 16)^T.

5. A = \begin{bmatrix} 2 & 1 \\ -25 & 12 \end{bmatrix} and Y_0 = (-10, -75)^T.

6. A = \begin{bmatrix} -15 & 9 \\ -25 & 15 \end{bmatrix} and Y_0 = (50, 100)^T.


7. A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 2 & -3 \\ 1 & -1 & 0 \end{bmatrix} and Y_0 = (6, -10, 10)^T.

8. A = \begin{bmatrix} 3 & 0 & 2 \\ 1 & 3 & 1 \\ 0 & 1 & 1 \end{bmatrix} and Y_0 = (0, 3, 3)^T.

9. A = \begin{bmatrix} 5 & 8 & 16 \\ 4 & 1 & 8 \\ -4 & -4 & -11 \end{bmatrix} and Y_0 = (0, 2, -1)^T.

10. A = \begin{bmatrix} 4 & 2 & 3 \\ -1 & 1 & -3 \\ 2 & 4 & 9 \end{bmatrix} and Y_0 = (3, 2, 1)^T.

11. A = \begin{bmatrix} 5 & 2 & 1 \\ -1 & 2 & -1 \\ -1 & -2 & 3 \end{bmatrix} and Y_0 = (-3, 2, 9)^T.

12. A = \begin{bmatrix} 8 & -3 & -3 \\ 4 & 0 & -2 \\ -2 & 1 & 3 \end{bmatrix} and Y_0 = (5, 8, 7)^T.

13. A = \begin{bmatrix} -3 & 1 & -1 \\ -7 & 5 & -1 \\ -6 & 6 & -2 \end{bmatrix} and Y_0 = (-1, 3, 6)^T.

14. A = \begin{bmatrix} 3 & 0 & 0 \\ 9 & -5 & -18 \\ -4 & 4 & 12 \end{bmatrix} and Y_0 = (2, -1, 1)^T.


15. A = \begin{bmatrix} -6 & 9 & 0 \\ -6 & 6 & -2 \\ 9 & -9 & 3 \end{bmatrix} and Y_0 = (1, 3, -6)^T.

16. A = \begin{bmatrix} -18 & 42 & 168 \\ 1 & -7 & -40 \\ -2 & 6 & 27 \end{bmatrix} and Y_0 = (7, -2, 1)^T.

17. A = \begin{bmatrix} -1 & 1 & -1 \\ -10 & 6 & -5 \\ -6 & 3 & -2 \end{bmatrix} and Y_0 = (3, 10, 18)^T.

18. A = \begin{bmatrix} 0 & -4 & 1 \\ 2 & -6 & 1 \\ 4 & -8 & 0 \end{bmatrix} and Y_0 = (2, 5, 8)^T.

19. A = \begin{bmatrix} -4 & 1 & 2 \\ -5 & 1 & 3 \\ -7 & 2 & 3 \end{bmatrix} and Y_0 = (6, 11, 9)^T.

20. A = \begin{bmatrix} -4 & -2 & 5 \\ -1 & -1 & 1 \\ -2 & -1 & 2 \end{bmatrix} and Y_0 = (9, 5, 8)^T.

2.2 HOMOGENEOUS SYSTEMS WITH CONSTANT COEFFICIENTS: COMPLEX ROOTS

In this section, we show how to solve a homogeneous system Y′ = AY where the characteristic polynomial of A has complex roots. In principle, this is the same as the situation where the characteristic polynomial of A has real roots, which we dealt with in Section 2.1, but in practice, there is an extra step in the solution.


We will begin by doing an example, which will show us where the difficulty lies, and then we will overcome that difficulty. But first, we need some background.

Definition 2.1. For a complex number z, the exponential e^z is defined by

e^z = 1 + z + z^2/2! + z^3/3! + \cdots .

The complex exponential has the following properties.

Theorem 2.2. (1) (Euler) For any θ,

e^{iθ} = \cos(θ) + i\sin(θ).

(2) For any a,

\frac{d}{dz}(e^{az}) = ae^{az}.

(3) For any z_1 and z_2,

e^{z_1 + z_2} = e^{z_1} e^{z_2}.

(4) If z = s + it, then

e^z = e^s(\cos(t) + i\sin(t)).

(5) For any z,

e^{\bar z} = \overline{e^z}.

Proof. For the proof, see Theorem 2.2 in Appendix A. □
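Properties (1), (3), and (5) are easy to spot-check numerically with Python's cmath module (a sketch, an addition to the text):

```python
import cmath
import math

theta = 0.7
# (1) Euler's formula
assert cmath.isclose(cmath.exp(1j * theta),
                     complex(math.cos(theta), math.sin(theta)))
# (3) additivity
z1, z2 = 1.2 + 0.5j, -0.3 + 2.0j
assert cmath.isclose(cmath.exp(z1 + z2), cmath.exp(z1) * cmath.exp(z2))
# (5) conjugation
z = 0.4 - 1.1j
assert cmath.isclose(cmath.exp(z.conjugate()), cmath.exp(z).conjugate())
print("Euler, additivity, and conjugation checks pass")
```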

The following lemma will save us some computations.

Lemma 2.3. Let A be a matrix with real entries, and let v be an eigenvector of A with associated eigenvalue λ. Then \bar v is an eigenvector of A with associated eigenvalue \bar λ.

Proof. We have that Av = λv, by hypothesis. Let us take the complex conjugate of each side of this equation. Then

\overline{Av} = \overline{λv},
\bar A \bar v = \bar λ \bar v,
A \bar v = \bar λ \bar v (as \bar A = A since all the entries of A are real),

as claimed. □


Now for our example.

Example 2.4. Consider the system

Y′ = AY where A = \begin{bmatrix} 2 & -17 \\ 1 & 4 \end{bmatrix}.

A has characteristic polynomial λ^2 − 6λ + 25 with roots λ_1 = 3 + 4i and λ_2 = \bar λ_1 = 3 − 4i, each of multiplicity 1. Thus, λ_1 and λ_2 are the eigenvalues of A, and we compute that the eigenspace E_{3+4i} = Ker(A − (3 + 4i)I) has basis {v_1 = (−1 + 4i, 1)^T}, and hence, by Lemma 2.3, that the eigenspace E_{3−4i} = Ker(A − (3 − 4i)I) has basis {v_2 = \bar v_1 = (−1 − 4i, 1)^T}. Hence, just as before, A = PJP^{-1} with

P = \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 3+4i & 0 \\ 0 & 3-4i \end{bmatrix}.

We continue as before, but now we use F to denote a vector of arbitrary constants. (This is just for neatness. Our constants will change, as you will see, and we will use the vector C to denote our final constants, as usual.) Then Z′ = JZ has solution

Z = \begin{bmatrix} e^{(3+4i)x} & 0 \\ 0 & e^{(3-4i)x} \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} = M_Z F = \begin{bmatrix} f_1 e^{(3+4i)x} \\ f_2 e^{(3-4i)x} \end{bmatrix}

and so Y = PZ = PM_Z F, i.e.,

Y = \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{(3+4i)x} & 0 \\ 0 & e^{(3-4i)x} \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} = f_1 e^{(3+4i)x} \begin{bmatrix} -1+4i \\ 1 \end{bmatrix} + f_2 e^{(3-4i)x} \begin{bmatrix} -1-4i \\ 1 \end{bmatrix}.

Now we want our differential equation to have real solutions, and in order for this to be the case, it turns out that we must have f_2 = \bar f_1. Thus, we may write our solution as

Y = f_1 e^{(3+4i)x} \begin{bmatrix} -1+4i \\ 1 \end{bmatrix} + \bar f_1 e^{(3-4i)x} \begin{bmatrix} -1-4i \\ 1 \end{bmatrix} = f_1 e^{(3+4i)x} \begin{bmatrix} -1+4i \\ 1 \end{bmatrix} + \overline{f_1 e^{(3+4i)x} \begin{bmatrix} -1+4i \\ 1 \end{bmatrix}},

where f_1 is an arbitrary complex constant.

This solution is correct but unacceptable. We want to solve the system Y′ = AY, where A has real coefficients, and we have a solution which is indeed a real vector, but this vector is expressed in


terms of complex numbers and functions. We need to obtain a solution that is expressed totally interms of real numbers and functions. In order to do this, we need an extra step.

In order not to interrupt the flow of exposition, we simply state here what we need to do, andwe justify this after the conclusion of the example.

We therefore do the following: We simply replace the matrix $PM_Z$ by the matrix whose first column is the real part $\operatorname{Re}(e^{\lambda_1 x} v_1) = \operatorname{Re}\left( e^{(3+4i)x} \begin{bmatrix} -1+4i \\ 1 \end{bmatrix} \right)$, and whose second column is the imaginary part $\operatorname{Im}(e^{\lambda_1 x} v_1) = \operatorname{Im}\left( e^{(3+4i)x} \begin{bmatrix} -1+4i \\ 1 \end{bmatrix} \right)$, and the vector $F$ by the vector $C$ of arbitrary real constants. We compute
$$e^{(3+4i)x} \begin{bmatrix} -1+4i \\ 1 \end{bmatrix} = e^{3x}(\cos(4x) + i\sin(4x)) \begin{bmatrix} -1+4i \\ 1 \end{bmatrix} = e^{3x} \begin{bmatrix} -\cos(4x) - 4\sin(4x) \\ \cos(4x) \end{bmatrix} + i e^{3x} \begin{bmatrix} 4\cos(4x) - \sin(4x) \\ \sin(4x) \end{bmatrix}$$
and so we obtain
$$Y = \begin{bmatrix} e^{3x}(-\cos(4x) - 4\sin(4x)) & e^{3x}(4\cos(4x) - \sin(4x)) \\ e^{3x}\cos(4x) & e^{3x}\sin(4x) \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} (-c_1 + 4c_2)e^{3x}\cos(4x) + (-4c_1 - c_2)e^{3x}\sin(4x) \\ c_1 e^{3x}\cos(4x) + c_2 e^{3x}\sin(4x) \end{bmatrix}.$$
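The final real solution can be checked numerically. The sketch below (not part of the original text; it assumes numpy is available, and the constants and evaluation point are arbitrary test values) verifies that the general solution above satisfies $Y' = AY$ by comparing a central finite difference against $AY$.

```python
import numpy as np

# Sanity check of Example 2.4: the real general solution should satisfy
# Y' = AY for any choice of the constants c1, c2.
A = np.array([[2.0, -17.0], [1.0, 4.0]])

def Y(x, c1, c2):
    # General real solution built from Re and Im of e^{(3+4i)x} v1.
    e, c, s = np.exp(3 * x), np.cos(4 * x), np.sin(4 * x)
    return np.array([
        (-c1 + 4 * c2) * e * c + (-4 * c1 - c2) * e * s,
        c1 * e * c + c2 * e * s,
    ])

c1, c2, x, h = 1.3, -0.7, 0.25, 1e-6
dY = (Y(x + h, c1, c2) - Y(x - h, c1, c2)) / (2 * h)  # central difference
residual = np.max(np.abs(dY - A @ Y(x, c1, c2)))
print(residual)  # small, up to finite-difference error
```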

Now we justify the step we have done.

Lemma 2.5. Consider the system $Y' = AY$, where $A$ is a matrix with real entries. Let this system have general solution of the form
$$Y = PM_ZF = \begin{bmatrix} v_1 \,\Big|\, \bar{v}_1 \end{bmatrix} \begin{bmatrix} e^{\lambda_1 x} & 0 \\ 0 & e^{\bar{\lambda}_1 x} \end{bmatrix} \begin{bmatrix} f_1 \\ \bar{f}_1 \end{bmatrix} = \begin{bmatrix} e^{\lambda_1 x} v_1 \,\Big|\, e^{\bar{\lambda}_1 x} \bar{v}_1 \end{bmatrix} \begin{bmatrix} f_1 \\ \bar{f}_1 \end{bmatrix},$$
where $f_1$ is an arbitrary complex constant. Then this system also has general solution of the form
$$Y = \begin{bmatrix} \operatorname{Re}(e^{\lambda_1 x} v_1) \,\Big|\, \operatorname{Im}(e^{\lambda_1 x} v_1) \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix},$$
where $c_1$ and $c_2$ are arbitrary real constants.

Proof. First note that for any complex number $z = x + iy$, $x = \operatorname{Re}(z) = \frac{1}{2}(z + \bar{z})$ and $y = \operatorname{Im}(z) = \frac{1}{2i}(z - \bar{z})$, and similarly for any complex vector.

Now $Y' = AY$ has general solution $Y = PM_ZF = PM_Z(RR^{-1})F = (PM_ZR)(R^{-1}F)$ for any invertible matrix $R$. We now (cleverly) choose
$$R = \begin{bmatrix} 1/2 & 1/(2i) \\ 1/2 & -1/(2i) \end{bmatrix}.$$
With this choice of $R$,
$$PM_ZR = \begin{bmatrix} \operatorname{Re}(e^{\lambda_1 x} v_1) \,\Big|\, \operatorname{Im}(e^{\lambda_1 x} v_1) \end{bmatrix}.$$
Then
$$R^{-1} = \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}.$$
Since $f_1$ is an arbitrary complex constant, we may (cleverly) choose to write it as $f_1 = \frac{1}{2}(c_1 - ic_2)$ for arbitrary real constants $c_1$ and $c_2$, and with this choice
$$R^{-1}F = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix},$$
yielding a general solution as claimed. $\square$
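The matrix $R$ of the proof can be tested numerically. The sketch below (not from the book; it assumes numpy, and the vector $w$ and constants are arbitrary test values) checks that $[w \mid \bar w]R = [\operatorname{Re} w \mid \operatorname{Im} w]$ and that the stated choice of $f_1$ makes $R^{-1}F$ a vector of real constants.

```python
import numpy as np

# Numerical illustration of Lemma 2.5. Here w plays the role of e^{λ1 x} v1
# at some fixed x; its actual value is an arbitrary test choice.
rng = np.random.default_rng(0)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
M = np.column_stack([w, np.conj(w)])                 # [w | w̄]

R = np.array([[0.5, 1 / 2j], [0.5, -1 / 2j]])
cols = M @ R                                         # should be [Re w | Im w]
ok_cols = np.allclose(cols, np.column_stack([w.real, w.imag]))

c1, c2 = 2.0, -3.0
f1 = (c1 - 1j * c2) / 2                              # the "clever" choice of f1
F = np.array([f1, np.conj(f1)])
C = np.linalg.inv(R) @ F                             # should be [c1, c2]
ok_consts = np.allclose(C, [c1, c2])
print(ok_cols, ok_consts)
```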

We now solve $Y' = AY$ where $A$ is a real 3-by-3 matrix with a pair of complex eigenvalues and a third, real eigenvalue. As you will see, we use the idea of Lemma 2.5 to simply replace the "relevant" columns of $PM_Z$ in order to obtain our final solution.

Example 2.6. Consider the system
$$Y' = AY \quad\text{where}\quad A = \begin{bmatrix} 15 & -16 & 8 \\ 10 & -10 & 5 \\ 0 & 1 & 2 \end{bmatrix}.$$
$A$ has characteristic polynomial $(\lambda^2 - 2\lambda + 5)(\lambda - 5)$ with roots $\lambda_1 = 1 + 2i$, $\lambda_2 = \bar{\lambda}_1 = 1 - 2i$, and $\lambda_3 = 5$, each of multiplicity 1. Thus, $\lambda_1$, $\lambda_2$, and $\lambda_3$ are the eigenvalues of $A$, and we compute that the eigenspace $E_{1+2i} = \operatorname{Ker}(A - (1+2i)I)$ has basis $\left\{ v_1 = \begin{bmatrix} -2+2i \\ -1+2i \\ 1 \end{bmatrix} \right\}$, and hence, by Lemma 2.3, that the eigenspace $E_{1-2i} = \operatorname{Ker}(A - (1-2i)I)$ has basis $\left\{ v_2 = \bar{v}_1 = \begin{bmatrix} -2-2i \\ -1-2i \\ 1 \end{bmatrix} \right\}$. We further compute that the eigenspace $E_5 = \operatorname{Ker}(A - 5I)$ has basis $\left\{ v_3 = \begin{bmatrix} 4 \\ 3 \\ 1 \end{bmatrix} \right\}$. Hence, just as before,
$$A = PJP^{-1} \quad\text{with}\quad P = \begin{bmatrix} -2+2i & -2-2i & 4 \\ -1+2i & -1-2i & 3 \\ 1 & 1 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 1+2i & 0 & 0 \\ 0 & 1-2i & 0 \\ 0 & 0 & 5 \end{bmatrix}.$$
Then $Z' = JZ$ has solution
$$Z = \begin{bmatrix} e^{(1+2i)x} & 0 & 0 \\ 0 & e^{(1-2i)x} & 0 \\ 0 & 0 & e^{5x} \end{bmatrix} \begin{bmatrix} f_1 \\ \bar{f}_1 \\ c_3 \end{bmatrix} = M_ZF = \begin{bmatrix} f_1 e^{(1+2i)x} \\ \bar{f}_1 e^{(1-2i)x} \\ c_3 e^{5x} \end{bmatrix}$$
and so $Y = PZ = PM_ZF$, i.e.,
$$Y = \begin{bmatrix} -2+2i & -2-2i & 4 \\ -1+2i & -1-2i & 3 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} e^{(1+2i)x} & 0 & 0 \\ 0 & e^{(1-2i)x} & 0 \\ 0 & 0 & e^{5x} \end{bmatrix} \begin{bmatrix} f_1 \\ \bar{f}_1 \\ c_3 \end{bmatrix}.$$
Now
$$e^{(1+2i)x} \begin{bmatrix} -2+2i \\ -1+2i \\ 1 \end{bmatrix} = e^x(\cos(2x) + i\sin(2x)) \begin{bmatrix} -2+2i \\ -1+2i \\ 1 \end{bmatrix} = \begin{bmatrix} e^x(-2\cos(2x) - 2\sin(2x)) \\ e^x(-\cos(2x) - 2\sin(2x)) \\ e^x\cos(2x) \end{bmatrix} + i \begin{bmatrix} e^x(2\cos(2x) - 2\sin(2x)) \\ e^x(2\cos(2x) - \sin(2x)) \\ e^x\sin(2x) \end{bmatrix}$$
and of course
$$e^{5x} \begin{bmatrix} 4 \\ 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 4e^{5x} \\ 3e^{5x} \\ e^{5x} \end{bmatrix},$$
so, replacing the relevant columns of $PM_Z$, we find
$$Y = \begin{bmatrix} e^x(-2\cos(2x) - 2\sin(2x)) & e^x(2\cos(2x) - 2\sin(2x)) & 4e^{5x} \\ e^x(-\cos(2x) - 2\sin(2x)) & e^x(2\cos(2x) - \sin(2x)) & 3e^{5x} \\ e^x\cos(2x) & e^x\sin(2x) & e^{5x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} (-2c_1 + 2c_2)e^x\cos(2x) + (-2c_1 - 2c_2)e^x\sin(2x) + 4c_3e^{5x} \\ (-c_1 + 2c_2)e^x\cos(2x) + (-2c_1 - c_2)e^x\sin(2x) + 3c_3e^{5x} \\ c_1e^x\cos(2x) + c_2e^x\sin(2x) + c_3e^{5x} \end{bmatrix}.$$
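The 3-by-3 real solution admits the same kind of numerical check as before. This sketch (not part of the original text; it assumes numpy, with arbitrary test constants) confirms that the general solution of Example 2.6 satisfies $Y' = AY$.

```python
import numpy as np

# Sanity check of Example 2.6: the real 3-by-3 general solution should
# satisfy Y' = AY for any choice of c1, c2, c3.
A = np.array([[15.0, -16.0, 8.0], [10.0, -10.0, 5.0], [0.0, 1.0, 2.0]])

def Y(x, c1, c2, c3):
    e, c, s, e5 = np.exp(x), np.cos(2 * x), np.sin(2 * x), np.exp(5 * x)
    return np.array([
        (-2 * c1 + 2 * c2) * e * c + (-2 * c1 - 2 * c2) * e * s + 4 * c3 * e5,
        (-c1 + 2 * c2) * e * c + (-2 * c1 - c2) * e * s + 3 * c3 * e5,
        c1 * e * c + c2 * e * s + c3 * e5,
    ])

c, x, h = (0.4, -1.1, 0.6), 0.3, 1e-6
dY = (Y(x + h, *c) - Y(x - h, *c)) / (2 * h)   # central difference
residual = np.max(np.abs(dY - A @ Y(x, *c)))
print(residual)
```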


EXERCISES FOR SECTION 2.2

In Exercises 1–4:

(a) Solve the system $Y' = AY$.

(b) Solve the initial value problem $Y' = AY$, $Y(0) = Y_0$.

In Exercises 5 and 6, solve the system $Y' = AY$.

1. $A = \begin{bmatrix} 3 & 5 \\ -2 & 5 \end{bmatrix}$, $\det(\lambda I - A) = \lambda^2 - 8\lambda + 25$, and $Y_0 = \begin{bmatrix} 8 \\ 13 \end{bmatrix}$.

2. $A = \begin{bmatrix} 3 & 4 \\ -2 & 7 \end{bmatrix}$, $\det(\lambda I - A) = \lambda^2 - 10\lambda + 29$, and $Y_0 = \begin{bmatrix} 3 \\ 5 \end{bmatrix}$.

3. $A = \begin{bmatrix} 5 & 13 \\ -1 & 9 \end{bmatrix}$, $\det(\lambda I - A) = \lambda^2 - 14\lambda + 58$, and $Y_0 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$.

4. $A = \begin{bmatrix} 7 & 17 \\ -4 & 11 \end{bmatrix}$, $\det(\lambda I - A) = \lambda^2 - 18\lambda + 145$, and $Y_0 = \begin{bmatrix} 5 \\ 2 \end{bmatrix}$.

5. $A = \begin{bmatrix} 37 & 10 & 20 \\ -59 & -9 & -24 \\ -33 & -12 & -21 \end{bmatrix}$, $\det(\lambda I - A) = (\lambda^2 - 4\lambda + 29)(\lambda - 3)$.

6. $A = \begin{bmatrix} -4 & -42 & 15 \\ 4 & 25 & -10 \\ 6 & 32 & -13 \end{bmatrix}$, $\det(\lambda I - A) = (\lambda^2 - 6\lambda + 13)(\lambda - 2)$.

2.3 INHOMOGENEOUS SYSTEMS WITH CONSTANT COEFFICIENTS

In this section, we show how to solve an inhomogeneous system $Y' = AY + G(x)$ where $G(x)$ is a vector of functions. (We will often abbreviate $G(x)$ by $G$.) We use a method that is a direct generalization of the method we used for solving a homogeneous system in Section 2.1.

Consider the matrix system
$$Y' = AY + G.$$


Step 1. Write $A = PJP^{-1}$ with $J$ in JCF, so the system becomes
$$Y' = (PJP^{-1})Y + G$$
$$Y' = PJ(P^{-1}Y) + G$$
$$P^{-1}Y' = J(P^{-1}Y) + P^{-1}G$$
$$(P^{-1}Y)' = J(P^{-1}Y) + P^{-1}G.$$
(Note that, since $P^{-1}$ is a constant matrix, we have that $(P^{-1}Y)' = P^{-1}Y'$.)

Step 2. Set $Z = P^{-1}Y$ and $H = P^{-1}G$, so this system becomes
$$Z' = JZ + H$$
and solve this system for $Z$.

Step 3. Since $Z = P^{-1}Y$, we have that
$$Y = PZ$$
is the solution to our original system.

Again, the key to this method is to be able to perform Step 2, and again this is straightforward. Within each Jordan block, we solve from the bottom up. Let us focus our attention on a single $k$-by-$k$ block. The equation for the last function $z_k$ in that block is an inhomogeneous first-order differential equation involving only $z_k$, and we go ahead and solve it. The equation for the next-to-last function $z_{k-1}$ in that block is an inhomogeneous first-order differential equation involving only $z_{k-1}$ and $z_k$. We substitute in our solution for $z_k$ to obtain an inhomogeneous first-order differential equation for $z_{k-1}$ involving only $z_{k-1}$, and we go ahead and solve it, etc.

In principle, this is the method we use. In practice, using this method directly is solving each system "by hand," and instead we choose to "automate" this procedure. This leads us to the following method. In order to develop this method we must begin with some preliminaries.
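The bottom-up step can be carried out symbolically. The sketch below (not from the book; it assumes sympy is available, and the right-hand side $H = (h_1, h_2)$ is an illustrative choice) solves a single 2-by-2 Jordan block with eigenvalue 2 from the bottom up, using an integrating factor at each stage and ignoring constants of integration.

```python
import sympy as sp

# Bottom-up solution of Z' = JZ + H for one 2-by-2 Jordan block, J = [[2,1],[0,2]].
x = sp.symbols('x')
a = 2
h1, h2 = sp.exp(3 * x), sp.exp(x)    # illustrative right-hand side

# Last equation in the block: z2' = a*z2 + h2 (involves only z2).
z2 = sp.exp(a * x) * sp.integrate(sp.exp(-a * x) * h2, x)
# Next equation up: z1' = a*z1 + z2 + h1, with the solved z2 substituted in.
z1 = sp.exp(a * x) * sp.integrate(sp.exp(-a * x) * (z2 + h1), x)

# Verify that (z1, z2) really solves the block system.
ok2 = sp.simplify(sp.diff(z2, x) - (a * z2 + h2)) == 0
ok1 = sp.simplify(sp.diff(z1, x) - (a * z1 + z2 + h1)) == 0
print(ok1, ok2)
```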

For a fixed matrix $A$, we say that the inhomogeneous system $Y' = AY + G(x)$ has associated homogeneous system $Y' = AY$. By our previous work, we know how to find the general solution of $Y' = AY$. First we shall see that, in order to find the general solution of $Y' = AY + G(x)$, it suffices to find a single solution of that system.

Lemma 3.1. Let $Y_i$ be any solution of $Y' = AY + G(x)$. If $Y_h$ is any solution of the associated homogeneous system $Y' = AY$, then $Y_h + Y_i$ is also a solution of $Y' = AY + G(x)$, and every solution of $Y' = AY + G(x)$ is of this form.

Consequently, the general solution of $Y' = AY + G(x)$ is given by $Y = Y_H + Y_i$, where $Y_H$ denotes the general solution of $Y' = AY$.


Proof. First we check that $Y = Y_h + Y_i$ is a solution of $Y' = AY + G(x)$. We simply compute
$$Y' = (Y_h + Y_i)' = Y_h' + Y_i' = (AY_h) + (AY_i + G) = A(Y_h + Y_i) + G = AY + G$$
as claimed.

Now we check that every solution $Y$ of $Y' = AY + G(x)$ is of this form. So let $Y$ be any solution of this inhomogeneous system. We can certainly write $Y = (Y - Y_i) + Y_i = Y_h + Y_i$ where $Y_h = Y - Y_i$. We need to show that $Y_h$ defined in this way is indeed a solution of $Y' = AY$. Again we compute
$$Y_h' = (Y - Y_i)' = Y' - Y_i' = (AY + G) - (AY_i + G) = A(Y - Y_i) = AY_h$$
as claimed. $\square$

(It is common to call $Y_i$ a particular solution of the inhomogeneous system.)

Let us now recall our work from Section 2.1, and keep our previous notation. The homogeneous system $Y' = AY$ has general solution $Y_H = PM_ZC$ where $C$ is a vector of arbitrary constants. Let us set $N_Y = N_Y(x) = PM_Z(x)$ for convenience, so $Y_H = N_YC$. Then $Y_H' = (N_YC)' = N_Y'C$, and then, substituting in the equation $Y' = AY$, we obtain the equation $N_Y'C = AN_YC$. Since this equation must hold for any $C$, we conclude that
$$N_Y' = AN_Y.$$

We use this fact to write down a solution to $Y' = AY + G$. We will verify by direct computation that the function we write down is indeed a solution. This verification is not a difficult one, but nevertheless it is a fair question to ask how we came up with this function. Actually, it can be derived in a very natural way, but the explanation for this involves the matrix exponential and so we defer it until Section 2.4. Nevertheless, once we have this solution (no matter how we came up with it) we are certainly free to use it.

It is convenient to introduce the following nonstandard notation. For a vector $H(x)$, we let $\int_0 H(x)\,dx$ denote an arbitrary but fixed antiderivative of $H(x)$. In other words, in obtaining $\int_0 H(x)\,dx$, we simply ignore the constants of integration. This is legitimate for our purposes, as by Lemma 3.1 we only need to find a single solution to an inhomogeneous system, and it doesn't matter which one we find—any one will do. (Otherwise said, we can "absorb" the constants of integration into the general solution of the associated homogeneous system.)

Theorem 3.2. The function
$$Y_i = N_Y \int_0 N_Y^{-1}G\,dx$$
is a solution of the system $Y' = AY + G$.


Proof. We simply compute $Y_i'$. We have
$$Y_i' = \left( N_Y \int_0 N_Y^{-1}G\,dx \right)'$$
$$= N_Y' \int_0 N_Y^{-1}G\,dx + N_Y \left( \int_0 N_Y^{-1}G\,dx \right)' \quad\text{by the product rule}$$
$$= N_Y' \int_0 N_Y^{-1}G\,dx + N_Y(N_Y^{-1}G) \quad\text{by the definition of the antiderivative}$$
$$= N_Y' \int_0 N_Y^{-1}G\,dx + G$$
$$= (AN_Y) \int_0 N_Y^{-1}G\,dx + G \quad\text{as } N_Y' = AN_Y$$
$$= A\left( N_Y \int_0 N_Y^{-1}G\,dx \right) + G$$
$$= AY_i + G$$
as claimed. $\square$

We now do a variety of examples: a 2-by-2 diagonalizable system, a 2-by-2 nondiagonalizable system, a 3-by-3 diagonalizable system, and a 2-by-2 system in which the characteristic polynomial has complex roots. In all these examples, when it comes to finding $N_Y^{-1}$, it is convenient to use the fact that $N_Y^{-1} = (PM_Z)^{-1} = M_Z^{-1}P^{-1}$.

Example 3.3. Consider the system
$$Y' = AY + G \quad\text{where}\quad A = \begin{bmatrix} 5 & -7 \\ 2 & -4 \end{bmatrix} \quad\text{and}\quad G = \begin{bmatrix} 30e^x \\ 60e^{2x} \end{bmatrix}.$$
We saw in Example 1.2 that
$$P = \begin{bmatrix} 7 & 1 \\ 2 & 1 \end{bmatrix} \quad\text{and}\quad M_Z = \begin{bmatrix} e^{3x} & 0 \\ 0 & e^{-2x} \end{bmatrix},$$
and $N_Y = PM_Z$. Then
$$N_Y^{-1}G = \begin{bmatrix} e^{-3x} & 0 \\ 0 & e^{2x} \end{bmatrix} (1/5) \begin{bmatrix} 1 & -1 \\ -2 & 7 \end{bmatrix} \begin{bmatrix} 30e^x \\ 60e^{2x} \end{bmatrix} = \begin{bmatrix} 6e^{-2x} - 12e^{-x} \\ -12e^{3x} + 84e^{4x} \end{bmatrix}.$$
Then
$$\int_0 N_Y^{-1}G = \begin{bmatrix} -3e^{-2x} + 12e^{-x} \\ -4e^{3x} + 21e^{4x} \end{bmatrix}$$
and
$$Y_i = N_Y \int_0 N_Y^{-1}G = \begin{bmatrix} 7 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} e^{3x} & 0 \\ 0 & e^{-2x} \end{bmatrix} \begin{bmatrix} -3e^{-2x} + 12e^{-x} \\ -4e^{3x} + 21e^{4x} \end{bmatrix} = \begin{bmatrix} -25e^x + 105e^{2x} \\ -10e^x + 45e^{2x} \end{bmatrix}.$$
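The computation above can be reproduced mechanically with the formula of Theorem 3.2. This sketch (not part of the original text; it assumes sympy is available, and drops integration constants, matching the $\int_0$ convention) recovers the particular solution of Example 3.3.

```python
import sympy as sp

# Theorem 3.2 applied to Example 3.3: Yi = N_Y * ∫ N_Y^{-1} G dx.
x = sp.symbols('x')
P = sp.Matrix([[7, 1], [2, 1]])
MZ = sp.diag(sp.exp(3 * x), sp.exp(-2 * x))
NY = P * MZ
G = sp.Matrix([30 * sp.exp(x), 60 * sp.exp(2 * x)])

integrand = NY.inv() * G
antideriv = integrand.applyfunc(lambda f: sp.integrate(f, x))
Yi = sp.simplify(NY * antideriv)

expected = sp.Matrix([-25 * sp.exp(x) + 105 * sp.exp(2 * x),
                      -10 * sp.exp(x) + 45 * sp.exp(2 * x)])
ok = all(sp.simplify(Yi[i] - expected[i]) == 0 for i in range(2))
print(ok)
```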

Example 3.4. Consider the system
$$Y' = AY + G \quad\text{where}\quad A = \begin{bmatrix} 0 & 1 \\ -4 & 4 \end{bmatrix} \quad\text{and}\quad G = \begin{bmatrix} 60e^{3x} \\ 72e^{5x} \end{bmatrix}.$$
We saw in Example 1.6 that
$$P = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} \quad\text{and}\quad M_Z = e^{2x} \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix},$$
and $N_Y = PM_Z$. Then
$$N_Y^{-1}G = e^{-2x} \begin{bmatrix} 1 & -x \\ 0 & 1 \end{bmatrix} (1/4) \begin{bmatrix} 0 & -1 \\ 4 & -2 \end{bmatrix} \begin{bmatrix} 60e^{3x} \\ 72e^{5x} \end{bmatrix} = \begin{bmatrix} -18e^{3x} - 60xe^x + 36xe^{3x} \\ 60e^x - 36e^{3x} \end{bmatrix}.$$
Then
$$\int_0 N_Y^{-1}G = \begin{bmatrix} 60e^x - 60xe^x - 10e^{3x} + 12xe^{3x} \\ 60e^x - 12e^{3x} \end{bmatrix}$$
and
$$Y_i = N_Y \int_0 N_Y^{-1}G = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} e^{2x} \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 60e^x - 60xe^x - 10e^{3x} + 12xe^{3x} \\ 60e^x - 12e^{3x} \end{bmatrix} = \begin{bmatrix} -60e^{3x} + 8e^{5x} \\ -240e^{3x} + 40e^{5x} \end{bmatrix}.$$

Example 3.5. Consider the system
$$Y' = AY + G \quad\text{where}\quad A = \begin{bmatrix} 2 & -3 & -3 \\ 2 & -2 & -2 \\ -2 & 1 & 1 \end{bmatrix} \quad\text{and}\quad G = \begin{bmatrix} e^x \\ 12e^{3x} \\ 20e^{4x} \end{bmatrix}.$$
We saw in Example 1.3 that
$$P = \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \quad\text{and}\quad M_Z = \begin{bmatrix} e^{-x} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2x} \end{bmatrix},$$
and $N_Y = PM_Z$. Then
$$N_Y^{-1}G = \begin{bmatrix} e^x & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{-2x} \end{bmatrix} \begin{bmatrix} 0 & 1 & 1 \\ 1 & -2 & -1 \\ -1 & 1 & 1 \end{bmatrix} \begin{bmatrix} e^x \\ 12e^{3x} \\ 20e^{4x} \end{bmatrix} = \begin{bmatrix} 12e^{4x} + 20e^{5x} \\ e^x - 24e^{3x} - 20e^{4x} \\ -e^{-x} + 12e^x + 20e^{2x} \end{bmatrix}.$$
Then
$$\int_0 N_Y^{-1}G = \begin{bmatrix} 3e^{4x} + 4e^{5x} \\ e^x - 8e^{3x} - 5e^{4x} \\ e^{-x} + 12e^x + 10e^{2x} \end{bmatrix}$$
and
$$Y_i = N_Y \int_0 N_Y^{-1}G = \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-x} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2x} \end{bmatrix} \begin{bmatrix} 3e^{4x} + 4e^{5x} \\ e^x - 8e^{3x} - 5e^{4x} \\ e^{-x} + 12e^x + 10e^{2x} \end{bmatrix} = \begin{bmatrix} -e^x - 9e^{3x} - 6e^{4x} \\ -2e^x - 4e^{3x} - 5e^{4x} \\ 2e^x + 7e^{3x} + 9e^{4x} \end{bmatrix}.$$

Example 3.6. Consider the system
$$Y' = AY + G \quad\text{where}\quad A = \begin{bmatrix} 2 & -17 \\ 1 & 4 \end{bmatrix} \quad\text{and}\quad G = \begin{bmatrix} 200 \\ 160e^x \end{bmatrix}.$$
We saw in Example 2.4 that
$$P = \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad M_Z = \begin{bmatrix} e^{(3+4i)x} & 0 \\ 0 & e^{(3-4i)x} \end{bmatrix},$$
and $N_Y = PM_Z$. Then
$$N_Y^{-1}G = \begin{bmatrix} e^{-(3+4i)x} & 0 \\ 0 & e^{-(3-4i)x} \end{bmatrix} (1/(8i)) \begin{bmatrix} 1 & 1+4i \\ -1 & -1+4i \end{bmatrix} \begin{bmatrix} 200 \\ 160e^x \end{bmatrix} = \begin{bmatrix} -25ie^{(-3-4i)x} + 20(4-i)e^{(-2-4i)x} \\ 25ie^{(-3+4i)x} + 20(4+i)e^{(-2+4i)x} \end{bmatrix}.$$
Then
$$\int_0 N_Y^{-1}G = \begin{bmatrix} (4+3i)e^{(-3-4i)x} + (-4+18i)e^{(-2-4i)x} \\ (4-3i)e^{(-3+4i)x} + (-4-18i)e^{(-2+4i)x} \end{bmatrix}$$
and
$$Y_i = N_Y \int_0 N_Y^{-1}G = \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{(3+4i)x} & 0 \\ 0 & e^{(3-4i)x} \end{bmatrix} \begin{bmatrix} (4+3i)e^{(-3-4i)x} + (-4+18i)e^{(-2-4i)x} \\ (4-3i)e^{(-3+4i)x} + (-4-18i)e^{(-2+4i)x} \end{bmatrix}$$
$$= \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \begin{bmatrix} (4+3i) + (-4+18i)e^x \\ (4-3i) + (-4-18i)e^x \end{bmatrix} = \begin{bmatrix} -32 - 136e^x \\ 8 - 8e^x \end{bmatrix}.$$

(Note that in this last example we could do arithmetic with complex numbers directly, i.e., without having to convert complex exponentials into real terms.)

Once we have done this work, it is straightforward to solve initial value problems. We do a single example that illustrates this.

Example 3.7. Consider the initial value problem
$$Y' = AY + G, \quad Y(0) = \begin{bmatrix} 7 \\ 17 \end{bmatrix}, \quad\text{where}\quad A = \begin{bmatrix} 5 & -7 \\ 2 & -4 \end{bmatrix} \quad\text{and}\quad G = \begin{bmatrix} 30e^x \\ 60e^{2x} \end{bmatrix}.$$
We saw in Example 1.2 that the associated homogeneous system has general solution
$$Y_H = \begin{bmatrix} 7c_1e^{3x} + c_2e^{-2x} \\ 2c_1e^{3x} + c_2e^{-2x} \end{bmatrix}$$
and in Example 3.3 that the original system has a particular solution
$$Y_i = \begin{bmatrix} -25e^x + 105e^{2x} \\ -10e^x + 45e^{2x} \end{bmatrix}.$$
Thus, our original system has general solution
$$Y = Y_H + Y_i = \begin{bmatrix} 7c_1e^{3x} + c_2e^{-2x} - 25e^x + 105e^{2x} \\ 2c_1e^{3x} + c_2e^{-2x} - 10e^x + 45e^{2x} \end{bmatrix}.$$
We apply the initial condition to obtain the linear system
$$Y(0) = \begin{bmatrix} 7c_1 + c_2 + 80 \\ 2c_1 + c_2 + 35 \end{bmatrix} = \begin{bmatrix} 7 \\ 17 \end{bmatrix}$$
with solution $c_1 = -11$, $c_2 = 4$. Substituting, we find that our initial value problem has solution
$$Y = \begin{bmatrix} -77e^{3x} + 4e^{-2x} - 25e^x + 105e^{2x} \\ -22e^{3x} + 4e^{-2x} - 10e^x + 45e^{2x} \end{bmatrix}.$$
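The initial value problem's solution can be double-checked numerically. This sketch (not part of the original text; it assumes numpy) verifies both the initial condition and the inhomogeneous equation $Y' = AY + G$ at a test point.

```python
import numpy as np

# Check of Example 3.7: the stated solution should satisfy Y(0) = (7, 17)
# and Y' = AY + G.
A = np.array([[5.0, -7.0], [2.0, -4.0]])

def G(x):
    return np.array([30 * np.exp(x), 60 * np.exp(2 * x)])

def Y(x):
    e3, em2, e1, e2 = np.exp(3 * x), np.exp(-2 * x), np.exp(x), np.exp(2 * x)
    return np.array([-77 * e3 + 4 * em2 - 25 * e1 + 105 * e2,
                     -22 * e3 + 4 * em2 - 10 * e1 + 45 * e2])

ok_init = np.allclose(Y(0.0), [7.0, 17.0])
x, h = 0.2, 1e-6
dY = (Y(x + h) - Y(x - h)) / (2 * h)   # central difference
residual = np.max(np.abs(dY - (A @ Y(x) + G(x))))
print(ok_init, residual)
```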

EXERCISES FOR SECTION 2.3

In each exercise, find a particular solution $Y_i$ of the system $Y' = AY + G(x)$, where $A$ is the matrix of the correspondingly numbered exercise for Section 2.1, and $G(x)$ is as given.

1. $G(x) = \begin{bmatrix} 2e^{8x} \\ 3e^{4x} \end{bmatrix}$.

2. $G(x) = \begin{bmatrix} 2e^{-7x} \\ 6e^{-8x} \end{bmatrix}$.

3. $G(x) = \begin{bmatrix} e^{4x} \\ 4e^{5x} \end{bmatrix}$.

4. $G(x) = \begin{bmatrix} e^{6x} \\ 9e^{8x} \end{bmatrix}$.

5. $G(x) = \begin{bmatrix} 9e^{10x} \\ 25e^{12x} \end{bmatrix}$.

6. $G(x) = \begin{bmatrix} 5e^{-x} \\ 12e^{2x} \end{bmatrix}$.

7. $G(x) = \begin{bmatrix} 1 \\ 3e^{2x} \\ 5e^{4x} \end{bmatrix}$.

8. $G(x) = \begin{bmatrix} 8 \\ 3e^{3x} \\ 3e^{5x} \end{bmatrix}$.

2.4 THE MATRIX EXPONENTIAL

In this section, we will discuss the matrix exponential and its use in solving systems $Y' = AY$.

Our first task is to ask what it means to take a matrix exponential. To answer this, we are guided by ordinary exponentials. Recall that, for any complex number $z$, the exponential $e^z$ is given by
$$e^z = 1 + z + z^2/2! + z^3/3! + z^4/4! + \cdots.$$


With this in mind, we define the matrix exponential as follows.

Definition 4.1. Let $T$ be a square matrix. Then the matrix exponential $e^T$ is defined by
$$e^T = I + T + \frac{1}{2!}T^2 + \frac{1}{3!}T^3 + \frac{1}{4!}T^4 + \cdots.$$

(For this definition to make sense we need to know that this series always converges, and it does.)

Recall that the differential equation $y' = ay$ has the solution $y = ce^{ax}$. The situation for $Y' = AY$ is very analogous. (Note that we use $\Gamma$ rather than $C$ to denote a vector of constants for reasons that will become clear a little later. Note that $\Gamma$ is on the right in Theorem 4.2 below, a consequence of the fact that matrix multiplication is not commutative.)

Theorem 4.2.

(1) Let $A$ be a square matrix. Then the general solution of
$$Y' = AY$$
is given by
$$Y = e^{Ax}\Gamma$$
where $\Gamma$ is a vector of arbitrary constants.

(2) The initial value problem $Y' = AY$, $Y(0) = Y_0$ has solution
$$Y = e^{Ax}Y_0.$$

Proof. (Outline) (1) We first compute $e^{Ax}$. In order to do so, note that $(Ax)^2 = (Ax)(Ax) = (AA)(xx) = A^2x^2$ as matrix multiplication commutes with scalar multiplication, and $(Ax)^3 = (Ax)^2(Ax) = (A^2x^2)(Ax) = (A^2A)(x^2x) = A^3x^3$, and similarly, $(Ax)^k = A^kx^k$ for any $k$. Then, substituting in Definition 4.1, we have that
$$Y = e^{Ax}\Gamma = \left( I + Ax + \frac{1}{2!}A^2x^2 + \frac{1}{3!}A^3x^3 + \frac{1}{4!}A^4x^4 + \cdots \right)\Gamma.$$
To find $Y'$, we may differentiate this series term-by-term. (This claim requires proof, but we shall not give it here.) Remembering that $A$ and $\Gamma$ are constant matrices, we see that
$$Y' = \left( A + \frac{1}{2!}A^2(2x) + \frac{1}{3!}A^3(3x^2) + \frac{1}{4!}A^4(4x^3) + \cdots \right)\Gamma = \left( A + A^2x + \frac{1}{2!}A^3x^2 + \frac{1}{3!}A^4x^3 + \cdots \right)\Gamma = A\left( I + Ax + \frac{1}{2!}A^2x^2 + \frac{1}{3!}A^3x^3 + \cdots \right)\Gamma = A(e^{Ax}\Gamma) = AY$$
as claimed.

(2) By (1) we know that $Y' = AY$ has solution $Y = e^{Ax}\Gamma$. We use the initial condition to solve for $\Gamma$. Setting $x = 0$, we have:
$$Y_0 = Y(0) = e^{A \cdot 0}\Gamma = e^0\Gamma = I\Gamma = \Gamma$$
(where $e^0$ means the exponential of the zero matrix, and the value of this is the identity matrix $I$, as is apparent from Definition 4.1), so $\Gamma = Y_0$ and $Y = e^{Ax}\Gamma = e^{Ax}Y_0$. $\square$

In the remainder of this section we shall see how to translate the theoretical solution of $Y' = AY$ given by Theorem 4.2 into a practical one. To keep our notation simple, we will stick to 2-by-2 or 3-by-3 cases, but the principle is the same regardless of the size of the matrix.

One case is relatively easy.

Lemma 4.3. If $J$ is a diagonal matrix,
$$J = \begin{bmatrix} d_1 & & & \\ & d_2 & & \\ & & \ddots & \\ & & & d_n \end{bmatrix}$$
then $e^{Jx}$ is the diagonal matrix
$$e^{Jx} = \begin{bmatrix} e^{d_1x} & & & \\ & e^{d_2x} & & \\ & & \ddots & \\ & & & e^{d_nx} \end{bmatrix}.$$


Proof. Suppose, for simplicity, that $J$ is 2-by-2,
$$J = \begin{bmatrix} d_1 & 0 \\ 0 & d_2 \end{bmatrix}.$$
Then you can easily compute that $J^2 = \begin{bmatrix} d_1^2 & 0 \\ 0 & d_2^2 \end{bmatrix}$, $J^3 = \begin{bmatrix} d_1^3 & 0 \\ 0 & d_2^3 \end{bmatrix}$, and similarly, $J^k = \begin{bmatrix} d_1^k & 0 \\ 0 & d_2^k \end{bmatrix}$ for any $k$.

Then, as in the proof of Theorem 4.2,
$$e^{Jx} = I + Jx + \frac{1}{2!}J^2x^2 + \frac{1}{3!}J^3x^3 + \frac{1}{4!}J^4x^4 + \cdots$$
$$= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} d_1 & 0 \\ 0 & d_2 \end{bmatrix}x + \frac{1}{2!}\begin{bmatrix} d_1^2 & 0 \\ 0 & d_2^2 \end{bmatrix}x^2 + \frac{1}{3!}\begin{bmatrix} d_1^3 & 0 \\ 0 & d_2^3 \end{bmatrix}x^3 + \cdots$$
$$= \begin{bmatrix} 1 + d_1x + \frac{1}{2!}(d_1x)^2 + \frac{1}{3!}(d_1x)^3 + \cdots & 0 \\ 0 & 1 + d_2x + \frac{1}{2!}(d_2x)^2 + \frac{1}{3!}(d_2x)^3 + \cdots \end{bmatrix}$$
which we recognize as
$$= \begin{bmatrix} e^{d_1x} & 0 \\ 0 & e^{d_2x} \end{bmatrix}. \qquad\square$$

Example 4.4. We wish to find the general solution of $Y' = JY$ where
$$J = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}.$$
To do so we directly apply Theorem 4.2 and Lemma 4.3. The solution is given by
$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = Y = e^{Jx}\Gamma = \begin{bmatrix} e^{3x} & 0 \\ 0 & e^{-2x} \end{bmatrix} \begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix} = \begin{bmatrix} \gamma_1e^{3x} \\ \gamma_2e^{-2x} \end{bmatrix}.$$

Now suppose we want to find the general solution of $Y' = AY$ where $A = \begin{bmatrix} 5 & -7 \\ 2 & -4 \end{bmatrix}$. We may still apply Theorem 4.2 to conclude that the solution is $Y = e^{Ax}\Gamma$. We again try to calculate $e^{Ax}$. Now we find
$$A = \begin{bmatrix} 5 & -7 \\ 2 & -4 \end{bmatrix}, \quad A^2 = \begin{bmatrix} 11 & -7 \\ 2 & 2 \end{bmatrix}, \quad A^3 = \begin{bmatrix} 41 & -49 \\ 14 & -22 \end{bmatrix}, \ldots$$
so
$$e^{Ax} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 5 & -7 \\ 2 & -4 \end{bmatrix}x + \frac{1}{2!}\begin{bmatrix} 11 & -7 \\ 2 & 2 \end{bmatrix}x^2 + \frac{1}{3!}\begin{bmatrix} 41 & -49 \\ 14 & -22 \end{bmatrix}x^3 + \cdots,$$
which looks like a hopeless mess. But, in fact, the situation is not so hard!

Lemma 4.5. Let $S$ and $T$ be two matrices and suppose
$$S = PTP^{-1}$$
for some invertible matrix $P$. Then
$$S^k = PT^kP^{-1} \quad\text{for every } k$$
and
$$e^S = Pe^TP^{-1}.$$

Proof. We simply compute
$$S^2 = SS = (PTP^{-1})(PTP^{-1}) = PT(P^{-1}P)TP^{-1} = PTITP^{-1} = PTTP^{-1} = PT^2P^{-1},$$
$$S^3 = S^2S = (PT^2P^{-1})(PTP^{-1}) = PT^2(P^{-1}P)TP^{-1} = PT^2ITP^{-1} = PT^2TP^{-1} = PT^3P^{-1},$$
$$S^4 = S^3S = (PT^3P^{-1})(PTP^{-1}) = PT^3(P^{-1}P)TP^{-1} = PT^3ITP^{-1} = PT^3TP^{-1} = PT^4P^{-1},$$
etc. Then
$$e^S = I + S + \frac{1}{2!}S^2 + \frac{1}{3!}S^3 + \frac{1}{4!}S^4 + \cdots = PIP^{-1} + PTP^{-1} + \frac{1}{2!}PT^2P^{-1} + \frac{1}{3!}PT^3P^{-1} + \frac{1}{4!}PT^4P^{-1} + \cdots = P\left( I + T + \frac{1}{2!}T^2 + \frac{1}{3!}T^3 + \frac{1}{4!}T^4 + \cdots \right)P^{-1} = Pe^TP^{-1}$$
as claimed. $\square$


With this in hand let us return to our problem.

Example 4.6. (Compare Example 1.2.) We wish to find the general solution of $Y' = AY$ where
$$A = \begin{bmatrix} 5 & -7 \\ 2 & -4 \end{bmatrix}.$$
We saw in Example 1.16 in Chapter 1 that $A = PJP^{-1}$ with
$$P = \begin{bmatrix} 7 & 1 \\ 2 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}.$$
Then
$$e^{Ax} = Pe^{Jx}P^{-1} = \begin{bmatrix} 7 & 1 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} e^{3x} & 0 \\ 0 & e^{-2x} \end{bmatrix} \begin{bmatrix} 7 & 1 \\ 2 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{7}{5}e^{3x} - \frac{2}{5}e^{-2x} & -\frac{7}{5}e^{3x} + \frac{7}{5}e^{-2x} \\ \frac{2}{5}e^{3x} - \frac{2}{5}e^{-2x} & -\frac{2}{5}e^{3x} + \frac{7}{5}e^{-2x} \end{bmatrix}$$
and
$$Y = e^{Ax}\Gamma = e^{Ax}\begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix} = \begin{bmatrix} (\frac{7}{5}\gamma_1 - \frac{7}{5}\gamma_2)e^{3x} + (-\frac{2}{5}\gamma_1 + \frac{7}{5}\gamma_2)e^{-2x} \\ (\frac{2}{5}\gamma_1 - \frac{2}{5}\gamma_2)e^{3x} + (-\frac{2}{5}\gamma_1 + \frac{7}{5}\gamma_2)e^{-2x} \end{bmatrix}.$$

Example 4.7. (Compare Example 1.3.) We wish to find the general solution of $Y' = AY$ where
$$A = \begin{bmatrix} 2 & -3 & -3 \\ 2 & -2 & -2 \\ -2 & 1 & 1 \end{bmatrix}.$$
We saw in Example 2.23 in Chapter 1 that $A = PJP^{-1}$ with
$$P = \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{bmatrix}.$$
Then
$$e^{Ax} = Pe^{Jx}P^{-1} = \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-x} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2x} \end{bmatrix} \begin{bmatrix} 1 & 0 & -1 \\ 0 & -1 & -1 \\ 1 & 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} e^{2x} & e^{-x} - e^{2x} & e^{-x} - e^{2x} \\ -1 + e^{2x} & 2 - e^{2x} & 1 - e^{2x} \\ 1 - e^{2x} & e^{-x} - 2 + e^{2x} & e^{-x} - 1 + e^{2x} \end{bmatrix}$$
and
$$Y = e^{Ax}\Gamma = e^{Ax}\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \end{bmatrix} = \begin{bmatrix} (\gamma_2 + \gamma_3)e^{-x} + (\gamma_1 - \gamma_2 - \gamma_3)e^{2x} \\ (-\gamma_1 + 2\gamma_2 + \gamma_3) + (\gamma_1 - \gamma_2 - \gamma_3)e^{2x} \\ (\gamma_2 + \gamma_3)e^{-x} + (\gamma_1 - 2\gamma_2 - \gamma_3) + (-\gamma_1 + \gamma_2 + \gamma_3)e^{2x} \end{bmatrix}.$$

Now suppose we want to solve the initial value problem $Y' = AY$, $Y(0) = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$. Then
$$Y = e^{Ax}Y(0) = \begin{bmatrix} e^{2x} & e^{-x} - e^{2x} & e^{-x} - e^{2x} \\ -1 + e^{2x} & 2 - e^{2x} & 1 - e^{2x} \\ 1 - e^{2x} & e^{-x} - 2 + e^{2x} & e^{-x} - 1 + e^{2x} \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} e^{2x} \\ -1 + e^{2x} \\ 1 - e^{2x} \end{bmatrix}.$$

Remark 4.8. Let us compare the results of our method here with that of our previous method. In the case of Example 4.6, our previous method gives the solution
$$Y = P\begin{bmatrix} e^{3x} & 0 \\ 0 & e^{-2x} \end{bmatrix}C = Pe^{Jx}C$$
where $J = \begin{bmatrix} 3 & 0 \\ 0 & -2 \end{bmatrix}$, while our method here gives
$$Y = Pe^{Jx}P^{-1}\Gamma.$$
But note that these answers are really the same! For $P^{-1}$ is a constant matrix, so if $\Gamma$ is a vector of arbitrary constants, then so is $P^{-1}\Gamma$, and we simply set $C = P^{-1}\Gamma$.

Similarly, in the case of Example 4.7, our previous method gives the solution
$$Y = P\begin{bmatrix} e^{-x} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{2x} \end{bmatrix}C = Pe^{Jx}C$$
where $J = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{bmatrix}$, while our method here gives
$$Y = Pe^{Jx}P^{-1}\Gamma$$
and again, setting $C = P^{-1}\Gamma$, we see that these answers are the same.

So the point here is not that the matrix exponential enables us to solve new problems, but rather that it gives a new viewpoint about the solutions that we have already obtained.

While these two methods are in principle the same, we may ask which is preferable in practice. In this regard we see that our earlier method is better, as the use of the matrix exponential requires us to find $P^{-1}$, which may be a considerable amount of work. However, this advantage is (partially) negated if we wish to solve initial value problems, as the matrix exponential method immediately gives the unknown constants $\Gamma$, as $\Gamma = Y(0)$, while in the former method we must solve a linear system to obtain the unknown constants $C$.

Now let us consider the nondiagonalizable case. Suppose $Z' = JZ$ where $J$ is a matrix consisting of a single Jordan block. Then by Theorem 4.2 this has the solution $Z = e^{Jx}\Gamma$. On the other hand, in Theorem 1.1 we already saw that this system has solution $Z = M_ZC$. In this case, we simply have $C = \Gamma$, so we must have $e^{Jx} = M_Z$. Let us see that this is true by computing $e^{Jx}$ directly.

Theorem 4.9. Let $J$ be a $k$-by-$k$ Jordan block with eigenvalue $a$,
$$J = \begin{bmatrix} a & 1 & & & \\ & a & 1 & & \\ & & \ddots & \ddots & \\ & & & a & 1 \\ & & & & a \end{bmatrix}.$$
Then
$$e^{Jx} = e^{ax}\begin{bmatrix} 1 & x & x^2/2! & x^3/3! & \cdots & x^{k-1}/(k-1)! \\ 0 & 1 & x & x^2/2! & \cdots & x^{k-2}/(k-2)! \\ & & 1 & x & \cdots & x^{k-3}/(k-3)! \\ & & & \ddots & & \vdots \\ & & & & & x \\ & & & & & 1 \end{bmatrix}.$$

Proof. First suppose that $J$ is a 2-by-2 Jordan block,
$$J = \begin{bmatrix} a & 1 \\ 0 & a \end{bmatrix}.$$
Then $J^2 = \begin{bmatrix} a^2 & 2a \\ 0 & a^2 \end{bmatrix}$, $J^3 = \begin{bmatrix} a^3 & 3a^2 \\ 0 & a^3 \end{bmatrix}$, $J^4 = \begin{bmatrix} a^4 & 4a^3 \\ 0 & a^4 \end{bmatrix}$, …
so
$$e^{Jx} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} a & 1 \\ 0 & a \end{bmatrix}x + \frac{1}{2!}\begin{bmatrix} a^2 & 2a \\ 0 & a^2 \end{bmatrix}x^2 + \frac{1}{3!}\begin{bmatrix} a^3 & 3a^2 \\ 0 & a^3 \end{bmatrix}x^3 + \frac{1}{4!}\begin{bmatrix} a^4 & 4a^3 \\ 0 & a^4 \end{bmatrix}x^4 + \cdots = \begin{bmatrix} m_{11} & m_{12} \\ 0 & m_{22} \end{bmatrix},$$
and we see that
$$m_{11} = m_{22} = 1 + ax + \frac{1}{2!}(ax)^2 + \frac{1}{3!}(ax)^3 + \frac{1}{4!}(ax)^4 + \frac{1}{5!}(ax)^5 + \cdots = e^{ax},$$
and
$$m_{12} = x + ax^2 + \frac{1}{2!}a^2x^3 + \frac{1}{3!}a^3x^4 + \frac{1}{4!}a^4x^5 + \cdots = x\left( 1 + ax + \frac{1}{2!}(ax)^2 + \frac{1}{3!}(ax)^3 + \frac{1}{4!}(ax)^4 + \cdots \right) = xe^{ax}$$
and so we conclude that
$$e^{Jx} = \begin{bmatrix} e^{ax} & xe^{ax} \\ 0 & e^{ax} \end{bmatrix} = e^{ax}\begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix}.$$

Next suppose that $J$ is a 3-by-3 Jordan block,
$$J = \begin{bmatrix} a & 1 & 0 \\ 0 & a & 1 \\ 0 & 0 & a \end{bmatrix}.$$
Then $J^2 = \begin{bmatrix} a^2 & 2a & 1 \\ 0 & a^2 & 2a \\ 0 & 0 & a^2 \end{bmatrix}$, $J^3 = \begin{bmatrix} a^3 & 3a^2 & 3a \\ 0 & a^3 & 3a^2 \\ 0 & 0 & a^3 \end{bmatrix}$, $J^4 = \begin{bmatrix} a^4 & 4a^3 & 6a^2 \\ 0 & a^4 & 4a^3 \\ 0 & 0 & a^4 \end{bmatrix}$, $J^5 = \begin{bmatrix} a^5 & 5a^4 & 10a^3 \\ 0 & a^5 & 5a^4 \\ 0 & 0 & a^5 \end{bmatrix}$, …
so
$$e^{Jx} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} a & 1 & 0 \\ 0 & a & 1 \\ 0 & 0 & a \end{bmatrix}x + \frac{1}{2!}\begin{bmatrix} a^2 & 2a & 1 \\ 0 & a^2 & 2a \\ 0 & 0 & a^2 \end{bmatrix}x^2 + \frac{1}{3!}\begin{bmatrix} a^3 & 3a^2 & 3a \\ 0 & a^3 & 3a^2 \\ 0 & 0 & a^3 \end{bmatrix}x^3 + \frac{1}{4!}\begin{bmatrix} a^4 & 4a^3 & 6a^2 \\ 0 & a^4 & 4a^3 \\ 0 & 0 & a^4 \end{bmatrix}x^4 + \frac{1}{5!}\begin{bmatrix} a^5 & 5a^4 & 10a^3 \\ 0 & a^5 & 5a^4 \\ 0 & 0 & a^5 \end{bmatrix}x^5 + \cdots = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ 0 & m_{22} & m_{23} \\ 0 & 0 & m_{33} \end{bmatrix},$$
and we see that
$$m_{11} = m_{22} = m_{33} = 1 + ax + \frac{1}{2!}(ax)^2 + \frac{1}{3!}(ax)^3 + \frac{1}{4!}(ax)^4 + \frac{1}{5!}(ax)^5 + \cdots = e^{ax},$$
and
$$m_{12} = m_{23} = x + ax^2 + \frac{1}{2!}a^2x^3 + \frac{1}{3!}a^3x^4 + \frac{1}{4!}a^4x^5 + \cdots = xe^{ax}$$
as we saw in the 2-by-2 case. Finally,
$$m_{13} = \frac{1}{2!}x^2 + \frac{1}{2!}ax^3 + \frac{1}{2!}\left(\frac{1}{2!}a^2x^4\right) + \frac{1}{2!}\left(\frac{1}{3!}a^3x^5\right) + \cdots$$
(as $6/4! = 1/4 = (1/2!)(1/2!)$ and $10/5! = 1/12 = (1/2!)(1/3!)$, etc.)
$$= \frac{1}{2!}x^2\left( 1 + ax + \frac{1}{2!}(ax)^2 + \frac{1}{3!}(ax)^3 + \cdots \right) = \frac{1}{2!}x^2e^{ax},$$
so
$$e^{Jx} = \begin{bmatrix} e^{ax} & xe^{ax} & \frac{1}{2!}x^2e^{ax} \\ 0 & e^{ax} & xe^{ax} \\ 0 & 0 & e^{ax} \end{bmatrix} = e^{ax}\begin{bmatrix} 1 & x & x^2/2! \\ 0 & 1 & x \\ 0 & 0 & 1 \end{bmatrix},$$
and similarly, for larger Jordan blocks. $\square$

Let us see how to apply this theorem in a couple of examples.

Example 4.10. (Compare Examples 1.6 and 1.10.) Consider the system
$$Y' = AY \quad\text{where}\quad A = \begin{bmatrix} 0 & 1 \\ -4 & 4 \end{bmatrix}.$$
Also, consider the initial value problem $Y' = AY$, $Y(0) = \begin{bmatrix} 3 \\ -8 \end{bmatrix}$.

We saw in Example 2.12 in Chapter 1 that $A = PJP^{-1}$ with
$$P = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}.$$
Then
$$e^{Ax} = Pe^{Jx}P^{-1} = \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix} \begin{bmatrix} e^{2x} & xe^{2x} \\ 0 & e^{2x} \end{bmatrix} \begin{bmatrix} -2 & 1 \\ -4 & 0 \end{bmatrix}^{-1} = \begin{bmatrix} (1-2x)e^{2x} & xe^{2x} \\ -4xe^{2x} & (1+2x)e^{2x} \end{bmatrix},$$
and so
$$Y = e^{Ax}\Gamma = e^{Ax}\begin{bmatrix} \gamma_1 \\ \gamma_2 \end{bmatrix} = \begin{bmatrix} \gamma_1e^{2x} + (-2\gamma_1 + \gamma_2)xe^{2x} \\ \gamma_2e^{2x} + (-4\gamma_1 + 2\gamma_2)xe^{2x} \end{bmatrix}.$$
The initial value problem has solution
$$Y = e^{Ax}Y_0 = e^{Ax}\begin{bmatrix} 3 \\ -8 \end{bmatrix} = \begin{bmatrix} 3e^{2x} - 14xe^{2x} \\ -8e^{2x} - 28xe^{2x} \end{bmatrix}.$$

Example 4.11. (Compare Examples 1.7 and 1.11.) Consider the system
$$Y' = AY \quad\text{where}\quad A = \begin{bmatrix} 2 & 1 & 1 \\ 2 & 1 & -2 \\ -1 & 0 & -2 \end{bmatrix}.$$
Also, consider the initial value problem $Y' = AY$, $Y(0) = \begin{bmatrix} 8 \\ 32 \\ 5 \end{bmatrix}$.

We saw in Example 2.25 in Chapter 1 that $A = PJP^{-1}$ with
$$P = \begin{bmatrix} 1 & 0 & -5 \\ -2 & 0 & -6 \\ -1 & 1 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 3 \end{bmatrix}.$$
Then
$$e^{Ax} = Pe^{Jx}P^{-1} = \begin{bmatrix} 1 & 0 & -5 \\ -2 & 0 & -6 \\ -1 & 1 & 1 \end{bmatrix} \begin{bmatrix} e^{-x} & xe^{-x} & 0 \\ 0 & e^{-x} & 0 \\ 0 & 0 & e^{3x} \end{bmatrix} \begin{bmatrix} 1 & 0 & -5 \\ -2 & 0 & -6 \\ -1 & 1 & 1 \end{bmatrix}^{-1}$$
$$= \begin{bmatrix} \frac{3}{8}e^{-x} + \frac{1}{2}xe^{-x} + \frac{5}{8}e^{3x} & -\frac{5}{16}e^{-x} - \frac{1}{4}xe^{-x} + \frac{5}{16}e^{3x} & xe^{-x} \\ -\frac{3}{4}e^{-x} - xe^{-x} + \frac{3}{4}e^{3x} & \frac{5}{8}e^{-x} + \frac{1}{2}xe^{-x} + \frac{3}{8}e^{3x} & -2xe^{-x} \\ \frac{1}{8}e^{-x} - \frac{1}{2}xe^{-x} - \frac{1}{8}e^{3x} & \frac{1}{16}e^{-x} + \frac{1}{4}xe^{-x} - \frac{1}{16}e^{3x} & e^{-x} - xe^{-x} \end{bmatrix}$$
and so
$$Y = e^{Ax}\Gamma = e^{Ax}\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \end{bmatrix} = \begin{bmatrix} (\frac{3}{8}\gamma_1 - \frac{5}{16}\gamma_2)e^{-x} + (\frac{1}{2}\gamma_1 - \frac{1}{4}\gamma_2 + \gamma_3)xe^{-x} + (\frac{5}{8}\gamma_1 + \frac{5}{16}\gamma_2)e^{3x} \\ (-\frac{3}{4}\gamma_1 + \frac{5}{8}\gamma_2)e^{-x} + (-\gamma_1 + \frac{1}{2}\gamma_2 - 2\gamma_3)xe^{-x} + (\frac{3}{4}\gamma_1 + \frac{3}{8}\gamma_2)e^{3x} \\ (\frac{1}{8}\gamma_1 + \frac{1}{16}\gamma_2 + \gamma_3)e^{-x} + (-\frac{1}{2}\gamma_1 + \frac{1}{4}\gamma_2 - \gamma_3)xe^{-x} + (-\frac{1}{8}\gamma_1 - \frac{1}{16}\gamma_2)e^{3x} \end{bmatrix}.$$
The initial value problem has solution
$$Y = e^{Ax}Y_0 = e^{Ax}\begin{bmatrix} 8 \\ 32 \\ 5 \end{bmatrix} = \begin{bmatrix} -7e^{-x} + xe^{-x} + 15e^{3x} \\ 14e^{-x} - 2xe^{-x} + 18e^{3x} \\ 8e^{-x} - xe^{-x} - 3e^{3x} \end{bmatrix}.$$

Now we solve $Y' = AY$ in an example where the matrix $A$ has complex eigenvalues. As you will see, our method is exactly the same.

Example 4.12. (Compare Example 2.4.) Consider the system
$$Y' = AY \quad\text{where}\quad A = \begin{bmatrix} 2 & -17 \\ 1 & 4 \end{bmatrix}.$$
We saw in Example 2.4 that $A = PJP^{-1}$ with
$$P = \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad J = \begin{bmatrix} 3+4i & 0 \\ 0 & 3-4i \end{bmatrix}.$$
Then
$$e^{Ax} = Pe^{Jx}P^{-1} = \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{(3+4i)x} & 0 \\ 0 & e^{(3-4i)x} \end{bmatrix} \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix}^{-1}$$
$$= \begin{bmatrix} -1+4i & -1-4i \\ 1 & 1 \end{bmatrix} \begin{bmatrix} e^{(3+4i)x} & 0 \\ 0 & e^{(3-4i)x} \end{bmatrix} (1/(8i)) \begin{bmatrix} 1 & 1+4i \\ -1 & -1+4i \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix}$$
where
$$m_{11} = (1/(8i))\left( (-1+4i)e^{(3+4i)x} + (-1-4i)(-e^{(3-4i)x}) \right) = (1/(8i))\left( (-1+4i)e^{3x}(\cos(4x)+i\sin(4x)) - (-1-4i)e^{3x}(\cos(4x)-i\sin(4x)) \right) = (1/(8i))\left( ie^{3x}(4\cos(4x)-\sin(4x))(2) \right) = e^{3x}(\cos(4x) - (1/4)\sin(4x)),$$
$$m_{12} = (1/(8i))\left( (-1+4i)(1+4i)e^{(3+4i)x} + (-1-4i)(-1+4i)e^{(3-4i)x} \right) = (1/(8i))\left( ie^{3x}(-17\sin(4x))(2) \right) = e^{3x}\left( (-17/4)\sin(4x) \right),$$
$$m_{21} = (1/(8i))\left( e^{(3+4i)x} - e^{(3-4i)x} \right) = (1/(8i))\left( ie^{3x}(\sin(4x))(2) \right) = e^{3x}\left( (1/4)\sin(4x) \right),$$
$$m_{22} = (1/(8i))\left( (1+4i)e^{(3+4i)x} + (-1+4i)e^{(3-4i)x} \right) = (1/(8i))\left( (1+4i)e^{3x}(\cos(4x)+i\sin(4x)) + (-1+4i)e^{3x}(\cos(4x)-i\sin(4x)) \right) = (1/(8i))\left( ie^{3x}(4\cos(4x)+\sin(4x))(2) \right) = e^{3x}(\cos(4x) + (1/4)\sin(4x)).$$
Thus,
$$e^{Ax} = e^{3x}\begin{bmatrix} \cos(4x) - \frac{1}{4}\sin(4x) & -\frac{17}{4}\sin(4x) \\ \frac{1}{4}\sin(4x) & \cos(4x) + \frac{1}{4}\sin(4x) \end{bmatrix}$$
and
$$Y = e^{Ax}\Gamma = \begin{bmatrix} \gamma_1e^{3x}\cos(4x) + (-\frac{1}{4}\gamma_1 - \frac{17}{4}\gamma_2)e^{3x}\sin(4x) \\ \gamma_2e^{3x}\cos(4x) + (\frac{1}{4}\gamma_1 + \frac{1}{4}\gamma_2)e^{3x}\sin(4x) \end{bmatrix}.$$

Remark 4.13. Our procedure in this section is essentially that of Remark 1.12. (Compare Example 4.10 with Example 1.13.)

Remark 4.14. As we have seen, for a matrix $J$ in JCF, $e^{Jx} = M_Z$, in the notation of Section 2.1. But also, in the notation of Section 2.1, if $A = PJP^{-1}$, then $e^{Ax} = Pe^{Jx}P^{-1} = PM_ZP^{-1} = M_Y$.

Remark 4.15. Now let us see how to use the matrix exponential to solve an inhomogeneous system $Y' = AY + G(x)$. Since we already know how to solve homogeneous systems, we need only, by Lemma 3.1, find a (single) particular solution $Y_i$ of this inhomogeneous system, and that is what we do. We shall again use our notation from Section 2.3, that $\int_0 H(x)\,dx$ denotes an arbitrary (but fixed) antiderivative of $H(x)$.

Thus, consider $Y' = AY + G(x)$. Then, proceeding analogously as for an ordinary first-order linear differential equation, we have
$$Y' = AY + G(x)$$
$$Y' - AY = G(x)$$
and, multiplying this equation by the integrating factor $e^{-Ax}$, we obtain
$$e^{-Ax}(Y' - AY) = e^{-Ax}G(x)$$
$$(e^{-Ax}Y)' = e^{-Ax}G(x)$$
with solution
$$e^{-Ax}Y_i = \int_0 e^{-Ax}G(x)$$
$$Y_i = e^{Ax}\int_0 e^{-Ax}G(x).$$
Let us compare this with the solution we found in Theorem 3.2. By Remark 4.14, we can rewrite this solution as $Y_i = M_Y\int_0 M_Y^{-1}G(x)$. This is almost, but not quite, what we had in Theorem 3.2. There we had the solution $Y_i = N_Y\int_0 N_Y^{-1}G(x)$, where $N_Y = PM_Z$. But these solutions are the same, as $M_Y = PM_ZP^{-1} = N_YP^{-1}$. Then $M_Y^{-1} = PM_Z^{-1}P^{-1}$ and $N_Y^{-1} = M_Z^{-1}P^{-1}$, so $M_Y^{-1} = PN_Y^{-1}$. Substituting, we find
$$Y_i = M_Y\int_0 M_Y^{-1}G(x) = N_YP^{-1}\int_0 PN_Y^{-1}G(x)$$
and, since $P$ is a constant matrix, we may bring it outside the integral to obtain
$$Y_i = N_YP^{-1}P\int_0 N_Y^{-1}G(x) = N_Y\int_0 N_Y^{-1}G(x)$$
as claimed.


Remark 4.16. In applying this method we must compute M_Z^{-1} = (e^{Jx})^{-1} = e^{-Jx} = e^{J(-x)}, and, as an aid to calculation, it is convenient to make the following observation. Suppose, for simplicity, that J consists of a single Jordan block. Then we compute: in the 1-by-1 case,
\[
\bigl[\,e^{ax}\,\bigr]^{-1} = \bigl[\,e^{-ax}\,\bigr] ;
\]
in the 2-by-2 case,
\[
\left(e^{ax}\begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix}\right)^{-1} = e^{-ax}\begin{bmatrix} 1 & -x \\ 0 & 1 \end{bmatrix} ;
\]
in the 3-by-3 case,
\[
\left(e^{ax}\begin{bmatrix} 1 & x & x^2/2! \\ 0 & 1 & x \\ 0 & 0 & 1 \end{bmatrix}\right)^{-1}
= e^{-ax}\begin{bmatrix} 1 & -x & (-x)^2/2! \\ 0 & 1 & -x \\ 0 & 0 & 1 \end{bmatrix}
= e^{-ax}\begin{bmatrix} 1 & -x & x^2/2! \\ 0 & 1 & -x \\ 0 & 0 & 1 \end{bmatrix} ,
\]
etc.
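As a sanity check of the 3-by-3 formula, a general-purpose matrix exponential reproduces both e^{Jx} and its inverse (the values of a and x below are arbitrary sample choices, not from the text):

```python
import numpy as np
from scipy.linalg import expm

a, x = 0.7, 1.3  # arbitrary sample values (assumption, not from the text)
J = np.array([[a, 1.0, 0.0], [0.0, a, 1.0], [0.0, 0.0, a]])  # one Jordan block

# Closed form from the text: e^{Jx} = e^{ax} [[1, x, x^2/2!], [0, 1, x], [0, 0, 1]]
U = np.exp(a * x) * np.array([[1.0, x, x**2 / 2], [0.0, 1.0, x], [0.0, 0.0, 1.0]])
print(np.allclose(U, expm(J * x)))                    # True
print(np.allclose(np.linalg.inv(U), expm(J * (-x))))  # True: (e^{Jx})^{-1} = e^{J(-x)}
```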

EXERCISES FOR SECTION 2.4

In each exercise:

(a) Find e^{Ax} and the solution Y = e^{Ax}Γ of Y′ = AY.

(b) Use part (a) to solve the initial value problem Y′ = AY, Y(0) = Y_0.

Exercises 1–24: In Exercise n, for 1 ≤ n ≤ 20, the matrix A and the initial vector Y_0 are the same as in Exercise n of Section 2.1. In Exercise n, for 21 ≤ n ≤ 24, the matrix A and the initial vector Y_0 are the same as in Exercise n − 20 of Section 2.2.


A P P E N D I X A

Background Results

A.1 BASES, COORDINATES, AND MATRICES

In this section of the Appendix, we review the basic facts on bases for vector spaces and on coordinates for vectors and matrices for linear transformations. Then we use these to (re)prove some of the results in Chapter 1.

First we see how to represent vectors, once we have chosen a basis.

Theorem 1.1. Let V be a vector space and let B = {v_1, …, v_n} be a basis of V. Then any vector v in V can be written as v = c_1v_1 + … + c_nv_n in a unique way.

This theorem leads to the following definition.

Definition 1.2. Let V be a vector space and let B = {v_1, …, v_n} be a basis of V. Let v be a vector in V and write v = c_1v_1 + … + c_nv_n. Then the vector
\[
[v]_B = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}
\]
is the coordinate vector of v in the basis B.

Remark 1.3. In particular, we may take V = C^n and consider the standard basis E = {e_1, …, e_n}, where
\[
e_i = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},
\]
with 1 in the ith position, and 0 elsewhere.


Then, if
\[
v = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_{n-1} \\ c_n \end{bmatrix},
\]
we see that
\[
v = c_1 \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix} + \dots + c_n \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} = c_1e_1 + \dots + c_ne_n ,
\]
so we then see that
\[
[v]_E = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_{n-1} \\ c_n \end{bmatrix}.
\]
(In other words, a vector in C^n “looks like” itself in the standard basis.)

Next we see how to represent linear transformations, once we have chosen a basis.

Theorem 1.4. Let V be a vector space and let B = {v_1, …, v_n} be a basis of V. Let T : V → V be a linear transformation. Then there is a unique matrix [T]_B such that, for any vector v in V,
\[
[T(v)]_B = [T]_B [v]_B .
\]
Furthermore, the matrix [T]_B is given by
\[
[T]_B = \bigl[\, [T(v_1)]_B \mid [T(v_2)]_B \mid \dots \mid [T(v_n)]_B \,\bigr] .
\]

Similarly, this theorem leads to the following definition.

Definition 1.5. Let V be a vector space and let B = {v_1, …, v_n} be a basis of V. Let T : V → V be a linear transformation. Let [T]_B be the matrix defined in Theorem 1.4. Then [T]_B is the matrix of the linear transformation T in the basis B.


Remark 1.6. In particular, we may take V = C^n and consider the standard basis E = {e_1, …, e_n}. Let A be an n-by-n square matrix and write \(A = \bigl[\, a_1 \mid a_2 \mid \dots \mid a_n \,\bigr]\). If T_A is the linear transformation given by T_A(v) = Av, then
\[
\begin{aligned}
[T_A]_E &= \bigl[\, [T_A(e_1)]_E \mid [T_A(e_2)]_E \mid \dots \mid [T_A(e_n)]_E \,\bigr] \\
&= \bigl[\, [Ae_1]_E \mid [Ae_2]_E \mid \dots \mid [Ae_n]_E \,\bigr] \\
&= \bigl[\, [a_1]_E \mid [a_2]_E \mid \dots \mid [a_n]_E \,\bigr] \\
&= \bigl[\, a_1 \mid a_2 \mid \dots \mid a_n \,\bigr] = A .
\end{aligned}
\]
(In other words, the linear transformation given by multiplication by a matrix “looks like” that same matrix in the standard basis.)

What is essential to us is the ability to compare the situation in different bases. To that end, we have the following theorem.

Theorem 1.7. Let V be a vector space, and let B = {v_1, …, v_n} and C = {w_1, …, w_n} be two bases of V. Let P_{C←B} be the matrix
\[
P_{C\leftarrow B} = \bigl[\, [v_1]_C \mid [v_2]_C \mid \dots \mid [v_n]_C \,\bigr] .
\]
This matrix has the following properties:

(1) For any vector v in V,
\[
[v]_C = P_{C\leftarrow B}[v]_B .
\]

(2) This matrix is invertible and
\[
(P_{C\leftarrow B})^{-1} = P_{B\leftarrow C} = \bigl[\, [w_1]_B \mid [w_2]_B \mid \dots \mid [w_n]_B \,\bigr] .
\]

(3) For any linear transformation T : V → V,
\[
[T]_C = P_{C\leftarrow B}[T]_B P_{B\leftarrow C}
= P_{C\leftarrow B}[T]_B (P_{C\leftarrow B})^{-1}
= (P_{B\leftarrow C})^{-1}[T]_B P_{B\leftarrow C} .
\]


Again, this theorem leads to a definition.

Definition 1.8. The matrix P_{C←B} is the change-of-basis matrix from the basis B to the basis C.
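A small numeric sketch of property (1) of Theorem 1.7 (the two bases below are arbitrary illustrative choices, not from the text): column i of P_{C←B} is [v_i]_C, which we obtain by solving a linear system against the basis C.

```python
import numpy as np

# Two bases of C^2, stored as columns (arbitrary illustrative choices).
B = np.array([[1.0, 1.0], [0.0, 1.0]])   # basis B = {v1, v2}
C = np.array([[2.0, 0.0], [1.0, 1.0]])   # basis C = {w1, w2}

# Column i of P_{C<-B} is [v_i]_C, i.e. the solution of C @ coords = v_i.
P_CB = np.linalg.solve(C, B)

v_B = np.array([3.0, -2.0])          # coordinates of some v in basis B
v = B @ v_B                          # the vector itself (standard coordinates)
v_C = np.linalg.solve(C, v)          # its coordinates in basis C
print(np.allclose(v_C, P_CB @ v_B))  # True: [v]_C = P_{C<-B} [v]_B
```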

Now we come to what is the crucial point for us.

Corollary 1.9. Let V = C^n, let E be the standard basis of V, and let B = {v_1, …, v_n} be any basis of V. Let A be any n-by-n square matrix. Then
\[
A = PBP^{-1}
\]
where
\[
P = \bigl[\, v_1 \mid v_2 \mid \dots \mid v_n \,\bigr] \quad\text{and}\quad B = [T_A]_B .
\]

Proof. By Theorem 1.7,
\[
[T_A]_E = P_{E\leftarrow B}[T_A]_B (P_{E\leftarrow B})^{-1} .
\]
But by Remark 1.3,
\[
P_{E\leftarrow B} = \bigl[\, [v_1]_E \mid [v_2]_E \mid \dots \mid [v_n]_E \,\bigr]
= \bigl[\, v_1 \mid v_2 \mid \dots \mid v_n \,\bigr] = P ,
\]
and by Remark 1.6,
\[
[T_A]_E = A . \qquad\Box
\]

With this in hand we now present new proofs of Theorems 1.14 and 2.11 in Chapter 1, and a proof of Lemma 1.7 in Chapter 1. For convenience, we restate these results.

Theorem 1.10. Let A be an n-by-n matrix over the complex numbers. Then A is diagonalizable if and only if, for each eigenvalue a of A, geom-mult(a) = alg-mult(a). In that case, A = PJP^{-1} where J is a diagonal matrix whose entries are the eigenvalues of A, each appearing according to its algebraic multiplicity, and P is a matrix whose columns are eigenvectors forming bases for the associated eigenspaces.


Proof. First suppose that for each eigenvalue a of A, geom-mult(a) = alg-mult(a). In the notation of the proof of Theorem 1.14 in Chapter 1, B = {v_1, …, v_n} is a basis of C^n. Then, by Corollary 1.9, A = P[T_A]_BP^{-1}. But B is a basis of eigenvectors, so for each i, T_A(v_i) = Av_i = a_iv_i = 0v_1 + … + 0v_{i−1} + a_iv_i + 0v_{i+1} + … + 0v_n. Then
\[
[T_A(v_i)]_B = \begin{bmatrix} 0 \\ \vdots \\ a_i \\ \vdots \\ 0 \end{bmatrix}
\]
with a_i in the ith position and 0 elsewhere. But [T_A(v_i)]_B is the ith column of the matrix [T_A]_B, so we see that J = [T_A]_B is a diagonal matrix.

Conversely, if J = [T_A]_B is a diagonal matrix, then the same computation shows that Av_i = a_iv_i, so for each i, v_i is an eigenvector of A with associated eigenvalue a_i. □
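As a concrete check, we can rebuild the matrix of Chapter 1, Exercise 1 from its Appendix B answer (A = PJP^{-1} with J = diag(3, 5)) and confirm the eigenvalue/eigenvector structure the theorem describes:

```python
import numpy as np

# Rebuild A from the Appendix B answer to Chapter 1, Exercise 1:
# A = P @ diag(3, 5) @ P^{-1}, with eigenvectors as the columns of P.
P = np.array([[-7.0, 4.0], [9.0, -5.0]])
A = P @ np.diag([3.0, 5.0]) @ np.linalg.inv(P)

evals = np.linalg.eigvals(A)
print(np.allclose(sorted(evals.real), [3.0, 5.0]))  # True
print(np.allclose(A @ P[:, 0], 3 * P[:, 0]))        # True: column 1 is a 3-eigenvector
print(np.allclose(A @ P[:, 1], 5 * P[:, 1]))        # True: column 2 is a 5-eigenvector
```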

Theorem 1.11. Let A be a k-by-k matrix and suppose that C^k has a basis {v_1, …, v_k} consisting of a single chain of generalized eigenvectors of length k associated to an eigenvalue a. Then
\[
A = PJP^{-1}
\]
where
\[
J = \begin{bmatrix} a & 1 & & & \\ & a & 1 & & \\ & & \ddots & \ddots & \\ & & & a & 1 \\ & & & & a \end{bmatrix}
\]
is a matrix consisting of a single Jordan block and
\[
P = \bigl[\, v_1 \mid v_2 \mid \dots \mid v_k \,\bigr]
\]
is a matrix whose columns are generalized eigenvectors forming a chain.

Proof. Let B = {v_1, …, v_k}. Then, by Corollary 1.9, A = P[T_A]_BP^{-1}. Now the ith column of [T_A]_B is [T_A(v_i)]_B = [Av_i]_B. By the definition of a chain,
\[
Av_i = (A - aI + aI)v_i = (A - aI)v_i + aIv_i
= \begin{cases} v_{i-1} + av_i & \text{for } i > 1, \\ av_i & \text{for } i = 1, \end{cases}
\]
so for i > 1
\[
[Av_i]_B = \begin{bmatrix} 0 \\ \vdots \\ 1 \\ a \\ 0 \\ \vdots \end{bmatrix}
\]
with 1 in the (i − 1)st position, a in the ith position, and 0 elsewhere, and [Av_1]_B is similar, except that a is in the 1st position (there is no entry of 1), and every other entry is 0. Assembling these vectors, we see that the matrix [T_A]_B = J has the form of a single k-by-k Jordan block with diagonal entries equal to a. □
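The chain relations (A − aI)v_2 = v_1 and (A − aI)v_1 = 0 can be verified numerically on an answer from Appendix B:

```python
import numpy as np

# From the Appendix B answer to Chapter 1, Exercise 3: A = P J P^{-1} with a
# single 2-by-2 Jordan block J, eigenvalue 3, and chain columns v1, v2 in P.
P = np.array([[-21.0, 1.0], [-49.0, 0.0]])
J = np.array([[3.0, 1.0], [0.0, 3.0]])
A = P @ J @ np.linalg.inv(P)

v1, v2 = P[:, 0], P[:, 1]
print(np.allclose((A - 3 * np.eye(2)) @ v2, v1))  # True: (A - aI)v2 = v1
print(np.allclose((A - 3 * np.eye(2)) @ v1, 0))   # True: v1 is an eigenvector
```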

Lemma 1.12. Let a be an eigenvalue of a matrix A. Then
\[
1 \le \text{geom-mult}(a) \le \text{alg-mult}(a) .
\]

Proof. By the definition of an eigenvalue, there is at least one eigenvector v with eigenvalue a, and so E_a contains the nonzero vector v, and hence dim(E_a) ≥ 1.

Now suppose that a has geometric multiplicity k, and let {v_1, …, v_k} be a basis for the eigenspace E_a. Extend this basis to a basis B = {v_1, …, v_k, v_{k+1}, …, v_n} of C^n. Let B = [T_A]_B. Then
\[
B = \bigl[\, b_1 \mid b_2 \mid \dots \mid b_n \,\bigr]
\]
with b_i = [T_A(v_i)]_B. But for i between 1 and k, T_A(v_i) = Av_i = av_i, so
\[
b_i = [av_i]_B = \begin{bmatrix} 0 \\ \vdots \\ a \\ \vdots \\ 0 \end{bmatrix}
\]
with a in the ith position and 0 elsewhere. (For i > k, we do not know what b_i is.)

Now we compute the characteristic polynomial of B, det(λI − B). From our computation of B, we see that, for i between 1 and k, the ith column of (λI − B) is
\[
\begin{bmatrix} 0 \\ \vdots \\ \lambda - a \\ \vdots \\ 0 \end{bmatrix}
\]
with λ − a in the ith position and 0 elsewhere. (For i > k, we do not know what the ith column of this matrix is.)

To compute the characteristic polynomial, i.e., the determinant of this matrix, we successively expand by minors of the 1st, 2nd, …, kth columns. Each of these gives a factor of (λ − a), so we see that det(λI − B) = (λ − a)^k q(λ) for some (unknown) polynomial q(λ).

We have computed the characteristic polynomial of B, but what we need to know is the characteristic polynomial of A. But these are equal, as we see from the following computation (which uses the fact that scalar multiplication commutes with matrix multiplication, and properties of determinants):
\[
\begin{aligned}
\det(\lambda I - A) &= \det(\lambda I - PBP^{-1}) = \det(\lambda(PIP^{-1}) - PBP^{-1}) \\
&= \det(P(\lambda I)P^{-1} - PBP^{-1}) = \det(P(\lambda I - B)P^{-1}) \\
&= \det(P)\det(\lambda I - B)\det(P^{-1}) = \det(P)\det(\lambda I - B)(1/\det(P)) \\
&= \det(\lambda I - B) .
\end{aligned}
\]
Thus, det(λI − A), the characteristic polynomial of A, is divisible by (λ − a)^k (and perhaps by a higher power of (λ − a), and perhaps not, as we do not know anything about the polynomial q(λ)), so alg-mult(a) ≥ k = geom-mult(a), as claimed. □
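The geometric multiplicity dim(E_a) = dim ker(A − aI) can be computed as n − rank(A − aI); for a single Jordan block the lemma's first inequality is an equality and the second is strict:

```python
import numpy as np

# For J = [[3, 1], [0, 3]] the eigenvalue 3 has alg-mult 2, but
# geom-mult(3) = dim ker(J - 3I) = 2 - rank(J - 3I) = 1.
J = np.array([[3.0, 1.0], [0.0, 3.0]])
geom = J.shape[0] - np.linalg.matrix_rank(J - 3.0 * np.eye(2))
print(geom)  # 1, while alg-mult(3) = 2, consistent with the lemma
```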

A.2 PROPERTIES OF THE COMPLEX EXPONENTIAL

In this section of the Appendix, we prove properties of the complex exponential. For convenience, we restate the basic definition and the properties we are trying to prove.

Definition 2.1. For a complex number z, the exponential e^z is defined by
\[
e^z = 1 + z + z^2/2! + z^3/3! + \dots .
\]

First we note that this definition indeed makes sense, as this power series converges for everycomplex number z. Now for the properties we wish to prove. Note that properties (2) and (3) aredirect generalizations of the situation for the real exponential function.
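The definition can be turned directly into a partial-sum computation; for any fixed z the truncated series rapidly approaches Python's built-in complex exponential (the sample point and number of terms below are arbitrary choices):

```python
import cmath

def exp_series(z, terms=40):
    # Partial sum of e^z = 1 + z + z^2/2! + z^3/3! + ...
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # z^{n+1}/(n+1)! from z^n/n!
    return total

z = 1 + 2j  # arbitrary sample point
print(abs(exp_series(z) - cmath.exp(z)) < 1e-12)  # True
```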

Theorem 2.2. (1) (Euler) For any θ,
\[
e^{i\theta} = \cos(\theta) + i\sin(\theta) .
\]

(2) For any a,
\[
\frac{d}{dz}(e^{az}) = ae^{az} .
\]

(3) For any z_1 and z_2,
\[
e^{z_1+z_2} = e^{z_1}e^{z_2} .
\]

(4) If z = s + it, then
\[
e^z = e^s(\cos(t) + i\sin(t)) .
\]

(5) For any z,
\[
e^{\bar z} = \overline{e^z} .
\]

Proof. (1) We begin with the following observation: i^0 = 1, i^1 = i, i^2 = −1, i^3 = i^2 i = −i, i^4 = i^3 i = 1, i^5 = i^4 i = i, i^6 = i^4 i^2 = −1, i^7 = i^4 i^3 = −i, etc. In other words, the powers of i, beginning with i^0, successively cycle through 1, i, −1, and −i. With this in mind, we compute directly from the definition:
\[
\begin{aligned}
e^{i\theta} &= 1 + i\theta + (i\theta)^2/2! + (i\theta)^3/3! + (i\theta)^4/4! + (i\theta)^5/5! + (i\theta)^6/6! + (i\theta)^7/7! + \dots \\
&= 1 + i\theta - \theta^2/2! - i\theta^3/3! + \theta^4/4! + i\theta^5/5! - \theta^6/6! - i\theta^7/7! + \dots .
\end{aligned}
\]
We now rearrange the terms, gathering the terms that do not involve i together and gathering the terms that do involve i together. (That we can do so requires proof, but we shall not give that proof here.) We obtain:
\[
e^{i\theta} = (1 - \theta^2/2! + \theta^4/4! - \theta^6/6! + \dots) + i(\theta - \theta^3/3! + \theta^5/5! - \theta^7/7! + \dots) .
\]
But we recognize the power series inside the first set of parentheses as the power series for cos(θ), and the power series inside the second set of parentheses as the power series for sin(θ), completing the proof.

(2) We prove a more general formula. Consider the function e^{c+dz}, where c and d are arbitrary constants. To differentiate this function, we substitute in the power series and differentiate term-by-term (again a procedure that requires justification, but whose justification we again skip). Using the chain rule to take the derivative of each term, we obtain
\[
\begin{aligned}
e^{c+dz} &= 1 + (c + dz) + (c + dz)^2/2! + (c + dz)^3/3! + (c + dz)^4/4! + \dots \\
(e^{c+dz})' &= d + 2d(c + dz)/2! + 3d(c + dz)^2/3! + 4d(c + dz)^3/4! + \dots \\
&= d + d(c + dz) + d(c + dz)^2/2! + d(c + dz)^3/3! + \dots \\
&= d(1 + (c + dz) + (c + dz)^2/2! + (c + dz)^3/3! + \dots) \\
&= de^{c+dz} .
\end{aligned}
\]
Now, setting c = 0 and d = a, we obtain (2).


(3) Setting c = a and d = 1 in the above formula, we find that (e^{a+z})′ = e^{a+z}. In other words, f_1(z) = e^{a+z} is a solution of the differential equation f′(z) = f(z), and f_1(0) = e^{a+0} = e^a. On the other hand, setting f_2(z) = e^a e^z, we see from (2), and the fact that e^a is a constant, that f_2(z) is also a solution of the differential equation f′(z) = f(z), and f_2(0) = e^a e^0 = e^a. Thus, f_1(z) and f_2(z) are solutions of the same first-order linear differential equation satisfying the same initial condition, so by the fundamental existence and uniqueness theorem (also valid for complex functions), they must be equal. Thus, e^{a+z} = e^a e^z. Setting a = z_1 and z = z_2, we obtain (3).

(4) This follows directly from (1) and (3):
\[
e^z = e^{s+it} = e^s e^{it} = e^s(\cos(t) + i\sin(t)) .
\]

(5) This follows directly from (1) and (4). Let z = s + it, so \(\bar z = s - it\). We compute:
\[
\begin{aligned}
e^{\bar z} &= e^{s-it} = e^{s+i(-t)} = e^s(\cos(-t) + i\sin(-t)) = e^s(\cos(t) + i(-\sin(t))) \\
&= e^s(\cos(t) - i\sin(t)) = \overline{e^s(\cos(t) + i\sin(t))} = \overline{e^z} . \qquad\Box
\end{aligned}
\]
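Python's cmath module implements the complex exponential, so properties (1) and (3)–(5) can be spot-checked at arbitrary sample points (property (2), a statement about derivatives, is omitted here):

```python
import cmath
import math

# Arbitrary sample points (illustrative choices).
z1, z2 = 0.3 + 1.1j, -0.7 + 2.4j
theta, s, t = 0.9, 0.5, -1.2

print(cmath.isclose(cmath.exp(1j * theta),
                    complex(math.cos(theta), math.sin(theta))))           # (1) True
print(cmath.isclose(cmath.exp(z1 + z2), cmath.exp(z1) * cmath.exp(z2)))   # (3) True
print(cmath.isclose(cmath.exp(complex(s, t)),
                    math.exp(s) * complex(math.cos(t), math.sin(t))))     # (4) True
print(cmath.isclose(cmath.exp(z1.conjugate()),
                    cmath.exp(z1).conjugate()))                           # (5) True
```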


A P P E N D I X B

Answers to Odd-Numbered Exercises

Chapter 1

1. \(A = \begin{bmatrix} -7 & 4 \\ 9 & -5 \end{bmatrix} \begin{bmatrix} 3 & 0 \\ 0 & 5 \end{bmatrix} \begin{bmatrix} -7 & 4 \\ 9 & -5 \end{bmatrix}^{-1}\).

3. \(A = \begin{bmatrix} -21 & 1 \\ -49 & 0 \end{bmatrix} \begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix} \begin{bmatrix} -21 & 1 \\ -49 & 0 \end{bmatrix}^{-1}\).

5. \(A = \begin{bmatrix} -5 & 1 \\ -25 & 0 \end{bmatrix} \begin{bmatrix} 7 & 1 \\ 0 & 7 \end{bmatrix} \begin{bmatrix} -5 & 1 \\ -25 & 0 \end{bmatrix}^{-1}\).

7. \(A = \begin{bmatrix} 0 & 2 & 0 \\ 1 & 1 & -3 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} 0 & 2 & 0 \\ 1 & 1 & -3 \\ 1 & 1 & 1 \end{bmatrix}^{-1}\).

9. \(A = \begin{bmatrix} -1 & -2 & -2 \\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} -3 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -1 & -2 & -2 \\ 1 & 0 & -1 \\ 0 & 1 & 1 \end{bmatrix}^{-1}\).

11. \(A = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} -1 & -2 & -1 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix}^{-1}\).

13. \(A = \begin{bmatrix} -1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} -2 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & 4 \end{bmatrix} \begin{bmatrix} -1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}^{-1}\).

15. \(A = \begin{bmatrix} -3 & -1 & -2 \\ -2 & -1 & -2 \\ 3 & 1 & 3 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 3 \end{bmatrix} \begin{bmatrix} -3 & -1 & -2 \\ -2 & -1 & -2 \\ 3 & 1 & 3 \end{bmatrix}^{-1}\).

17. \(A = \begin{bmatrix} -2 & 1 & 1 \\ -10 & 0 & 2 \\ -6 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -2 & 1 & 1 \\ -10 & 0 & 2 \\ -6 & 0 & 0 \end{bmatrix}^{-1}\).

19. \(A = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 3 & 0 \\ 1 & 3 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 & 0 \\ 2 & 3 & 0 \\ 1 & 3 & 1 \end{bmatrix}^{-1}\).

Section 2.1

1a. \(Y = \begin{bmatrix} -7c_1e^{3x} + 4c_2e^{5x} \\ 9c_1e^{3x} - 5c_2e^{5x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} -7e^{3x} + 8e^{5x} \\ 9e^{3x} - 10e^{5x} \end{bmatrix}\).

3a. \(Y = \begin{bmatrix} (-21c_1 + c_2)e^{3x} - 21c_2xe^{3x} \\ -49c_1e^{3x} - 49c_2xe^{3x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} 41e^{3x} + 21xe^{3x} \\ 98e^{3x} + 49xe^{3x} \end{bmatrix}\).

5a. \(Y = \begin{bmatrix} (-5c_1 + c_2)e^{7x} - 5c_2xe^{7x} \\ -25c_1e^{7x} - 25c_2xe^{7x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} -10e^{7x} - 25xe^{7x} \\ -75e^{7x} - 125xe^{7x} \end{bmatrix}\).

7a. \(Y = \begin{bmatrix} 2c_2e^{x} \\ c_1e^{-x} + c_2e^{x} - 3c_3e^{3x} \\ c_1e^{-x} + c_2e^{x} + c_3e^{3x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} 6e^{x} \\ 2e^{-x} + 3e^{x} - 15e^{3x} \\ 2e^{-x} + 3e^{x} + 5e^{3x} \end{bmatrix}\).

9a. \(Y = \begin{bmatrix} (-c_1 - 2c_2)e^{-3x} - 2c_3e^{x} \\ c_1e^{-3x} - c_3e^{x} \\ c_2e^{-3x} + c_3e^{x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} 0 \\ 2e^{-3x} \\ -e^{-3x} \end{bmatrix}\).

11a. \(Y = \begin{bmatrix} (-c_1 - 2c_2)e^{4x} - c_3e^{2x} \\ c_2e^{4x} + c_3e^{2x} \\ c_1e^{4x} + c_3e^{2x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} 2e^{4x} - 5e^{2x} \\ -3e^{4x} + 5e^{2x} \\ 4e^{4x} + 5e^{2x} \end{bmatrix}\).

13a. \(Y = \begin{bmatrix} -c_1e^{-2x} - c_2xe^{-2x} \\ -c_1e^{-2x} - c_2xe^{-2x} + c_3e^{4x} \\ c_2e^{-2x} + c_3e^{4x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} -e^{-2x} - 2xe^{-2x} \\ -e^{-2x} - 2xe^{-2x} + 4e^{4x} \\ 2e^{-2x} + 4e^{4x} \end{bmatrix}\).

15a. \(Y = \begin{bmatrix} (-3c_1 - c_2) - 3c_2x - 2c_3e^{3x} \\ (-2c_1 - c_2) - 2c_2x - 2c_3e^{3x} \\ (3c_1 + c_2) + 3c_2x + 3c_3e^{3x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} -9 - 9x + 10e^{3x} \\ -7 - 6x + 10e^{3x} \\ 9 + 9x - 15e^{3x} \end{bmatrix}\).

17a. \(Y = \begin{bmatrix} (-2c_1 + c_2 + c_3)e^{x} - 2c_2xe^{x} \\ (-10c_1 + 2c_3)e^{x} - 10c_2xe^{x} \\ -6c_1e^{x} - 6c_2xe^{x} \end{bmatrix}\). b. \(Y = \begin{bmatrix} 3e^{x} - 14xe^{x} \\ 10e^{x} - 70xe^{x} \\ 18e^{x} - 42xe^{x} \end{bmatrix}\).

19a. \(Y = \begin{bmatrix} (c_1 + 2c_2) + (c_2 + 2c_3)x + (c_3/2)x^2 \\ (2c_1 + 3c_2) + (2c_2 + 3c_3)x + c_3x^2 \\ (c_1 + 3c_2 + c_3) + (c_2 + 3c_3)x + (c_3/2)x^2 \end{bmatrix}\). b. \(Y = \begin{bmatrix} 6 + 5x + x^2 \\ 11 + 8x + 2x^2 \\ 9 + 7x + x^2 \end{bmatrix}\).

Section 2.2

1a.
\[
Y = \begin{bmatrix} e^{4x}(\cos(3x) + 3\sin(3x)) & e^{4x}(-3\cos(3x) + \sin(3x)) \\ 2e^{4x}\cos(3x) & 2e^{4x}\sin(3x) \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} (c_1 - 3c_2)e^{4x}\cos(3x) + (3c_1 + c_2)e^{4x}\sin(3x) \\ 2c_1e^{4x}\cos(3x) + 2c_2e^{4x}\sin(3x) \end{bmatrix}.
\]
b. \(Y = \begin{bmatrix} 8e^{4x}\cos(3x) + 19e^{4x}\sin(3x) \\ 13e^{4x}\cos(3x) - e^{4x}\sin(3x) \end{bmatrix}\).

3a.
\[
Y = \begin{bmatrix} e^{7x}(2\cos(3x) + 3\sin(3x)) & e^{7x}(-3\cos(3x) + 2\sin(3x)) \\ e^{7x}\cos(3x) & e^{7x}\sin(3x) \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}
= \begin{bmatrix} (2c_1 - 3c_2)e^{7x}\cos(3x) + (3c_1 + 2c_2)e^{7x}\sin(3x) \\ c_1e^{7x}\cos(3x) + c_2e^{7x}\sin(3x) \end{bmatrix}.
\]
b. \(Y = \begin{bmatrix} 2e^{7x}\cos(3x) + 3e^{7x}\sin(3x) \\ e^{7x}\cos(3x) \end{bmatrix}\).

5.
\[
Y = \begin{bmatrix} e^{2x}(-\cos(5x) - \sin(5x)) & e^{2x}(\cos(5x) - \sin(5x)) & 0 \\ e^{2x}(-3\cos(5x) + 4\sin(5x)) & e^{2x}(-4\cos(5x) - 3\sin(5x)) & -2e^{3x} \\ 3e^{2x}\cos(5x) & 3e^{2x}\sin(5x) & e^{3x} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}
\]
\[
= \begin{bmatrix} (-c_1 + c_2)e^{2x}\cos(5x) + (-c_1 - c_2)e^{2x}\sin(5x) \\ (-3c_1 - 4c_2)e^{2x}\cos(5x) + (4c_1 - 3c_2)e^{2x}\sin(5x) - 2c_3e^{3x} \\ 3c_1e^{2x}\cos(5x) + 3c_2e^{2x}\sin(5x) + c_3e^{3x} \end{bmatrix}.
\]

Section 2.3

1. \(Y_i = \begin{bmatrix} 10e^{8x} - 168e^{4x} \\ -12e^{8x} + 213e^{4x} \end{bmatrix}\).

3. \(Y_i = \begin{bmatrix} -20e^{4x} + 9e^{5x} \\ -49e^{4x} + 23e^{5x} \end{bmatrix}\).

5. \(Y_i = \begin{bmatrix} -2e^{10x} + e^{12x} \\ -25e^{10x} + 10e^{12x} \end{bmatrix}\).

7. \(Y_i = \begin{bmatrix} -1 \\ -1 - 2e^{2x} - 3e^{4x} \\ -1 + e^{2x} + 2e^{4x} \end{bmatrix}\).

Section 2.4

1a. \(e^{Ax} = \begin{bmatrix} -35e^{3x} + 36e^{5x} & -28e^{3x} + 28e^{5x} \\ 45e^{3x} - 45e^{5x} & 36e^{3x} - 35e^{5x} \end{bmatrix}\).
\[
Y = \begin{bmatrix} (-35\gamma_1 - 28\gamma_2)e^{3x} + (36\gamma_1 + 28\gamma_2)e^{5x} \\ (45\gamma_1 + 36\gamma_2)e^{3x} + (-45\gamma_1 - 35\gamma_2)e^{5x} \end{bmatrix}.
\]
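This answer can be spot-checked numerically, assuming (as the exercise statements indicate) that A here is the matrix of Chapter 1, Exercise 1, rebuilt from its answer above:

```python
import numpy as np
from scipy.linalg import expm

# A = P @ diag(3, 5) @ P^{-1}, from the Chapter 1, Exercise 1 answer.
P = np.array([[-7.0, 4.0], [9.0, -5.0]])
A = P @ np.diag([3.0, 5.0]) @ np.linalg.inv(P)

x = 0.37  # arbitrary sample point
e3, e5 = np.exp(3 * x), np.exp(5 * x)
answer = np.array([[-35 * e3 + 36 * e5, -28 * e3 + 28 * e5],
                   [ 45 * e3 - 45 * e5,  36 * e3 - 35 * e5]])
print(np.allclose(answer, expm(A * x)))  # True
```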

3a. \(e^{Ax} = \begin{bmatrix} e^{3x} - 21xe^{3x} & 9xe^{3x} \\ -49xe^{3x} & e^{3x} + 21xe^{3x} \end{bmatrix}\).
\[
Y = \begin{bmatrix} \gamma_1e^{3x} + (-21\gamma_1 + 9\gamma_2)xe^{3x} \\ \gamma_2e^{3x} + (-49\gamma_1 + 21\gamma_2)xe^{3x} \end{bmatrix}.
\]

5a. \(e^{Ax} = \begin{bmatrix} e^{7x} - 5xe^{7x} & xe^{7x} \\ -25xe^{7x} & e^{7x} + 5xe^{7x} \end{bmatrix}\).
\[
Y = \begin{bmatrix} \gamma_1e^{7x} + (-5\gamma_1 + \gamma_2)xe^{7x} \\ \gamma_2e^{7x} + (-25\gamma_1 + 5\gamma_2)xe^{7x} \end{bmatrix}.
\]

7a.
\[
e^{Ax} = \begin{bmatrix} e^{x} & 0 & 0 \\ -\tfrac12 e^{-x} + \tfrac12 e^{x} & \tfrac14 e^{-x} + \tfrac34 e^{3x} & \tfrac34 e^{-x} - \tfrac34 e^{3x} \\ -\tfrac12 e^{-x} + \tfrac12 e^{x} & \tfrac14 e^{-x} - \tfrac14 e^{3x} & \tfrac34 e^{-x} + \tfrac14 e^{3x} \end{bmatrix}.
\]
\[
Y = \begin{bmatrix} \gamma_1 e^{x} \\ (-\tfrac12\gamma_1 + \tfrac14\gamma_2 + \tfrac34\gamma_3)e^{-x} + \tfrac12\gamma_1 e^{x} + (\tfrac34\gamma_2 - \tfrac34\gamma_3)e^{3x} \\ (-\tfrac12\gamma_1 + \tfrac14\gamma_2 + \tfrac34\gamma_3)e^{-x} + \tfrac12\gamma_1 e^{x} + (-\tfrac14\gamma_2 + \tfrac14\gamma_3)e^{3x} \end{bmatrix}.
\]


9a.
\[
e^{Ax} = \begin{bmatrix} -e^{-3x} + 2e^{x} & -2e^{-3x} + 2e^{x} & -4e^{-3x} + 4e^{x} \\ -e^{-3x} + e^{x} & e^{x} & -2e^{-3x} + 2e^{x} \\ e^{-3x} - e^{x} & e^{-3x} - e^{x} & 3e^{-3x} - 2e^{x} \end{bmatrix}.
\]
\[
Y = \begin{bmatrix} (-\gamma_1 - 2\gamma_2 - 4\gamma_3)e^{-3x} + (2\gamma_1 + 2\gamma_2 + 4\gamma_3)e^{x} \\ (-\gamma_1 - 2\gamma_3)e^{-3x} + (\gamma_1 + \gamma_2 + 2\gamma_3)e^{x} \\ (\gamma_1 + \gamma_2 + 3\gamma_3)e^{-3x} + (-\gamma_1 - \gamma_2 - 2\gamma_3)e^{x} \end{bmatrix}.
\]

11a.
\[
e^{Ax} = \begin{bmatrix} \tfrac32 e^{4x} - \tfrac12 e^{2x} & e^{4x} - e^{2x} & \tfrac12 e^{4x} - \tfrac12 e^{2x} \\ -\tfrac12 e^{4x} + \tfrac12 e^{2x} & e^{2x} & -\tfrac12 e^{4x} + \tfrac12 e^{2x} \\ -\tfrac12 e^{4x} + \tfrac12 e^{2x} & -e^{4x} + e^{2x} & \tfrac12 e^{4x} + \tfrac12 e^{2x} \end{bmatrix}.
\]
\[
Y = \begin{bmatrix} (\tfrac32\gamma_1 + \gamma_2 + \tfrac12\gamma_3)e^{4x} + (-\tfrac12\gamma_1 - \gamma_2 - \tfrac12\gamma_3)e^{2x} \\ (-\tfrac12\gamma_1 - \tfrac12\gamma_3)e^{4x} + (\tfrac12\gamma_1 + \gamma_2 + \tfrac12\gamma_3)e^{2x} \\ (-\tfrac12\gamma_1 - \gamma_2 + \tfrac12\gamma_3)e^{4x} + (\tfrac12\gamma_1 + \gamma_2 + \tfrac12\gamma_3)e^{2x} \end{bmatrix}.
\]

13a.
\[
e^{Ax} = \begin{bmatrix} e^{-2x} - xe^{-2x} & xe^{-2x} & -xe^{-2x} \\ e^{-2x} - xe^{-2x} - e^{4x} & xe^{-2x} + e^{4x} & -xe^{-2x} \\ e^{-2x} - e^{4x} & -e^{-2x} + e^{4x} & e^{-2x} \end{bmatrix}.
\]
\[
Y = \begin{bmatrix} \gamma_1e^{-2x} + (-\gamma_1 + \gamma_2 - \gamma_3)xe^{-2x} \\ \gamma_1e^{-2x} + (-\gamma_1 + \gamma_2 - \gamma_3)xe^{-2x} + (-\gamma_1 + \gamma_2)e^{4x} \\ (\gamma_1 - \gamma_2 + \gamma_3)e^{-2x} + (-\gamma_1 + \gamma_2)e^{4x} \end{bmatrix}.
\]

15a.
\[
e^{Ax} = \begin{bmatrix} 3 - 2e^{3x} & 9x & 2 + 6x - 2e^{3x} \\ 2 - 2e^{3x} & 1 + 6x & 2 + 4x - 2e^{3x} \\ -3 + 3e^{3x} & -9x & -2 - 6x + 3e^{3x} \end{bmatrix}.
\]
\[
Y = \begin{bmatrix} (3\gamma_1 + 2\gamma_3) + (9\gamma_2 + 6\gamma_3)x + (-2\gamma_1 - 2\gamma_3)e^{3x} \\ (2\gamma_1 + \gamma_2 + 2\gamma_3) + (6\gamma_2 + 4\gamma_3)x + (-2\gamma_1 - 2\gamma_3)e^{3x} \\ (-3\gamma_1 - 2\gamma_3) + (-9\gamma_2 - 6\gamma_3)x + (3\gamma_1 + 3\gamma_3)e^{3x} \end{bmatrix}.
\]

17a.
\[
e^{Ax} = \begin{bmatrix} e^{x} - 2xe^{x} & xe^{x} & -xe^{x} \\ -10xe^{x} & e^{x} + 5xe^{x} & -5xe^{x} \\ -6xe^{x} & 3xe^{x} & e^{x} - 3xe^{x} \end{bmatrix}.
\]
\[
Y = \begin{bmatrix} \gamma_1e^{x} + (-2\gamma_1 + \gamma_2 - \gamma_3)xe^{x} \\ \gamma_2e^{x} + (-10\gamma_1 + 5\gamma_2 - 5\gamma_3)xe^{x} \\ \gamma_3e^{x} + (-6\gamma_1 + 3\gamma_2 - 3\gamma_3)xe^{x} \end{bmatrix}.
\]

19a.
\[
e^{Ax} = \begin{bmatrix} 1 - 4x - \tfrac32 x^2 & x + \tfrac12 x^2 & 2x + \tfrac12 x^2 \\ -5x - 3x^2 & 1 + x + x^2 & 3x + x^2 \\ -7x - \tfrac32 x^2 & 2x + \tfrac12 x^2 & 1 + 3x + \tfrac12 x^2 \end{bmatrix}.
\]
\[
Y = \begin{bmatrix} \gamma_1 + (-4\gamma_1 + \gamma_2 + 2\gamma_3)x + (-\tfrac32\gamma_1 + \tfrac12\gamma_2 + \tfrac12\gamma_3)x^2 \\ \gamma_2 + (-5\gamma_1 + \gamma_2 + 3\gamma_3)x + (-3\gamma_1 + \gamma_2 + \gamma_3)x^2 \\ \gamma_3 + (-7\gamma_1 + 2\gamma_2 + 3\gamma_3)x + (-\tfrac32\gamma_1 + \tfrac12\gamma_2 + \tfrac12\gamma_3)x^2 \end{bmatrix}.
\]
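This answer, whose entries are polynomials in x because the eigenvalue is 0, can also be spot-checked numerically, assuming A is the matrix of Chapter 1, Exercise 19, rebuilt from its answer above:

```python
import numpy as np
from scipy.linalg import expm

# A = P @ N @ P^{-1}, from the Chapter 1, Exercise 19 answer; N is nilpotent.
P = np.array([[1.0, 2.0, 0.0], [2.0, 3.0, 0.0], [1.0, 3.0, 1.0]])
N = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
A = P @ N @ np.linalg.inv(P)

x = 0.5  # arbitrary sample point
answer = np.array([
    [1 - 4*x - 1.5*x**2,   x + 0.5*x**2,    2*x + 0.5*x**2],
    [-5*x - 3*x**2,        1 + x + x**2,    3*x + x**2],
    [-7*x - 1.5*x**2,      2*x + 0.5*x**2,  1 + 3*x + 0.5*x**2]])
print(np.allclose(answer, expm(A * x)))  # True
```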

21a. \(e^{Ax} = \begin{bmatrix} e^{4x}(\cos(3x) - \tfrac13\sin(3x)) & \tfrac53 e^{4x}\sin(3x) \\ -\tfrac23 e^{4x}\sin(3x) & e^{4x}(\cos(3x) + \tfrac13\sin(3x)) \end{bmatrix}\).
\[
Y = \begin{bmatrix} \gamma_1e^{4x}\cos(3x) + (-\tfrac13\gamma_1 + \tfrac53\gamma_2)e^{4x}\sin(3x) \\ \gamma_2e^{4x}\cos(3x) + (-\tfrac23\gamma_1 + \tfrac13\gamma_2)e^{4x}\sin(3x) \end{bmatrix}.
\]

23a. \(e^{Ax} = \begin{bmatrix} e^{7x}(\cos(3x) - \tfrac23\sin(3x)) & \tfrac{13}{3} e^{7x}\sin(3x) \\ -\tfrac13 e^{7x}\sin(3x) & e^{7x}(\cos(3x) + \tfrac23\sin(3x)) \end{bmatrix}\).
\[
Y = \begin{bmatrix} \gamma_1e^{7x}\cos(3x) + (-\tfrac23\gamma_1 + \tfrac{13}{3}\gamma_2)e^{7x}\sin(3x) \\ \gamma_2e^{7x}\cos(3x) + (-\tfrac13\gamma_1 + \tfrac23\gamma_2)e^{7x}\sin(3x) \end{bmatrix}.
\]


Index

antiderivative
  arbitrary, 48, 67
chain of generalized eigenvectors, 10, 12, 73
  bottom of, 10
  top of, 10
characteristic polynomial, 1, 75
  complex root, see eigenvalue, complex
complex exponential, 41, 75
diagonalizable, 5–7, 72
eigenspace, 1
  generalized, 8, 9
eigenvalue, 1
  complex, 40, 43, 65
eigenvector, 1
  generalized, 8
  index of generalized, 8
Euler, 41, 75
Fundamental Theorem of Algebra, 3
integrating factor, 30, 31, 67
JCF, see Jordan Canonical Form
Jordan block, 7, 8, 12, 28, 47, 60, 61, 68, 73
Jordan Canonical Form, 8, 14, 17, 21, 25, 31, 47, 66
linear differential equations
  associated homogeneous system of, 47
  fundamental matrix of system of, 26
  general solution of homogeneous system of, 25, 26, 29, 36, 43, 47, 54
  general solution of inhomogeneous system of, 47
  homogeneous system of, 25, 29, 36, 40, 54
  inhomogeneous system of, 46, 67
  initial value problem for system of, 35, 36, 52, 54
  particular solution of inhomogeneous system of, 48, 67
  uncoupled system of, 27, 31
linear transformation, 70
  matrix of, 70
matrix
  change-of-basis, 72
matrix exponential, 38, 48, 53–55, 60, 61, 66–68
multiplicity, 2
  algebraic, 2, 3, 5, 9, 72, 74
  geometric, 2, 3, 5, 17, 72, 74
similar, 4, 5
standard basis, 69–71
vector
  coordinate, 69
  imaginary part of, 43
  real part of, 43
vector space
  basis of, 69