
MATH 716 – INTRODUCTION TO DIFFERENTIAL EQUATIONS

Spring 2020

Contents

Chapter 1. Prototypes of Differential Equations
1.1. The General Case
1.2. Calculus Solution Methods
1.3. 1D autonomous
1.4. 1D nonautonomous
1.5. 2D autonomous
1.6. Second order ODE as 1st order system
1.7. 3D autonomous
1.8. Mechanical systems – particle in a conservative force field

Chapter 2. Existence and Uniqueness Theorems
2.1. Basic assumptions
2.2. Existence theorem using Picard iteration
2.3. Uniqueness and dependence on initial values using Gronwall's trick
2.4. Verifying the Lipschitz condition

Chapter 3. The Maximal Solution
3.1. Definition and Existence Theorem for maximal solutions
3.2. Continuation of the solution
3.3. Example: mechanical system with potential bounded from below

Chapter 4. Local flows
4.1. Definitions and properties
4.2. Continuous dependence on parameters

Chapter 5. Linear systems
5.1. The matrix exponential
5.2. Properties of the exponential
5.3. e^{tA} when there are complex eigenvalues

Chapter 6. Topological dynamics
6.1. α and ω-limit sets
6.2. Attractors
6.3. Notions of stability
6.4. Lyapunov functions

Chapter 7. Differentiability of the flow
7.1. Linearized equation

Chapter 8. Appendix
.1. Norms on vector spaces



CHAPTER 1

Prototypes of Differential Equations

1.1. The General Case

Given functions f1, . . . , fn : (t−, t+) × O → R, with O ⊂ R^n open and −∞ ≤ t− < t+ ≤ +∞, consider solutions of

x1′ = f1(t, x1(t), . . . , xn(t))
⋮
xn′ = fn(t, x1(t), . . . , xn(t))

that are defined for t ∈ I for some interval I ⊂ (t−, t+). Or, in vector form, consider solutions x : I → R^n, I ⊂ (t−, t+), of

(1.1)   x′ = F(t, x(t)),

where F : (t−, t+) × O → R^n is defined by

F(t, x) = ( f1(t, x1, . . . , xn), . . . , fn(t, x1, . . . , xn) )^T,   x(t) = ( x1(t), . . . , xn(t) )^T.

In lecture I will not use boldface for vectors and will simply write x.

The general equation (1.1) includes very many different special cases. Here is a list of a few of them, in increasing order of difficulty. The first few can be understood completely using calculus; the last few present hard open problems.

1.2. Calculus Solution Methods

“Integrate the equation,” i.e. separate variables and/or find an integrating factor – all calculus solution methods are one form or another of the idea “find a function g : (t−, t+) × O → R such that g(t, x1(t), . . . , xn(t)) is constant for solutions of (1.1).”
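As a concrete instance of this idea, here is the computation for the logistic equation of §1.3.1 below (a standard calculus exercise, added here for illustration; it is not part of the notes as extracted):

```latex
% Separation of variables for the logistic equation x' = x(A - x), with 0 < x < A.
% The function g is constant along solutions, which is exactly the "find g" idea above.
\[
   g(t,x) \;=\; \frac1A \ln\frac{x}{A-x} \;-\; t ,
\qquad
   \frac{d}{dt}\, g\bigl(t, x(t)\bigr)
   \;=\; \frac{x'(t)}{x(t)\bigl(A - x(t)\bigr)} - 1 \;=\; 0 .
\]
% Solving g(t, x(t)) = g(0, x_0) for x(t) gives the explicit solution
\[
   x(t) \;=\; \frac{A}{1 + C e^{-At}},
\qquad
   C \;=\; \frac{A - x_0}{x_0}.
\]
```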

1.3. 1D autonomous

For f : R → R consider

x′ = f(x),   x(t0) = a

1.3.1. Logistic Equation. x′ = x(A − x)

1.3.2. Chemical Explosion Equation. x′ = Kx^2
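For K, x0 > 0 this equation can be integrated explicitly (again a calculus computation, added here for illustration); it shows that solutions may blow up in finite time, a phenomenon taken up again in Chapter 3:

```latex
% Solution of x' = K x^2 with x(t_0) = x_0 > 0, obtained by separating variables:
\[
   x(t) \;=\; \frac{x_0}{\,1 - K x_0 (t - t_0)\,},
\]
% which exists only for t < t_0 + 1/(K x_0) and blows up as t approaches that time.
```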


1.3.3. Leaky Bucket Equation. x′ = −K√x

1.4. 1D nonautonomous

For f : R × R → R

x′ = f(t, x),   x(t0) = a.

For example, periodic harvesting: x′ = x(1 + a cos ωt − x)

Questions:

• Do solutions exist for all t ≥ 0?
• If x : [0, ∞) → R is a solution, what is lim_{t→∞} x(t)?

1.5. 2D autonomous

For f, g : R^2 → R

x′ = f(x, y)
y′ = g(x, y)

1.5.1. Linear systems. x′ = ax + by,   y′ = cx + dy

This is a special case of the general linear system:

autonomous: x′ = Ax(t),   nonautonomous: x′ = A(t)x(t)

where A : R^n → R^n and A(t) : R^n → R^n are linear transformations.

1.5.2. Population dynamics.

x′ = x(K0 + ax + by),   y′ = y(K1 + cx + dy)

Depending on the signs of the coefficients a, b, c, d this covers many different cases (competitive, cooperative, predator-prey), and also the Lotka-Volterra equations

x′ = x(A − By),   y′ = y(Dx − C)

where A, B, C, D > 0.
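For the Lotka-Volterra system the program of §1.2 can be carried out explicitly; the following conserved quantity is classical, and checking it is a short computation (added here for illustration, not part of the notes as extracted):

```latex
% A function that is constant along solutions of x' = x(A - By), y' = y(Dx - C),
% valid in the quadrant x > 0, y > 0:
\[
   H(x,y) \;=\; Dx - C\ln x \;+\; By - A\ln y ,
\]
\[
   \frac{d}{dt} H\bigl(x(t),y(t)\bigr)
   = \Bigl(D - \tfrac{C}{x}\Bigr)\, x(A-By) \;+\; \Bigl(B - \tfrac{A}{y}\Bigr)\, y(Dx - C)
   = (Dx - C)(A - By) + (By - A)(Dx - C) = 0 .
\]
```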

1.6. Second order ODE as 1st order system

1.6.1. General case x′′ = f(t, x, x′).

x′ = y,   y′ = f(t, x, y)

There is a steep increase in level of difficulty as one goes from autonomous systems in 2D to nonautonomous systems in 2D. While autonomous systems can be more or less “completely understood,” the nonautonomous 2D systems are still a source of many very deep unsolved problems in dynamical systems and ergodic theory.

1.6.2. Linear oscillator. x′′ + Kx = 0, rewritten as x′ = y, y′ = −Kx

1.6.3. Pendulum. θ′′ = −(g/L) sin θ, rewritten as

x′ = y,   y′ = −(g/L) sin x


1.6.4. Damped pendulum. θ′′ = −(g/L) sin θ − Kθ′, rewritten as

x′ = y,   y′ = −(g/L) sin x − Ky

1.6.5. Autonomous Forced Pendulum. θ′′ = −(g/L) sin θ + F, rewritten as

x′ = y,   y′ = −(g/L) sin x + F

Here F ∈ R is the force, which does not depend on time.

1.6.6. Periodically Forced Pendulum. θ′′ = −(g/L) sin θ + F(t), rewritten as

x′ = y,   y′ = −(g/L) sin x + F(t)

Here the force F(t) is a periodic function of time.

1.6.7. Pendulum on a boat. If a pendulum is on a ship which goes up and down periodically, and whose height is h(t), then the equation for a pendulum on this ship is obtained by replacing g with g + h′′(t):

θ′′(t) = −((g + h′′(t))/L) sin θ,

or, as a system of first order equations,

x′ = y,   y′ = −((g + h′′(t))/L) sin x.

1.6.8. Periodically forced, damped pendulum. θ′′ = −(g/L) sin θ − Kθ′ + F(t), rewritten as

x′ = y,   y′ = −(g/L) sin x − Ky + F(t),

in which the force F(t) is periodic with period T > 0, i.e. ∀t : F(t + T) = F(t).

1.7. 3D autonomous

1.7.1. 2D nonautonomous rewritten as 3D system. The system

x′ = f(t, x, y),   y′ = g(x, y)

can also be rewritten as

x′ = f(t, x, y),   y′ = g(x, y),   t′ = 1,

which is an autonomous system in R^3.

1.7.2. The Lorenz Equations.

x′ = A(y − x),   y′ = x(B − z) − y,   z′ = xy − Cz

(See https://en.wikipedia.org/wiki/Lorenz_system)
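A minimal numerical sketch (mine, not part of the notes) of how one typically explores this system; it assumes scipy is available and uses the classical parameter values A = 10, B = 28, C = 8/3:

```python
# Integrate the Lorenz system x' = A(y - x), y' = x(B - z) - y, z' = xy - Cz
# numerically with scipy's solve_ivp.  Parameter values are the classical ones.
import numpy as np
from scipy.integrate import solve_ivp

A, B, C = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, u):
    x, y, z = u
    return [A * (y - x), x * (B - z) - y, x * y - C * z]

sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])   # state at t = 40; nearby initial values diverge quickly
```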


1.8. Mechanical systems – particle in a conservative force field

If a particle of mass m is free to move around in the plane or in R^3, and if x(t) ∈ R^n is its position at time t, then Newton’s law states that mx′′(t) equals the force F acting on the particle. If the force is conservative, then there is a potential function V : R^n → R such that F = −∇V(x(t)). The motion of the particle is then described by the following system of differential equations:

(1.2)   x′ = y,   y′ = −(1/m)∇V(x)

If the potential is defined only on an open subset O ⊂ R^n, then the above system of differential equations is defined on the set

TO = { (x, y) ∈ R^{2n} | x ∈ O, y ∈ R^n }.

For example, for a particle moving in the plane we have n = 2, so that the system of differential equations is defined on an open subset of R^4.

Another example is given by the mathematical pendulum (in which m = g = L = 1), where x = θ is the angle, and where the potential is given by

V(x) = − cos x.

1.8.1. Conserved quantity: the energy. If the potential V is C^2, then the function E : R^{2n} → R given by

E(x, y) = (m/2)‖y‖^2 + V(x)

is a conserved quantity for the mechanical system (1.2), i.e. if (x, y) : I → R^{2n} is a solution of (1.2), then

(d/dt) E(x(t), y(t)) = 0.

Proof. Direct computation using the chain rule. ////
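For completeness (the notes leave this computation to the reader), using x′ = y and y′ = −(1/m)∇V(x) from (1.2):

```latex
\[
   \frac{d}{dt} E\bigl(x(t), y(t)\bigr)
   \;=\; m\,\langle y , y' \rangle + \langle \nabla V(x) , x' \rangle
   \;=\; m\,\bigl\langle y , -\tfrac1m \nabla V(x) \bigr\rangle + \langle \nabla V(x) , y \rangle
   \;=\; 0 .
\]
```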


CHAPTER 2

Existence and Uniqueness Theorems

In this chapter we see Picard’s existence proof, Gronwall’s bound for solutions, and the uniqueness and continuation theorems.

2.1. Basic assumptions

Let O ⊂ R^n be open, and let I ⊂ R be an interval. For any function f : I × O → R^n we consider the system of differential equations

(2.1)   dx/dt = f(t, x(t)),   t ∈ I.

We make two basic assumptions on f:

(Picard 1) f is continuous;
(Picard 2) f satisfies the Lipschitz condition, i.e. there is an L ∈ R such that

∀x, y ∈ O, ∀t ∈ I : ‖f(t, x) − f(t, y)‖ ≤ L‖x − y‖.

2.2. Existence theorem using Picard iteration

2.2.1. Theorem. For any x0 ∈ O and t0 ∈ I there exist an ε > 0 with [t0 − ε, t0 + ε] ⊂ I, and a continuously differentiable x : [t0 − ε, t0 + ε] → O that satisfies

x(t0) = x0   and   ∀t ∈ [t0 − ε, t0 + ε] : x′(t) = f(t, x(t)).

Proof. Choose r > 0 so small that

[t0 − r, t0 + r] × B_r(x0) ⊂ I × O.

Since f is continuous, the quantity

M := max { ‖f(t, x)‖ : ‖x − x0‖ ≤ r, |t − t0| ≤ r }

is well defined. Define

ε = min{ r, r/(2M) },   Iε = [t0 − ε, t0 + ε].

For each nonnegative integer k we define a function xk : Iε → B_r(x0) by

x0(t) := x0   for all t ∈ Iε,

and

(2.2)   xk(t) = x0 + ∫_{t0}^{t} f(s, x_{k−1}(s)) ds   for k ≥ 1.


To see that the xk are well defined, use induction. Since x_{k−1}(t) ∈ B_r(x0) we have ‖f(s, x_{k−1}(s))‖ ≤ M for all s ∈ Iε. This implies that ‖xk(t) − x0‖ ≤ Mε ≤ r/2 < r, and hence that xk(t) ∈ B_r(x0) for all t ∈ Iε.

The next step is to estimate the differences

yk(t) := xk(t) − x_{k−1}(t)   and   δk(t) := ‖yk(t)‖.

For k = 1 we have

δ1(t) = ‖x1(t) − x0‖ ≤ ∫_{[t0,t]} ‖f(s, x0)‖ ds ≤ M|t − t0|.

For k ≥ 2 we compute

δk(t) ≤ ∫_{[t0,t]} ‖f(s, x_{k−1}(s)) − f(s, x_{k−2}(s))‖ ds
      ≤ L ∫_{[t0,t]} ‖x_{k−1}(s) − x_{k−2}(s)‖ ds
      = L ∫_{[t0,t]} δ_{k−1}(s) ds,

which then implies, by induction,

δk(t) ≤ M L^{k−1} |t − t0|^k / k! = (M/L) · (L|t − t0|)^k / k!.

On the interval Iε we then have

‖yk(t)‖ ≤ (M/L) · (Lε)^k / k!.

Since the series

∑_{k=1}^{∞} (M/L) · (Lε)^k / k! = (M/L) ( e^{Lε} − 1 )

converges, we conclude that the series ∑ yk(t) converges uniformly for t ∈ Iε. Therefore

x(t) := lim_{k→∞} xk(t) = lim_{k→∞} ( x0 + y1(t) + · · · + yk(t) )

exists for all t ∈ Iε, and the convergence is uniform. We may then pass to the limit in (2.2), with the result that

x(t) = x0 + ∫_{t0}^{t} f(s, x(s)) ds

for all t ∈ Iε. This implies that x is continuously differentiable, that x satisfies the differential equation x′ = f(t, x), and that x(t0) = x0. /////
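The iteration (2.2) can also be carried out symbolically. Here is a small sketch (mine, not from the notes) for the scalar example x′ = x with x(0) = 1, where the iterates are the Taylor polynomials of e^t; it assumes sympy is available:

```python
# Picard iteration (2.2) for x' = f(t, x) = x with x(0) = 1, done symbolically.
# Each iterate x_k is the degree-k Taylor polynomial of e^t.
import sympy as sp

t, s = sp.symbols("t s")
f = lambda s, x: x             # illustrative right-hand side f(s, x) = x
t0, x0 = 0, sp.Integer(1)

xk = x0                        # x_0(t) = x0
for k in range(1, 6):
    xk = x0 + sp.integrate(f(s, xk.subs(t, s)), (s, t0, t))
    print(k, sp.expand(xk))    # 1 + t, 1 + t + t**2/2, ...
```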

2.3. Uniqueness and dependence on initial values using Gronwall’s trick

Theorem. If x, y : I → O both satisfy the differential equation, i.e.

x′(t) = f(t, x(t)) and y′(t) = f(t, y(t)) for all t ∈ I,

then one has

‖x(t) − y(t)‖ ≤ e^{L|t−t0|} ‖x(t0) − y(t0)‖

for all t, t0 ∈ I.


In particular, if for some t0 ∈ I one has x(t0) = y(t0), then one has x(t) = y(t) for all t ∈ I.

Proof. Write x0 = x(t0) and y0 = y(t0). The difference between the two solutions satisfies

x(t) − y(t) = x0 − y0 + ∫_{t0}^{t} { f(s, x(s)) − f(s, y(s)) } ds,

and hence, by the Lipschitz condition,

‖x(t) − y(t)‖ ≤ ‖x0 − y0‖ + ∫_{t0}^{t} ‖f(s, x(s)) − f(s, y(s))‖ ds
             ≤ ‖x0 − y0‖ + L ∫_{t0}^{t} ‖x(s) − y(s)‖ ds.

Here we are assuming that t ≥ t0; if t < t0 then the sign in front of the integral must be reversed.

Gronwall’s trick consists in defining

F(t) = ‖x0 − y0‖ + L ∫_{t0}^{t} ‖x(s) − y(s)‖ ds.

This is a C^1 function, which for t ≥ t0 satisfies

‖x(t) − y(t)‖ ≤ F(t)   and   F′(t) = L‖x(t) − y(t)‖.

Hence F′(t) ≤ LF(t), and

(d/dt) ( e^{−Lt} F(t) ) ≤ 0.

Thus for any t0, t ∈ I with t0 ≤ t we have

e^{−Lt} F(t) ≤ e^{−Lt0} F(t0)   =⇒   ‖x(t) − y(t)‖ ≤ e^{L|t−t0|} ‖x(t0) − y(t0)‖.

Similar arguments lead to the same estimate if t ≤ t0. /////
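A quick numerical illustration of this estimate (mine, not from the notes): for f(x) = sin x one may take L = 1, and two nearby solutions obey the bound; scipy is assumed to be available:

```python
# Check |x(t) - y(t)| <= e^{L|t - t0|} |x(0) - y(0)| for x' = sin(x), L = 1.
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: np.sin(x)
L, d0 = 1.0, 1e-3
ts = np.linspace(0.0, 5.0, 101)
x = solve_ivp(f, (0, 5), [1.0],      t_eval=ts, rtol=1e-10, atol=1e-12).y[0]
y = solve_ivp(f, (0, 5), [1.0 + d0], t_eval=ts, rtol=1e-10, atol=1e-12).y[0]

print(np.all(np.abs(x - y) <= np.exp(L * ts) * d0 + 1e-8))   # True
```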

2.4. Verifying the Lipschitz condition

To verify the Lipschitz condition in practice one can bound the derivatives of f(t, x) with respect to x. The following two theorems show how. For simplicity they are formulated with the assumption that f(t, x) does not depend on time t, i.e. they say how to prove a Lipschitz estimate for a function f : O → R^n instead of a function f : I × O → R^n.

2.4.1. The case of convex domains. If O ⊂ R^n is convex and if the derivative df_x : R^n → R^n satisfies ‖df_x‖ ≤ L for all x ∈ O, then f satisfies the Lipschitz condition ‖f(x) − f(y)‖ ≤ L‖x − y‖ for all x, y ∈ O.

Here ‖df_x‖ is the operator norm of the linear map df_x. See appendix .1.

Proof. Define z(θ) = x + θ(y − x). Then one has

f(y) − f(x) = ∫_{0}^{1} (d/dθ) f(z(θ)) dθ = ∫_{0}^{1} df_{z(θ)} · (y − x) dθ,

and hence

‖f(y) − f(x)‖ ≤ ∫_{0}^{1} ‖df_{z(θ)} · (y − x)‖ dθ ≤ ∫_{0}^{1} L‖y − x‖ dθ = L‖y − x‖. ////


2.4.2. General domains. If O ⊂ R^n is open and K ⊂ O is compact, and if f : O → R^n is C^1, then f|K is Lipschitz.

Proof. Use contradiction and the criterion for convex domains that we just proved above. ////


CHAPTER 3

The Maximal Solution

Let f : I × O → R^n be continuous and assume it satisfies the Lipschitz condition, i.e. assume that there is an L ∈ R such that for all x, y ∈ O and t ∈ I one has ‖f(t, x) − f(t, y)‖ ≤ L‖x − y‖.

The existence and uniqueness theorems from the previous chapter show that for any given initial data (t0, x0) ∈ I × O there exists a solution of

(3.1)   dx/dt = f(t, x),   x(t0) = x0

on a short closed time interval [t0 − ε, t0 + ε]. Given the values of the solution at t0 ± ε one can then use the existence theorem to extend the solution to a slightly larger interval. This raises the question: how far can the solution be extended, and how many extensions are there?

3.1. Definition and Existence Theorem for maximal solutions

3.1.1. Definition. If J ⊂ I is an interval with t0 ∈ J, and x : J → O is a C^1 solution of x′ = f(t, x) with x(t0) = x0, then x is a maximal solution if for any other C^1 solution x̃ : J̃ → O of the differential equation with x̃(t0) = x0 one has J̃ ⊂ J and x|_{J̃} = x̃.

If a maximal solution for given initial data (t0, x0) exists, then it is unique.

3.1.2. Theorem. For any (t0, x0) ∈ I × O there exists a maximal solution x : J → O of (3.1).

Proof. Consider the union J of the domains of all solutions with the given initial data. By the uniqueness theorem any two such solutions agree where their domains overlap, so together they define a single solution on J, which is by construction maximal. /////

3.2. Continuation of the solution

How can we decide whether the maximal solution is defined for all t ∈ I? A common proof strategy for showing that the maximal solution exists at least for all t ∈ J, for some given interval J ⊂ I, is to argue by contradiction using the following fact.


3.2.1. Continuation Theorem. Let x : J → O be a solution of x′ = f(t, x) with x(t0) = x0, and let Γ+ = {(t, x(t)) | t ∈ J, t ≥ t0} be the right half of its graph. If there is a compact set K ⊂ I × O such that Γ+ ⊂ K, then x is not the maximal solution.

Similarly, if the left half Γ− = {(t, x(t)) | t ∈ J, t ≤ t0} is contained in some compact K ⊂ I × O, then x is again not the maximal solution.

Another way to formulate the conclusion is to say: if K ⊂ I × O is compact, then any maximal solution must leave K in both time directions.

Proof. Picard’s existence theorem says that for every (t0, x0) ∈ I × O there is an ε(t0, x0) > 0 such that for ε = ε(t0, x0) a local solution x : [t0 − ε, t0 + ε] → O with x(t0) = x0 exists. Close scrutiny of Picard’s proof shows that ε(t0, x0) can be chosen to be a continuous function of (t0, x0) ∈ I × O.

Since K ⊂ I × O is compact, there is an ε̄ > 0 such that ε(t0, x0) > ε̄ for all (t0, x0) ∈ K.

Suppose Γ+ ⊂ K. Then the domain of the solution is J = (a, b) ⊂ I. Choose t1 ∈ J with t1 > b − ε̄/2. Then Picard’s theorem implies that we have a solution x̃ of the differential equation on the interval [t1 − ε̄, t1 + ε̄] with x̃(t1) = x(t1). By uniqueness, the solutions x and x̃ coincide on J ∩ [t1 − ε̄, t1 + ε̄]. Finally, it follows from t1 + ε̄/2 > b that x̃ extends our given solution x, and thus that x is not a maximal solution. ////

3.3. Example: mechanical system with potential bounded from below

We consider the mechanical system (1.2), in which we assume that the mass is m = 1. Thus we have a potential V ∈ C^2(R^n) and we consider the differential equations

dx/dt = y,   dy/dt = −∇V(x).

3.3.1. Theorem. If the potential is bounded from below, i.e. if there is an M ∈ R such that V(x) ≥ M for all x ∈ R^n, then for any initial data (x0, y0) ∈ R^{2n} the maximal solution with (x(0), y(0)) = (x0, y0) is defined for all t ∈ R.

Proof. Let (x0, y0) ∈ R^{2n} be given, and let (x, y) : (t−, t+) → R^{2n} be the maximal solution with (x(0), y(0)) = (x0, y0). The total energy remains constant along the solution, i.e.

E(x(t), y(t)) = (1/2)‖y(t)‖^2 + V(x(t)) = E0   for all t ∈ (t−, t+),

where E0 = (1/2)‖y0‖^2 + V(x0) is the initial energy. It follows that for all t ∈ (t−, t+) one has

‖y(t)‖ = √(2(E0 − V(x(t)))) ≤ √(2(E0 − M)).

In words, the velocity y(t) is uniformly bounded. This implies that for any t ∈ [0, t+) the position x(t) is bounded by

‖x(t) − x0‖ = ‖ ∫_0^t x′(s) ds ‖ ≤ ∫_0^t ‖y(s)‖ ds ≤ √(2(E0 − M)) · t.


Suppose now that t+ < ∞. Then the graph Γ+ = {(t, x(t), y(t)) : 0 ≤ t < t+} is contained in the compact set

K = { (t, x, y) | 0 ≤ t ≤ t+, ‖x − x0‖ ≤ √(2(E0 − M)) · t+, ‖y‖ ≤ √(2(E0 − M)) }.

But then, by the Continuation Theorem 3.2.1, our solution (x(t), y(t)) would not be maximal. Hence t+ = ∞. A similar argument shows that t− = −∞. ////

Since the pendulum equation (without forcing) is a special case of this example, we conclude that the solutions of the pendulum equation (see §1.6.3) are defined for all t ∈ R.


CHAPTER 4

Local flows

4.1. Definitions and properties

From here on we will mostly consider autonomous differential equations, i.e. systems of the form

(4.1)   dx/dt = f(x),

where f : O → R^n is C^1 and O ⊂ R^n is open.

We have seen that for each x0 ∈ O there is a unique maximal solution x : (t−, t+) → O of (4.1) with x(0) = x0. The time interval on which the solution is defined depends on the initial value x0; we write it as (t−(x0), t+(x0)).

We define

D = { (t, x) ∈ R × O | t−(x) < t < t+(x) }

and a map φ : D → O given by

φ(t, p) = x(t),

where x : (t−(p), t+(p)) → O is the solution of the differential equation with x(0) = p. Below we also write φt(p) for φ(t, p).

4.1.1. Theorem.

a. D is open and φ : D → O is continuous.
b. {0} × O ⊂ D and φ0(p) = p for all p ∈ O.
c. If (s, p) ∈ D and (t, φs(p)) ∈ D, then (t + s, p) ∈ D and

φt(φs(p)) = φt+s(p).

We break the proof into the next few lemmas.

4.1.2. Lemma. If p ∈ O and t0 ∈ (0, t+(p)), then there is a δ > 0 such that t+(q) > t0 for all q ∈ Bδ(p).

In other words, the existence time t+(p) is a lower semicontinuous function of p ∈ O. The idea of the proof is to use Gronwall’s trick to show that solutions y(t) of the differential equation y′ = f(y) for which ‖y(0) − x(0)‖ is small stay close to the given solution x(t).

Proof. The assumption t+(p) > t0 implies that the solution x : (t−(p), t+(p)) → O with initial value p is defined for all t ∈ [0, t0].

The set x([0, t0]) = {x(t) | 0 ≤ t ≤ t0} is compact, so there is an r > 0 such that

K = { z ∈ R^n | ∃t ∈ [0, t0] : ‖z − x(t)‖ ≤ r } ⊂ O.


Since K is compact and f ∈ C^1, f|K is Lipschitz continuous, i.e. there is an L > 0 such that ‖f(x) − f(y)‖ ≤ L‖x − y‖ for all x, y ∈ K.

Define δ = (1/2) e^{−Lt0} r. We will show that t+(q) > t0 for all q ∈ Bδ(p).

To reach a contradiction, suppose q ∈ Bδ(p) and suppose that t+(q) ≤ t0. Let y : [0, t+(q)) → O be the maximal solution of y′ = f(y) with y(0) = q. Then y(t) ∈ K for all t ∈ [0, t+(q)): if this were not true, then there would be a smallest t1 ∈ [0, t+(q)) with ‖y(t1) − x(t1)‖ = r. For all t ∈ [0, t1] we would have ‖y(t) − x(t)‖ ≤ r, i.e. y(t) ∈ K, so that Gronwall’s estimate gives

‖y(t) − x(t)‖ ≤ e^{Lt}‖q − p‖ ≤ e^{Lt}δ ≤ r/2

for all t ∈ [0, t1]. In particular ‖y(t1) − x(t1)‖ ≤ r/2 < r, in contradiction with ‖y(t1) − x(t1)‖ = r.

Thus the right half of the graph of y is contained in the compact set [0, t0] × K, so by the Continuation Theorem 3.2.1 the solution y is not maximal. This contradicts the choice of y, and hence t+(q) > t0. ////

4.1.3. Lemma. D is open.

Proof. Let (t0, p) ∈ D. Then t+(p) > t0, so we can choose t1 ∈ (t0, t+(p)). By the previous Lemma there is a δ > 0 such that t+(q) > t1 for all q ∈ Bδ(p). Hence [0, t1] × Bδ(p) ⊂ D. Since [0, t1] × Bδ(p) is a neighborhood of (t0, p), it follows that (t0, p) is an interior point of D. Since (t0, p) was an arbitrary point in D, we have shown that D is open. ////

4.1.4. Lemma. φ : D → O is continuous.

Proof. Let (t0, p) ∈ D be given, and choose t1 ∈ (t0, t+(p)).

Choose r > 0 and δ > 0 as in the proof of Lemma 4.1.2 (with t0 replaced by t1). Then Gronwall’s argument implies that for all t ∈ [0, t1] and all q ∈ Bδ(p)

‖φt(q) − φt(p)‖ ≤ e^{Lt1}‖p − q‖,

and thus

‖φt(q) − φt0(p)‖ ≤ e^{Lt1}‖p − q‖ + ‖φt(p) − φt0(p)‖.

Let ε > 0 be given. Shrink δ if necessary so that δ ≤ (1/4) e^{−Lt1} ε. Also choose δ′ > 0 so small that ‖φt(p) − φt0(p)‖ < ε/4 whenever |t − t0| < δ′.

If ‖p − q‖ < δ and |t − t0| < δ′, we then get

‖φt(q) − φt0(p)‖ ≤ e^{Lt1}‖p − q‖ + ‖φt(p) − φt0(p)‖ < ε/4 + ε/4 < ε.

This shows that φ is indeed continuous at (t0, p). ////


4.1.5. Lemma. If (s, p) ∈ D and (t, φs(p)) ∈ D, then (t + s, p) ∈ D and

φt+s(p) = φt(φs(p)).

Proof. Since (s, p) ∈ D there is a solution x : [0, s] → O of x′ = f(x) with x(0) = p. We set q := φs(p) = x(s).

Since (t, q) ∈ D there also is a solution y : [0, t] → O of y′ = f(y) with y(0) = q.

The function z : [0, t + s] → O defined by

z(τ) := x(τ) for 0 ≤ τ ≤ s,   z(τ) := y(τ − s) for s ≤ τ ≤ s + t,

is a solution of z′ = f(z) with z(0) = p.

It follows that (t + s, p) ∈ D and that φt+s(p) = φt(φs(p)). ////

4.2. Continuous dependence on parameters


CHAPTER 5

Linear systems

5.1. The matrix exponential

Let A be a real n × n matrix, and consider the system of linear differential equations

(5.1)   x′(t) = Ax(t),   x(0) = p,

for a given initial value p ∈ R^n. This equation is of the form x′ = f(x), where f : R^n → R^n is given by f(x) = Ax. The right hand side f(x) of the equation is Lipschitz continuous, because

‖f(x) − f(y)‖ = ‖Ax − Ay‖ = ‖A(x − y)‖ ≤ ‖A‖ ‖x − y‖.

The matrix norm ‖A‖ is therefore a Lipschitz constant for f.

It follows that (5.1) has a unique solution that is defined for all t ∈ R.

5.1.1. Definition. For any p ∈ R^n and any n × n matrix A one defines

e^{tA} p = x(t),

where x : R → R^n is the solution of (5.1).

5.1.2. Lemma. The map p ↦ e^{tA}p is linear.

Proof. Let p, q ∈ R^n be given and define u(t) = e^{tA}p and v(t) = e^{tA}q. Then

u′ = Au, u(0) = p,   v′ = Av, v(0) = q.

It follows that for any given α, β ∈ R the function w(t) = αu(t) + βv(t) satisfies

w′ = αu′ + βv′ = αAu + βAv = A(αu + βv) = Aw,

and also

w(0) = αu(0) + βv(0) = αp + βq.

Hence

e^{tA}(αp + βq) = w(t) = αu(t) + βv(t) = α e^{tA}p + β e^{tA}q.

Therefore p ↦ e^{tA}p is linear. ////


5.1.3. The series for e^{tA}. One has, for all t ∈ R,

e^{tA} = I + tA + (t^2/2!) A^2 + (t^3/3!) A^3 + (t^4/4!) A^4 + · · ·

For any T > 0 the series converges uniformly for |t| ≤ T.

Proof. Apply Picard iteration to the differential equation x′ = Ax with x(0) = p. One finds that the kth function obtained in the Picard iteration is

xk(t) = p + ∫_0^t A x_{k−1}(τ) dτ = p + tAp + · · · + (t^k/k!) A^k p.

Picard’s existence proof shows that this sequence of functions converges as k → ∞, and that the convergence is uniform on bounded time intervals. ////
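A small numerical sketch (mine, not from the notes) comparing the partial sums of this series with scipy's built-in matrix exponential; the matrix A below is an arbitrary example:

```python
# Partial sums of I + tA + t^2 A^2/2! + ...  versus scipy.linalg.expm.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t = 0.7

S, term = np.eye(2), np.eye(2)
for k in range(1, 30):
    term = term @ (t * A) / k      # term is now t^k A^k / k!
    S += term

print(np.max(np.abs(S - expm(t * A))))   # ~ machine precision
```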

5.1.4. e^{tA} when A is diagonal. If A is a diagonal matrix, then e^{tA} is obtained by exponentiating the diagonal entries:

A = diag(λ1, λ2, . . . , λn)   =⇒   e^{tA} = diag(e^{λ1 t}, e^{λ2 t}, . . . , e^{λn t}).

In particular, if I is the identity matrix, then

e^{tI} = e^t I.

5.2. Properties of the exponential

5.2.1. Theorem. Let A, B be real n × n matrices.

(a) For any t, s ∈ R one has e^{(t+s)A} = e^{tA} e^{sA}.
(b) For all t ∈ R the matrix e^{tA} is invertible, and (e^{tA})^{−1} = e^{−tA}.
(c) If AB = BA then e^{tA} B = B e^{tA}.
(d) A e^{tA} = e^{tA} A.
(e) If AB = BA then e^{t(A+B)} = e^{tA} e^{tB}.

Proof. (a) is a special case of the flow property φt+s = φt ◦ φs.

(b) follows from (a) by setting s = −t.

For (c) consider for any p ∈ R^n the function x(t) = B e^{tA} p. It satisfies

x′(t) = BA e^{tA} p = AB e^{tA} p = A x(t)   and   x(0) = Bp.

By definition this implies that x(t) = e^{tA} Bp. Thus we have shown that e^{tA} Bp = B e^{tA} p for all p ∈ R^n, which implies e^{tA} B = B e^{tA}.

(d) follows from (c) with B = A.

To prove (e) one applies the same idea to x(t) = e^{−tA} e^{t(A+B)} p for arbitrary p ∈ R^n. Differentiation reveals that

x′(t) = e^{−tA}(−A) e^{t(A+B)} p + e^{−tA}(A + B) e^{t(A+B)} p
      = e^{−tA} B e^{t(A+B)} p
      = B e^{−tA} e^{t(A+B)} p      (by (c), since AB = BA)
      = B x(t).

Thus x(t) = e^{tB} x(0) = e^{tB} p. By the definition of x(t) we see that for all t ∈ R and p ∈ R^n

e^{tB} p = e^{−tA} e^{t(A+B)} p,

which implies

e^{tA} e^{tB} = e^{t(A+B)}. ////

5.2.2. Kamke’s Theorem. If a_{ij} are the entries of the real n × n matrix A, then

( ∀t ≥ 0 : e^{tA} ≥ 0 )   ⇐⇒   ( ∀i ≠ j : a_{ij} ≥ 0 ).

Here we say that a real matrix B satisfies B ≥ 0 if all entries of B are nonnegative, i.e. B ≥ 0 means, by definition, that b_{ij} ≥ 0 for all i, j.

Proof. If A ≥ 0, then the series for e^{tA} shows that e^{tA} ≥ 0 for all t ≥ 0.

In general, if only the off-diagonal entries of A are known to be nonnegative, choose m ∈ R so that mI + A ≥ 0. Then, by property (e) above,

e^{tA} = e^{t(−mI + mI + A)} = e^{−mtI} e^{t(mI+A)}.

For the first factor we have e^{−mtI} = e^{−mt} I ≥ 0, and for the second factor we have mI + A ≥ 0, so e^{t(mI+A)} ≥ 0. It follows that e^{−mtI} e^{t(mI+A)} ≥ 0, so that e^{tA} ≥ 0 for all t ≥ 0 whenever the off-diagonal entries of A are nonnegative.

Conversely, if e^{tA} ≥ 0 for all t ≥ 0, then for i ≠ j the series gives (e^{tA})_{ij} = t a_{ij} + O(t^2) as t → 0+, so dividing by t > 0 and letting t → 0 yields a_{ij} ≥ 0. ////

5.3. e^{tA} when there are complex eigenvalues

5.3.1. Complexification. Vectors in C^n are of the form

w = u + iv,   u, v ∈ R^n,

where u and v are the real and imaginary parts of w. The complex conjugate w̄ is defined by

w = u + iv   ⇐⇒   w̄ = u − iv,

and satisfies

Re w = u = (w + w̄)/2,   Im w = v = (w − w̄)/(2i).

If A is a real n × n matrix, then A defines a linear map A : R^n → R^n. This map also extends to a map A : C^n → C^n by A(u + iv) = Au + iAv. The extension is complex linear, meaning that for any complex number α one has A(αw) = αAw.


5.3.2. Properties of complex eigenvalues. If λ = α + iβ is a complex eigenvalue of the real matrix A, and if u + iv is a corresponding complex eigenvector (where α, β ∈ R and u, v ∈ R^n), then

A(u + iv) = (α + iβ)(u + iv)   =⇒   A(u − iv) = (α − iβ)(u − iv).

Thus we see that if λ ∈ C is an eigenvalue of a real matrix A, then λ̄ is also an eigenvalue of A.

Multiplying out

(α + iβ)(u + iv) = αu − βv + i(βu + αv)

and comparing this with

A(u + iv) = Au + iAv,

we see that the subspace of R^n spanned by {u, v} is invariant under A, and that the matrix of A on this subspace with respect to the basis {u, v} is

mat[A; {u, v}] =
[  α   β ]
[ −β   α ].
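For reference (the notes as extracted stop here), one can record what this block gives for the exponential. Writing mat[A; {u, v}] = αI + βJ with J^2 = −I, one finds:

```latex
\[
   \exp\!\Bigl( t \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix} \Bigr)
   \;=\;
   e^{\alpha t}
   \begin{pmatrix} \cos\beta t & \sin\beta t \\ -\sin\beta t & \cos\beta t \end{pmatrix},
\]
% since e^{t(\alpha I + \beta J)} = e^{\alpha t} e^{\beta t J} and, using J^2 = -I in the
% exponential series, e^{\beta t J} = (\cos\beta t)\, I + (\sin\beta t)\, J.
```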


CHAPTER 6

Topological dynamics

6.1. α and ω-limit sets

6.2. Attractors

6.3. Notions of stability

6.4. Lyapunov functions



CHAPTER 7

Differentiability of the flow

Consider the differential equation

(7.1)   x′(t) = f(x(t)),

where f : O → R^n is continuously differentiable.

7.1. Linearized equation

7.1.1. Theorem. The flow φ : D → O is a continuously differentiable map. In particular, for p ∈ O and ξ ∈ R^n the derivative η(t) = d(φt)_p · ξ exists and is the solution of the linearized equation

(7.2)   η′(t) = df_{φt(p)} · η(t),   η(0) = ξ.
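A numerical sanity check of (7.2) (mine, not from the notes) for the pendulum x′ = y, y′ = −sin x: a centered finite difference of the flow in the direction ξ is compared with the solution of the linearized equation; scipy is assumed to be available:

```python
# Compare d(phi_t)_p . xi, approximated by finite differences of the flow,
# with the solution eta of the linearized equation (7.2).
import numpy as np
from scipy.integrate import solve_ivp

def f(t, u):                       # pendulum: x' = y, y' = -sin x
    x, y = u
    return [y, -np.sin(x)]

def f_var(t, w):                   # flow together with the linearization
    x, y, e1, e2 = w               # df_(x,y) = [[0, 1], [-cos x, 0]]
    return [y, -np.sin(x), e2, -np.cos(x) * e1]

p, xi, T, h = np.array([1.0, 0.0]), np.array([1.0, 0.0]), 3.0, 1e-6
flow = lambda q: solve_ivp(f, (0, T), q, rtol=1e-11, atol=1e-13).y[:, -1]

fd  = (flow(p + h * xi) - flow(p - h * xi)) / (2 * h)
eta = solve_ivp(f_var, (0, T), [*p, *xi], rtol=1e-11, atol=1e-13).y[2:, -1]
print(np.max(np.abs(fd - eta)))    # small (finite-difference error ~ h^2)
```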


CHAPTER 8

Appendix

.1. Norms on vector spaces

.1.1. Definition. If V is a real vector space, then a norm on V is a function ‖·‖ from V to [0, ∞) with the following properties:

∀x ∈ V : ‖x‖ ≥ 0, and ‖x‖ = 0 =⇒ x = 0;
∀x ∈ V, t ∈ R : ‖tx‖ = |t| ‖x‖;
∀x, y ∈ V : ‖x + y‖ ≤ ‖x‖ + ‖y‖.

.1.2. Examples of norms. The Euclidean norm on R^n is

‖x‖_2 = √(x1^2 + · · · + xn^2),   where x = (x1, . . . , xn).

It is the special case p = 2 of the ℓ^p norms which, for 1 ≤ p < ∞, are defined by

‖x‖_p = ( |x1|^p + · · · + |xn|^p )^{1/p}.

Other special cases are the ℓ^1 or “sum norm”

‖x‖_1 = |x1| + · · · + |xn|

and the “maximum norm”

‖x‖_∞ = ‖x‖_max = max{ |x1|, . . . , |xn| }.
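For illustration (not part of the notes), these norms are all available in numpy:

```python
# The l^1, l^2 (Euclidean) and maximum norms of the same vector.
import numpy as np

x = np.array([3.0, -4.0, 12.0])
print(np.linalg.norm(x, 1))        # 19.0  (sum norm)
print(np.linalg.norm(x, 2))        # 13.0  (Euclidean norm)
print(np.linalg.norm(x, np.inf))   # 12.0  (maximum norm)
```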

.1.3. Theorem. For every norm ‖x‖ on R^N there exist constants c, C > 0 such that for all x ∈ R^N one has

c‖x‖_2 ≤ ‖x‖ ≤ C‖x‖_2.

Proof. The given norm defines a function p(x) = ‖x‖. If e1, . . . , eN is the standard basis for R^N, then any vector x is of the form x = ∑_i x_i e_i, and one has

p(x) = p(x1 e1 + · · · + xN eN)
     ≤ |x1| p(e1) + · · · + |xN| p(eN)
     ≤ C √(x1^2 + · · · + xN^2)      (Cauchy–Schwarz)
     = C ‖x‖_2,

where C = √( p(e1)^2 + · · · + p(eN)^2 ).

This implies that the norm p is a continuous function on R^N, since

|p(x) − p(y)| ≤ p(x − y) ≤ C‖x − y‖_2.


Thus x ↦ p(x) is a continuous function that is strictly positive on the unit sphere S = {x ∈ R^N : ‖x‖_2 = 1}. This sphere is compact, and hence p is bounded from below on S by a positive constant, i.e.

inf_{x ∈ S} p(x) = c > 0.

Consequently, for every x ≠ 0 we have

p(x) = ‖x‖_2 · p( x/‖x‖_2 ) ≥ c‖x‖_2,

and the inequality trivially holds for x = 0 as well. The norms p(x) and ‖x‖_2 are therefore equivalent. /////

.1.4. Operator norms. If T : V → V is a linear map on a finite dimensional normed vector space V (for instance V = R^N with any norm), then there is a constant C ∈ R such that ‖Tx‖ ≤ C‖x‖ for all x ∈ V. By definition the best such constant is the operator norm of T:

‖T‖_L = sup_{x ≠ 0} ‖Tx‖ / ‖x‖.