
(c) Using your favorite computer plotting program,¹² plot several solutions of the given equation, including y(t) = t/5.

1.6. Existence and uniqueness of solutions

So far we have tacitly worked under the assumption that ordinary differential equations have solutions and that, if we also specify an initial condition (i.e., if we have an initial-value problem), the solution is unique. But how do we know this is true? For instance, how can we be so sure that the monstrosity of an equation

y' = e^{\arctan y^3} \sin(\ln(1 + y^2)) + y^{1,000,000}

admits any solutions whatsoever? If we want differential equations to be of any use in modeling physical phenomena, we better make sure that important equations do have unique solutions for each initial condition! Otherwise, we cannot hope to predict the motion of stars and planets, trajectories of rockets, behavior of electrical circuits, etc.

What exactly do we even mean by existence and uniqueness of solutions of differential equations? Let's look at a couple of examples.

1.16. Example. Consider the following differential equation:

\frac{dy}{dt} = \begin{cases} 1 & \text{if } y < 0, \\ -1 & \text{if } y \geq 0. \end{cases}

We claim that there exists no solution satisfying y(0) = 0. Assume that y(t) were to be such a solution. Since y'(0) = −1, y(t) is supposed to start off decreasing. However, as soon as it does that, y(t) (for small t > 0) becomes negative and the equation demands that if y(t) < 0, then y'(t) = 1, so the solution needs to increase, not decrease. These two requirements are obviously incompatible, so y(t) does not exist.

If this slightly hand-waving argument does not convince you, here is a more rigorous one. Suppose y(t) is a solution satisfying y(0) = 0. Since y(t) must be differentiable (otherwise, how can it be a solution to a differential equation?!), by Darboux's theorem from Calculus, its derivative y' has the Intermediate Value Property (IntVP).¹³ Thus y'(t) cannot take both −1 and 1 as values, since if that were the case, every number between −1 and 1 would be a value of y'. Therefore, since y'(0) = −1, it follows that y'(t) must always equal −1. Thus y'(t) = −1, for all t. Integrating and using y(0) = 0, we get y(t) = −t. However, this function is not a solution: for t > 0 we have y(t) = −t < 0, so the equation requires y'(t) = 1 there, whereas y'(t) = −1. Contradiction! Therefore, the equation does not admit a solution satisfying y(0) = 0. □
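A quick numerical experiment echoes this. The following is a minimal sketch (our own illustration, not from the text) of forward Euler applied to this equation starting at y(0) = 0: the computed values just chatter between 0 and a small negative number, reflecting the fact that no genuine solution through 0 exists.

    # Forward Euler for dy/dt = 1 if y < 0, -1 if y >= 0, with y(0) = 0 and step h = 0.1.
    def rhs(y):
        return 1.0 if y < 0 else -1.0

    y, h = 0.0, 0.1
    for step in range(6):
        print(step, round(y, 2))   # y alternates: 0.0, -0.1, 0.0, -0.1, ...
        y += h * rhs(y)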

1.17. Example. Consider the initial-value problem

y' = 3y^{2/3},  y(0) = 0.

It is easy to check that y(t) = t³ is a solution. For any non-negative real number a define the following function:

y_a(t) = \begin{cases} 0 & \text{if } t \le a, \\ (t - a)^3 & \text{if } t > a. \end{cases}

We claim that y_a is also a solution to the above IVP. (Compare with Example 1.4.) Indeed, y_a(0) = 0 and it is not hard to see that y_a'(t) = 3 y_a(t)^{2/3} if t ≠ a (both the zero function and the function (t − a)³ satisfy the equation). Using the limit definition of derivative we can also verify that y_a'(a) = 0, so y_a satisfies the equation also when t = a. Thus there are infinitely many solutions to this IVP, one for each a ≥ 0.

¹²For instance, on the Mac you may want to use the Grapher.
¹³A function f has the IntVP if for every two distinct values of f, any number in between is also a value of f. We call b a value of f if b = f(a) for some a in the domain of f. Continuous functions have the IntVP but a function which has the IntVP is not necessarily continuous.
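Returning to Example 1.17: as a quick sanity check (our own illustration, not from the text), one can verify numerically that y_a really does satisfy y' = 3y^{2/3}, by comparing a difference quotient of y_a with the right-hand side at sample points; the parameter value a = 1 below is an arbitrary choice.

    # Check that y_a(t) = 0 for t <= a and (t - a)^3 for t > a satisfies y' = 3*y^(2/3).
    a, h = 1.0, 1e-6

    def y_a(t):
        return 0.0 if t <= a else (t - a) ** 3

    for t in (0.5, 1.0, 1.5, 2.0):
        deriv = (y_a(t + h) - y_a(t - h)) / (2 * h)   # central difference quotient
        rhs = 3 * y_a(t) ** (2 / 3)                   # right-hand side of the ODE
        print(t, round(deriv, 4), round(rhs, 4))      # the two columns agree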

From the previous two examples we can conclude that differential equations do not always have solutions and solutions to IVP's are not always unique! But observe that the right-hand side of the equation in Example 1.16 is discontinuous, whereas the right-hand side in Example 1.17, although continuous, is not everywhere differentiable (the function f(y) = 3y^{2/3} fails to be differentiable at zero). Could that be the cause of the non-existence and non-uniqueness of solutions? It turns out that the answer to both questions is yes. This is due to the following fundamental result.

1.18. Theorem (Existence and Uniqueness of Solutions). Consider the differential equation

y' = f(t, y).

(a) Assume that f is continuous on the rectangle

Q = (a, b) × (c, d) = {(t, y) : a < t < b, c < y < d}

and let (the initial point) (t₀, y₀) be in Q. Then there exists ε > 0 and a solution y(t) defined for t₀ − ε < t < t₀ + ε which satisfies the initial condition y(t₀) = y₀.

(b) If both f and its partial derivatives with respect to t and y are continuous on Q, then for every (t₀, y₀) ∈ Q the solution satisfying y(t₀) = y₀ is unique. That is, if both y₁(t) and y₂(t) are solutions satisfying the given initial condition, then

y₁(t) = y₂(t),

for all t for which both sides are defined.


Figure 1.8. Solution satisfying the initial condition y(t₀) = y₀.

The moral of this theorem is: if the right-hand side of the differential equation is sufficiently nice, then solutions exist and are unique (for any given initial value). Part (a) says that the solution may happen to be defined only for a short interval of time, namely (t₀ − ε, t₀ + ε). See Figure 1.8.


So even the nasty equation

(1.22)  y' = e^{\arctan y^3} \sin(\ln(1 + y^2)) + y^{1,000,000}

admits solutions, and for any initial value y₀ there is a unique solution satisfying y(0) = y₀! The reason is that f(y) = e^{\arctan y^3} \sin(\ln(1 + y^2)) + y^{1,000,000}, although complicated, is a "nice" function: it has continuous partials with respect to both variables (of course, since it doesn't depend on t, its t-partial is just zero).

Think about it: we have no idea how to actually solve this equation, but due to Theorem 1.18 we do know that its solutions exist. In what sense do they exist if we cannot construct them? That is a question for philosophers of mathematics, which we do not have time to address here.¹⁴ However, we can certainly approximate them, which we will learn how to do in Section 1.9.
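For instance, a general-purpose numerical solver already gives a very good approximation. The following is a minimal sketch (our own illustration, not from the text), assuming SciPy is available; it approximates the solution of (1.22) for the arbitrarily chosen initial value y(0) = 0.1 on a short time interval.

    # Numerically approximate the solution of (1.22) with y(0) = 0.1.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y):
        # Right-hand side of (1.22), as written above; it does not depend on t.
        return np.exp(np.arctan(y**3)) * np.sin(np.log(1 + y**2)) + y**1_000_000

    sol = solve_ivp(rhs, (0.0, 1.0), [0.1], max_step=0.01)
    print(sol.t[-1], sol.y[0, -1])   # approximate value of the solution at t = 1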

Here are some important consequences of the Existence and Uniqueness Theorem. Suppose that f satisfies the assumptions of Theorem 1.18. Then:

(1) The graphs of distinct solutions never cross. That is, if y₁(t) and y₂(t) are two solutions and y₁(t) = y₂(t) for some t, then y₁(t) = y₂(t) for all t for which both sides are defined. In other words, if two solutions are in the same place at the same time, then they are the same solution.


Figure 1.9. Solutions can’t cross, so this scenario is impossible.

(2) Consider the (autonomous) equation y' = f(y) and suppose that f(y₀) = 0. Let y(t) be a solution such that y(t₀) = y₀ for some t₀. Then by (1), y(t) = y₀ for all t, i.e., y is an equilibrium solution.

(3) If y₀(t), y₁(t), and y₂(t) are solutions and y₀(t₀) < y₁(t₀) < y₂(t₀) for some t₀, then y₀(t) < y₁(t) < y₂(t) for all t. That is, if y₁(t) lies between two solutions at one instant of time, then it is trapped between these solutions for all time.

Here are some examples of applications of the Existence and Uniqueness Theorem.

1.19. Example. Find the unique solution to (1.22) satisfying y(0) = 0.

Solution: Let f(y) = e^{\arctan y^3} \sin(\ln(1 + y^2)) + y^{1,000,000}. It is easy to check that f(0) = 0 (indeed, e^{arctan 0} · sin(ln 1) + 0 = 1 · 0 + 0 = 0), so the zero function y_*(t) ≡ 0 is an equilibrium solution. Since y_*(0) = 0, by the Existence and Uniqueness Theorem y_*(t) is the (unique) solution satisfying y(0) = 0. □

¹⁴There is a point of view called Mathematical Platonism, which (according to the Stanford Encyclopedia of Philosophy) is "the metaphysical view that there are abstract mathematical objects whose existence is independent of us and our language, thought, and practices. (...) Mathematical truths are therefore discovered, not invented."


1.20. Example. What can be said about the solution to the equation

y' = (y + 1) y (y − 2)

satisfying y(0) = 1?

Solution: Let f(y) = (y + 1) y (y − 2). This function does not depend on t and f'(y) is clearly continuous (it's a polynomial), so the equation admits unique solutions. Since f(−1) = f(0) = f(2) = 0, the equilibrium solutions are y₋(t) = −1, y₀(t) = 0 and y₊(t) = 2. Consider the solution y(t) satisfying y(0) = 1. Since y₀(0) < y(0) < y₊(0), Remark (3) above implies that y(t) is trapped between 0 and 2 forever. Moreover, since f is negative on (0, 2), we have y'(t) = f(y(t)) < 0, so the solution is decreasing. Assume that y(t) is defined for all positive t. Since y(t) is bounded and decreasing, we know from Calculus that y(t) must approach a limit as t → ∞. Let's call that limit L:

\lim_{t\to\infty} y(t) = L.

What value might L have? It is clear that 0 ≤ L < 2. Since y(t) approaches a constant value in a decreasing manner, its derivative must approach zero: y'(t) → 0 as t → ∞. So:

0 = \lim_{t\to\infty} y'(t)
  = \lim_{t\to\infty} f(y(t))
  = f\Big( \lim_{t\to\infty} y(t) \Big)
  = f(L).

Here we used the fact that y(t) is a solution (second line) and that f is continuous (third line). Thus f(L) = 0 and since 0 ≤ L < 2, we must have L = 0. In conclusion: the unique solution satisfying y(0) = 1 lies in the interval (0, 2) for all time and approaches the equilibrium solution 0 as t → ∞. In a similar way it can be shown that y(t) → 2 as t → −∞. See Figure 1.10. □

Figure 1.10. The graphs of the equilibrium solutions y₋(t) = −1, y₀(t) = 0 and y₊(t) = 2, and the solution y(t) satisfying y(0) = 1.
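The qualitative picture is easy to confirm numerically. Here is a minimal sketch (our own illustration, not from the text), assuming SciPy is available: it integrates the equation from y(0) = 1 and prints values that decrease toward 0 while remaining inside (0, 2), as the argument above predicts.

    # Integrate y' = (y + 1) y (y - 2) with y(0) = 1 and sample the solution.
    from scipy.integrate import solve_ivp

    f = lambda t, y: (y + 1) * y * (y - 2)

    sol = solve_ivp(f, (0.0, 5.0), [1.0], dense_output=True, rtol=1e-8, atol=1e-10)
    for t in (0.0, 1.0, 2.0, 3.0, 5.0):
        print(t, float(sol.sol(t)[0]))   # values decrease toward 0 but stay in (0, 2)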

Remark. It can be shown in a similar way as in Example 1.20 that solutions to first order autonomous differential equations y' = f(y) (in one variable) can converge only to equilibrium solutions.


Picard's method of successive approximations. The Existence and Uniqueness Theorem is an extremely general (hence powerful and important) theorem, so it is natural to ask: how in the world can one prove that a differential equation has solutions when we don't even know its right-hand side? Here we sketch the classical approach to this question, called Picard's method of successive approximations.¹⁵

Suppose we are given an IVP

(1.23)  y' = f(t, y),  y(0) = y₀,

and assume that f satisfies the assumptions of the Existence and Uniqueness Theorem. Denote the unique solution to the IVP by y(t). At this point, we know nothing about y except that it exists in some abstract sense (living in some strange Platonic universe). However, we can construct a sequence of approximate solutions y₀(t), y₁(t), y₂(t), . . . such that y_k(t) converges to y(t), as k → ∞. This is done in the following way.

Since y(t) is a solution defined on some interval (−ε, ε), we have

y'(s) = f(s, y(s)),

for all −ε < s < ε. Let us integrate this equation with respect to s from 0 to t, where −ε < t < ε. We obtain

y(t) - y_0 = \int_0^t f(s, y(s)) \, ds.

On the left-hand side we used the Fundamental Theorem of Calculus and the initial condition y(0) = y₀. Moving y₀ to the right-hand side, we see that y(t) satisfies the following integral equation

(1.24)  y(t) = y_0 + \int_0^t f(s, y(s)) \, ds.

Conversely, if y(t) satisfies (1.24), then it satisfies the IVP (1.23). Indeed, differentiating both sides of (1.24) with respect to t and using the Fundamental Theorem of Calculus, we obtain y'(t) = f(t, y(t)). In addition, setting t = 0 in (1.24) gives y(0) = y₀.

In other words, equations (1.23) and (1.24) are equivalent. That means that instead of trying to solve the given differential equation, we can solve the integral equation (1.24), which turns out to be better suited for constructing successive approximations.

We define our sequence y₀(t), y₁(t), y₂(t), . . . of successive approximations recursively (or inductively) as follows:

y_0(t) \equiv y_0, \qquad y_{k+1}(t) = y_0 + \int_0^t f(s, y_k(s)) \, ds.

Thus we first compute y₀(t), which is just a constant function equal to the initial value y₀, then use the above formula to compute (at least theoretically: the integral may be hard) y₁(t), then we plug y₁(t) back into the recursive formula to find y₂(t), and so on. It turns out that this works, namely, y_k(t) converges to the true solution, as k → ∞, at least for small enough t! Let's see how this plays out in a simple example.

1.21. Example. Let us apply Picard's method to the IVP

y' = y,  y(0) = 1.

¹⁵Émile Picard (1856–1941) was a prominent French mathematician.


Note that f(t, y) = y. We obtain:

y_0(t) = 1
y_1(t) = 1 + \int_0^t 1 \, ds = 1 + t
y_2(t) = 1 + \int_0^t (1 + s) \, ds = 1 + t + \frac{t^2}{2}
y_3(t) = 1 + \int_0^t \Big( 1 + s + \frac{s^2}{2} \Big) \, ds = 1 + t + \frac{t^2}{2} + \frac{t^3}{6}
⋮

What pattern do you see? You should be able to recognize that y₀(t), y₁(t), y₂(t), and y₃(t) are the truncations of the Maclaurin series of e^t. In general, it is not too hard to show that

y_k(t) = 1 + t + \frac{t^2}{2!} + \cdots + \frac{t^k}{k!},

where k! = 1 · 2 · 3 · . . . · k. So as k → ∞,

y_k(t) \to \sum_{n=0}^{\infty} \frac{t^n}{n!}.

You learned in calculus that the sum of this infinite series is y(t) = e^t, which we know is the unique solution to the above IVP. □
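The iteration is also easy to carry out with a computer algebra system. Here is a minimal sketch (our own illustration, not from the text), assuming SymPy is available; it reproduces the approximations y₁(t), y₂(t), y₃(t), . . . for the IVP y' = y, y(0) = 1.

    # Picard iteration y_{k+1}(t) = y0 + integral from 0 to t of f(s, y_k(s)) ds,
    # carried out symbolically for f(t, y) = y and y(0) = 1.
    import sympy as sp

    t, s = sp.symbols('t s')
    f = lambda s, y: y          # right-hand side of y' = y
    y0 = sp.Integer(1)          # initial value

    yk = y0                     # y_0(t) is the constant function 1
    for k in range(1, 5):
        yk = y0 + sp.integrate(f(s, yk.subs(t, s)), (s, 0, t))
        print(k, sp.expand(yk)) # k-th approximation: the degree-k truncation of e**t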

Completeness. In applications it is important to know if a model has solutions defined for all future (and sometimes past) time t. For if a differential equation has a solution y(t) defined only up to some finite time t₀, it can undergo the phenomenon called finite-time blow-up. We already had two instances of that phenomenon: see Examples 1.3 and 1.10. Recall, e.g., that in the first example, the unique solution to the IVP y' = 1 + y², y(0) = 0 is y(t) = tan t, which is defined only on the interval (−π/2, π/2).
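Numerically, the blow-up is plainly visible. The sketch below (our own illustration, not from the text) assumes SciPy is available and integrates y' = 1 + y², y(0) = 0 until the solution exceeds 10⁶; the stopping time it reports is just below π/2, the right endpoint of the interval of existence.

    # Detect the finite-time blow-up of y' = 1 + y^2, y(0) = 0 (exact solution: tan t).
    import numpy as np
    from scipy.integrate import solve_ivp

    blow_up = lambda t, y: y[0] - 1e6   # event: stop once y(t) reaches 10**6
    blow_up.terminal = True

    sol = solve_ivp(lambda t, y: 1 + y**2, (0.0, 2.0), [0.0],
                    events=blow_up, rtol=1e-8)
    print(sol.t[-1], np.pi / 2)         # stopping time vs. pi/2: nearly equal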

If all its solutions are defined for all time, the equation is called complete. We saw that even some very simple equations are not complete. It would clearly be useful to know under what conditions a given equation is complete. This is a non-trivial question so we only mention the following basic results.

1.22. Theorem. (a) If y(t) is a solution to y' = f(y) and y(t) is bounded (i.e., there exist numbers a, b such that a ≤ y(t) ≤ b for all t), then y(t) is defined for all t.

(b) Let f be a differentiable function and assume its derivative f' is bounded (i.e., there is a constant M > 0 such that |f'(y)| ≤ M, for all y). Then the differential equation y' = f(y) is complete.

Note that in the above example f(y) = 1 + y², so f'(y) = 2y, which is not bounded (it goes to infinity as y → ∞). However, the theorem guarantees that the equation

y' = (arctan y) sin y

is complete. (Check that!)
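A sketch of that check (our own worked step, filling in what the text leaves to the reader): by the product rule,

\frac{d}{dy}\big[ (\arctan y) \sin y \big] = \frac{\sin y}{1 + y^2} + (\arctan y) \cos y,

and since |sin y| ≤ 1, 1/(1 + y²) ≤ 1, |arctan y| < π/2 and |cos y| ≤ 1, the derivative is bounded by 1 + π/2 for all y; Theorem 1.22(b) then gives completeness.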

Exercises


In Exercises 1–3 you are given a differential equation

y' = f(t, y),

where f satisfies the hypotheses of the Existence and Uniqueness Theorem. You are also given several solutions of this equation and an initial condition. The question is: using the (Existence and) Uniqueness Theorem, what can you say about the solution y(t) satisfying the given initial condition?

1. y₁(t) = −1 and y₂(t) = 2, for all t, are solutions, initial condition y(0) = 0.

2. y₁(t) = t and y₂(t) = t + 2e^{−t}, for all t, are solutions, initial condition y(0) = 1.

3. y₁(t) = 2/(1 + t²), y₂(t) = 0, and y₃(t) = −1, for all t, are solutions, initial condition y(0) = 1.

In Exercises 4–7, you are given an initial condition for the differential equation

y' = y³ + 3y² − 10y.

What does the Existence and Uniqueness Theorem tell us about the solution y(t) satisfying the given initial condition?

4. y(0) = 1
5. y(0) = 2
6. y(0) = −1
7. y(0) = −2014.

8. (a) Solve the IVP

y' = y³,  y(0) = 1/2.

(b) Find the domain of definition of the solution.
(c) Describe what happens to the solution as it approaches the limits of its domain. Why can't the solution be extended to a larger set of values of t?

9. Consider the differential equation

y' = 5y^{4/5}.

(a) Show that y(t) = 0, for all t, is an equilibrium solution.
(b) Find a different solution satisfying the initial condition y(0) = 0. [Hint: You can use language like "y(t) = · · · when t ≤ 0 and y(t) = · · · when t > 0," etc. See also Example 1.17.]
(c) Why doesn't this contradict the Uniqueness Theorem?

1.7. The phase line and classification of equilibria

In this section we focus on qualitative analysis of differential equations. This approach was pioneered by the famous French mathematician Henri Poincaré (1854–1912), one of the leading mathematicians and scientists of his time.¹⁶ Poincaré studied the Solar system hoping to show that it is stable (which among other things would mean that the Moon would never crash into the Earth or fly off into space). Instead what he discovered was that the Solar system, modeled by

¹⁶Among his many accomplishments, Poincaré discovered the special theory of relativity independently of Einstein and more or less at the same time as him.