1. Introduction to Perturbation Theory & Asymptotic Expansions
Example 1.0.1. Consider
    x − 2 = ε cosh x    (1.1)
For ε ≠ 0 we cannot solve this in closed form. (Note: ε = 0 ⇒ x = 2.) The equation defines a function x : (−ε0, ε0) → R (some range of ε either side of 0). We might look for a solution of the form x = x0 + εx1 + ε²x2 + ⋯ and by substituting this into equation (1.1) we have
    x0 + εx1 + ε²x2 + ⋯ − 2 = ε cosh(x0 + εx1 + ⋯)
Now for ε = 0, x0 = 2, and so for suitably small ε
    2 + εx1 − 2 ≈ ε cosh(2 + εx1 + ⋯) ⇒ x1 ≈ cosh(2) ⇒ x(ε) = 2 + ε cosh(2) + ⋯
For example, if we set ε = 10⁻² we get x ≈ 2.037622..., where the exact solution is x = 2.039068...
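The numbers quoted above are easy to check: for small ε the map x ↦ 2 + ε cosh(x) is a contraction near x = 2, so fixed-point iteration recovers the exact root. A minimal sketch (the helper name is our own, not from the notes):

```python
import math

def solve_fixed_point(eps, tol=1e-12, max_iter=200):
    """Solve x - 2 = eps*cosh(x) by iterating x <- 2 + eps*cosh(x)."""
    x = 2.0
    for _ in range(max_iter):
        x_new = 2.0 + eps * math.cosh(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

eps = 1e-2
exact = solve_fixed_point(eps)          # the "exact" root, ~ 2.039068
first_order = 2.0 + eps * math.cosh(2)  # the expansion x = 2 + eps*cosh(2), ~ 2.037622
```

The gap between the two values is O(ε²), as the expansion predicts.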
1.1. “Landau” or “Order” Notation.
Definition 1.1.1. Let f and g be real functions defined on some open set containing zero, ie: 0 ∈ D ⊂ R. We say:
(1) f(x) = O(g(x)) as x → 0 ("big O") if ∃ K > 0 and ε > 0 such that (−ε, ε) ⊂ D and ∀x ∈ (−ε, ε), |f(x)| ≤ K|g(x)|
(2) f(x) = o(g(x)) as x → 0 ("little o") if lim_{x→0} f(x)/g(x) = 0
(3) f(x) ∼ g(x) as x → 0 (asymptotically equivalent) if lim_{x→0} f(x)/g(x) = 1
Remark 1.1.2.
(1) Could define these for x → x0 or x → ∞.
(2) Abuse of notation: for, say, sin x = x + O(x²), O(x²) should be an equivalence class, ie: sin x − x ∈ O(x²).
Lemma 1.1.3. If lim_{x→0} |f(x)|/|g(x)| = m < ∞ then f(x) = O(g(x)).

Proof. Fix ε > 0. By the definition of the limit there is a δ > 0 such that
    | |f(x)|/|g(x)| − m | < ε whenever 0 < |x| < δ
and then
    |f(x)|/|g(x)| < m + ε ⇒ |f(x)| < (m + ε)|g(x)|
But this is just the definition of O with m + ε for K. □
Example 1.1.4.
(i) x² = o(x) as x → 0 since x²/x → 0 as x → 0
(ii) 3x² = O(x²) since 3x²/x² → 3 as x → 0
(iii) x = o(|x|^{1/2}) as x → 0
(iv) 2x/(1 + x²) = o(1) since (2x/(1 + x²))/1 → 0 as x → 0
(v) 2x/(1 + x²) = O(x)
(vi) sin x = x + o(x) as x → 0 since lim_{x→0} (sin x − x)/x = lim_{x→0} (cos x − 1)/1 = 0
(vii) sin x = x + O(x³) since lim_{x→0} (sin x − x)/x³ = lim_{x→0} (cos x − 1)/3x² = lim_{x→0} (−sin x)/6x = lim_{x→0} (−cos x)/6 = −1/6
(viii) sin x ∼ x − x³/3! since lim_{x→0} sin x / (x − x³/3!) = 1
(ix) sin x = x − x³/3! + O(x⁴)
The definition of f′(x) in "o" notation is:
    f(x + h) = f(x) + f′(x)h + o(h) as h → 0
The Taylor series for f(x + h) is given by
    f(x + h) = f(x) + f′(x)h + ⋯ + f⁽ⁿ⁾(x)hⁿ/n! + o(hⁿ)
If the Taylor series is a convergent power series then o(hⁿ) can be replaced by O(h^{n+1}). For any convergent power series,
    Σ_{n=0}^∞ aₙxⁿ = Σ_{n=0}^N aₙxⁿ + O(x^{N+1})
Examples:
    √(1 − 2x) = 1 + (1/2)(−2x) + (1/2)(−1/2)(−2x)²/2! + O(x³) = 1 − x − x²/2 + O(x³)
    ln(1 + x) = x − x²/2 + O(x³)
    x² sin(1/x) = O(x²). It is important to note that this is "big O" with no limit: |x² sin(1/x)| ≤ 1 · x², though |x² sin(1/x)|/x² = |sin(1/x)|, which has no limit as x → 0.
1.2. The "Fundamental Theorem" of Perturbation Theory.

Theorem 1.2.1. If A0 + A1ε + ⋯ + A_N ε^N = O(ε^{N+1}) as ε → 0, then A0 = A1 = ⋯ = A_N = 0.

Proof. Suppose A0 + A1ε + ⋯ + A_N ε^N = O(ε^{N+1}) but not all A_k are zero. Let A_M be the first non-zero coefficient. Consider
    (A_M ε^M + A_{M+1} ε^{M+1} + ⋯ + A_N ε^N)/ε^{N+1} = (A_M + A_{M+1}ε + ⋯)/ε^{N+1−M} → ∞ as ε → 0
Then we have a contradiction with "big O". □
1.3. Perturbation Theory of Algebraic Equations.

Example 1.3.1. Consider x² − 3x + 2 + ε = 0. Assume the roots have the following expansion: x0 + εx1 + ε²x2 + O(ε³); then by substitution we get
    (x0 + εx1 + ε²x2 + ⋯)² − 3(x0 + εx1 + ε²x2 + ⋯) + 2 + ε = O(ε³)
    = (x0² − 3x0 + 2) + ε(2x0x1 − 3x1 + 1) + ε²(x1² + 2x0x2 − 3x2) = O(ε³)
Terms in ε⁰: x0² − 3x0 + 2 = 0 ⇒ x0 = 1 or x0 = 2
Terms in ε¹: 2x0x1 − 3x1 + 1 = 0; if x0 = 1 then x1 = 1, otherwise if x0 = 2 then x1 = −1
Terms in ε²: x1² + 2x0x2 − 3x2 = 0; if x0 = 1 (so x1 = 1) then x2 = 1, otherwise if x0 = 2 (so x1 = −1) then x2 = −1
∴ x = 1 + ε + ε² + O(ε³) or x = 2 − ε − ε² + O(ε³)
We can solve x² − 3x + 2 + ε = 0 directly to get x = (3 ± √(1 − 4ε))/2. Now √(1 − 4ε) = 1 − 2ε − 2ε² + O(ε³), and substituting this into (3 ± √(1 − 4ε))/2 we get
    x = (3 ± (1 − 2ε − 2ε²))/2 + O(ε³)
which is the same answer as above.
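Since the quadratic can be solved exactly, the O(ε³) claim is easy to verify numerically — a quick check, not part of the original notes:

```python
import math

eps = 0.01
# exact roots of x^2 - 3x + 2 + eps = 0
disc = math.sqrt(1 - 4 * eps)
exact_roots = sorted([(3 - disc) / 2, (3 + disc) / 2])
# three-term perturbation expansions derived above
approx_roots = sorted([1 + eps + eps**2, 2 - eps - eps**2])

errors = [abs(e - a) for e, a in zip(exact_roots, approx_roots)]
```

With ε = 0.01 the errors are of size a few times ε³ = 10⁻⁶, consistent with the O(ε³) remainder.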
Example 1.3.2 (Singular Perturbation).
Consider εx² − 2x + 1 = 0 and again assume there is an expansion x0 + εx1 + ε²x2 + O(ε³). We get for terms in ε⁰:
    −2x0 + 1 = 0 ⇒ x0 = 1/2
terms in ε¹:
    x0² − 2x1 = 0 ⇒ x1 = 1/8
terms in ε²:
    2x0x1 − 2x2 = 0 ⇒ x2 = 1/16
and so we have
    x = 1/2 + ε/8 + ε²/16 + O(ε³)
and this gives us one of the roots, but where is the second? The exact solution is given by:
    x = (1 ± √(1 − ε))/ε
and the other root should be
    x = 2/ε − 1/2 + O(ε)
Last time, in example 1.3.2, we did not find the other root of εx² − 2x + 1 = 0 using the expansion of form x0 + εx1 + ε²x2 + O(ε³).
If instead we try ω = εx, then ω²/ε − 2ω/ε + 1 = 0 ⇒ ω² − 2ω + ε = 0. This, we can assume in the usual way, has an expansion of the form ω0 + εω1 + ε²ω2 + O(ε³), and:
Terms in ε⁰:
    ω0² − 2ω0 = 0 ⇒ ω0 = 0 or 2
Terms in ε¹:
    2ω0ω1 − 2ω1 + 1 = 0 ⇒ ω1 = 1/2 (for ω0 = 0) or −1/2 (for ω0 = 2)
∴ x = ω/ε = 1/2 + O(ε) or 2/ε − 1/2 + O(ε)
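Both roots can be checked against the exact quadratic formula — a quick numerical confirmation, not part of the original notes:

```python
import math

eps = 0.01
disc = math.sqrt(1 - eps)
small_exact = (1 - disc) / eps   # regular root, near 1/2
large_exact = (1 + disc) / eps   # singular root, near 2/eps

small_approx = 0.5 + eps / 8 + eps**2 / 16   # regular expansion
large_approx = 2 / eps - 0.5                  # leading terms of the singular root
```

The regular root agrees to O(ε³); the singular root differs by O(ε), matching 2/ε − 1/2 + O(ε).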
Example 1.3.3 (Non-Regular Expansions). Consider x² − x(2 + ε) + 1 = 0. Assume x = x0 + εx1 + ε²x2 + O(ε³).
ε⁰: x0² − 2x0 + 1 = 0 ⇒ x0 = 1 (twice)
ε¹: (1 + εx1)² − (1 + εx1)(2 + ε) + 1 = O(ε²) ⇒ 2εx1 − 2εx1 − ε = O(ε²) ⇒ −ε = O(ε²)
This contradicts the assumption that there was a regular expansion. The exact roots are:
    x = (2 + ε ± √(4ε + ε²))/2 = (1/2)(2 + ε ± √ε √(4 + ε)) = 1 ± ε^{1/2} + O(ε)
Try x = x0 + ε^{1/2} x1 + ε x2 + ⋯
ε⁰: x0 = 1 as before.
ε^{1/2}: (1 + ε^{1/2}x1)² − (1 + ε^{1/2}x1)(2 + ε) + 1 = O(ε) ⇒ 2ε^{1/2}x1 − 2ε^{1/2}x1 = O(ε) ⇒ 0 = 0
ε¹: (1 + ε^{1/2}x1 + εx2)² − (1 + ε^{1/2}x1 + εx2)(2 + ε) + 1 = O(ε^{3/2})
which gives x1² − 1 = 0 ⇒ x1 = ±1, so x = 1 ± ε^{1/2} + O(ε)
1.4. Perturbation Theory of ODEs.

Example 1.4.1 (Regular Problem). Consider the following ODE:
    ẋ + x = εx², x(0) = 1
We try an expansion of the form x(t) = x0(t) + εx1(t) + O(ε²), which leads to:
ε⁰: ẋ0 + x0 = 0, x0(0) = 1 ⇒ x0(t) = e^{−t}
ε¹: ẋ1 + x1 = x0² = e^{−2t}, x1(0) = 0 (no ε in x(0) = 1) ⇒ x1(t) = e^{−t} − e^{−2t}
and so
    x = e^{−t} + ε(e^{−t} − e^{−2t}) + O(ε²)
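This equation happens to be a Bernoulli ODE, so the expansion can be checked against a closed form: setting u = 1/x gives u̇ = u − ε, hence x = 1/(ε + (1 − ε)eᵗ). This derivation is our own check, not part of the notes:

```python
import math

def exact(t, eps):
    # from u = 1/x: du/dt = u - eps, u(0) = 1
    return 1.0 / (eps + (1 - eps) * math.exp(t))

def two_term(t, eps):
    return math.exp(-t) + eps * (math.exp(-t) - math.exp(-2 * t))

eps = 0.05
max_err = max(abs(exact(i / 10, eps) - two_term(i / 10, eps)) for i in range(51))
```

Over t ∈ [0, 5] the worst error is O(ε²), uniformly in t — this is what makes the problem "regular".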
For t ∈ [0, ∞) we want the constants in the O definition to be independent of t (uniformity).
Sometimes we want only ε > 0 versions of o, O, with one-sided limits lim_{ε→0⁺}.
We use a series 1, ε^{1/2}, ε, ε^{3/2}, ... or in general gauge functions ϕ0(ε), ϕ1(ε), ...; what is needed is ϕ_{n+1}(ε) = o(ϕ_n(ε)).
Example 1.4.2 (Singular Model Equation).
Consider εẋ + x = 1, x(0) = 0, and suppose x = x0 + εx1 + O(ε²). Then ε(ẋ0 + εẋ1) + x0 + εx1 = 1 + O(ε²).
ε⁰: x0 = 1, but then x(0) = 0 cannot be satisfied: the expansion cannot solve the initial condition.
We can rescale time, ie: t = ετ, which gives τ = t/ε and
    dt/dτ = ε, dx/dτ = (dx/dt)(dt/dτ) = εẋ
so the equation becomes
    dx/dτ + x = 1, x(0) = 0
Now use x = x0 + εx1 + O(ε²).
ε⁰: dx0/dτ + x0 = 1, x0(0) = 0 ⇒ x0 = 1 − e^{−τ} = 1 − e^{−t/ε}
ε¹: dx1/dτ + x1 = 0, x1(0) = 0 ⇒ x1 = 0 (similarly for ε², ε³ etc...)
Therefore the solution is:
    x = 1 − e^{−t/ε}
Example 1.4.3 ("Singular In The Domain").
Consider ẋ + εx² = 1, x(0) = 0, t > 0 and assume x = x0 + εx1 + ε²x2 + O(ε³).
ε⁰: ẋ0 = 1, x0(0) = 0 ⇒ x0 = t
ε¹: d/dt(t + εx1) + ε(t + εx1)² = 1 + O(ε²) ⇒ 1 + εẋ1 + εt² = 1 + O(ε²)
    ⇒ ẋ1 + t² = 0 ⇒ x1 = −t³/3
(If we carry on we find the ε² term ∼ t⁵.) The solution is:
    x = t − εt³/3 + O(ε²)
This is not regular (uniform) for t ∈ [0, ∞)
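The exact solution here is x = tanh(√ε t)/√ε (one can check ẋ = 1 − εx², x(0) = 0 — this closed form is our own check, not from the notes), which makes the breakdown at large t visible:

```python
import math

def exact(t, eps):
    # d/dt [tanh(sqrt(eps)*t)/sqrt(eps)] = 1 - eps*x^2, x(0) = 0
    s = math.sqrt(eps)
    return math.tanh(s * t) / s

def two_term(t, eps):
    return t - eps * t**3 / 3

eps = 0.01
err_small_t = abs(exact(2.0, eps) - two_term(2.0, eps))   # fine for t = O(1)
err_large_t = abs(exact(30.0, eps) - two_term(30.0, eps)) # useless once eps*t^2 is O(1)
```

The exact solution saturates at 1/√ε while the two-term expansion heads off to −∞: the expansion is only valid while εt² remains small.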
Example 1.4.4 (Damped Harmonic Motion with Small Damping (ε > 0)).
Consider ẍ + εẋ + x = 0, x(0) = 0, ẋ(0) = 1 and assume x = x0 + εx1 + ε²x2 + O(ε³).
ε⁰: ẍ0 + x0 = 0, x0(0) = 0, ẋ0(0) = 1 ⇒ x0 = sin t
ε¹: ẍ1 + ẋ0 + x1 = 0 ⇒ ẍ1 + x1 = −cos t, x1(0) = 0, ẋ1(0) = 0
and we get x1 = −(t/2) sin t, whereby the solution is:
    x = sin t − (1/2)εt sin t + O(ε²)
This again is not uniform for t ∈ [0, ∞).
The exact solution is x = (1 − ε²/4)^{−1/2} e^{−εt/2} sin(√(1 − ε²/4) t); the expansion above is only good for small t.
1.5. Asymptotic Expansions.

Definition 1.5.1. A sequence of functions ϕn, n = 0, 1, 2, ... is called an asymptotic sequence as x → x0 if
    lim_{x→x0} ϕ_{n+1}(x)/ϕ_n(x) = 0
(ie: ϕ_{n+1}(x) = o(ϕ_n(x)))
Note: We could have x0 = ∞ or a one-sided limit x → x0⁺.
Examples:
(i) x^{−1/2}, 1, x^{1/2}, x, x^{3/2}, ... as x → 0⁺
(ii) 1, 1/x, 1/(x ln x), 1/(x^{3/2} ln x), ... as x → ∞
(iii) tan x, (x − π)², (sin x)³, ... as x → π
Taylor's Theorem:
    f(x) = Σ_{n=0}^N f⁽ⁿ⁾(x0)(x − x0)ⁿ/n! + R_N(x) (remainder), for f ∈ C^{N+1}[x0 − r, x0 + r], r > 0
There are remainder formulas to bound R_N(x); for example,
Cauchy: |R_N(x)| ≤ M_N r^{N+1}/(N + 1)!, M_N > 0, so R_N = O((x − x0)^{N+1}) as x → x0.
It is important to remember that Taylor does NOT imply R_N → 0 as N → ∞. In fact we know plenty of power series that do not converge, eg:
    Σ_{n=0}^∞ (−1)ⁿ n! xⁿ diverges for all x ≠ 0.
Another famous non-convergent series is Stirling's formula for n!:
    ln n! = n ln n − n + (1/2) ln(2πn) + 1/(12n) − 1/(360n³) + O(1/n⁵)
Definition 1.5.2. Let ϕn be an asymptotic sequence as x → x0. The sum Σ_{n=0}^N aₙϕₙ(x) is called an asymptotic expansion of f with N terms if
    f(x) − Σ_{n=0}^N aₙϕₙ(x) = o(ϕ_N(x)).
The aₙ are called the coefficients of the asymptotic expansion, and Σ_{n=0}^∞ aₙϕₙ(x) is called an asymptotic series.
Note: Some people use the stronger definition: O(ϕ_{N+1}(x)).

Notation 1.5.3. f(x) ∼ Σ_{n=0}^∞ aₙϕₙ(x) as x → x0
Clearly any Taylor series is an asymptotic series.
Example: Find an asymptotic expansion for f(x) = ∫₀^∞ e^{−t}/(1 + xt) dt as x → 0⁺.
    (1 + xt)^{−1} = 1 − xt + x²t² − ⋯ ⇒ f(x) = ∫₀^∞ (1 − xt + x²t² − ⋯)e^{−t} dt
Now, it can be shown that ∫₀^∞ tⁿ e^{−t} dt = n!, and hence
    f(x) = 1 − x + 2!x² − 3!x³ + ⋯
which diverges by the ratio test: |n!xⁿ / (n−1)!x^{n−1}| = n|x| → ∞ ∀x ≠ 0.
It could still be an asymptotic expansion, however; we'd need to check
    f(x) − Σ_{n=0}^N (−1)ⁿ n! xⁿ = o(x^N)
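The divergent-but-asymptotic behaviour shows up numerically as "optimal truncation": partial sums first approach the integral, then drift away as the factorials take over. A sketch using a basic Simpson quadrature for the integral (helper names are our own):

```python
import math

def f_numeric(x, T=60.0, n=60000):
    """Composite Simpson approximation of the integral of e^{-t}/(1+xt) on [0, T];
    the tail beyond T = 60 is smaller than e^{-60} and is neglected."""
    g = lambda t: math.exp(-t) / (1 + x * t)
    h = T / n
    s = g(0.0) + g(T)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3

def partial_sum(x, N):
    return sum((-1)**k * math.factorial(k) * x**k for k in range(N + 1))

x = 0.1
I = f_numeric(x)                                    # ~ 0.91563 for x = 0.1
errs = [abs(I - partial_sum(x, N)) for N in range(16)]
```

For x = 0.1 the error falls until roughly N ≈ 1/x = 10 terms, then grows again: the series is asymptotic as x → 0⁺ even though it diverges for every fixed x ≠ 0.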
This is a special case of:

Lemma 1.5.4 (Watson's Lemma). Let f be a function with a convergent power series with radius of convergence R, and f(t) = O(e^{αt}) as t → ∞ (for some α > 0). Then as a → ∞:
    ∫₀^∞ e^{−at} f(t) dt ∼ Σ_{n=0}^∞ f⁽ⁿ⁾(0)/a^{n+1}
In the last example we had f(x) = ∫₀^∞ e^{−t}/(1 + xt) dt as x → 0⁺; let u = xt and a = 1/x (so t = au, dt = a du), then the integral becomes
    a ∫₀^∞ e^{−au}/(1 + u) du
(which looks like Watson's, with f(u) = 1/(1 + u)).
Example 1.5.5 (Incomplete Gamma Function).
    γ(a, x) = ∫₀ˣ t^{a−1} e^{−t} dt = ∫₀ˣ t^{a−1} Σ_{n=0}^∞ (−t)ⁿ/n! dt = Σ_{n=0}^∞ (−1)ⁿ/n! ∫₀ˣ t^{n+a−1} dt = Σ_{n=0}^∞ (−1)ⁿ x^{n+a}/(n!(n + a))
Note: The power series under the integral is convergent, hence uniformly convergent on [0, x]. We have a convergent power series for γ(a, x) in x.
Example 1.5.6.
    Ei(x) = ∫_{−∞}^x (e^t/t) dt (exponential integral, taken as a Cauchy principal value)
    E1(x) = ∫₁^∞ (e^{−tx}/t) dt, and it turns out that E1(x) = −Ei(−x)
    Ei(x) = γ + ln x + x + x²/(2·2!) + x³/(3·3!) + O(x⁴)
where γ is Euler's constant:
    γ = −∫₀^∞ e^{−x} ln x dx = lim_{n→∞} (Σ_{k=1}^n 1/k − ln n) ≈ 0.5772
2. ODEs In The Plane

We consider systems of ODEs of the form:
    ẋ = u(x, y), ẏ = v(x, y)
where x(0) = x0, y(0) = y0.
(Note: A 2nd-order ODE can be expressed as a coupled system of 1st-order ODEs, since if ẍ + f(x)ẋ + g(x) = 0 then, setting y = ẋ, it follows that ẏ = ẍ, so we get
    ẋ = y, ẏ = −f(x)y − g(x)
where in this case u(x, y) = y, v(x, y) = −f(x)y − g(x).)
2.1. Linear Plane Autonomous Systems.
    ẋ = ax + by, ẏ = cx + dy
The 1D case is easy: ẋ = ax, x(t) = x(0)e^{at}.

Example 2.1.1.
Consider ẋ = −3x, ẏ = −2y. If we write ~x = (x, y)ᵀ then we can write ~x′ = (−3 0; 0 −2)~x.
We solve these two ODEs to get
    x(t) = x(0)e^{−3t}, y(t) = y(0)e^{−2t}
which may be written as
    ~x(t) = (e^{−3t} 0; 0 e^{−2t})(x(0), y(0))ᵀ
We can construct a "phase plot" with solutions being curves in the plane called "trajectories" or "orbits". We could eliminate t as follows:
    y = y(0)(x/x(0))^{2/3}, x(0) ≠ 0
Figure 2.1.1: (phase plot in the (x, y) plane)
Example 2.1.2.
Similar to the above example, consider ẋ = −x, ẏ = y.
We solve in both cases to get x(t) = x(0)e^{−t}, y(t) = y(0)e^{t} and eliminate t such that:
    y = y(0)(x/x(0))^{−1}
Noting that ~x′ = (−1 0; 0 1)~x, we end up with a saddle-shaped phase plot.
Figure 2.1.2: (phase plot)
Example 2.1.3 (Simple Harmonic Motion: ẍ + x = 0).
    ẋ = y, ẏ = −x, ie: ~x′ = (0 1; −1 0)~x
    x(t) = A cos t + B sin t, x(0) = A
    ẋ(t) = −A sin t + B cos t, ẋ(0) = B
so
    ~x(t) = (x(0) cos t + y(0) sin t, −x(0) sin t + y(0) cos t)ᵀ = (cos t sin t; −sin t cos t)(x(0), y(0))ᵀ
where the matrix is a rotation matrix. The orbits are circles.
Figure 2.1.3: (circular orbits in the (x, y) plane)
Example 2.1.4 (Damped Harmonic Motion: ẍ + bẋ + x = 0).
    ẋ = y, ẏ = −x − by, ie: ~x′ = (0 1; −1 −b)~x
Try x(t) = e^{λt}, giving the characteristic polynomial
    λ² + bλ + 1 = 0 ⇒ λ = −b/2 ± √(b²/4 − 1)
If b is small then b²/4 − 1 < 0, such that
    λ = −b/2 ± i√(1 − b²/4) = α ± iβ
and so
    x(t) = e^{αt}(A cos(βt) + B sin(βt))
Figure 2.1.4: (x(t) against t, and the spiral in the (x, y) plane)
Theorem 2.1.5. Let A ∈ R^{2×2} be a real matrix with eigenvalues λ1, λ2. Then:
(i) If λ1 ≠ λ2 are real then there exists an invertible matrix P such that
    P⁻¹AP = (λ1 0; 0 λ2)
(ii) If λ1 = λ2 = λ then either A is diagonal, A = λI, or A is not diagonal and there is a P such that
    P⁻¹AP = (λ 1; 0 λ)
(iii) If λ1 = α + iβ, λ2 = α − iβ, β ≠ 0, then there is a P such that
    P⁻¹AP = (α −β; β α)
How does this help? Put ~x′ = A~x and let ~y = P⁻¹~x. Then ~x = P~y and
    ~y′ = P⁻¹~x′ = P⁻¹A~x = P⁻¹AP~y
This allows us to generalise the work we did above.

Example 2.1.6. For case (i) in the theorem, consider ~u′ = P⁻¹AP~u. Then
    u1′ = λ1u1, u2′ = λ2u2
(since P⁻¹AP is diagonal). Then ui(t) = ui(0)e^{λit}, and so
    ~u(t) = (e^{λ1t} 0; 0 e^{λ2t})(u1(0), u2(0))ᵀ
Now ~x = P~u, so
    ~x(t) = P(e^{λ1t} 0; 0 e^{λ2t})P⁻¹ ~x(0)
u1 and u2 are related by eliminating t:
    u1(t) = u1(0)e^{λ1t} ⇒ (u1/u1(0))^{1/λ1} = e^t, then u2 = u2(0)(u1/u1(0))^{λ2/λ1}.
Figure 2.1.5: (nodes: λ1 > λ2 > 0 and λ1 < λ2 < 0)
For λ1 ≠ λ2 there are distinct eigenvectors ~v1, ~v2 with A~vi = λi~vi. Half-lines through the origin in the eigen-directions are trajectories.
Figure 2.1.6: (eigen-directions, e.g. λ1 > 0 > λ2)
2.2. Phase Space Plots.
- Eigenvalues real, different, both positive: Node (source)
- Eigenvalues real, different, both negative: Node (sink)
- Eigenvalues real, different, opposite sign: Saddle
- Eigenvalues real, equal, λ > 0, A = λI: Node (source)
- Eigenvalues real, equal, λ < 0, A = λI: Node (sink)
- Eigenvalues real, equal, λ > 0, A ≠ λI: Degenerate source
- Eigenvalues real, equal, λ < 0, A ≠ λI: Degenerate sink
- Eigenvalues complex, λ1 = α + iβ, λ2 = α − iβ, α < 0, β ≠ 0: Stable spiral
- Eigenvalues complex, λ1 = α + iβ, λ2 = α − iβ, α > 0, β ≠ 0: Unstable spiral
- Eigenvalues purely imaginary, λ1 = iβ, λ2 = −iβ, β ≠ 0: Centre (ellipses)
Figure 2.2.1: (sketches of each phase portrait)
Example 2.2.1. Consider
    ẋ = −3x + y, ẏ = 2x − 2y
The only critical point of this system is at (0, 0), and the coefficient matrix (Jacobian) is A = (−3 1; 2 −2). We find the eigenvalues by setting det(A − λI) = det(−3−λ 1; 2 −2−λ) = 0, ie:
    λ² + 5λ + 4 = 0 ⇒ λ = −4, −1
This corresponds to a node (sink). As for the associated eigenvectors:
    A + 4I = (1 1; 2 2) ⇒ ~v1 = (1, −1)ᵀ is in the null space (hence an eigenvector)
    A + I = (−2 1; 2 −1) ⇒ ~v2 = (1, 2)ᵀ is the other eigenvector
We can get further information to help in curve sketching by considering isoclines:
    ẏ/ẋ = dy/dx = (2x − 2y)/(−3x + y)
Figure 2.2.2: (phase portrait with eigen-directions and isoclines)
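The eigenpairs and the solution formula of Example 2.1.6 can be checked with a few lines of arithmetic — a sketch, not part of the notes (helper names are our own):

```python
import math

A = [[-3.0, 1.0], [2.0, -2.0]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

# eigenpairs claimed in the text: A v = lambda v
pairs = [(-4.0, [1.0, -1.0]), (-1.0, [1.0, 2.0])]
checks = [max(abs(matvec(A, v)[i] - lam * v[i]) for i in range(2)) for lam, v in pairs]

def solution(x0, t):
    """x(t) = P diag(e^{-4t}, e^{-t}) P^{-1} x(0), with P = [v1 v2] = [[1,1],[-1,2]], det P = 3."""
    u1 = (2*x0[0] - x0[1]) / 3.0   # first component of P^{-1} x0
    u2 = (x0[0] + x0[1]) / 3.0
    c1, c2 = u1 * math.exp(-4*t), u2 * math.exp(-1*t)
    return [c1 + c2, -c1 + 2*c2]

x_at_1 = solution([1.0, 0.0], 1.0)
```

Both components decay, as expected for a sink with eigenvalues −4 and −1.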
2.3. Linear Systems. A linear system for which the eigenvalues are wholly imaginary (λ = ±iβ) is called a centre. The characteristic equation of A ∈ R^{2×2} is, in general, λ² − (trace A)λ + det A = 0. In this case (λ imaginary), λ² + β² = 0, so trace A = 0 and det A > 0.
Consider a simple case: ẋ = −y, ẏ = cx (c > 0). Eliminate t to get
    ẏ/ẋ = dy/dx = −cx/y ⇒ ∫ y dy = −∫ cx dx ⇒ y²/2 + cx²/2 = const
This is the equation of an ellipse. To determine the direction of the arrows, set y = 0: on the x axis ẋ = 0 and ẏ = cx, and (ẋ, ẏ) is a vector in the direction of the solution, so for x positive, ẏ is positive (the flow crosses the positive x axis moving upwards).
The purely oscillatory nature of the resulting x(t), y(t) graphs rarely reflects "real-life" situations, which involve damping.
Figure 2.3.1: (x(t) and y(t) oscillations)
2.4. Linear Approximations. Consider ẋ = u(x, y), ẏ = v(x, y). Critical points occur when u = v = 0. Let (x0, y0) be a critical point, put ξ = x − x0, η = y − y0 and Taylor expand about (x0, y0) to get (near the equilibrium point)
    u(x, y) = u(x0, y0) + ξ ∂u/∂x|_{(x0,y0)} + η ∂u/∂y|_{(x0,y0)} + O(ξ² + η²) as (ξ, η) → (0, 0)
    v(x, y) = v(x0, y0) + ξ ∂v/∂x|_{(x0,y0)} + η ∂v/∂y|_{(x0,y0)} + O(ξ² + η²) as (ξ, η) → (0, 0)
and so, since u(x0, y0) = v(x0, y0) = 0 and (ξ̇, η̇) = (ẋ, ẏ) = (u, v),
    (ξ̇, η̇)ᵀ = (∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y)|_{(x0,y0)} (ξ, η)ᵀ + O(ξ² + η²)
Dropping the O(ξ² + η²) terms gives a linear system.
Example 2.4.1 (Predator-Prey).
We wish to model the dynamics between predators and prey. Without considering external & environmental variables, as the number of predators increases we expect the population growth rate of the prey to lessen; this then should result in a slow-down in the growth rate of predators (as there is more competition for fewer prey).
Let x be a population of prey (eg: rabbits) and y a population of predators (eg: foxes), with x, y > 0. (Note: this model relies on large x and y, so that we are able to talk about derivatives etc., since x, y are really integers!)
A simple model is
    ẋ = x(a − αy), ẏ = y(−c + γx), (a, c, α, γ > 0)
Then u(x, y) = x(a − αy), v(x, y) = y(−c + γx), and so for an equilibrium
    u = v = 0: x(a − αy) = 0, y(−c + γx) = 0
For u = 0, either x = 0 or a − αy = 0 (y = a/α); for v = 0, either y = 0 or −c + γx = 0 (x = c/γ). Therefore the critical points are at (0, 0) and (c/γ, a/α).
More specifically, if we put a = 1, α = 1/2, c = 3/4, γ = 1/4 then:
    ẋ = x(1 − y/2), ẏ = y(−3/4 + x/4)
with critical points at (0, 0) and (3, 2).
Near (0, 0):
    (ξ̇, η̇)ᵀ = (1 0; 0 −3/4)(ξ, η)ᵀ
The corresponding eigenvalues and eigenvectors are
    λ1 = 1, ~v1 = (1, 0)ᵀ; λ2 = −3/4, ~v2 = (0, 1)ᵀ
This is a saddle. Near (3, 2):
    (ξ̇, η̇)ᵀ = (0 −3/2; 1/2 0)(ξ, η)ᵀ
with eigenvalues
    λ_{1,2} = ±i√3/2
ie: a centre for the linearised system (ellipses). Now
    dy/dx = y(x/4 − 3/4) / x(1 − y/2) ⇒ (3/4)ln x + ln y − y/2 − x/4 = const.
It is possible, albeit tricky, to show this is a closed curve.
Figure 2.4.1: (closed orbits; x(t) and y(t) oscillate out of phase)
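The claim that (3/4)ln x + ln y − y/2 − x/4 is constant along solutions can be tested by integrating the system numerically (an RK4 sketch, not part of the notes):

```python
import math

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f([a + h/2*b for a, b in zip(s, k1)])
    k3 = f([a + h/2*b for a, b in zip(s, k2)])
    k4 = f([a + h*b for a, b in zip(s, k3)])
    return [a + h/6*(p + 2*q + 2*r + w) for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

def field(s):
    x, y = s
    return [x*(1 - y/2), y*(-3/4 + x/4)]

def H(x, y):
    # the first integral derived in the text
    return 0.75*math.log(x) + math.log(y) - y/2 - x/4

state = [2.0, 2.0]
H0 = H(*state)
for _ in range(20000):          # integrate to t = 20 with h = 0.001
    state = rk4_step(field, state, 0.001)
drift = abs(H(*state) - H0)
```

Along the numerical orbit the first integral is conserved to within the integrator's error, so the orbits lie on the level curves of H.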
Example 2.4.2 (Circular Pendulum).
We consider a circular pendulum given (in non-dimensional units) by ẍ + sin x = 0, where x is an angle.
Figure 2.4.2: (pendulum of mass m under gravity, displaced by angle x)
Note that for small angles x, sin x ≈ x, and so we get ẍ + x = 0 (simple harmonic motion).
We shall solve ẍ + sin x = 0 qualitatively, since it can't easily be solved analytically. Let ẋ = y = u, ẏ = −sin x = v. The critical points are at
    ẋ = ẏ = 0 ⇒ y = 0, x = nπ, n ∈ Z
    (∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y) = (0 1; −cos(nπ) 0) = (0 1; −(−1)ⁿ 0) at y = 0, x = nπ
Near the critical point we consider (ξ̇, η̇)ᵀ = (0 1; (−1)^{n+1} 0)(ξ, η)ᵀ. The characteristic equation is
    det(−λ 1; (−1)^{n+1} −λ) = λ² + (−1)ⁿ = 0
If n is even, λ² + 1 = 0 ⇒ λ = ±i; if n is odd, λ² − 1 = 0 ⇒ λ = ±1.
For n odd we get eigenvectors ~v1 = (1, 1)ᵀ, ~v2 = (1, −1)ᵀ: a saddle. For n even we get a centre.
The centres correspond to small swings. The saddles correspond to swings 'just' large enough that they stop at the top. Everywhere else corresponds to big swings where, with no damping, the pendulum doesn't stop (it whirls over the top).
Figure 2.4.3: (phase portrait: centres at even multiples of π, saddles at odd multiples)
2.5. Non-Linear Oscillators. ẍ + x = 0 represents simple harmonic motion (SHM) with solution x(t) = A cos t + B sin t: a centre.
Consider ẍ + 2βẋ + x = 0, making the substitution ẋ = y. The system of ODEs is given by
    ẋ = y, ẏ = −2βy − x
and the roots of the resulting characteristic polynomial are
    λ = −β ± i√(1 − β²)
and so for 0 < β < 1 we get a stable spiral.

Example 2.5.1 (Stiff Spring System).
In a simple spring, force (and hence acceleration) is proportional to the extension of the spring (Hooke's Law); instead we think of a stiff spring with restoring force proportional to x + βx³, β > 0. For small x this behaves like x, for large x like x³.
    ẍ + x + βx³ = 0; ẋ = y, ẏ = −x − βx³; u = y, v = −x − βx³
The critical points are at u = v = 0 ⇒ y = 0 and x + βx³ = 0 ⇒ x = 0 or 1 + βx² = 0 (no real solutions). The only critical point is at (0, 0).
    (∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y) = (0 1; −1 − 3βx² 0) = (0 1; −1 0) evaluated at (0, 0)
This is a centre, λ = ±i.

Example 2.5.2 (Soft Spring).
We could change the sign before β and simulate a soft spring, ie: ẍ + x − βx³ = 0. The critical points in this case lie at (0, 0) and (±1/√β, 0), and the Jacobian is given by
    (0 1; −1 + 3βx² 0)
At x = ±1/√β this is (0 1; 2 0), with real eigenvalues λ = ±√2: saddle points. In physical terms, the restoring force fails at large displacements, so solutions can escape past these points.
3. Limit Cycles

Orbits, that is, trajectories of a system of ODEs, cannot cross.
Figure 3.0.1: (an orbit x(t), and a closed orbit with x(0) = x(T))
If ~x = (x(t), y(t))ᵀ and ~x(0) = ~x(T) (for some T ≠ 0), then ~x(t) = ~x(t + T) for all t. When this happens we have a "periodic orbit", eg: a clock, an oscillator, a cycle in the economy, biology etc...
Supposing we have such a cycle, what can be said about what happens around it? It could be the case that both outside and inside the orbit we spiral towards it; but then again, something else could happen instead. The equilibrium point at the centre doesn't give us this information.
Figure 3.0.2:
Example 3.0.3 (Cooked up). Consider a system of the contrived form:
    ẋ = −y − x(x² + y² − 1)    (1)
    ẏ = x − y(x² + y² − 1)     (2)
We see that, by construction, when x, y satisfy x² + y² = 1 we obtain simple harmonic motion. There is only one equilibrium point and that occurs at (0, 0). The Jacobian at this point is given by
    (∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y)|_{(0,0)} = (1 −1; 1 1)
The characteristic equation is λ² − 2λ + 2 = 0, which has roots λ = 1 ± i. This is an unstable spiral.
If instead, however, we look at this in polar coordinates,
    r² = x² + y², tan θ = y/x
Differentiating implicitly we get:
    2rṙ = 2xẋ + 2yẏ, θ̇ = (xẏ − yẋ)/(x² + y²)
If we multiply (1) and (2) by x & y respectively we get
    xẋ = x(−y − x(x² + y² − 1)), yẏ = y(x − y(x² + y² − 1))
    ⇒ xẋ + yẏ = −(x² + y²)(r² − 1) ⇒ rṙ = −r²(r² − 1)
    ⇒ ṙ = −r(r² − 1)
This reveals that there is also an equilibrium at r = 1 (ie: for r = 1 we stay on the circle). Furthermore, for r > 1 we have ṙ < 0, whilst for r < 1 we have ṙ > 0: orbits spiral onto the circle from both sides (inside, this is the unstable spiral we found above).
Figure 3.0.3: (ṙ > 0 inside r = 1, ṙ < 0 outside)
Now if we multiply (1) and (2) by y & x respectively and subtract we get:
    xẏ − yẋ = x² − xy(r² − 1) + y² + xy(r² − 1) = x² + y² = r²
    ⇒ θ̇ = r²/r² = 1
The equilibrium point correctly told us that locally we have an unstable spiral, but it failed to illuminate the behaviour as we move further out.
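The polar analysis predicts that every orbit (except the equilibrium) winds onto the circle r = 1. A direct RK4 simulation confirms this from both sides — a sketch, not part of the notes:

```python
import math

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f([a + h/2*b for a, b in zip(s, k1)])
    k3 = f([a + h/2*b for a, b in zip(s, k2)])
    k4 = f([a + h*b for a, b in zip(s, k3)])
    return [a + h/6*(p + 2*q + 2*r + w) for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

def field(s):
    x, y = s
    d = x*x + y*y - 1
    return [-y - x*d, x - y*d]

def final_radius(x0, y0, t_end=20.0, h=0.01):
    s = [x0, y0]
    for _ in range(int(t_end / h)):
        s = rk4_step(field, s, h)
    return math.hypot(s[0], s[1])

r_inside = final_radius(0.1, 0.0)   # starts inside the unit circle
r_outside = final_radius(2.0, 0.0)  # starts outside
```

Both trajectories end up on the limit cycle r = 1, as ṙ = −r(r² − 1) predicts.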
We shall establish a couple of results that allow us to determine when & where there exist no closed orbits. First recall:

Theorem 3.0.4 (Divergence Theorem in the plane). Let C be a closed curve, let A ⊂ R² be the region it encloses, and let u, v be functions with continuous derivatives. Then
    ∬_A (∂u/∂x + ∂v/∂y) dx dy = ∮_C (u dy − v dx) = ∮_C (u dy/ds − v dx/ds) ds
where (x(s), y(s)) is a parameterisation of C.
Theorem 3.0.5 (Bendixson's Negative Criterion). Consider the system
    ẋ = u(x, y), ẏ = v(x, y)
with u and v continuously differentiable. Let A ⊂ R² be a region of the plane on which ∂u/∂x + ∂v/∂y is of one sign (everywhere positive or everywhere negative). Then there is no closed orbit contained within A.

Proof. Suppose for contradiction there exists a closed orbit in A. Then this orbit forms a closed curve C in A.
Figure 3.0.4: (a closed orbit C ⊂ A enclosing A′, with x(0) = x(T))
Let A′ ⊂ A be the region enclosed by C (ie: ∂A′ = C). Along the orbit ẋ = u, ẏ = v, so by the divergence theorem
    ∬_{A′} (∂u/∂x + ∂v/∂y) dx dy = ∮_C (u dy − v dx) = ∫₀^T (u ẏ − v ẋ) dt = ∫₀^T (uv − vu) dt = 0
but ∂u/∂x + ∂v/∂y is either > 0 or < 0 throughout A′, hence ∬_{A′} (∂u/∂x + ∂v/∂y) dx dy ≠ 0: a contradiction. □
Example 3.0.6 (the "cooked up" example revisited).
    ẋ = u = −y − x(x² + y² − 1), ẏ = v = x − y(x² + y² − 1)
    ∂u/∂x = −3x² − y² + 1, ∂v/∂y = −x² − 3y² + 1, and so
    ∂u/∂x + ∂v/∂y = −4x² − 4y² + 2 = −4(x² + y²) + 2
This changes sign on the circle x² + y² = 1/2, so on any region crossing that circle we cannot conclude anything. (Though we can be confident there is no closed orbit inside √(x² + y²) = r < 1/√2, where the divergence is positive.)
Bendixson's criterion is not much use for answering the question: is there a closed orbit between r = 1/√2 and r = 1?
Example 3.0.7 (Damped Harmonic Motion).
For ẍ + 2βẋ + x = 0 we have ẋ = u = y, ẏ = v = −2βy − x. This is a stable spiral at (0, 0), and
    ∂u/∂x + ∂v/∂y = 0 − 2β = −2β, a non-zero constant.
Hence there is no sign change for ∂u/∂x + ∂v/∂y, and so no closed orbits.

Example 3.0.8 (General Damped Oscillator).
This system is characterised by ẍ + f(x)ẋ + g(x) = 0 (damping f(x) > 0), which with the usual substitution ẋ = y yields
    ẋ = y, ẏ = −f(x)y − g(x)
Now ∂u/∂x + ∂v/∂y = 0 − f(x) is always negative, and so general damped systems of this form have no closed orbits.
3.1. The Poincaré–Bendixson Theorem.

Theorem 3.1.1 (Poincaré–Bendixson Theorem). Given a closed, bounded region A ⊂ R² and an orbit C of a system of ODEs which remains in A ∀t ≥ 0, then C must approach either a limit cycle or an equilibrium.

Remark 3.1.2.
(1) orbit = trajectory = solution curve
(2) limit cycle = closed orbit that nearby orbits approach
(3) We can use this result with time "running backwards", ie: orbits "come from" unstable equilibria or closed orbits.
3.2. Energy (brief).
Consider an oscillator of the form ẋ = y, ẏ = −f(x), characterising ẍ + f(x) = 0.
Since there is no damping we expect energy to be conserved. Consider (writing E for the energy, to avoid a clash with the small parameter ε)
    E = ẋ²/2 + F(x) = y²/2 + F(x)
where y²/2 can be considered kinetic energy and F(x) the potential energy. Then
    dE(x, y)/dt = yẏ + F′(x)ẋ = (ẍ + F′(x))ẋ
Set F′ = f; then dE/dt = 0 along solution curves. So E is constant on a solution (x(t), y(t)); we call E a "first integral".

Example 3.2.1 (Duffing's Equation).
This is the "hard spring system" we met earlier: ẍ + ω²x + εx³ = 0.
    f(x) = ω²x + εx³ ⇒ F(x) = ω²x²/2 + εx⁴/4 + some constant we need not worry about in this context of constant solution curves. Then
    E(x, y) = y²/2 + ω²x²/2 + εx⁴/4
for ε > 0. As E(x, y) is constant, the solutions are bounded for all t (closed curves). We can say that
    x² + y² ≤ max(2, 2/ω²)·(y²/2 + ω²x²/2 + εx⁴/4) = max(2, 2/ω²)E
ie: solutions stay in a circle. For constant E we can check that for y ≠ 0 there are two solutions for x, so the level sets are closed curves.
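Conservation of E along Duffing orbits is easy to confirm numerically — a sketch, not part of the notes:

```python
def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f([a + h/2*b for a, b in zip(s, k1)])
    k3 = f([a + h/2*b for a, b in zip(s, k2)])
    k4 = f([a + h*b for a, b in zip(s, k3)])
    return [a + h/6*(p + 2*q + 2*r + w) for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

omega, eps = 1.0, 0.2

def field(s):
    # Duffing: x'' + omega^2 x + eps x^3 = 0
    x, y = s
    return [y, -omega**2 * x - eps * x**3]

def energy(x, y):
    return y*y/2 + omega**2 * x*x/2 + eps * x**4/4

s = [1.0, 0.0]
E0 = energy(*s)
for _ in range(20000):          # integrate to t = 20 with h = 0.001
    s = rk4_step(field, s, 0.001)
drift = abs(energy(*s) - E0)
```

The drift in E is at the level of the integrator's truncation error, consistent with E being a first integral.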
4. Lindstedt's Method

Example 4.0.2 (Duffing's Equation). ẍ + ω²x + εx³ = 0, 0 < ε ≪ 1.
We know the solutions are periodic for (x(0), ẋ(0)) ≠ (0, 0): the orbits are closed curves resembling slightly squared ellipses.
Figure 4.0.1: (phase-plane orbits of Duffing's equation)
Consider a straightforward expansion x = x0 + εx1 + ε²x2 + O(ε³).
Terms in ε⁰:
    ẍ0 + ω²x0 = 0 ⇒ x0 = a cos ωt
We may suppose without loss of generality that the initial conditions for this system are x(0) = a and ẋ(0) = 0, since we are just picking out a particular solution curve; we still get all of them as a varies.
Terms in ε¹:
    ẍ1 + ω²x1 + x0³ = 0 ⇒ ẍ1 + ω²x1 = −a³ cos³(ωt)
To deal with cos³(ωt) we use the identity
    cos³(ωt) = (3/4)cos(ωt) + (1/4)cos(3ωt)
and so
    ẍ1 + ω²x1 = −a³((3/4)cos(ωt) + (1/4)cos(3ωt))
Since cos(ωt) appears in the homogeneous solution, we would need to introduce a t sin(ωt) term (a secular term), and this is at odds with what we already know about this system: that it is bounded as t → ∞. The trick to get rid of the secular term is to introduce another series:
    τ = Ωt where Ω = Ω0 + εΩ1 + ⋯

Example 4.0.3 (Lindstedt's Method). We try again with Duffing's equation (prefixing ε with a minus sign this time) using the above idea and expecting to see a solution that has periodic orbits for ε small enough. Without loss of generality we rescale time (setting ω = 1), so that we try to solve
    ẍ + x − εx³ = 0
We define τ = Ωt where Ω = Ω0 + εΩ1 + ⋯, and so x(t) becomes x(τ). Coupling this with the standard expansion we use for x, we have
    x(τ) = x0((Ω0 + εΩ1 + ⋯)t) + εx1((Ω0 + εΩ1 + ⋯)t) + ⋯
We want xi(τ) = xi(τ + 2π) (ie: period 2π in τ for each i). For ε = 0 we have Ω0 = 1. (Had we not set ω = 1 earlier then we'd instead have Ω0 = ω; actually we could choose what we want for Ω0, and depending on how difficult we want to make things, this choice determines Ω1 later. It makes sense to keep things simple and choose Ω0 such that for ε = 0 we have x(τ) = x(ωt).)
Now dτ/dt = Ω, so
    dx/dt = (dx/dτ)(dτ/dt) = Ω dx/dτ (so ẋ = Ωx′), and d²x/dt² = Ω² d²x/dτ²
and so, returning to the ODE, we have
    Ω²x″ + x − εx³ = 0
    ⇒ (Ω0 + εΩ1 + ⋯)²(x0″ + εx1″ + ⋯) + (x0 + εx1 + ⋯) − ε(x0 + εx1 + ⋯)³ = 0, with Ω0 = 1
Terms in ε⁰:
    x0″ + x0 = 0
As before we will assume initial conditions x(0) = a, ẋ(0) = 0 to pick a particular point on some curve. We know there exists a solution with these properties by assuming ellipse-like closed orbits, and by varying a we get all the curves. Since there is no ε in the initial conditions we get
    x0(0) = a, x1(0) = 0, x0′(0) = x1′(0) = 0
Then the solution for x0 is
    x0 = a cos τ
Terms in ε¹:
    x1″ + 2Ω1 x0″ + x1 − x0³ = 0
    ⇒ x1″ + x1 = −2Ω1(−a cos τ) + (a cos τ)³
    ⇒ x1″ + x1 = (2Ω1 a + (3/4)a³) cos τ + (1/4)a³ cos(3τ)
The whole point of doing this was to eliminate the cos τ term, which would have introduced a secular term, and so we set 2Ω1 a + (3/4)a³ = 0
    ⇒ Ω1 = −(3/8)a²
It now remains to solve
    x1″ + x1 = (1/4)a³ cos(3τ)
We try x1 = A cos(3τ): −9A cos(3τ) + A cos(3τ) = (1/4)a³ cos(3τ) ⇒ A = −a³/32.
The general solution for x1 before applying the initial conditions is
    x1 = α cos τ + β sin τ − (a³/32) cos(3τ)
Now x1(0) = 0 ⇒ α = a³/32, and x1′(0) = 0 ⇒ β = 0, giving
    x1 = (a³/32) cos τ − (a³/32) cos(3τ)
Thus
    x(t) = a cos(Ωt) + ε(a³/32)(cos(Ωt) − cos(3Ωt)) + O(ε²)
where Ω = 1 − (3/8)a²ε + ⋯
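The pay-off of the frequency correction can be seen by integrating the ODE numerically and comparing against both the Lindstedt approximation and the naive a cos t — a sketch, not part of the notes:

```python
import math

def rk4_step(f, s, h):
    k1 = f(s)
    k2 = f([a + h/2*b for a, b in zip(s, k1)])
    k3 = f([a + h/2*b for a, b in zip(s, k2)])
    k4 = f([a + h*b for a, b in zip(s, k3)])
    return [a + h/6*(p + 2*q + 2*r + w) for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

eps, a = 0.05, 1.0
Omega = 1 - (3/8) * a**2 * eps

def field(s):
    # x'' + x - eps x^3 = 0
    x, y = s
    return [y, -x + eps * x**3]

def lindstedt(t):
    return a*math.cos(Omega*t) + eps*(a**3/32)*(math.cos(Omega*t) - math.cos(3*Omega*t))

s, h, t = [a, 0.0], 0.001, 0.0
err_lindstedt = err_naive = 0.0
for _ in range(20000):           # up to t = 20
    s = rk4_step(field, s, h)
    t += h
    err_lindstedt = max(err_lindstedt, abs(s[0] - lindstedt(t)))
    err_naive = max(err_naive, abs(s[0] - a*math.cos(t)))
```

Over t ∈ [0, 20] the naive expansion drifts out of phase while the Lindstedt approximation stays close, because the O(ε) frequency error has been removed.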
5. Method of Multiple Scales

In the last section we used Lindstedt's method to take account of varying frequencies; now we develop a more general method for situations with "two time scales". An example of where this is relevant is the damped circular pendulum
    ẍ + 2βẋ + sin x = 0
There are two things going on here: there is an oscillation, which is captured by one time scale, and a slow loss of energy due to the damping term, captured by another time scale. By considering two scales we are able to capture different features of the system.
Figure 5.0.2: (fast oscillation on the t scale, slow decay on the T scale)
Consider small oscillations and small damping:
    ẍ + εẋ + x = 0 (D.H.M.)
If we did a standard expansion we'd end up with a solution
    x(t, ε) = sin t − (ε/2)t sin t + ⋯
and this is not a uniform expansion for the solution. The exact solution for this system, with x(0) = 0, ẋ(0) = 1, is:
    x = (1/√(1 − ε²/4)) e^{−εt/2} sin(t√(1 − ε²/4))
5.1. The Method.
(1) Introduce a new variable T = εt, and think of t as "fast time" and T as "slow time".
(2) Treat t and T as "independent variables" for the function x(t, T, ε). Using the chain rule we have
    dx/dt = ∂x/∂t + ε ∂x/∂T
    d²x/dt² = d/dt(∂x/∂t + ε ∂x/∂T) = ∂/∂t(∂x/∂t + ε ∂x/∂T)·1 + ∂/∂T(∂x/∂t + ε ∂x/∂T)·ε
            = ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T²
(3) Try an expansion x = x0(t, T) + εx1(t, T) + ⋯ where, of course, T = εt.
(4) Use the extra freedom of x depending on T to kill off any secular terms.
5.2. Application of Multiple Scales to D.H.M.

Example 5.2.1. We consider ẍ + εẋ + x = 0 with initial conditions x(0) = 0, ẋ(0) = 1. This becomes
    ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T² + ε(∂x/∂t + ε ∂x/∂T) + x = 0
Now let x = x0 + εx1 + ε²x2 + ⋯ to get
    (∂²/∂t² + 2ε ∂²/∂t∂T + ε² ∂²/∂T²)(x0 + εx1 + ε²x2 + ⋯) + ε(∂/∂t + ε ∂/∂T)(x0 + εx1 + ε²x2 + ⋯) + x0 + εx1 + ε²x2 + ⋯ = 0
Terms in ε⁰:
    ∂²x0/∂t² + x0 = 0
This is a PDE with solution
    x0 = A0(T) cos t + B0(T) sin t
where A0(T), B0(T) are functions of T (as opposed to just constants). Using the initial conditions x(0) = 0, ẋ(0) = 1 (and noting T = 0 when t = 0):
    x0|_{t=0} = A0(0) = 0, ∂x0/∂t|_{t=0} = B0(0) = 1
Terms in ε¹:
    ∂²x1/∂t² + x1 + 2 ∂²x0/∂t∂T + ∂x0/∂t = 0
Now ∂x0/∂t = −A0(T) sin t + B0(T) cos t and ∂²x0/∂t∂T = −(dA0/dT) sin t + (dB0/dT) cos t, and so it remains to solve
    ∂²x1/∂t² + x1 = −2(−(dA0/dT) sin t + (dB0/dT) cos t) + A0(T) sin t − B0(T) cos t
The RHS contains terms in sin t, cos t which will induce secular terms. We therefore choose A0(T) and B0(T) such that they "go away". In other words:
    2 dA0/dT + A0 = 0 ⇒ A0 = A0(0)e^{−T/2} = 0
    2 dB0/dT + B0 = 0 ⇒ B0 = B0(0)e^{−T/2} = e^{−T/2}
and so
    x0(t) = e^{−T/2} sin t = e^{−εt/2} sin t
To get the x1 term we would need higher-order terms to fully specify it. Notice that, with the exception of constants, the first-order part captures most of the features of the exact solution.
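Comparing x0 = e^{−εt/2} sin t against the exact solution quoted earlier (and against the secular expansion) shows how much the slow scale buys us — a numerical check, not part of the notes:

```python
import math

eps = 0.1
w = math.sqrt(1 - eps**2 / 4)

def exact(t):
    return math.exp(-eps*t/2) * math.sin(w*t) / w

def multiscale(t):
    return math.exp(-eps*t/2) * math.sin(t)

def naive(t):
    return math.sin(t) - (eps*t/2) * math.sin(t)

ts = [i * 0.01 for i in range(3001)]    # t in [0, 30]
err_ms = max(abs(exact(t) - multiscale(t)) for t in ts)
err_naive = max(abs(exact(t) - naive(t)) for t in ts)
```

The multiple-scales leading term stays uniformly close over t ∈ [0, 30], while the naive expansion's secular term makes it useless once εt is O(1).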
In general we would have multiple time scales: T0 = t, T1 = εt, ..., Tn = εⁿt. If we consider a series x0(T0, T1, ..., Tn) + εx1(T0, T1, ..., Tn) + ⋯ then
    d/dt = ∂/∂T0 + ε ∂/∂T1 + ε² ∂/∂T2 + ⋯
Example 5.2.2 (Van Der Pol’s Equation). Consider
x + ε(x2 − 1)x + x = 0
Immediately we see that if x2 > 1 we have damping, x2 < 1 we have negativedamping; so it would seem there is a tendency to head towards x = 1. Perhaps wecould use energy methods; anyhow...
( ∂2
∂t2+ 2ε
∂2
∂t∂T+ · · ·
)
(x0 + εx1 + · · · )
+ε((x20 + · · · − 1)
(∂
∂+ ε
∂
∂T
)
(x0 + εx1 + · · ·) + x0 + εx1 + · · · = 0
Terms in ε0:∂2x0
∂t2+ x0 = 0 ⇒ x0 = A(T ) cos t + B(T ) sin t
We can write this in complex form
x0 = A0(T )eit + A0(T )e−it (noting that z + z = 2Re(z))
We can justify this step since in general, if z = a + ib then
(a + ib)eit + (a − ib)e−it = a(eit + e−it) + ib(eit − e−it) = 2a cos t − 2b sin t
Terms in ε1:∂2x1
∂t2+ x1 + 2
∂2x0
∂t∂T+ (x2
0 − 1)∂x0
∂t= 0
⇒ ∂2x1
∂t2+ x1 + 2(A′ieit − A′ie−it) +
[
(Aeit + Ae−it) − 1]
(iAeit − iAe−it) = 0
where A′ = A′(T ) =∂A
∂TIf we wish to find A(T ) as opposed to x1 we need only consider secular terms. Sowe need to equate coefficients of eit (and e−t) to zero and kill them off.considering terms in eit we have:
2iA′ − iA − iA2A + 2iA2A = 0
⇒ 2A′ − A + A2A = 0
Similarly, if we consider e^{−it} we get the conjugate of this expression; i.e. whatever A kills off the e^{it} terms also kills off the e^{−it} terms. We should now use the polar form of A (since A²Ā = |A|²A): write A = (a/2)e^{iϕ} with ϕ = arg(A) and |A| = a/2. Then

x0 = a cos(t + ϕ)

and we now wish to find a.
Now

dA/dT = (1/2)(da/dT)e^{iϕ} + (1/2)aie^{iϕ} dϕ/dT

and so it remains to solve

a′e^{iϕ} + iae^{iϕ}ϕ′ − (a/2)e^{iϕ} + (a²/4)e^{2iϕ} · (a/2)e^{−iϕ} = 0

which, on dividing through by e^{iϕ}, gives

a′ − a/2 + a³/8 + iaϕ′ = 0
We now equate real and imaginary parts:
ϕ′ = 0 ⇒ ϕ is constant
which leaves us the following ODE:

a′ = a/2 − a³/8
There are a number of methods to solve this; one of them is the substitution cos θ = a/2, together with the fact that

1/sin θ = (1/sin θ) · (csc θ + cot θ)/(csc θ + cot θ) = (csc² θ + csc θ cot θ)/(csc θ + cot θ) = −(d/dθ)(csc θ + cot θ)/(csc θ + cot θ)

⇒ ∫ (1/sin θ) dθ = −log(csc θ + cot θ)

This would still require a bit of work, and so we shall instead treat a² as the variable and use partial fractions.
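The antiderivative claimed above can be spot-checked numerically: differentiating −log(csc θ + cot θ) by central differences recovers 1/sin θ.

```python
import math

# F(th) = -log(csc th + cot th); its derivative should be 1/sin th
F = lambda th: -math.log(1 / math.sin(th) + math.cos(th) / math.sin(th))

h = 1e-6
for th in [0.3, 1.0, 2.0]:            # points in (0, pi), where csc + cot > 0
    deriv = (F(th + h) - F(th - h)) / (2 * h)   # central-difference F'(th)
    assert abs(deriv - 1 / math.sin(th)) < 1e-6
print("antiderivative check passed")
```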
da²/dT = 2a da/dT = a² − a⁴/4

⇒ ∫ 4/(a²(4 − a²)) da² = ∫ (1/a² + 1/(4 − a²)) da² = log|a²| − log|4 − a²| = T + T0

⇒ a²/(4 − a²) = ke^{T} ⇒ a²(1 + ke^{T}) = 4ke^{T} ⇒ a² = 4k/(e^{−T} + k)

⇒ a(T) = 2/√(1 + k^{−1}e^{−T})
Hence

x0(t, T) = 2 cos(t + ϕ)/√(1 + k^{−1}e^{−T})

Note that a(T) → 2 as T → ∞, so every solution approaches the limit cycle of amplitude 2, consistent with the damping argument above.
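As a sanity check (a sketch, not part of the notes), we can integrate Van der Pol's equation with a hand-rolled RK4 step and compare the envelope √(x² + ẋ²) against the multiple-scales amplitude a(T) = 2/√(1 + k^{−1}e^{−T}), with k fixed by an assumed initial amplitude a(0) = 0.5:

```python
import math

eps = 0.05

def f(x, v):
    # Van der Pol x'' + eps (x^2 - 1) x' + x = 0 as a first-order system
    return v, -eps * (x * x - 1) * v - x

def rk4_step(x, v, dt):
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + dt/2 * k1x, v + dt/2 * k1v)
    k3x, k3v = f(x + dt/2 * k2x, v + dt/2 * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    return (x + dt/6 * (k1x + 2*k2x + 2*k3x + k4x),
            v + dt/6 * (k1v + 2*k2v + 2*k3v + k4v))

a0 = 0.5
k = a0**2 / (4 - a0**2)          # from a(0)^2 = 4k/(1 + k)
amp = lambda T: 2.0 / math.sqrt(1 + math.exp(-T) / k)

x, v, t, dt = a0, 0.0, 0.0, 0.01
worst = 0.0
while t < 300.0:
    x, v = rk4_step(x, v, dt)
    t += dt
    # envelope estimate sqrt(x^2 + v^2) vs multiple-scales amplitude a(eps t)
    worst = max(worst, abs(math.sqrt(x*x + v*v) - amp(eps * t)))
print(f"final amplitude {math.sqrt(x*x + v*v):.3f} (limit cycle: 2), "
      f"worst envelope error {worst:.3f}")
```

The simulated amplitude grows from 0.5 towards 2 along the predicted envelope, with errors of order ε as expected of the leading-order approximation.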
Example 5.2.3 (Duffing's Equation). We return again to Duffing's equation ẍ + ω²x + εx³ = 0 in the context of multiple scales. Let T = εt; then
ẋ = dx/dt = ∂x/∂t + ε ∂x/∂T,    ẍ = d²x/dt² = ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T²
and so putting this into the ODE above gives

(∂²/∂t² + 2ε ∂²/∂t∂T)(x0 + εx1 + ···) + ω²(x0 + εx1 + ···) + ε(x0 + εx1 + ···)³ = 0
Terms in ε⁰:

∂²x0/∂t² + ω²x0 = 0 ⇒ x0 = Ae^{iωt} + Āe^{−iωt}
Terms in ε¹:

∂²x1/∂t² + ω²x1 + 2 ∂²x0/∂t∂T + x0³ = 0

Now

2 ∂²x0/∂t∂T = 2(iωA′e^{iωt} − iωĀ′e^{−iωt})

whilst

x0³ = A³e^{3iωt} + 3A²Āe^{iωt} + 3AĀ²e^{−iωt} + Ā³e^{−3iωt}

and so

∂²x1/∂t² + ω²x1 + 2(iωA′e^{iωt} − iωĀ′e^{−iωt}) + A³e^{3iωt} + 3A²Āe^{iωt} + 3AĀ²e^{−iωt} + Ā³e^{−3iωt} = 0
We now wish to kill off the secular terms e^{iωt} and e^{−iωt}. Terms in e^{iωt}:

2iωA′ + 3A²Ā = 0
Again set A = (a/2)e^{iϕ}, so that A′ = (a′/2)e^{iϕ} + (a/2)ie^{iϕ}ϕ′. Then

2iω((a′/2)e^{iϕ} + (a/2)ie^{iϕ}ϕ′) + 3(a²/4)e^{2iϕ} · (a/2)e^{−iϕ} = 0

⇒ iωa′ − ωaϕ′ + (3/8)a³ = 0
The imaginary part gives iωa′ = 0 ⇒ a = a0 (a constant). This leaves us with a simple ODE to solve:

ωa0ϕ′ = (3/8)a0³

with solution

ϕ(T) = (3/8)(a0²/ω)T = (3/8)(a0²/ω)εt    (since T = εt)
Thus the solution we get, working as far as the first term x0, is

x = x0 + O(ε) = a0 cos((ω + (3/8)(a0²/ω)ε)t) + O(ε)

which, as far as the first term goes, is identical to what we obtained with Lindstedt's method, except for the sign of (3/8)(a0²/ω)ε (we started with +εx³ in this example).
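The predicted frequency shift can be checked numerically (a sketch: RK4 integration with assumed values ω = 1, ε = 0.1, a0 = 1, measuring the period between successive downward zero crossings):

```python
import math

def duffing_crossings(omega, eps, a0, dt, t_end):
    """Integrate x'' + omega^2 x + eps x^3 = 0, x(0)=a0, x'(0)=0, with RK4,
    returning the times at which x crosses zero going downwards."""
    def f(x, v):
        return v, -omega**2 * x - eps * x**3
    x, v, t = a0, 0.0, 0.0
    crossings = []
    while t < t_end:
        x_prev = x
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + dt/2 * k1x, v + dt/2 * k1v)
        k3x, k3v = f(x + dt/2 * k2x, v + dt/2 * k2v)
        k4x, k4v = f(x + dt * k3x, v + dt * k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        if x_prev > 0 >= x:   # downward zero crossing; linear interpolation
            crossings.append(t - dt + dt * x_prev / (x_prev - x))
    return crossings

omega, eps, a0 = 1.0, 0.1, 1.0
c = duffing_crossings(omega, eps, a0, 1e-3, 15.0)
measured = 2 * math.pi / (c[1] - c[0])            # frequency from one period
predicted = omega + (3 / 8) * (a0**2 / omega) * eps
print(f"measured {measured:.4f} vs multiple-scales {predicted:.4f}")
```

The measured and predicted frequencies should agree to O(ε²), i.e. to roughly three decimal places here.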
6. The WKB Method
In this section we focus on eigenvalue problems for 2nd order ODEs, specifically
d²y/dx² + λ²r(x)y = 0,    y(0) = y(1) = 0    (6.1)

i.e. y is a function of x.
6.1. Motivation. Stationary waves, for example on the string of a musical instrument, satisfy

T ∂²y/∂x² = ρ ∂²y/∂t²

where T is the tension, ρ the density of the string, y the displacement, and x the distance along the string.
Figure 6.1.1: a string of density ρ(x) under tension T, with displacement y at position x.
y(0, t) = y(1, t) = 0 ⇒ the string is pinned down at both ends. Standing wave solutions have the form

y(x, t) = y(x) cos ωt
and so we reduce the wave equation to

d²y/dx² + (ρ(x)/T)ω²y = 0

which is in the form of (6.1) with λ = ω and r = ρ(x)/T.
6.1.1. Constant density. Consider the case ρ(x) = constant; then

d²y/dx² + (ω²/c²)y = 0    (where c² = T/ρ)

This gives

y(x) = A cos(λx) + B sin(λx)    (where λ² = ω²/c²)
Using the boundary conditions gives

y(0) = 0 ⇒ A = 0,    y(1) = 0 ⇒ B sin λ = 0 ⇒ λ = ±nπ    (B ≠ 0, n ∈ Z)

thus solutions have the form

y = B sin(nπx)    (6.2)
Solutions only exist for λ ∈ πZ, i.e. special or "eigen" values. In general we only get closed-form solutions for a simple r(x). Note: we are using "boundary conditions" y(0), y(1) rather than initial conditions y(0), y′(0). This means solutions do not exist for all λ (i.e. they have to "fit"). Like eigenvectors, the eigenfunctions y can be scaled by any constant B.
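The eigenvalues λ = nπ can be recovered numerically; a minimal finite-difference sketch (discretising −y″ on the interior points of [0, 1]):

```python
import numpy as np

# Discretise y'' + lambda^2 y = 0, y(0) = y(1) = 0 on n - 1 interior points:
# the eigenvalues of -d^2/dx^2 should be lambda^2 = (n pi)^2
n = 200
h = 1.0 / n
main = np.full(n - 1, 2.0) / h**2          # tridiagonal -d^2/dx^2 stencil
off = np.full(n - 2, -1.0) / h**2
M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lam = np.sqrt(np.linalg.eigvalsh(M)[:3])   # three smallest lambda values
print(lam)                                  # cf. pi, 2 pi, 3 pi
```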
6.2. The WKB Approximation. The method is named after Wentzel, Kramers and Brillouin (though the work done by Jeffreys two years earlier had not been recognised by these three, and so he is often denied credit for it).
If ω or λ is very large, the zeros of (6.2) are very close together; we intend to use an asymptotic expansion for large λ. If r were constant in (6.1) we would have solutions of the form

y(x) = Ae^{±iλ√r x}

So we will try solutions of the form

y = e^{λg(x,λ)}
Then

dy/dx = λg′e^{λg}    and    d²y/dx² = λ²(g′)²e^{λg} + λg″e^{λg}

We substitute into (6.1) and divide through by λ²e^{λg} to get

(1/λ)g″ + (g′)² + r = 0
This is a 1st order ODE in g′; setting h = g′,

(1/λ)h′ + h² + r = 0

Noting that 1/λ behaves like ε (since we are considering λ large), we look for an expansion of the form

h(x) = h0(x) + (1/λ)h1(x) + (1/λ²)h2(x) + O(1/λ³)
and since h = g′, writing g = g0(x) + (1/λ)g1(x) + (1/λ²)g2(x) + O(1/λ³) (so that gi′ = hi), the original expression for y becomes

y = e^{λg0 + g1 + (1/λ)g2 + O(1/λ²)}
Example 6.2.1 (Airy's Equation). Consider

y″ + λ²xy = 0

Using y = e^{λg0 + g1 + (1/λ)g2 + O(1/λ²)} we have

y′ = e^{λg0 + g1 + ···}(λg0′ + g1′ + ···)

y″ = e^{λg0 + g1 + ···}(λg0′ + g1′ + ···)² + e^{λg0 + g1 + ···}(λg0″ + g1″ + ···)

We substitute this into the original equation and divide through by e^{λg0 + g1 + ···} to get

(λg0′ + g1′ + ···)² + λg0″ + g1″ + ··· + λ²x = 0
Terms in λ²:

(g0′)² + x = 0 ⇒ g0′ = ±ix^{1/2} ⇒ g0 = ±(2/3)ix^{3/2} + A

We ignore the constant A, as it merely factorises out of y as a constant multiple of the solution and is of no real value to us.
Terms in λ¹, using g0 = (2/3)ix^{3/2}:

g0″ + 2g0′g1′ = 0 ⇒ (1/2)ix^{−1/2} + 2ix^{1/2}g1′ = 0 ⇒ 1/(4x) + g1′ = 0

⇒ g1 = −(1/4)log x + B
We didn't need absolute value signs for the log in this case since 0 < x < 1. Also, if we were to take the other sign for g0 then, since g0″ inherits the same sign, we would still get the same solution for g1. The general solution is
y = Ae^{iλ(2/3)x^{3/2} − (1/4)log x} + Be^{−iλ(2/3)x^{3/2} − (1/4)log x}

  = (1/x^{1/4})[C cos((2/3)λx^{3/2}) + D sin((2/3)λx^{3/2})] + O(1/λ²)    as λ → ∞
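A numerical residual check of this approximation (a sketch, with the assumed value λ = 40): substituting y = x^{−1/4} sin((2/3)λx^{3/2}) into y″ + λ²xy, with y″ computed by central differences, and comparing against the size of the λ²xy term:

```python
import math

lam = 40.0
y = lambda x: x**-0.25 * math.sin((2/3) * lam * x**1.5)  # WKB approximation

h = 1e-4
xs = [0.3 + 0.001 * i for i in range(701)]               # grid on [0.3, 1]
# residual of y'' + lam^2 x y, with y'' by central differences
resid = max(abs((y(x - h) - 2*y(x) + y(x + h)) / h**2 + lam**2 * x * y(x))
            for x in xs)
scale = max(abs(lam**2 * x * y(x)) for x in xs)
print(f"relative residual {resid / scale:.1e}")
```

The relative residual is far below 1, consistent with the O(1/λ²) error term.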
In general, if y″ + λ²r(x)y = 0 then

y(x, λ) = (1/r(x)^{1/4})[A cos(λ ∫_a^x √r(t) dt) + B sin(λ ∫_a^x √r(t) dt)] + O(1/λ²)    as λ → ∞
Returning to the particular problem, we still haven't found λ or used the boundary conditions. To find approximate eigenvalues λ (for λ large):

y(0) = 0 ⇒ C = 0

y(1) = 0 ⇒ sin((2/3)λ) = 0 ⇒ (2/3)λ = nπ

⇒ λ ≈ (3/2)nπ    (for large n ∈ Z)
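A shooting-method sketch can compare this estimate with the true eigenvalue (RK4 on y″ = −λ²xy with y(0) = 0, y′(0) = 1, bisecting on y(1; λ) = 0; the bracket around 3nπ/2 for n = 8 is an assumption):

```python
import math

def shoot(lam, n=4000):
    """RK4 for y'' = -lam^2 x y, y(0) = 0, y'(0) = 1; returns y(1)."""
    h = 1.0 / n
    def f(x, y, v):
        return v, -lam**2 * x * y
    y, v = 0.0, 1.0
    for i in range(n):
        x = i * h
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = f(x + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return y

wkb = 1.5 * 8 * math.pi                 # WKB estimate for n = 8
lo, hi = wkb - 1.5, wkb + 0.5           # assumed bracket for the eigenvalue
assert shoot(lo) * shoot(hi) < 0
for _ in range(40):                     # bisection on y(1; lambda) = 0
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"shooting eigenvalue {lo:.3f} vs WKB estimate {wkb:.3f}")
```

The two agree to about 1% already at n = 8, and the relative agreement improves as n grows.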
Example 6.2.2. This time consider

y″ + λ²(1 + 3 sin² x)²y = 0,    y(0) = y(1) = 0

Again using y = e^{λg0 + g1 + ···}, so that

(λg0′ + g1′ + ···)² + (λg0″ + g1″ + ···) + λ²(1 + 3 sin² x)² = 0

Terms in λ²:

(g0′)² + (1 + 3 sin² x)² = 0 ⇒ g0′ = ±i(1 + 3 sin² x)
We now use the double-angle identity sin² x = (1/2)(1 − cos 2x) to get

g0′ = ±i((5/2) − (3/2)cos 2x)

⇒ g0 = ±i((5/2)x − (3/4)sin 2x)
Terms in λ¹ (taking the positive g0):

2g0′g1′ + g0″ = 0 ⇒ 2(1 + 3 sin² x)g1′ + 3 sin 2x = 0 ⇒ g1′ = −3 sin 2x/(2(1 + 3 sin² x))

⇒ g1 = −(1/2)log(1 + 3 sin² x)
Again, in this example we'd gain no new information if we used the other form of g0. Putting it all together we have

y = (1/(1 + 3 sin² x)^{1/2})(C cos(λ((5/2)x − (3/4)sin 2x)) + D sin(λ((5/2)x − (3/4)sin 2x)))
Now we use the boundary conditions:

y(0) = 0 ⇒ C = 0,    y(1) = 0 ⇒ λ((5/2) − (3/4)sin 2) = nπ

⇒ λn ≈ nπ/((5/2) − (3/4)sin 2)
For example, plugging in some numbers, say n = 8, this approximation yields λ8 = 13.824, whilst the exact value is 13.820.
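Those numbers can be reproduced with a shooting method (a sketch: RK4 on y″ = −λ²(1 + 3 sin² x)²y with y(0) = 0, y′(0) = 1, bisecting on y(1; λ) = 0; the bracket around the WKB estimate is an assumption):

```python
import math

def shoot(lam, n=4000):
    """RK4 for y'' = -lam^2 (1 + 3 sin^2 x)^2 y, y(0)=0, y'(0)=1; returns y(1)."""
    h = 1.0 / n
    def f(x, y, v):
        r = (1 + 3 * math.sin(x)**2)**2
        return v, -lam**2 * r * y
    y, v = 0.0, 1.0
    for i in range(n):
        x = i * h
        k1y, k1v = f(x, y, v)
        k2y, k2v = f(x + h/2, y + h/2 * k1y, v + h/2 * k1v)
        k3y, k3v = f(x + h/2, y + h/2 * k2y, v + h/2 * k2v)
        k4y, k4v = f(x + h, y + h * k3y, v + h * k3v)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return y

wkb = 8 * math.pi / (2.5 - 0.75 * math.sin(2))  # WKB estimate for n = 8
lo, hi = wkb - 0.5, wkb + 0.5                    # assumed bracket
assert shoot(lo) * shoot(hi) < 0
for _ in range(40):                              # bisection on y(1; lambda) = 0
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"shooting: {lo:.3f}, WKB: {wkb:.3f}")     # cf. 13.820 and 13.824
```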