Introduction to Numerical Methods
Lecture notes for MATH 3311

Jeffrey R. Chasnov

The Hong Kong University of Science and Technology
Department of Mathematics
Clear Water Bay, Kowloon
Hong Kong
Copyright © 2012 by Jeffrey Robert Chasnov
This work is licensed under the Creative Commons Attribution 3.0 Hong Kong License.
To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/hk/
or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco,
California, 94105, USA.
Preface
What follows are my lecture notes for Math 3311: Introduction to Numerical Methods, taught at the Hong Kong University of Science and Technology. Math 3311, with two lecture hours per week, is primarily for non-mathematics majors and is required by several engineering departments.

All web surfers are welcome to download these notes at

http://www.math.ust.hk/~machas/numerical-methods.pdf

and to use the notes freely for teaching and learning. I welcome any comments, suggestions or corrections sent by email to [email protected].
Contents
1 IEEE Arithmetic
  1.1 Definitions
  1.2 Numbers with a decimal or binary point
  1.3 Examples of binary numbers
  1.4 Hex numbers
  1.5 4-bit unsigned integers as hex numbers
  1.6 IEEE single precision format
  1.7 Special numbers
  1.8 Examples of computer numbers
  1.9 Inexact numbers
      1.9.1 Find smallest positive integer that is not exact in single precision
  1.10 Machine epsilon
  1.11 IEEE double precision format
  1.12 Roundoff error example

2 Root Finding
  2.1 Bisection Method
  2.2 Newton's Method
  2.3 Secant Method
      2.3.1 Estimate √2 = 1.41421356 using Newton's Method
      2.3.2 Example of fractals using Newton's Method
  2.4 Order of convergence
      2.4.1 Newton's Method
      2.4.2 Secant Method

3 Systems of equations
  3.1 Gaussian Elimination
  3.2 LU decomposition
  3.3 Partial pivoting
  3.4 Operation counts
  3.5 System of nonlinear equations

4 Least-squares approximation
  4.1 Fitting a straight line
  4.2 Fitting to a linear combination of functions

5 Interpolation
  5.1 Polynomial interpolation
      5.1.1 Vandermonde polynomial
      5.1.2 Lagrange polynomial
      5.1.3 Newton polynomial
  5.2 Piecewise linear interpolation
  5.3 Cubic spline interpolation
  5.4 Multidimensional interpolation

6 Integration
  6.1 Elementary formulas
      6.1.1 Midpoint rule
      6.1.2 Trapezoidal rule
      6.1.3 Simpson's rule
  6.2 Composite rules
      6.2.1 Trapezoidal rule
      6.2.2 Simpson's rule
  6.3 Local versus global error
  6.4 Adaptive integration

7 Ordinary differential equations
  7.1 Examples of analytical solutions
      7.1.1 Initial value problem
      7.1.2 Boundary value problems
      7.1.3 Eigenvalue problem
  7.2 Numerical methods: initial value problem
      7.2.1 Euler method
      7.2.2 Modified Euler method
      7.2.3 Second-order Runge-Kutta methods
      7.2.4 Higher-order Runge-Kutta methods
      7.2.5 Adaptive Runge-Kutta Methods
      7.2.6 System of differential equations
  7.3 Numerical methods: boundary value problem
      7.3.1 Finite difference method
      7.3.2 Shooting method
  7.4 Numerical methods: eigenvalue problem
      7.4.1 Finite difference method
      7.4.2 Shooting method
Chapter 1
IEEE Arithmetic
1.1 Definitions
Bit = 0 or 1
Byte = 8 bits
Word = Reals: 4 bytes (single precision), 8 bytes (double precision)
     = Integers: 1, 2, 4, or 8 byte signed; 1, 2, 4, or 8 byte unsigned
1.2 Numbers with a decimal or binary point
An eight-digit number with a decimal or binary point has the place values:

Decimal: $10^3 \; 10^2 \; 10^1 \; 10^0 \; . \; 10^{-1} \; 10^{-2} \; 10^{-3} \; 10^{-4}$
Binary: $2^3 \; 2^2 \; 2^1 \; 2^0 \; . \; 2^{-1} \; 2^{-2} \; 2^{-3} \; 2^{-4}$
1.3 Examples of binary numbers
Decimal   Binary
1         1
2         10
3         11
4         100
0.5       0.1
1.5       1.1
1.4 Hex numbers
{0, 1, 2, 3, ..., 9, 10, 11, 12, 13, 14, 15} = {0, 1, 2, 3, ..., 9, a, b, c, d, e, f}
1.5 4-bit unsigned integers as hex numbers
Decimal   Binary   Hex
1         0001     1
2         0010     2
3         0011     3
...       ...      ...
10        1010     a
...       ...      ...
15        1111     f
1.6 IEEE single precision format
The 32 bits are divided into three fields:

s = sign: bit 0
e = biased exponent: bits 1-8
f = fraction: bits 9-31

The number represented is
$$\# = (-1)^s \times 2^{e-127} \times 1.f,$$
where s = sign, e = biased exponent, p = e − 127 = exponent, and 1.f = significand (uses binary point).
1.7 Special numbers
Smallest exponent: e = 0000 0000, represents denormal numbers (1.f → 0.f).
Largest exponent: e = 1111 1111, represents ±∞ if f = 0; represents NaN if f ≠ 0.

Number range: e = 1111 1111 = 2^8 − 1 = 255 is reserved and e = 0000 0000 = 0 is reserved, so p = e − 127 satisfies 1 − 127 ≤ p ≤ 254 − 127, or −126 ≤ p ≤ 127.
Smallest positive normal number
= 1.0000 0000 ⋯ 0000 × 2^−126 ≈ 1.2 × 10^−38
bin: 0000 0000 1000 0000 0000 0000 0000 0000
hex: 00800000
MATLAB: realmin('single')

Largest positive number
= 1.1111 1111 ⋯ 1111 × 2^127 = (1 + (1 − 2^−23)) × 2^127 ≈ 2^128 ≈ 3.4 × 10^38
bin: 0111 1111 0111 1111 1111 1111 1111 1111
hex: 7f7fffff
MATLAB: realmax('single')

Zero
bin: 0000 0000 0000 0000 0000 0000 0000 0000
hex: 00000000

Subnormal numbers
Allow 1.f → 0.f (in software).
Smallest positive number = 0.0000 0000 ⋯ 0001 × 2^−126 = 2^−23 × 2^−126 ≈ 1.4 × 10^−45
1.8 Examples of computer numbers
What are 1.0, 2.0, and 1/2 in hex?

1.0 = (−1)^0 × 2^(127−127) × 1.0
Therefore, s = 0, e = 0111 1111, f = 0000 0000 0000 0000 0000 000
bin: 0011 1111 1000 0000 0000 0000 0000 0000
hex: 3f80 0000

2.0 = (−1)^0 × 2^(128−127) × 1.0
Therefore, s = 0, e = 1000 0000, f = 0000 0000 0000 0000 0000 000
bin: 0100 0000 0000 0000 0000 0000 0000 0000
hex: 4000 0000
1/2 = (−1)^0 × 2^(126−127) × 1.0
Therefore, s = 0, e = 0111 1110, f = 0000 0000 0000 0000 0000 000
bin: 0011 1111 0000 0000 0000 0000 0000 0000
hex: 3f00 0000
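These bit patterns can be verified directly in MATLAB with the built-in num2hex function (a quick check, not part of the original notes):

num2hex(single(1.0))   % returns 3f800000
num2hex(single(2.0))   % returns 40000000
num2hex(single(0.5))   % returns 3f000000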
1.9 Inexact numbers
Example: the number 1/3 can be written as
$$\frac{1}{3} = (-1)^0 \times \frac{1}{4} \times \left(1 + \frac{1}{3}\right),$$
so that $p = e - 127 = -2$ and $e = 125 = 128 - 3$, or in binary, $e = 0111\ 1101$. How is $f = 1/3$ represented in binary? To compute the binary number, multiply successively by 2 as follows:
0.333 . . . 0.
0.666 . . . 0.0
1.333 . . . 0.01
0.666 . . . 0.010
1.333 . . . 0.0101
etc.
so that 1/3 exactly in binary is $0.010101\ldots$. With only 23 bits to represent $f$, the number is inexact and we have
$$f = 0101\ 0101\ 0101\ 0101\ 0101\ 011,$$
where we have rounded to the nearest binary number (here, rounded up). The machine number 1/3 is then represented as

0011 1110 1010 1010 1010 1010 1010 1011,

or in hex, 3eaaaaab.
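MATLAB can confirm this rounded representation (a quick check using num2hex):

num2hex(single(1/3))   % returns 3eaaaaab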
1.9.1 Find smallest positive integer that is not exact in single precision
Let $N$ be the smallest positive integer that is not exact. Now, I claim that
$$N - 2 = 2^{23} \times 1.11\ldots1,$$
and
$$N - 1 = 2^{24} \times 1.00\ldots0.$$
The integer $N$ would then require a one-bit in the $2^{-24}$ position, which is not available. Therefore, the smallest positive integer that is not exact is $2^{24} + 1 = 16\,777\,217$. In MATLAB, single(2^24) has the same value as single(2^24+1). Since single(2^24+1) is exactly halfway between the two consecutive machine numbers $2^{24}$ and $2^{24}+2$, MATLAB rounds to the number with a final zero-bit in f, which is $2^{24}$.
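This rounding behavior is easy to test in MATLAB (a small check of the claim above):

single(2^24) == single(2^24 + 1)   % true: both round to the same machine number
single(2^24 + 2) - single(2^24)    % 2: consecutive machine numbers here differ by 2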
1.10 Machine epsilon
Machine epsilon ($\epsilon_{\rm mach}$) is the distance between 1 and the next largest number. If $0 \le \delta < \epsilon_{\rm mach}/2$, then $1 + \delta = 1$ in computer math. Also, since
$$x + y = x(1 + y/x),$$
if $0 \le y/x < \epsilon_{\rm mach}/2$, then $x + y = x$ in computer math.
Find $\epsilon_{\rm mach}$
The number 1 in the IEEE format is written as
$$1 = 2^0 \times 1.000\ldots0,$$
with 23 zeros following the binary point. The number just larger than 1 has a one-bit in the 23rd position after the binary point. Therefore,
$$\epsilon_{\rm mach} = 2^{-23} \approx 1.192 \times 10^{-7}.$$
What is the distance between 1 and the number just smaller than 1? Here, the number just smaller than one can be written as
$$2^{-1} \times 1.111\ldots1 = 2^{-1}\left(1 + (1 - 2^{-23})\right) = 1 - 2^{-24}.$$
Therefore, this distance is $2^{-24} = \epsilon_{\rm mach}/2$.

The spacing between numbers is uniform between powers of 2, with logarithmic spacing of the powers of 2. That is, the spacing of numbers between 1 and 2 is $2^{-23}$, between 2 and 4 is $2^{-22}$, between 4 and 8 is $2^{-21}$, etc. This spacing changes for denormal numbers, where the spacing is uniform all the way down to zero.
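These spacings can be checked in MATLAB (a short sketch using the built-in eps):

eps('single')                              % 2^-23, the distance from 1 to the next single
eps(single(2))                             % 2^-22: the spacing doubles between 2 and 4
single(1) + eps('single')/4 == single(1)   % true: increments below eps_mach/2 vanish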
Find the machine number just greater than 5
A rough estimate would be $5(1 + \epsilon_{\rm mach}) = 5 + 5\epsilon_{\rm mach}$, but this is not exact. The exact answer can be found by writing
$$5 = 2^2\left(1 + \frac{1}{4}\right),$$
so that the next largest number is
$$2^2\left(1 + \frac{1}{4} + 2^{-23}\right) = 5 + 2^{-21} = 5 + 4\epsilon_{\rm mach}.$$
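A quick MATLAB confirmation of this result (eps(x) returns the machine-number spacing at x):

eps(single(5))               % 2^-21, which is 4*eps('single')
single(5) + eps(single(5))   % the machine number just greater than 5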
1.11 IEEE double precision format
Most computations take place in double precision, where round-off error is reduced, and all of the above calculations in single precision can be repeated for double precision. The format is

s = sign: bit 0
e = biased exponent: bits 1-11
f = fraction: bits 12-63
$$\# = (-1)^s \times 2^{e-1023} \times 1.f,$$
where s = sign, e = biased exponent, p = e − 1023 = exponent, and 1.f = significand (uses binary point).
1.12 Roundoff error example
Consider solving the quadratic equation
$$x^2 + 2bx - 1 = 0,$$
where $b$ is a parameter. The quadratic formula yields the two solutions
$$x_\pm = -b \pm \sqrt{b^2 + 1}.$$
Consider the solution with $b > 0$ and $x > 0$ (the $x_+$ solution) given by
$$x = -b + \sqrt{b^2 + 1}. \qquad (1.1)$$
As $b \to \infty$,
$$x = -b + \sqrt{b^2 + 1} = -b + b\sqrt{1 + 1/b^2} = b\left(\sqrt{1 + 1/b^2} - 1\right) \approx b\left(1 + \frac{1}{2b^2} - 1\right) = \frac{1}{2b}.$$
Now in double precision, realmin ≈ 2.2 × 10^−308 and we would like $x$ to be accurate to this value before it goes to 0 via denormal numbers. Therefore, $x$ should be computed accurately to $b \approx 1/(2 \times {\rm realmin}) \approx 2 \times 10^{307}$. What happens if we compute (1.1) directly? Then $x = 0$ when $b^2 + 1 = b^2$, or $1 + 1/b^2 = 1$. That is, $1/b^2 = \epsilon_{\rm mach}/2$, or $b = \sqrt{2}/\sqrt{\epsilon_{\rm mach}} \approx 10^8$.
For a subroutine written to compute the solution of a quadratic for a general user, this is not good enough. The way for a software designer to solve this problem is to compute the solution for $x$ as
$$x = \frac{1}{b + \sqrt{b^2 + 1}}.$$
In this form, if $b^2 + 1 = b^2$, then $x = 1/(2b)$, which is the correct asymptotic form.
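The loss of accuracy in the direct formula is easy to demonstrate in MATLAB (a sketch; the value of b is my own choice):

b = 1e8;
x1 = -b + sqrt(b^2 + 1)        % returns 0: catastrophic cancellation
x2 = 1/(b + sqrt(b^2 + 1))     % returns 5.0000e-09, the correct value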
Chapter 2
Root Finding
Solve π(π₯) = 0 for π₯, when an explicit analytical solution is impossible.
2.1 Bisection Method
The bisection method is the easiest to numerically implement and almost always works. The main disadvantage is that convergence is slow. If the bisection method results in a computer program that runs too slowly, then other faster methods may be chosen; otherwise it is a good choice of method.
We want to construct a sequence $x_0, x_1, x_2, \ldots$ that converges to the root $x = r$ that solves $f(x) = 0$. We choose $x_0$ and $x_1$ such that $x_0 < r < x_1$. We say that $x_0$ and $x_1$ bracket the root. With $f(r) = 0$, we want $f(x_0)$ and $f(x_1)$ to be of opposite sign, so that $f(x_0)f(x_1) < 0$. We then assign $x_2$ to be the midpoint of $x_0$ and $x_1$, that is $x_2 = (x_0 + x_1)/2$, or
$$x_2 = x_0 + \frac{x_1 - x_0}{2}.$$
The sign of $f(x_2)$ can then be determined. The value of $x_3$ is then chosen as either the midpoint of $x_0$ and $x_2$ or as the midpoint of $x_2$ and $x_1$, depending on whether $x_0$ and $x_2$ bracket the root, or $x_2$ and $x_1$ bracket the root. The root, therefore, stays bracketed at all times. The algorithm proceeds in this fashion and is typically stopped when the increment to the left side of the bracket (above, given by $(x_1 - x_0)/2$) is smaller than some required precision.
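A minimal MATLAB sketch of this algorithm (the function name and interface are my own choices; save as bisection.m):

function r = bisection(f, x0, x1, tol)
    % f is a function handle; x0 and x1 must bracket the root
    if f(x0)*f(x1) >= 0
        error('x0 and x1 do not bracket a root');
    end
    while (x1 - x0)/2 > tol
        x2 = x0 + (x1 - x0)/2;      % midpoint of the bracket
        if f(x0)*f(x2) < 0          % root lies between x0 and x2
            x1 = x2;
        else                        % root lies between x2 and x1
            x0 = x2;
        end
    end
    r = x0 + (x1 - x0)/2;
end

For example, bisection(@(x) x.^2 - 2, 1, 2, 1e-8) returns an approximation to √2.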
2.2 Newton's Method
This is the fastest method, but requires analytical computation of the derivative of $f(x)$. Also, the method may not always converge to the desired root.

We can derive Newton's Method graphically, or by a Taylor series. We again want to construct a sequence $x_0, x_1, x_2, \ldots$ that converges to the root $x = r$. Consider the $x_{n+1}$ member of this sequence, and Taylor series expand $f(x_{n+1})$ about the point $x_n$. We have
$$f(x_{n+1}) = f(x_n) + (x_{n+1} - x_n)f'(x_n) + \ldots.$$
To determine $x_{n+1}$, we drop the higher-order terms in the Taylor series, and assume $f(x_{n+1}) = 0$. Solving for $x_{n+1}$, we have
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
Starting Newton's Method requires a guess for $x_0$, hopefully close to the root $x = r$.
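A minimal MATLAB sketch of the iteration (the interface is my own choice: function handles f and fp for f and f', a starting guess, a tolerance, and a maximum number of iterations):

function x = newton(f, fp, x0, tol, nmax)
    x = x0;
    for n = 1:nmax
        dx = -f(x)/fp(x);        % the Newton increment
        x = x + dx;
        if abs(dx) < tol, return; end
    end
end

For example, newton(@(x) x^2 - 2, @(x) 2*x, 1, 1e-12, 20) reproduces the √2 iterates computed in Section 2.3.1 below.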
2.3 Secant Method
The Secant Method is second best to Newton's Method, and is used when a faster convergence than Bisection is desired, but it is too difficult or impossible to take an analytical derivative of the function $f(x)$. We write in place of $f'(x_n)$,
$$f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}.$$
Starting the Secant Method requires a guess for both π₯0 and π₯1.
2.3.1 Estimate √2 = 1.41421356 using Newton's Method
The $\sqrt{2}$ is the zero of the function $f(x) = x^2 - 2$. To implement Newton's Method, we use $f'(x) = 2x$. Therefore, Newton's Method is the iteration
$$x_{n+1} = x_n - \frac{x_n^2 - 2}{2x_n}.$$
We take as our initial guess $x_0 = 1$. Then
$$x_1 = 1 - \frac{-1}{2} = \frac{3}{2} = 1.5,$$
$$x_2 = \frac{3}{2} - \frac{\frac{9}{4} - 2}{3} = \frac{17}{12} = 1.416667,$$
$$x_3 = \frac{17}{12} - \frac{\frac{17^2}{12^2} - 2}{\frac{17}{6}} = \frac{577}{408} = 1.414216.$$
2.3.2 Example of fractals using Newton's Method
Consider the complex roots of the equation $f(z) = 0$, where
$$f(z) = z^3 - 1.$$
These roots are the three cubic roots of unity. With
$$e^{i2\pi n} = 1, \quad n = 0, 1, 2, \ldots,$$
the three unique cubic roots of unity are given by
$$1, \; e^{i2\pi/3}, \; e^{i4\pi/3}.$$
With
$$e^{i\theta} = \cos\theta + i\sin\theta,$$
and $\cos(2\pi/3) = -1/2$, $\sin(2\pi/3) = \sqrt{3}/2$, the three cubic roots of unity are
$$r_1 = 1, \quad r_2 = -\frac{1}{2} + \frac{\sqrt{3}}{2}i, \quad r_3 = -\frac{1}{2} - \frac{\sqrt{3}}{2}i.$$
The interesting idea here is to determine which initial values of $z_0$ in the complex plane converge to which of the three cubic roots of unity.

Newton's method is
$$z_{n+1} = z_n - \frac{z_n^3 - 1}{3z_n^2}.$$
If the iteration converges to $r_1$, we color $z_0$ red; $r_2$, blue; $r_3$, green. The result will be shown in lecture.
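A rough MATLAB sketch of this coloring (the grid size, domain, and iteration count are my own choices, and the lecture code may differ; the nearest-root step uses implicit array expansion, so older MATLAB versions would need bsxfun):

N = 400;
[x, y] = meshgrid(linspace(-2, 2, N));
z = x + 1i*y;                              % the initial values z0
for n = 1:30
    z = z - (z.^3 - 1)./(3*z.^2);          % vectorized Newton iteration
end
r = [1, -1/2 + sqrt(3)/2*1i, -1/2 - sqrt(3)/2*1i];
[~, idx] = min(abs(z(:) - r), [], 2);      % index of the nearest cubic root
image(reshape(idx, N, N));
colormap([1 0 0; 0 0 1; 0 1 0]);           % r1 red, r2 blue, r3 green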
2.4 Order of convergence
Let $r$ be the root and $x_n$ be the $n$th approximation to the root. Define the error as
$$\epsilon_n = r - x_n.$$
If for large $n$ we have the approximate relationship
$$|\epsilon_{n+1}| = k|\epsilon_n|^p,$$
with $k$ a positive constant, then we say the root-finding numerical method is of order $p$. Larger values of $p$ correspond to faster convergence to the root. The order of convergence of bisection is one: the error is reduced by approximately a factor of 2 with each iteration, so that
$$|\epsilon_{n+1}| = \frac{1}{2}|\epsilon_n|.$$
We now find the order of convergence for Newton's Method and for the Secant Method.
2.4.1 Newton's Method
We start with Newton's Method
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
Subtracting both sides from $r$, we have
$$r - x_{n+1} = r - x_n + \frac{f(x_n)}{f'(x_n)},$$
or
$$\epsilon_{n+1} = \epsilon_n + \frac{f(x_n)}{f'(x_n)}. \qquad (2.1)$$
We use Taylor series to expand the functions $f(x_n)$ and $f'(x_n)$ about the root $r$, using $f(r) = 0$. We have
$$\begin{aligned}
f(x_n) &= f(r) + (x_n - r)f'(r) + \tfrac{1}{2}(x_n - r)^2 f''(r) + \ldots \\
&= -\epsilon_n f'(r) + \tfrac{1}{2}\epsilon_n^2 f''(r) + \ldots; \\
f'(x_n) &= f'(r) + (x_n - r)f''(r) + \tfrac{1}{2}(x_n - r)^2 f'''(r) + \ldots \\
&= f'(r) - \epsilon_n f''(r) + \tfrac{1}{2}\epsilon_n^2 f'''(r) + \ldots.
\end{aligned} \qquad (2.2)$$
To make further progress, we will make use of the following standard Taylor series:
$$\frac{1}{1 - \epsilon} = 1 + \epsilon + \epsilon^2 + \ldots, \qquad (2.3)$$
which converges for $|\epsilon| < 1$. Substituting (2.2) into (2.1), and using (2.3) yields
$$\begin{aligned}
\epsilon_{n+1} &= \epsilon_n + \frac{f(x_n)}{f'(x_n)} \\
&= \epsilon_n + \frac{-\epsilon_n f'(r) + \tfrac{1}{2}\epsilon_n^2 f''(r) + \ldots}{f'(r) - \epsilon_n f''(r) + \tfrac{1}{2}\epsilon_n^2 f'''(r) + \ldots} \\
&= \epsilon_n + \frac{-\epsilon_n + \tfrac{1}{2}\epsilon_n^2 \frac{f''(r)}{f'(r)} + \ldots}{1 - \epsilon_n \frac{f''(r)}{f'(r)} + \ldots} \\
&= \epsilon_n + \left(-\epsilon_n + \tfrac{1}{2}\epsilon_n^2 \frac{f''(r)}{f'(r)} + \ldots\right)\left(1 + \epsilon_n \frac{f''(r)}{f'(r)} + \ldots\right) \\
&= \epsilon_n + \left(-\epsilon_n + \epsilon_n^2\left(\tfrac{1}{2}\frac{f''(r)}{f'(r)} - \frac{f''(r)}{f'(r)}\right) + \ldots\right) \\
&= -\frac{1}{2}\frac{f''(r)}{f'(r)}\epsilon_n^2 + \ldots.
\end{aligned}$$
Therefore, we have shown that
$$|\epsilon_{n+1}| = k|\epsilon_n|^2$$
as $n \to \infty$, with
$$k = \frac{1}{2}\left|\frac{f''(r)}{f'(r)}\right|,$$
provided $f'(r) \neq 0$. Newton's method is thus of order 2 at simple roots.
2.4.2 Secant Method
Determining the order of the Secant Method proceeds in a similar fashion. We start with
$$x_{n+1} = x_n - \frac{(x_n - x_{n-1})f(x_n)}{f(x_n) - f(x_{n-1})}.$$
We subtract both sides from $r$ and make use of
$$x_n - x_{n-1} = (r - x_{n-1}) - (r - x_n) = \epsilon_{n-1} - \epsilon_n,$$
and the Taylor series
$$f(x_n) = -\epsilon_n f'(r) + \tfrac{1}{2}\epsilon_n^2 f''(r) + \ldots,$$
$$f(x_{n-1}) = -\epsilon_{n-1} f'(r) + \tfrac{1}{2}\epsilon_{n-1}^2 f''(r) + \ldots,$$
so that
$$\begin{aligned}
f(x_n) - f(x_{n-1}) &= (\epsilon_{n-1} - \epsilon_n)f'(r) + \tfrac{1}{2}(\epsilon_n^2 - \epsilon_{n-1}^2)f''(r) + \ldots \\
&= (\epsilon_{n-1} - \epsilon_n)\left(f'(r) - \tfrac{1}{2}(\epsilon_{n-1} + \epsilon_n)f''(r) + \ldots\right).
\end{aligned}$$
We therefore have
$$\begin{aligned}
\epsilon_{n+1} &= \epsilon_n + \frac{-\epsilon_n f'(r) + \tfrac{1}{2}\epsilon_n^2 f''(r) + \ldots}{f'(r) - \tfrac{1}{2}(\epsilon_{n-1} + \epsilon_n)f''(r) + \ldots} \\
&= \epsilon_n - \epsilon_n\,\frac{1 - \tfrac{1}{2}\epsilon_n \frac{f''(r)}{f'(r)} + \ldots}{1 - \tfrac{1}{2}(\epsilon_{n-1} + \epsilon_n)\frac{f''(r)}{f'(r)} + \ldots} \\
&= \epsilon_n - \epsilon_n\left(1 - \tfrac{1}{2}\epsilon_n\frac{f''(r)}{f'(r)} + \ldots\right)\left(1 + \tfrac{1}{2}(\epsilon_{n-1} + \epsilon_n)\frac{f''(r)}{f'(r)} + \ldots\right) \\
&= -\frac{1}{2}\frac{f''(r)}{f'(r)}\epsilon_{n-1}\epsilon_n + \ldots,
\end{aligned}$$
or to leading order
$$|\epsilon_{n+1}| = \frac{1}{2}\left|\frac{f''(r)}{f'(r)}\right||\epsilon_{n-1}||\epsilon_n|. \qquad (2.4)$$
The order of convergence is not yet obvious from this equation, and to determine the scaling law we look for a solution of the form
$$|\epsilon_{n+1}| = k|\epsilon_n|^p.$$
From this ansatz, we also have
$$|\epsilon_n| = k|\epsilon_{n-1}|^p,$$
and therefore
$$|\epsilon_{n+1}| = k^{p+1}|\epsilon_{n-1}|^{p^2}.$$
Substitution into (2.4) results in
$$k^{p+1}|\epsilon_{n-1}|^{p^2} = \frac{k}{2}\left|\frac{f''(r)}{f'(r)}\right||\epsilon_{n-1}|^{p+1}.$$
Equating the coefficient and the power of $\epsilon_{n-1}$ results in
$$k^p = \frac{1}{2}\left|\frac{f''(r)}{f'(r)}\right|,$$
and
$$p^2 = p + 1.$$
The order of convergence of the Secant Method, given by $p$, therefore is determined to be the positive root of the quadratic equation $p^2 - p - 1 = 0$, or
$$p = \frac{1 + \sqrt{5}}{2} \approx 1.618,$$
which coincidentally is a famous irrational number that is called The Golden Ratio, and goes by the symbol $\Phi$. We see that the Secant Method has an order of convergence lying between the Bisection Method and Newton's Method.
Chapter 3
Systems of equations
Consider the system of $n$ linear equations and $n$ unknowns, given by
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1, \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2, \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n.
\end{aligned}$$
We can write this system as the matrix equation
$$A\mathbf{x} = \mathbf{b},$$
with
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}, \quad
\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad
\mathbf{b} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}.$$
3.1 Gaussian Elimination
The standard numerical algorithm to solve a system of linear equations is called Gaussian Elimination. We can illustrate this algorithm by example.

Consider the system of equations
$$\begin{aligned}
-3x_1 + 2x_2 - x_3 &= -1, \\
6x_1 - 6x_2 + 7x_3 &= -7, \\
3x_1 - 4x_2 + 4x_3 &= -6.
\end{aligned}$$
To perform Gaussian elimination, we form an augmented matrix by combining the matrix A with the column vector b:
$$\left(\begin{array}{rrr|r} -3 & 2 & -1 & -1 \\ 6 & -6 & 7 & -7 \\ 3 & -4 & 4 & -6 \end{array}\right).$$
Row reduction is then performed on this matrix. Allowed operations are (1) multiply any row by a constant, (2) add a multiple of one row to another row, (3) interchange the order of any rows. The goal is to convert the original matrix into an upper-triangular matrix.

We start with the first row of the matrix and work our way down as follows. First we multiply the first row by 2 and add it to the second row, and add the first row to the third row:
$$\left(\begin{array}{rrr|r} -3 & 2 & -1 & -1 \\ 0 & -2 & 5 & -9 \\ 0 & -2 & 3 & -7 \end{array}\right).$$
We then go to the second row. We multiply this row by $-1$ and add it to the third row:
$$\left(\begin{array}{rrr|r} -3 & 2 & -1 & -1 \\ 0 & -2 & 5 & -9 \\ 0 & 0 & -2 & 2 \end{array}\right).$$
The resulting equations can be determined from the matrix and are given by
$$\begin{aligned}
-3x_1 + 2x_2 - x_3 &= -1, \\
-2x_2 + 5x_3 &= -9, \\
-2x_3 &= 2.
\end{aligned}$$
These equations can be solved by backward substitution, starting from the last equation and working backwards. We have
$$\begin{aligned}
-2x_3 = 2 &\;\Rightarrow\; x_3 = -1, \\
-2x_2 = -9 - 5x_3 = -4 &\;\Rightarrow\; x_2 = 2, \\
-3x_1 = -1 - 2x_2 + x_3 = -6 &\;\Rightarrow\; x_1 = 2.
\end{aligned}$$
Therefore,
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ -1 \end{pmatrix}.$$
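This example can be checked in one line of MATLAB using the backslash operator discussed later in this chapter:

A = [-3 2 -1; 6 -6 7; 3 -4 4];
b = [-1; -7; -6];
x = A \ b     % returns [2; 2; -1]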
3.2 LU decomposition
The process of Gaussian Elimination also results in the factoring of the matrix A to
$$A = LU,$$
where L is a lower triangular matrix and U is an upper triangular matrix. Using the same matrix A as in the last section, we show how this factorization is realized. We have
$$\begin{pmatrix} -3 & 2 & -1 \\ 6 & -6 & 7 \\ 3 & -4 & 4 \end{pmatrix} \rightarrow \begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{pmatrix} = M_1 A,$$
where
$$M_1 A = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} -3 & 2 & -1 \\ 6 & -6 & 7 \\ 3 & -4 & 4 \end{pmatrix} = \begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{pmatrix}.$$
Note that the matrix $M_1$ performs row elimination on the first column. Two times the first row is added to the second row and one times the first row is added to the third row. The entries of the column of $M_1$ come from $2 = -(6/-3)$ and $1 = -(3/-3)$, as required for row elimination. The number $-3$ is called the pivot.

The next step is
$$\begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{pmatrix} \rightarrow \begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{pmatrix} = M_2(M_1 A),$$
where
$$M_2(M_1 A) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}\begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & -2 & 3 \end{pmatrix} = \begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{pmatrix}.$$
Here, $M_2$ multiplies the second row by $-1 = -(-2/-2)$ and adds it to the third row. The pivot is $-2$.

We now have
$$M_2 M_1 A = U$$
or
$$A = M_1^{-1}M_2^{-1}U.$$
The inverse matrices are easy to find. The matrix $M_1$ multiplies the first row by 2 and adds it to the second row, and multiplies the first row by 1 and adds it to the third row. To invert these operations, we need to multiply the first row by $-2$ and add it to the second row, and multiply the first row by $-1$ and add it to the third row. To check, with
$$M_1 M_1^{-1} = I,$$
we have
$$\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Similarly,
$$M_2^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}.$$
Therefore,
$$L = M_1^{-1}M_2^{-1}$$
is given by
$$L = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix},$$
which is lower triangular. The off-diagonal elements of $M_1^{-1}$ and $M_2^{-1}$ are simply combined to form L. Our LU decomposition is therefore
$$\begin{pmatrix} -3 & 2 & -1 \\ 6 & -6 & 7 \\ 3 & -4 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}\begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{pmatrix}.$$
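A quick numerical check of this factorization in MATLAB (typing in the factors found above):

A = [-3 2 -1; 6 -6 7; 3 -4 4];
L = [1 0 0; -2 1 0; -1 1 1];
U = [-3 2 -1; 0 -2 5; 0 0 -2];
L*U - A       % returns the zero matrix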
Another nice feature of the LU decomposition is that it can be done by overwriting A, therefore saving memory if the matrix A is very large.
The LU decomposition is useful when one needs to solve $A\mathbf{x} = \mathbf{b}$ for $\mathbf{x}$ when A is fixed and there are many different b's. First one determines L and U using Gaussian elimination. Then one writes
$$(LU)\mathbf{x} = L(U\mathbf{x}) = \mathbf{b}.$$
We let
$$\mathbf{y} = U\mathbf{x},$$
and first solve
$$L\mathbf{y} = \mathbf{b}$$
for $\mathbf{y}$ by forward substitution. We then solve
$$U\mathbf{x} = \mathbf{y}$$
for $\mathbf{x}$ by backward substitution. When we count operations, we will see that solving $(LU)\mathbf{x} = \mathbf{b}$ is significantly faster once L and U are in hand than solving $A\mathbf{x} = \mathbf{b}$ directly by Gaussian elimination.
We now illustrate the solution of $LU\mathbf{x} = \mathbf{b}$ using our previous example, where
$$L = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}, \quad U = \begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} -1 \\ -7 \\ -6 \end{pmatrix}.$$
With $\mathbf{y} = U\mathbf{x}$, we first solve $L\mathbf{y} = \mathbf{b}$, that is
$$\begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ -1 & 1 & 1 \end{pmatrix}\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} -1 \\ -7 \\ -6 \end{pmatrix}.$$
Using forward substitution,
$$\begin{aligned}
y_1 &= -1, \\
y_2 &= -7 + 2y_1 = -9, \\
y_3 &= -6 + y_1 - y_2 = 2.
\end{aligned}$$
We now solve $U\mathbf{x} = \mathbf{y}$, that is
$$\begin{pmatrix} -3 & 2 & -1 \\ 0 & -2 & 5 \\ 0 & 0 & -2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} -1 \\ -9 \\ 2 \end{pmatrix}.$$
Using backward substitution,
$$\begin{aligned}
-2x_3 = 2 &\;\Rightarrow\; x_3 = -1, \\
-2x_2 = -9 - 5x_3 = -4 &\;\Rightarrow\; x_2 = 2, \\
-3x_1 = -1 - 2x_2 + x_3 = -6 &\;\Rightarrow\; x_1 = 2,
\end{aligned}$$
and we have once again determined
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} 2 \\ 2 \\ -1 \end{pmatrix}.$$
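Minimal MATLAB sketches of the two substitutions (the function names and loop structure are my own choices; note that an empty inner product in MATLAB evaluates to 0, which handles the first and last rows):

function y = forwardsub(L, b)
    n = length(b); y = zeros(n, 1);
    for i = 1:n
        y(i) = (b(i) - L(i, 1:i-1)*y(1:i-1)) / L(i, i);
    end
end

function x = backsub(U, y)
    n = length(y); x = zeros(n, 1);
    for i = n:-1:1
        x(i) = (y(i) - U(i, i+1:n)*x(i+1:n)) / U(i, i);
    end
end

With L, U and b from the example above, backsub(U, forwardsub(L, b)) again returns [2; 2; -1].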
3.3 Partial pivoting
When performing Gaussian elimination, the diagonal element that one uses during the elimination procedure is called the pivot. To obtain the correct multiple, one uses the pivot as the divisor to the elements below the pivot. Gaussian elimination in this form will fail if the pivot is zero. In this situation, a row interchange must be performed.

Even if the pivot is not identically zero, a small value can result in big round-off errors. For very large matrices, one can easily lose all accuracy in the solution. To avoid these round-off errors arising from small pivots, row interchanges are made, and this technique is called partial pivoting (partial pivoting is in contrast to complete pivoting, where both rows and columns are interchanged). We will illustrate by example the LU decomposition using partial pivoting.

Consider
$$A = \begin{pmatrix} -2 & 2 & -1 \\ 6 & -6 & 7 \\ 3 & -8 & 4 \end{pmatrix}.$$
We interchange rows to place the largest element (in absolute value) in the pivot, or $a_{11}$, position. That is,
$$A \rightarrow \begin{pmatrix} 6 & -6 & 7 \\ -2 & 2 & -1 \\ 3 & -8 & 4 \end{pmatrix} = P_{12}A,$$
where
$$P_{12} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
is a permutation matrix that when multiplied on the left interchanges the first and second rows of a matrix. Note that $P_{12}^{-1} = P_{12}$. The elimination step is then
$$P_{12}A \rightarrow \begin{pmatrix} 6 & -6 & 7 \\ 0 & 0 & 4/3 \\ 0 & -5 & 1/2 \end{pmatrix} = M_1 P_{12}A,$$
where
$$M_1 = \begin{pmatrix} 1 & 0 & 0 \\ 1/3 & 1 & 0 \\ -1/2 & 0 & 1 \end{pmatrix}.$$
The final step requires one more row interchange:
$$M_1 P_{12}A \rightarrow \begin{pmatrix} 6 & -6 & 7 \\ 0 & -5 & 1/2 \\ 0 & 0 & 4/3 \end{pmatrix} = P_{23}M_1 P_{12}A = U.$$
Since the permutation matrices given by P are their own inverses, we can write our result as
$$(P_{23}M_1 P_{23})P_{23}P_{12}A = U.$$
Multiplication of M on the left by P interchanges rows, while multiplication on the right by P interchanges columns. That is,
$$P_{23}\begin{pmatrix} 1 & 0 & 0 \\ 1/3 & 1 & 0 \\ -1/2 & 0 & 1 \end{pmatrix}P_{23} = \begin{pmatrix} 1 & 0 & 0 \\ -1/2 & 0 & 1 \\ 1/3 & 1 & 0 \end{pmatrix}P_{23} = \begin{pmatrix} 1 & 0 & 0 \\ -1/2 & 1 & 0 \\ 1/3 & 0 & 1 \end{pmatrix}.$$
The net result on $M_1$ is an interchange of the nondiagonal elements $1/3$ and $-1/2$.

We can then multiply by the inverse of $(P_{23}M_1 P_{23})$ to obtain
$$P_{23}P_{12}A = (P_{23}M_1 P_{23})^{-1}U,$$
which we write as
$$PA = LU.$$
Instead of L, MATLAB will write this as
$$A = (P^{-1}L)U.$$
For convenience, we will just denote $(P^{-1}L)$ by L, but by L here we mean a permuted lower triangular matrix.
For example, in MATLAB, to solve $A\mathbf{x} = \mathbf{b}$ for $\mathbf{x}$ using Gaussian elimination, one types

x = A \ b;

where \ solves for x using the most efficient algorithm available, depending on the form of A. If A is a general n × n matrix, then first the LU decomposition of A is found using partial pivoting, and then x is determined from permuted forward and backward substitution. If A is upper or lower triangular, then forward or backward substitution (or their permuted version) is used directly.

If there are many different right-hand-sides, one would first directly find the LU decomposition of A using a function call, and then solve using \. That is, one would iterate for different b's the following expressions:

[L, U] = lu(A);
y = L \ b;
x = U \ y;

where the second and third lines can be shortened to

x = U \ (L \ b);

where the parentheses are required. In lecture, I will demonstrate these solutions in MATLAB using the matrix A = [-2, 2, -1; 6, -6, 7; 3, -8, 4], which is the example in the notes.
3.4 Operation counts
To estimate how much computational time is required for an algorithm, one can count the number of operations required (multiplications, divisions, additions and subtractions). Usually, what is of interest is how the algorithm scales with the size of the problem. For example, suppose one wants to multiply two full $n \times n$ matrices. The calculation of each element requires $n$ multiplications and $n - 1$ additions, or say $2n - 1$ operations. There are $n^2$ elements to compute so that the total operation count is $n^2(2n - 1)$. If $n$ is large, we might want to know what will happen to the computational time if $n$ is doubled. What matters most is the fastest-growing, leading-order term in the operation count. In this matrix multiplication example, the operation count is $n^2(2n - 1) = 2n^3 - n^2$, and the leading-order term is $2n^3$. The factor of 2 is unimportant for the scaling, and we say that the algorithm scales like ${\rm O}(n^3)$, which is read as "big Oh of n cubed." When using the big-Oh notation, we will drop both lower-order terms and constant multipliers.
The big-Oh notation tells us how the computational time of an algorithm scales. For example, suppose that the multiplication of two large $n \times n$ matrices took a computational time of $T_n$ seconds. With the known operation count going like ${\rm O}(n^3)$, we can write
$$T_n = kn^3$$
for some unknown constant $k$. To determine how long multiplication of two $2n \times 2n$ matrices will take, we write
$$T_{2n} = k(2n)^3 = 8kn^3 = 8T_n,$$
so that doubling the size of the matrix is expected to increase the computational time by a factor of $2^3 = 8$.
Running MATLAB on my computer, the multiplication of two 2048 × 2048 matrices took about 0.75 sec. The multiplication of two 4096 × 4096 matrices took about 6 sec, which is 8 times longer. Timing of code in MATLAB can be found using the built-in stopwatch functions tic and toc.
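The experiment itself is a few lines of MATLAB (a sketch; times will of course vary by machine):

n = 2048;
A = rand(n); B = rand(n);
tic; A*B; toc          % elapsed time for 2048 x 2048
A = rand(2*n); B = rand(2*n);
tic; A*B; toc          % roughly eight times longer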
What is the operation count and therefore the scaling of Gaussian elimination? Consider an elimination step with the pivot in the $i$th row and $i$th column. There are both $n - i$ rows below the pivot and $n - i$ columns to the right of the pivot. To perform elimination of one row, each matrix element to the right of the pivot must be multiplied by a factor and added to the row underneath. This must be done for all the rows. There are therefore $(n - i)(n - i)$ multiplication-additions to be done for this pivot. Since we are interested in only the scaling of the algorithm, I will just count a multiplication-addition as one operation.

To find the total operation count, we need to perform elimination using $n - 1$ pivots, so that
$$\text{op. counts} = \sum_{i=1}^{n-1}(n - i)^2 = (n-1)^2 + (n-2)^2 + \ldots + (1)^2 = \sum_{i=1}^{n-1} i^2.$$
Three summation formulas will come in handy. They are
$$\sum_{i=1}^n 1 = n,$$
$$\sum_{i=1}^n i = \frac{1}{2}n(n+1),$$
$$\sum_{i=1}^n i^2 = \frac{1}{6}n(2n+1)(n+1),$$
which can be proved by mathematical induction, or derived by some tricks.

The operation count for Gaussian elimination is therefore
$$\text{op. counts} = \sum_{i=1}^{n-1} i^2 = \frac{1}{6}(n-1)(2n-1)(n).$$
The leading-order term is therefore $n^3/3$, and we say that Gaussian elimination scales like ${\rm O}(n^3)$. Since LU decomposition with partial pivoting is essentially Gaussian elimination, that algorithm also scales like ${\rm O}(n^3)$.
However, once the LU decomposition of a matrix A is known, the solution of $A\mathbf{x} = \mathbf{b}$ can proceed by a forward and backward substitution. How does a backward substitution, say, scale? For backward substitution, the matrix equation to be solved is of the form
$$\begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n-1} & a_{1,n} \\ 0 & a_{2,2} & \cdots & a_{2,n-1} & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & a_{n-1,n-1} & a_{n-1,n} \\ 0 & 0 & \cdots & 0 & a_{n,n} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-1} \\ b_n \end{pmatrix}.$$
The solution for $x_i$ is found after solving for $x_j$ with $j > i$. The explicit solution for $x_i$ is given by
$$x_i = \frac{1}{a_{i,i}}\left(b_i - \sum_{j=i+1}^n a_{i,j}x_j\right).$$
The solution for $x_i$ requires $n - i + 1$ multiplication-additions, and since this must be done for $n$ such $x_i$'s, we have
$$\text{op. counts} = \sum_{i=1}^n (n - i + 1) = n + (n-1) + \cdots + 1 = \sum_{i=1}^n i = \frac{1}{2}n(n+1).$$
The leading-order term is $n^2/2$ and the scaling of backward substitution is ${\rm O}(n^2)$. After the LU decomposition of a matrix A is found, only a single forward and backward substitution is required to solve $A\mathbf{x} = \mathbf{b}$, and the scaling of the algorithm to solve this matrix equation is therefore still ${\rm O}(n^2)$. For large $n$, one should expect that solving $A\mathbf{x} = \mathbf{b}$ by a forward and backward substitution should be substantially faster than a direct solution using Gaussian elimination.
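A sketch of a timing experiment that exhibits the difference (the matrix size is my own choice):

n = 2000; A = rand(n); b = rand(n, 1);
tic; x = A \ b; toc                  % includes the O(n^3) factorization
[L, U, P] = lu(A);                   % factor once
tic; x = U \ (L \ (P*b)); toc        % O(n^2) substitutions only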
3.5 System of nonlinear equations
A system of nonlinear equations can be solved using a version of Newton's Method. We illustrate this method for a system of two equations and two unknowns. Suppose that we want to solve
$$f(x, y) = 0, \quad g(x, y) = 0,$$
for the unknowns $x$ and $y$. We want to construct two simultaneous sequences $x_0, x_1, x_2, \ldots$ and $y_0, y_1, y_2, \ldots$ that converge to the roots. To construct these sequences, we Taylor series expand $f(x_{n+1}, y_{n+1})$ and $g(x_{n+1}, y_{n+1})$ about the point $(x_n, y_n)$. Using the partial derivatives $f_x = \partial f/\partial x$, $f_y = \partial f/\partial y$, etc., the two-dimensional Taylor series expansions, displaying only the linear terms, are given by
$$f(x_{n+1}, y_{n+1}) = f(x_n, y_n) + (x_{n+1} - x_n)f_x(x_n, y_n) + (y_{n+1} - y_n)f_y(x_n, y_n) + \ldots,$$
$$g(x_{n+1}, y_{n+1}) = g(x_n, y_n) + (x_{n+1} - x_n)g_x(x_n, y_n) + (y_{n+1} - y_n)g_y(x_n, y_n) + \ldots.$$
To obtain Newton's method, we take $f(x_{n+1}, y_{n+1}) = 0$, $g(x_{n+1}, y_{n+1}) = 0$ and drop higher-order terms above linear. Although one can then find a system of linear equations for $x_{n+1}$ and $y_{n+1}$, it is more convenient to define the variables
$$\Delta x_n = x_{n+1} - x_n, \quad \Delta y_n = y_{n+1} - y_n.$$
The iteration equations will then be given by
$$x_{n+1} = x_n + \Delta x_n, \quad y_{n+1} = y_n + \Delta y_n;$$
and the linear equations to be solved for $\Delta x_n$ and $\Delta y_n$ are given by
$$\begin{pmatrix} f_x & f_y \\ g_x & g_y \end{pmatrix}\begin{pmatrix} \Delta x_n \\ \Delta y_n \end{pmatrix} = \begin{pmatrix} -f \\ -g \end{pmatrix},$$
where $f$, $g$, $f_x$, $f_y$, $g_x$, and $g_y$ are all evaluated at the point $(x_n, y_n)$. The two-dimensional case is easily generalized to $n$ dimensions. The matrix of partial derivatives is called the Jacobian Matrix.
We illustrate Newton's Method by finding the steady state solution of the Lorenz equations, given by
$$\begin{aligned}
\sigma(y - x) &= 0, \\
rx - y - xz &= 0, \\
xy - bz &= 0,
\end{aligned}$$
where $x$, $y$, and $z$ are the unknown variables and $\sigma$, $r$, and $b$ are the known parameters. Here, we have a three-dimensional homogeneous system $f = 0$, $g = 0$, and $h = 0$, with
$$\begin{aligned}
f(x, y, z) &= \sigma(y - x), \\
g(x, y, z) &= rx - y - xz, \\
h(x, y, z) &= xy - bz.
\end{aligned}$$
The partial derivatives can be computed to be
$$\begin{aligned}
f_x = -\sigma, \quad f_y = \sigma, \quad f_z &= 0, \\
g_x = r - z, \quad g_y = -1, \quad g_z &= -x, \\
h_x = y, \quad h_y = x, \quad h_z &= -b.
\end{aligned}$$
The iteration equation is therefore
$$\begin{pmatrix} -\sigma & \sigma & 0 \\ r - z_n & -1 & -x_n \\ y_n & x_n & -b \end{pmatrix}\begin{pmatrix} \Delta x_n \\ \Delta y_n \\ \Delta z_n \end{pmatrix} = -\begin{pmatrix} \sigma(y_n - x_n) \\ rx_n - y_n - x_n z_n \\ x_n y_n - bz_n \end{pmatrix},$$
with
$$x_{n+1} = x_n + \Delta x_n, \quad y_{n+1} = y_n + \Delta y_n, \quad z_{n+1} = z_n + \Delta z_n.$$
The MATLAB program that solves this system is contained in newton_system.m.
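A minimal sketch of what such a program might contain (the parameter values and starting guess here are my own choices, not taken from newton_system.m):

sigma = 10; b = 8/3; r = 28;
X = [5; 5; 5];                          % initial guess for [x; y; z]
for n = 1:50
    x = X(1); y = X(2); z = X(3);
    J = [-sigma, sigma, 0;              % the Jacobian matrix
         r - z, -1, -x;
         y, x, -b];
    F = [sigma*(y - x); r*x - y - x*z; x*y - b*z];
    dX = J \ (-F);                      % solve for the Newton increments
    X = X + dX;
    if norm(dX) < 1e-12, break; end
end
X   % an approximate steady state; the nonzero ones are (±√(b(r−1)), ±√(b(r−1)), r−1)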
Chapter 4
Least-squares approximation
The method of least-squares is commonly used to fit a parameterized curve to experimental data. In general, the fitting curve is not expected to pass through the data points, making this problem substantially different from the one of interpolation.

We consider here only the simplest case of the same experimental error for all the data points. Let the data to be fitted be given by $(x_i, y_i)$, with $i = 1$ to $n$.
4.1 Fitting a straight line
Suppose the fitting curve is a line. We write for the fitting curve
$$y(x) = \alpha x + \beta.$$
The distance $r_i$ from the data point $(x_i, y_i)$ to the fitting curve is given by
$$r_i = y_i - y(x_i) = y_i - (\alpha x_i + \beta).$$
A least-squares fit minimizes the sum of the squares of the $r_i$'s. This minimum can be shown to result in the most probable values of $\alpha$ and $\beta$.

We define
$$\rho = \sum_{i=1}^n r_i^2 = \sum_{i=1}^n \left(y_i - (\alpha x_i + \beta)\right)^2.$$
To minimize $\rho$ with respect to $\alpha$ and $\beta$, we solve
$$\frac{\partial\rho}{\partial\alpha} = 0, \quad \frac{\partial\rho}{\partial\beta} = 0.$$
Taking the partial derivatives, we have
$$\frac{\partial\rho}{\partial\alpha} = \sum_{i=1}^n 2(-x_i)\left(y_i - (\alpha x_i + \beta)\right) = 0,$$
$$\frac{\partial\rho}{\partial\beta} = \sum_{i=1}^n 2(-1)\left(y_i - (\alpha x_i + \beta)\right) = 0.$$
These equations form a system of two linear equations in the two unknowns $\alpha$ and $\beta$, which is evident when rewritten in the form
$$\alpha\sum_{i=1}^n x_i^2 + \beta\sum_{i=1}^n x_i = \sum_{i=1}^n x_i y_i,$$
$$\alpha\sum_{i=1}^n x_i + \beta n = \sum_{i=1}^n y_i.$$
These equations can be solved either analytically, or numerically in MATLAB, where the matrix form is
$$\begin{pmatrix} \sum_{i=1}^n x_i^2 & \sum_{i=1}^n x_i \\ \sum_{i=1}^n x_i & n \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^n x_i y_i \\ \sum_{i=1}^n y_i \end{pmatrix}.$$
A proper statistical treatment of this problem should also consider an estimate of the errors in $\alpha$ and $\beta$ as well as an estimate of the goodness-of-fit of the data to the model. We leave these further considerations to a statistics class.
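In MATLAB, the matrix form above can be assembled and solved directly (a sketch with invented data):

x = (1:10)'; y = 2*x + 1 + randn(10, 1);        % noisy data around y = 2x + 1
M = [sum(x.^2), sum(x); sum(x), length(x)];
v = [sum(x.*y); sum(y)];
ab = M \ v                                       % [alpha; beta], close to [2; 1]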
4.2 Fitting to a linear combination of functions
Consider the general fitting function
$$y(x) = \sum_{j=1}^m c_j f_j(x),$$
where we assume $m$ functions $f_j(x)$. For example, if we want to fit a cubic polynomial to the data, then we would have $m = 4$ and take $f_1 = 1$, $f_2 = x$, $f_3 = x^2$ and $f_4 = x^3$. Typically, the number of functions $f_j$ is less than the number of data points; that is, $m < n$, so that a direct attempt to solve for the $c_j$'s such that the fitting function exactly passes through the $n$ data points would result in $n$ equations and $m$ unknowns. This would be an over-determined linear system that in general has no solution.

We now define the vectors
$$\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad \mathbf{c} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{pmatrix},$$
and the $n$-by-$m$ matrix
$$A = \begin{pmatrix} f_1(x_1) & f_2(x_1) & \cdots & f_m(x_1) \\ f_1(x_2) & f_2(x_2) & \cdots & f_m(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ f_1(x_n) & f_2(x_n) & \cdots & f_m(x_n) \end{pmatrix}. \qquad (4.1)$$
With $m < n$, the equation $A\mathbf{c} = \mathbf{y}$ is over-determined. We let
$$\mathbf{r} = \mathbf{y} - A\mathbf{c}$$
be the residual vector, and let
$$\rho = \sum_{i=1}^n r_i^2.$$
The method of least squares minimizes $\rho$ with respect to the components of $\mathbf{c}$. Now, using T to signify the transpose of a matrix, we have
$$\rho = \mathbf{r}^{\rm T}\mathbf{r} = (\mathbf{y} - A\mathbf{c})^{\rm T}(\mathbf{y} - A\mathbf{c}) = \mathbf{y}^{\rm T}\mathbf{y} - \mathbf{c}^{\rm T}A^{\rm T}\mathbf{y} - \mathbf{y}^{\rm T}A\mathbf{c} + \mathbf{c}^{\rm T}A^{\rm T}A\mathbf{c}.$$
Since $\rho$ is a scalar, each term in the above expression must be a scalar, and since the transpose of a scalar is equal to the scalar, we have
$$\mathbf{c}^{\rm T}A^{\rm T}\mathbf{y} = \left(\mathbf{c}^{\rm T}A^{\rm T}\mathbf{y}\right)^{\rm T} = \mathbf{y}^{\rm T}A\mathbf{c}.$$
Therefore,
$$\rho = \mathbf{y}^{\rm T}\mathbf{y} - 2\mathbf{y}^{\rm T}A\mathbf{c} + \mathbf{c}^{\rm T}A^{\rm T}A\mathbf{c}.$$
To find the minimum of $\rho$, we will need to solve $\partial\rho/\partial c_k = 0$ for $k = 1, \ldots, m$. To take the derivative of $\rho$, we switch to a tensor notation, using the Einstein summation convention, where repeated indices are summed over their allowable range. We can write
$$\rho = y_i y_i - 2y_i A_{ij}c_j + c_i A^{\rm T}_{ij}A_{jk}c_k.$$
Taking the partial derivative, we have
$$\frac{\partial\rho}{\partial c_l} = -2y_i A_{ij}\frac{\partial c_j}{\partial c_l} + \frac{\partial c_i}{\partial c_l}A^{\rm T}_{ij}A_{jk}c_k + c_i A^{\rm T}_{ij}A_{jk}\frac{\partial c_k}{\partial c_l}.$$
Now,
$$\frac{\partial c_j}{\partial c_l} = \begin{cases} 1, & \text{if } j = l; \\ 0, & \text{otherwise.} \end{cases}$$
Therefore,
$$\frac{\partial\rho}{\partial c_l} = -2y_i A_{il} + A^{\rm T}_{lj}A_{jk}c_k + c_i A^{\rm T}_{ij}A_{jl}.$$
Now,
$$c_i A^{\rm T}_{ij}A_{jl} = c_i A_{ji}A_{jl} = A_{jl}A_{ji}c_i = A^{\rm T}_{lj}A_{ji}c_i = A^{\rm T}_{lj}A_{jk}c_k.$$
Therefore,
$$\frac{\partial\rho}{\partial c_l} = -2y_i A_{il} + 2A^{\rm T}_{lj}A_{jk}c_k.$$
With the partials set equal to zero, we have
$$A^{\rm T}_{lj}A_{jk}c_k = y_i A_{il},$$
or
$$A^{\rm T}_{lj}A_{jk}c_k = A^{\rm T}_{li}y_i.$$
In vector notation, we have
$$A^{\rm T}A\,\mathbf{c} = A^{\rm T}\mathbf{y}. \qquad (4.2)$$
Equation (4.2) is the so-called normal equation, and can be solved for $\mathbf{c}$ by Gaussian elimination using the MATLAB backslash operator. After constructing the matrix A given by (4.1), and the vector y from the data, one can code in MATLAB

c = (A'*A) \ (A'*y);

But in fact the MATLAB backslash operator will automatically solve the normal equations when the matrix A is not square, so that the MATLAB code

c = A \ y;

yields the same result.
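For example, a least-squares cubic fit (m = 4) can be coded as follows (a sketch with invented data):

x = linspace(0, 1, 20)';
y = 1 - 2*x + 3*x.^3 + 0.05*randn(size(x));    % noisy cubic
A = [ones(size(x)), x, x.^2, x.^3];            % the matrix of (4.1)
c = A \ y                                      % the coefficients c1, ..., c4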
Chapter 5
Interpolation
Consider the following problem: Given the values of a known function $y = f(x)$ at a sequence of ordered points $x_0, x_1, \ldots, x_n$, find $f(x)$ for arbitrary $x$. When $x_0 \le x \le x_n$, the problem is called interpolation. When $x < x_0$ or $x > x_n$ the problem is called extrapolation.

With $y_i = f(x_i)$, the problem of interpolation is basically one of drawing a smooth curve through the known points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$. This is not the same problem as drawing a smooth curve that approximates a set of data points that have experimental error. This latter problem is called least-squares approximation.

Here, we will consider three interpolation algorithms: (1) polynomial interpolation; (2) piecewise linear interpolation; and (3) cubic spline interpolation.
5.1 Polynomial interpolation
The $n + 1$ points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$ can be interpolated by a unique polynomial of degree $n$. When $n = 1$, the polynomial is a linear function; when $n = 2$, the polynomial is a quadratic function. There are three standard algorithms that can be used to construct this unique interpolating polynomial, and we will present all three here, not so much because they are all useful, but because it is interesting to learn how these three algorithms are constructed.

When discussing each algorithm, we define $P_n(x)$ to be the unique $n$th degree polynomial that passes through the given $n + 1$ data points.
5.1.1 Vandermonde polynomial
This Vandermonde polynomial is the most straightforward construction of the interpolating polynomial $P_n(x)$. We simply write
$$P_n(x) = c_0 x^n + c_1 x^{n-1} + \cdots + c_n.$$
Then we can immediately form $n + 1$ linear equations for the $n + 1$ unknown coefficients $c_0, c_1, \ldots, c_n$ using the $n + 1$ known points:
$$\begin{aligned}
y_0 &= c_0 x_0^n + c_1 x_0^{n-1} + \cdots + c_{n-1}x_0 + c_n, \\
y_1 &= c_0 x_1^n + c_1 x_1^{n-1} + \cdots + c_{n-1}x_1 + c_n, \\
&\;\;\vdots \\
y_n &= c_0 x_n^n + c_1 x_n^{n-1} + \cdots + c_{n-1}x_n + c_n.
\end{aligned}$$
The system of equations in matrix form is
$$\begin{pmatrix} x_0^n & x_0^{n-1} & \cdots & x_0 & 1 \\ x_1^n & x_1^{n-1} & \cdots & x_1 & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ x_n^n & x_n^{n-1} & \cdots & x_n & 1 \end{pmatrix}\begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix}.$$
The matrix is called the Vandermonde matrix, and can be constructed using the MATLAB function vander.m. The system of linear equations can be solved in MATLAB using the \ operator, and the MATLAB function polyval.m can be used to interpolate using the c coefficients. I will illustrate this in class and place the code on the website.
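A sketch of the Vandermonde approach (the data points are invented for illustration):

x = [0; 1; 2; 3];
y = [1; 2; 0; 5];
c = vander(x) \ y;        % coefficients, highest power of x first
polyval(c, 1.5)           % interpolated value at x = 1.5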
5.1.2 Lagrange polynomial
The Lagrange polynomial is the most clever construction of the interpolating polynomial $P_n(x)$, and leads directly to an analytical formula. The Lagrange polynomial is the sum of $n + 1$ terms and each term is itself a polynomial of degree $n$. The full polynomial is therefore of degree $n$. Counting from 0, the $i$th term of the Lagrange polynomial is constructed by requiring it to be zero at $x_j$ with $j \neq i$, and equal to $y_i$ when $j = i$. The polynomial can be written as
$$P_n(x) = \frac{(x - x_1)(x - x_2)\cdots(x - x_n)\,y_0}{(x_0 - x_1)(x_0 - x_2)\cdots(x_0 - x_n)} + \frac{(x - x_0)(x - x_2)\cdots(x - x_n)\,y_1}{(x_1 - x_0)(x_1 - x_2)\cdots(x_1 - x_n)} + \cdots + \frac{(x - x_0)(x - x_1)\cdots(x - x_{n-1})\,y_n}{(x_n - x_0)(x_n - x_1)\cdots(x_n - x_{n-1})}.$$
It can be clearly seen that the first term is equal to zero when $x = x_1, x_2, \ldots, x_n$ and equal to $y_0$ when $x = x_0$; the second term is equal to zero when $x = x_0, x_2, \ldots, x_n$ and equal to $y_1$ when $x = x_1$; and the last term is equal to zero when $x = x_0, x_1, \ldots, x_{n-1}$ and equal to $y_n$ when $x = x_n$. The uniqueness of the interpolating polynomial implies that the Lagrange polynomial must be the interpolating polynomial.
5.1.3 Newton polynomial
The Newton polynomial is somewhat more clever than the Vandermonde polynomial because it results in a system of linear equations that is lower triangular, and therefore can be solved by forward substitution. The interpolating polynomial is written in the form
$$P_n(x) = c_0 + c_1(x - x_0) + c_2(x - x_0)(x - x_1) + \cdots + c_n(x - x_0)\cdots(x - x_{n-1}),$$
which is clearly a polynomial of degree $n$. The $n + 1$ unknown coefficients given by the $c$'s can be found by substituting the points $(x_i, y_i)$ for $i = 0, \ldots, n$:
$$\begin{aligned}
y_0 &= c_0, \\
y_1 &= c_0 + c_1(x_1 - x_0), \\
y_2 &= c_0 + c_1(x_2 - x_0) + c_2(x_2 - x_0)(x_2 - x_1), \\
&\;\;\vdots \\
y_n &= c_0 + c_1(x_n - x_0) + c_2(x_n - x_0)(x_n - x_1) + \cdots + c_n(x_n - x_0)\cdots(x_n - x_{n-1}).
\end{aligned}$$
This system of linear equations is lower triangular, as can be seen from the matrix form
$$\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
1 & (x_1 - x_0) & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & (x_n - x_0) & (x_n - x_0)(x_n - x_1) & \cdots & (x_n - x_0)\cdots(x_n - x_{n-1})
\end{pmatrix}
\begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{pmatrix}
= \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix},$$
and so theoretically can be solved faster than the Vandermonde polynomial. In practice, however, there is little difference because polynomial interpolation is only useful when the number of points to be interpolated is small.
5.2 Piecewise linear interpolation
Instead of constructing a single global polynomial that goes through all the points, one can construct local polynomials that are then connected together. In the section following this one, we will discuss how this may be done using cubic polynomials. Here, we discuss the simpler case of linear polynomials. This is the default interpolation typically used when plotting data.
Suppose the interpolating function is $y = g(x)$, and as previously, there are $n + 1$ points to interpolate. We construct the function $g(x)$ out of $n$ local linear polynomials. We write
$$g(x) = g_i(x), \quad \text{for} \quad x_i \le x \le x_{i+1},$$
where
$$g_i(x) = a_i(x - x_i) + b_i,$$
and $i = 0, 1, \ldots, n - 1$.

We now require $y = g_i(x)$ to pass through the endpoints $(x_i, y_i)$ and $(x_{i+1}, y_{i+1})$. We have
$$y_i = b_i,$$
$$y_{i+1} = a_i(x_{i+1} - x_i) + b_i.$$
Therefore, the coefficients of $g_i(x)$ are determined to be
$$a_i = \frac{y_{i+1} - y_i}{x_{i+1} - x_i}, \quad b_i = y_i.$$
Although piecewise linear interpolation is widely used, particularly in plotting routines, it suffers from a discontinuity in the derivative at each point. This results in a function which may not look smooth if the points are too widely spaced. We next consider a more challenging algorithm that uses cubic polynomials.
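In MATLAB, piecewise linear interpolation is the default behavior of interp1 (a quick sketch with invented data):

xi = 0:5; yi = sin(xi);
interp1(xi, yi, 2.5)      % linear interpolation between the points at x = 2 and x = 3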
5.3 Cubic spline interpolation
The $n + 1$ points to be interpolated are again
$$(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n).$$
Here, we use $n$ piecewise cubic polynomials for interpolation,
$$g_i(x) = a_i(x - x_i)^3 + b_i(x - x_i)^2 + c_i(x - x_i) + d_i, \quad i = 0, 1, \ldots, n - 1,$$
with the global interpolation function written as
$$g(x) = g_i(x), \quad \text{for} \quad x_i \le x \le x_{i+1}.$$
To achieve a smooth interpolation we impose that $g(x)$ and its first and second derivatives are continuous. The requirement that $g(x)$ is continuous (and goes through all $n + 1$ points) results in the two constraints
$$g_i(x_i) = y_i, \quad i = 0 \text{ to } n - 1, \qquad (5.1)$$
$$g_i(x_{i+1}) = y_{i+1}, \quad i = 0 \text{ to } n - 1. \qquad (5.2)$$
The requirement that $g'(x)$ is continuous results in
$$g_i'(x_{i+1}) = g_{i+1}'(x_{i+1}), \quad i = 0 \text{ to } n - 2. \qquad (5.3)$$
And the requirement that $g''(x)$ is continuous results in
$$g_i''(x_{i+1}) = g_{i+1}''(x_{i+1}), \quad i = 0 \text{ to } n - 2. \qquad (5.4)$$
There are $n$ cubic polynomials $g_i(x)$ and each cubic polynomial has four free coefficients; there are therefore a total of $4n$ unknown coefficients. The number of constraining equations from (5.1)-(5.4) is $2n + 2(n - 1) = 4n - 2$. With $4n - 2$ constraints and $4n$ unknowns, two more conditions are required for a unique solution. These are usually chosen to be extra conditions on the first $g_0(x)$ and last $g_{n-1}(x)$ polynomials. We will discuss these extra conditions later.

We now proceed to determine equations for the unknown coefficients of the cubic polynomials. The polynomials and their first two derivatives are given by
$$g_i(x) = a_i(x - x_i)^3 + b_i(x - x_i)^2 + c_i(x - x_i) + d_i, \qquad (5.5)$$
$$g_i'(x) = 3a_i(x - x_i)^2 + 2b_i(x - x_i) + c_i, \qquad (5.6)$$
$$g_i''(x) = 6a_i(x - x_i) + 2b_i. \qquad (5.7)$$
We will consider the four conditions (5.1)-(5.4) in turn. From (5.1) and (5.5), we have
$$d_i = y_i, \quad i = 0 \text{ to } n - 1, \qquad (5.8)$$
which directly solves for all of the $d$-coefficients.

To satisfy (5.2), we first define
$$h_i = x_{i+1} - x_i,$$
and
$$f_i = y_{i+1} - y_i.$$
Now, from (5.2) and (5.5), using (5.8), we obtain the $n$ equations
$$a_i h_i^3 + b_i h_i^2 + c_i h_i = f_i, \quad i = 0 \text{ to } n - 1. \qquad (5.9)$$
From (5.3) and (5.6) we obtain the $n - 1$ equations
$$3a_i h_i^2 + 2b_i h_i + c_i = c_{i+1}, \quad i = 0 \text{ to } n - 2. \qquad (5.10)$$
From (5.4) and (5.7) we obtain the $n - 1$ equations
$$3a_i h_i + b_i = b_{i+1}, \quad i = 0 \text{ to } n - 2. \qquad (5.11)$$
It will be useful to include a definition of the coefficient $b_n$, which is now missing. (The index of the cubic polynomial coefficients only goes up to $n - 1$.) We simply extend (5.11) up to $i = n - 1$ and so write
$$3a_{n-1}h_{n-1} + b_{n-1} = b_n, \qquad (5.12)$$
which can be viewed as a definition of $b_n$.

We now proceed to eliminate the sets of $a$- and $c$-coefficients (with the $d$-coefficients already eliminated in (5.8)) to find a system of linear equations for the $b$-coefficients. From (5.11) and (5.12), we can solve for the $n$ $a$-coefficients to find
$$a_i = \frac{1}{3h_i}\left(b_{i+1} - b_i\right), \quad i = 0 \text{ to } n - 1. \qquad (5.13)$$
From (5.9), we can solve for the $n$ $c$-coefficients as follows:
$$\begin{aligned}
c_i &= \frac{1}{h_i}\left(f_i - a_i h_i^3 - b_i h_i^2\right) \\
&= \frac{1}{h_i}\left(f_i - \frac{1}{3h_i}(b_{i+1} - b_i)h_i^3 - b_i h_i^2\right) \\
&= \frac{f_i}{h_i} - \frac{1}{3}h_i\left(b_{i+1} + 2b_i\right), \quad i = 0 \text{ to } n - 1. \qquad (5.14)
\end{aligned}$$
We can now find an equation for the $b$-coefficients by substituting (5.8), (5.13) and (5.14) into (5.10):
$$3\left(\frac{1}{3h_i}(b_{i+1} - b_i)\right)h_i^2 + 2b_i h_i + \left(\frac{f_i}{h_i} - \frac{1}{3}h_i(b_{i+1} + 2b_i)\right) = \left(\frac{f_{i+1}}{h_{i+1}} - \frac{1}{3}h_{i+1}(b_{i+2} + 2b_{i+1})\right),$$
which simplifies to
$$\frac{1}{3}h_i b_i + \frac{2}{3}(h_i + h_{i+1})b_{i+1} + \frac{1}{3}h_{i+1}b_{i+2} = \frac{f_{i+1}}{h_{i+1}} - \frac{f_i}{h_i}, \qquad (5.15)$$
an equation that is valid for $i = 0$ to $n - 2$. Therefore, (5.15) represents $n - 1$ equations for the $n + 1$ unknown $b$-coefficients. Accordingly, we write the matrix equation for the $b$-coefficients, leaving the first and last row absent, as
$$\begin{pmatrix}
\cdots & \cdots & \cdots & \text{(missing)} & \cdots & \cdots & \cdots \\
\frac{1}{3}h_0 & \frac{2}{3}(h_0 + h_1) & \frac{1}{3}h_1 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & \frac{1}{3}h_{n-2} & \frac{2}{3}(h_{n-2} + h_{n-1}) & \frac{1}{3}h_{n-1} \\
\cdots & \cdots & \cdots & \text{(missing)} & \cdots & \cdots & \cdots
\end{pmatrix}
\begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{n-1} \\ b_n \end{pmatrix}
= \begin{pmatrix} \text{(missing)} \\ \frac{f_1}{h_1} - \frac{f_0}{h_0} \\ \vdots \\ \frac{f_{n-1}}{h_{n-1}} - \frac{f_{n-2}}{h_{n-2}} \\ \text{(missing)} \end{pmatrix}.$$
Once the missing first and last equations are specified, the matrix equation for the $b$-coefficients can be solved by Gaussian elimination. And once the $b$-coefficients are determined, the $a$- and $c$-coefficients can also be determined from (5.13) and (5.14), with the $d$-coefficients already known from (5.8). The piecewise cubic polynomials, then, are known and $g(x)$ can be used for interpolation to any value $x$ satisfying $x_0 \le x \le x_n$.
The missing first and last equations can be specified in several ways, and here we show the two ways that are allowed by the MATLAB function spline.m. The first way should be used when the derivative $g'(x)$ is known at the endpoints $x_0$ and $x_n$; that is, suppose we know the values of $\alpha$ and $\beta$ such that
$$g_0'(x_0) = \alpha, \quad g_{n-1}'(x_n) = \beta.$$
From the known value of $\alpha$, and using (5.6) and (5.14), we have
$$\alpha = c_0 = \frac{f_0}{h_0} - \frac{1}{3}h_0\left(b_1 + 2b_0\right).$$
Therefore, the missing first equation is determined to be
$$\frac{2}{3}h_0 b_0 + \frac{1}{3}h_0 b_1 = \frac{f_0}{h_0} - \alpha. \qquad (5.16)$$
From the known value of $\beta$, and using (5.6), (5.13), and (5.14), we have
$$\begin{aligned}
\beta &= 3a_{n-1}h_{n-1}^2 + 2b_{n-1}h_{n-1} + c_{n-1} \\
&= 3\left(\frac{1}{3h_{n-1}}(b_n - b_{n-1})\right)h_{n-1}^2 + 2b_{n-1}h_{n-1} + \left(\frac{f_{n-1}}{h_{n-1}} - \frac{1}{3}h_{n-1}(b_n + 2b_{n-1})\right),
\end{aligned}$$
which simplifies to
$$\frac{1}{3}h_{n-1}b_{n-1} + \frac{2}{3}h_{n-1}b_n = \beta - \frac{f_{n-1}}{h_{n-1}}, \qquad (5.17)$$
to be used as the missing last equation.

The second way of specifying the missing first and last equations is called the not-a-knot condition, which assumes that
$$g_0(x) = g_1(x), \quad g_{n-2}(x) = g_{n-1}(x).$$
Considering the first of these equations, from (5.5) we have
$$a_0(x - x_0)^3 + b_0(x - x_0)^2 + c_0(x - x_0) + d_0 = a_1(x - x_1)^3 + b_1(x - x_1)^2 + c_1(x - x_1) + d_1.$$
Now two cubic polynomials can be proven to be identical if at some value of $x$, the polynomials and their first three derivatives are identical. Our conditions of continuity at $x = x_1$ already require that at this value of $x$ these two polynomials and their first two derivatives are identical. The polynomials themselves will be identical, then, if their third derivatives are also identical at $x = x_1$, or if
$$a_0 = a_1.$$
From (5.13), we have
$$\frac{1}{3h_0}(b_1 - b_0) = \frac{1}{3h_1}(b_2 - b_1),$$
or after simplification
$$h_1 b_0 - (h_0 + h_1)b_1 + h_0 b_2 = 0, \qquad (5.18)$$
which provides us our missing first equation. A similar argument at $x = x_{n-1}$ also provides us with our last equation,
$$h_{n-1}b_{n-2} - (h_{n-2} + h_{n-1})b_{n-1} + h_{n-2}b_n = 0. \qquad (5.19)$$
The MATLAB subroutines spline.m and ppval.m can be used for cubic spline interpolation (see also interp1.m). I will illustrate these routines in class and post sample code on the course web site.
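A short sketch of these routines (the data are invented; passing a y-vector with two extra values is how spline.m accepts known endpoint slopes α and β):

xi = 0:5; yi = sin(xi);
pp = spline(xi, yi);                      % not-a-knot conditions by default
ppval(pp, 2.5)                            % evaluate the spline at x = 2.5
pp2 = spline(xi, [cos(0), yi, cos(5)]);   % clamped: g'(x0) = cos(0), g'(xn) = cos(5)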
5.4 Multidimensional interpolation
Suppose we are interpolating the value of a function of two variables,
$$z = f(x, y).$$
The known values are given by
$$z_{ij} = f(x_i, y_j),$$
with $i = 0, 1, \ldots, n$ and $j = 0, 1, \ldots, n$. Note that the $(x, y)$ points at which $f(x, y)$ is known lie on a grid in the $x$-$y$ plane.
Let $z = g(x, y)$ be the interpolating function, satisfying $z_{ij} = g(x_i, y_j)$. A two-dimensional interpolation to find the value of $g$ at the point $(x, y)$ may be done by first performing $n + 1$ one-dimensional interpolations in $y$ to find the value of $g$ at the $n + 1$ points $(x_0, y), (x_1, y), \ldots, (x_n, y)$, followed by a single one-dimensional interpolation in $x$ to find the value of $g$ at $(x, y)$.

In other words, two-dimensional interpolation on a grid of dimension $(n + 1) \times (n + 1)$ is done by first performing $n + 1$ one-dimensional interpolations to the value $y$ followed by a single one-dimensional interpolation to the value $x$. Two-dimensional interpolation can be generalized to higher dimensions. The MATLAB functions to perform two- and three-dimensional interpolation are interp2.m and interp3.m.
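A quick sketch of interp2 on a small grid (invented data):

[X, Y] = meshgrid(0:4, 0:4);
Z = sin(X).*cos(Y);               % function values on the grid
interp2(X, Y, Z, 2.5, 1.5)        % interpolated value at (2.5, 1.5)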
Chapter 6
Integration
We want to construct numerical algorithms that can perform definite integrals of the form
$$I = \int_a^b f(x)\,dx. \qquad (6.1)$$
Calculating these definite integrals numerically is called numerical integration, numerical quadrature, or more simply quadrature.
6.1 Elementary formulas
We first consider integration from $0$ to $h$, with $h$ small, to serve as the building blocks for integration over larger domains. We here define $I_h$ as the following integral:
$$I_h = \int_0^h f(x)\,dx. \qquad (6.2)$$
To perform this integral, we consider a Taylor series expansion of $f(x)$ about the value $x = h/2$:
$$f(x) = f(h/2) + (x - h/2)f'(h/2) + \frac{(x - h/2)^2}{2}f''(h/2) + \frac{(x - h/2)^3}{6}f'''(h/2) + \frac{(x - h/2)^4}{24}f''''(h/2) + \ldots$$
6.1.1 Midpoint rule
The midpoint rule makes use of only the first term in the Taylor series expansion. Here, we will determine the error in this approximation. Integrating,
$$I_h = hf(h/2) + \int_0^h\left((x - h/2)f'(h/2) + \frac{(x - h/2)^2}{2}f''(h/2) + \frac{(x - h/2)^3}{6}f'''(h/2) + \frac{(x - h/2)^4}{24}f''''(h/2) + \ldots\right)dx.$$
Changing variables by letting $y = x - h/2$ and $dy = dx$, and simplifying the integral depending on whether the integrand is even or odd, we have
$$\begin{aligned}
I_h &= hf(h/2) + \int_{-h/2}^{h/2}\left(yf'(h/2) + \frac{y^2}{2}f''(h/2) + \frac{y^3}{6}f'''(h/2) + \frac{y^4}{24}f''''(h/2) + \ldots\right)dy \\
&= hf(h/2) + \int_0^{h/2}\left(y^2 f''(h/2) + \frac{y^4}{12}f''''(h/2) + \ldots\right)dy.
\end{aligned}$$
The integrals that we need here are
$$\int_0^{h/2} y^2\,dy = \frac{h^3}{24}, \quad \int_0^{h/2} y^4\,dy = \frac{h^5}{160}.$$
Therefore,
$$I_h = hf(h/2) + \frac{h^3}{24}f''(h/2) + \frac{h^5}{1920}f''''(h/2) + \ldots. \qquad (6.3)$$
6.1.2 Trapezoidal rule
From the Taylor series expansion of $f(x)$ about $x = h/2$, we have
$$f(0) = f(h/2) - \frac{h}{2}f'(h/2) + \frac{h^2}{8}f''(h/2) - \frac{h^3}{48}f'''(h/2) + \frac{h^4}{384}f''''(h/2) + \ldots,$$
and
$$f(h) = f(h/2) + \frac{h}{2}f'(h/2) + \frac{h^2}{8}f''(h/2) + \frac{h^3}{48}f'''(h/2) + \frac{h^4}{384}f''''(h/2) + \ldots.$$
Adding and multiplying by $h/2$ we obtain
$$\frac{h}{2}\left(f(0) + f(h)\right) = hf(h/2) + \frac{h^3}{8}f''(h/2) + \frac{h^5}{384}f''''(h/2) + \ldots.$$
We now substitute for the first term on the right-hand-side using the midpoint rule formula:
$$\frac{h}{2}\left(f(0) + f(h)\right) = \left(I_h - \frac{h^3}{24}f''(h/2) - \frac{h^5}{1920}f''''(h/2)\right) + \frac{h^3}{8}f''(h/2) + \frac{h^5}{384}f''''(h/2) + \ldots,$$
and solving for $I_h$, we find
$$I_h = \frac{h}{2}\left(f(0) + f(h)\right) - \frac{h^3}{12}f''(h/2) - \frac{h^5}{480}f''''(h/2) + \ldots. \qquad (6.4)$$
6.1.3 Simpson's rule
To obtain Simpson's rule, we combine the midpoint and trapezoidal rule to eliminate the error term proportional to $h^3$. Multiplying (6.3) by two and adding to (6.4), we obtain
$$3I_h = h\left(2f(h/2) + \frac{1}{2}(f(0) + f(h))\right) + h^5\left(\frac{2}{1920} - \frac{1}{480}\right)f''''(h/2) + \ldots,$$
or
$$I_h = \frac{h}{6}\left(f(0) + 4f(h/2) + f(h)\right) - \frac{h^5}{2880}f''''(h/2) + \ldots.$$
Usually, Simpson's rule is written by considering the three consecutive points $0$, $h$ and $2h$. Substituting $h \to 2h$, we obtain the standard result
$$I_{2h} = \frac{h}{3}\left(f(0) + 4f(h) + f(2h)\right) - \frac{h^5}{90}f''''(h) + \ldots. \qquad (6.5)$$
6.2 Composite rules
We now use our elementary formulas obtained for (6.2) to perform the integral given by (6.1).
6.2.1 Trapezoidal rule
We suppose that the function π(π₯) is known at the π + 1 points labeled asπ₯0, π₯1, . . . , π₯π, with the endpoints given by π₯0 = π and π₯π = π. Define
ππ = π(π₯π), βπ = π₯π+1 β π₯π.
Then the integral of (6.1) may be decomposed as
β« π
π
π(π₯)ππ₯ =
πβ1βπ=0
β« π₯π+1
π₯π
π(π₯)ππ₯
=
πβ1βπ=0
β« βπ
0
π(π₯π + π )ππ ,
where the last equality arises from the change-of-variables π = π₯βπ₯π. Applyingthe trapezoidal rule to the integral, we haveβ« π
π
π(π₯)ππ₯ =
πβ1βπ=0
βπ
2(ππ + ππ+1) . (6.6)
If the points are not evenly spaced, say because the data are experimental values,then the βπ may differ for each value of π and (6.6) is to be used directly.
However, if the points are evenly spaced, say because π(π₯) can be computed,we have βπ = β, independent of π. We can then define
π₯π = π+ πβ, π = 0, 1, . . . , π;
38 CHAPTER 6. INTEGRATION
and since the end point b satisfies b = a + nh, we have

    h = \frac{b - a}{n}.

The composite trapezoidal rule for evenly spaced points then becomes

    \int_a^b f(x) \, dx = \frac{h}{2} \sum_{i=0}^{n-1} \left( f_i + f_{i+1} \right)
                        = \frac{h}{2} \left( f_0 + 2 f_1 + \cdots + 2 f_{n-1} + f_n \right).    (6.7)

The first and last terms have a multiple of one; all other terms have a multiple of two; and the entire sum is multiplied by h/2.
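As a concrete illustration, the following MATLAB function is a minimal sketch of (6.7); the function name trap and its interface are our own choices, and f is assumed to accept vector arguments.

    function I = trap(f, a, b, n)
    % Composite trapezoidal rule (6.7) over [a,b] with n evenly spaced intervals.
    h = (b - a)/n;
    x = a + h*(0:n);                   % grid points x_0, ..., x_n
    fx = f(x);
    I = (h/2)*(fx(1) + 2*sum(fx(2:n)) + fx(n+1));
    end

For example, trap(@(x) exp(x), 0, 1, 100) returns a value close to e - 1 = 1.7183.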
6.2.2 Simpson's rule
We here consider the composite Simpson's rule for evenly spaced points. We apply Simpson's rule over intervals of 2h, starting from a and ending at b:

    \int_a^b f(x) \, dx = \frac{h}{3} \left( f_0 + 4 f_1 + f_2 \right) + \frac{h}{3} \left( f_2 + 4 f_3 + f_4 \right) + \ldots
                        + \frac{h}{3} \left( f_{n-2} + 4 f_{n-1} + f_n \right).

Note that n must be even for this scheme to work. Combining terms, we have

    \int_a^b f(x) \, dx = \frac{h}{3} \left( f_0 + 4 f_1 + 2 f_2 + 4 f_3 + 2 f_4 + \cdots + 4 f_{n-1} + f_n \right).

The first and last terms have a multiple of one; the even indexed terms have a multiple of 2; the odd indexed terms have a multiple of 4; and the entire sum is multiplied by h/3.
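A matching MATLAB sketch for the composite Simpson's rule (again with our own naming conventions; n must be even):

    function I = simp(f, a, b, n)
    % Composite Simpson's rule over [a,b]; n must be even.
    h = (b - a)/n;
    x = a + h*(0:n);
    fx = f(x);
    I = (h/3)*(fx(1) + 4*sum(fx(2:2:n)) + 2*sum(fx(3:2:n-1)) + fx(n+1));
    end

The odd-indexed points f_1, f_3, \ldots, f_{n-1} sit in MATLAB positions 2:2:n, and the even interior points in positions 3:2:n-1.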
6.3 Local versus global error
Consider the elementary formula (6.4) for the trapezoidal rule, written in the form

    \int_0^h f(x) \, dx = \frac{h}{2} \left( f(0) + f(h) \right) - \frac{h^3}{12} f''(\xi),

where \xi is some value satisfying 0 \le \xi \le h, and we have used Taylor's theorem with the mean-value form of the remainder. We can also represent the remainder as

    -\frac{h^3}{12} f''(\xi) = O(h^3),

where O(h^3) signifies that when h is small, halving of the grid spacing h decreases the error in the elementary trapezoidal rule by a factor of eight. We call the terms represented by O(h^3) the Local Error.
[Figure 6.1: Adaptive Simpson quadrature, Level 1: the five uniformly spaced points a, d, c, e, b.]
More important is the Global Error which is obtained from the composite formula (6.7) for the trapezoidal rule. Putting in the remainder terms, we have

    \int_a^b f(x) \, dx = \frac{h}{2} \left( f_0 + 2 f_1 + \cdots + 2 f_{n-1} + f_n \right) - \frac{h^3}{12} \sum_{i=0}^{n-1} f''(\xi_i),
where \xi_i are values satisfying x_i \le \xi_i \le x_{i+1}. The remainder can be rewritten as

    -\frac{h^3}{12} \sum_{i=0}^{n-1} f''(\xi_i) = -\frac{n h^3}{12} \left\langle f''(\xi_i) \right\rangle,

where \left\langle f''(\xi_i) \right\rangle is the average value of all the f''(\xi_i)'s. Now,

    n = \frac{b - a}{h},
so that the error term becomes

    -\frac{n h^3}{12} \left\langle f''(\xi_i) \right\rangle = -\frac{(b - a) h^2}{12} \left\langle f''(\xi_i) \right\rangle = O(h^2).

Therefore, the global error is O(h^2). That is, a halving of the grid spacing only decreases the global error by a factor of four.

Similarly, Simpson's rule has a local error of O(h^5) and a global error of O(h^4).
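These convergence rates are easy to check numerically. The following sketch assumes the trap and simp functions sketched above:

    f = @(x) exp(x);  exact = exp(1) - 1;
    for n = [10 20 40]
        fprintf('n = %3d: trap error = %.2e, simp error = %.2e\n', ...
            n, abs(trap(f,0,1,n) - exact), abs(simp(f,0,1,n) - exact));
    end
    % Each doubling of n (halving of h) should reduce the trapezoidal
    % error by about a factor of four and the Simpson error by sixteen.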
6.4 Adaptive integration
The useful MATLAB function quad.m performs numerical integration using adaptive Simpson quadrature. The idea is to let the computation itself decide on the grid size required to achieve a certain level of accuracy. Moreover, the grid size need not be the same over the entire region of integration.

We begin the adaptive integration at what is called Level 1. The uniformly spaced points at which the function f(x) is to be evaluated are shown in Fig. 6.1. The distance between the points a and b is taken to be 2h, so that

    h = \frac{b - a}{2}.
Integration using Simpson's rule (6.5) with grid size h yields

    I = \frac{h}{3} \left( f(a) + 4 f(c) + f(b) \right) - \frac{h^5}{90} f''''(\xi),

where \xi is some value satisfying a \le \xi \le b.

Integration using Simpson's rule twice with grid size h/2 yields

    I = \frac{h}{6} \left( f(a) + 4 f(d) + 2 f(c) + 4 f(e) + f(b) \right) - \frac{(h/2)^5}{90} f''''(\xi_l) - \frac{(h/2)^5}{90} f''''(\xi_r),
with \xi_l and \xi_r some values satisfying a \le \xi_l \le c and c \le \xi_r \le b.

We now define

    S_1 = \frac{h}{3} \left( f(a) + 4 f(c) + f(b) \right),
    S_2 = \frac{h}{6} \left( f(a) + 4 f(d) + 2 f(c) + 4 f(e) + f(b) \right),
    E_1 = -\frac{h^5}{90} f''''(\xi),
    E_2 = -\frac{h^5}{2^5 \cdot 90} \left( f''''(\xi_l) + f''''(\xi_r) \right).
Now we ask whether S_2 is accurate enough, or must we further refine the calculation and go to Level 2? To answer this question, we make the simplifying approximation that all of the fourth-order derivatives of f(x) in the error terms are equal; that is,

    f''''(\xi) = f''''(\xi_l) = f''''(\xi_r) = C.

Then

    E_1 = -\frac{h^5}{90} C, \qquad E_2 = -\frac{h^5}{2^4 \cdot 90} C = \frac{1}{16} E_1.
Then since

    S_1 + E_1 = S_2 + E_2, \qquad E_1 = 16 E_2,

we have for our estimate for the error term E_2,

    E_2 = \frac{1}{15} (S_2 - S_1).

Therefore, given some specific value of the tolerance tol, if

    \left| \frac{1}{15} (S_2 - S_1) \right| < \text{tol},

then we can accept S_2 as I. If the tolerance is not achieved for I, then we proceed to Level 2.
The computation at Level 2 further divides the integration interval from a to b into the two integration intervals a to c and c to b, and proceeds with the above procedure independently on both halves. Integration can be stopped on either half provided the tolerance is less than tol/2 (since the sum of both errors must be less than tol). Otherwise, either half can proceed to Level 3, and so on.
As a side note, the two values of I given above (for integration with step size h and h/2) can be combined to give a more accurate value for I given by

    I = \frac{16 S_2 - S_1}{15} + O(h^7),

where the error terms of O(h^5) approximately cancel. This free lunch, so to speak, is called Richardson's extrapolation.
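The Level-1/Level-2 logic above translates naturally into a recursive function. The following is a minimal sketch (the function name is our own choice, and quad.m itself is more sophisticated):

    function I = adaptsimp(f, a, b, tol)
    % Recursive adaptive Simpson quadrature; error estimate (S2 - S1)/15.
    c = (a + b)/2;  h = (b - a)/2;     % midpoint and grid size at this level
    S1 = (h/3)*(f(a) + 4*f(c) + f(b));
    d = (a + c)/2;  e = (c + b)/2;     % midpoints of the two halves
    S2 = (h/6)*(f(a) + 4*f(d) + 2*f(c) + 4*f(e) + f(b));
    if abs(S2 - S1)/15 < tol
        I = (16*S2 - S1)/15;           % accept, with Richardson's extrapolation
    else                               % refine both halves at the next level
        I = adaptsimp(f, a, c, tol/2) + adaptsimp(f, c, b, tol/2);
    end
    end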
Chapter 7

Ordinary differential equations

We now discuss the numerical solution of ordinary differential equations. These include the initial value problem, the boundary value problem, and the eigenvalue problem. Before proceeding to the development of numerical methods, we review the analytical solution of some classic equations.
7.1 Examples of analytical solutions
7.1.1 Initial value problem
One classic initial value problem is the RC circuit. With R the resistor and C the capacitor, the differential equation for the charge q on the capacitor is given by

    R \frac{dq}{dt} + \frac{q}{C} = 0.    (7.1)

If we consider the physical problem of a charged capacitor connected in a closed circuit to a resistor, then the initial condition is q(0) = q_0, where q_0 is the initial charge on the capacitor.
The differential equation (7.1) is separable, and separating and integrating from time t = 0 to t yields

    \int_{q_0}^{q} \frac{dq}{q} = -\frac{1}{RC} \int_0^t dt,

which can be integrated and solved for q = q(t):

    q(t) = q_0 e^{-t/RC}.
The classic second-order initial value problem is the RLC circuit, with differential equation

    L \frac{d^2 q}{dt^2} + R \frac{dq}{dt} + \frac{q}{C} = 0.

Here, a charged capacitor is connected to a closed circuit, and the initial conditions satisfy

    q(0) = q_0, \qquad \frac{dq}{dt}(0) = 0.
The solution is obtained for the second-order equation by the ansatz

    q(t) = e^{rt},

which results in the following so-called characteristic equation for r:

    L r^2 + R r + \frac{1}{C} = 0.

If the two solutions for r are distinct and real, then the two found exponential solutions can be multiplied by constants and added to form a general solution. The constants can then be determined by requiring the general solution to satisfy the two initial conditions. If the roots of the characteristic equation are complex or degenerate, a general solution to the differential equation can also be found.
7.1.2 Boundary value problems
The dimensionless equation for the temperature y = y(x) along a linear heat-conducting rod of length unity, and with an applied external heat source f(x), is given by the differential equation

    -\frac{d^2 y}{dx^2} = f(x),    (7.2)

with 0 \le x \le 1. Boundary conditions are usually prescribed at the end points of the rod, and here we assume that the temperature at both ends is maintained at zero, so that

    y(0) = 0, \qquad y(1) = 0.

The assignment of boundary conditions at two separate points is called a two-point boundary value problem, in contrast to the initial value problem where the boundary conditions are prescribed at only a single point. Two-point boundary value problems typically require a more sophisticated algorithm for a numerical solution than initial value problems.
Here, the solution of (7.2) can proceed by integration once f(x) is specified. We assume that

    f(x) = x (1 - x),

so that the maximum of the heat source occurs in the center of the rod, and the source goes to zero at the ends.

The differential equation can then be written as

    \frac{d^2 y}{dx^2} = -x (1 - x).
The first integration results in

    \frac{dy}{dx} = \int (x^2 - x) \, dx = \frac{x^3}{3} - \frac{x^2}{2} + c_1,
where c_1 is the first integration constant. Integrating again,

    y(x) = \int \left( \frac{x^3}{3} - \frac{x^2}{2} + c_1 \right) dx = \frac{x^4}{12} - \frac{x^3}{6} + c_1 x + c_2,
where c_2 is the second integration constant. The two integration constants are determined by the boundary conditions. At x = 0, we have

    0 = c_2,

and at x = 1, we have

    0 = \frac{1}{12} - \frac{1}{6} + c_1,
so that c_1 = 1/12. Our solution is therefore

    y(x) = \frac{x^4}{12} - \frac{x^3}{6} + \frac{x}{12} = \frac{1}{12} x (1 - x) (1 + x - x^2).

The temperature of the rod is maximum at x = 1/2 and goes smoothly to zero at the ends.
7.1.3 Eigenvalue problem
The classic eigenvalue problem obtained by solving the wave equation by separation of variables is given by

    \frac{d^2 y}{dx^2} + \lambda^2 y = 0,

with the two-point boundary conditions y(0) = 0 and y(1) = 0. Notice that y(x) = 0 satisfies both the differential equation and the boundary conditions. Other nonzero solutions for y = y(x) are possible only for certain discrete values of \lambda. These values of \lambda are called the eigenvalues of the differential equation.
We proceed by first finding the general solution to the differential equation. It is easy to see that this solution is

    y(x) = A \cos \lambda x + B \sin \lambda x.

Imposing the first boundary condition at x = 0, we obtain

    A = 0.

The second boundary condition at x = 1 results in

    B \sin \lambda = 0.

Since we are searching for a solution where y = y(x) is not identically zero, we must have

    \lambda = \pi, 2\pi, 3\pi, \ldots.
The corresponding negative values of \lambda are also solutions, but their inclusion only changes the corresponding values of the unknown constants B_n. A linear superposition of all the solutions results in the general solution

    y(x) = \sum_{n=1}^{\infty} B_n \sin n\pi x.

For each eigenvalue n\pi, we say there is a corresponding eigenfunction \sin n\pi x. When the differential equation cannot be solved analytically, a numerical method should be able to solve for both the eigenvalues and eigenfunctions.
7.2 Numerical methods: initial value problem
We begin with the simple Euler method, then discuss the more sophisticated Runge-Kutta methods, and conclude with the Dormand-Prince method, as implemented in the MATLAB function ode45.m. Our differential equations are for x = x(t), where the time t is the independent variable, and we will make use of the notation \dot{x} = dx/dt. This notation is still widely used by physicists and descends directly from the notation originally used by Newton.
7.2.1 Euler method
The Euler method is the most straightforward method to integrate a differential equation. Consider the first-order differential equation

    \dot{x} = f(t, x),    (7.3)

with the initial condition x(0) = x_0. Define t_n = n \Delta t and x_n = x(t_n). A Taylor series expansion of x_{n+1} results in

    x_{n+1} = x(t_n + \Delta t)
            = x(t_n) + \Delta t \, \dot{x}(t_n) + O(\Delta t^2)
            = x(t_n) + \Delta t \, f(t_n, x_n) + O(\Delta t^2).

The Euler Method is therefore written as

    x_{n+1} = x_n + \Delta t \, f(t_n, x_n).
We say that the Euler method steps forward in time using a time-step \Delta t, starting from the initial value x_0 = x(0). The local error of the Euler Method is O(\Delta t^2). The global error, however, incurred when integrating to a time T, is a factor of 1/\Delta t larger and is given by O(\Delta t). It is therefore customary to call the Euler Method a first-order method.
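A minimal MATLAB sketch of the Euler method, applied to the test equation \dot{x} = -x with x(0) = 1 (our own choice of example, with exact solution x = e^{-t}):

    f = @(t, x) -x;                    % test equation xdot = -x, x(0) = 1
    dt = 0.01;  n = round(1/dt);       % integrate to time T = 1
    x = zeros(n+1, 1);  x(1) = 1;
    for i = 1:n
        t = (i-1)*dt;
        x(i+1) = x(i) + dt*f(t, x(i)); % Euler step
    end
    abs(x(end) - exp(-1))              % global error, proportional to dt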
7.2.2 Modified Euler method
This method is of a type that is called a predictor-corrector method. It is also the first of what are called Runge-Kutta methods. As before, we want to solve (7.3). The idea is to average the value of \dot{x} at the beginning and end of the time step. That is, we would like to modify the Euler method and write

    x_{n+1} = x_n + \frac{1}{2} \Delta t \left( f(t_n, x_n) + f(t_n + \Delta t, x_{n+1}) \right).
The obvious problem with this formula is that the unknown value x_{n+1} appears on the right-hand side. We can, however, estimate this value, in what is called the predictor step. For the predictor step, we use the Euler method to find

    x_{n+1}^p = x_n + \Delta t \, f(t_n, x_n).

The corrector step then becomes

    x_{n+1} = x_n + \frac{1}{2} \Delta t \left( f(t_n, x_n) + f(t_n + \Delta t, x_{n+1}^p) \right).

The Modified Euler Method can be rewritten in the following form that we will later identify as a Runge-Kutta method:

    k_1 = \Delta t \, f(t_n, x_n),
    k_2 = \Delta t \, f(t_n + \Delta t, x_n + k_1),
    x_{n+1} = x_n + \frac{1}{2} (k_1 + k_2).    (7.4)
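The same test problem as before with the predictor-corrector loop replacing the Euler step (a sketch; halving dt should now reduce the error by a factor of four):

    f = @(t, x) -x;
    dt = 0.01;  n = round(1/dt);
    x = zeros(n+1, 1);  x(1) = 1;
    for i = 1:n
        t = (i-1)*dt;
        k1 = dt*f(t, x(i));            % predictor (Euler) increment
        k2 = dt*f(t + dt, x(i) + k1);  % slope at the predicted end point
        x(i+1) = x(i) + (k1 + k2)/2;   % corrector: average of the two slopes
    end
    abs(x(end) - exp(-1))              % global error, proportional to dt^2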
7.2.3 Second-order Runge-Kutta methods
We now derive all second-order Runge-Kutta methods. Higher-order methods can be similarly derived, but require substantially more algebra.

We consider the differential equation given by (7.3). A general second-order Runge-Kutta method may be written in the form

    k_1 = \Delta t \, f(t_n, x_n),
    k_2 = \Delta t \, f(t_n + \alpha \Delta t, x_n + \beta k_1),
    x_{n+1} = x_n + c k_1 + d k_2,    (7.5)

with \alpha, \beta, c and d constants that define the particular second-order Runge-Kutta method. These constants are to be constrained by setting the local error of the second-order Runge-Kutta method to be O(\Delta t^3). Intuitively, we might guess that two of the constraints will be c + d = 1 and \alpha = \beta.
We compute the Taylor series of x_{n+1} directly, and from the Runge-Kutta method, and require them to be the same to order \Delta t^2. First, we compute the Taylor series of x_{n+1}. We have

    x_{n+1} = x(t_n + \Delta t) = x(t_n) + \Delta t \, \dot{x}(t_n) + \frac{1}{2} (\Delta t)^2 \ddot{x}(t_n) + O(\Delta t^3).

Now,

    \dot{x}(t_n) = f(t_n, x_n).

The second derivative is more complicated and requires partial derivatives. We have

    \ddot{x}(t_n) = \frac{d}{dt} f(t, x(t)) \Big]_{t = t_n}
                  = f_t(t_n, x_n) + \dot{x}(t_n) f_x(t_n, x_n)
                  = f_t(t_n, x_n) + f(t_n, x_n) f_x(t_n, x_n).
Therefore,

    x_{n+1} = x_n + \Delta t \, f(t_n, x_n) + \frac{1}{2} (\Delta t)^2 \left( f_t(t_n, x_n) + f(t_n, x_n) f_x(t_n, x_n) \right).    (7.6)
Second, we compute x_{n+1} from the Runge-Kutta method given by (7.5). Substituting in k_1 and k_2, we have

    x_{n+1} = x_n + c \Delta t \, f(t_n, x_n) + d \Delta t \, f\left( t_n + \alpha \Delta t, x_n + \beta \Delta t \, f(t_n, x_n) \right).

We Taylor series expand using

    f\left( t_n + \alpha \Delta t, x_n + \beta \Delta t \, f(t_n, x_n) \right)
        = f(t_n, x_n) + \alpha \Delta t \, f_t(t_n, x_n) + \beta \Delta t \, f(t_n, x_n) f_x(t_n, x_n) + O(\Delta t^2).

The Runge-Kutta formula is therefore

    x_{n+1} = x_n + (c + d) \Delta t \, f(t_n, x_n)
            + (\Delta t)^2 \left( \alpha d \, f_t(t_n, x_n) + \beta d \, f(t_n, x_n) f_x(t_n, x_n) \right) + O(\Delta t^3).    (7.7)
Comparing (7.6) and (7.7), we find

    c + d = 1, \qquad \alpha d = 1/2, \qquad \beta d = 1/2.

There are three equations for four parameters, and there exists a family of second-order Runge-Kutta methods.
The Modified Euler Method given by (7.4) corresponds to \alpha = \beta = 1 and c = d = 1/2. Another second-order Runge-Kutta method, called the Midpoint Method, corresponds to \alpha = \beta = 1/2, c = 0 and d = 1. This method is written as

    k_1 = \Delta t \, f(t_n, x_n),
    k_2 = \Delta t \, f\left( t_n + \frac{1}{2} \Delta t, x_n + \frac{1}{2} k_1 \right),
    x_{n+1} = x_n + k_2.
7.2.4 Higher-order Runge-Kutta methods
The general second-order Runge-Kutta method was given by (7.5). The general form of the third-order method is given by

    k_1 = \Delta t \, f(t_n, x_n),
    k_2 = \Delta t \, f(t_n + \alpha \Delta t, x_n + \beta k_1),
    k_3 = \Delta t \, f(t_n + \gamma \Delta t, x_n + \delta k_1 + \epsilon k_2),
    x_{n+1} = x_n + c k_1 + d k_2 + e k_3.

The following constraints on the constants can be guessed: \alpha = \beta, \gamma = \delta + \epsilon, and c + d + e = 1. Remaining constraints need to be derived.
The fourth-order method has k_1, k_2, k_3 and k_4. The fifth-order method requires up to k_6. The table below gives the order of the method and the number of stages required.

    order             2  3  4  5  6  7  8
    minimum # stages  2  3  4  6  7  9  11

Because of the jump in the number of stages required between the fourth-order and fifth-order method, the fourth-order Runge-Kutta method has some appeal. The general fourth-order method starts with 13 constants, and one then finds 11 constraints. A particularly simple fourth-order method that has been widely used is given by
    k_1 = \Delta t \, f(t_n, x_n),
    k_2 = \Delta t \, f\left( t_n + \frac{1}{2} \Delta t, x_n + \frac{1}{2} k_1 \right),
    k_3 = \Delta t \, f\left( t_n + \frac{1}{2} \Delta t, x_n + \frac{1}{2} k_2 \right),
    k_4 = \Delta t \, f\left( t_n + \Delta t, x_n + k_3 \right),
    x_{n+1} = x_n + \frac{1}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right).
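In MATLAB, the loop body for this method is a direct transcription; here is a sketch on the same test equation \dot{x} = -x as before:

    f = @(t, x) -x;
    dt = 0.1;  n = round(1/dt);
    x = zeros(n+1, 1);  x(1) = 1;
    for i = 1:n
        t = (i-1)*dt;
        k1 = dt*f(t, x(i));
        k2 = dt*f(t + dt/2, x(i) + k1/2);
        k3 = dt*f(t + dt/2, x(i) + k2/2);
        k4 = dt*f(t + dt, x(i) + k3);
        x(i+1) = x(i) + (k1 + 2*k2 + 2*k3 + k4)/6;
    end
    abs(x(end) - exp(-1))              % global error, proportional to dt^4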
7.2.5 Adaptive Runge-Kutta Methods
As in adaptive integration, it is useful to devise an ode integrator that automatically finds the appropriate \Delta t. The Dormand-Prince Method, which is implemented in MATLAB's ode45.m, finds the appropriate step size by comparing the results of a fifth-order and fourth-order method. It requires six function evaluations per time step, and constructs both a fifth-order and a fourth-order method from these function evaluations.
Suppose the fifth-order method finds x_{n+1} with local error O(\Delta t^6), and the fourth-order method finds x'_{n+1} with local error O(\Delta t^5). Let \epsilon be the desired error tolerance of the method, and let e be the actual error. We can estimate e from the difference between the fifth- and fourth-order methods; that is,

    e = |x_{n+1} - x'_{n+1}|.

Now e is of O(\Delta t^5), where \Delta t is the step size taken. Let \Delta\tau be the estimated step size required to get the desired error \epsilon. Then we have

    e/\epsilon = (\Delta t)^5 / (\Delta\tau)^5,

or solving for \Delta\tau,

    \Delta\tau = \Delta t \left( \frac{\epsilon}{e} \right)^{1/5}.
On the one hand, if e < \epsilon, then we accept x_{n+1} and do the next time step using the larger value of \Delta\tau. On the other hand, if e > \epsilon, then we reject the integration step and redo the time step using the smaller value of \Delta\tau. In practice, one usually increases the time step slightly less and decreases the time step slightly more to prevent the waste of too many failed time steps.
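In code, the accept/reject logic might look like the following sketch; the safety factor S and the caps on growth and shrinkage are illustrative choices of our own, not the particular values used by ode45.m:

    S = 0.9;                           % safety factor, a common choice
    dtau = dt*(epsilon/e)^(1/5);       % estimated step for the desired error
    if e < epsilon
        dt = min(S*dtau, 2*dt);        % accept the step; enlarge dt cautiously
    else
        dt = max(S*dtau, dt/2);        % reject the step; retry with smaller dt
    end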
7.2.6 System of differential equations
Our numerical methods can be easily adapted to solve higher-order differential equations, or equivalently, a system of differential equations. First, we show how a second-order differential equation can be reduced to two first-order equations. Consider

    \ddot{x} = f(t, x, \dot{x}).

This second-order equation can be rewritten as two first-order equations by defining u = \dot{x}. We then have the system

    \dot{x} = u,
    \dot{u} = f(t, x, u).

This trick also works for higher-order equations. For another example, the third-order equation

    \dddot{x} = f(t, x, \dot{x}, \ddot{x})

can be written as

    \dot{x} = u,
    \dot{u} = v,
    \dot{v} = f(t, x, u, v).
Now, we show how to generalize Runge-Kutta methods to a system of differential equations. As an example, consider the following system of two odes,

    \dot{x} = f(t, x, y),
    \dot{y} = g(t, x, y),

with the initial conditions x(0) = x_0 and y(0) = y_0. The generalization of the commonly used fourth-order Runge-Kutta method would be
    k_1 = \Delta t \, f(t_n, x_n, y_n),
    l_1 = \Delta t \, g(t_n, x_n, y_n),
    k_2 = \Delta t \, f\left( t_n + \frac{1}{2} \Delta t, x_n + \frac{1}{2} k_1, y_n + \frac{1}{2} l_1 \right),
    l_2 = \Delta t \, g\left( t_n + \frac{1}{2} \Delta t, x_n + \frac{1}{2} k_1, y_n + \frac{1}{2} l_1 \right),
    k_3 = \Delta t \, f\left( t_n + \frac{1}{2} \Delta t, x_n + \frac{1}{2} k_2, y_n + \frac{1}{2} l_2 \right),
    l_3 = \Delta t \, g\left( t_n + \frac{1}{2} \Delta t, x_n + \frac{1}{2} k_2, y_n + \frac{1}{2} l_2 \right),
    k_4 = \Delta t \, f\left( t_n + \Delta t, x_n + k_3, y_n + l_3 \right),
    l_4 = \Delta t \, g\left( t_n + \Delta t, x_n + k_3, y_n + l_3 \right),
    x_{n+1} = x_n + \frac{1}{6} \left( k_1 + 2 k_2 + 2 k_3 + k_4 \right),
    y_{n+1} = y_n + \frac{1}{6} \left( l_1 + 2 l_2 + 2 l_3 + l_4 \right).
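As a sketch, here is this scheme applied to the simple harmonic oscillator \ddot{x} = -x, rewritten as \dot{x} = y, \dot{y} = -x with x(0) = 1 and y(0) = 0 (our own example; the variables m1, ..., m4 play the role of l_1, ..., l_4 to avoid confusing the letter l with the digit 1):

    f = @(t, x, y) y;                  % xdot = y
    g = @(t, x, y) -x;                 % ydot = -x
    n = 63;  dt = 2*pi/n;              % integrate over one period
    x = 1;  y = 0;
    for i = 1:n
        t = (i-1)*dt;
        k1 = dt*f(t, x, y);            m1 = dt*g(t, x, y);
        k2 = dt*f(t + dt/2, x + k1/2, y + m1/2);
        m2 = dt*g(t + dt/2, x + k1/2, y + m1/2);
        k3 = dt*f(t + dt/2, x + k2/2, y + m2/2);
        m3 = dt*g(t + dt/2, x + k2/2, y + m2/2);
        k4 = dt*f(t + dt, x + k3, y + m3);
        m4 = dt*g(t + dt, x + k3, y + m3);
        x = x + (k1 + 2*k2 + 2*k3 + k4)/6;
        y = y + (m1 + 2*m2 + 2*m3 + m4)/6;
    end
    abs(x - 1)                         % x should return to 1 after one period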
7.3 Numerical methods: boundary value problem
7.3.1 Finite difference method
We consider first the differential equation

    -\frac{d^2 y}{dx^2} = f(x), \qquad 0 \le x \le 1,    (7.8)

with two-point boundary conditions

    y(0) = A, \qquad y(1) = B.
Equation (7.8) can be solved by quadrature, but here we will demonstrate a numerical solution using a finite difference method.
We begin by discussing how to numerically approximate derivatives. Consider the Taylor series approximation for y(x + h) and y(x - h), given by

    y(x + h) = y(x) + h y'(x) + \frac{1}{2} h^2 y''(x) + \frac{1}{6} h^3 y'''(x) + \frac{1}{24} h^4 y''''(x) + \ldots,
    y(x - h) = y(x) - h y'(x) + \frac{1}{2} h^2 y''(x) - \frac{1}{6} h^3 y'''(x) + \frac{1}{24} h^4 y''''(x) + \ldots.
The standard definitions of the derivatives give the first-order approximations

    y'(x) = \frac{y(x + h) - y(x)}{h} + O(h), \qquad y'(x) = \frac{y(x) - y(x - h)}{h} + O(h).

The more widely-used second-order approximation is called the central difference approximation and is given by

    y'(x) = \frac{y(x + h) - y(x - h)}{2h} + O(h^2).
The finite difference approximation to the second derivative can be found from considering

    y(x + h) + y(x - h) = 2 y(x) + h^2 y''(x) + \frac{1}{12} h^4 y''''(x) + \ldots,

from which we find

    y''(x) = \frac{y(x + h) - 2 y(x) + y(x - h)}{h^2} + O(h^2).
Sometimes a second-order method is required for x on the boundaries of the domain. For a boundary point on the left, a second-order forward difference method requires the additional Taylor series

    y(x + 2h) = y(x) + 2h y'(x) + 2 h^2 y''(x) + \frac{4}{3} h^3 y'''(x) + \ldots.

We combine the Taylor series for y(x + h) and y(x + 2h) to eliminate the term proportional to h^2:

    y(x + 2h) - 4 y(x + h) = -3 y(x) - 2h y'(x) + O(h^3).

Therefore,

    y'(x) = \frac{-3 y(x) + 4 y(x + h) - y(x + 2h)}{2h} + O(h^2).

For a boundary point on the right, we send h → -h to find

    y'(x) = \frac{3 y(x) - 4 y(x - h) + y(x - 2h)}{2h} + O(h^2).
We now write a finite difference scheme to solve (7.8). We discretize x by defining x_i = ih, i = 0, 1, \ldots, n + 1. Since x_{n+1} = 1, we have h = 1/(n + 1). The functions y(x) and f(x) are discretized as y_i = y(x_i) and f_i = f(x_i). The second-order differential equation (7.8) then becomes for the interior points of the domain

    -y_{i-1} + 2 y_i - y_{i+1} = h^2 f_i, \qquad i = 1, 2, \ldots, n,

with the boundary conditions y_0 = A and y_{n+1} = B. We therefore have a linear system of equations to solve. The first and nth equation contain the boundary conditions and are given by

    2 y_1 - y_2 = h^2 f_1 + A,
    -y_{n-1} + 2 y_n = h^2 f_n + B.
The second and third equations, etc., are

    -y_1 + 2 y_2 - y_3 = h^2 f_2,
    -y_2 + 2 y_3 - y_4 = h^2 f_3,
    \ldots

In matrix form, we have

    \begin{pmatrix}
     2 & -1 &  0 &  0 & \cdots &  0 &  0 &  0 \\
    -1 &  2 & -1 &  0 & \cdots &  0 &  0 &  0 \\
     0 & -1 &  2 & -1 & \cdots &  0 &  0 &  0 \\
    \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
     0 &  0 &  0 &  0 & \cdots & -1 &  2 & -1 \\
     0 &  0 &  0 &  0 & \cdots &  0 & -1 &  2
    \end{pmatrix}
    \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{n-1} \\ y_n \end{pmatrix}
    =
    \begin{pmatrix} h^2 f_1 + A \\ h^2 f_2 \\ h^2 f_3 \\ \vdots \\ h^2 f_{n-1} \\ h^2 f_n + B \end{pmatrix}.
The matrix is tridiagonal, and a numerical solution by Gaussian elimination can be done quickly. The matrix itself is easily constructed using the MATLAB functions diag.m and ones.m. As excerpted from the MATLAB help page, the function call ones(m,n) returns an m-by-n matrix of ones, and the function call diag(v,k), when v is a vector with n components, is a square matrix of order n+abs(k) with the elements of v on the k-th diagonal: k = 0 is the main diagonal, k > 0 is above the main diagonal and k < 0 is below the main diagonal. The n-by-n matrix above can be constructed by the MATLAB code

    M=diag(-ones(n-1,1),-1)+diag(2*ones(n,1),0)+diag(-ones(n-1,1),1);

The right-hand side, provided f is a given n-by-1 vector, can be constructed by the MATLAB code

    b=h^2*f; b(1)=b(1)+A; b(n)=b(n)+B;

and the solution for y is given by the MATLAB code

    y=M\b;
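Putting the pieces together, the following script is a sketch that solves the heat-conduction problem of Section 7.1.2, -y'' = x(1 - x) with y(0) = y(1) = 0, and compares with the exact solution found there:

    n = 50;  h = 1/(n+1);  A = 0;  B = 0;
    x = h*(1:n)';                      % interior grid points
    f = x.*(1 - x);                    % heat source
    M = diag(-ones(n-1,1),-1) + diag(2*ones(n,1),0) + diag(-ones(n-1,1),1);
    b = h^2*f;  b(1) = b(1) + A;  b(n) = b(n) + B;
    y = M\b;
    yexact = x.*(1 - x).*(1 + x - x.^2)/12;
    max(abs(y - yexact))               % discretization error, O(h^2)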
7.3.2 Shooting method
The finite difference method can solve linear odes. For a general ode of the form

    \frac{d^2 y}{dx^2} = f(x, y, dy/dx),

with y(0) = A and y(1) = B, we use a shooting method. First, we formulate the ode as an initial value problem. We have

    \frac{dy}{dx} = z,
    \frac{dz}{dx} = f(x, y, z).
The initial condition y(0) = A is known, but the second initial condition z(0) = b is unknown. Our goal is to determine b such that y(1) = B.

In fact, this is a root-finding problem for an appropriately defined function. We define the function F = F(b) such that

    F(b) = y(1) - B.

In other words, F(b) is the difference between the value of y(1) obtained from integrating the differential equations using the initial condition z(0) = b, and B. Our root-finding routine will want to solve F(b) = 0. (The method is called shooting because the slope of the solution curve for y = y(x) at x = 0 is given by b, and the solution hits the value y(1) at x = 1. This looks like pointing a gun and trying to shoot the target, which is B.)
To determine the value of b that solves F(b) = 0, we iterate using the Secant method, given by

    b_{n+1} = b_n - F(b_n) \frac{b_n - b_{n-1}}{F(b_n) - F(b_{n-1})}.

We need to start with two initial guesses for b, solving the ode for the two corresponding values of y(1). Then the Secant Method will give us the next value of b to try, and we iterate until |y(1) - B| < tol, where tol is some specified tolerance for the error.
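A minimal sketch of the whole procedure, applied to the problem -y'' = x(1 - x), y(0) = y(1) = 0 of Section 7.1.2 (so that the exact slope is y'(0) = 1/12). The inner integrator here is the simple Euler method, the script-with-local-function form requires a recent MATLAB, and all names are our own:

    b0 = 0;  b1 = 1;                   % two initial guesses for z(0)
    F0 = shootF(b0);  F1 = shootF(b1);
    while abs(F1) > 1e-8
        b2 = b1 - F1*(b1 - b0)/(F1 - F0);  % Secant Method update
        b0 = b1;  F0 = F1;
        b1 = b2;  F1 = shootF(b1);
    end
    b1                                 % approximates y'(0) = 1/12

    function F = shootF(b)             % integrate the IVP; return y(1) - B
    B = 0;  n = 1000;  h = 1/n;
    y = 0;  z = b;                     % y(0) = A = 0, z(0) = b
    for i = 1:n
        x = (i-1)*h;
        y = y + h*z;                   % Euler steps for y' = z,
        z = z - h*x*(1 - x);           % z' = -x(1-x)
    end
    F = y - B;
    end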
7.4 Numerical methods: eigenvalue problem
For illustrative purposes, we develop our numerical methods for what is perhaps the simplest eigenvalue ode. With y = y(x) and 0 \le x \le 1, this simple ode is given by

    y'' + \lambda^2 y = 0.    (7.9)

To solve (7.9) numerically, we will develop both a finite difference method and a shooting method. Furthermore, we will show how to solve (7.9) with homogeneous boundary conditions on either the function y or its derivative y'.
7.4.1 Finite difference method
We first consider solving (7.9) with the homogeneous boundary conditions y(0) = y(1) = 0. In this case, we have already shown that the eigenvalues of (7.9) are given by \lambda = \pi, 2\pi, 3\pi, \ldots.

With n interior points, we have x_i = ih for i = 0, \ldots, n + 1, and h = 1/(n + 1). Using the centered-finite-difference approximation for the second derivative, (7.9) becomes

    y_{i-1} - 2 y_i + y_{i+1} = -h^2 \lambda^2 y_i.    (7.10)

Applying the boundary conditions y_0 = y_{n+1} = 0, the first equation with i = 1, and the last equation with i = n, are given by

    -2 y_1 + y_2 = -h^2 \lambda^2 y_1,
    y_{n-1} - 2 y_n = -h^2 \lambda^2 y_n.
The remaining n - 2 equations are given by (7.10) for i = 2, \ldots, n - 1.

It is of interest to see how the solution develops with increasing n. The smallest possible value is n = 1, corresponding to a single interior point, and since h = 1/2 we have

    -2 y_1 = -\frac{1}{4} \lambda^2 y_1,

so that \lambda^2 = 8, or \lambda = 2\sqrt{2} = 2.8284. This is to be compared to the first eigenvalue which is \lambda = \pi. When n = 2, we have h = 1/3, and the resulting two equations written in matrix form are given by

    \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix}
    \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
    = -\frac{1}{9} \lambda^2
    \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.
This is a matrix eigenvalue problem with the eigenvalue given by \mu = -\lambda^2/9. The solution for \mu is arrived at by solving

    \det \begin{pmatrix} -2 - \mu & 1 \\ 1 & -2 - \mu \end{pmatrix} = 0,

with resulting quadratic equation

    (2 + \mu)^2 - 1 = 0.

The solutions are \mu = -1, -3, and since \lambda = 3\sqrt{-\mu}, we have \lambda = 3, 3\sqrt{3} = 5.1962. These two eigenvalues serve as rough approximations to the first two eigenvalues \pi and 2\pi.
With A an n-by-n matrix, the MATLAB variable mu=eig(A) is a vector containing the n eigenvalues of the matrix A. The built-in function eig.m can therefore be used to find the eigenvalues. With n grid points, the smaller eigenvalues will converge more rapidly than the larger ones.
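For example, the eigenvalues of the finite-difference problem (7.10) with y(0) = y(1) = 0 can be computed with the following sketch:

    n = 100;  h = 1/(n+1);
    M = diag(ones(n-1,1),-1) + diag(-2*ones(n,1),0) + diag(ones(n-1,1),1);
    mu = sort(eig(M), 'descend');      % mu = -h^2*lambda^2
    lambda = sqrt(-mu)/h;
    [lambda(1:3), pi*(1:3)']           % compare with pi, 2*pi, 3*pi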
We can also consider boundary conditions on the derivative, or mixed boundary conditions. For example, consider the mixed boundary conditions given by y(0) = 0 and y'(1) = 0. The eigenvalues of (7.9) can then be determined analytically to be \lambda_n = (n - 1/2)\pi, with n a natural number.
The difficulty we now face is how to implement a boundary condition on the derivative. Our computation of y'' uses a second-order method, and we would like the computation of the first derivative to also be second order. The condition y'(1) = 0 occurs on the right-most boundary, and we can make use of the second-order backward-difference approximation to the derivative that we have previously derived. This finite-difference approximation for y'(1) can be written as

    y'_{n+1} = \frac{3 y_{n+1} - 4 y_n + y_{n-1}}{2h}.    (7.11)

Now, the nth finite-difference equation was given by

    y_{n-1} - 2 y_n + y_{n+1} = -h^2 \lambda^2 y_n,

and we now replace the value y_{n+1} using (7.11); that is,

    y_{n+1} = \frac{1}{3} \left( 2h y'_{n+1} + 4 y_n - y_{n-1} \right).
Implementing the boundary condition y'_{n+1} = 0, we have

    y_{n+1} = \frac{4}{3} y_n - \frac{1}{3} y_{n-1}.

Therefore, the nth finite-difference equation becomes

    \frac{2}{3} y_{n-1} - \frac{2}{3} y_n = -h^2 \lambda^2 y_n.

For example, when n = 2, the finite difference equations become

    \begin{pmatrix} -2 & 1 \\ \frac{2}{3} & -\frac{2}{3} \end{pmatrix}
    \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
    = -\frac{1}{9} \lambda^2
    \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.
The eigenvalues of the matrix are now the solution of

    (\mu + 2) \left( \mu + \frac{2}{3} \right) - \frac{2}{3} = 0,

or

    3 \mu^2 + 8 \mu + 2 = 0.

Therefore, \mu = (-4 \pm \sqrt{10})/3, and we find \lambda = 1.5853, 4.6354, which are approximations to \pi/2 and 3\pi/2.
7.4.2 Shooting method
We apply the shooting method to solve (7.9) with boundary conditions y(0) = y(1) = 0. The initial value problem to solve is

    y' = z,
    z' = -\lambda^2 y,

with known boundary condition y(0) = 0 and an unknown boundary condition on y'(0). In fact, any nonzero boundary condition on y'(0) can be chosen: the differential equation is linear and the boundary conditions are homogeneous, so that if y(x) is an eigenfunction then so is A y(x). What we need to find here is the value of \lambda such that y(1) = 0. In other words, choosing y'(0) = 1, we solve
    F(\lambda) = 0,    (7.12)

where F(\lambda) = y(1), obtained by solving the initial value problem. Again, an iteration for the roots of F(\lambda) can be done using the Secant Method. For the eigenvalue problem, there are an infinite number of roots, and the choice of the two initial guesses for \lambda will then determine to which root the iteration will converge.

For this simple problem, it is possible to write explicitly the equation F(\lambda) = 0. The general solution to (7.9) is given by

    y(x) = A \cos(\lambda x) + B \sin(\lambda x).

The initial condition y(0) = 0 yields A = 0. The initial condition y'(0) = 1 yields

    B = 1/\lambda.
Therefore, the solution to the initial value problem is

    y(x) = \frac{\sin(\lambda x)}{\lambda}.

The function F(\lambda) = y(1) is therefore given by

    F(\lambda) = \frac{\sin \lambda}{\lambda},

and the roots occur when \lambda = \pi, 2\pi, \ldots.

If the boundary conditions were y(0) = 0 and y'(1) = 0, for example, then we would simply redefine F(\lambda) = y'(1). We would then have

    F(\lambda) = \frac{\cos \lambda}{\lambda},

and the roots occur when \lambda = \pi/2, 3\pi/2, \ldots.
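As a final sketch, here is the Secant Method applied to F(\lambda). For brevity we use the closed form F(\lambda) = \sin \lambda / \lambda derived above; in practice F(\lambda) would instead be evaluated by numerically integrating the initial value problem:

    F = @(lam) sin(lam)/lam;
    lam0 = 3;  lam1 = 3.2;             % two guesses near the first root
    while abs(F(lam1)) > 1e-12
        lam2 = lam1 - F(lam1)*(lam1 - lam0)/(F(lam1) - F(lam0));
        lam0 = lam1;  lam1 = lam2;
    end
    lam1                               % converges to pi = 3.14159...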