
WELCOME TO MATH 745 – TOPICS IN NUMERICAL ANALYSIS

Instructor: Dr. Bartosz Protas

Department of Mathematics & Statistics

Email: [email protected]

Office HH 326, Ext. 24116

Course Webpage: http://www.math.mcmaster.ca/~bprotas/MATH745


INTRODUCTION

• What is NUMERICAL ANALYSIS?

– Development of COMPUTATIONAL ALGORITHMS for the solution of problems in algebra and analysis

– Use of methods of MATHEMATICAL ANALYSIS to determine a priori properties of these algorithms, such as:

∗ ACCURACY,
∗ STABILITY,
∗ CONVERGENCE

– Application of these algorithms to solve actual problems arising in practice


PART I

Finite Differences

A Review


DIFFERENTIATION VIA FINITE DIFFERENCES
BASICS (I)

• ASSUMPTIONS:

– f : Ω → R is a smooth function, i.e., it is continuously differentiable sufficiently many times,

– the domain Ω = [a,b] is discretized with a uniform grid x_1 = a, …, x_N = b, such that x_{j+1} − x_j = h_j = h (extensions to nonuniform grids are straightforward)

• PROBLEM — given the nodal values of the function f, i.e., f_j = f(x_j), j = 1, …, N, approximate the nodal values of the derivative df/dx(x_j) = f'(x_j), j = 1, …, N

Note: Details can be found in any standard textbook on elementary numerical analysis, e.g., K. Atkinson and W. Han, “Elementary Numerical Analysis”, Wiley (2004).


DIFFERENTIATION VIA FINITE DIFFERENCES
BASICS (II)

• The simplest approach — derivation of finite difference formulae via TAYLOR SERIES EXPANSIONS

$$
f_{j+1} = f_j + (x_{j+1}-x_j)\,f'_j + \frac{(x_{j+1}-x_j)^2}{2!}\,f''_j + \frac{(x_{j+1}-x_j)^3}{3!}\,f'''_j + \dots
        = f_j + h\,f'_j + \frac{h^2}{2}\,f''_j + \frac{h^3}{6}\,f'''_j + \dots
$$

• Rearrange the expansion

$$
f'_j = \frac{f_{j+1}-f_j}{h} - \frac{h}{2}\,f''_j + \dots = \frac{f_{j+1}-f_j}{h} + O(h),
$$

where O(h^α) denotes the contribution from all terms with powers of h greater than or equal to α (here α = 1).

• Neglecting the O(h) terms, we obtain a FIRST-ORDER FORWARD-DIFFERENCE FORMULA (a quick numerical check is sketched below):

$$
\left(\frac{\delta f}{\delta x}\right)_j = \frac{f_{j+1}-f_j}{h}
$$
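As a quick sanity check of the forward-difference formula above, here is a minimal sketch in Python/NumPy (the course materials use MATLAB; the test function sin and the point x = 1 are my own choices): the error should shrink roughly linearly with h.

```python
import numpy as np

# First-order forward difference applied to f(x) = sin(x) at x = 1.
f, dfdx, x = np.sin, np.cos, 1.0
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    fd = (f(x + h) - f(x)) / h        # (f_{j+1} - f_j) / h
    err = abs(fd - dfdx(x))           # compare with the exact derivative
    print(f"h = {h:.0e}   error = {err:.3e}   error/h = {err / h:.3f}")
```

The ratio error/h settles near |f''(1)|/2 ≈ 0.42, consistent with the leading-order error (h/2)|f''_j| discussed on the next slide.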


DIFFERENTIATION VIA FINITE DIFFERENCES
BASICS (III)

• The backward difference formula is obtained by expanding f_{j−1} about x_j and proceeding as before:

$$
f'_j = \frac{f_j - f_{j-1}}{h} + \frac{h}{2}\,f''_j + \dots \;\Longrightarrow\;
\left(\frac{\delta f}{\delta x}\right)_j = \frac{f_j - f_{j-1}}{h}
$$

• The neglected term with the lowest power of h is the LEADING-ORDER APPROXIMATION ERROR, i.e.,

$$
\mathrm{Err} = \left| f'(x_j) - \left(\frac{\delta f}{\delta x}\right)_j \right| \approx C\,h^{\alpha}
$$

• The exponent α of h in the leading-order error represents the ORDER OF ACCURACY OF THE METHOD — it tells how quickly the approximation error vanishes when the resolution is refined

• The actual value of the approximation error depends on the constant C characterizing the function f

• In the examples above Err ≈ (h/2)|f''_j|, hence both methods are FIRST-ORDER ACCURATE


DIFFERENTIATION VIA FINITE DIFFERENCES
HIGHER–ORDER FORMULA (I)

• Consider two expansions:

$$
f_{j+1} = f_j + h\,f'_j + \frac{h^2}{2}\,f''_j + \frac{h^3}{6}\,f'''_j + \dots
$$
$$
f_{j-1} = f_j - h\,f'_j + \frac{h^2}{2}\,f''_j - \frac{h^3}{6}\,f'''_j + \dots
$$

• Subtracting the second from the first:

$$
f_{j+1} - f_{j-1} = 2h\,f'_j + \frac{h^3}{3}\,f'''_j + \dots
$$

• CENTRAL DIFFERENCE FORMULA:

$$
f'_j = \frac{f_{j+1}-f_{j-1}}{2h} - \frac{h^2}{6}\,f'''_j + \dots \;\Longrightarrow\;
\left(\frac{\delta f}{\delta x}\right)_j = \frac{f_{j+1}-f_{j-1}}{2h}
$$

• The leading-order error is (h²/6) f'''_j, thus the method is SECOND-ORDER ACCURATE

• Manipulating four different Taylor series expansions one can obtain a fourth-order central difference formula (both formulas are compared numerically in the sketch below):

$$
\left(\frac{\delta f}{\delta x}\right)_j = \frac{-f_{j+2} + 8 f_{j+1} - 8 f_{j-1} + f_{j-2}}{12h},
\qquad \mathrm{Err} = \frac{h^4}{30}\,f^{(v)}_j
$$
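A small sketch (Python/NumPy; sin at x = 1 is an assumed test case, not from the slides) estimating the observed orders of accuracy of the second-order and fourth-order central formulas from a log–log fit of the error:

```python
import numpy as np

f, dfdx, x = np.sin, np.cos, 1.0
hs = np.array([0.2, 0.1, 0.05])

def central2(h):
    # (f_{j+1} - f_{j-1}) / (2h)
    return (f(x + h) - f(x - h)) / (2 * h)

def central4(h):
    # (-f_{j+2} + 8 f_{j+1} - 8 f_{j-1} + f_{j-2}) / (12h)
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

for name, scheme in [("2nd-order central", central2), ("4th-order central", central4)]:
    errs = np.array([abs(scheme(h) - dfdx(x)) for h in hs])
    order = np.polyfit(np.log(hs), np.log(errs), 1)[0]   # slope of log(err) vs log(h)
    print(f"{name}: observed order ≈ {order:.2f}")
```

The fitted slopes come out close to 2 and 4, matching the leading-order error terms (h²/6) f'''_j and (h⁴/30) f^{(v)}_j.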


DIFFERENTIATION VIA FINITE DIFFERENCES
APPROXIMATION OF THE SECOND DERIVATIVE

• Consider two expansions:

$$
f_{j+1} = f_j + h\,f'_j + \frac{h^2}{2}\,f''_j + \frac{h^3}{6}\,f'''_j + \dots
$$
$$
f_{j-1} = f_j - h\,f'_j + \frac{h^2}{2}\,f''_j - \frac{h^3}{6}\,f'''_j + \dots
$$

• Adding the two expansions:

$$
f_{j+1} + f_{j-1} = 2 f_j + h^2\,f''_j + \frac{h^4}{12}\,f^{(iv)}_j + \dots
$$

• Central difference formula for the second derivative (checked on a cubic in the sketch below):

$$
f''_j = \frac{f_{j+1} - 2 f_j + f_{j-1}}{h^2} - \frac{h^2}{12}\,f^{(iv)}_j + \dots \;\Longrightarrow\;
\left(\frac{\delta^2 f}{\delta x^2}\right)_j = \frac{f_{j+1} - 2 f_j + f_{j-1}}{h^2}
$$

• The leading-order error is (h²/12) f^{(iv)}_j, thus the method is SECOND-ORDER ACCURATE
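Since the leading-order error of this formula involves f^{(iv)}, it should reproduce the second derivative of any cubic exactly, up to round-off. A brief check (Python/NumPy; the cubic and the evaluation point are arbitrary choices of mine):

```python
import numpy as np

# (f_{j+1} - 2 f_j + f_{j-1}) / h^2 has leading error (h^2/12) f^(iv),
# so it is exact for cubics, where f^(iv) = 0.
p = np.poly1d([2.0, -1.0, 0.5, 3.0])     # an arbitrary cubic (highest power first)
d2p = p.deriv(2)                         # its exact second derivative

x, h = 0.7, 0.1
approx = (p(x + h) - 2 * p(x) + p(x - h)) / h**2
print(abs(approx - d2p(x)))              # only round-off remains, well below 1e-12
```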


DIFFERENTIATION VIA FINITE DIFFERENCES
AN ALTERNATIVE APPROACH (I)

• An alternative derivation of a finite-difference scheme:

– Find an N-th order accurate interpolating function p(x) which interpolates the function f(x) at the nodes x_j, j = 1, …, N, i.e., such that p(x_j) = f(x_j), j = 1, …, N

– Differentiate the interpolating function p(x) and evaluate it at the nodes to obtain an approximation of the derivative p'(x_j) ≈ f'(x_j), j = 1, …, N

• Example (reproduced numerically in the sketch below):

– for j = 2, …, N−1, let the interpolant have the form of a quadratic polynomial p_j(x) on [x_{j−1}, x_{j+1}] (Lagrange interpolating polynomial)

$$
p_j(x) = \frac{(x-x_j)(x-x_{j+1})}{2h^2}\,f_{j-1} - \frac{(x-x_{j-1})(x-x_{j+1})}{h^2}\,f_j + \frac{(x-x_{j-1})(x-x_j)}{2h^2}\,f_{j+1}
$$
$$
p'_j(x) = \frac{2x-x_j-x_{j+1}}{2h^2}\,f_{j-1} - \frac{2x-x_{j-1}-x_{j+1}}{h^2}\,f_j + \frac{2x-x_{j-1}-x_j}{2h^2}\,f_{j+1}
$$

– Evaluating at x = x_j we obtain f'(x_j) ≈ p'_j(x_j) = (f_{j+1} − f_{j−1}) / (2h), i.e., the second-order accurate central difference formula
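The derivation above can also be reproduced numerically: fitting the quadratic Lagrange interpolant through the three nodes and differentiating it at x_j recovers the centered-difference weights ∓1/(2h) and 0. A sketch (Python/NumPy; using np.polyfit/np.polyder is just one convenient way to build and differentiate the interpolant):

```python
import numpy as np

h = 0.1
xj = 0.0
nodes = np.array([xj - h, xj, xj + h])

# p'_j(x_j) is linear in the nodal values (f_{j-1}, f_j, f_{j+1}), so feeding in
# unit vectors recovers the finite-difference weights one at a time.
weights = []
for e in np.eye(3):
    p = np.polyfit(nodes, e, deg=2)       # quadratic interpolant through the 3 nodes
    dp = np.polyder(p)                    # differentiate the interpolant
    weights.append(np.polyval(dp, xj))    # evaluate p'(x_j)

print(np.round(weights, 10))              # -> [-5, 0, 5] = [-1/(2h), 0, 1/(2h)]
```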


DIFFERENTIATION VIA FINITE DIFFERENCES
AN ALTERNATIVE APPROACH (II)

• Generalization to higher orders is straightforward

• Example:

– for j = 3, …, N−2, one can use a fourth-order polynomial as the interpolant p_j(x) on [x_{j−2}, x_{j+2}]

– Differentiating with respect to x and evaluating at x = x_j we arrive at the fourth-order accurate finite-difference formula

$$
\left(\frac{\delta f}{\delta x}\right)_j = \frac{-f_{j+2} + 8 f_{j+1} - 8 f_{j-1} + f_{j-2}}{12h},
\qquad \mathrm{Err} = \frac{h^4}{30}\,f^{(v)}_j
$$

• The order of accuracy of the finite-difference formula is one less than the order of the interpolating polynomial

• The set of grid points needed to evaluate a finite-difference formula is called the STENCIL

• In general, higher-order formulas have larger stencils


DIFFERENTIATION VIA FINITE DIFFERENCES
TAYLOR TABLE (I)

• A general method for choosing the coefficients of a finite difference formula to ensure the highest possible order of accuracy

• Example: consider a one-sided finite difference formula ∑_{p=0}^{2} a_p f_{j+p}, where the coefficients a_p, p = 0, 1, 2, are to be determined.

• Form an expression for the approximation error

$$
f'_j - \sum_{p=0}^{2} a_p\, f_{j+p} = \varepsilon
$$

and expand it about x_j in powers of h


DIFFERENTIATION VIA FINITE DIFFERENCES
TAYLOR TABLE (II)

• The expansions can be collected in a Taylor table

$$
\begin{array}{l|cccc}
 & f_j & f'_j & f''_j & f'''_j \\ \hline
f'_j & 0 & 1 & 0 & 0 \\
-a_0 f_j & -a_0 & 0 & 0 & 0 \\
-a_1 f_{j+1} & -a_1 & -a_1 h & -a_1 \frac{h^2}{2} & -a_1 \frac{h^3}{6} \\
-a_2 f_{j+2} & -a_2 & -a_2 (2h) & -a_2 \frac{(2h)^2}{2} & -a_2 \frac{(2h)^3}{6}
\end{array}
$$

– the leftmost column contains the terms present in the expression for the approximation error

– the corresponding rows (multiplied by the entries of the top row) represent the terms obtained from the expansions about x_j

– each column collects terms of the same order in h — the column sums are the contributions to the approximation error at the given order in h

• The coefficients a_p, p = 0, 1, 2, can now be chosen to cancel the contributions to the approximation error with the lowest powers of h


DIFFERENTIATION VIA FINITE DIFFERENCES
TAYLOR TABLE (III)

• Setting the coefficients of the first three terms to zero (this small linear system is solved numerically in the sketch below):

$$
\begin{aligned}
-a_0 - a_1 - a_2 &= 0 \\
1 - a_1 h - a_2 (2h) &= 0 \\
-a_1 \frac{h^2}{2} - a_2 \frac{(2h)^2}{2} &= 0
\end{aligned}
\;\Longrightarrow\;
a_0 = -\frac{3}{2h}, \quad a_1 = \frac{2}{h}, \quad a_2 = -\frac{1}{2h}
$$

• The resulting formula:

$$
\left(\frac{\delta f}{\delta x}\right)_j = \frac{-f_{j+2} + 4 f_{j+1} - 3 f_j}{2h}
$$

• The approximation error — determined by evaluating the first column with non-zero coefficients:

$$
\left( -a_1 \frac{h^3}{6} - a_2 \frac{(2h)^3}{6} \right) f'''_j = \frac{h^2}{3}\,f'''_j
$$

The formula is thus SECOND-ORDER ACCURATE
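As mentioned above, the three Taylor-table conditions form a small linear system; here is a sketch solving it numerically (Python/NumPy, with an arbitrary h) to recover a_0 = −3/(2h), a_1 = 2/h, a_2 = −1/(2h):

```python
import numpy as np

h = 0.1
# Column sums of the Taylor table set to zero (f_j, f'_j and f''_j columns):
#   a0 + a1 + a2           = 0
#   a1*h + a2*(2h)         = 1
#   a1*h^2/2 + a2*(2h)^2/2 = 0
M = np.array([[1.0, 1.0, 1.0],
              [0.0, h, 2 * h],
              [0.0, h**2 / 2, (2 * h)**2 / 2]])
a0, a1, a2 = np.linalg.solve(M, np.array([0.0, 1.0, 0.0]))
print(a0 * 2 * h, a1 * h, a2 * 2 * h)   # -> -3.0  2.0  -1.0, i.e. -3/(2h), 2/h, -1/(2h)
```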


DIFFERENTIATION VIA FINITE DIFFERENCES
AN OPERATOR PERSPECTIVE (I)

• Quick review of the FUNCTIONAL ANALYSIS background

– NORMED SPACES X: there exists ‖·‖ : X → R such that for all x, y ∈ X and scalars λ

‖x‖ ≥ 0, with ‖x‖ = 0 ⇔ x ≡ 0,
‖λx‖ = |λ| ‖x‖,
‖x + y‖ ≤ ‖x‖ + ‖y‖

– Banach spaces

– vector spaces: finite-dimensional (R^N) vs. infinite-dimensional (l^p)

– function spaces (on Ω ⊆ R^N): Lebesgue spaces L^p(Ω), Sobolev spaces W^{p,q}(Ω)

– Hilbert spaces: inner products, orthogonality & projections, bases, etc.

– linear operators: operator norms, functionals, the Riesz representation theorem


DIFFERENTIATION VIA FINITE DIFFERENCES
AN OPERATOR PERSPECTIVE (II)

• Assume that f and f' belong to a function space X; DIFFERENTIATION d/dx : f → f' can then be regarded as a LINEAR OPERATOR d/dx : X → X

• When f and f' are approximated by their nodal values as f = [f_1 f_2 … f_N]^T and f' = [f'_1 f'_2 … f'_N]^T, then the differential operator d/dx can be approximated by a DIFFERENTIATION MATRIX A ∈ R^{N×N} such that f' = A f; how can we determine this matrix?

• Assume for simplicity that the domain Ω is periodic, i.e., f_0 = f_N and f_1 = f_{N+1}; then differentiation with the second-order central difference formula can be represented as the following matrix–vector product (assembled explicitly in the sketch below)

$$
\begin{bmatrix} f'_1 \\ \vdots \\ f'_N \end{bmatrix}
= \frac{1}{h}
\begin{bmatrix}
0 & \frac{1}{2} & & & -\frac{1}{2} \\
-\frac{1}{2} & 0 & \frac{1}{2} & & \\
 & \ddots & \ddots & \ddots & \\
 & & -\frac{1}{2} & 0 & \frac{1}{2} \\
\frac{1}{2} & & & -\frac{1}{2} & 0
\end{bmatrix}
\begin{bmatrix} f_1 \\ \vdots \\ f_N \end{bmatrix}
$$
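A sketch (Python/NumPy) assembling this periodic second-order differentiation matrix explicitly and applying it to a periodic test function (sin on [0, 2π), my own choice); the maximum error should be of size roughly h²/6:

```python
import numpy as np

N = 64
h = 2 * np.pi / N
x = np.arange(N) * h                      # uniform periodic grid on [0, 2*pi)

# Circulant 2nd-order centered-difference matrix: (A f)_j = (f_{j+1} - f_{j-1}) / (2h)
A = np.zeros((N, N))
for j in range(N):
    A[j, (j + 1) % N] = 1.0 / (2 * h)
    A[j, (j - 1) % N] = -1.0 / (2 * h)

f = np.sin(x)
err = np.max(np.abs(A @ f - np.cos(x)))
print(f"max error = {err:.2e}   (h^2/6 = {h**2 / 6:.2e})")
```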


DIFFERENTIATION VIA FINITE DIFFERENCES
AN OPERATOR PERSPECTIVE (III)

• Using the fourth-order central difference formula we would obtain a pentadiagonal system ⇒ an increased order of accuracy entails an increased bandwidth of the differentiation matrix A

• A is a TOEPLITZ MATRIX, since it has constant entries along the diagonals; in fact, it is also a CIRCULANT MATRIX, with entries a_{ij} depending only on (i − j) (mod N)

• Note that the matrix A defined above is SINGULAR (it has a zero eigenvalue λ = 0) — why?

• This property is in fact inherited from the original “continuous” operator d/dx, which is also singular and has a zero eigenvalue (constant functions are differentiated to zero)

• A singular matrix A does not have an inverse (at least, not in the classical sense); what can we do to get around this difficulty?


DIFFERENTIATION VIA FINITE DIFFERENCES
AN OPERATOR PERSPECTIVE (IV)

• Matrix singularity ⇔ linearly dependent rows ⇔ the LHS vector does not contain enough information to determine UNIQUELY the RHS vector

• MATRIX DESINGULARIZATION — incorporating additional information into the matrix, so that its argument (the RHS vector) can be determined uniquely

• Example — desingularization of the second-order central difference differentiation matrix (see also the sketch below):

– in a central difference formula, the even and odd nodes are decoupled

– knowing f'_j, j = 1, …, N, and f_1, one can recover only f_j, j = 3, 5, … (i.e., the odd nodes) ⇒ f_2 must also be provided

– hence, the zero eigenvalue has multiplicity two (for even N)

– when desingularizing the differentiation matrix one must modify at least two rows (see, e.g., sing_diff_mat_01.m)
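The multiplicity-two claim can be verified directly for an even N: the periodic centered-difference matrix annihilates both the constant vector and the alternating vector (+1, −1, +1, …), and its rank deficiency is 2. A sketch in Python/NumPy (the notes' sing_diff_mat_01.m is not reproduced here):

```python
import numpy as np

N, h = 16, 0.1                              # even N; h is arbitrary here
A = np.zeros((N, N))
for j in range(N):
    A[j, (j + 1) % N] = 1.0 / (2 * h)
    A[j, (j - 1) % N] = -1.0 / (2 * h)

# Two independent null vectors: the constant vector and the alternating vector.
ones = np.ones(N)
alt = (-1.0) ** np.arange(N)
print(np.max(np.abs(A @ ones)), np.max(np.abs(A @ alt)))   # both exactly 0

# Rank deficiency = multiplicity of the zero eigenvalue = 2 for even N.
print(N - np.linalg.matrix_rank(A))                        # -> 2
```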


DIFFERENTIATION VIA FINITE DIFFERENCES
AN OPERATOR PERSPECTIVE (V)

• What is WRONG with the differentiation operator?

• The differentiation operator d/dx is UNBOUNDED!

One usually cannot find a constant C ∈ R, independent of f, such that

$$
\left\| \frac{d}{dx} f \right\|_X \le C \, \| f \|_X, \qquad \forall f \in X
$$

For instance, for f(x) = e^{ikx} the required constant behaves like |C| = k → ∞ as k → ∞ ...

• Unfortunately, finite-dimensional emulations of the differentiation operator (the DIFFERENTIATION MATRICES) inherit this property

• OPERATOR NORM for matrices:

$$
\|A\|_2^2 = \max_{\|x\|=1} \|Ax\|_2^2 = \max_x \frac{(Ax, Ax)}{(x, x)} = \max_x \frac{(x, A^T A x)}{(x, x)} = \lambda_{\max}(A^T A) = \sigma_{\max}^2(A)
$$

Thus, the 2-norm of a matrix equals its largest SINGULAR VALUE σ_max(A), i.e., the square root of the largest eigenvalue of A^T A


DIFFERENTIATION VIA FINITE DIFFERENCES
AN OPERATOR PERSPECTIVE (VI)

• As can be rigorously proved in many specific cases, ‖A‖_2 grows without bound as N → ∞ (or h → 0) ⇒ this is a reflection of the unbounded nature of the underlying infinite-dimensional operator (illustrated numerically in the sketch below)

• The loss of precision when solving the system Ax = b is characterized by the CONDITION NUMBER (with respect to inversion) κ_p(A) = ‖A‖_p ‖A^{-1}‖_p

– for p = 2, κ_2(A) = σ_max(A) / σ_min(A)

– when the condition number is “large”, the matrix is said to be ILL-CONDITIONED — the solution of the system Ax = b is prone to round-off errors

– if A is singular, κ_p(A) = +∞
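The growth of ‖A‖₂ as the grid is refined is easy to observe; the sketch below (Python/NumPy, periodic centered-difference matrix on [0, 2π), my own test setup) computes the largest singular value for several N and shows it scaling like 1/h:

```python
import numpy as np

def diff_matrix(N):
    """Periodic 2nd-order centered-difference matrix on [0, 2*pi)."""
    h = 2 * np.pi / N
    A = np.zeros((N, N))
    for j in range(N):
        A[j, (j + 1) % N] = 1.0 / (2 * h)
        A[j, (j - 1) % N] = -1.0 / (2 * h)
    return A, h

for N in [16, 32, 64, 128]:
    A, h = diff_matrix(N)
    norm2 = np.linalg.norm(A, 2)       # largest singular value sigma_max(A)
    print(f"N = {N:4d}   h = {h:.4f}   ||A||_2 = {norm2:6.2f}   h*||A||_2 = {h * norm2:.3f}")
```

The product h‖A‖₂ stays close to 1, i.e. ‖A‖₂ grows like 1/h as the grid is refined.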


DIFFERENTIATION VIA FINITE DIFFERENCES
SUBTRACTIVE CANCELLATION ERRORS

• SUBTRACTIVE CANCELLATION ERRORS — when comparing two numbers which are almost the same using finite-precision arithmetic, the relative round-off error is proportional to the inverse of the difference between the two numbers

• Thus, if the difference between the two numbers is decreased by an order of magnitude, the relative accuracy with which this difference may be calculated using finite-precision arithmetic is also decreased by an order of magnitude.

• Problems with finite difference formulae when h → 0 — loss of precision due to finite-precision arithmetic (SUBTRACTIVE CANCELLATION), e.g., for double precision (the effect on a central difference formula is shown in the sketch below):

1.0000000000012345 − 1.0 ≈ 1.2e−12 (2.8% error)
1.0000000000001234 − 1.0 ≈ 1.0e−13 (19.0% error)
. . .
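The competition between truncation error (shrinking like h²) and subtractive cancellation (growing like machine-epsilon/h) is visible in a few lines (Python/NumPy; the centered difference of sin at x = 1 is an assumed example):

```python
import numpy as np

f, dfdx, x = np.sin, np.cos, 1.0
for h in [1e-2, 1e-4, 1e-6, 1e-8, 1e-10]:
    cd = (f(x + h) - f(x - h)) / (2 * h)     # 2nd-order centered difference
    print(f"h = {h:.0e}   error = {abs(cd - dfdx(x)):.2e}")
# The truncation error ~ h^2/6 shrinks with h, but the round-off error ~ eps/h grows,
# so the total error is smallest around h ~ 1e-5 ... 1e-6 and deteriorates for smaller h.
```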


DIFFERENTIATION VIA FINITE DIFFERENCES
COMPLEX STEP DERIVATIVE

• Consider the complex extension f(z), where z = x + iy, of f(x), and compute the complex Taylor series expansion

$$
f(x_j + ih) = f_j + ih\,f'_j - \frac{h^2}{2}\,f''_j - i\,\frac{h^3}{6}\,f'''_j + O(h^4)
$$

• Take the imaginary part and divide by h:

$$
f'_j = \frac{\Im\!\left( f(x_j + ih) \right)}{h} + \frac{h^2}{6}\,f'''_j + O(h^4) \;\Longrightarrow\;
\left(\frac{\delta f}{\delta x}\right)_j = \frac{\Im\!\left( f(x_j + ih) \right)}{h}
$$

• Note that the scheme is second-order accurate — where is conservation of complexity?

• The method does not suffer from cancellation errors, is easy to implement and quite useful (see the comparison sketched below)

Note: J. N. Lyness and C. B. Moler, “Numerical differentiation of analytic functions”, SIAM J. Numer. Anal. 4, 202–210 (1967)
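A comparison sketch (Python/NumPy; sin is an assumed analytic test function): at a step far below the optimal real-difference step, the complex-step formula retains near machine precision while the centered difference is ruined by cancellation.

```python
import numpy as np

f, dfdx, x = np.sin, np.cos, 1.0
h = 1e-12                                        # far too small for real finite differences
cs = np.imag(f(x + 1j * h)) / h                  # complex-step derivative, no subtraction
cd = (f(x + h) - f(x - h)) / (2 * h)             # centered difference, dominated by cancellation
print(f"complex-step error:        {abs(cs - dfdx(x)):.2e}")   # ~ 1e-16
print(f"centered-difference error: {abs(cd - dfdx(x)):.2e}")   # ~ 1e-4 or worse
```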


DIFFERENTIATION VIA FINITE DIFFERENCES
PADÉ APPROXIMATION (I)

• GENERAL IDEA — include in the finite-difference formula not only the function values, but also the values of the FUNCTION DERIVATIVE at the adjacent nodes, e.g.:

$$
b_{-1} f'_{j-1} + f'_j + b_1 f'_{j+1} - \sum_{p=-1}^{1} a_p\, f_{j+p} = \varepsilon
$$

• Construct the Taylor table using the following expansions:

$$
f_{j+1} = f_j + h\,f'_j + \frac{h^2}{2}\,f''_j + \frac{h^3}{6}\,f'''_j + \frac{h^4}{24}\,f^{(iv)}_j + \frac{h^5}{120}\,f^{(v)}_j + \dots
$$
$$
f'_{j+1} = f'_j + h\,f''_j + \frac{h^2}{2}\,f'''_j + \frac{h^3}{6}\,f^{(iv)}_j + \frac{h^4}{24}\,f^{(v)}_j + \dots
$$

NOTE — we need an expansion for the derivative and a higher-order expansion for the function (there are more coefficients to determine)


DIFFERENTIATION VIA FINITE DIFFERENCES
PADÉ APPROXIMATION (II)

• The Taylor table

$$
\begin{array}{l|cccccc}
 & f_j & f'_j & f''_j & f'''_j & f^{(iv)}_j & f^{(v)}_j \\ \hline
b_{-1} f'_{j-1} & 0 & b_{-1} & b_{-1}(-h) & b_{-1}\frac{(-h)^2}{2} & b_{-1}\frac{(-h)^3}{6} & b_{-1}\frac{(-h)^4}{24} \\
f'_j & 0 & 1 & 0 & 0 & 0 & 0 \\
b_1 f'_{j+1} & 0 & b_1 & b_1 h & b_1\frac{h^2}{2} & b_1\frac{h^3}{6} & b_1\frac{h^4}{24} \\
-a_{-1} f_{j-1} & -a_{-1} & -a_{-1}(-h) & -a_{-1}\frac{(-h)^2}{2} & -a_{-1}\frac{(-h)^3}{6} & -a_{-1}\frac{(-h)^4}{24} & -a_{-1}\frac{(-h)^5}{120} \\
-a_0 f_j & -a_0 & 0 & 0 & 0 & 0 & 0 \\
-a_1 f_{j+1} & -a_1 & -a_1 h & -a_1\frac{h^2}{2} & -a_1\frac{h^3}{6} & -a_1\frac{h^4}{24} & -a_1\frac{h^5}{120}
\end{array}
$$

• The algebraic system:

$$
\begin{bmatrix}
0 & 0 & -1 & -1 & -1 \\
1 & 1 & h & 0 & -h \\
-h & h & -h^2/2 & 0 & -h^2/2 \\
h^2/2 & h^2/2 & h^3/6 & 0 & -h^3/6 \\
-h^3/6 & h^3/6 & -h^4/24 & 0 & -h^4/24
\end{bmatrix}
\begin{bmatrix} b_{-1} \\ b_1 \\ a_{-1} \\ a_0 \\ a_1 \end{bmatrix}
=
\begin{bmatrix} 0 \\ -1 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\;\Longrightarrow\;
\begin{bmatrix} b_{-1} \\ b_1 \\ a_{-1} \\ a_0 \\ a_1 \end{bmatrix}
=
\begin{bmatrix} 1/4 \\ 1/4 \\ -3/(4h) \\ 0 \\ 3/(4h) \end{bmatrix}
$$


DIFFERENTIATION VIA FINITE DIFFERENCES
PADÉ APPROXIMATION (III)

• The Padé approximation:

$$
\frac{1}{4}\left(\frac{\delta f}{\delta x}\right)_{j+1} + \left(\frac{\delta f}{\delta x}\right)_{j} + \frac{1}{4}\left(\frac{\delta f}{\delta x}\right)_{j-1}
= \frac{3}{4h}\left( f_{j+1} - f_{j-1} \right)
$$

Leading-order error (h⁴/30) f^{(v)}_j (FOURTH-ORDER ACCURATE)

• The approximation is NONLOCAL, in that it requires the derivatives at the adjacent nodes, which are also unknowns; thus all derivatives must be determined at once via the solution of the following algebraic system

$$
\begin{bmatrix}
\ddots & \ddots & \ddots & & \\
 & 1/4 & 1 & 1/4 & \\
 & & \ddots & \ddots & \ddots
\end{bmatrix}
\begin{bmatrix}
\vdots \\ \left(\frac{\delta f}{\delta x}\right)_{j-1} \\ \left(\frac{\delta f}{\delta x}\right)_{j} \\ \left(\frac{\delta f}{\delta x}\right)_{j+1} \\ \vdots
\end{bmatrix}
=
\begin{bmatrix}
\vdots \\ \frac{3}{4h}\left( f_{j+1} - f_{j-1} \right) \\ \vdots
\end{bmatrix}
$$


DIFFERENTIATION VIA FINITE DIFFERENCES
PADÉ APPROXIMATION (IV)

• Closing the system at the ENDPOINTS (where neighbors are not available) — use a lower-order one-sided (i.e., forward or backward) finite-difference formula

• The vector of derivatives can thus be obtained via the solution of the following algebraic system (a periodic version is sketched below)

$$
B\,\mathbf{f}' = \frac{3}{2} A\,\mathbf{f} \;\Longrightarrow\; \mathbf{f}' = \frac{3}{2} B^{-1} A\,\mathbf{f}
$$

where

– B is a tri-diagonal matrix with b_{i,i} = 1 and b_{i,i−1} = b_{i,i+1} = 1/4, i = 1, …, N

– A is a second-order accurate differentiation matrix
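A sketch of the compact (Padé) scheme in matrix form (Python/NumPy). For brevity it assumes a periodic domain, so both B and the second-order matrix A are circulant and no one-sided closure at the endpoints is needed; the error on sin should drop by roughly a factor of 16 per doubling of N, as expected for a fourth-order method.

```python
import numpy as np

def pade_derivative(f_vals, h):
    """Fourth-order compact (Pade) derivative on a periodic grid: B f' = (3/2) A f."""
    N = len(f_vals)
    B = np.zeros((N, N))
    A = np.zeros((N, N))
    for j in range(N):
        B[j, j] = 1.0
        B[j, (j - 1) % N] = B[j, (j + 1) % N] = 0.25
        A[j, (j + 1) % N] = 1.0 / (2 * h)
        A[j, (j - 1) % N] = -1.0 / (2 * h)
    return np.linalg.solve(B, 1.5 * (A @ f_vals))

for N in [16, 32, 64]:
    h = 2 * np.pi / N
    x = np.arange(N) * h
    err = np.max(np.abs(pade_derivative(np.sin(x), h) - np.cos(x)))
    print(f"N = {N:3d}   max error = {err:.2e}")
```

In practice one would exploit the tridiagonal (or circulant) structure of B with a specialized solver rather than forming dense matrices; the dense solve here only keeps the sketch short.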


DIFFERENTIATION VIA FINITE DIFFERENCES
MODIFIED WAVENUMBER ANALYSIS (I)

• How do finite differences perform at different WAVELENGTHS?

• Finite-difference formulae applied to THE FOURIER MODE f(x) = e^{ikx}, with the (exact) derivative f'(x) = ik e^{ikx}

• Central difference formula:

$$
\left(\frac{\delta f}{\delta x}\right)_j = \frac{f_{j+1} - f_{j-1}}{2h}
= \frac{e^{ik(x_j+h)} - e^{ik(x_j-h)}}{2h}
= \frac{e^{ikh} - e^{-ikh}}{2h}\, e^{ikx_j}
= i\,\frac{\sin(kh)}{h}\, f_j = i k' f_j,
$$

where the modified wavenumber k' ≜ sin(kh)/h

• Comparison of the modified wavenumber k' with the actual wavenumber k shows how numerical differentiation errors affect different Fourier components of a given function


DIFFERENTIATION VIA FINITE DIFFERENCES
MODIFIED WAVENUMBER ANALYSIS (II)

• Fourth-order central difference formula:

$$
\left(\frac{\delta f}{\delta x}\right)_j = \frac{-f_{j+2} + 8 f_{j+1} - 8 f_{j-1} + f_{j-2}}{12h}
= \frac{2}{3h}\left( e^{ikh} - e^{-ikh} \right) f_j - \frac{1}{12h}\left( e^{2ikh} - e^{-2ikh} \right) f_j
= i\left[ \frac{4}{3h}\sin(kh) - \frac{1}{6h}\sin(2kh) \right] f_j = i k' f_j,
$$

where the modified wavenumber k' ≜ (4/(3h)) sin(kh) − (1/(6h)) sin(2kh)

• Fourth-order Padé scheme:

$$
\frac{1}{4}\left(\frac{\delta f}{\delta x}\right)_{j+1} + \left(\frac{\delta f}{\delta x}\right)_{j} + \frac{1}{4}\left(\frac{\delta f}{\delta x}\right)_{j-1}
= \frac{3}{4h}\left( f_{j+1} - f_{j-1} \right),
$$

where (δf/δx)_{j+1} = i k' e^{ikx_{j+1}} = i k' e^{ikh} f_j and (δf/δx)_{j−1} = i k' e^{ikx_{j−1}} = i k' e^{−ikh} f_j. Thus:

$$
i k' \left( \frac{1}{4} e^{ikh} + 1 + \frac{1}{4} e^{-ikh} \right) f_j = \frac{3}{4h}\left( e^{ikh} - e^{-ikh} \right) f_j
$$
$$
i k' \left( 1 + \frac{1}{2}\cos(kh) \right) f_j = i\,\frac{3}{2h}\sin(kh)\, f_j
\;\Longrightarrow\;
k' \triangleq \frac{3\sin(kh)}{2h + h\cos(kh)}
$$

The three modified wavenumbers are compared numerically in the sketch below.
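The comparison just mentioned, as a small sketch (Python/NumPy, with h = 1 so that k' is reported directly as a function of kh); the Padé scheme stays closest to the exact wavenumber over the widest range:

```python
import numpy as np

h = 1.0                                   # with h = 1, k' is a function of kh alone
kh = np.linspace(0.1, np.pi, 8)

k_exact = kh / h
k_c2 = np.sin(kh) / h                                                     # 2nd-order central
k_c4 = (4.0 / (3 * h)) * np.sin(kh) - (1.0 / (6 * h)) * np.sin(2 * kh)    # 4th-order central
k_pade = 3 * np.sin(kh) / (2 * h + h * np.cos(kh))                        # 4th-order Pade

print("   kh     exact     c2       c4      Pade")
for row in zip(kh, k_exact, k_c2, k_c4, k_pade):
    print("  ".join(f"{v:7.4f}" for v in row))
```

All three modified wavenumbers drop to zero at kh = π: the sawtooth mode, which flips sign at every node, is indistinguishable from a constant for these centered stencils.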