
Advanced Computational Fluid Dynamics

AA215A Lecture 3

Polynomial Interpolation: Numerical Differentiation and Integration

Antony Jameson

Winter Quarter, 2016, Stanford, CA. Last revised on January 7, 2016.

Contents

3 Polynomial Interpolation: Numerical Differentiation and Integration
  3.1 Interpolation Polynomials
  3.2 Error in the Interpolating Polynomial
  3.3 Error of Derivative with Lagrange Interpolation
  3.4 Error of the kth Derivative of the Interpolation Polynomial
  3.5 Interpolation with a Triangle of Polynomials
  3.6 Newton's Form of the Interpolation Polynomial
  3.7 Hermite Interpolation
  3.8 Integration Formulas using Polynomial Interpolation
  3.9 Integration with a weight function
  3.10 Newton Cotes formulas
  3.11 Gauss Quadrature
  3.12 Formulas for Gauss integration with constant weight function
  3.13 Error bound for Gauss integration
  3.14 Discrete orthogonality of orthogonal polynomials
  3.15 Equivalence of interpolation and least squares approximation using Gauss integration
  3.16 Gauss Lobatto Integration
  3.17 Portraits of Lagrange, Newton and Gauss


Lecture 3

Polynomial Interpolation: Numerical Differentiation and Integration

3.1 Interpolation Polynomials

So far we have not required Pn(x) to equal f(x) at any points in the interval. This permits some smoothing! A natural approach is to make Pn(x) = a0 + a1x + ... + anx^n = f(x) at n + 1 points x0, x1, ..., xn. Then we have n + 1 equations for the n + 1 coefficients. Such a polynomial is called an interpolation polynomial. A solution exists because the determinant of the equations is the Vandermonde determinant

D = \begin{vmatrix} 1 & x_0 & \cdots & x_0^n \\ 1 & x_1 & \cdots & x_1^n \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & \cdots & x_n^n \end{vmatrix} = \prod_{i>j} (x_i - x_j) \neq 0    (3.1)

This can be seen by noting that D vanishes if any xi equals any xj, but D is a polynomial of just the same order as the product. The interpolation polynomial is unique since if there were 2 such polynomials Pn(x), Qn(x), then

Pn(xi)−Qn(xi) = 0 for i = 0, 1, ..., n (3.2)

But then Pn(x) − Qn(x) is an nth degree polynomial with n + 1 roots, so it must be zero. In practice it is easiest to construct Pn(x) indirectly as a sum of specially chosen polynomials rather than solve for the aj. To do this note that

\phi_{n,j}(x) = \frac{(x - x_0)(x - x_1)\cdots(x - x_{j-1})(x - x_{j+1})\cdots(x - x_n)}{(x_j - x_0)(x_j - x_1)\cdots(x_j - x_{j-1})(x_j - x_{j+1})\cdots(x_j - x_n)} = 1 \quad \text{when } x = x_j    (3.3)

Then we can get

P_n(x) = \sum_{j=0}^{n} f(x_j)\,\phi_{n,j}(x)    (3.4)

which equals f(x) at x = x0, x1, ..., xn, and is the only such polynomial as has just been shown. The φn,j(x) are called Lagrange interpolation coefficients. We can write

\phi_{n,j}(x) = \frac{w_n(x)}{(x - x_j)\,w'_n(x_j)}    (3.5)


where

w_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)    (3.6)
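
To make the construction concrete, the following is a minimal sketch (not part of the original notes) of the Lagrange coefficients (3.3) and the interpolation polynomial (3.4) in Python; numpy and the sine test function are assumptions made purely for illustration.

import numpy as np

def lagrange_basis(xs, j, x):
    # phi_{n,j}(x) = prod_{k != j} (x - x_k) / (x_j - x_k), equation (3.3)
    phi = 1.0
    for k, xk in enumerate(xs):
        if k != j:
            phi *= (x - xk) / (xs[j] - xk)
    return phi

def lagrange_interpolate(xs, fs, x):
    # P_n(x) = sum_j f(x_j) phi_{n,j}(x), equation (3.4)
    return sum(fs[j] * lagrange_basis(xs, j, x) for j in range(len(xs)))

# Example: interpolate sin(x) at 5 points and evaluate near the middle.
xs = np.linspace(0.0, 1.0, 5)
fs = np.sin(xs)
print(lagrange_interpolate(xs, fs, 0.3), np.sin(0.3))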

3.2 Error in the Interpolating Polynomial

We shall show that if f(x) has an (n + 1)th derivative and Pn(x) is an interpolating polynomial then the remainder is

f(x) - P_n(x) = \frac{(x - x_0)(x - x_1)\cdots(x - x_n)}{(n+1)!}\, f^{(n+1)}(\xi)    (3.7)

where ξ is in the interval defined by x0, x1, ..., xn. Let Sn(x) be defined by

f(x)− Pn(x) = wn(x)Sn(x) (3.8)

where

w_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)    (3.9)

Define

F(z) = f(z) - P_n(z) - w_n(z)\,S_n(x)    (3.10)

This is continuous in z and vanishes at n + 2 points x0, x1, ..., xn, x. Thus by Rolle's theorem F'(z) vanishes at n + 1 points, F''(z) vanishes at n points, ..., and finally F^{(n+1)}(z) vanishes at one point ξ. But

\frac{d^{n+1}}{dz^{n+1}} P_n(z) = 0    (3.11)

\frac{d^{n+1}}{dz^{n+1}} \left[ w_n(z)\,S_n(x) \right] = (n+1)!\,S_n(x)    (3.12)

Thus

F^{(n+1)}(z) = f^{(n+1)}(z) - (n+1)!\,S_n(x)    (3.13)

and setting z = ξ

S_n(x) = \frac{1}{(n+1)!}\, f^{(n+1)}(\xi)    (3.14)

f(x) - P_n(x) = \frac{w_n(x)}{(n+1)!}\, f^{(n+1)}(\xi)    (3.15)

Since the error depends on wn(x) we naturally ask how the xi should be distributed to minimize wn(x).
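
The remainder formula (3.15) can be checked numerically. The sketch below is an illustration under the assumption f(x) = sin x, all of whose derivatives are bounded by 1, so that the maximum interpolation error should not exceed max|w_n(x)|/(n+1)! on the interval.

import math
import numpy as np

xs = np.linspace(0.0, 1.0, 5)              # n + 1 = 5 interpolation points
fs = np.sin(xs)
xx = np.linspace(0.0, 1.0, 1001)

coef = np.polyfit(xs, fs, len(xs) - 1)     # degree-n fit through n+1 points is P_n
Pn   = np.polyval(coef, xx)
wn   = np.prod([xx - xk for xk in xs], axis=0)

err   = np.max(np.abs(np.sin(xx) - Pn))
bound = np.max(np.abs(wn)) / math.factorial(len(xs))
print(err, bound)                          # err should not exceed bound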

3.3 Error of Derivative with Lagrange Interpolation

Suppose that

f(x) = P_n(x) + \frac{w_n}{(n+1)!}\, f^{(n+1)}(\xi)    (3.16)

Then

f'(x) = P'_n(x) + \frac{w'_n}{(n+1)!}\, f^{(n+1)}(\xi) + \frac{w_n}{(n+1)!}\, f^{(n+2)}(\xi)\, \frac{d\xi}{dx}    (3.17)


where

w'_n = \sum_{j=0}^{n} \prod_{k \neq j} (x - x_k)    (3.18)

Then at an interpolation point,

f'(x_j) = P'_n(x_j) + \prod_{k \neq j} (x_j - x_k)\, \frac{f^{(n+1)}(\xi)}{(n+1)!}    (3.19)

With equal intervals h,

f'(x_0) = P'_n(x_0) + \frac{h^n}{n+1}\, f^{(n+1)}(\xi)    (3.20)

since

(x_1 - x_0)(x_2 - x_0)\cdots(x_n - x_0) = n!\, h^n    (3.21)

With one interval,

f'(x_0) = \frac{f(x_1) - f(x_0)}{h} + \frac{h}{2}\, f''(\xi)    (3.22)
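
A brief numerical illustration of (3.22): with the assumed test function f(x) = e^x at x_0 = 0, the one-interval difference quotient differs from f'(x_0) by an amount bounded by (h/2) max|f''| over the interval.

import numpy as np

for h in (0.1, 0.05, 0.025):
    fd  = (np.exp(h) - np.exp(0.0)) / h    # (f(x_1) - f(x_0)) / h
    err = abs(fd - 1.0)                    # exact derivative of exp at 0 is 1
    print(h, err, 0.5 * h * np.exp(h))     # observed error vs. (h/2) max|f''|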

3.4 Error of the kth Derivative of the Interpolation Polynomial

Let f^{(n+1)}(x) be continuous and let

Rn(x) = f(x)− Pn(x) (3.23)

Then

R_n^{(k)}(x) = \prod_{j=0}^{n-k} (x - \xi_j)\, \frac{f^{(n+1)}(\xi)}{(n+1-k)!}    (3.24)

where the distinct points ξj are independent of x and lie in the intervals xj < ξj < xj+k for j = 0, 1, ..., n − k, and ξ(x) is some point in the interval containing x and the ξj.

Proof:
Rn = f − Pn has n + 1 continuous derivatives and vanishes at the n + 1 points x = xj, for j = 0, 1, ..., n. Apply Rolle's theorem k ≤ n times. The zeros are then distributed as illustrated.


This leads to the following table for the distribution of the n − k + 1 zeros ξj of R_n^{(k)}.

R      R^{(1)}            R^{(2)}            ...    R^{(k)}
x_0
x_1    (x_0, x_1)
x_2    (x_1, x_2)         (x_0, x_2)
...
x_k    (x_{k-1}, x_k)     (x_{k-2}, x_k)     ...    (x_0, x_k)
...
x_n    (x_{n-1}, x_n)     (x_{n-2}, x_n)     ...    (x_{n-k}, x_n)

Define

F(z) = R_n^{(k)}(z) - \alpha \prod_{j=0}^{n-k} (z - \xi_j)    (3.25)

For any x distinct from the ξj choose α such that

F (x) = 0 (3.26)

Then F(z) has n − k + 2 zeros and by Rolle's theorem F^{(n-k+1)}(z) has one zero, η say, in the interval containing x and the ξj. Thus

0 = F^{(n-k+1)}(\eta)    (3.27)
  = R_n^{(n+1)}(\eta) - \alpha\,(n-k+1)!    (3.28)
  = f^{(n+1)}(\eta) - \alpha\,(n-k+1)!    (3.29)

or

\alpha = \frac{f^{(n+1)}(\eta)}{(n-k+1)!}    (3.30)

It follows that for a mesh width bounded by h

\left| R_n^{(k)} \right| \leq M h^{n-k+1}    (3.31)

where M is proportional to \sup |f^{(n+1)}(x)| in the interval.
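
The order estimate (3.31) can be observed by differentiating the interpolation polynomial on successively refined meshes. The sketch below assumes f(x) = sin x with n = 3 and k = 1, so the error in the first derivative at a node should shrink roughly like h^{n-k+1} = h^3.

import numpy as np

n, k = 3, 1
for h in (0.2, 0.1, 0.05):
    xs    = np.arange(n + 1) * h             # equally spaced nodes
    coef  = np.polyfit(xs, np.sin(xs), n)    # interpolation polynomial P_n
    dPk   = np.polyval(np.polyder(coef, k), 0.0)
    exact = np.cos(0.0)                      # f'(x_0) for f = sin
    print(h, abs(dPk - exact))               # should decay roughly like h**3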

3.5 Interpolation with a Triangle of Polynomials

Set

w_0(x) = (x - x_0)
w_1(x) = (x - x_0)(x - x_1)
w_k(x) = (x - x_0)(x - x_1)\cdots(x - x_k)

Now to interpolate f(x) with values fi at xi we set

p(x) = a0 + a1(x− x0) + a2(x− x0)(x− x1)... (3.32)


Then

p_0(x_0) = f_0 = a_0
p_1(x_1) = f_1 = a_0 + a_1(x_1 - x_0)
p_2(x_2) = f_2 = a_0 + a_1(x_2 - x_0) + a_2(x_2 - x_0)(x_2 - x_1)
...

so we can solve for the ai in succession

a_k = \frac{f_k - a_0 - a_1(x_k - x_0) - \dots - a_{k-1}(x_k - x_0)\cdots(x_k - x_{k-2})}{(x_k - x_0)(x_k - x_1)\cdots(x_k - x_{k-1})} = \frac{f_k - p_{k-1}(x_k)}{w_{k-1}(x_k)}    (3.33)

3.6 Newton’s Form of the Interpolation Polynomial

The Lagrangian form of the interpolation polynomial has the disadvantage that the coefficients have to be recomputed if a new interpolation point is added. To avoid this let

P_n(x) = \sum_{k=0}^{n} a_k\, w_{k-1}(x)    (3.34)

where

w_k(x) = (x - x_0)(x - x_1)\cdots(x - x_k) \quad \text{and} \quad w_{-1}(x) = 1    (3.35)

We can determine the ak so that

Pn(xj) = f(xj) for j = 0, 1, ..., n (3.36)

Suppose this has been done for Pk−1, and we now add xk. Then

Pk(x) = Pk−1(x) + akwk−1(x) (3.37)

Pk(xj) = Pk−1(xj) = f(xj) for j = 0, 1, ..., k − 1 (3.38)

Pk(xk) = Pk−1(xk) + akwk−1(xk) = f(xk) (3.39)

where a_k = \frac{f(x_k) - P_{k-1}(x_k)}{w_{k-1}(x_k)}.

On the other hand the coefficient of the highest order term in the Lagrange expression is

\sum_{j=0}^{n} \frac{f(x_j)}{w'_n(x_j)}    (3.40)

so by comparison

a_n = \sum_{j=0}^{n} \frac{f(x_j)}{w'_n(x_j)}    (3.41)

Thus an is a linear combination of f(xj) for j = 0, 1, ..., n. Multiply by

xn − x0 = xn − xj + xj − x0 (3.42)


Then

a_n(x_n - x_0) = -\sum_{j=0}^{n} \frac{f(x_j)}{w'_n(x_j)}\,(x_j - x_n) + \sum_{j=1}^{n} \frac{f(x_j)}{w'_n(x_j)}\,(x_j - x_0)

             = -\sum_{j=0}^{n-1} \frac{f(x_j)}{\prod_{k \neq j,\, k \neq n}(x_j - x_k)} + \sum_{j=1}^{n} \frac{f(x_j)}{\prod_{k \neq j,\, k \neq 0}(x_j - x_k)}    (3.43)

These are just the expressions for an−1 in the intervals x0 to xn−1 and x1 to xn. Thus if we define

f[x_0, ..., x_n] = a_n = \sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k \neq j}(x_j - x_k)}    (3.44)

we find that

f[x_0, ..., x_n](x_n - x_0) = f[x_1, ..., x_n] - f[x_0, ..., x_{n-1}]    (3.45)

and starting from

f[x_0] = f(x_0)    (3.46)

we have

f[x_0, x_1] = \frac{f[x_1] - f[x_0]}{x_1 - x_0}    (3.47)

f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}    (3.48)

giving Newton's form of the interpolation polynomial

P_n(x) = f[x_0] + (x - x_0)\, f[x_0, x_1] + \dots + (x - x_0)\cdots(x - x_{n-1})\, f[x_0, ..., x_n]    (3.49)

The square bracketed expressions are Newton's divided differences, and each new term is independent of the previous terms.

To estimate the magnitude of the higher divided differences we can use the result already obtained for the remainder.

Theorem: Let x0, x1, ..., xk−1 be distinct points and let f be continuous in an interval containing all these points. Then for some point ξ in this interval

f[x_0, ..., x_{k-1}, x] = \frac{f^{(k)}(\xi)}{k!}    (3.51)

Proof: From Newton’s formula

f(x) - P_{k-1}(x) = (x - x_0)\cdots(x - x_{k-1})\, f[x_0, ..., x_{k-1}, x] = (x - x_0)\cdots(x - x_{k-1})\, \frac{f^{(k)}(\xi)}{k!}    (3.52)

by the remainder theorem. But x is distinct from x0, x1, ..., xk−1 so the theorem follows on dividing out the factors.
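
A compact sketch of the divided-difference construction (3.47)-(3.49); the helper names and the sine test data are assumptions made only for illustration.

import numpy as np

def divided_differences(xs, fs):
    # Return [f[x0], f[x0,x1], ..., f[x0,...,xn]] by the recursion (3.45).
    xs = np.asarray(xs, dtype=float)
    a = np.array(fs, dtype=float)
    for k in range(1, len(xs)):
        a[k:] = (a[k:] - a[k-1:-1]) / (xs[k:] - xs[:-k])
    return a

def newton_eval(xs, a, x):
    # Evaluate P_n(x) of (3.49) by nested multiplication.
    p = a[-1]
    for k in range(len(a) - 2, -1, -1):
        p = p * (x - xs[k]) + a[k]
    return p

xs = [0.0, 0.5, 1.0, 1.5]
a  = divided_differences(xs, np.sin(xs))
print(newton_eval(xs, a, 0.3), np.sin(0.3))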


3.7 Hermite Interpolation

We can generalize interpolation by matching derivatives as well as values at the interpolation points. Such a polynomial is called the osculating polynomial and the procedure is called Hermite interpolation.

Let H2n+1(x) be a polynomial of degree 2n+ 1 such that

H_{2n+1}(x_j) = f(x_j) \quad \text{for } j = 0, 1, ..., n    (3.53)

H'_{2n+1}(x_j) = f'(x_j) \quad \text{for } j = 0, 1, ..., n    (3.54)

There are 2n + 2 coefficients to satisfy the 2n + 2 conditions. H2n+1(x) can be found indirectly as in Lagrange interpolation.

Let ψn,j(x) and γn,j(x) be polynomials of degree 2n+ 1 such that

ψn,j(xi) = δij , ψ′n,j(xi) = 0, for i = 0, 1, ..., n

γn,j(xi) = 0 , γ′n,j(xi) = δij , for i = 0, 1, ..., n

Then the Hermite interpolation polynomial is

H_{2n+1}(x) = \sum_{j=0}^{n} \left( f(x_j)\,\psi_{n,j}(x) + f'(x_j)\,\gamma_{n,j}(x) \right)    (3.55)

It can be directly verified by differentiation that the required polynomials are

\psi_{n,j}(x) = \left( 1 - 2\,\phi'_{n,j}(x_j)(x - x_j) \right) \phi_{n,j}^2(x)    (3.56)

and

\gamma_{n,j}(x) = (x - x_j)\,\phi_{n,j}^2(x)    (3.57)

where φn,j(x) are the Lagrange polynomials satisfying

φn,j(xi) = δij for i = 0, 1, ..., n (3.58)

The error in Hermite interpolation is

f(x) - H_{2n+1}(x) = \omega_n^2(x)\, \frac{f^{(2n+2)}(\xi)}{(2n+2)!}    (3.59)

where

\omega_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)    (3.60)

This can be proved in the same way as the error estimate for Lagrange interpolation where now

F(\xi) = f(\xi) - H_{2n+1}(\xi) - \omega_n^2(\xi)\, S_n(x)    (3.61)

and on the first application of Rolle's theorem F'(ξ) has 2n + 2 distinct zeros.

The Hermite polynomial can be obtained via a passage to the limit from the Lagrange polynomial P2n+1(x) which interpolates f(x) at the 2n + 2 points

x0, x0 + h, x1, x1 + h, ..., xn, xn + h (3.62)


as h→ 0 to produce n+ 1 pairs of coincident points. Define

N(h) = \prod_{k=0,\, k \neq j}^{n} (x - x_k + h)    (3.63)

and

D(h) = \prod_{k=0,\, k \neq j}^{n} (x_j - x_k + h)    (3.64)

Then the contribution of the points xj and xj + h to P2n+1(x) is

Q(x) = f(x_j)\, \frac{N(0)}{D(0)}\, \frac{N(-h)}{D(-h)}\, \frac{x - (x_j + h)}{x_j - (x_j + h)} + f(x_j + h)\, \frac{N(0)}{D(h)}\, \frac{N(-h)}{D(0)}\, \frac{x - x_j}{h}

Expanding f(xj + h) in a Taylor series

Q(x) = f(x_j)\, \frac{N(0)}{D(0)}\, \frac{N(-h)}{D(-h)}\, \frac{x - (x_j + h)}{x_j - (x_j + h)} + \left( f(x_j) + h f'(x_j) + O(h^2) \right) \frac{N(0)}{D(h)}\, \frac{N(-h)}{D(0)}\, \frac{x - x_j}{h}

     = f(x_j)\, \frac{N(0)}{D(0)} \left[ \frac{N(-h)}{D(-h)} - \frac{x - x_j}{h}\, N(-h) \left( \frac{1}{D(-h)} - \frac{1}{D(h)} \right) \right] + f'(x_j)(x - x_j)\, \frac{N(0)}{D(0)}\, \frac{N(-h)}{D(h)} + O(h)

Also

\frac{1}{D(-h)} - \frac{1}{D(h)} = 2h\, \frac{D'(0)}{D^2(0)} + O(h^2)    (3.65)

Thus in the limit as h→ 0

Q(x) = f(x_j)\, \frac{N^2(0)}{D^2(0)} \left( 1 - 2(x - x_j)\, \frac{D'(0)}{D(0)} \right) + f'(x_j)(x - x_j)\, \frac{N^2(0)}{D^2(0)}

where

\frac{N(0)}{D(0)} = \phi_{n,j}(x)    (3.66)

and D'(0) equals \frac{d}{dx} N(0) evaluated at x = x_j, so that

\frac{D'(0)}{D(0)} = \phi'_{n,j}(x_j)    (3.67)
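
A small sketch of Hermite interpolation built directly from (3.55)-(3.57); the node set and the sine test function are assumptions, and φ'_{n,j}(x_j) is evaluated with the closed form Σ_{k≠j} 1/(x_j − x_k).

import numpy as np

def phi(xs, j, x):
    # Lagrange polynomial phi_{n,j}(x)
    return np.prod([(x - xk) / (xs[j] - xk) for k, xk in enumerate(xs) if k != j])

def dphi_at_node(xs, j):
    # phi'_{n,j}(x_j) = sum_{k != j} 1 / (x_j - x_k)
    return sum(1.0 / (xs[j] - xk) for k, xk in enumerate(xs) if k != j)

def hermite(xs, fs, dfs, x):
    # H_{2n+1}(x) = sum_j f(x_j) psi_{n,j}(x) + f'(x_j) gamma_{n,j}(x), equation (3.55)
    H = 0.0
    for j in range(len(xs)):
        L, dL = phi(xs, j, x), dphi_at_node(xs, j)
        psi   = (1.0 - 2.0 * dL * (x - xs[j])) * L * L     # equation (3.56)
        gamma = (x - xs[j]) * L * L                        # equation (3.57)
        H += fs[j] * psi + dfs[j] * gamma
    return H

xs = [0.0, 0.5, 1.0]
print(hermite(xs, np.sin(xs), np.cos(xs), 0.3), np.sin(0.3))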


3.8 Integration Formulas using Polynomial Interpolation

Let Pn(x) interpolate f(x) at x0, x1, ..., xn. Then

P_n(x) = \sum_{j=0}^{n} f(x_j)\,\phi_{n,j}(x)    (3.68)

where

\phi_{n,j}(x) = \frac{\prod_{k \neq j}(x - x_k)}{\prod_{k \neq j}(x_j - x_k)}    (3.69)

so that

\phi_{n,j}(x_k) = \delta_{jk}    (3.70)

Now approximate \int_a^b f(x)\,dx by \int_a^b P_n(x)\,dx, where typically a = x_0, b = x_n. Then

\int_a^b f(x)\,dx = \sum_{j=0}^{n} A_j f(x_j) + E(f)    (3.71)

where E(f) denotes the error, and

A_j = \int_a^b \phi_{n,j}(x)\,dx    (3.72)

The result is exact if f(x) is a polynomial of degree ≤ n, since then Pn(x) = f(x). This is called precision of degree n.

3.9 Integration with a weight function

We may include a weight function w(x) ≥ 0 in the integral. Then

\int_a^b f(x)\,w(x)\,dx = \sum_{j=0}^{n} A_j f(x_j) + E(f)    (3.73)

where now

A_j = \int_a^b \phi_{n,j}(x)\,w(x)\,dx    (3.74)

Depending on w(x), analytical expressions are not necessarily available for evaluating the coefficients Aj.

3.10 Newton Cotes formulas

These are derived by approximating f(x) by an interpolation formula using n + 1 equally spaced points in [a, b] including the end points. The first 3 such formulas are:

Trapezoidal rule

\int_a^b f(x)\,dx = \frac{h}{2}\left( f(a) + f(b) \right) + E(f), \quad h = b - a    (3.75)


Simpson's rule

\int_a^b f(x)\,dx = \frac{h}{3} f(a) + \frac{4h}{3} f(a+h) + \frac{h}{3} f(b) + E(f), \quad h = \frac{b-a}{2}    (3.76)

and Simpson's 3/8 formula

\int_a^b f(x)\,dx = \frac{3h}{8} f(a) + \frac{9h}{8} f(a+h) + \frac{9h}{8} f(a+2h) + \frac{3h}{8} f(b) + E(f), \quad h = \frac{b-a}{3}    (3.77)
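
A hedged sketch of the first two formulas, (3.75) and (3.76), each applied once over the whole interval; the integrand e^x on [0, 1] is an assumed example.

import numpy as np

def trapezoid(f, a, b):
    # Trapezoidal rule (3.75)
    h = b - a
    return 0.5 * h * (f(a) + f(b))

def simpson(f, a, b):
    # Simpson's rule (3.76)
    h = 0.5 * (b - a)
    return (h / 3.0) * (f(a) + 4.0 * f(a + h) + f(b))

exact = np.exp(1.0) - 1.0
print(trapezoid(np.exp, 0.0, 1.0) - exact)   # O(h^3) error for a single interval
print(simpson(np.exp, 0.0, 1.0) - exact)     # O(h^5) error for a single application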

3.11 Gauss Quadrature

When the interpolation points xj are fixed the integration formula has n + 1 degrees of freedom corresponding to the coefficients Aj for j = 0, ..., n. Accordingly we can find values of Aj which yield exact values of the integral for all polynomials of degree ≤ n,

P_n(x) = a_0 + a_1 x + \dots + a_n x^n,    (3.78)

since these also have n + 1 degrees of freedom corresponding to the coefficients aj for j = 0, ..., n. If we are also free to choose the integration points xj, then we have 2n + 2 degrees of freedom corresponding to xj and Aj, for j = 0, ..., n. Now it is possible to find values of xj and Aj which enable exact integration of polynomials of degree ≤ 2n + 1. One could try to find the required values by solving 2n + 2 nonlinear equations

\sum_{j=0}^{n} A_j f(x_j) = \int_a^b f(x)\,dx    (3.79)

where f(x) = 1, x, ..., x^{2n+1} in turn. However the required values can be found indirectly as follows. Let φi(x) be orthogonal polynomials for the weight function w(x), so that

\int_a^b \phi_j(x)\,\phi_k(x)\,w(x)\,dx = 0 \quad \text{for } j \neq k    (3.80)

Choose xj as the zeros of φn+1(x). Then integration using polynomial interpolation is exact for polynomials up to degree 2n + 1.

Proof:
If f(x) is a polynomial of degree ≤ 2n + 1 it can be uniquely expressed as

f(x) = q(x)φn+1(x) +R(x) (3.81)

where q(x) and R(x) are polynomials of degree ≤ n. Then

\int_a^b f(x)\,w(x)\,dx = \int_a^b q(x)\,\phi_{n+1}(x)\,w(x)\,dx + \int_a^b R(x)\,w(x)\,dx = \int_a^b R(x)\,w(x)\,dx    (3.82)

since φn+1(x) is orthogonal to all polynomials of degree < n+ 1.


Also the approximate integral is

I = \sum_{j=0}^{n} A_j f(x_j) = \sum_{j=0}^{n} A_j\, q(x_j)\,\phi_{n+1}(x_j) + \sum_{j=0}^{n} A_j R(x_j) = \sum_{j=0}^{n} A_j R(x_j)    (3.83)

since φn+1(xj) is zero for j = 0, 1, ..., n. But then

I = \int_a^b R(x)\,w(x)\,dx    (3.84)

exactly, since R(x) is a polynomial of degree ≤ n.

The degree of precision 2n + 1 is the maximum attainable. Consider the polynomial \phi_{n+1}^2(x) of degree 2n + 2. Then

I = 0 (3.85)

since φn+1(xj) = 0 for j = 0, 1, ..., n. But

\int_a^b \phi_{n+1}^2(x)\,w(x)\,dx > 0    (3.86)

because \phi_{n+1}^2(x) > 0 except at the zeros x0, x1, ..., xn.

When applied to an arbitrary smooth function f(x) which can be expanded as

f(a) + (x - a) f'(a) + \dots + \frac{(x - a)^{2n+2}}{(2n+2)!}\, f^{(2n+2)}(\xi)    (3.87)

the error is proportional to a bound on |f^{(2n+2)}(x)| because the preceding terms in the Taylor series are integrated exactly.
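
For the constant weight w(x) = 1 on [−1, 1] the nodes (zeros of the Legendre polynomial φ_{n+1}) and the coefficients A_j are available from numpy's leggauss; the exponential integrand below is an assumed example, not one from the notes.

import numpy as np

n_points = 4                                     # exact for polynomials of degree <= 7
x, A = np.polynomial.legendre.leggauss(n_points)
approx = np.sum(A * np.exp(x))
exact  = np.exp(1.0) - np.exp(-1.0)
print(approx, exact)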


3.12 Formulas for Gauss integration with constant weight function

Number of points    x_j                          A_j                   Precision    Order

1                   0                            2                     1            2

2                   ±1/√3                        1                     3            4

3                   0                            8/9                   5            6
                    ±√15/5                       5/9

4                   ±√((3 - 2√(6/5))/7)          (18 + √30)/36         7            8
                    ±√((3 + 2√(6/5))/7)          (18 - √30)/36

5                   0                            128/225               9            10
                    ±(1/3)√(5 - 2√(10/7))        (322 + 13√70)/900
                    ±(1/3)√(5 + 2√(10/7))        (322 - 13√70)/900

Table 3.1: Formulas for Gauss integration with constant weight function
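
The 2- and 3-point rows of the table can be reproduced with the same routine (a quick sketch, assuming numpy is available): the nodes ±1/√3 with weights 1, and the nodes 0, ±√15/5 with weights 8/9 and 5/9.

import numpy as np

for n_points in (2, 3):
    x, A = np.polynomial.legendre.leggauss(n_points)
    print(n_points, x, A)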

3.13 Error bound for Gauss integration

Let H2n+1(x) be the Hermite interpolation polynomial to f(x) at the integration points. The Gauss integration formula is exact for H2n+1(x). Hence

\int_a^b H_{2n+1}(x)\,w(x)\,dx = \sum_{j=0}^{n} A_j H_{2n+1}(x_j) = \sum_{j=0}^{n} A_j f_j    (3.88)

Thus

\int_a^b f(x)\,w(x)\,dx - \sum_{j=0}^{n} A_j f_j = \int_a^b \left( f(x) - H_{2n+1}(x) \right) w(x)\,dx    (3.89)

According to the error formula for Hermite interpolation

f(x) - H_{2n+1}(x) = P_n^2(x)\, \frac{f^{(2n+2)}(\xi)}{(2n+2)!}    (3.90)

where

P_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)    (3.91)

is the orthogonal polynomial normalized so that its leading coefficient is unity. Then the error is

E = \frac{1}{(2n+2)!} \int_a^b w(x)\, P_n^2(x)\, f^{(2n+2)}(\xi)\,dx    (3.92)

Since w(x)\,P_n^2(x) \geq 0 the error lies in the range between

\min f^{(2n+2)}(\xi)\, A \quad \text{and} \quad \max f^{(2n+2)}(\xi)\, A    (3.93)


where

A = \frac{1}{(2n+2)!} \int_a^b w(x)\, P_n^2(x)\,dx    (3.94)

and hence

E = A\, f^{(2n+2)}(\eta)    (3.95)

for some value of η in [a, b].
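
A numerical check of (3.94)-(3.95), sketched under the assumption w(x) = 1 on [−1, 1] with the test integrand f(x) = x^{2n+2}: since f^{(2n+2)} is the constant (2n+2)!, the quadrature error should equal A (2n+2)! up to rounding.

import math
import numpy as np
from numpy.polynomial import legendre, polynomial as P

n = 2                                            # n + 1 = 3 Gauss points
x, wts = legendre.leggauss(n + 1)

# Monic node polynomial P_n(x) = (x - x_0)...(x - x_n), then A as in (3.94).
c     = P.polyfromroots(x)
integ = P.polyint(P.polymul(c, c))
A = (P.polyval(1.0, integ) - P.polyval(-1.0, integ)) / math.factorial(2*n + 2)

quad  = np.sum(wts * x**(2*n + 2))               # Gauss quadrature of x^(2n+2)
exact = 2.0 / (2*n + 3)
print(exact - quad, A * math.factorial(2*n + 2)) # the two numbers should agree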

3.14 Discrete orthogonality of orthogonal polynomials

Let φj(x) be orthogonal polynomials satisfying

(\phi_j, \phi_k) = \int_a^b \phi_j(x)\,\phi_k(x)\,w(x)\,dx = 0 \quad \text{for } j \neq k    (3.96)

Let xi be the zeros of φn+1(x). Then if j ≤ n, k ≤ n, φj(x)φk(x) is a polynomial of degree ≤ 2n. Accordingly (φj, φk) is evaluated exactly by Gauss integration. This implies that the φj satisfy the discrete orthogonality condition

\sum_{i=0}^{n} A_i\,\phi_j(x_i)\,\phi_k(x_i) = 0, \quad \text{given } j \neq k,\ j \leq n,\ k \leq n    (3.97)

where Ai are the coefficients for Gauss integration at the zeros of φn+1.
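
The discrete orthogonality (3.97) is easy to verify for the Legendre case w(x) = 1 on [−1, 1]; the sketch below is an illustration, not part of the notes.

import numpy as np
from numpy.polynomial import legendre

n = 4
x, A = legendre.leggauss(n + 1)                  # zeros of phi_{n+1} and weights
for j in range(n + 1):
    for k in range(j + 1, n + 1):
        Pj = legendre.legval(x, [0]*j + [1])     # P_j at the nodes
        Pk = legendre.legval(x, [0]*k + [1])
        print(j, k, np.sum(A * Pj * Pk))         # ~ 0 up to rounding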

3.15 Equivalence of interpolation and least squares approximation using Gauss integration

Let f∗(x) be the least squares fit to f(x) using orthogonal polynomials φj(x) for the interval [a, b] with weight function w(x) ≥ 0. Then

f^*(x) = \sum_{j=0}^{n} c_j^*\,\phi_j(x)    (3.98)

where

c_j^* = \frac{(f, \phi_j)}{(\phi_j, \phi_j)} = \frac{\int_a^b f(x)\,\phi_j(x)\,w(x)\,dx}{\int_a^b \phi_j^2(x)\,w(x)\,dx}    (3.99)

Let xi be the zeros of φn+1(x), and let Pn(x) be the interpolation polynomial to f(x) satisfying

Pn(xi) = f(xi) for i = 0, 1, ..., n (3.100)

Pn(x) can be expanded as

P_n(x) = \sum_{k=0}^{n} c_k\,\phi_k(x)    (3.101)

Then the ck are determined by the conditions

\sum_{k=0}^{n} c_k\,\phi_k(x_i) = f(x_i) \quad \text{for } i = 0, 1, ..., n    (3.102)


Also the polynomials φj(x) satisfy the discrete orthogonality condition

\sum_{i=0}^{n} A_i\,\phi_j(x_i)\,\phi_k(x_i) = 0 \quad \text{given } j \neq k,\ j \leq n,\ k \leq n    (3.103)

where Ai are the coefficients for Gauss integration. Now multiplying (3.102) by Aiφj(xi) and summing over i, it follows that

c_j = \frac{\sum_{i=0}^{n} A_i f(x_i)\,\phi_j(x_i)}{\sum_{i=0}^{n} A_i\,\phi_j^2(x_i)}    (3.104)

This is exactly the formula (3.99) for evaluating the least squares coefficient c∗j by Gauss integration.

Alternative Proof:
The coefficients cj are actually the least squares coefficients which minimize

J = \left\| f - \sum_{j=0}^{n} c_j\,\phi_j \right\|^2    (3.105)

in the discrete semi-norm in which

\int \left( f(x) - \sum_{j=0}^{n} c_j\,\phi_j(x) \right)^2 w(x)\,dx    (3.106)

is evaluated by Gauss integration at the zeros xi of φn+1(x) as

J = \sum_{i=0}^{n} A_i \left( f(x_i) - \sum_{j=0}^{n} c_j\,\phi_j(x_i) \right)^2    (3.107)

But as a consequence of the interpolation condition (3.102) the coefficients cj yield the value J = 0, thus minimizing J.
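
The equivalence can be seen numerically. The sketch below (with the assumed test function e^x and Legendre polynomials for w(x) = 1) evaluates the coefficients c_j by formula (3.104) and confirms that the resulting expansion reproduces f at the zeros of φ_{n+1}.

import numpy as np
from numpy.polynomial import legendre

n = 3
x, A = legendre.leggauss(n + 1)                      # zeros of phi_{n+1}, weights
f = np.exp(x)

c = np.empty(n + 1)
for j in range(n + 1):
    Pj   = legendre.legval(x, [0]*j + [1])
    c[j] = np.sum(A * f * Pj) / np.sum(A * Pj * Pj)  # discrete least squares, (3.104)

# The expansion sum_j c_j phi_j interpolates f at the integration points.
print(np.max(np.abs(legendre.legval(x, c) - f)))     # ~ 0 up to rounding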

3.16 Gauss Lobatto Integration

To integrate f(x) over [−1, 1] choose the integration points at −1, 1, and the zeros of φn−1, so that the complete set of points consists of the zeros of

(1 + x)(1− x)φn−1(x) (3.108)

Then let

f(x) = q(x)(1 - x^2)\,\phi_{n-1}(x) + r(x)    (3.109)

where, if f(x) is a polynomial of degree ≤ 2n − 1, q(x) is a polynomial of degree ≤ n − 2 and r(x) is a polynomial of degree ≤ n. Now let the polynomials φj be orthogonal for the weight function

w(x) = 1− x2 (3.110)

Then

I = \int_{-1}^{1} f(x)\,dx = \int_{-1}^{1} q(x)\,\phi_{n-1}(x)\,w(x)\,dx + \int_{-1}^{1} r(x)\,dx = \int_{-1}^{1} r(x)\,dx    (3.111)


since φn−1 is orthogonal to all polynomials of lower degree. Also the approximate integral is

\tilde{I} = \sum_{j=0}^{n} A_j f(x_j)    (3.112)

where

A_j = \int_{-1}^{1} \phi_{n,j}(x)\,dx    (3.113)

Then

\tilde{I} = \sum_{j=0}^{n} A_j\, q(x_j)(1 - x_j^2)\,\phi_{n-1}(x_j) + \sum_{j=0}^{n} A_j\, r(x_j) = \sum_{j=0}^{n} A_j\, r(x_j)    (3.114)

since (1 - x_j^2)\,\phi_{n-1}(x_j) = 0 for j = 0, 1, ..., n. But then

\tilde{I} = I    (3.115)

since r(x) is a polynomial of degree ≤ n. Here the φj are the Jacobi polynomials P_j^{(1,1)}, where P_j^{(\alpha,\beta)} is orthogonal for the weight

(1 - x)^\alpha (1 + x)^\beta    (3.116)
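
A sketch of Gauss-Lobatto integration assembled as in this section: the nodes are −1, +1 and the zeros of φ_{n−1} (computed here as the zeros of P'_n, which agree with the zeros of the Jacobi polynomial P^{(1,1)}_{n−1} up to a constant factor), and each weight is obtained by integrating the corresponding Lagrange coefficient as in (3.113). The cubic test integrand is an assumed example.

import numpy as np
from numpy.polynomial import legendre, polynomial as P

n = 4                                            # n + 1 = 5 Lobatto points
interior = legendre.legroots(legendre.legder([0]*n + [1]))   # zeros of P_n'
nodes = np.concatenate(([-1.0], interior, [1.0]))

weights = []
for j in range(n + 1):
    # Lagrange coefficient phi_{n,j} in power form, integrated over [-1, 1].
    others = np.delete(nodes, j)
    coeffs = P.polyfromroots(others) / np.prod(nodes[j] - others)
    integ  = P.polyint(coeffs)
    weights.append(P.polyval(1.0, integ) - P.polyval(-1.0, integ))

f = lambda t: t**3 + 3*t**2 + 1                  # degree 3 <= 2n - 1, so exact
print(np.dot(weights, f(nodes)), 4.0)            # exact integral over [-1, 1] is 4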

3.17 Portraits of Lagrange, Newton and Gauss

Figure 3.1: Joseph-Louis Lagrange (1736-1813)


Figure 3.2: Sir Isaac Newton (1642-1726)

Figure 3.3: Carl Friedrich Gauss (1777-1855)
