
Numerical Analysis PhD Qualifying Exam

University of Vermont, Winter 2011

1. (a) Given an initial guess $x_0$, derive Newton's method to find a better guess $x_1$ for approximating the root of a function $f(x)$. (b) Apply Newton's method to the function $f(x) = 1/x$ using an initial guess of $x_0 = 1$ and find a (simple) analytical expression for $x_{50}$.

Solution:

(a) Newton's method suggests we find the root of the line tangent to $f(x)$ at the point $x_0$, and use this root as our new guess. For an initial guess of $x_0$, we're looking for a line through the point $(x_0, f(x_0))$ with slope $f'(x_0)$. The point-slope form for this line is $y - f(x_0) = f'(x_0)(x - x_0)$. Substituting $y = 0$ into this line, we find
\begin{align*}
f'(x_0)(x - x_0) &= 0 - f(x_0) \\
x - x_0 &= -\frac{f(x_0)}{f'(x_0)} \\
x &= x_0 - \frac{f(x_0)}{f'(x_0)}
\end{align*}
We label this better guess $x$ for the root of $f(x)$ by $x_1$ and iterate.

(b)
\[
x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)} = x_i - \frac{1/x_i}{-1/x_i^2} = 2x_i
\]
Given an initial guess of $x_0 = 1$, we find $x_{50} = 2^{50}$.
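A quick numerical check of part (b) (a minimal sketch; the helper `newton` and the fixed step count are illustrative, not part of the exam):

```python
def newton(f, fprime, x0, steps):
    """Iterate Newton's method x_{i+1} = x_i - f(x_i)/f'(x_i) a fixed number of steps."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# f(x) = 1/x has no root; each Newton step doubles the iterate, so x_50 = 2^50.
x50 = newton(lambda x: 1.0 / x, lambda x: -1.0 / x**2, x0=1.0, steps=50)
print(x50, 2.0**50)  # both print 1125899906842624.0
```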

2. Solve the following linear system with naive Gaussian elimination (i.e. without partial pivoting)
\[
\begin{bmatrix} \frac{\varepsilon_{\mathrm{mach}}}{10} & 1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1 \\ 0 \end{bmatrix}
\]
using (1) infinite precision and (2) a computer whose machine epsilon is given by $\varepsilon_{\mathrm{mach}}$. Label your solutions $\vec{x}_{\mathrm{true}}$ and $\vec{x}_{\mathrm{comp}}$ respectively. Why is there such a large difference between the two? Note that the first pivot $\varepsilon_{\mathrm{mach}}/10$ is much larger than the smallest number the computer can represent.

Solution:

(1) Infinite precision. Eliminating the first column (multiplier $10/\varepsilon_{\mathrm{mach}}$) gives
\[
\begin{bmatrix} \frac{\varepsilon_{\mathrm{mach}}}{10} & 1 \\ 0 & 1 - \frac{10}{\varepsilon_{\mathrm{mach}}} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1 \\ -\frac{10}{\varepsilon_{\mathrm{mach}}} \end{bmatrix}
\]
Backward substitution gives
\[
\vec{x} = \begin{bmatrix} -\frac{10}{10 - \varepsilon_{\mathrm{mach}}} \\[4pt] \frac{10}{10 - \varepsilon_{\mathrm{mach}}} \end{bmatrix}
\]
This is the correct answer, i.e. $\vec{x}_{\mathrm{true}}$, which is approximately $[-1, 1]^\top$.

(2) On a double precision computer, the system reduces to the solution of
\[
\begin{bmatrix} \frac{\varepsilon_{\mathrm{mach}}}{10} & 1 \\ 0 & -\frac{10}{\varepsilon_{\mathrm{mach}}} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} 1 \\ -\frac{10}{\varepsilon_{\mathrm{mach}}} \end{bmatrix}
\]
which has the solution $\vec{x}_{\mathrm{comp}} = [0, 1]^\top$.

The difference $|\vec{x}_{\mathrm{true}} - \vec{x}_{\mathrm{comp}}|$ is quite large, $O(10^0)$. The reason is as follows. Assuming we're using a double precision computer, where $\varepsilon_{\mathrm{mach}} = O(10^{-16})$, the first pivot is $O(10^{-17})$. As a result, the multiplier used in naive Gaussian elimination is $O(10^{17})$, leading to swamping: the entry $1 - 10/\varepsilon_{\mathrm{mach}}$ is rounded to $-10/\varepsilon_{\mathrm{mach}}$, which produces the reduced system in part (2).

The difference occurs during back substitution. Letting $\alpha = \frac{10}{10 - \varepsilon_{\mathrm{mach}}}$, back substitution in part (1) produces an equation for $x_1$ of the form
\[
\frac{\varepsilon_{\mathrm{mach}}}{10}\, x_1 + \alpha = 1
\]
whose solution involves calculating $1 - \alpha$. This calculation suffers from catastrophic cancellation in part (2).
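The effect is easy to reproduce in IEEE double precision, where $\varepsilon_{\mathrm{mach}} \approx 2.2 \times 10^{-16}$ (a minimal sketch; the explicit elimination steps and the use of numpy's pivoted solver as a reference are illustrative):

```python
import numpy as np

eps = np.finfo(float).eps      # machine epsilon, about 2.2e-16
a11 = eps / 10.0               # the tiny first pivot

# Naive Gaussian elimination (no pivoting) in double precision.
m = 1.0 / a11                  # multiplier, about 4.5e16
a22 = 1.0 - m * 1.0            # the 1 is swamped: rounds to -m
b2 = 0.0 - m * 1.0
x2 = b2 / a22                  # = 1.0
x1 = (1.0 - x2) / a11          # catastrophic cancellation: = 0.0
print("x_comp =", [x1, x2])    # [0.0, 1.0]

# A pivoted solver recovers x_true, approximately [-1, 1].
A = np.array([[a11, 1.0], [1.0, 1.0]])
b = np.array([1.0, 0.0])
print("x_true ~", np.linalg.solve(A, b))
```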

3. Apply Gram-Schmidt to find a QR-factorization of the matrix
\[
A = \begin{bmatrix} 2 & 3 \\ -2 & -6 \\ 1 & 0 \end{bmatrix}
\]

Solution:

Take $y_1 = \begin{bmatrix} 2 \\ -2 \\ 1 \end{bmatrix}$. Then $r_{11} = \|y_1\|_2 = 3$ and $q_1 = \frac{y_1}{r_{11}} = \begin{bmatrix} 2/3 \\ -2/3 \\ 1/3 \end{bmatrix}$. Then
\[
y_2 = v_2 - q_1(q_1^T \cdot v_2) = \begin{bmatrix} 3 \\ -6 \\ 0 \end{bmatrix} - \begin{bmatrix} 2/3 \\ -2/3 \\ 1/3 \end{bmatrix}(6) = \begin{bmatrix} -1 \\ -2 \\ -2 \end{bmatrix}.
\]
$r_{12} = (q_1^T \cdot v_2) = 6$ and $r_{22} = \|y_2\|_2 = 3$. So $q_2 = \begin{bmatrix} -1/3 \\ -2/3 \\ -2/3 \end{bmatrix}$. If we stop here, we have
\[
A = \begin{bmatrix} 2/3 & -1/3 \\ -2/3 & -2/3 \\ 1/3 & -2/3 \end{bmatrix}
\begin{bmatrix} 3 & 6 \\ 0 & 3 \end{bmatrix}.
\]
If we wish to use QR for least squares, then we continue in this manner with an arbitrarily chosen $v_3 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$ to generate $q_3$. A complete QR-factorization of $A$ is
\[
A = \begin{bmatrix} 2/3 & -1/3 & 2/3 \\ -2/3 & -2/3 & 1/3 \\ 1/3 & -2/3 & -2/3 \end{bmatrix}
\begin{bmatrix} 3 & 6 \\ 0 & 3 \\ 0 & 0 \end{bmatrix}.
\]
Of course, there are several other QR decompositions up to a negative sign, e.g. the negative entries in column 2 can be switched to positive provided $R_{22} = -3$.
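A minimal numerical check of the reduced factorization (the helper `classical_gram_schmidt` is an illustrative implementation, not part of the exam):

```python
import numpy as np

def classical_gram_schmidt(A):
    """Classical Gram-Schmidt: A (m x n, full column rank) -> Q (m x n), R (n x n)."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        y = A[:, j].astype(float).copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]     # r_ij = q_i^T v_j
            y -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(y)
        Q[:, j] = y / R[j, j]
    return Q, R

A = np.array([[2.0, 3.0], [-2.0, -6.0], [1.0, 0.0]])
Q, R = classical_gram_schmidt(A)
print(np.round(Q, 3))          # columns [2/3, -2/3, 1/3] and [-1/3, -2/3, -2/3]
print(np.round(R, 3))          # [[3, 6], [0, 3]]
print(np.allclose(Q @ R, A))   # True
```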

4. Given an IVP $y' = f(t, y)$, methods for numerical integration are distinguished by their approximation of the integral in the formula $y(t + h) = y(t) + \int_{t_i}^{t_{i+1}} f(t, y)\,dt$. Derive the degree-2 Adams-Bashforth method (AB2), given by $w_{i+1} = w_i + \frac{h}{2}(3f_i - f_{i-1})$, in two steps:

(1) Approximate $f(t, y)$ with a polynomial $P_n(t)$ of degree $n$ interpolating the $n + 1$ points $(t_{i-n}, f_{i-n}), \ldots, (t_{i-1}, f_{i-1}), (t_i, f_i)$. Note that you should determine the degree $n$ based on the specified order of accuracy of AB2. It may help to label the constant step-size in time $h = t_i - t_{i-1}$.

(2) Evaluate the integral $\int_{t_i}^{t_{i+1}} P_n(t)\,dt$.

Solution:

The degree-2 Adams-Bashforth method requires a polynomial of degree 1. First, we use the Newton form of the interpolating polynomial to find $P_1(t)$, namely
\[
P_1(t) = f_{i-1} + \frac{f_i - f_{i-1}}{t_i - t_{i-1}}(t - t_{i-1}) = f_{i-1} + \frac{f_i - f_{i-1}}{h}(t - t_{i-1})
\]
Then we integrate
\begin{align*}
\int_{t_i}^{t_{i+1}} P_1(t)\,dt &= f_{i-1}h + (f_i - f_{i-1})\,\frac{(t_{i+1} - t_{i-1})^2 - (t_i - t_{i-1})^2}{2h} \\
&= f_{i-1}h + (f_i - f_{i-1})\,\frac{(2h)^2 - h^2}{2h} \\
&= f_{i-1}h + (f_i - f_{i-1})\,\frac{3h}{2}
\end{align*}
so that
\[
w_{i+1} = w_i + f_{i-1}h + \frac{3h}{2}(f_i - f_{i-1}) = w_i + h\,\frac{3f_i - f_{i-1}}{2}
\]
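A short sketch checking the second-order accuracy of AB2 on the test problem $y' = -y$, $y(0) = 1$ (the test problem, the exact value used to seed $w_1$, and the step sizes are illustrative choices, not part of the exam):

```python
import numpy as np

def ab2(f, t0, y0, y1, h, n_steps):
    """Two-step Adams-Bashforth: w_{i+1} = w_i + (h/2)(3 f_i - f_{i-1})."""
    w = np.zeros(n_steps + 1)
    w[0], w[1] = y0, y1        # a second starting value must be supplied
    for i in range(1, n_steps):
        w[i + 1] = w[i] + 0.5 * h * (3.0 * f(t0 + i * h, w[i])
                                     - f(t0 + (i - 1) * h, w[i - 1]))
    return w

f = lambda t, y: -y            # exact solution e^{-t}
errors = []
for n in (40, 80, 160):
    h = 1.0 / n
    w = ab2(f, 0.0, 1.0, np.exp(-h), h, n)   # seed w_1 with the exact value
    errors.append(abs(w[-1] - np.exp(-1.0)))
print([e1 / e2 for e1, e2 in zip(errors, errors[1:])])  # ratios near 4, i.e. O(h^2)
```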

5. The method
\[
Y_{n+1} - Y_{n-1} = \frac{h}{8}\left(5f_{n+1} + 6f_n + 5f_{n-1}\right), \quad \text{where } f_n \equiv f(x_n, Y_n), \text{ etc.} \tag{1}
\]
can be used to solve the initial-value problem
\[
y' = f(x, y), \qquad y(x_0) = y_0. \tag{2}
\]
Using the equation $y' = -\lambda y$ as a model problem ($\lambda > 0$), show that this method is A-stable.

Note: The notation $h\lambda/8 \equiv z$, so that $z > 0$, should be helpful.

Solution:

[The original solution is handwritten and did not survive text extraction.]
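A sketch of one way the A-stability argument can be made (not the original handwritten solution; $\rho$ below denotes a root of the characteristic equation): applying the method to $y' = -\lambda y$, so that $f_n = -\lambda Y_n$, and writing $z = h\lambda/8 > 0$ as suggested in the note,
\[
Y_{n+1} - Y_{n-1} = -z\left(5Y_{n+1} + 6Y_n + 5Y_{n-1}\right)
\quad\Longrightarrow\quad
(1 + 5z)\,Y_{n+1} + 6z\,Y_n - (1 - 5z)\,Y_{n-1} = 0.
\]
Seeking solutions of the form $Y_n = \rho^n$ gives the characteristic equation
\[
(1 + 5z)\,\rho^2 + 6z\,\rho - (1 - 5z) = 0,
\qquad
\rho_{\pm} = \frac{-3z \pm \sqrt{1 - 16z^2}}{1 + 5z}.
\]
For $z > 1/4$ the roots are complex conjugates with
\[
|\rho_{\pm}|^2 = \frac{9z^2 + (16z^2 - 1)}{(1 + 5z)^2} = \frac{5z - 1}{5z + 1} < 1.
\]
For $0 < z \le 1/4$ the roots are real; since $\sqrt{1 - 16z^2} \le 1$ we have $\rho_- \le \rho_+ \le \frac{1 - 3z}{1 + 5z} < 1$, and $\rho_- > -1$ follows from $3z + \sqrt{1 - 16z^2} \le 1 + 5z$, which holds because $1 - 16z^2 \le (1 + 2z)^2$. Hence both roots satisfy $|\rho| < 1$ for every $z > 0$, so the numerical solution decays for every step size $h > 0$ and the method is A-stable.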

6. Describe how you would solve a boundary-value problem on $x \in [a, b]$:
\[
y'' = y^3 - x, \qquad y(a) = \alpha, \qquad y(b) = \beta \tag{1}
\]
with second-order accuracy.

If you choose to use a finite-difference discretization, do the following:
• Write the equation at an internal point.
• Write the equations at the boundary points.
• Write your system of equations in matrix (or matrix-vector) form.
• Describe what method you would use to solve (or attempt to solve) your system of equations. Provide only brief necessary details about the method's setup; do not go deeply into its workings.

Note: If several alternative methods can be used, describe only one of them, not all. Also, your method does not have to be the best one; it should be just a reasonable method.

If you choose to use the shooting method, do the following:
• Write the equation (or equations) that you would be solving numerically.
• Explain what method(s) you would use to solve this equation (or these equations). You do not need to write the equations of the method(s); just write its (their) name(s) and, if needed, briefly justify your choice.

Note: You need to describe just one of the above methods of solution, not both.

Solution:

[The original solution is handwritten and did not survive text extraction.]
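As an illustration of the finite-difference route, here is a minimal sketch (not the original handwritten solution; the helper `solve_bvp_fd`, the grid size, the interval, and the boundary values are illustrative choices). At an internal point $x_j = a + jh$, $h = (b - a)/N$, the second-order discretization reads
\[
\frac{w_{j-1} - 2w_j + w_{j+1}}{h^2} = w_j^3 - x_j, \qquad j = 1, \ldots, N - 1,
\]
with $w_0 = \alpha$ and $w_N = \beta$ at the boundary points, and the resulting nonlinear tridiagonal system $F(w) = 0$ can be attacked with Newton's method:

```python
import numpy as np

def solve_bvp_fd(a, b, alpha, beta, N=100, newton_steps=20):
    """Second-order finite differences + Newton's method for y'' = y^3 - x,
    y(a) = alpha, y(b) = beta."""
    h = (b - a) / N
    x = a + h * np.arange(N + 1)
    w = alpha + (beta - alpha) * (x - a) / (b - a)   # initial guess: straight line
    for _ in range(newton_steps):
        # Residual F_j = (w_{j-1} - 2 w_j + w_{j+1})/h^2 - w_j^3 + x_j at interior points.
        F = (w[:-2] - 2.0 * w[1:-1] + w[2:]) / h**2 - w[1:-1]**3 + x[1:-1]
        if np.linalg.norm(F) < 1e-12:
            break
        # Tridiagonal Jacobian with respect to the interior unknowns w_1, ..., w_{N-1}.
        J = (np.diag(-2.0 / h**2 - 3.0 * w[1:-1]**2)
             + np.diag(np.full(N - 2, 1.0 / h**2), 1)
             + np.diag(np.full(N - 2, 1.0 / h**2), -1))
        w[1:-1] -= np.linalg.solve(J, F)             # Newton update; w_0 and w_N stay fixed
    return x, w

x, w = solve_bvp_fd(a=0.0, b=1.0, alpha=0.0, beta=1.0)
print(w[len(w) // 2])   # approximate value of y at the midpoint
```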

7. A method
\[
U^{n+1}_j - U^n_j = \frac{\kappa}{h^2}\left(U^n_{j+1} - U^n_j - U^{n+1}_j + U^{n+1}_{j-1}\right) \tag{1}
\]
is proposed by some people in the computational finance community in connection with solving the initial-boundary-value problem for the heat equation:
\[
u_t = u_{xx}, \quad x \in [0, 1], \quad t \ge 0; \qquad u(0, t) = \alpha, \quad u(1, t) = \beta, \quad u(x, 0) = \phi(x). \tag{2}
\]
(In Eq. (1), $\kappa$ and $h$ are the temporal and spatial steps, and $U^n_j$ is the numerical approximation to $u(jh, n\kappa)$.)

(a) Explain how this seemingly implicit scheme can be solved recursively, i.e. without inverting any matrix.

Hint: Draw the grid for the BVP (2) and try to find the solution node-by-node at the first time level. (The initial condition is prescribed at the zeroth time level.)

(b) Use the von Neumann analysis to show that this scheme is unconditionally stable.

Solution:

[The original solution is handwritten and did not survive text extraction.]
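A sketch of how both parts can be argued (not the original handwritten solution; the ratio $r = \kappa/h^2$ and the amplification factor $g$ are notation introduced here):

(a) Collecting the time levels, the scheme reads
\[
(1 + r)\,U^{n+1}_j = (1 - r)\,U^n_j + r\,U^n_{j+1} + r\,U^{n+1}_{j-1}, \qquad r = \frac{\kappa}{h^2}.
\]
Sweeping left to right at the new time level ($j = 1, 2, \ldots$), the value $U^{n+1}_{j-1}$ is already known: for $j = 1$ it is the boundary value $\alpha$, and for larger $j$ it was computed at the previous node. Every quantity on the right-hand side is therefore available, so each $U^{n+1}_j$ is obtained by a single division by $(1 + r)$ and no matrix needs to be inverted.

(b) Substituting $U^n_j = g^n e^{i\xi jh}$ into the scheme and dividing by $g^n e^{i\xi jh}$ gives
\[
g\left(1 + r - re^{-i\xi h}\right) = 1 - r + re^{i\xi h},
\qquad
g = \frac{1 - r(1 - \cos\xi h) + ir\sin\xi h}{1 + r(1 - \cos\xi h) + ir\sin\xi h}.
\]
Writing $a = r(1 - \cos\xi h) \ge 0$ and $b = r\sin\xi h$,
\[
|g|^2 = \frac{(1 - a)^2 + b^2}{(1 + a)^2 + b^2} \le 1
\]
for every wavenumber $\xi$ and every $r > 0$, since $(1 - a)^2 \le (1 + a)^2$. The scheme is therefore unconditionally stable.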