
Copyright 2006, C. W. Gear L1-1

Numerical Methods for Evolutionary Systems
C. W. Gear

Celaya, Mexico, January 2007

Plan: to survey classes of problems that are important in chemical engineering and to look at the properties of various methods.

Objective: to give the audience an understanding of the issues to be concerned with when selecting a computer code to solve a given problem.

Note: In most cases a user should not be writing his or her own code to solve a problem, but use one of the many off-the-shelf packages that are available. Real world problems present many challenges to a code, so one should restrict oneself to using codes that have undergone considerable development and improvement whenever possible. However, it is important to understand the issues that govern accuracy and speed of solution before choosing a code.

Outline

1. Fundamental Ordinary Differential Equations
2. Stiff Differential Equations
3. Differential-Algebraic Equations
4. Slow Manifolds

Listings of many of the MATLAB programs used for the examples appear in these slides. These are present only for completeness and are not intended to be read! Those who have an electronic version of these slides can copy them into a MATLAB file if they wish to experiment with them.

A number of references are cited on the slides. Copies of those that include Gear as an author can be found at

www.princeton.edu/~wgear

Copyright 2006, C. W. Gear L1-2

(Ordinary) Differential Equations

dy/dt = f(y, t)   (non-autonomous – depends on t)

dy/dt = f(y)      (autonomous – independent of t)

y may be a vector [y1, y2, ..., yN]^T. It could even be infinite-dimensional (a PDE).

We can turn a non-autonomous system into an autonomous one by defining an additional variable yN+1 = t with the additional differential equation dyN+1/dt = 1.

Sometimes we have higher-order equations – for example

d^2y/dt^2 = F(y, dy/dt)

We can turn this into a larger, first-order system with the substitution z1 = y, z2 = dy/dt to get

dz1/dt = z2,   dz2/dt = F(z1, z2)

Copyright 2006, C. W. Gear L1-3

A differential equation specifies a vector direction everywhere (where it is defined). If we pick an initial value, a curve with these vectors as tangents is the solution. However, there is a family of solutions – one for every initial value.

[Figure: direction field in the (t, y) plane, with the solution curve passing through the chosen initial value.]

Copyright 2006, C. W. Gear L1-4

We will concentrate on evolutionary problems in this course.

These are initial value problems. The solution is specified at a given time and we want to know the solution at future points in time.

Ordinary differential equations (ODEs) have unique solutions to an initial value problem under mild continuity assumptions on f(y). We will generally assume that the right-hand side (the function f(y)) is differentiable as often as we need.

It is important to realize that if f(y) has discontinuities in its values or its derivatives, some methods may not perform as well – we will look at some examples of this. Such situations can occur when, for example, a change is made to the operating environment of a system such as a reactor.

When the ODE system has more than one component, we can also have boundary value problems in which data are specified at more than one point in time (usually two). These problems frequently occur in control problems where we know where we start and we want to arrive at a specific end state – for example, sending a rocket to Mars. They generally involve a different type of method than initial value problems, and existence and uniqueness of solutions are no longer guaranteed. We will not discuss these in this course.

Copyright 2006, C. W. Gear L1-5

The differential equation defines the solution, y(t), for all values of the independent variable t. However, in a computer we can only represent a finite amount of information so we usually represent the solution at a set of discrete time points

t0 < t1 < t2 < … < tN

where tN is the final time we want to get to.

We will calculate numerical approximations to y(tn) at each of these points, tn, and name these approximations yn.

t0 is the starting point for the integration and we are given the initial value y0 = y(t0).

Copyright 2006, C. W. Gear L1-6

Suppose we are trying to approximate the solution: we approximate the solution of a differential equation locally by some easily computable function – such as a polynomial. The simplest is a straight line!

[Figure: in the (t, y) plane, straight-line segments starting from y0 give approximations y1, y2, y3, y4 at t1, t2, t3, t4; each step introduces a local error, and the accumulated departure from the true solution after 4 steps is the global error.]

Copyright 2006, C. W. Gear L1-7

In the previous slide we approximated the solution over one step by using the tangent to the curve, which is given by dy/dt = f(y). If we know y we can compute this. The method, called the Forward Euler method, is

yn+1 = yn + (tn+1 – tn) f(yn)

Usually we write the step size as h = tn+1 – tn (or as hn if it varies from step to step).
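As a concrete sketch of the method (illustrative code, not one of the lecture's listings), a fixed-step Forward Euler integrator for an autonomous system y' = f(y) might look like this:

function [t, Y] = forward_euler(f, t0, tN, y0, N)
% Sketch of the Forward Euler method y_{n+1} = y_n + h*f(y_n) with fixed step h.
h = (tN - t0)/N;             % step size
t = t0 + (0:N)*h;            % discrete times t0, t1, ..., tN
Y = zeros(length(y0), N+1);  % one column of the solution per time point
Y(:,1) = y0(:);
for n = 1:N
    Y(:,n+1) = Y(:,n) + h*f(Y(:,n));  % Forward Euler step
end
end

For example, [t, Y] = forward_euler(@(y) -y, 0, 1, 1, 64) approximates y' = -y, y(0) = 1, over [0,1].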

The step size (and the method) are chosen to get the accuracy we need. How big is the error? We can look at it with Taylor’s series. We have

y(tn+1) = y(tn) + h (dy/dt) + (h^2/2) (d^2y/dt^2) + ...

or

y(tn+1) = y(tn) + h y'(tn) + h^2 y''(tn)/2 + ...   (exact)

(This assumes sufficient differentiability.) Since Euler's method matches the first two terms, the local error is h^2 y''/2. This tells us that if we halve the step size, h, then the local error reduces by a factor of four. However, if we halve the step size, we will take twice as many steps to cover the same ground, so we introduce twice as many local errors, so the global error at the end of a fixed interval will reduce by a factor of two – that is, the global error in this case is proportional to h (i.e., h to the FIRST power) so this is called a

FIRST ORDER METHOD (see example from MATLAB program)

Copyright 2006, C. W. Gear L1-8

No. Steps      Error       Error/h
     2      2.5000e-001     0.500
     4      1.2500e-001     0.500
     8      6.2500e-002     0.500
    16      3.1250e-002     0.500
    32      1.5625e-002     0.500
    64      7.8125e-003     0.500

Forward Euler method for y' = t with various step sizes over interval [0,1] (MATLAB program for this on next slide). Because y'' is constant (=1), the local error is exactly h^2/2 and the global error is exactly N h^2/2, where the number of steps is N = 1/h. It is often helpful to plot errors on a log-log graph to see the order (from the slope of the line). (It is exactly a straight line in this example because there are no higher-order derivatives in the solution. Generally the log-log plot of the error will only approach a straight line as h gets small.)

Solutions for different step sizes

Copyright 2006, C. W. Gear L1-9

%Ex1.m Forward Euler method for y' = t, y(0) = 0 over interval [0,1].
color = 'krgbmc'; % Sequence of colors for plotting at each step size
figure(1); hold off % Plot results for different h's here
% Labeling for printed output
fprintf('No. Steps Error Error/h\n\n')
for i = 1:6 % Run for 6 different step sizes
    N = 2^i; h = 1/N; % Number of steps and step size
    y = 0; % Initial value
    for n = 1:N % Do N steps
        t = (n-1)*h; % t at start of step
        yold = y;
        y = y + h*t; % Forward Euler step
        %plot new segment of solution
        plot([t;t+h], [yold;y],['-' color(i)],'LineWidth',2 )
        axis([0 1 0 0.5]); xlabel('Time, t'); ylabel('Solution y');
        hold on
        pause(2/N) % Make it run slowly to watch output
    end
    % Print error at end of interval
    fprintf('%6i %10.4e %5.3f\n',N,0.5-y,(0.5-y)/h);
    Error(i) = 0.5 - y; % Save errors for log plot
    nsave(i) = n; % ... and n's
end
print -dpsc Ex1-1; pause % print copy of plot and wait for key stroke
figure(2) %Log-log plot of errors
hold off
loglog(nsave,Error,'-b','LineWidth',3)
xlabel('Number steps'); ylabel('Error');
axis([1E0 1E3 1E-3 1E0]);
print -dpsc Ex1-2

Copyright 2006, C. W. Gear L1-10

First-order methods are not accurate enough for most problems – that is, the step size has to be so small that the large number of steps takes too much computer time. We need to find more accurate methods.

TAYLOR’S Series methods

Attempt to compute more derivatives in the Taylor’s series. These are not usually worth considering since it is hard to compute the high derivatives for most problems.

For example, forming several derivatives of even a simple system of ODEs would be difficult to do by hand correctly. There are computer programs that do it, but they may be computationally expensive.

Copyright 2006, C. W. Gear L1-11

Higher-order methods – Runge-Kutta methods

Some integration methods can be found by using the equality

y(tn+1) = y(tn) + ∫[tn to tn+1] (dy/dt) dt

and considering various approximations for dy/dt. For example, the Forward Euler method is obtained by setting dy/dt = f(y(t)) ≈ f(y(tn)). Suppose instead we use the mid-point of the interval and set dy/dt ≈ f(y(tn+1/2)) to get

yn+1 = yn + h f(yn+1/2)

Unfortunately, we don't know yn+1/2 so we cannot evaluate this formula. However, we could estimate yn+1/2 using Forward Euler with yn+1/2 = yn + h f(yn)/2 to get the method:

k0 = h f(yn)
k1 = h f(yn + k0/2)
yn+1 = yn + k1

This is an example of a class of methods known as Runge-Kutta methods. The important thing about these methods is that they are ONE STEP. They start with only the knowledge of yn and give a procedure for calculating yn+1. They take MORE THAN ONE evaluation of f(y) in each step – and this affects their computational cost.

Copyright 2006, C. W. Gear L1-12

The ORDER of Runge-Kutta methods. (This is algebraically messy so we will only look at the example on the previous slide since it is relatively simple.)

We expand everything in Taylor’s series, usually around tn

k0 = h f(yn) = h y'n

k1 = h f(yn + k0/2) = h [ y'n + (k0/2)(∂f/∂y) + (1/2)(k0/2)^2 (∂^2f/∂y^2) + ... ]
   = h y'n + (h^2/2) y''n + (h^3/8) (∂^2f/∂y^2) (y'n)^2 + ...

so

yn+1 = yn + k1 = yn + h y'n + (h^2/2) y''n + (h^3/8) (∂^2f/∂y^2) (y'n)^2 + ...

Since

y(tn+1) = y(tn) + h y'(tn) + (h^2/2) y''(tn) + (h^3/6) y'''(tn) + ...

and yn, y'n, y''n agree with the exact values at tn, the local error is

y(tn+1) - yn+1 = (h^3/6) y'''(tn) - (h^3/8) (∂^2f/∂y^2) (y'n)^2 + O(h^4)

Hence, the method is SECOND ORDER.
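A quick numerical check of this (a sketch, not one of the lecture's programs): for y' = -y, y(0) = 1, integrated to t = 1 with this method, the error should drop by a factor of about 2^2 = 4 each time the step size is halved.

f = @(y) -y;
err = zeros(1,5);
for j = 1:5
    N = 10*2^j; h = 1/N; y = 1;
    for n = 1:N
        k0 = h*f(y); k1 = h*f(y + k0/2); y = y + k1;  % the second-order method above
    end
    err(j) = abs(y - exp(-1));       % error against the exact solution at t = 1
end
disp(err(1:end-1)./err(2:end))       % ratios should be close to 4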

Copyright 2006, C. W. Gear L1-13

There are Runge-Kutta methods of many orders. For example

THIRD ORDER

k0 = h f(yn);   k1 = h f(yn + k0/3)
k2 = h f(yn + 2 k1/3);   yn+1 = yn + (k0 + 3 k2)/4

FOURTH ORDER

k0 = h f(yn);   k1 = h f(yn + k0/2)
k2 = h f(yn + k1/2);   k3 = h f(yn + k2)
yn+1 = yn + (k0 + 2 k1 + 2 k2 + k3)/6

These are not the only choices of coefficients to achieve these orders (2, 3, and 4). Note that the number of function evaluations (also called stages) is 2, 3, or 4 for each of these methods. HOWEVER, a FIFTH ORDER METHOD takes at least 6 stages, and the number rapidly increases. Also note, these are not the best choices of coefficients – leave that to an off-the-shelf code!

We want to examine the consequences of using different orders, and we will use these examples of

orders 1 through 4 for tests on the simple problem y' = λy + A t^5/120, with λ = -2, A = 1, y(0) = y0

to compute y(1).

Copyright 2006, C. W. Gear L1-14

Below we see the log-log plot of the errors at all 4 orders versus number of steps. The higher order is clearly superior for smaller step sizes (larger numbers of steps and smaller errors), but for large step sizes we see that the first order method is more accurate than the second order one. We see that if we are happy with a very approximate answer, a lower order method may be less expensive, but as we desire more and more accuracy, higher-order methods become better. (The MATLAB code for this is shown on the next slide.)

Copyright 2006, C. W. Gear L1-15

%Ex2.m Comparison of 1st through 4th order RK methods on y' = lambda*y + A*t^5/120, y(0) = y0;
lambda = -2; A = 1;
y0 = -0.013; const = y0*lambda^6 + A; % Constant of integration for initial value y0
for i = 1:6 % 6 different step sizes
    N = 2^i; h = 1/N; % Number of steps and step size
    Init = [y0;0]; % Vector is [y; t]
    y1 = Init; y2 = Init; y3 = Init; y4 = Init; % Initial values for four orders
    for n = 1:N %Do each step
        %First order method
        k0 = h*fun(lambda,A,y1); y1 = y1 + k0;
        %Second order method
        k0 = h*fun(lambda,A,y2); k1 = h*fun(lambda,A,y2 + k0/2); y2 = y2 + k1;
        %Third order method
        k0 = h*fun(lambda,A,y3); k1 = h*fun(lambda,A,y3 + k0/3); k2 = h*fun(lambda,A,y3+2*k1/3);
        y3 = y3 + (k0 + 3*k2)/4;
        %Fourth order method
        k0 = h*fun(lambda,A,y4); k1 = h*fun(lambda,A,y4 + k0/2); k2 = h*fun(lambda,A,y4 + k1/2);
        k3 = h*fun(lambda,A,y4 + k2);
        y4 = y4 + (k0 + 2*k1 + 2*k2 + k3)/6;
    end
    %Compute errors
    tL = y1(2)*lambda; % t*lambda at end of interval
    Solution = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*(1/24+tL/120))))))/lambda^6;
    Err_y1(i) = abs(Solution - y1(1)); Err_y2(i) = abs(Solution - y2(1));
    Err_y3(i) = abs(Solution - y3(1)); Err_y4(i) = abs(Solution - y4(1));
    nsteps(i) = N;
end
figure(3)
loglog(nsteps,Err_y1,'-k',nsteps,Err_y2,'-r',nsteps,Err_y3,'-b',nsteps,Err_y4,'-g','LineWidth',2)
legend('Order 1','Order 2','Order 3','Order 4')
xlabel('Number steps'); ylabel('Error');
print -dpsc Ex2

function derivative = fun(lambda,A,y)
derivative = [lambda*y(1)+y(2)^5/120; 1];

Copyright 2006, C. W. Gear L1-16

In the last example, we plotted error against the number of steps. However, if we have a complex problem the majority of the computer time is spent evaluating the derivatives, not in performing the step-by-step integration. Hence, we often measure the “cost” by counting the number of derivative evaluations (often called function evaluations). If we don’t have a complex problem, there is little point in worrying about which method is fastest because on today’s PCs, a simple problem can be executed in much less time than it takes to enter the problem into the computer!

The Runge-Kutta method takes an increasing number of function evaluations as the order is increased. Below we show the error versus the number of function evaluations for the last example. We can see that the cost advantage of lower order methods for low accuracy is increased.

Copyright 2006, C. W. Gear L1-17

The "problem" with RK methods is that they find out a lot of information about the solution during one step (for example, they have estimates of its derivatives), but all of that information is thrown away before the next step is started. Multi-step methods use information from past steps to try to increase the accuracy of the solution without using additional function evaluations. One important class of these methods can be found by again considering the equivalence:

y(tn+1) = y(tn) + ∫[tn to tn+1] y'(t) dt    (1)

Let us suppose that we have already computed the solution at a number of points tn-i, i = 0, 1, 2, ..., along with the derivatives y'n-i. We could use this information to approximate y'(t) by interpolation so that we could estimate the integral in eq. (1). For example, if we just use one additional past point we have, from the Lagrange interpolation formula,

y'(t) ≈ y'n (t - tn-1)/(tn - tn-1) + y'n-1 (t - tn)/(tn-1 - tn)

If we now substitute this in (1) and integrate we get the approximation

yn+1 = yn + (3/2) h y'n - (1/2) h y'n-1

• This is a two-step method – it uses information from two places, tn and tn-1.
• It is a second-order method, but it only uses one function evaluation.

It is called the (2-step) Adams-Bashforth method. There are q-step Adams Bashforth (AB) methods for all q > 0. They take the form:

yn+1 = yn + h Σ (i = 1 to q) βi y'n+1-i

and are of q-th order. We can compute the coefficients in a number of ways – but this is not something the average user ever needs to do! (We will come back to this issue a little later.) The next example contains the coefficient values for orders up to four.
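As a check of the 2-step coefficients (a standard calculation, not taken verbatim from the slides): with equally spaced points, so that tn-1 = tn - h, integrating the linear interpolant above from tn to tn + h gives

\[
\int_{t_n}^{t_n+h} \frac{t - t_{n-1}}{t_n - t_{n-1}}\,dt = \frac{3h}{2},
\qquad
\int_{t_n}^{t_n+h} \frac{t - t_n}{t_{n-1} - t_n}\,dt = -\frac{h}{2},
\]

so the weights multiplying y'n and y'n-1 are (3/2)h and -(1/2)h, which is exactly the 2-step Adams-Bashforth formula above.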

Copyright 2006, C. W. Gear L1-18

In the next example we will integrate the problem from the previous example by one- through four-step AB methods, and plot the errors (solid lines) compared with those of the RK methods (dotted lines) used previously. The first order methods are identical. The “kinks” at the start of the graphs for the 2nd and 3rd order methods are to do with issues we will discuss later.

Note that there is a difficulty with multi-step methods – we need to know the values at multiple points before we can get started. Automatic, off-the-shelf codes handle this in various ways. For this simple example, we are going to compute the exact values – since we know the solution! The Matlab code for this is shown on the next slide.

Copyright 2006, C. W. Gear L1-19

%Ex3.m Comparison of 1st through 4th order Adams Bashforth & RK methods on y' = lambda*y + A*t^5/120, y(0) = y0;
Ex2 % Ex2.m should be executed before this is run to compute some values that are used.
% Integration coefficients: beta(q,:) are the coefficients for the q-step method
beta = [[1 0 0 0]; [3 -1 0 0]/2; [23 -16 5 0]/12; [55 -59 37 -9]/24];
for i = 1:6 % 6 different step sizes
    N = 2^i; h = 1/N; % Number of steps and step size
    Init = [y0;0]; % Vector is [y; t]
    % Compute derivative at past points
    tL = -lambda*h;
    Derivative1 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
    tL = -2*lambda*h;
    Derivative2 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
    tL = -3*lambda*h;
    Derivative3 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
    for order = 1:4
        yd = [Derivative1; Derivative2; Derivative3]; %Previous derivative value array
        y = Init;
        for n = 1:N %Do each step NOTE This is not an efficient code when order < 4.
            derivative = fun(lambda,A,y);
            yd = [derivative(1); yd]; %Now we have all four derivatives
            y(1) = y(1) + h*beta(order,:)*yd; %Adams Bashforth formula
            y(2) = n*h; %Compute t directly
            yd = yd(1:3); %Save the last three derivatives
        end
        %Compute errors
        tL = y(2)*lambda; % t*lambda at end of interval
        Solution = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*(1/24+tL/120))))))/lambda^6;
        ErrAB(order,i) = abs(Solution - y(1));
    end
    nsteps(i) = N;
end
figure(5)
hold off
loglog(nsteps,ErrAB(1,:),'-k',nsteps,ErrAB(2,:),'-r',nsteps,ErrAB(3,:),'-b',nsteps,ErrAB(4,:),'-g','LineWidth',2)
hold on
loglog(nsteps,Err_y1,':k',nsteps*2,Err_y2,':r',nsteps*3,Err_y3,':b',nsteps*4,Err_y4,':g','LineWidth',3)
legend('AB-1','AB-2','AB-3','AB-4','RK-1','RK-2','RK-3','RK-4')
xlabel('Number Evaluations'); ylabel('Error');
print -dpsc Ex3

Copyright 2006, C. W. Gear L1-20

Note that the costs of the two methods (RK and AB) were comparable at the same order and accuracy for this problem. However, that is not always the case. The error plot below is for the equation y' = -2y, y(0) = 1 by RK and AB for orders 1 through 4. Here the multi-step method is superior, which is the reason that multi-step methods are often preferable unless there are other reasons to prefer one-step methods (which we will discuss later). At 4th order, AB is nearly twice as efficient (in function evaluations).

Copyright 2006, C. W. Gear L1-21

Adams multi-step methods of very high orders were used in the pre-computer days by astronomers (and others) because of the low amount of work (really important if you are doing it by hand!). Naturally there was a wish to squeeze everything one could out of a single function evaluation (the costly part). Returning to our integral formula:

y(tn+1) = y(tn) + ∫[tn to tn+1] y'(t) dt    (1)

Let us use a different interpolation formula to approximate y'(t), one that also uses y'n+1:

y'(t) ≈ y'n+1 (t - tn)(t - tn-1)/[(tn+1 - tn)(tn+1 - tn-1)]
      + y'n (t - tn+1)(t - tn-1)/[(tn - tn+1)(tn - tn-1)]
      + y'n-1 (t - tn+1)(t - tn)/[(tn-1 - tn+1)(tn-1 - tn)]

If we substitute this in (1) and integrate we will get:

yn+1 = yn + (5/12) h y'n+1 + (2/3) h y'n - (1/12) h y'n-1

This is also a 2-step method, but to evaluate the right-hand side we need y'n+1 = f(yn+1), which depends on the as-yet unknown yn+1. In other words, we have to (approximately) solve the implicit equation

yn+1 = yn + (5/12) h f(yn+1) + (2/3) h y'n - (1/12) h y'n-1

This appears to be computationally costly, but it will turn out that an approximation can be found with little additional computation and that the benefits of these implicit methods are worth the small additional computation time. These methods are called Adams-Moulton methods. A q-step Adams-Moulton method has order q+1.
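As a small numerical illustration of what "solving" this implicit equation involves (a sketch with illustrative values; the fixed-point iteration used here is exactly what the predictor-corrector implementation on the next slide organizes into steps):

lambda = -2; f = @(y) lambda*y;     % the test equation y' = -2y
h = 0.1;
yn    = 1;                          % current value (illustrative)
ydn   = f(yn);                      % y'_n
ydnm1 = f(exp(-lambda*h));          % y'_{n-1}, taken from the exact solution exp(lambda*t)
ynew  = yn + h*ydn;                 % crude starting guess for y_{n+1}
for m = 1:5                         % functional (fixed-point) iteration
    ynew = yn + h*(5/12*f(ynew) + 2/3*ydn - 1/12*ydnm1);
    fprintf('iteration %d: y_(n+1) = %.10f\n', m, ynew)
end

The iterates settle down very quickly because the contraction factor here is |5hλ/12| = 1/12.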

Copyright 2006, C. W. Gear L1-22

How might we "solve" such an implicit method? One approach is the Predictor-Corrector implementation that is used in many codes. In this approach we get a first approximation by using an explicit method (such as an Adams-Bashforth method), called the predictor, and then do functional iteration of the implicit integration method (such as Adams-Moulton), called the corrector. For the two-step Adams-Moulton method on the last slide, the functional iteration is

yn+1^[m+1] = yn + (5/12) h f(yn+1^[m]) + (2/3) h y'n - (1/12) h y'n-1

where the superscript [m] counts the iterations. Usually only one functional iteration is done. For example, if we use the 2-step Adams-Bashforth/Moulton pair as a predictor-corrector (PC) combination, we perform the operations

pn+1 = yn + (3/2) h y'n - (1/2) h y'n-1
yn+1 = yn + (5/12) h f(pn+1) + (2/3) h y'n - (1/12) h y'n-1

This is a 2-step 3rd order predictor-corrector method. We haven't said what value we use for y'n at the next step – there are two options: we could use f(pn) or f(yn). In the former case we first PREDICT pn+1, then EVALUATE f(pn+1), then CORRECT to get yn+1. In the latter case, we have one more EVALUATION. Hence we call these two cases PEC and PECE methods, respectively. Note that a PEC method has one function evaluation per step, while a PECE method has two. (Off-the-shelf codes will typically automatically select between these two, depending on the needs of the equation being integrated.)

The next example (Ex4) compares the results of Adams-Bashforth against PEC and PECE implementations of Adams-Bashforth/Moulton. The next slide shows the comparative error results. The code is on the following slide.

Copyright 2006, C. W. Gear L1-23

We see that the PC methods are superior to Adams-Bashforth in terms of function evaluations, while the PECE implementation is slower than the PEC implementation. Note that PEC-q and PECE-q are of order q+1 while AB-q is of order q.

Copyright 2006, C. W. Gear L1-24

%Ex4.m Comparison of one- through 4-step Adams Bashforth/Moulton methods in PEC and PECE implementations
% versus Adams Bashforth on the equation
% y' = lambda*y + A*t^5/120, y(0) = y0;
lambda = -2; A = 1;
y0 = -0.013; const = y0*lambda^6 + A; % Constant of integration for initial value y0
% Integration coefficients: betaB(q,:) are the AB coefficients for the q-step method
betaB = [[1 0 0 0]; [3 -1 0 0]/2; [23 -16 5 0]/12; [55 -59 37 -9]/24];
% Integration coefficients: betaM(q,:) are the AM coefficients for the q-step method
betaM = [[1 1 0 0 0]/2; [5 8 -1 0 0]/12; [9 19 -5 1 0]/24; [251 646 -264 106 -19]/720];
for i = 1:9 % 9 different step sizes
    N = 2^i; h = 1/N; % Number of steps and step size
    Init = [y0;0]; % Vector is [y; t]
    % Compute derivative at past points
    tL = 0;
    Derivative0 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
    tL = -lambda*h;
    Derivative1 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
    tL = -2*lambda*h;
    Derivative2 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
    tL = -3*lambda*h;
    Derivative3 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
    for order = 1:4
        yd = [Derivative0; Derivative1; Derivative2; Derivative3]; %Previous derivative value array
        y = Init; yE = y; ydE = yd; yB = y; ydB = yd; %y is PEC, yE is PECE, and yB is AB method
        for n = 1:N
            t = n*h;
            yB(1) = yB(1) + h*betaB(order,:)*ydB; yB(2) = t; %Adams-Bashforth method
            dB = fun(lambda,A,yB); ydB = [dB(1); ydB];
            p = y(1) + h*betaB(order,:)*yd; pE = yE(1) + h*betaB(order,:)*ydE; %Predictors
            d = fun(lambda,A,[p; t]); dE = fun(lambda,A,[pE; t]);
            yd = [d(1); yd]; ydE = [dE(1); ydE]; %Now we have all five derivatives
            %Adams Moulton formula
            y(1) = y(1) + h*betaM(order,:)*yd; yE(1) = yE(1) + h*betaM(order,:)*ydE;
            dE = fun(lambda,A,[yE(1); t]); ydE(1) = dE(1); %Final evaluation for PECE
            y(2) = t; yE(2) = t;
            yd = yd(1:4); ydE = ydE(1:4); ydB = ydB(1:4); %Save the last four derivatives
        end
        %Compute errors
        tL = y(2)*lambda; % t*lambda at end of interval
        Solution = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*(1/24+tL/120))))))/lambda^6;
        ErrPEC(order,i) = abs(Solution - y(1));
        ErrPECE(order,i) = abs(Solution - yE(1));
        ErrB(order,i) = abs(Solution - yB(1));
    end
    nsteps(i) = N;
end
figure(6)
hold off
loglog(nsteps,ErrB(1,:),':k',nsteps,ErrB(2,:),':r',nsteps,ErrB(3,:),':b',nsteps,ErrB(4,:),':g','LineWidth',2)
hold on
loglog(nsteps,ErrPEC(1,:),'--k',nsteps,ErrPEC(2,:),'--r',nsteps,ErrPEC(3,:),'--b',nsteps,ErrPEC(4,:),'--g','LineWidth',2)
loglog(2*nsteps,ErrPECE(1,:),'-k',2*nsteps,ErrPECE(2,:),'-r',2*nsteps,ErrPECE(3,:),'-b',2*nsteps,ErrPECE(4,:),'-g','LineWidth',2)
legend('AB-1','AB-2','AB-3','AB-4','PEC-1','PEC-2','PEC-3','PEC-4','PECE-1','PECE-2','PECE-3','PECE-4',3)
xlabel('Number Evaluations'); ylabel('Error');
title('Adams Bashforth and Adams Moulton in PEC and PECE implementations')
print -dpsc Ex4

Copyright 2006, C. W. Gear L1-25

There are many variants of multi-step methods, although generally only two are widely used – Adams methods and the Backward Differentiation methods we will discuss later. All methods are based on the idea of approximating yn+1 by a formula using previous values of y and y'. The most general form of this is the q-step method

yn+1 = Σ (i = 1 to q) αi yn+1-i + h Σ (i = 0 to q) βi y'n+1-i

If β0 is zero, this is an explicit method, otherwise it is an implicit method.

How can we choose the coefficients αi and βi? Since higher-order methods look to be better – at least for high accuracy – we usually choose them to get high order. A "guaranteed" way to find them is to expand in Taylor's series. If we ask that the first p+1 terms in the Taylor's series of the right-hand side match those on the left, we will have a p-th order method (the error will contain an h^(p+1) term). Clearly we get linear equations in the coefficients, and we have 2q+1 coefficients, so it looks as if we could get order 2q (a short MATLAB sketch of this calculation follows below). For example, the q = 2 case is called Milne's method. It is:

yn+1 = yn-1 + h (y'n+1 + 4 y'n + y'n-1)/3

It is 4th order accurate (and implicit). We will see that these higher-order methods are no good for q > 2 and may not be very good for q = 2. We can best illustrate the problem with an even simpler explicit method, the mid-point rule. It is

yn+1 = yn-1 + 2 h y'n

which is second-order accurate and explicit. Let us use it to solve the problem y' = λy, y(0) = 1. The graph on the next slide shows the relative error for λ = 1 and λ = -1. (Relative error is the error relative to the solution.)
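A minimal sketch of that Taylor-matching calculation for the q = 2 case (illustrative code, not one of the lecture's listings). Requiring the formula yn+1 = a1 yn + a2 yn-1 + b0 y'n+1 + b1 y'n + b2 y'n-1 to be exact for y = t^k, k = 0, ..., 4, with tn = 0 and h = 1, gives five linear equations whose solution is Milne's method.

tpts  = [0 -1];        % t_n, t_{n-1}
tdpts = [1  0 -1];     % t_{n+1}, t_n, t_{n-1}
M = zeros(5); r = zeros(5,1);
for k = 0:4
    yvals  = tpts.^k;                 % y = t^k at t_n and t_{n-1}
    ydvals = k*tdpts.^(max(k-1,0));   % y' = k*t^(k-1) at the three points
    M(k+1,:) = [yvals ydvals];
    r(k+1)   = 1;                     % y(t_{n+1}) = 1^k
end
coeffs = M\r;   % expect [0 1 1/3 4/3 1/3]', i.e. Milne's method
disp(coeffs')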

Copyright 2006, C. W. Gear L1-26

%Ex5 Mid-point method for y' = y and y' = -y.
N = 2^4; h = 2/N; %Number of steps and step size
%First, y' = y
yv = [1; exp(-h)]; %Vector of past y's.
YP = [yv(1)]; %Array for results
for i = 1:N
    y = yv(2) + h*2*yv(1);
    yv = [y; yv(1)];
    YP = [YP y];
end
%Now, y' = -y
yv = [1; exp(h)]; %Vector of past y's.
YM = [yv(1)]; %Array for results
for i = 1:N
    y = yv(2) - h*2*yv(1);
    yv = [y; yv(1)];
    YM = [YM y];
end
trange = (0:N)*h;
plot(trange,(YP-exp(trange))./exp(trange),'-b',...
     trange,(YM-exp(-trange))./exp(-trange),'--r','LineWidth',2)
legend('y'' = y','y'' = -y')
xlabel('time'); ylabel('Error')
print -dpsc Ex5

Notice how the error for y' = -y is increasing more rapidly and is oscillating. What is going on? To find out, we need to understand an important issue about multi-step methods.

Copyright 2006, C. W. Gear L1-27

When we solve a multi-step method, we are solving a difference equation. Thus, in the previous example, with y' = λy we have computed the following sequence:

yn+1 = yn-1 + 2 h λ yn

This is a 2-step difference equation. The general solution of the q-step difference equation

yn+1 = Σ (i = 1 to q) ai yn+1-i

(for y' = λy the multi-step formula takes this form, with ai = αi + hλβi) is yn = Σ ci θi^n, where the {θi} are the roots of the characteristic polynomial

θ^q - Σ (i = 1 to q) ai θ^(q-i) = 0

In this example, the polynomial is θ^2 - 2hλθ - 1 = 0. Hence the roots are

θ = (2hλ ± sqrt(4h^2λ^2 + 4))/2,   or   θ1 = hλ + sqrt(1 + h^2λ^2)   and   θ2 = hλ - sqrt(1 + h^2λ^2)

Thus we see that θ1 = exp(hλ) + O(h^3) and θ2 = -exp(-hλ) + O(h^3). Hence the solution has the form

yn = c1 [exp(λtn) + O(h^2)] + c2 (-1)^n [exp(-λtn) + O(h^2)]

The first term is a second order accurate approximation to the solution; the second is an extraneous term due to the second root of the difference equation. If λ is positive, the second term decays and is not a problem, but if λ is negative, the second term grows and oscillates in sign, as we saw in this example.
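This is easy to check numerically (a sketch, not one of the lecture's programs): the roots of θ^2 - 2hλθ - 1 = 0 compared with exp(hλ) and -exp(-hλ) for λ = -1.

lambda = -1;
for h = [0.2 0.1 0.05]
    th = roots([1 -2*h*lambda -1]);   % characteristic polynomial of the mid-point rule
    fprintf('h = %4.2f  roots = %9.5f %9.5f  exp(h*lambda) = %9.5f  -exp(-h*lambda) = %9.5f\n', ...
            h, max(th), min(th), exp(h*lambda), -exp(-h*lambda));
end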

Copyright 2006, C. W. Gear L1-28

The phenomenon we saw in the last example is a lack of stability. Stability is an issue that is important in virtually all numerical computation schemes. Whenever you have a finite-dimensional process (as any process on a computer must be) that starts with q numerical values, say yn, and produces new values, say yn+1, and this process is repeated, we have the potential for instability. Instability means that a small perturbation in the initial value or any intermediate value grows in size as the computation proceeds. Thus, if the process is

yn+1 = F(yn)

and we make a small perturbation to y0, say to y0 + ε0, then y1 is changed to

y1 + ε1 = F(y0 + ε0) ≈ F(y0) + (∂F/∂y) ε0

Thus ε1 ≈ J0 ε0, where J0 is the Jacobian of F with respect to y. If J0 were constant (= J, say) then we would have εn ≈ J^n ε0. J is a q by q matrix so it has q eigenvalues. If any of these are larger than one in magnitude, J^n increases unboundedly with n and we have potential instability problems.

In this example we start with two values, yn and yn-1, and compute "two" new ones, yn+1 and yn (of course, we don't actually compute yn since we already have it, but the computational process passes these two values on to the next step). Hence there are two eigenvalues, and we saw for the last example that one was about exp(hλ) and the other was about -exp(-hλ). For the equation y' = λy one root has to be close to exp(hλ) since that is how much the solution changes by in one step of length h. No other roots should be larger than this, or errors will grow faster than the solution and we will have instability. Three slides ago we said that although we could get 2q-th order q-step methods, they were not generally useful. That is because of an important theorem proved in 1957, often called the (first) Dahlquist barrier (after the person who first proved it). It states that a q-step multi-step method of order greater than q+2 will always have one root greater than 1 for the problem y' = 0, and that this is also true for order greater than q+1 when q is odd. These methods are unstable. With even q and order q+2 (as in Milne's method) there is more than one root with magnitude one. These methods are weakly stable (as long as there are not repeated roots) and will not work for some problems.

Copyright 2006, C. W. Gear L1-29

Generally speaking we would like all of the roots (eigenvalues) for the "null" problem y' = 0 (except for the one that has to be 1 for the solution to be correct) to be less than one in magnitude. Then we have a strongly stable method. The Adams methods are strongly stable.

STABILITY is an important concept in computation. In ODEs it is not a problem for one-step methods like RK methods because only one value is passed from step to step. (Actually, if we have a system of s ODEs, s values are passed from step to step, but there will be s eigenvalues equal to 1 for the null problem.) However, it is very important for multi-step methods, and will be important in other problems later in these lectures.

Another concept that is important in the solution of these types of problems is CONVERGENCE. In almost all cases we cannot solve a differential equation exactly on a computer. (We can only really do that if we can find an explicit integral in a form that can be evaluated exactly, and those are generally not very important problems.) Hence, we approximate the solution of a differential equation by computing approximate values for it at a series of points, ti, a distance h apart. We saw earlier that as we made h smaller, we got more and more accurate values. A method is CONVERGENT if its error goes to zero as h goes to zero. (In practice round-off errors from the finite precision of a computer make it impossible to get errors smaller than some amount – in fact, in many computations, as one keeps increasing the number of steps, the error will ultimately start to increase because of the accumulation of round-off errors.) What convergence means practically is that over some useful range we can reduce h and get more accuracy. The order of convergence is the power of h in the global error term.

A final fundamental concept is CONSISTENCY. A method is consistent if the local error decreases faster than h. Thus the first order methods we have talked about which have local errors proportional to h2 are consistent.

Then we have the important theorem: A method that is STABLE and CONSISTENT is CONVERGENT. This gives us two easy-to-verify criteria that enable us to guarantee we have convergence.

Copyright 2006, C. W. Gear L1-30

CONVERGENCE: we have already given an outline of the proof of convergence. The idea is that over an interval of length L we will take L/h steps of length h. If each step introduces a local error that can be bounded by h^(1+θ) for some θ > 0, and if the amplification of any errors introduced is bounded by K, then the errors at the end cannot sum up to anything greater than

K (L/h) h^(1+θ) = K L h^θ

which goes to zero as h goes to zero. The bound on local error growth is a result of stability, and the bound on the size of the local errors comes from consistency. Of course, the details of the proof are tedious (and not of interest here).

We use convergent methods, not because we plan to drive the step size to zero (we don't have enough computer time for that) but because we can, if we need to, reduce the step size to get more accuracy should our results not be accurate enough. It also provides us with a simple way to estimate the error in a computation – repeat it using a smaller step size and see how the result changes. This is not usually the least expensive way to estimate errors, but it can be used as a last resort when other methods can't be applied. We will discuss it a little later.

Copyright 2006, C. W. Gear L1-31

Virtually all codes for ODEs automatically adjust the step size to control the error and some also adjust the order of the method. Typically a code has input parameters that specify both a relative error and an absolute error. The former is the size of the error relative to the size of the solution while the latter is just the size of the error. The reason for having both is that if the solution is changing in size by a large amount, we may want to just specify the number of digits of accuracy we would like, and this is done with relative error – e.g., a relative error of 10^-5 would indicate that one would like 5 digits of accuracy. However, if the solution passes through zero, a relative accuracy criterion will not work, since it would require zero error at that point, which is not usually possible. Hence most codes also allow for an absolute accuracy criterion to be specified. If both are specified the code tries to satisfy the less demanding.

Unless one has a very good idea of the size of the solution, both should be specified. For example, if we just specified an absolute error of, say, 10^-5, and the solution grew to 10^15, a computer with less than a 20-digit representation of numbers would not be able to meet the error request because of round-off errors. Most computers have about 16 digits of accuracy in double precision (7 in single), so if the solution were much larger than 10^10 we could not get an absolute error of 10^-5.

100,000,000,000.000,00   (for a number of this size, the approximate position of round-off in IEEE double precision floating point is at the 10^-5 place)

If, on the other hand, we only used relative error, we would get into trouble if the solution got too small.
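In MATLAB, for example, both tolerances are passed to the solvers through odeset; a minimal sketch (using the fun5a right-hand side listed later in these slides):

options = odeset('RelTol',1e-5,'AbsTol',1e-8);   % about 5 digits, with an absolute floor
[T,Y] = ode45(@fun5a,[0 20],[0; 1],options);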

Copyright 2006, C. W. Gear L1-32

Earlier we defined local error (the error made in a single step) and global error (the accumulated error after many steps). Obviously the user would like to control the global error (and many probably believe that they are doing this) but this is almost always not possible. Codes usually estimate the local error and try to control it in some way. To see the issues involved, we will look at some of the error estimation methods.

One of the oldest is called Richardson extrapolation. It applies to any method for which we have some idea of the rate of convergence. In the context of ODEs we may know the order of the error as a function of h. For example, we know that the global error of the Forward Euler method is proportional to h. Suppose we do two integrations using Forward Euler, one with step size h getting an answer S1 and one with step size h/2 getting an answer S2. Now we have

S1 = S + Kh + O(h^2)   and   S2 = S + Kh/2 + O(h^2)

where S is the true integral. If we view these as two equations in Kh and S we can solve for both, finding that

Kh = 2(S1 – S2) + O(h^2)   and   S = 2S2 – S1 + O(h^2)

Thus we can either estimate the error in the computed answers (Kh) or can get a more accurate estimate of the answer (S) – which is what the Richardson extrapolation formula was originally used for.

This method can be used to estimate the global error in the solution of ODEs if they are solved with a fixed step size. However, it is computationally expensive (two integrations) and can’t be done until the integration has been completed. If we then decided that the error was too big, we would have to go back to the beginning and repeat the process with a smaller step size. We would like to choose each step so that the error is controlled – and we would like to do this in the most efficient manner.
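As a small illustration (a sketch, not one of the lecture's programs): two fixed-step Forward Euler integrations of y' = -y, y(0) = 1, over [0,1], one with step h and one with h/2, give both an error estimate and an extrapolated answer as described above.

f = @(y) -y;
S = zeros(1,2);
for j = 1:2
    N = 50*2^(j-1); h = 1/N; y = 1;    % step h for j = 1, h/2 for j = 2
    for n = 1:N, y = y + h*f(y); end   % Forward Euler
    S(j) = y;
end
Kh   = 2*(S(1) - S(2));                % estimate of the error Kh in S(1)
Sext = 2*S(2) - S(1);                  % extrapolated answer
fprintf('error in S1 = %g, in S2 = %g, in extrapolated value = %g\n', ...
        S(1)-exp(-1), S(2)-exp(-1), Sext-exp(-1))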

Copyright 2006, C. W. Gear L1-33

We can use Richardson extrapolation on a single step to estimate the local error. Suppose we have a p-th order, one-step method, written as yn+1 = M(h, yn). Then, if we apply it once with step size h, and twice with step size h/2, we have:

One step:    u1 = M(h, y(tn)) = y(tn+1) + K h^(p+1) + O(h^(p+2))

Two steps:   v1 = M(h/2, y(tn)) = y(tn + h/2) + K (h/2)^(p+1) + O(h^(p+2))
             v2 = M(h/2, v1) = y(tn+1) + 2 K (h/2)^(p+1) + O(h^(p+2))

hence

u1 - v2 = K h^(p+1) (1 - 2^-p) + O(h^(p+2))

Thus this allows us to estimate the local error, and if it is too large we could repeat the step with a smaller h to get an error of a size that would be acceptable. Of course, since we have an estimate of the error, we might naturally want to subtract it from the answer to get greater accuracy! The answer is then yn+1 = (v2 – 2^-p u1)/(1 – 2^-p).

This is a problem whenever we do error estimation. If we have a good estimate of the error then it seems obvious that we should take the better answer by removing the error. But then we have no error estimate!

Usually we take the better answer and assume that the neglected higher-order error is smaller than the term we just estimated.

Copyright 2006, C. W. Gear L1-34

EXAMPLE: Richardson extrapolation of Forward Euler

One step of size h:      u1 = yn + h y'n

Two steps of size h/2:   v1 = yn + (h/2) y'n;   v2 = v1 + (h/2) f(v1)

Extrapolating (p = 1):   yn+1 = 2 v2 - u1 = yn + h f(yn + (h/2) y'n)

The result is the second-order RK method we discussed earlier. Generally, when we use Richardson extrapolation to improve the accuracy we just get another integration formula. We could have used Richardson extrapolation to estimate the error – using either the improved value or one of the original Euler integrations. What we would have then is a Runge-Kutta formula with an embedded error estimate. Such methods are commonly used in RK codes, where they are often called Runge-Kutta pairs. In effect, there are two different Runge-Kutta formulas – usually sharing many function evaluations – whose difference provides an error estimate.

When we have a code with an error estimate, that estimate is used to control the step size. If the code uses a p-th order method and is asked to bound the local error by some quantity ε, it will estimate the error in each step it takes, getting, say, the error estimate E. If E ≤ ε the step will be accepted. If E > ε it will repeat the calculation with a smaller step. Since the local error is proportional to h^(p+1), it will use a new step size of

hnew = γ hold (ε/E)^(1/(p+1))

where γ < 1 is a “safety factor” chosen so that the chance of the repeated step having too large an error is small. Whether the step is accepted or rejected, the step size for the next step is adjusted using this formula so that its error will probably be acceptable – the objective being to try to maximize the step size while minimizing the number of rejected steps so as to minimize total computation time.
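In code, this step-control rule amounts to something like the following sketch (the function name and the value of the safety factor are illustrative, not taken from any particular solver):

function [hnew, accept] = step_control(hold, E, tol, p)
% Sketch of the rule h_new = gamma*h_old*(tol/E)^(1/(p+1)) for a p-th order method.
gamma  = 0.9;                             % "safety factor" < 1
hnew   = gamma*hold*(tol/E)^(1/(p+1));    % aim the next local error at tol
accept = (E <= tol);                      % if false, redo the step with hnew
end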

Copyright 2006, C. W. Gear L1-35

Multi-step method error estimation.

When a multi-step method is used, the difference between the predictor and the corrector provides an estimate of the error. For example, if we use a (p+1)-step Adams-Bashforth predictor (order p+1) and a p-step Adams-Moulton corrector (also order p+1), they both have local errors proportional to h^(p+2) y^(p+2), where y^(p+2) is the (p+2)-nd derivative of y, so the predictor-corrector difference gives a direct estimate of h^(p+2) y^(p+2).

Most automatic codes that use multi-step methods use this mechanism to estimate the local error and either accept the step or reject it and repeat it. The step control algorithm is similar to the one we just discussed for one step methods.

One other feature of automatic multi-step codes is that the order is also selected automatically. Since the local error is approximately equal to a product of a power of the step size, an error coefficient determined by the particular formula used, and a derivative, the code can estimate the derivatives at several different orders using backward differences of the computed solution and thus estimate which order would allow the largest step size. In a typical code, a step is taken. If the error estimate is too large, the step is repeated with a smaller step size and possibly a lower order. If the step is successful, consideration is given to increasing the step size for the next step.

However, because a multi-step method has additional stability issues because of the additional values carried along, the step or order is not usually changed for a number of steps following any change.

Multi-step methods have a starting problem because they need past values. Most codes handle this by starting at 1st order, where they are one-step methods and need no additional information. Then they use their automatic order control to slowly raise the order to that most appropriate for the problem. However, this starting phase is less efficient than when the method is working at its optimal order. Generally, multi-step methods are more efficient than Runge-Kutta methods when high orders are useful – which usually happens when one wants high accuracy. The inefficiency in their starting phase is usually overcome by the efficiency in the high order phase, but we will see cases when this is not true.

Copyright 2006, C. W. Gear L1-36

Note that most of the error estimation methods we have discussed estimate the local error. The user would usually like to control the global error but that is not, in general, possible. (In fact we can only get approximate estimates of the local error, so we may not even control that correctly.)

The reason we cannot control global error inexpensively in a code is that while we are integrating, we have no idea how the solution may change in the future. This is illustrated below, where we see that a small local error at one point may lead to a large global error component later. If you need to estimate the global error, you have to integrate more than once with different step sizes and look at the difference between the two computed solutions. With some codes, you can run twice (or more) with different error requests.

[Figure: solution curves in the (t, y) plane, showing a small local error introduced at one point and its much larger contribution to the global error at a later time.]

Copyright 2006, C. W. Gear L1-37

The following example illustrates the use of an automatic step control code. It integrates the Van der Pol equation with parameter μ = 1 with 9 different tolerances, from 10^-2 to 10^-10. Since we don't know the true answer, we will assume that the value obtained with the tightest tolerance is the correct answer, and subtract it from the other 8 answers to estimate the error in each.

The Matlab code ode113 is used. It is a variable order, variable step Adams code. An option allows us to print the number of function evaluations and other integration statistics, and we have gathered these and plotted the absolute errors in the answers versus the number of function evaluations and versus the requested tolerance. Note that the error does not appear to change very much between the two largest tolerances (although it actually has the opposite sign, so it changes a lot). At very large tolerances, codes are often not very responsive to the tolerance specification. In the middle of the range the code has the good property that the error decreases as the tolerance is decreased (this is called tolerance proportionality), and that the number of function evaluations decreases as we ask for less accuracy. (If we assume that the step size is roughly proportional to the inverse of the number of function evaluations, we can estimate from this graph that the average order of the method used is about 8.)

Copyright 2006, C. W. Gear L1-38

Code for last example

% Ex5A Example of using an automatic code with different error estimates.
% Run on van der Pol equation with mu = 1;
Last = [];
for i = 1:9
    error = 0.1*10^(-i);
    options = odeset('RelTol',error,'AbsTol',error,'Stats','on');
    fprintf('\n')
    [T,Y] = ode113(@fun5a,[0 20],[0; 1],options);
    Last = [Last; Y(end,1)];
end
Lastd = Last(1:8) - Last(9); %Assume that last result is "true value" at end
for i = 1:8
    fprintf('%25.15e\n',Lastd(i));
end
% Note: the following numbers of function evaluations had to be taken from the printed output
% when the program was executed (caused by the 'Stats','on' spec in odeset for options).
Fnevals = [208; 296; 398; 518; 640; 779; 945; 1149];
figure(5)
loglog(Fnevals,abs(Lastd),'-b')
ylabel('Error')
xlabel('Function Evaluations')
axis([200 2000 1E-9 1E-2]);
print -dpsc Ex5a_1
figure(6)
loglog(0.1*10.^(-1:-1:-8),abs(Lastd),'-g')
ylabel('Error')
xlabel('Tolerance')
print -dpsc Ex5a_2

function derivative = fun5a(t,y)
% Van der Pol equation
mu = 1;
derivative(1,1) = y(2);
derivative(2,1) = -y(1) + mu*(1-y(1)^2)*y(2);

Copyright 2006, C. W. Gear L1-39

Discontinuities. Earlier we said that we would assume that f in y' = f(y) had as many derivatives as we needed. We have seen that the local error is proportional to a higher derivative. If the function has discontinuities in its value or a derivative, that analysis breaks down. The graph below is the solution of the ODE

y' = c(t)·A·t^4,   y(0) = 0,   c(t) = 1 for 0 ≤ t < 1/3,   c(t) = 0.5 for 1/3 ≤ t ≤ 1

using the 1st through 4th order RK from example 2. Note that the slopes of the log-log plots of the errors for 2nd through 4th orders are no longer 2 through 4 as before. The slope of the fourth order case is one, not four, meaning that the order of the method has dropped to one from the four we would normally have gotten from the method. The errors of the 2nd and 3rd order methods are somewhat erratic but the average slope is close to one.

Copyright 2006, C. W. Gear L1-40

The problem is that the discontinuity caused an error of order h in the step that straddles the discontinuity at 1/3 (which was deliberately chosen so that it is never at a step boundary in this example). The stranger behavior of the 2nd and 3rd order methods has to do with the position within the step at which the discontinuity occurs. It can be explained, but is not worth our time here. The important fact to learn from this example is that discontinuities can cause a breakdown in the error behavior and will give problems to an automatic method. (Most automatic codes will still work, because they will tend to estimate very large errors and cut the step size, but they may take an excessive amount of computer time.)

On this slide we show results from the same code but with the discontinuity at t = 0.5, which is always on a step boundary (since we have divided the interval [0,1] into a power of 2 equal steps). The discontinuity is on the left of 0.5 (meaning that if we evaluate at exactly 0.5 we will get the value that holds to the right of 0.5). Note that the 2nd and 3rd order methods now have slopes 2 and 3, but the 4th order method still has slope one. In this example, the particular RK methods used do not use the value at the end point of an interval in orders 1 through 3, but do in the case of order 4. This suggests a solution to the problem. When there is a discontinuity, a step boundary must be placed at the discontinuity, and when the step to the left of the discontinuity is executed, all evaluations must use the values before the discontinuity. Similarly, all evaluations for the step to the right must use values to the right.

Copyright 2006, C. W. Gear L1-41

Note that it is not good enough just to move the location of the discontinuity from the left to the right of its location. The graph on this slide is the same example with the discontinuity to the right of 0.5.

In typical problems with discontinuities, they are caused by actions such as the "throwing of switches" that introduce a change to the model. When such problems are simulated, it is important to integrate up to the point of the discontinuity, then "throw the switch" (change the model), and then continue integrating. It is easy to integrate up to the time of the discontinuity if it happens at a known time. However, in many problems the "switch is thrown" when a certain variable exceeds some value, and it is unknown when this will happen. If the model permits integration beyond the point at which the "switch is thrown" without throwing it, then it is often more efficient to complete the integration step that passes the discontinuity, interpolate to find the value at the discontinuity, and then restart. So, if we change the model when a variable exceeds some value, we simply check that variable after each step, and if it has exceeded the value we use inverse interpolation to find the time at which that happened, and then interpolate all other values to that time point before restarting. One-step methods like Runge-Kutta have a major advantage for problems with many discontinuities because no information is extrapolated from step to step. Multi-step methods will suffer a similar breakdown in order, so it will be necessary to integrate up to the discontinuity, and then restart to the right of the discontinuity. Since this is a less efficient part of the multi-step method, they may not perform as well.
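A minimal sketch of that inverse-interpolation step (all names are illustrative; a production code would use the integrator's own interpolant and perhaps iterate):

function [tstar, ystar] = locate_switch(tn, tn1, yn, yn1, gn, gn1, gstar)
% The switching variable g has the values gn and gn1 at tn and tn1 and crosses
% the threshold gstar inside the step; locate the crossing by inverse linear
% interpolation and interpolate the state (yn, yn1 may be vectors) to that time.
theta = (gstar - gn)/(gn1 - gn);   % fraction of the step at which g = gstar
tstar = tn + theta*(tn1 - tn);
ystar = yn + theta*(yn1 - yn);
end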

Copyright 2006, C. W. Gear L1-42

%Ex6.m RK methods on y' = c(t)*A*t^4
% where c(t) is a piecewise constant function - i.e. has a discontinuity at d
% Discontinuity is to left of d if s == 0, otherwise to right of d.
A = 1;
y0 = 0;
for version = 1:3
    d = min(version/3,1/2); % Location of discontinuity
    s = max(0,version-2);
    Solution = A*(1+d^5)/10; % Solution at end point t = 1
    for i = 1:6 % 6 different step sizes
        N = 2^i; h = 1/N; % Number of steps and step size
        Init = [y0;0]; % Vector is [y; t]
        y1 = Init; y2 = Init; y3 = Init; y4 = Init; % Initial values for four orders
        for n = 1:N %Do each step
            %First order method
            k0 = h*fun6(s,d,A,y1); y1 = y1 + k0;
            %Second order method
            k0 = h*fun6(s,d,A,y2); k1 = h*fun6(s,d,A,y2 + k0/2); y2 = y2 + k1;
            %Third order method
            k0 = h*fun6(s,d,A,y3); k1 = h*fun6(s,d,A,y3 + k0/3); k2 = h*fun6(s,d,A,y3+2*k1/3);
            y3 = y3 + (k0 + 3*k2)/4;
            %Fourth order method
            k0 = h*fun6(s,d,A,y4); k1 = h*fun6(s,d,A,y4 + k0/2); k2 = h*fun6(s,d,A,y4 + k1/2);
            k3 = h*fun6(s,d,A,y4 + k2);
            y4 = y4 + (k0 + 2*k1 + 2*k2 + k3)/6;
        end
        %Compute errors
        Err_y1(i) = abs(Solution - y1(1)); Err_y2(i) = abs(Solution - y2(1));
        Err_y3(i) = abs(Solution - y3(1)); Err_y4(i) = abs(Solution - y4(1));
        nsteps(i) = N;
    end
    figure(5+version)
    loglog(nsteps,Err_y1,'-k',nsteps,Err_y2,'--r',nsteps,Err_y3,':b',nsteps,Err_y4,'-.g','LineWidth',2)
    legend('Order 1','Order 2','Order 3','Order 4')
    xlabel('Number steps'); ylabel('Error');
    if s == 0
        title(['discontinuity at left of ' num2str(d)])
    else
        title(['discontinuity at right of ' num2str(d)])
    end
    eval(['print -dpsc Ex6_' num2str(version)])
end

function derivative = fun6(s,d,A,y)
%Example of discontinuous function
if s == 0
    if y(2) < d
        derivative = [A*y(2)^4; 1];
    else
        derivative = [0.5*A*y(2)^4; 1];
    end
else
    if y(2) <= d
        derivative = [A*y(2)^4; 1];
    else
        derivative = [0.5*A*y(2)^4; 1];
    end
end

Matlab code for discontinuous example