Solving Ordinary Differential Equations Using Taylor Series

GEORGE CORLISS, Marquette University, and Y. F. CHANG, University of Nebraska--Lincoln
Taylor series methods compute a solution to an initial value problem in ordinary differential equations by expanding each component of the solution in a long series. A portable translator program accepts statements of the system of differential equations and produces a portable FORTRAN object code which is then run to solve the system. At each step of the integration, the object program generates the series for each component of the solution, analyzes that series to determine the optimal step, and extends the solution by analytic continuation. The translator is easy to use, yet it is powerful and flexible. The computer time required by this approach consists of time to run the translator plus time to run the object code; CPU time and storage requirements depend on the size and complexity of the system of ODEs. Theoretical estimates and empirical test results are given for Hull's test problems, and comparisons with DVERK and DGEAR from IMSL are given. The computer time for all preprocessing, compilation, and linking is included. The Taylor series method executes faster and yields a more accurate answer than the standard methods for most of the problems in the test set. The Taylor series method is most attractive for small systems and for stringent accuracy tolerances.

Categories and Subject Descriptors: G.1.7 [Numerical Analysis]: Ordinary Differential Equations--error analysis, initial value problems, single step methods; G.4 [Mathematics of Computing]: Mathematical Software--efficiency, portability

General Terms: Algorithms, Performance

Additional Key Words and Phrases: Taylor series method

1. INTRODUCTION

The solution of initial value problems in ordinary differential equations using a preprocessor is an attractive alternative to standard methods for some types of problems. The ATSMCC (Automatic Taylor Series by Morris, Chang, and Corliss) method consists of (1) a translator which accepts statements of the system of ordinary differential equations and produces a FORTRAN object code to solve the system and (2) a library of subroutines called by the object code to determine the radius of convergence (Rc) of each component of the solution.

Authors' addresses: G. Corliss, Department of Mathematics, Statistics, and Computer Science, Marquette University, Milwaukee, WI 53233; Y. F. Chang, Department of Computer Science, University of Nebraska--Lincoln, Lincoln, NE 68588. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1982 ACM 0098-3500/82/0600-0114 $00.75



We discuss the translator, the algorithm used by the object code it produces, and the tests comparing the costs of solving Hull's test problems [26] using ATSMCC, DVERK, and DGEAR from the IMSL library.¹

The ATSMCC translator is a portable FORTRAN program. It was designed to be easy to use and flexible enough to produce object code in several different forms. The function of the translator is discussed in Section 2, and Section 3 contains examples of its use.

ATSMCC solves nonstiff to moderately stiff systems of initial value problems in ordinary differential equations for which each component of the solution is analytic (infinitely differentiable) on the interval of integration. Characterizations of problems which ATSMCC can solve, limitations of the method, and observations for the effective use of the method are given in Section 4.

The FORTRAN object code must (a) generate the power series for each component of the solution, (b) estimate the location and order of all primary singularities (those on the circle of convergence), (c) choose an integration stepsize as large as possible consistent with error control constraints, and (d) extend the solution by analytic continuation. Each of these tasks of the object program is examined in turn in Section 5.

In Section 6 we describe tests to compare ATSMCC with the IMSL routines DVERK and DGEAR on the basis of accuracy, computer time, and storage requirements. The ATSMCC method was more accurate than either of the standard methods on 90 percent of the test problems. The object code executed roughly as fast as the standard methods overall, but ATSMCC was faster on 85 percent of the test problems with an accuracy requirement of 1.0E-12 and slower on 85 percent of the problems with an accuracy requirement of 1.0E-3. When all the computer time for preprocessing, compiling, linking, and execution was included, the relatively high system-dependent cost of linking with the IMSL routines overwhelmed most other differences in CPU times.

In the rest of this section we outline the relationship of the ATSMCC to earlier work.

The solution of an initial value problem in ordinary differential equations expanded as a Taylor series has been given as both a classical and a numerical method for many years. As early as 1946, J. C. P. Miller used recurrence schemes for the terms of a Taylor series to compute the Airy integral for the British Association Mathematical Table [32]. In order to overcome the difficulty of finding an analytic expression for arbitrarily high derivatives, several authors [20, 33, 31, 2, 5, 36] have written translator programs to accept differential equations as input and to produce object code for solving the system. Gear [18] described similar translator packages whose object code solved the differential equations by standard methods. In 1966, Moore [33] used the control of the truncation error in solving differential equations by long Taylor series as an application of interval analysis. He has continued to promote the use of Taylor series as a numerical tool [14, 34, 35]. Rall [39] gives other applications of Taylor series methods.

The work reported here has much in common with the TAYLOR package developed by Barton et al. [2, 3, 37]. Our work differs from theirs in many details. Major advances have been made in series analysis and in ease of use.

¹ International Mathematical & Statistical Libraries, Inc., NBC Building, Sixth Floor, 7500 Bellaire Blvd., Houston, TX 77036.


In Sections 5.2 and 5.3 we discuss the estimation of the location and order of the primary singularity. This information about the analytic behavior of the solution is used to provide precise error control. The series analysis is based on a simple categorization of series types, an approach which was dismissed in [3] as being difficult and computationally expensive. The categorization has a firm mathematical basis in the work of Darboux and includes a special treatment for functions which are entire. The computational cost of these tests is much less than the cost of series generation, yet their accuracy and reliability make it possible to eliminate much of the computation required by TAYLOR.

The second area of improvement is in the ease of use. Most importantly, the entire ATSMCC package is written in FORTRAN. Since TAYLOR is not available in a portable form, a direct comparison with ATSMCC was not possible. Among the features of ATSMCC which make it easier to use are (1) a simpler input format, (2) no driver is necessary, although a driver may be used, (3) a choice of single- or double-precision code, (4) more readable FORTRAN object code, (5) several provisions for output as the solution proceeds, and (6) access to information about the analytic behavior of the solution.

2. ATSMCC TRANSLATOR

The ATSMCC translator is a special-purpose compiler written in portable FORTRAN for the solution of initial value problems in ordinary differential equations. This compiler accepts simple FORTRAN-like statements of the differential equations and the initial conditions, with or without special control statements. The translator is truly portable because it was designed with all of the machine-dependent parameters specified in one area of the program. This design has passed the PFORT verifier [40] with no errors or warnings and has been implemented on five different manufacturers' computers (IBM, CDC, DEC, Xerox, Prime), as well as on a Heath H-8 microcomputer. The translator and a detailed ATSMCC User Manual [8] are available on computer tape from Chang.

The machine-dependent parameters are: (1) the number of bits used to represent a character, (2) the number of characters stored in one word of memory, (3) the under/overflow limits, (4) the single-precision limit, (5) the double-precision limit, (6) the machine's internal character code (ASCII, BCD, etc.), and (7) right/left justification of characters in a machine word. These parameters are specified in a short main program whose only functions are to specify these and other control parameters and to call the body of the translator. The main program also contains documentation to help the user of ATSMCC.

To ensure program portability, all of the input characters are changed into numeric codes which will fit into an 8-bit word so that this translator can run even on popular microcomputers. The translator manipulates these numeric codes to perform the compilation and then writes the FORTRAN object program. All of the manipulations are in integer arithmetic for portability.

3. USE OF THE TRANSLATOR

The ATSMCC translator program is a tool for the solution of nonstiff initial value problems in ordinary differential equations whose right-hand side can be given as a finite sequence of +, -, *, /, **, EXP, SIN, COS, TAN, SINH, COSH, TANH, ALOG, ACOS, ASIN, ATAN, or of functions which are solutions to differential equations of this type. Moore [35] called this the class of functions most commonly used in computing. In this section, we discuss the coding of simple problems and a few of the more advanced features. More details are contained in [8].

The ATSMCC translator accepts four blocks of input, each terminated by an end of block mark "$".

The first input block is used to specify the system of differential equations and translator options. To enter the differential equations, DIFF(Y, X, N) is used to denote the Nth derivative of the dependent variable Y with respect to the independent variable X (1 ≤ N ≤ 6). Using the DIFF(,,) function, the user specifies the system of ODEs with FORTRAN-like statements using standard FORTRAN operators and functions.

The second input block may be empty and is used to insert nonexecutable FORTRAN statements at the beginning of the generated object program. The most common use of this feature is to cause the translator to produce a FUNCTION or SUBROUTINE object program instead of a main program. This feature is also useful when statements inserted into other parts of the object program require declared variables or COMMON areas.

The third input block is used to specify the interval of integration and the initial conditions. It may also be used to change the default values of some of the control parameters.

The fourth input block may be empty and is used to insert statements at the end of the integration step loop. By placing statements in this input block, the user may dynamically control such aspects of the computation as the amount of information printed at each step, or may interrupt the computation at a specific value of any component of the solution.

The simplest possible input requires

    differential equations (input block 1),
    (empty input block 2),
    interval starting and ending points (input block 3),
    initial conditions (input block 3), and
    (empty input block 4).

For example, to solve y'' = y' + y², y(0) = 4, y'(0) = -2, on the interval [0, 3], requires the input

    DIFF(Y, X, 2) = DIFF(Y, X, 1) + Y*Y
    $
    $
    START = 0.0
    END = 3.0
    Y(1) = 4.0
    Y(2) = -2.0
    $
    $

Many of the features and control parameters are implemented by specifying default values which may be overridden. For example, to solve the equation y^(6) = 5y^(4) + 2y' + 7y on the interval [0, 12] with initial conditions y(0) = 2.0, y'(0) = 1.0, y''(0) = -1.0, y'''(0) = 1.0, y^(4)(0) = 0.5, and y^(5)(0) = 0.2, with local accuracy of 1.0E-4 in an absolute error per unit step measure, and with the values of the solution printed only at each integer, the following input is required:

    DIFF(Y, T, 6) = 5*DIFF(Y, T, 4) + 2*DIFF(Y, T, 1) + 7*Y
    $
    $
    START = 0.0
    END = 12.0
    Y(1) = 2.0
    Y(2) = 1.0
    Y(3) = -1.0
    Y(4) = 1.0
    Y(5) = 0.5
    Y(6) = 0.2
    DLTXPT = 1.0
    ERRLIM = 1.0E-4
    IERTYP = 2
    MPRINT = 2
    $
    $

As in this case, the form of the input is often simpler than the verbal specification of the problem.

The translator initializes default values for the control parameters. Lines in the third input block are inserted directly into the object code, so any of the default values may be overridden. For parameters whose default values are acceptable, no action is required. This method of implementation allows beginning users to solve problems without being aware of all the available options.

Some of the more advanced features available are invoked by commands directed to the translator in the first input block. The most useful features implemented in this way include the generation of double-precision code, the use of more than 30 terms in the series, and the suppression of the series analysis for components of the solution which are known a priori to be ones which do not restrict the integration stepsize. This last feature is especially useful to reduce the computational cost for problems with some nonstiff and some only moderately stiff components.

4. APPLICABILITY

This section groups together the properties of problems which this method can solve, the known limitations of the method, and some guidance for using the method efficiently and effectively. These points are expanded at various points in this paper when the related properties of the translator or of the object code are being discussed.

The ATSMCC method can solve

(1) nonstiff to moderately stiff systems of initial value problems in ordinary differential equations

(2) in which the highest order derivative of each dependent variable is given explicitly on the left-hand side of an equation as a finite sequence of +, -, *, /, **, EXP, SIN, COS, TAN, SINH, COSH, TANH, ALOG, ACOS, ASIN, ATAN, or of functions which are solutions to differential equations of this type, and

(3) for which the solution is piecewise analytic on the interval of integration.


The known limitations of the method are

(1) As furnished on tape, the translator expects that derivatives are of order at most 6, that there are at most 20 equations in the system involving at most 140 variables, and that the total number of functions or products of variables is at most 140 minus the number of variables.

(2) This method cannot handle without manual intervention solutions which are polynomials, singular problems which require the application of l'Hospital's rule, or problems which experience catastrophic subtractive errors in series generation.

Knowledge of what an algorithm cannot do helps one to understand what it can do well. Accordingly, the rest of this section consists of observations based on the results reported in this paper and on the experience of the early users of ATSMCC.

This method is most attractive for problems with stringent accuracy requirements, for problems which must be solved repeatedly (like parameter identification), or for quick and easy problems (like students' assignments). In several instances, the very high order and precise error control used by ATSMCC have enabled it to solve problems which standard methods had been unable to solve.

The object code complexity and execution time depend on the number of functions and products of variables in the system, not on the size of the system or the order of the derivatives involved. There is no penalty for high-order derivatives. ATSMCC appears to be most attractive for systems with fewer than 15-20 functions or products.

The analytic information about the location and order of singularities in the solution sometimes provides insight into the behavior of the system. For example, this method has been used to map the first natural boundary ever recognized in the solution of a nonlinear dynamics problem [10].

The first equation in the first input block must begin DIFF(,,) = ... . After that, the equations may be entered in any order, but the highest order derivative of each dependent variable must occur explicitly on the left-hand side of some equation. The translator sorts the equations into the proper order for processing.

Linear equations with constant coefficients, especially those with full matrices, are probably done more efficiently by other methods designed for such problems.

Functions which are defined in a piecewise manner require the use of input blocks 2 and 4.

The default series length of 30 terms provided by the translator is appropriate for most problems, but there are a few circumstances in which a user may wish to change the series length. When a series begins with many terms equal to zero, the series used should be lengthened to include at least 10-15 nonzero terms. For example, one component of the solution to problem C4 begins with 50 terms equal to zero. For problems with modest accuracy requirements, a shorter series length may be slightly faster, but the accuracy of the estimates for Rc is compromised. Problems with stringent accuracy requirements, or with no functions or products, may run slightly faster if a longer series is used.


5. OBJECT CODE

We now turn to a discussion of the FORTRAN object code produced by the ATSMCC translator. A sample object program for solving the Painlevé equation y'' = 6y² + x, y(0) = 1, y'(0) = 0, appears as an appendix to this paper. Many variants of the code may be produced; the exact form depends on the user's requests. Here we discuss the simplest form the object code may take. Some of the options available are discussed in Section 3.

The object code implements the Taylor series algorithm for solving initial value problems in ordinary differential equations:

    Initialize control parameters;
    Assign initial conditions, starting and ending values for the independent variable, and optional method control parameters;
    Loop for each integration step:
        Initialize the first few series terms;
        Generate the remaining series terms;
        Determine the stepsize as a function of the location and order of the primary singularities, series length, error tolerance, and type of error measure to be controlled;
        Print the solution at user-selected points;
        Extend the solution by analytic continuation.

The stepsize used to expand the series is the same as the radius of convergence at the preceding step. After a series is generated and the truncation error is estimated, the stepsize is adjusted to control the error. This is easily done because a series does not need to be generated again when a stepsize change is made.

The terms of the series for a function g(x) expanded at x0 with a stepsize of h := x - x0 are stored as reduced derivatives G(k + 1) := g^(k)(x0) h^k / k!. The stepsize h may be varied to control underflow or overflow which may occur during the generation of the series.
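As a small illustration of this storage convention (ours, not taken from the ATSMCC object code), the Python sketch below builds the reduced derivatives of g(x) = exp(x) and recovers g(x0 + h) simply by summing them, which is exactly the analytic continuation step described later. The function name and parameters are assumptions for illustration.

    import math

    # G[k] holds the reduced derivative G(k+1) = g^(k)(x0) * h**k / k!
    def reduced_derivatives_exp(x0, h, n_terms):
        return [math.exp(x0) * h**k / math.factorial(k) for k in range(n_terms)]

    G = reduced_derivatives_exp(x0=0.3, h=0.2, n_terms=30)
    print(sum(G), math.exp(0.5))   # both ~ 1.6487: summing the terms continues g to x0 + h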

A method which uses an infinite Taylor series is A-stable, but in practice the series must be truncated to N terms. Then the characteristic polynomial is p(x, y) = x - Σ y^k/k!, the same as for a class of Runge-Kutta methods [44]. For example, for N = 20 and 40, the real-valued stability intervals are [-8.85, 0] and [-16.29, 0], respectively. Consequently, power series methods are best suited to nonstiff problems, although their high order and consequently large steps allow for the accurate solution of some moderately stiff problems. Barton [1] has succeeded in using a modified Taylor series method for solving stiff problems. Chang has developed a different modification for stiff problems which will be incorporated into a later version of the ATSMCC translator.
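The quoted intervals can be checked numerically: one step of the truncated method applied to y' = λy multiplies the solution by a partial sum R(z) of the exponential series with z = λh, so the real stability interval is the largest [x, 0] on which |R(z)| ≤ 1. The Python sketch below is ours, not the paper's; it assumes the quoted N is the degree of the retained polynomial (if N instead counts terms, the endpoints shift slightly).

    import math

    def R(z, degree):
        # partial sum of exp(z) through z**degree: the per-step growth factor
        return sum(z**k / math.factorial(k) for k in range(degree + 1))

    def real_stability_boundary(degree, z_min=-30.0, dz=1e-3):
        # walk left from 0 until |R(z)| first exceeds 1
        z = 0.0
        while z > z_min and abs(R(z, degree)) <= 1.0:
            z -= dz
        return z

    print(real_stability_boundary(20), real_stability_boundary(40))   # roughly -8.85 and -16.29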

The remaining parts of this section follow the outline given above for the object program.

5.1 Series Generation

Recurrence relations derived from the differential equation are used to generate the terms of the power series. If u(x) := f(x) × g(x), then the Leibnitz rule for differentiating products becomes

    U(k) = Σ_{j=1}^{k} F(j) × G(k - j + 1),

when phrased in terms of reduced derivatives. Hence, operations occurring in the differential equations are reformulated as products whenever possible. For example, if u(x) := [f(x)]^s, then u' = s f^(s-1) f' = s u f'/f, so that u'f = s f'u. Then Leibnitz's rule can be applied to both products. Henrici [24] credits this observation to J. C. P. Miller. Several authors have used this device to give tables of recurrence relations for common rational and exponential functions. The same approach has also been used in a processor which differentiates entire computer programs by accepting a FORTRAN function subprogram F and producing a FORTRAN subprogram which returns the power series for F [28].
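A minimal Python sketch of this device (ours, not ATSMCC output) generates the series of u = f**s from the series of f by applying the Leibnitz rule to both products in u'f = s f'u and solving for the newest term. Coefficients here are 0-based scaled Taylor coefficients, U[n] = u^(n)(x0) h^n / n!, i.e., the paper's reduced derivative U(n + 1).

    def power_series(F, s):
        # series of u = f**s from the series F of f (requires F[0] != 0)
        N = len(F)
        U = [F[0] ** s]
        for n in range(N - 1):
            # coefficient of t**n in s*(h f')*u, minus the already-known part of (h u')*f
            rhs = s * sum((m + 1) * F[m + 1] * U[n - m] for m in range(n + 1))
            known = sum((m + 1) * U[m + 1] * F[n - m] for m in range(n))
            U.append((rhs - known) / ((n + 1) * F[0]))
        return U

    # check: f(x) = 1 + x, u = (1 + x)**0.5; the coefficients are binomial(0.5, k)
    print(power_series([1.0, 1.0, 0.0, 0.0, 0.0, 0.0], 0.5))
    # [1.0, 0.5, -0.125, 0.0625, -0.0390625, 0.02734375]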

The task of generating the recurrence relations is done automatically by the ATSMCC translator, and it is done only once. The same recurrence relations are used at each integration step and can be used if the same equation must be solved several times with different parameters or with different initial conditions.

For some equations and some initial conditions, machine roundoff may affect the series generation. We call this the "last pole situation" because it is observed to occur when the solution reaches a region between a nonlinear portion and a quasi-linear portion of the solution. The effects of roundoff on the tail of the series result in progressive underestimation of the radius of convergence (Rc). The solution is accurately computed, but the integration steps taken are much smaller than necessary. Such equations appear to be asymptotically linear, perturbed linear, or other special cases which yield to known methods. Test problems E2 (when integrated in the negative direction) and B4 are examples of this phenomenon.

Several fast series generation techniques are known. With Leibnitz's rule, the number of arithmetic operations needed to compute the kth coefficient grows at most linearly with k, so the cost of generating N terms grows no faster than N². Brent and Kung [4] gave an algorithm whose cost is O(N log N). The ATSMCC translator uses the algorithm based on Leibnitz's rule because (1) it is much simpler for the translator to apply, so much less translation time is required; and (2) the constant associated with N² is typically much smaller than the constant associated with N log N. Hence the Leibnitz rule approach is superior for typical series lengths of 15 to 30 terms. The number of arithmetic operations required by the object code for each integration step is considered further in Section 6.2.

Included in the series generation loop is code to adjust the stepsize h to prevent underflow or overflow. The magnitude of the kth series term is roughly proportional to (h/Rc)^k. Hence, |log|h/Rc|| > (largest acceptable exponent)/N causes under/overflow. For example, using 30 terms on an IBM 370 requires that 10^-2 < |h/Rc| < 10^2, so there is considerable latitude in the choice for h. If a potential under/overflow is detected at the kth term, h is increased or decreased by an appropriate amount to prevent under/overflow in the (as yet uncomputed) Nth term. Then the series generation is started again from the beginning of that step.
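A toy version of this rescaling rule might look as follows; the function name and the exponent budget are assumptions for illustration, chosen so that 30 terms give the 10^(±2) window quoted above, and this is not the ATSMCC code.

    import math

    def safe_stepsize(h, Rc, N, max_exponent=60):
        # keep |log10(h/Rc)| <= max_exponent / N so the Nth term cannot under/overflow
        limit = max_exponent / N
        ratio = math.log10(abs(h / Rc))
        if abs(ratio) > limit:
            h = Rc * 10.0 ** math.copysign(limit, ratio)   # pull h back toward Rc
        return h

    print(safe_stepsize(h=1.0e-5, Rc=1.0, N=30))   # raised to 10**-2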


The adjustment of h is potentially expensive, but advantage is taken of the fact that h may vary widely without causing an under/overflow. First, the test is made only on the first step because h changes slowly from one step to the next. At each step, h = r × Rc, where r is problem and tolerance dependent, and |r| < 1. The radius of convergence (Rc) changes by at most a factor of 1 - |r| per step, so that once an appropriate h is found on the first step, the danger of subsequent under/overflow is minimal. Second, only the first component of a system is checked. While it is possible that the first component may be safely generated and the other components may under/overflow, this is a relatively rare occurrence which may be handled by the user supplying an appropriately larger or smaller stepsize.

5.2 Series Analysis

After generating the series for the solution, the object code calls a subroutine RDCON which estimates the series radius of convergence, the location and order of primary singularities, and returns the largest step which can be taken consistent with error control and stability constraints.

The characterization and analysis of singularities is a very old problem. A complete characterization of all types of singularities of an analytic function by its Laurent series was known by the 1870s. The problem of estimating the radius of convergence of a power series is nearly as old, and many estimates are known. Hadamard's classical thesis on the subject in 1892 [22] included estimates for Rc, for all the poles on the circle of convergence, and for the location and order of all poles of highest order on the circle of convergence. Henrici [23] observed that the problem of determining the poles of a meromorphic function from the coefficients of its Taylor series is completely solved. However, methods using a Padé table or Hankel determinants are often computationally expensive, especially if the number of poles on the circle of convergence is not known a priori. Further, their proofs do not hold for functions like log(z) or z^(4/3), whose singularities are not poles but branch points. Accordingly, we use other tests which take advantage of the known structure of singularities of real-valued ODEs and which are inexpensive to compute. Pearce [38] grouped series analysis techniques into two classes: (a) those of Padé type, which fit sequences of approximating functions of known singularity structure to the series coefficients, and (b) those of Darboux type, which are based on the Darboux Theorem [15] giving the asymptotic form of the coefficients. We employ tests of the Darboux type because of their firm theoretical basis and their ease of implementation.

Characterizations of the singularities which occur in the solutions of ODEs are also very old. For example, [25] discussed the work of Paul Painlevé and his colleagues in the 1890s to give all six equations of the form w'' = f(z, w, w') for which all branch points and essential singularities in the solution are fixed. This remains an active area among mathematicians studying the theoretical solution of ODEs in the complex domain.

Our approach to series analysis was motivated by the observation that series for solutions to ODEs follow a few very definite patterns which are characterized by the locations of primary singularities. In general, the coefficients of a power series follow no patterns, so few theorems about truncated series can be proved. However, series which are real-valued on the real axis can have poles, logarithmic branch points, and essential singularities only on the real axis or in conjugate pairs. Further, the effects of all secondary singularities disappear if sufficiently long series are used.


[Fig. 1. Series with one real primary pole: log|Y(k)| versus k for y'' = 6y² + x expanded at x0 = 0.5 with h = 0.42. Figure not reproduced.]

[Fig. 2. Series with a nearby secondary pole: log|Y(k)| versus k for y'' = 6y² + x expanded at x0 = 0.0. Figure not reproduced.]

Figures 1-3 illustrate patterns of series terms which are typical of three common singularity structures. The series graphed in Figure 1 has one primary singularity at x = 1.206 and a secondary singularity at x = -1.250 which has no apparent effect on the series. However, when the same function is expanded at x = 0 in Figure 2, the effect of the secondary singularity very close to the circle of convergence is apparent. The series graphed in Figure 3 has primary singularities at x = 5.15 ± 0.50i, and the series has the appearance of a sine curve.


[Fig. 3. Series with a conjugate pair of primary poles: log|Y(k)| versus k for the system Y1' = 2*(Y1 - Y1*Y2), Y2' = -(Y2 - Y1*Y2), expanded at x0 = 4.10 with h = 0.67. Figure not reproduced.]

These patterns guided our early work and continue to provide valuable insights into the analytic properties of solutions.

These figures and the discussion above suggest using the following model problems to motivate series analysis:

    v(z; a, s) := (a - z)^(-s),  s ≠ 0, -1, ...,
    v(z; a, s) := v(z) such that v^(1-s)(z) = v(z; a, 1),  s = 0, -1, ...,

and

    w(z; a, b, s) = (z² - 2bz + a²)^(-s),  s ≠ 0, -1, ...,

where s, a, and b are real numbers. Let cos θ = b/a. Then w has singularities at a exp(±iθ). We chose v(z; a, s) as the form for the model problem because any function with only one primary pole or logarithmic singularity has the form C(v(z; a, s)), where C is analytic in some region [41].

The reduced derivatives at z = 0 for the model problems satisfy the recurrence relations

    k V(k + 1) = V(k)(k + s - 1)(h/a),    (1)

and

    k W(k + 1) = 2W(k)(k + s - 1)(h/a)cos θ - W(k - 1)(k + 2s - 2)(h/a)².    (2)

The RDCON subroutine uses a strategy motivated by these two model problems to estimate the series radius of convergence.

(1) Does the series have a single primary singularity? If so, use the three-term analysis (3TA).

(2) Does it have only a conjugate pair of primary singularities? If so, use the six-term analysis (6TA).

(3) Is the solution an entire function? If so, use the top-hump analysis.

(4) If none of these estimate Rc with sufficient accuracy, use the top-line heuristic.


[Fig. 4. Flow of control for RDCON series analysis. Figure not reproduced.]

The interaction of 3TA, 6TA, top-hump, and top-line in estimating the radius of convergence of a series can be summarized by Figure 4. Each of these tests is considered individually below.

In case (1), for a series with a single algebraic or logarithmic singularity on the circle of convergence, estimate Rc by solving two copies of eq. (1):

Three-Term Analysis (3TA)

    h/Rc = h/a = k V(k + 1)/V(k) - (k - 1) V(k)/V(k - 1).

We have analyzed this test [5] and have shown that it is more accurate than the usual ratio test estimate for Rc and Hadamard's estimate for the order [21].


Equivalent estimates were given in [16]. References [9] and [27] give different analyses of the error in 3TA in the presence of secondary singularities. Hunter and Guerrieri [27] also gave estimates for h/a which are asymptotically more accurate than the 3TA. However, their system of two nonlinear equations in h/a and s is ill-conditioned. We have observed that for N = 30 terms, their tests often suffer from catastrophic cancellation, and the 3TA yields more accurate results.

The 3TA is designed specifically to handle series which have one primary singularity, and lack of convergence usually means that this assumption is mistaken. In order to detect when the series has singularities which are not of this form, RDCON computes two estimates for h / R c using different terms of the series. If the two estimates do not agree, then the series does not have one real primary singularity, so the presence of a conjugate pair of primary singularities is investigated.
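The sketch below shows the 3TA computation in Python on a series generated exactly from recurrence (1); it is our illustration of the formulas above, not the RDCON routine, and it omits RDCON's comparison of two estimates computed from different parts of the series.

    def three_term_analysis(V, k):
        # estimate h/Rc and the order s from V(k-1), V(k), V(k+1), assuming one
        # real primary singularity (model problem v(z; a, s)); V[i-1] holds V(i)
        r1 = k * V[k] / V[k - 1]            # k V(k+1)/V(k)     = (k + s - 1) h/a
        r2 = (k - 1) * V[k - 1] / V[k - 2]  # (k-1) V(k)/V(k-1) = (k + s - 2) h/a
        h_over_Rc = r1 - r2                 # the 3TA estimate of h/Rc = h/a
        s = r1 / h_over_Rc - k + 1          # order of the primary singularity
        return h_over_Rc, s

    # generate the model series with recurrence (1): k V(k+1) = V(k)(k + s - 1)(h/a)
    a, s_true, h, N = 1.5, 2.0, 0.6, 30
    V = [a ** (-s_true)]
    for k in range(1, N):
        V.append(V[-1] * (k + s_true - 1) * (h / a) / k)
    print(three_term_analysis(V, N - 1))    # ~ (0.4, 2.0) = (h/Rc, s)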

In case (2) above, a function with a conjugate pair of singularities is asymptotic to the series for the model problem w(z; a, b, s), so we have investigated tests based on eq. (2) [6, 9]. A similar test is given in [27], where a nonlinear system of equations must be solved. In RDCON, a linear system of equations is formed by using six consecutive terms of the series to form four copies of eq. (2) for the coupled unknowns x1 = (h/a)cos θ, x2 = s(h/a)cos θ, x3 = (h/a)², and x4 = s(h/a)²:

Six-Term Analysis (6TA)

    k W(k + 1) = 2(k - 1)W(k)x1 + 2W(k)x2 - (k - 2)W(k - 1)x3 - 2W(k - 1)x4,
        for k = N - 1, ..., N - 4,    (3)

where N is the length of the series being analyzed. This system is scaled and solved by Gaussian elimination with partial pivoting. Then a measure of the relative accuracy of the solution is obtained from the residual of another copy of eq. (3) with k = N - 5. If the residual is small, then the series radius of convergence, as well as the order and location of the conjugate pair of singularities, is computed from x1, x2, x3, and x4. If the residual is large, if x3 = (h/Rc)² < 0, or if |cos θ| > 1, then this is interpreted to mean that the series has secondary singularities. Since the 3TA would have already reached a similar conclusion, the series must have a complicated structure of singularities on or near the circle of convergence. More complicated tests are possible, but an appropriate response to the presence of multiple singularities is to take a safely small step determined by case (4). At the next step the primary singularities will become apparent, so that subsequent steps may proceed using 3TA or 6TA.
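A rough Python sketch of the 6TA (our illustration, using a library solver in place of RDCON's scaled Gaussian elimination and omitting the residual check with k = N - 5) solves the four copies of eq. (3) and recovers h/Rc, s, and cos θ.

    import numpy as np

    def six_term_analysis(W):
        # W[m-1] holds W(m); assume a conjugate pair of primary singularities
        N = len(W)
        A, rhs = [], []
        for k in range(N - 1, N - 5, -1):            # k = N-1, ..., N-4
            Wk, Wkm1 = W[k - 1], W[k - 2]
            A.append([2 * (k - 1) * Wk, 2 * Wk, -(k - 2) * Wkm1, -2 * Wkm1])
            rhs.append(k * W[k])
        x1, x2, x3, x4 = np.linalg.solve(np.array(A), np.array(rhs))
        h_over_Rc = np.sqrt(x3)                      # x3 = (h/a)**2 and Rc = |a|
        return h_over_Rc, x4 / x3, x1 / h_over_Rc    # (h/Rc, order s, cos theta)

    # model series w(z; a, b, s) = (z**2 - 2bz + a**2)**(-s) generated by recurrence (2)
    a, b, s, h, N = 2.0, 1.6, 1.5, 0.8, 30
    W = [0.0, a ** (-2 * s)]                         # leading 0.0 pads W(0) for the recurrence
    for k in range(1, N):
        W.append((2 * W[-1] * (k + s - 1) * (h / a) * (b / a)
                  - W[-2] * (k + 2 * s - 2) * (h / a) ** 2) / k)
    print(six_term_analysis(W[1:]))                  # ~ (0.4, 1.5, 0.8)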

The discussion of the top-hump analysis used for solutions which are entire (case (3) above) will follow an explanation of case (4) because the two are similar.

In case (4) above, there are secondary singularities close to the circle of convergence. A heuristically motivated top-line analysis produces a conservative estimate for Rc from the slope of a linear upper envelope (a straight line fitting the points from above) of a graph of ln|Y(k)| versus k. Figure 1 shows a typical pattern of this graph for one primary singularity. The slope approaches ln|h/Rc| as k → ∞. The presence of a nearby secondary singularity, or of additional singularities on the circle of convergence, has effects similar to those shown in Figures 2 and 3. In those figures, the slope of a linear upper envelope is approximately ln|h/Rc|. This illustrates the heuristic motivation of the top-line analysis. To our knowledge, this heuristic has not appeared in the literature (although [27] suggests a different analysis based on graphing), so we will discuss it more fully.

Like the 3TA, the top-line analysis is suggested by a study of the series for the model problem v(z; a, s). Let Σ Y(k) be an arbitrary series solution to a differential equation with radius of convergence Rc. Roughly speaking, there exists a constant C and an order s such that |Y(k)| ≤ C|V(k)|. For example, the reduced derivatives of w(z; a, b, 1) satisfy W(k) = (h/a)^k sin(kθ + d), for some d, so that an upper envelope for |W(k)| is given by V(k) for v(z; a, 1), as suggested by Figure 3.

Next we consider the effect of the order s of the singularity on the graph of ln|V(k)|. The order is increased or decreased by term-by-term differentiation or integration, respectively. For example, let v(z) = v(z; a, 2). Then V(k) = k a^(-2)(h/a)^(k-1), so that

    ln|V(k)| = (k - 1)ln|h/a| + ln k - 2 ln|a|.

If we view k as a continuous variable, then

    d(ln|V(k)|)/dk = ln|h/a| + 1/k → ln|h/a|,  as k → ∞,

and

    d²(ln|V(k)|)/dk² = -k^(-2) < 0.

This suggests the following conclusions about the upper envelope of the graph of ln|Y(k)| versus k:

(1) If the order of the primary singularity is 1, then the slope is ln|h/Rc|.

(2) If the order is not 1, then the slope converges to ln|h/Rc| as k → ∞ at a rate roughly proportional to 1/k.

(3) If the order is not 1, then the upper envelope is not linear. For orders larger than 1, the graphs open downward. The concavity approaches zero very rapidly as k → ∞. For orders less than 1, the graphs are concave up, the slope underestimates ln|h/Rc|, and Rc is overestimated.

To estimate Rc from the graph of ln|Y(k)| versus k, the top-line analysis shifts the order of the series by repeated termwise differentiation or integration. For each order, a linear upper envelope is fit. For some series (exp(x), for example), there is no order for which the concavity is negligible, so the search for an order must be limited. Although the singularity may occur with any order, it is unusual for the solution to a differential equation to have singularities whose order lies outside the interval |s - 1| ≤ 3. Also, it is safer to accept an order for which the graph opens downward than one which opens upward, so that Rc will be safely underestimated. Consequently, we begin by integrating the series termwise three times and fit a linear upper envelope. If that graph is linear or opens downward, that estimate for the slope is accepted. If that graph opens upward, the series is differentiated termwise to reduce the second derivative of the graph, and a new top line is fit. This process is repeated until the graph is linear, opens downward, or until seven termwise differentiations have been done. If the graph still opens upward, the search stops, and the estimate for Rc is reduced by 10 percent. This heuristic analysis gives a conservative estimate for Rc for series which do not satisfy the assumptions of either 3TA or 6TA. We must not overestimate Rc, or we may be attempting to sum a divergent series, but an underestimation of Rc only results in a slight increase in the cost of solving the problem. Following a safely small step, the troublesome secondary singularity has either become the primary singularity, or else it is relatively far away. In either case, the solution can proceed.
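The heuristic is easy to mimic. The Python sketch below (ours, and much cruder than RDCON's order-shifting search) estimates ln|h/Rc| from the slope of an upper envelope of the points (k, ln|Y(k)|), taken here as the chord between the endpoints of their upper convex hull; it is shown on a series whose graph is exactly linear, which is the case the order-shifting loop tries to reach before fitting.

    import math

    def top_line_slope(Y):
        # chord between the first and last vertices of the upper convex hull of
        # the points (k, ln|Y(k)|): a straight line lying above all the points
        pts = [(k, math.log(abs(y))) for k, y in enumerate(Y, start=1) if y != 0.0]
        def cross(a, b, c):
            return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        hull = []
        for p in pts:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
                hull.pop()
            hull.append(p)
        (k0, v0), (k1, v1) = hull[0], hull[-1]
        return (v1 - v0) / (k1 - k0)                # approximates ln|h/Rc|

    # pole of order 1 at Rc = 2.0 seen through a step h = 0.8: V(k) = (h/Rc)**(k-1)/Rc
    Y = [(0.8 / 2.0) ** (k - 1) / 2.0 for k in range(1, 31)]
    print(math.exp(top_line_slope(Y)))              # ~ 0.4 = h/Rc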

In case (3) above, solutions which are entire (or which have relatively large radii of convergence) require special treatment. If the ATSMCC translator recognizes that the solution to a system of differential equations is entire, then it sets a flag in the object code as a signal to RDCON to bypass the three tests discussed above in favor of a heuristically motivated "top-hump" analysis. The name is taken from the appearance of the graph of ln|Y(k)| versus k for exponential functions. The size of the hump is determined by the dominant eigenvalue of the system. Top-hump analysis is similar to the top-line analysis. Consider an exponential function y(t) = exp(λt), with series terms Y(k) = (λt0)^(k-1)/(k - 1)!. Then ln|Y(k + 1)| = k ln|λt0| - ln k!, so we fit a linear upper envelope to the graph of ln|Y(k + 1)| + ln k! to estimate λt0.

The series analysis discussed in this section assumes that the series being analyzed is long enough that the effects of secondary singularities are negligible. In an earlier paper [9], the authors investigated the effects of secondary singularities on these tests. The estimate for the order of the primary singularity is much more sensitive than the estimate for Rc.

5.3 Optimal Stepsize for Error Control

After completing its analysis of the series, RDCON computes the largest integration stepsize which may be taken subject to error control constraints. As Gear [19] has observed, all error estimation and stepsize control strategies in codes for the numerical solution of ODEs are based on some model of the problem. We could estimate the error from the magnitude of the first neglected term in the Taylor series, but it is more accurate to estimate the error based on the two model problems which motivated the series analysis. First, consider the error for the model problem v(z) := v(z; a, s). More general functions will be considered later in this section. Following [42], let the local error be LEN := v(z) - Σ_{k=1}^{N} V(k). We wish to choose an optimal stepsize h* as large as possible while controlling one of the following error measures for v:

(1) absolute local error per step, |LEN|;
(2) absolute local error per unit step, |LEN/h|;
(3) relative local error per step, |LEN/V(1)|;
(4) relative local error per unit step, |LEN/(h × V(1))|;
(5) mixed local error per step, |LEN/(1 + V(1))|; or
(6) mixed local error per unit step, |LEN/(h × (1 + V(1)))|.


The optimal stepsize also depends on the length (N) of the series, the order (s) and locations of the primary singularities, and the desired error tolerance (E).

Let v(z) be expanded using a step which satisfies 0 < h/a < 1. Let d := (N - 1 + s)/N, A := rV(N)/(1 - r), and B := rdV(N)/(1 - rd). Assume either that N is large enough or that r is small enough to satisfy 0 < rd < 1. Then [12] showed that LEN < A for s < 1, and LEN < B for s > 1.

We illustrate the approach used by RDCON to compute h* with an example in which we control the local error per step of a series for which s < 1. We are investigating the effects of different stepsizes, so let V(N, h) denote the Nth term of the series being analyzed, which was generated using a stepsize h. Let V(N, h*) denote the Nth term which would result from expanding the series using h*. Then V(N, h*) = V(N, h)(h*/h)^(N-1). Let r* := h*/a = r(h*/h). We seek h* for which

    LEN < A = r* V(N, h*)/(1 - r*) = V(N, h)(r*)^N / (r^(N-1)(1 - r*)) ≤ E.

Then r* is the single real root on (0, 1) of the polynomial

    p(x) := (V(N, h)/r^(N-1)) x^N + Ex - E.

Let g := (E r^(N-1)/V(N, h))^(1/N). Then lim_{N→∞} g = 1, so that lim_{N→∞} g/(N + g) = 0, and p(g - g²/(N + g)) = O((g/(N + g))²) → 0 as N → ∞. Hence x = g - g²/(N + g) is an approximate root of p, and

    h* = Rc g (1 - g/(N + g)).    (4)

Equation (4) gives the optimal step to control absolute local error per step for the model problem v(z; a, s), with s < 1. Similar expressions for h* for other types of error measures, for more general problems, and for s > 1 are given in [7].
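A direct transcription of eq. (4) into Python for this case might read as follows; it is our sketch, and the argument names are assumptions for illustration.

    def optimal_step(Rc, r, V_N, N, E):
        # Rc: estimated radius of convergence; r = h/Rc used to generate the series;
        # V_N: magnitude of the Nth (last) series term; N: series length; E: tolerance
        g = (E * r ** (N - 1) / abs(V_N)) ** (1.0 / N)   # g = (E r^(N-1) / V(N,h))^(1/N)
        return Rc * g * (1.0 - g / (N + g))              # eq. (4)

    # example: series generated with h = 0.5 Rc, last of 30 terms ~ 1e-10, tolerance 1e-9
    print(optimal_step(Rc=1.2, r=0.5, V_N=1.0e-10, N=30, E=1.0e-9))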

The choice of h* controls the errors in the function values. N terms of the series for y are known, so N - r terms of the series for y^(r) can be obtained if needed by r term-by-term differentiations. Hence y^(r) is known with accuracy O(h^(N-r-1)), for r > 0, with no additional error control. The ATSMCC translator program accepts equations involving derivatives of order as high as six, but the resulting reduction of the theoretical order of convergence of the method is not important in practice because the number of terms used is so large.

The preceding discussion assumed that the series being analyzed had only one primary singularity. If the series has a conjugate pair (a and ā) of primary singularities of order s, then its series is asymptotic to Σ F(k), the series for f(z) := (a - z)^(-s) + (ā - z)^(-s). If Σ V(k) is the series for v(z) = (|a| - z)^(-s), then F(k + 1) = 2V(k + 1)cos((s + k)θ), where θ = arg a. Hence the truncation error for f is at most twice the truncation error for v.

The derivation of eq. (4) given above assumed 0 < r = h/a, so that the series for v(z; a, s) has a constant sign for N sufficiently large. In that case, the bounds given for the local error are tight. However, if the series being tested has sign changes, these bounds remain valid but are more conservative. Consequently, for strictly alternating series, the last term is used to bound the local error.

5.4 Analytic Continuation

The subroutine RDCON discussed above returns a step based on each component of the solution in turn, and the object program finds the smallest of the steps.

The path along which analytic continuation is performed is not limited to the real axis. Corliss [11] modified an earlier version of the software discussed in the present paper and used it to vault over poles on or near the real axis by extending the solutions into the complex plane of the independent variable.

Stetter [45] outlined global error estimates which are possible with methods for which the global error is proportional to the local error. Taylor series methods satisfy Stetter's requirements for proportionality, provided that the integration stepsizes depend only on the accuracy constraints. If the solution is desired at intermediate points, it is produced by series expansion rather than by stepping to the desired point, so that the course of the integration is not affected by any output requests. In this way, the proportionality of the local and global errors is maintained, and the computational cost of producing solution values at intermediate points is minimized.

Taylor series methods are compatible with several different global error estimation strategies which have been proposed, including interval analysis [33], proportionality of local and global errors [45], and Zadunaisky's device [30].

5.5 Effects of Instability in Series Generation

The nonlinear recurrence relations used to generate the series are usually theoretically unstable, but these instabilities will be shown to have no significant effect on the solution to the ordinary differential equation. In practice, the theoretical instability is so slight that we had never observed it until we generated a series with 10,000 terms. Even then, the solution to the ODE was computed correctly. A simple example will show why this is so.

Consider the series expanded at x0 for the solution to the equation

    y' = y²,  y(x0) = y0.    (5)

The reduced derivatives satisfy the recurrence relation

    Y(k + 1) = (h/k) Σ_{j=1}^{k} Y(j) × Y(k + 1 - j).

Let Y(k) denote the exact solution to this recurrence, and let U(k) denote the computed solution with U(1) = Y(1) × (1 + ε). Then U(k) = Y(k) × (1 + ε)^k, so that the relative error in computing Y(k) is (1 + ε)^k - 1 ≈ kε. This error is so small that we had generated 1000 series terms without noticing it.

To see the effect on the analytic continuation, we consider the sum of the convergent series Σ Y(k). The exact solution to eq. (5) is y(x) = 1/(c - x), where c = x0 + 1/y0, so that

    Y(k) = y0(y0 h)^(k-1)  and  U(k) = y0(1 + ε)(y0 h(1 + ε))^(k-1).


The stepsize h for the analytic continuation is chosen small enough to make the series convergent. In practice (see Section 5.3), h is usually chosen to make 0.4 < y0 h < 0.7. Then

    Σ_{k=1}^{N} Y(k) = y0[1 - (y0 h)^N]/(1 - y0 h) → y0/(1 - y0 h)  as N → ∞,

and

    Σ_{k=1}^{N} U(k) = y0(1 + ε)[1 - (y0 h(1 + ε))^N]/(1 - y0 h(1 + ε)) → y0(1 + ε)/(1 - y0 h(1 + ε)).

In the limit, the series summation has relative error

    [y0(1 + ε)/(1 - y0 h(1 + ε))] × [(1 - y0 h)/y0] - 1 = ε/(1 - y0 h(1 + ε)).

Hence, summing the series to perform the analytic continuation is a stable process, even though the series generation is not. The theoretical instability of the series generation typically causes a relative error in the analytic continuation of about 2ε or 3ε. This error is insignificant compared to the error which results from truncating the series. That explains why we have never been troubled by instabilities in generating even very long series.
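This argument is easy to reproduce numerically. The Python sketch below (ours, not from the paper) generates the series for eq. (5) twice, once with Y(1) perturbed by a relative ε, and prints the relative errors of the last term and of the summed series, each scaled by ε.

    def series(y0, h, N, eps=0.0):
        Y = [y0 * (1.0 + eps)]                      # Y(1), possibly perturbed
        for k in range(1, N):                       # Y(k+1) = (h/k) * sum_j Y(j) Y(k+1-j)
            Y.append(h * sum(Y[j] * Y[k - 1 - j] for j in range(k)) / k)
        return Y

    y0, h, N, eps = 1.0, 0.5, 30, 1.0e-12
    exact, perturbed = series(y0, h, N), series(y0, h, N, eps)
    print(abs(perturbed[-1] / exact[-1] - 1.0) / eps)   # ~ N: the last term has relative error ~ N*eps
    print(abs(sum(perturbed) / sum(exact) - 1.0) / eps) # ~ 2 here: the summed series stays O(eps)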

6. COSTS OF THE TRANSLATOR

Now we address the important issue of comparing the performance of the Taylor series method as implemented with the ATSMCC translator against the performance of standard methods for solving initial value problems in ordinary differential equations. In terms of accuracy and CPU execution time, the software presented in this paper is competitive with the standard methods represented by the International Mathematical & Statistical Libraries (IMSL) routines DVERK and DGEAR. Since the Taylor series approach is competitive, its ease of use, flexibility, and the analytic information it provides about the solution make it very attractive.

Several important articles [17, 26, 29, 43] have appeared comparing methods and/or codes for solving nonstiff problems in ODEs. Such comparisons serve not so much to promote good codes as to eliminate poor ones from further consideration. None of these comparisons has included Taylor series methods because (1) they do not fit conventional testing strategies, and (2) portable codes were not available when these studies were done. One usual measure of efficiency, the counting of function evaluations, is not appropriate for series methods. Consequently, the only meaningful basis for comparison is CPU execution time. Since there has been considerable discussion about what should be counted in the "total" CPU time, this paper reports all of the time required so that the reader may decide which times are most important for his or her type of problem.

Comparison is further complicated by the fact that there are two distinct algorithms being executed: (1) the compiler to translate the statement of the ODE into a FORTRAN object code, and (2) the series algorithm used by the object code to solve the equation. These were presented in Sections 2 and 5, respectively. Our main interest is in the cost of actually solving the problem, but the preprocessor cost must also be included.


In the remainder of this section, we present the results of comparative testing on the basis of accuracy, time, and storage, and study the effects of problem size and complexity on the performance of our software.

6.1 Comparative Test Results

The software presented in this paper has been compared with standard methods for solving ODEs represented by the IMSL routines DVERK and DGEAR. The purpose of these tests is to show that ATSMCC is worthy of further attention. We have attempted to follow the advice of [13] regarding the design of numerical software testing experiments. To minimize the inevitable biases of authors testing their own software, we have used Hull's standard set of test problems [26], and we are reporting all computer time and storage costs involved. The authors believe that these tests indicate that the ATSMCC method is clearly superior for problems being solved repeatedly with stringent accuracy requirements and for the quick and easy solution of small systems. It is hoped that this paper will lead to extensive testing of this software by impartial investigators.

Here we specify the environment in which these tests were conducted, discuss the test problems used, and present results comparing these three methods based on accuracy, CPU execution time, and storage requirements. The hardware and software test environments are given in Tables I and II. All of the tests were done in double-precision arithmetic using 14 significant hexadecimal digits.

The object code from the ATSMCC translator is normally not accessed in the same manner as DVERK and DGEAR. For the purposes of this test, a short driver routine was written which (1) accessed the CPU timer, (2) called a subroutine form of the object code, (3) computed elapsed CPU time, and (4) printed the results.

The test problems used are from the standard set used by Hull et al. [26], which is designed to test numerical methods for solving nonstiff ODEs. We have added two problems to the test set to illustrate the behavior of series methods near a singularity.

F1. y'' = 6y² + t, y(0) = 1, y'(0) = 0, tf = 1.2.

The first Painlevé transcendent has a sequence of poles of order two on the real axis. The poles closest to t = 0 occur at 1.206 and -1.250. As shown in Figure 2, the secondary singularity affects the series at t = 0. However, after one step is taken, the primary singularity at 1.206 is accurately located. This problem has served as our standard test problem for solutions with only a real primary singularity throughout the development process. Many of the test problems check the ability to follow decaying solutions; this one tests the relative accuracy achieved for an increasing solution.

F2. y' = y², y(0) = 1, tf = 1.0, y(t) = 1/(1 - t).

This problem was included at the suggestion of Professor R. E. Moore to illustrate that series methods stop short of a real singularity blocking the solution. This problem tests the ability of a method to detect that a solution does not exist.

A more complete explanation of the testing procedures used, and tables showing the performance of each of the three methods on each test problem at each tolerance, is contained in [7]. Only summaries of that information are given in the body of this paper.

Table I. Computer Testing Environment

Computer            Xerox Sigma 9
Operating System    Xerox/Honeywell CP-V/F00
Location            Marquette University
Job Stream          Batch
Language            FORTRAN
Compiler            Honeywell American National Standard FORTRAN

Table II. Software Test Environment

ATSMCC
  Compiler options:
    DOUBLE (double precision)
    LENVAR = 90 (for C4 only)
    Increased storage (for C4 and C5 only)
    Inserted calls to timer routine
  Object code control parameters (defaults accepted):
    NSTEPS = 40 (except for D3, D4, and D5)
    HMAX = 1.0E37
    H = 1.0
    DLTXPT = 0.0
    LENSER = 30 (except for C4)
    LENVAR = LENSER + 3 + order of DE
  Parameters specified in input block 3:
    MPRINT = 0 (no output printed)
    ERRLIM = 1.0E-3, 1.0E-6, 1.0E-9, 1.0E-12
    IERTYP = 4 (relative error per unit step)
    NSTEPS = U0 (for D3, D4, and D5)

DVERK
  IMSL version 7 for Xerox Sigma 9
  Runge-Kutta-Verner fifth and sixth order method
  Double precision

DGEAR
  IMSL version 7 for Xerox Sigma 9
  Variable order Adams predictor corrector method of Gear
  Double precision
  METH = 1 (Adams method)
  MITER = 0 (functional iteration)

6.1.1 Accuracy. ATSMCC yielded better accuracy on these test problems than the standard methods. Of the 186 possible comparisons (26 problems, 3 or 4 tolerances, 2 error types), ATSMCC was more accurate than DGEAR in 182 cases and more accurate than DVERK in 172 cases. In several cases, the accuracy achieved by the ATSMCC object code was at least three orders of magnitude better than the accuracy achieved by the standard methods.

The superior performance of ATSMCC shown by Figure 5 for the average absolute error is typical of its performance for both absolute and relative errors at all tolerances tested. It appears that ATSMCC could be tuned for faster execution times by allowing less accurate results.


[Figure 5: average absolute error plotted against the test problem classes; legend symbols mark DGEAR and DVERK.]

Fig. 5. Absolute error for tolerance = 1E-6.

6.1.2 CPU Execution Time. The user of the standard codes DVERK or DGEAR incurs CPU costs at five stages:

(S1) implement IMSL on the host machine;
(S2) create and edit the driver routine, function subprogram, and initial data;
(S3) compile the driver routine and function subprogram;
(S4) link with the IMSL library; and
(S5) execute the resulting load module to solve the problem.

Stage S5 is the one of primary interest. We ignore stage S1 as transparent to most users. We are forced to ignore stage S2 as too variable to provide meaningful information. The costs for stages S3 and S4 are highly machine and system dependent. Stages S3 and S4 are relatively insignificant for large problems, or for problems which will be solved repeatedly, but their contribution to the total CPU cost of solving the test problem set is large enough to be significant. (If a standardized driver is used, it may be linked at stage S4 without the compilation at stage S3.) In our tests, one driver and one function subprogram are used to define all the problems, so the times for stages S3 and S4 for each problem are not directly available.

The times included for stages S3 and S4 in the totals for DVERK and DGEAR are projections. A driver and an individual function subprogram were written for problems A5, C3, C4, and E2. DGEAR and DVERK required 14.3 and 7.9 seconds, respectively, for stages S3 and S4 for each of these problems.

The user of the ATSMCC translator incurs similar CPU costs at six stages:

(T1) implement ATSMCC on the host machine,
(T2) create and edit the input,
(T3) preprocess to translate the input into the object code,
(T4) compile the FORTRAN object code,
(T5) link with the ATSMCC subroutine library, and
(T6) execute the resulting load module to solve the problem.

As above, we ignore stage T1. We also ignore stage T2 as too variable, although we believe it is typically less costly and less error prone than the corresponding stage S2. Stage T3 is unique to our software. Like stages T4 and T5 to follow, it may be relatively insignificant for problems which will be solved repeatedly. This preprocessing cost was measured directly and is included in the total times for each tolerance, although this cost is actually incurred only once for each problem.

If a problem is to be solved repeatedly, then comparisons based on execution times are appropriate. The total elapsed execution times for the entire set of test problems were 324.29 seconds, 198.24 seconds, and 178.98 seconds for DGEAR, ATSMCC, and DVERK, respectively, so ATSMCC was competitive with the standard methods. ATSMCC was significantly faster at tolerances of 1E-9 and 1E-12, and it was faster on 85 percent of the problems requiring a tolerance of 1E-12.

Table III. Number of Problems on Which Each Method Was Faster

                              Tolerance
Method       1E-3    1E-6    1E-9    1E-12    Total

Comparison based on execution times only
ATSMCC          6      14      25       26       71
DGEAR          21      13       2        1       37

ATSMCC          4       3      14       23       44
DVERK          23      24      13        2       62

Translate + execution
ATSMCC          0       0       2       18       20
DGEAR          27      27      25        9       88

ATSMCC          0       0       0        6        6
DVERK          27      27      27       19      100

Total CPU times
ATSMCC         25      25      25       26      101
DGEAR           2       2       2        1        7

ATSMCC         13      15      16       21       65
DVERK          14      12      11        4       41

Note: DVERK was unable to compute a solution to problems D5 and F1 for a tolerance of 1E-12, so those two problems were not counted for the comparisons involving DVERK.

If a problem is to be solved only once, then the comparisons based on the total CPU times are appropriate. Table III shows that ATSMCC is clearly faster than either of the standard methods, even at modest accuracy requirements. Taken together, the accuracy and speed performances of ATSMCC make it very attractive.

The total CPU times required by the three methods being tested are highly system dependent. Most of the test problems are small enough that they can be solved quickly by each method. The translation time required by ATSMCC is large enough to erase almost all of the advantage that method enjoys in the execution time comparisons. However, the differences in execution time are completely obscured (to the advantage of ATSMCC) by the time required by stages S3, S4, T4, and T5 on the host system. At stages T4 and S3, the FORTRAN object code generated by the ATSMCC translator is usually much longer than the FORTRAN code for the driver and function subroutines required by the standard methods. However, the ATSMCC code requires the FORTRAN compiler to process only one program segment, while DVERK requires the processing of two segments (a driver and a subroutine), and DGEAR requires three segments (a driver and two subroutines). At stages T5 and S4, the linker must combine the relatively large image of the ATSMCC object code with three subroutines from the ATSMCC library, while the standard methods require the linker to combine a relatively small driver image with several subroutines from the IMSL library. In total, the overhead incurred by the standard methods at stages S3 and S4 is larger than that incurred by ATSMCC at stages T4 and T5, except on the large test problems C4 and C5. The overhead incurred by each method on the host system was typically much larger than the execution time to solve the problem. Consequently, the total CPU times reflect overhead costs almost exclusively.

The ATSMCC method incurred very large overhead costs for problems C4 and C5, the only large problems in the test set. The execution times for these problems show a similar pattern to that shown in Table III; the standard methods were faster for less stringent tolerances, and ATSMCC was faster for stringent accuracy tolerances. However, for these two problems, stages T3, T4, and T5 for ATSMCC required 117.83 seconds for each tolerance, while stages S3 and S4 required 28.6 and 15.8 seconds for DGEAR and DVERK, respectively.

Table IV. Total CPU Times in Seconds

Method                 DGEAR     DVERK     ATSMCC
Entire test set        1869.5    1032.4    1419.0
Without C4 and C5      1684.7     917.5     900.8
Without Class C        1458.3     799.2     742.7

Table IV compares the total CPU times required by each method for the entire set of test problems and for subsets that exclude the larger problems. This indicates that the ATSMCC method is best suited to systems with fewer than ten equations. Since a differential equation of order n is counted as only one equation, the method is relatively more attractive for systems of higher order differential equations.

6.1.3 Storage. Each of the stages listed above for each method incurs a storage cost. We ignore the cost of long-term disk storage and the relatively small cost of main memory for linking. The results are reported in terms of K words = 1024 32-bit words required on the host machine for these tests (Table V).

For standard methods, we will consider the storage costs of stages S3 and S5. Stage S3 is included because the storage cost of the corresponding stage T4 is sometimes quite high. The cost of S3 represents the storage needed to compile the driver routine and function subprogram(s). It grows with the number and complexity of the equations in the differential equation problem. The cost of S5 is the storage required to run the load module. It grows roughly as N^2 + 17N.
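As a rough illustration of this growth term (taking N here to be the number of equations in the system, which is our reading of the formula):

\[
N = 10:\quad N^2 + 17N = 270 \text{ words} \approx 0.3\mathrm{K};
\qquad
N = 51\ (\text{problem C4}):\quad N^2 + 17N = 3468 \text{ words} \approx 3.4\mathrm{K}.
\]

This is the workspace growth only; the fixed code and library storage must be added to it, which is why the totals in Table V are several K words even for the smallest problems.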

Table V. Storage Costs in K Words of Main Memory

                                      Problem
                                   A5    C3    C4    E2
Compile for standard methods (S3)  10    10    10    10
Run DVERK (S5)                      5     6     6     5
Run DGEAR (S5)                      7     7     9     7
Translate (T3)                     34    34    43    34
Compile object code (T4)           10    11    12    10
Run object code (T6)                9    10    21     9

For the series method, we consider the storage required by stages T3, T4, and T6. At stage T3, the translator used 34K words of main memory to process up to 20 equations. The modifications required to process 51 equations for problem C4 required 43K words. A detailed discussion of the complexity of the object code as a function of the number and complexity of the differential equations follows in Section 6.2. Both the length of the object code and the amount of storage it requires for execution grow approximately with the number of equations plus the number of products required to define them, from 146 lines for problem F2 to 1363 lines for problem C5.

6.2 Growth Rates

Any set of test problems such as Hull's may be criticized as not being "realistic." In order to enable a user to estimate the cost of a real-world problem, this section discusses the rates at which CPU execution time and storage costs for the ATSMCC object code grow with the size and complexity of the system of differential equations being solved. This information, when combined with the test results reported in Section 6.1, permits estimating the cost of solving problems outside of the test set.

The CPU time needed to solve a problem is proportional to the number of integration steps required, which depends on the structure of the singularities in the solution. As such, the number of steps is very difficult to estimate a priori. Several of the problems in the test set required only 2 or 3 steps to compute the solution with very high accuracy, while even the most difficult problem (D5) required only 104 steps to solve with an absolute error of 2.2E-11. This is possible because of the very high order used (30 terms).
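A rough heuristic (ours, not a formula from the paper) indicates why 30 terms permit such long steps: if the series coefficients behave roughly geometrically, the truncation error of an N-term series at stepsize h is on the order of (h/Rc)^N, so

\[
\left(\frac{h}{R_c}\right)^{N} = (0.5)^{30} \approx 9\times 10^{-10};
\]

that is, a step of half the distance to the nearest singularity already meets a 1E-9 tolerance, and only a modest number of such steps is needed to cross the interval of integration.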

In this section, we consider first the cost per step. Then we consider factors which affect the total cost. The CPU time per integration step depends primarily on

(1) the number of differential equations in the system (denoted by M),
(2) the total number of "products or functions" in the system (denoted by P),
(3) the number of terms of the power series which are computed (denoted by N), and
(4) the nature of the primary singularities.

By "products or functions," we mean occurrences of *, / , **, SQRT, SIN, TANH, etc., in the differential equations. Each product or function requires the generation and storage of one or two temporary series.

Let r denote the total number of temporary series (2 × P ≤ r ≤ 3 × P), and let d_i denote the order of the highest derivative in equation i (1 ≤ i ≤ M). Then the storage cost for any system is

    storage locations = OH + Σ (N + 3 + d_i) + r × N,

where the sum runs over i = 1, ..., M.

OH (overhead) is approximately equal to the total storage requirements for the very simple problem A1. The arrays for the solution components are longer (by d_i + 3) than the arrays for temporary series because the former include locations to store the characters of the variable name, flags, and initial conditions for the analytic continuation.
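For example (our reading of the object code listed in the Appendix), problem F1 has M = 1, d_1 = 2, and a single product Y*Y, which generates the two temporary series TMPAAD and TMPAAE, so r = 2. With N = 30, the storage beyond OH is

\[
(N + 3 + d_1) + r \times N = (30 + 3 + 2) + 2 \times 30 = 95 \text{ locations},
\]

which agrees with the declared lengths Y(35), TMPAAD(30), and TMPAAE(30) in the listing.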

For most problems, the number of floating-point operations (flops) required per integration step to generate all of the series is usually in the range of N^2(r + M) to 6N^2(r + M), depending on which products or functions are present. For problems with no products, the number of flops is nearly proportional to M × N.
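As a hedged illustration with values chosen by us (M = 10 equations, r = 6 temporary series, N = 30 terms), the series-generation cost per step would lie roughly between

\[
N^2 (r + M) = 30^2 \times 16 \approx 1.4\times 10^{4}
\qquad\text{and}\qquad
6 N^2 (r + M) \approx 8.6\times 10^{4}
\]

floating-point operations.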

The number of flops required for series analysis is more difficult to establish a priori because it depends on the structure of the singularities encountered as the solution proceeds. If that structure is known, then it is also known which tests in the subroutine RDCON will be used for most of the integration steps (see Section 5.2). Then the operation counts shown in Table VI can be used to estimate the cost of the series analysis.

Table VI. Flops for Analysis and Continuation of One Series

Task                        Flops
Overhead                    29
3TA                         19
6TA                         231
Top-line                    0.56N^2 + 26.2N + 49
Top-hump
  alternating               0.6N + 15
  not alternating           0.08N^2 + 2.4N + 16
Analytic continuation       (5N + 7)d_i + 17
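As an illustration using the entries of Table VI (the dominant test is chosen by us): if the top-line test is applied to a series of N = 30 terms, the analysis of that one series costs about

\[
29 + (0.56 \times 30^2 + 26.2 \times 30 + 49) \approx 1.4\times 10^{3}
\]

flops per step, which is small compared with the series-generation cost estimated above.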

The CPU execution time per step is the sum of the costs outlined above for series generation, for series analysis, and for analytic continuation. A substantial reduction in this cost is possible if it is known which component(s) of the solution will actually control the stepsize choice. Then the series analysis for the other components may be omitted (using the translator option NORDCV). This is most effective for problems without products since the cost of generating the series for such problems is proportional to M × N, while the cost of analyzing each component is proportional to 0.08N^2. Table VII shows typical savings which are possible when only one series needs to be analyzed.

The total CPU execution time required to solve a problem depends on the number of integration steps taken, so the total cost is proportional to the cost per unit step

    C(N) = (aN^2 + bN + c) / h*,

where h* is the optimal stepsize (see Section 5.3), and a, b, and c reflect the total number of floating-point operations required for series generation, analysis, and analytic continuation. C(N) typically has a minimum around N = 15 to 20, but that minimum is so broad that the series length may be doubled without increasing the execution time by more than 20 percent. Longer series allow more accurate and reliable estimates of the radius of convergence, so the ATSMCC object code uses 30 terms.

Table VII. Savings in CPU Time with NORDCV (ERRLIM = 1E-9)

                        CPU execution time (seconds) to analyze
Problem    M     N      All series      One series      Percent savings
C1        10    30         0.70            0.41               41
C2        10    30         1.74            1.30               25
C3        10    30         1.10            0.60               45
C4        51    90         3.10            2.56               17

7. CONCLUSIONS

The ATSMCC translator accepts an easily prepared, FORTRAN-like statement of a system of initial value problems in ordinary differential equations. It is easy to use, yet it is both powerful and flexible. The FORTRAN object code performed very well when compared to the IMSL subroutines DGEAR and DVERK on the basis of speed of execution, accuracy, and reliability. The absolute and relative accuracies of the solutions computed by ATSMCC were nearly always better than those of DGEAR and DVERK. For problems being solved repeatedly with stringent accuracy requirements, the ATSMCC FORTRAN object code consistently executed faster. For problems being solved only once, the computer time for all preprocessing, compiling, linking, and running must be considered. In that case, the ATSMCC method was faster for systems with fewer than 10 equations with accuracy requirements of 1.0E-6 or less. These results indicate that the ATSMCC method is very attractive, especially for problems which must be solved frequently with high accuracy, for systems of high-order equations, or for the quick and easy solution of small systems.

APPENDIX: SAMPLE OBJECT CODE

C*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+
C THIS PROGRAM WAS PRODUCED BY THE ATSMCC TRANSLATOR VERSION 1.60
C COPYRIGHTED DECEMBER, 1979.
C CHANG, CORLISS, + MORRIS
C*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+*+
C
C THE USER WILL FIND A COMPLETE DISCUSSION OF THE VARIABLES LISTED BELOW
C AND THEIR DEFAULT VALUES IN THE SOURCE LISTING OF THE MAIN PROGRAM.
C
C      DLTXPT, ERRLIM, H, HMAX, IERTYP, KTSTIF,
C      LENSER, LENVAR, LRUN, MPRINT, MSTIFF, NSTEPS.
C
C THIS PROGRAM WAS WRITTEN FOR THE FOLLOWING INPUTS-


C
C      DIFF (Y, X, 2) = 6.0*Y*Y + X
C ........
C      NO INSTRUCTIONS IN SECOND INPUT BLOCK
C ........
      COMMON /RPASS/ START, END, H, HMAX, XPRINT, DLTXPT, ERRLIM,
     A               RCREAL, RCIMAG, ORDER, RADIUS, RDCERR
     B        /IPASS/ LENSER, LENVAR, MPRINT, IERTYP, MSTIFF, LRUN,
     C               KNTSTP, KTSTIF, KXPNUM, KDIGS, IOLIST
      DIMENSION TMPAAD(30), TMPAAC(30), TMPAAB(30), TMPAAA(30), Y(35),
     A          X(2), TMPAAF(2), TMPAAE(30)
      DATA TMPAAF / 3HY  , 3H    /
      Y(33) = TMPAAF(1)
      Y(34) = TMPAAF(2)
   10 FORMAT(73H1 ATSMCC VERSION 1.60, DEC., 1979.  AUTOMATIC PROGRAM
     ASOLUTION RESULTS./9H  ******)
   11 FORMAT(/5X,11HSTEP NUMBER,I6,13H AT THE POINT,1PE12.4)
   12 FORMAT(5X,10HVALUES OF ,2A3,1X,1P3E13.5/23X,1P3E13.5)
   13 FORMAT(5X,21HSTEPSIZE ADJUSTED TO ,1PE13.5)
   14 FORMAT(/5X,35HTHE SOLUTION STOPPED NORMALLY AFTER,I4,24H STEPS AS
     ASET BY NSTEPS.)
   16 FORMAT(/5X,64HTHE ADJUSTMENT FOR STEPSIZE SEEMS TO BE IN A LOOP.
     APLEASE TRY A/5X,22HSHORTER SERIES LENGTH.)
C
C INITIALIZE VARIABLES TO DEFAULT VALUES.
C ........
      NSTEPS = 40
      HMAX = 1.E37
      H = 1.E0
      DLTXPT = 0.E0
      ERRLIM = 1.E-6
      LENSER = 30
      MPRINT = 4
      IERTYP = 6
      MSTIFF = 0
      IOLIST = 6
C ........
C START OF THIRD INPUT BLOCK
C ........
      START = 0.0
      END = 1.2
      Y(1) = 1.0
      Y(2) = 0.0
C ........
C END OF THIRD INPUT BLOCK
C ........
C MORE INITIALIZATIONS
C ........
      IF (MSTIFF.GE.20) DLTXPT = 0.E0
      DLTXPT = SIGN(DLTXPT, (END-START))
      H = SIGN(H, (END-START))
      KDIGS = 6
      KXPNUM = 70
      XPRINT = START + DLTXPT
      LENVAR = 35
      LRUN = 1
      KTSTIF = 0
      IF (LENSER.GT.(LENVAR - 5)) LENSER = LENVAR - 5
      IF (MPRINT.LT.2) GO TO 17


      WRITE(IOLIST,10)
      WRITE(IOLIST,11) KTSTIF, START

WRITE(6,12) Y(33), Y(34), Y(1), Y(2)

C ........

C LOOP FOR INTEGRATION STEPS.
C ........
   17 DO 27 KINTS=1, NSTEPS
      KOUNT = 0
      KNTSTP = KINTS
   19 CONTINUE
      X(1) = START
      X(2) = H
      Y(2) = Y(2)*(H)
C ........
C PRELIMINARY SERIES CALCULATIONS
C ........
C INSIDE THE LOOP, PRINT THE DESIRED OUTPUT
C ........
      TMPAAD(1) = .6E+01*Y(1)
      TMPAAE(1) = TMPAAD(1)*Y(1)
      Y(3) = (TMPAAE(1) + X(1))*(H*H/2.E0)
      TMPAAD(2) = .6E+01*Y(2)
      TMPAAE(2) = TMPAAD(1)*Y(2) + TMPAAD(2)*Y(1)
      Y(4) = (TMPAAE(2) + X(2))*(H*H/6.E0)
C ........
C LOOP FOR SERIES CALCULATIONS
C ........
      DO 23 K=5, LENSER
      KA = K - 1
      KB = K - 2
      TMPAAD(KB) = .6E+01*Y(KB)
      TMPAAE(KB) = 0.E0
      KZ = 1 + KB
      DO 30 N=1, KB
      L = KZ - N
      TMPAAE(KB) = TMPAAE(KB) + TMPAAD(N)*Y(L)
   30 CONTINUE
      Y(K) = (TMPAAE(KB))*(H*H/FLOAT(KB*KA))
      IF (KTSTIF.NE.0) GO TO 23
C ........
C TEST AND ADJUST H TO AVOID OVER/UNDER FLOW.
C ........
      AL = ABS(Y(K))
      IF (AL.LE.1.E-74) GO TO 23
      IF (AL.LT.1.E37.AND.AL.GT.1.E-37) GO TO 23
      SHIFT = -37.E0
      IF (AL.LT.1.E0) SHIFT = 37.E0
      KOUNT = KOUNT + 1
      IF (KOUNT.LT.5) GO TO 22
      WRITE(IOLIST,16)
      GO TO 28
   22 CONTINUE
      Y(2) = Y(2)/(H)
      H = H * AL**(1.E0/FLOAT(1-K)) * 1.E1**(SHIFT/FLOAT(LENSER-1))
      WRITE(IOLIST,13) H
      GO TO 19
   23 CONTINUE
C ........
C CALCULATE RADIUS OF CONVERGENCE AND TAKE OPTIMUM STEP.
C ........
      CALL RDCON(Y, HNEW01, TMPAAB, TMPAAC)


   24 IF (LRUN.NE.1) GO TO 28
      HNEW = AMIN1(HMAX, HNEW01)
      CALL STEP(HNEW, KENDFG)
      IF (KENDFG.EQ.3) KENDFG = 1
      CALL RESET(Y, HNEW, KENDFG, 2, XPRINT, TMPAAA(LENSER))
C ........
C NO INSTRUCTIONS IN FOURTH INPUT BLOCK
C ........
   25 GO TO (26, 28, 24), KENDFG
   26 H = HNEW
      START = START + HNEW
      IF (MPRINT.LT.4) GO TO 27
      WRITE(IOLIST,11) KNTSTP, START
      WRITE(6,12) Y(33), Y(34), Y(1), Y(2)
   27 CONTINUE
      WRITE(IOLIST,14) NSTEPS
   28 CONTINUE
   29 STOP
      END

ACKNOWLEDGMENTS

The authors would like to express their gratitude to Roy Morris for his design and coding of the translator program, to students John Fauss and Manuel Prieto for their work on the series analysis, to Professors Ray Moore, Mike Ziegler, and Phil Bender for many helpful suggestions, and to the referees for their assistance in making this presentation more concise.

REFERENCES

1. BARTON, D. On Taylor series and stiff equations. ACM Trans. Math. Softw. 6, 3 (Sept. 1980), 280-294.
2. BARTON, D., WILLERS, I.M., AND ZAHAR, R.V.M. The automatic solution of ordinary differential equations by the method of Taylor series. Comput. J. 14 (1971), 243-248.
3. BARTON, D., WILLERS, I.M., AND ZAHAR, R.V.M. Taylor series methods for ordinary differential equations--An evaluation. In Mathematical Software, John Rice (Ed.), Academic Press, New York, 1971, pp. 369-390.
4. BRENT, R.P., AND KUNG, H.T. Algorithms for composition and reversion of power series. In Analytic Computational Complexity, J.F. Traub (Ed.), Academic Press, New York, 1976, pp. 217-225.
5. CHANG, Y.F. Automatic solution of differential equations. In Constructive and Computational Methods for Differential and Integral Equations, D.L. Colton and R.P. Gilbert (Eds.), Lecture Notes in Mathematics, Vol. 430, Springer-Verlag, New York, 1974, pp. 61-94.
6. CHANG, Y.F., AND CORLISS, G.F. Ratio-like and recurrence relation tests for convergence of series. J. Inst. Math. Appl. 25 (1980), 349-359.
7. CHANG, Y.F., AND CORLISS, G.F. Compiler for the solution of ordinary differential equations using Taylor series. Tech. Rep., Marquette Univ., Milwaukee, Wis., 1981.
8. CHANG, Y.F., CORLISS, G.F., AND MORRIS, R. ATSMCC User Manual. Dep. of Mathematics, Statistics and Computer Science Rep., Marquette Univ., Milwaukee, Wis., 1979.
9. CHANG, Y.F., FAUSS, J., PRIETO, M., AND CORLISS, G.F. Convergence analysis of compound Taylor series. In Proc. 7th Conf. on Numerical Mathematics and Computing, Univ. Manitoba, Winnipeg, Man., Canada, 1978, pp. 129-152.
10. CHANG, Y.F., TABOR, M., WEISS, J., AND CORLISS, G. On the analytic structure of the Hénon-Heiles system. Phys. Lett. A 85, 4 (1981), 211-213.
11. CORLISS, G.F. Integrating ODEs in the complex plane--Pole vaulting. Math. Comput. 35 (1980), 1181-1189.
12. CORLISS, G.F., AND LOWERY, D. Choosing a stepsize for Taylor series methods for solving ODE's. J. Comput. Appl. Math. 3 (1977), 251-256.
13. CROWDER, H., DEMBO, R.S., AND MULVEY, J.M. On reporting computational experiments with mathematical software. ACM Trans. Math. Softw. 5, 2 (June 1979), 193-203.
14. DANIEL, J.W., AND MOORE, R.E. Computation and Theory in Ordinary Differential Equations. Freeman, San Francisco, 1970.
15. DARBOUX, G. Mémoire sur l'approximation des fonctions de très grands nombres, et sur une classe étendue de développements en série. J. Math. 4, 3 (1878), 5-56 and 377-416.
16. DOMB, C., AND SYKES, M.F. On the susceptibility of a ferromagnetic above the Curie point. In Proc. Royal Society of London, Series A, Vol. 240, 1957, pp. 214-228.
17. ENRIGHT, W.H., AND HULL, T.E. Test results on initial value methods for non-stiff ordinary differential equations. SIAM J. Numer. Anal. 13 (1976), 944-961.
18. GEAR, C.W. Experience and problems with the software for the automatic solution of ordinary differential equations. In Mathematical Software, John Rice (Ed.), Academic Press, New York, 1971, pp. 211-227.
19. GEAR, C.W. Runge-Kutta starters for multistep methods. ACM Trans. Math. Softw. 6, 3 (Sept. 1980), 263-279.
20. GIBBONS, A. A program for the automatic solution of differential equations using the method of Taylor series. Comput. J. 3 (1960), 108-111.
21. GOLOMB, M. Zeros and poles of functions defined by Taylor series. Bull. Am. Math. Soc. 49 (1943), 581-592.
22. HADAMARD, J. Essai sur l'étude des fonctions données par leur développement de Taylor. J. Math. Pures Appl. 8 (1892), 101-186.
23. HENRICI, P. Applied and Computational Complex Analysis, Vols. I and II. Wiley, New York, 1974 and 1977.
24. HENRICI, P. Automatic computations with power series. J. ACM 3, 1 (Jan. 1956), 10-15.
25. HILLE, E. Ordinary Differential Equations in the Complex Plane. Wiley, New York, 1976.
26. HULL, T.E., ENRIGHT, W.H., FELLEN, B.M., AND SEDGWICK, A.E. Comparing numerical methods for ordinary differential equations. SIAM J. Numer. Anal. 9 (1972), 603-637.
27. HUNTER, C., AND GUERRIERI, B. Deducing the properties of singularities of functions from their Taylor series coefficients. SIAM J. Appl. Math. 39 (1980), 248-263.
28. KEDEM, G. Automatic differentiation of computer programs. ACM Trans. Math. Softw. 6, 2 (June 1980), 150-165.
29. KROGH, F.T. On testing a subroutine for the numerical integration of ordinary differential equations. J. ACM 20, 4 (Oct. 1973), 545-562.
30. LAMBERT, J.D. The initial value problem for ordinary differential equations. In The State of the Art in Numerical Analysis, D. Jacobs (Ed.), Academic Press, New York, 1977, pp. 451-500.
31. LEAVITT, J.A. Methods and applications of power series. Math. Comp. 20 (1966), 46-52.
32. MILLER, J.C.P. The Airy integral. In The British Association for the Advancement of Science Mathematical Tables, Part-Vol. B, Cambridge University Press, London, 1946.
33. MOORE, R.E. Interval Analysis. Prentice-Hall, Englewood Cliffs, N.J., 1966.
34. MOORE, R.E. Mathematical Elements of Scientific Computing. Holt, Rinehart and Winston, New York, 1975.
35. MOORE, R.E. Methods and Applications of Interval Analysis. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1979.
36. NORMAN, A.C. Expanding the solutions of implicit sets of ordinary differential equations in power series. Comput. J. 19 (1976), 63-68.
37. NORMAN, A.C. TAYLOR User's Manual. University Computer Laboratory, Cambridge, England, 1973.
38. PEARCE, C.J. Transformation methods in the analysis of series for critical properties. Advances in Physics 27 (1978), 89-145.
39. RALL, L.B. Automatic Differentiation: Techniques and Applications. Lecture Notes in Computer Science, Vol. 120, Springer-Verlag, New York, 1981.
40. RYDER, B.G. The PFORT verifier. Softw. Pract. Exper. 4 (1974), 359-377.
41. SAKS, S., AND ZYGMUND, A. Analytic Functions (2nd ed.), E.J. Scott (Transl.), Polish Scientific Publishers, Warsaw, 1965.
42. SHAMPINE, L.F. Local error control in codes for ordinary differential equations. Appl. Math. Comput. 3 (1977), 189-210.
43. SHAMPINE, L.F., WATTS, H.A., AND DAVENPORT, S.M. Solving nonstiff ordinary differential equations--The state of the art. SIAM Rev. 18 (1976), 376-411.
44. STETTER, H.J. Analysis of Discretization Methods for Ordinary Differential Equations. Springer-Verlag, New York, 1973.
45. STETTER, H.J. Global error estimation in ODE-solvers. In Proc. Biennial Conf. (Dundee, Scotland, 1977), G.A. Watson (Ed.), Lecture Notes in Mathematics, Vol. 630, Springer-Verlag, New York, 1978, pp. 179-189.

Received May 1981; accepted February 1982
