linear multistep method

Upload: shankar-prakash-g

Post on 09-Apr-2018


8/7/2019 Linear Multistep Method

    Linear multistep method

    Abstract

    Linear multistep methods are used for the numerical solution of ordinary differential

    equations. Conceptually, a numerical method starts from an initial point and then takes a

    short step forward in time to find the next solution point. The process continues with

    subsequent steps to map out the solution. Single-step methods (such as Euler's method)

    refer to only one previous point and its derivative to determine the current value.

    Methods such as Runge-Kutta take some intermediate steps (for example, a half-step) to

    obtain a higher order method, but then discard all previous information before taking a

    second step. Multistep methods attempt to gain efficiency by keeping and using the

    information from previous steps rather than discarding it. Consequently, multistep

    methods refer to several previous points and derivative values. In the case of linear

    multistep methods, a linear combination of the previous points and derivative values is

    used.

    Definitions

Numerical methods for ordinary differential equations approximate solutions to initial value problems of the form

    y'(t) = f(t, y(t)),   y(t_0) = y_0.

The result is approximations for the value of y(t) at discrete times t_i:

    t_i = t_0 + i h,
    y_i ≈ y(t_i) = y(t_0 + i h),
    f_i = f(t_i, y_i),

where h is the time step (sometimes referred to as Δt).

A linear multistep method uses a linear combination of the y_i and the y_i' = f_i to calculate the value of y for the desired current step.

  • 8/7/2019 Linear Multistep Method

    2/31

A multistep method uses the previous s steps to calculate the next value. Consequently, the desired value at the current processing stage is y_{n+s}.

A linear multistep method is a method of the form

    y_{n+s} + a_{s−1} y_{n+s−1} + ⋯ + a_0 y_n = h ( b_s f_{n+s} + b_{s−1} f_{n+s−1} + ⋯ + b_0 f_n ),

where h denotes the step size and f the right-hand side of the differential equation. The coefficients a_0, …, a_{s−1} and b_0, …, b_s determine the method (one may set a_s = 1). The designer of the method chooses the coefficients; often, many coefficients are zero. Typically, the designer chooses the coefficients so they will exactly interpolate y(t) when it is an nth-order polynomial.

If the value of b_s is nonzero, then the value of y_{n+s} depends on the value of f(t_{n+s}, y_{n+s}). Consequently, the method is explicit if b_s = 0. In that case, the formula can directly compute y_{n+s}. If b_s ≠ 0, then the method is implicit and the equation for y_{n+s} must be solved. Iterative methods such as Newton's method are often used to solve the implicit formula.
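When the method is implicit, each step requires solving a nonlinear equation. As a minimal sketch (the function names are mine, not from the text), here is one backward Euler step, y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}), solved with Newton's method:

```python
# Newton's method applied to g(y) = y - y_n - h*f(t_{n+1}, y) = 0,
# whose root is the backward Euler value y_{n+1}.

def backward_euler_step(f, dfdy, t_next, y_n, h, tol=1e-12, max_iter=50):
    """Solve the implicit backward Euler equation by Newton iteration."""
    y = y_n  # initial guess: the previous value
    for _ in range(max_iter):
        g = y - y_n - h * f(t_next, y)
        dg = 1.0 - h * dfdy(t_next, y)   # g'(y)
        y_new = y - g / dg
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Example: y' = y (f is linear, so Newton converges in one iteration).
# The exact implicit step is y_{n+1} = y_n / (1 - h) = 1 / 0.5 = 2.
y1 = backward_euler_step(lambda t, y: y, lambda t, y: 1.0, 0.5, 1.0, 0.5)
print(y1)  # 2.0
```

For a linear right-hand side the Newton iteration terminates after one step; for genuinely nonlinear f the loop and tolerance matter.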

Sometimes an explicit multistep method is used to "predict" the value of y_{n+s}. That value is then used in an implicit formula to "correct" the value. The result is a predictor–corrector method.
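A minimal predictor–corrector sketch (my own illustration, assuming Euler as the explicit predictor and the trapezoidal rule as the implicit corrector, evaluated once rather than solved exactly):

```python
def predictor_corrector_step(f, t, y, h):
    """One predict-evaluate-correct step."""
    y_pred = y + h * f(t, y)                           # predict: explicit Euler
    y_corr = y + h / 2 * (f(t, y) + f(t + h, y_pred))  # correct: trapezoidal rule
    return y_corr

# y' = y, y(0) = 1, h = 0.5: one step gives 1 + 0.25*(1 + 1.5) = 1.625
print(predictor_corrector_step(lambda t, y: y, 0.0, 1.0, 0.5))  # 1.625
```

Substituting the prediction into the right-hand side of the corrector avoids solving the implicit equation, at the cost of a slightly different (but still second-order) method.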

    Examples

Consider for an example the problem

    y' = y,   y(0) = 1.

The exact solution is y(t) = e^t.

  • 8/7/2019 Linear Multistep Method

    3/31

One-Step Euler

A simple numerical method is Euler's method:

    y_{n+1} = y_n + h f(t_n, y_n).

Euler's method can be viewed as an explicit multistep method for the degenerate case of one step.

This method, applied with step size h = 1/2 on the problem y' = y, gives the following results:

    y_1 = 1.5,   y_2 = 2.25,   y_3 = 3.375,   y_4 = 5.0625.
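The Euler iteration is a few lines of code; this sketch (assuming, as in the text, y' = y, y(0) = 1 and h = 1/2) reproduces the values up to t = 2:

```python
def euler(f, t0, y0, h, steps):
    """Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    values = [y]
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
        values.append(y)
    return values

# y' = y, y(0) = 1, h = 1/2, four steps up to t = 2
print(euler(lambda t, y: y, 0.0, 1.0, 0.5, 4))
# [1.0, 1.5, 2.25, 3.375, 5.0625]
```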

Two-Step Adams–Bashforth

Euler's method is a one-step method. A simple multistep method is the two-step Adams–Bashforth method

    y_{n+2} = y_{n+1} + h ( (3/2) f(t_{n+1}, y_{n+1}) − (1/2) f(t_n, y_n) ).

This method needs two values, y_{n+1} and y_n, to compute the next value, y_{n+2}. However, the initial value problem provides only one value, y_0 = 1. One possibility to resolve this issue is to use the y_1 computed by Euler's method as the second value. With this choice, the Adams–Bashforth method yields (rounded to four digits):

    y_2 = 2.3750,   y_3 = 3.7813,   y_4 = 6.0234.



The exact solution at t = t_4 = 2 is y(2) = e^2 ≈ 7.3891, so the two-step Adams–Bashforth method is more accurate than Euler's method. This is always the case if the step size is small enough.
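The comparison can be reproduced with a short script (a sketch; the function name is mine), bootstrapping the two-step method with one Euler step as described above:

```python
def adams_bashforth2(f, t0, y0, h, steps):
    """Two-step Adams-Bashforth, bootstrapped with one Euler step."""
    ts = [t0, t0 + h]
    ys = [y0, y0 + h * f(t0, y0)]          # y_1 from Euler, as in the text
    for n in range(steps - 1):
        fn, fn1 = f(ts[n], ys[n]), f(ts[n + 1], ys[n + 1])
        ys.append(ys[n + 1] + h * (1.5 * fn1 - 0.5 * fn))
        ts.append(ts[n + 1] + h)
    return ys

ys = adams_bashforth2(lambda t, y: y, 0.0, 1.0, 0.5, 4)
print(round(ys[4], 4))   # 6.0234, versus 5.0625 for Euler and e^2 = 7.3891
```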

    Multistep Method Families

Three families of linear multistep methods are commonly used: Adams–Bashforth methods, Adams–Moulton methods, and the backward differentiation formulas (BDFs).

Adams–Bashforth methods

The Adams–Bashforth methods are explicit methods. The coefficients are a_{s−1} = −1 and a_{s−2} = ⋯ = a_0 = 0, while the b_j are chosen such that the methods have order s (this determines the methods uniquely).

The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (Hairer, Nørsett & Wanner 1993, §III.1; Butcher 2003, p. 103):

    s = 1:  y_{n+1} = y_n + h f_n   (this is simply the Euler method);
    s = 2:  y_{n+2} = y_{n+1} + h ( (3/2) f_{n+1} − (1/2) f_n );
    s = 3:  y_{n+3} = y_{n+2} + h ( (23/12) f_{n+2} − (16/12) f_{n+1} + (5/12) f_n );
    s = 4:  y_{n+4} = y_{n+3} + h ( (55/24) f_{n+3} − (59/24) f_{n+2} + (37/24) f_{n+1} − (9/24) f_n );
    s = 5:  y_{n+5} = y_{n+4} + h ( (1901/720) f_{n+4} − (2774/720) f_{n+3} + (2616/720) f_{n+2} − (1274/720) f_{n+1} + (251/720) f_n ).

The coefficients b_j can be determined as follows. Use polynomial interpolation to find the polynomial p of degree s − 1 such that

    p(t_{n+i}) = f(t_{n+i}, y_{n+i})   for i = 0, …, s − 1.


The Lagrange formula for polynomial interpolation yields

    p(t) = ∑_{j=0}^{s−1} f(t_{n+j}, y_{n+j}) ∏_{i=0, i≠j}^{s−1} (t − t_{n+i}) / (t_{n+j} − t_{n+i}).

The polynomial p is locally a good approximation of the right-hand side of the differential equation y' = f(t, y) that is to be solved, so consider the equation y' = p(t) instead. This equation can be solved exactly; the solution is simply the integral of p. This suggests taking

    y_{n+s} = y_{n+s−1} + ∫_{t_{n+s−1}}^{t_{n+s}} p(t) dt.

The Adams–Bashforth method arises when the formula for p is substituted. The coefficients b_j turn out to be given by

    b_{s−j−1} = ((−1)^j / (j! (s−j−1)!)) ∫_0^1 ∏_{i=0, i≠j}^{s−1} (u + i) du,   for j = 0, …, s − 1.

Replacing f(t, y) by its interpolant p incurs an error of order h^s, and it follows that the s-step Adams–Bashforth method has indeed order s (Iserles 1996, §2.1).
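The integral formula for the coefficients can be evaluated in exact rational arithmetic; the following sketch (helper names are mine) recovers the two- and three-step Adams–Bashforth coefficients using only the standard library:

```python
from fractions import Fraction
from math import factorial

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def integrate_01(p):
    """Integrate a polynomial exactly over [0, 1]."""
    return sum(c / (k + 1) for k, c in enumerate(p))

def adams_bashforth_coefficients(s):
    """b_{s-j-1} = (-1)^j / (j!(s-j-1)!) * integral_0^1 of prod_{i != j} (u + i) du."""
    b = [Fraction(0)] * s
    for j in range(s):
        p = [Fraction(1)]
        for i in range(s):
            if i != j:
                p = poly_mul(p, [Fraction(i), Fraction(1)])   # factor (u + i)
        b[s - j - 1] = Fraction((-1) ** j,
                                factorial(j) * factorial(s - j - 1)) * integrate_01(p)
    return b

print(adams_bashforth_coefficients(2))  # [Fraction(-1, 2), Fraction(3, 2)]
print(adams_bashforth_coefficients(3))  # [Fraction(5, 12), Fraction(-4, 3), Fraction(23, 12)]
```

The returned lists are b_0, …, b_{s−1}, matching the tabulated methods above.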

    The AdamsBashforth methods were designed by John Couch Adams to solve a

    differential equation modelling capillary action due to Francis Bashforth. Bashforth

    (1883) published his theory and Adams' numerical method (Goldstine 1977).

Adams–Moulton methods

The Adams–Moulton methods are similar to the Adams–Bashforth methods in that they also have a_{s−1} = −1 and a_{s−2} = ⋯ = a_0 = 0. Again the b coefficients are chosen


to obtain the highest order possible. However, the Adams–Moulton methods are implicit methods. By removing the restriction that b_s = 0, an s-step Adams–Moulton method can reach order s + 1, while an s-step Adams–Bashforth method has only order s.

The Adams–Moulton methods with s = 0, 1, 2, 3, 4 are (Hairer, Nørsett & Wanner 1993, §III.1; Quarteroni, Sacco & Saleri 2000):

    s = 0:  y_n = y_{n−1} + h f(t_n, y_n)   (this is the backward Euler method);
    s = 1:  y_{n+1} = y_n + (h/2) ( f(t_{n+1}, y_{n+1}) + f(t_n, y_n) )   (this is the trapezoidal rule);
    s = 2:  y_{n+2} = y_{n+1} + h ( (5/12) f_{n+2} + (8/12) f_{n+1} − (1/12) f_n );
    s = 3:  y_{n+3} = y_{n+2} + h ( (9/24) f_{n+3} + (19/24) f_{n+2} − (5/24) f_{n+1} + (1/24) f_n );
    s = 4:  y_{n+4} = y_{n+3} + h ( (251/720) f_{n+4} + (646/720) f_{n+3} − (264/720) f_{n+2} + (106/720) f_{n+1} − (19/720) f_n ).
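Since the test problem y' = y is linear, the implicit trapezoidal rule can be solved for y_{n+1} in closed form; a short sketch (my example, not from the text):

```python
from math import exp

# Trapezoidal rule on y' = y:  y_{n+1} = y_n + (h/2)(y_n + y_{n+1}),
# which solves to y_{n+1} = y_n * (1 + h/2) / (1 - h/2).

def trapezoidal_linear(y0, h, steps):
    factor = (1 + h / 2) / (1 - h / 2)
    y = y0
    for _ in range(steps):
        y *= factor
    return y

y4 = trapezoidal_linear(1.0, 0.5, 4)
print(round(y4, 4), round(exp(2), 4))  # about 7.716 versus e^2 = 7.3891
```

For nonlinear f the implicit equation has no such closed form and must be solved iteratively, as noted earlier.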

The derivation of the Adams–Moulton methods is similar to that of the Adams–Bashforth method; however, the interpolating polynomial uses not only the points t_{n+s−1}, …, t_n, as above, but also t_{n+s}. The coefficients are given by

    b_{s−j} = ((−1)^j / (j! (s−j)!)) ∫_0^1 ∏_{i=0, i≠j}^{s} (u + i − 1) du,   for j = 0, …, s.


The Adams–Moulton methods are solely due to John Couch Adams, like the Adams–Bashforth methods. The name of Forest Ray Moulton became associated with these methods because he realized that they could be used in tandem with the Adams–Bashforth methods as a predictor–corrector pair (Moulton 1926); Milne (1926) had the same idea. Adams used Newton's method to solve the implicit equation (Hairer, Nørsett & Wanner 1993, §III.1).

    Analysis

    The central concepts in the analysis of linear multistep methods, and indeed any

    numerical method for differential equations, are convergence, order, and stability.

The first question is whether the method is consistent: is the difference equation

    y_{n+s} + a_{s−1} y_{n+s−1} + ⋯ + a_0 y_n = h ( b_s f_{n+s} + ⋯ + b_0 f_n )

a good approximation of the differential equation y' = f(t, y)? More precisely, a multistep method is consistent if the local error goes to zero faster than the step size h as h goes to zero, where the local error is defined to be the difference between the result y_{n+s} of the method, assuming that all the previous values y_{n+s−1}, …, y_n are exact, and the exact solution of the equation at time t_{n+s}. A computation using Taylor series shows that a linear multistep method is consistent if and only if

    ∑_{j=0}^{s} a_j = 0   and   ∑_{j=0}^{s} b_j = ∑_{j=0}^{s} j a_j   (with a_s = 1).

All the methods mentioned above are consistent (Hairer, Nørsett & Wanner 1993, §III.2).


If the method is consistent, then the next question is how well the difference equation defining the numerical method approximates the differential equation. A multistep method is said to have order p if the local error is of order O(h^{p+1}) as h goes to zero. This is equivalent to the following condition on the coefficients of the method:

    ∑_{j=0}^{s} a_j = 0   and   ∑_{j=0}^{s} j^q a_j = q ∑_{j=0}^{s} j^{q−1} b_j   for q = 1, …, p.

The s-step Adams–Bashforth method has order s, while the s-step Adams–Moulton method has order s + 1 (Hairer, Nørsett & Wanner 1993, §III.2).

These conditions are often formulated using the characteristic polynomials

    ρ(z) = ∑_{j=0}^{s} a_j z^j   and   σ(z) = ∑_{j=0}^{s} b_j z^j.

In terms of these polynomials, the above condition for the method to have order p becomes

    ρ(e^h) − h σ(e^h) = O(h^{p+1})   as h → 0.

In particular, the method is consistent if it has order at least one, which is the case if ρ(1) = 0 and ρ'(1) = σ(1).

If the roots of the characteristic polynomial ρ all have modulus less than or equal to 1 and the roots of modulus 1 have multiplicity 1, we say that the root condition is satisfied. The method is convergent if and only if it is consistent and the root condition is satisfied. Consequently, a consistent method is stable if and only if this condition is satisfied, and thus the method is convergent if and only if it is stable.

Furthermore, if the method is stable, the method is said to be strongly stable if z = 1 is the only root of modulus 1. If it is stable and all roots of modulus 1 are not repeated, but


    there is more than one such root, it is said to be relatively stable. Note that 1 must be a

    root; thus stable methods are always one of these two.

Example

Consider the three-step Adams–Bashforth method

    y_{n+3} = y_{n+2} + h ( (23/12) f_{n+2} − (16/12) f_{n+1} + (5/12) f_n ).

The characteristic equation is thus

    ρ(z) = z^3 − z^2 = z^2 (z − 1) = 0,

whose roots are z = 0 (double) and z = 1, so the root condition is satisfied and the method is strongly stable.
    First and second Dahlquist barriers

    These two results were proved by Germund Dahlquist and represent an important bound

    for the order of convergence and for the A-stability of a linear multistep method.

    First Dahlquist barrier

A zero-stable linear q-step multistep method cannot attain an order of convergence greater than q + 1 if q is odd and greater than q + 2 if q is even. If the method is also explicit, then it cannot attain an order greater than q (Hairer, Nørsett & Wanner 1993, Thm. III.3.5).

    Second Dahlquist barrier

There are no explicit A-stable linear multistep methods. The implicit ones have order of convergence at most 2 (Hairer & Wanner 1996, Thm. V.1.4).

Goldstine, Herman H. (1977), A History of Numerical Analysis from the 16th through the 19th Century, New York: Springer-Verlag.

Methods for nonstiff problems

Consider an initial value problem


    y'(t) = f(t, y(t)),   y(t_0) = y_0,   (1)

where the solution y can be scalar- or vector-valued. Let t_0 < t_1 < t_2 < ⋯ be a sequence of grid points, which for simplicity is supposed to be equidistant with step size h = t_{n+1} − t_n. A numerical method is an algorithm that yields approximations y_n ≈ y(t_n) to the solution at the grid points.

    Abstract linear spaces

Cartesian geometry, introduced by Fermat and Descartes around 1636, had a very large influence on mathematics, bringing algebraic methods into geometry. By the middle of the 19th Century, however, there was some dissatisfaction with these coordinate methods, and people began to search for direct methods, i.e. methods of synthetic geometry which were coordinate-free.


    It is possible however to trace the beginning of the vector concept back to the beginning

of the 19th Century with the work of Bolzano. In 1804 he published a work on the foundations of elementary geometry, Betrachtungen über einige Gegenstände der Elementargeometrie. Bolzano, in this book, considers points, lines and planes as

    undefined elements and defines operations on them. This is an important step in the

    axiomatisation of geometry and an early move towards the necessary abstraction for the

    concept of a linear space to arise.

The move away from coordinate geometry was mainly due to the work of Poncelet and Chasles, who were the founders of synthetic geometry. The parallel development in analysis was to move from spaces of concrete objects, such as sequence spaces, towards abstract linear spaces. Instead of substitutions defined by matrices, abstract linear operators must be defined on abstract linear spaces.

In 1827 Möbius published Der barycentrische Calcul, a geometrical book which studies transformations of lines and conics. The novel feature of this work is the introduction of barycentric coordinates. Given any triangle ABC, if weights a, b and c are placed at A, B and C respectively, then a point P, the centre of gravity, is determined. Möbius showed that every point P in the plane is determined by the homogeneous coordinates [a, b, c], the weights required to be placed at A, B and C to give the centre of gravity at P. The importance here is that Möbius was considering directed quantities, an early appearance of vectors.

In 1837 Möbius published a book on statics in which he clearly states the idea of resolving a vector quantity along two specified axes.

Between these two works of Möbius, a geometrical work by Bellavitis was published in 1832 which also contains vector-type quantities. His basic objects are line segments AB, and he considers AB and BA as two distinct objects. He defines two line segments as 'equipollent' if they are equal and parallel, so, in modern notation, two line segments are equipollent if they represent the same vector. Bellavitis then defines the 'equipollent sum of line segments' and obtains an 'equipollent calculus' which is essentially a vector space.


Another mathematician who was moving towards geometry without coordinates was Grassmann. His work is highly original, but the notion of barycentric coordinates introduced by Möbius was his main motivation. Grassmann's contribution Die Ausdehnungslehre appeared in several different versions. The first was in 1844, but it was a very difficult work to read and clearly did not find favour with mathematicians, so Grassmann tried to produce a more readable version, which appeared in 1862. Clebsch inspired Grassmann to work on this new version.

Grassmann studied an algebra whose elements are not specified, so are abstract quantities. He considers systems of elements on which he defines formal operations of addition, scalar multiplication and multiplication. He starts with undefined elements which he calls 'simple quantities' and generates more complex quantities using specified rules.

But ... I go further, since I call these not just quantities but simple quantities. There are other quantities which are themselves compounded quantities and whose characteristics are as distinct relative to each other as the characteristics of the different simple quantities are to each other. These quantities come about through addition of higher forms ...

His work contains the familiar laws of vector spaces but, since he also has a multiplication defined, his structures satisfy the properties of what are today called algebras. The precise structures are now known as Grassmann algebras. The ideas of linearly independent and linearly dependent sets of elements are clearly contained in Grassmann's work, as is the idea of dimension (although he does not use the term). The scalar product also appears in Grassmann's 1844 work.

Grassmann's 1862 version of Die Ausdehnungslehre has a long introduction in which Grassmann gives a summary of his theory. In this introduction he also defends his formal methods, which had clearly been objected to by a number of mathematicians. Grassmann's justification comes very close to saying that he is setting up an axiomatic theory, and this shows that he is well ahead of his time.


Cauchy and Saint-Venant have some claims to have invented similar systems to Grassmann's. Saint-Venant's claim is a fair one, since he published a work in 1845 in which he multiplies line segments in an analogous way to Grassmann. In fact, when Grassmann read Saint-Venant's paper he realised that Saint-Venant had not read his 1844 work, and he sent two copies of the relevant parts to Cauchy, asking him to pass one copy to Saint-Venant.

However, rather typically of Cauchy, in 1853 he published Sur les clefs algébriques in Comptes Rendus, which describes a formal symbolic method that coincides with Grassmann's (but makes no reference to Grassmann). Grassmann complained to the Académie des Sciences that his work had priority over Cauchy's and, in 1854, a committee was set up to investigate who had priority. We still await the committee's report!

The first to see the importance of Grassmann's work was Hankel. In 1867 he wrote a paper, Theorie der complexen Zahlensysteme, concerning formal systems where combination of the symbols is abstractly defined. He credits Grassmann's Die Ausdehnungslehre as giving the foundation for his work.

The first to give an axiomatic definition of a real linear space was Peano, in a book published in Torino in 1888. He credits Leibniz, Möbius's 1827 work, Grassmann's 1844 work and Hamilton's work on quaternions as providing ideas which led him to his formal calculus.

Peano's 1888 book Calcolo geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle operazioni della logica deduttiva is remarkable. It gives the basic calculus of set operations, introducing the modern notation ∩, ∪, ∈ for intersection, union and membership. It was many years before this notation was to become accepted; in fact, Peano's book seems to have had very little influence for many years. It is equally remarkable for containing an almost modern introduction to linear spaces and linear algebra.

    14

    http://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Societies/Paris.htmlhttp://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Hankel.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Peano.htmlhttp://www.gap-system.org/~history/Mathematicians/Leibniz.htmlhttp://www.gap-system.org/~history/Mathematicians/Mobius.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Hamilton.htmlhttp://www.gap-system.org/~history/Mathematicians/Peano.htmlhttp://www.gap-system.org/~history/Mathematicians/Peano.htmlhttp://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~histor
y/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Saint-Venant.htmlhttp://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Societies/Paris.htmlhttp://www.gap-system.org/~history/Mathematicians/Cauchy.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Hankel.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Peano.htmlhttp://www.gap-system.org/~history/Mathematicians/Leibniz.htmlhttp://www.gap-system.org/~history/Mathematicians/Mobius.htmlhttp://www.gap-system.org/~history/Mathematicians/Grassmann.htmlhttp://www.gap-system.org/~history/Mathematicians/Hamilton.htmlhttp://www.gap-system.org/~history/Mathematicians/Peano.htmlhttp://www.gap-system.org/~history/Mathematicians/Peano.html

    In Chapter IX of the book Peano gives axioms for a linear space.

    It is hard to believe that Peano writes the following in 1888. It could almost come from a

    1988 book! The first is for equality of elements

    1. (a = b) if and only if (b = a); if (a = b) and (b = c) then (a = c).

    2. The sum of two objects a and b is defined, i.e. an object denoted by a + b, also belonging to the system, which satisfies:

    If (a = b) then (a + c = b + c), a + b = b + a, a + (b + c) = (a + b) + c,

    and the common value of the last equality is denoted by a + b + c.

    3. If a is an object of the system and m a positive integer, then we understand by ma

    the sum of m objects equal to a. It is easy to see that for objects a, b, ... of the

    system and positive integers m, n, ... one has: If (a = b) then (ma = mb), m(a+b) = ma+mb, (m+n)a = ma+na,

    m(na) = mna, 1a = a.

    We suppose that for any real number m the notation ma has a meaning such that the preceding equations are valid.

    Peano goes on to state the existence of a zero object 0 and says that 0a = 0, that a - b

    means a + (-b) and states it is easy to show that a - a = 0 and 0 + a = a.

    Peano defines a linear system to be any system of objects satisfying his four conditions.

    He goes on to define dependent objects and independent objects. He then defines

    dimension.

    Definition: The number of the dimensions of a linear system is the maximal number of

    linearly independent objects in the system.

    He proves that finite dimensional spaces have a basis and gives examples of infinite

    dimensional linear spaces. Peano considers entire functions f(x) of a variable x, defines

    the sum of f1(x) and f2(x) and the product of f(x) by a real number m. He says:


    If one considers only functions of degree n, then these functions form a linear system

    with n + 1 dimensions; the entire functions of arbitrary degree form a linear system with

    infinitely many dimensions.

    Peano defines linear operators on a linear space and shows that by using coordinates one

    obtains a matrix. He defines the sum and product of linear operators.
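Peano's observation that a linear operator becomes a matrix once coordinates are chosen can be sketched concretely. The differentiation operator and the monomial basis below are illustrative choices of mine, not Peano's own notation:

```python
import numpy as np

# Polynomials of degree <= 3 form a 4-dimensional linear system; a
# polynomial a0 + a1*x + a2*x^2 + a3*x^3 is the coordinate vector
# (a0, a1, a2, a3) with respect to the basis 1, x, x^2, x^3.
# The linear operator d/dx then becomes a 4x4 matrix:
D = np.array([
    [0, 1, 0, 0],   # d/dx(x)   = 1
    [0, 0, 2, 0],   # d/dx(x^2) = 2x
    [0, 0, 0, 3],   # d/dx(x^3) = 3x^2
    [0, 0, 0, 0],   # d/dx of the top basis element has no x^3 part
], dtype=float)

p = np.array([5.0, 4.0, 3.0, 2.0])   # 5 + 4x + 3x^2 + 2x^3
dp = D @ p                            # derivative: 4 + 6x + 6x^2
print(dp)                             # [4. 6. 6. 0.]
```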

    In the 1890s Pincherle worked on a formal theory of linear operators on an infinite-dimensional vector space. However, Pincherle did not base his work on that of Peano, but rather on the abstract operator theory of Leibniz and d'Alembert.

    Like so much work in this area it had very little immediate impact and axiomatic infinite

    dimensional vector spaces were not studied again until Banach and his associates took up

    the topic in the 1920s.

    Although never attaining the level of abstraction which Peano had achieved, Hilbert and

    his student Schmidt looked at infinite dimensional spaces of functions in 1904. Schmidt

    made a move towards abstraction in 1908 by introducing geometrical language into

    Hilbert space theory. The fully axiomatic approach appeared in Banach's 1920 doctoral

    dissertation.

    Overview

    A system of linear inequalities defines a polytope as a feasible region. The simplex

    algorithm begins at a starting vertex and moves along the edges of the polytope until it

    reaches the vertex of the optimum solution.

    Consider a linear programming problem,

    maximize c^T x

    subject to Ax ≤ b, x ≥ 0,

    with x the vector of variables of the problem, c a vector representing the linear form to optimize, A a rectangular p × n matrix, and b the vector of linear constraints.


    simplices on which it performs badly. It is an open question whether there is a pivot rule with polynomial-time, or even sub-exponential, worst-case complexity.

    Nevertheless, the simplex method is remarkably efficient in practice. It has been known

    since the 1970s that it has polynomial-time average-case complexity under various

    distributions. These results on "random" matrices still didn't quite capture the desired

    intuition that the method works well on "typical" matrices. In 2001 Spielman and Teng

    introduced the notion of smoothed complexity to provide a more realistic analysis of the

    performance of algorithms.[3]

    Other algorithms for solving linear programming problems are described in the linear programming article.

    Algorithm

    The simplex algorithm requires the linear programming problem to be in augmented

    form, so that the inequalities are replaced by equalities. The problem can then be written

    as follows in matrix form:

    Maximize Z in:

    where xs are the slack variables introduced by the augmentation process, i.e. the non-negative distances between the point and the hyperplanes representing the linear constraints A.

    By definition, the vertices of the feasible region are each at the intersection of n hyperplanes (either from A or from the non-negativity constraints). This corresponds to n zeros among the n+p variables of (x, xs). Thus two feasible vertices are adjacent when they share n−1 zeros in the variables of (x, xs). This is the interest of the augmented-form notation: vertices and jumps along edges of the polytope are easy to write.

    The simplex algorithm goes as follows:


    Start off by finding a feasible vertex. It will have at least n zeros among the variables of (x, xs); these will be called the n current non-basic variables.

    Write Z as an affine function of the n current non-basic variables. This is done by transvections that eliminate the corresponding non-zero coefficients in the first line of the matrix above.

    Now we want to jump to an adjacent feasible vertex to increase Z's value. That means increasing the value of one of the non-basic variables while keeping the n−1 others at zero. Among the n candidate adjacent feasible vertices, we choose the one with the greatest positive rate of increase in Z, also called the direction of highest gradient.

    o The entering variable is therefore easily identified as the one with the maximum positive coefficient in Z.

    o If there are no positive coefficients left in Z's affine expression, then the solution vertex has been found and the algorithm terminates.

    Next we need to compute the coordinates of the new vertex we jump to. That vertex will have one of the p current basic variables set to 0, and that variable must be found among the p candidates. By convexity of the feasible polytope, the correct leaving variable is the one that, when set to 0, minimizes the change in the entering variable. This is easily found by computing p ratios of coefficients and taking the lowest ratio. The entering variable then replaces the leaving one among the p basic variables, and the algorithm loops back to writing Z as a function of the new non-basic variables.
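The steps above can be sketched as a minimal dense-tableau implementation. This is an illustrative sketch, not code from the text: it assumes the problem is maximize c·x subject to Ax ≤ b with b ≥ 0 (so x = 0 is a feasible starting vertex), a bounded optimum, and no anti-cycling safeguards; the function name and the small test data are hypothetical.

```python
import numpy as np

def simplex(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0 (assumes b >= 0)."""
    p, n = A.shape
    # Tableau: objective row stores Z - c@x = 0, then the constraints
    # augmented with slack variables.
    T = np.zeros((p + 1, n + p + 1))
    T[0, :n] = -c
    T[1:, :n] = A
    T[1:, n:n + p] = np.eye(p)         # slack variables xs
    T[1:, -1] = b
    basis = list(range(n, n + p))      # the slacks start as basic variables
    while True:
        j = int(np.argmin(T[0, :-1]))  # entering variable (Dantzig rule)
        if T[0, j] >= -1e-9:
            break                      # no improving direction: optimum found
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(1, p + 1)]
        i = int(np.argmin(ratios)) + 1 # leaving variable: minimum ratio test
        T[i] /= T[i, j]                # pivot on the chosen element
        for r in range(p + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i - 1] = j
    x = np.zeros(n + p)
    for row, var in enumerate(basis, start=1):
        x[var] = T[row, -1]
    return x[:n], T[0, -1]

# Hypothetical data: maximize 3x + 2y subject to x + y <= 4, x <= 2.
x, z = simplex(np.array([3.0, 2.0]),
               np.array([[1.0, 1.0], [1.0, 0.0]]),
               np.array([4.0, 2.0]))
print(x, z)   # optimum at x = [2, 2] with Z = 10
```

Note the sign convention: the objective row stores Z − c·x, so picking the most negative entry here is the same criterion as the "maximum positive coefficient in Z" described above.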

    Example

    x = 0 is clearly a feasible vertex, so we start off with it: the 3 current non-basic variables are x, y and z. Luckily, Z is already expressed as an affine function of x, y, z, so no transvections need to be done at this step. Here z has the coefficient of greatest magnitude in Z's row, −4, so the direction with highest gradient is z. We then need to compute the coordinates of the vertex we jump to by increasing z. That will result in setting either s or t to 0, and the correct one must be found. For s, the change in z equals 10/1 = 10; for t it is 15/3 = 5. t is the correct one because it minimizes that change. The new vertex is thus x = y = t = 0. Rewrite Z as an affine function of these new non-basic variables:

    Now all coefficients on the first row of the matrix have become nonnegative. That means Z cannot be improved by increasing any of the current non-basic variables. The algorithm terminates and the solution is the vertex x = y = t = 0. At that vertex Z = 20, and this is its maximum value. Usually the coordinates of the solution vertex are needed in the standard variables x, y, z; a little substitution yields x = 0, y = 0 and z = 5.

    Implementation

    The tableau form used above to describe the algorithm lends itself to an immediate

    implementation in which the tableau is maintained as a rectangular (m+1)-by-(m+n+1)

    array. It is straightforward to avoid storing the m explicit columns of the identity matrix

    that will occur within the tableau by virtue of B being a subset of the columns of the augmented constraint matrix. This

    implementation is referred to as the standard simplex method. The storage and

    computation overhead are such that the standard simplex method is a prohibitively

    expensive approach to solving large linear programming problems.

    In each simplex iteration, the only data required are the first row of the tableau, the

    (pivotal) column of the tableau corresponding to the entering variable and the right-hand-

    side. The latter can be updated using the pivotal column and the first row of the tableau

    can be updated using the (pivotal) row corresponding to the leaving variable. Both the

    pivotal column and pivotal row may be computed directly using the solutions of linear

    systems of equations involving the matrix B and a matrix-vector product using A. These

    observations motivate the revised simplex method, for which implementations are

    distinguished by their invertible representation of B.

    In large linear programming problems A is typically a sparse matrix and, when the

    resulting sparsity of B is exploited in maintaining its invertible representation, the revised simplex method is a vastly more efficient solution procedure than the standard

    simplex method. Commercial simplex solvers are based on the primal (or dual) revised

    simplex method.

    Analytic geometry

    Three geometric linear functions: the red and blue ones have the same slope (m), while the red and green ones have the same y-intercept (b).

    Main article: linear equation

    In analytic geometry, the term linear function is sometimes used to mean a first-degree

    polynomial function of one variable. These functions are known as "linear" because they

    are precisely the functions whose graph in the Cartesian coordinate plane is a straight

    line.

    Such a function can be written as

    f(x) = mx + b

    (called slope-intercept form), where m and b are real constants and x is a real variable.

    The constant m is often called the slope or gradient, while b is the y-intercept, which

    gives the point of intersection between the graph of the function and the y-axis. Changing

    m makes the line steeper or shallower, while changing b moves the line up or down.

    Examples of functions whose graph is a line include the following:

    f1(x) = 2x + 1

    f2(x) = x / 2 + 1

    f3(x) = x / 2 − 1.

    The graphs of these are shown in the image at right.
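The slope and intercept claims can be checked numerically: for a straight line, the slope is recovered as f(x+1) − f(x) and the y-intercept as f(0).

```python
# The three example lines from the text:
f1 = lambda x: 2 * x + 1
f2 = lambda x: x / 2 + 1
f3 = lambda x: x / 2 - 1

for f in (f1, f2, f3):
    print(f(1) - f(0), f(0))   # slope, then y-intercept
# f2 and f3 share the slope 1/2; f1 and f2 share the intercept 1.
```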


    Vector spaces

    In advanced mathematics, a linear function means a function that is a linear map, that is, a map between two vector spaces that preserves vector addition and scalar multiplication.

    For example, if x and f(x) are represented as coordinate vectors, then the linear functions are those functions f that can be expressed as

    f(x) = Mx,

    where M is a matrix. A function

    f(x) = mx + b

    is a linear map if and only if b = 0. For other values of b this falls in the more general class of affine maps.
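A quick numerical check of this distinction (the random matrix and vectors are arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
x, y = rng.standard_normal(3), rng.standard_normal(3)

linear = lambda v: M @ v       # f(x) = Mx, a genuine linear map
affine = lambda v: 2 * v + 1   # f(x) = 2x + b with b = 1 != 0

# A linear map preserves vector addition and scalar multiplication:
add_ok   = np.allclose(linear(x + y), linear(x) + linear(y))
scale_ok = np.allclose(linear(3 * x), 3 * linear(x))
# The affine map with b != 0 fails additivity (the b's add up twice):
aff_ok   = np.allclose(affine(x + y), affine(x) + affine(y))
print(add_ok, scale_ok, aff_ok)   # True True False
```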

    Abstract: The linear stability analysis for linear multistep methods leads to studying the location of the roots of the associated characteristic polynomial with respect to the unit circle in the complex plane. It is known that if the discrete problem is an initial value one, it is sufficient to determine when all the roots are inside the unit disk. This requirement is, however, conflicting with the order conditions, as established by the Dahlquist barrier. The conflict disappears if one uses a linear multistep method coupled with boundary conditions (BVMs). In this paper, a rigorous analysis of the linear stability for some classes of BVMs is presented. The study is carried out by using the notion of type of a polynomial.

    © 2007 European Society of Computational Methods in Sciences and Engineering

    Keywords: Linear multistep methods, stability of numerical methods, polynomial type

    Mathematics Subject Classification: 65L06, 65L20

    1 Introduction

    L. Aceto, R. Pandolfi, D. Trigiante
    1 Published electronically April 14, 2007
    2 Corresponding author. E-mail: [email protected]

    One of the most important problems to be solved when a first-order differential equation in R^s is approximated by a linear multistep method is the control of the errors between the continuous and the discrete solutions. In the last forty years a lot of effort has been devoted to this field, mainly when the methods are applied to dissipative problems. In such a case, the study of the propagation of the errors is made by means of a linear difference equation (frequently called the error equation) depending on a complex parameter. Considering that the resulting equation is, in general, of order greater than one, some additional conditions must be fixed in order to get the solution we are interested in. When all of them are chosen at the first points of the interval of integration we are solving discrete initial value methods (IVMs). It is well-known that in this case the asymptotic stability of the zero solution of the error equation is equivalent to requiring that the associated stability polynomial is a Schur polynomial, that is, all its roots have modulus less than one, as the complex parameter, say q, varies in the left-hand complex plane. This requirement is, however, conflicting with the order conditions, as established by the Dahlquist barrier. On the other hand, since only one condition is inherited from the continuous problem there is no valid reason to use discrete IVMs. It is possible to split the additional conditions, part at the beginning and part at the end of the interval


    of integration, solving a boundary value method (BVM). In this case the concept of stability needs to be generalized; the notion of well-conditioning is more appropriate. Essentially, this notion requires that under a perturbation δη of the imposed conditions, the perturbation δy of the solution should be bounded as follows:

    ‖δy‖ ≤ κ ‖δη‖

    where κ is independent of the number of points in the discrete mesh. If a discrete

    boundary value

    problem is considered, the error equation is well-conditioned if the number of initial

    conditions is

    equal to the number of roots of the stability polynomial inside the unit circle and the

    number of

    conditions at the end of the interval of integration is equal to the number of roots outside

    of the

    unit circle [6]. This result generalizes the stability condition for IVMs where all the roots

    need

    to be inside the unit disk. In order to control that the number of roots inside the unit circle remains constant for q ∈ C⁻, the notion of type of a polynomial p(z) of degree k is useful. A polynomial is of type

    T(p(z)) = (r1, r2, r3),   r1 + r2 + r3 = k,

    if r1 is the number of its roots inside, r2 on the boundary and r3 outside the unit disk in the complex plane, respectively.
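The type of a polynomial can be computed numerically from its roots. This helper is my own sketch (not from the paper), using NumPy's root finder and a tolerance for deciding which roots lie on the unit circle:

```python
import numpy as np

def poly_type(coeffs, tol=1e-9):
    """Type (r1, r2, r3) of p(z) = a0 + a1 z + ... + ak z^k.

    coeffs is given in ascending order a0..ak; returns the counts of
    roots inside, on, and outside the unit circle, respectively.
    """
    mods = np.abs(np.roots(coeffs[::-1]))   # np.roots wants ak first
    r1 = int(np.sum(mods < 1 - tol))
    r2 = int(np.sum(np.abs(mods - 1) <= tol))
    r3 = int(np.sum(mods > 1 + tol))
    return r1, r2, r3

# p(z) = z^2 - 1 has roots +1 and -1, both on the unit circle:
print(poly_type([-1.0, 0.0, 1.0]))   # (0, 2, 0)
```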

    Denoting, as usual, by ρk(z) and σk(z) the first and second characteristic polynomials of a generic k-step linear multistep method and by q = hλ the characteristic complex parameter, where h is the stepsize and λ is the parameter in the usual test equation (i.e., y′ = λy), the classical A-stability condition requires that T(ρk(z) − q σk(z)) = (k, 0, 0) for all q ∈ C⁻. In the BVM approach, the A-stability condition requires only that r2 vanishes, i.e.,

    T(ρk(z) − q σk(z)) = (r1, 0, r3),   for all q ∈ C⁻.

    The stability problem is then reduced to studying whether or not T(ρk(z) − q σk(z)) remains constant for q ∈ C⁻, no matter if the discrete problem is an IVM or a BVM one. Of course, we now have much more freedom, since the case r3 = 0 is only one of the allowed possibilities.

    Except for very simple cases, the proof that the number of roots inside the unit circle remains constant for all q ∈ C⁻ is often checked numerically. In this paper a theoretical analysis is carried out for the classes of linear multistep methods generalizing the Backward Differentiation Formulas (GBDFs) and the Adams methods (OGAMs, GAMs) [2, 6]. The starting point of this study will be either the explicit form of the coefficients or relevant properties of them.

    2 Polynomial type and the stability problem for LMMs

    The study of the location of the zeros of special polynomials with respect to a curve in the complex plane is an old problem whose pioneering works go back to Schur [7]. Classical criteria such as Schur's or Routh-Hurwitz's are in general difficult to apply to high-degree polynomials. The following result provides conditions which are simpler to check in our subsequent analysis.

    Stability analysis of LMMs via polynomial type variation

    Theorem 2.1 Let p(z) be the real polynomial of degree k defined by

    p(z) = Σ_{j=0}^{k} a_j z^j

    and p*(z) = z^k p(z^{−1}) its adjoint. Suppose that the following conditions are satisfied:

    i) p(1) ≠ 0;

    ii) there exists m ∈ N, m ≤ k, such that:

    z^{2m−k+1} p(z) − p*(z) = a_k (z − 1)^{2m+1}.

    Then,

    T(p(z)) = (k−m, 0, m)       if (−1)^m a_k p(1) > 0,
    T(p(z)) = (k−m−1, 0, m+1)   if (−1)^m a_k p(1) < 0.

    Proof. We first prove that p(z) has no zeros on the unit circle when p(1) ≠ 0. Suppose that p(e^{iθ}) = 0 for a fixed θ ∈ (0, 2π). Then p*(e^{iθ}) = e^{ikθ} p(e^{−iθ}) = 0, since the coefficients are real. Moreover, from hypothesis ii) it turns out that

    0 = e^{i(2m−k+1)θ} p(e^{iθ}) − p*(e^{iθ}) = a_k (e^{iθ} − 1)^{2m+1}.

    This equality is only verified for θ = 0, which is excluded by hypothesis i).

    Let g be defined by

    g(z) = z^{2(m−k)+1} p(z²) / p(1).

    Supposing that n is the number of zeros of p inside the unit circle (counted with multiplicity), g has np = 2(k−m)−1 poles and nr = 2n zeros inside the unit circle. We determine the winding number w of g(z) around the circle |z| = 1 (see, e.g., [7]). The hypothesis ii), relating p and p*,


    yields

    g(z) − g(z^{−1}) = [z^{2(2m−k+1)} p(z²) − z^{2k} p(z^{−2})] / (z^{2m+1} p(1))
                     = [z^{2(2m−k+1)} p(z²) − p*(z²)] / (z^{2m+1} p(1))
                     = a_k (z² − 1)^{2m+1} / (z^{2m+1} p(1))
                     = a_k (z − z^{−1})^{2m+1} / p(1).

    Then,

    Im g(e^{iθ}) = [g(e^{iθ}) − g(e^{−iθ})] / (2i)
                 = a_k (e^{iθ} − e^{−iθ})^{2m+1} / (2i p(1))
                 = a_k (2i sin θ)^{2m+1} / (2i p(1))
                 = [(−1)^m a_k / (2 p(1))] (2 sin θ)^{2m+1}.

    This quantity vanishes for θ = 0, π. Moreover, according to the sign of (−1)^m a_k p(1), it is either positive or negative on (0, π) and the opposite on (π, 2π). Considering that w assumes the value +1 in the first case and −1 in the other one, and that the principle of the argument yields w = nr − np = 2(n − k + m) + 1, it immediately follows that n = k − m if (−1)^m a_k p(1) > 0 and n = k − m − 1 if (−1)^m a_k p(1) < 0. Since p has a total of k zeros, the assertion follows.


    Example 2.1 Let

    p(z) = 10 − 5z + z².

    It satisfies hypothesis ii) with m = k = 2. Moreover, since (−1)^m a_k p(1) = 6 > 0, it follows that T(p(z)) = (0, 0, 2). In fact, the roots of p(z) are z1 = (5 + i√15)/2, z2 = (5 − i√15)/2.

    Example 2.2 Consider the polynomial

    p(z) = −7 + 3z + 5z² + z³.

    In this case hypothesis ii) is verified with m = 2. Moreover, since (−1)^m a_k p(1) = 2 > 0, we get T(p(z)) = (1, 0, 2). In fact, the roots of p(z) are z1 ≈ 0.8662, z2 ≈ −2.2108, z3 ≈ −3.6554.

    Example 2.3 Let

    p(z) = −9 + z + 5z² + z³.

    Here p(1) = −2 and condition ii) is verified with m = 2. Since (−1)^m a_k p(1) = −2 < 0, one has that T(p(z)) = (0, 0, 3). The roots are z1 ≈ −4.2731, z2 ≈ −1.8596, z3 ≈ 1.1326.
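The three examples can be double-checked numerically by locating the roots directly (a sketch using NumPy's root finder rather than the paper's criterion):

```python
import numpy as np

def poly_type(coeffs):
    # coeffs in ascending order a0..ak; returns (inside, on, outside)
    mods = np.abs(np.roots(coeffs[::-1]))   # np.roots wants ak first
    return (int(np.sum(mods < 1 - 1e-9)),
            int(np.sum(np.abs(mods - 1) <= 1e-9)),
            int(np.sum(mods > 1 + 1e-9)))

# Example 2.1: p(z) = 10 - 5z + z^2
print(poly_type(np.array([10.0, -5.0, 1.0])))      # (0, 0, 2)
# Example 2.2: p(z) = -7 + 3z + 5z^2 + z^3
print(poly_type(np.array([-7.0, 3.0, 5.0, 1.0])))  # (1, 0, 2)
# Example 2.3: p(z) = -9 + z + 5z^2 + z^3
print(poly_type(np.array([-9.0, 1.0, 5.0, 1.0])))  # (0, 0, 3)
```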

    The above result turns out to be useful in studying the stability properties of discrete problems obtained by using Boundary Value Methods (BVMs) with (k1, k2)-boundary conditions, i.e., k-step linear multistep methods (LMMs) to which k1 initial conditions and k2 = k − k1 final ones are imposed.

    For the sake of completeness, we briefly report the essential results on the stability problem for BVMs (see [6] for details). The characteristic (or stability) polynomial of the method is

    πk(z, q) = ρk(z) − q σk(z),   q ∈ C,   (1)

    where

    ρk(z) := Σ_{j=0}^{k} α_j^{(k)} z^j,   σk(z) := Σ_{j=0}^{k} β_j^{(k)} z^j.   (2)
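As a concrete instance of (1)-(2), one can take the two-step BDF (BDF2), a standard A-stable k = 2 method — chosen here only as an illustration, not necessarily one of the classes studied in the paper — sample q in the left half-plane, and verify that the root count inside the unit circle stays constant at k = 2, i.e., type (2, 0, 0):

```python
import numpy as np

# BDF2 characteristic polynomials (highest degree coefficient first):
#   rho_2(z)   = (3/2) z^2 - 2 z + 1/2
#   sigma_2(z) = z^2
rho   = np.array([1.5, -2.0, 0.5])
sigma = np.array([1.0,  0.0, 0.0])

def roots_inside(q):
    """Number of roots of pi(z, q) = rho(z) - q*sigma(z) inside |z| < 1."""
    return int(np.sum(np.abs(np.roots(rho - q * sigma)) < 1))

# Sample values of q with Re q < 0:
samples = [-0.1, -1.0, -10.0, -1.0 + 5.0j, -0.01 + 100.0j]
counts = [roots_inside(q) for q in samples]
print(counts)   # [2, 2, 2, 2, 2] -- type (2, 0, 0) at every sample
```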

    Although the locution "well-conditioning" would be more appropriate when dealing with boundary value problems, we shall continue to use the term stability, as customary. It is known that the well-conditioning of a linear boundary value problem, either continuous or discrete, is related to the so-called dichotomy. In the discrete case it essentially states that the number of initial conditions should be equal to the number of roots of the characteristic polynomial inside the unit circle and, of course, the number of conditions at the end of the interval of integration should be equal to the number of roots outside the unit circle. Then, supposing that k1 is the number of the

    When I bought the company from Comsat, both the mainframe version and the PC version were unstable, and models such as T-junctions, crosses and others were fairly inaccurate at higher frequencies. Becoming a partner in the DARPA MIMIC program, and mostly financed from earnings rather than DARPA money, these things were changed by developing EM-based models, verified by Raytheon and Texas Instruments.

    The most significant contributions were:

    N-dimensional nodal noise analysis for all types of both linear and nonlinear circuits,

    including oscillators, mixers and amplifiers under large-signal conditions.
