
IEEE TRANSACTIONS ON COMPUTERS, VOL. C-24, NO. 6, JUNE 1975

A Least Mean Squares CUBIC Algorithm for On-Line Differentiation of Sampled Analog Signals

JOHN H. J. ALLUM

Abstract-A digital computer algorithm is developed for on-line time differentiation of sampled analog voltage signals. The derivative is obtained by employing a least mean squares technique. The recursive algorithm results in a considerable reduction in computer time compared to a complete new solution of the normal equations each time a new data point is accepted. Implementation of the algorithm on a digital computer is discussed. Examples are simulated on a DEC PDP-8 computer.

Index Terms-Digital algorithms, digital differentiation, digital simulation, least mean squares approximation.

INTRODUCTION

THE TIME DERIVATIVE (hereafter called the velocity) of an analog signal or continuous time function (hereafter called the position) is of interest in many fields of research. This information can be obtained by a variety of techniques. The technique used in this paper involves least mean squares fitting of a cubic time function through a number of successive samples of the input position function and calculating the slope of the curve at the time of the last sample.

Data points are fitted to a cubic time function by employing a recursive approach to calculate the sums of time and position products required for the minimization of the error function. The advantage of this approach, compared to a complete recalculation of these sums each time a new data point is accepted, is a considerable reduction in computer time, of approximately [(N - 2)/N] x 100 percent, where N is the number of data points being fitted.

High-pass filters such as

as/(s + a)     (1)

or

a²s/(s + a)²     (2)

where s is the Laplacian operator, are commonly employed

Manuscript received January 11, 1973; revised October 1, 1973. This work was supported in part by NASA Grant NGR-22-009-025 and completed at the Man-Vehicle Laboratory, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, Mass. 02139.

The author is with the Neurologische Klinik mit Abteilung für Neurophysiologie der Universität Freiburg, Freiburg, Germany.

to achieve approximate analog differentiation over a limited frequency range (below a rad/s) [1]. The first approach (1) has a major disadvantage in that it is more sensitive to noise in the signal than to the signal itself (of frequency content less than a). Noise is more attenuated in the second approach (2), but the usable bandwidth is reduced by an octave. Thus it would be highly desirable if measurement and control systems could employ an on-line digital differentiation technique which at least duplicates the low frequency sensitivity of analog differentiators and at the same time improves the signal-to-noise ratio of the differentiated signal. The digital differentiation described herein has a phase error comparable to analog methods; however, the bandwidth extends completely to a rad/s, and the rolloff beyond a rad/s is third order.

THE CUBIC ALGORITHM

The algorithm fits a curve of the form

x = a0 + a1·t + a2·t² + a3·t³,

through the last N data points by minimizing the error function

e² = Σ_(i=-(N-1))^(0) (x_i - a3·t_i³ - a2·t_i² - a1·t_i - a0)²,

with respect to a3, a2, a1, and a0. The derivative of the curve is

dx/dt = a1^n + 2a2^n·t + 3a3^n·t²,

where the superscript n denotes a value calculated at the last or nth sample point. If the last sample point is made equivalent to time zero, then the velocity at the last data point is a1^n.
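As a quick check of this idea, a minimal batch (non-recursive) sketch follows, assuming NumPy; the function and variable names are illustrative and not from the paper.

```python
# Fit a cubic through the last N samples with t = 0 at the newest sample and
# read the velocity off as the coefficient of t.
import numpy as np

def cubic_velocity_batch(x_window, dt):
    """x_window: last N samples, oldest first; returns the slope a1 at t = 0."""
    N = len(x_window)
    t = np.arange(-(N - 1), 1) * dt           # t = -(N-1)*dt, ..., -dt, 0
    a3, a2, a1, a0 = np.polyfit(t, x_window, deg=3)
    return a1

# Example: 33 samples of sin(2*pi*t) taken every 6.25 ms; the true derivative
# at the last sample (t = 0.2 s) is 2*pi*cos(0.4*pi), roughly 1.94.
dt = 0.00625
ts = np.arange(33) * dt
print(cubic_velocity_batch(np.sin(2 * np.pi * ts), dt))
```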

Minimizing the error function yields the matrix equation (the normal equations),

[ N    T1   T2   T3 ] [ a0^n ]   [ A^n ]
[ T1   T2   T3   T4 ] [ a1^n ] = [ B^n ]     (3)
[ T2   T3   T4   T5 ] [ a2^n ]   [ C^n ]
[ T3   T4   T5   T6 ] [ a3^n ]   [ E^n ]

where

T_k = Σ_(i=-(N-1))^(0) (iΔt)^k,     k = 1, 2, ..., 6,

and Δt is the sampling interval. A^n, B^n, C^n, and E^n are sums of time and position products, viz.,

A^n = Σ_(i=-(N-1))^(0) x_i;          B^n = Σ_(i=-(N-1))^(0) x_i·i·Δt;

C^n = Σ_(i=-(N-1))^(0) x_i·i²·Δt²;   E^n = Σ_(i=-(N-1))^(0) x_i·i³·Δt³.

Solving the matrix equation (3) for a1^n gives

a1^n = [α1·A^n + α2·B^n/Δt + α3·C^n/Δt² + α4·E^n/Δt³]·(1/Δt).

The coefficients α_i are computed as (-1)^i times the determinant of the matrix in (3) with row i and column 2 eliminated, divided by the determinant of the matrix in (3).
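Equivalently, the normal equations (3) can be assembled and solved directly; a short sketch follows, assuming NumPy (the names are illustrative, and a general linear solve is used in place of the cofactor evaluation described above).

```python
import numpy as np

def velocity_from_normal_equations(x_window, dt):
    """x_window: last N samples, oldest first; returns a1^n, the slope at t = 0."""
    x = np.asarray(x_window, dtype=float)
    N = len(x)
    i = np.arange(-(N - 1), 1)                        # i = -(N-1), ..., 0
    T = [np.sum((i * dt) ** k) for k in range(7)]     # T0 = N, T1, ..., T6
    M = np.array([[T[r + c] for c in range(4)] for r in range(4)])   # matrix of (3)
    rhs = np.array([np.sum(x * (i * dt) ** k) for k in range(4)])    # A, B, C, E
    a0, a1, a2, a3 = np.linalg.solve(M, rhs)
    return a1
```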

It remains to derive the recursion formulas for A^n, B^n, C^n, and E^n in a form that may be updated between sample times to produce a velocity estimate. For clarity, N = 11 will be assumed and, for brevity, only C^n will be fully derived. The expression for C^n is

C^n = Σ_(i=-(N-1))^(0) x_i·i²·Δt².

Thus

C^n = (10²·x_(n-10) + 9²·x_(n-9) + 8²·x_(n-8) + ... + 2²·x_(n-2) + x_(n-1))·Δt²,

and

C^(n+1) = (10²·x_(n-9) + 9²·x_(n-8) + ... + 2²·x_(n-1) + x_n)·Δt²,

where x_n is the nth and most recent sample. Hence

C^(n+1) - C^n = (-10²·x_(n-10) + 19·x_(n-9) + 17·x_(n-8) + ... + 3·x_(n-1) + x_n)·Δt².

A new variable D^n is defined such that

D^n = 19·x_(n-9) + 17·x_(n-8) + ... + 3·x_(n-1) + x_n,

and

D^(n-1) = 19·x_(n-10) + 17·x_(n-9) + ... + 3·x_(n-2) + x_(n-1).

Hence

D^n - D^(n-1) = -19·x_(n-10) + 2·(x_(n-9) + x_(n-8) + ... + x_(n-1)) + x_n.

Now

A^n = Σ_(i=-(N-1))^(0) x_i,

therefore

D^n = D^(n-1) - 21·x_(n-10) + 2·A^n - x_n,

and

C^(n+1) = C^n + (D^n - 10²·x_(n-10))·Δt².

A^n, B^n, and E^n may be similarly obtained as recursion formulas. Hence, by induction, the complete set of recursion formulas is

A^(n+1) = A^n - x_(n-(N-1)) + x_(n+1),

B^(n+1) = B^n - (A^n - N·x_(n-(N-1)))·Δt,

C^(n+1) = C^n + (D^n - (N-1)²·x_(n-(N-1)))·Δt²,

D^n = D^(n-1) - (2N - 1)·x_(n-(N-1)) + 2·A^n - x_n,

E^(n+1) = E^n + ((N-1)³·x_(n-(N-1)) - 2·C^(n+1)/Δt² + D^n - G^n)·Δt³,

and

G^n = G^(n-1) - (N-1)(N-2)·x_(n-(N-1)) + D^n - A^n + x_(n-(N-1)).

G^n is an intermediate recursion formula required to calculate E^n, in the same manner as D^n is needed for C^n.
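The sketch below (assuming NumPy; the class and variable names are illustrative, not the paper's PDP-8 routine) strings these recursion formulas together with the normal-equation solve, so that each new sample updates the sums in a fixed number of operations rather than by re-summing the whole N-point window.

```python
from collections import deque
import numpy as np

class RecursiveCubicDifferentiator:
    """On-line cubic least mean squares differentiation over the last N samples."""

    def __init__(self, N, dt):
        self.N, self.dt = N, dt
        self.win = deque(maxlen=N)                     # last N samples, oldest first
        i = np.arange(-(N - 1), 1)
        self.T = np.array([[np.sum((i * dt) ** (r + c)) for c in range(4)]
                           for r in range(4)])         # matrix of the normal equations (3)
        self.A = self.B = self.C = self.D = self.E = self.G = 0.0

    def _init_sums(self):
        # One-off direct evaluation of the sums when the window first fills.
        N, dt = self.N, self.dt
        i = np.arange(-(N - 1), 1)
        j = -i                                         # "age" of each sample, 0 = newest
        x = np.array(self.win)
        self.A = np.sum(x)
        self.B = np.sum(x * i) * dt
        self.C = np.sum(x * i**2) * dt**2
        self.E = np.sum(x * i**3) * dt**3
        recent = j <= N - 2                            # D and G exclude the oldest point
        self.D = np.sum((2 * j + 1) * x * recent)
        self.G = np.sum(j * (j + 1) * x * recent)

    def _velocity(self):
        a = np.linalg.solve(self.T, [self.A, self.B, self.C, self.E])
        return a[1]                                    # a1^n: slope at the newest sample

    def update(self, x_new):
        """Accept one sample; return the velocity estimate, or None while filling."""
        N, dt = self.N, self.dt
        if len(self.win) < N:                          # window not yet full
            self.win.append(x_new)
            if len(self.win) == N:
                self._init_sums()
                return self._velocity()
            return None
        x_drop = self.win[0]                           # x_(n-(N-1)), the sample dropped
        self.win.append(x_new)                         # deque discards x_drop
        A1 = self.A - x_drop + x_new
        B1 = self.B - (self.A - N * x_drop) * dt
        C1 = self.C + (self.D - (N - 1)**2 * x_drop) * dt**2
        E1 = self.E + ((N - 1)**3 * x_drop - 2 * C1 / dt**2 + self.D - self.G) * dt**3
        x_old = self.win[0]                            # oldest sample of the new window
        D1 = self.D - (2 * N - 1) * x_old + 2 * A1 - x_new
        G1 = self.G - (N - 1) * (N - 2) * x_old + D1 - A1 + x_old
        self.A, self.B, self.C, self.D, self.E, self.G = A1, B1, C1, D1, E1, G1
        return self._velocity()
```

Feeding the same window to the batch routine sketched earlier should reproduce each recursive result up to rounding, which is a convenient way to test the updates.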

PRACTICAL CONSIDERATIONS FOR A VELOCITY ALGORITHM

Three different differentiation techniques were appraised by the author, of which the least mean squares cubic algorithm was the best compromise between computation time and noise in the differentiated signal. The techniques were 1) a six-point Lagrangian differentiation method [2], [3], 2) least mean squares approximation techniques [4], and 3) an analog pseudo-differentiator of the form as/(s + a) [1].

A method using a six-point Lagrangian differentiation formula requires a short and simple digital program to implement; however, the noise level of the velocity output is so high that the differentiated signal is not readily discernible even at frequencies in the range 0.1 to 1 Hz.

The accuracy of a least mean squares differentiator over a desired frequency range depends on several factors: the number of position points sampled over the time period of the fit; the degree of the power series

x(t) = Σ_(i=0)^(M) a_i·t^i,

which is the least mean squares approximation of the position signal; the highest frequency for which the velocity is required; and the sampling rate.

If the data interval is fixed, an increase in the number of sampled points in this interval will reduce the error of the approximation. Reducing the data interval, but keeping the number of sampled points fixed, will decrease the error of the fitted function if there is no measurement noise. In the presence of additive noise this effect is equivalent to a smoothing filter whose cutoff frequency is progressively moved to higher frequencies. A reasonable criterion for the number of points to choose is that the power series approximation should be able to fit a complete period of a sine wave. If the upper frequency limit required for the differentiator is fixed, then the number of sampled points is given by NΔt = T, where T is the period of the upper frequency limit. The program cycle time for the least mean squares parabolic, cubic, and quartic fit algorithms is approximately 3-5 ms, respectively, on a DEC PDP-8 digital computer whose operation cycle time is 1.5 μs. Further, the simplest choice for N from the computational standpoint is 2^r + 1, where r is an integer. Thus, if the upper frequency limit is 5 Hz, suitable choices for N and Δt, the sample interval, are 33 and 6.25 ms. The sample interval of 6.25 ms also satisfies the Nyquist sampling criterion [5].

Fig. 1. Fits to sine waves for various degrees of power series approximation.

The question of the degree of the power series approximation chosen for the least mean squares approximation is answered in Fig. 1. At 5 Hz the error in the velocity estimate is drastically reduced as the fit is changed from parabolic (M = 2) to cubic (M = 3); this reduction is coincident with a reduction in the mean square deviation of the fit (0.484 to 0.079). Quartic fits, however, yield little advantage over cubic fits for sinusoidal signals. This reduction in error is less clearly seen at 2 Hz. Again, though, the quartic fit has little merit over the cubic. There is a disadvantage in both increased core space requirements and program cycle time (the latter causes an increase of the one sampling period delay of the velocity output) if a fifth-degree (pentic) fit is employed, though it is more accurate than the cubic fit at 5 Hz. This latter inaccuracy can be mitigated in a manner to be described below. Given the operation cycle time of a DEC PDP-8 and a frequency limit of 5 Hz, the use of a least mean squares cubic differentiator is, in a time and accuracy sense, "optimal" over all degrees of approximation.

The performance of a least mean squares cubic differentiator

is described by the transfer function

1.038·s·(s/15.7 + 1)/(s/44 + 1)^5     (4)

in the region dc to 15 Hz, and is not dependent upon input position amplitude for this frequency range. The transfer function (4) was hand fitted by asymptotic approximation to a Bode plot of the digital differentiator [6]. The lead at 15.7 rad/s in (4) accounts for the amplification of velocity at 5 Hz when the cubic fit algorithm is employed. The correction factor required to produce pure differentiation in the range dc to 8 Hz is

0.972/(s/15.7 + 1).

In the current method, the correction factor was attained in two stages. For the first stage, attenuation by 0.972 was realized with a multiplication.

For the second stage, the correction of lead above 2.5 Hz was effected with a digital lead-lag filter

(s/47.1 + 1)(s/44 + 1) / [(s/15.7 + 1)(s/88 + 1)],

which causes a small change in the velocity output phase (less than 10°).

The Bode magnitude and phase plots for the complete digital program realization of the cubic algorithm, CUBIC, are shown in Figs. 2 and 3, respectively. These plots include the correction procedures described in the last two paragraphs above. The -3 dB point, with respect to a pure differentiator, of the magnitude plot is at 9 Hz, whereas the 45° phase point is at 4.5 Hz, after correction for the digital delay; at frequencies higher than these points the least mean squares cubic fit technique fails to act as a differentiator.

Fig. 2. Bode magnitude plot for CUBIC.

Fig. 3. Bode phase error plot for CUBIC.
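A short sketch of how the two correction stages might be realized digitally follows, assuming SciPy; the 160 Hz rate corresponds to the 6.25 ms sample interval chosen earlier, the factor grouping of the lead-lag filter follows the reconstruction above, and the 0.972 gain is folded into the filter for brevity.

```python
import numpy as np
from scipy import signal

# Continuous-time correction: 0.972 (s/47.1 + 1)(s/44 + 1) / [(s/15.7 + 1)(s/88 + 1)]
num = 0.972 * np.polymul([1 / 47.1, 1.0], [1 / 44.0, 1.0])
den = np.polymul([1 / 15.7, 1.0], [1 / 88.0, 1.0])

bz, az = signal.bilinear(num, den, fs=160.0)        # discretize at the 6.25 ms rate

def correct_velocity(raw_velocity):
    """Apply the gain and lead-lag correction to the raw velocity output."""
    return signal.lfilter(bz, az, raw_velocity)
```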

In some applications of CUBIC a very smooth response was desired, but extra phase lag was not crucial. For these applications the digital differentiator output was filtered by a digital state space realization of a double lag filter,

1/(s/β + 1)²,

where β, the lag constant, was set automatically every 1.6 s at 8 times the frequency of the digital differentiator input waveform (position). An estimate of the current fundamental frequency was obtained by dividing the peak velocity output by the peak position input; both peaks were assumed to have occurred during the previous 1.6 s. The describing function (4) attenuates above 8 Hz, hence this method of nonlinear filtering produces an even sharper attenuation above 8 Hz. From dc to 8 Hz the nonlinear filter produces an added phase lag of 10°.

Fig. 4. Example 1 of digital and analog differentiation. (a) and (c) 1 Hz position input. (b) Pseudo-differentiator (as/(s + a)) output. (d) CUBIC output. All calibration marks except (d) are 1 V; (d) is 5 V.
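A rough sketch of the adaptive double-lag smoothing described before Fig. 4 is given below, assuming SciPy; the 1.6 s update interval, the factor of 8, and the peak-ratio frequency estimate come from the text, while the block structure, the names, and the use of rad/s for the estimate are illustrative assumptions.

```python
import numpy as np
from scipy import signal

def adaptive_double_lag(position, velocity, fs, window_s=1.6):
    """Smooth the velocity with 1/(s/beta + 1)^2, re-tuning beta every window_s seconds."""
    out, block = [], int(window_s * fs)
    zi = np.zeros(2)                                   # filter state, carried across blocks
    for k in range(0, len(velocity), block):
        pos = position[k:k + block]
        vel = velocity[k:k + block]
        # For a sine input, peak velocity / peak position ~ angular frequency (rad/s).
        omega_est = np.max(np.abs(vel)) / max(np.max(np.abs(pos)), 1e-12)
        beta = 8.0 * omega_est                         # lag constant: 8 x input frequency
        bz, az = signal.bilinear([beta**2], [1.0, 2.0 * beta, beta**2], fs)
        y, zi = signal.lfilter(bz, az, vel, zi=zi)     # state reused as beta changes (approximate)
        out.append(y)
    return np.concatenate(out)
```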

Pseudo-differentiation on an analog computer is frequently employed to obtain the time derivative of an analog signal. The transfer function used,

as/(s + a),

involves two conflicts: smooth responses are obtained if the constant a is small, but an accurate response is obtained if a is large. To compare CUBIC with an equivalent pseudo-differentiator, a was chosen midway between the -3 dB point and the 45° phase point of CUBIC, that is, a = 42.5.

Two examples of on-line differentiation using CUBIC and a pseudo-differentiator are shown in Figs. 4 and 5. Sine waves of fixed frequency were used as the position signal input to an analog-to-digital converter whose output was accepted by CUBIC, and as the pseudo-differentiator input. In both examples CUBIC produced a smooth, accurate magnitude estimate of the time derivative; the phase is underestimated as described in Fig. 3. The pseudo-differentiator has an input amplitude limitation in that no component in its circuit may carry a signal which exceeds 10 V in magnitude. The input signals used in Figs. 4 and 5 are close to this limitation.

Fig. 5. Example 2 of digital and analog differentiation. (a) 0.1 Hz position input. (b) Pseudo-differentiator (as/(s + a)) output. (c) CUBIC output including double lag filter β²/(s + β)². All calibration marks are 1 V.

The digital routine, CUBIC, has been employed in a wide range of biomedical research in the Man-Vehicle Laboratory at the Massachusetts Institute of Technology. One of its current uses is in the on-line calculation of slow phase velocity from raw eye movements during vestibular, optokinetic, and caloric nystagmus [7]. CUBIC is available from the DECUS program library [8].

ACKNOWLEDGMENT

The author wishes to acknowledge the helpful advice provided by his colleagues Prof. C. M. Oman and Dr. S. Yasui, both of the Man-Vehicle Laboratory at the Massachusetts Institute of Technology, and P. B. Mirchandani of the Operations Research Center at the Massachusetts Institute of Technology.

REFERENCES

[1] E. O. Doebelin, Measurement Systems: Application and Design. New York: McGraw-Hill, 1966, p. 637.
[2] M. Abramowitz and I. A. Stegun, Eds., Handbook of Mathematical Functions (Applied Mathematics Series 55). Washington, D.C.: NBS, 1964, p. 882.
[3] W. G. Bickley, "Formulae for numerical differentiation," Math. Gaz., vol. 25, pp. 19-27, 1941.
[4] F. B. Hildebrand, Introduction to Numerical Analysis. New York: McGraw-Hill, 1956, p. 259.
[5] E. Jury, Sampled-Data Control Systems. New York: Wiley, 1958.
[6] R. N. Clark, Introduction to Automatic Control Systems. New York: Wiley, 1962.
[7] J. H. J. Allum, J. R. Tole, and A. D. Weiss, "MITNYS-II-a digital program for on-line analysis of nystagmus," to be published.
[8] J. H. J. Allum, "CUBIC-a digital program for on-line differentiation of sampled analog signals," DECUS Program Library, Digital Equipment Corp., Maynard, Mass., Rep. DECUS 8-559, 1973.

John H. J. Allum received the B.Sc. degree from Birmingham University, Birmingham, England, in 1965 and the S.M. degree from the Massachusetts Institute of Technology, Cambridge, in 1968. He held a Kennedy Scholarship from 1966 until 1968.

From 1968 until August 1973 he was a Research Assistant at the Man-Vehicle Laboratory, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, where he completed and defended experimental work on his doctoral thesis, "The dynamic response of the human neuromuscular system for internal-external rotation of the humerus." Since August 1973 he has worked at the Neurologische Universitätsklinik mit Abteilung für Neurophysiologie, Freiburg, Germany, on problems of visual-vestibular interactions in man and the vestibular neurons of goldfish.

Function Approximation by Walsh Series

CHUNG-KWONG YUEN

Abstract-Function approximation by a finite Walsh series is considered. There are two methods for selecting the terms of a series. The process of threshold sampling gives a least-square error approximation, but no error analysis technique is available. However, error analysis is possible if terms are selected according to degrees and subdegrees. It is shown that truncation is equivalent to dropping all terms with degrees greater than some amount. The error caused is a weighted integral of the first derivative, and an upper bound on the expression can be derived. It is also shown that a truncated Walsh series corresponds to a simple function table. Data compression is equivalent to dropping terms with large enough subdegrees, with an estimable error. After a Walsh series has been selected, it is possible to modify the coefficients using Lawson's algorithm and reduce the maximum error. Computed examples are shown to conform to our theoretical analysis. While Walsh series are unsuitable for writing subroutines, hardware implementation of Walsh series for low-precision function evaluation is promising.

Index Terms-Approximation, fast Walsh transform, function generation, orthogonal series, Walsh functions.

Manuscript received November 14, 1973; revised November 12, 1974.
The author is with the Computer Centre, Australian National University, Canberra, A.C.T., Australia.

I. INTRODUCTION

FUNCTION APPROXIMATION by Walsh series is one of the earliest applications of Walsh functions suggested by pioneering workers. It was studied by Polyak and Schreider in 1962 (republished in 1966 as [1]). This work contains a number of important results on the properties of Walsh series, but on the whole fails to present Walsh series as a practical tool for function approximation. One of the difficulties Polyak and Schreider encountered was the lack of error analysis techniques. This problem has been to a great extent solved by more recent efforts. A first step in this direction was an integration by parts method [2] which can be used to find upper bounds on Walsh transforms of a function in terms of its derivatives [3]. This is then extended to obtain upper bounds on certain partial sums of a Walsh series. Some results were presented in a brief conference paper [4].
