NONLINEAR REGRESSION METHODS - stat.ncsu.edu

NONLINEAR REGRESSION METHODS

by

A. RONALD GALLANT

Institute of Statistics
Mimeograph Series No. 890
Raleigh, September 1973

ABSTRACT

The Modified Gauss-Newton method for finding the least squares estimates of the parameters appearing in a nonlinear regression model is described. A description of software implementing the method that is available to TUCC users is included.

A sequence of responses y_t to inputs x_t is assumed to be generated according to the nonlinear regression model

    y_t = f(x_t, θ) + e_t    (t = 1, 2, ..., n),

where the input x_t is k-dimensional and the unknown parameter

    θ = (θ_1, θ_2, ..., θ_p)'

is p-dimensional. The least squares estimator θ̂ is that value of θ which minimizes

    SSE(θ) = Σ (t=1 to n) [y_t - f(x_t, θ)]².

The following vector and matrix notation will be useful in describing the modified Gauss-Newton method.

    y = the n × 1 vector of responses whose t-th element is y_t,
    f(θ) = the n × 1 vector whose t-th element is f(x_t, θ),
    ∇f(x, θ) = the p × 1 vector whose j-th element is (∂/∂θ_j) f(x, θ),
    F(θ) = the n × p matrix whose t-th row is ∇f'(x_t, θ).

To illustrate, consider the 50 responses y_t and inputs x_t listed in Exhibit I. We assume that these observations follow the model

    y_t = θ_1 exp(θ_2 x_t) + e_t    (t = 1, 2, ..., 50),

so that p = 2 and k = 1. The vectors and matrices introduced above are, for this example,

    y = (.824, .515, .560, ..., .576)'    (50 × 1).

Continuing the example,

    f(θ) = the 50 × 1 vector whose t-th element is θ_1 exp(θ_2 x_t),
    F(θ) = the 50 × 2 matrix whose t-th row is ( exp(θ_2 x_t), θ_1 x_t exp(θ_2 x_t) ),

evaluated at the inputs x_1 = .883, x_2 = .249, ..., x_50 = .515.

The modified Gauss-Newton (Hartley, 1961) algorithm proceeds as follows.

0) Find a start value θ_0; methods of finding start values are discussed later. From the starting estimate θ_0 compute

    D_0 = [F'(θ_0) F(θ_0)]^(-1) F'(θ_0) [y - f(θ_0)].

Find a λ_0 between 0 and 1 such that

    SSE(θ_0 + λ_0 D_0) < SSE(θ_0).

1) Let θ_1 = θ_0 + λ_0 D_0. Compute

    D_1 = [F'(θ_1) F(θ_1)]^(-1) F'(θ_1) [y - f(θ_1)]

and find a λ_1 between 0 and 1 such that

    SSE(θ_1 + λ_1 D_1) < SSE(θ_1).

2) Let θ_2 = θ_1 + λ_1 D_1, and so on.
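The iteration just described can be sketched in a modern language. The following Python sketch is not part of the original report (which uses FORTRAN); the function names, the numpy least squares solver, and the step-search constants are this sketch's own choices, with the step search of a later section folded in.

```python
import numpy as np

def modified_gauss_newton(f, grad, x, y, theta0, n_iter=25, a=0.6, max_halvings=40):
    """Sketch of Hartley's modified Gauss-Newton iteration.

    f(xt, theta) -> scalar response, grad(xt, theta) -> p-vector of
    partial derivatives; x is an (n, k) array of inputs, y an (n,)
    array of responses, theta0 a start value of length p.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = y - np.array([f(xt, theta) for xt in x])   # residuals y - f(theta)
        F = np.array([grad(xt, theta) for xt in x])    # n x p matrix F(theta)
        D = np.linalg.lstsq(F, r, rcond=None)[0]       # D = (F'F)^{-1} F' r
        sse = float(r @ r)
        lam, improved = 1.0, False
        for _ in range(max_halvings + 1):              # try lam = 1, a, a^2, ...
            trial = theta + lam * D
            r_trial = y - np.array([f(xt, trial) for xt in x])
            if float(r_trial @ r_trial) < sse:         # accept first improving step
                theta, improved = trial, True
                break
            lam *= a
        if not improved:                               # no step improves: stop
            break
    return theta
```

On exact data from the exponential model the sketch recovers the generating parameters to machine precision within a handful of steps.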

These iterations are continued until terminated according to some stopping rule; stopping rules are discussed later.

Hartley's (1961) paper sets forth assumptions such that these iterations will converge to θ̂. Slightly weaker assumptions (Gallant, 1971) are listed in Appendix I. As users of iterative methods are well aware, a mathematical proof of convergence is no guarantee that convergence will obtain on a computer. In this author's experience, convergence fails for two reasons: i) The model chosen does not fit the data, or ii) Poor start values are chosen.

When the inputs are scalars (k = 1) the first difficulty can be avoided. Plot the data. If your visual impression of the plotted data differs from the regression model chosen, expect difficulties. When the inputs are vector valued (k > 1) the same considerations apply but are more difficult to verify.

The choice of start values is entirely an ad hoc process. They may be obtained from prior knowledge of the situation, inspection of the data, grid search, or trial and error. For examples, see Gallant (1968) and Gallant and Fuller (1973). A more general approach to finding start values is given by Hartley and Booker (1965). A simpler variant of their idea is the following. Select p of the responses y_{N_i} and inputs x_{N_i} and solve the p equations

    y_{N_i} = f(x_{N_i}, θ)    (i = 1, 2, ..., p)

for θ; the solution serves as the start value θ_0.
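For the exponential model used in this report, two such equations have a closed-form solution. The following Python sketch is illustrative and not from the report; the helper name and the numeric pair in the test are this sketch's own.

```python
import math

def exp_start_values(x1, y1, x2, y2):
    """Start values for the model y = theta1 * exp(theta2 * x).

    Solving theta1*exp(theta2*x1) = y1 and theta1*exp(theta2*x2) = y2
    gives theta2 = log(y2/y1)/(x2 - x1) and theta1 = y1*exp(-theta2*x1).
    Requires x1 != x2 and positive responses.
    """
    theta2 = math.log(y2 / y1) / (x2 - x1)
    theta1 = y1 * math.exp(-theta2 * x1)
    return theta1, theta2
```

Feeding the routine two points that lie exactly on a known exponential curve returns that curve's parameters.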

Illustrating with our example, we will choose the observations having the largest and smallest inputs, obtaining two equations in θ_1 and θ_2. The solution of these equations is

    θ_0 = (.444, .823)'.

There are several ways of choosing λ_i at each iterative step. Hartley (1961) suggests two methods in his paper. In this author's experience, it doesn't make very much difference how one chooses λ_i. What is important is that the computer program check that the condition

    SSE(θ_i + λ_i D_i) < SSE(θ_i)

is satisfied before taking the next iterative step. In an intermediate step in Hartley's (1961) proof one sees that there is an ε > 0 such that every λ between 0 and ε satisfies

    SSE(θ_i + λ D_i) < SSE(θ_i).

For this reason, the author prefers the following method of choosing λ_i. For some a between 0 and 1, say a = .6, successively check the values

    λ = a^j    (j = 0, 1, ...)

and choose λ_i to be the largest a^j such that

    SSE(θ_i + a^j D_i) < SSE(θ_i).
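This step-search rule can be sketched as follows. The Python below is a sketch, not the report's program; the function names are hypothetical, though the cap of 40 trial powers mirrors the error codes documented for the report's subroutine in Exhibit II.

```python
def choose_lambda(sse, theta, D, a=0.6, max_j=40):
    """Return the largest a**j, j = 0, 1, ..., max_j, that reduces SSE.

    sse(theta) -> residual sum of squares at a parameter vector.
    Returns None when no power of a improves on theta, which is the
    signal to stop iterating.
    """
    base = sse(theta)
    lam = 1.0
    for _ in range(max_j + 1):
        trial = [t + lam * d for t, d in zip(theta, D)]
        if sse(trial) < base:      # first (largest) improving step wins
            return lam
        lam *= a
    return None
```

With an overshooting correction the rule backs off geometrically; at a point no step can improve, it returns None.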

Page 8: NONLINEAR REGRESSION METHOW - stat.ncsu.edu

. There aJ:'e a variety of stopping rules or tests ·:for. convergence

tolerance € > 0 and terminate when

II k - ~+l II s € II ~ II

no Ai satisfying the

limits of themachinee This situation is discussed

paragraph•

to decide when to terminate the iterations. For example, one might

are printed out for each iteration until either this limit is reached

The author's preference is not to use a stopping rule other than some pre-

andSSE(A.). ~ .

The values of ,f!i

A can be found to improve 8.. The observations, predicted values,I"VJ.. .

that

chosen limit on.the number of·iteratives.

residuals from this last iteration are printed out as well.

iterations are identical to seven significant ·digitsandthe predicted values

(The symbol II ~ II denotes the Euclidean norm; II ~ 1/ = (Iii=l

one situation where one must stop. This is when no A can be

and residuals indicate that these values are acceptable they are used. A

further check is to try another start value and see if the same answers are

obtained.,..

The value of this last iteration will be taken as S

iteration are computed

From this last
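The relative-change test above might be coded as in this minimal Python sketch (the function name and default tolerance are this sketch's own, not the report's):

```python
import math

def small_relative_change(theta_old, theta_new, eps=1e-7):
    """Stopping test: ||theta_old - theta_new|| <= eps * ||theta_old||,
    with ||.|| the Euclidean norm."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(theta_old, theta_new)))
    norm = math.sqrt(sum(a * a for a in theta_old))
    return diff <= eps * norm
```

A relative test of this kind is scale-free: multiplying every parameter by a constant does not change the verdict.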

Under the Assumptions listed in Appendix II, the least squares estimator θ̂ is approximately normally distributed with mean θ and variance-covariance matrix σ̂² Ĉ, where Ĉ = [F'(θ̂) F(θ̂)]^(-1) and

    σ̂² = (1/n) SSE(θ̂).

A FORTRAN subroutine, DMGN, is available to TUCC users which will perform one modified Gauss-Newton iterative step. The user is free to handle his own input, output, and stopping rules. The documentation is displayed as Exhibit II.

Source coding which will handle input and output is shown in Exhibit III; its documentation is shown in Exhibit IV. Users at NCSU, Duke, and UNC may obtain the source deck by calling the author. The requisite JCL is as follows:

    //NONLIN   JOB  xxx.yyy.zzzz,programmer-name
    //STEP1    EXEC FTGCG
    //C.SYSIN  DD *
         source code
    //G.SYSLIB DD DSN=NCS.ES.B4139.GALLANT.GALLANT,DISP=SHR
    //         DD DSN=SYS1.FORTLIB,DISP=SHR
    //         DD DSN=SYS1.SUBLIB,DISP=SHR
    //G.SYSIN  DD *
         data
    //

The exponential example we have been considering will be used to illustrate the use of this program. Assume that the data of Exhibit I have been punched one observation per card according to the format 2F10.3 used by subroutine INPUT (Exhibit V).
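The quantities σ̂² and σ̂² Ĉ can be sketched as follows, assuming F holds the n × p matrix of partial derivatives evaluated at the least squares estimate. This Python sketch (numpy assumed; the function name is this sketch's own, not the report's FORTRAN) also extracts the standard errors that the program prints.

```python
import numpy as np

def covariance_summary(F, residuals):
    """Estimated variance and covariance at the least squares estimate.

    var = SSE/n, cov = var * (F'F)^{-1}, se = sqrt of cov's diagonal,
    where F is the n x p matrix of partials at theta-hat and residuals
    are y - f(theta-hat).
    """
    n = len(residuals)
    var = float(residuals @ residuals) / n        # sigma^2-hat = SSE/n
    cov = var * np.linalg.inv(F.T @ F)            # sigma^2-hat * C-hat
    se = np.sqrt(np.diag(cov))                    # printed standard errors
    return var, cov, se
```

Dividing each estimate by its standard error gives the t-statistics shown in the program's output.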

Subroutine INPUT is coded in Exhibit V and subroutine FUNCT is coded in Exhibit VI. The deck arrangement is as follows:

    //NONLIN   JOB  xxx.yyy.zzzzz,programmer-name
    //STEP1    EXEC FTGCG
    //C.SYSIN  DD *
         source deck
         subroutine INPUT deck
         subroutine FUNCT deck
    //G.SYSLIB DD DSN=NCS.ES.B4139.GALLANT.GALLANT,DISP=SHR
    //         DD DSN=SYS1.FORTLIB,DISP=SHR
    //         DD DSN=SYS1.SUBLIB,DISP=SHR
    //G.SYSIN  DD *
         data cards
    //

Output for the example is shown in Exhibit VII. Note that the program prints a correlation matrix computed from Ĉ rather than printing Ĉ itself; the variance-covariance matrix σ̂² Ĉ can be recovered by using the standard errors printed with the estimates. Please report any errors in the documentation or difficulties with the program to the author.

Exhibit I. Observations

     t    y_t    x_t       t    y_t    x_t
     1   0.824  0.883     26   0.821  0.625
     2   0.515  0.249     27   0.764  0.629
     3   0.560  0.539     28   0.541  0.150
     4   0.949  0.995     29   0.478  0.236
     5   0.495  0.113     30   0.622  0.065
     6   0.874  0.722     31   0.510  0.260
     7   0.452  0.315     32   0.709  0.973
     8   0.482  0.393     33   0.629  0.495
     9   0.600  0.521     34   0.407  0.212
    10   0.660  0.535     35   0.654  0.815
    11   0.874  0.815     36   0.865  0.981
    12   0.554  0.629     37   0.562  0.553
    13   0.534  0.432     38   0.755  0.486
    14   0.775  0.931     39   0.844  0.931
    15   0.850  0.695     40   0.568  0.214
    16   0.698  0.792     41   0.982  0.902
    17   0.622  0.493     42   0.600  0.480
    18   0.626  0.830     43   0.610  0.764
    19   0.771  0.538     44   0.424  0.260
    20   0.616  0.753     45   0.639  0.682
    21   0.819  0.706     46   0.695  0.748
    22   0.741  0.410     47   0.507  0.345
    23   0.481  0.106     48   0.657  0.339
    24   0.736  0.939     49   0.993  0.929
    25   0.753  0.680     50   0.576  0.515

Exhibit II

DMGN    4/20/72

PURPOSE
    COMPUTE A MODIFIED GAUSS-NEWTON ITERATIVE STEP FOR THE
    REGRESSION MODEL YT = F(XT,THETA) + ET.

USAGE
    CALL DMGN(FUNCT,Y,X,T1,N,K,IP,T2,E,D,C,VAR,IER)

SUBROUTINES CALLED
    FUNCT, DMPRD, DSWEEP

ARGUMENTS
    FUNCT - USER WRITTEN SUBROUTINE CONCERNING THE FUNCTION TO BE
            FITTED. FOR AN INPUT VECTOR XT AND PARAMETER THETA,
            FUNCT SUPPLIES THE VALUE OF THE FUNCTION F(XT,THETA),
            STORED IN THE ARGUMENT VAL, AND THE PARTIAL DERIVATIVES
            WITH RESPECT TO THETA, STORED IN THE ARGUMENT DEL.
            SUBROUTINE FUNCT IS OF THE FORM:
                SUBROUTINE FUNCT(XT,THETA,VAL,DEL,ISW)
                REAL*8 XT(K),THETA(IP),VAL,DEL(IP)
                (STATEMENTS TO SUPPLY VAL)
                IF(ISW.EQ.1) RETURN
                (STATEMENTS TO SUPPLY DEL)
                RETURN
                END
            THIS STATEMENT MUST BE INCLUDED IN THE CALLING PROGRAM:
                EXTERNAL FUNCT
    Y     - AN N BY 1 VECTOR OF OBSERVATIONS.
            ELEMENTS OF Y ARE REAL*8.
    X     - A K BY N MATRIX CONTAINING THE INPUT K BY 1 VECTORS
            STORED AS COLUMNS OF X. COLUMN I OF X CONTAINS THE
            INPUT VECTOR CORRESPONDING TO OBSERVATION Y(I).
            STORED COLUMNWISE (STORAGE MODE 0).
            ELEMENTS OF X ARE REAL*8.
    T1    - INPUT IP BY 1 VECTOR CONTAINING THE START VALUE OF THETA
            OR THE VALUE COMPUTED IN THE PREVIOUS ITERATIVE STEP.
            ELEMENTS OF T1 ARE REAL*8.
    N     - NUMBER OF OBSERVATIONS. INTEGER.
    K     - DIMENSION OF AN INPUT VECTOR IN THE MODEL F(XT,THETA).
            INTEGER.
    IP    - NUMBER OF PARAMETERS IN THE MODEL F(XT,THETA). INTEGER.
    T2    - AN IP BY 1 VECTOR CONTAINING THE NEW ESTIMATE OF THETA.
            THE ELEMENTS OF T2 ARE REAL*8.
    E     - AN N BY 1 VECTOR OF RESIDUALS FROM THE MODEL WITH
            THETA=T1. ELEMENTS OF E ARE REAL*8.
    D     - AN IP BY 1 VECTOR CONTAINING AN UNMODIFIED GAUSS-NEWTON
            CORRECTION VECTOR. SET THETA=T1+D TO OBTAIN AN
            UNMODIFIED NEW ESTIMATE.
            ELEMENTS OF D ARE REAL*8.
    C     - AN IP BY IP MATRIX CONTAINING THE ESTIMATED VARIANCE-
            COVARIANCE MATRIX OF T1 PROVIDED T1 IS THE LEAST SQUARES
            ESTIMATE OF THETA. C=VAR*INVERSE(SUM(DEL*DEL')). STORED
            COLUMNWISE (STORAGE MODE 0).
            ELEMENTS OF C ARE REAL*8.
    VAR   - ESTIMATED VARIANCE OF OBSERVATIONS PROVIDED T1 IS THE
            LEAST SQUARES ESTIMATE. REAL*8.
    IER   - INTEGER ERROR PARAMETER CODED AS FOLLOWS:
            IER=0     NO ERROR.
            IER=1     T2=T1+V*D WHERE V=.6**L FAILED TO REDUCE THE
                      RESIDUAL SUM OF SQUARES FOR L=0,1,2,...,40.
            IER.GT.9  AN INVERSION ERROR OCCURRED; UNITS POSITION
                      OF IER HAS THE SAME MEANING AS ABOVE.

REFERENCE
    HARTLEY, H. O. THE MODIFIED GAUSS-NEWTON METHOD FOR THE FITTING
    OF NON-LINEAR REGRESSION FUNCTIONS BY LEAST SQUARES.
    TECHNOMETRICS, 3.

PROGRAMMER
    DR. A. RONALD GALLANT
    DEPARTMENT OF STATISTICS
    NORTH CAROLINA STATE UNIVERSITY
    RALEIGH, NORTH CAROLINA 27607

Exhibit III

      REAL*8 R(3000)
      ISW=1
      CALL INPUT(ISW,N,K,IP,ITER,R,R,R)
      M0=1
      M1=N+M0
      M2=K*N+M1
      M3=IP+M2
      M4=IP+M3
      M5=N+M4
      M6=IP+M5
      M7=IP*IP+M6
      M8=1+M7
      WRITE(3,1) M8
      WRITE(3,2)
      DO 10 I=1,M3
   10 R(I)=0.D0
      IOUT=2
      CALL NONLIN(R(M0),R(M1),R(M2),R(M3),R(M4),R(M5),R(M6),R(M7),
     *            N,K,IP,ITER,IOUT)
      STOP
      END
C
      SUBROUTINE NONLIN(Y,X,T1,T2,E,D,C,VAR,N,K,IP,ITER,IOUT)
      REAL*8 Y(N),X(K,N),T1(IP),T2(IP),E(N),D(IP),C(IP,IP),VAR
      ...
    1 FORMAT('1'////////36X,'* THE VECTOR R MUST BE DIMENSIONED AT',
     *' LEAST AS LARGE AS M8 =',I9,' *')
    2 FORMAT(' '////////36X,'* PLEASE REPORT ANY PROBLEMS WITH THIS',
     *' PROGRAM TO:'/36X,'* DR. A. RONALD GALLANT'/
     *36X,'* DEPARTMENT OF STATISTICS'/
     *36X,'* NORTH CAROLINA STATE UNIVERSITY'/
     *36X,'* RALEIGH, NORTH CAROLINA 27607'/
     *36X,'* (919) 737-2531')
      ...
      END

Exhibit IV

SOURCE DECK FOR MODIFIED GAUSS-NEWTON NONLINEAR ESTIMATION.
USER SUPPLIED SUBROUTINES INPUT AND FUNCT REQUIRED.

SUBROUTINE FUNCT IS DESCRIBED IN THE DOCUMENTATION OF SUBROUTINE
DMGN AND IS OF THE FORM:
    SUBROUTINE FUNCT(XT,THETA,VAL,DEL,ISW)
    REAL*8 XT(K),THETA(IP),VAL,DEL(IP)
    (CODING TO SUPPLY VAL)
    IF(ISW.EQ.1) RETURN
    (CODING TO SUPPLY DEL)
    RETURN
    END

SUBROUTINE INPUT IS OF THE FORM:
    SUBROUTINE INPUT(ISW,N,K,IP,ITER,Y,X,TO)
    REAL*8 Y(N),X(K,N),TO(IP)
    IF(ISW.EQ.1) GO TO 1
    IF(ISW.EQ.2) GO TO 2
  1 CONTINUE
    (CODING TO SUPPLY N, K, IP, ITER)
    RETURN
  2 CONTINUE
    (CODING TO SUPPLY Y, X, TO)
    RETURN
    END

ARGUMENTS OF INPUT AND FUNCT:
    ISW   - INTEGER SWITCH. SUPPLIED BY CALLING PROGRAM. INTEGER*4.
    N     - NUMBER OF OBSERVATIONS. SUPPLIED BY USER, AVAILABLE TO
            CALLING PROGRAM ON RETURN. INTEGER*4.
    K     - DIMENSION OF AN INPUT VECTOR IN THE MODEL F(XT,THETA).
            SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON
            RETURN. INTEGER*4.
    IP    - NUMBER OF PARAMETERS IN THE MODEL F(XT,THETA). SUPPLIED
            BY USER, AVAILABLE TO CALLING PROGRAM ON RETURN.
            INTEGER*4.
    ITER  - NUMBER OF ITERATIONS DESIRED. SUPPLIED BY USER,
            AVAILABLE TO CALLING PROGRAM ON RETURN. INTEGER*4.
    Y     - AN N BY 1 VECTOR OF OBSERVATIONS. SUPPLIED BY USER,
            AVAILABLE TO CALLING PROGRAM ON RETURN. REAL*8.
    X     - A K BY N MATRIX CONTAINING THE K BY 1 INPUT VECTORS
            STORED AS COLUMNS OF X. COLUMN I OF X CONTAINS THE INPUT
            VECTOR CORRESPONDING TO OBSERVATION Y(I). SUPPLIED BY
            USER, AVAILABLE TO CALLING PROGRAM ON RETURN. REAL*8.
    TO    - INPUT IP BY 1 VECTOR CONTAINING THE START VALUE OF
            THETA. SUPPLIED BY USER, AVAILABLE TO CALLING PROGRAM ON
            RETURN. REAL*8.
    XT    - A K BY 1 VECTOR CONTAINING AN INPUT VECTOR. SUPPLIED BY
            CALLING PROGRAM. REAL*8.
    THETA - AN IP BY 1 VECTOR CONTAINING PARAMETER VALUES. SUPPLIED
            BY CALLING PROGRAM. REAL*8.
    VAL   - VAL=F(XT,THETA). SUPPLIED BY USER, AVAILABLE TO CALLING
            PROGRAM ON RETURN. REAL*8.
    DEL   - AN IP BY 1 VECTOR CONTAINING THE PARTIAL DERIVATIVES OF
            F(XT,THETA) WITH RESPECT TO THETA. SUPPLIED BY USER,
            AVAILABLE TO CALLING PROGRAM ON RETURN. REAL*8.

Exhibit V

      SUBROUTINE INPUT(ISW,N,K,IP,ITER,Y,X,TO)
      REAL*8 Y(50),X(1,50),TO(2)
      IF(ISW.EQ.1) GO TO 1
      IF(ISW.EQ.2) GO TO 2
    1 CONTINUE
      N=50
      K=1
      IP=2
      ITER=25
      RETURN
    2 CONTINUE
      TO(1)=.444D0
      TO(2)=.823D0
      READ(1,100) (Y(I),X(1,I),I=1,50)
  100 FORMAT(2F10.3)
      RETURN
      END

Exhibit VI

      SUBROUTINE FUNCT(XT,THETA,VAL,DEL,ISW)
      REAL*8 XT(1),THETA(2),VAL,DEL(2)
      VAL=THETA(1)*DEXP(THETA(2)*XT(1))
      IF(ISW.EQ.1) RETURN
      DEL(1)=DEXP(THETA(2)*XT(1))
      DEL(2)=THETA(1)*XT(1)*DEXP(THETA(2)*XT(1))
      RETURN
      END
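A Python transliteration of Exhibit VI may make the FORTRAN easier to follow for modern readers. This sketch is not part of the report; a tuple return stands in for FUNCT's VAL and DEL output arguments.

```python
import math

def funct(xt, theta, isw=0):
    """f(x, theta) = theta1 * exp(theta2 * x) and its partials.

    Mirrors the FORTRAN FUNCT of Exhibit VI: when isw == 1 only the
    function value is supplied (the derivative slot is None).
    """
    val = theta[0] * math.exp(theta[1] * xt[0])
    if isw == 1:
        return val, None                  # value only, as when ISW = 1
    d1 = math.exp(theta[1] * xt[0])       # df/dtheta1
    d2 = theta[0] * xt[0] * d1            # df/dtheta2
    return val, (d1, d2)
```

At x = 0 the value reduces to theta1 and the partials to (1, 0), which gives a quick sanity check.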

Exhibit VII

    ************************************************************
    *                                                          *
    *  PLEASE REPORT ANY PROBLEMS WITH THIS PROGRAM TO:        *
    *  DR. A. RONALD GALLANT                                   *
    *  DEPARTMENT OF STATISTICS                                *
    *  NORTH CAROLINA STATE UNIVERSITY                         *
    *  RALEIGH, NORTH CAROLINA 27607                           *
    *  (919) 737-2531                                          *
    *                                                          *
    ************************************************************

    ************************************************************
    *                                                          *
    *  THE VECTOR R MUST BE DIMENSIONED AT LEAST               *
    *  AS LARGE AS M8 = 162.                                   *
    *                                                          *
    ************************************************************

    ************************************************************
    *                                                          *
    *  MODIFIED GAUSS-NEWTON ITERATIONS                        *
    *                                                          *
    ************************************************************

    ************************************************************
    *                                                          *
    *  CHECK THESE ITERATIONS TO SEE IF CONVERGENCE HAS        *
    *  BEEN ACHIEVED BEFORE USING THE FOLLOWING RESULTS        *
    *                                                          *
    ************************************************************

            ERROR SUM OF      PARAMETER    INTERMEDIATE
    STEP    SQUARES           NUMBER       ESTIMATE
    ----    --------------    ---------    ---------------
      0     0.73493680D 00        1        0.44400000D 00
                                  2        0.82300000D 00
      1     0.45447262D 00        1        0.44749053D 00
                                  2        0.67335078D 00
      2     0.45356711D 00        1        0.44933047D 00
                                  2        0.65928292D 00
      3     0.45356708D 00        1        0.44936150D 00
                                  2        0.65916144D 00
      4     0.45356708D 00        1        0.44936166D 00
                                  2        0.65916091D 00
      5     0.45356708D 00        1        0.44936166D 00
                                  2        0.65916090D 00
      6     0.45356708D 00        1        0.44936166D 00
                                  2        0.65916090D 00
      7     0.45356708D 00        1        0.44936166D 00
                                  2        0.65916090D 00

    MODIFIED GAUSS-NEWTON PARAMETER ESTIMATES

    RESIDUAL SUM OF SQUARES      0.453567085D 00
    NUMBER OF OBSERVATIONS       50
    NUMBER OF PARAMETERS         2
    ESTIMATED VARIANCE           0.907134D-02

    PARAMETER                   STANDARD
    NUMBER      ESTIMATE        ERROR            T-STATISTIC
        1       0.449362        0.254525D-01     17.6549
        2       0.659161        0.802648D-01      8.2123

    [Exhibit VII, continued: the correlation matrix page of the
    printer output, printed sideways; not legible in this copy.]

    ************************************************************
    *                                                          *
    *  OBSERVED RESPONSES, PREDICTED RESPONSES, RESIDUALS      *
    *                                                          *
    ************************************************************

    0.8240000D 00    0.8042150D 00     0.1978460D-01
    0.5150000D 00    0.5295130D 00    -0.1451310D-01
    0.5600000D 00    0.6410550D 00    -0.8105530D-01
    0.9490000D 00    0.8658340D 00     0.8316610D-01
    0.4950000D 00    0.4841110D 00     0.1088900D-01

    [Remaining 45 lines of the listing are not legible in this copy.]

REFERENCES

Gallant, A. R. (1968) "A note on the measurement of cost-quantity relationships in the aircraft industry," Journal of the American Statistical Association, 63, p. 1247-52.

Gallant, A. R. (1971) "Statistical inference for nonlinear regression models," Ph.D. dissertation, Iowa State University.

Gallant, A. R. (1973) "Inference for nonlinear models," Institute of Statistics Mimeograph Series No. 875, Raleigh.

Gallant, A. R. and Fuller, W. A. (1973) "Fitting segmented polynomial models whose join points have to be estimated," Journal of the American Statistical Association, 68, p. 144-47.

Hartley, H. O. (1961) "The modified Gauss-Newton method for the fitting of non-linear regression functions by least squares," Technometrics, 3, p. 269-80.

Hartley, H. O. and Booker, A. (1965) "Non-linear least squares estimation," The Annals of Mathematical Statistics, 36, p. 638-50.

Jennrich, R. I. (1969) "Asymptotic properties of non-linear least squares estimators," The Annals of Mathematical Statistics, 40, p. 633-43.

Malinvaud, E. (1970) "The consistency of nonlinear regressions," The Annals of Mathematical Statistics, 41, p. 956-69.

APPENDIX I

We are given a regression model y_t = f(x_t, θ) + e_t and observations (y_t, x_t) (t = 1, 2, ..., n). Let Q(θ) = SSE(θ).

Conditions. There is a convex, bounded subset S of R^p and a point θ_0 in S such that:

i) ∇f(x_t, θ) exists and is continuous over S.

ii) θ ∈ S implies the rank of F(θ) is p.

iii) Q(θ_0) < Q* = inf{Q(θ): θ a boundary point of S}.

iv) There do not exist distinct θ', θ'' in S such that ∇Q(θ') = ∇Q(θ'') = 0 and Q(θ') = Q(θ'').

Construction. Construct the sequence {θ_α} as follows:

0) Compute

    D_0 = [F'(θ_0) F(θ_0)]^(-1) F'(θ_0) [y - f(θ_0)].

Find λ_0 which minimizes Q(θ_0 + λ D_0) over Λ_0 = {λ: 0 ≤ λ ≤ 1, θ_0 + λ D_0 ∈ S}. Set θ_1 = θ_0 + λ_0 D_0.

1) Compute

    D_1 = [F'(θ_1) F(θ_1)]^(-1) F'(θ_1) [y - f(θ_1)].

Find λ_1 which minimizes Q(θ_1 + λ D_1) over Λ_1 = {λ: 0 ≤ λ ≤ 1, θ_1 + λ D_1 ∈ S}.

2) Set θ_2 = θ_1 + λ_1 D_1, and so on.

Conclusions. Then for the sequence {θ_α}: each θ_α is an interior point of S for α ≥ 1, and the sequence {θ_α} converges to a point θ* with ∇Q(θ*) = 0.

Proof. Gallant (1971).

APPENDIX II

In order to obtain asymptotic results, it is necessary to specify the limiting behavior of the inputs x_t as n becomes large. A general way of specifying limiting behavior of nonlinear regression inputs is due to Malinvaud (1970). The definitions are merely stated here; a complete discussion and examples are contained in his paper.

Let X be a subset of R^k from which the inputs x_t (t = 1, 2, ...) are chosen. Let G be the Borel subsets of X and {x_t} (t = 1, 2, ...) the sequence of inputs chosen from X. Let I_A(x_t) be the indicator function of a subset A of X. The measure μ_n on (X, G) is defined by

    μ_n(A) = (1/n) Σ (t=1 to n) I_A(x_t)

for each A ∈ G.

Definition. A sequence of measures {μ_n} on (X, G) is said to converge weakly to a measure μ on (X, G) if for every bounded, continuous function g with domain X,

    ∫ g dμ_n → ∫ g dμ    as n → ∞.

Asymptotic results may be obtained under the following set of Assumptions.

Assumptions. The parameter space Ω and the set X are compact subsets of the p-dimensional and k-dimensional reals, respectively. The errors {e_t} are independent and identically distributed with mean zero and variance σ². The response function f(x, θ) and the partial derivatives (∂/∂θ_j) f(x, θ) and (∂²/∂θ_j ∂θ_i) f(x, θ) are continuous on X × Ω. The sequence of inputs {x_t} is chosen such that the sequence of measures {μ_n} converges weakly to a measure μ. The true value θ_0 is contained in an open set which, in turn, is contained in Ω. If f(x, θ) = f(x, θ_0) except on a set of μ measure zero, it is assumed that θ = θ_0. The p × p matrix

    Σ = [ ∫ (∂/∂θ_i) f(x, θ_0) (∂/∂θ_j) f(x, θ_0) dμ(x) ]

is nonsingular.

These Assumptions are patterned after those used by Malinvaud (1970) to show that the least squares estimator is consistent. A similar set of Assumptions which does not require that Ω be bounded or that the second partial derivatives of f(x, θ) exist may be found in Gallant (1971). An alternative set of Assumptions may be found in Jennrich (1969).

Theorem. Let θ̂ denote the function of y which minimizes SSE(θ), and let σ̂² = (1/n) SSE(θ̂). Under the Assumptions listed above:

    The estimator θ̂ is consistent for θ_0.
    The estimator σ̂² is consistent for σ².
    The matrix (1/n) F'(θ̂) F(θ̂) is consistent for Σ.
    √n (θ̂ - θ_0) is asymptotically normal with mean zero and
    variance-covariance matrix σ² Σ^(-1).

Proof. Gallant (1971 or 1973).

Remark. In computations, it is customary to absorb the term √n of √n (θ̂ - θ_0) in the variance-covariance matrix. Thus, one enters the tables by taking θ̂ to be approximately normally distributed with mean θ_0 and variance-covariance matrix σ̂² [F'(θ̂) F(θ̂)]^(-1).