Applied Linear Regression Models
Third Edition

John Neter, University of Georgia
Michael H. Kutner, The Cleveland Clinic Foundation
Christopher J. Nachtsheim, University of Minnesota
William Wasserman, Syracuse University

IRWIN
Chicago, Bogotá, London, Madrid




Concerned about Our Environment

In recognition of the fact that our company is a large end-user of fragile yet replenishable resources, we at IRWIN can assure you that every effort is made to meet or exceed Environmental Protection Agency (EPA) recommendations and requirements for a "greener" workplace.

To preserve these natural assets, a number of environmental policies, both companywide and department-specific, have been implemented. From the use of 50% recycled paper in our textbooks to the printing of promotional materials with recycled stock and soy inks to our office paper recycling program, we are committed to reducing waste and replacing environmentally unsafe products with safer alternatives.

© The McGraw-Hill Companies, Inc., 1983, 1989, and 1996

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Trademark Acknowledgments: IBM and IBM PC are registered trademarks of International Business Machines Corporation.

Disclaimer

Richard D. Irwin, Inc. makes no warranties, either expressed or implied, regarding the enclosed computer software package, its merchantability or its fitness for any particular purpose. The exclusion of implied warranties is not permitted by some states. The exclusion may not apply to you. This warranty provides you with specific legal rights. There may be other rights that you may have which may vary from state to state.

Irwin Book Team

Publisher: Tom Casson
Senior sponsoring editor: Richard T. Hercher, Jr.
Senior developmental editor: Gail Korosa
Senior marketing manager: Brian Kibby
Project editor: Lynne Basler
Production supervisor: Laurie Kersch
Designer: Matthew Baldwin
Cover designer: Stuart Paterson
Assistant Manager, Graphics: Charlene R. Perez
Compositor/Art Studio: Graphic Composition, Inc.
Typeface: 10/12 Times Roman
Printer: R. R. Donnelley & Sons Company

Library of Congress Cataloging-in-Publication Data

Applied linear regression models / John Neter ... [et al.]. - 3rd ed.
p. cm.
Includes index.
ISBN 0-256-08601-X
1. Regression analysis. I. Neter, John.
QA278.2.A65 1996
519.5'36-dc20
95-37447

Printed in the United States of America
3 4 5 6 7 8 9 0 DO 2 1 0 9 8 7


Chapter 5

Matrix Approach to Simple Linear Regression Analysis

Matrix algebra is widely used for mathematical and statistical analysis. The matrix approach is practically a necessity in multiple regression analysis, since it permits extensive systems of equations and large arrays of data to be denoted compactly and operated upon efficiently. In this chapter, we first take up a brief introduction to matrix algebra. (A fuller treatment of matrix algebra may be found in specialized texts such as Reference 5.1.) Then we apply matrix methods to the simple linear regression model discussed in previous chapters. Although matrix algebra is not really required for simple linear regression, the application of matrix methods to this case will provide a useful transition to multiple regression, which will be taken up in Parts II and III.

Readers familiar with matrix algebra may wish to scan the introductory parts of this chapter and focus upon the later parts dealing with the use of matrix methods in regression analysis.

5.1 Matrices

Definition of Matrix

A matrix is a rectangular array of elements arranged in rows and columns. An example of a matrix is:

            Column 1   Column 2
   Row 1  [  16,000       23    ]
   Row 2  [  33,000       47    ]
   Row 3  [  21,000       35    ]


Other examples are matrices of dimension 2 x 2 and 2 x 4. Note that in giving the dimension of a matrix, we always specify the number of rows first and then the number of columns.

As in ordinary algebra, we may use symbols to identify the elements of a matrix:

            j = 1   j = 2   j = 3
   i = 1  [  a11     a12     a13  ]
   i = 2  [  a21     a22     a23  ]

Note that the first subscript identifies the row number and the second the column number. We shall use the general notation a_ij for the element in the ith row and the jth column. In our above example, i = 1, 2 and j = 1, 2, 3.

A matrix may be denoted by a symbol such as A, X, or Z. The symbol is in boldface type to identify that it refers to a matrix. Thus, we might define for the above array:

   A  =  [ a11  a12  a13 ]
         [ a21  a22  a23 ]

Reference to the matrix A then implies reference to the 2 x 3 array just given. Another notation for the matrix A just given is:

   A = [a_ij]      i = 1, 2; j = 1, 2, 3

This notation avoids the need for writing out all elements of the matrix by stating only the general element. It can only be used, of course, when the elements of a matrix are symbols.

To summarize, a matrix with r rows and c columns will be represented either in full:

(5.1)   A (r x c)  =  [ a11  a12  ...  a1c ]
                      [ a21  a22  ...  a2c ]
                      [ ...                ]
                      [ ar1  ar2  ...  arc ]

or in abbreviated form:

   A (r x c) = [a_ij]      i = 1, ..., r; j = 1, ..., c

or simply by a boldface symbol, such as A.


178 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

Comments

1. Do not think of a matrix as a number. It is a set of elements arranged in an array. Only when the matrix has dimension 1 x 1 is there a single number in a matrix, in which case one can think of it interchangeably as either a matrix or a number.

2. The following is not a matrix:

   [numbers scattered in an irregular arrangement; the entries are illegible in this scan]

since the numbers are not arranged in columns and rows.

Square Matrix

A matrix is said to be square if the number of rows equals the number of columns. Two examples are:

   [ 4  7 ]        [ a11  a12  a13 ]
   [ 3  9 ]        [ a21  a22  a23 ]
                   [ a31  a32  a33 ]

Vector

A matrix containing only one column is called a column vector or simply a vector. Two examples are:

   A (3 x 1)  =  [ a1 ]     C (5 x 1)  =  [ c1 ]
                 [ a2 ]                   [ c2 ]
                 [ a3 ]                   [ c3 ]
                                          [ c4 ]
                                          [ c5 ]

The vector A is a 3 x 1 matrix, and the vector C is a 5 x 1 matrix.

A matrix containing only one row is called a row vector. Two examples are:

   B' = [ 15  25  50 ]        F' = [ f1  f2 ]

We use the prime symbol for row vectors for reasons to be seen shortly. Note that the row vector B' is a 1 x 3 matrix and the row vector F' is a 1 x 2 matrix. A single subscript suffices to identify the elements of a vector.

Transpose

The transpose of a matrix A is another matrix, denoted by A', that is obtained by interchanging corresponding columns and rows of the matrix A. For example, if:

   A (3 x 2)  =  [ 2   5 ]
                 [ 7  10 ]
                 [ 3   4 ]


then the transpose A' is:

   A' (2 x 3)  =  [ 2   7  3 ]
                  [ 5  10  4 ]

Note that the first column of A is the first row of A', and similarly the second column of A is the second row of A'. Correspondingly, the first row of A has become the first column of A'. Note also that the dimension of A, indicated under the symbol A, becomes reversed for the dimension of A'.

As another example, consider:

   C (3 x 1)  =  [ c1 ]        C' (1 x 3)  =  [ c1  c2  c3 ]
                 [ c2 ]
                 [ c3 ]

Thus, the transpose of a column vector is a row vector, and vice versa. This is the reason we used the prime symbol earlier to identify a row vector, since it may be thought of as the transpose of a column vector.

In general, we have:

(5.3)   A' (c x r) = [a_ji]      i = 1, ..., c; j = 1, ..., r

where the first index shown is the row index and the second the column index of A'. Thus, the element in the ith row and the jth column in A is found in the jth row and ith column in A'.

Equality of Matrices

Two matrices A and B are said to be equal if they have the same dimension and if all corresponding elements are equal. Conversely, if two matrices are equal, their corresponding elements are equal. Thus, if:

   A (3 x 1)  =  [ a1 ]        B (3 x 1)  =  [ 4 ]
                 [ a2 ]                      [ 7 ]
                 [ a3 ]                      [ 3 ]

then A = B implies:

   a1 = 4      a2 = 7      a3 = 3

Similarly, if:

   A (3 x 2)  =  [ a11  a12 ]        B (3 x 2)  =  [ 17  2 ]
                 [ a21  a22 ]                      [ 14  5 ]
                 [ a31  a32 ]                      [ 13  9 ]


then A = B implies:

   a11 = 17      a12 = 2
   a21 = 14      a22 = 5
   a31 = 13      a32 = 9
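The transpose and equality rules above are easy to check numerically. The following sketch uses NumPy (an illustrative addition to this transcript, not part of the original text):

```python
import numpy as np

# The 3x2 matrix A from the transpose example above.
A = np.array([[2, 5],
              [7, 10],
              [3, 4]])

A_t = A.T  # interchange rows and columns

print(A_t)        # [[ 2  7  3]
                  #  [ 5 10  4]]
print(A_t.shape)  # (2, 3): the dimension of A is reversed

# Transposing twice recovers the original matrix; two matrices are
# "equal" when they have the same dimension and all elements match.
assert np.array_equal(A_t.T, A)
```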

Regression Examples

In regression analysis, one basic matrix is the vector Y, consisting of the n observations on the response variable:

(5.4)   Y (n x 1)  =  [ Y1 ]
                      [ Y2 ]
                      [ ...]
                      [ Yn ]

Note that the transpose Y' is the row vector:

(5.5)   Y' (1 x n)  =  [ Y1  Y2  ...  Yn ]

Another basic matrix in regression analysis is the X matrix, which is defined as follows for simple linear regression analysis:

(5.6)   X (n x 2)  =  [ 1  X1 ]
                      [ 1  X2 ]
                      [ ...   ]
                      [ 1  Xn ]

The matrix X consists of a column of 1s and a column containing the n observations on the predictor variable X. Note that the transpose of X is:

(5.7)   X' (2 x n)  =  [ 1   1   ...  1  ]
                       [ X1  X2  ...  Xn ]

5.2 Matrix Addition and Subtraction

Adding or subtracting two matrices requires that they have the same dimension. The sum, or difference, of two matrices is another matrix whose elements each consist of the sum, or difference, of the corresponding elements of the two matrices. Suppose:

   A (3 x 2)  =  [ 1  4 ]     B (3 x 2)  =  [ 1  2 ]
                 [ 2  5 ]                   [ 2  3 ]
                 [ 3  6 ]                   [ 3  4 ]


then:

   A + B (3 x 2)  =  [ 1+1  4+2 ]  =  [ 2   6 ]
                     [ 2+2  5+3 ]     [ 4   8 ]
                     [ 3+3  6+4 ]     [ 6  10 ]

Similarly:

   A - B (3 x 2)  =  [ 1-1  4-2 ]  =  [ 0  2 ]
                     [ 2-2  5-3 ]     [ 0  2 ]
                     [ 3-3  6-4 ]     [ 0  2 ]

In general, if:

   A (r x c) = [a_ij]      B (r x c) = [b_ij]      i = 1, ..., r; j = 1, ..., c

then:

(5.8)   A + B = [a_ij + b_ij]      and      A - B = [a_ij - b_ij]

Formula (5.8) generalizes in an obvious way to addition and subtraction of more than two matrices. Note also that A + B = B + A, as in ordinary algebra.
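A quick NumPy check of the addition and subtraction example above (an added sketch, not part of the book's text):

```python
import numpy as np

# The 3x2 matrices A and B from the example above.
A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
B = np.array([[1, 2],
              [2, 3],
              [3, 4]])

print(A + B)  # elementwise sums:        [[2 6], [4 8], [6 10]]
print(A - B)  # elementwise differences: [[0 2], [0 2], [0 2]]

# A + B = B + A, as noted in the text.
assert np.array_equal(A + B, B + A)
```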

Regression Example

The regression model:

   Y_i = E{Y_i} + ε_i      i = 1, ..., n

can be written compactly in matrix notation. First, let us define the vector of the mean responses:

(5.9)   E{Y} (n x 1)  =  [ E{Y1} ]
                         [ E{Y2} ]
                         [  ...  ]
                         [ E{Yn} ]

and the vector of the error terms:

(5.10)  ε (n x 1)  =  [ ε1 ]
                      [ ε2 ]
                      [ ...]
                      [ εn ]

Recalling the definition of the observations vector Y in (5.4), we can write the regression model as follows:

   Y = E{Y} + ε


because:

   [ Y1 ]     [ E{Y1} + ε1 ]     [ E{Y1} ]     [ ε1 ]
   [ Y2 ]  =  [ E{Y2} + ε2 ]  =  [ E{Y2} ]  +  [ ε2 ]
   [ ...]     [     ...    ]     [  ...  ]     [ ...]
   [ Yn ]     [ E{Yn} + εn ]     [ E{Yn} ]     [ εn ]

Thus, the observations vector Y equals the sum of two vectors, a vector containing the expected values and another containing the error terms.

5.3 Matrix Multiplication

Multiplication of a Matrix by a Scalar

A scalar is an ordinary number or a symbol representing a number. In multiplication of a matrix by a scalar, every element of the matrix is multiplied by the scalar. For example, suppose the matrix A is given by:

   A  =  [ 2  7 ]
         [ 9  3 ]

Then 4A, where 4 is the scalar, equals:

   4A  =  4 [ 2  7 ]  =  [  8  28 ]
            [ 9  3 ]     [ 36  12 ]

Similarly, λA equals:

   λA  =  λ [ 2  7 ]  =  [ 2λ  7λ ]
            [ 9  3 ]     [ 9λ  3λ ]

where λ denotes a scalar.

If every element of a matrix has a common factor, this factor can be taken outside the matrix and treated as a scalar. Thus, from the example just given:

   [ 2λ  7λ ]  =  λ [ 2  7 ]
   [ 9λ  3λ ]       [ 9  3 ]

In general, if A = [a_ij] and λ is a scalar, we have:

(5.11)   λA = Aλ = [λ a_ij]


Multiplication of a Matrix by a Matrix

Multiplication of a matrix by a matrix may appear somewhat complicated at first, but a little practice will make it a routine operation. Consider the two matrices:

   A (2 x 2)  =  [ 2  5 ]     B (2 x 2)  =  [ 4  6 ]
                 [ 4  1 ]                   [ 5  8 ]

The product AB will be a 2 x 2 matrix whose elements are obtained by finding the cross products of rows of A with columns of B and summing the cross products. For instance, to find the element in the first row and the first column of the product AB, we work with the first row of A and the first column of B. We take the cross products and sum:

   2(4) + 5(5) = 33

The number 33 is the element in the first row and first column of the matrix AB. To find the element in the first row and second column of AB, we work with the first row of A and the second column of B. The sum of the cross products is:

   2(6) + 5(8) = 52

Continuing this process, we find the product AB to be:

   AB (2 x 2)  =  [ 2  5 ] [ 4  6 ]  =  [ 33  52 ]
                  [ 4  1 ] [ 5  8 ]     [ 21  32 ]

Let us consider another example, in which a 2 x 3 matrix is postmultiplied by a 3 x 1 vector:

   AB (2 x 1)  =  [ a11  a12  a13 ] [ b1 ]  =  [ a11 b1 + a12 b2 + a13 b3 ]
                  [ a21  a22  a23 ] [ b2 ]     [ a21 b1 + a22 b2 + a23 b3 ]
                                    [ b3 ]
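The cross-product arithmetic above can be confirmed with NumPy's matrix-multiplication operator (a sketch added for this transcript):

```python
import numpy as np

# The 2x2 matrices A and B from the multiplication example above.
A = np.array([[2, 5],
              [4, 1]])
B = np.array([[4, 6],
              [5, 8]])

AB = A @ B  # cross products of rows of A with columns of B, summed

print(AB)  # [[33 52]
           #  [21 32]]

# Matrix multiplication is not commutative in general: here AB != BA.
BA = B @ A
print(np.array_equal(AB, BA))  # False
```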


When obtaining the product AB, we say that A is postmultiplied by B or B is premultiplied by A. The reason for this precise terminology is that multiplication rules for ordinary algebra do not apply to matrix algebra. In ordinary algebra, xy = yx. In matrix algebra, AB ≠ BA usually. In fact, even though the product AB may be defined, the product BA may not be defined at all.

In general, the product AB is only defined when the number of columns in A equals the number of rows in B, so that there will be corresponding terms in the cross products. Thus, in our previous two examples, we had:

    A       B     =    AB              A       B     =    AB
   2 x 2   2 x 2      2 x 2           2 x 3   3 x 1      2 x 1

with the inner dimensions equal in each case. Note that the dimension of the product AB is given by the number of rows in A and the number of columns in B. Note also that in the second case the product BA would not be defined, since the number of columns in B is not equal to the number of rows in A:

    B       A
   3 x 1   2 x 3      (unequal inner dimensions; BA is not defined)

Here is another example of matrix multiplication:

   AB  =  [ a11  a12  a13 ] [ b11  b12 ]  =  [ a11 b11 + a12 b21 + a13 b31    a11 b12 + a12 b22 + a13 b32 ]
          [ a21  a22  a23 ] [ b21  b22 ]     [ a21 b11 + a22 b21 + a23 b31    a21 b12 + a22 b22 + a23 b32 ]
                            [ b31  b32 ]

In general, if A has dimension r x c and B has dimension c x s, the product AB is a matrix of dimension r x s whose element in the ith row and jth column is:

   Σ (k = 1 to c) a_ik b_kj

so that:

(5.12)   AB (r x s)  =  [ Σ (k = 1 to c) a_ik b_kj ]      i = 1, ..., r; j = 1, ..., s

Thus, in the foregoing example, the element in the first row and second column of the product AB is:

   Σ (k = 1 to 3) a_1k b_k2  =  a11 b12 + a12 b22 + a13 b32

as indeed we found by taking the cross products of the elements in the first row of A and second column of B and summing.


Additional Examples

1.   [ 4  2 ] [ a1 ]  =  [ 4 a1 + 2 a2 ]
     [ 5  8 ] [ a2 ]     [ 5 a1 + 8 a2 ]

2.   [ 2  3  5 ] [ 2 ]  =  [ 2² + 3² + 5² ]  =  [ 38 ]
                 [ 3 ]
                 [ 5 ]

     Here, the product is a 1 x 1 matrix, which is equivalent to a scalar. Thus, the matrix product here equals the number 38.

3.   [a third product example involving the observations Y1, ..., Yn; its entries are illegible in this scan]

Regression Examples. A product frequently needed is Y'Y, where Y is the vector of observations on the response variable as defined in (5.4):

(5.13)   Y'Y (1 x 1)  =  [ Y1  Y2  ...  Yn ] [ Y1 ]  =  [ Y1² + Y2² + ... + Yn² ]
                                             [ Y2 ]
                                             [ ...]
                                             [ Yn ]

Note that Y'Y is a 1 x 1 matrix, or a scalar. We thus have a compact way of writing a sum of squared terms: Y'Y = Σ Y_i².

We also will need X'X, which is a 2 x 2 matrix, where X is defined in (5.6):

(5.14)   X'X (2 x 2)  =  [ 1   1   ...  1  ] [ 1  X1 ]  =  [  n        Σ X_i  ]
                         [ X1  X2  ...  Xn ] [ ...   ]     [ Σ X_i    Σ X_i² ]
                                             [ 1  Xn ]

and X'Y, which is a 2 x 1 matrix:

(5.15)   X'Y (2 x 1)  =  [ 1   1   ...  1  ] [ Y1 ]  =  [  Σ Y_i    ]
                         [ X1  X2  ...  Xn ] [ ...]     [ Σ X_i Y_i ]
                                             [ Yn ]

5.4 Special Types of Matrices

Certain special types of matrices arise regularly in regression analysis. We consider the most important of these.
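Before turning to these special types, the regression products (5.13)-(5.15) can be illustrated numerically. The NumPy sketch below uses a small invented data set (n = 4; the values are assumptions for demonstration, not from the text):

```python
import numpy as np

# Hypothetical data: n = 4 observations of predictor and response.
X_col = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([[2.0], [4.0], [5.0], [8.0]])  # n x 1 observations vector

# The X matrix of (5.6): a column of 1s and the predictor observations.
X = np.column_stack([np.ones_like(X_col), X_col])

YtY = Y.T @ Y  # (5.13): 1x1 matrix holding sum of Y_i^2
XtX = X.T @ X  # (5.14): [[n, sum X_i], [sum X_i, sum X_i^2]]
XtY = X.T @ Y  # (5.15): [[sum Y_i], [sum X_i Y_i]]

print(YtY)  # [[109.]]   since 4 + 16 + 25 + 64 = 109
print(XtX)  # [[ 4. 10.]
            #  [10. 30.]]
print(XtY)  # [[19.]
            #  [57.]]
```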


Symmetric Matrix

If A = A', A is said to be symmetric. Thus, the matrix A below is symmetric:

   A (3 x 3)  =  [ 1  4  6 ]     A' (3 x 3)  =  [ 1  4  6 ]
                 [ 4  2  5 ]                    [ 4  2  5 ]
                 [ 6  5  3 ]                    [ 6  5  3 ]

A symmetric matrix necessarily is square. Symmetric matrices arise typically in regression analysis when we premultiply a matrix, say, X, by its transpose, X'. The resulting matrix, X'X, is symmetric, as can readily be seen from (5.14).

Diagonal Matrix

A diagonal matrix is a square matrix whose off-diagonal elements are all zeros, such as:

   A (3 x 3)  =  [ a1  0   0  ]
                 [ 0   a2  0  ]
                 [ 0   0   a3 ]

We will often not show all zeros for a diagonal matrix, presenting it in the form:

   A (3 x 3)  =  [ a1         ]
                 [     a2     ]
                 [         a3 ]

Two important types of diagonal matrices are the identity matrix and the scalar matrix.

Identity Matrix. The identity matrix or unit matrix is denoted by I. It is a diagonal matrix whose elements on the main diagonal are all 1s. Premultiplying or postmultiplying any r x r matrix A by the r x r identity matrix I leaves A unchanged. For example:

   IA  =  [ 1  0  0 ] [ a11  a12  a13 ]  =  [ a11  a12  a13 ]
          [ 0  1  0 ] [ a21  a22  a23 ]     [ a21  a22  a23 ]
          [ 0  0  1 ] [ a31  a32  a33 ]     [ a31  a32  a33 ]

Similarly, we have:

   AI  =  [ a11  a12  a13 ] [ 1  0  0 ]  =  [ a11  a12  a13 ]
          [ a21  a22  a23 ] [ 0  1  0 ]     [ a21  a22  a23 ]
          [ a31  a32  a33 ] [ 0  0  1 ]     [ a31  a32  a33 ]

Note that the identity matrix I therefore corresponds to the number 1 in ordinary algebra, since we have there that 1 · x = x · 1 = x. In general, we have for any r x r matrix A:

(5.16)   AI = IA = A


Thus, the identity matrix can be inserted or dropped from a matrix expression whenever it is convenient to do so.

Scalar Matrix. A scalar matrix is a diagonal matrix whose main-diagonal elements are the same. Two examples of scalar matrices are:

   [ 2  0 ]        [ λ  0  0 ]
   [ 0  2 ]        [ 0  λ  0 ]
                   [ 0  0  λ ]

A scalar matrix can be expressed as λI, where λ is the scalar. For instance:

   [ 2  0 ]  =  2 [ 1  0 ]  =  2I
   [ 0  2 ]       [ 0  1 ]

Similarly:

   [ λ  0  0 ]        [ 1  0  0 ]
   [ 0  λ  0 ]  =  λ  [ 0  1  0 ]  =  λI
   [ 0  0  λ ]        [ 0  0  1 ]

Multiplying an r x r matrix A by the r x r scalar matrix λI is equivalent to multiplying A by the scalar λ.

Vector and Matrix with All Elements Unity

A column vector with all elements 1 will be denoted by 1:

(5.17)   1 (r x 1)  =  [ 1 ]
                       [...]
                       [ 1 ]

and a square matrix with all elements 1 will be denoted by J:

(5.18)   J (r x r)  =  [ 1  ...  1 ]
                       [ ...       ]
                       [ 1  ...  1 ]

For instance, we have:

   1 (3 x 1)  =  [ 1 ]     J (3 x 3)  =  [ 1  1  1 ]
                 [ 1 ]                   [ 1  1  1 ]
                 [ 1 ]                   [ 1  1  1 ]

Note that for an n x 1 vector 1 we obtain:

   1'1  =  [ 1  1  ...  1 ] [ 1 ]  =  [ n ]  =  n
                            [...]
                            [ 1 ]


and:

   11' (n x n)  =  [ 1 ] [ 1  1  ...  1 ]  =  [ 1  ...  1 ]  =  J
                   [...]                      [ ...       ]
                   [ 1 ]                      [ 1  ...  1 ]
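These special matrices have direct NumPy constructors. The sketch below (added for this transcript) verifies identity (5.16) and the relations 1'1 = n and 11' = J, using the symmetric matrix shown earlier in this section:

```python
import numpy as np

n = 3

I = np.eye(n)          # identity matrix
one = np.ones((n, 1))  # the 1 vector of (5.17)
J = np.ones((n, n))    # the J matrix of (5.18)

# The symmetric 3x3 matrix from the text.
A = np.array([[1.0, 4.0, 6.0],
              [4.0, 2.0, 5.0],
              [6.0, 5.0, 3.0]])

# (5.16): pre- or postmultiplying by I leaves A unchanged.
assert np.allclose(I @ A, A) and np.allclose(A @ I, A)

# 1'1 = n (a 1x1 matrix, i.e., a scalar), and 1 1' = J.
print((one.T @ one).item())  # 3.0
assert np.allclose(one @ one.T, J)
```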

Zero Vector

A zero vector is a vector containing only zeros. The zero column vector will be denoted by 0:

(5.19)   0 (r x 1)  =  [ 0 ]
                       [...]
                       [ 0 ]

For example, we have:

   0 (3 x 1)  =  [ 0 ]
                 [ 0 ]
                 [ 0 ]

5.5 Linear Dependence and Rank of Matrix

Linear Dependence

Consider the following matrix:

   A  =  [ 1  2   5  1 ]
         [ 2  2  10  6 ]
         [ 3  4  15  1 ]

Let us think now of the columns of this matrix as vectors. Thus, we view A as being made up of four column vectors. It happens here that the columns are interrelated in a special manner. Note that the third column vector is a multiple of the first column vector:

   [  5 ]        [ 1 ]
   [ 10 ]  =  5  [ 2 ]
   [ 15 ]        [ 3 ]

We say that the columns of A are linearly dependent. They contain redundant information, so to speak, since one column can be obtained as a linear combination of the others.

We define the set of c column vectors C1, ..., Cc in an r x c matrix to be linearly dependent if one vector can be expressed as a linear combination of the others. If no vector in the set can be so expressed, we define the set of vectors to be linearly independent. A more general, though equivalent, definition is:

(5.20)   When c scalars λ1, ..., λc, not all zero, can be found such that:

            λ1 C1 + λ2 C2 + ... + λc Cc = 0


where 0 denotes the zero column vector, the c column vectors are linearly dependent. If the only set of scalars for which the equality holds is λ1 = 0, ..., λc = 0, the set of c column vectors is linearly independent.

To illustrate for our example, λ1 = 5, λ2 = 0, λ3 = -1, λ4 = 0 leads to:

   5 C1 + 0 C2 - 1 C3 + 0 C4  =  [ 0 ]
                                 [ 0 ]
                                 [ 0 ]

Hence, the column vectors are linearly dependent. Note that some of the λ_i equal zero here. For linear dependence, it is only required that not all λ_i be zero.

Rank of Matrix

The rank of a matrix is defined to be the maximum number of linearly independent columns in the matrix. We know that the rank of A in our earlier example cannot be 4, since the four columns are linearly dependent. We can, however, find three columns (1, 2, and 4) which are linearly independent. There are no scalars λ1, λ2, λ4 such that λ1 C1 + λ2 C2 + λ4 C4 = 0 other than λ1 = λ2 = λ4 = 0. Thus, the rank of A in our example is 3.

The rank of a matrix is unique and can equivalently be defined as the maximum number of linearly independent rows. It follows that the rank of an r x c matrix cannot exceed min(r, c), the minimum of the two values r and c.

When a matrix is the product of two matrices, its rank cannot exceed the smaller of the two ranks for the matrices being multiplied. Thus, if C = AB, the rank of C cannot exceed min(rank A, rank B).
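These rank facts can be checked with NumPy (an added sketch; the fourth-column entries of A are as reconstructed from the scan, and any fourth column independent of the first two would behave the same):

```python
import numpy as np

# The matrix from the linear dependence discussion: its third column
# is 5 times its first column.
A = np.array([[1, 2, 5, 1],
              [2, 2, 10, 6],
              [3, 4, 15, 1]])

assert np.array_equal(A[:, 2], 5 * A[:, 0])  # column 3 = 5 * column 1

# The combination 5*C1 + 0*C2 - 1*C3 + 0*C4 is the zero vector,
# so the columns are linearly dependent, per definition (5.20).
combo = 5 * A[:, 0] - A[:, 2]
assert np.all(combo == 0)

print(np.linalg.matrix_rank(A))  # 3: only three linearly independent columns
```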

5.6 Inverse of a Matrix

In ordinary algebra, the inverse of a number is its reciprocal. Thus, the inverse of 6 is 1/6. A number multiplied by its inverse always equals 1:

   6 × (1/6) = (1/6) × 6 = 1
   x × (1/x) = (1/x) × x = 1

In matrix algebra, the inverse of a matrix A is another matrix, denoted by A⁻¹, such that:

(5.21)   A⁻¹A = AA⁻¹ = I

where I is the identity matrix. Thus, again, the identity matrix I plays the same role as the number 1 in ordinary algebra. An inverse of a matrix is defined only for square matrices. Even so, many square matrices do not have an inverse. If a square matrix does have an inverse, the inverse is unique.


Examples

1. The inverse of the matrix:

   A (2 x 2)  =  [ 2  4 ]
                 [ 3  1 ]

is:

   A⁻¹ (2 x 2)  =  [ -.1   .4 ]
                   [  .3  -.2 ]

since:

   A⁻¹A  =  [ -.1   .4 ] [ 2  4 ]  =  [ 1  0 ]  =  I
            [  .3  -.2 ] [ 3  1 ]     [ 0  1 ]

or:

   AA⁻¹  =  [ 2  4 ] [ -.1   .4 ]  =  [ 1  0 ]  =  I
            [ 3  1 ] [  .3  -.2 ]     [ 0  1 ]

2. The inverse of the matrix:

   A (3 x 3)  =  [ 3  0  0 ]
                 [ 0  4  0 ]
                 [ 0  0  2 ]

is:

   A⁻¹ (3 x 3)  =  [ 1/3   0    0  ]
                   [  0   1/4   0  ]
                   [  0    0   1/2 ]

since:

   A⁻¹A  =  [ 1/3   0    0  ] [ 3  0  0 ]  =  [ 1  0  0 ]  =  I
            [  0   1/4   0  ] [ 0  4  0 ]     [ 0  1  0 ]
            [  0    0   1/2 ] [ 0  0  2 ]     [ 0  0  1 ]

Note that the inverse of a diagonal matrix is a diagonal matrix consisting simply of the reciprocals of the elements on the diagonal.

Finding the Inverse

Up to this point, the inverse of a matrix A has been given, and we have only checked to make sure it is the inverse by seeing whether or not A⁻¹A = I. But how does one find the inverse, and when does it exist?

An inverse of a square r x r matrix exists if the rank of the matrix is r. Such a matrix is said to be nonsingular or of full rank. An r x r matrix with rank less than r is said to be singular or not of full rank, and does not have an inverse. The inverse of an r x r matrix of full rank also has rank r.

Finding the inverse of a matrix can often require a large amount of computing. We shall take the approach in this book that the inverse of a 2 x 2 matrix and a 3 x 3 matrix can be calculated by hand. For any larger matrix, one ordinarily uses a computer or a programmable


calculator to find the inverse, unless the matrix is of a special form such as a diagonal matrix. It can be shown that the inverses for 2 x 2 and 3 x 3 matrices are as follows:

1. If:

   A (2 x 2)  =  [ a  b ]
                 [ c  d ]

then:

(5.22)   A⁻¹ (2 x 2)  =  [ a  b ]⁻¹  =  [  d/D  -b/D ]
                         [ c  d ]       [ -c/D   a/D ]

where:

(5.22a)  D = ad - bc

D is called the determinant of the matrix A. If A were singular, its determinant would equal zero and no inverse of A would exist.

2. If:

   B (3 x 3)  =  [ a  b  c ]
                 [ d  e  f ]
                 [ g  h  k ]

then:

(5.23)   B⁻¹ (3 x 3)  =  [ a  b  c ]⁻¹  =  [ A  B  C ]
                         [ d  e  f ]       [ D  E  F ]
                         [ g  h  k ]       [ G  H  K ]

where:

(5.23a)  A = (ek - fh)/Z      B = -(bk - ch)/Z      C = (bf - ce)/Z
         D = -(dk - fg)/Z     E = (ak - cg)/Z       F = -(af - cd)/Z
         G = (dh - eg)/Z      H = -(ah - bg)/Z      K = (ae - bd)/Z

and:

(5.23b)  Z = a(ek - fh) - b(dk - fg) + c(dh - eg)

Z is called the determinant of the matrix B.

Let us use (5.22) to find the inverse of:

   A  =  [ 2  4 ]
         [ 3  1 ]

We have:

   a = 2      b = 4
   c = 3      d = 1


so that:

   D = ad - bc = 2(1) - 4(3) = -10

Hence:

   A⁻¹  =  [  1/(-10)   -4/(-10) ]  =  [ -.1   .4 ]
           [ -3/(-10)    2/(-10) ]     [  .3  -.2 ]

as was given in an earlier example.

When an inverse A⁻¹ has been obtained by hand calculations or from a computer program whose accuracy in inverting a matrix is not known, it may be wise to compute A⁻¹A to check whether the product equals the identity matrix, allowing for minor rounding errors.

Regression Example

The principal inverse matrix encountered in regression analysis is the inverse of the matrix X'X in (5.14):

   X'X (2 x 2)  =  [  n        Σ X_i  ]
                   [ Σ X_i    Σ X_i² ]

Using rule (5.22), we have:

   a = n         b = Σ X_i
   c = Σ X_i     d = Σ X_i²

so that:

   D = n Σ X_i² - (Σ X_i)(Σ X_i)

Hence:

(5.24)   (X'X)⁻¹ (2 x 2)  =  [  Σ X_i² / D    -Σ X_i / D ]
                             [ -Σ X_i / D      n / D     ]

Since Σ X_i = n X̄ and Σ (X_i - X̄)² = Σ X_i² - n X̄², so that D = n Σ (X_i - X̄)², we can simplify (5.24):

(5.24a)  (X'X)⁻¹ (2 x 2)  =  [ 1/n + X̄² / Σ(X_i - X̄)²     -X̄ / Σ(X_i - X̄)² ]
                             [ -X̄ / Σ(X_i - X̄)²            1 / Σ(X_i - X̄)² ]
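The hand calculation above, and the check that A⁻¹A = I, can be reproduced with NumPy (a sketch added for this transcript):

```python
import numpy as np

# The 2x2 matrix inverted by hand above.
A = np.array([[2.0, 4.0],
              [3.0, 1.0]])

D = np.linalg.det(A)     # determinant (5.22a): ad - bc = -10
A_inv = np.linalg.inv(A)

print(round(D, 6))  # -10.0
print(A_inv)        # [[-0.1  0.4]
                    #  [ 0.3 -0.2]]

# The check recommended in the text: A^{-1} A should equal I,
# up to minor rounding.
assert np.allclose(A_inv @ A, np.eye(2))
```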


Uses of Inverse Matrix

In ordinary algebra, we solve an equation of the type:

   5y = 20

by multiplying both sides of the equation by the inverse of 5, namely:

   (1/5)(5y) = (1/5)(20)

and we obtain the solution:

   y = (1/5)(20) = 4

In matrix algebra, if we have an equation:

   AY = C

we correspondingly premultiply both sides by A⁻¹, assuming A has an inverse:

   A⁻¹AY = A⁻¹C

Since A⁻¹AY = IY = Y, we obtain the solution:

   Y = A⁻¹C

To illustrate this use, suppose we have two simultaneous equations:

   2 y1 + 4 y2 = 20
   3 y1 +   y2 = 10

which can be written as follows in matrix notation:

   [ 2  4 ] [ y1 ]  =  [ 20 ]
   [ 3  1 ] [ y2 ]     [ 10 ]

The solution of these equations then is:

   [ y1 ]  =  [ 2  4 ]⁻¹ [ 20 ]
   [ y2 ]     [ 3  1 ]   [ 10 ]

Earlier we found the required inverse, so we obtain:

   [ y1 ]  =  [ -.1   .4 ] [ 20 ]  =  [ 2 ]
   [ y2 ]     [  .3  -.2 ] [ 10 ]     [ 4 ]

Hence, y1 = 2 and y2 = 4 satisfy these two equations.
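The same system can be solved numerically; np.linalg.solve applies A⁻¹C without forming the inverse explicitly, which is the usual numerical practice (an added sketch):

```python
import numpy as np

# The two simultaneous equations from the text:
#   2*y1 + 4*y2 = 20
#   3*y1 + 1*y2 = 10
A = np.array([[2.0, 4.0],
              [3.0, 1.0]])
c = np.array([20.0, 10.0])

y = np.linalg.solve(A, c)  # solves A y = c

print(y)  # [2. 4.]   i.e., y1 = 2 and y2 = 4

assert np.allclose(A @ y, c)  # the solution satisfies both equations
```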


5.7 Some Basic Theorems for Matrices

We list here, without proof, some basic theorems for matrices which we will utilize in later work:

(5.25)   A + B = B + A
(5.26)   (A + B) + C = A + (B + C)
(5.27)   (AB)C = A(BC)
(5.28)   C(A + B) = CA + CB
(5.29)   λ(A + B) = λA + λB
(5.30)   (A')' = A
(5.31)   (A + B)' = A' + B'
(5.32)   (AB)' = B'A'
(5.33)   (ABC)' = C'B'A'
(5.34)   (AB)⁻¹ = B⁻¹A⁻¹
(5.35)   (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹
(5.36)   (A⁻¹)⁻¹ = A
(5.37)   (A')⁻¹ = (A⁻¹)'
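Several of these theorems are easy to verify numerically for particular matrices. The sketch below (with arbitrary illustrative matrices, an addition to this transcript) checks (5.32), (5.34), and (5.37):

```python
import numpy as np

# Two arbitrary nonsingular 2x2 matrices (illustrative values only).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, 4.0],
              [2.0, 5.0]])

# (5.32): the transpose of a product reverses the order of the factors.
assert np.allclose((A @ B).T, B.T @ A.T)

# (5.34): the inverse of a product reverses the order of the factors.
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))

# (5.37): transposing and inverting commute.
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)

print("identities verified")
```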

5.8 Random Vectors and Matrices

A random vector or a random matrix contains elements that are random variables. Thus, the observations vector Y in (5.4) is a random vector since the Y_i elements are random variables.

Expectation of Random Vector or Matrix

Suppose we have n = 3 observations in the observations vector Y:

   Y (3 x 1)  =  [ Y1 ]
                 [ Y2 ]
                 [ Y3 ]

The expected value of Y is a vector, denoted by E{Y}, that is defined as follows:

   E{Y} (3 x 1)  =  [ E{Y1} ]
                    [ E{Y2} ]
                    [ E{Y3} ]


Thus, the expected value of a random vector is a vector whose elements are the expected values of the random variables that are the elements of the random vector. Similarly, the expectation of a random matrix is a matrix whose elements are the expected values of the corresponding random variables in the original matrix. We encountered a vector of expected values earlier in (5.9).

In general, for a random vector Y the expectation is:

(5.38)   E{Y} (n x 1) = [E{Y_i}]      i = 1, ..., n

and for a random matrix Y with dimension n x p, the expectation is:

(5.39)   E{Y} (n x p) = [E{Y_ij}]      i = 1, ..., n; j = 1, ..., p

Regression Example. Suppose the number of cases in a regression application is n = 3. The three error terms ε1, ε2, ε3 each have expectation zero. For the error terms vector:

   ε (3 x 1)  =  [ ε1 ]
                 [ ε2 ]
                 [ ε3 ]

we have:

   E{ε} = 0      (both 3 x 1)

since:

   [ E{ε1} ]     [ 0 ]
   [ E{ε2} ]  =  [ 0 ]
   [ E{ε3} ]     [ 0 ]

Variance-Covariance Matrix of Random Vector

Consider again the random vector Y consisting of three observations Y1, Y2, Y3. The variances of the three random variables, σ²{Y_i}, and the covariances between any two of the random variables, σ{Y_i, Y_j}, are assembled in the variance-covariance matrix of Y, denoted by σ²{Y}, in the following form:

(5.40)   σ²{Y}  =  [ σ²{Y1}      σ{Y1,Y2}    σ{Y1,Y3} ]
                   [ σ{Y2,Y1}    σ²{Y2}      σ{Y2,Y3} ]
                   [ σ{Y3,Y1}    σ{Y3,Y2}    σ²{Y3}   ]

Note that the variances are on the main diagonal, and the covariance σ{Y_i, Y_j} is found in the ith row and jth column of the matrix. Thus, σ{Y2, Y1} is found in the second row, first column, and σ{Y1, Y2} is found in the first row, second column. Remember, of course, that σ{Y2, Y1} = σ{Y1, Y2}. Since σ{Y_i, Y_j} = σ{Y_j, Y_i} for all i ≠ j, σ²{Y} is a symmetric matrix.


It follows readily that:

(5.41)   σ²{Y} = E{[Y - E{Y}][Y - E{Y}]'}

For our illustration, we have:

   σ²{Y}  =  E{ [ Y1 - E{Y1} ]  [ Y1 - E{Y1}   Y2 - E{Y2}   Y3 - E{Y3} ] }
                [ Y2 - E{Y2} ]
                [ Y3 - E{Y3} ]

Multiplying the two matrices and then taking expectations, we obtain:

   Location in Product    Term                          Expected Value
   Row 1, column 1        (Y1 - E{Y1})²                 σ²{Y1}
   Row 1, column 2        (Y1 - E{Y1})(Y2 - E{Y2})      σ{Y1, Y2}
   Row 1, column 3        (Y1 - E{Y1})(Y3 - E{Y3})      σ{Y1, Y3}
   Row 2, column 1        (Y2 - E{Y2})(Y1 - E{Y1})      σ{Y2, Y1}
   etc.                   etc.                          etc.

This, of course, leads to the variance-covariance matrix in (5.40). Remember the definitions of variance and covariance in (A.15) and (A.21), respectively, when taking expectations.

To generalize, the variance-covariance matrix for an n x 1 random vector Y is:

(5.42)   σ²{Y} (n x n)  =  [ σ²{Y1}      σ{Y1,Y2}   ...   σ{Y1,Yn} ]
                           [ σ{Y2,Y1}    σ²{Y2}     ...   σ{Y2,Yn} ]
                           [ ...                                   ]
                           [ σ{Yn,Y1}    σ{Yn,Y2}   ...   σ²{Yn}   ]

Note again that σ²{Y} is a symmetric matrix.

Regression Example. Let us return to the example based on n = 3 cases. Suppose that the three error terms have constant variance, σ²{ε_i} = σ², and are uncorrelated so that σ{ε_i, ε_j} = 0 for i ≠ j. The variance-covariance matrix for the random vector ε of the previous example is therefore as follows:

   σ²{ε} (3 x 3)  =  [ σ²  0   0  ]
                     [ 0   σ²  0  ]
                     [ 0   0   σ² ]

Note that all variances are σ² and all covariances are zero. Note also that this variance-covariance matrix is a scalar matrix, with the common variance σ² the scalar. Hence, we can express the variance-covariance matrix in the following simple fashion:

   σ²{ε} = σ²I      (both 3 x 3)

since:

   σ²  [ 1  0  0 ]  =  [ σ²  0   0  ]
       [ 0  1  0 ]     [ 0   σ²  0  ]
       [ 0  0  1 ]     [ 0   0   σ² ]


Some Basic Theorems

Frequently, we shall encounter a random vector W which is obtained by premultiplying the random vector Y by a constant matrix A (a matrix whose elements are fixed):

(5.43)    W = AY

Some basic theorems for this case are:

(5.44)    E{A} = A

(5.45)    E{W} = E{AY} = A E{Y}

(5.46)    σ²{W} = σ²{AY} = A σ²{Y} A′

where σ²{Y} is the variance-covariance matrix of Y.

Example. As a simple illustration of the use of these theorems, consider:

    W = [ W1 ]      A = [ 1  −1 ]      Y = [ Y1 ]
   2×1  [ W2 ]     2×2  [ 1   1 ]     2×1  [ Y2 ]

so that W1 = Y1 − Y2 and W2 = Y1 + Y2. We then have by (5.45):

    E{W} = A E{Y} = [ 1  −1 ] [ E{Y1} ] = [ E{Y1} − E{Y2} ]
     2×1            [ 1   1 ] [ E{Y2} ]   [ E{Y1} + E{Y2} ]

and by (5.46):

    σ²{W} = [ 1  −1 ] [ σ²{Y1}      σ{Y1, Y2} ] [  1  1 ]
      2×2   [ 1   1 ] [ σ{Y2, Y1}   σ²{Y2}    ] [ −1  1 ]

          = [ σ²{Y1} + σ²{Y2} − 2σ{Y1, Y2}    σ²{Y1} − σ²{Y2}                ]
            [ σ²{Y1} − σ²{Y2}                 σ²{Y1} + σ²{Y2} + 2σ{Y1, Y2}   ]

Thus:

    σ²{W1} = σ²{Y1 − Y2} = σ²{Y1} + σ²{Y2} − 2σ{Y1, Y2}
    σ²{W2} = σ²{Y1 + Y2} = σ²{Y1} + σ²{Y2} + 2σ{Y1, Y2}
    σ{W1, W2} = σ{Y1 − Y2, Y1 + Y2} = σ²{Y1} − σ²{Y2}
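Theorem (5.46) is easy to check numerically. Below is a minimal sketch in Python with NumPy; the covariance entries chosen for Y are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative variance-covariance matrix for Y = (Y1, Y2)':
# variances 4 and 9, covariance 2 (assumed values).
cov_Y = np.array([[4.0, 2.0],
                  [2.0, 9.0]])

# A as in the example above: W1 = Y1 - Y2, W2 = Y1 + Y2
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])

# Theorem (5.46): sigma^2{W} = A sigma^2{Y} A'
cov_W = A @ cov_Y @ A.T

# The entries match the scalar formulas derived above:
var_W1 = 4 + 9 - 2 * 2    # sigma^2{Y1} + sigma^2{Y2} - 2 sigma{Y1,Y2} = 9
var_W2 = 4 + 9 + 2 * 2    # sigma^2{Y1} + sigma^2{Y2} + 2 sigma{Y1,Y2} = 17
cov_W1_W2 = 4 - 9         # sigma^2{Y1} - sigma^2{Y2} = -5
assert cov_W[0, 0] == var_W1 and cov_W[1, 1] == var_W2
assert cov_W[0, 1] == cov_W[1, 0] == cov_W1_W2
```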


198 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

5.9 Simple Linear Regression Model in Matrix Terms

We now express the simple linear regression model in matrix terms. Remember again that we will not present any new results, but shall only state in matrix terms the results obtained earlier. We begin with the normal error regression model (2.1):

(5.47)    Yi = β0 + β1 Xi + εi        i = 1, …, n

This implies:

(5.47a)   Y1 = β0 + β1 X1 + ε1
          Y2 = β0 + β1 X2 + ε2
             ⋮
          Yn = β0 + β1 Xn + εn

We defined earlier the observations vector Y in (5.4), the X matrix in (5.6), and the ε vector in (5.10). Let us repeat these definitions and also define the β vector of the regression coefficients:

(5.48)    Y = [ Y1 ]      X = [ 1  X1 ]      β = [ β0 ]      ε = [ ε1 ]
         n×1  [ Y2 ]     n×2  [ 1  X2 ]     2×1  [ β1 ]     n×1  [ ε2 ]
              [  ⋮ ]          [ ⋮   ⋮ ]                          [  ⋮ ]
              [ Yn ]          [ 1  Xn ]                          [ εn ]

Now we can write (5.47a) in matrix terms compactly as follows:

(5.49)    Y = X β + ε
         n×1 n×2 2×1 n×1

since:

    [ Y1 ]   [ 1  X1 ]          [ ε1 ]   [ β0 + β1 X1 + ε1 ]
    [ Y2 ] = [ 1  X2 ] [ β0 ] + [ ε2 ] = [ β0 + β1 X2 + ε2 ]
    [  ⋮ ]   [ ⋮   ⋮ ] [ β1 ]   [  ⋮ ]   [        ⋮        ]
    [ Yn ]   [ 1  Xn ]          [ εn ]   [ β0 + β1 Xn + εn ]

Note that Xβ is the vector of the expected values of the Yi observations since E{Yi} = β0 + β1 Xi; hence:

(5.50)    E{Y} = Xβ
          n×1   n×1


Section 5.9 Simple Linear Regression Model in Matrix Terms 199

where E{Y} is defined in (5.9).

The column of 1s in the X matrix may be viewed as consisting of the dummy variable X0 ≡ 1 in the alternative regression model (1.5):

    Yi = β0 X0 + β1 Xi + εi        where X0 ≡ 1

Thus, the X matrix may be considered to contain a column vector of the dummy variable X0 and another column vector consisting of the predictor variable observations Xi.

With respect to the error terms, regression model (2.1) assumes that E{εi} = 0, σ²{εi} = σ², and that the εi are independent normal random variables. The condition E{εi} = 0 in matrix terms is:

(5.51)    E{ε} = 0
          n×1   n×1

since (5.51) states:

    [ E{ε1} ]   [ 0 ]
    [ E{ε2} ] = [ 0 ]
    [   ⋮   ]   [ ⋮ ]
    [ E{εn} ]   [ 0 ]

The condition that the error terms have constant variance σ² and that all covariances σ{εi, εj} for i ≠ j are zero (since the εi are independent) is expressed in matrix terms through the variance-covariance matrix of the error terms:

(5.52)    σ²{ε} = [ σ²  0   …  0  ]
           n×n    [ 0   σ²  …  0  ]
                  [ ⋮   ⋮       ⋮ ]
                  [ 0   0   …  σ² ]

Since this is a scalar matrix, we know from the earlier example that it can be expressed in the following simple fashion:

(5.52a)   σ²{ε} = σ² I
           n×n      n×n

Thus, the normal error regression model (2.1) in matrix terms is:

(5.53)    Y = Xβ + ε

where:

    ε is a vector of independent normal random variables with E{ε} = 0 and σ²{ε} = σ²I
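As a concrete illustration of the model statement (5.53), the following NumPy sketch generates one sample from Y = Xβ + ε; the X values, β, and σ are all assumed for illustration and do not come from the text.

```python
import numpy as np

rng = np.random.default_rng(seed=12345)

n = 5
# Assumed design matrix: a column of 1s plus assumed predictor values
X = np.column_stack([np.ones(n), np.array([8.0, 4.0, 0.0, -4.0, -8.0])])
beta = np.array([10.0, 2.0])    # assumed parameter vector (beta0, beta1)'
sigma = 1.5                     # assumed error standard deviation

# epsilon ~ N(0, sigma^2 I): independent errors with the scalar
# variance-covariance matrix of (5.52a)
eps = rng.normal(loc=0.0, scale=sigma, size=n)
Y = X @ beta + eps              # the model in matrix terms, (5.53)

E_Y = X @ beta                  # E{Y} = X beta, as in (5.50)
cov_eps = sigma**2 * np.eye(n)  # sigma^2{eps} = sigma^2 I, as in (5.52a)
```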


Chapter 5 Matrix Approach to Simple Linear Regression Analysis

5.10 Least Squares Estimation of Regression Parameters

Normal Equations

The normal equations (1.9):

(5.54)    n b0 + b1 ΣXi = ΣYi
          b0 ΣXi + b1 ΣXi² = ΣXi Yi

in matrix terms are:

(5.55)    X′X b = X′Y
          2×2 2×1  2×1

where b is the vector of the least squares regression coefficients:

(5.55a)   b = [ b0 ]
         2×1  [ b1 ]

To see this, recall that we obtained X′X in (5.14) and X′Y in (5.15). Equation (5.55) thus states:

    [ n     ΣXi  ] [ b0 ]   [ ΣYi    ]
    [ ΣXi   ΣXi² ] [ b1 ] = [ ΣXi Yi ]

or:

    [ n b0 + b1 ΣXi    ]   [ ΣYi    ]
    [ b0 ΣXi + b1 ΣXi² ] = [ ΣXi Yi ]

These are precisely the normal equations in (5.54).

Estimated Regression Coefficients

To obtain the estimated regression coefficients from the normal equations (5.55) by matrix methods, we premultiply both sides by the inverse of X′X (we assume this exists):

    (X′X)⁻¹X′X b = (X′X)⁻¹X′Y

We then find, since (X′X)⁻¹X′X = I and Ib = b:

(5.56)    b = (X′X)⁻¹X′Y
         2×1   2×2   2×1

The estimators b0 and b1 in b are the same as those given earlier in (1.10a) and (1.10b). We shall demonstrate this by an example.


Section 5.10 Least Squares Estimation of Regression Parameters 201

Example. We shall use matrix methods to obtain the estimated regression coefficients for the Toluca Company example. The data on the Y and X variables were given in Table 1.1. Using these data, we define the Y observations vector and the X matrix as follows:

(5.57)    (a)  Y = [ 399 ]        (b)  X = [ 1  80 ]
                   [ 121 ]                 [ 1  30 ]
                   [  ⋮  ]                 [ ⋮   ⋮ ]
                   [ 323 ]                 [ 1  70 ]

We now require the following matrix products:

(5.58)    X′X = [ 1   1  …  1  ] [ 1  80 ]   [ 25     1,750   ]
                [ 80  30 …  70 ] [ 1  30 ] = [ 1,750  142,300 ]
                                 [ ⋮   ⋮ ]
                                 [ 1  70 ]

(5.59)    X′Y = [ 1   1  …  1  ] [ 399 ]   [ 7,807   ]
                [ 80  30 …  70 ] [ 121 ] = [ 617,180 ]
                                 [  ⋮  ]
                                 [ 323 ]

Using (5.22), we find the inverse of X′X:

(5.60)    (X′X)⁻¹ = [  .287475   −.003535   ]
                    [ −.003535    .00005051 ]

In subsequent matrix calculations utilizing this inverse matrix and other matrix results, we shall actually utilize more digits for the matrix elements than are shown.

Finally, we employ (5.56) to obtain:

(5.61)    b = [ b0 ] = (X′X)⁻¹X′Y = [  .287475   −.003535   ] [ 7,807   ] = [ 62.37  ]
              [ b1 ]                [ −.003535    .00005051 ] [ 617,180 ]   [ 3.5702 ]

or b0 = 62.37 and b1 = 3.5702. These results agree with the ones in Chapter 1. Any differences would have been due to rounding effects.
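Because X′X and X′Y depend on the data only through n, ΣXi, ΣXi², ΣYi, and ΣXiYi, the calculation in (5.58)–(5.61) can be reproduced from those summary figures alone. A NumPy sketch:

```python
import numpy as np

# Elements of X'X and X'Y for the Toluca Company example, from (5.58)-(5.59)
XtX = np.array([[25.0, 1_750.0],
                [1_750.0, 142_300.0]])
XtY = np.array([7_807.0, 617_180.0])

# (5.56): b = (X'X)^{-1} X'Y.  Solving the normal equations directly is
# numerically preferable to forming the inverse explicitly.
b = np.linalg.solve(XtX, XtY)

print(np.round(b, 4))    # b0 = 62.3659 (62.37 to two places), b1 = 3.5702
```

For this well-conditioned 2 × 2 system, `np.linalg.solve(XtX, XtY)` and `np.linalg.inv(XtX) @ XtY` agree to many digits.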

Comments

1. To derive the normal equations by the method of least squares, we minimize the quantity:

    Q = Σ[Yi − (β0 + β1 Xi)]²

In matrix notation:

(5.62)    Q = (Y − Xβ)′(Y − Xβ)

Expanding, we obtain:

    Q = Y′Y − β′X′Y − Y′Xβ + β′X′Xβ


202 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

since (Xβ)′ = β′X′ by (5.32). Note now that Y′Xβ is 1 × 1, hence is equal to its transpose, which according to (5.33) is β′X′Y. Thus, we find:

(5.63)    Q = Y′Y − 2β′X′Y + β′X′Xβ

To find the value of β that minimizes Q, we differentiate with respect to β0 and β1. Let:

(5.64)    ∂Q/∂β = [ ∂Q/∂β0 ]
                  [ ∂Q/∂β1 ]

Then it follows that:

(5.65)    ∂Q/∂β = −2X′Y + 2X′Xβ

Equating to the zero vector, dividing by 2, and substituting b for β gives the matrix form of the least squares normal equations in (5.55).

2. A comparison of the normal equations and X′X shows that whenever the columns of X′X are linearly dependent, the normal equations will be linearly dependent also. No unique solutions can then be obtained for b0 and b1. Fortunately, in most regression applications, the columns of X′X are linearly independent, leading to unique solutions for b0 and b1.

5.11 Fitted Values and Residuals

Fitted Values

Let the vector of the fitted values Ŷi be denoted by Ŷ:

(5.66)    Ŷ = [ Ŷ1 ]
         n×1  [ Ŷ2 ]
              [  ⋮ ]
              [ Ŷn ]

In matrix notation, we then have:

(5.67)    Ŷ = X b
         n×1 n×2 2×1

since:

    [ Ŷ1 ]   [ 1  X1 ]          [ b0 + b1 X1 ]
    [ Ŷ2 ] = [ 1  X2 ] [ b0 ] = [ b0 + b1 X2 ]
    [  ⋮ ]   [ ⋮   ⋮ ] [ b1 ]   [      ⋮     ]
    [ Ŷn ]   [ 1  Xn ]          [ b0 + b1 Xn ]


Section 5.11 Fitted Values and Residuals 203

Example. For the Toluca Company example, we obtain the vector of fitted values using the matrices in (5.57b) and (5.61):

(5.68)    Ŷ = Xb = [ 1  80 ]              [ 347.98 ]
                   [ 1  30 ] [ 62.37  ] = [ 169.47 ]
                   [ ⋮   ⋮ ] [ 3.5702 ]   [   ⋮    ]
                   [ 1  70 ]              [ 312.28 ]

The fitted values are the same, of course, as in Table 1.2.

Hat Matrix. We can express the matrix result for Ŷ in (5.67) as follows by using the expression for b in (5.56):

    Ŷ = X(X′X)⁻¹X′Y

or, equivalently:

(5.69)    Ŷ = H Y
         n×1 n×n n×1

where:

(5.69a)   H = X(X′X)⁻¹X′
         n×n

We see from (5.69) that the fitted values Ŷi can be expressed as linear combinations of the response variable observations Yi, with the coefficients being elements of the matrix H. The H matrix involves only the observations on the predictor variable X, as is evident from (5.69a).

The square n × n matrix H is called the hat matrix. It plays an important role in diagnostics for regression analysis, as we shall see in Chapter 9 when we consider whether regression results are unduly influenced by one or a few observations. The matrix H is symmetric and has the special property (called idempotency):

(5.70)    HH = H

In general, a matrix M is said to be idempotent if MM = M.

Residuals

Let the vector of the residuals ei = Yi − Ŷi be denoted by e:

(5.71)    e = [ e1 ]
         n×1  [ e2 ]
              [  ⋮ ]
              [ en ]


204 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

In matrix notation, we then have:

(5.72)    e = Y − Ŷ = Y − Xb
         n×1 n×1 n×1

Example. For the Toluca Company example, we obtain the vector of the residuals by using the results in (5.57a) and (5.68):

(5.73)    e = [ 399 ]   [ 347.98 ]   [  51.02 ]
              [ 121 ] − [ 169.47 ] = [ −48.47 ]
              [  ⋮  ]   [   ⋮    ]   [    ⋮   ]
              [ 323 ]   [ 312.28 ]   [  10.72 ]

The residuals are the same as in Table 1.2.
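The hat-matrix relations (5.69)–(5.72) can be verified on any small data set. A NumPy sketch with assumed data (not the Toluca figures):

```python
import numpy as np

# Assumed illustrative data (four cases; not from the text)
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
Y = np.array([2.0, 3.0, 5.0, 4.0])

# Hat matrix (5.69a): H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T

Y_hat = H @ Y                          # fitted values, (5.69)
b = np.linalg.solve(X.T @ X, X.T @ Y)  # least squares coefficients, (5.56)
e = Y - Y_hat                          # residuals, (5.72)

# H is symmetric and idempotent, (5.70)
assert np.allclose(H, H.T)
assert np.allclose(H @ H, H)
# The two routes to the fitted values agree: HY = Xb
assert np.allclose(Y_hat, X @ b)
```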

Variance-Covariance Matrix of Residuals. The residuals ei, like the fitted values Ŷi, can be expressed as linear combinations of the response variable observations Yi, using the result in (5.69) for Ŷ:

    e = Y − Ŷ = Y − HY = (I − H)Y

We thus have the important result:

(5.74)    e = (I − H) Y
         n×1   n×n   n×1

where H is the hat matrix defined in (5.69a). The matrix I − H, like the matrix H, is symmetric and idempotent.

The variance-covariance matrix of the vector of residuals e involves the matrix I − H:

(5.75)    σ²{e} = σ²(I − H)
           n×n

and is estimated by:

(5.76)    s²{e} = MSE(I − H)
           n×n

Note

The variance-covariance matrix of e in (5.75) can be derived by means of (5.46). Since e = (I − H)Y, we obtain:

    σ²{e} = (I − H)σ²{Y}(I − H)′

Now σ²{Y} = σ²{ε} = σ²I for the normal error model according to (5.52a). Also, (I − H)′ = I − H because of the symmetry of the matrix. Hence:

    σ²{e} = σ²(I − H)I(I − H)
          = σ²(I − H)(I − H)


Section 5.12 Analysis of Variance Results 205

In view of the fact that the matrix I − H is idempotent, we know that (I − H)(I − H) = I − H, and we obtain formula (5.75).

5.12 Analysis of Variance Results

Sums of Squares

To see how the sums of squares are expressed in matrix notation, we begin with the total sum of squares SSTO, defined in (2.43). It will be convenient to use an algebraically equivalent expression:

(5.77)    SSTO = Σ(Yi − Ȳ)² = ΣYi² − (ΣYi)²/n

We know from (5.13) that:

    Y′Y = ΣYi²

The subtraction term (ΣYi)²/n in matrix form uses J, the matrix of 1s defined in (5.18), as follows:

(5.78)    (ΣYi)²/n = (1/n)Y′JY

For instance, if n = 2, we have:

    (1/2) [ Y1  Y2 ] [ 1  1 ] [ Y1 ]   (Y1 + Y2)(Y1 + Y2)
                     [ 1  1 ] [ Y2 ] = ──────────────────
                                               2

Hence, it follows that:

(5.79)    SSTO = Y′Y − (1/n)Y′JY

Just as ΣYi² is represented by Y′Y in matrix terms, so SSE = Σei² = Σ(Yi − Ŷi)² can be represented as follows:

(5.80)    SSE = e′e = (Y − Xb)′(Y − Xb)

which can be shown to equal:

(5.80a)   SSE = Y′Y − b′X′Y

Finally, it can be shown that:

(5.81)    SSR = b′X′Y − (1/n)Y′JY


Chapter 5 Matrix Approach to Simple Linear Regression Analysis

Example. Let us find SSE for the Toluca Company example by matrix methods, using (5.80a). Using (5.57a), we obtain:

    Y′Y = [ 399  121  …  323 ] [ 399 ]
                               [ 121 ] = 2,745,173
                               [  ⋮  ]
                               [ 323 ]

and using (5.61) and (5.59), we find:

    b′X′Y = [ 62.37  3.5702 ] [ 7,807   ] = 2,690,348
                              [ 617,180 ]

Hence:

    SSE = Y′Y − b′X′Y = 2,745,173 − 2,690,348 = 54,825

which is the same result as that obtained in Chapter 1. Any difference would have been due to rounding effects.
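The same figure can be reproduced in NumPy from Y′Y and the summary matrices of (5.58)–(5.59):

```python
import numpy as np

XtX = np.array([[25.0, 1_750.0],
                [1_750.0, 142_300.0]])
XtY = np.array([7_807.0, 617_180.0])
YtY = 2_745_173.0                      # Y'Y for the Toluca data

b = np.linalg.solve(XtX, XtY)          # (5.56)

# (5.80a): SSE = Y'Y - b'X'Y
SSE = YtY - b @ XtY
print(round(float(SSE)))               # 54825
```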

Note

To illustrate the derivation of the sums of squares expressions in matrix notation, consider SSE:

    SSE = e′e = (Y − Xb)′(Y − Xb) = Y′Y − 2b′X′Y + b′X′Xb

In substituting for the rightmost b we obtain by (5.56):

    SSE = Y′Y − 2b′X′Y + b′X′X(X′X)⁻¹X′Y
        = Y′Y − 2b′X′Y + b′IX′Y

In dropping I and subtracting, we obtain the result in (5.80a).

Sums of Squares as Quadratic Forms

The ANOVA sums of squares can be shown to be quadratic forms. An example of a quadratic form of the observations Yi when n = 2 is:

(5.82)    5Y1² + 6Y1Y2 + 4Y2²

Note that this expression is a second-degree polynomial containing terms involving the squares of the observations and the cross product. We can express (5.82) in matrix terms as follows:

(5.82a)   [ Y1  Y2 ] [ 5  3 ] [ Y1 ]
                     [ 3  4 ] [ Y2 ] = Y′AY

where A is a symmetric matrix.


Section 5.12 Analysis of Variance Results 207

In general, a quadratic form is defined as:

(5.83)    Y′AY = Σi Σj aij Yi Yj        where aij = aji

A is a symmetric n × n matrix and is called the matrix of the quadratic form. The ANOVA sums of squares SSTO, SSE, and SSR are all quadratic forms, as can be seen by reexpressing b′X′. From (5.67), we know, using (5.32), that:

    b′X′ = (Xb)′ = Ŷ′

We now use the result in (5.69) to obtain:

    b′X′ = (HY)′

Since H is a symmetric matrix so that H′ = H, we finally obtain, using (5.32):

(5.84)    b′X′ = Y′H

This result enables us to express the ANOVA sums of squares as follows:

(5.85a)   SSTO = Y′[I − (1/n)J]Y

(5.85b)   SSE = Y′(I − H)Y

(5.85c)   SSR = Y′[H − (1/n)J]Y

Each of these sums of squares can now be seen to be of the form Y′AY, where the three A matrices are:

(5.86a)   I − (1/n)J

(5.86b)   I − H

(5.86c)   H − (1/n)J

Since each of these A matrices is symmetric, SSTO, SSE, and SSR are quadratic forms, with the matrices of the quadratic forms given in (5.86). Quadratic forms play an important role in statistics because all sums of squares in the analysis of variance for linear statistical models can be expressed as quadratic forms.
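The quadratic-form representations in (5.85)–(5.86) can be checked numerically. A NumPy sketch with assumed data:

```python
import numpy as np

# Assumed illustrative data (not from the text)
n = 4
X = np.column_stack([np.ones(n), np.array([1.0, 2.0, 3.0, 4.0])])
Y = np.array([2.0, 3.0, 5.0, 4.0])

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix (5.69a)
J = np.ones((n, n))                    # matrix of 1s

# The three quadratic forms with the A matrices of (5.86a)-(5.86c)
SSTO = Y @ (np.eye(n) - J / n) @ Y
SSE = Y @ (np.eye(n) - H) @ Y
SSR = Y @ (H - J / n) @ Y

# Agreement with the elementary definition, and the ANOVA decomposition
assert np.isclose(SSTO, np.sum((Y - Y.mean())**2))
assert np.isclose(SSTO, SSE + SSR)
```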


208 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

5.13 Inferences in Regression Analysis

As we saw in earlier chapters, all interval estimates are of the following form: point estimator plus and minus a certain number of estimated standard deviations of the point estimator. Similarly, all tests require the point estimator and the estimated standard deviation of the point estimator or, in the case of analysis of variance tests, various sums of squares. Matrix algebra is of principal help in inference making when obtaining the estimated standard deviations and sums of squares. We have already given the matrix equivalents of the sums of squares for the analysis of variance. We focus here chiefly on the matrix expressions for the estimated variances of point estimators of interest.

Regression Coefficients

The variance-covariance matrix of b:

(5.87)    σ²{b} = [ σ²{b0}      σ{b0, b1} ]
           2×2    [ σ{b1, b0}   σ²{b1}    ]

is:

(5.88)    σ²{b} = σ²(X′X)⁻¹
           2×2

or, using (5.24a):

(5.88a)   σ²{b} = [ σ²/n + X̄²σ²/Σ(Xi − X̄)²      −X̄σ²/Σ(Xi − X̄)² ]
                  [ −X̄σ²/Σ(Xi − X̄)²              σ²/Σ(Xi − X̄)²  ]

When MSE is substituted for σ² in (5.88a), we obtain the estimated variance-covariance matrix of b, denoted by s²{b}:

(5.89)    s²{b} = MSE(X′X)⁻¹ = [ MSE/n + X̄²MSE/Σ(Xi − X̄)²      −X̄MSE/Σ(Xi − X̄)² ]
           2×2                 [ −X̄MSE/Σ(Xi − X̄)²               MSE/Σ(Xi − X̄)²  ]

In (5.88a), you will recognize the variances of b0 in (2.22b) and of b1 in (2.3b) and the covariance of b0 and b1 in (4.5). Likewise, the estimated variances in (5.89) are familiar from earlier chapters.

Example. We wish to find s²{b0} and s²{b1} for the Toluca Company example by matrix methods. Using the results in Figure 2.2 and in (5.60), we obtain:

(5.90)    s²{b} = MSE(X′X)⁻¹ = 2,384 [  .287475   −.003535   ] = [ 685.34   −8.428 ]
                                     [ −.003535    .00005051 ]   [ −8.428    .12040 ]
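Result (5.90) follows from (5.89) with MSE = 2,384 and the X′X of (5.58); in NumPy:

```python
import numpy as np

XtX = np.array([[25.0, 1_750.0],
                [1_750.0, 142_300.0]])
MSE = 2_384.0                          # Toluca MSE, from Chapter 2

# (5.89): s^2{b} = MSE (X'X)^{-1}
s2_b = MSE * np.linalg.inv(XtX)

print(np.round(s2_b, 5))
# s^2{b0} is about 685.34, s^2{b1} about .12040, s{b0,b1} about -8.428
```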


Section 5.13 Inferences in Regression Analysis 209

Thus, s²{b0} = 685.34 and s²{b1} = .12040. These are the same as the results obtained in Chapter 2.

Note

To derive the variance-covariance matrix of b, recall that:

    b = (X′X)⁻¹X′Y = AY

where A is a constant matrix:

    A = (X′X)⁻¹X′

Hence, by (5.46) we have:

    σ²{b} = Aσ²{Y}A′

Now σ²{Y} = σ²I. Further, it follows from (5.32) and the fact that (X′X)⁻¹ is symmetric that:

    A′ = X(X′X)⁻¹

We find therefore:

    σ²{b} = (X′X)⁻¹X′ σ²I X(X′X)⁻¹
          = σ²(X′X)⁻¹X′X(X′X)⁻¹
          = σ²(X′X)⁻¹I
          = σ²(X′X)⁻¹

Mean Response

To estimate the mean response at Xh, let us define the vector:

(5.91)    Xh = [ 1  ]      or      Xh′ = [ 1  Xh ]
         2×1   [ Xh ]

The fitted value in matrix notation then is:

(5.92)    Ŷh = Xh′b

since:

    Xh′b = [ 1  Xh ] [ b0 ] = [ b0 + b1 Xh ] = [ Ŷh ]
                     [ b1 ]

Note that Xh′b is a 1 × 1 matrix; hence, we can write the final result as a scalar.

The variance of Ŷh, given earlier in (2.29b), in matrix notation is:

(5.93)    σ²{Ŷh} = σ²Xh′(X′X)⁻¹Xh

The variance of Ŷh in (5.93) can be expressed as a function of σ²{b}, the variance-covariance matrix of the estimated regression coefficients, by making use of the result in (5.88):

(5.93a)   σ²{Ŷh} = Xh′σ²{b}Xh


210 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

The estimated variance of Ŷh, given earlier in (2.30), in matrix notation is:

(5.94)    s²{Ŷh} = MSE(Xh′(X′X)⁻¹Xh)

Example. We wish to find s²{Ŷh} for the Toluca Company example when Xh = 65. We define:

    Xh′ = [ 1  65 ]

and use the result in (5.90) to obtain:

    s²{Ŷh} = Xh′s²{b}Xh = [ 1  65 ] [ 685.34   −8.428 ] [ 1  ] = 98.37
                                    [ −8.428    .12040 ] [ 65 ]

This is the same result as that obtained in Chapter 2.
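The same arithmetic in NumPy, carrying full precision in (X′X)⁻¹ rather than the rounded entries of (5.90):

```python
import numpy as np

XtX = np.array([[25.0, 1_750.0],
                [1_750.0, 142_300.0]])
MSE = 2_384.0
xh = np.array([1.0, 65.0])             # Xh vector of (5.91) with Xh = 65

# (5.94): s^2{Yhat_h} = MSE * Xh'(X'X)^{-1} Xh
s2_yhat_h = MSE * (xh @ np.linalg.solve(XtX, xh))

print(round(float(s2_yhat_h), 2))      # 98.37
```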

Note

The result in (5.93a) can be derived directly by using (5.46), since Ŷh = Xh′b. Hence:

    σ²{Ŷh} = Xh′σ²{b}Xh

Writing this out:

    σ²{Ŷh} = [ 1  Xh ] [ σ²{b0}      σ{b0, b1} ] [ 1  ]
                       [ σ{b1, b0}   σ²{b1}    ] [ Xh ]

or:

(5.95)    σ²{Ŷh} = σ²{b0} + 2Xh σ{b0, b1} + Xh² σ²{b1}

Using the results from (5.88a), we obtain:

    σ²{Ŷh} = σ²/n + X̄²σ²/Σ(Xi − X̄)² − 2Xh X̄σ²/Σ(Xi − X̄)² + Xh²σ²/Σ(Xi − X̄)²

which reduces to the familiar expression:

(5.95a)   σ²{Ŷh} = σ²[1/n + (Xh − X̄)²/Σ(Xi − X̄)²]

Thus, we see explicitly that the variance expression in (5.95a) contains contributions from σ²{b0}, σ²{b1}, and σ{b0, b1}, which it must according to theorem (A.30b) since Ŷh = b0 + b1Xh is a linear combination of b0 and b1.

Prediction of New Observation

The estimated variance s²{pred}, given earlier in (2.38), in matrix notation is:

(5.96)    s²{pred} = MSE(1 + Xh′(X′X)⁻¹Xh)
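A quick check of (5.96) for the Toluca figures, again taking Xh = 65 (the choice of level is illustrative); the added 1 inside the parentheses contributes the extra MSE that reflects the variability of a new observation around its mean:

```python
import numpy as np

XtX = np.array([[25.0, 1_750.0],
                [1_750.0, 142_300.0]])
MSE = 2_384.0
xh = np.array([1.0, 65.0])

# (5.96): s^2{pred} = MSE * (1 + Xh'(X'X)^{-1} Xh)
s2_pred = MSE * (1.0 + xh @ np.linalg.solve(XtX, xh))

print(round(float(s2_pred), 2))        # 2482.37, i.e. MSE + s^2{Yhat_h}
```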


Problems 211

Cited Reference

Graybill, F. A. Matrices with Applications in Statistics. 2nd ed. Belmont, Calif.: Wadsworth, 1982.

Problems

5.1. For the matrices below, obtain (1) A + B, (2) A − B, (3) AC, (4) AB′, (5) B′A. State the dimension of each resulting matrix.

    A = [ 1  4 ]      B = [ 1  3 ]      C = [ 3 ]
        [ 2  6 ]          [ 1  4 ]          [ 5 ]
        [ 3  8 ]          [ 2  5 ]

5.2. For the matrices below, obtain (1) A + C, (2) A − C, (3) B′A, (4) AC′, (5) C′A. State the dimension of each resulting matrix.

5.3. Show how the following expressions are written in terms of matrices: (1) Yi − Ŷi = ei, (2) ΣXiei = 0. Assume i = 1, …, 4.

5.4. Flavor deterioration. The results shown below were obtained in a small-scale experiment to study the relation between °F of storage temperature (X) and number of weeks before flavor deterioration of a food product begins to occur (Y):

    i:     1     2     3     4     5
    Xi:    8     4     0    −4    −8
    Yi:   7.8   9.0  10.2  11.0  11.7

Assume that first-order regression model (2.1) is applicable. Using matrix methods, find (1) Y′Y, (2) X′X, (3) X′Y.

5.5. Consumer finance. The data below show, for a consumer finance company operating in six cities, the number of competing loan companies operating in the city (X) and the number per thousand of the company's loans made in that city that are currently delinquent (Y):

    i:     1    2    3    4    5    6
    Xi:    4    1    2    3    3    4
    Yi:   16    5   10   15   13   22

Assume that first-order regression model (2.1) is applicable. Using matrix methods, find (1) Y′Y, (2) X′X, (3) X′Y.


212 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

5.6. Refer to Airfreight breakage Problem 1.21. Using matrix methods, find (1) Y′Y, (2) X′X, (3) X′Y.

5.7. Refer to Plastic hardness Problem 1.22. Using matrix methods, find (1) Y′Y, (2) X′X, (3) X′Y.

5.8. Let B be defined as follows:

a. Are the column vectors of B linearly dependent?

b. What is the rank of B?

c. What must be the determinant of B?

5.9. Let A be defined as follows:

-1r - l8 l

A:10 3 l l1055J

a. Are the column vectors of A linearly dependent?

b. Restate definition (5.20) in terms of row vectors. Are the row vectors of A linearly dependent?

c. What is the rank of A?

d. Calculate the determinant of A.

5.10. Find the inverse of each of the following matrices:

" : [ i ;31Lr 0 5J

^: [3 i ]l+

B: | 6

Lr0

3215 10 Ir6J

Check in each case that the resulting matrix is indeed the inverse.

5.11. Find the inverse of the following matrix:

    A = [ 5  1  3 ]
        [ 4  0  5 ]
        [ 1  9  6 ]

Check that the resulting matrix is indeed the inverse.

5.12. Refer to Flavor deterioration Problem 5.4. Find (X′X)⁻¹.

5.13. Refer to Consumer finance Problem 5.5. Find (X′X)⁻¹.

5.14. Consider the simultaneous equations:

    4y1 + 7y2 = 25
    2y1 + 3y2 = 12

a. Write these equations in matrix notation.


Problems 213

b. Using matrix methods, find the solutions for y1 and y2.

5.15. Consider the simultaneous equations:

    5y1 + 2y2 = 8
    23y1 + 7y2 = 28

a. Write these equations in matrix notation.
b. Using matrix methods, find the solutions for y1 and y2.

5.16. Consider the estimated linear regression function in the form of (1.15). Write expressions in this form for the fitted values Ŷi in matrix terms for i = 1, …, 5.

5.17. Consider the following functions of the random variables Y1, Y2, and Y3:

    W1 = Y1 + Y2 + Y3
    W2 = Y1 − Y2
    W3 = Y1 − Y2 − Y3

a. State the above in matrix notation.
b. Find the expectation of the random vector W.
c. Find the variance-covariance matrix of W.

5.18. Consider the following functions of the random variables Y1, Y2, Y3, and Y4:

    W1 = (1/4)(Y1 + Y2 + Y3 + Y4)
    W2 = (1/2)(Y1 + Y2) − (1/2)(Y3 + Y4)

a. State the above in matrix notation.
b. Find the expectation of the random vector W.
c. Find the variance-covariance matrix of W.

5.19. Find the matrix A of the quadratic form:

    3Y1² + 10Y1Y2 + 17Y2²

5.20. Find the matrix A of the quadratic form:

    7Y1² − 8Y1Y2 + 8Y2²

5.21. For the matrix:

    A = [ 5  2 ]
        [ 2  1 ]

find the quadratic form of the observations Y1 and Y2.

5.22. For the matrix:

    A = [ 1  0  4 ]
        [ 0  3  0 ]
        [ 4  0  9 ]

find the quadratic form of the observations Y1, Y2, and Y3.



214 Chapter 5 Matrix Approach to Simple Linear Regression Analysis

5.23. Refer to Flavor deterioration Problems 5.4 and 5.12.

a. Using matrix methods, obtain the following: (1) vector of estimated regression coefficients, (2) vector of residuals, (3) SSR, (4) SSE, (5) estimated variance-covariance matrix of b, (6) point estimate of E{Yh} when Xh = −6, (7) estimated variance of Ŷh when Xh = −6.
b. What simplifications arose from the spacing of the X levels in the experiment?
c. Find the hat matrix H.
d. Find s²{e}.

5.24. Refer to Consumer finance Problems 5.5 and 5.13.

a. Using matrix methods, obtain the following: (1) vector of estimated regression coefficients, (2) vector of residuals, (3) SSR, (4) SSE, (5) estimated variance-covariance matrix of b, (6) point estimate of E{Yh} when Xh = 4, (7) s²{pred} when Xh = 4.
b. From your estimated variance-covariance matrix in part (a5), obtain the following: (1) s{b0, b1}; (2) s²{b1}; (3) s{b0}.
c. Find the hat matrix H.
d. Find s²{e}.

5.25. Refer to Airfreight breakage Problems 1.21 and 5.6.

a. Using matrix methods, obtain the following: (1) (X′X)⁻¹, (2) b, (3) e, (4) H, (5) SSE, (6) s²{b}, (7) Ŷh when Xh = 2, (8) s²{Ŷh} when Xh = 2.
b. From part (a6), obtain the following: (1) s²{b1}; (2) s{b0, b1}; (3) s{b0}.
c. Find the matrix of the quadratic form for SSR.

5.26. Refer to Plastic hardness Problems 1.22 and 5.7.

a. Using matrix methods, obtain the following: (1) (X′X)⁻¹, (2) b, (3) Ŷ, (4) H, (5) SSE, (6) s²{b}, (7) s²{pred} when Xh = 30.
b. From part (a6), obtain the following: (1) s²{b0}; (2) s{b0, b1}; (3) s{b1}.
c. Obtain the matrix of the quadratic form for SSE.

Exercises

5.27. Refer to regression-through-the-origin model (4.10). Set up the expectation vector for ε. Assume that i = 1, …, 4.

5.28. Consider model (4.10) for regression through the origin and the estimator b1 given in (4.14). Obtain (4.14) by utilizing (5.56) with X suitably defined.

5.29. Consider the least squares estimator b given in (5.56). Using matrix methods, show that b is an unbiased estimator.

5.30. Show that Ŷh in (5.92) can be expressed in matrix terms as b′Xh.

5.31. Obtain an expression for the variance-covariance matrix of the fitted values Ŷi, i = 1, …, n, in terms of the hat matrix.
