TECHNIQUES OF COVARIANCE STRUCTURAL ANALYSIS*

Austral. J. Statist., 15 (3), 1976, 131-150

TECHNIQUES OF COVARIANCE STRUCTURAL ANALYSIS

BISHWA NATH MUKHERJEE, Council for Social Development, New Delhi, India

The paper delineates the scope and important features of covariance structural analysis, in which some pattern can be postulated a priori for the covariance matrix. The coverage given also includes a brief review of the published work in this area, with illustrations of statistical tests based on the principles of maximum likelihood and union-intersection. New results on the computational ease afforded by the assumption of particular covariance structures are reported. A few unsolved problems in the area of structural analysis are briefly mentioned.

1. Introduction

The problem of statistical inference for multivariate normal populations has been considered in detail by Anderson (1958), Roy (1957), Kendall (1961) and, more recently, by Morrison (1967) and Dempster (1969). However, most of the multivariate problems considered by these authors arise from the situation where the parameters are completely unknown and no restriction is imposed on them. A much more difficult situation arises when the structure of the population variance covariance matrix Σ is assumed to be known a priori, so that any element may be specified in advance to be equal to some other element or elements of the same matrix. An arbitrary covariance matrix is a linear combination of ½p(p+1) = c known matrices, with constraints on the coefficients to make it positive definite. In the case of structured (patterned) covariance matrices, additional restrictions are imposed.

This paper presents a discussion of the procedures involved in testing hypotheses for multivariate normal populations regarding the structure of such patterned variance covariance matrices. The testing of such hypotheses, where the variance covariance matrix is constrained to be of a particular form, will hereafter be referred to as 'covariance structural analysis'. The coverage in this paper includes a brief review of some relevant published work and a discussion of certain important consequences of assuming structure in the variance covariance matrix. A number of illustrations are presented to show how standard techniques of hypothesis testing, based on either the principle of maximum likelihood or Roy's (1953) principle of union-intersection, can be used for constructing statistical tests of certain types of covariance structures. It has been conjectured in this connection that the test for equality of the mean vectors, assuming a particular non-singular covariance structure, is in general more powerful than the corresponding unrestricted test. Other applications of the technique are briefly discussed.

* Manuscript received July 16, 1976; revised January 12, 1976.



2. Hypotheses About Covariance Structures

The statistical techniques which are primarily concerned with testing hypotheses regarding the pattern of variance covariance matrices are now known as "structural analysis" (Bock, 1960). Covariance structural analysis aims at testing whether or not an observed variance covariance matrix is consistent with the population variance covariance matrix deduced from the constrained structure of a p-dimensional vector. 'Covariance structural analysis' is a generic term describing a variety of statistical procedures for testing the goodness of fit of certain types of structures postulated a priori for the covariance matrix, by placing alternative restrictions on the parameter matrices of the general model.

The techniques of covariance structural analysis should not be construed as necessarily the same as are involved in either the study of structural relationships (Kendall and Stuart, 1967, p. 374) or in the evaluation of a structural probability (Fraser, 1968). It is a kind of pattern analysis, but not of the form of configural analysis which McQuitty (1956) developed for psychological test construction. It is the pattern in the covariance or correlation matrix, rather than patterns of response in the raw data, which is of primary concern in structural analysis.

A population variance covariance matrix Σ of order p×p will be called a patterned sigma matrix if it can be expressed as

(2.1) Σ = Σ_{j=1}^{m} a_j B_j,

where the a_j's are m unknown nonzero scalars, some or all of which may be equal, and the B_j's are p×p matrices having elements which are known real numbers, such that Σ is positive definite and m ≤ c.

As a specific illustration, consider a p-dimensional random vector X_{1i}, X_{2i}, …, X_{pi}, observed on individual i drawn randomly from a population. Suppose we assume that the observational vector has a linear composition of the form

(2.2) X_i = T f_i + e_i,

where X_i is a p×1 vector of individual i's scores on p tests, f_i is an m×1 vector of unknown coefficients, which may be interpreted as a set of latent variables, T is a p×m matrix of known constants, which may be conceived as the design matrix of the experiment, and e_i is the error vector in X_i. It is further assumed that the errors are uncorrelated with each other, with a dispersion (error variance) matrix D² = diag(ξ₁², ξ₂², …, ξ_p²), and similarly that the vector of latent variables, f_i, is orthogonal in the population of individuals and independently distributed of the error vector e_i. Under these structural assumptions, the population variance covariance matrix of the p-dimensional observational vector (2.2) can be written as

Σ = E(X_i X_i′) = T E(f_i f_i′) T′ + D²,

or

(2.3) Σ_{p×p} = T_{p×m} Λ_{m×m} T′_{m×p} + D²_{p×p},

where the form of Λ is diagonal because of the orthogonality assumption involved in the distribution of f_i, and the symbol E denotes the mathematical symbol of expectation.



A specific example of covariance structure arises when T is a p×p lower triangular matrix with all non-zero elements equal to unity. Then the variance-covariance matrix (2.3) with Λ = diag(δ₁, δ₂, …, δ_p), D² = diag(ξ₁², ξ₂², …, ξ_p²) and γ_j = Σ_{k=1}^{j} δ_k, for all j = 1, 2, …, p, will have the following type of Guttman quasi-simplex structure:

(2.3a) σ_jk = γ_j, if j < k,
            = γ_j + ξ_j², if j = k,
            = γ_k, if j > k.

Thus, given the matrix T, the explicit pattern of Σ can be specified from (2.2) and all assumptions connected with (2.2). Under multivariate normal assumptions, a test of the hypothesis that Σ can be written as a linear combination of certain estimable matrices of the form (2.1) is constructed against the alternative hypothesis, H_a, that Σ is a positive definite matrix. In general, the dispersion matrix in structural analysis is assumed to be non-singular, so that it can be taken for granted that its sample estimate S is non-singular (apart from the collinear sample that occurs with probability zero).
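As a numerical sketch, the construction (2.3) with a unit lower-triangular T can be checked against the quasi-simplex pattern (2.3a); the increments δ_j and error variances ξ_j² below are hypothetical values chosen only for illustration.

```python
import numpy as np

p = 4
delta = np.array([2.0, 1.5, 1.0, 0.5])   # hypothetical variances of the latent increments
xi2 = np.array([0.3, 0.3, 0.3, 0.3])     # hypothetical error variances

T = np.tril(np.ones((p, p)))             # unit lower-triangular design matrix
Lam = np.diag(delta)
D2 = np.diag(xi2)

Sigma = T @ Lam @ T.T + D2               # equation (2.3)

# Check the quasi-simplex pattern (2.3a): sigma_jk = gamma_min(j,k) off the
# diagonal and gamma_j + xi_j^2 on it, where gamma_j = delta_1 + ... + delta_j.
gamma = np.cumsum(delta)
for j in range(p):
    for k in range(p):
        expected = gamma[min(j, k)] + (xi2[j] if j == k else 0.0)
        assert np.isclose(Sigma[j, k], expected)
print(Sigma)
```

The assertions confirm that every off-diagonal element depends only on the earlier of its two indices, which is exactly the Guttman quasi-simplex property.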

Covariance Structural Analysis and Exploratory Factor Analysis

In order to amplify some of the distinguishing features of structural analysis, the main difference between covariance structural analysis and exploratory factor analysis (Mukherjee, 1973) will be pointed out in this section. For this purpose, a model due to Joreskog (1970), which is more general than (2.2), is considered, in which the vector of observed scores for the individual i drawn from a multivariate normal population is decomposed as

(2.4) X_i = μ + Bτ_i + e_i,

where E(X_i) = μ and E(τ_i) = E(e_i) = 0, the e_i being uncorrelated within the population of individuals. Assuming the true score components τ_i to be random and mutually independent, and also independent of all the error scores e_i, it is found that the dispersion matrices have

(2.5a) E(τ_i τ_i′) = Φ,

(2.5b) E(e_i e_i′) = Ψ.

From these, and the independence assumptions stated earlier, X_i is multivariate normal with a dispersion matrix

(2.6) Σ = E(Bτ_i + e_i)(Bτ_i + e_i)′ = BΦB′ + Ψ,

which is similar in form to (2.3) in all respects excepting that Φ is not necessarily diagonal as Λ of (2.3), and the matrix B is not of full rank, its rank being r where r < p. When the matrix B of order p×r is not entirely known, (2.6) represents a factor analytic model with r
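A minimal sketch of the structure (2.6) may make the rank condition concrete; the loading matrix B and the dispersion matrices Φ and Ψ below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

p, r = 5, 2
B = rng.standard_normal((p, r))           # hypothetical known loading matrix, r < p
Phi = np.array([[1.0, 0.4],
                [0.4, 1.0]])              # true-score dispersion, not necessarily diagonal
Psi = np.diag(rng.uniform(0.2, 0.5, p))   # diagonal error dispersion

Sigma = B @ Phi @ B.T + Psi               # equation (2.6)

# Sigma is symmetric positive definite, while the common part B Phi B'
# alone has only rank r, as stated in the text.
assert np.allclose(Sigma, Sigma.T)
assert np.linalg.matrix_rank(B @ Phi @ B.T) == r
assert np.all(np.linalg.eigvalsh(Sigma) > 0)
```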



correlated non-overlapping factors τ_i, which can be conceived as r true scores for the ith individual. In covariance structural analysis, the matrix B may not necessarily be non-singular, but it is always known. This is also true of the B_j matrices shown in (2.1). The elements in the matrix B are known or hypothesized constants, as in a design (model) matrix.

The Scope of Covariance Structural Analysis

From the foregoing discussion, it is apparent that there are three major phases of work involved in a particular structural analysis procedure. They are (a) to determine the form of the expected (population) variance covariance matrix, Σ, for a given set of variables (more than two) under a specific set of assumptions regarding their compositions and distributions, (b) to develop procedures for estimating the unknown parameters of the p×p expected variance covariance matrix, Σ, and (c) to develop suitable statistical tests for testing the hypothesis that

(2.7) H₀ : Σ = Σ₀ = Σ_{j=1}^{m} a_j B_j

against the alternative hypothesis

(2.7a) H_a : Σ is any symmetric positive definite matrix ≠ Σ₀,

where the a_j are m unknown parameters and the B_j are m known coefficient matrices of order p×p; the value of m can be equal to or less than or even greater than p, but is generally much less than c = ½p(p+1).

An Illustration of the Likelihood Ratio Test

In order to discuss and illustrate a simple case of estimation and hypothesis testing in covariance structural analysis, consider the problem of testing (2.7) under the multivariate normal assumption. For the sake of convenience, suppose further that Σ₀ in (2.7) is a p×p patterned matrix of full rank which has m distinct characteristic roots when m < p. The alternative (2.7a) is that Σ has no pattern as such and the matrix is positive definite. One method of obtaining a test of H₀ (2.7) against H_a (2.7a) is to derive the m.l. estimate of Σ under the null hypothesis and compute the lambda criterion λ. The procedure for obtaining a likelihood ratio criterion (see Anderson, 1958, p. 91) is to derive first the m.l. estimate of Σ under H_a and then evaluate the maximum of this likelihood under the restriction imposed by the null hypothesis. Let L(H₀) and L(H_a) denote the maximum value of the likelihood of X′ = (X₁, …, X_N) under H₀ and H_a, respectively. The m.l. estimate of the elements of Σ is obtained under H_a when the parameters are allowed to vary over the entire parameter space. It is well known that when no restrictions are imposed, the estimators of the parameters are the sample values. The maximum of L under H_a is therefore

(2.8) L(H_a) = (2π)^{−pN/2} |S|^{−N/2} exp(−pN/2),

where S = (s_jk) is the sample dispersion matrix and s_jk is the element of the jth row and kth column of the sample dispersion matrix obtained by the formula



(2.8a) s_jk = N⁻¹ Σ_{i=1}^{N} (x_ji − x̄_j)(x_ki − x̄_k),

for all i = 1, 2, …, N and j, k = 1, 2, …, p, when p < N. The likelihood-ratio criterion is the quotient of these two maxima, the first maximum, L(H₀), divided by the second maximum, L(H_a). Thus the Neyman-Pearson likelihood ratio test of H₀ against H_a is given by
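Note that (2.8a) uses the divisor N rather than N − 1, i.e. the biased sample covariance. A minimal sketch, with hypothetical data, showing the formula and its equivalence to the standard library computation:

```python
import numpy as np

# Hypothetical data: N individuals (rows) on p variables (columns).
rng = np.random.default_rng(1)
N, p = 200, 3
X = rng.standard_normal((N, p))

# Equation (2.8a): s_jk = N^{-1} * sum_i (x_ji - xbar_j)(x_ki - xbar_k).
Xc = X - X.mean(axis=0)      # centre each variable
S = Xc.T @ Xc / N            # divisor N, as in (2.8a)

# Agrees with numpy's biased covariance estimator.
assert np.allclose(S, np.cov(X, rowvar=False, bias=True))
print(S)
```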

(2.9) λ = L(H₀)/L(H_a).

Given a sample X₁, X₂, …, X_N from N(μ, Σ₀), and setting μ̂ = X̄, the log likelihood function of Σ₀, after being divided by N/2, can be written as

(2.10) 2N⁻¹ log L(μ̂, Σ₀) = −p log(2π) − log|Σ₀| − tr(Σ₀⁻¹S),

where tr(A) denotes the trace of matrix A, and S denotes the sample variance covariance matrix obtained from (2.8a).

On differentiating (2.10) with respect to a_j (j = 1, 2, …, m), we obtain

(2.11) 2N⁻¹ ∂ log L/∂a_j = −tr(Σ₀⁻¹ ∂Σ₀/∂a_j) + tr(Σ₀⁻¹ S Σ₀⁻¹ ∂Σ₀/∂a_j),

which can also be written as

(2.12) 2N⁻¹ ∂ log L/∂a_j = −tr(Σ₀⁻¹ B_j) + tr(Σ₀⁻¹ S Σ₀⁻¹ B_j),

where B_j is the known matrix corresponding to a_j in (2.7). Setting the partial derivative equal to zero, we express (2.12) as

(2.13) tr[(Σ₀⁻¹ − Σ₀⁻¹ S Σ₀⁻¹) B_j] = 0 for all j = 1, 2, …, m.

In order to solve (2.13) for the m.l. estimates of a_j for all j = 1, 2, …, m, from the knowledge of the sample variance covariance matrix S, it may be instructive under certain conditions to make use of the spectral decomposition theorem (Perlis, 1952, p. 175) for symmetric matrices having only m distinct nonzero characteristic roots, γ₁, γ₂, …, γ_m, where γ_j appears n_j times, whence Σ n_j = p. This is done in order to express Σ₀, of order p, in the form


(2.14) Σ₀ = γ₁E₁ + γ₂E₂ + … + γ_mE_m,

where the p×p matrices E₁, E₂, …, E_m have the following properties:

(2.14a) (a) E_j² = E_j for all j = 1, 2, …, m (idempotency); (b) E₁ + E₂ + … + E_m = I_p; and (c) E_j E_k = 0 whenever j ≠ k.

From (2.14) and all the properties discussed above, it immediately follows that



(2.15) Σ₀⁻¹ = γ₁⁻¹E₁ + γ₂⁻¹E₂ + … + γ_m⁻¹E_m,

where γ₁, γ₂, …, γ_m are the characteristic roots of Σ₀. Since (2.14) is identically of the same form as (2.7), assume for the purpose of illustration that a_j = γ_j and B_j = E_j for all j = 1, 2, …, m, where m < p and the E_j's are known orthogonal idempotent matrices as defined above. Using (2.15), we can then write (2.13) as

(2.16) tr(E_j − Σ₀⁻¹ S E_j) = 0 for all j = 1, 2, …, m,

because of properties (a) and (c) in (2.14a). Expanding Σ₀⁻¹ again in terms of (2.15), noting that tr AC = tr CA, and again using properties (a) and (c) in (2.14a), we obtain after simplification

(2.17) tr(E_j − γ_j⁻¹ S E_j) = 0, or γ̂_j = tr(S E_j)/tr(E_j), for all j = 1, 2, …, m.

Thus, the procedure for obtaining the m.l. estimates of the m parameters from equation (2.17) turns out to be very simple. The example to follow is also a fairly routine application of the likelihood ratio approach to testing (2.7) in the case where the a_j are unconstrained and the B_j are known orthogonal idempotent matrices. In the event m > p, the use of the spectral decomposition theorem will not be possible, and other approaches should be used for obtaining explicit solutions for the estimable parameters of the covariance structures. For example, in the case of the general Guttman quasi-simplex covariance structure, the 2p − 1 parameters are estimated by the m.l. procedure through different kinds of short-cuts and simplifications (Mukherjee, 1966).
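A minimal numerical sketch of the estimator (2.17): the two orthogonal idempotents below are those of the worked example that follows, and the sample dispersion matrix S is a hypothetical one.

```python
import numpy as np

# Orthogonal idempotents E1, E2 on p = 3 variables (the pair used in the
# worked example (2.18) of the text).
E1 = 0.5 * np.array([[1.0, 0.0, 1.0],
                     [0.0, 2.0, 0.0],
                     [1.0, 0.0, 1.0]])
E2 = 0.5 * np.array([[1.0, 0.0, -1.0],
                     [0.0, 0.0, 0.0],
                     [-1.0, 0.0, 1.0]])

# Verify the properties (2.14a): idempotency, orthogonality, resolution of I.
assert np.allclose(E1 @ E1, E1) and np.allclose(E2 @ E2, E2)
assert np.allclose(E1 @ E2, 0) and np.allclose(E1 + E2, np.eye(3))

S = np.array([[2.0, 0.1, 0.6],   # hypothetical sample dispersion matrix
              [0.1, 2.2, 0.2],
              [0.6, 0.2, 1.9]])

# Equation (2.17): gamma_hat_j = tr(S E_j) / tr(E_j).
g_hat = [np.trace(S @ E) / np.trace(E) for E in (E1, E2)]
print(g_hat)  # -> [2.375, 1.35]
```

The estimate for each root is simply an average of the sample dispersions over the corresponding eigenspace, which is what makes the computation so light.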

Now, for the purpose of illustrating the actual steps in constructing the likelihood ratio test of the H₀ (2.7), consider a patterned matrix of the following form, where a > b:

(2.18a) Σ₀ =
[ a+b   0   a−b
   0   2a    0
  a−b   0   a+b ],

which can be decomposed as

(2.18b) Σ₀ = 2a·½ [ 1 0 1 ; 0 2 0 ; 1 0 1 ] + 2b·½ [ 1 0 −1 ; 0 0 0 ; −1 0 1 ] = 2aE₁ + 2bE₂,

where E₁² = E₁, E₂² = E₂, E₁E₂ = 0, E₁ + E₂ = I, and λ₁ = 2a appears with multiplicity two. Therefore, the inverse of Σ₀ will be:

(2.19) Σ₀⁻¹ = λ₁⁻¹E₁ + λ₂⁻¹E₂.

Using (2.17), and noting that tr E₁ = 2 and tr E₂ = 1 in (2.18b), it can be readily seen that the m.l. estimates of a and b will be given by



(2.20) â = ½λ̂₁ = ⅛(s₁₁ + 2s₂₂ + s₃₃ + 2s₃₁).

Also,

(2.21) b̂ = ½λ̂₂ = ¼(s₁₁ − 2s₃₁ + s₃₃).

Thus, from the knowledge of the sample variance covariance matrix S = (s_jk), as defined in (2.8a), the estimates of a and b are given by (2.20) and (2.21) respectively. It can be seen that the determinant of Σ₀ is 8a²b. Let the determinant of the sample variance covariance matrix be |S| and N represent the sample size. Then the result of the likelihood ratio test of H₀, as shown in (2.9), can be written as

(2.22) −2 log λ = N log(|Σ̂₀|/|S|),

which under the null hypothesis is asymptotically distributed as a χ² statistic with ½p(p+1) − m degrees of freedom, where m is the number of estimable parameters in Σ₀ and Σ₀ is replaced by Σ̂₀, the m.l. estimate of Σ under H₀. It is to be noted that (2.22) reduces to a very simple form because the numerator of (2.22) involves the term −½N tr S Σ₀⁻¹. But following the m.l. equation (2.16) and summing over all j = 1, 2, …, m, it is seen that tr S Σ̂₀⁻¹ = p, and hence the exponential terms in the numerator and denominator cancel with each other.
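The whole test can be sketched numerically. The sample matrix S and sample size N below are hypothetical; scipy is used only for the χ² tail probability.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical sample dispersion matrix and sample size.
S = np.array([[2.0, 0.1, 0.6],
              [0.1, 2.2, 0.2],
              [0.6, 0.2, 1.9]])
N, p, m = 100, 3, 2

# m.l. estimates (2.20) and (2.21).
a_hat = (S[0, 0] + 2 * S[1, 1] + S[2, 2] + 2 * S[2, 0]) / 8.0
b_hat = (S[0, 0] - 2 * S[2, 0] + S[2, 2]) / 4.0

Sigma0_hat = np.array([[a_hat + b_hat, 0.0, a_hat - b_hat],
                       [0.0, 2 * a_hat, 0.0],
                       [a_hat - b_hat, 0.0, a_hat + b_hat]])

# det(Sigma0_hat) = 8 a^2 b, and tr(S Sigma0_hat^{-1}) = p, as noted in the text.
assert np.isclose(np.linalg.det(Sigma0_hat), 8 * a_hat**2 * b_hat)
assert np.isclose(np.trace(S @ np.linalg.inv(Sigma0_hat)), p)

# -2 log lambda = N log(|Sigma0_hat| / |S|), asymptotically chi-square.
stat = N * np.log(np.linalg.det(Sigma0_hat) / np.linalg.det(S))
df = p * (p + 1) // 2 - m
print(stat, df, chi2.sf(stat, df))
```

A large tail probability here would indicate that the sample dispersion matrix is consistent with the hypothesized pattern (2.18a).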

As has been stated earlier, acceptance of the null hypothesis (2.7) in covariance structural analysis does not necessarily imply the tenability of the model regarding the composition of the observed variables. However, the analysis clarifies the latent sources of variation and covariation in non-experimental data on p normally distributed variables from only the knowledge of their sample variances and covariances. It should also be noted that (2.22) continues to hold irrespective of whether or not the roots of Σ₀ are equal.

3. A Brief Review of Work in Covariance Structural Analysis

The importance of studying covariance structures has been keenly felt since Wilks (1946) attempted to derive sample criteria for testing equality of covariances in a normal multivariate population. However, comparatively few statisticians (Anderson, 1969, 1970; Bargmann, 1957, 1967; Bock and Bargmann, 1966; Gabriel, 1954, 1962; Geisser, 1963, 1964; Greenberg and Sarhan, 1959, 1960; Han, 1968; Joreskog, 1970, 1973; Kullback, 1967; Lawley, 1963; Mukherjee, 1966, 1970, 1973; Olkin and Press, 1969; J. Roy, 1951, 1954; Roy and Sarhan, 1956; Srivastava, 1966; Styan, 1969; Wiley, Schmidt and Bramble, 1973) have contributed to this new area of multivariate analysis. As is widely known in psychometrics, a suitable statistical test is needed to assess whether a set of psychological tools (tests) having items of the same nature are "parallel" to each other. In addition to the equality of means, variances, and covariances, such tools should also have equal validities for predicting any particular criterion. If the "parallel" tests also satisfy this latter condition, then we would expect that the population variance



covariance matrix for all the "parallel" tests, as well as the criterion variables, should give rise to what Votaw (1948) calls the "compound symmetry" pattern. The statistical criteria for "compound symmetry", as presented by Votaw (1948), include a statistical test for equal validity based on the likelihood ratio approach. The likelihood ratio test for the "compound symmetry" pattern is also illustrated by Greenhouse and Geisser (1959).

For certain special covariance patterns, tests of the covariance structure based on the likelihood ratio approach (Wilks, 1932, 1935), or the union-intersection principle (S. N. Roy, 1953, 1957), can be easily constructed. These are the patterns which can be readily diagonalized by pre- and post-multiplication with a known matrix, independently of the elements of the observed covariance matrix. Bargmann (1967) characterizes such matrices as having a "reducible" pattern.

Work Related to Specific Covariance Structures

Various types of patterned correlation and covariance matrices have already been identified in the psychological literature (Mukherjee, 1969a, 1970b). Examples of some of the well-known patterned dispersion matrices are the "simplex", "quasi-simplex" and "circumplex" (Guttman, 1954, 1957; Anderson, 1960; Mukherjee, 1966), "compound symmetry" (Votaw, 1948), "uniform structure" (Geisser, 1964), "equipredictability" (Bargmann, 1957), "tri-diagonal" or "Jacobi" (Mukherjee, 1970b), "democratic structure" (Jones, 1960), "aristocratic structure" (Jones, 1960), etc.

The models generating most of these patterned matrices, the methods for estimating their parameters and the statistical tests of the structural hypotheses have recently been discussed elsewhere (Mukherjee, 1970b).

Under the multivariate normality assumption, a number of likelihood ratio tests have been developed for certain specific covariance structures. When the population variances are assumed to be known and equal, a likelihood ratio test of the hypothesis of equal intercorrelations is available following the procedure suggested by Aitken, Nelson and Reinfurt (1968). Mukherjee (1966) has developed two likelihood ratio tests for the Guttman quasi-simplex covariance structures. A likelihood ratio test for the Guttman quasi-circumplex covariance structure has also been proposed (Mukherjee, 1970b) and illustrated elsewhere (Mukherjee, 1975).

Tests for Broad Classes of Covariance Structures

Mukherjee (1970b) has presented a number of empirical examples of covariance structures, each of which falls in the "reducible" class. Likelihood ratio tests for the "reducible" class, and for other non-reducible patterns which can, however, be transformed to tri-diagonal form by a known matrix, have also been considered by him.

Statistical hypotheses regarding covariance matrices which can be expressed as (2.1), and in which the given matrices are commutative in nature, have been considered by Srivastava (1966) and Srivastava and Maik (1967). Bock and Bargmann (1966) and Bargmann (1967) have discussed a slightly different class of structures. Anderson (1969) has examined in generality the problems of inference regarding covariance matrices which are linear combinations, or whose inverses are linear combinations, of given matrices of the form shown in (2.1).




Joreskog (1970) has derived a general method for estimating the constrained parameters of the variance covariance matrix that are unknown but equal to one or more other parameters. His estimation method is based on the principle of maximum likelihood. Using the quasi-Newton method formulated by Fletcher and Powell (1963), Joreskog has developed a numerical method of solving the likelihood equations for estimating the different unknown parameters of various covariance structures. Following the same approach, Wiley et al. (1973) have implemented the maximum likelihood procedures for a class of eight covariance structures.

Following Anderson (1969, 1970), Styan (1969) has considered the case where the covariance matrix, or its corresponding correlation matrix, has a linear structure of the form (2.1), and it is assumed that the diagonal matrix of standard deviations is unknown. In deriving the m.l. estimates of the parameters of such patterned covariance matrices, he found that the m.l. equations are nonlinear and in general cannot be solved analytically.

In addition to these broad classes of covariance structures, it is possible to think of another class of patterned variance covariance matrices. These are the patterned matrices which can be decomposed as in (2.14), in terms of m symmetric matrices, following the spectral decomposition theorem. For convenience, we may describe patterned matrices of this type as spectral matrices. Covariance structural analyses of such matrices are illustrated elsewhere (Mukherjee, 1976). As an example of such type of matrices, consider a 4×4 Guttman (1954) quasi-circumplex correlation matrix of the form:

(3.1) P =
[ 1   ρ₁  ρ₂  ρ₁
  ρ₁  1   ρ₁  ρ₂
  ρ₂  ρ₁  1   ρ₁
  ρ₁  ρ₂  ρ₁  1 ].

The matrix P of (3.1) can be spectrally decomposed as

(3.2) P = ¼λ₁ [ 1 1 1 1 ; 1 1 1 1 ; 1 1 1 1 ; 1 1 1 1 ] + ½λ₂ [ 1 0 −1 0 ; 0 1 0 −1 ; −1 0 1 0 ; 0 −1 0 1 ] + ¼λ₃ [ 1 −1 1 −1 ; −1 1 −1 1 ; 1 −1 1 −1 ; −1 1 −1 1 ],

where the characteristic roots of the correlation matrix P are given by

λ₁ = 1 + 2ρ₁ + ρ₂,  λ₂ = 1 − ρ₂,  λ₃ = 1 − 2ρ₁ + ρ₂,

and λ₂ occurs with multiplicity two. Under the normal multivariate assumption, and specifying that the common variance term σ² is known, the m.l. estimates of all the unknown correlation parameters can be readily obtained from (2.17), and thereafter a likelihood ratio test of the form (2.22) can be easily constructed (Mukherjee, 1975) to verify whether or not the sample correlation matrix does have the Guttman quasi-circumplex pattern of (3.1).
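The stated roots can be checked numerically for hypothetical values of ρ₁ and ρ₂ (chosen so that P is positive definite):

```python
import numpy as np

# Hypothetical correlations; lambda_3 = 1 - 2*rho1 + rho2 must stay positive.
rho1, rho2 = 0.6, 0.3

P = np.array([[1.0,  rho1, rho2, rho1],
              [rho1, 1.0,  rho1, rho2],
              [rho2, rho1, 1.0,  rho1],
              [rho1, rho2, rho1, 1.0]])

# The characteristic roots should be 1+2*rho1+rho2, 1-rho2 (twice), 1-2*rho1+rho2.
lam = np.sort(np.linalg.eigvalsh(P))
expected = np.sort([1 + 2*rho1 + rho2, 1 - rho2, 1 - rho2, 1 - 2*rho1 + rho2])
assert np.allclose(lam, expected)
print(lam)
```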

Illustration of Structural Analysis Based on the Union-Intersection Principle

Consider the problem of computing the partial correlations between variables j and (j+2) in the first-order Markov scheme



(3.3) u_t = ρ u_{t−1} + ε_t, |ρ| < 1,

when the effect of the intermediate variable (j+1) is eliminated, assuming that the disturbances ε_t are mutually independent and normally distributed with mean zero and a common variance σ². If we further assume that the process (3.3) started in the infinitely remote past, we derive

(3.4) ρ_jk = ρ^{|j−k|}, j, k = 1, 2, …, p,

so that the expected correlation matrix is

(3.5a) P = (ρ^{|j−k|}),

whose inverse is, apart from the factor (1 − ρ²)⁻¹, the tri-diagonal matrix

[  1    −b     0   ...   0
  −b   1+b²   −b   ...   0
   0    −b   1+b²  ...   0
   .     .     .          .
   0     0     0   ...   1 ]

with b = ρ, and

(3.8) ρ_{(j+1)(j−1)·j} = 0

for all j = 2, 3, …, p−1. The vanishing of the partial correlation (3.8) is characteristic of the stationary Markov process (Kendall and Stuart, 1966, p. 425) such as (3.3). We can therefore exploit the above property to derive a statistical test based on the principle of union-intersection for testing the hypothesis that the expected correlation matrix is of the form shown in (3.5a).
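A quick numerical check of the vanishing first-order partial correlations for a correlation matrix of the form ρ^{|j−k|}; the value of ρ and the dimension are hypothetical.

```python
import numpy as np

rho, p = 0.7, 6   # hypothetical autocorrelation and dimension
P = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))

def partial_corr(R, a, b, c):
    """First-order partial correlation r_ab.c from a correlation matrix R."""
    num = R[a, b] - R[a, c] * R[b, c]
    return num / np.sqrt((1 - R[a, c]**2) * (1 - R[b, c]**2))

# For a first-order Markov scheme, r_(j-1)(j+1).j = (rho^2 - rho*rho)/(1-rho^2) = 0.
for j in range(1, p - 1):
    assert np.isclose(partial_corr(P, j - 1, j + 1, j), 0.0)
print("all first-order partial correlations vanish")
```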

Following Roy's (1953) principle of union-intersection, we need to calculate (p−2) independent partial correlations r_{j,j−2·(j−1)} from



the sample correlation matrix, R = (r_jk), for testing the above structural hypothesis. The significance of the departure of the obtained first-order partial correlations from zero is tested on the sample data, based on N sets of independent observations, using the statistic

(3.9) t_j = r_{j,j−2·(j−1)} √(N − j) / √(1 − r²_{j,j−2·(j−1)}),

where t_j, for all j = 3, 4, …, p, obeys Student's t distribution with N − j degrees of freedom.

TABLE 1
Intercorrelations Among Six Emotional Words
(From Block, 1957) (N = 48 women)

Word             1     2     3     4     5     6
1 Sympathy     1.00   .73   .66   .42   .26   .15
2 Contentment   .73  1.00   .83   .74   .55   .41
3 Love          .66   .83  1.00   .88   .82   .68
4 Elation       .42   .74   .88  1.00   .92   .81
5 Anticipation  .26   .55   .82   .92  1.00   .92
6 Pride         .15   .41   .68   .81   .92  1.00

To illustrate the test procedure, and also to provide a numerical example of the correlation structure emerging from (3.3), the intercorrelations among six emotional states as reported by Block (1957) are presented in Table 1. In this study the investigator, employing Osgood's semantic differential technique, attempted to find how men and women tended to interpret emotional states from the names of these states. He asked 40 male and 48 female students to rate on a 7-point scale each emotion according to 20 connotative dimensions based on pairs of adjectives such as weak-strong, active-passive,

TABLE 2
Partial Correlations Computed from Table 1 and Associated t Statistics

Partial Correlation      t     d.f.   Probability
r13·2 = .141           .955    45        .15
r24·3 = .066           .364    43        .50
r35·4 = .164          1.01     42        .14

tense-relaxed, happy-sad, loud-soft. Since each word (concept) was located by the subject on each one of the 20 reference scales, it was possible for the investigator to run a correlation between them. The intercorrelations shown in Table 1 were based on the ratings of the 48 females.

From Table 1, the four independent partial correlations were computed. The partial correlations and associated t statistics are presented in Table 2.
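Table 2 can be recomputed from the correlations as read from Table 1; because the reprinted correlations are rounded to two decimals, the results agree only approximately with the published values.

```python
import numpy as np

# Table 1 correlations as read from the reprint (N = 48 women).
R = np.array([[1.00, 0.73, 0.66, 0.42, 0.26, 0.15],
              [0.73, 1.00, 0.83, 0.74, 0.55, 0.41],
              [0.66, 0.83, 1.00, 0.88, 0.82, 0.68],
              [0.42, 0.74, 0.88, 1.00, 0.92, 0.81],
              [0.15, 0.41, 0.68, 0.81, 0.92, 1.00][::-1] and
              [0.26, 0.55, 0.82, 0.92, 1.00, 0.92],
              [0.15, 0.41, 0.68, 0.81, 0.92, 1.00]])
N = 48

def partial_corr(R, a, b, c):
    """First-order partial correlation r_ab.c from a correlation matrix R."""
    num = R[a, b] - R[a, c] * R[b, c]
    return num / np.sqrt((1 - R[a, c]**2) * (1 - R[b, c]**2))

results = []
for j in range(3, 7):                           # pairs (j-2, j), eliminating j-1
    r = partial_corr(R, j - 3, j - 1, j - 2)    # 0-based indices
    t = r * np.sqrt(N - j) / np.sqrt(1 - r**2)  # statistic (3.9), N - j d.f.
    results.append((r, t))
    print(f"r{j-2}{j}.{j-1} = {r:.3f}, t = {t:.3f}, d.f. = {N - j}")
```

The first entry reproduces r13·2 ≈ .142, t ≈ .96, close to the tabled .141 and .955.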



All the t values shown in Table 2 are low. The t statistics are low enough to justify the acceptance of the null hypothesis that in the population of female students each of the four partial correlations has zero value. Acceptance of all these null hypotheses implies acceptance of the multivariate hypothesis that, in the population of female students, the correlation structure based on the intercorrelations among the six emotional states is of the form P shown in (3.5a).

Rejection of a single null hypothesis, on the other hand, leads to the rejection of the composite hypothesis. Since under the null hypothesis the above (p − 2) partial correlations are independent, the probability of rejecting at least one of them at the α level is therefore 1 − (1 − α)^{p−2}. The significance level for the composite hypothesis when the null hypothesis is accepted can be obtained from

(3.10) ∏_{j=3}^{p} P(t_j),

where P(t_j) denotes the probability of the computed value of the t_j statistic based on (3.9) under the null hypothesis (3.8).

It is seen from the above illustration of the union-intersection approach in testing a covariance structure hypothesis that, although some assumptions are used, a simple and intuitively appealing test is obtained. The value of the heuristic principle involved in the construction of the union-intersection test generally lies in the fact that it directs attention to a particular test, out of a number of intuitively appealing tests, as being worthy of further study.

4. Advantages of Covariance Structural Analysis

Although the development of tests of covariance structures and methods of estimating the constrained parameters of the variance-covariance matrix may stand in their own right as emerging problems in multivariate analysis, there are many other applications of covariance structural analysis, such as in testing for equality of mean vectors when a certain structure can be assumed in the population variance matrix. Specific applications of the technique in the field of psychology and other behavioural sciences have been discussed by Bock and Bargmann (1966), Mukherjee (1970b) and Joreskog (1970). The discussion below will be confined mainly to various situations in statistics where it is necessary to apply the techniques of covariance structural analysis.

Assumption of Covariance Btructures in fitatistical Analysis In statistical analysis of psychological and biological data, a

number of situations call for testing certain hypotheses associated with patterns of the variance covslriance matrix before conventional statistical tests could be applied. The situation is andogous to that of running a test of homoscedasticity before an analysis of variance. Standard multivariate techniques, such a8 principal component d y s i s , canonical correhtiond andysis, and multiple discriminant analysis etc., strongly involve the covariance structure of the popdation (Dempster, 1966, p. 316). Therefore, one is better off if the assumed covariance structure is fust tested before carrying out the standard amlysis.

Covariance structural analysis is important even for malysing data from standard univa,riate designs such as Latin squares, one-

s h c t n r a l analysis.

Page 13: TECHNIQUES OF COVARIANCE STRUCTURAL ANALYSIS1

TECHNIQTJES OF COVARIANCE STRUCTURAL ANaLTSIS 143

factor repeated designs, etc. (Graybill, 1961). To elucidate this point, consider the problem of deriving statistical tests for the analysis of repeated measurements. As is well known, sampling variation in the data is almost always correlated in the case of repeated measurements. Univariate analysis of variance of such data is justifiable only when the sampling covariance matrix has certain favourable patterns such as the uniform structure of (4.1). Bock (1963) has shown that one can proceed to use the transformed sums of products for tests of hypotheses on the polynomial representation of group differences only after it is verified that the structure of the variance covariance matrix is consistent either with the ‘‘ compound symmetry ” hypothesis or the hypothesis of independence among variates. When the structure of C is not known, there is no direct method of investigating the degree of polynomial representation for repeated measurements. The usual P test is valid if, but not only if, each of the populakion covariance matrices is equal and has the pattern shown in (4.1), since equal correlation is a sufficient condition but not a necessary condition. It is also implicit that for such repeated measures the usual P test on the within subject of effects is valid only when the variance covariance matrix has the uniform structure (4.1). If the assumption of equal correlation among repeated measurement is not met, as in the case of many experimental situations, the experimenter has to take recourse to a multivariate analysis. Thus, it can be said that covmiance structural analysis is a necessity for a number of hypothesis testing problems in univariate analysis.
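The preliminary check described above can be sketched as a likelihood-ratio comparison of the structured and unrestricted fits. This is an illustrative reconstruction under normality (the function name and the use of NumPy are mine, not the paper's):

```python
import numpy as np

def compound_symmetry_lr(X):
    """Likelihood-ratio statistic for the uniform ("compound symmetry")
    structure: n * (log|Sigma0_hat| - log|S|), where Sigma0_hat is the
    ML fit with common variance and common covariance, and S is the
    unrestricted (biased) ML covariance of the n x p data matrix X."""
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)       # unrestricted ML estimate
    s2 = np.trace(S) / p                         # pooled variance
    c = (S.sum() - np.trace(S)) / (p * p - p)    # average off-diagonal covariance
    Sigma0 = (s2 - c) * np.eye(p) + c * np.ones((p, p))
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(S)
    # Asymptotically chi-squared with p(p + 1)/2 - 2 degrees of freedom
    return n * (logdet0 - logdet1)
```

Large values of the statistic argue against the uniform structure, pointing the experimenter toward a multivariate analysis.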

The study of patterns is further justified because the covariance matrix of the estimates of the treatment effects from a partially balanced univariate design has a pattern which can easily be reduced to a canonical form (Bose and Mesner, 1959). The matrix will be a circulant for a certain class of cyclic partially balanced designs (Srivastava, 1966). Srivastava and Maik (1967) have given a number of examples of expected covariance structures in data obtained from experiments based on partially balanced incomplete block (PBIB) designs.

A number of patterned variance covariance matrices arising in experimental design have been discussed by Graybill (1969). Patterned covariance matrices such as the ones appearing as (2.3a) and (3.5) are important in stochastic processes and time-series analysis (Kendall and Stuart, 1966, vol. 3, pp. 472-503). The need for their study in response-surface fitting and order statistics has been suggested by Greenberg and Sarhan (1959, 1960), Graybill (1969), Roy and Sarhan (1956) and Roy, Greenberg and Sarhan (1960).

Estimation of Variance Components

Under the assumptions of Model II analysis of variance, where there is one random way of classification and one, possibly unbalanced, fixed classification, it is possible to estimate the variance components associated with the latent variables specified by the structural model (2.2). Bock (1964) has illustrated the use of covariance structural analysis in the estimation of variance components. Bramble and Wiley (1974) have also used the covariance structural technique for estimating the variance components when the model employed for analysis was a mixed model in which respondents are measured repeatedly under four treatment conditions of a 2 × 2 ANOVA design.


The advantage of covariance structure analysis for the study of variance components under mixed models has also been pointed out by Bock (1960, 1964, 1975, p. 450). It has been shown that the method of structural analysis can be viewed as a "generalization of the analysis of variance of random effects in the mixed model for experimental designs with one random way of classification. It is more general than the conventional mixed-model analysis in that the design for the fixed classifications may be nonorthogonal, the replication error variance for different subclasses of the fixed classifications may be nonhomogeneous, and the measurement for these subclasses may be in different metrics" (Bock and Bargmann, 1966, p. 533).

The covariance structural analysis approach to variance component estimation not only enables the researcher to obtain maximum likelihood estimates of the parameters under several alternative models of interest but also provides appropriate tests of the goodness of fit of these models using likelihood ratio statistics.

Confirmatory Factor Analysis and Covariance Structural Analysis

The relevance of structural analysis to the problem of rotation in factor analysis becomes apparent when the transformation of the factor matrix is carried out following the suggestions of Howe (1955) and Lawley and Maxwell (1963), so that the pattern of factor loadings conforms as closely as possible with some a priori notion of the correlational pattern among the variables.

Structural analysis enables us to analyse "a sample covariance matrix in order to detect and assess the potency of latent sources of variation and covariation in the multivariate normal data" (Bock and Bargmann, 1966, p. 507) more objectively than the traditional method of factor analysis. Elucidation of this point has been made elsewhere (Mukherjee, 1970b).

McDonald (1969) has presented a generalized factor analysis procedure based on residual covariance matrices of prescribed structure. This technique relaxes the assumption-by-definition that unique factors are mutually orthogonal. It may be remarked that the method is a special case of covariance structural analysis.

Computational Ease in Multivariate Analysis

Estimators of parameters for patterned Σ are in general different from those for the unrestricted case. In general, there is also relatively greater ease in the numerical solution for these estimators in the former case. Thus, there is a distinct advantage in statistical computations when the structure of the population variance covariance matrix is known. The computational drudgery involved in most multivariate statistical techniques can be reduced considerably if the covariance structure in the population is known or can be assumed to be known. Mukherjee (1969a, 1969b) has shown certain computational ease in various multivariate analyses when the p-dimensional normal variables are assumed to obey the Guttman quasi-simplex linear model and give rise to the variance covariance matrix of the form shown in (2.3a). For such multivariate techniques as multiple correlation, partial correlation, and canonical correlation, the computations can be handled efficiently even with a desk calculator if the analysis is done under the assumption of the uniform variance covariance structure or under Bargmann's (1967) equi-predictability structure.


Suppose that the p × 1 vector X has a multivariate distribution with mean μ and non-singular variance-covariance matrix Σ having the following uniform structure (Geisser, 1963):

(4.1)  Σ = σ² ×
       | 1  ρ  .  .  .  ρ |
       | ρ  1  .  .  .  ρ |
       | .  .  .  .  .  . |
       | ρ  ρ  .  .  .  1 |

It can be seen that Σ of (4.1) can be written as

(4.1a)  Σ = σ²[(1 − ρ) I + ρ 1 1′].
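A short numerical illustration (mine, with arbitrary values of p, σ² and ρ) of the equivalence of (4.1) and (4.1a), and of the eigenstructure that makes the form so convenient:

```python
import numpy as np

# Build Sigma elementwise as in (4.1): sigma^2 on the diagonal,
# rho * sigma^2 elsewhere, and compare with the closed form (4.1a).
p, sigma2, rho = 5, 2.0, 0.3
Sigma = np.full((p, p), rho * sigma2)
np.fill_diagonal(Sigma, sigma2)

Sigma_41a = sigma2 * ((1 - rho) * np.eye(p) + rho * np.ones((p, p)))
assert np.allclose(Sigma, Sigma_41a)

# The form (4.1a) shows Sigma has just two distinct eigenvalues:
# sigma^2 * (1 - rho) with multiplicity p - 1, and
# sigma^2 * (1 + (p - 1) * rho) on the equiangular direction.
eigvals = np.sort(np.linalg.eigvalsh(Sigma))
```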

When Σ is unknown, the maximum likelihood estimate of Σ of the form (4.1) is given under the usual normality assumption by

(4.2)  Σ̂ = (1/p)(1′ D 1) I + [1′(S − D)1 / (p² − p)] (1 1′ − I),

where D is a diagonal matrix of the p sample variances (biased) as defined in (2.11), with element s_jj in its jth diagonal position, and 1′ is a unit row vector of order 1 × p. The matrix Σ̂ = ((σ̂_jk)) can be expressed as

(4.3)  Σ̂ = s²[(1 − r) I + r 1 1′],

where the pooled sample variance s² is the arithmetic mean of all the sample variances, i.e.,

(4.4)  s² = (1/p) Σ_j s_jj,

and

(4.5)  r = Σ_j Σ_k s_jk / [(p² − p) s²],   j ≠ k.
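The estimates (4.2)–(4.5) can be sketched in a few lines (an illustrative reconstruction; the function name is mine):

```python
import numpy as np

def uniform_mle(S):
    """Given a (biased) p x p sample covariance matrix S, return the
    ML estimate of a uniform-structure Sigma together with the pooled
    variance s^2 of (4.4) and the average correlation r of (4.5)."""
    p = S.shape[0]
    D = np.diag(np.diag(S))                       # diagonal matrix of sample variances
    one = np.ones(p)
    s2 = np.trace(S) / p                          # (4.4): mean of the variances
    r = one @ (S - D) @ one / ((p * p - p) * s2)  # (4.5): average correlation
    Sigma_hat = s2 * ((1 - r) * np.eye(p) + r * np.outer(one, one))  # (4.3)
    return Sigma_hat, s2, r
```

When S itself already has the uniform pattern, the fit reproduces it exactly.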

Now suppose it is required to estimate the squared multiple correlation ρ_j² between a particular variable j and the remaining (p − 1) variables under the assumption of the uniform covariance structure. It can be shown that under the uniform assumption all the p squared multiple correlations are identically equal and can be obtained from

(4.6)  R_j² = 1 − |Σ̂| / (σ̂_jj Σ̂_jj),

where Σ̂_jj denotes the cofactor of the jth diagonal element, i.e. σ̂_jj, in (4.3).

The two determinants can be expressed as

|Σ̂| = s^{2p} [1 + (p − 1)r] [1 − r]^{p−1}

and, similarly replacing p by (p − 1) in the above,

Σ̂_jj = s^{2(p−1)} [1 + (p − 2)r] [1 − r]^{p−2}.

Therefore, the estimate (4.6) of the squared multiple correlation (R²) between any variable and the remaining (p − 1) variables is given by

(4.7)  R² = (p − 1)r² / [1 + (p − 2)r],

where r is the average correlation estimated from (4.5).
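The closed form (4.7) is easily verified numerically against the determinant ratio (4.6); the values of p and r below are arbitrary and σ² is taken as 1:

```python
import numpy as np

p, r = 6, 0.5
Sigma = (1 - r) * np.eye(p) + r * np.ones((p, p))   # uniform Sigma with sigma^2 = 1

# Direct route (4.6): 1 - |Sigma| / (sigma_jj * cofactor of the jth diagonal element)
det_full = np.linalg.det(Sigma)
cofactor = np.linalg.det(Sigma[1:, 1:])             # delete row 1 and column 1 (j = 1)
R2_direct = 1.0 - det_full / (Sigma[0, 0] * cofactor)

# Closed form (4.7)
R2_closed = (p - 1) * r**2 / (1 + (p - 2) * r)
```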


If, instead of computing the squared multiple correlations, one desires an estimate of the canonical correlation (r_c) between the first p₁ variables and the last p₂ variables (p₁ + p₂ = p), then under the uniform structure assumption the estimate is

(4.8)  r_c = r √(p₁ p₂) / √{[1 + (p₁ − 1)r][1 + (p₂ − 1)r]}.

It can be further shown that under the uniform structure assumption there will be only one non-zero canonical correlation, and this is given by (4.8).
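A numerical check of (4.8) (the values of p₁, p₂ and r are arbitrary; the eigenvalue route below is the textbook definition of the canonical correlations, used here only to confirm the closed form):

```python
import numpy as np

p1, p2, r = 3, 4, 0.4
p = p1 + p2
Sigma = (1 - r) * np.eye(p) + r * np.ones((p, p))
S11, S12 = Sigma[:p1, :p1], Sigma[:p1, p1:]
S21, S22 = Sigma[p1:, :p1], Sigma[p1:, p1:]

# Squared canonical correlations are the eigenvalues of S11^-1 S12 S22^-1 S21
M = np.linalg.solve(S11, S12) @ np.linalg.solve(S22, S21)
sq = np.sort(np.linalg.eigvals(M).real)

rc_closed = r * np.sqrt(p1 * p2) / np.sqrt((1 + (p1 - 1) * r) * (1 + (p2 - 1) * r))
```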

In addition to the computational ease shown above, it also seems likely that the various methods of obtaining interval estimates for the parameters of the multivariate normal distribution will yield the same results for a given confidence coefficient when the covariance structure is known in advance. Geisser (1964), for example, has shown that if the structure of the covariance is uniform, the confidence interval, fiducial and Bayesian methods will yield the same region for the vector mean and the same interval for the common correlation coefficient.

When the population dispersion matrix is patterned, it implies a "reduced parameter situation" (Geisser, 1963). Since in multivariate analysis "increasing numbers of parameters pose increasingly serious tests for any theory of estimation" (Dempster, 1966, p. 315), the study of various covariance structures has an important bearing on the comparison of different conceptual approaches to estimation.

Power of Statistical Tests when the Covariance Structure is Known

In addition to its promise in the analysis of structural effects in experimental design (Mallios, 1970), the study of covariance structures may also contribute to a more powerful test of general multivariate hypotheses and thus have an important role to play in multivariate analysis of variance. The likelihood ratio test of the hypothesis of equality of mean vectors, for example, takes the following form when a particular patterned covariance matrix is assumed. Under the assumption that Σ is of a particular non-singular structure, the maximum likelihood estimates of the elements of the pattern matrix are computed from the within-group sum of products matrix, and the maximum of the likelihood is evaluated. The same procedure is then applied to obtain the maximum likelihood estimates and the maximum of the likelihood from the sum of products matrix for the total (within + between). A monotonic function of the likelihood ratio statistic for the test of the null hypothesis is the first maximum of the likelihood divided by the second, i.e., Λ = |W|/|W + B|. Its critical points can be obtained with good accuracy by an F or chi-squared approximation (Anderson, 1958). As is true in general of tests of multivariate hypotheses, other conditions (significance criterion, sample size) being equal, the power of this test will depend upon the nature of the departure from the null hypothesis; hence, no definite statement can be made comparing its power with that of an alternative test, such as the conventional likelihood ratio test for the equality of mean vectors developed under the assumption of a normal multivariate distribution and a common variance-covariance matrix (Anderson, 1958, p. 211;


Kendall, 1961; Wilks, 1962). It may be conjectured, however, that when the between-group differences arise from sources other than the patterned within-group dispersion, the proposed test will be sensitive not only to the increased generalized variance between groups, which is reflected in the conventional test, but also to the departure from the within-group pattern. In this case we might expect the proposed test to have the greater power. For various assumed forms of departure from the null hypothesis, this conjecture could readily be investigated by sampling experiments. Quantitative assessment obtained from mathematical analysis, in addition to sampling experiments, would also be very useful for this purpose, but no such results have so far been published.
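The statistic Λ = |W|/|W + B| described above can be sketched for the unrestricted case (a hedged illustration; under a patterned Σ the two determinants would instead be evaluated at the structured maximum-likelihood fits):

```python
import numpy as np

def wilks_lambda(groups):
    """Lambda = |W| / |W + B| for a list of (n_i x p) group data arrays:
    W is the pooled within-group sum of products about the group means,
    and W + B is the total sum of products about the grand mean."""
    grand = np.vstack(groups)
    centered_total = grand - grand.mean(axis=0)
    total = centered_total.T @ centered_total          # W + B
    within = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    return np.linalg.det(within) / np.linalg.det(total)
```

Values of Λ near 1 indicate little between-group mean difference; small values favour rejection.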

It may be noted in this connection that when the population variance covariance matrix is assumed to have a particular pattern, it implies a reduced parameter situation (Geisser, 1963). The reduction in the number of parameters entering into a statistical hypothesis has a great bearing upon the power of any statistical test whenever the test is carried out under the assumption of a known variance covariance structure. It is therefore no wonder that tests in structured situations will generally be more powerful than the corresponding unrestricted tests.

5. Concluding Remarks

Covariance structural analysis is increasingly becoming an important topic of research in multivariate analysis, but many questions connected with this method remain unanswered. One such question is the determination of the necessary and sufficient conditions for the identifiability of the various specified parameters of the variance covariance matrix Σ. The power properties of the statistical tests for hypotheses associated with different specific covariance structures are also not known. Most of these tests are derived assuming not only the normal multivariate distribution but also that the population variance covariance matrix is positive definite. One important problem therefore is to determine the extent to which these tests are robust against departures from normality. Many other such unresolved problems exist and these are being dealt with in a separate paper; here it will suffice to cite some of the most important ones.

In addition to studying power properties and making sensitivity comparisons of tests about patterned covariance matrices, it would also be desirable to investigate the relationship between the techniques of covariance structural analysis and the techniques of structural regression (Chew, 1970; Mallios, 1970) and path analysis (Wright, 1960). In this connection, the study of covariance structures emerging from nonlinear models merits special attention. The likelihood ratio principle has been most frequently used in constructing tests for various covariance structures, but a few tests based on Roy's (1957) union-intersection principle are also available for statistical tests of certain covariance structures. No attempt has been made to obtain other test procedures which are, at least asymptotically, robust against departures from the assumed normality of the underlying distributions. The method of estimation has also been restricted to the maximum likelihood technique. The minimum norm quadratic unbiased estimation (MINQUE) method recently proposed by Rao (1970) appears to be a very promising approach to tackling the problem of estimation in covariance structural analysis.


The covariance structural analysis technique is still in its infancy as an analytic tool for the examination of structural models or for detecting the effect of fitting one model by another, but it appears to be very fruitful for analysing latent sources of variation and covariation in multivariate normal data and for the estimation of variance components.

References

Aitkin, M. A., Nelson, W. C., and Reinfurt, Karen H. (1968). "Tests for correlation matrices." Biometrika, 55, 327-334.
Anderson, T. W. (1958). Introduction to Multivariate Statistical Analysis. Wiley, New York.
Anderson, T. W. (1960). "Some stochastic process models for intelligence test scores." In K. J. Arrow et al. (Eds.), Mathematical Methods in the Social Sciences. Stanford University Press, Stanford, Calif.
Anderson, T. W. (1969). "Statistical inference for covariance matrices with linear structure." In P. R. Krishnaiah (Ed.), Multivariate Analysis-II. Academic Press, New York. Pp. 55-66.
Anderson, T. W. (1970). "Estimation of covariance matrices which are linear combinations or whose inverses are linear combinations of given matrices." In R. C. Bose, I. M. Chakravarti, P. C. Mahalanobis, C. R. Rao and K. J. C. Smith (Eds.), Essays in Probability and Statistics. University of North Carolina Press, Chapel Hill, N.C. Pp. 1-24.
Bargmann, R. E. (1957). "A study of independence and dependence in multivariate normal analysis." Institute of Statistics Mimeograph Series No. 186, University of North Carolina, Chapel Hill, N.C.
Bargmann, R. E. (1967). "Matrices and determinants." In CRC Handbook of Tables for Mathematics, Third Edition. Chemical Rubber Co., Cleveland, Ohio. Pp. 144-166.
Block, J. (1957). "Studies in the phenomenology of emotion." J. Abnormal and Social Psych., 54, 358-363.
Bock, R. D. (1960). "Components of variance analysis as a structural and discriminal analysis of psychological tests." British J. Statist. Psychology, 13, 151-163.
Bock, R. D. (1963). "Multivariate analysis of repeated measurements." In C. Harris (Ed.), Problems in Measuring Change. University of Wisconsin Press, Madison. Pp. 85-103.
Bock, R. D. (1964). "Components of variance and covariance structure." Paper read at meeting of the Institute of Mathematical Statistics, Kansas State University, May 7, 1964.
Bock, R. D. (1975). Multivariate Statistical Methods in Behavioral Research. McGraw-Hill, New York.
Bock, R. D., and Bargmann, R. E. (1966). "Analysis of covariance structures." Psychometrika, 31, 507-534.
Bose, R. C., and Mesner, D. M. (1959). "On linear associative algebras corresponding to the association schemes of partially balanced designs." Ann. Math. Statist., 30, 21-38.
Bramble, W. J., and Wiley, D. E. (1974). "Estimating content-acquiescence correlation by covariance structure analysis." Multivariate Behavioral Res., 9, 179-190.
Chew, V. (1970). "Covariance matrix estimation in linear models." J. Amer. Statist. Assoc., 65, 173-181.
Dempster, A. P. (1966). "Estimation in multivariate analysis." In P. R. Krishnaiah (Ed.), Multivariate Analysis. Academic Press, New York. Pp. 315-334.
Dempster, A. P. (1969). Elements of Continuous Multivariate Analysis. Addison-Wesley Publishing Co., Reading, Mass.
Durbin, J. (1960). "Estimation of parameters in time series regression models." J. Roy. Statist. Soc., Ser. B, 22, 139-153.
Fletcher, R., and Powell, M. J. D. (1963). "A rapidly convergent descent method for minimization." Computer J., 6, 163-168.
Fraser, D. A. S. (1968). The Structure of Inference. Wiley, New York.
Gabriel, K. R. (1954). "The simplex structure of the progressive matrices test." Brit. J. Statist. Psych., 7, 9-14.
Gabriel, K. R. (1962). "Ante-dependence analysis of an ordered set of variables." Ann. Math. Statist., 33, 201-222.
Geisser, S. (1963). "Multivariate analysis of variance for a special covariance case." J. Amer. Statist. Assoc., 58, 660-669.
Geisser, S. (1964). "Estimation in the uniform covariance case." J. Roy. Statist. Soc., Ser. B, 26, 477-483.
Graybill, F. A. (1961). An Introduction to Linear Statistical Models. McGraw-Hill, New York.
Graybill, F. A. (1969). Introduction to Matrices with Applications in Statistics. Wadsworth Publishing Co., Belmont, Calif.
Greenberg, B. G., and Sarhan, A. E. (1959). "Matrix inversion, its interest and application in analysis of data." J. Amer. Statist. Assoc., 54, 755-766.
Greenberg, B. G., and Sarhan, A. E. (1960). "Generalization of some results for inversion of partitioned matrices." In I. Olkin, S. G. Ghurye, W. Hoeffding, W. G. Madow and H. B. Mann (Eds.), Contributions to Probability and Statistics: Essays in Honor of Harold Hotelling. Stanford University Press, Stanford, Calif. Pp. 216-223.
Greenhouse, S. W., and Geisser, S. (1959). "On methods in the analysis of profile data." Psychometrika, 24, 95-112.
Guttman, L. (1954). "A new approach to factor analysis: the radex." In P. Lazarsfeld (Ed.), Mathematical Thinking in the Social Sciences. The Free Press, Glencoe, Ill. Pp. 258-348.
Guttman, L. (1957). "Empirical verification of the radex structure of mental abilities and personality traits." Educational and Psychological Measurement, 17, 391-407.
Han, Chien-Pai (1968). "Testing the homogeneity of a set of correlated variances." Biometrika, 55, 317-326.
Jones, M. B. (1960). Molar Correlational Analysis. U.S. Naval School of Aviation Medicine (Monograph), Pensacola, Florida.
Jöreskog, K. G. (1970). "A general method for analysis of covariance structures." Biometrika, 57, 239-251.
Jöreskog, K. G. (1973). "Analysis of covariance structures." In P. R. Krishnaiah (Ed.), Multivariate Analysis-III. Academic Press, New York. Pp. 263-285.
Kendall, M. G. (1961). A Course in Multivariate Analysis. Griffin, London.
Kendall, M. G., and Stuart, A. (1966, 1967). The Advanced Theory of Statistics, Vol. 2 and Vol. 3. Hafner Publishing Co., New York.
Kullback, S. (1967). "On testing correlation matrices." Applied Statistics, 16, 80-86.
Lawley, D. N. (1963). "On testing a set of correlation coefficients for equality." Ann. Math. Statist., 34, 149-151.
Mallios, W. S. (1970). "The analysis of structural effects in experimental design." J. Amer. Statist. Assoc., 65, 808-827.
McDonald, R. P. (1969). "A generalized common factor analysis based on residual covariance matrices of prescribed structure." Contributed paper, Annual Meeting of the Canadian Psychological Association, June 6, 1969, Toronto.
McQuitty, L. L. (1956). "Agreement analysis: classifying persons by predominant patterns of response." Brit. J. Statist. Psych., 9, 5-16.
Morrison, D. F. (1967). Multivariate Statistical Methods. McGraw-Hill, New York.
Mukherjee, B. N. (1966). "Derivation of likelihood ratio tests for the Guttman quasi-simplex covariance structures." Psychometrika, 31, 97-123.
Mukherjee, B. N. (1969a). "Invariance of the Guttman quasi-simplex linear model under selection." Brit. J. Math. Statist. Psych., 22, 1-28.
Mukherjee, B. N. (1969b). "Computational short-cuts in multivariate analysis for the Guttman simplex data." Paper read at meeting of the Psychometric Society, April 17, 1969, Princeton, N.J.
Mukherjee, B. N. (1969c). "Covariance structure analysis in psychological research." Contributed paper, Annual Meeting of the Canadian Psychological Association, June 6, 1969, Toronto, Canada.
Mukherjee, B. N. (1970a). "A likelihood ratio test for the Guttman quasi-circumplex covariance structure." Seminar paper, Department of Measurement and Evaluation, Ontario Institute for Studies in Education, April 3, 1970, Toronto, Canada.
Mukherjee, B. N. (1970b). "Likelihood ratio tests of statistical hypotheses associated with patterned covariance matrices in psychology." Brit. J. Math. Statist. Psych., 23, 87-108.
Mukherjee, B. N. (1973). "Covariance structural analysis and exploratory factor analysis." Brit. J. Math. Statist. Psych., 26, 425-455.
Mukherjee, B. N. (1975). "The factorial structure of Wechsler's Preschool and Primary Scale of Intelligence at successive age levels." Brit. J. Educational Psych., 45, 214-226.
Mukherjee, B. N. (1976). "A simple approach to testing of a class of covariance structures." Submitted for publication.
Olkin, I., and Press, S. J. (1969). "Testing and estimation for a circular stationary model." Ann. Math. Statist., 40, 1358-1373.
Perlis, S. (1952). Theory of Matrices. Addison-Wesley, Reading, Mass.
Quenouille, M. H. (1957). The Analysis of Multiple Time-Series. Hafner Publishing Co., New York.
Rao, C. R. (1970). "Estimation of heteroscedastic variances in linear models." J. Amer. Statist. Assoc., 65, 161-172.
Roy, J. (1951). "The distribution of certain likelihood criteria useful in multivariate analysis." Bull. Internat. Statist. Inst., 33, 219-230.
Roy, J. (1954). "On some tests of significance in samples from bi-polar normal distributions." Sankhyā, 14, 220-239.
Roy, S. N. (1953). "On a heuristic method of test construction and its use in multivariate analysis." Ann. Math. Statist., 24, 220-238.
Roy, S. N. (1957). Some Aspects of Multivariate Analysis. Wiley, New York.
Roy, S. N., Greenberg, B. G., and Sarhan, A. E. (1960). "Evaluation of determinants, characteristic equations and their roots for a class of patterned matrices." J. Roy. Statist. Soc., Ser. B, 22, 348-359.
Roy, S. N., and Sarhan, A. E. (1956). "On inverting a class of patterned matrices." Biometrika, 43, 227-231.
Srivastava, J. N. (1966). "On testing of hypotheses regarding a class of covariance structures." Psychometrika, 31, 147-164.
Srivastava, J. N., and Maik, R. L. (1967). "On a new property of partially balanced association schemes useful in psychometric structural analysis." Psychometrika, 32, 279-289.
Styan, G. P. H. (1969). "Inference in multivariate normal populations with structure, Part II: Inference when correlations have structure." Tech. Report No. 115, University of Minnesota, Minneapolis.
Votaw, D. F. (1948). "Testing compound symmetry in a normal multivariate population." Ann. Math. Statist., 19, 447-473.
Watson, G. S. (1955). "Serial correlation in regression analysis I." Biometrika, 42, 327-341.
Wiley, D. E., Schmidt, W. H., and Bramble, W. J. (1973). "Studies of a class of covariance structure models." J. Amer. Statist. Assoc., 68, 317-323.
Wilks, S. S. (1935). "On the independence of k sets of normally distributed statistical variables." Econometrica, 3, 309-326.
Wilks, S. S. (1946). "Sample criteria for testing equality of means, equality of variances and equality of covariances in a normal multivariate distribution." Ann. Math. Statist., 17, 257-281.
Wilks, S. S. (1962). Mathematical Statistics. Wiley, New York.
Wright, S. (1960). "Path coefficients and regression coefficients: Alternative or complementary concepts?" Biometrics, 16, 189-202.