


Available online at www.sciencedirect.com

Journal of Business Research 61 (2008) 1203–1218

Advancing formative measurement models

Adamantios Diamantopoulos ⁎, Petra Riefler 1, Katharina P. Roth 2

Department of Business Administration, University of Vienna, Bruenner Strasse 72, A-1210 Vienna, Austria

Received 1 May 2007; received in revised form 1 November 2007; accepted 1 January 2008

Abstract

Formative measurement models were first introduced in the literature more than forty years ago and the discussion about their methodological contribution has been increasing since the 1990s. However, the use of formative indicators for construct measurement in empirical studies is still scarce. This paper seeks to encourage the thoughtful application of formative models by (a) highlighting the potential consequences of measurement model misspecification, and (b) providing a state-of-the-art review of key issues in the formative measurement literature. For the former purpose, this paper summarizes findings of empirical studies investigating the effects of measurement misspecification. For the latter purpose, the article merges contributions in the psychology, management, and marketing literatures to examine a variety of issues concerning the conceptualization, estimation, and validation of formative measurement models. Finally, the article offers some suggestions for future research on formative measurement.
© 2008 Elsevier Inc. All rights reserved.

Keywords: Formative index; Measurement model; Causal indicators

Contents

1. Introduction . . . 1204
2. Reflective vs. formative measurement: first-order models . . . 1204
3. Higher-order formative models . . . 1205
4. Measurement model misspecification . . . 1208
4.1. Parameter bias due to reversed causality . . . 1208
4.2. Parameter bias due to incorrect item purification . . . 1210
4.3. Effects on fit statistics . . . 1210
5. The status quo of formative measures: issues and proposed remedies . . . 1211
5.1. Conceptual issues . . . 1211
5.1.1. Error-free measures . . . 1211
5.2. Interpretation of the error term . . . 1211
5.3. Estimation of formative models . . . 1212
5.3.1. Multicollinearity . . . 1212
5.3.2. Exogenous variable intercorrelations . . . 1212
5.3.3. Model identification . . . 1213

⁎ Corresponding author. Tel.: +43 1 4277 38031. E-mail addresses: [email protected] (A. Diamantopoulos), [email protected] (P. Riefler), [email protected] (K.P. Roth).

1 Tel.: +43 1 4277 38038.
2 Tel.: +43 1 4277 38040.

0148-2963/$ - see front matter © 2008 Elsevier Inc. All rights reserved.
doi:10.1016/j.jbusres.2008.01.009


5.4. Reliability and validity assessment of formative models . . . 1215
5.4.1. Reliability assessment . . . 1215
5.4.2. Validity assessment . . . 1215
6. Conclusion and future research . . . 1216
References . . . 1216

1. Introduction

The literature in psychology, management, and marketing pays increasing attention to formative measurement models for operationalizing latent variables (constructs). Researchers in various disciplines have undertaken considerable effort to (a) make the academic community aware of the existence of formative (cause, causal) indicators (e.g., Bollen and Lennox, 1991), (b) demonstrate the potential appropriateness of formative measurement models for a large number of latent constructs (e.g., Diamantopoulos, 1999; Fassot and Eggert, 2005; Fassot, 2006; Jarvis, MacKenzie and Podsakoff, 2003; Venaik, Midgley and Devinney, 2004), (c) reveal consequences of measurement model misspecification (e.g., Diamantopoulos and Siguaw, 2006; Law and Wong, 1999; MacKenzie, Podsakoff and Jarvis, 2005), and (d) develop practical guidelines for the construction of multi-item measures (indexes) comprising formative indicators (e.g., Diamantopoulos and Winklhofer, 2001; Eggert and Fassot, 2003; Giere, Wirtz and Schilke, 2006). Despite the growing number of contributions on formative measurement, however, Bollen's (1989, p. 65) statement still holds true, as even a cursory glance in the top management and marketing journals readily reveals: “[M]ost researchers in the social sciences assume that indicators are effect indicators. Cause indicators are neglected despite their appropriateness in many instances”.

Two reasons help explain the prevalent lack of applications. On the one hand, a substantial number of researchers engaging in measure development might still be unaware of the potential appropriateness of formative indicators for operationalizing particular constructs (Hitt, Gimeno and Hoskisson, 1998; Podsakoff, Shen and Podsakoff, 2006); indeed, “nearly all measurement in psychology and the other social sciences assumes effect indicators” (Bollen, 2002, p. 616). On the other hand, researchers might hesitate to specify formative measurement models because they “are often uncertain how to incorporate them into structural equation models” (Bollen and Davis, 1994, p. 2). Indeed, there are a number of controversial and not fully resolved issues concerning the conceptualization, estimation and validation of formative measures (e.g., see Howell et al., 2007, 2008-this issue) including, among others, the treatment of indicator multicollinearity, the assessment of indicator validity, and the interpretation of formatively-measured constructs.

This article provides insights into the current state of the literature on formative measurement by merging major contributions in the psychology, management and marketing literatures into an overall picture. The overall aim is to encourage the appropriate use of formative indicators in empirical research while at the same time highlighting potentially problematic issues and suggested remedies.

The section that follows provides a brief conceptual discussion of reflective and formative measurement models. The subsequent section covers the problem of measurement model misspecification, followed by a discussion of its consequences. Next, the article turns attention to a number of critical issues concerning the specification, estimation, and validation of formative measures. Finally, the paper concludes by proposing some directions for future research.

2. Reflective vs. formative measurement: first-order models

The assessment of latent variables has a long tradition in social science (e.g., Churchill, 1979; Duncan, 1984; Nunally, 1978). Latent variables are phenomena of theoretical interest which cannot be directly observed and have to be assessed by manifest measures which are observable. In this context, a measurement model describes relationships between a construct and its measures (items, indicators), while a structural model specifies relationships between different constructs (Edwards and Bagozzi, 2000; Scholderer and Balderjahn, 2006). Anderson and Gerbing (1982, p. 453) note that “the reason for drawing a distinction between the measurement model and the structural model is that proper specification of the measurement model is necessary before meaning can be assigned to the analysis of the structural model”. The measurement model (which is of focal interest in this paper) specifies the relationship between constructs and measures. In this respect, the direction of the relationship is either from the construct to the measures (reflective measurement) or from the measures to the construct (formative measurement).

The first form of specification, that is, the reflective measurement model (see Fig. 1, Panel 1), has a long tradition in the social sciences and is directly based on classical test theory (Lord and Novick, 1968). According to this theory, measures denote effects (or manifestations) of an underlying latent construct (Bollen and Lennox, 1991). Therefore, causality is from the construct to the measures. Specifically, the latent variable η represents the common cause shared by all items xi reflecting the construct, with each item corresponding to a linear function of its underlying construct plus measurement error:

xi = λiη + εi (1)

where xi is the ith indicator of the latent variable η, εi is the measurement error for the ith indicator, and λi is a coefficient (loading) capturing the effect of η on xi. Measurement errors are assumed to be independent (i.e., cov(εi, εj) = 0 for i ≠ j) and unrelated to the latent variable (i.e., cov(η, εi) = 0 for all i).
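As a numerical illustration of Eq. (1), the following numpy sketch generates three reflective indicators from a common latent variable and checks the implication, proved by Bollen (1984) and discussed below, that reflective indicators must be positively intercorrelated. The loadings and error variances are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent variable eta (standardized) and three reflective indicators
# x_i = lambda_i * eta + eps_i, with independent errors, as in Eq. (1).
eta = rng.normal(size=n)
lambdas = np.array([0.9, 0.8, 0.7])        # illustrative loadings
eps = rng.normal(scale=0.5, size=(n, 3))   # illustrative error SD
x = eta[:, None] * lambdas + eps

# All pairwise indicator correlations should be positive, since the
# indicators share eta as their common cause.
corr = np.corrcoef(x, rowvar=False)
off_diag = corr[np.triu_indices(3, k=1)]
print(np.all(off_diag > 0))   # True
```

Dropping one of these indicators would leave the simulated construct unchanged, which is precisely the interchangeability property that distinguishes reflective from formative measures.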


Fig. 1. Alternative measurement models.


Eq. (1) is a simple regression equation where the observable measure is the dependent variable and the latent construct is the explanatory variable. A fundamental characteristic of reflective models is that a change in the latent variable causes variation in all measures simultaneously; furthermore, all measures in a reflective measurement model must be positively intercorrelated (for a proof, see Bollen, 1984).

The second form of specification, that is, the formative measurement model, was first proposed by Curtis and Jackson (1962), who challenge the characteristic of positively correlated measures as a necessary condition. They argue that in specific cases measures show negative or zero correlations despite capturing the same concept. Blalock (1964, 1968, 1971) and Land (1970) subsequently discuss this alternative measurement perspective, according to which measures are causes of the construct rather than its effects (see Fig. 1, Panel 2). In other words, the indicators determine the latent variable, which receives its meaning from the former. Some typical examples are socio-economic status (Hauser and Goldberger, 1971; Hauser, 1973), quality of life (e.g., Bollen and Ting, 2000; Fayers, Hand, Bjordal and Groenvold, 1997), or career success (e.g., Judge and Bretz, 1994); Table 1 provides further examples.

The formal specification of the formative measurement model is:

η = γ1x1 + γ2x2 + … + γnxn + ζ (2)

where γi is a coefficient capturing the effect of indicator xi on the latent variable η, and ζ is a disturbance term. The latter comprises all remaining causes of the construct which are not represented in the indicators and are not correlated with the latter, following the assumption that cov(xi, ζ) = 0.
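Eq. (2) can be sketched the same way (the weights γ and the disturbance variance are again invented for illustration): the construct is computed from its indicators plus a disturbance uncorrelated with them, and nothing in the model forces the indicators to correlate with one another.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Three formative indicators, deliberately generated as mutually
# independent: the formative model imposes no correlation pattern on x_i.
x = rng.normal(size=(n, 3))

gammas = np.array([0.5, 0.3, 0.4])    # illustrative weights gamma_i
zeta = rng.normal(scale=0.6, size=n)  # disturbance with cov(x_i, zeta) = 0

# Eq. (2): eta is *defined* by its indicators plus the disturbance.
eta = x @ gammas + zeta

# eta correlates with every indicator even though the indicators are
# (approximately) uncorrelated with one another and with zeta.
print(np.round([np.corrcoef(x[:, i], eta)[0, 1] for i in range(3)], 2))
```

Note the contrast with the reflective case: here, dropping an indicator would literally change what eta is, which is why formative indicators are not interchangeable.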

Eq. (2) represents a multiple regression equation where, in contrast to Eq. (1), the latent variable is the dependent variable and the indicators are the explanatory variables. Diamantopoulos and Winklhofer (2001) point out several characteristics of this model which make it sharply distinct from the reflective model. First, the indicators characterize a set of distinct causes which are not interchangeable, as each indicator captures a specific aspect of the construct's domain (see also Jarvis et al., 2003; and Rossiter, 2002); indeed, omitting an indicator potentially alters the nature of the construct (Bollen and Lennox, 1991). Second, there are no specific expectations about patterns or magnitude of intercorrelations between the indicators; formative indicators might correlate positively or negatively or lack any correlation (for a detailed discussion see Bollen, 1984). Third, formative indicators have no individual measurement error terms, that is, they are assumed to be error-free in a conventional sense (Edwards and Bagozzi, 2000). The error term (ζ) is specified at the construct level (MacCallum and Browne, 1993) and does not constitute measurement error (Diamantopoulos, 2006). Fourth, a formative measurement model, in isolation, is underidentified and, therefore, cannot be estimated (Bollen, 1989; Bollen and Davis, 1994). In contrast, reflective measurement models with three or more indicators are identified and can be estimated (e.g., see Long, 1983). A later section of this paper addresses the estimation of formative models.

3. Higher-order formative models

The formative model specified in Eq. (2) is a first-order measurement model (Edwards, 2001). However, constructs are often conceptualized and subsequently operationalized as multidimensional entities (e.g., Brewer, 2007; Lin, Sher and Shih, 2005; Venaik et al., 2004; Yi and Davis, 2003). From a conceptual point of view, a construct is multidimensional “when it consists of a number of interrelated attributes or dimensions and exists in multidimensional domains. In contrast to a set of interrelated unidimensional constructs, the dimensions of a multidimensional construct can be conceptualized under an overall abstraction, and it is theoretically meaningful and parsimonious to use this overall abstraction as a representation of the dimensions” (Law, Wong and Mobley, 1998, p. 741).

When dealing with multidimensional constructs, it is necessary to distinguish between (at least) two levels of analysis, that is, one level relating manifest indicators to (first-order) dimensions, and a second level relating the individual dimensions to the (second-order) latent construct (Jarvis et al., 2003; MacKenzie et al., 2005). Failing to carefully specify the latter relationships, “one cannot derive the overall


Table 1. Examples of formatively-measured constructs

Author(s) | Journal | Formative construct(s) | Estimation method

Consumer behavior literature
Lin et al. (2005) | International Journal of Service Industry Management | Customer perceived value | SEM [a]
Hyman et al. (2002) | Journal of Marketing Theory and Practice | Household affluence | MIMIC model
Sánchez-Pérez and Iniesta-Bonillo (2004) | Journal of Business and Psychology | Consumers' commitment towards retailers | MIMIC model

Information technology literature
Brock and Zhou (2005) | Internet Research | Organizational internet use | SEM (PLS) [a]
Pavlou and Gefen (2005) | Information Systems Research | Psychological contract violation; Perceived effectiveness of institutional structures | SEM (PLS) [a]
Santosa et al. (2005) | European Journal of Information Systems | Intrinsic motivators; Situational motivators | SEM (PLS) [a]
Yi and Davis (2003) | Information Systems Research | Observational learning | SEM (PLS) [a]

Management literature
Helm (2005) | Corporate Reputation Review | Firm reputation | SEM (PLS) [a]
Venaik et al. (2005) | Journal of International Business Studies | Environmental controls: local government regulatory influence; quality of local business infrastructure; pressures of global competition; pressures from technological change | SEM (PLS) [a]
Witt and Rode (2005) | Journal of Enterprising Culture | Corporate identity; Corporate culture | SEM (PLS) [a]
Dowling (2004) | Corporate Reputation Review | Corporate descriptors; Corporate reputation | Regression model
Venaik et al. (2004) | Management International Review | Firm pressures | SEM (PLS) [a]
Johansson and Yip (1994) | Strategic Management Journal | Industry drivers; Organization structure; Management process; Global strategy | SEM (PLS) [a]

Marketing literature
Bruhn et al. (2008-this issue) | In this Special Issue | Customer equity management | SEM (PLS, LISREL)
Cadogan et al. (2008-this issue) | In this Special Issue | Quality of market-oriented behaviors | MIMIC model (LISREL)
Brewer (2007) | Journal of International Marketing | Psychic distance | n.a.
Collier and Bienstock (2006) | Journal of Service Research | e-service quality | SEM (AMOS) [a]
Johnson et al. (2006) | Journal of Advertising | Perceived interactivity: reciprocity; responsiveness; nonverbal information; speed of response | SEM (EQS) [a]
Ulaga and Eggert (2006) | Journal of Marketing | Relationship value | SEM with summated dimension scores (PLS)
Reinartz et al. (2004) | Journal of Marketing Research | CRM process implementation | MIMIC model
Arnett et al. (2003) | Journal of Retailing | Retailer equity | MIMIC model
Homburg et al. (2002) | Journal of Marketing | Service orientation | Composite score
Winklhofer and Diamantopoulos (2002) | International Journal of Research in Marketing | Sales forecasting effectiveness | MIMIC model
Homburg et al. (1999) | Journal of Marketing | Marketing's influence; Market-related complexity | SEM (LISREL) [a]

[a] Identification achieved through linkage to two or more reflective constructs.


construct from its dimensions and can only conduct research at the dimensional level, even though these dimensions are claimed theoretically to be under an overall construct” (Law et al., 1998, p. 741). Since for each level both formative and reflective specifications are applicable, Jarvis et al. (2003) identify four different types of multidimensional constructs, namely, (a) formative first-order and formative second-order (synonyms for this model are “aggregate model”, “composite model”, “emergent model” and “indirect formative model”; e.g., see Cohen et al., 1990; Edwards and Bagozzi, 2000; Giere et al., 2006; Law et al., 1998; Law and Wong, 1999), (b) reflective first-order and formative second-order, (c) formative first-order and reflective second-order, and (d) reflective first-order and reflective second-order models (synonyms for this type of model are “latent model”, “factor model”, “superordinate construct”, “indirect reflective model” and “second-order total disaggregation model”; see Bagozzi and Heatherton, 1994; Edwards, 2001; Edwards and Bagozzi, 2000; Giere et al., 2006;


Law et al., 1998). Since this review focuses on formative measurement, this section only briefly discusses the former three types of multidimensional constructs (see Fig. 2).

The first model in Fig. 2 (Type I) conceptualizes the multidimensional construct as a composite of its dimensions such that the arrows point from the dimensions to the construct (Williams, Edwards and Vandenberg, 2003). The dimensions are thus analogous to formative measures; however, in contrast to the traditional conceptualization of formative measures as observed variables (see Eq. (2)), the dimensions are themselves constructs and conceived as specific components of the second-order construct (Edwards, 2001). In this type of model, the error term exists both at the level of the individual (first-order) dimensions and at the overall construct level. Table 1 provides a number of empirical illustrations of Type I formative multidimensional constructs (e.g., Arnett, Laverie, and Meiers, 2003; Brewer, 2007; Reinartz, Krafft and Hoyer, 2004; Venaik et al., 2004; Venaik, Midgley and Devinney, 2005; Witt and Rode, 2005; Yi and Davis, 2003; see also Bruhn et al., 2008-this issue). For example, Yi and Davis' (2003) construct of “observational learning processes” comprises four formative first-order dimensions, namely “attention processes”, “retention processes”, “production processes”, and “motivation processes”.

Fig. 2. Higher-order formative models (adapted from Jarvis et al., 2003, p. 205).

The second type of model shown in Fig. 2 (Type II) represents a second-order construct with first-order formative dimensions which are themselves measured by several reflective manifest items. According to this conceptualization, the error term exists at two different levels, namely (a) at the level of the manifest indicators, where it represents measurement error, and (b) at the level of the second-order construct, where it captures the amount of variance in the second-order construct which the first-order dimensions do not account for. As Type II models have been introduced rather recently, only the most recent literature provides empirical examples of their use (e.g., Johnson, Bruner and Kumar
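A hypothetical Type I structure (formative at both levels) can be made concrete with a few lines of numpy. All weights and variances here are invented for illustration; the point is only to show where the two kinds of disturbance terms enter.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# Two first-order dimensions, each formed from its own observed items
# (Type I: formative at both levels; all weights are illustrative).
x1 = rng.normal(size=(n, 2))   # items forming dimension 1
x2 = rng.normal(size=(n, 3))   # items forming dimension 2

# First-order disturbances zeta_1, zeta_2 sit at the dimension level.
dim1 = x1 @ np.array([0.6, 0.4]) + rng.normal(scale=0.3, size=n)
dim2 = x2 @ np.array([0.5, 0.2, 0.3]) + rng.normal(scale=0.3, size=n)

# The second-order construct is formed from the dimensions and carries
# its own construct-level disturbance, so errors exist at both levels.
eta = 0.7 * dim1 + 0.5 * dim2 + rng.normal(scale=0.4, size=n)

print(eta.shape)
```

Replacing the first-order sums with factor-analytic measurement of each dimension would turn this sketch into the Type II structure discussed next.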


et al., 2006; Lin et al., 2005; see also Ruiz et al., 2008-this issue). For example, Lin et al. (2005) conceptualize the construct of “customer perceived value” as a second-order factor which is formed by five reflectively specified first-order dimensions, namely “monetary sacrifice”, “website design”, “fulfillment/reliability”, and “security/privacy”.

The third model illustrated in Fig. 2 (Type III) has first-order factors as reflective dimensions, but the first-order dimensions themselves have formative indicators. For this reason, the error term exists at the level of the first-order dimensions only and represents both the variance not explained by the manifest indicators (due to the formative specification of the first-order dimensions) and variance not explained by the underlying (higher-order) construct. Although Jarvis et al. (2003) include this model in their typology, the literature has not explicitly recognized this kind of model, and empirical examples remain virtually non-existent. The reasons for this are threefold (see also Albers and Götz, 2006). First, as noted above, the nature of the error term is difficult to interpret due to the endogenous position of the formative first-order dimensions. Second, formative indicators capture different facets of a construct and are therefore not interchangeable (Diamantopoulos and Winklhofer, 2001). These indicators give the first-order dimensions their meaning, which, by definition, has to be different for each dimension because of the formative specification (Rossiter, 2002). Since a reflective specification at the second-order level implies that the dimensions are manifestations of a second-order construct, it is unclear whether the meaning of the dimensions is attributable to the formative indicators or to the underlying common cause. Third, Type III models cannot be estimated using current procedures for achieving identification of formative constructs (see the section on model identification later in this paper). In short, Type III models do not represent an appealing option for specifying multidimensional constructs.

4. Measurement model misspecification

A number of researchers criticize the prevalent neglect of explicit measurement model specification underlying scale construction efforts (Diamantopoulos and Winklhofer, 2001; Eberl, 2006; Fassot, 2006; Fassot and Eggert, 2005; Fornell and Bookstein, 1982; Jarvis et al., 2003; Podsakoff et al., 2006). Most researchers apply scale development procedures without even questioning their appropriateness for the specific construct at hand (see also Albers and Hildebrandt, 2006; Williams et al., 2004; for a noteworthy exception see Eberl and Schweiger, 2005); indeed, Diamantopoulos and Winklhofer (2001, p. 274) speak of an “almost automatic acceptance of reflective indicators”. Consequently, misspecification commonly concerns the adoption of reflective indicators where formative indicators (and thus index construction approaches) would be appropriate (which is a Type I error in Diamantopoulos and Siguaw's (2006) terminology). The other case of misspecification, that is, the incorrect adoption of a formative model where indeed a reflective one would be appropriate (Type II error), is rather negligible (Fassot, 2006; Jarvis et al., 2003). An explanation for this difference in evidence of Type I and Type II errors is the fact that standardized development procedures for reflective scales have been established over the years (e.g., see Churchill, 1979; DeVellis, 2003; Netemeyer, Bearden and Sharma, 2003; Spector, 1992), whereas concrete guidelines for the construction of formative indexes have been proposed only very recently (Diamantopoulos and Winklhofer, 2001; Eggert and Fassot, 2003; Giere et al., 2006).

Jarvis et al. (2003) assess the degree of misspecification for studies published in four major marketing journals (Journal of Marketing, Journal of Marketing Research, Marketing Science, and Journal of Consumer Research). Even though they apply a conservative evaluation approach (i.e., classifying operationalizations as correct in case that either reflective or formative measures could in general apply), they find about a third of all studies to be subject to measurement model misspecification. Fassot (2006) applies Jarvis et al.'s (2003) approach to three major German management journals (Zeitschrift für Betriebswirtschaft, Zeitschrift für betriebswirtschaftliche Forschung, Die Betriebswirtschaft) and reports similar results (i.e., 35% of all investigated studies include misspecified constructs). In a similar effort, Fassot and Eggert (2005) calculate a misspecification rate of some 80% for a major German marketing journal (Marketing ZFP).

This problematic situation is not unique to the marketing literature. In similar efforts, Podsakoff et al. (2006) reveal inappropriate modeling for 62% of constructs published in three major strategic management journals (Academy of Management Journal, Administrative Science Quarterly, Strategic Management Journal), while Podsakoff, MacKenzie, Podsakoff and Lee (2003) report a misspecification rate of 47% for leadership research (including publications in The Leadership Quarterly, Journal of Applied Psychology, and again Academy of Management Journal).

Given this documented existence of measurement model misspecification, the obvious question is to what extent misspecification impacts on model estimates and fit statistics. This question is important because “any bias in the estimates […] could affect the conclusions about the theoretical relationships among the constructs drawn from the research” (Jarvis et al., 2003, p. 207).

The literature review identifies six studies empirically investigating the consequences of measurement model misspecification. Table 2 categorizes these studies along two characteristics. The first characteristic refers to the source of bias investigated, which is either (a) the wrongly specified direction of causality between a given set of indicators and a construct, or (b) the application of an inappropriate item purification procedure (i.e., purifying formative indicators according to guidelines applicable for reflective indicators). The second characteristic refers to the position of the focal misspecified construct in the structural model, which is either exogenous or endogenous. A discussion of the findings of these studies follows.

4.1. Parameter bias due to reversed causality

Jarvis et al. (2003), Law and Wong (1999), and MacKenzie et al. (2005) examine the impact of incorrect causal direction, that is, the specification of a reflective measurement model


Table 2. Empirical studies on consequences of measurement model misspecification

Law and Wong (1999)
Focus (reason for estimation bias): Reversed causality. Data set: Survey data. Technique: SEM (RAMONA).
Structural parameter estimates [a]: exogenous construct misspecified: overestimation; endogenous construct misspecified: not tested.
Model fit [b]: exogenous: CFI ≈, NFI ≈, NNFI ≈, IFI ≈, TLI ≈, χ²/df ↑; endogenous: not tested.
Additional findings: Also biases in model relationships which do not involve the misspecified construct.

Edwards (2001)
Focus: Reversed causality. Data set: Published covariance matrices of survey data. Technique: SEM (LISREL, RAMONA).
Structural parameter estimates [a]: exogenous: underestimation [c]; endogenous: over- and underestimation of some parameters.
Model fit [b]: exogenous: CFI ↓, RMSEA ↑, χ²/df ↑; endogenous: not comparable (df = 0 and perfect fit of formative models).
Additional findings: Concluded that both the multidimensional formative and the reflective specification were inferior to a multivariate structural model.

Jarvis et al. (2003)
Focus: Reversed causality. Data set: Simulated data. Technique: Monte Carlo simulation.
Structural parameter estimates [a]: exogenous: overestimation (335% to 555%); endogenous: underestimation (88% to 93%).
Model fit [b]: CFI ≈, GFI ↑, RMSEA ≈, SRMR ≈, χ²/df ↑.
Additional findings: Item correlation found to be negatively related to magnitude of estimation bias.

MacKenzie et al. (2005)
Focus: Reversed causality. Data set: Simulated data. Technique: Monte Carlo simulation.
Structural parameter estimates [a]: exogenous: overestimation (on average: 429%); endogenous: underestimation (on average: 84%).
Model fit [b]: exogenous: CFI ↓, GFI ≈, RMSEA ↑, SRMR ≈; endogenous: CFI ↓, GFI ↓, RMSEA ↑, SRMR ↑.
Additional findings: Type II error increases if endogenous or both constructs are misspecified.

Albers and Hildebrandt (2006)
Focus: Reversed causality and incorrect indicator purification. Data set: Simulated data. Technique: SEM (PLS, LISREL).
Structural parameter estimates [a]: exogenous: no bias; endogenous: not tested.
Model fit [b]: not given (stated that fit indices were similarly good); endogenous: not tested.

Diamantopoulos and Siguaw (2006)
Focus: Incorrect indicator purification. Data set: Survey data. Technique: Regression analysis.
Structural parameter estimates [a]: exogenous: underestimation [d]; endogenous: not tested.
Model fit [b]: exogenous: CFI ≈, GFI ≈, RMSEA ↑, NNFI ≈, χ²/df ↑; endogenous: not tested.

[a] Unstandardized parameter estimates.
[b] ≈ goodness-of-fit index for reflective and formative model similar (difference +/− .05).
[c] Edwards (2001) estimates several second-order models; this comparison concerns the Congeneric and Estimated Loadings Models.
[d] R-squares compared.


when a formative model is conceptually appropriate, for exogenous latent variables. All three studies reveal an overestimation of structural parameters when the latent variable is affected by misspecification. In some cases, the (incorrect) reflective specification even yields a significant parameter estimate, whereas the parameter estimate is not significant in the (correct) formative specification. Thus, the impact of the focal latent variable on other constructs in the structural model tends to be overestimated.

Jarvis et al. (2003) and MacKenzie et al. (2005) additionally examine the impact of incorrect specifications of endogenous latent variables. In contrast to the exogenous case, both studies report an underestimation of the parameter estimate capturing the impact of antecedent variables on the focal construct. An explanation for these distinct findings of under- and overestimation for endogenous and exogenous positions, respectively, is the difference in the portions of variance accounted for by reflective and formative operationalizations. Specifically, a reflective treatment of a formative construct reduces the variance of the construct (see Fornell, Rhee and Yi, 1991 or Namboodiri, Carter and Blalock, 1975) because the variance of a reflectively-measured construct equals the common variance of its measures, whereas the variance of a formatively-measured construct encompasses the total variance of its measures (Law and Wong, 1999). Consequently, if a misspecification reduces the variance of the exogenous variable while the variance of the endogenous variable is maintained, the parameter estimate for their relationship increases. In contrast, if a misspecification reduces the variance of the endogenous variable while the variance of the exogenous variable is unchanged, the relevant structural parameter estimate decreases.
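The variance argument above can be illustrated numerically. The following sketch is our own illustration, not a replication of any of the cited simulation designs; all variable names and population values are assumed. It shows that substituting a common-variance proxy for a formative construct that carries the total variance of its causes inflates the estimated structural coefficient:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Three formative causes sharing a common component plus unique parts
common = rng.normal(size=n)
uniques = rng.normal(size=(n, 3))
x = 0.5 * common[:, None] + uniques          # indicators x1..x3

eta = x.sum(axis=1)                          # formative construct: total variance of its causes
y = 0.3 * eta + rng.normal(size=n)           # assumed true structural effect = 0.3

def slope(a, b):
    """OLS slope of b regressed on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

# Proxy that (roughly) retains only the indicators' shared variance,
# mimicking a reflective treatment of a formative construct
proxy = x.mean(axis=1)

print(f"slope on formative eta: {slope(eta, y):.2f}")
print(f"slope on common-variance proxy: {slope(proxy, y):.2f}")
```

In this artificial example the true effect is 0.3, yet regressing y on the shrunken-variance proxy roughly triples the estimate, mirroring the direction of the overestimation reported for misspecified exogenous constructs.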

In any case, these analyses reveal that structural paths are either overestimated or underestimated as a result of measurement model misspecification, with undesirable effects on the substantive interpretation of the structural model relationships.

4.2. Parameter bias due to incorrect item purification

To fully capture the meaning of a formatively-measured construct, a census of indicators is (ideally) required because “[o]mitting an indicator is omitting a part of the construct” (Bollen and Lennox, 1991, p. 308). Therefore, an omission of indicators is equivalent to restricting the domain of the construct (MacKenzie et al., 2005). In the context of index construction, this characteristic implies that the elimination of formative items from the item pool has to be theoretically justified rather than based purely on statistical properties (Diamantopoulos and Winklhofer, 2001; Diamantopoulos and Siguaw, 2006). Indeed, “internal-consistency checks on cause-indicators may lead researchers to discard valid measures improperly” (Bollen, 1984, p. 381) and “following standard scale development procedures – for example dropping items that possess low item-to-total correlations – will remove precisely those items that would most alter the empirical meaning of the construct” (Jarvis et al., 2003, p. 202; see also MacKenzie, 2003).
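The item-total statistic at issue is straightforward to compute. The sketch below is a generic illustration (the function name and data-generating values are ours): a valid formative cause that happens to correlate weakly with the remaining items receives the lowest corrected item-total correlation and would therefore be discarded under reflective purification rules.

```python
import numpy as np

def corrected_item_total(X):
    """Correlation of each item with the sum of the remaining items."""
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    return np.array([
        np.corrcoef(X[:, i], total - X[:, i])[0, 1]
        for i in range(X.shape[1])
    ])

rng = np.random.default_rng(7)
n = 50_000
core = rng.normal(size=n)
# Two items tapping a shared theme, one distinct formative cause
# (think of "perceived friendliness" in the hospital-quality example)
x1 = core + 0.5 * rng.normal(size=n)
x2 = core + 0.5 * rng.normal(size=n)
x3 = rng.normal(size=n)                      # valid cause, weakly correlated with the rest

r = corrected_item_total(np.column_stack([x1, x2, x3]))
print(r.round(2))  # the distinct item scores lowest and would be dropped by reflective rules
```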

In light of the extensive presence of measurement model misspecification discussed earlier, recent studies examine the consequences of applying conventional scale development procedures to formative measures. Fassot (2006) provides an example of a misspecified measure that leads to the neglect of a key aspect of the focal construct. More specifically, “perceived friendliness of the staff” is erroneously dropped from a measure of hospital quality for failing to meet conventional standards for reflective items (i.e., high item-total correlations), despite being a key aspect of a hospital's quality assessment. In line with this example, Diamantopoulos and Siguaw (2006) find that the same initial item pool results in considerably different final item sets under reflective and formative purification guidelines, respectively. The former approach eliminates items with low inter-item correlations, whereas the latter drops items with high inter-item correlations (which cause problems of multicollinearity). In Diamantopoulos and Siguaw's (2006) example, the resulting scale and index share no more than two of the 30 initial items. Their study therefore demonstrates how erroneous reflective scale purification can substantially alter the meaning of formative constructs.

Albers and Hildebrandt (2006) address the issue of parameter estimation bias due to incorrect indicator purification. First, the authors compare parameter estimates of a reflectively and a formatively specified measurement model using the full item set prior to purification. Second, they compare two formatively specified models, once using the full item pool (i.e., in accordance with the requirement of a census of items) and once using a reduced item set obtained by following purification guidelines for reflective scales. The latter comparison reveals an extensive underestimation of structural parameters, while the former shows no significant differences. Therefore, in this example, it is the erroneous purification rather than the causal order misspecification that drives the parameter bias.

4.3. Effects on fit statistics

The studies in Table 2 also examine the impact of misspecification on goodness-of-fit indices for the overall (i.e., measurement and structural) model. An intuitive expectation is that the consequences of misspecification in terms of changed construct meanings and biased parameter estimates would also lead to poor model fit. However, the majority of models incorporating misspecified constructs show highly acceptable values for CFI, GFI, SRMR and RMSEA. Moreover, these values are similar to the goodness-of-fit values obtained for the corresponding correctly specified model. For example, MacKenzie et al. (2005, p. 724) conclude from their study that “each of the four goodness-of-fit indices failed to detect the misspecification of the measurement model”. This equally applies to all other studies listed in Table 2. Only the chi-square (per degree of freedom) statistic is consistently higher in the wrongly reflectively specified models throughout the studies, thus providing some indication of the underlying misspecification.

Summarizing, all studies empirically examining the consequences of measurement model misspecification on parameter estimates report serious under- or overestimation of parameters as a consequence of misspecified causality, wrongly adopted purification procedures, or a combination of both. Such biases may in turn lead to incorrect conclusions about the tested relationships, thus calling many empirical results into question. Especially alarming is the fact that a satisfactory overall model fit does not guarantee a correct specification and that misspecification does not reveal itself through poor fit index values. It is to be hoped that the empirical demonstration of the undesirable consequences of measurement model misspecification will lend more weight to Bagozzi's (1984) and Jarvis et al.'s (2003) call to conceptually justify measurement relationships as hypotheses and subsequently test them empirically.

5. The status quo of formative measures: issues and proposed remedies

As the introductory section already outlines, the literature has only recently started to pay serious attention to formative measurement models, and empirical applications are still rare. As a result, experience with formative measures is limited and several conceptual and practical issues are not yet fully clarified. The following sections discuss these issues and highlight various (sometimes contradictory) views and proposed remedies.

5.1. Conceptual issues

5.1.1. Error-free measures
Formative measurement models incorporate the error term at the construct level and specify individual indicators to be error-free (see Eq. (2) earlier). Some researchers find this non-existence of measurement error hard to accept. Edwards and Bagozzi (2000), for example, regard such an assumption as untenable in most situations. Addressing this objection, a model such as the one depicted in Fig. 3 is worth considering as one way of incorporating measurement error into formative measurement models. This model is similar to Edwards and Bagozzi's (2000) “spurious model” with multiple common causes, with the only difference that the latent variables are intentionally introduced to accommodate measurement error in the indicators.

This model inserts a latent variable ξi for each formative indicator xi, so that the focal latent variable η is indirectly linked to the indicators xi via the latent exogenous constructs ξi. In doing so, each formative indicator becomes a (single) reflective measure of its respective latent variable ξi and consequently carries an error term; hence, the assumption of error-free indicators is relaxed.

Fig. 3. Modified formative model with individual error terms.

Although this model has the substantial advantage of incorporating measurement error, its conceptual justification is questionable for several reasons. First, the inclusion of the first-order constructs ξi introduces a “fictitious” level, which adversely affects model parsimony and suggests that a latent variable can more or less automatically be specified for any manifest variable. Second, given that the xi are not directly linked to η, they cannot legitimately be considered indicators of η, because indicators need to be linked by means of a direct relationship to the construct they assess. Third, the measures of the ξi in Fig. 3 are single indicators, with all the drawbacks such indicators entail (such as high specificity and low reliability). As a discussion of the potential problems with single-item measures is beyond the scope of this paper, the reader is referred to Gardner, Cummings, Dunham and Pierce (1998) and Nunnally and Bernstein (1994) for further details.

5.2. Interpretation of the error term

Eq. (2) and Fig. 1 (Panel 2) show that a formative measurement model specification includes an error (disturbance) term at the construct level. This error term represents the surplus meaning of the construct (Jarvis et al., 2003; Temme, 2006) which is not captured by the set of formative indicators included in the model specification. Diamantopoulos (2006, p. 7) points out that “previous discussions of the error term are often problematic and fail to provide […] a clear interpretation of exactly what the error term represents”. Jarvis et al. (2003), for example, describe the error term as the collective (i.e., overall) random error of all formative indicators taken as a group, while MacKenzie et al. (2005, p. 712) interpret the error estimate as capturing “the invalidity of the set of measures — caused by measurement error, interactions among the measures, and/or aspects of the construct domain not represented by the measures”. However, the first source of error, that is, measurement error, is conceptually incorrect. Diamantopoulos (2006) demonstrates that the error term does not represent measurement error because formative indicators are specified to be error-free and, therefore, measurement error cannot be included in the error term at the construct level. The second source, that is, measure interactions, is statistically plausible but lacks substantive interpretation. Since formative indicators determine the meaning of the latent variable, it is not possible to separate the construct's meaning from the indicators' content (Diamantopoulos, 2006). If two indicators show interaction effects, these effects would form the construct's meaning just as both indicators separately do. The third source, that is, aspects of the construct domain not represented by the indicators, is indeed the correct interpretation of the nature of the error term. Specifically, “the error term in a formative measurement model represents the impact of all remaining causes other than those represented by the indicators included in the model” (Diamantopoulos, 2006, p. 11). Formative latent variables have a number of proximal causes which researchers try to identify when conceptually specifying the construct. However, in many cases researchers will be unable to detect all possible causes, as there may be some which have neither been discussed in prior literature nor revealed by exploratory research. The construct-level error term represents these missing causes. This means that the more comprehensive the set of formative indicators specified for the construct, the smaller the influence of the error term. Williams et al. (2003, p. 908) note in this context that “as the variance of the residual increases, the meaning of the construct becomes progressively ambiguous”.
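This relationship between indicator coverage and the disturbance term can be illustrated with a small numerical sketch (our construction, assuming a construct that is an exact linear function of five proximal causes with assumed weights): the variance left unexplained shrinks as the indicator set grows.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
causes = rng.normal(size=(n, 5))             # five proximal causes of the construct
eta = causes @ np.array([0.5, 0.4, 0.4, 0.3, 0.3])  # construct fully determined by its causes

def residual_variance(X, y):
    """Variance left unexplained when y is regressed on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

# Disturbance-term variance as the indicator set grows from 1 to all 5 causes
for k in range(1, 6):
    print(k, round(residual_variance(causes[:, :k], eta), 3))
```

With only one cause included, roughly two-thirds of the construct's variance ends up in the disturbance term; with the full census of causes, the residual vanishes.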

5.3. Estimation of formative models

5.3.1. Multicollinearity
Multicollinearity is an undesirable property in formative models as it causes estimation difficulties (Albers and Hildebrandt, 2006; Diamantopoulos and Winklhofer, 2001). These estimation problems arise because a multiple regression links the formative indicators to the construct (see Eq. (2)). Substantial correlations among formative indicators result in unstable estimates for the indicator coefficients γi, and it becomes difficult to separate the distinct influence of individual indicators on the latent variable η. Diamantopoulos and Winklhofer (2001) further note that multicollinearity leads to difficulties in assessing indicator validity on the basis of the magnitude of the parameters γi (Bollen, 1984; MacKenzie et al., 2005).

The literature proposes different approaches for dealing with multicollinearity. Bollen and Lennox (1991) argue that indicators which intercorrelate highly are almost perfect linear combinations and thus quite likely contain redundant information. Based on this view, several authors (e.g., Diamantopoulos and Winklhofer, 2001; Götz and Liehr-Gobbers, 2004) suggest indicator elimination based on the variance inflation factor (VIF), which assesses the degree of multicollinearity. Some empirical studies on formative measure development (e.g., Diamantopoulos and Siguaw, 2006; Helm, 2005; Sánchez-Pérez and Iniesta-Bonillo, 2004; Witt and Rode, 2005) follow this advice, usually by applying the commonly accepted cut-off value of VIF > 10 or its tolerance equivalent (see Giere et al., 2006; Hair, Anderson, Tatham and Black, 1998; Kennedy, 2003). However, considering that this multicollinearity check leads to indicator elimination on purely statistical grounds, and given the danger of altering the meaning of the construct by excluding indicators (Bollen and Lennox, 1991), “[i]ndicator elimination – by whatever means – should not be divorced from conceptual considerations when a formative measurement model is involved” (Diamantopoulos and Winklhofer, 2001, p. 273).
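The VIF referred to above follows the standard definition VIF_i = 1/(1 − R_i²), where R_i² is obtained by regressing indicator i on the remaining indicators. A minimal sketch (the helper function is ours; the data are artificial):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (VIF_i = 1 / (1 - R_i^2))."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)                  # center so no intercept is needed
    out = []
    for i in range(X.shape[1]):
        others = np.delete(Xc, i, axis=1)
        beta, *_ = np.linalg.lstsq(others, Xc[:, i], rcond=None)
        resid = Xc[:, i] - others @ beta
        r2 = 1.0 - resid.var() / Xc[:, i].var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(3)
n = 10_000
a = rng.normal(size=n)
b = rng.normal(size=n)
c = a + 0.1 * rng.normal(size=n)             # nearly a linear copy of a

print(vif(np.column_stack([a, b, c])).round(1))
# the two nearly collinear columns far exceed the cut-off of 10; b stays near 1
```

In this artificial example the collinear columns would be flagged for scrutiny; in line with the quotation above, any resulting elimination should still be conceptually justified rather than automatic.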

Albers and Hildebrandt (2006) put forward a different approach to overcoming multicollinearity: combining formative indicators into an index (using either an arithmetic or geometric mean) and using the latter as a single-item construct in the subsequent analysis. However, although intuitively appealing, this suggestion raises two important questions. First, what is the substantive interpretation of the joint index of two indicators? If, for example, income and age show a high intercorrelation (which appears to be a likely assumption) and their measures are consequently combined into an index, what exactly does this index capture? Second, having included this index in Eq. (2), what kind of information does its corresponding regression parameter estimate provide? Is it the impact of a joint unit change in both income and age?

5.3.2. Exogenous variable intercorrelations
One general issue when specifying measurement models is the specification of inter-indicator correlations. In reflective models, a common approach is to free all covariances among exogenous variables, allowing for intercorrelations. In formative models, following this strategy leads to a large number of additional parameters, namely estimates of covariances between (a) formative indicators within a construct, (b) formative indicators across constructs, and (c) exogenous latent constructs (Bollen and Lennox, 1991; MacCallum and Browne, 1993).

Bollen and Lennox (1991) recommend allowing for intercorrelation of formative measures which relate to the same construct (without, however, expecting any specific pattern). Furthermore, they argue that for both reflectively- and formatively-measured constructs it is likely, though not necessary, that item correlations within constructs exceed item correlations between constructs. Based on this argument, MacCallum and Browne (1993) consider two possible approaches to specifying correlations.

The first approach specifies formative indicators of the same construct to be correlated with each other but uncorrelated with indicators of other constructs. The obvious advantage of this procedure is that it retains model parsimony, as no non-hypothesized paths are added. The obtained goodness-of-fit indices are hence based solely on the hypothesized relationships, that is, the relationships of interest. The shortcoming of this approach, however, is that fixing covariances to zero leads to blocks of zeros in the implied covariance matrix. These zero covariances assume that the corresponding indicators and/or latent variables are perfectly uncorrelated. MacCallum and Browne (1993) note that this assumption carries substantive meaning for the model and therefore requires theoretical justification; they consequently refrain from recommending this approach. Jarvis et al. (2003) further argue that any common cause of the concerned variables that is not incorporated in the model contributes to a lack of model fit. Consequently, they also conclude that fixing covariances to zero is an inappropriate method.

The second approach specifies formative indicators to be correlated with each other as well as with indicators of other constructs or exogenous variables. The major advantage of this method is that all variables are allowed to covary instead of assuming complete independence, which is theoretically not justifiable. This approach, however, also raises a number of problematic issues. First, the number of parameters to be estimated increases, thereby decreasing the number of degrees of freedom. Second, MacCallum and Browne (1993) empirically show that the additional parameters provide little explanatory value. Consequently, models lack parsimony without providing substantive meaning in explaining inter-measure relationships. Furthermore, the estimates for unhypothesized parameters influence the overall model fit, even though they are not of interest. Despite these shortcomings, MacCallum and Browne (1993) recommend this method over the option of having zero blocks in the implied covariance matrix. Jarvis et al. (2003) agree but stress that it is necessary to locate the impact of the non-hypothesized parameter estimates on the model fit. They suggest estimating a series of nested models, that is, freeing parameters step by step and comparing the overall model fit across steps.

Fig. 4. Identification using a MIMIC model.

5.3.3. Model identification
A major concern with formative measurement models is how to establish statistical identification to enable their estimation. In isolation, formatively-measured constructs as defined by Eq. (2) are underidentified (Bollen and Lennox, 1991; MacCallum and Browne, 1993; Temme, 2006) and thus cannot be estimated. This inability to estimate formative measurement models without the introduction of additional information (see below) has resulted in criticism of the value of formative measurement in general (see Howell et al., 2007, 2008-this issue).

As with reflective measurement models, two necessary but not sufficient conditions must be met to identify models including formatively-measured constructs (Bollen, 1989; Bollen and Davis, 1994; Cantaluppi, 2002a,b; Edwards, 2001; Temme, 2006). First, the number of non-redundant elements in the covariance matrix of the observed variables must be greater than or equal to the number of unknown parameters in the model (t-rule). Second, the latent construct must be scaled (scaling rule). For the latter condition, three main options are available (Bollen and Davis, 1994; MacCallum and Browne, 1993), namely (a) fixing a path from a formative indicator to the construct, (b) fixing a path from the formatively-measured construct to a reflectively-measured endogenous latent variable, or (c) standardizing the formatively-measured construct by fixing its variance to unity. Edwards (2001) advises the last of these three options because fixing path parameters precludes estimating standard errors of theoretically interesting relationships. Note that the choice of scaling method can affect substantive conclusions, as the significance of different relationships in a model with a formatively-measured construct may vary depending on how the scale of the latter is set (see Franke et al., 2008-this issue).
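The t-rule itself reduces to simple counting: with p observed variables, the covariance matrix supplies p(p + 1)/2 non-redundant elements. A minimal sketch (the function name and the parameter counts in the example are hypothetical, not taken from any of the cited models):

```python
def t_rule_satisfied(n_observed: int, n_free_params: int) -> bool:
    """Necessary (not sufficient) t-rule: the number of free parameters must not
    exceed the p(p+1)/2 non-redundant elements of the observed covariance matrix."""
    non_redundant = n_observed * (n_observed + 1) // 2
    return n_free_params <= non_redundant

# Hypothetical example: 6 observed variables give 21 non-redundant elements
print(t_rule_satisfied(n_observed=6, n_free_params=14))  # True: 14 <= 21
print(t_rule_satisfied(n_observed=6, n_free_params=25))  # False: 25 > 21
```

Passing this count is only a necessary condition; the scaling rule and the 2+ emitted paths rule discussed next must still be checked.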

The t-rule and scaling rule are, however, not sufficient conditions for identifying formative measurement models. In this context, Bollen (1989) draws attention to the fact that the formative measurement model needs to be placed within a larger model that incorporates consequences (i.e., effects) of the latent variable in question to enable its estimation. Specifically, to identify the disturbance term ζ at the construct level, the formative latent variable needs to emit at least two paths to other (reflective) constructs or indicators (MacCallum and Browne, 1993); the literature also refers to this condition as the 2+ emitted paths rule (Bollen and Davis, 1994).

The literature discusses three approaches to applying the 2+ emitted paths rule: (a) adding two reflective indicators to the formatively-measured construct, (b) adding two reflectively-measured constructs as outcome variables, and (c) a mixture of these two approaches, that is, adding a single reflective indicator and a reflectively-measured construct as an outcome variable.

5.3.3.1. Adding two reflective indicators. The first option is adding two reflective measures to the set of formative indicators (see Fig. 4). Jarvis et al. (2003) and MacKenzie et al. (2005) advise this method based on the key arguments that (a) this approach does not require adding constructs to the model solely for identification purposes (which contributes to model parsimony), and (b) measurement parameters are stable and less sensitive to changes in structural parameters.

However, this model allows for different conceptual interpretations (Jarvis et al., 2003), namely (a) a MIMIC model (Jöreskog and Goldberger, 1975), (b) an endogenous construct with two reflective indicators that is influenced by exogenous observed variables, or (c) a formatively-measured construct which influences indicators of another construct. MacKenzie et al. (2005) argue that the constellation resulting from adding two reflective measures to the formative specification should not be interpreted as a MIMIC model but as a latent variable having a mixture of formative and reflective indicators (since both types of indicators belong to the same concept domain and are content-valid operationalizations of the same construct). In contrast, MacCallum and Browne (1993), Scholderer and Balderjahn (2006) and Temme (2006) explicitly equate models with mixed indicators and MIMIC models. It is outside the scope of this paper to discuss these interpretations in detail, but it should be stressed that despite the different possible interpretations at a conceptual level, there are no differences at the empirical level (the models yield the same parameter estimates).


Fig. 5. Identification with two reflectively-measured constructs.


5.3.3.2. Adding two reflective constructs. According to Bollen and Davis (1994), another option for establishing model identification is the specification of two structural relations from the formative latent variable to two reflectively-measured constructs (Fig. 5). While these two reflectively-measured constructs need to be unrelated in models which comprise only the focal formatively-measured construct and the two reflectively-measured constructs (as in Fig. 5), the reflective constructs may be causally related in larger models (Temme, 2006).

This model is justifiable when two reflectively-measured constructs can be included in the nomological network on theoretical grounds. However, including reflectively-measured outcome variables purely for identification reasons puts the theoretical model specification into question if these outcomes are not of theoretical interest. Note also that the choice of outcome constructs potentially affects the interpretation of the formatively-measured construct itself by influencing the estimates of the γ-parameters (see Heise, 1972; Howell et al., 2007; and also Franke et al., 2008-this issue). Indeed, as Bagozzi (2007, p. 236) observes, “the parameters relating the observed variables to their purported formative latent variable are functions of the number and nature of endogenous latent variables and their measures”.

5.3.3.3. Adding one reflective indicator and one reflective construct. This model is a mixture of the two previous procedures and involves adding one reflective indicator to the latent construct and linking the latter to a reflectively-measured latent variable (Fig. 6). This mixed approach is applicable if the theoretical model includes only one structural relationship of the formatively-measured latent variable to a reflectively-measured latent variable. In this case, including a reflective indicator such as a global measure helps to overcome underidentification and might simultaneously be used for validation purposes (see Diamantopoulos and Winklhofer, 2001).

Fig. 6. Identification with one reflective measure and one reflectively-measured construct.

Temme (2006) demonstrates that the 2+ emitted paths rule is a necessary but not necessarily sufficient condition for identification when the two reflectively-measured outcome


constructs are either directionally related (i.e., one directly impacts on the other) or their disturbance terms are correlated. These models require imposing further restrictions in order to establish full model identification (such as fixing the covariance of the disturbance terms to zero, or using a partially reduced form model; for details, see Bollen and Davis, 1994; Cantaluppi, 2002a,b; and Temme, 2006).

Finally, models which violate the 2+ emitted paths rule because they contain formatively-measured constructs that emit only one path can be identified by fixing the variance of the disturbance term to zero (MacCallum and Browne, 1993). MacCallum and Browne (1993) advise applying this approach with caution, as it implies the theoretical assumption that the formative indicators completely capture the construct. In other words, this approach assumes that a census of indicators of the latent variable is undertaken at the item generation stage and, hence, that no unexplained variance exists. By fixing the disturbance term to zero, the formative construct becomes a weighted linear combination of its indicators without any surplus meaning (Diamantopoulos, 2006; MacKenzie et al., 2005). Although there are examples of constructs for which all possible indicators could conceivably be specified (Diamantopoulos, 2006), in most cases this assumption is not reasonable (Bollen and Davis, 1994) and setting the error term to zero is therefore not justifiable.

Likewise, the three higher-order models in Fig. 2, like all formative measurement models, are statistically underidentified in isolation and cannot be estimated. Since a discussion of the necessary conditions for identifying higher-order models is beyond the scope of this review, the reader is referred to Albers and Götz (2006), Cantaluppi (2002a,b), Edwards (2001), Giere et al. (2006), Jarvis et al. (2003), Temme (2006), and Williams et al. (2003).

5.4. Reliability and validity assessment of formative models

5.4.1. Reliability assessment
As the correlations between formative indicators may be positive, negative or zero (Bollen, 1984; Diamantopoulos and Winklhofer, 2001), reliability in an internal consistency sense is not meaningful for formative indicators (Bagozzi, 1994; Hulland, 1999). As Nunnally and Bernstein (1994) put it, “internal consistency is of minimal importance because two variables that might even be negatively correlated can both serve as meaningful indicators of a construct”. Similarly, Bollen and Lennox (1991) explicitly alert researchers not to rely on correlation matrices for indicator selection, as this might lead to eliminating valid measures.

While Rossiter (2002, p. 388) condemns all sorts of reliability assessment, claiming that “for a formed attribute, there is […] no question of unreliability”, and several other authors skip the issue of reliability assessment when discussing formative measure development (e.g., Diamantopoulos and Winklhofer, 2001; Eggert and Fassot, 2003), Bagozzi (1994) and Diamantopoulos (2005) recommend reliability assessment for formative indicators in the form of test-retest reliability (see e.g., DeVellis, 2003; Spector, 1992). MacKenzie et al. (2005) additionally propose using the correlation between the formative indicators and an alternative measure assessing the focal construct. What needs to be clarified, however, is how such a correlation should be interpreted. Would a non-significant correlation unambiguously mean that the focal measure lacks reliability? What if the alternative measure is itself unreliable? Does this approach actually test the reliability of the focal measure, or is it rather a test of convergent validity?

5.4.2. Validity assessment
One of the most controversial issues in the formative measurement literature is validity assessment. Some researchers argue that no quantitative quality checks are usable for assessing the appropriateness of formative indices (e.g., Homburg and Klarmann, 2006). Others note that the applicability of statistical procedures is limited, as the choice of formative indicators determines the conceptual meaning of the construct (Albers and Hildebrandt, 2006). Rossiter (2002, p. 315) dismisses any validity assessment for formative indicators, claiming that “all that is needed is a set of distinct components as decided by expert judgment”. However, most researchers do not share these views. Edwards and Bagozzi (2000, p. 171), for example, stress that “if measures are specified as formative, their validity must still be established. It is bad practice to […] claim that one's measures are formative, and do nothing more”.

5.4.2.1. Individual indicator validity. Bollen (1989) argues that the γ-parameters, which reflect the impact of the formative indicators on the latent construct (see Eq. (2)), indicate indicator validity. Because the γ-parameters capture the contribution of each individual indicator to the construct, items with non-significant γ-parameters should be considered for elimination, as they cannot represent valid indicators of the construct (assuming that multicollinearity is not an issue). Diamantopoulos and Winklhofer (2001) build upon Bollen's (1989) argument and recommend using a MIMIC model because it simultaneously allows for the estimation of the γ-parameters and provides an overall model fit (which is indicative of the validity of the formative indicators as a set).

An alternative (or additional) approach is assessing indicator validity by estimating the indicators' correlations with an external variable. For example, Diamantopoulos and Winklhofer (2001) suggest including a global measure summarizing the essence of the construct (see also Fayers et al., 1997). Assuming that the overall measure is a valid criterion, the relationship between a formative indicator and the overall measure indicates indicator validity (Eggert and Fassot, 2003; MacKenzie et al., 2005). Following this approach, indicators correlating highly with the external variable are retained, whereas those showing low or non-significant relationships are candidates for elimination.
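
The external-criterion screening just described can be sketched in a few lines of code. All data, variable names, and the 0.30 retention cut-off below are purely illustrative assumptions, not values recommended by the literature; in practice one would test the significance of each correlation rather than apply a fixed threshold.

```python
# Sketch of the external-criterion check: each formative indicator is
# correlated with a global single-item measure summarizing the construct;
# indicators with weak correlations become candidates for elimination.
# Hypothetical data and threshold throughout.
from math import sqrt

def pearson_r(x, y):
    """Plain Pearson product-moment correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def screen_indicators(indicators, global_measure, threshold=0.30):
    """Return (retain, eliminate) lists based on |r| with the global item."""
    retain, eliminate = [], []
    for name, values in indicators.items():
        r = pearson_r(values, global_measure)
        (retain if abs(r) >= threshold else eliminate).append((name, round(r, 2)))
    return retain, eliminate
```

A caveat from the surrounding discussion applies here as well: the procedure is only as good as the validity of the global criterion measure itself.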

Lastly, a formative measurement model specification implies that the latent variable completely mediates the effects of its indicators on other (outcome) variables (see Figs. 4 and 5). This implies certain proportionality constraints on the model coefficients (Bollen and Davis, 1994; Hauser, 1973). If such proportionality constraints do not hold for a particular indicator, the validity of the latter is questionable (see Franke et al., 2008-this issue).
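
The proportionality constraints can be illustrated with a small noise-free simulation. All weights and data below are hypothetical; the point is only that, under full mediation, the reduced-form coefficient of outcome j on indicator i equals β_j·γ_i, so the ratio of any two indicators' coefficients is identical across outcomes.

```python
# Noise-free sketch of the proportionality constraint implied by full
# mediation: if eta = g1*x1 + g2*x2 and y_j = b_j*eta, regressing each
# outcome on the indicators yields coefficients b_j*g_i, so the x1/x2
# coefficient ratio equals g1/g2 for every outcome. Illustrative numbers.

def ols_two(x1, x2, y):
    """Least squares for y = c1*x1 + c2*x2 (no intercept), normal equations."""
    s11 = sum(a * a for a in x1)
    s22 = sum(b * b for b in x2)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s1y = sum(a * c for a, c in zip(x1, y))
    s2y = sum(b * c for b, c in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return ((s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det)

g1, g2 = 0.7, 0.3            # formative weights (assumed)
b1, b2 = 2.0, 0.5            # structural effects of eta on two outcomes
x1 = [1.0, 2.0, 3.0, 4.0, 2.5]
x2 = [2.0, 1.0, 4.0, 3.0, 0.5]
eta = [g1 * a + g2 * b for a, b in zip(x1, x2)]
y1 = [b1 * e for e in eta]
y2 = [b2 * e for e in eta]

c11, c12 = ols_two(x1, x2, y1)   # recovers (b1*g1, b1*g2)
c21, c22 = ols_two(x1, x2, y2)   # recovers (b2*g1, b2*g2)
ratio1, ratio2 = c11 / c12, c21 / c22   # both equal g1/g2
```

With sampling error the ratios would only be approximately equal, and testing whether they depart significantly from proportionality is the substance of the approach discussed by Franke et al. (2008-this issue).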

5.4.2.2. Construct validity. After examining validity at the individual indicator level, the next step involves assessing validity at the overall construct level. An important point in this regard is that "causal indicators are not invalidated by low internal consistency, so to assess validity we need to examine other variables that are effects of the latent variable" (Bollen and Lennox, 1991, p. 312, emphasis added). One common approach is focusing on nomological (Jarvis et al., 2003; MacKenzie et al., 2005; Reinartz et al., 2004) and criterion-related (Diamantopoulos and Siguaw, 2006; Edwards, 2001; Jarvis et al., 2003) validity.

MacKenzie et al. (2005) suggest proceeding as with reflective scales, that is, estimating hypothesized relationships of the focal construct with theoretically related constructs. These estimated relationships should be consistent with the expected direction and be significantly different from zero. Diamantopoulos and Winklhofer (2001) also underline the importance of nomological validation, particularly in cases where indicators have been purified. Rossiter (2002, p. 327) challenges the approach of evaluating the validity of a formative index by relating it to other constructs, arguing that "[a] scale's validity should be established independently for the construct". In response, Diamantopoulos (2005) points out that, by definition, all forms of validity — with the exception of face and content validity — are defined in terms of relationships with other measures (see Carmines and Zeller, 1979; Zeller and Carmines, 1980).

Concerning other types of validity assessment, Bagozzi (1994, p. 338) states that "construct validity in terms of convergent and discriminant validity [is] not meaningful when indexes are formed as linear sums of measurement". In contrast, MacKenzie et al. (2005) suggest that standard procedures for assessing discriminant validity are equally applicable to formative indexes; these include testing (a) whether the focal construct correlates less than perfectly with related constructs, and/or (b) whether it shares less than half of its variance with some other construct, that is, whether the construct intercorrelation is less than .71 (Fornell and Larcker, 1981).
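
As a rough sketch of this form of the Fornell and Larcker (1981) criterion, the check below compares the intercorrelation of two composite index scores against √.5 ≈ .71, i.e., a shared variance of one half. The scores and function names are hypothetical illustrations, not part of the original procedure.

```python
# Minimal discriminant-validity sketch: two constructs share less than
# half their variance when their intercorrelation is below sqrt(.5) ~ .71.
# The inputs are hypothetical composite (index) scores per respondent.
from math import sqrt

def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def discriminant_ok(scores_a, scores_b, cutoff=sqrt(0.5)):
    """Return (verdict, shared variance): True if |r| is below ~.71."""
    r = correlation(scores_a, scores_b)
    return abs(r) < cutoff, r * r
```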

Diamantopoulos (2006) proposes using the variance of the error term as an indication of construct validity. Since the error term captures aspects of the construct's domain that the set of indicators neglects, the lower the variance of the error term, the more valid the construct (see also Williams et al., 2003). If the set of indicators is comprehensive in the sense that it includes all important construct facets, the construct meaning is validly captured; accordingly, the residual variance is likely to be small.

Finally, confirmatory tetrad analysis (CTA) (Bollen and Ting, 1993, 2000; Eberl, 2006; Gudergan et al., 2008-this issue) offers a basic test of construct validity. Although Bollen and Ting (2000, p. 4) originally propose CTA as "an empirical test of whether a causal or effect indicator specification is appropriate", interpreting evidence supporting the latter as also supporting the construct's validity is reasonable.
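
The algebra underlying CTA can be sketched as follows. This is only the deterministic core of the idea, not the full Bollen–Ting test, which additionally requires the sampling distribution of the tetrads and a selection of non-redundant ones; the covariance matrix below is a hypothetical, exactly single-factor case.

```python
# Tetrad logic behind CTA: for four effect (reflective) indicators of one
# factor, products of covariances satisfy s13*s24 - s14*s23 = 0.
# Non-vanishing tetrads speak against the reflective specification.

def tetrad(cov, a, b, c, d):
    """Tetrad difference cov[a][c]*cov[b][d] - cov[a][d]*cov[b][c]."""
    return cov[a][c] * cov[b][d] - cov[a][d] * cov[b][c]

loadings = [0.9, 0.8, 0.7, 0.6]   # assumed loadings, unit factor variance
# off-diagonal covariances implied by a one-factor model: cov(x_i, x_j) = l_i*l_j
cov = [[loadings[i] * loadings[j] for j in range(4)] for i in range(4)]

# the three tetrads for indicators (1, 2, 3, 4):
t1 = tetrad(cov, 0, 1, 2, 3)   # s13*s24 - s14*s23
t2 = tetrad(cov, 0, 2, 1, 3)   # s12*s34 - s14*s32
t3 = tetrad(cov, 0, 3, 1, 2)   # s12*s43 - s13*s42
# all three vanish for a reflective (single-factor) specification
```

With sample covariances the tetrads would vanish only approximately, which is why the statistical machinery of Bollen and Ting (1993, 2000) is needed in applied work.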

6. Conclusion and future research

Building on a review of the literature relating to the specification, estimation, and validation of formative measurement models, this article hopefully lends a helping hand to researchers considering the adoption of formative measurement in their empirical efforts, while, at the same time, encouraging a critical perspective in the application of formative indicators.

Concerning future research, one major issue concerns the conceptual plausibility of formatively-measured constructs occupying endogenous positions in structural models. While a number of studies incorporate formative latent variables in such positions (e.g., Edwards, 2001; Jarvis et al., 2003; MacKenzie et al., 2005), Wiley (2005, p. 124, emphasis in original) notes that there is "no mechanism by which an antecedent variable can influence a formative index". Since the set of causal indicators and the disturbance term jointly account for the total variation of a formatively-measured construct, the specification of an additional source of variation (i.e., an antecedent construct) is conceptually questionable. Given the conceptual and practical importance of this issue, a debate on the use of formatively-measured constructs as endogenous variables is urgently required.

Another issue for future research concerns modeling formatively-measured constructs as moderator variables in structural models. Although the literature provides empirical examples of employing formatively specified moderators (e.g., Reinartz et al., 2004), more research using formatively-measured constructs when forming interaction terms is needed.

Finally, there is a debate on whether formative measurement is really necessary, that is, whether it should be used in the first place. Bagozzi (2007, p. 236), for example, states that "formative measurement can be done but only for a limited range of cases and under restrictive assumptions", while Howell et al. (2007, p. 216; see also Howell et al., 2008-this issue) argue that "formative measurement is not an equally attractive alternative [to reflective measurement]". Although there are those (including the authors and Bollen, 2007) who feel that, despite its various shortcomings, formative measurement is indeed a viable alternative to reflective measurement on conceptual grounds, further theoretical and methodological research is necessary to finally settle this debate. Time will tell.

References

Albers S, Götz O. Messmodelle mit Konstrukten zweiter Ordnung in der betriebswirtschaftlichen Forschung. Betriebswirtschaft 2006;66(6):669–77.

Albers S, Hildebrandt L. Methodische Probleme bei der Erfolgsfaktorenforschung — Messfehler, formative versus reflektive Indikatoren und die Wahl des Strukturgleichungs-Modells. Zfbf 2006;58:2–33.

Anderson J, Gerbing D. Some methods for respecifying measurement models to obtain unidimensional construct measurement. J Mark Res 1982;19(4):453–60.

Arnett DB, Laverie DA, Meiers A. Developing parsimonious retailer equity indexes using partial least squares analysis: a method and applications. J Retail 2003;79:161–70.

Bagozzi RP. A prospectus for theory construction in marketing. J Mark 1984;48:11–29.

Bagozzi RP. Structural equation models in marketing research: basic principles. In: Bagozzi RP, editor. Principles of marketing research. Oxford: Blackwell; 1994. p. 317–85.

Bagozzi RP. On the meaning of formative measurement and how it differs from reflective measurement: comment on Howell, Breivik, and Wilcox. Psychol Methods 2007;12(2):229–37.

Bagozzi RP, Heatherton TF. A general approach to representing multifaceted personality constructs: application to state self-esteem. Struct Equ Modeling 1994;1(1):35–67.

Blalock HM. Causal inferences in nonexperimental research. Chapel Hill: University of North Carolina Press; 1964.

Blalock HM. Theory building and causal inferences. In: Blalock HM, Blalock A, editors. Methodology in social research. New York: McGraw-Hill; 1968. p. 155–98.

Blalock HM. Causal models involving unobserved variables in stimulus-response situations. In: Blalock HM, editor. Causal models in the social sciences. Chicago: Aldine; 1971. p. 335–47.

Bollen K. Multiple indicators: internal consistency or no necessary relationship? Qual Quant 1984;18:377–85.

Bollen K. Structural equations with latent variables. New York: Wiley; 1989.

Bollen K, Lennox R. Conventional wisdom on measurement: a structural equation perspective. Psychol Bull 1991;110(2):305–14.

Bollen K. Latent variables in psychology and the social sciences. Annu Rev Psychol 2002;53:605–34.

Bollen K. Interpretational confounding is due to misspecification, not to type of indicator: comment on Howell, Breivik, and Wilcox. Psychol Methods 2007;12(2):219–28.

Bollen K, Ting K. Confirmatory tetrad analysis. In: Marsden PV, editor. Sociological methodology. Washington, D.C.: American Sociological Association; 1993. p. 147–75.

Bollen K, Davis W. Causal indicator models: identification, estimation, and testing. Paper presented at the American Sociological Association Convention, Miami; 1994.

Bollen K, Ting K. A tetrad test for causal indicators. Psychol Methods 2000;5(1):3–22.

Brewer P. Operationalizing psychic distance: a revised approach. J Int Mark 2007;15(1):44–66.

Brock JK, Zhou Y. Organizational use of the internet — scale development and validation. Internet Res 2005;15(1):67–87.

Bruhn M, Georgi D, Hadwich K. Customer equity management as a formative second-order construct. Journal of Business Research 2008;61:1292–301 (this issue). doi:10.1016/j.jbusres.2008.01.016.

Cadogan JW, Souchon AL, Procter DB. The quality of market-oriented behaviors: formative index construction and validation. Journal of Business Research 2008;61:1263–77 (this issue). doi:10.1016/j.jbusres.2008.01.014.

Cantaluppi G. Some further remarks on parameter identification of structural equation models with both formative and reflexive relationships. In: A.A., V.V., editors. Studi in onore di Angelo Zanella. Milano: Vita e Pensiero; 2002a. p. 89–104.

Cantaluppi G. The problem of parameter identification of structural equation models with both formative and reflexive relationships: some theoretical results. Serie Edizioni Provvisorie 2002b; No. 108, Istituto di Statistica, Università Cattolica del S. Cuore, Milano:1–19.

Carmines EG, Zeller RA. Reliability and validity assessment. In: Sullivan JL, editor. Quantitative applications in the social sciences. Beverly Hills: Sage; 1979.

Churchill GA. A paradigm for developing better measures of marketing constructs. J Mark Res 1979;16:64–73.

Cohen P, Cohen J, Teresi J, Marchi M, Velez CN. Problems in the measurement of latent variables in structural equations causal models. Appl Psychol Meas 1990;14:183–96.

Collier JE, Bienstock CC. Measuring service quality in e-retailing. J Serv Res 2006;8(3):260–75.

Curtis RF, Jackson EF. Multiple indicators in survey research. Am J Sociol 1962;68:195–204.

DeVellis RF. Scale development — theories and applications. Applied social research methods series. 2nd edition. Sage Publications; 2003.

Diamantopoulos A. Viewpoint: export performance measurement: reflective versus formative indicators. Int Mark Rev 1999;16(6):444–57.

Diamantopoulos A. The C-OAR-SE procedure for scale development in marketing: a comment. Int J Res Mark 2005;22:1–9.

Diamantopoulos A. The error term in formative measurement models: interpretations and modelling implications. J Modell Manage 2006;1(1):7–17.

Diamantopoulos A, Winklhofer H. Index construction with formative indicators: an alternative to scale development. J Mark Res 2001;38(2):269–77.

Diamantopoulos A, Siguaw J. Formative versus reflective indicators in organizational measure development: a comparison and empirical illustration. Br J Manage 2006;17(4):263–82.

Dowling G. Journalists' evaluation of corporate reputations. Corp Reputation Rev 2004;7(2):196–205.

Duncan OD. Notes on social measurement: historical and critical. New York: Russell Sage; 1984.

Eberl M. Formative und reflektive Konstrukte und die Wahl des Strukturgleichungsverfahrens. Betriebswirtschaft 2006;66(6):651–68.

Eberl M, Schwaiger M. Corporate reputation: disentangling the effects on financial performance. Eur J Mark 2005;39:838–54.

Edwards JR. Multidimensional constructs in organizational behavior research: an integrative analytical framework. Organ Res Methods 2001;4(2):144–92.

Edwards JR, Bagozzi R. On the nature and direction of relationships between constructs and measures. Psychol Methods 2000;5(2):155–74.

Eggert A, Fassot G. Zur Verwendung formativer und reflektiver Indikatoren in Strukturgleichungsmodellen. Kaiserslaut Schr reihe Mark 2003;20:1–18.

Fassot G. Operationalisierung latenter Variablen in Strukturgleichungsmodellen: Eine Standortbestimmung. Zfbf 2006;58:67–88.

Fassot G, Eggert A. Zur Verwendung formativer und reflektiver Indikatoren in Strukturgleichungsmodellen: Bestandsaufnahme und Anwendungsempfehlung. In: Bliemel FW, Eggert A, Fassot G, Henseler J, editors. Handbuch PLS-Modellierung, Methode, Anwendung, Praxisbeispiele. Stuttgart: Schaeffer-Poeschel; 2005. p. 31–47.

Fayers PM, Hand DJ, Bjordal K, Groenvold M. Causal indicators in quality of life research. Qual Life Res 1997;6:393–406.

Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res 1981;18:39–50.

Fornell C, Bookstein FL. A comparative analysis of two structural equation models: LISREL and PLS applied to market data. In: Fornell C, editor. A second generation of multivariate analysis, vol. 1. New York: Praeger; 1982. p. 289–324.

Fornell C, Rhee BD, Yi Y. Direct regression, reverse regression, and covariance structure analysis. Mark Lett 1991;2(3):309–20.

Franke G, Preacher C, Rigdon E. The proportional structural effects of formative indicators. Journal of Business Research 2008;61:1229–37 (this issue). doi:10.1016/j.jbusres.2008.01.011.

Gardner D, Cummings L, Dunham R, Pierce J. Single-item versus multiple-item measurement scales: an empirical comparison. Educ Psychol Meas 1998;58(6):898–915.

Giere J, Wirtz B, Schilke O. Mehrdimensionale Konstrukte: Konzeptionelle Grundlagen und Möglichkeiten ihrer Analyse mithilfe von Strukturgleichungsmodellen. Betriebswirtschaft 2006;66(6):678–95.

Götz O, Liehr-Gobbers K. Analyse von Strukturgleichungsmodellen mit Hilfe der Partial-Least-Squares(PLS)-Methode. Betriebswirtschaft 2004;64(6):714–38.

Gudergan SP, Ringle CM, Wende S, Will A. Confirmatory tetrad analysis for evaluating the mode of measurement models in PLS path modeling. Journal of Business Research 2008;61:1238–49 (this issue).

Hair JF, Anderson RE, Tatham RL, Black WC. Multivariate data analysis. New Jersey: Prentice Hall; 1998.

Hauser RM. Disaggregating a social-psychological model of educational attainment. In: Goldberger AS, Duncan OD, editors. Structural equation models in the social sciences. San Diego: Academic Press; 1973. p. 255–89.

Hauser RM, Goldberger AS. The treatment of unobservable variables in path analysis. Sociol Method 1971:81–117.

Heise DR. Employing nominal variables, induced variables, and block variables in path analysis 1972;1:147–73.

Helm S. Designing a formative measure for corporate reputation. Corp Reputation Rev 2005;8(2):95–109.

Hitt MA, Gimeno J, Hoskisson RE. Current and future research methods in strategic management. Organ Res Methods 1998;1:6–44.

Homburg C, Klarmann M. Die Kausalanalyse in der empirischen betriebswirtschaftlichen Forschung – Problemfelder und Anwendungsempfehlungen. Betriebswirtschaft 2006;66(6):727–48.

Homburg C, Workman JP, Krohmer H. Marketing's influence within the firm. J Mark 1999;63(2):1–17.

Homburg C, Hoyer W, Fassnacht M. Service orientation of a retailer's business strategy: dimensions, antecedents, and performance outcomes. J Mark 2002;66(4):86–101.

Howell RD, Breivik E, Wilcox JB. Reconsidering formative measurement. Psychol Methods 2007;12(2):205–18.

Howell RD, Breivik E, Wilcox JB. Questions about formative measurement. Journal of Business Research 2008;61:1219–28 (this issue). doi:10.1016/j.jbusres.2008.01.010.

Hulland J. Use of partial least squares (PLS) in strategic management research: a review of four recent studies. Strateg Manage J 1999;20:195–204.

Hyman M, Ganesh G, McQuitty S. Augmenting the household influence construct. J Mark Theory Pract 2002;10(3):13–31.

Jarvis C, MacKenzie S, Podsakoff P. A critical review of construct indicators and measurement model misspecification in marketing and consumer research. J Consum Res 2003;30(2):199–218.

Johansson JK, Yip GS. Exploiting globalization potential: U.S. and Japanese strategies. Strateg Manage J 1994;15(8):579–601.

Johnson GJ, Bruner II GC, Kumar A. Interactivity and its facets revisited. J Advert 2006;35(4):35–52.

Jöreskog K, Goldberger A. Estimation of a model with multiple indicators and multiple causes of a single latent variable. J Am Stat Assoc 1975;70:631–9.

Judge TA, Bretz RD. Person-organization fit and the theory of work adjustment: implications for satisfaction, tenure, and career success. J Vocat Behav 1994;44(1):32–54.

Kennedy P. A guide to econometrics. 5th edition. Boston: MIT Press; 2003.

Land K. On estimation of path coefficients for unmeasured variables from correlations among observed variables. Soc Forces 1970;48:506–11.

Law K, Wong C. Multidimensional constructs in structural equation analysis: an illustration using the job perception and job satisfaction constructs. J Manage 1999;25(2):143–60.

Law KS, Wong CS, Mobley WH. Toward a taxonomy of multidimensional constructs. Acad Manage Rev 1998;23(4):741–55.

Lin CH, Sher PJ, Shih HY. Past progress and future directions in conceptualizing customer perceived value. Int J Serv Ind Manag 2005;16(4):318–36.

Long JS. Confirmatory factor analysis: a preface to LISREL. Bloomington, IN: Sage Publications; 1983.

Lord FM, Novick MR. Statistical theories of mental test scores. Reading, MA: Addison-Wesley; 1968.

MacCallum R, Browne M. The use of causal indicators in covariance structure models: some practical issues. Psychol Bull 1993;114(3):533–41.

MacKenzie SB. The danger of poor construct conceptualization. J Acad Mark Sci 2003;31(3):323–6.

MacKenzie S, Podsakoff P, Jarvis C. The problem of measurement model misspecification in behavioural and organizational research and some recommended solutions. J Appl Psychol 2005;90(4):710–30.

Namboodiri NK, Carter LF, Blalock HM. Applied multivariate analysis and experimental designs. New York: McGraw-Hill; 1975.

Netemeyer RG, Bearden WO, Sharma S. Scaling procedures. CA: Sage; 2003.

Nunnally JC. Psychometric theory. 2nd edition. New York: McGraw-Hill; 1978.

Nunnally JC, Bernstein IH. Psychometric theory. 3rd edition. New York: McGraw-Hill; 1994.

Pavlou P, Gefen D. Psychological contract violation in online marketplaces: antecedents, consequences, and moderating role. Inf Syst Res 2005;16(4):372–99.

Podsakoff PM, MacKenzie SB, Podsakoff NP, Lee JY. The mismeasure of man(agement) and its implications for leadership research. Leadersh Q 2003;14:615–56.

Podsakoff NP, Shen W, Podsakoff PM. The role of formative measurement models in strategic management research: review, critique, and implications for future research. Res Methodol Strat Manag 2006;3:197–252.

Reinartz W, Krafft M, Hoyer WD. The customer relationship management process: its measurement and impact on performance. J Mark Res 2004;41(3):293–305.

Rossiter J. The C-OAR-SE procedure for scale development in marketing. Int J Res Mark 2002;19:305–35.

Ruiz DM, Gremler DD, Washburn JH, Cepeda-Carrion G. Service value revisited: specifying a higher-order, formative measure. Journal of Business Research 2008;61:1278–91 (this issue). doi:10.1016/j.jbusres.2008.01.015.

Sànchez-Pérez M, Iniesta-Bonillo M. Consumers' felt commitment towards retailers: index development and validation. J Bus Psychol 2004;19(2):141–59.

Santosa PI, Wei KK, Chan HC. User involvement and user satisfaction with information-seeking activity. Eur J Inf Syst 2005;14(4):361–70.

Scholderer J, Balderjahn I. Was unterscheidet harte und weiche Strukturgleichungsmodelle nun wirklich? Mark ZFP 2006;28(1):57–70.

Spector PE. Summated rating scale construction: an introduction. Series: quantitative applications in the social sciences. CA: Sage Publications; 1992.

Temme D. Die Spezifikation und Identifikation formativer Messmodelle der Marketingforschung in Kovarianzstrukturanalysen. Mark ZFP 2006;28(3):183–209.

Ulaga W, Eggert A. Value-based differentiation in business relationships: gaining and sustaining key supplier status. J Mark 2006;70(1):119–36.

Venaik S, Midgley DF, Devinney TM. A new perspective on the integration-responsiveness pressures confronting multinational firms. Manag Int Rev 2004;44(Special Issue 2004/1):15–48.

Venaik S, Midgley DF, Devinney TM. Dual paths to performance: the impact of global pressures on MNC subsidiary conduct and performance. J Int Bus Stud 2005;36(6):655–75.

Wiley J. Reflections on formative measures: conceptualization and implication for use. ANZMAC Conference, Perth; December 5–7; 2005.

Williams LJ, Edwards JR, Vandenberg RJ. Recent advances in causal modeling methods for organizational and management research. J Manage 2003;29(6):903–36.

Williams LJ, Gavin MB, Hartman NS. Structural equation modeling methods in strategy research: applications and issues. In: Ketchen DJ, Bergh DD, editors. Research methodology in strategy and management. Boston, MA: Elsevier; 2004. p. 303–46.

Winklhofer H, Diamantopoulos A. Managerial evaluation of sales forecasting effectiveness: a MIMIC model approach. Int J Res Mark 2002;19:151–66.

Witt P, Rode V. Corporate brand building in start-ups. J Enterp Cult 2005;13(3):273–94.

Yi MY, Davis FD. Developing and validating an observational learning model of computer software training and skill acquisition. Inf Syst Res 2003;14(2):146–69.

Zeller RA, Carmines EG. Measurement in the social sciences — the link between theory and data. CA: Cambridge University Press; 1980.