
J PROD INNOV MANAG 1987;4:185-198

New Industrial Product Design and Evaluation Using Multiattribute Value Analysis

Ralph L. Keeney and Gary L. Lilien

Increasingly, the design of successful new industrial products is related to careful market assessment. Traditionally, managers and researchers have studied their markets by examining a small number of product attributes that are common across a range of informed respondents. In many ways, these techniques fail to meet the challenges posed by today's often heterogeneous, highly competitive, fast-moving industrial markets. Ralph Keeney and Gary Lilien introduce us to a technique they call multiattribute value analysis, both describing the procedure and describing a comprehensive example. Their approach introduces considerable flexibility to the process of market assessment. Technically, it permits the evaluation of many more attributes, value trade-offs, and synergies among attributes than do more traditional methods. In addition, it permits nonlinear evaluation functions that may be idiosyncratic to the individual. Practically, their approach, illustrated with a detailed case application, is shown to have significant potential for aiding product design decisions.

Address correspondence to Gary L. Lilien, Research Professor of Management Science, Pennsylvania State University, 113 Business Administration Building II, University Park, PA 16802.

The New Industrial Product Development Process

The long-term health of industrial and consumer product companies is tied to their ability to innovate successfully: to provide existing and new customers with a continuing stream of attractive new products and services. The firm that does not maintain a program of managed innovation can quickly find itself behind the competition. But the risks associated with innovation are significant; there are large investments involved and the likelihood of failure is high [14,17].

Hopkins [23] reports that well over half of all industrial firms find their success rates disappointing or unacceptable. Cooper [13] reports a failure rate of 41% for fully developed new industrial products introduced into the market, i.e., for those that successfully passed the development process. Booz, Allen and Hamilton [4] report a failure rate of 35% for new products. In an earlier study, Booz, Allen and Hamilton [3] report that about 70% of the resources spent on new products are allocated to products that are not successful in the market.

There are many reasons to believe that successful new product development will be even harder in the future than it has been in the past. Those reasons include the fragmentation of markets, greater market competition, shorter time periods to adjust new products, increasing social and governmental constraints, capital shortages, and shorter product life cycles [31].

© 1987 Elsevier Science Publishing Co., Inc., 52 Vanderbilt Ave., New York, NY 10017

BIOGRAPHICAL SKETCHES

Ralph L. Keeney is Professor of Systems Science at the University of Southern California. He has made contributions to both the theory and practice of decision analysis with a focus on problems involving multiple objectives. He was previously a professor in the OR Center at M.I.T. and was head of the decision analysis group at Woodward-Clyde Consultants. His published works include Decisions with Multiple Objectives (Wiley, 1976) with H. Raiffa, Siting Energy Facilities (Academic Press, 1980), and Decision Analysis Video Tapes and Study Guide (M.I.T. Center for Advanced Engineering Study, 1978) with A.W. Drake. Dr. Keeney is associate editor for OR practice for the journal Operations Research and serves as editor-at-large for Interfaces. His current interests include applying multiattribute value analysis in the new product development process.

Gary L. Lilien is Research Professor of Management Science in the College of Business Administration at The Pennsylvania State University. He has published more than 50 articles and three books: Market Planning for New Industrial Products (Wiley, 1980) with Jean-Marie Choffray; Marketing Decision Making: A Model Building Approach (Harper and Row, 1983) with Philip Kotler; and Marketing Mix Analysis with Lotus 1-2-3 (Scientific Press, 1986). He is Editor-in-Chief of Interfaces, a journal focusing on practical applications of management science, and is a member of the editorial board of the Journal of Marketing. Dr. Lilien is co-founder and Research Director of The Institute for the Study of Business Markets (ISBM), an organization aimed at fostering research on nonconsumer markets. Research in the new business products area is a major research interest of Dr. Lilien and is a major research priority for the ISBM.

In preparing this paper, Keeney was partially supported by the Office of Naval Research under Contract N00014-84-K-0332, titled "Value-Focused Thinking and the Study of Values." Additional support was provided by The Pennsylvania State University's Institute for the Study of Business Markets.

A number of studies on new product failures [1,3-5,10-13,16,18,29,32] have found that although there are often many causes, a predominant reason that products fail is lack of a clear understanding of market needs. For example, Calantone and Cooper [6], in a study of new-industrial-product failures, found that the largest category, 28%, included products that met a nonexistent need, while only 15% of the product failures were bad products, that is, did not do what they were supposed to do. Cooper and Kleinschmidt [15] point out that a major reason for this lack of understanding of market needs results from inadequate market studies. They note that

Preliminary Market Assessment [was] very weakly rated [as well done] overall, yet strongly correlated with all four measures of project performance. Detailed Market Study/Market Research [was] omitted altogether in 74.6 percent of the projects yet [was] significantly correlated with all four measures of project performance. (p. 84)

Thus, there is strong reason to believe that an organized, analytic approach to new industrial product development has the potential to streamline the process and reduce new product failure rates.

In the next section we outline several popular analytic evaluation approaches for new products and suggest that there is an important class of problems for which these approaches are inappropriate. The section after that introduces multiattribute value analysis for this class of problems. We develop that approach, illustrate its use in a real application, and evaluate its potential for new industrial product evaluation.

Analytic Approaches for the New Product Development Process

Urban and Hauser [34] discuss several methods available to aid in new product design and positioning. Most of those methods link product attributes (or dimensions, i.e., physical product features such as speed and efficiency, as well as psychological aspects of the product such as perceived quality, service, and vendor reliability) to a preference measure through some functional form (usually linear). Three commonly used procedures are expectancy value methods, preference regression methods, and conjoint analysis. Each of these procedures relies on the product designers or analysts, rather than potential customers, to specify the attributes. While the designers are often guided by prior input from customers, their decisions may be somewhat arbitrary.

Expectancy Value Methods

These methods [37] ask respondents to score each of a set of products or product concepts (on a 1-7 scale, say) on each product attribute. The respondent also provides an importance weight for each of these attributes. The value (or preference) the individual has for that product is then calculated as the sum of the attribute scores multiplied by the importance weights of the corresponding attributes.
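The weighted-sum calculation these methods describe can be sketched in a few lines; the attribute names, scores, and weights below are invented for illustration and are not from the article.

```python
# Expectancy value sketch: one respondent's 1-7 attribute scores for a
# product concept, combined with that respondent's importance weights.
# All names and numbers are hypothetical.
scores = {
    "speed": 6,
    "efficiency": 4,
    "vendor_reliability": 7,
}
weights = {
    "speed": 0.5,
    "efficiency": 0.3,
    "vendor_reliability": 0.2,
}

# Preference = sum over attributes of (importance weight x attribute score).
preference = sum(weights[a] * scores[a] for a in scores)
print(preference)  # 5.6 for these illustrative numbers
```

Note that if two attributes tap the same underlying dimension, both terms enter the sum, which is exactly the double-counting problem discussed below.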


Expectancy value methods are perhaps the most widely used of all methods for new product screening. The method is easy to understand, simple to apply, and inexpensive as well. However, if several of the attributes tap the same underlying dimension, the method will overweight (double count) the importance of those attributes. In addition, Beckwith and Lehmann [2] show that the method can lead to the halo effect, in which a favorable product is inappropriately rated favorably along all scales.

Preference Regression

The preference regression model looks very much like the expectancy value model but is developed differently. In the expectancy value method, respondents provide attribute weights and the model is used to predict (infer) overall product preference. In the preference regression method, preference judgments obtained from individuals are used as dependent variables in a regression equation with attribute ratings as independent variables. Importance weights are then inferred as regression coefficients. Other differences between preference regression and expectancy value methods are (a) in preference regression, importance weights are usually assumed homogeneous across a group of respondents, and (b) groups of attributes (combined through a factor analysis, perhaps) are often used as independent variables to reduce problems of multicollinearity.

Some advantages of the preference regression approach are that it is easy to use (requiring only a standard regression package), it is frequently more accurate in predicting preferences than the expectancy value approach, and the inferred importance weights can be used to guide product-design decisions.

Its limitations include the fact that a linear model form is most frequently used. Although implementations such as PREFMAP [7] have options to deal with thresholds, nonlinear effects, and the like, Urban and Hauser [34] contend that such effects are not handled in an entirely satisfactory manner. In addition, importance weights are normally population averages and provide little information about individual-level differences in attribute importance.
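As a sketch of how the regression infers weights, the toy example below fits stated preferences to two attribute ratings by ordinary least squares, solving the normal equations directly; the data are synthetic and noise-free, so the fitted coefficients simply recover the weights that generated them.

```python
# Preference regression sketch: stated overall preferences (dependent
# variable) are regressed on attribute ratings (independent variables);
# the coefficients are the inferred importance weights. Synthetic data.
ratings = [(6, 3), (2, 7), (5, 5), (7, 2), (3, 6)]   # (speed, reliability) ratings
true_w = (0.7, 0.3)                                   # weights to recover
prefs = [true_w[0] * a + true_w[1] * b for a, b in ratings]  # stated preferences

# Normal equations for y = w1*a + w2*b (no intercept): a 2x2 linear system.
saa = sum(a * a for a, _ in ratings)
sbb = sum(b * b for _, b in ratings)
sab = sum(a * b for a, b in ratings)
say = sum(a * y for (a, _), y in zip(ratings, prefs))
sby = sum(b * y for (_, b), y in zip(ratings, prefs))

det = saa * sbb - sab * sab
w1 = (say * sbb - sby * sab) / det   # inferred weight for speed
w2 = (saa * sby - sab * say) / det   # inferred weight for reliability
print(round(w1, 3), round(w2, 3))    # recovers 0.7 and 0.3
```

With real respondents the preferences contain noise and the recovered weights are population-level estimates, which is the individual-differences limitation noted above.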

Conjoint Analysis

Conjoint analysis is an approach for predicting respondent preferences for a product defined by a set of attributes at specific levels (price = $1400, horsepower rating = 28, etc.). The respondent usually rank-orders total product profiles, from most preferred to least preferred. These product profiles are combinations of product attributes, set at discrete levels. Four attributes set at four levels would lead to 4^4 (i.e., 256) combinations for rank ordering, so fractional factorial procedures [21] and trade-off analysis [24] are used to keep the respondent task from becoming too unwieldy. The conjoint analysis procedure then determines importance weights (known as "part-worths") for each of the attributes.

Conjoint analysis is most useful in evaluating design tradeoffs when a small number of important, discrete alternatives are being considered. Analysis is normally done at the individual level, and market response is estimated by aggregating individual responses. Cattin and Wittink [8] review commercial uses of conjoint analysis. Green, Carroll, and Goldberg [20] discuss the POSSE system, a decision support system for conjoint analysis studies.

Conjoint analysis has several limitations. First, no statistical inference procedures exist, a serious drawback for fitting a model form. Second, the procedure assumes that the appropriate experimental factors (the product attributes) are known in advance, are small in number, and are constant across respondents. Finally, the approach assumes that either the rank-ordered or paired-comparison data about individual preferences provide reliable information about likely purchase actions (a limitation of all the procedures, in fact).
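The combinatorial burden is easy to make concrete: enumerating full profiles for four hypothetical attributes at four levels each yields the 4^4 = 256 profiles mentioned above, far too many for a respondent to rank, which is why fractional factorial designs are used instead. The attribute names and levels here are invented.

```python
from itertools import product

# Full-factorial conjoint profiles: every combination of attribute levels.
# Four attributes at four levels each -> 4**4 = 256 profiles.
# (Hypothetical attributes and levels.)
levels = {
    "price": [1200, 1400, 1600, 1800],
    "horsepower": [22, 25, 28, 31],
    "warranty_years": [1, 2, 3, 4],
    "delivery_weeks": [2, 4, 6, 8],
}
profiles = list(product(*levels.values()))
print(len(profiles))  # 256 full profiles to rank-order
```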

The three procedures outlined here are the ones most commonly used in practice. But they are most frequently used for consumer products or for industrial products with a few important design dimensions that are identical across respondents.

There is an important class of products for which these approaches are not well suited. Many industrial product markets for high technology, and in particular capital equipment, are characterized by heterogeneous users (users with different needs), a large number of product attributes, and a small number of high-volume buying firms. The multiattribute value function approach, developed and illustrated below, is appropriate for markets of this type. Indeed, in many such markets, there will be a small number of highly influential buying firms who are consistent early adopters of new technologies. Von Hippel [35] refers to these customers as "lead users." He points out that in such markets, most potential users will not have the real-world experience to problem solve and provide accurate data to inquiring marketing researchers (p. 79). He suggests that a key to new product success in such markets is to focus market research on the (usually) small number of lead users.

The Multiattribute Value Analysis Approach

The multiattribute value analysis approach has three major steps. The first is specification of the product attributes for a given customer and prospective need. The second is identification of an appropriate evaluation model (i.e., the form of a multiattribute value function). And the third is the assessment of value judgments to calibrate the value function for that respondent.

Specification of Attributes

The specification of attributes provides a comprehensive list of criteria for evaluating prospective products. The discussion between the analyst and the prospective customer is used to develop the criteria. Particular emphasis must be placed on clarifying softer, hard-to-measure criteria such as supplier problems, serviceability, or upgradeability, as these important criteria are often neglected with other approaches. The intent is to elicit a complete set of significant criteria.

Next, each criterion must be individually appraised to determine if it is fundamental to the product (an end) or a means to something fundamental. For example, a criterion concerning redundancy may be a means to ensure reliability and low cost (ends). As such, it is inappropriate to include both redundancy (means) and reliability (ends) criteria. The process of identifying fundamental criteria should indicate which fundamental criteria are parts of which others and result in a hierarchical structure of the fundamental criteria. (The next section provides an example.)

For each criterion at the lowest level of the hierarchy, a measure (continuous or discrete scale) must be identified or constructed to indicate the degree to which products meet the corresponding criterion. Each measure has implicit value judgments, so it is desirable to make the measures customer-specific. Procedures to identify measures are discussed in Keeney [26].

Identification of a General Evaluation Model

The appropriate functional form of an evaluation model depends on how individuals value both the levels of the product attributes and interactions between levels of these attributes. Different independence concepts are used to describe these value relationships. Various rather powerful theoretical results imply a specific functional form given sets of these independence properties. A major result concerning the independence of attributes is summarized in the Appendix. Such results are most appropriate for fundamental criteria (i.e., criteria that exclude means criteria as discussed above). If the tests described in the Appendix indicate significant dependencies, then it is usually appropriate to return to the first step, specification of attributes, and attempt to better define or restructure the attributes.

Assessment of Value Judgments

Assessment procedures first identify the independence conditions that are appropriate for a prospective customer, which will indicate the form of the value function. Then they estimate component value functions and scaling factors. The procedure outlined here is illustrated in the following case study.

Preferential independence (see Appendix) is verified by finding pairs of products that the customer finds indifferent and that differ in terms of two criteria only. By varying the other criteria, we investigate whether this indifference is affected by the levels of the other criteria. If indifference does hold, preferential independence is confirmed. For weak-difference independence, we ask the customer for a level of one criterion that is halfway in value between the value of two different levels of that criterion. If this midvalue level remains the same regardless of what


levels are chosen for the other criteria, weak-difference independence holds.

To determine the value function, v, the responses from questions to appraise weak-difference independence are used. If, for attribute X, x' is the midvalue level between the least desirable level x0 and the most desirable level x*, then we set v(x') = 0.5 (since v(x0) = 0 and v(x*) = 1 are assigned to scale v from 0 to 1). Other points between x0 and x' and between x' and x* can be similarly assessed to yield more points on v. Then a curve can be fit to the points. From the pairs of indifferent products identified in verifying preferential independence (see Appendix), at least n equations must be generated to estimate the attribute weights.

Details on the independence concepts and assessment procedures are found in Keeney and Raiffa [28], Keeney [25], and von Winterfeldt and Edwards [36].

In essence, then, in a particular application: (a) careful questioning of the prospective purchaser identifies a set of attributes and ascertains whether they are ends and not means; (b) a series of indifference questions is used to determine and select an appropriate value function form; and (c) multiple variations of those questions are then used to calibrate the value functions. A case illustration follows next.

Case Application: Capricorn Corporation and the OR9

Background

Over the last decade the integrated circuit industry has gone through a cycle of birth and explosive growth and, currently, has moved into a phase of severe competition. Few industries have evolved so quickly on the one hand and have found such severe competition on the other. The risks associated with customer and market misassessment are as significant here as in any other industry we have examined.

In early 1984 we were approached by the Capricorn Corporation (fictitious name), a well-known Silicon Valley firm with an established reputation for producing high quality manufacturing, test, and control equipment. The firm had developed a technical breakthrough that, they felt, would give them a significant cost advantage in manufacturing test equipment for very large scale integrated circuits (VLSIC). The questions Capricorn asked were: how would prospective customers evaluate a Capricorn entry in this (highly competitive) market, and how should Capricorn's product be designed?

The analysis described here was part of a larger study aimed at identifying likely customers for the product, the decision process within firms for purchasing such test equipment, the likely future needs of such customers, and the likely competitors for the product within each customer organization.

Identifying Attributes

Following a review of the technical literature and several meetings with technical and marketing staff at Capricorn, 17 decision criteria for VLSIC tester evaluations were identified (Table 1, Column 1). By analyzing means-ends relationships to eliminate redundancies, these 17 criteria were identified from a much larger list of 57 main criteria, many of which had subcriteria [22]. The decision criteria fell into four categories: technical, economic, software, and vendor support. Each criterion required an associated measure to describe characteristics of different products in terms of that criterion. These measures were either natural scales, such as the number of picoseconds (psec) for timing accuracy, or a constructed scale, such as yes/no for the availability of data analysis software (Table 1, Column 2).

Because the uses of the tester vary by the types of VLSIC devices tested by the customer, the range of desirable levels for performance criteria also varies by customer. For example, a customer may indicate that the minimum acceptable level for pin capacity is 64 while the maximum level (given current plans) is 256.

Then the pin capacity dimension for this customer would only be evaluated between 64 and 256.

Technical criteria. There are six technical criteria for evaluating the testers, and each of these has a readily available natural measure. For example, pin capacity is measured by the number of pins and vector depth is measured by the memory size in megabits.

Economic criteria. A key criterion in evaluating integrated circuit testers is price. However,


Table 1. Ratings and Weights of Decision Criteria for Acorn Tester Selection

(For each decision criterion the table lists its measure; the range of the measure, given as minimum acceptable level, midvalue points, and maximum desirable level; and the rank order and relative weight of the criterion within its decision criterion category.)

Technical
  X1  pin capacity: quantity
  X2  vector depth: memory size (megabits)
  X3  data rate: MHz
  X4  timing accuracy: picoseconds
  X5  pin capacitance: picofarads
  X6  programmable measurement units: number

Economic
  X7  price: total cost
  X8  uptime: percent
  X9  delivery time: months

Software
  X10 software translator: percent conversion
  X11 networking, communications: yes/no
  X12 networking, open: yes/no
  X13 development time: mean time (months)
  X14 data analysis software: yes/no

Vendor Support
  X15 vendor service: time until system works (hours)
  X16 vendor performance: time until response (hours)
  X17 customer applications: yes/no

an important characteristic of the model is the ability of the potential purchaser to provide the appropriate price measure. Some prospective customers use the net purchase price. Others prefer to use the cost per unit tested or the total cost after tax implications. Two other related economic criteria are uptime and delivery time.

Software criteria. A key software criterion is whether a universal translator exists, measured by a yes/no scale. (A universal translator takes VLSIC testing software developed for another manufacturer's tester and translates it for use on Capricorn's tester.) Other software criteria include several forms of networking capabilities and software development time.


Vendor support criteria. Vendor support criteria include service, performance, and application support. Vendor service is measured by the time necessary to get the equipment running after it has gone down. Vendor performance is measured by the time until vendor personnel arrive at the customer's facility after such assistance has been requested. The criterion of vendor capability to assist in applications is measured by a simple yes/no scale.

An Evaluation Function for Mr. Smith of Acorn Industries

Acorn Industries is one of the most important potential customers for Capricorn, as it is one of the five U.S. industry leaders in the manufacturing of VLSICs. After careful preliminary evaluation of the tester acquisition process, Michael Smith, the Manager of Test Engineering, was intensively interviewed to determine his value function. Not only was Mr. Smith identified by all other buying center members at Acorn as most influential in the buying process for test equipment, he was so identified in numerous interviews outside the firm as well. He seemed to serve as a key referent for many firms in the region. In addition, he maintained a detailed matrix of test equipment and evaluations of that equipment along some 25 to 30 dimensions of his own. Thus, Acorn Industries in general and Mr. Smith in particular met the lead user criterion discussed by Von Hippel [35].

Form of Smith's Evaluation Function

The independence assumptions necessary to use the results in the Appendix were verified with Mr. Smith. Preliminary questioning suggested that the additive form (Eq. (A.2)) would be appropriate, and, as discussed in the next section, checks of this assumption confirmed that impression. Hence, the particular evaluation function chosen was

    v(x1, ..., x17) = kT vT + kE vE + kS vS + kV vV,    (1)

where

    vT = Σ_{i∈T} ki vi(xi),    (2a)
    vE = Σ_{i∈E} ki vi(xi),    (2b)
    vS = Σ_{i∈S} ki vi(xi),    (2c)
    vV = Σ_{i∈V} ki vi(xi),    (2d)

and where v is a measurable value function, scaled from 0 to 100, for evaluating testers represented by (x1, ..., x17); T, E, S, and V stand for the technical, economic, software, and vendor support criteria, respectively; vT, vE, vS, and vV are component measurable value functions for the four respective decision criteria categories; kT, kE, kS, and kV are the relative importance weights of the four decision criteria categories given the ranges indicated by the prospective customer; vi is the component measurable value function for decision criterion Xi, scaled from zero to one; xi is a specific level of Xi; and ki is the relative importance weight of decision criterion Xi within its decision criterion category. To use this model, we need to assess 17 component value functions (the vi), 17 importance weights for the criteria (the ki), and four importance weights on the decision criterion categories (kT, kE, kS, kV).

Calibration of the Evaluation Function

The 17 decision criteria were reviewed with Mr. Smith. The measure he chose for criterion X7 was total cost, including tester cost plus initial training and spares. He also changed decision criterion X10 to the percent conversion of the software translator. To determine the range over which criteria could vary, Mr. Smith was asked to specify a minimum acceptable level and a maximum desirable level for each criterion. These results are displayed in Table 1, Columns 3 and 5.

Importance weights. Before assessing information to directly determine the importance weights, we asked easier questions to rank-order these weights. For instance, with respect to the


technical decision criteria, we begin by asking the following: "Suppose all the technical criteria were set at their respective minimum acceptable levels. If you could raise only one of these criteria from that level to its maximum desirable level, which criterion would you move?" His response was that timing accuracy would be moved from ±500 picoseconds to ±250 picoseconds. This implies that k4 in Eq. (2a) is the largest of the importance weights for the technical decision criteria evaluation function, vT.

The next question concerned which of the technical criteria would be the second most important to move from its minimum acceptable level to its maximum desirable level. His response this time was that vector depth would move from 1 to 4 megabits. This process continued and was repeated for the economic, software, and vendor support decision criteria (Table 1, Column 6).

[Figure 1. Assessed Value Tradeoffs. The panels plot assessed indifference points between pairs of criteria, e.g., cost ($ millions) versus timing accuracy (psec) in panel (a) and cost versus software development time (months) in panel (d); the lettered points A through F mark the consequences discussed in the text.]

To specify the importance weights numerically, value tradeoffs among criteria must be considered. We chose the criterion ranked first within its decision criterion category and developed indifference pairs of criteria, as illustrated in Figure 1a. This exhibit shows that, with all other criteria held fixed, the respondent is indifferent between a cost of $1.5 million for a tester with a timing accuracy of ±500 picoseconds and a tester that costs $2.5 million and has a timing accuracy of ±400 picoseconds. First, we asked which of points A or B in Figure 1a was preferred. Mr. Smith stated that B was preferred. Next, we asked which consequence was preferred between A and C in Figure 1a. Here, his response was A. This indicated that an increase in timing accuracy



of 50 picoseconds, from ±500 to ±450, was not worth to him the additional cost of one million dollars per tester. We then found that Mr. Smith was indifferent between consequences A and D. The other pairs of indifference points indicated in Figure 1 were assessed in a similar manner. These assessments also directly verified several of the preferential independence assumptions necessary to use the evaluation model in Eq. (1).

Since each of the value tradeoffs in Figure 1 shows how much the respondent would pay for more performance along a given criterion, they provide a measure of decision criteria importance. An advantage of this procedure is that it directly addresses value tradeoffs. The procedure, however, is somewhat complex. A simpler procedure that does not directly address value tradeoffs is to allocate 100 points among the decision criteria of concern.

Within each decision criterion category, Mr. Smith was asked to allocate 100 points to represent the relative values of moving the criteria from their minimum acceptable levels to their maximum desirable levels. Among technical decision criteria, of the total of 100 points, he assigned 35 to timing accuracy, 20 points to vector depth, etc. The other associated weights indicated in Table 1, Column 7, were assigned to sum to 100. For ease of implementation, relative weights were first assigned and then normalized to sum to 100. Also, as a consistency check and for guidance in assigning weights, the ranking of importance weights within decision categories was available.

Component value functions. To evaluate alternative testers, we needed the relative desirability of any given level of any criterion within the range specified in Table 1. To do this, we asked questions such as (for pin capacity, which ranged from 144 to 256 pins), "What number of pins, call this y, is such that the increase in desirability of going from 144 pins to y is equal to the increase in desirability of going from y pins to 256?" Mr. Smith's response in this case was 190, indicating that the relative desirability assigned to 190 must be midway between that assigned to 144 and to 256 pins. Since this midvalue point did not depend on the level of the other criteria, pin capacity was weak-difference independent of the other criteria. This is another required assumption for evaluation model (1).

We must scale the component value functions, vi, from 0 to 1. Thus, for pin capacity, v1(144) = 0, v1(190) = 0.5, and v1(256) = 1. Fitting this information to an exponential curve yields

    v1(x1) = 1.929[1 - exp(0.0065(144 - x1))].

An exponential, linear, or other functional form can be used to summarize these judgments. By using the midvalue assessments for additional pairs of criterion levels, additional data points are generated to help select and fit an appropriate curve.

The other assessed midvalue points are presented in Table 1, Column 4. In the case where the measure is a yes/no scale, we simply assign 0 to the no level and 1 to the yes level of the value function in each case. The resulting component value functions are listed in Table 2.

Synergies. To determine whether there were any synergistic effects in combinations of different criteria, we posed questions illustrated by Figure 1d, referring to pairs of costs and mean development times for software. We assigned a relative desirability of zero to the least desirable combination (i.e., a cost of $2.5 million and four months development time) and a relative desirability of 100 to the best pair (a cost of $1.5 million and 2 months development time). We then determined the relative value assigned to the other two corners, indicated by E and F in the figure. The relative weight Smith assigned to E was 40 and that assigned to F was 60. This result was an indication that the additive value function was appropriate, since these summed to 100 and were not otherwise constrained to do so. Thus, the appropriate specific form of the evaluation model (A.1) was the additive case (A.2). Similar comparisons were performed with other dimension pairs, with similar results.

The overall evaluation function. Using the information above, we calculated the parameters for the value function (1) and (2). The decision criteria group weights are

    kT = 0.52, kE = 0.14, kS = 0.32, kV = 0.02.    (3)

These weights were assessed from the type of value tradeoffs presented in Figure 1 and normalized to sum to 1.0. Thus, the overall value function ranges from 0 to 100, since we assigned the


    Table 2. Tester Evaluations and Evaluation Function

    Decision criteria (with their component value functions and the evaluations of the OR9000, 5941, and Sentry 50 testers; the numeric entries are illegible in this reproduction):

    Technical: X1 pin capacity; X2 vector depth; X3 data rate; X4 timing accuracy; X5 pin capacitance; X6 programmable measurement units; Technical Evaluation.
    Economic: X7 price; X8 uptime; X9 delivery time; Economic Evaluation.
    Software: X10 software translator; X11 networking: communications; X12 networking: open; X13 development time; X14 data analysis software; Software Evaluation.
    Vendor Support: X15 vendor service; X16 vendor performance; X17 customer applications; Vendor Support Evaluation.
    Overall Evaluation.

    component weights to equal 1 within each of the decision criteria categories. The weights in Eq. (3), plus the weights in Table 1 and the component evaluation functions in Table 2, provide all the parts of the evaluation function.

    Summary of judgments required. Let us consider the number and types of judgments that would be required from each respondent to identify and quantify an appropriate evaluation model. If there are N criteria identified for a particular problem, then approximately 3N judgments (depending on the evaluation form) are needed. Of these, N judgments are needed for the verification of N independence conditions to imply a functional form for the model. Another N judgments provide the importance weights, and a third N provide the component value functions.
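    The component value function assessment summarized above can be sketched in code. This is an illustrative implementation, not the authors' software: the exponential form is one of the candidate curves mentioned in the article, and only the pin-capacity range (144 to 256) and Mr. Smith's midvalue response (190) come from the text; the fitting routine itself is an assumption.

```python
import math

# Exponential single-attribute value function, scaled so v(lo) = 0 and
# v(hi) = 1. The curvature parameter c is fitted so the respondent's
# midvalue point receives a value of exactly 0.5.

def exp_value(x, lo, hi, c):
    """Value of level x on [lo, hi]; c > 0 concave, c < 0 convex, c -> 0 linear."""
    if abs(c) < 1e-12:
        return (x - lo) / (hi - lo)          # linear limit
    return (1.0 - math.exp(-c * (x - lo))) / (1.0 - math.exp(-c * (hi - lo)))

def fit_curvature(lo, hi, midvalue, tol=1e-10):
    """Bisect for c such that exp_value(midvalue) = 0.5.

    A midvalue below the arithmetic midpoint implies concavity (c > 0);
    above it, convexity (c < 0). The brackets below suit mild curvature."""
    f = lambda c: exp_value(midvalue, lo, hi, c) - 0.5
    a, b = (1e-9, 1.0) if midvalue < (lo + hi) / 2.0 else (-1.0, -1e-9)
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Pin capacity: range 144-256, midvalue response 190 (from the article).
c = fit_curvature(144, 256, 190)
# exp_value(190, 144, 256, c) is 0.5 by construction; 190 < 200 implies c > 0.
```

    Additional midvalue assessments for other pairs of levels would give further points for checking whether the exponential form fits, as the article suggests.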

    Since the verification of independence conditions also provides information to specify either importance weights or component value functions, fewer than 3N judgments are absolutely necessary. However, a few additional judgments are useful as consistency checks. Also, if violations of independence conditions occur, either additional importance weights or component value functions that are dependent on criteria levels of other attributes are needed. The number of additional judgments increases roughly as the square of the number of violations of independence conditions. If there are more than about three independence violations, the value model has been constructed with a poor set of overlapping criteria with complex value impacts. The criteria set should be reconsidered and modified in this case. With two or fewer independence violations, assessing value judgments to quantify synergies is both reasonable to do and provides insights.

    Using the Model

    The model developed above was used to determine the likely response Acorn Industries would have to the OR9000, a new tester Capricorn currently had in prototype form. Acorn Industries currently is interested in two preferred testers: the Sentry 50 and the 5941. Table 2 presents a description of the three testers in terms of the 17 decision criteria, and their evaluation using the value function developed above.

    Base case evaluation. The decision criteria were scaled on a 0.0 to 1.0 scale, with 1.0 corresponding to the most desirable level and 0.0 corresponding to the least desirable level of the scales listed in Table 1. Some evaluations in Table 2 are less than 0.0, since they are lower than the minimum acceptable level. This occurred because the range was set for testers to be used in research as well as production engineering, while the testers evaluated were basically production models. Thus, none of these testers is truly acceptable to Acorn. This evaluation shows which of the three testers would be least unacceptable, and indicates the characteristics of preferable testers.

    Using base case evaluations, the Sentry 50 is slightly preferred to the OR9000, and both are much preferred to the 5941. This follows from the overall evaluations of 15.4 for the Sentry 50, 13.3 for the OR9000, and -18.0 for the 5941.

    In terms of decision categories, the OR9000 dominates the 5941; i.e., the OR9000 is better than the 5941 in terms of the technical, economic, software, and vendor support categories. The Sentry 50 does not dominate the 5941.

    The OR9000 is strongly preferred to the Sentry 50 in the economic, software, and vendor support categories. The Sentry 50 is strongly preferred to the OR9000 in the technical criteria category. This preference is basically due to pin capacity and timing accuracy, the latter being the major factor. We ran several sensitivity analyses to show how the model can be used.

    Sensitivity analysis 1: change in description. Consider the effect of a decrease in the data rate for the OR9000 from 50 to 20 MHz. This reduces the technical evaluation of the OR9000 from 4.9 to -56.9 and reduces the overall evaluation from 13.3 to 12.3. How much is this difference worth? If the price of the OR9000 dropped from $1.4 million with a 50 MHz data rate to $1.26 million with a 20 MHz data rate, the evaluations would be equal at 13.3. This indicates that this change in data rate is worth $140,000. Similar assessments can be performed for any of the criteria.

    Sensitivity analysis 2: indifferent equivalent costs. How much more is the Sentry 50 worth than the OR9000? Note that the difference in overall evaluation scores is 15.4 versus 13.3, or 2.1 units. A $1 million price difference translates into kE x k7 valuation units, or 0.14 x 50 = 7 units. The 2.1-unit difference is therefore worth 2.1/7 x $1 million, or $300,000. Thus the Sentry 50 is worth $300,000 more than the OR9000.

    Sensitivity analysis 3: weights and value tradeoffs. The evaluations of the OR9000 and the Sentry 50 are close, so customer choice is sensitive to the weighting factors. The Sentry 50 is superior to the OR9000 only in the technical criteria category. Hence, we began lowering the weight on the technical criteria from 0.52 and proportionally increasing the weights on the other criteria categories to maintain a sum of 1.0. Once the technical weight drops to 0.505, the two testers are valued equally. If the weight on the technical criteria drops below 0.505, then the OR9000 is preferred to the Sentry 50. This indicates that the value tradeoffs are likely crucial to choice for this particular problem. Hence, additional market research should perhaps focus on these value tradeoffs.

    Sensitivity analysis 4: design upgrade analysis. Consider a Capricorn machine that is the equal of the Sentry 50 in technical performance. Such a machine would have a value score of 47.3 vs. the OR9000's current score of 13.3 and the Sentry 50's 15.4. This analysis points out that a substantial improvement in customer value would result from a technical upgrade of the OR9000.

    Impact at Capricorn

    Largely on the basis of this analysis, which showed the high importance of the technical criteria relative to the economic criteria, Capricorn abandoned market plans for the OR9000. This evaluation model, calibrated here and at other customer locations, helped focus attention on the


    market potential resulting from improvements in several technical dimensions. Research into producing a new machine with significant enhancements along key technical dimensions is currently in progress.
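    The hierarchical evaluation and the dollar-equivalence logic of sensitivity analysis 2 can be sketched as follows. The group weights are those of Eq. (3), and the within-group price weight of 50 appears in the text; the component weights and scores in the usage example are hypothetical stand-ins, not the actual Table 1 and Table 2 entries.

```python
# Hierarchical additive value model: group weights (Eq. (3)) times
# within-group weighted sums of component values on a 0-1 scale.

GROUP_WEIGHTS = {"technical": 0.52, "economic": 0.14,
                 "software": 0.32, "vendor": 0.02}    # Eq. (3)

def overall_value(comp_weights, comp_scores):
    """Overall 0-100 score: within-group weights sum to 100, scores in [0, 1]."""
    return sum(
        GROUP_WEIGHTS[group] * sum(w * comp_scores[group][c]
                                   for c, w in weights.items())
        for group, weights in comp_weights.items()
    )

def price_equivalent(delta_units, k_econ=0.14, price_weight=50,
                     price_range_dollars=1_000_000):
    """Dollar worth of an overall-score difference, via the price criterion.

    A $1 million price swing spans k_econ * price_weight = 7 overall units,
    as in sensitivity analysis 2."""
    return delta_units / (k_econ * price_weight) * price_range_dollars

# Hypothetical within-group weights and scores (not the Table 1/2 numbers):
weights = {"economic": {"price": 50, "uptime": 30, "delivery": 20}}
scores = {"economic": {"price": 1.0, "uptime": 1.0, "delivery": 1.0}}
# overall_value(weights, scores) contributes 0.14 * 100 = 14 overall units.
# price_equivalent(2.1) reproduces the $300,000 Sentry-vs-OR9000 figure.
```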

    Evaluation and Use

    Our experience with this procedure for evaluating likely customer response to a new industrial product has been quite positive. Where the decision criteria (the key product attributes) are customer-specific, where there are many such attributes, and where synergies, value tradeoffs, thresholds, and the like are important, no commonly used market evaluation procedure is satisfactory. Specifically, this approach has several benefits over existing methods. First, it permits the evaluation of many more dimensions than is possible with conjoint analysis, and permits interactions/synergies at any level as well. Second, it naturally allows for idiosyncratic, respondent-specified evaluation dimensions. Third, the model structures the evaluation process in much the same way that a careful industrial buyer does, and respondents take the evaluation task seriously because of that. Fourth, the procedure is hierarchical, bundling dimensions for value tradeoffs at higher levels and comparing individual criteria at lower levels. This eliminates much of the redundancy in lists of criteria by focusing on fundamental (i.e., ends) criteria and eliminating means.

    On the other hand, this approach is not amenable to a large-sample market research study. It requires heavy involvement of the respondent (most interviews take about half a day) and a well-trained, perceptive interviewer, since the interviewing procedure combines the specification and testing of functional forms with the calibration task. To perform this analysis in 10 to 20 firms would cost about the same as pretest market models for consumer packaged goods ($25,000-$75,000 [33]). This is somewhat more than most industrial marketers are used to paying for market analysis, but it provides considerably more specific insight.

    Many of the limitations of this approach derive from its more complex nature relative to the procedures described in the second section of this article. The procedure requires skill to administer. The interviewer must be able to explain and administer trade-off questions and be sensitive to the level of knowledge (and the fatigue level) of the respondent. As a corollary, the respondent must have sufficient product knowledge so that attribute trade-off questions are appropriate.

    In addition, there are several possible biases involved in the approach. As most organizational buying involves multiple individuals [30], it is important that the key respondent's answers reflect the values of the firm. It is conceivable that the key respondents are technological gatekeepers, wanting to encourage the supplier to undertake risky developments and produce products that may or may not be purchased when they ultimately reach the market. Although, ultimately, such a bias would lead to loss of the respondent's credibility with suppliers over time, it is a bias that should be considered.

    To the extent that multiple individuals are involved in purchasing situations, it is important that the implications of those individuals' value functions be combined in a sensible way. One method is to use the value function outputs to predict "votes" for purchase decisions and then develop a (weighted) score from these votes. Choffray and Lilien [9] develop this combining rule and others; Keeney and Kirkwood [27] derive conditions under which a multilinear combining rule (similar to Eq. (A.1)) is appropriate. The combining rule problem is an issue of key importance, and no existing procedure handles it adequately at the moment. Of comparable importance is the time-variance (stability) of the value assessment. Our assessment procedure is static; even if it is valid at the moment of assessment, the market (and customer values) may be different at the time of market entry. How different they may be is another unsolved problem.
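    One simple combining rule of the weighted-vote type referenced above might be sketched as follows; the buying-center roles, influence weights, and value scores are hypothetical illustrations, not data from the study.

```python
# Weighted-vote aggregation over individual value functions: each member
# "votes" for the alternative their own value function ranks highest, and
# votes are weighted by assumed influence in the buying center.

def predict_vote(value_scores):
    """Alternative with the highest individual value score."""
    return max(value_scores, key=value_scores.get)

def group_choice(scores_by_member, influence):
    """Alternative with the largest influence-weighted vote total."""
    totals = {}
    for member, scores in scores_by_member.items():
        pick = predict_vote(scores)
        totals[pick] = totals.get(pick, 0.0) + influence[member]
    return max(totals, key=totals.get)

# Hypothetical buying center (scores on the article's 0-100 overall scale):
scores_by_member = {
    "engineer":   {"OR9000": 13.3, "Sentry 50": 15.4, "5941": -18.0},
    "purchasing": {"OR9000": 16.0, "Sentry 50": 12.0, "5941": -5.0},
    "production": {"OR9000": 14.0, "Sentry 50": 15.0, "5941": -9.0},
}
influence = {"engineer": 0.5, "purchasing": 0.3, "production": 0.2}
# group_choice(...) -> "Sentry 50" (0.7 weighted votes vs. 0.3 for the OR9000)
```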

    The assessment procedure is clearly obtrusive.It forces the respondent to think in terms of hier-archies and levels of attributes, trade-offs, andthe like. How much that assessment procedureaffects (biases) the respondent is hard to say.

    Many of the above problems would be allevi-ated if validation data were available. At thiswriting, we know of no other field application ofthe procedure in marketing. Indeed, assessmentsof multiple firms in multiple markets, using thisand other methods of value assessment, will be


    required to address the validation issue. We hopeto have the opportunity to address that questionin the near future.

    In summary, the use of multiattribute value analysis takes more time for both the interviewer and the respondent than other approaches to assessing prospective customer purchase behavior. Clearly, multiattribute value analysis is not appropriate for most packaged goods. But for multimillion dollar purchases, it may be cost-effective to delve deeply into the value judgments of prospective purchasers.

    It is reasonable to expect that purchasers will think hard about these values when purchase time comes. By forcing this hard thought early, the seller may get biased responses, and preferences may change over time. But with an explicit representation of these stated values, and having heard the reasoning and thoughts behind them, the seller should be in a better position to recognize misleading biases and to foresee causes for future changes in values. Since values are crucial in large industrial purchases, a focus on what buyers want, balanced with what sellers can give them, can provide valuable marketing intelligence.

    On net, multiattribute value analysis is a procedure worth considering for important product design decisions in highly competitive, heterogeneous, fast-moving markets. It avoids many of the disadvantages of the more common procedures and, though its cost is not trivial, it is an investment with the potential to pay handsome returns to leading firms.

    References
    1. Angelus, T. L. Why do most new products fail? Advertising Age 40:865-886 (March 24, 1969).
    2. Beckwith, Neal E. and Lehmann, D. R. The importance of halo effects in multi-attribute attitude models. Journal of Marketing Research 12:265-275 (August 1975).
    3. Booz, Allen & Hamilton. Management of New Products. New York: Booz, Allen & Hamilton, Inc., 1971.
    4. Booz, Allen & Hamilton. New Product Management for the 1980s. New York: Booz, Allen & Hamilton, Inc., 1982.
    5. Briscoe, C. Some observations on new industrial product failures. Industrial Marketing Management 2:15-62 (February 1973).
    6. Calantone, Roger J. and Cooper, Robert G. A typology of industrial new product failure. In: 1977 Educators' Conference Proceedings, B. Greenberg and D. Bellenger (eds.). Chicago: American Marketing Association, 1977, pp. 492-497.
    7. Carroll, J. D. Individual differences and multidimensional scaling. In: Multidimensional Scaling: Theory and Applications in the Behavioral Sciences, R. N. Shepard, A. K. Romney, and S. Nerlove (eds.). Vol. 1. New York: Seminar Press, Inc., 1972.
    8. Cattin, P. and Wittink, D. R. Commercial use of conjoint analysis: a survey. Journal of Marketing 46:44-53 (Summer 1982).
    9. Choffray, Jean-Marie and Lilien, Gary L. Market Planning for New Industrial Products. New York: Wiley, 1980.
    10. Cooper, R. G. Why new industrial products fail. Industrial Marketing Management 4:315-326 (December 1975).
    11. Cooper, R. G. The dimensions of industrial new product success and failure. Journal of Marketing 43:93-103 (Summer 1979).
    12. Cooper, R. G. Project NewProd: factors in new product success. European Journal of Marketing 14:277-292 (1980).
    13. Cooper, R. G. New product success in industrial firms. Industrial Marketing Management 11:215-223 (1982).
    14. Cooper, R. G. The performance impact of product innovation strategies. European Journal of Marketing 18:43-54 (1984).
    15. Cooper, Robert G. and Kleinschmidt, Elko J. An investigation into the new product process: steps, deficiencies and impact. Journal of Product Innovation Management 3:71-85 (June 1986).
    16. Crawford, C. M. Marketing research and the new product failure rate. Journal of Marketing 41:51-61 (April 1977).
    17. Crawford, C. M. New product failure rates: facts and fallacies. Research Management (September 1979), pp. 9-13.
    18. Davidson, J. H. Why most new consumer brands fail. Harvard Business Review 54:117-121 (March-April 1976).
    19. Dyer, J. S. and Sarin, R. K. Measurable multiattribute value functions. Operations Research 27:810-822 (July-August 1979).
    20. Green, P. E., Carroll, J. D., and Goldberg, S. M. A general approach to product design optimization via conjoint analysis. Journal of Marketing 45:17-37 (Summer 1981).
    21. Green, Paul E. and Wind, Y. Multi-Attribute Decisions in Marketing. Hinsdale, IL: Dryden Press, 1973.
    22. Healy, J. A VLSI-ATE selection matrix. Solid State Technology (November 1982), pp. 81-88.
    23. Hopkins, D. S. New Product Winners and Losers. Conference Board Report No. 773, 1980.
    24. Johnson, Richard M. Tradeoff analysis of consumer values. Journal of Marketing Research 11:121-127 (May 1974).
    25. Keeney, R. L. Siting Energy Facilities. New York: Academic Press, 1980.
    26. Keeney, R. L. Measurement scales for quantifying attributes. Behavioral Science 26:29-36 (January 1981).
    27. Keeney, Ralph L. and Kirkwood, Craig W. Group decision making using cardinal social welfare functions. Management Science 22:430-437 (December 1975).
    28. Keeney, R. L. and Raiffa, H. Decisions with Multiple Objectives. New York: Wiley, 1976.
    29. Maidique, M. A. and Zirger, B. J. A study of success and failure in product innovation: the case of the U.S. electronics industry. IEEE Transactions on Engineering Management EM-31:192-203 (November 1984).
    30. Moriarty, Rowland T. Industrial Buying Behavior. Lexington, MA: Lexington Books, 1983.
    31. Qualls, William, Olshavsky, Richard W., and Michaels, Ronald E. Shortening of the PLC: an empirical test. Journal of Marketing 45:76-80 (1981).
    32. Rothwell, R., Freeman, C., Horsley, A., Jervis, V. T. P., Robertson, A. B., and Townsend, J. SAPPHO updated: Project SAPPHO Phase II. Research Policy 3:258-291 (1974).


    33. Shocker, Allan D. and Hall, William G. Pretest market models: a critical evaluation. Journal of Product Innovation Management 3:86-107 (June 1986).
    34. Urban, Glen L. and Hauser, John R. Design and Marketing of New Products. Englewood Cliffs, NJ: Prentice-Hall, 1980.
    35. Von Hippel, Eric. Lead users: a source of novel product concepts. Management Science 32:791-805 (July 1986).
    36. von Winterfeldt, D. and Edwards, W. Decision Analysis and Behavioral Research. New York: Cambridge University Press, 1986.
    37. Wilkie, William L. and Pessemier, Edgar A. Issues in marketing's use of multi-attribute models. Journal of Marketing Research 10:428-441 (November 1973).

    Appendix: Independence Concepts and the Selection of a Value Function

    To define the independence concepts, let us assume that we have n criteria denoted X1, . . . , Xn, with xi being a level of criterion Xi, i = 1, . . . , n. Thus, a prospective product can be described by the vector x = (x1, . . . , xn). A value model is then a function v that assigns higher numbers to preferred sets of product characteristics. That is, v(x) > v(x') if and only if x is preferred to x' by the customer whose evaluation is of interest.

    Two independence concepts are most important in developing value models.

    Preferential Independence. The pair of criteria [X1, X2] is preferentially independent of the other criteria X3, . . . , Xn if the preference order for products involving only changes in the levels of X1 and X2 does not depend on the levels at which X3, . . . , Xn are fixed.

    Weak-Difference Independence. Criterion X1 is weak-difference independent of criteria X2, . . . , Xn if the order of preference differences between pairs of X1 levels does not depend on the level at which criteria X2, . . . , Xn are fixed.

    Using these independence concepts, numerous value models can be developed [25]. They are all constructed hierarchically from the following result, proven in Dyer and Sarin [19].

    Result. Given criteria X1, . . . , Xn, n >= 3, a model of value differences v with the form

      v(x1, . . . , xn) = Σi ki vi(xi) + k Σi Σj>i ki kj vi(xi) vj(xj)
                          + . . . + k^(n-1) k1 k2 · · · kn v1(x1) · · · vn(xn)    (A.1)

    exists if and only if [X1, Xi], i = 2, . . . , n is preferentially independent of the other criteria and if X1 is weak-difference independent of the other criteria.

    To determine v in Eq. (A.1), we need to assess vi, i = 1, . . . , n on a 0 to 1 scale and the scaling constants ki, i = 1, . . . , n. The additional constant k, concerning synergy among criteria, is determined from the ki.

    If Σi ki = 1, then k = 0, and Eq. (A.1) reduces to the additive form

      v(x1, . . . , xn) = Σi ki vi(xi).    (A.2)

    When Σi ki ≠ 1, then k ≠ 0 and there is a value synergy among the criteria. Then, multiplying each side of Eq. (A.1) by k, adding 1, and factoring yields

      1 + k v(x1, . . . , xn) = Πi [1 + k ki vi(xi)],    (A.3)

    which is referred to as the multiplicative form. Any vi in either Eq. (A.2) or Eq. (A.3) can itself be a multiattribute model, so these results can be used in a nested fashion to create more complex models if appropriate.