Boost Your Marketing ROI with Experimental Design, by Eric Almquist and Gordon Wyner

<ul><li><p>Boost Your Marketing ROI with Experimental Design</p><p>by Eric Almquist and Gordon Wyner</p><p>Reprint r0109k</p></li><li><p>HBR Case Study r0109a
Off with His Head?
David Champion</p><p>HBR at Large r0109b
The Leadership Lessons of Mount Everest
Michael Useem</p><p>Different Voice r0109c
Genius at Work: A Conversation with Mark Morris</p><p>Harnessing the Science of Persuasion r0109d
Robert B. Cialdini</p><p>Torment Your Customers (They'll Love It) r0109e
Stephen Brown</p><p>Radical Change, the Quiet Way r0109f
Debra E. Meyerson</p><p>Your Next IT Strategy r0109g
John Hagel III and John Seely Brown</p><p>HBR Interview r0109h
Bernard Arnault of LVMH: The Perfect Paradox of Star Brands
Suzy Wetlaufer</p><p>Best Practice r0109j
Speeding Up Team Learning
Amy Edmondson, Richard Bohmer, and Gary Pisano</p><p>Tool Kit r0109k
Boost Your Marketing ROI with Experimental Design
Eric Almquist and Gordon Wyner</p><p>October 2001</p></li><li><p>Consumers are bombarded daily with hundreds, perhaps thousands, of marketing messages. Delivered through all manner of media, from television commercials to telephone solicitations to supermarket circulars to Internet banner ads, these stimuli may elicit the desired response: the consumer clips a coupon, clicks on a link, or adds a product to a shopping cart. But the vast majority of marketing messages fail to hit their targets. Obviously, it would be valuable for companies to be able to anticipate which stimuli would prompt a response, since even a small improvement in the browse-to-buy conversion rate can have a big impact on profitability. But it has been very difficult to isolate what drives consumer behavior, largely because there are so many possible combinations of stimuli.</p><p>Now, however, marketers have easier access, at relatively low cost, to experimental design techniques long applied in other fields such as pharmaceutical research.
Experimental design, which quantifies the effects of independent stimuli on behavioral responses, can help marketing executives analyze how the various components of a marketing campaign influence consumer behavior. This approach is much more precise and cost effective than traditional market testing. And when you know how customers will respond to what you have to offer, you can target marketing programs directly to their needs, and boost the bottom line in the process.</p><p>Most marketing executives will admit that their discipline involves a lot of guesswork. But by borrowing a statistical technique long applied in other fields, marketers can develop campaigns that target customers with uncanny accuracy.</p><p>Copyright 2001 by Harvard Business School Publishing Corporation. All rights reserved.</p><p>Traditional Testing</p><p>The practice of testing various forms of a marketing or advertising stimulus isn't new. Direct marketers, in particular, have long used simple techniques such as split mailings to compare how customers react to different prices or promotional offers. But if they try to evaluate more than just a couple of campaign alternatives, traditional testing techniques quickly grow prohibitively expensive.</p></li><li><p>Consider the test-and-control-cell method, which is the basis for almost all direct mail and e-commerce testing done today. It starts with a control cell for, say, a base price, then adds test cells for higher and lower prices. To test five price points, six promotions, four banner ad colors, and three ad placements, you'd need a control cell and 360 test cells (5 × 6 × 4 × 3 = 360). And that's a relatively simple case. 
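</p><p>The arithmetic behind that explosion is worth making concrete. A quick Python sketch (ours, not the authors'; the attribute names are just labels for the example above):</p>

```python
from math import prod

# One test cell per combination of attribute levels, plus a control cell,
# for the example in the text: five prices, six promotions, four banner
# colors, and three placements.
levels = {"price": 5, "promotion": 6, "banner_color": 4, "placement": 3}

test_cells = prod(levels.values())
total_cells = test_cells + 1
print(test_cells, total_cells)  # 360 test cells, 361 cells in all
```

<p>Because the cell count is the product of the level counts, adding even one more attribute or level multiplies, rather than adds to, the cost of a full test.</p><p>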
In credit card marketing, the possible combinations of brands, cobrands, annual percentage rates, teaser rates, marketing messages, and mail packaging can quickly add up to hundreds of thousands of possible bundles of attributes. Clearly, you cannot test them all.</p><p>There's another problem with this brute-force approach: it typically does not reveal which individual variables are causing higher (or lower) responses from customers, since most control-cell tests reflect the combined effect of more than two simple variables. Is it the lower price that prompted the higher response? The promotional deal? The new advertising message? There's no way to know.</p><p>The problem has been magnified recently as companies have gained the ability to change their marketing stimuli much more quickly. Just a few years ago, changing prices and promotions on a few cans of food in the supermarket, for example, required the time-consuming application of sticky labels and the distribution of paper coupons. Today, a store can adjust prices and promotions electronically by simply reprogramming its checkout scanners. The Internet has further heightened marketing complexity by reducing the physical constraints on pricing, packaging, and communications. In the extreme, an on-line retailer could change the prices and promotion of every product it offers every minute of the day. It can also change the color of banner ads, the tone of promotional messages, and the content of outbound e-mails with relative ease.</p><p>The increasing complexity of the stimulus-response network, as we call it, means that marketers have more communication alternatives than ever before and that the portion of alternatives they actually test is growing even smaller. But this greater complexity can also mean greater flexibility in your marketing programs if you can uncover which changes in the stimulus-response network actually drive customer behavior. 
One way to do this is through scientific experimentation.</p><p>A New Marketing Science</p><p>The science of experimental design lets people project the impact of many stimuli by testing just a few of them. By using mathematical formulas to select and test a subset of combinations of variables that represent the complexity of all the original variables, marketers can model hundreds or even thousands of stimuli accurately and efficiently.</p><p>This is not the same thing as an after-the-fact analysis of consumer behavior, sometimes referred to as data mining. Experimental design is distinguished by the fact that you define and control the independent variables before putting them into the marketplace, trying out different kinds of stimuli on customers rather than observing them as they have naturally occurred. Because you control the introduction of stimuli, you can establish that differences in response can be attributed to the stimulus in question, such as the packaging or color of a product, and not to other factors, such as limited availability of the product. In other words, experimental design reveals whether variables caused a certain behavior, as opposed to simply being correlated with the behavior.</p><p>While experimental design itself isn't new, few marketing executives have used the technique, either because they haven't understood it or because day-to-day marketing operations have gotten in the way. But new technologies are making experimental design more accessible, more affordable, and easier to administer. (For more information on the genesis of this type of testing, see the sidebar "The Origins of Experimental Design.") 
Companies today can collect detailed customer information much more easily than ever before and can use those data to build models that predict customer response with greater speed and accuracy.</p><p>Eric Almquist and Gordon Wyner are vice presidents of Mercer Management Consulting, based in Boston.</p><p>The Origins of Experimental Design</p><p>Experimental design methodologies, some dating as far back as the nineteenth century, have been used for years across many fields, including process manufacturing, psychology, and pharmaceutical clinical trials, and they are well known to most statisticians. Sir Ronald A. Fisher was among the first statisticians to introduce the concepts of randomization and analysis of variance. In the early 1900s, he worked at the Rothamsted Agricultural Experimental Station outside London. His focus was on increasing agricultural yields.</p><p>Another major breakthrough in the field came with the work of U.S. economist and Nobel laureate Daniel L. McFadden in the 1970s, who drew on psychological theories to explain that consumer choices are a function of the available alternatives and other consumer characteristics. In helping to design San Francisco's BART commuter rail system, McFadden analyzed the way people evaluate trade-offs in travel time and cost and how those trade-offs affect their decisions about means of transportation. He was able to help forecast demand for BART and determine where best to locate stations. The model was quite accurate, predicting a 6.4% share of commuter travel for BART, which was close to the actual 6.2% share the system achieved.</p></li><li><p>Today's most popular experimental-design methods can be adapted and customized using guidelines from standard reference textbooks such as Statistics for Experimenters by George E. P. Box, J. Stuart Hunter, and William G. 
Hunter; and from off-the-shelf software packages such as the Statistical Analysis System, the primary product of SAS Institute. A handful of companies have already applied some form of experimental design to marketing. They include financial firms such as Chase, Household Finance, and Capital One; telecommunications provider Cable &amp; Wireless; and Internet portal America Online.</p><p>Applying experimental-design methods requires business judgment and a degree of mathematical and statistical sophistication, both of which are well within the reach of most large corporations and many smaller organizations. The experimental design technique is particularly useful for companies that have large numbers of customers and that face rapid and constant change in their markets and product offers. Internet retailers, for instance, benefit greatly from experimentation because on-line customers tend to be fickle. Attracting browsers to a Web site and then converting them into buyers has proved very expensive and largely ineffective. Getting it right the first time is nearly impossible, so experimentation is critical. The rigorous and robust nature of experimental design, combined with the increasing challenges of marketing to oversaturated consumers, will make widespread adoption of this new marketing science only a matter of time in most industries.</p><p>The ABCs of Experimental Design</p><p>To illustrate how experimental design works, let's consider the following simple case. A company, which we'll call BizWare, is marketing a software product to other companies. Before launching a national campaign, BizWare wants to test three different variables, or attributes, of a sales message for the product: price, message, and promotion. Each of the three attributes can have a number of variations, or levels. 
Suppose the three attributes and their various levels are as follows:</p><p>Price

LEVEL   (1)    (2)    (3)    (4)
        $150   $160   $170   $180</p><p>Message

(1) SPEED: BizWare lets you manage customer relationships in just minutes a day.
(2) POWER: BizWare can be expanded to handle a virtually infinite number of customer files.</p><p>Promotion

(1) 30-DAY TRIAL: You can try BizWare now for 30 days at no risk.
(2) FREE GIFT: Buy BizWare now and receive our contact manager software absolutely free.</p><p>The total number of possible combinations can be determined by multiplying the number of levels of each attribute. The three attributes BizWare wants to test yield a total of 16 possible combinations, since 4 × 2 × 2 = 16. All 16 combinations can be mapped in the cells of a simple chart like the one below.</p><p>BizWare's Universe of Possible Combinations

PROMOTION   (1)  (1)  (2)  (2)
MESSAGE     (1)  (2)  (1)  (2)
PRICE (1)    X    X    X    X
PRICE (2)    X    X    X    X
PRICE (3)    X    X    X    X
PRICE (4)    X    X    X    X</p><p>It's not necessary to test them all. Instead, using what's called a fractional factorial design, BizWare selects a subset of eight combinations to test.</p><p>"Factorial" means BizWare crosses each attribute (price, promotion, and message) with each of the others in grid-like fashion, as in the universe chart above. "Fractional" means BizWare then chooses a subset of those combinations in which the attributes are independent (either totally or partially) of each other. The following chart shows the resulting experimental design, with Xs marking the cells to be tested. Note that each level of each attribute is paired in at least one instance with each level of the other attributes. So, for example, price at $150 is matched at some point with each promotion and each message. 
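</p><p>The "fractional" selection can be made concrete with a short sketch. The article doesn't state the exact rule used to pick BizWare's eight cells, so the Python fragment below uses one standard construction, a half-fraction chosen by a parity rule (an illustrative assumption), which reproduces the balance property just described: every level of every attribute appears with every level of the others.</p>

```python
from itertools import product

# Encode the four price levels as two binary factors (a, b), then keep the
# half of the 4 x 2 x 2 = 16 runs where a ^ b ^ message ^ promotion is even.
# This parity rule is our assumption; the article doesn't give its rule.
design = []
for a, b, message, promotion in product([0, 1], repeat=4):
    if (a ^ b ^ message ^ promotion) == 0:
        design.append((2 * a + b, message, promotion))  # (price 0-3, msg, promo)

# Check the balance property described in the text: every price level is
# paired at least once with each message and with each promotion.
for level in range(4):
    paired_messages = {m for price, m, p in design if price == level}
    paired_promos = {p for price, m, p in design if price == level}
    assert paired_messages == {0, 1} and paired_promos == {0, 1}

print(len(design))  # 8 of the 16 possible cells
```

<p>The price of testing only half the cells is the usual fractional-factorial trade-off: some interaction effects become confounded with one another, while the main effects of price, message, and promotion remain separable.</p><p>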
This makes it possible to unambiguously separate the influence of each variable on customer response.</p><p>BizWare's Experimental Design

PROMOTION   (1)  (1)  (2)  (2)
MESSAGE     (1)  (2)  (1)  (2)
PRICE (1)   X  X
PRICE (2)   X  X
PRICE (3)   X  X
PRICE (4)   X  X</p><p>The eight chosen combinations are now tested, using one of several media: scripts at a call center, banner ads on BizWare's Web site, e-mail messages to prospective customers, and direct mail solicitations. (In general, you should test using the medium you ultimately expect to use for your marketing campaign, although you can also choose multiple media and treat the choice of media as an attribute in the experiment.)</p><p>How big should the sample size be to make the experiment valid? The answer depends on several characteristics of the test and the target market. These may include the expected response rate, based on the results of past marketing efforts; the expected variation among subgroups of the market; and the complexity of the design, including the number of attributes and levels. In any event, the sample size should be large enough so that marketers can statistically detect the impact of the attributes on customer response. Since increasing the complexity and size of an experiment generally adds cost, marketers should determine the minimum sample size necessary to achieve a degree of precision that is useful for making business decisions. (There are standard guidelines in statistics that can help marketers answer the question of sample size.) We've conducted complex experiments by sending e-mail solicitations to lists of just 20,000 names, where 1,250 people each receive one of 16 stimuli.</p></li><li><p>Within a few days or weeks, the experiment's results come in. 
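</p><p>Those standard guidelines usually reduce to a power calculation. As an illustration (the response rates, significance level, and power below are our assumptions, not figures from the article), the common normal-approximation formula for comparing two proportions gives a per-cell sample size:</p>

```python
from math import ceil, sqrt

def cell_sample_size(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Per-cell n for a two-proportion z-test at 5% two-sided
    significance and 80% power (normal approximation)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# E.g., to distinguish a 10% baseline response from a 12% response
# (illustrative numbers): a few thousand respondents per cell.
n = cell_sample_size(0.10, 0.12)
print(n)
```

<p>Larger lifts need far fewer respondents, which is why marketers size samples around the smallest difference that would still change a business decision.</p><p>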
BizWare's marketers note the number and percentage of positive responses to each of the eight tested offers.</p><p>BizWare's Design Results

PROMOTION   (1)  (1)  (2)  (2)
MESSAGE     (1)  (2)  (1)  (2)
PRICE (1)   14%  40%
PRICE (2)   9%  13%
PRICE (3)   6%  10%
PRICE (4)   1%  7%</p><p>At a glance, you might intuitively understand that price has a significant impact on the response to the various offers, since the lower price offers (Price 1 and Price 2) generally drew much better response rates than the higher price offers (Price 3 and Price 4). But statistical modeling, using standard software, makes it possible to assess the impact of each variable with far greater precision. Indeed, by using a method known as logistic regression analysis, BizWare can extrapolate from the results of the experiment the probable response rates for all 16 cells.</p><p>BizWare's Modeled Responses

PROMOTION   (1)  (1)  (2)  (2)
MESSAGE     (1)  (2)  (1)  (2)
PRICE (1)   14% 2...</p></li></ul>
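<p>The extrapolation step can be sketched end to end. The cell layout, "observed" rates, and coefficients below are hypothetical stand-ins (the article's full design matrix and data aren't given); the mechanics, fitting a logistic regression to the eight tested cells and then scoring all sixteen, follow the method the article names.</p>

```python
from math import exp

# Hypothetical setup: four price levels, two messages, two promotions.
# The "observed" rates are generated from an assumed true model so the
# extrapolation can be checked end to end.

def features(price, message, promotion):
    return [1.0, float(price), float(message), float(promotion)]

def rate(beta, x):  # logistic response probability
    z = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 / (1.0 + exp(-z))

TRUE_BETA = [-1.5, -0.6, 0.8, 0.5]  # assumed "true" effects, not the article's
cells = [(p, m, r) for p in range(4) for m in (0, 1) for r in (0, 1)]
tested = [c for c in cells if ((c[0] // 2) ^ (c[0] % 2) ^ c[1] ^ c[2]) == 0]
observed = [rate(TRUE_BETA, features(*c)) for c in tested]  # 8 cell rates

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(n):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_logistic(xs, ys, iterations=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    k = len(xs[0])
    beta = [0.0] * k
    for _ in range(iterations):
        grad = [0.0] * k
        hess = [[0.0] * k for _ in range(k)]
        for x, y in zip(xs, ys):
            p = rate(beta, x)
            for i in range(k):
                grad[i] += (y - p) * x[i]
                for j in range(k):
                    hess[i][j] += p * (1 - p) * x[i] * x[j]
        beta = [b + s for b, s in zip(beta, solve(hess, grad))]
    return beta

beta_hat = fit_logistic([features(*c) for c in tested], observed)
modeled = {c: rate(beta_hat, features(*c)) for c in cells}  # all 16 cells
```

<p>In practice the fitting would be done with a statistics package, such as the SAS software mentioned earlier, rather than hand-rolled Newton iterations; the point is that eight well-chosen cells are enough to determine predicted response rates for all sixteen.</p>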