
Boost Your Marketing ROI with Experimental Design

by Eric Almquist and Gordon Wyner

Reprint r0109k


Tool Kit

October 2001

Most marketing executives will admit that their discipline involves a lot of guesswork. But by borrowing a statistical technique long applied in other fields, marketers can develop campaigns that target customers with uncanny accuracy.

Consumers are bombarded daily with hundreds, perhaps thousands, of marketing messages. Delivered through all manner of media, from television commercials to telephone solicitations to supermarket circulars to Internet banner ads, these stimuli may elicit the desired response: The consumer clips a coupon, clicks on a link, or adds a product to a shopping cart. But the vast majority of marketing messages fail to hit their targets. Obviously, it would be valuable for companies to be able to anticipate which stimuli would prompt a response, since even a small improvement in the browse-to-buy conversion rate can have a big impact on profitability. But it has been very difficult to isolate what drives consumer behavior, largely because there are so many possible combinations of stimuli.

Now, however, marketers have easier access, at relatively low cost, to experimental design techniques long applied in other fields such as pharmaceutical research. Experimental design, which quantifies the effects of independent stimuli on behavioral responses, can help marketing executives analyze how the various components of a marketing campaign influence consumer behavior. This approach is much more precise and cost effective than traditional market testing. And when you know how customers will respond to what you have to offer, you can target marketing programs directly to their needs, and boost the bottom line in the process.

Traditional Testing

The practice of testing various forms of a marketing or advertising stimulus isn't new. Direct marketers, in particular, have long used simple techniques such as split mailings to compare how customers react to different prices or promotional offers. But if they try to evaluate more than just a couple of campaign alternatives, traditional testing techniques quickly grow prohibitively expensive.

Consider the "test and control cell" method, which is the basis for almost all direct mail and e-commerce testing done today. It starts with a control cell for, say, a base price, then adds test cells for higher and lower prices. To test five price points, six promotions, four banner ad colors, and three ad placements, you'd need a control cell and 360 test cells (5 × 6 × 4 × 3 = 360). And that's a relatively simple case. In credit card marketing, the possible combinations of brands, cobrands, annual percentage rates, teaser rates, marketing messages, and mail packaging can quickly add up to hundreds of thousands of possible bundles of attributes. Clearly, you cannot test them all.
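To see how quickly the cells multiply, it helps to enumerate them. Here is a minimal Python sketch (our illustration, not part of the original tool kit; the attribute values are invented) that counts the cells for the example above:

```python
from itertools import product

# Invented levels for the example above: five price points, six promotions,
# four banner ad colors, and three ad placements.
attributes = {
    "price":     ["$19", "$24", "$29", "$34", "$39"],
    "promotion": [f"promo {i}" for i in range(1, 7)],
    "color":     ["red", "blue", "green", "yellow"],
    "placement": ["top", "side", "bottom"],
}

test_cells = list(product(*attributes.values()))
print(len(test_cells))  # 360 test cells (5 x 6 x 4 x 3), plus one control cell
```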

There's another problem with this brute-force approach: It typically does not reveal which individual variables are causing higher (or lower) responses from customers, since most control-cell tests reflect the combined effect of more than two simple variables. Is it the lower price that prompted the higher response? The promotional deal? The new advertising message? There's no way to know.

The problem has been magnified recently as companies have gained the ability to change their marketing stimuli much more quickly. Just a few years ago, changing prices and promotions on a few cans of food in the supermarket, for example, required the time-consuming application of sticky labels and the distribution of paper coupons. Today, a store can adjust prices and promotions electronically by simply reprogramming its checkout scanners. The Internet has further heightened marketing complexity by reducing the physical constraints on pricing, packaging, and communications. In the extreme, an on-line retailer could change the prices and promotion of every product it offers every minute of the day. It can also change the color of banner ads, the tone of promotional messages, and the content of outbound e-mails with relative ease.

The increasing complexity of the stimulus-response network, as we call it, means that marketers have more communication alternatives than ever before, and that the portion of alternatives they actually test is growing even smaller. But this greater complexity can also mean greater flexibility in your marketing programs, if you can uncover which changes in the stimulus-response network actually drive customer behavior. One way to do this is through scientific experimentation.

A New Marketing Science

The science of experimental design lets people project the impact of many stimuli by testing just a few of them. By using mathematical formulas to select and test a subset of combinations of variables that represent the complexity of all the original variables, marketers can model hundreds or even thousands of stimuli accurately and efficiently.

This is not the same thing as an after-the-fact analysis of consumer behavior, sometimes referred to as data mining. Experimental design is distinguished by the fact that you define and control the independent variables before putting them into the marketplace, trying out different kinds of stimuli on customers rather than observing them as they have naturally occurred. Because you control the introduction of stimuli, you can establish that differences in response can be attributed to the stimulus in question, such as packaging or color of a product, and not to other factors, such as limited availability of the product. In other words, experimental design reveals whether variables caused a certain behavior as opposed to simply being correlated with the behavior.

While experimental design itself isn't new, few marketing executives have used the technique, either because they haven't understood it or because day-to-day marketing operations have gotten in the way. But new technologies are making experimental design more accessible, more affordable, and easier to administer. (For more information on the genesis of this type of testing, see the sidebar "The Origins of Experimental Design.") Companies today can collect detailed customer information much more easily than ever before and can use those data to build models that predict customer response with greater speed and accuracy.


The Origins of Experimental Design

Experimental design methodologies, some dating as far back as the nineteenth century, have been used for years across many fields, including process manufacturing, psychology, and pharmaceutical clinical trials, and they are well known to most statisticians. Sir Ronald A. Fisher was among the first statisticians to introduce the concepts of randomization and analysis of variance. In the early 1900s, he worked at the Rothamsted Agricultural Experimental Station outside London. His focus was on increasing agricultural yields.

Another major breakthrough in the field came with the work of U.S. economist and Nobel laureate Daniel L. McFadden in the 1970s, who drew on psychological theories to explain that consumer choices are a function of the available alternatives and other consumer characteristics. In helping to design San Francisco's BART commuter rail system, McFadden analyzed the way people evaluate trade-offs in travel time and cost and how those trade-offs affect their decisions about means of transportation. He was able to help forecast demand for BART and determine where best to locate stations. The model was quite accurate, predicting a 6.4% share of commuter travel for BART, which was close to the actual 6.2% share the system achieved.


Today's most popular experimental-design methods can be adapted and customized using guidelines from standard reference textbooks such as Statistics for Experimenters by George E. P. Box, J. Stuart Hunter, and William G. Hunter, and from off-the-shelf software packages such as the Statistical Analysis System, the primary product of SAS Institute. A handful of companies have already applied some form of experimental design to marketing. They include financial firms such as Chase, Household Finance, and Capital One; telecommunications provider Cable & Wireless; and Internet portal America Online.

Applying experimental-design methods requires business judgment and a degree of mathematical and statistical sophistication, both of which are well within the reach of most large corporations and many smaller organizations. The experimental design technique is particularly useful for companies that have large numbers of customers and that face rapid and constant change in their markets and product offers. Internet retailers, for instance, benefit greatly from experimentation because on-line customers tend to be fickle. Attracting browsers to a Web site and then converting them into buyers has proved very expensive and largely ineffective. Getting it right the first time is nearly impossible, so experimentation is critical. The rigorous and robust nature of experimental design, combined with the increasing challenges of marketing to oversaturated consumers, will make widespread adoption of this new marketing science only a matter of time in most industries.

The ABCs of Experimental Design

To illustrate how experimental design works, let's consider the following simple case. A company, which we'll call BizWare, is marketing a software product to other companies. Before launching a national campaign, BizWare wants to test three different variables, or attributes, of a sales message for the product: price, message, and promotion. Each of the three attributes can have a number of variations, or levels. Suppose the three attributes and their various levels are as follows:

Price
LEVEL (1) $150   (2) $160   (3) $170   (4) $180

Message
(1) SPEED: "BizWare lets you manage customer relationships in just minutes a day."
(2) POWER: "BizWare can be expanded to handle a virtually infinite number of customer files."

Promotion
(1) 30-DAY TRIAL: "You can try BizWare now for 30 days at no risk."
(2) FREE GIFT: "Buy BizWare now and receive our contact manager software absolutely free."

The total number of possible combinations can be determined by multiplying the number of levels of each attribute. The three attributes BizWare wants to test yield a total of 16 possible combinations, since 4 × 2 × 2 = 16. All 16 combinations can be mapped in the cells of a simple chart like the one below.

BizWare's Universe of Possible Combinations

           PROMOTION   (1)    (1)    (2)    (2)
           MESSAGE     (1)    (2)    (1)    (2)
PRICE (1)               X      X      X      X
PRICE (2)               X      X      X      X
PRICE (3)               X      X      X      X
PRICE (4)               X      X      X      X

It's not necessary to test them all. Instead, using what's called a fractional factorial design, BizWare selects a subset of eight combinations to test.

"Factorial" means BizWare "crosses" each attribute (price, promotion, and message) with each of the others in gridlike fashion, as in the universe chart above. "Fractional" means BizWare then chooses a subset of those combinations in which the attributes are independent (either totally or partially) of each other. The following chart shows the resulting experimental design, with Xs marking the cells to be tested. Note that each level of each attribute is paired in at least one instance with each level of the other attributes. So, for example, price at $150 is matched at some point with each promotion and each message. This makes it possible to unambiguously separate the influence of each variable on customer response.

BizWare's Experimental Design

           PROMOTION   (1)    (1)    (2)    (2)
           MESSAGE     (1)    (2)    (1)    (2)
PRICE (1)               X                    X
PRICE (2)                      X      X
PRICE (3)                      X      X
PRICE (4)               X                    X
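There are many valid ways to construct such a fraction. The short Python sketch below is our illustration, not the authors' actual procedure: it builds an eight-cell design matching the chart above by pairing the outer price levels with the cells where message and promotion levels match, and the inner price levels with the cells where they differ. The trade-off, consistent with a main-effects model, is that the message × promotion interaction can no longer be estimated separately.

```python
from itertools import product

prices = [150, 160, 170, 180]               # levels 1-4
messages = ["Speed", "Power"]               # levels 1-2
promotions = ["30-day trial", "Free gift"]  # levels 1-2

design = []
for p, m, r in product(range(4), range(2), range(2)):  # all 16 cells
    # Prices 1 and 4 take the cells where message and promotion levels
    # match; prices 2 and 3 take the cells where they differ. Every price
    # is thereby paired with both messages and both promotions, and each
    # message-promotion column is tested exactly twice.
    if (p in (0, 3)) == (m == r):
        design.append((prices[p], messages[m], promotions[r]))

for cell in design:
    print(cell)
assert len(design) == 8
```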

The eight chosen combinations are now tested, using one of several media: scripts at a call center, banner ads on BizWare's Web site, e-mail messages to prospective customers, and direct mail solicitations. (In general, you should test using the medium you ultimately expect to use for your marketing campaign, although you can also choose multiple media and treat the choice of media as an attribute in the experiment.)

How big should the sample size be to make the experiment valid? The answer depends on several characteristics of the test and the target market. These may include the expected response rate, based on the results of past marketing efforts; the expected variation among subgroups of the market; and the complexity of the design, including the number of attributes and levels. In any event, the sample size should be large enough so that marketers can statistically detect the impact of the attributes on customer response. Since increasing the complexity and size of an experiment generally adds cost, marketers should determine the minimum sample size necessary to achieve a degree of precision that is useful for making business decisions. (There are standard guidelines in statistics that can help marketers answer the question of sample size.) We've conducted complex experiments by sending e-mail solicitations to lists of just 20,000 names, where 1,250 people each receive one of 16 stimuli.
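Those standard guidelines are implemented in common statistics packages. As a rough sketch, with assumed planning numbers rather than figures from the text, a power calculation in Python might look like this:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed planning inputs: we want to distinguish a 14% response rate
# from a 10% baseline with a two-sided 5% test and 80% power.
effect = proportion_effectsize(0.14, 0.10)  # Cohen's h for two proportions

n_per_cell = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_cell))  # roughly 500 names per cell under these assumptions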


Within a few days or weeks, the experiment's results come in. BizWare's marketers note the number and percentage of positive responses to each of the eight tested offers.

BizWare's Design Results

           PROMOTION   (1)    (1)    (2)    (2)
           MESSAGE     (1)    (2)    (1)    (2)
PRICE (1)              14%                  40%
PRICE (2)                      9%    13%
PRICE (3)                      6%    10%
PRICE (4)               1%                   7%

At a glance, you might intuitively understand that price has a significant impact on the response to the various offers, since the lower price offers (Price 1 and Price 2) generally drew much better response rates than the higher price offers (Price 3 and Price 4). But statistical modeling, using standard software, makes it possible to assess the impact of each variable with far greater precision. Indeed, by using a method known as logistic regression analysis, BizWare can extrapolate from the results of the experiment the probable response rates for all 16 cells, shown in the chart below.
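The fit itself takes only a few lines in a modern statistics package. The sketch below is our reconstruction, not the authors' SAS run: the cell layout follows the charts above, price is scaled to hundreds of dollars (a scaling we inferred from the published numbers), and the 1,250 names per cell echo the e-mail example earlier.

```python
import numpy as np
import statsmodels.api as sm

# The eight tested cells as read off the design and results charts above.
# Columns: price (hundreds of dollars), message dummy (Power = 1),
# promotion dummy (Free gift = 1), observed response rate.
cells = np.array([
    [1.5, 0, 0, 0.14], [1.5, 1, 1, 0.40],
    [1.6, 1, 0, 0.09], [1.6, 0, 1, 0.13],
    [1.7, 1, 0, 0.06], [1.7, 0, 1, 0.10],
    [1.8, 0, 0, 0.01], [1.8, 1, 1, 0.07],
])
X = sm.add_constant(cells[:, :3])   # intercept, price, message, promotion
rates = cells[:, 3]
n_mailed = np.full(8, 1250.0)       # assumed names per cell

# Grouped logistic regression: each cell's rate is a binomial proportion.
fit = sm.GLM(rates, X, family=sm.families.Binomial(),
             var_weights=n_mailed).fit()
print(fit.params)  # estimates of a, b1, b2, b3 in the response-model
                   # equation (see the sidebar "Estimating a Response Model")
```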

BizWare's Modeled Responses

           PROMOTION   (1)    (1)    (2)    (2)
           MESSAGE     (1)    (2)    (1)    (2)
PRICE (1)              14%    23%    28%    42%
PRICE (2)               7%    12%    15%    24%
PRICE (3)               3%     6%     7%    12%
PRICE (4)               1%     3%     3%     6%

Note that the percentages shown above don't precisely match the original percentages from the test. That's because BizWare used the original percentages to create a general equation for estimating the results in all the cells. When the new equation is then applied to the cells already tested, the results usually vary somewhat from the original numbers. The important thing is that the tester ends up with a full set of consistent results for all possible permutations. (For more about how these calculations were made, see the sidebar "Estimating a Response Model.")

With this complete picture, it becomes clear that some combinations are far more likely to be effective than others. The combination of Price 1 ($150), Message 2 (Power), and Promotion 2 (Free Gift) is clearly the most attractive to consumers. But is it the right combination for BizWare? That's where business judgment comes in: The company's management will need to analyze the experiment's implications for its resources, revenue, and profitability.

Experimentation at Crayola

Let's look at an actual example of how experimental design can enhance a marketing campaign. Last year, Crayola, a division of Binney & Smith and Hallmark, launched a creative arts and activities portal on the Internet called Crayola.com. The site's target customers include parents and educators, and it sells art supplies and offers arts-and-crafts project ideas and classroom lesson plans. We conducted an experimental design to help Crayola attract people to the site and convert browsers into buyers.

Based on Crayola's experience and market knowledge, we identified a set of stimuli that could be varied to drive traffic to Crayola.com and induce purchases. One of these stimuli was an e-mail to parents and teachers. To test various components of the e-mail content and format, we relied on the best judgment of Crayola's marketing staff about the messages that were most likely to appeal to the target markets. The e-mail included five key attributes that seemed likely to affect the customer response rate, which would be measured by click-throughs to the Crayola Web site. These attributes and their related levels were:

• Two subject lines: "Crayola.com Survey" and "Help Us Help You."
• Three salutations: "Hi [user name] :-)," "Greetings!" and "[user name]."
• Two calls to action: "As Crayola.com grows, we'd like to get your thoughts about the arts and how you use art materials" and "Because you as an educator have a special understanding of the arts and how art materials are used, we invite you to help build Crayola.com."
• Three promotions: a chance to participate in a monthly drawing to win $100 worth of Crayola products; a monthly drawing for one of ten $25 Amazon.com gift certificates; and no promotion.
• Two closings: "Crayola.com" and "[email protected]."

Taking into account all the levels of each attribute, there were a total of 72 possible versions of the e-mail (2 × 3 × 2 × 3 × 2 = 72). While Crayola might have been able to test all 72 variations, the process would have been cumbersome and expensive. Instead, we created a subset of 16 e-mails to represent the 72 possible combinations. Over a two-week period, we sent the 16 types of e-mail to randomly selected samples of customers and tracked and analyzed their responses. The results were compelling. The "best" e-mail of the 72 possible scripts yielded a positive response rate of about 34% and was more than three times as effective at attracting parents as the "worst" e-mail, which yielded a positive response rate of only about 10%. (See the exhibit "Crayola Draws Results.") Among educators, the best e-mail script was nearly twice as effective as the worst script, with response rates of 35% for the best e-mail versus 20% for the worst.

We also conducted similar experiments with Crayola to test the effects of three different banner ads on the home page, as well as product, price, and promotions offered at the on-line store. The best combination was nearly four times as effective in converting shoppers into buyers as the worst combination and nearly doubled revenues per buyer.


Estimating a Response Model

Logistic regression analysis is a statistical technique that allows an experimenter to analyze the impact of each stimulus in an experiment. The formula assumes that the outcome (in BizWare's case, the customer response rate) is a function of the attributes (in BizWare's case, price, message, and promotion). Here's what BizWare's generic equation looks like:

log(response rate / (1 − response rate)) = a + b1(price) + b2(message) + b3(promotion)

We plug BizWare's customer response data into this equation, using SAS software to estimate the coefficients (a, b1, b2, and b3). For price we can drop a number into the formula. For message and promotion, which are qualitative attributes, we assign a dummy code, zero or one, since there are only two levels for each attribute. It does not matter which level is assigned which number. For BizWare, the equation looks like this:

log(response rate / (1 − response rate)) = 10.3 − 8.1(price) + 0.6(message) + 0.9(promotion)

The coefficients tell us a few things: Higher price has a negative impact on demand (hence, the coefficient b1 is −8.1), and the effect of promotion is greater than the effect of message (because 0.9 is greater than 0.6). But more important, these coefficients allow us to apply the equation to extrapolate from the data collected and predict responses for all 16 cells.
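Applying the fitted equation to all 16 cells is then mechanical. In the sketch below (our illustration; the published coefficients reproduce the modeled-response table only when price is entered in hundreds of dollars, a scaling we inferred from the numbers), each cell's log-odds value is converted back into a response rate:

```python
import math

a, b1, b2, b3 = 10.3, -8.1, 0.6, 0.9    # coefficients from the sidebar

for price in (1.5, 1.6, 1.7, 1.8):      # $150-$180, in hundreds of dollars
    row = []
    for promotion in (0, 1):
        for message in (0, 1):
            logit = a + b1 * price + b2 * message + b3 * promotion
            rate = 1.0 / (1.0 + math.exp(-logit))  # invert the log-odds
            row.append(f"{rate:6.1%}")
    print(f"${price * 100:.0f}:", " ".join(row))

# Each row prints in the column order of the charts above; the rates agree
# with the modeled-response table to within rounding of the coefficients.
```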

Crayola Draws Results

Sample E-mail
Crayola.com marketers wanted to measure how customer response would be affected by different variations (or levels) of five main e-mail attributes: two subjects, three salutations, two calls to action, three promotions, and two closings. Below is one of the 72 possible combinations. An experimental design was developed so that only 16 combinations had to be tested.

subject: Help Us Help You
salutation: Greetings!
call to action: Because you as an educator have a special understanding of the arts and how art materials are used, we invite you to help build Crayola.com. By answering ten quick questions, you'll be helping to make sure improvements we make to Crayola.com meet your needs. Simply follow this link: http://___________. (If this does not work, please cut and paste the link into your browser's address bar.)
promotion: As a thank you, you will be entered into our monthly drawing to win one of ten $25 Amazon.com gift certificates. By completing our survey, you'll be automatically entered. Thanks for your assistance. Be sure to check back often for new fun, creative, and colorful ideas and solutions.
closing: Yours, Crayola.com

Parents' Response Rates
The company measured the responses it received and through statistical modeling could quickly pinpoint which stimuli appealed most to its target customers, in this case parents. The subject line "Crayola.com Survey," for example, was more effective at creating positive responses than "Help Us Help You." The response rate of the former was 7.5 percentage points higher than that of the latter, all else being equal.

Change in Levels                                         Lift in Response Rate
subject         Crayola.com Survey                       +7.5%
                Help Us Help You                          0.0%
salutation      Hi [user name] :-)                       +2.7%
                Greetings!                                0.0%
                [user name]                              +3.4%
call to action  As Crayola.com grows…                     0.0%
                Because you are…                         +3.5%
promotion       $100 product drawing                     +8.4%
                $25 Amazon.com gift certificate drawing  +5.2%
                No offer                                  0.0%
closing         Crayola.com                               0.0%
                [email protected]                    +1.2%

Script Attributes
The combination of attributes that got the best response from parents was more than three times as effective as the combination of attributes that got the worst response. Below are the best and worst script attributes of the 72 possible combinations. The best script is 3.5 times more effective than the worst.

                Best Response           Worst Response
subject         Crayola.com Survey      Help Us Help You
salutation      [user name]             Greetings!
call to action  Because you are…        As Crayola.com grows…
promotion       $100 product drawing    No offer
closing         [email protected]   Crayola.com
response rate   33.7%                   9.7%


Uncovering the Unexpected

In the Crayola experiments, as well as in other tests, our results have yielded surprising insights. When we ask experienced marketers to predict which stimuli are likely to elicit the best responses, few get it right. Crayola, for example, was surprised to find that a price reduction on a product or line of products generated sufficient volume to create higher revenues while also maintaining the site's profitability. Conventional wisdom would have suggested that raising prices would be a more effective way to increase revenue.

Of course, the findings from tests like these can't be generalized. For instance, compare the e-mail tests we conducted at Crayola with similar tests we performed for Cable & Wireless. At Crayola, e-mails containing no promotional offer drew poorly compared with e-mails containing either of the two promotional offers tested (a $100 product drawing and a $25 Amazon.com gift certificate). At Cable & Wireless, though, e-mails with no promotional offer drew the best click-through rate. But that's part of the value of experimental design: It allows marketers to move beyond rules of thumb or experience to pinpoint the marketing approaches that work best with a particular audience in a particular marketplace at a particular moment in time.

The Expanding Marketing Universe

In the world of marketing experimentation, the Crayola tests are relatively simple. We tested only a handful of marketing attributes, with a relatively small number of levels for each. Even so, the customer impact was impressive, with the best combinations of stimuli drawing double, triple, or quadruple response rates compared with the worst combinations.

The approach we took with Crayola can be extended and applied to more attributes and more levels. It's not unusual for a company to test ten or more attributes, including some with as many as eight levels. A credit card company, for example, might be interested in testing six teaser rates, six cobrands, four different annual percentage rates, six promotions, four insurance packages, four modes of communication, eight direct mail packages, and four mailing schedules. This represents a possible set of 442,368 distinct marketing stimuli, obviously too large a universe for a test-and-control-cell approach. But by using experimental design to select and test a manageable number, say 128 combinations of these variables, the credit card company could estimate with great accuracy the customer reaction to all 442,368 combinations.
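The arithmetic behind that universe is easy to verify; a one-line Python check (the attribute names simply restate the list above) confirms the count:

```python
from math import prod

# Level counts for the credit card example above.
levels = {
    "teaser rates": 6, "cobrands": 6, "annual percentage rates": 4,
    "promotions": 6, "insurance packages": 4, "modes of communication": 4,
    "direct mail packages": 8, "mailing schedules": 4,
}
print(prod(levels.values()))  # 442,368 distinct marketing stimuli
```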

And this is by no means the upper limit of the usefulness of experimental design. The responses being sought from customers can be more complex as well. Customers may have multiple options to choose from rather than a simple "yes" or "no" response, for example, a choice between one-year, two-year, and three-year subscriptions or no purchase.

Different types of experimental designs can be used when the experimental objectives vary. For example, so-called screening designs can efficiently test very large numbers of attributes to select a smaller number to investigate in more detail. Subsequent testing can employ more levels for each of a smaller collection of attributes. Response surface designs are used in food testing, in which multiple dimensions such as sweet, salty, crunchy, and sour have an ideal level somewhere between "too much" and "not enough." The design lets testers estimate the ideal combination of tastes and textures.
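The idea behind a response surface is simply that the response rises, peaks, and falls as an attribute moves from "not enough" to "too much." A toy one-dimensional sketch (the saltiness levels and taste scores below are invented) fits a quadratic and reads off the ideal level:

```python
import numpy as np

salt_level = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # tested saltiness levels
liking     = np.array([3.1, 5.8, 7.2, 6.4, 4.0])  # invented mean taste scores

# Fit liking = c2*x^2 + c1*x + c0 and locate the peak of the parabola.
c2, c1, c0 = np.polyfit(salt_level, liking, deg=2)
ideal = -c1 / (2 * c2)
print(f"estimated ideal salt level: {ideal:.2f}")
```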

Getting Results

Naturally, there are limits to the power of experimental design. This approach requires thoughtful planning to hypothesize what you are looking for and to rule out other possibilities before the experiment can begin.

One caution is that many experimental designs rely on "main effects" models. That is, they assume that interaction effects, the impact that one variable can have on another variable, are negligible. This is usually a reasonable assumption when you're dealing with complex combinations of three, four, and five variables at a time. However, interactions between two variables can be important and can be tested. For example, suppose you find that free samples have a more positive impact on a product's sales than do coupons. You may also learn that free samples provide an even greater lift when they are handed out in the stores as opposed to being sent through the mail. One variable, the chosen distribution channel, interacts meaningfully with another variable, the promotion itself.
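Testing such an interaction amounts to adding a product term to the response model and examining its coefficient. A minimal sketch with simulated data (all numbers invented for illustration) follows:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 4000

# Invented experiment: promotion type (1 = free sample, 0 = coupon) and
# channel (1 = handed out in store, 0 = sent through the mail).
sample = rng.integers(0, 2, n)
in_store = rng.integers(0, 2, n)

# Simulate purchases in which the free sample helps more in the store.
true_logit = -2.0 + 0.5 * sample + 0.2 * in_store + 0.6 * sample * in_store
bought = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit a logistic model with a sample x channel interaction term.
X = sm.add_constant(np.column_stack([sample, in_store, sample * in_store]))
fit = sm.GLM(bought, X, family=sm.families.Binomial()).fit()
print(fit.params)  # the last coefficient estimates the extra lift the free
                   # sample gets when it is distributed in the store
```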

Experimental design also calls for substantive knowledge to frame the problem, careful application of theoretically sound methods, and skillful interpretation of the results in the appropriate context. Some knowledge of basic statistical methods is necessary, of course. But even more important is a nuanced understanding of customers and the ability to form reasonable assumptions concerning which attributes and levels should be tested and which shouldn't. Experimental design should be one part of a continuous test-and-learn cycle.



Marketing is, and always will be, a creative endeavor. But it doesn't have to be so mysterious. As marketing noise and advertising clutter continue to increase, marketers will find that scientific experimentation will allow them to better communicate with their customers, and substantially raise the odds that their marketing efforts will pay off.

Eric Almquist and Gordon Wyner are vice presidents of Mercer Management Consulting, based in Boston.

Copyright © 2001 by Harvard Business School Publishing Corporation. All rights reserved.

Reprint r0109k
To place an order, call 1-800-988-0886.
