Operations Mng

Uploaded by maria-sen on 06-Apr-2018

  • 8/3/2019 Operations Mng

    1/13

    Making Large-Scale Models Manageable: Modeling from an Operations Management Perspective

    Author(s): Frederic H. Murphy
    Source: Operations Research, Vol. 41, No. 2 (Mar.-Apr., 1993), pp. 241-252
    Published by: INFORMS
    Stable URL: http://www.jstor.org/stable/171775

    Accessed: 14/03/2010 08:31


    MAKING LARGE-SCALE MODELS MANAGEABLE: MODELING FROM AN OPERATIONS MANAGEMENT PERSPECTIVE

    FREDERIC H. MURPHY
    Temple University, Philadelphia, Pennsylvania

    (Received January 1992; revision received December 1992; accepted February 1993)

    While building complex models is an important part of operations research practice, OR workers have focused too often on modeling's technical aspects instead of making the models manageable, that is, designing them around the ways people will operate them. The issues raised for complex models are different from those most widely discussed for decision support systems because the focus is on models that require a staff to maintain and operate them and on how the staff functions. Operations management provides the tools for thinking about the operations of large complex models; this paper examines some of these tools and shows how they relate to the design and operation of large complex models.

    Over the last two years Interfaces has published some 30 articles describing the use of very complex models, and most of the Edelman Prize competition participants have used complex models. Similarly, the OR Practice section of Operations Research regularly publishes articles involving complex models. Clearly, there is a role for these models, and, in fact, they are the bread and butter of our profession. This paper explores how we can make them manageable.

    Management scientists have finely honed technical and craft skills, but they have written very little about managing complex models. Few academics have had the opportunity to manage them; they focus on the technical issues with which they are familiar, instead of the organization of the day-to-day operations. Practitioners tend to avoid writing about their experiences as managers; rather, they prefer discussing the technical side of projects in which they are involved. The problem with ignoring operating issues is that we have not shared ideas for improving the operations of complex models that involve groups of up to 30 professionals or are diffused throughout an organization and used by several small teams. Nevertheless, when dealing with complex models, the management aspects are at least as critical as the technical issues.

    The operations of large models tend to be chaotic

    and convoluted. They are chaotic because there are many people doing different tasks with uncertain times to completion, such as updating data and implementing new features, and they are convoluted because the tasks tend to be interrelated and require a high degree of coordination. At the same time, the status of the operations is not completely observable. When operating models, highly skilled people spend a great deal of time doing routine tasks that only they can do, given the knowledge required to engage in the work.

    Those who have written about the subject include Gass (1987), Allen et al. (1992), Bennett (1989), and Goeller et al. (1985). Gass emphasizes issues of model integrity, assessment, and validation. Allen et al. address the staffing and general management issues for model development. Bennett focuses mostly on data and results management. Goeller and his team provide an example of how to organize and use a suite of models to evaluate a set of complex, interconnected issues.

    In this paper I address a specific aspect of model management: how to design models and study their operations to improve their operating characteristics. That is, can the tools of operations management (see, for example, Weiss and Gershon 1992), which include industrial engineering techniques that were developed

    Subject classifications: Philosophy of modeling: managing large models. Production/scheduling: applying operations management techniques to model operations.
    Area of review: OR FORUM.

    Operations Research 0030-364X/93/4102-0241 $01.25
    Vol. 41, No. 2, March-April 1993, p. 241
    © 1993 Operations Research Society of America


    in an industrial setting, be brought to bear on these mental, as opposed to physical, tasks? My conclusion is that some can be used directly and others, with modification, can be of use as well.

    COMPLEX MODELS AND MODEL MANAGEMENT

    Typically, we think of complex models in terms of their size, but size is not the only determinant. A simple transportation problem with 100 sources and destinations has 10,000 activities. It is large, but not complex, from a model management perspective. It is easy to conceptualize, and, if costs are measured by distance times some scale factor, it requires very few data elements that change. This model can be expanded to make it complex through several actions: a demand response can be added to price, multiple commodities competing for capacity on links, multiple periods connected by inventories, and alternative production processes.

    Simon (1981, p. 195) gives a useful definition of complexity: "A complex system is one made of a large number of parts that interact in a nonsimple way." What makes a model complex is structural complexity, diverse sources and different kinds of data. Structural complexity is a function of the number of different model components, the number of different ways these components can interact, the number of different types of variables and functions, and the existence of spatial and time dimensions. The scope and range of the problem environment also add to the complexity of the model. Managing operations becomes more difficult when the number of interactions between and among people and model components increases.

    This paper focuses on models that are used repeatedly. This means the model-building team can invest in operational efficiencies, moving along a learning curve. Given repeated use, one can afford to invest in an efficient production process.

    That a model is used repeatedly means it is used to analyze issues that are under continuous re-evaluation.
    For ongoing concerns in the corporate environment, people usually understand the problem, and the model usually functions as a productivity-enhancing tool through improved coordination or synthesis of information, and usually, though not always, in operations. Four of many examples are Blais, Lamont and Rousseau (1990) in vehicle and manpower scheduling for buses, Cohen et al. (1990) in multi-echelon inventory, Buchanan et al. (1990) in petroleum product refining and distribution, and Schmitz, Armstrong

    and Little (1990) in comprehending data from bar-code scanners. The last paper is a marketing example.

    Complex models are used regularly in public policy discussions. Gass and Sisson (1975) survey the kinds of models used. Recent examples include applications in forestry (Hof and Baltic 1991, and Kent et al. 1991), electric power policy (Ford 1990), health care (Fetter 1991), waste management (Baetz, Pas and Neebe 1989), and water supply (Goeller et al. 1985). PIES, the Project Independence Evaluation System (Hogan 1975), was used for studying energy policy from 1974 to 1982, when it was replaced by the Intermediate Future Forecasting System (IFFS), which is still in use (see Murphy et al. 1988). These last two models are used to illustrate some of the points raised here.

    Public policy models become complex because, as soon as a model can address one question, other related questions are asked, forcing its scope to be expanded. A good test for when a complex model is needed is if a small model raises more answerable questions than it answers. Public policy models become especially complex when they address more than just immediate impacts and take more of a systems view. For example, a change in oil markets affects other fuel markets and the macroeconomy, and the energy models have to take this into account. Having a comprehensive model that provides a uniform baseline is preferable to having individual models for each question that provide inconsistent conclusions in their areas of overlap. This is analogous to the coordinating function of private sector models.

    The emphasis in this paper is on the implications of model design and the implementation for ongoing management. The production process is more than algorithms. It involves people management, quality management, resource allocation, work design and flow, and adapting to change.
    The tools for studying these issues come from industrial engineering and operations management.

    Problems in model management appear in two forms: high error rates and long turnaround times. These two problems interact. High error rates aggravate turnaround problems because of the extra time required to redo runs. Long turnaround times exacerbate error problems because there are fewer opportunities to redo runs. Too often, analysts stop checking for errors more from fatigue or a lack of time than from confidence in the results. Because they interact, management strategies for improving one help the other.

    In the next section, I develop a simple model of a model and show how it can help in making modeling decisions that affect both run times and solution


    quality. I follow this with a discussion of how to do a workflow analysis of a model. Next, I discuss techniques for reducing error rates. This is followed by a discussion of approaches to reduce the time it takes to produce a run.

    A MODEL OF A MODEL

    Figure 1 shows the simplest organization of a model that is conceivable. Here it is a monolith where the tasks are to update the data and change the model structure, as needed.

    Complex models generally have substructures, or submodels, that can be treated separately and then linked in an overall framework. Figure 2 shows a typical configuration. The task definitions are now more refined by being partitioned among the components of the model. The figure does not show any submodel interactions because I presume these take place as part of Main.

    Although we all know about the benefits of partitioning, it is useful to explore the nature of these benefits, because this is the modeling equivalent of division of labor as described by Adam Smith in Wealth of Nations, and exploring the subject can have the same impacts as work methods and job design in manufacturing. We can get a feel for how a model organization that uses partitioning affects the operation of a model, as compared to leaving it as a monolith, by attaching some times to tasks and probabilities to the success of completing them. These calculations are analogous to those of Simon (p. 200) in his watchmaker parable on modular design.

    We take the model structure in Figure 2, and assume that the normal running time for each submodel is 5 minutes and 20 minutes for Main. If the submodels are run in series, followed by Main, the process takes 40 minutes. However, the runs can be done in two ways: The entire model can be run as a monolith with the results of each submodel run input

    Figure 1. A monolithic model. (Diagram: Data → Model → Results.)

    Figure 2. A partitioned model. (Diagram: separate data sets feed submodels Sub1 through Sub4, whose results feed Main, which produces the results.)

    directly into Main, or the team can inspect the results of each submodel, making corrections if errors are detected and rerunning so that the erroneous results do not reach Main. The first approach propagates any submodel errors, while the second assures against errors in the final solution.

    To see what effect these two procedures have on run time, assume that the probability p of having a correct submodel run is 0.7. The expected number of runs required to achieve a correct one is 1/p. For the monolith case, the probability of simultaneously getting correct runs from all the submodels is p^4, and the expected run length for this case is 40/p^4 = 167 minutes, over four times the original 40 minutes.

    On the other hand, if the results of each submodel are inspected and errors are corrected, the expected run length for each submodel is 5/p, so that the total expected run length becomes 20 + 4(5/p) = 49 minutes, only 29% of the time for the monolithic runs, and only 22% more than the minimum value of 40 minutes that results from error-free components. These results can be improved further if it is possible to run and check the submodels in parallel.

    The lesson of this example is general: By partitioning a model and instituting quality checks at intermediate stages, the effects of errors can be made additive rather than multiplicative.

    Up to this point, nothing has been said about the costs of interjecting human intervention between model components. To the extent that human intervention becomes part of the solution process, staffing requirements can increase. Stress levels also increase because of the need for the staff to be available while the system is running. If component checking can be done during a debugging phase and all components interact without human intervention when producing final runs, staff requirements are not increased. In fact, they are reduced by the quicker turnaround times when the components are run separately.
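The expected run-length arithmetic above is easy to verify mechanically. A minimal sketch, using the illustrative numbers in the text (four 5-minute submodels, a 20-minute Main, p = 0.7) and the geometric-distribution fact that a step with success probability q takes 1/q tries on average:

```python
# Expected run lengths for the monolithic vs. partitioned designs.
# Assumes independent runs: a step with success probability q needs
# 1/q attempts on average (geometric distribution).

def expected_tries(q):
    return 1.0 / q

def monolith_time(sub_time, main_time, n_subs, p):
    # All submodels must be simultaneously correct, so the success
    # probability of one full pass is p ** n_subs.
    one_pass = n_subs * sub_time + main_time
    return one_pass * expected_tries(p ** n_subs)

def partitioned_time(sub_time, main_time, n_subs, p):
    # Each submodel is checked and rerun on its own before Main runs once.
    return main_time + n_subs * sub_time * expected_tries(p)

print(round(monolith_time(5, 20, 4, 0.7)))     # 167 minutes, as in the text
print(round(partitioned_time(5, 20, 4, 0.7)))  # 49 minutes, as in the text
```

Running the submodel checks in parallel would shrink the 4(5/p) term further, as the text notes.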
    Since human intervention may be required with final, as well as trial, runs, the ramifications of these involvements must be understood.

    This example illustrates several points. First, model design has important implications for model operations; that is, the first step toward efficient model operations is to be cognizant of operating characteristics during the design phase. Second, the time savings are obvious to an algorithm designer, but we have a blind spot when it comes to applying the same concepts to the human portion of model operations. Partitioning is natural in most complex models, but the best partition is not always the first partition tried. Third, even if we need to sacrifice algorithmic


    efficiency when designing a model, though it is rare when we do, we can still come out ahead in the production process. This last point is equivalent to looking at the total production process, not, for example, just machine efficiency. That is, we must take a global view of operating models.

    We all have an intuitive appreciation of these points and they seem obvious. Yet, there has been little systematic exploration into how one should define and partition modeling activities from an operations management perspective. This is not surprising, since it took from 1776, when Adam Smith published his book, to the early 20th century for work methods to be studied systematically.

    EXAMINING THE WORK FLOW

    Managing the work flow in an operating modeling system is halfway between managing product development and managing a production process. One is managing a process that involves considerable thinking and reworking of ideas. Yet, there are products that have to be produced regularly. Interestingly, in the auto industry, managing product development has become equally critical with managing production (White 1992) and high development costs have kept GM's Saturn division from becoming profitable. In developing its new line of cars, Chrysler reduced the number of workers needed to develop its new models from 2,000 for the K cars to 740 for the new LH cars, and lead times for developing the models were cut from 54 to 39 months (Stertz 1992). Many of the product development lessons of the auto industry can be applied to model management.

    The importance of examining the work flow can be understood from the design of IFFS (Murphy et al.). To determine the demand for oil and the level of imports, a world oil price is needed. To calculate it, demands are needed from all consuming nations, including the United States. Thus, one must cycle between a model of international markets and a model of U.S.
    markets to achieve a market equilibrium. Until recently, each model was operated by a different group within the Energy Information Administration (EIA). The way the system currently operates is that a price path is produced by the international model. This is input to the U.S. model through a file transfer that involves staff communications. A U.S. model run is made using these price paths. If these price paths cause significant changes in oil imports, then the international model is rerun and new price paths are transferred. That is, the work flow matches the original design of the organization, where international market

    analysis was separated from domestic policy analysis. This model organization is very inefficient, because the international model takes less than one second to determine the price in any one year and the U.S. model takes substantially longer.

    EIA has gone through a reorganization and the domestic and international functions have been combined within the same group. EIA is also replacing IFFS with a new system called the National Energy Modeling System (NEMS). Initially, the same approach was incorporated into the design of NEMS, except that human intervention was eliminated. That is, the original work methods were to be automated. After examining the work flow, it became clear that the most efficient approach was to use the international model to define a supply curve to be fed into the domestic portion of the model, replacing multiple solutions of the more time-consuming domestic model with a single solution.

    One of the useful tools for understanding and improving the work flow in industrial operations is the product-flow process chart, also called a workflow diagram (see Figure 3 for an example). This is a diagram that shows all the material flows and tasks necessary to produce a part or a product. This is similar to a "Gozinto" chart (Vazsonyi 1958). Different symbols are used to represent the different tasks that are performed, such as machining operations, material handling, and storage. With these charts, one can examine the sequence of tasks, look for unnecessary actions, and communicate the steps necessary to make the product.

    Figure 3. An operations workflow model. (Diagram: the standard chart symbols for Operation, Transfer, Storage, and Inspection.)

    What differentiates modeling activities from physical production activities is that there is no looping in product-flow process charts because of the different environments for redoing rejects. If a part fails inspection, it is removed from the production flow and is


    either discarded or sent for rework. If there is the equivalent of a part failure in modeling, the production process repeats until the step is done right. That is, the modeling equivalent of the inspection step in a workflow diagram has two possible flow directions, as in if statements in programming flowcharts. I show how to adapt the workflow diagram to modeling and illustrate the diagram with an analysis of PIES and IFFS.

    WORKFLOW DIAGRAMS FOR PIES AND IFFS

    PIES used linear programming to compute an economic equilibrium. However, the kind of equilibrium computed with linear programming is a competitive equilibrium where marginal cost equals price. Because of this, there was an elaborate superstructure that iterated between the linear program and adjustments to the LP that reflected regulated market equilibria and interfuel demand substitution. See Greenberg and Murphy (1985) for a discussion of the kinds of computations needed to find a regulated equilibrium, and Hogan for a description of PIES and the solution algorithm.

    Operationally, the model was organized around the components of energy supply (such as oil, natural gas and coal) and energy conversion activities (such as electric power generation and oil refining). Using a matrix generation language, a set of tables were generated that were essentially condensed activities consisting of supply curves produced by the supply models and small LP models of conversion processes. These intermediate tables were then used to generate the full matrix. After the matrix was generated, it was revised to reflect minor adjustments in a specific scenario using a revise file. Then the model was solved and reports were generated. The solution procedure was to solve the LP, adjust for market regulations and certain price effects on demand, and repeat these steps until convergence. The partition was into submodels consisting of the individual supply blocks, plus the demand model, as in Figure 4. Main in Figure 2 was the complete LP and regulatory superstructure.

    Breaking up the model generation in this way took advantage of many of the benefits of partitioning. For example, in modeling policies that affect electric utilities, the tables containing oil supply information did not have to be regenerated. Using a revise meant not having to regenerate the matrix for small changes in scenarios.

    PIES was still a monolithic model because most of the effort was concentrated in generating the matrix from tables of condensed activities and computing the

    Figure 4. Energy markets. (Diagram: supplies of oil, natural gas, electricity, and coal flow through transportation to demand; oil passes through refineries to become products.)

    equilibrium. Responsibility for supply and demand components was distributed among staff members and one person was in charge of putting the pieces together into the LP. Combining the pieces and maintaining the superstructure for computing the equilibrium was a difficult task that required remembering a lot of detail and knowing how all the pieces interacted. Completing a run spanned several hours when starting from matrix generation. Usually, the construction of the input tables was completed during the day and the integrated model was run at night. The person responsible for matrix generation and model solution worked all day on coordinating activities among staff members and then spent the evening checking and resubmitting runs.

    We can see the problem with this approach in the workflow diagram of Figure 5. Tasks done by people are represented by circles, computerized steps by rectangles, and evaluation steps by diamonds, as in program flowcharts. Arrows, which represent material handling in product-flow process charts, indicate transfers between people or organizational units. To focus the discussion, I have simplified the diagram somewhat by, for example, leaving out the international interactions. The flow began with entering data. Some of the data or scenario specifications had to be transferred from individual sector components to the revise. This was a source of errors because these actions often occurred at different times and were occasionally done with inconsistent information. Notice that there are many transfers among staff to accomplish a single test run, and the entire system had to be run to complete a test for any sector. The input tables did not yield information useful for debugging or analysis. By being condensed activities, that is, activities with most of the 0's removed, there were far more numbers to scan than in the raw data


    used in generating the activities. Completing all the human transfers was very time consuming because the person handling the matrix generation/model solution part was usually working on something else, which delayed the trials with new input files.

    Figure 5. The workflow diagram for PIES. (Diagram: columns for Coal, Oil, Nat. Gas, Refineries, Electricity, L.P., Revise, and Demand, with steps for entering data, transferring some data to the revise, changing code, generating tables, transferring to the matrix generator, generating and revising the matrix, solving the model, producing and distributing copies of the output, checking output, communicating that runs are good, and preparing reports.)

    IFFS is organized differently from PIES. Instead of having a large integrating model that brings together many little pieces, each fuel component can be run as its own equilibrium model, as indicated by the dashed

    lines in Figure 4, and the full integrating structure is used to capture interfuel substitution only. Each person or group is assigned a fuel market model. All testing and debugging of individual fuel scenarios is completed before the full equilibrium model is run. The integrating portion contains no model components, just the market equilibration mechanism. The individual fuel models are debugged using the integrating structure and the other fuel models are shut


    off. That is, the outer loop that equilibrates among markets is not executed during most of the debugging.

    We can split the workflow diagram for IFFS into the two components in Figures 6 and 7, the first for operating a submodel and the second for operating the entire system. It is apparent that the fuel module diagram has a short testing cycle without any transfers between people or groups. The long cycles remain for the entire system because there are some aspects of the submodel interactions that become apparent only when the full model is run. The frequency with which the long cycles are followed is far lower than it was with PIES.

    This organization helps reduce errors and the costs of errors in several ways. First, the person making a change in a fuel component sees a solution to the module rather than a table of activities. Having a solution allows the analyst to use a knowledge of the markets and previous runs to apply simple reasonableness tests to the trial runs. Second, the feedback is measured in minutes rather than hours, because solution times for submodels are far shorter than the times for the entire system. This allows more tries in a day and provides feedback while the actions taken are fresh in the analyst's mind. Third, since the same integrating software is used by everyone even during testing, there are fewer errors due to lack of familiarity with run-submission procedures. Consequently, analyses and updates that involve single fuels still use the entire model, but, because the person in charge of that fuel submodel performs the runs, there is more incentive to get the submodel representation and interfaces with other components correct from the beginning, and there is less room for communication errors.

    Figure 6. A workflow diagram for each IFFS fuel module. (Diagram: Enter Data → Change Code → Run Module → Check Module → Transfer to Integrating Staff.)

    ERROR REDUCTION

    Workflow analysis can be used to reduce the costs of errors by finding them sooner and eliminating wasted effort. In this section I examine ways to reduce the occurrence of errors.

    There are three sources of errors: errors in run submission, implementation, and coordination. That is, errors come from making changes and failing to follow through on the production steps. These are also major sources of errors in physical production processes. The error problem is significant: The number of debugging runs can exceed the number of correct scenarios by one or two orders of magnitude for a rapidly changing model.

    Run-submission errors are true errors, but errors in implementation fall into two categories, learning exercises and true errors. Learning exercises are necessary experiments, and these errors are inevitable. They are reduced by experience. True errors come from missing some aspect of the scenario, coding errors, or miscommunication. I combine the two categories of errors into one because it is hard to define the boundaries between them.

    Since model runs are not mass production items like automobiles, only a few analogies can be drawn from the quality control literature. One is the total quality management dictum to eliminate the causes of errors rather than just fix them. Since the same errors tend to repeat, we can do a Pareto analysis (see Evans and Lindsay 1989) to isolate the dominant sources of errors. The causes of errors tend to have a Pareto distribution, implying that a few sources generate most of the defects. This is sometimes called the 80-20 rule in that 20% of the sources of errors cause 80% of the errors.
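Such a tabulation is easy to automate once error categories are recorded for each failed run. A minimal sketch; the category names and counts below are invented for illustration, not taken from PIES or IFFS:

```python
from collections import Counter

# Hypothetical log: one entry per failed run, labeled with its cause.
error_log = (
    ["revise/table mismatch"] * 42 + ["stale data file"] * 23 +
    ["coding error"] * 18 + ["wrong scenario flag"] * 9 +
    ["solver tolerance"] * 5 + ["other"] * 3
)

def pareto(log, coverage=0.8):
    """Smallest leading set of causes accounting for `coverage` of all errors."""
    running, top = 0, []
    for cause, count in Counter(log).most_common():
        top.append(cause)
        running += count
        if running / len(log) >= coverage:
            break
    return top

print(pareto(error_log))  # the few causes behind 80% of the errors
```

With these invented counts, three of the six categories account for over 80% of the failures, which is the pattern the 80-20 rule predicts.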
    To find the sources of errors, one must make the effort to categorize the sources of error and gather the appropriate objective data; this is because even highly skilled analysts can miss obvious patterns and their abilities often allow them to function well even with bad work methods.

    Since acceptance sampling has no meaning here because the task is to redo every run until the results are usable, the appropriate industrial engineering analogy is the poka-yoke approach to quality, as developed by Shingo (Shingo and Robinson 1990). Also, see Drucker (1991). Here, the production process is designed to avoid errors from the beginning and to find ways of performing 100% inspection at each stage of the production process where there is the potential for error. One does not rely entirely on evaluating the end result.

    Three strategies for eliminating errors at the source


    Figure 7. A workflow diagram for the IFFS integrating module. (Diagram: columns for Coal, Oil, Nat. Gas, Refineries, Electricity, and Demand; each fuel module is prepared and transferred to the integrating staff, who produce an integrated run, transfer the output, check the results, and prepare reports.)

    are to minimize and formalize the places where coordination is necessary, automate the steps where there is the potential for human error, and build a set of analysis tools that automate the checking of the most common errors and test for reasonableness of the model's inputs and outputs. The third strategy is important because a common error is failing to recognize problems with model results.

    Coordination issues should be examined in the context of workflow diagrams. A common source of run-submission errors is making changes in several locations to construct a new scenario. As mentioned before, coordinating the separate changes in input tables and the revise file in PIES when changing scenarios was a relatively common source of errors.
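A pre-submission cross-check can catch exactly this kind of coordination error. A minimal sketch; the field names, ranges, and values are hypothetical, not drawn from the PIES input tables:

```python
# Flag scenario inputs that are out of range or inconsistent between the
# two places they must be entered (here, sector tables and a revise file).

def check_scenario(sector_inputs, revise_inputs, ranges):
    """Return a list of human-readable problems found in the inputs."""
    problems = []
    for key, value in sector_inputs.items():
        lo, hi = ranges.get(key, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            problems.append(f"{key}={value} outside [{lo}, {hi}]")
        if key in revise_inputs and revise_inputs[key] != value:
            problems.append(f"{key} differs between sector tables and revise file")
    return problems

ranges = {"world_oil_price": (5.0, 100.0)}               # hypothetical bounds
sector = {"world_oil_price": 27.0, "coal_growth": 0.02}  # sector-table entries
revise = {"world_oil_price": 31.0}                       # inconsistent duplicate
print(check_scenario(sector, revise, ranges))
```

A check of this sort costs seconds per run and removes one of the error sources that, in PIES, was only discovered after an overnight run.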

    Tasks that interrelate should be part of the same set of actions and performed by one person, to the extent possible. Menus can be used to place like input actions together so that one is reminded of the joint actions that must be taken during run submission.

    Input-data checks can be automated by incorporating checks against reasonable ranges for input values, and the techniques of object-oriented programming can be used to automate run submission. For example, scenarios can be grouped into classes with class hierarchies that recognize what actions should be taken when submitting a certain scenario. Using hierarchies can simplify the process of constructing new scenarios.

    Tools that facilitate the analyst's efforts to understand a model and explain the model's results can be


    built. One tool that can be used to evaluate mathematical programming models before and after they are solved is ANALYZE (Greenberg 1991). It can be used with linear programming models to make schemas of submatrices, presenting a cognitively understandable view of the model structure. A schema is an aggregate view of a matrix consisting of the sign patterns of the coefficients and symbols that represent ranges of the magnitudes of coefficients. The activities and rows are aggregated in the schema into the classes defined during model definition (for example, production or transportation classes). By using it, one can check for reasonableness of the data values and see that the appropriate linkages between submodels are made.

    After the model is solved, ANALYZE can assist in assessing the quality of a solution. For example, it can explain results such as prices by tracing the complex paths in an LP that builds the prices through a sequence of actions which entails a cost at each step. The modeler can use ANALYZE to search through the matrix and the solution file to understand other aspects of the solution. The primary value of tools such as ANALYZE is in reducing the time devoted to inspection and error diagnosis, improving the probability of finding errors, and enhancing the understanding of the solution.

    RUN-TIME REDUCTION

    The main complaint with complex models is how long it takes to get the results from a new scenario. Occasionally, the reason for the time taken is because constructing the new scenario is a creative act that requires serious thinking and experimentation. More often, the problem is not thinking through the production process.
In this section, I present some techniques beyond workflow diagrams for improving the production process.

One does not always have the opportunity to change the entire structure of a model to achieve the desired partitioning. This is especially true when the best representation is a mathematical program. Tools to cope with the best possible partition have to be developed. Several strategies were tried with PIES. One that failed was to maintain two versions of the model, a skeleton for testing and a complete model for production. The skeleton used model components with fewer aggregated activities, except for the component being tested. The cost and confusion associated with maintaining two versions of the model exceeded the benefits. The benefits were minimal because the model was more likely to have infeasible solutions,
adding to the debugging time. The analyst effort required to maintain everything almost doubled, and good analysts are a more constrained resource than computers.

Another strategy that does not work is to maintain a small, reduced-form model for very quick turnaround analyses. As with the above strategy, we wind up maintaining two models instead of just one, which increases the workload on analysts for a small savings in computer time.

This strategy has a more serious flaw than the previous one. Once one goes public with a set of numbers from the reduced-form model, one winds up investing a lot of time trying to calibrate the original model to match the results from the small model. That is, the tail starts to wag the dog. This problem is mitigated with clients that understand the nature of modeling approximations. In our situation, Congress was not tolerant of the slightest differences in results due to changing approximations. Unofficial analyses with IFFS are done using the individual fuel modules, ignoring full market interactions. This reduces the turnaround time considerably.

Aggregate models of components can be used for planning while retaining the detailed models for short-term operations. Here the aggregate model is not used so much to improve the efficiency of modeling operations as to cope with the realities of solving very large models.

In some situations, we have been able to use reduced-form models as a form of model decomposition. For example, the outputs of the oil and gas supply models in PIES were step-function approximations to the supply curves for each year in which an equilibrium was calculated. The refinery model in IFFS is a set of nonlinear functions fitted to the output of a standard refinery model. This kind of reduced-form modeling has been used in many places. See, for example, Goeller et al., where a collection of models was used to study the water supply of The Netherlands.
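As a sketch of the step-function idea (with made-up numbers and a hypothetical helper name; the actual PIES procedure was more elaborate), one can turn sampled (price, quantity) points from a detailed supply model into a piecewise-constant supply curve:

```python
from bisect import bisect_right

def step_supply(samples):
    """Build a step-function approximation to a supply curve.

    samples: list of (price, quantity) points from the detailed model,
    e.g., one solve per trial price.  Returns a function q(p) that is
    piecewise constant: the quantity offered jumps to a sampled level
    once the price reaches the sampled price.
    """
    pts = sorted(samples)                 # sort by price
    prices = [p for p, _ in pts]
    quantities = [q for _, q in pts]

    def q(price):
        i = bisect_right(prices, price)   # number of sampled prices <= price
        return 0.0 if i == 0 else quantities[i - 1]

    return q

# Hypothetical output of an oil-supply model: 3 units offered at $10, etc.
curve = step_supply([(10.0, 3.0), (20.0, 5.0), (30.0, 6.0)])
```

In an equilibrium model, each step would typically become a supply activity bounded by the step's increment, which is what keeps the linkage one-way: the main model reads the curve but never alters its parameters.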
From my experience, the key to using this kind of decomposition is that there is no feedback from a solution of the main model that alters the parameters of the reduced-form model. That is, the submodel linkages are acyclic. Also, consistency is maintained by always using the same reduced-form models.

Shingo (see Shingo and Robinson) notes three places where improvements in the production process can be made: in the operations, in the process, and in sequencing the actions. Improving the operations involves improving the way each step is done. Improving the process consists of choosing a better set of
steps. The sequencing is improved by doing tasks in advance that can be done ahead of time or doing several tasks in parallel.

Operations can be improved through automating tasks. A good place to begin with automation is in the linkages to the data sources. One can use modern database techniques to simplify data updates, as in the MIMI modeling system (Chesapeake Decision Sciences 1988). The Achilles heel of many models is the way data are handled. Models still exist with data embedded in the code rather than maintained separately. Bennett articulates many of the issues associated with managing data in large modeling systems.

Automation can be a viable tool in assisting in the construction of new scenarios from different combinations of existing scenarios. One can use menus that offer choices rather than rely on the analyst's memory. Doing this has been simplified by the new software tools that facilitate windowing, presenting tables, etc. One can build scenario libraries that are organized in a manner that simplifies the task of piecing together new scenarios. Very little work has been done in scenario management as a general subject. MathPro (1990) is a mathematical programming system that is explicit about scenario management. It maintains a tree structure showing the inheritance of model components and data from prior cases that have been constructed. With other systems, an ad hoc scenario manager has to be implemented, as was done with PIES.

When the new scenario involves truly new model structures, scenario development cannot be automated. Note, however, where changes take place. The more that changes can be isolated within one submodel, the more likely the changes can be made with fewer operations.
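The tree-structured inheritance described above can be sketched in a few lines (an illustrative design only, not MathPro's actual interface): each scenario records just its own overrides and defers everything else to its parent.

```python
class Scenario:
    """A scenario that inherits model data from a parent scenario,
    storing only the values it overrides."""

    def __init__(self, name, parent=None, **overrides):
        self.name = name
        self.parent = parent
        self.overrides = overrides

    def get(self, key):
        # Walk up the inheritance chain until some ancestor defines the key.
        if key in self.overrides:
            return self.overrides[key]
        if self.parent is not None:
            return self.parent.get(key)
        raise KeyError(key)

# Hypothetical cases: a base case and a high-oil-price variant.
base = Scenario("base", oil_price=18.0, gas_demand=100.0)
high_oil = Scenario("high_oil", parent=base, oil_price=30.0)
```

A new scenario is then pieced together by naming a parent and listing only what differs, which also documents how each case was derived from prior cases.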
For this to happen, one has to anticipate the kinds of changes that are likely to occur and invest in sufficient generality in the model to facilitate these changes. Logging procedures should be developed as a way of tracking work by members of the team.

In the energy area, once a consensus developed on the broad energy picture, when policy makers started evaluating their options with conventional technologies, they focused on policies affecting fuel supplies and the different demands for these fuels with a 5- to 15-year planning horizon. This pointed to the fuel-market orientation of the submodels in IFFS. All changes take place within a single module when the new representation involves a specific fuel, and a quick analysis can be done using just the single fuel module at a substantial saving over solving the whole system. In contrast, with PIES, policies involving, say, natural
gas, often affected both the generation of the supply curves and the pricing mechanisms in the equilibrium module. Developing a new scenario, therefore, required altering the model in several places, and several people had to coordinate their actions. The partitioning in IFFS reduced this coordination.

The last step in thinking through the production process is to examine the timing of activities. The worst situation is when everything has to be done sequentially. Tasks can often be done in parallel. For example, one need not wait for final data before testing an LP formulation. Randomization techniques or good guesses can be used to generate trial data before the final data are collected.

An important consideration is whether the model needs to be run at all to address some new question, because the goal of analysis is to produce knowledge and not just model runs. Runs of a frequently used model should confirm analysts' understanding rather than be part of a discovery process. Quite often, interpolations using existing output are sufficient to provide an immediate answer, and a full model run can be done later as a check. Doing this requires a level of maturity that is often missing in analysts. Yet, this should be part of their professional development. When analysts shy away from using their judgments and hide behind model runs, they are making model runs the product, not the process that leads to the product, which is understanding.
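A minimal sketch of such an interpolation (hypothetical runs and numbers): given past runs at several settings of one policy parameter, bracket the new setting and interpolate linearly, leaving a full run for later confirmation.

```python
def interpolate_runs(runs, x):
    """Estimate a model output at parameter value x by linear
    interpolation between the two bracketing past runs.

    runs: list of (parameter, output) pairs from existing model runs.
    """
    lo = max((r for r in runs if r[0] <= x), key=lambda r: r[0])
    hi = min((r for r in runs if r[0] >= x), key=lambda r: r[0])
    if lo == hi:                              # x matches an existing run
        return lo[1]
    frac = (x - lo[0]) / (hi[0] - lo[0])
    return lo[1] + frac * (hi[1] - lo[1])

# Past runs, made-up values: (tax rate, projected demand).
runs = [(0.00, 120.0), (0.10, 110.0), (0.20, 104.0)]
```

An analyst could answer a question about a 5 percent tax immediately from this table and schedule the confirming model run for later.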

CONCLUSIONS

As management scientists we often forget that we have to be managers as well. Although we do not produce physical products, we have to be cognizant that we are involved in a production process that can be managed. As analysts, we often enjoy the intensity and long work hours that come from operating large-scale models because it often adds to the meaning of our work. However, maintaining a crisis atmosphere when there is no crisis is debilitating over the long run.

The difference between large-scale and small-scale modeling is that with complex models we have to think through the operations to be managed and how our modeling decisions affect model operations. What happens when one does not think through the production process is illustrated in Mehring and Gutterman (1990), where Amoco was unsuccessful in implementing an LP planning model in the United Kingdom. This case is interesting because Amoco is one of the premier users of complex models, and
yet even this organization occasionally runs into difficulty.

When examining operations we need to keep in mind the following questions:

* What are we doing?
* Why are we doing it?
* How are we doing it?
* How can we do it better?

Always keeping these questions in mind helps to break old habits and improve the operating characteristics of models.

I have presented four tools to assist in addressing these questions: workflow diagrams, Pareto analysis, automation, and analysis tools. I have discussed workflow diagrams in detail because they present an operational view of models that can lead to large changes in the way the model is built and operated. Also, this tool is the least understood by management scientists. When using workflow diagrams, one should do the following:

1. Look for elaborate loops with several human interventions to see if they can be replaced by a sequence of shorter loops. This catches errors sooner and eliminates inefficient iterative interactions.
2. Leave larger loops only if they consist of checks on problems that can be discovered only through component interactions.
3. Do not take the sequence of tasks as given, since a natural order occurs less often than with physical production. This may allow the work flow to be simplified and the workload balance to be improved.
4. Examine each task that involves human intervention and ask if it is necessary, and, if necessary, whether or not it can be simplified. Modeling systems, like production systems, suffer from carrying over formerly efficient work methods to new environments where they are no longer appropriate.
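The first rule can even be checked mechanically once a workflow diagram is stored in machine-readable form (a hypothetical encoding, with the feedback loops listed explicitly): flag any loop that passes through more than one human intervention.

```python
def flag_long_loops(loops, manual_tasks):
    """Flag feedback loops in a workflow diagram that contain more than
    one human intervention -- candidates for replacement by a sequence
    of shorter loops.

    loops: list of loops, each written as a list of task names with the
    starting task repeated at the end.
    manual_tasks: set of tasks requiring an analyst.
    """
    return [loop for loop in loops
            if sum(t in manual_tasks for t in loop) > 1]

# Hypothetical workflow for producing a model run.
loops = [
    ["edit data", "run LP", "inspect solution", "edit data"],
    ["run LP", "auto-check ranges", "run LP"],
]
flagged = flag_long_loops(loops, {"edit data", "inspect solution"})
```

Here the first loop is flagged because an analyst appears in it twice, while the automated range-check loop passes.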

Automation is the best understood tool for improving the manageability of models. Furthermore, analysts usually push for more detail to facilitate studies in their area at the expense of overall system performance. There has probably never been a formal effort to track sources of errors in model operations and carry through a Pareto analysis. Analysts know best where they are having problems. However, they can be myopic in their view when describing the cause, and a formal approach can provide a better understanding of the root causes of problems. Also, there are few analysis tools beyond such passive devices as check lists and summary reports.
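Such a formal effort need not be elaborate; a sketch with hypothetical error categories shows the idea: tally a log of failed runs and report the few causes that account for most of them.

```python
from collections import Counter

def pareto(error_log, coverage=0.8):
    """Rank error causes by frequency and return the smallest set of
    causes accounting for at least `coverage` of all recorded errors."""
    counts = Counter(error_log).most_common()   # most frequent causes first
    total = sum(n for _, n in counts)
    vital, running = [], 0
    for cause, n in counts:
        vital.append(cause)
        running += n
        if running / total >= coverage:
            break
    return vital

# Hypothetical log of failed runs over a quarter.
log = ["bad input range"] * 12 + ["stale data link"] * 5 + \
      ["wrong scenario file"] * 2 + ["solver limit"] * 1
```

For this log, two of the four causes cover 85 percent of the failures, pointing effort at the input checks and data linkages first.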

One can extend Pareto analysis to studying model results, understanding where the value added of the model lies, and testing the model for where it is stable or unstable. One can cut back on the model where the world and the model are stable. If the model and the world are unstable, one should understand the limits of developing understanding through models. This is important because the natural pressure is toward increased detail at the expense of operational efficiency.

The best indication of a management problem is to find oneself or one's staff working in a crisis atmosphere to produce model runs when there is no severe crisis in the environment. For the long-run viability of a modeling system, crisis cannot be the norm, and the operations of a modeling system have to be managed to maintain continuity of staff and to have the resources for enhancing the models so that they retain their connection to reality.

There is one caveat to the discussion presented here. Everything has been described in the context of my experience with energy market models. One of the reasons why there has been little written about managing models is that those who have had to deal with these management issues have had to build up a unique combination of institutional knowledge and technical skills to build and operate a large system. One rarely gets the opportunity to build another system in a different subject area involving a different set of institutions and different types of models. My experiences have been with model-intensive, as opposed to data-intensive, systems. Energy analysis involves significant structural changes stemming from altering rules for regulating markets, and major data updates occur annually as the data for a single year are added to the time series. Bennett describes an environment where complex data changes are necessary for each new set of model runs.
To some extent, when those of us who have built and managed large systems describe our experiences, we are like the blind men describing their respective parts of the elephant. Clearly, more work needs to be done to understand the management of large systems.

ACKNOWLEDGMENT

This research has been supported by the Amoco and Shell oil companies.

REFERENCES

ALLEN, P., B. BENNETT, M. CARILLO, B. GOELLER AND W. WALKER. 1992. Quality in Policy Modeling. Interfaces 22(4), 70-85.

BAETZ, B. W., E. I. PAS AND A. W. NEEBE. 1989. Trash Management: Sizing and Timing Decisions for Incineration and Landfill Facilities. Interfaces 19(6), 52-66.

BENNETT, B. 1989. A Conceptual Design for the Model Integration and Management System, N-2645-RC. The Rand Corporation, Santa Monica, California.

BLAIS, J. Y., J. LAMONT AND J. M. ROUSSEAU. 1990. The HASTUS Vehicle and Manpower Scheduling System at the Société de Transport de la Communauté Urbaine de Montréal. Interfaces 20(1), 26-42.

BUCHANAN, J. E., S. C. GARVEN, O. GENIS, J. F. SHAPIRO, V. SINGHAL, J. M. THOMAS AND S. TORPIS. 1990. A Multirefinery, Multiperiod Modeling System for the Turkish Petroleum Refining Industry. Interfaces 20(4), 48-60.

CHESAPEAKE DECISION SCIENCES. 1988. MIMI/LP User Manual, Version 2.63. Chesapeake Decision Sciences, New Providence, New Jersey.

COHEN, M., P. KAMESAM, P. KLEINDORFER, H. LEE AND A. TEKERIAN. 1990. Optimizer: IBM's Multi-Echelon Inventory System for Managing Service Logistics. Interfaces 20(1), 65-82.

DRUCKER, P. 1991. Japan: New Strategies for a New Reality. Wall Street Journal, October 2, 1991, p. A12.

EVANS, J. R., AND W. M. LINDSAY. 1989. The Management and Control of Quality. West, St. Paul, Minnesota.

FETTER, R. B. 1991. Diagnosis Related Groups: Understanding Hospital Performance. Interfaces 21(1), 6-27.

FORD, A. 1990. Estimating the Impact of Efficiency Standards on the Uncertainty of the Northwest Electric System. Opns. Res. 38, 580-597.

GASS, S. I. 1987. Managing the Modeling Process: A Personal Reflection. J. Opnl. Res. 31(1), 1-8.

GASS, S. I., AND R. L. SISSON. 1975. A Guide to Models in Governmental Planning and Operations. Sauger, Potomac, Maryland.

GOELLER, B. F., AND THE PAWN TEAM. 1985. Planning the Netherlands' Water Resources. Interfaces 15(1), 1-33.

GREENBERG, H. J. 1991. A Primer for ANALYZE: A Computer-Assisted Analysis System for Mathematical Programming. Mathematics Department, University of Colorado, Denver, Colorado.

GREENBERG, H. J., AND F. H. MURPHY. 1985. Computing Market Equilibria With Price Regulations. Opns. Res. 33, 935-954.

HOF, J., AND T. BALTIC. 1990. Multilevel Analysis of Production Capabilities of the National Forest System. Opns. Res. 39, 543-553.

HOGAN, W. W. 1975. Energy Policy Models for Project Independence. Comput. and Opns. Res. 2, 251-271.

KENT, B., B. B. BARE, R. C. FIELD AND G. A. BRADLEY. 1991. Natural Resource Land Management Planning Using Large-Scale Linear Programs: The USDA Forest Service Experience With FORPLAN. Opns. Res. 39, 13-27.

MATHPRO. 1990. MathPro Usage Guide: Introduction and Reference. MathPro, Inc., Washington, D.C.

MEHRING, J. S., AND M. M. GUTTERMAN. 1990. Supply and Distribution Planning Support for AMOCO (UK). Interfaces 20(4), 95-104.

MURPHY, F. H., J. J. CONTI, R. SANDERS AND S. H. SHAW. 1988. Modeling and Forecasting Energy Markets With the Intermediate Future Forecasting System. Opns. Res. 36, 406-420.

SCHMITZ, J. D., G. D. ARMSTRONG AND J. D. C. LITTLE. 1990. CoverStory: Automated News Finding in Marketing. Interfaces 20(6), 29-38.

SHINGO, S., AND A. ROBINSON. 1990. Modern Approaches to Manufacturing Improvement: The Shingo System. Productivity Press, Norwalk, Connecticut.

SIMON, H. A. 1981. The Sciences of the Artificial. MIT Press, Cambridge, Massachusetts.

STERTZ, B. 1992. Detroit's New Strategy to Beat Back the Japanese Is to Copy Their Ideas. Wall Street Journal, October 1, 1992, p. A1.

VAZSONYI, A. 1958. Scientific Programming in Business and Industry. John Wiley, New York.

WEISS, H., AND M. GERSHON. 1992. Production and Operations Management, 2nd Edition. Allyn and Bacon, Boston.

WHITE, J. B. 1992. For Saturn, Copying Japan Yields Hot Sales but No Profits. Wall Street Journal, October 1, 1992, p. A12.