

Learning to learn, from past to future

Kenneth G. Cooper, James M. Lyneis*, Benjamin J. Bryant

Business Dynamics Practice, PA Consulting Group, 123 Buckingham Palace Road, London SW1W 9SR, UK

Abstract

As we look from the past to the future of the field of project management, one of the great challenges is the largely untapped opportunity for transforming our projects' performance. We have yet to discern how to systematically extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn. In this paper, the authors discuss some of the reasons behind the failure to systematically learn, and present an approach and modeling framework that facilitates cross-project learning. The approach is then illustrated with a case study of two multi-$100 million development projects. © 2002 Elsevier Science Ltd and IPMA. All rights reserved.

Keywords: Learning; Strategic management; Rework cycle; Project dynamics

1. Introduction

As we look from the past to the future of the field of project management, one of our great challenges is the largely untapped opportunity for transforming our projects' performance. We have yet to discern how to extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn.

One who does not learn from the past... Whether the motivation is the increasingly competitive arena of "Web-speed" product development, or the mandate of prospective customers to demonstrate qualifications based on "past performance," or the natural drive of the best performers to improve, we are challenged to learn from our project successes and failures. We do so rather well at the technical and process levels... we build upon the latest chip technology to design the smaller, faster one... we learn how to move that steel, or purchase order, more rapidly. But how does one sort among the extraordinary variety of factors that affect project performance in order to learn what it was about the management that helped a "good" project? We must learn how to learn what it is about prior good management that made it good, that had a positive impact on the performance, and then we must learn how to codify, disseminate, and improve upon those management lessons. Learning how to learn future management lessons from past performance will enable us to improve systematically and continuously the management of projects.

A number of conditions have contributed to and perpetuated the failure to systematically learn on projects. First is the prevalent but misguided belief that every project is different, that there is little commonality between projects, or that the differences are so great that separating the differences from the similarities would be difficult if not impossible. Second, the difficulty in determining the true causes of project performance hinders our learning. Even if we took the time to ask successful managers what they have learned, do we really believe that they can identify what has worked and what has not, what works under some project conditions but not others, and how much difference one practice versus another makes? As Wheelwright and Clark [1, p. 284–5] note:

... the performance that matters is often a result of complex interactions within the overall development system. Moreover, the connection between cause and effect may be separated significantly in time and place. In some instances, for example, the outcomes of interest are only evident at the conclusion of the project. Thus, while symptoms and potential causes may be observed along the development path, systematic investigation requires observation of the outcomes, followed by any analysis that looks back to find the underlying causes.


* Corresponding author. Tel.: +44-20-7730-9000; fax: +44-20-7333-5050. E-mail address: [email protected] (J.M. Lyneis).


Third, projects are transient phenomena, and few companies have organizations, money, systems or practices that span them, especially for the very purpose of gleaning and improving upon transferable lessons of project management. Natural incentives pressure us to get on with the next project, and especially not to dwell on the failures of the past. And fourth, while there are individuals who learn—successful project managers who have three or four great projects before they move to different responsibilities or retire—their limited span and career path make the systematic assessment and learning of transferable lessons that get incorporated in subsequent projects extremely difficult.

In order to provide learning-based improvement in project management, all of these conditions need to be addressed. Organizations need:

1. an understanding that the investment in learning can pay off, and that there need to be two outputs from every project: the product itself, and the post-project assessment of what was learned;

2. the right kind of data from past projects to support that learning; and

3. model(s) of the process that allow: comparison of "unique" projects, and the sifting of the unique from the common; a search for patterns and commonalities between the projects; and an understanding of the causes of project performance differences, including the ability to do analyses and what-ifs.

In the remainder of this paper, the authors describe one company's experience in achieving management science-based learning and real project management improvement. In the next section, the means of displaying and understanding the commonality among projects—the learning framework—is described.1 Then, an example of using this framework for culling lessons from past projects is demonstrated—transforming a project disaster into a sterling success on real multi-$100M development projects (Section 3). Finally, the simulation-based analysis and training system that provides ongoing project management improvement is explained.

2. Understanding project commonality: the rework cycle and feedback effects

We draw upon more than 20 years of experience in analyzing development projects with the aid of computer-based simulation models. Such models have been used to accurately re-create, diagnose, forecast, and improve performance on dozens of projects and programs in aerospace, defense electronics, financial systems, construction, shipbuilding, telecommunications, and software development [2,3,6].

At the core of these models are three important structures underlying the dynamics of a project (in contrast to the static perspective of the more standard "critical path"): (1) the "rework cycle"; (2) feedback effects on productivity and work quality; and (3) knock-on effects from upstream phases to downstream phases. These structures, described in detail elsewhere [4,5], are briefly summarized below.

What is most lacking in conventional project planning and monitoring techniques is the acknowledgement or measurement of rework. Typically, conventional tools view tasks as either "to be done," "in-process," or "done." In contrast, the rework cycle model shown in Fig. 1 represents a near-universal description of work flow on a project which incorporates rework and undiscovered rework: people working at a varying productivity accomplish work; this work becomes either work really done or undiscovered rework, depending on a varying "quality" (quality is the fraction of work done completely and correctly); undiscovered rework is work that contains as-yet-undetected errors, and is therefore perceived as being done; errors are detected, often months later, by downstream efforts or testing, whereupon they become known rework; known rework demands the application of people in competition with original work; errors may be made while executing rework, and hence work can cycle through undiscovered rework several times as the project progresses.
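To make the structure concrete, the flow just described can be expressed as a small discrete-time simulation. The sketch below is illustrative only: the function name, the fixed monthly time step, and all parameter values are assumptions for exposition, not the equations of any calibrated project model.

def simulate_rework_cycle(scope=1000.0, staff=20.0, productivity=5.0,
                          quality=0.8, discovery_months=6.0, months=60):
    """Return (month, perceived_done, really_done) tuples for a single rework cycle."""
    to_do = scope            # work to be done (tasks)
    really_done = 0.0        # work really done
    undiscovered = 0.0       # undiscovered rework: flawed work perceived as done
    history = []
    for month in range(months):
        # People work at a varying productivity on original tasks and known rework alike.
        work_done = min(staff * productivity, to_do)
        to_do -= work_done
        really_done += quality * work_done           # the fraction done completely and correctly
        undiscovered += (1.0 - quality) * work_done   # the rest is flawed but looks done
        # Errors surface after an average discovery delay and become known rework,
        # which competes with original work for the same people.
        discovered = undiscovered / discovery_months
        undiscovered -= discovered
        to_do += discovered
        history.append((month, really_done + undiscovered, really_done))
    return history

if __name__ == "__main__":
    for month, perceived, real in simulate_rework_cycle()[::12]:
        print(f"month {month:3d}: perceived {perceived:7.1f}  real {real:7.1f}")

Run with these illustrative numbers, perceived progress races ahead of real progress early on, and the gap closes only as undiscovered rework surfaces and is redone—the characteristic signature of the rework cycle.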

On a typical project, productivity and quality change over time in response to conditions on the project and management actions. The factors that affect productivity and quality are a part of the feedback loop structure that surrounds the rework cycle. Some of these feedbacks are "negative" or "controlling" feedbacks used by management to control resources on a project. In Fig. 2, for example, overtime is added and/or staff are brought on to a project ("hiring") based on work believed to be remaining (expected hours at completion less hours expended to date) and scheduled time remaining to finish the work.2 Alternatively, on the left of the diagram, scheduled completion can be increased to allow completion of the project with fewer resources. Other effects drive productivity and quality, as indicated in Fig. 2: work quality to date, availability of prerequisites, out-of-sequence work, schedule pressure, morale, skill and experience, supervision, and overtime.3
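The resourcing side of this controlling feedback can be sketched in the same illustrative style. The decision rule below computes the staffing and overtime implied by work believed remaining and the time left to the scheduled completion date; the names, the 160-hour person-month, and the overtime cap are assumptions for exposition, not values from the actual models.

def resource_response(expected_hours_at_completion, hours_expended,
                      months_to_scheduled_completion, current_staff,
                      hours_per_person_month=160.0, max_overtime_fraction=0.2):
    """Staff and overtime implied by work believed remaining and time remaining."""
    work_believed_remaining = max(expected_hours_at_completion - hours_expended, 0.0)
    months_remaining = max(months_to_scheduled_completion, 1.0)  # guard near the deadline
    required_rate = work_believed_remaining / months_remaining   # hours per month
    staff_needed = required_rate / hours_per_person_month
    # Overtime absorbs part of the gap before (and while) additional staff are hired;
    # the alternative, not shown here, is to slip the scheduled completion date.
    shortfall = max(staff_needed - current_staff, 0.0)
    overtime_fraction = min(shortfall / max(current_staff, 1.0), max_overtime_fraction)
    return staff_needed, overtime_fraction

# Example: 100,000 hours believed remaining, 12 months to go, 40 people on board.
print(resource_response(150_000, 50_000, 12, 40))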

1 The framework discussed in this paper is designed to understand project dynamics at a strategic/tactical level [5]. Additional frameworks and models will be required for learning other aspects of project management (see, for example, [1]).

2 In Fig. 2, arrows represent cause–effect relationships, as in hiring causes staff to increase. Not indicated here, but a vital part of the actual computer model itself, these cause–effect relationships can involve delays (e.g. delays in finding and training new people) and non-linearities (e.g. effects that saturate regardless of the amount of pressure).



Each of these effects, in turn, is a part of a complex network of generally "positive" or reinforcing feedback loops that early in the project drive productivity and quality down, and later cause them to increase.

For example, suppose that as a result of a design change (or because of an inconsistent plan), the project falls behind schedule. In response, the project may bring on more resources. However, while additional resources have positive effects on work accomplished, they also initiate negative effects on productivity and quality. Bringing on additional staff reduces the average experience level; less experienced people make more errors and work more slowly than more experienced people. Bringing on additional staff also creates shortages of supervisors, which in turn reduces productivity and quality. Finally, while overtime may augment the effective staff on the project, sustained overtime can lead to fatigue, which reduces productivity and quality.

Because of these "secondary" effects on productivity and quality, the project will make less progress than expected and contain more errors—the availability and quality of upstream work has deteriorated. As a result, the productivity and quality of downstream work suffer. The project falls further behind schedule, so more resources are added, thus continuing the downward spiral. In addition to adding resources, a natural reaction to insufficient progress is to exert "schedule pressure" on the staff. This often results in more physical output, but also more errors ("haste makes waste") and more out-of-sequence work. Schedule pressure can also lead to lower morale, which further reduces productivity and quality, and increases staff turnover.
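These effects are typically represented as multipliers on nominal productivity and quality. The sketch below shows one illustrative way to express them; the functional forms and coefficients are invented for exposition, whereas models used in practice calibrate many more non-linear relationships against project data.

def productivity_and_quality(nominal_productivity, nominal_quality,
                             fraction_new_staff, supervisor_ratio,
                             sustained_overtime_fraction, schedule_pressure):
    """Apply illustrative multiplicative penalties for current project conditions."""
    # New hires work more slowly and make more errors than experienced staff.
    experience_effect = 1.0 - 0.5 * fraction_new_staff
    # Fewer than roughly one supervisor per ten staff degrades speed and quality.
    supervision_effect = min(supervisor_ratio / 0.10, 1.0)
    # Sustained overtime causes fatigue.
    fatigue_effect = 1.0 - 0.8 * sustained_overtime_fraction
    productivity = (nominal_productivity * experience_effect
                    * supervision_effect * fatigue_effect)
    # "Haste makes waste": schedule pressure raises output but lowers quality.
    quality = (nominal_quality * (1.0 - 0.3 * schedule_pressure)
               * (1.0 - 0.4 * fraction_new_staff))
    return productivity, max(min(quality, 1.0), 0.0)

# Example: 30% new staff, one supervisor per 14 people, 15% overtime, high pressure.
print(productivity_and_quality(5.0, 0.85, 0.30, 1.0 / 14, 0.15, 0.8))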

A rework cycle and its associated productivity and quality effects form a "building block." Building blocks can be used to represent an entire project, or replicated to represent different phases of a project, in which case multiple rework cycles in parallel and in series might be included. At its most aggregate level, such building blocks might represent design, build and test. Alternatively, building blocks might separately represent different stages (e.g. conceptual vs. detail) and/or design functions (structural, electrical, power, etc.). In software, building blocks might represent specifications, detailed design, code and unit test, integration, and test.

Fig. 2. Feedback effects surrounding the Rework Cycle.

3 These are just a few of the factors affecting productivity and quality. Models used in actual situations contain many additional factors.

Fig. 1. The Rework Cycle.



When multiple phases are present, the availability and quality of upstream work can "knock on" to affect the productivity and quality of downstream work. In addition, downstream progress affects upstream work by fostering the discovery of upstream rework.
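In code, chaining building blocks amounts to letting one phase's quality-to-date scale the productivity and quality of the phase downstream of it, while downstream activity accelerates the discovery of upstream rework. The two-phase sketch below illustrates the idea; the phase names, coefficients, and start rule are assumptions for exposition only.

from dataclasses import dataclass

@dataclass
class Phase:
    to_do: float                 # work to be done in this phase (tasks)
    done: float = 0.0            # work really done
    undiscovered: float = 0.0    # undiscovered rework
    staff: float = 10.0
    productivity: float = 5.0
    quality: float = 0.85

    def quality_to_date(self):
        released = self.done + self.undiscovered
        return self.done / released if released else 1.0

    def step(self, upstream_quality_to_date=1.0, downstream_activity=0.0):
        # Knock-on: upstream errors slow this phase and degrade its own quality.
        productivity = self.productivity * upstream_quality_to_date
        quality = self.quality * upstream_quality_to_date
        work = min(self.staff * productivity, self.to_do)
        self.to_do -= work
        self.done += quality * work
        self.undiscovered += (1.0 - quality) * work
        # Downstream work (integration, testing) speeds discovery of this phase's rework.
        discovery_fraction = min(1.0 / 12 + downstream_activity / 6, 1.0)
        discovered = self.undiscovered * discovery_fraction
        self.undiscovered -= discovered
        self.to_do += discovered

design, build = Phase(to_do=600.0), Phase(to_do=900.0)
for month in range(48):
    build_started = design.done > 100.0      # build begins once design releases work
    design.step(downstream_activity=0.5 if build_started else 0.0)
    if build_started:
        build.step(upstream_quality_to_date=design.quality_to_date())
print(f"after 48 months: design really done {design.done:.0f}, build really done {build.done:.0f}")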

The full simulation models of these development projects employ thousands of equations to portray the time-varying conditions which cause changes in productivity, quality, staffing levels, rework detection, and work execution. All of the dynamic conditions at work in these projects and their models (e.g. staff experience levels, work sequence, supervisory adequacy, "spec" stability, worker morale, task feasibility, vendor timeliness, overtime, schedule pressure, hiring and attrition, progress monitoring, organization and process changes, prototyping, testing...) cause changes in the performance of the rework cycle. Because our business clients require demonstrable accuracy in the models upon which they will base important decisions, we have needed to develop accurate measures of all these factors, especially those of the Rework Cycle itself.

In applying the Rework Cycle simulation model to a wide variety of projects, we found an extremely high level of commonality in the existence of the Rework Cycle and the kinds of factors affecting productivity, quality, rework discovery, staffing, and scheduling. However, there is substantial variation in the strength and timing of those factors, resulting in quite different levels of performance on the projects simulated by the models (e.g. [3,6]). It is the comparison of the quantitative values of these factors across multiple programs that has enabled the rigorous culling of management lessons.

3. Culling lessons: cross-project simulation comparison

In using the Rework Cycle model to simulate the performance of dozens of projects and programs in different companies and industries, it is unsurprising to note that, even with a high level of commonality in the logic structure, the biggest differences in factors occur as one moves from one industry to another. Several, but fewer, differences exist as one moves from one company to another within a given industry. Within a given company executing multiple projects, the differences are smaller still. Nevertheless, different projects within a company exhibit apparently quite different levels of performance when judged by their adherence to cost and schedule targets.

Such was the case at Hughes Aircraft Company, long a leader in the defense electronics industry and a pioneer in the use of simulation technology for its programs. Hughes had just completed the dramatically successful "Peace Shield" program, a command and control system development described by one senior US Air Force official, Darleen Druyun: "In my 26 years in acquisition, this [Peace Shield Weapon System] is the most successful program I've ever been involved with, and the leadership of the U.S. Air Force agrees." [Program Manager, March–April 1996, p. 24]. This on-budget, ahead-of-schedule, highly complimented program stood in stark contrast to a past program, in the same organization, to develop a different command and control system. The latter exceeded its original cost and schedule plans by several times, and suffered a large contract dispute with the customer. Note in Fig. 3 the substantial difference in their aggregate performance as indicated by the staffing levels on the two programs.

Fig. 3. Past program performance compared to Peace Shield (staffing levels; past program shifted to start in 1991 when Peace Shield started).



Theories abounded as to what had produced such significantly improved performance on Peace Shield. Naturally, they were "different" systems. Different customers. Different program managers. Different technologies. Different contract terms. These and more were all cited as (partially correct) explanations of why such different performance was achieved. Hughes executives were not satisfied that all the lessons to be learned had been learned.

Both programs were modeled with the Rework Cycle simulation. First, data was collected on:

1. the starting conditions for the programs (scope, schedules, etc.);

2. changes or problems that occurred to the programs (added scope, design changes, availability of equipment provided by the customer, labor market conditions, etc.);

3. differences in process or management policies between the two programs (e.g. teaming, hiring the most experienced people, etc.); and

4. actual performance of the programs (quarterly time series for staffing, work accomplished, rework, overtime, attrition, etc.).

Second, two Rework Cycle models with identical structures (that is, the same causal factors used in the models) were set up with the different starting conditions on the programs and with estimates of changes to, and differences between, the programs as they were thought to have occurred over time. Finally, these models were simulated and the performance compared to actual performance of the programs as given by the data. Working with program managers, numerical estimates of project-specific conditions were refined in order to improve the correspondence of simulated to actual program performance.
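The calibration step can be pictured as a simple search over uncertain project-specific inputs, scored by how closely the simulated time series track the recorded ones. The sketch below is a deliberately simplified illustration (the fit metric, the one-factor-at-a-time grid search, and the run_model interface are assumptions), not the procedure used on the Hughes programs.

def fit_error(simulated, actual):
    """Mean absolute percentage error between simulated and recorded time series."""
    return sum(abs(s - a) / a for s, a in zip(simulated, actual)) / len(actual)

def calibrate(run_model, base_params, actual_staffing, candidate_values):
    """Adjust uncertain project-specific inputs, one at a time, to improve the fit."""
    best_params, best_error = dict(base_params), float("inf")
    for name, values in candidate_values.items():
        for value in values:
            trial = dict(best_params, **{name: value})
            error = fit_error(run_model(trial)["staffing"], actual_staffing)
            if error < best_error:
                best_params, best_error = trial, error
    return best_params, best_error

# Usage, assuming a hypothetical run_model(params) that returns quarterly series:
#   params, error = calibrate(run_model,
#                             {"discovery_months": 6.0, "scope_growth": 0.1},
#                             actual_staffing=recorded_quarterly_staffing,
#                             candidate_values={"discovery_months": [4, 5, 6, 7, 8]})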

In the end, the two programs were accurately simulated by an identical model using their different starting conditions, external changes, and management policies.

After achieving the two accurate simulations, the next step in learning is to use the simulation model to understand what caused the differences between these two programs. How much results from differences in starting conditions? From differences in external factors? From differences in processes or other management initiatives?

The next series of analyses stripped away the differences in factors, one set at a time, in order to quantify the magnitude of performance differences caused by different conditions. Working with Hughes managers, the first step was to isolate the differences in starting conditions and "external" differences—those in work scope, suppliers, and labor markets.4 In particular, Peace Shield had: (1) lower scope and fewer changes than the past program; (2) fewer vendor delays and hardware problems; and (3) better labor market conditions (shorter delays in obtaining needed engineers). The removal of those different conditions yielded the intermediate simulation shown in Fig. 4.

Having removed from the troubled program simulation the differences in scope and external conditions, the simulation in Fig. 4 represents how Peace Shield would have performed but for the changes in managerial practices and processes. While a large amount of the performance difference clearly was attributable to external conditions, there is still a halving of cost and time achieved on Peace Shield remaining to be explained by managerial differences.

4 In practice, differences in starting conditions are removed separately from differences in external conditions. Then, when external conditions are removed, we see the impact of changes (i.e. unplanned events or conditions) to the project. This provides us with information about potential sources and impact of potential risks to future projects.

Fig. 4. Past program with external differences removed indicates how Peace Shield would have performed absent management policy changes.
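The logic of this stripping-away analysis can be sketched as a sequential attribution loop: starting from the troubled program's calibrated inputs, each group of factor differences is replaced in turn with the successful program's values, and the resulting change in simulated cost is credited to that group. The code below illustrates only that bookkeeping; the group names, factor names, and run_model interface are assumptions.

def attribute_differences(run_model, troubled_inputs, improved_inputs, factor_groups):
    """Credit simulated cost reductions to successive groups of factor differences."""
    current = dict(troubled_inputs)
    previous_cost = run_model(current)["cost"]
    attribution = {}
    for group_name, factor_names in factor_groups:
        # Swap in the successful program's values for this group of factors only.
        for name in factor_names:
            current[name] = improved_inputs[name]
        cost = run_model(current)["cost"]
        attribution[group_name] = previous_cost - cost
        previous_cost = cost
    return attribution

# Usage, with hypothetical factor names and a hypothetical run_model:
#   attribute_differences(run_model, past_program_inputs, peace_shield_inputs,
#                         [("starting conditions", ["scope", "initial_schedule"]),
#                          ("external changes", ["vendor_delays", "labor_market_delay"]),
#                          ("management policies", ["discovery_months", "staffing_discipline"])])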



We then systematically altered the remaining factor differences in the model that represented managerial changes. For example, several process changes made on Peace Shield (such as extensive "integrated product development" practices) significantly reduced rework discovery times, from an average of 7 months on the past program to 4 months on Peace Shield. Also, the policies governing the staffing of the work became far more disciplined on Peace Shield: start of software coding at 30 versus 10%.

These and several other managerial changes were tested in the model. When all were made, the model had been transformed from that of a highly troubled program to that of a very successful one—and the performance improvements attributable to each aspect of the managerial changes were identified and quantified. A summarized version of the results (Fig. 5) shows that enormous savings were and can be achieved by the implementation of what are essentially "free" changes—if only they are known and understood.

That was the groundbreaking value of the preceding analysis: to clarify just how much improvement could be achieved by each of several policies and practices implemented on a new program.

What remained to be achieved was to systematize the analytical and learning capability in a manner that would support new and ongoing programs, and help them achieve continuing performance gains through a corporate "learning system" that would yield effective management lesson improvement and transfer across programs.

4. Putting lessons to work: the simulation-based learning system

Beyond the value of the immediate lessons from the cross-program comparative simulation analysis, the need was to implement a system that would continue to support rigorous management improvement and lesson transfer, as illustrated below:

Fig. 5. Where did the cost improvement come from?



Development and implementation of the learning system began with adapting the simulation model to each of several Hughes programs, totaling a value of several hundred million dollars, and ranging in status from just starting to half-complete. Dozens of managers were interviewed in order to identify management policies believed to be "best practices"; these were systematically tested on each program model to verify the universality of their applicability, and the magnitude of improvement that could be expected from each.

All of the models were integrated into a single computer-based system accessible to all program managers. This system was linked to a database of the "best practice" observations that could be searched by the users when considering what actions to take. Each manager could explore a wide variety of "what if" questions as new conditions and initiatives emerged, drawing upon one's own experience, the tested ideas from the other programs' managers, the "best practice" database, and the extensive explanatory diagnostics from the simulation models. As each idea tested is codified and its impacts quantified in new simulations, the amount and causes of performance differences are identified. Changes that produce benefits for any one program are flagged for testing in other programs as well, to assess the value of their transfer. In this system's first few months of use, large cost and time savings (tens of millions of dollars, many months of program time) were identified on the programs.
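The transfer step can be pictured as a loop over the program models: a candidate practice that helps one program is flagged and then re-tested on each of the others to estimate the value of transferring it. The sketch below illustrates one possible organisation; the data structures and the run_model interface are assumptions, not the Hughes system's implementation.

def test_practice_across_programs(program_models, apply_practice):
    """Estimate each program's savings from adopting one candidate practice."""
    savings = {}
    for program_name, run_model in program_models.items():
        baseline_cost = run_model()["cost"]
        improved_cost = run_model(policy_change=apply_practice)["cost"]
        savings[program_name] = baseline_cost - improved_cost   # positive = benefit
    return savings

# A practice that shows benefits on one program model would be logged in the
# best-practice database together with results like these, flagging it for
# consideration by the other programs' managers.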

In order to facilitate expanding use and impact of the learning system, there is a built-in adaptor that allows an authorized user to set up a new program model by "building off" an existing program model. An extensive menu guides the user through an otherwise-automated adaptation. Upon completion, the system alerts the user to the degree of performance improvement that is required in order to meet targets and budgets. The manager can then draw upon tested ideas from other program managers and the best-practice database in identifying specific potential changes. These can in turn be tested on the new program model, and new high-impact changes logged in the database for future managers' reference, in the quest for ever-improving performance. Not only does this provide a platform for organizational capture and dissemination of learning, it is a most rigorous means of implementing a clear "past performance" basis for planning new programs and improvements. Finally, because there is a need for other managers to learn the lessons being harvested in the new system, a fully interactive trainer-simulator version of the same program model is included as part of the integrated system.

Furthermore, software enhancements made since the Hughes system was implemented make the learning systems even more effective. First, moving the software to a web-based interface makes it far more widely available to project managers, who can now access it over the Internet from anywhere in the world. A second enhancement is to the arsenal of software tools available to the project manager. These include: an automatic sensitivity tester to determine high-leverage parameters; a potentiator to analyze thousands of alternative combinations of management policies to determine synergistic combinations; and an optimizer to aid in calibration and policy optimization. Finally, adding a Monte Carlo capability to the software allows the project manager to answer the question "Just how confident are you that this budget projection is correct?", given uncertainties in the inputs.
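The Monte Carlo idea is straightforward to sketch: sample the uncertain inputs, run the project model for each draw, and report how often the simulated cost comes in at or under budget. Everything in the sketch below (the uniform sampling, the run_model interface, the input names) is an illustrative assumption rather than the tool's actual design.

import random

def budget_confidence(run_model, base_inputs, uncertain_ranges, budget, runs=1000):
    """Estimate the probability that simulated cost comes in at or under budget."""
    within_budget = 0
    for _ in range(runs):
        inputs = dict(base_inputs)
        for name, (low, high) in uncertain_ranges.items():
            inputs[name] = random.uniform(low, high)  # simple uniform draws, for illustration
        if run_model(inputs)["cost"] <= budget:
            within_budget += 1
    return within_budget / runs

# Usage, with a hypothetical run_model and hypothetical input names:
#   p = budget_confidence(run_model, plan_inputs,
#                         uncertain_ranges={"quality": (0.75, 0.90),
#                                           "discovery_months": (4.0, 8.0)},
#                         budget=250e6)
#   print(f"Confidence of meeting the budget: {p:.0%}")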

5. Conclusions

The learning system for program and project managers implemented at Hughes Aircraft is a first of its kind in addressing the challenges cited at the outset. First, it has effectively provided a framework, the Rework Cycle model, that addresses the problem of viewing each project as a unique phenomenon from which there is little to learn for other projects. Second, it employs models that help explain the causality of those phenomena. Third, it provides systems that enable, and encourage, the use of past performance as a means of learning management lessons. And finally, it refines, stores, and disseminates the learning and management lessons of past projects to offset the limited career span of project managers.

While simulation is not the same as "real life", neither does real life offer us the chance to diagnose rigorously, understand clearly, and communicate effectively the effects of our actions as managers. Simulation-based learning systems for managers will continue to have project and business impact that increasingly distances these program managers from competitors who fail to learn.

References

[1] Wheelwright SC, Clark KB. Revolutionizing product development: quantum leaps in speed, efficiency, and quality. New York: The Free Press, 1992.

[2] Cooper KG. Naval ship production: a claim settled and a framework built. Interfaces 1980; 10(6): 20–36.

[3] Cooper KG, Mullen TW. Swords & plowshares: the rework cycles of defense and commercial software development projects. American Programmer 1993; 6(5). Reprinted in: Guidelines for Successful Acquisition and Management of Software Intensive Systems, Department of the Air Force, September 1994. p. 41–51.

[4] Cooper KG. The rework cycle: why projects are mismanaged. PM Network Magazine, February 1993, 5–7; The rework cycle: how it really works... and reworks.... PM Network Magazine, February 1993, 25–28; The rework cycle: benchmarks for the project manager. Project Management Journal 1993; 24(1): 17–21.

[5] Lyneis JM, Cooper KG, Els SA. Strategic management of complex projects: a case study using system dynamics. System Dynamics Review 2001; 17(3): 237–60.

[6] Reichelt KS, Lyneis JM. The dynamics of project performance: benchmarking the drivers of cost and schedule overrun. European Management Journal 1999; 17(2): 135–50.
