Learning to Learn, From Past to Future
International Journal of Project Management 20 (2002) 213-219
www.elsevier.com/locate/ijproman

Learning to learn, from past to future

Kenneth G. Cooper, James M. Lyneis*, Benjamin J. Bryant
Business Dynamics Practice, PA Consulting Group, 123 Buckingham Palace Road, London SW1W 9SR, UK

Abstract

As we look from the past to the future of the field of project management, one of the great challenges is the largely untapped opportunity for transforming our projects' performance. We have yet to discern how to systematically extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn. In this paper, the authors discuss some of the reasons behind the failure to systematically learn, and present an approach and modeling framework that facilitates cross-project learning. The approach is then illustrated with a case study of two multi-$100 million development projects. © 2002 Elsevier Science Ltd and IPMA. All rights reserved.

Keywords: Learning; Strategic management; Rework cycle; Project dynamics

1. Introduction

As we look from the past to the future of the field of project management, one of our great challenges is the largely untapped opportunity for transforming our projects' performance. We have yet to discern how to extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn.

One who does not learn from the past...

Whether the motivation is the increasingly competitive arena of Web-speed product development, or the mandate of prospective customers to demonstrate qualifications based on past performance, or the natural drive of the best performers to improve, we are challenged to learn from our project successes and failures.
We do so rather well at the technical and process levels... we build upon the latest chip technology to design the smaller, faster one... we learn how to move that steel, or purchase order, more rapidly. But how does one sort among the extraordinary variety of factors that affect project performance in order to learn what about the management helped a good project? We must learn how to learn what it is about prior good management that made it good, that had a positive impact on the performance; and then we must learn how to codify, disseminate, and improve upon those management lessons. Learning how to learn future management lessons from past performance will enable us to improve systematically and continuously the management of projects.

A number of conditions have contributed to and perpetuated the failure to systematically learn on projects. First is the prevalent but misguided belief that every project is different, that there is little commonality between projects, or that the differences are so great that separating the differences from the similarities would be difficult if not impossible.

Second, the difficulty in determining the true causes of project performance hinders our learning. Even if we took the time to ask successful managers what they have learned, do we really believe that they can identify what has worked and what has not, what works under some project conditions but not others, and how much difference one practice vs. another makes? As Wheelwright and Clark [1, pp. 284-5] note:

...the performance that matters is often a result of complex interactions within the overall development system. Moreover, the connection between cause and effect may be separated significantly in time and place. In some instances, for example, the outcomes of interest are only evident at the conclusion of the project.
Thus, while symptoms and potential causes may be observed along the development path, systematic investigation requires observation of the outcomes, followed by analysis that looks back to find the underlying causes.

* Corresponding author. Tel.: +44-20-7730-9000; fax: +44-20-7333-5050. E-mail address: firstname.lastname@example.org (J.M. Lyneis).

0263-7863/02/$22.00 © 2002 Elsevier Science Ltd and IPMA. All rights reserved. PII: S0263-7863(01)00071-0

Third, projects are transient phenomena, and few companies have organizations, money, systems or practices that span them, especially for the very purpose of gleaning and improving upon transferable lessons of project management. Natural incentives pressure us to get on with the next project, and especially not to dwell on the failures of the past.

And fourth, while there are individuals who learn (successful project managers who have three or four great projects before they move to different responsibilities or retire), their limited span and career path make systematic assessment and learning of transferable lessons that get incorporated in subsequent projects extremely difficult.

In order to provide learning-based improvement in project management, all of these conditions need to be addressed. Organizations need:

1. an understanding that the investment in learning can pay off, and that there need to be two outputs from every project: the product itself, and the post-project assessment of what was learned;
2. the right kind of data from past projects to support that learning; and
3. model(s) of the process that allow: comparison of unique projects, and the sifting of the unique from the common; a search for patterns and commonalities between the projects; and an understanding of the causes of project performance differences, including the ability to do analyses and what-ifs.
In the remainder of this paper, the authors describe one company's experience in achieving management science-based learning and real project management improvement. In the next section (Section 2), the means of displaying and understanding the commonality among projects (the learning framework) is described.1 Then, an example of using this framework for culling lessons from past projects is demonstrated: transforming a project disaster into a sterling success on real multi-$100M development projects (Section 3). Finally, the simulation-based analysis and training system that provides ongoing project management improvement is explained.

1 The framework discussed in this paper is designed to understand project dynamics at a strategic/tactical level. Additional frameworks and models will be required for learning other aspects of project management (see, for example, ).

2. Understanding project commonality: the rework cycle and feedback effects

We draw upon more than 20 years of experience in analyzing development projects with the aid of computer-based simulation models. Such models have been used to accurately re-create, diagnose, forecast, and improve performance on dozens of projects and programs in aerospace, defense electronics, financial systems, construction, shipbuilding, telecommunications, and software development [2,3,6]. At the core of these models are three important structures underlying the dynamics of a project (in contrast to the static perspective of the more standard critical path): (1) the rework cycle; (2) feedback effects on productivity and work quality; and (3) knock-on effects from upstream phases to downstream phases. These structures, described in detail elsewhere [4,5], are briefly summarized below.

What is most lacking in conventional project planning and monitoring techniques is the acknowledgement or measurement of rework. Typically, conventional tools view tasks as either to be done, in-process, or done. In contrast, the rework cycle model shown in Fig.
1 below represents a near-universal description of work flow on a project, one which incorporates rework and undiscovered rework: people working at a varying productivity accomplish work; this work becomes either work really done or undiscovered rework, depending on a varying quality (quality is the fraction of work done completely and correctly); undiscovered rework is work that contains as-yet-undetected errors, and is therefore perceived as being done; errors are detected, often months later, by downstream efforts or testing, whereupon they become known rework; known rework demands the application of people in competition with original work; errors may be made while executing rework, and hence work can cycle through undiscovered rework several times as the project progresses.

Fig. 1. The Rework Cycle.

On a typical project, productivity and quality change over time in response to conditions on the project and management actions. The factors that affect productivity and quality are part of the feedback loop structure that surrounds the rework cycle. Some of these feedbacks are negative or controlling feedbacks used by management to control resources on a project. In Fig. 2, for example, overtime is added and/or staff are brought on to a project (hiring) based on the work believed to be remaining (expected hours at completion less hours expended to date) and the scheduled time remaining to finish the work.2 Alternatively, on the left of the diagram, scheduled completion can be increased to allow completion of the project with fewer resources. Other effects drive productivity and quality, as indicated in Fig. 2: work quality to date, availability of prerequisites, out-of-sequence work, schedule pressure, morale, skill and experience, supervision, and overtime.3

2 In Fig. 2, arrows represent cause-effect relationships, as in "hiring causes staff to increase". Not indicated here, but a vital part of the actual computer model itself, these cause-effect relationships can involve delays (e.g. delays in finding and training new people) and non-linearities (e.g. an upper limit on output regardless of the amount of pressure).

Each of these effects, in turn, is part of a complex network of generally positive or reinforcing feedback loops that early in the project drive productivity and quality down, and later cause them to increase. For example, suppose that as a result of a design change (or because of an inconsistent plan), the project falls behind schedule. In response, the project may bring on more resources. However, while additional resources have positive effects on work accomplished, they also initiate negative effects on productivity and quality. Bringing on additional staff reduces the average experience level; less experienced people make more errors and work more slowly than more experienced people. Bringing on additional staff also creates shortages of supervisors, which in turn reduces productivity and quality. Finally, while overtime may augment the effective staff on the project, sustained overtime can lead to fatigue, which reduces productivity and quality.

Because of these secondary effects on productivity and quality, the project will make less progress than expected and contain more errors: the availability and quality of upstream work has deteriorated. As a result, the productivity and quality of downstream work suffer. The project falls further behind schedule, so more resources are added, thus continuing the downward spiral. In addition to adding resources, a natural reaction to insufficient progress is to exert schedule pressure on the staff. This often results in more physical output, but also more errors (haste makes waste), and more out-of-sequence work. Schedule pressure can also lead to lower morale, which further reduces productivity and quality and increases staff turnover.

A rework cycle and its associated productivity and quality effects form a building block.
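The rework cycle just described can be sketched as a small stock-and-flow simulation. The sketch below is purely illustrative: the parameter values (staffing, productivity, quality, rework discovery time) are invented for exposition, not the calibrated figures from the authors' models.

```python
# Minimal sketch of the rework cycle (illustrative values, not the calibrated model).
# Stocks: work to be done, undiscovered rework, known rework, work really done.

def simulate_rework_cycle(scope=1000.0, staff=10.0, productivity=1.0,
                          quality=0.8, discovery_time=7.0, months=60):
    """Simulate monthly flows; quality is the fraction done correctly,
    discovery_time is the average months before an error surfaces."""
    to_do, undiscovered, known, done = scope, 0.0, 0.0, 0.0
    for month in range(months):
        effort = staff * productivity                 # tasks worked this month
        work = min(effort, to_do + known)             # cannot do more than remains
        original = min(work, to_do)                   # original work first
        rework = work - original                      # remainder applied to known rework
        to_do -= original
        known -= rework
        done += quality * work                        # fraction done completely and correctly
        undiscovered += (1 - quality) * work          # errors, perceived as done
        found = undiscovered / discovery_time         # errors detected downstream, months later
        undiscovered -= found
        known += found                                # becomes known rework, competing for people
        if to_do + undiscovered + known < 1e-6:
            break
    perceived_done = done + undiscovered              # what progress reports show
    return month + 1, done, perceived_done

months_used, really_done, perceived = simulate_rework_cycle()
print(months_used, round(really_done, 1), round(perceived, 1))
```

The key behavior the sketch reproduces is the gap between perceived and real progress: undiscovered rework is reported as done until downstream effort surfaces it.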
Building blocks can be used to represent an entire project, or replicated to represent different phases of a project, in which case multiple rework cycles in parallel and series might be included. At its most aggregate level, such building blocks might represent design, build, and test. Alternatively, building blocks might separately represent different stages (e.g. conceptual vs. detail) and/or design functions (structural, electrical, power, etc.). In software, building blocks might represent specifications, detailed design, code and unit test, integration, and test. When multiple phases are present, the availability and quality of upstream work can knock on to affect the productivity and quality of downstream work. In addition, downstream progress affects upstream work by fostering the discovery of upstream rework.

Fig. 2. Feedback effects surrounding the Rework Cycle.

3 These are just a few of the factors affecting productivity and quality. Models used in actual situations contain many additional factors.

The full simulation models of these development projects employ thousands of equations to portray the time-varying conditions which cause changes in productivity, quality, staffing levels, rework detection, and work execution. All of the dynamic conditions at work in these projects and their models (e.g. staff experience levels, work sequence, supervisory adequacy, spec stability, worker morale, task feasibility, vendor timeliness, overtime, schedule pressure, hiring and attrition, progress monitoring, organization and process changes, prototyping, testing...) cause changes in the performance of the rework cycle. Because our business clients require demonstrable accuracy in the models upon which they will base important decisions, we have needed to develop accurate measures of all these factors, especially those of the Rework Cycle itself.
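The controlling feedback described above (add staff and overtime based on perceived work remaining versus time remaining) and two of its reinforcing side effects (experience dilution from hiring, fatigue from sustained overtime) can be caricatured in a few lines. Every coefficient and functional form here is an illustrative assumption, standing in for the thousands of calibrated equations in the actual models.

```python
# Caricature of the controlling feedback (staffing to schedule) and two
# reinforcing side effects (experience dilution, overtime fatigue).
# All coefficients are illustrative assumptions, not calibrated values.

def staffing_step(perceived_remaining, months_remaining, staff, experienced):
    """One monthly decision: set required staff from perceived work remaining,
    hire the gap, and compute the resulting productivity/quality penalties."""
    base_productivity, base_quality = 1.0, 0.85
    required = perceived_remaining / max(months_remaining, 1.0)  # tasks/month needed
    hires = max(0.0, required / base_productivity - staff)       # hire toward requirement
    staff += hires                                               # new staff are inexperienced
    overtime = min(1.3, max(1.0, required / (staff * base_productivity)))
    experience_mix = experienced / staff                         # dilution from hiring
    fatigue = 1.0 - 0.5 * (overtime - 1.0)                       # sustained-overtime penalty
    productivity = base_productivity * (0.6 + 0.4 * experience_mix) * fatigue
    quality = base_quality * (0.7 + 0.3 * experience_mix) * fatigue
    return staff, overtime, productivity, quality

# A project believed to have 400 tasks left and 10 months to do them,
# currently staffed by 20 experienced people:
staff, ot, prod, qual = staffing_step(perceived_remaining=400, months_remaining=10,
                                      staff=20, experienced=20)
```

Note how the controlling loop closes the staffing gap, while the reinforcing loops mean the doubled staff delivers less than double the output, at lower quality: the downward spiral the text describes.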
In applying the Rework Cycle simulation model to a wide variety of projects, we found an extremely high level of commonality in the existence of the rework cycle and in the kinds of factors affecting productivity, quality, rework discovery, staffing, and scheduling. However, there is substantial variation in the strength and timing of those factors, resulting in quite different levels of performance on the projects simulated by the models (e.g. [3,6]). It is the comparison of the quantitative values of these factors across multiple programs that has enabled the rigorous culling of management lessons.

3. Culling lessons: cross-project simulation comparison

In using the Rework Cycle model to simulate the performance of dozens of projects and programs in different companies and industries, it is unsurprising to note that, even with a high level of commonality in the logic structure, the biggest differences in factors occur as one moves from one industry to another. Several, but fewer, differences exist as one moves from one company to another within a given industry. Within a given company executing multiple projects, the differences are smaller still. Nevertheless, different projects within a company exhibit apparently quite different levels of performance when judged by their adherence to cost and schedule targets.

Such was the case at Hughes Aircraft Company, long a leader in the defense electronics industry and a pioneer in the use of simulation technology for its programs. Hughes had just completed the dramatically successful Peace Shield program, a command and control system development described by one senior US Air Force official, Darleen Druyun: "In my 26 years in acquisition, this [Peace Shield Weapon System] is the most successful program I've ever been involved with, and the leadership of the U.S. Air Force agrees." [Program Manager, March-April 1996, p. 24].
This on-budget, ahead-of-schedule, highly complimented program stood in stark contrast to a past program, in the same organization, to develop a different command and control system. The latter exceeded its original cost and schedule plans by several times, and suffered a large contract dispute with the customer. Note in Fig. 3 the substantial difference in their aggregate performance as indicated by staffing level on the two programs.

Fig. 3. Past program performance compared to Peace Shield (staffing levels; past program shifted to start in 1991 when Peace Shield started).

Fig. 4. Past program with external differences removed indicates how Peace Shield would have performed absent management policy changes.

Theories abounded as to what had produced such significantly improved performance on Peace Shield. Naturally, they were different systems. Different customers. Different program managers. Different technologies. Different contract terms. These and more all were cited as (partially correct) explanations of why such different performance was achieved. Hughes executives were not satisfied that all the lessons to be learned had been learned.

Both programs were modeled with the Rework Cycle simulation. First, data was collected on:

1. the starting conditions for the programs (scope, schedules, etc.);
2. changes or problems that occurred to the programs (added scope, design changes, availability of equipment provided by the customer, labor market conditions, etc.);
3. differences in process or management policies between the two programs (e.g. teaming, hiring the most experienced people, etc.); and
4. actual performance of the programs (quarterly time series for staffing, work accomplished, rework, overtime, attrition, etc.).
Second, two Rework Cycle models with identical structures (that is, the same causal factors used in the models) were set up with the different starting conditions of the programs and with estimates of changes to, and differences between, the programs as they were thought to have occurred over time. Finally, these models were simulated and their performance compared to the actual performance of the programs as given by the data. Working with program managers, numerical estimates of project-specific conditions were refined in order to improve the correspondence of simulated to actual program performance. In the end, the two programs were accurately simulated by an identical model using their different starting conditions, external changes, and management policies.

After achieving the two accurate simulations, the next step in learning is to use the simulation model to understand what caused the differences between these two programs. How much results from differences in starting conditions? Differences in external factors? Differences in processes or other management initiatives? The next series of analyses stripped away the differences in factors, one set at a time, in order to quantify the magnitude of performance differences caused by different conditions.

Working with Hughes managers, the first step was to isolate the differences in starting conditions and external differences: those in work scope, suppliers, and labor markets.4 In particular, Peace Shield had: (1) lower scope and fewer changes than the past program; (2) fewer vendor delays and hardware problems; and (3) better labor market conditions (lower delay in obtaining needed engineers). The removal of those different conditions yielded the intermediate simulation shown in Fig. 4. Having removed from the troubled program simulation the differences in scope and external conditions, this simulation represents how Peace Shield would have performed but for the changes in managerial practices and processes.
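The stripping-away analysis amounts to re-running one calibrated model repeatedly, neutralizing one set of factor differences at a time, and attributing the resulting change in simulated cost to that set. The sketch below illustrates only the bookkeeping of that procedure; the cost function, factor names, and numbers are hypothetical stand-ins for the calibrated Rework Cycle models.

```python
# Sketch of cross-program attribution: remove one set of factor differences
# at a time and attribute the change in simulated cost to that set.
# simulated_cost() is a hypothetical stand-in for the calibrated model.

def simulated_cost(factors):
    # Toy cost surface: cost grows with scope, vendor delays, and rework
    # discovery time (all illustrative; the real model has thousands of equations).
    return (factors["scope"] * (1 + 0.05 * factors["vendor_delay_months"])
            * (1 + 0.1 * factors["rework_discovery_months"]))

troubled = {"scope": 1200, "vendor_delay_months": 6, "rework_discovery_months": 7}
peace_shield = {"scope": 1000, "vendor_delay_months": 2, "rework_discovery_months": 4}

factor_sets = {
    "starting conditions": ["scope"],
    "external changes": ["vendor_delay_months"],
    "management policies": ["rework_discovery_months"],
}

current = dict(troubled)
baseline = simulated_cost(current)
for label, keys in factor_sets.items():
    for k in keys:
        current[k] = peace_shield[k]      # neutralize this set of differences
    cost = simulated_cost(current)
    print(f"{label}: saves {baseline - cost:.0f}")
    baseline = cost
```

Because the sets are removed sequentially, the attributed savings sum exactly to the total cost gap between the two programs, which is what makes the decomposition in Fig. 5 possible.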
While a large amount of the performance difference clearly was attributable to external conditions, there is still a halving of cost and time achieved on Peace Shield remaining to be explained by managerial differences. We then systematically altered the remaining factor differences in the model that represented managerial changes. For example, several process changes made on Peace Shield (such as extensive integrated product development practices) significantly reduced rework discovery times, from an average of 7 months on the past program to 4 months on Peace Shield. Also, the policies governing the staffing of the work became far more disciplined on Peace Shield: start of software coding at 30% versus 10%. These and several other managerial changes were tested in the model. When all were made, the model had been transformed from that of a highly troubled program to that of a very successful one, and the performance improvements attributable to each aspect of the managerial changes were identified and quantified.

Fig. 5. Where did the cost improvement come from?

A summarized version of the results (Fig. 5) shows that enormous savings were, and can be, achieved by the implementation of what are essentially free changes, if only they are known and understood. That was the groundbreaking value of the preceding analysis: to clarify just how much improvement could be achieved by each of several policies and practices implemented on a new program.

4 In practice, differences in starting conditions are removed separately from differences in external conditions. Then, when external conditions are removed, we see the impact of changes (i.e. unplanned events or conditions) to the project. This provides us with information about potential sources and impact of potential risks to future projects.
What remained to be achieved was to systematize the analytical and learning capability in a manner that would support new and ongoing programs, and help them achieve continuing performance gains through a corporate learning system that would yield effective management lesson improvement and transfer across programs.

4. Putting lessons to work: the simulation-based learning system

Beyond the value of the immediate lessons from the cross-program comparative simulation analysis, the need was to implement a system that would continue to support rigorous management improvement and lesson transfer, as illustrated below.

Development and implementation of the learning system began with adapting the simulation model to each of several Hughes programs, totaling a value of several hundred million dollars, and ranging in status from just starting to half-complete. Dozens of managers were interviewed in order to identify management policies believed to be best practices; these were systematically tested on each program model to verify the universality of their applicability, and the magnitude of improvement that could be expected from each. All of the models were integrated into a single computer-based system accessible to all program managers. This system was linked to a database of the best-practice observations that could be searched by the users when considering what actions to take. Each manager could conduct a wide variety of what-if analyses as new conditions and initiatives emerged, drawing upon one's own experience, the tested ideas from the other programs' managers, the best-practice database, and the extensive explanatory diagnostics from the simulation models. As each idea tested is codified and its impacts quantified in new simulations, the amount and causes of performance differences are identified.
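The what-if testing just described (try each candidate best practice on each program model, quantify the improvement, and log the high-impact ones for transfer) can be outlined as a simple loop. The program models, policy names, thresholds, and effect sizes below are hypothetical placeholders, not the Hughes system itself.

```python
# Outline of the cross-program what-if loop: test each candidate policy on
# each program model and log those that help, for transfer to other programs.
# Models, policies, and numbers are hypothetical placeholders.

def make_model(base_cost, sensitivity):
    """Toy program model: cost response to a policy's rework-reduction effect."""
    def run(rework_reduction=0.0):
        return base_cost * (1 - sensitivity * rework_reduction)
    return run

program_models = {"program_A": make_model(250e6, 0.4),
                  "program_B": make_model(120e6, 0.1)}
policies = {"integrated product teams": 0.3,      # assumed rework reductions
            "disciplined staffing": 0.2}

best_practice_db = []
for policy, reduction in policies.items():
    for name, run in program_models.items():
        saving = run() - run(rework_reduction=reduction)
        if saving > 5e6:                          # flag high-impact changes
            best_practice_db.append((policy, name, saving))

for entry in best_practice_db:
    print(entry)
```

The point of the loop is the one the text makes: the same policy can be high-impact on one program and marginal on another, so each candidate lesson is tested per program before it is recommended for transfer.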
Changes that produce benefits for any one program are flagged for testing in other programs as well, to assess the value of their transfer. In this system's first few months of use, large cost and time savings (tens of millions of dollars, many months of program time) were identified on the programs.

In order to facilitate expanding use and impact of the learning system, there is a built-in adaptor that allows an authorized user to set up a new program model by building off an existing program model. An extensive menu guides the user through an otherwise-automated adaptation. Upon completion, the system alerts the user to the degree of performance improvement that is required in order to meet targets and budgets. The manager can then draw upon tested ideas from other program managers and the best-practice database in identifying specific potential changes. These can in turn be tested on the new program model, and new high-impact changes logged in the database for future managers' reference, in the quest for ever-improving performance. Not only does this provide a platform for organizational capture and dissemination of learning, it is a most rigorous means of implementing a clear past-performance basis for planning new programs and improvements.

Finally, because there is a need for other managers to learn the lessons being harvested in the new system, a fully interactive trainer-simulator version of the same program model is included as part of the integrated system.

Furthermore, software enhancements made since the Hughes system was implemented make such learning systems even more effective. First, moving the software to a web-based interface makes it far more widely available to project managers, who can now access it over the Internet from anywhere in the world. A second enhancement is to the arsenal of software tools available to the project manager.
These include: an automatic sensitivity tester to determine high-leverage parameters; a "potentiator" to analyze thousands of alternative combinations of management policies to determine synergistic combinations; and an optimizer to aid in calibration and policy optimization. Finally, adding a Monte Carlo capability to the software allows the project manager to answer the question "Just how confident are you that this budget projection is correct?", given uncertainties in the inputs.

5. Conclusions

The learning system for program and project managers implemented at Hughes Aircraft is a first-of-a-kind in addressing the challenges cited at the outset. First, it has effectively provided a framework, the Rework Cycle model, that addresses the problem of viewing each project as a unique phenomenon from which there is little to learn for other projects. Second, it employs models that help explain the causality of those phenomena. Third, it provides systems that enable, and encourage, the use of past performance as a means of learning management lessons. And finally, it refines, stores, and disseminates the learning and management lessons of past projects to offset the limited career span of project managers.

While simulation is not the same as real life, neither does real life offer us the chance to diagnose rigorously, understand clearly, and communicate effectively the effects of our actions as managers. Simulation-based learning systems for managers will continue to have project and business impact that increasingly distances these program managers from competitors who fail to learn.

References

[1] Wheelwright SC, Clark KB. Revolutionizing product development: quantum leaps in speed, efficiency, and quality. New York: The Free Press, 1992.
[2] Cooper KG. Naval ship production: a claim settled and a framework built. Interfaces 1980; 10(6): 20-36.
[3] Cooper KG, Mullen TW. Swords & plowshares: the rework cycles of defense and commercial software development projects.
American Programmer 1993; 6(5). Reprinted in: Guidelines for Successful Acquisition and Management of Software Intensive Systems. Department of the Air Force, September 1994, pp. 41-51.
[4] Cooper KG. The rework cycle: why projects are mismanaged. PM Network Magazine, February 1993, 5-7; The rework cycle: how it really works...and reworks... PM Network Magazine, February 1993, 25-28; The rework cycle: benchmarks for the project manager. Project Management Journal 1993; 24(1): 17-21.
[5] Lyneis JM, Cooper KG, Els SA. Strategic management of complex projects: a case study using system dynamics. System Dynamics Review 2001; 17(3): 237-60.
[6] Reichelt KS, Lyneis JM. The dynamics of project performance: benchmarking the drivers of cost and schedule overrun. European Management Journal 1999; 17(2): 135-50.