International Journal of Project Management 20 (2002) 213–219 www.elsevier.com/locate/ijproman

Learning to learn, from past to future

Kenneth G. Cooper, James M. Lyneis*, Benjamin J. Bryant

Business Dynamics Practice, PA Consulting Group, 123 Buckingham Palace Road, London SW1W 9SR, UK

* Corresponding author. Tel.: +44-20-7730-9000; fax: +44-20-7333-5050. E-mail address: [email protected] (J.M. Lyneis).

Abstract

As we look from the past to the future of the field of project management, one of the great challenges is the largely untapped opportunity for transforming our projects' performance. We have yet to discern how to systematically extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn. In this paper, the authors discuss some of the reasons behind the failure to systematically learn, and present an approach and modeling framework that facilitates cross-project learning. The approach is then illustrated with a case study of two multi-$100 million development projects. © 2002 Elsevier Science Ltd and IPMA. All rights reserved.

Keywords: Learning; Strategic management; Rework cycle; Project dynamics

1. Introduction

As we look from the past to the future of the field of project management, one of our great challenges is the largely untapped opportunity for transforming our projects' performance. We have yet to discern how to extract and disseminate management lessons as we move from project to project, and as we manage and execute portfolios of projects. As managers, executives, and researchers in project management, we have yet to learn how to learn.

One who does not learn from the past...

Whether the motivation is the increasingly competitive arena of "Web-speed" product development, or the mandate of prospective customers to demonstrate qualifications based on "past performance," or the natural drive of the best performers to improve, we are challenged to learn from our project successes and failures. We do so rather well at the technical and process levels... we build upon the latest chip technology to design the smaller, faster one... we learn how to move that steel, or that purchase order, more rapidly. But how does one sort among the extraordinary variety of factors that affect project performance in order to learn what it was about the management that helped make a project "good"? We must learn how to learn what it is about prior good management that made it good, that had a positive impact on the performance, and then we must learn how to codify, disseminate, and improve upon those management lessons. Learning how to learn future management lessons from past performance will enable us to improve the management of projects systematically and continuously.

A number of conditions have contributed to and perpetuated the failure to learn systematically on projects. First is the prevalent but misguided belief that every project is different, that there is little commonality between projects, or that the differences are so great that separating the differences from the similarities would be difficult if not impossible. Second, the difficulty in determining the true causes of project performance hinders our learning. Even if we took the time to ask successful managers what they have learned, do we really believe that they can identify what has worked and what has not, what works under some project conditions but not others, and how much difference one practice versus another makes? As Wheelwright and Clark [1, pp. 284–285] note:

...the performance that matters is often a result of complex interactions within the overall development system. Moreover, the connection between cause and effect may be separated significantly in time and place. In some instances, for example, the outcomes of interest are only evident at the conclusion of the project. Thus, while symptoms and potential causes may be observed along the development path, systematic investigation requires observation of the outcomes, followed by an analysis that looks back to find the underlying causes.



Third, projects are transient phenomena, and few companies have organizations, money, systems or practices that span them, especially for the very purpose of gleaning and improving upon transferable lessons of project management. Natural incentives pressure us to get on with the next project, and especially not to dwell on the failures of the past. And fourth, while there are individuals who learn—successful project managers who have three or four great projects before they move to different responsibilities or retire—their limited span and career paths make it extremely difficult to systematically assess and learn transferable lessons that get incorporated into subsequent projects.

In order to provide learning-based improvement in project management, all of these conditions need to be addressed. Organizations need:

1. an understanding that the investment in learning can pay off, and that there need to be two outputs from every project: the product itself, and a post-project assessment of what was learned;
2. the right kind of data from past projects to support that learning; and
3. model(s) of the process that allow comparison of "unique" projects and the sifting of the unique from the common; a search for patterns and commonalities between the projects; and an understanding of the causes of project performance differences, including the ability to do analyses and what-ifs.

In the remainder of this paper, the authors describe one company's experience in achieving management science-based learning and real project management improvement. In the next section, the means of displaying and understanding the commonality among projects—the learning framework—is described.1 Then, the use of this framework for culling lessons from past projects is demonstrated—transforming a project disaster into a sterling success on real multi-$100M development projects (Section 3). Finally, the simulation-based analysis and training system that provides ongoing project management improvement is explained (Section 4).

1 The framework discussed in this paper is designed to understand project dynamics at a strategic/tactical level [5]. Additional frameworks and models will be required for learning other aspects of project management (see, for example, [1]).

2. Understanding project commonality: the rework cycle and feedback effects

We draw upon more than 20 years of experience in analyzing development projects with the aid of computer-based simulation models. Such models have been used to accurately re-create, diagnose, forecast, and improve performance on dozens of projects and programs in aerospace, defense electronics, financial systems, construction, shipbuilding, telecommunications, and software development [2,3,6]. At the core of these models are three important structures underlying the dynamics of a project (in contrast to the static perspective of the more standard "critical path"): (1) the "rework cycle"; (2) feedback effects on productivity and work quality; and (3) knock-on effects from upstream phases to downstream phases. These structures, described in detail elsewhere [4,5], are briefly summarized below.

What is most lacking in conventional project planning and monitoring techniques is the acknowledgement or measurement of rework. Typically, conventional tools view tasks as either "to be done," "in-process," or "done." In contrast, the rework cycle model shown in Fig. 1 represents a near-universal description of work flow on a project, one which incorporates rework and undiscovered rework: people working at a varying productivity accomplish work; this work becomes either work really done or undiscovered rework, depending on a varying "quality" (the fraction of work done completely and correctly); undiscovered rework contains as-yet-undetected errors, and is therefore perceived as being done; errors are detected, often months later, by downstream efforts or testing, at which point they become known rework; known rework demands the application of people in competition with original work; and errors may be made while executing rework, so work can cycle through undiscovered rework several times as the project progresses.

Fig. 1. The Rework Cycle.
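To make the rework cycle concrete, here is a minimal sketch of it as a discrete-time simulation in Python. The function name, the parameter values (scope, staff, productivity, quality, the discovery delay), and the policy of working known rework first are illustrative assumptions only, not the authors' model, which contains thousands of equations and many more factors.

```python
# A minimal sketch of the rework cycle as a discrete-time simulation.
# All parameter values are illustrative placeholders.

def simulate_rework_cycle(scope=1000.0,          # tasks in the original scope
                          staff=20.0,            # people on the project
                          productivity=1.0,      # tasks per person per month
                          quality=0.8,           # fraction of work done correctly
                          discovery_months=6.0,  # average delay before errors surface
                          months=60):
    to_do, undiscovered, known, done = scope, 0.0, 0.0, 0.0
    history = []
    for month in range(months):
        # People working at a varying productivity accomplish work.
        accomplished = min(staff * productivity, to_do + known)
        # Known rework is worked in competition with original work (illustrative split).
        from_known = min(known, accomplished)
        known -= from_known
        to_do -= accomplished - from_known
        # A fraction "quality" is work really done; the rest becomes undiscovered
        # rework, which is perceived as done until the errors are found.
        done += quality * accomplished
        undiscovered += (1.0 - quality) * accomplished
        # Errors surface after an average discovery delay and become known rework.
        discovered = undiscovered / discovery_months
        undiscovered -= discovered
        known += discovered
        perceived_done = done + undiscovered   # what progress reports show
        history.append((month, done, perceived_done, known))
        if to_do + undiscovered + known < 1e-6:
            break
    return history

if __name__ == "__main__":
    for month, done, perceived, known in simulate_rework_cycle()[::6]:
        print(f"month {month:3d}: really done={done:7.1f}  "
              f"perceived done={perceived:7.1f}  known rework={known:6.1f}")
```

Note how perceived progress (work really done plus undiscovered rework) runs ahead of real progress until the errors surface; that gap is precisely what conventional "done/not done" tracking misses.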

On a typical project, productivity and quality change over time in response to conditions on the project and management actions. The factors that affect productivity and quality are part of the feedback loop structure that surrounds the rework cycle. Some of these feedbacks are "negative" or "controlling" feedbacks used by management to control resources on a project. In Fig. 2, for example, overtime is added and/or staff are brought on to a project ("hiring") based on the work believed to be remaining (expected hours at completion less hours expended to date) and the scheduled time remaining to finish the work.2 Alternatively, on the left of the diagram, the scheduled completion date can be extended to allow completion of the project with fewer resources. Other effects drive productivity and quality, as indicated in Fig. 2: work quality to date, availability of prerequisites, out-of-sequence work, schedule pressure, morale, skill and experience, supervision, and overtime.3

2 In Fig. 2, arrows represent cause–effect relationships, as in hiring causes staff to increase. Not indicated here, but a vital part of the actual computer model itself, these cause–effect relationships can involve delays (e.g. delays in finding and training new people) and non-linearities (e.g. limits on how much work can be accomplished, regardless of the amount of pressure).

3 These are just a few of the factors affecting productivity and quality. Models used in actual situations contain many additional factors.

Fig. 2. Feedback effects surrounding the Rework Cycle.

Each of these effects, in turn, is part of a complex network of generally "positive" or reinforcing feedback loops that early in the project drive productivity and quality down, and later cause them to increase. For example, suppose that as a result of a design change (or because of an inconsistent plan) the project falls behind schedule. In response, the project may bring on more resources. However, while additional resources have positive effects on work accomplished, they also initiate negative effects on productivity and quality. Bringing on additional staff reduces the average experience level; less experienced people make more errors and work more slowly than experienced people. Bringing on additional staff also creates shortages of supervisors, which in turn reduces productivity and quality. Finally, while overtime may augment the effective staff on the project, sustained overtime can lead to fatigue, which reduces productivity and quality. Because of these "secondary" effects on productivity and quality, the project makes less progress than expected and contains more errors—the availability and quality of upstream work has deteriorated. As a result, the productivity and quality of downstream work suffer. The project falls further behind schedule, so more resources are added, continuing the downward spiral. In addition to adding resources, a natural reaction to insufficient progress is to exert "schedule pressure" on the staff. This often results in more physical output, but also more errors ("haste makes waste") and more out-of-sequence work. Schedule pressure can also lower morale, which further reduces productivity and quality and increases staff turnover.

A rework cycle and its associated productivity and quality effects form a "building block." Building blocks can be used to represent an entire project, or replicated to represent different phases of a project, in which case multiple rework cycles in parallel and in series might be included. At the most aggregate level, such building blocks might represent design, build, and test. Alternatively, building blocks might separately represent different stages (e.g. conceptual vs. detail) and/or design functions (structural, electrical, power, etc.). In software, building blocks might represent specifications, detailed design, code and unit test, integration, and test. When multiple phases are present, the availability and quality of upstream work can "knock on" to affect the productivity and quality of downstream work. In addition, downstream progress affects upstream work by fostering the discovery of upstream rework.

The full simulation models of these development projects employ thousands of equations to portray the time-varying conditions that cause changes in productivity, quality, staffing levels, rework detection, and work execution. All of the dynamic conditions at work in these projects and their models (e.g. staff experience levels, work sequence, supervisory adequacy, "spec" stability, worker morale, task feasibility, vendor timeliness, overtime, schedule pressure, hiring and attrition, progress monitoring, organization and process changes, prototyping, testing...) cause changes in the performance of the rework cycle. Because our business clients require demonstrable accuracy in the models upon which they will base important decisions, we have needed to develop accurate measures of all these factors, especially those of the Rework Cycle itself.

In applying the Rework Cycle simulation model to a wide variety of projects, we have found an extremely high level of commonality in the existence of the rework cycle and in the kinds of factors affecting productivity, quality, rework discovery, staffing, and scheduling. However, there is substantial variation in the strength and timing of those factors, resulting in quite different levels of performance on the projects simulated by the models (e.g. [3,6]). It is the comparison of the quantitative values of these factors across multiple programs that has enabled the rigorous culling of management lessons.
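As a rough illustration of how such feedbacks can be layered onto the rework cycle, the sketch below extends the earlier one with a staffing decision driven by perceived remaining work, a cap on hiring speed, and simple experience-dilution and schedule-pressure effects on productivity and quality. The functional forms and effect strengths are hypothetical assumptions; the full models quantify many more effects, with empirically calibrated strengths.

```python
# A rough sketch of controlling (staffing) and reinforcing (dilution, haste)
# feedbacks layered onto the rework cycle. All effect strengths are hypothetical.

def simulate_with_feedback(scope=1000.0, deadline=50, months=100,
                           base_productivity=1.0, base_quality=0.85,
                           discovery_months=6.0, hire_delay=3.0):
    to_do, undiscovered, known, done = scope, 0.0, 0.0, 0.0
    staff = experienced = 20.0
    peak_staff = staff
    for month in range(months):
        perceived_remaining = to_do + known          # undiscovered rework looks done
        time_left = max(deadline - month, 1)

        # Controlling loop: hire toward the staff level believed necessary,
        # limited by how fast people can realistically be added or released.
        desired_staff = perceived_remaining / (base_productivity * time_left)
        hiring = (desired_staff - staff) / hire_delay
        hiring = max(min(hiring, 0.2 * staff), -0.1 * staff)
        staff = max(staff + hiring, 1.0)
        experienced += (staff - experienced) / 12.0  # new hires come up to speed slowly
        peak_staff = max(peak_staff, staff)

        # Reinforcing effects: experience dilution and schedule pressure.
        experience_mix = min(experienced / staff, 1.0)
        pressure = perceived_remaining / max(staff * base_productivity * time_left, 1e-6)
        haste = min(max(pressure - 1.0, 0.0), 1.0)
        productivity = base_productivity * (0.7 + 0.3 * experience_mix) * (1.0 + 0.2 * haste)
        quality = base_quality * (0.8 + 0.2 * experience_mix) * (1.0 - 0.2 * haste)

        # Rework cycle, as in the previous sketch.
        accomplished = min(staff * productivity, to_do + known)
        from_known = min(known, accomplished)
        known -= from_known
        to_do -= accomplished - from_known
        done += quality * accomplished
        undiscovered += (1.0 - quality) * accomplished
        discovered = undiscovered / discovery_months
        undiscovered -= discovered
        known += discovered

        if to_do + undiscovered + known < 1e-6:
            return month + 1, peak_staff
    return months, peak_staff

if __name__ == "__main__":
    finish, peak = simulate_with_feedback()
    print(f"finished in {finish} months with a peak staff of {peak:.0f}")
```

In this sketch, tightening the deadline drives more hiring and more haste, and hence lower quality, more undiscovered rework, and a later, larger staffing peak, which is the qualitative spiral described above.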

3. Culling lessons: cross-project simulation comparison

In using the Rework Cycle model to simulate the performance of dozens of projects and programs in different companies and industries, we have found, unsurprisingly, that even with a high level of commonality in the logic structure, the biggest differences in factors occur as one moves from one industry to another. Fewer differences exist as one moves from one company to another within a given industry. Within a given company executing multiple projects, the differences are smaller still. Nevertheless, different projects within a company exhibit apparently quite different levels of performance when judged by their adherence to cost and schedule targets.

Such was the case at Hughes Aircraft Company, long a leader in the defense electronics industry and a pioneer in the use of simulation technology for its programs. Hughes had just completed the dramatically successful "Peace Shield" program, a command and control system development described by one senior US Air Force official, Darleen Druyun: "In my 26 years in acquisition, this [Peace Shield Weapon System] is the most successful program I've ever been involved with, and the leadership of the U.S. Air Force agrees." [Program Manager, March–April 1996, p. 24]. This on-budget, ahead-of-schedule, highly complimented program stood in stark contrast to a past program, in the same organization, to develop a different command and control system. The latter exceeded its original cost and schedule plans by several times, and suffered a large contract dispute with the customer. Note in Fig. 3 the substantial difference in their aggregate performance, as indicated by the staffing levels on the two programs.

Fig. 3. Past program performance compared to Peace Shield (staffing levels; past program shifted to start in 1991 when Peace Shield started).


Fig. 4. Past program with external differences removed indicates how Peace Shield would have performed absent management policy changes.

Theories abounded as to what had produced such significantly improved performance on Peace Shield. Naturally, they were "different" systems. Different customers. Different program managers. Different technologies. Different contract terms. These and more were all cited as (partially correct) explanations of why such different performance was achieved. Hughes executives, however, were not satisfied that all the lessons to be learned had been learned.

Both programs were modeled with the Rework Cycle simulation. First, data were collected on:

1. the starting conditions for the programs (scope, schedules, etc.);
2. changes or problems that occurred on the programs (added scope, design changes, availability of equipment provided by the customer, labor market conditions, etc.);
3. differences in process or management policies between the two programs (e.g. teaming, hiring the most experienced people, etc.); and
4. actual performance of the programs (quarterly time series for staffing, work accomplished, rework, overtime, attrition, etc.).

Second, two Rework Cycle models with identical structures (that is, the same causal factors) were set up with the different starting conditions of the programs and with estimates of the changes to, and differences between, the programs as they were thought to have occurred over time. Finally, these models were simulated and the results compared to the actual performance of the programs as given by the data. Working with program managers, numerical estimates of project-specific conditions were refined in order to improve the correspondence of simulated to actual program performance.
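The refinement step can be illustrated with a small, hedged sketch. The stand-in staffing model and the quarterly "actuals" below are hypothetical placeholders, and the grid search over two uncertain parameters is an illustrative procedure, not the calibration actually performed against the full program models and data.

```python
# Illustrative calibration sketch: tune uncertain model parameters so that the
# simulated staffing profile tracks the program's actual quarterly staffing.

import itertools

ACTUAL_STAFF = [20, 35, 55, 70, 80, 75, 60, 40, 25, 10]   # hypothetical quarterly data

def simulate_staffing(scope, quality, discovery_quarters, quarters=10, productivity=30.0):
    """Very small stand-in model: staff sized to finish the perceived remaining work."""
    to_do, undiscovered, known = scope, 0.0, 0.0
    staffing = []
    for q in range(quarters):
        remaining = to_do + known
        time_left = max(quarters - q, 1)
        staff = remaining / (productivity * time_left)
        accomplished = min(staff * productivity, remaining)
        from_known = min(known, accomplished)
        known -= from_known
        to_do -= accomplished - from_known
        undiscovered += (1.0 - quality) * accomplished
        discovered = undiscovered / discovery_quarters
        undiscovered -= discovered
        known += discovered
        staffing.append(staff)
    return staffing

def calibrate(scope=9000.0):
    """Grid-search two uncertain parameters to minimize squared error vs. actuals."""
    best = None
    for quality, discovery in itertools.product([0.6, 0.7, 0.8, 0.9], [1.0, 2.0, 3.0]):
        sim = simulate_staffing(scope, quality, discovery)
        error = sum((s - a) ** 2 for s, a in zip(sim, ACTUAL_STAFF))
        if best is None or error < best[0]:
            best = (error, quality, discovery)
    return best

if __name__ == "__main__":
    error, quality, discovery = calibrate()
    print(f"best fit: quality={quality}, discovery={discovery} quarters (SSE={error:.0f})")
```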

In the end, the two programs were accurately simulated by an identical model using their different starting conditions, external changes, and management policies. Having achieved the two accurate simulations, the next step in learning is to use the simulation model to understand what caused the differences between the two programs. How much results from differences in starting conditions? From differences in external factors? From differences in processes or other management initiatives? The next series of analyses stripped away the differences in factors, one set at a time, in order to quantify the magnitude of the performance differences caused by each set of conditions.

Working with Hughes managers, the first step was to isolate the differences in starting conditions and "external" differences—those in work scope, suppliers, and labor markets.4 In particular, Peace Shield: (1) had lower scope and fewer changes than the past program; (2) experienced fewer vendor delays and hardware problems; and (3) enjoyed better labor market conditions (less delay in obtaining needed engineers). The removal of those differences yielded the intermediate simulation shown in Fig. 4. Having removed from the troubled program simulation the differences in scope and external conditions, the simulation in Fig. 4 represents how Peace Shield would have performed but for the changes in managerial practices and processes. While a large amount of the performance difference clearly was attributable to external conditions, there is still a halving of cost and time achieved on Peace Shield remaining to be explained by managerial differences.

4 In practice, differences in starting conditions are removed separately from differences in external conditions. Then, when external conditions are removed, we see the impact of changes (i.e. unplanned events or conditions) on the project. This provides us with information about the potential sources and impact of risks to future projects.

We then systematically altered the remaining factor differences in the model that represented managerial changes. For example, several process changes made on Peace Shield (such as extensive "integrated product development" practices) significantly reduced rework discovery times, from an average of 7 months on the past program to 4 months on Peace Shield. Also, the policies governing the staffing of the work became far more disciplined on Peace Shield (e.g. start of software coding at 30% versus 10%). These and several other managerial changes were tested in the model. When all were made, the model had been transformed from that of a highly troubled program to that of a very successful one—and the performance improvements attributable to each aspect of the managerial changes were identified and quantified. A summarized version of the results (Fig. 5) shows that enormous savings were, and can be, achieved by the implementation of what are essentially "free" changes—if only they are known and understood.

Fig. 5. Where did the cost improvement come from?
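The flavor of this stripping-away analysis can be suggested with a short sketch. The stand-in cost model, the parameter groupings, and all numbers below are hypothetical, not the Hughes figures; the point is only the mechanics of layering in one group of differences at a time and attributing the resulting change.

```python
# Illustrative attribution sketch: apply groups of parameter differences one at a
# time to the past program's calibrated inputs and record the resulting cost change.
# The cost model, groupings, and numbers are hypothetical stand-ins.

def simulated_cost(p):
    """Stand-in for a full program simulation."""
    rework = (1.0 - p["quality"]) * p["discovery_months"] / 4.0
    return p["scope_hours"] * (1.0 + p["scope_growth"]) * (1.0 + rework + p["vendor_delay_factor"])

PAST_PROGRAM = {"scope_hours": 2_000_000, "scope_growth": 0.40,
                "vendor_delay_factor": 0.15, "quality": 0.75, "discovery_months": 7.0}

DIFFERENCE_GROUPS = [
    ("starting conditions", {"scope_hours": 1_600_000}),
    ("external conditions", {"scope_growth": 0.15, "vendor_delay_factor": 0.05}),
    ("managerial policies", {"quality": 0.85, "discovery_months": 4.0}),
]

def attribute_improvement(baseline, groups):
    params = dict(baseline)
    cost = simulated_cost(params)
    attribution = []
    for name, changes in groups:
        params.update(changes)          # layer this group of differences on
        new_cost = simulated_cost(params)
        attribution.append((name, cost - new_cost))
        cost = new_cost
    return attribution

if __name__ == "__main__":
    for name, saving in attribute_improvement(PAST_PROGRAM, DIFFERENCE_GROUPS):
        print(f"{name:22s} accounts for {saving:,.0f} hours of the improvement")
```

Because the attribution depends on the order in which the groups are applied, starting conditions and external conditions are handled separately in practice, as footnote 4 notes.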

That was the groundbreaking value of the preceding analysis: to clarify just how much improvement could be achieved by each of several policies and practices implemented on a new program. What remained was to systematize the analytical and learning capability in a manner that would support new and ongoing programs, and help them achieve continuing performance gains through a corporate "learning system" that would yield effective management lesson improvement and transfer across programs.

4. Putting lessons to work: the simulation-based learning system

Beyond the value of the immediate lessons from the cross-program comparative simulation analysis, the need was to implement a system that would continue to support rigorous management improvement and lesson transfer.


Development and implementation of the learning system began with adapting the simulation model to each of several Hughes programs, totaling several hundred million dollars in value and ranging in status from just starting to half-complete. Dozens of managers were interviewed in order to identify management policies believed to be "best practices"; these were then systematically tested on each program model to verify how universally they applied, and to quantify the magnitude of improvement that could be expected from each. All of the models were integrated into a single computer-based system accessible to all program managers. This system was linked to a database of the "best practice" observations that could be searched by users when considering what actions to take. Each manager could explore a wide variety of "what if" questions as new conditions and initiatives emerged, drawing upon his or her own experience, the tested ideas of the other programs' managers, the "best practice" database, and the extensive explanatory diagnostics from the simulation models. As each tested idea is codified and its impacts are quantified in new simulations, the amount and causes of performance differences are identified. Changes that produce benefits for any one program are flagged for testing in other programs as well, to assess the value of their transfer. In the system's first few months of use, large cost and time savings (tens of millions of dollars, many months of program time) were identified on the programs.

In order to facilitate expanding use and impact of the learning system, there is a built-in adaptor that allows an authorized user to set up a new program model by "building off" an existing program model. An extensive menu guides the user through an otherwise-automated adaptation. Upon completion, the system alerts the user to the degree of performance improvement that is required in order to meet targets and budgets. The manager can then draw upon tested ideas from other program managers and the best-practice database in identifying specific potential changes. These can in turn be tested on the new program model, and new high-impact changes logged in the database for future managers' reference, in the quest for ever-improving performance. Not only does this provide a platform for the organizational capture and dissemination of learning, it is also a most rigorous means of implementing a clear "past performance" basis for planning new programs and improvements. Finally, because there is a need for other managers to learn the lessons being harvested in the new system, a fully interactive trainer-simulator version of the same program model is included as part of the integrated system.
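One way to picture the "what if" and best-practice workflow in code is the hedged sketch below. The record fields, the stand-in cost model, and the example numbers are hypothetical illustrations; the actual system wraps full program models and a far richer database.

```python
# Hypothetical sketch of a "what if" run logged to a shared best-practice store.
# The parameter names, cost model, and records are illustrative only.

from dataclasses import dataclass, field

def simulated_cost(params):
    """Stand-in for a full program simulation: cost rises with rework and slow discovery."""
    rework_fraction = 1.0 - params["quality"]
    return params["scope_hours"] * (1.0 + rework_fraction * params["discovery_months"] / 4.0)

@dataclass
class BestPractice:
    name: str
    parameter_changes: dict
    tested_on: list = field(default_factory=list)
    simulated_savings: dict = field(default_factory=dict)

def what_if(program_name, baseline_params, practice, store):
    """Apply a candidate practice to one program model and log its simulated impact."""
    trial = {**baseline_params, **practice.parameter_changes}
    savings = simulated_cost(baseline_params) - simulated_cost(trial)
    practice.tested_on.append(program_name)
    practice.simulated_savings[program_name] = savings
    if practice not in store:
        store.append(practice)
    return savings

if __name__ == "__main__":
    store = []
    ipd = BestPractice("integrated product development",
                       parameter_changes={"discovery_months": 4.0})  # vs. 7 on the past program
    program = {"scope_hours": 2_000_000, "quality": 0.8, "discovery_months": 7.0}
    saved = what_if("Program A", program, ipd, store)
    print(f"simulated saving on Program A: {saved:,.0f} hours")
```

Changes that show large simulated savings on one program can then be pulled from the store and re-tested on the other program models, which is the transfer mechanism the paragraph above describes.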

Furthermore, software enhancements made since the Hughes system was implemented make such learning systems even more effective. First, moving the software to a web-based interface makes it far more widely available to project managers, who can now access it over the Internet from anywhere in the world. A second enhancement is to the arsenal of software tools available to the project manager. These include: an automatic sensitivity tester to determine high-leverage parameters; a potentiator to analyze thousands of alternative combinations of management policies and determine synergistic combinations; and an optimizer to aid in calibration and policy optimization. Finally, adding a Monte Carlo capability to the software allows the project manager to answer the question "Just how confident are you that this budget projection is correct?", given uncertainties in the inputs.
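A minimal sketch of what such a Monte Carlo capability involves is shown below, assuming a stand-in cost model and illustrative input distributions: uncertain inputs are sampled repeatedly, the model is run for each sample, and the resulting spread of outcomes indicates how much confidence a budget projection deserves.

```python
# Illustrative Monte Carlo sketch: sample uncertain inputs, run a small stand-in
# cost model for each sample, and report the spread of simulated program cost.

import random
import statistics

def program_cost(scope_hours, quality, discovery_months):
    """Stand-in cost model: more rework and slower discovery inflate cost."""
    return scope_hours * (1.0 + (1.0 - quality) * discovery_months / 4.0)

def monte_carlo(runs=5000, budget=2_600_000, seed=42):
    random.seed(seed)
    costs = []
    for _ in range(runs):
        scope = random.gauss(2_000_000, 150_000)                 # uncertain scope (hours)
        quality = min(max(random.gauss(0.80, 0.05), 0.5), 0.95)  # uncertain work quality
        discovery = random.uniform(4.0, 8.0)                     # months to surface errors
        costs.append(program_cost(scope, quality, discovery))
    within_budget = sum(c <= budget for c in costs) / runs
    return statistics.median(costs), within_budget

if __name__ == "__main__":
    median_cost, confidence = monte_carlo()
    print(f"median simulated cost: {median_cost:,.0f} hours")
    print(f"probability of finishing within budget: {confidence:.0%}")
```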

5. Conclusions

The learning system for program and project managers implemented at Hughes Aircraft is a first-of-a-kind in addressing the challenges cited at the outset. First, it has effectively provided a framework, the Rework Cycle model, that addresses the problem of viewing each project as a unique phenomenon from which there is little to learn for other projects. Second, it employs models that help explain the causality of those phenomena. Third, it provides systems that enable, and encourage, the use of past performance as a means of learning management lessons. And finally, it refines, stores, and disseminates the learning and management lessons of past projects to offset the limited career span of project managers. While simulation is not the same as "real life," neither does real life offer us the chance to diagnose rigorously, understand clearly, and communicate effectively the effects of our actions as managers. Simulation-based learning systems for managers will continue to have project and business impact that increasingly distances these program managers from competitors who fail to learn.

References

[1] Wheelwright SC, Clark KB. Revolutionizing product development: quantum leaps in speed, efficiency, and quality. New York: The Free Press; 1992.
[2] Cooper KG. Naval ship production: a claim settled and a framework built. Interfaces 1980;10(6):20–36.
[3] Cooper KG, Mullen TW. Swords & plowshares: the rework cycles of defense and commercial software development projects. American Programmer 1993;6(5). Reprinted in: Guidelines for successful acquisition and management of software intensive systems. Department of the Air Force, September 1994. p. 41–51.
[4] Cooper KG. The rework cycle: why projects are mismanaged. PM Network Magazine, February 1993, 5–7; The rework cycle: how it really works...and reworks... PM Network Magazine, February 1993, 25–28; The rework cycle: benchmarks for the project manager. Project Management Journal 1993;24(1):17–21.
[5] Lyneis JM, Cooper KG, Els SA. Strategic management of complex projects: a case study using system dynamics. System Dynamics Review 2001;17(3):237–60.
[6] Reichelt KS, Lyneis JM. The dynamics of project performance: benchmarking the drivers of cost and schedule overrun. European Management Journal 1999;17(2):135–50.
