Using the Testing Maturity Model in practical test-planning and post-evaluation

Klaus Olsen, Consultant, Softwaretest.dk, Denmark
Email: [email protected]
Tel: +45 4615 5012

Poul Staal Vinje, Consultant, VR Partners, Denmark
Email: [email protected]
Tel: +45 4615 5012
Abstract
We have used the Testing Maturity Model (TMM) as the basis for an evaluation of real-world testing. We have found that, in addition to being useful for overall maturity improvement in an organisation, the TMM is useful in the planning process of testing activities. Furthermore, the TMM is useful in the post-evaluation of the testing process. The TMM so far lacks a formal assessment model, but its levels and recommended practices are useful in the evaluation.
SM Testing Maturity Model and TMM are registered service marks of Illinois Institute of Technology.
Improving the Software Testing Process

The task
How do we improve the efficiency of software testing and at the same time reduce the cost of testing? In other words, how do we make software testing both better and cheaper? The answer, of course, is process improvement. Klaus Olsen and Poul Staal Vinje work with software testing at an operational level, but in addition they work with some of the various models available for Software Process Improvement (SPI).

The need
The need for efficient software testing has always existed, because users expect software that works. The need for process improvement is apparent, given the number of defects delivered and the time and money consumed in testing. We expect the future to present even more extensive demands on the testing process: we believe users will demand better technical quality, and we expect development organisations to demand less expensive testing.

The solution
The solution is not more people equipped with more tools. As already mentioned, the solution is process improvement. The Testing Maturity Model (TMM) [1] was developed for this purpose, as an extension of CMM and SPICE. The TMM is a good choice, and we will work with this and other SPI models in the future.

The history
The TMM has been developed by Ilene Burnstein et al. at the Computer Science Department at Illinois Institute of Technology. It reflects maturity growth through 5 levels, well known from the Software Engineering Institute's Capability Maturity Model (CMM) [2]. Burnstein also refers to the five periods in the evolution of testing proposed by Gelperin and Hetzel [3], and to the parallel with Beizer's five-phase model of an individual tester's maturity growth [4]. To sum up, we see the TMM as the result of many people's work over the last decade on defining useful process improvements that will also be useful in testing.
TMM – usability
The TMM can be used by:
♦ Internal teams to evaluate the current testing maturity
♦ Management to launch specific improvement initiatives
♦ Development projects to improve a specific test
♦ Users and contractors to define their roles in testing

TMM – the model
The TMM consists of 5 levels of testing maturity, each level with maturity goals identifying testing improvements that must be addressed to achieve the next level. See figure 1.

SM CMM is a registered service mark of Carnegie Mellon University.
EuroSTAR '98
Level 5: Optimization, Defect Prevention, and Quality Control • Test process optimization • Quality control • Application of process data for defect prevention
Level 4: Management and Measurement • Software quality evaluation • Establish a test measurement program • Establish an organization-wide review program
Level 3: Integration
• Control and monitor the testing process
• Integrate testing into the software lifecycle
• Establish a technical training program
• Establish a software test organization
Level 2: Phase Definition • Institutionalize basic testing techniques and methods • Initiate a test planning process • Develop testing and debugging goals
Level 1: Initial

Figure 1, Testing Maturity Model with maturity goals at each level.

More information
You will find detailed information in the two articles "Developing a Testing Maturity Model: Part I and II" [1], which were published in the U.S. Air Force magazine Crosstalk, August and September 1996.

Our experiment
What we have done is to try to take advantage of the TMM in terms of predicting the course of a testing process. The background is that our clients typically are at level 1 or 2. Advancing one level usually means working with SPI for 1-2 years. This is acceptable, and most of them work with improvements. But if the model addresses the correct issues, it should be possible to use it as an instrument for prediction. We already use other models for predicting the success of testing. They are "home-made" in the sense that they build on practical experience and observations. If the TMM refers to relevant, and
first and foremost to significant, elements in software testing, then we should be able to use the model as a risk analyser. So, we have chosen three projects in which we personally have been involved, and tried to answer the following three questions:
1. Would it have been possible to predict the overall profile of the testing process, in terms of the degree of success and satisfaction as expressed by the stakeholders in the specific project?
2. Would it have been possible to predict the specific major problems in the testing?
3. Could we have improved the testing if we had applied the TMM in the process?
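As noted in the abstract, the TMM so far lacks a formal assessment model. To make concrete how the maturity goals in figure 1 could drive a simple assessment, here is a sketch; the scoring rule (a level counts only when its goals and those of all lower levels are fully satisfied) is our own simplification, not a defined part of the TMM.

```python
# Illustrative sketch only: the goal lists are taken from figure 1,
# but the scoring rule below is our simplification, not part of the TMM.
MATURITY_GOALS = {
    2: ["Develop testing and debugging goals",
        "Initiate a test planning process",
        "Institutionalize basic testing techniques and methods"],
    3: ["Control and monitor the testing process",
        "Integrate testing into the software lifecycle",
        "Establish a technical training program",
        "Establish a software test organization"],
    4: ["Software quality evaluation",
        "Establish a test measurement program",
        "Establish an organization-wide review program"],
    5: ["Test process optimization",
        "Quality control",
        "Application of process data for defect prevention"],
}

def assess_level(satisfied_goals):
    """Return the highest TMM level for which this level's goals and
    all lower levels' goals are satisfied. Level 1 requires no goals."""
    level = 1
    for lvl in sorted(MATURITY_GOALS):
        if all(goal in satisfied_goals for goal in MATURITY_GOALS[lvl]):
            level = lvl
        else:
            break
    return level
```

An organisation that satisfies only the level 3 goals, for example, would still be assessed at level 1 under this rule, because the level 2 foundation is missing.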
Project 1: An integrated enterprise business system
The assessment puts the organisation at level 2 [5]. The TMM illuminates weaknesses in monitoring, management and measurement. These topics are level 3, section 4 and level 4, sections 2 and 3. The system is developed in India, but as a Joint Application Development, with close interaction between the vendor and the user organisation.

Predictions using the TMM give the project fine possibilities in planning and organising the test. This turned out to be true. Furthermore, an acceptable test efficiency, defined as the percentage of errors found in test, could be expected. This was the case as well. The defect ratio was average (Defect Ratio: 3,000 / 14,000 = roughly 1/4 per Function Point), but that is because the defect density was average in the first place.

Predictions of the type of problems are based on the aforementioned weaknesses. These weaknesses indicate too late action on deficiencies in the testing process. It turned out that the test process experienced two major problems: 1) The vendor did not make proper use of the otherwise high-quality testing material. 2) The user organisation reacted too late and too slowly on the test being late and inadequate. Better monitoring of the testing process would have illuminated the vendor problems. Note that the TMM cannot predict the vendor problems, but it predicts that the user organisation reacts in an inappropriate way.
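The defect ratio quoted above can be checked in a couple of lines; the figures are the ones reported for Project 1 (about 3,000 defects found in test against a system size of 14,000 function points):

```python
# Figures as reported for Project 1.
defects_found = 3_000      # defects found in test
function_points = 14_000   # system size in function points

defect_ratio = defects_found / function_points
print(f"{defect_ratio:.2f} defects per function point")  # prints 0.21
```

0.21 defects per function point is what the paper rounds to "1/4 per Function Point".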
Project 2: Civil Service
This is a government system. The major new release of the system contains partly new functionality and partly Year 2000 (Y2K) corrections. The testing is therefore partly a user acceptance test of the enhancements and partly a full Y2K test. Not a good mixture, but the real world does not always offer good mixtures.
The test is managed by a large co-ordination group because of the number of vendors involved. The organisation is certainly at level 1 [6]. The TMM prediction could point in many directions, as a number of areas are immature. If we consider the level 2 initiatives as the basic set of disciplines necessary in an organisation, the question is whether a focus on these issues would have helped the project. The most serious problem turned out to be a lack of resources: the promised effort was not available. So the question is: would this problem have surfaced earlier if we had focused on the basic level 2 initiatives? The answer is yes. In this case we could not only have illuminated the problem, but actually brought solutions in place. We could have insisted on test planning with actual resources.
Project 3: Telecommunication billing system, reuse and added new functionality
This is a project where the kernel billing and customer care system is reused and new functionality is added. The system is being developed at three geographical locations, with testers placed at all three locations. The project is still ongoing, but we are using the assessment as a temperature measurement, both to be able to compare it in this presentation and to be able to adjust the future of the project.

The man-effort on the project is 64 man-years, with testing consuming 9 man-years and overall project leadership taking another 6 man-years. This leaves a tester-to-developer ratio between 1:5 and 1:6. According to Edward Kit [7] this is a little below average, but at the same time probably better than what we might experience in other projects. Kit refers to surveys with tester-to-developer ratios of 1:7, 1:10 or even 1:20.

The assessment puts the organisation at level 1 [8], but close to rising to level 2. The TMM illuminates smaller weaknesses in test planning, techniques and methods, which need attention if level 2 is to be accomplished. Additionally, the TMM identifies weaknesses in monitoring, management and measurement; these topics are level 3, section 4 and level 4, sections 2 and 3.

Predictions using the TMM give the project fine possibilities in organising and integrating the test. This turned out to be true. Especially important was the clear organisation, with a test project group with leadership, which made it possible to keep people assigned to testing when the deadline came close and staff were needed to finish the programming. Predictions on the type of problems point at test planning as an area of concern, and we could improve the testing on the project by enforcing the test plan template more rigorously and by selecting best practice as an example to be followed in our project group.
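The tester-to-developer ratio quoted above follows directly from the reported staffing figures; treating all effort outside testing and project leadership as development is our simplifying assumption for this sketch:

```python
# Project 3 staffing, as reported: 64 man-years in total, of which
# 9 go to testing and 6 to project leadership. Treating the rest as
# development effort is an assumption made for this illustration.
total_effort = 64
testing = 9
leadership = 6
development = total_effort - testing - leadership  # 49 man-years

ratio = development / testing
print(f"Tester to developer ratio 1:{ratio:.1f}")  # prints 1:5.4
```

1:5.4 falls between the 1:5 and 1:6 quoted in the text.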
This also implies that we could have improved our testing process by using TMM when planning the project.
References
1. Burnstein, Ilene, Taratip Suwanassart and C.R. Carlson, Illinois Institute of Technology, "Developing a Testing Maturity Model: Part I", published in the U.S. Air Force magazine Crosstalk, August 1996. (Use http://www.stsc.hill.af.mil/SWTesting/index.html to find the original issue and articles.)
2. Paulk, M., B. Curtis, M. Chrissis, and C. Weber, Capability Maturity Model for Software (Version 1.1), www.sei.cmu.edu/products/publications/93.reports/93.tr.024.html
3. Gelperin, D., and B. Hetzel, "The Growth of Software Testing," CACM, Vol. 31, No. 6, 1988, pp. 687-695.
4. Beizer, Boris, Software System Testing Techniques, 2d ed., Van Nostrand Reinhold, New York, 1990.
5. TMM Assessment Questionnaire, project 1, attachment to this paper.
6. TMM Assessment Questionnaire, project 2, attachment to this paper.
7. Kit, Edward, Software Testing in the Real World, Addison-Wesley, 1995.
8. TMM Assessment Questionnaire, project 3, attachment to this paper.