PRODUCT
Number of remarks
Definition: The total number of remarks found in a given time period/phase/test type. A remark is a claim made by a test engineer that the application shows undesired behavior; it may or may not result in a software modification or a change to the documentation.
Purpose: One of the earliest indicators available once testing commences; provides initial indications about the stability of the software.

Number of defects
Definition: The total number of remarks found in a given time period/phase/test type that resulted in software or documentation modifications.
Purpose: A more meaningful way of assessing the stability and reliability of the software than the number of remarks, since duplicate remarks have been eliminated and rejected remarks discarded.

Remark status
Definition: The available statuses vary depending on the defect-tracking tool that is used. Broadly, they are: To be solved (logged by the test engineers and waiting to be taken over by the software engineer), To be retested (solved by the developer and waiting to be retested by the test engineer), and Closed (retested by the test engineer and approved).
Purpose: Tracks progress with respect to entering, solving and retesting the remarks. During a test phase, this status information shows how many remarks have been logged, solved, and are waiting to be resolved or retested.
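The remark life cycle described above can be tracked with a simple tally per status. A minimal sketch in Python, assuming hypothetical remark records with a `status` field (the field name and status strings are illustrative, not from any particular defect-tracking tool):

```python
from collections import Counter

# Hypothetical remark records; the status names follow the three broad
# statuses described above (to be solved, to be retested, closed).
remarks = [
    {"id": 1, "status": "to_be_solved"},
    {"id": 2, "status": "to_be_retested"},
    {"id": 3, "status": "closed"},
    {"id": 4, "status": "closed"},
]

def remark_status_counts(remarks):
    """Count remarks per status to track entry/solve/retest progress."""
    return Counter(r["status"] for r in remarks)

counts = remark_status_counts(remarks)
print(counts["closed"])  # 2
```

In practice these counts would come straight from the defect-tracking system's status field rather than from an in-memory list.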
Defect severity
Definition: The severity level of a defect indicates the potential business impact for the end user (business impact = effect on the end user x frequency of occurrence).
Purpose: Provides indications about the quality of the product under test. High-severity defects mean low product quality, and vice versa. At the end of a test phase, this information helps make the release decision based on the number of defects and their severity levels.
Defect severity index
Definition: An index representing the average severity of the defects.
Purpose: Provides a direct measurement of the quality of the product, specifically its reliability, fault tolerance and stability.
Time to find a defect
Definition: The effort required to find a defect.
Purpose: Shows how quickly defects are being found. This metric indicates the correlation between the test effort and the number of defects found.
Time to solve a defect
Definition: The effort required to resolve a defect (diagnosis and correction).
Purpose: Provides an indication of the maintainability of the product and can be used to estimate projected maintenance costs.
Test coverage
Definition: The extent to which testing covers the product's complete functionality.
Purpose: Indicates the completeness of the testing, though not its effectiveness. It can be used as a criterion for stopping testing.
Test case effectiveness
Definition: The extent to which test cases are able to find defects.
Purpose: Provides an indication of the effectiveness of the test cases and the stability of the software.

Defects/KLOC
Definition: The number of defects per 1,000 lines of code.
Purpose: Indicates the quality of the product under test. It can be used as a basis for estimating the defects to be addressed in the next phase or the next version.
PROJECT
Workload capacity ratio
Definition: Ratio of the planned workload to the gross capacity for the total test project or phase.
Purpose: Helps detect issues related to estimation and planning; also serves as an input for estimating similar projects.

Test planning performance
Definition: The planned value related to the actual value.
Purpose: Shows how well estimation was done.
Test effort percentage
Definition: Test effort is the amount of work spent on testing, in hours, days or weeks. Overall project effort is divided among multiple phases of the project: requirements, design, coding, testing and so on.
Purpose: The effort spent on testing, relative to the effort spent on development activities, indicates the level of investment in testing. This information can also be used to estimate similar projects in the future.

Defect category
Definition: An attribute of the defect in relation to the quality attributes of the product. Quality attributes of a product include functionality, usability, documentation, performance, installation and internationalization.
Purpose: Provides insight into the different quality attributes of the product.

PROCESS

Should be found in which phase
Definition: An attribute of the defect, indicating in which phase the remark should have been found.
Purpose: Shows whether we are able to find the right defects in the right phase, as described in the test strategy. Indicates the percentage of defects that migrate into subsequent test phases.
Residual defect density
Definition: An estimate of the number of defects that may remain unresolved in the product after a phase.
Purpose: The goal is to achieve a defect level that is acceptable to the clients. Defects are removed in each of the test phases so that few will remain.

Defect remark ratio
Definition: Ratio of the number of remarks that resulted in software modification to the total number of remarks.
Purpose: Provides an indication of the level of understanding between the test engineers and the software engineers about the product, as well as an indirect indication of test effectiveness.

Valid remark ratio
Definition: Percentage of valid remarks during a certain period. Valid remarks = number of defects + duplicate remarks + number of remarks that will be resolved in the next phase or release.
Purpose: Indicates the efficiency of the test process.

Bad fix ratio
Definition: Percentage of resolved remarks that resulted in new defects being created while resolving existing ones.
Purpose: Indicates the effectiveness of the defect-resolution process, plus indirect indications of the maintainability of the software.

Defect removal efficiency
Definition: The number of defects that are removed per time unit (hours/days/weeks).
Purpose: Indicates the efficiency of defect removal methods, as well as an indirect measurement of the quality of the product.

Phase yield
Definition: The number of defects found during a phase of the development life cycle vs. the estimated number of defects at the start of the phase.
Purpose: Shows the effectiveness of defect removal. Provides a direct measurement of product quality and can be used to determine the estimated number of defects for the next phase.

Backlog development
Definition: The number of remarks that are yet to be resolved by the development team.
Purpose: Indicates how well the software engineers are coping with the testing efforts.

Backlog testing
Definition: The number of resolved remarks that are yet to be retested by the test team.
Purpose: Indicates how well the test engineers are coping with the development efforts.

Scope changes
Definition: The number of changes that were made to the test scope.
Purpose: Indicates requirements stability or volatility, as well as process stability.
How to calculate
Number of remarks: Total number of remarks found.

Number of defects: Only remarks that resulted in modifying the software or the documentation are counted.

Remark status: This information can normally be obtained directly from the defect-tracking system, based on the remark status.
Defect severity: Every defect has a severity level attached to it. Broadly, these are Critical, Serious, Medium and Low.

Defect severity index: Two measures are required: the number of defects at each severity level and a weight assigned to each level: 4 (Critical), 3 (Serious), 2 (Medium), 1 (Low). Multiply the number of remarks at each severity level by that level's weight and add the totals; divide this sum by the total number of defects to determine the defect severity index.

Time to find a defect: Divide the cumulative hours spent on test execution and logging defects by the number of defects entered during the same period.
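The defect severity index calculation can be sketched in a few lines of Python. The weights follow the 4/3/2/1 scheme above; the function and mapping names are illustrative:

```python
# Weights assigned to each severity level, as described above.
SEVERITY_WEIGHTS = {"Critical": 4, "Serious": 3, "Medium": 2, "Low": 1}

def defect_severity_index(severity_counts):
    """Weighted average severity across all defects.

    severity_counts: mapping of severity level -> number of defects.
    """
    total_defects = sum(severity_counts.values())
    if total_defects == 0:
        return 0.0  # no defects logged yet
    weighted = sum(SEVERITY_WEIGHTS[level] * n
                   for level, n in severity_counts.items())
    return weighted / total_defects

# 2 Critical, 4 Serious, 6 Medium, 8 Low -> (8 + 12 + 12 + 8) / 20 = 2.0
print(defect_severity_index({"Critical": 2, "Serious": 4, "Medium": 6, "Low": 8}))
```

An index near 4 means the open defects are predominantly critical; an index near 1 means they are mostly cosmetic.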
Time to solve a defect: Divide the number of hours spent on diagnosis and correction by the number of defects resolved during the same period.

Test coverage: Coverage could be with respect to requirements, a functional topic list, business flows, use cases, etc. It is calculated as the number of items covered vs. the total number of items.

Test case effectiveness: Ratio of the number of test cases that resulted in logging remarks to the total number of test cases.

Defects/KLOC: Ratio of the number of defects found to the total number of lines of code, in thousands.
Workload capacity ratio: Computed at the beginning of the phase or project. Workload is determined by multiplying the number of tasks by their norm times; gross capacity is the planned working time. The ratio is the workload divided by the gross capacity.

Test planning performance: The ratio of the actual effort spent to the planned effort.

Test effort percentage: Divide the overall test effort by the total project effort.
Defect category: Divide the number of defects that belong to a particular category by the total number of defects.

Should be found in which phase: Calculate the number of defects that should have been found in previous test phases.
Residual defect density: This is a tricky issue. Released products have a basis for estimation; for new versions, industry standards coupled with project specifics form the basis for estimation.

Defect remark ratio: The number of remarks that resulted in software modification vs. the total number of logged remarks. Valid for each test type, during and at the end of test phases.

Valid remark ratio: Ratio of the total number of valid remarks to the total number of remarks found.

Bad fix ratio: Ratio of the total number of bad fixes to the total number of resolved defects. This can be calculated per test type, test phase or time period.

Defect removal efficiency: Divide the effort required for defect detection, defect resolution and retesting by the number of remarks. This is calculated per test type, during and across test phases.

Phase yield: Ratio of the number of defects found to the total number of estimated defects. This can be used during a phase and also at the end of the phase.

Backlog development: The number of remarks that remain to be resolved.

Backlog testing: The number of resolved remarks that have not yet been retested.

Scope changes: Ratio of the number of changed items in the test scope to the total number of items.
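The process-level ratios follow the same pattern; a minimal sketch with hypothetical function names and example counts:

```python
def defect_remark_ratio(modification_remarks, total_remarks):
    """Remarks that led to software modification over all logged remarks."""
    return modification_remarks / total_remarks

def valid_remark_ratio(valid_remarks, total_remarks):
    """Valid remarks (defects + duplicates + deferred) over all remarks."""
    return valid_remarks / total_remarks

def bad_fix_ratio(bad_fixes, resolved_defects):
    """Fixes that introduced new defects over all resolved defects."""
    return bad_fixes / resolved_defects

def phase_yield(defects_found, estimated_defects):
    """Defects found in the phase over the estimate made at its start."""
    return defects_found / estimated_defects

print(defect_remark_ratio(80, 100))  # 0.8
print(valid_remark_ratio(90, 100))   # 0.9
print(bad_fix_ratio(5, 50))          # 0.1
print(phase_yield(36, 40))           # 0.9
```

Each of these can be filtered by test type, test phase or time period before dividing, as noted in the descriptions above.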