Measurement and Metrics Estimation

SQM – 7014INT Semester 2, 2003

Measurement, Metrics, Indicators

• Measure: a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process.
  • A single data point (e.g. number of defects from a single review)
• Measurement: the act of determining a measure.
• Metric: a measure of the degree to which a system, component or process possesses a given attribute.
  • Metrics relate measures (e.g. average number of defects found in reviews)
  • Relate data points to each other
• Indicator: a metric or series of metrics that provide insight into a process, project or product.

Why Measure?

• Characterise
  • To gain understanding of processes, products, etc.
• Evaluate
  • To determine status with respect to plans.
• Predict
  • So that we may plan.
• Improve
  • Rational use of quantitative information to identify problems and strategies to remove them.

What to Measure?

• Process
  • Measure the efficacy of processes
  • What works, what doesn't
• Project
  • Assess the status of projects
  • Track risk
  • Identify problem areas
  • Adjust work flow
• Product
  • Measure predefined product attributes (generally related to ISO 9126 software characteristics)

Process Metrics •



majority focus on quality achieved as a consequence of a repeatable or managed process statistical SQA data •



defect removal efficiency •



defect categorization & analysis propagation from phase to phase

reuse data

Project Metrics

• Effort/time per SE task
• Defects detected per review hour
• Scheduled vs. actual milestone dates
• Changes (number) and their characteristics
• Distribution of effort on SE tasks

Product Metrics

• Focus on the quality of deliverables
• Measures of the analysis model
• Complexity of the design
  • Internal algorithmic complexity
  • Architectural complexity
  • Data flow complexity
• Code measures (e.g., Halstead)
• Measures of process effectiveness
  • e.g., defect removal efficiency

Metrics Guidelines

• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who have worked to collect measures and metrics.
• Don't use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered negative. These data are merely an indicator for process improvement.
• Don't obsess on a single metric to the exclusion of other important metrics.

Software Measurement

• Direct Measures
  • Process
    • Cost
    • Effort
  • Product
    • Lines of Code (LOC)
    • Resource usage
• Indirect Measures
  • Product
    • Functionality (FP)
    • Maintainability
    • …ability

Measurement Criteria

• Measurements should be:
  • Objective
    • Repeatable
  • Timely
    • Available in time to affect development/maintenance
  • Representative
    • Degree of representation of the customer's perception
  • Available
    • Difficulty (cost) to obtain
  • Controllable
    • Extent to which the value can be changed by action

Quality Metrics

• Quality metrics mostly involve the measurement of defects and problems.
• Defect: an inconsistency with a specification (assuming the specification is correct); any flaw or imperfection in a software work product or software process.
• Problem: a human encounter with software that causes difficulty, doubt, or uncertainty in the use or examination of the software.
• Defect and problem data come primarily from activities designed to find problems (inspections, testing) or from users of an operational system.

Measuring Defects

• In defect-finding activities (synthesis, reviews and inspections, testing, customer service) log defects and problems, recording for example:
  • CSCI
  • Date Found
  • Location Found
  • Activity Found In
  • Who Found It
  • Type and Severity of Defect
  • Similarity with Other Defects
  • Date Corrected
  • Time to Correct
  • Who Corrected
  • etc…
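
For illustration only (the field names below are hypothetical, taken from the list above rather than from any particular tool), a defect log entry could be represented as a simple record:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class DefectRecord:
        """One row of a defect log, using the fields listed above."""
        csci: str                              # configuration item the defect belongs to
        date_found: date
        location_found: str                    # e.g. module or document section
        activity_found_in: str                 # e.g. "design inspection", "unit test"
        found_by: str
        defect_type: str                       # e.g. "logic", "interface"
        severity: str                          # e.g. "major", "minor"
        similar_to: Optional[str] = None       # id of a similar, earlier defect
        date_corrected: Optional[date] = None
        time_to_correct_hours: Optional[float] = None
        corrected_by: Optional[str] = None

    # Example entry from a code inspection
    entry = DefectRecord(
        csci="Parser", date_found=date(2003, 9, 1), location_found="parse.c",
        activity_found_in="code inspection", found_by="reviewer A",
        defect_type="logic", severity="major")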

Quality Metrics

• Examples:
  • Defects per KLOC
  • Defects per page
  • % Defect free
  • Mean Time to Failure, Mean Time to Repair (Availability)
  • Phase yields
  • Review hours per defect found
  • Defects found by review / defects found by testing
  • Change activity per stage
  • Change activity per module
  • Software structure and complexity

Metrics and the Quality Plan

• Quality plan commits to specific numerical targets/goals relating to the overall goals of the organisation
• Measurement programs are established to allow tracking of conformance to goals
• 3 outcomes:
  • Track almost as planned
  • Significantly worse than plan
  • Significantly better than plan
• Plan should be aggressive!!!

Making Software Quality Estimates

• Data from recently completed development projects in your organisation, similar to the current project, is examined to form the basis for the estimate
• Significant product or process differences are taken into account and their potential effects estimated
• Project the estimate based on this historical data and the planned process changes
• Compare with organisational goals
• Suggest improvements to the process

Planning a Defect Profile

• For each stage of the SDLC:
  • Estimate the number of defects likely to be injected (n)
  • Estimate the removal efficiency (planned phase yield %) (y)
  • Calculate the number of defects likely to be removed (y×n)
  • Calculate the number remaining (n − y×n)
  • Carry the remainder forward and add the estimate of defects likely to be injected in the next stage
  • Calculate the cumulative removal efficiency (as %)
• A small sketch follows; a worked example is on the next slides
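
A minimal sketch of the calculation above, assuming per-stage injection estimates and planned yields as inputs; the figures used here are the "New Project" numbers from the slides that follow:

    def defect_profile(stages, injected, yields):
        """For each stage: add injected to residual, remove yield%, carry the rest forward."""
        residual = 0
        total_injected = total_removed = 0
        rows = []
        for stage, n, y in zip(stages, injected, yields):
            present = residual + n                    # defects available for removal
            removed = round(present * y)              # planned phase yield applied
            remaining = present - removed
            total_injected += n
            total_removed += removed
            rows.append((stage, residual, n, removed, remaining,
                         f"{y:.0%}", f"{total_removed / total_injected:.0%}"))
            residual = remaining                      # carried into the next stage
        return rows

    stages   = ["REQ", "HLD", "DLD", "CODE", "U/T", "I/T", "S/T"]
    injected = [20, 11, 15, 60, 2, 1, 1]
    yields   = [0.70, 0.65, 0.65, 0.65, 0.60, 0.50, 0.55]

    for row in defect_profile(stages, injected, yields):
        print(row)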

Planned Injection Rates and Removal Efficiency

Defects                  REQ    HLD    DLD    CODE   U/T    I/T    S/T

Prior Project
  Injection Rate         30     11     23     60     5      2      2
  Removal Efficiency     70%    65%    50%    60%    57%    47%    55%
  Cumulative Efficiency  70%    83%    76%    76%    88%    93%    96%

New Project
  Injection Rate         20     11     15     60     2      1      1
  Removal Efficiency     70%    65%    65%    65%    60%    50%    55%
  Cumulative Efficiency  70%    80%    85%    78%    91%    95%    97%

Planned Defect Profile

Defects                  REQ    HLD           DLD    CODE   U/T    I/T    S/T
Residual                 0      6             6      7      23     10     5
Injected                 20     11 (17)       15     60     2      1      1
Removed                  14     11 (17×.65)   14     44     15     6      3
Remaining                6      6             7      23     10     5      3
Removal Efficiency       70%    65%           65%    65%    60%    50%    55%
Cumulative Efficiency    70%    80% (25/31)   85%    78%    91%    95%    97%

Cumulative Defects Injected   20    31    46    106    108    109    110
Cumulative Defects Removed    14    25    39     83     98    104    107

Plan

• Inspection Phase Yields
  • Requirements – 70%
  • HLD – 65%
  • DLD – 65%
  • Code – 65%
• Total Inspection Yield – 78%
• Testing Yields etc...
• Percentage Defect Free after System Testing
  • 97%

Predictive - No Inspection

Predict % defect free with no inspections.

Defects                  REQ    HLD    DLD    CODE   U/T    I/T    S/T
Residual                 0      20     31     46     106    43     22
Injected                 20     11     15     60     2      1      1
Removed                  0      0      0      0      65     22     13
Remaining                20     31     46     106    43     22     10
Removal Efficiency       0%     0%     0%     0%     60%    50%    55%
Cumulative Efficiency    0%     0%     0%     0%     60%    80%    91%

Cumulative Defects Injected   20    31    46    106    108    109    110
Cumulative Defects Removed    0     0     0      0     65     87    100

As a comparison we also need to consider the historical cost of removing defects in later stages (see last week's notes).

Locating Data

• Defect Reporting
  • Defect Log
    • Where found, date found, type, stage injected, stage removed, consequences of removal, time to repair, ...
  • Inspection Report forms
    • Location, severity, inspection rates, yields, etc...
• Direct measurement of time, size, etc. is also necessary

Types of Defect

• User Interface / Interaction Defects
• Programming Defects
  • Data
  • Functionality
  • Logic
  • Standards
  • Component Interface
  • ...
• Operating Environment

Availability and other Reliability Metrics

• An indirect measure of a system's ability to provide available functionality when required
• MTBF – Mean Time Between Failures
• MTTR – Mean Time to Repair/Recover
• Availability = (1 − MTTR/(MTTR + MTBF)) × 100 = (MTBF/(MTBF + MTTR)) × 100
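
A quick check of the formula with illustrative figures:

    def availability(mtbf_hours, mttr_hours):
        """Percentage of time the system is available: MTBF / (MTBF + MTTR) x 100."""
        return mtbf_hours / (mtbf_hours + mttr_hours) * 100

    print(availability(mtbf_hours=990, mttr_hours=10))   # 99.0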

Metrics for Evaluating Design

• A good modular design exhibits properties of strong cohesion and weak coupling.
• Cohesion metrics – measure the adhesion within components (relationships between data tokens (variables))
• Coupling metrics – see next slide

Coupling Metrics

• d(i) – number of input data parameters
• d(o) – number of output data parameters
• c(i) – number of input control parameters
• c(o) – number of output control parameters
• g(d) – number of global variables used as data
• g(c) – number of global variables used as control
• w – number of modules called (fan-out)
• r – number of modules calling this module (fan-in)

• m(c) = k/M, where k = 1 (can be adjusted) and
  M = d(i) + a×c(i) + d(o) + b×c(o) + g(d) + c×g(c) + w + r
  • a, b, c are constants and may be adjusted
• The lower the value of M, the weaker the coupling
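
A minimal sketch of the measure above, assuming k = 1 and the adjustable constants a, b, c all set to 1 (the slide leaves their values open):

    def coupling(d_i, c_i, d_o, c_o, g_d, g_c, w, r, k=1, a=1, b=1, c=1):
        """Module coupling indicator m(c) = k / M; a larger M means tighter coupling."""
        M = d_i + a * c_i + d_o + b * c_o + g_d + c * g_c + w + r
        return k / M

    # A module with 2 data inputs, 1 data output, 1 global data variable,
    # fan-out of 2 and fan-in of 1
    print(coupling(d_i=2, c_i=0, d_o=1, c_o=0, g_d=1, g_c=0, w=2, r=1))   # 1/7 ≈ 0.14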

Complexity Metrics

• McCabe's Cyclomatic Complexity
  • Based on the number of independent paths through a flow graph (explained later in the testing lectures)
• There is a relationship between complexity and defects and maintainability (and therefore the time to repair defects)
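
As a preview of the testing lectures, cyclomatic complexity for a single connected flow graph can be computed with the standard formula V(G) = E − N + 2, using the graph's edge and node counts:

    def cyclomatic_complexity(edges, nodes):
        """McCabe's V(G) = E - N + 2 for a single connected flow graph."""
        return edges - nodes + 2

    # e.g. a flow graph with 8 edges and 7 nodes has V(G) = 3
    print(cyclomatic_complexity(edges=8, nodes=7))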

Halstead's Metrics for Software Source Code

• Software complexity measures:
  • n(1) – number of distinct operators
  • n(2) – number of distinct operands
  • N(1) – total number of operator occurrences
  • N(2) – total number of operand occurrences
  • Length: N = n(1)·log2 n(1) + n(2)·log2 n(2)
  • Volume: V = N·log2(n(1) + n(2))
  • Volume Ratio: L = (2/n(1)) × (n(2)/N(2))
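
A minimal sketch of the quantities above. The operator/operand counts would normally come from a language-specific tokenizer, so they are passed in directly here; note that Halstead's volume is conventionally computed from the actual length N(1) + N(2), while the log formula above is the estimated length:

    import math

    def halstead(n1, n2, N1, N2):
        """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
        length = N1 + N2                                     # actual program length
        estimated_length = n1 * math.log2(n1) + n2 * math.log2(n2)
        volume = length * math.log2(n1 + n2)                 # V = N log2(n1 + n2)
        volume_ratio = (2 / n1) * (n2 / N2)                  # estimated program level L
        return length, estimated_length, volume, volume_ratio

    print(halstead(n1=10, n2=15, N1=40, N2=55))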

Maintenance Metrics

• Software Maturity Index (SMI)
  • M = number of modules in the current release
  • F(c) = number of changed modules in the current release
  • F(a) = number of added modules in the current release
  • F(d) = number of modules removed from the previous release
• SMI = [M − (F(a) + F(c) + F(d))] / M
• As SMI approaches 1 the product begins to stabilise.
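
A minimal sketch of the index (the figures are illustrative):

    def software_maturity_index(m, f_added, f_changed, f_deleted):
        """SMI = [M - (F(a) + F(c) + F(d))] / M; approaches 1 as the product stabilises."""
        return (m - (f_added + f_changed + f_deleted)) / m

    # Release with 120 modules: 4 added, 10 changed, 2 removed since the previous release
    print(software_maturity_index(m=120, f_added=4, f_changed=10, f_deleted=2))   # ≈ 0.87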

Size and Function Oriented Metrics for Planning and Tracking

Size Oriented Metrics

• Normalising quality/productivity metrics considering size
• Size is usually LOC, KLOC, or pages of documentation
  • Defects per KLOC
  • $ per LOC
  • Pages of documentation per KLOC
  • LOC per person-month
  • $ per page of documentation

Function Oriented Metrics

• Normalising quality/productivity metrics considering functionality
• Indirect measurement: can use the function point, based on other direct attributes
  • Defects per function point
  • $ per FP
  • FP per person-month

Estimating LOC

• Based on historical data, estimate a range of values:
  • Optimistic (Opt)
  • Most Likely / Realistic (R)
  • Pessimistic (Pess)
• Calculate an expected value as a weighted average:
  S = (Opt + 4R + Pess) / 6

What is a LOC?

• Subjective
• Language dependent
• Organisations will often establish their own standard definition.

Function Points

• A measure of the amount of functionality in a product
• Can be estimated early in a project, before a lot is known
• Measure a software project by quantifying the processing functionality associated with major external data or control input, output or file types

Computing Function Points - Step 1

• Establish counts for the information domain values:
  • Number of User Inputs
  • Number of User Outputs
  • Number of User Inquiries
  • Number of Files
  • Number of External Interfaces (files)

Computing Function Points - Step 2

• Associate a complexity value with each count
  • Simple, Average, Complex
  • Determined using organisational experience-based criteria (e.g. Table II-2 in the COCOMO II reading)
• Compute the count total

Computing Function Points - Step 3

• Calculate the complexity adjustment values (Fi, where i ∈ [1..14]):
  • Does the system require reliable backup and recovery?
  • Are data communications required?
  • Are there distributed processing functions?
  • Is performance critical?
  • Will the system run in an existing, heavily utilised operating environment?
  • Does the system require on-line data entry?
  • Does the on-line data entry require the input transaction to be built over multiple screens or operations?
  • Are the master files updated on-line?
  • Are the inputs, outputs, files or inquiries complex?
  • Is the internal processing complex?
  • Is the code designed to be reusable?
  • Are conversion and installation included in the design?
  • Is the system designed for multiple installations in different organisations?
  • Is the application designed to facilitate change and ease of use by the user?
• Answer each question using a scale of 0 (N/A) to 5 (absolutely essential)
• Sum the 14 complexity adjustment values: ΣFi

Computing Function Points - Step 4

• Calculate FP = count total × (0.65 + 0.01 × ΣFi)
  • 0.65 and 0.01 are empirically derived constants
• A weighted average of FPs should also be calculated
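
A minimal sketch of steps 1-4. The information-domain weights below are the commonly used "average" weights, and the counts and Fi answers are invented for illustration; an organisation would substitute its own table:

    # Assumed "average" complexity weights per information-domain value
    WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

    def function_points(counts, adjustment_values):
        """FP = count total x (0.65 + 0.01 x sum(Fi)), with 14 Fi each rated 0..5."""
        count_total = sum(counts[k] * WEIGHTS[k] for k in counts)
        assert len(adjustment_values) == 14
        return count_total * (0.65 + 0.01 * sum(adjustment_values))

    counts = {"inputs": 12, "outputs": 7, "inquiries": 5, "files": 4, "interfaces": 2}
    fi = [3, 4, 1, 0, 2, 3, 5, 2, 1, 1, 0, 2, 4, 3]          # answers to the 14 questions
    print(function_points(counts, fi))                        # ≈ 150.7 FP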

Reconciling FP and LOC

• Most effort estimation models require LOC
• Example conversion ratios:
  • LOC/FP for C is estimated at 128 LOC
  • LOC/FP for C++ is estimated at 64 LOC
  • LOC/FP for Assembly is estimated at 320 LOC
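
Converting an FP estimate to LOC for an effort model is then a single multiplication, using the ratios above:

    LOC_PER_FP = {"C": 128, "C++": 64, "Assembly": 320}   # ratios from the list above

    def fp_to_loc(fp, language):
        return fp * LOC_PER_FP[language]

    print(fp_to_loc(150, "C++"))   # 9600 LOC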

Project Planning Estimates

• Scope must be defined
• Task and/or functional decomposition is required
• Use historical data
• Use at least two approaches

Problem based estimation

• Function decomposition based:

  Define product scope;
  Identify functions by decomposing scope;
  Do while functions remain
      Select a function j
      Estimate LOC/FP for function j
      Refer to historical productivity metrics relating LOC or FP to person-months etc.

Example LOC

Function                     Estimated LOC
User Interface and control   1555
Language Parser (GCL)        450
Semantic Analysis            11250
Algebraic Simplifier         3335
Database Management          2505
Estimated Lines of Code      19095

e.g. For the Algebraic Simplifier, Pess = 5000, Opt = 1000, R = 3500, so S = (1000 + 4×3500 + 5000)/6 ≈ 3335 LOC. With historical productivity of 500 LOC per person-month, the 19095-LOC project requires about 38.2 person-months. A burdened labour rate of $8000/month gives $16/LOC, so the estimated cost is about $305,520.
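
The same arithmetic as a short script, using the figures from the example above:

    def expected_loc(optimistic, realistic, pessimistic):
        """Three-point estimate: S = (Opt + 4R + Pess) / 6."""
        return (optimistic + 4 * realistic + pessimistic) / 6

    # Algebraic Simplifier: Opt = 1000, R = 3500, Pess = 5000  ->  about 3333 LOC
    print(expected_loc(1000, 3500, 5000))

    total_loc = 19095          # sum of the function estimates in the table
    productivity = 500         # LOC per person-month (historical)
    labour_rate = 8000         # burdened $ per person-month

    effort_pm = total_loc / productivity           # ≈ 38.2 person-months
    cost = total_loc * labour_rate / productivity  # $16 per LOC  ->  $305,520
    print(effort_pm, cost)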

Process Based Estimation

• Decompose the process into a small set of tasks
• Estimate, based on experience, the effort required to achieve each task
• Identify major functions
• For each function, a series of software process activities must be performed:
  • Analysis
  • Design
  • Code
  • Test and Verification
• Estimate the effort for each

Empirical Estimation Models

E = A + B × (ev)^C

• E is the effort in person-months
• A, B, C are empirically derived constants
• ev is the estimation variable (LOC or FP)
• Most models also involve a project adjustment component that allows E to be adjusted for variables such as staff experience etc.

COCOMO

• COCOMO = COnstructive COst MOdel
• Basic Model:

Software Project   ab    bb    cb    db
organic            2.4   1.05  2.5   0.38
semi-detached      3.0   1.12  2.5   0.35
embedded           3.6   1.20  2.5   0.32

COCOMO

• Effort: E = ab (KLOC)^bb (person-months)
• Duration: D = cb E^db (months)
• Adjustments to the models can be made by collecting your own statistics within a company
• Forms and data collection sheets are a helpful tool for consistent and homogeneous data collection
  • Examples: evaluating time logs data contents
• Visit Barry Boehm's website
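
A minimal sketch of the basic model, with coefficients taken from the table above:

    # (ab, bb, cb, db) coefficients from the basic COCOMO table
    COCOMO_BASIC = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def cocomo_basic(kloc, project_type):
        """Returns (effort in person-months, duration in months)."""
        ab, bb, cb, db = COCOMO_BASIC[project_type]
        effort = ab * kloc ** bb        # E = ab (KLOC)^bb
        duration = cb * effort ** db    # D = cb E^db
        return effort, duration

    print(cocomo_basic(kloc=32, project_type="organic"))   # roughly 91 pm over 14 months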

COCOMO II

• E_nominal = A × (size)^B person-months
  • A is a constant which accounts for the multiplicative effects of increasing size on effort
    • For the assignment, A = 2.94
  • B is a scaling factor that accounts for economies of scale
    • B = 1.01 + 0.01 × ΣWi
    • Each Wi is a weight for a scaling factor rated from 0 to 5
    • Scaling factors:
      • Precedentedness
      • Development Flexibility
      • Risk Resolution
      • Team Cohesion
      • Process Maturity (CMM based)

COCOMO II

• E_adjusted = E_nominal × ∏(i=1..7) EMi
  • There are 7 weighted Effort Multipliers (EM) in the early design model
• For the assignment, calculate E_nominal
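
A minimal sketch of the nominal and adjusted calculations, assuming A = 2.94 as stated above; the scale-factor weights and effort multipliers below are placeholders, not values from the model tables:

    import math

    def cocomo2_nominal(size_ksloc, scale_weights, A=2.94):
        """E_nominal = A x size^B, with B = 1.01 + 0.01 x sum(Wi) (each Wi rated 0..5)."""
        B = 1.01 + 0.01 * sum(scale_weights)
        return A * size_ksloc ** B

    def cocomo2_adjusted(e_nominal, effort_multipliers):
        """E_adjusted = E_nominal x product of the 7 early-design effort multipliers."""
        return e_nominal * math.prod(effort_multipliers)

    # Five scale factors (Precedentedness, Flexibility, Risk Resolution,
    # Team Cohesion, Process Maturity), each rated 3, for a 20 KSLOC product
    e_nom = cocomo2_nominal(20, scale_weights=[3, 3, 3, 3, 3])
    print(e_nom)                                    # nominal person-months (≈ 95)
    print(cocomo2_adjusted(e_nom, [1.0] * 7))       # unchanged with neutral multipliers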
