
Minds Meeting Metrics
The SEI Performance Benchmarking Consortium
December 12, 2006 – Boston SPIN

Michael A. Bragen, Managing Partner
Chair, PBC Specification Committee
Software Productivity Research, LLC
http://www.SPR.com

SPR • P: +1.781.273.0140 • F: +1.781.273.5176 • www.spr.com

Presentation Topics

Introduction
• Overview – The Performance Benchmarking Consortium
• Motivation & Benefits
• Ground covered

PBC Working Teams
• Concept of operations
• Voice of customer
• Specification

Next Steps
• Near-term goals
• How you can participate

Draft version 0.1

© 2006 Carnegie Mellon University © 2006 Software Productivity Research, LLC


Motivation

• Organizations want a way to gauge their performance and to compare their performance with others in their industry.
• Data on project performance is needed to demonstrate the impact of process improvement.

Benchmarks
• provide a reference point for interpreting performance
• facilitate interpretation by setting specifications for how performance measurements are collected

(Performance dimensions shown: cost, schedule, quality, customer satisfaction.)


Is There Community Interest?

In June 2006, the PBC conducted an initial survey to assess the voice of the customer.

Results of the 2006 PBC Survey
• Sample size = 800
• Response outcome ≈ 25%

• 14 respondents can authorize sharing of their organization's performance data; 12 of 14 (85.7%) assign a high degree of value to software project performance benchmarks.
• 63 respondents are part of a team who can authorize sharing of their organization's performance data; 57 of 63 (90.5%) assign a high degree of value to software project performance benchmarks.


What is Benchmarking?

Benchmark: To take a measurement against a reference point.

Benchmarking: A process of comparing and measuring an organization with business leaders anywhere in the world to gain information which will help the organization take action to improve its performance.

Source: The Benchmarking Management Guide, American Productivity and Quality Center


Types of Process Benchmarking

Internal studies: Compare similar operations within different units of an organization.

Competitive studies: Target specific products, processes, or methods used by an organization's direct competitors.

Functional or industry studies: Compare similar functions within the same broad industry, or compare organizational performance with that of industry leaders.

Generic benchmarking: Compares work practices or processes that are independent of industry.


Software Benchmarking Model

(Diagram: the benchmarking model builds on a historical base of project experience.)

The Three-Legged Stool



• Holistic view of software project performance factors and their interacting effects
• The three legs: project duration, project quality, project effort (cost)

Key premises:
1) Improvements to one leg always affect the performance of the others.
2) Higher organizational maturity enables better overall performance.
3) Priority setting is a business decision, not a technical one.


Benchmark Derivation and Modeling

(Diagram: the benchmark derivation and modeling flow.)
• Measuring
  - Quantitative data: size, effort, schedule, documentation, defects, yielding productivity rates and quality levels
  - Qualitative data: personnel, process, technology, environment, products
• Assessing: current state, project profiles, best-case models
• Improving: gap analysis, influences


SPR Knowledge Base: 11,307 projects (2006)

Project Size: Large 23%, Medium 50%, Small 27%
Project Classification: Enhancement 55%, New 30%, Maintenance 15%
Project Type: MIS 42%, Systems 35%, Miscellaneous 16%, Commercial 7%


Linear regression analysis

• SPR uses r² statistical analysis to determine best-fit curves around the peer projects.
• The comparative quantitative data charts use these curves to compare each project (data point) against statistically significant subsets of the SPR Knowledge Base.
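As an illustration of this kind of fit, here is a minimal sketch (not SPR's actual method or tooling), assuming a power-law best-fit curve of hours per function point versus project size, fitted by least squares in log-log space with the r² value reported:

# Illustrative sketch only; not SPR's method or tooling. Fits a power-law
# "best-fit curve" (hours per FP vs. size in FP) to a hypothetical peer set
# by ordinary least squares in log-log space and reports r-squared.
import numpy as np

def fit_power_law(size_fp, hours_per_fp):
    """Fit hours_per_fp ~ a * size_fp**b; return (a, b, r_squared)."""
    x = np.log(np.asarray(size_fp, dtype=float))
    y = np.log(np.asarray(hours_per_fp, dtype=float))
    b, log_a = np.polyfit(x, y, 1)           # slope and intercept in log space
    y_hat = log_a + b * x
    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    r_squared = 1.0 - ss_res / ss_tot
    return float(np.exp(log_a)), float(b), float(r_squared)

# Hypothetical peer projects: (size in function points, hours per FP)
sizes = [64, 128, 256, 512, 1024, 2048]
hours = [8.2, 9.5, 11.0, 13.1, 15.8, 18.4]
a, b, r2 = fit_power_law(sizes, hours)
print(f"hours/FP ~= {a:.2f} * FP^{b:.2f}, r^2 = {r2:.3f}")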


Benchmark Comparison Terminology – Peer Worst and Peer Best

The SPR analysis is based upon the following benchmark comparison values:
• Peer Best: largest (maximum*) value in the range
• Peer Worst: smallest (minimum*) value in the range
• Peer Mean: μ = (Σ_{i=1..n} x_i) / n, the arithmetic mean of the actual project data points from the peer set

(Chart: actual project data points from the peer set plotted on a logarithmic scale, with the maximum value, minimum value, and mean marked.)

* The designations are reversed for measures that are negatively correlated with magnitude, i.e. where smaller values are better.
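To make the comparison values concrete, here is a minimal sketch with hypothetical data (not drawn from the SPR Knowledge Base); the higher_is_better flag covers the reversed case noted above:

# Peer comparison values as described above, applied to hypothetical data.
# higher_is_better=False handles measures that are negatively correlated
# with performance, e.g. defects per 1000 FP.
def peer_stats(values, higher_is_better=True):
    mean = sum(values) / len(values)                    # μ = Σ x_i / n
    best = max(values) if higher_is_better else min(values)
    worst = min(values) if higher_is_better else max(values)
    return {"peer_best": best, "peer_worst": worst, "peer_mean": mean}

productivity = [4.1, 6.3, 7.8, 9.2, 11.5]   # FP per effort month (higher is better)
defects = [22.0, 35.5, 41.0, 58.0]          # defects per 1000 FP (lower is better)
print(peer_stats(productivity))
print(peer_stats(defects, higher_is_better=False))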


Project Productivity – Variance to Peer Mean Values

(Charts: Project Productivity Comparison and Project Quality Comparison across all platforms for project Types 1-5, showing each observation against BIC, peer best, peer mean, and peer worst values; quality is expressed as code defects per 1000 FP.)
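The slide title refers to variance from peer mean values. As an assumption (the slides do not give the formula), one plausible formulation expresses each observed value as a percentage deviation from the peer mean, with the sign flipped for measures where lower is better:

# Assumed formulation of "variance to peer mean"; not taken from the slides.
def variance_to_peer_mean(observed, peer_mean, higher_is_better=True):
    deviation = (observed - peer_mean) / peer_mean * 100.0
    return deviation if higher_is_better else -deviation

print(variance_to_peer_mean(9.0, 6.0))                            # productivity: +50.0
print(variance_to_peer_mean(80.0, 40.0, higher_is_better=False))  # defect density: -100.0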


Effort Comparisons – Applied Time (Effort Months), UTE/Endesa Projects

(Scatter chart: project size in function points plotted against hours per FP for 2-tier C/S, unregulated utility, regulated utility, and regulated-web projects, with individual projects labeled. Annotation: project durations are consistently lower than expected ranges.)


Productivity Rate Comparisons – Productivity (Function Points per Effort Month), UTE/Endesa Projects

(Scatter chart: project size in function points plotted against function points per effort month for 2-tier C/S, unregulated utility, regulated utility, and regulated-web projects. Annotation: observed values from the sample set are from 2004 and 2005 only.)


Defect Removal Efficiency

(Scatter chart: project size in function points plotted against defect removal percentage, roughly 70% to 100%, for 2-tier C/S, unregulated utility, regulated utility, and regulated-web projects.)
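The slides do not define defect removal efficiency; the conventional industry definition (pre-release defect removals divided by total defects) is sketched below for context only:

# Conventional defect removal efficiency (DRE), shown for context only;
# this definition is not stated on the slides.
def defect_removal_efficiency(removed_pre_release, found_post_release):
    """DRE (%) = pre-release removals / total defects * 100."""
    total = removed_pre_release + found_post_release
    return 100.0 * removed_pre_release / total

print(defect_removal_efficiency(475, 25))   # -> 95.0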


Composite Quantitative Overview – UTE Project Comparisons

Composite analysis of key performance factors shows strengths offsetting weaknesses in the environment.

(Chart: each project's variance from peer values, ranging from -500% "worse than similar projects" to +500% "better than similar projects", across five factors: time to market, productivity, staffing (FTE), effort, and documentation.)


Risk / Value Analysis

(Chart: risk/value placement of projects on two axes, risk from low to high and value from low to high.)


Qualitative Analysis – Environmental Attributes Detail


Attribute Rating Scale

Rating                         Range
Leading Edge                   1.00 - 1.50
Above Average (Competitive)    1.51 - 2.50
Average                        2.51 - 3.50
Below Average                  3.51 - 4.50
Deficient                      4.51 - 5.00

Note: All Attribute data collected for this analysis represents the responses of the project teams based upon their perception of each factor at the beginning of the project.
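A hypothetical helper (not part of the SPR method) that maps a numeric attribute score onto the rating bands in the table above:

# Hypothetical mapping of a 1.00-5.00 attribute score to the rating bands
# shown in the table above. Band boundaries follow the slide exactly.
def rating_for(score):
    if not 1.00 <= score <= 5.00:
        raise ValueError("attribute scores range from 1.00 to 5.00")
    bands = [(1.50, "Leading Edge"),
             (2.50, "Above Average (Competitive)"),
             (3.50, "Average"),
             (4.50, "Below Average"),
             (5.00, "Deficient")]
    for upper, label in bands:
        if score <= upper:
            return label

print(rating_for(2.7))   # -> "Average"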


Attributes Analysis by Level – Environmental Attributes

(Chart: environmental attribute ratings grouped into five categories: personnel, technology, process, environment, and product. Selected views of sub-attributes include management, developers, users, application experience, analysis experience, design experience, and quality.)


Development Process Factors (Process)

(Chart: ratings for development process, requirements clarity and stability, requirements methods, prototyping methods, and analysis methods on the Leading Edge / Competitive / Average / Below Average / Deficient scale.)

Annotations:
• Improvement here should aid the clarity and stability of requirements.
• Improvement in the requirements process should balance and improve this factor.
• The difference is related to interpretation of prototyping.


Improvement Priorities



• This analysis provides a view of self-identified gaps and objectives in the development teams that pertain to organizational capability and maturity attributes SPR considers important for performance (quality and productivity levels).
• Scores are based upon interviews and qualitative data collection activity conducted with development teams and managers.


The PBC is Born

In April 2006, the SEI launched a vendor and industry collaboration on benchmarking software project performance.

Objectives
• Provide tools and credible data for goal-setting and performance improvement
• Combine benchmark data from multiple repository sources, thereby creating a superset of information for benchmark and/or performance comparison

Value
• Establish specifications for the collection and comparison of benchmark data from different vendor sources
• Allow companies to leverage existing data to help them establish and achieve their business goals


Current PBC Members as of October 2006



Consortium members are leaders in software measurement and benchmarking from consultancies, industry, and academia.

Galorath, ISBSG*, Lockheed Martin, Motorola, Oracle, PRICE Systems, PRTM, QSM, Raytheon, SEI, SPR, STTF**, The David Consulting Group, University of Ottawa

* International Software Benchmarking Standards Group (ISBSG)
** Software Technology Transfer Finland (STTF)


Broader Data Provided by PBC

(Diagram: data from Vendors 1-5 is combined through the PBC to cover software size, defect density, defects by phase, productivity, schedule predictability, effort, and customer satisfaction, with the granularity of data increasing from less to more.)


Broader Data Set Provided by PBC

(Diagram: the PBC data set aggregates contributions from Member #1 through Member #n.)


PBC Provides Broader Coverage of Application Domains

Education, insurance, health and medicine, military systems, financial & banking, manufacturing, electronics, communications


Presentation Topics

Introduction
• Overview – The Performance Benchmarking Consortium
• Motivation & Benefits
• Ground covered

PBC Working Teams
• Concept of operations
• Voice of customer
• Specification

Next Steps
• Near-term goals
• How you can participate


Ground Covered - 1

Kick-Off Workshop at SEI [April 19-20, 2006]
• 14 presentations by workshop attendees; discussion of current benchmarking issues and ways to address them
• Brainstorming on how to proceed
• Initiative to conduct a survey to obtain community input on the factors most likely to affect software project performance
• The Performance Benchmarking Consortium (PBC) is born

Questions raised:
• What is performance measurement?
• What makes a benchmark good and useful?
• What constitutes valid data if you are interested in learning about your range of results in comparison to other benchmarking companies?
• How should performance measurements be categorized?


Ground Covered - 2

PBC Workshop [June 28-29, 2006]
• Results of the initial survey on performance factors presented
• Planning and goal-setting
• Soliciting member input on the PBC business concept
• Initial concepts about PBC products and services
• Setting up work teams


Ground Covered - 3

PBC Workshop [October 4-5, 2006]
• Work team status reports and planning
• Discussion of the ConOps (Concept of Operations)
• Presentations by member companies
• Discussion of the PBC Measurement Specification
• Selection of the initial set of performance influence factors and performance indicators that will populate version 1 of the PBC repository
• PBC 2006 planning and 2007 goal-setting


PBC Member Alignment is Happening

Work during 2006 has focused on
• developing a common understanding of the terrain
• developing common goals
• cultivating a shared commitment

(Timeline: April 2006 to October 2006, heading into 2007.)


Presentation Topics

Introduction
• Overview – The Performance Benchmarking Consortium
• Motivation & Benefits
• Ground covered

PBC Working Teams
• Concept of Operations
• Voice of Customer
• Specification

Next Steps
• Near-Term Goals
• How you can participate


Current PBC Teams

ConOps Team
• Business planning
• Communication planning
• Concept of operations

Voice of Customer Team
• Soliciting community input
• What do people need?

Specification Team
• Performance measurement definitions & guidance

Pilot Implementation Team
• Testing the solution components


Concept of Operations - 1

(Diagram: PBC member companies contribute member assets, and subscriber organizations contribute subscriber performance data, to the PBC repository, which the SEI administers under the Measurement Specification. Members provide services and reports, including PBC performance reports and customized reports, to their internal or external customers using PBC assets in addition to, or in combination with, their own.)


Concept of Operations - 2

(Diagram: the PBC Measurement Specification, PBC member assets, the PBC repository, and subscriber organizations, with numbered flows among them.)
• Using the specification allows subscribers to make valid comparisons.
• Could it become an ISO standard?

(Example subscriber inputs: planned vs. actual total staff hours expended by month, and survey response frequencies ranging from Strongly Disagree to Strongly Agree, plus No Response and Don't Know / N/A.)


Concept of Operations - 3

The SEI:
• authors the Measurement Specification
• houses the repository
• provides website & administration
• authors PBC reports
• provides communication support

PBC member companies:
• provide assets to stock the repository
• collaborate on the Measurement Specification
• pay a fee to sustain operations
• provide training & consultation

Subscriber organizations:
• submit performance data that adheres to the Measurement Specification
• pay a fee for PBC performance reports and customized reports


Concept of Operations - 3: Benchmarking from a PBC Performance Report

(Diagram: Members #1 through #n and their clients feed the PBC repository. A PBC subscriber uses a performance report to see "We are here" versus "But we want to be here," while a member can point to a benchmark data point and say, "Our client provided that measure. We can show you how they did it.")


Presentation Topics

Introduction
• Overview – The Performance Benchmarking Consortium
• Motivation & Benefits
• Ground covered

PBC Working Teams
• Concept of Operations
• Voice of Customer
• Specification

Next Steps
• Near-Term Goals
• How you can participate


Voice of Customer Team

This team is focused on ensuring that the voice of the customer is represented as PBC products are planned for development.
• Conducted a survey to obtain community input about factors that influence software project performance
• Planning a birds-of-a-feather session to solicit community input to influence the development of PBC products and services


Survey: What Are the Key Factors Affecting SW Project Performance?

• Management and leadership
• Project type
• Project organization environment
• Application domain
• Analysts' functional knowledge
• Technical complexity
• Developer skill level
• Use of development methodology
• Process maturity
• Product architecture
• Team dynamics
• Project risks
• Team size
• Project technology – language & tool effectiveness
• Project technology – familiarity with
• Volume of staff turnover
• External customer relationship
• External integration complexity
• Business domain
• Newness of development platform
• Platform volatility
• Size


Sample Characteristics & Response Rates

Subpopulation           Population Size   Sample Size   Actual Sample Size*   RR1     RR2
SEI Cust. Relations     6,398             500           407                   20.9%   24.0%
DCG                     2,016             500           412                   22.8%   27.7%
Total                   8,414             1,000         819                   21.9%   25.9%

* Actual sample size is lower due to email bounce-backs and ineligible respondents.
RR1: Minimum response outcome. Excludes those who did not complete the entire questionnaire.
RR2: Maximum response outcome. Includes those who partially completed the questionnaire.
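A quick back-of-the-envelope check of these rates (assuming, as the earlier "response outcome ≈ 25% of a sample of 800" figures suggest, that the denominators are the actual sample sizes):

# Rough check of the response rates above. Assumption: rates are computed
# against the actual sample size, consistent with the "sample = 800,
# response ≈ 25%" figures quoted earlier in the deck.
def response_counts(actual_sample, rr1, rr2):
    complete = round(actual_sample * rr1)             # finished the questionnaire
    complete_or_partial = round(actual_sample * rr2)  # includes partial responses
    return complete, complete_or_partial

for name, n, rr1, rr2 in [("SEI Cust. Relations", 407, 0.209, 0.240),
                          ("DCG", 412, 0.228, 0.277),
                          ("Total", 819, 0.219, 0.259)]:
    print(name, response_counts(n, rr1, rr2))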


Top-10 Factors

"Very Large" Impact                          "Very Large" or "Large" Impact
 1. Management and leadership        45.3%    1. Management and leadership        87.6%
 2. Developer skill level            42.9%    4. Project organization environment 83.6%
 3. External customer relationship   31.7%    9. Team dynamics                    79.9%
 4. Project organization environment 31.3%    2. Developer skill level            77.5%
 5. Product architecture             28.6%    3. External customer relationship   74.7%
 6. Project risks                    28.0%    5. Product architecture             74.7%
 7. Process maturity                 27.9%    7. Process maturity                 74.2%
 8. Technical complexity             26.9%    6. Project risks                    73.6%
 9. Team dynamics                    24.9%   10. Use of development methodology   72.5%
10. Use of development methodology   23.6%    8. Technical complexity             65.9%


Presentation Topics

Introduction
• Overview – The Performance Benchmarking Consortium
• Motivation & Benefits
• Ground covered

PBC Working Teams
• Concept of Operations
• Voice of Customer
• Specification

Next Steps
• Near-Term Goals
• How you can participate


Specification Team Work

The Spec Team is working to develop common definitions.

Performance Influence Factors          Performance Indicators
• Process maturity                     • Defect density
• Application domain                   • Time-to-market
• Stability of requirements            • Schedule predictability
• Size                                 • Productivity index
• Project type                         • Project delivery rate
• Team size
• Developers' functional knowledge
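To make the pairing concrete, here is a hypothetical sketch of a project record combining the influence factors and indicators listed above; it is not the PBC data model or Measurement Specification, just an illustration of the kind of schema such common definitions would support:

# Hypothetical project record pairing the influence factors and performance
# indicators above. An illustration only, not the actual PBC data model.
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    # Performance influence factors
    process_maturity: str              # e.g. a maturity rating
    application_domain: str
    requirements_stability: str
    size_fp: int                       # size in function points
    project_type: str
    team_size: int
    developer_functional_knowledge: str
    # Performance indicators
    defect_density: float              # e.g. defects per 1000 FP
    time_to_market_months: float
    schedule_predictability: float     # e.g. actual / planned duration
    productivity_index: float          # units per the Measurement Specification
    project_delivery_rate: float       # units per the Measurement Specification

record = ProjectRecord("Average", "Financial & Banking", "Stable", 850,
                       "Enhancement", 12, "High", 31.5, 9.0, 1.1, 7.2, 6.5)
print(record.defect_density)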


Specification Team – Building Common Definitions

Example: time-to-market measure

(Diagram: S/W Projects A, B, C, and D each use their own measurement definition, Definitions A through D. How can you compare? The Specification Team provides a cool, refreshing common definition for "time-to-market" that permits valid comparisons between software projects.)


PBC Measurement Specification

(Diagram: the same flow shown in Concept of Operations - 2, with the PBC Measurement Specification governing PBC member assets, the PBC repository, and subscriber inputs. Using the specification allows subscribers to make valid comparisons; it could become an ISO standard. Example subscriber inputs: planned vs. actual total staff hours expended by month, and survey response frequencies ranging from Strongly Disagree to Strongly Agree.)


Presentation Topics

Introduction
• Overview – The Performance Benchmarking Consortium
• Motivation & Benefits
• Ground covered

PBC Working Teams
• Concept of Operations
• Voice of Customer
• Specification

Next Steps
• Near-Term Goals
• How you can participate


Near-Term Goals

The PBC will
• create a set of process specifications for the consistent and meaningful collection, analysis, and dissemination of comparative performance benchmarks for software projects
• develop a data model that will facilitate the aggregation and comparison of data from different sources
• pilot test solution components
• develop version 1.0 of the PBC data repository


Are You Interested?

• The PBC will be expanding membership during 2007.
• Individuals who become members during 2007 will participate in proof-of-concept trials and provide feedback on the PBC Concept of Operations.
• If your organization is interested in sponsoring you as a member, or if you are interested in becoming a subscriber to future PBC products and services, please send email to Mark at [email protected] or Michael at [email protected].
• Individuals who contact us will be added to our database; we will periodically email about progress and future plans for the PBC.

Thank you


Acknowledgements

Kate Armel – QSM
Michael Bragen – Software Productivity Research
Khaled El Emam – University of Ottawa
Eric Finch – PRTM
Robert Floyd – Raytheon
Pekka Forselius – STTF
David Garmus – DCG
Peter Hill – ISBSG
Tim Hohmann – Galorath
Thomas Lienhard – Raytheon
Larry McCarthy – Motorola
Arlene Minkiewicz – PRICE Systems
Kristal Ray – Oracle
Suresh Subramanian – PRTM
Bob Weiser – Lockheed Martin

SEI: Anita Carleton, Robert Ferguson, Diane Gibson, Dennis Goldenson, Mark Kasunic, Oksana Schubert, Robert Stoddard, Dave Zubrow
