BSc (Hons) in Computing
Professional Issues in Computing

The University of Bolton

Software Failures
What can be done to improve the quality of our software, and what are the consequences?

by Ooi Teong Kuan
Bolton ID: 0509033
Date: 13 November 2006


Acknowledgements

The author would like to thank the lecturers of SEGi College for their valuable advice and guidance on writing this report: Miss Jaya Letchumi (our PIC lecturer), Mr. Tan Teik Thai (Head of the Computer Department), and Miss Jeannie, our former PIC lecturer, who left us in August 2006. Special thanks also to Miss Louise Blenkharn, who came all the way from the UK to give us great insight into writing a truly professional report and into good presentation skills. Thanks as well to all the online publishers listed in the reference list and bibliography for sharing their ideas and knowledge, which made this report more convincing with relevant supporting facts. Last but not least, many thanks to my fellow classmates, who always have the spirit of sharing, and to my special friend, Miki Lee, for her full support and confidence in this report.


Summary

Not all software problems are considered software failures. Software failure has been defined by J. C. Laprie, the author of 'Dependability: Basic Concepts and Terminology', as occurring when "the system service ... no longer complies with the agreed specification" (J. C. Laprie, 1992). Possible failures include system crashes, inaccurate output, security vulnerabilities, and data integrity issues.

To improve software quality, identifying the root causes of problems is the starting point of the software quality improvement lifecycle, followed by taking appropriate actions and solutions based on proven approaches, such as CMMI, UML, RUP, and the software factory, developed by reputable organisations. After that, monitoring progress and evaluating results are essential to ensure the intended objectives are met. For continuous improvement, the entire lifecycle is then repeated.

The consequences of software failures fall under the following three impacts:

1. Consumer Impact
2. Industry Impact
3. Economic Impact

In conclusion, it is wiser to invest in software quality improvement plans than to let software failure cause losses to all the parties involved. It is also recommended to improve human quality, as this is a key factor in ensuring that industry best practices can be implemented successfully.


Table of Contents

1.0 Terms of Reference
2.0 Introduction
3.0 What is "Software Failure"?
   3.1 Definition of Software Failure
   3.2 Examples of Software Failures
4.0 How Can We Improve the Software Quality?
   4.1 Software Quality Improvement Lifecycle
   4.2 Identifying the Root Causes
   4.3 Taking Appropriate Actions and Solutions
      4.3.1 Capability Maturity Model Integration (CMMI)
      4.3.2 Unified Modeling Language (UML)
      4.3.3 Rational Unified Process (RUP)
      4.3.4 Software Factory
   4.4 Monitoring and Evaluating the Results
      4.4.1 Software Quality Assurance Test
         4.4.1.1 Unit Test
         4.4.1.2 Integrated Test
         4.4.1.3 Stress Test
      4.4.2 CMMI Software Product Quality Measurement
   4.5 Going through the Improvement Lifecycle
5.0 What Are the Consequences?
   5.1 Consumer Impact
      5.1.1 Impact to Standalone Software Users
      5.1.2 Impact to Online Transaction Users
   5.2 Industry Impact
   5.3 Economic Impact
6.0 Conclusions
7.0 Recommendations
Appendices
   Appendix I – Article: Capability Maturity Model
   Appendix II – Presentation (extracted): Measuring Software Product Quality: The ISO 25000 Series and CMMI
   Appendix III – Article: Software Independent Verification and Validation (IV&V)/Independent Assessment (IA) Criteria
   Appendix IV – Article: Quantifying Economic Impact of Information Integrity Failures
Bibliography
Reference List


Table of Tables

Table 1 – Some of the industry approaches and the software development processes that each of them covers.
Table 2 – CMMI Performance Results as of 15 December 2005 (Software Engineering Institute, 2005)

Table of Figures

Figure 1 – Software Quality Improvement Lifecycle
Figure 2 – Relating Requirements, Evaluation, and Measurement (Dave Zubrow, 2004, p.11)
Figure 3 – One of the observations from SEI after the year 2006 State of Software Measurement Practice Survey (Mark Kasunic, 2006, p.37)

1.0 Terms of Reference

Before explaining how to improve software quality, it is first necessary to understand the actual meaning of "software failure". Hence, this report starts by defining software failure based on J. C. Laprie's definition.

Regarding the software quality improvement approach, this report takes the assumption that software problems are the reason for improvement. The discussion therefore starts with identifying the root causes of the problems, followed by taking appropriate actions and solutions, introducing some proven approaches developed by reputable organisations, such as CMMI from the Software Engineering Institute (SEI).

The consequences have been divided into the following three major impacts:

1. Consumer Impact
2. Industry Impact
3. Economic Impact

Each of these impacts is elaborated in detail together with relevant supporting facts. Conclusions have been drawn from the studies in this report, and some constructive recommendations are given by the author to close the report.

2.0 Introduction

As day-to-day human life has become more and more dependent on computer software, efforts to reduce software failure and improve software quality have become vitally important. Thirty years ago, computers were mostly used by government agencies in first-world countries for scientific research. Today, users range from government to education, business, and even consumers, including small children, not to mention the tremendous growth of Internet users throughout the world. With such a volume of users, and with Internet users typically accessing the same piece of server application, the failure of one server system is enough to harm millions of users at a time.

Although many proven industrial approaches are available in the market, no significant improvement is apparent to date, especially in locally made software in Malaysia. Therefore, instead of repeating the same in-depth technical details in this report, it is time to widen the area of consideration and find the root causes that would make these approaches more worthwhile to implement.

Hence, this report targets readers who may have already heard about and/or implemented some of the approaches but are still unable to get satisfactory results. It is also suitable for those who are new to software quality improvement techniques, thanks to its introductory, non-in-depth level of explanation. For those who would like to drill into the details, useful references are available at the end of this report.

3.0 What is "Software Failure"?

3.1 Definition of Software Failure

Not all software problems can be classified as 'software failure'. According to J. C. Laprie (1992), the author of 'Dependability: Basic Concepts and Terminology', "A system failure occurs when the delivered service no longer complies with the specifications, the latter being an agreed description of the system's expected function and/or service" (J. C. Laprie, 1992). Evidently, software problems that were never committed to, and/or do not fall under the agreed deliverables in the specification, are not considered software failures. Every system has its own missions and promises to accomplish. If those missions and promises are stated in the specification and agreed upon by the provider and the end users, the software must comply with the specification; otherwise it has 'failed' to deliver as promised. For example, if the specification states that 'this software can be scaled up to 100 concurrent users without failing' and the system stops responding only when it reaches 101 concurrent users, that is not considered a software failure.
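To make the distinction concrete, the following minimal Python sketch (hypothetical, with an assumed specification limit) shows how Laprie's definition separates a failure from an out-of-scope problem:

    # Hypothetical sketch: classifying a crash report per Laprie's definition.
    SPEC_MAX_CONCURRENT_USERS = 100  # the limit agreed in the specification

    def is_software_failure(users_at_crash: int) -> bool:
        """A crash counts as a failure only if it occurred within agreed limits."""
        return users_at_crash <= SPEC_MAX_CONCURRENT_USERS

    print(is_software_failure(101))  # False: beyond the agreed specification
    print(is_software_failure(80))   # True: within spec, so the promise was broken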

3.2 Examples of Software Failures

While only software problems that do not comply with the specification can legitimately be classified as 'software failures', the following issues are commonly treated as software failures by most software providers:

• Unreasonable crashes or hangs of the system
• Logically inaccurate output of information
• Security vulnerabilities that defeat the purpose of the security features
• Loss of data integrity, consistency, and accuracy due to program logic and system design


The above issues may be considered failures even though they are not specified in the software agreement, because they prevent the fundamental roles of the system from being accomplished. For example, 1 + 1 = 2, but the printed output of the system appears as '3'. This is clearly an unacceptable bug and should be classified as a software failure even though a statement guaranteeing output accuracy was never explicitly stated in the agreement.
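As an illustration, a single automated check is enough to catch this class of output-accuracy bug. The sketch below uses Python's standard unittest module; the report routine and its name are invented for this example:

    import unittest

    def add_and_format(a: int, b: int) -> str:
        """Hypothetical report routine whose printed output must be accurate."""
        return str(a + b)

    class OutputAccuracyTest(unittest.TestCase):
        def test_addition_prints_correct_result(self):
            # 1 + 1 must appear as '2'; printing '3' would be a failure even
            # without an explicit accuracy clause in the agreement.
            self.assertEqual(add_and_format(1, 1), "2")

    if __name__ == "__main__":
        unittest.main()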

4.0 How Can We Improve the Software Quality?

4.1 Software Quality Improvement Lifecycle

It cannot be denied that no software in this world is 100% bug-free. However, quality still has to be improved even where bugs cannot be fully eliminated. 'Improvement' here also implies 'continuous improvement', hence the word 'lifecycle'. Figure 1 shows the Software Quality Improvement Lifecycle at a high level:

[Figure 1 – Software Quality Improvement Lifecycle: Identify Root Causes → Take Appropriate Actions & Solutions → Monitor & Evaluate Results → Go through the Lifecycle Again]

The sections below explain each of the processes as specified in the above diagram.

4.2 Identifying the Root Causes

Root Cause Analysis (RCA) is important not only in the manufacturing industry but in the software industry as well. Without investigating and identifying the root causes of software problems, it is almost impossible for a software company to establish an effective solution or methodology for eliminating an issue and driving further improvement. Problems that occur repeatedly are good evidence that the root of the problem has not been identified and resolved with a proper solution. As mentioned in the brochure for the RCA seminar conducted by Ops A La Carte:

“Also, they become experts in fixing rather than preventing the problems. What is left is very reactive method of fixing equipment rather than a pro-active method of solving problems.” (Ops A La Carte, 2006, p. 1)
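As a minimal sketch of one possible RCA starting point, the Python snippet below assumes a hypothetical defect log in which each resolved defect was annotated with its investigated cause; causes that keep recurring are candidates for a real root-cause fix rather than another patch:

    from collections import Counter

    # Hypothetical defect log: the cause assigned during each investigation.
    defect_log = [
        "unvalidated input", "race condition", "unvalidated input",
        "configuration drift", "unvalidated input", "race condition",
    ]

    # A simple Pareto view: the most frequent causes are tackled first.
    for cause, count in Counter(defect_log).most_common():
        print(f"{cause}: {count} occurrence(s)")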

4.3 Taking Appropriate Actions and Solutions

After identifying the root causes, appropriate actions need to be taken, such as adopting proven software modeling, standards, best practices, and process management techniques to solve the problems once and for all. Most of these approaches not only help to design and develop better software but also contain techniques that promote highly visible system specifications and better predictability of software issues. As far as software quality is concerned, quality assurance testing alone has never been enough. Other software development processes, such as requirements studies, design, development, deployment, and implementation, bear a large responsibility for the final quality of the software products. The following table shows some of the well-known, proven approaches that software companies can apply to improve their software development processes:


Approach | Requirements Studies | Design | Development | Evaluation & Testing | Deployment & Implementation
Capability Maturity Model Integration (CMMI) | ✓ | ✓ | ✓ | ✓ | ✓
Unified Modeling Language (UML) | ✓ | ✓ | – | – | ✓
Rational Unified Process (RUP) | ✓ | ✓ | ✓ | ✓ | ✓
Software Factory | – | ✓ | ✓ | ✓ | ✓

Table 1 – Some of the industry approaches and the software development processes that each of them covers.

The sections below explain each of these approaches in more detail.

4.3.1 Capability Maturity Model Integration (CMMI)

CMMI is a process improvement approach developed by the Software Engineering Institute (SEI). It has been widely adopted by software companies throughout the world, not only with the intention of improving product quality and internal management processes but also to gain the trust and confidence of their customers. It consists of five maturity levels that companies can strive to achieve:

Maturity Level 1 – Initial
Maturity Level 2 – Repeatable
Maturity Level 3 – Defined
Maturity Level 4 – Managed
Maturity Level 5 – Optimising


For more detail on each maturity level, please refer to Appendix I – Article: Capability Maturity Model, or visit: http://en.wikipedia.org/wiki/Capability_Maturity_Model

To show the effectiveness of its process improvement approach, SEI has compiled quantitative evidence, as shown in Table 2 below. The last row of the table, the Return on Investment ratio, shows that the overall results are quite significant, particularly for productivity and quality.

Performance Category | Median | Number of Data Points | Low | High
Cost | 20% | 21 | 3% | 87%
Schedule | 37% | 19 | 2% | 90%
Productivity | 62% | 17 | 9% | 255%
Quality | 50% | 20 | 7% | 132%
Customer Satisfaction | 14% | 6 | -4% | 55%
Return on Investment | 4.7 : 1 | 16 | 2 : 1 | 27.7 : 1

Table 2 – CMMI Performance Results as of 15 December 2005 (Software Engineering Institute, 2005)

4.3.2 Unified Modeling Language (UML)

UML is a general-purpose modeling language, meant not only for software engineering but also for business process modeling. It is not a proprietary language; it is open to the public, who are free to extend its potential and capability in whatever way best describes their software graphically. In other words, UML is meant to be customised. However, certain object modeling rules and principles still need to be followed in order to achieve the results UML intends. Examples of UML-based modeling diagrams include class diagrams, component diagrams, and system behaviour diagrams.
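To illustrate how a class diagram maps onto code, the hypothetical sketch below renders a two-class UML association (an Order aggregating OrderLine objects) in Python; the class names are invented for this example:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class OrderLine:
        product: str
        quantity: int

    @dataclass
    class Order:
        # UML association: an Order aggregates many OrderLine objects.
        customer: str
        lines: List[OrderLine] = field(default_factory=list)

        def add_line(self, product: str, quantity: int) -> None:
            self.lines.append(OrderLine(product, quantity))

    order = Order("ACME Sdn Bhd")
    order.add_line("widget", 3)
    print(order)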

4.3.3 Rational Unified Process (RUP)

RUP is closely related to UML. As mentioned in the previous section, UML is used not only for business process modeling but also widely in the software industry, and it is open for customisation to suit industrial needs. It is therefore not surprising that RUP was designed based on the concepts of UML. The company that founded RUP is Rational Software Corporation, which is now part of IBM.

4.3.4 Software Factory

The concept of a software factory is very much like a manufacturing industry, where final products are assembled from reusable components. These smaller software components are well tested, of high quality, and flexible enough to be plugged into a particular framework to form a complete software product. Although such components provide standard functionality, they can be blended in different combinations to meet different product requirements.
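A minimal sketch of the component-assembly idea, using hypothetical text-processing components in Python; the point is that the same well-tested parts can be blended into different product combinations:

    from typing import Callable, Dict, List

    # Hypothetical reusable components, each individually well tested.
    components: Dict[str, Callable[[str], str]] = {
        "trim": str.strip,
        "uppercase": str.upper,
        "redact": lambda s: s.replace("secret", "***"),
    }

    def assemble(pipeline: List[str]) -> Callable[[str], str]:
        """Assemble a 'product' by chaining reusable components."""
        def product(text: str) -> str:
            for name in pipeline:
                text = components[name](text)
            return text
        return product

    cleaner = assemble(["trim", "redact"])      # one product combination
    shouter = assemble(["trim", "uppercase"])   # another combination
    print(cleaner("  my secret plan  "))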

4.4 Monitoring and Evaluating the Results

While the approaches stated above are proven and tested, different groups or individuals may implement them differently in different environments. Therefore, a certain extent of progress monitoring and results evaluation is important. The output quality needs to be tested to ensure that the intended objectives are met. Software testing plays a very important role in this process: it ensures that all user requirements are met and that the system is reliable during deployment and implementation. The following sections introduce two popular methods:

1. Software Quality Assurance Test
2. CMMI Software Product Quality Measurement

4.4.1 Software Quality Assurance Test

This is the most common, traditional way of testing software quality. It consists of three types of testing:

4.4.1.1 Unit Test

A unit test is a detailed test that focuses on just one particular unit of a program, such as a data entry form. It covers validity of data input, behaviours, and displays, against all sorts of criteria. However, it does not take into consideration the effect of other units, modules, or parts of the system. For example, a unit test may successfully verify changing the credit limit of a customer while the tester does not consider the sales order portion and how the change affects sales order transactions.
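Continuing the credit-limit example, a unit test might look like the hypothetical Python sketch below; note that it deliberately says nothing about sales orders:

    import unittest

    class Customer:
        """Hypothetical unit under test: the customer credit-limit logic only."""
        def __init__(self, credit_limit: float):
            self.credit_limit = credit_limit

        def set_credit_limit(self, new_limit: float) -> None:
            if new_limit < 0:
                raise ValueError("credit limit cannot be negative")
            self.credit_limit = new_limit

    class CreditLimitUnitTest(unittest.TestCase):
        def test_valid_update(self):
            c = Customer(1000.0)
            c.set_credit_limit(2000.0)
            self.assertEqual(c.credit_limit, 2000.0)

        def test_rejects_negative_limit(self):
            c = Customer(1000.0)
            with self.assertRaises(ValueError):
                c.set_credit_limit(-50.0)

    if __name__ == "__main__":
        unittest.main()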

4.4.1.2 Integrated Test

The integrated test is an overall test of the entire system. It includes testing of program flows, data integrity, and output accuracy that may be affected by other parts of the system. Problems detected during integrated testing can be harder to debug if unit testing was not conducted fully; a well-performed unit test streamlines the work of the integrated test and makes problems easier to trace and debug.
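An integrated test for the same scenario would exercise exactly the interaction the unit test ignored, for example that a lowered credit limit actually blocks an over-limit sales order. A hypothetical sketch:

    import unittest

    class Customer:
        def __init__(self, credit_limit: float):
            self.credit_limit = credit_limit

    class SalesOrderSystem:
        """Hypothetical second module: accepts orders within the credit limit."""
        def place_order(self, customer: Customer, amount: float) -> bool:
            return amount <= customer.credit_limit

    class CreditLimitIntegrationTest(unittest.TestCase):
        def test_lowered_limit_blocks_large_order(self):
            customer = Customer(credit_limit=5000.0)
            orders = SalesOrderSystem()
            customer.credit_limit = 1000.0  # the change verified in isolation above
            self.assertFalse(orders.place_order(customer, 2500.0))

    if __name__ == "__main__":
        unittest.main()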


4.4.1.3 Stress Test

The stress test is a very important QA test, particularly for mission-critical enterprise solutions. It is basically a performance or load test that exercises high-volume transaction data and user concurrency. It requires a test environment, manpower, and the preparation of a large volume of data. Due to the relatively high cost of such testing, many organisations tend to skip it.
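A stress test can be approximated in miniature. The hypothetical Python sketch below simulates 100 concurrent users updating a shared resource; removing the lock exposes exactly the kind of concurrency defect a stress test exists to find:

    import threading

    counter = 0
    lock = threading.Lock()

    def simulated_user(iterations: int = 1000) -> None:
        global counter
        for _ in range(iterations):
            with lock:  # remove the lock to see lost updates under load
                counter += 1

    # Simulate 100 concurrent users hammering the same resource.
    threads = [threading.Thread(target=simulated_user) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert counter == 100 * 1000, f"lost updates under load: {counter}"
    print("survived simulated load:", counter)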

4.4.2 CMMI Software Product Quality Measurement

In the past, many people believed that software quality could be assured by performing adequate testing, as described in Section 4.4.1. However, the question is: what if the requirements are not met even though the software is extremely stable and 99.9% bug-free? CMMI Software Product Quality Measurement is a modern way of measuring software quality. Below is how CMMI defines quality requirements:

“The phrase ‘quality and process-performance objectives’ covers objectives and requirements for product quality, service quality, and process performance. Process performance objectives include product quality.” (Dave Zubrow, 2004, p.5)

The focus is therefore not solely on physical, end-product quality inspection; other software development processes, such as Requirements Development, are believed to affect product quality as well. Figure 2 shows that product quality measurement can be conducted during both the Product Requirements Quality and Product Quality Evaluation processes.


Figure 2 – Relating Requirements, Evaluation, and Measurement (Dave Zubrow, 2004, p.11)

As objectives for 'quality' are too conceptual, they have to be translated into operational guidelines in order to achieve the desired quality in a realistic manner. Measurement plays a very important role here: it provides an efficient way of checking acceptance criteria based on the requirements specification. The ISO 25000 series and the GQ(I)M Indicator Template can ease the implementation of this measurement (Dave Zubrow, 2004, p.28).
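A minimal sketch of such an operational indicator, with an assumed defect-density threshold standing in for a real GQ(I)M indicator template entry:

    # Hypothetical indicator: defect density against an agreed threshold.
    defects_found = 42
    size_kloc = 12.5          # size in thousands of lines of code
    THRESHOLD = 2.0           # assumed acceptance criterion (defects/KLOC)

    defect_density = defects_found / size_kloc
    print(f"defect density: {defect_density:.2f} defects/KLOC")

    if defect_density > THRESHOLD:
        print("threshold exceeded: trigger corrective action")  # see Section 4.5
    else:
        print("within acceptance criteria")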

4.5 Going through the Improvement Lifecycle

It is unlikely that any software product fully meets the expected results at first rollout. In fact, there is always room for improvement in terms of functionality, stability, scalability, and performance. It is good practice to go through the improvement lifecycle constantly so that corrective actions can be taken to refine the quality of the product.


Figure 3 – One of the observations from SEI after the year 2006 State of Software Measurement Practice Survey (Mark Kasunic, 2006, p.37)

According to Mark Kasunic (2006), the Software Engineering Institute (SEI) conducted the 2006 State of Software Measurement Practice Survey and observed that almost 20% of respondents seldom or never took corrective action when a measurement threshold was exceeded (see Figure 3). Figure 3 also highlights that measurement is useless unless the information is used to take appropriate action.

5.0 What Are the Consequences?

In fact, the consequences of software failure and the severity of its impact depend very much on the following factors:

• The purpose that the particular software serves
• The type of industry that the software serves
• The users who use it and the number of users
• The roles that the software plays
• The size of the system

According to NASA's (n.d.) Software Independent Verification and Validation (IV&V) criteria, factors contributing to the consequences of software failure may include the following:

1. Potential for loss of life
2. Potential for serious injury
3. Potential for catastrophic mission failure
4. Potential for partial mission failure
5. Potential for loss of equipment
6. Potential for waste of software resource investment
7. Potential for adverse publicity
8. Potential effect on routine operations

Looked at from other perspectives, some or all of the factors listed above may impact the consumer, industry, and economic areas. The impact on each of these areas is discussed in more detail in the following sections.

5.1 Consumer Impact

5.1.1 Impact to Standalone Software Users

The severity of the impact on consumers such as home users may not be significant, because the prices charged to these individuals are typically low. Furthermore, software used by this group is usually less critical than that used by corporate users. However, due to the volume and size of the consumer market, failure of personal software products may greatly impact the industry as a whole. It may also increase the use of unlicensed and pirated software, owing to the disappointment and frustration caused by genuine products: more and more consumers become reluctant to risk their money because of past experiences with licensed software.

5.1.2 Impact to Online Transaction Users

Another group of consumers is online Internet users, particularly those who buy and sell over the Internet. Where the Internet is concerned, two major kinds of failure cause huge impact on online consumers:

1. Security

Password theft and exposure of credit card numbers are very common issues for online users. They can result from failures in the user authentication system or in the implementation of the Secure Sockets Layer (SSL). Such failures allow hackers to gain unauthorised access to the owner's banking account or to use the owner's credit card number for online shopping. For example, e1040.com, a popular tax-preparation Web site, was sued by its customers because their private information, including passwords and Social Security numbers, was displayed in plain text on the Internet due to a switching error in the site's encryption software (Salimol Thomas, 2003, p.5).
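One standard defence against this kind of plain-text exposure is to store only salted password hashes, never the passwords themselves. A minimal Python sketch using only the standard library (the iteration count and salt size are illustrative assumptions, not recommendations):

    import hashlib, hmac, os
    from typing import Optional, Tuple

    def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
        """Derive a salted hash; only the salt and digest are ever stored."""
        salt = salt if salt is not None else os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("s3cret")
    print(verify_password("s3cret", salt, digest))  # True
    print(verify_password("wrong", salt, digest))   # False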


2. Data Integrity

Web applications such as online banking and online shopping usually involve monetary transactions, for example online fund transfers, bill payments, and credit card payments. If the application fails to handle a transaction correctly, money can be deducted from the user's account while the payee never receives the payment.
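The usual remedy is to make the debit and the credit one atomic transaction, so either both happen or neither does. A minimal sketch using Python's built-in sqlite3 module and a hypothetical accounts table:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("payer", 100.0), ("payee", 0.0)])

    def transfer(amount: float) -> None:
        try:
            with conn:  # one transaction: both updates commit, or neither does
                conn.execute("UPDATE accounts SET balance = balance - ? "
                             "WHERE name = 'payer'", (amount,))
                conn.execute("UPDATE accounts SET balance = balance + ? "
                             "WHERE name = 'payee'", (amount,))
        except sqlite3.Error:
            pass  # rollback already happened; no money was deducted

    transfer(40.0)
    print(conn.execute("SELECT * FROM accounts").fetchall())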

5.2 Industry Impact

As mentioned in Section 5.1, the consumer impact of software failure may also introduce unwanted consequences for the software industry, such as increased use of pirated and unlicensed software. When this becomes a tragedy for the software industry, the loss does not come from piracy alone: millions of dollars must also be spent to fight piracy and to give more incentives to users who buy genuine products. Such incentives, like free software add-ons and free support and services, incur very high costs for software companies.

In the large corporate sector, software failures can cause losses of up to billions of dollars, not only on the end-user side but for the software providers as well. For instance, FoxMeyer sued SAP and Andersen Consulting for $500 million each because of the failure of SAP's software (Salimol Thomas, 2003, p.5). Such consequences lead not only to the loss of tangible assets for corporate users but also of intangible assets, such as tremendous losses of productivity, efficiency, company image, and competitive advantage.

5.3 Economic Impact

As described in Sections 5.1 and 5.2, the consumer and industry impacts of software failure definitely affect the economy, not of one country alone but worldwide. While globalisation has become a key factor in a country's economic growth, demand for computer software in general, and for online transaction processing applications in particular, has become more and more aggressive. In a developing country like Malaysia, billions of dollars have been invested in buying software products and bringing in software experts from other countries, such as the U.S. and India, due to the lack of local expertise. When the majority of the software used in a country is imported, failures of that software can have a huge impact on the country's economic growth: billions of dollars wasted, negative returns on investment (ROI), loss of productivity, loss of income, and increased liability costs.

Of course, such economic impact does not involve developing countries alone. "In the United States, billions of dollars are invested on software application development. However, many of these projects fail due to poor design and deployment errors. For example, in 1999, the failure of NASA's Mars Lander missions has cost a total loss of US$360 million. The cause of the failure was traced back to a software error." (Salimol Thomas, 2003, p.2)

6.0 Conclusions

Comparing the prices paid for software quality improvement plans with the losses caused by software failures, the following conclusions can confidently be made:

1. Fixing bugs is more painful than preventing bugs

The cost of fixing bugs may not be borne by the development team alone. It may also mean losses that affect more than one party, such as loss of investment, productivity, and efficiency on the end-user's side, and therefore liability costs for the company. 'Deliver first, fix later' is always a bad practice.

2. Improving product quality is better than fighting piracy

The cost of anti-piracy campaigns always comes on top of the loss of business and revenue. Money spent on improving software quality is therefore far better spent than money used to fight software piracy.

3. Investing in quality improvements is better than paying liability costs

Taking the example in Section 5.2, by spending half of the $500 million on improving software quality, SAP could have avoided the court case, saved up to $250 million, and at the same time maintained a good business relationship with FoxMeyer.

Therefore, since improving software quality can turn those possible losses into revenue, why not do it?

7.0 Recommendations

Regardless of how well the industry approaches have been established, their success still depends on people who:

1. Are able to promote them in a motivating manner
2. Are able to transfer the knowledge effectively
3. Realise their benefits and potential
4. Appreciate their intended objectives
5. Are willing to study and explore them
6. Are passionate about making them effective
7. Are disciplined enough to follow the rules and guidelines set by the approaches

Besides this, top management and the key people who drive the organisation also play a very important role in ensuring:

1. That serving customers with high-quality software is always the topmost priority;
2. A good company culture that promotes 'working smart', so that employees do not become too exhausted, which in turn leads to unproductive work, inefficiency, and mistakes;
3. Good policies that encourage learning, R&D, knowledge sharing, and attendance at training, seminars, and conferences;
4. A good company philosophy that recognises 'product quality drives revenue' instead of 'revenue first, quality later'.

Without the above human qualities, the goal of high-quality, state-of-the-art software will be harder to achieve. Bear in mind that all computer software is instructed by humans; in other words, the quality of the software actually reflects the quality of the people. Therefore, instead of feeling doubtful about the effectiveness of the industry approaches, why not put more effort into improving people's quality as the fundamental approach? Once human quality is improved, all the required attitudes and mindsets will be in place. Naturally, all the great approaches will be absorbed easily, and the success of the software products will be just around the corner.


Appendices

Appendix I – Article: Capability Maturity Model

Title: Capability Maturity Model
Author: Wikipedia contributors
Publisher: Wikipedia
Date:
URL: http://en.wikipedia.org/wiki/Capability_Maturity_Model
Date Accessed: 15 September 2006


Capability Maturity Model
From Wikipedia, the free encyclopedia

Capability Maturity Model (CMM) broadly refers to a process improvement approach that is based on a process model. CMM also refers specifically to the first such model, developed by the Software Engineering Institute (SEI) in the mid-1980s, as well as the family of process models that followed. A process model is a structured collection of practices that describe the characteristics of effective processes; the practices included are those proven by experience to be effective. [1] CMM can be used to assess an organization against a scale of five process maturity levels. Each level ranks the organization according to its standardization of processes in the subject area being assessed. The subject areas can be as diverse as software engineering, systems engineering, project management, risk management, system acquisition, information technology (IT) services and personnel management. CMM was developed by the SEI at Carnegie Mellon University in Pittsburgh. It has been used extensively for avionics software and government projects, in North America, Europe, Asia, Australia, South America, and Africa. [2]

Currently, some government departments require software development contract organizations to achieve and operate at a level 3 standard.

Contents

1 Maturity model
2 History
   2.1 Context
   2.2 Origins
   2.3 Current state
   2.4 Future direction
3 Levels of the CMM
   3.1 Level 1 - Initial
   3.2 Level 2 - Repeatable
   3.3 Level 3 - Defined
   3.4 Level 4 - Managed
   3.5 Level 5 - Optimizing
   3.6 Extensions
4 Process areas
5 Controversial aspects
   5.1 Praise
   5.2 Criticism
6 The most beneficial elements of CMM Level 2 and 3
7 Companies appraised against the CMMI
8 See also
9 References
10 Footnotes
11 External links

Maturity model

The Capability Maturity Model (CMM) is a way to develop and refine an organization's processes. The first CMM was for the purpose of developing and refining software development processes. A maturity model is a structured collection of elements that describe characteristics of effective processes. A maturity model provides:

• a place to start
• the benefit of a community's prior experiences
• a common language and a shared vision
• a framework for prioritizing actions
• a way to define what improvement means for your organization

A maturity model can be used as a benchmark for assessing different organizations for equivalent comparison. It describes the maturity of the company based upon the project the company is dealing with and the clients.

History

The Capability Maturity Model was initially funded by military research. The United States Air Force funded a study at the Carnegie-Mellon Software Engineering Institute to create a model (abstract) for the military to use as an objective evaluation of software subcontractors. The result was the Capability Maturity Model, published as Managing the Software Process in 1989. The CMM is no longer supported by the SEI and has been superseded by the more comprehensive Capability Maturity Model Integration (CMMI), of which version 1.2 has now been released.

Context

In the 1970s, technological improvements made computers more widespread, flexible, and inexpensive. Organizations began to adopt more and more computerized information systems and the field of software development grew significantly. This led to an increased demand for developers—and managers—which was satisfied with less experienced professionals. Unfortunately, the influx of growth caused growing pains; project failure became more commonplace not only because the field of computer science was still in its infancy, but also because projects became more ambitious in scale and complexity. In response, individuals such as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas published articles and books with research results in an attempt to professionalize the software development process. Watts Humphrey's Capability Maturity Model (CMM) was described in the book Managing the Software Process (1989). The CMM as conceived by Watts Humphrey was based on the earlier work of Phil Crosby. Active development of the model by the SEI (US Dept. of Defense Software Engineering Institute) began in 1986. The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of processes (e.g., IT Service Management processes) in IS/IT (and other) organizations.

The model identifies five levels of process maturity for an organisation:

1. Initial (chaotic, ad hoc, heroic): the starting point for use of a new process.
2. Repeatable (project management, process discipline): the process is used repeatedly.
3. Defined (institutionalized): the process is defined/confirmed as a standard business process.
4. Managed (quantified): process management and measurement takes place.
5. Optimising (process improvement): process management includes deliberate process optimization/improvement.

Within each of these maturity levels are KPAs (Key Process Areas) which characterise that level, and for each KPA there are five definitions identified:

1. Goals
2. Commitment
3. Ability
4. Measurement
5. Verification

The KPAs are not necessarily unique to CMM, representing - as they do - the stages that organizations must go through on the way to becoming mature. The assessment is supposed to be led by an authorised lead assessor. One way in which companies are supposed to use the model is first to assess their maturity level and then form a specific plan to get to the next level. Skipping levels is not allowed.

NB: The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. It may be suited for that purpose. When it became a general model for software process improvement, there were many critics. "Shrinkwrap" companies are also called "COTS" or commercial-off-the-shelf firms or software package firms. They include Claris, Apple, Symantec, Microsoft, and Lotus, amongst others. Many such companies rarely if ever managed their requirements documents as formally as the CMM described in order to achieve level 2, and so all of these companies would probably fall into level 1 of the model.

Origins

The United States Air Force funded a study at the SEI to create a model for the military to use as an objective evaluation of software subcontractors. In 1989, the Capability Maturity Model was published as Managing the Software Process.

Timeline

• 1987: SEI-87-TR-24 (SW-CMM questionnaire), released.
• 1989: Managing the Software Process, published.
• 1991: SW-CMM v1.0, released.
• 1993: SW-CMM v1.1, released.
• 1997: SW-CMM revisions halted in support for CMMI.
• 2000: CMMI v1.02, released.
• 2002: CMMI v1.1, released.
• 2006: CMMI v1.2, released.

Current state

Although these models have proved useful to many organizations, the use of multiple models has been problematic. Further, applying multiple models that are not integrated within and across an organization is costly in terms of training, appraisals, and improvement activities. The CMM Integration project was formed to sort out the problem of using multiple CMMs. The CMMI Product Team's mission was to combine three source models:

1. The Capability Maturity Model for Software (SW-CMM) v2.0 draft C
2. The Systems Engineering Capability Model (SECM)
3. The Integrated Product Development Capability Maturity Model (IPD-CMM) v0.98
4. Supplier sourcing

CMMI is the designated successor of the three source models. The SEI has released a policy to sunset the Software CMM and previous versions of the CMMI. [3] The same can be said for the SECM and the IPD-CMM; these models were superseded by CMMI.

Future direction

With the release of the CMMI Version 1.2 Product Suite, the existing CMMI has been renamed the CMMI for Development (CMMI-DEV), V1.2.[1] A version of the CMMI for Services is being developed by a Northrop Grumman-led team under the auspices of the SEI, with participation from Boeing, Lockheed Martin, Raytheon, SAIC, SRA, and Systems and Software Consortium (SSCI).[2] A CMMI for Acquisition (CMMI-ACQ) is also under development at the SEI.[3] Suggestions for improving CMMI are welcomed by the SEI. For information on how to provide feedback, see the CMMI Web site. In some cases, CMM can be combined with other methodologies. It is commonly used in conjunction with the ISO 9001 standard. JPMorgan Chase & Co. tried combining CMM with the computer programming methodologies of Extreme Programming (XP) and Six Sigma. They found that the three systems reinforced each other well, leading to better development, and did not mutually contradict; see Extreme Programming (XP) Six Sigma CMMI.

Levels of the CMM

(See chapter 2 of the March 2002 edition of CMMI from SEI, page 11.) There are five levels of the CMM. According to the SEI, "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."

Level 1 - Initial

At maturity level 1, processes are usually ad hoc and the organization usually does not provide a stable environment. Success in these organizations depends on the competence and heroics of the people in the organization and not on the use of proven processes. In spite of this ad hoc, chaotic environment, maturity level 1 organizations often produce products and services that work; however, they frequently exceed the budget and schedule of their projects. Maturity level 1 organizations are characterized by a tendency to over commit, abandon processes in the time of crisis, and not be able to repeat their past successes again. Level 1 software project success depends on having quality people.

Level 2 - Repeatable

At maturity level 2, software development successes are repeatable. The processes may not repeat for all the projects in the organization. The organization may use some basic project management to track cost and schedule. Process discipline helps ensure that existing practices are retained during times of stress. When these practices are in place, projects are performed and managed according to their documented plans. Project status and the delivery of services are visible to management at defined points (for example, at major milestones and at the completion of major tasks). Basic project management processes are established to track cost, schedule, and functionality. The minimum process discipline is in place to repeat earlier successes on projects with similar applications and scope. There is still a significant risk of exceeding cost and time estimates.

Level 3 - Defined

The organization's set of standard processes, which is the basis for level 3, is established and improved over time. These standard processes are used to establish consistency across the organization. Projects establish their defined processes by the organization's set of standard processes according to tailoring guidelines. The organization's management establishes process objectives based on the organization's set of standard processes and ensures that these objectives are appropriately addressed. A critical distinction between level 2 and level 3 is the scope of standards, process descriptions, and procedures. At level 2, the standards, process descriptions, and procedures may be quite different in each specific instance of the process (for example, on a particular project). At level 3, the standards, process descriptions, and procedures for a project are tailored from the organization's set of standard processes to suit a particular project or organisational unit.

Level 4 - Managed

Using precise measurements, management can effectively control the software development effort. In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. At this level the organization sets a quantitative quality goal for both software process and software maintenance. Subprocesses are selected that significantly contribute to overall process performance. These selected subprocesses are controlled using statistical and other quantitative techniques. A critical distinction between maturity level 3 and maturity level 4 is the predictability of process performance. At maturity level 4, the performance of processes is controlled using statistical and other quantitative techniques, and is quantitatively predictable. At maturity level 3, processes are only qualitatively predictable.

Level 5 - Optimizing

Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements. Quantitative process-improvement objectives for the organization are established, continually revised to reflect changing business objectives, and used as criteria in managing process improvement. The effects of deployed process improvements are measured and evaluated against the quantitative process-improvement objectives. Both the defined processes and the organization's set of standard processes are targets of measurable improvement activities. Process improvements to address common causes of process variation and measurably improve the organization's processes are identified, evaluated, and deployed. Optimizing processes that are nimble, adaptable and innovative depends on the participation of an empowered workforce aligned with the business values and objectives of the organization. The organization's ability to rapidly respond to changes and opportunities is enhanced by finding ways to accelerate and share learning. A critical distinction between maturity level 4 and maturity level 5 is the type of process variation addressed. At maturity level 4, processes are concerned with addressing special causes of process variation and providing statistical predictability of the results. Though processes may produce predictable results, the results may be insufficient to achieve the established objectives. At maturity level 5, processes are concerned with addressing common causes of process variation and changing the process (that is, shifting the mean of the process performance) to improve process performance (while maintaining statistical probability) to achieve the established quantitative process-improvement objectives.

Extensions

Recent versions of CMMI from SEI indicate a "level 0", characterized as "Incomplete". Many observers leave this level out as redundant or unimportant, but Pressman and others make note of it. See page 18 of the August 2002 edition of CMMI from SEI. [4] Anthony Finkelstein [5] extrapolated that negative levels are necessary to represent environments that are not only indifferent, but actively counterproductive, and this was refined by Tom Schorsch [6] as the Capability Immaturity Model.

Process areas

For more details on this topic, see Process area (CMMI).

The CMMI contains several key process areas indicating the aspects of product development that are to be covered by company processes.

Key Process Areas of the Capability Maturity Model Integration (CMMI)

Abbreviation | Name | Area | Maturity Level
CAR | Causal Analysis and Resolution | Support | 5
CM | Configuration Management | Support | 2
DAR | Decision Analysis and Resolution | Support | 3
IPM | Integrated Project Management | Project Management | 3
ISM | Integrated Supplier Management | Project Management | 3
IT | Integrated Teaming | Project Management | 3
MA | Measurement and Analysis | Support | 2
OEI | Organizational Environment for Integration | Support | 3
OID | Organizational Innovation and Deployment | Process Management | 5
OPD | Organizational Process Definition | Process Management | 3
OPF | Organizational Process Focus | Process Management | 3
OPP | Organizational Process Performance | Process Management | 4
OT | Organizational Training | Process Management | 3
PI | Product Integration | Engineering | 3
PMC | Project Monitoring and Control | Project Management | 2
PP | Project Planning | Project Management | 2
PPQA | Process and Product Quality Assurance | Support | 2
QPM | Quantitative Project Management | Project Management | 4
RD | Requirements Development | Engineering | 3
REQM | Requirements Management | Engineering | 2
RSKM | Risk Management | Project Management | 3
SAM | Supplier Agreement Management | Project Management | 2
TS | Technical Solution | Engineering | 3
VAL | Validation | Engineering | 3
VER | Verification | Engineering | 3

Controversial aspects

The software industry is diverse and volatile. All methodologies for creating software have supporters and critics, and the CMM is no exception.

Praise

• The CMM was developed to give Defense organizations a yardstick to assess and describe the capability of software contractors to provide software on time, within budget, and to acceptable standards. It has arguably been successful in this role, even reputedly causing some software sales people to clamour for their organizations' software engineers/developers to "implement CMM."
• The CMM is intended to enable an assessment of an organization's maturity for software development. It is an important tool for outsourcing and exporting software development work. Economic development agencies in India, Ireland, Egypt, Syria, and elsewhere have praised the CMM for enabling them to be able to compete for US outsourcing contracts on an even footing.
• The CMM provides a good framework for organizational improvement. It allows companies to prioritize their process improvement initiatives.

Criticism

• CMM has failed to take over the world. It's hard to tell exactly how widespread it is, as the SEI only publishes the names and achieved levels of compliance of companies that have requested this information to be listed[4]. The most current Maturity Profile for CMMI is available online[5].
• CMM is well suited for bureaucratic organizations such as government agencies, large corporations and regulated monopolies. If the organizations deploying CMM are large enough, they may employ a team of CMM auditors reporting their results directly to the executive level. (A practice encouraged by SEI.) The use of auditors and executive reports may influence the entire IT organization to focus on perfectly completed forms rather than application development, client needs or the marketplace. If the project is driven by a due date, CMM's intensive reliance on process and forms may become a hindrance to meeting the due date in cases where time to market with some kind of product is more important than achieving high quality and functionality of the product.
• Suggestions of scientifically managing the software process with metrics only occur beyond the Fourth level. There is little validation of the processes' cost savings to business other than a vague reference to empirical evidence. It is expected that a large body of evidence would show that adding all the business overhead demanded by CMM somehow reduces IT headcount, business cost, and time to market without sacrificing client needs.
• No external body actually certifies a software development center as being CMM compliant. It is supposed to be an honest self-assessment ([6] and [7]). Some organizations misrepresent the scope of their CMM compliance to suggest that it applies to their entire organization rather than a specific project or business unit.
• The CMM does not describe how to create an effective software development organization. The CMM contains behaviors or best practices that successful projects have demonstrated. Being CMM compliant is not a guarantee that a project will be successful; however, being compliant can increase a project's chances of being successful.
• The CMM can seem to be overly bureaucratic, promoting process over substance, for example by emphasizing predictability over the service provided to end users. More commercially successful methodologies (for example, the Rational Unified Process) have focused not on the capability of the organization to produce software to satisfy some other organization or a collectively-produced specification, but on the capability of organizations to satisfy specific end user "use cases" as per the Object Management Group's UML (Unified Modeling Language) approach[8].

The most beneficial elements of CMM Level 2 and 3

• Creation of a Software Specification, stating what is going to be developed, combined with formal sign-off, an executive sponsor, and an approval mechanism. This is NOT a living document: additions are placed in a deferred or out-of-scope section for later incorporation into the next cycle of software development.

• A Technical Specification, stating precisely how the thing described in the Software Specification is to be developed. This is a living document.

• Peer review of code (code review) with metrics, allowing developers to walk through an implementation and suggest improvements or changes. Note: this is problematic because the code has already been developed and a bad design cannot be fixed by "tweaking"; in practice, the code review gives completed code a formal approval mechanism.

• Version control: a very large number of organizations have no formal revision control or release mechanism in place.

• The idea that there is a "right way" to build software: that it is a scientific process involving engineering design, and that groups of developers are not there simply to work on the problem du jour.

Companies appraised against the CMMI

Large numbers of IT companies across the world are making forays up the CMMI level ladder. In June 1999, Wipro of India became the first software services company in the world to attain SEI CMM maturity level 5, the highest maturity level. Every year many IT companies around the world enter the CMMI regime or improve their CMMI levels. As of 2006, about 75% of the CMMI level 5 software centers are in India, most of them located in the city of Bangalore. For a complete list, view the published SCAMPI results.

See also

• Personal Software Process (PSP) and Team Software Process (TSP), two other process models also developed by the Software Engineering Institute, addressing individual and team software development respectively
• Information Technology Infrastructure Library (ITIL), a framework of best-practice approaches intended to facilitate the delivery of high-quality information technology (IT) services
• Capability Maturity Model Integration
• People Capability Maturity Model
• The SPICE project

One must be very skeptical of a company claiming to have obtained a certain level of CMM (the higher the level, the more skepticism is warranted) at an "enterprise level." Usually this is a marketing technique: the claim may indeed apply to some project done by the company at some time, but it is most unlikely to have been achieved by the enterprise as a whole.

References

Books

• Chrissis, Mary Beth; Konrad, Mike; Shrum, Sandy (2003). CMMI: Guidelines for Process Integration and Product Improvement. Addison-Wesley Professional. ISBN 0321154967.
• Humphrey, Watts (1989). Managing the Software Process. Addison-Wesley Professional. ISBN 0201180952.

Websites

• History of Process Models
• Process Improvement: The Capability Maturity Model
• ITNOW, September 2005: Capability model mature - or is it?
• The PITCOM meeting of 22 March 2004, which addressed one of the major trends affecting the UK technology sector

Footnotes

1. ^ Capability Maturity Model® Integration (CMMI®) Version 1.2 Overview (PDF). SEI (2006). Retrieved on 28 October 2006.
2. ^ What is CMMI? - Worldwide Adoption. SEI (2006). Retrieved on 28 October 2006.
3. ^ Sunsetting Version 1.1 of the CMMI® Product Suite. SEI (2006). Retrieved on 28 October 2006.
4. ^ August 2002 edition of CMMI (PDF). CMU/SEI-2002-TR-011. SEI (2002). Retrieved on 28 October 2006.
5. ^ Finkelstein, Anthony (2000-04-28). A Software Process Immaturity Model (PDF). University College London. Retrieved on 28 October 2006.
6. ^ Schorsch, Tom (1996). The Capability Im-Maturity Model. The Air Force Institute of Technology. Retrieved on 28 October 2006.

External links

• CMMI Official Web Site
• Capability Maturity Model® Integration (CMMI®) Overview [PDF]
• A critical look at implementing CMM Level 2

Retrieved from http://en.wikipedia.org/wiki/Capability_Maturity_Model. This page was last modified 13:20, 28 October 2006. All text is available under the terms of the GNU Free Documentation License. (See Copyrights for details.) Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc.


Appendix II. – PRESENTATION (Extracted): Measuring Software Product Quality: The ISO 25000 Series and CMMI

Title: Measuring Software Product Quality: The ISO 25000 Series and CMMI
Author: Dave Zubrow
Publisher: Software Engineering Institute (SEI)
Date: 14 June 2004
URL: http://www.sei.cmu.edu/sema/presentations/zubrow/esepg/esepg.pdf
Date Accessed: 25 September 2006


Appendix III. – ARTICLE: Software Independent Verification and Validation (IV&V)/Independent Assessment (IA) Criteria

Title: Software Independent Verification and Validation (IV&V)/Independent Assessment (IA) Criteria
Author: (not stated)
Publisher: NASA, United States
Date: (not stated)
URL: http://ivvcriteria.ivv.nasa.gov/Documents/ivvcrit.pdf
Date Accessed: 3 October 2006


Software Independent Verification and Validation (IV&V)/Independent Assessment (IA) Criteria

1. Introduction

1.1 The purpose of this appendix is to provide quantifiable criteria for determining whether IV&V should be applied to a given software development. Since software IV&V should begin in the Formulation Subprocess (as defined in NPG 7120.5, paragraph 1.4.3) of a project, the process described here is based on metrics that are available before project approval.

1.2 All projects containing software shall evaluate themselves against these criteria (which are also available at http://ivvcriteria.ivv.nasa.gov) to determine if a software IA or IV&V is required.

1.3 See paragraph P.2 for applicability of these criteria.

2. Risk Factors and Consequences

2.1 Software IV&V is intended to assist in mitigating risk; hence, the decision to do software IV&V should be risk based. NPG 7120.5 defines risk as the "combination of 1) the probability (qualitative or quantitative) that a program or project will experience an undesired event such as cost overrun, schedule slippage, safety mishap, or failure to achieve a needed technological breakthrough; and 2) the consequences, impact, or severity of the undesired event were it to occur." The exact probability of occurrence and consequences of a given software failure cannot be calculated early in the software lifecycle. However, there are realistically available metrics that give good general approximations of the consequences as well as the likelihood of failures.

2.2 In general, the consequences of a software failure can be derived from the purpose of the software: i.e., what does the software control; what do we depend on it to do. Paragraph 2.2.1 contains a list of factors that can be used to categorize software based on its intended function and the level of effort expended to produce the software. Paragraph 2.2.2 defines the boundaries of four levels of failure consequences based on the rating factors from paragraph 2.2.1.

2.2.1 Factors contributing to the consequences of software failure include the following:

a. Potential for loss of life. Is the software the primary means of controlling or monitoring systems that have the potential to cause the death of an operator, crewmember, support personnel, or bystander? The presence of manual overrides and failsafe devices is not to be considered. This is a binary rating: responses must be either yes or no. Examples of software with the potential for loss of life include:

(1) Flight and launch control software for human-rated systems.
(2) Software controlling life support functions.
(3) Software controlling hazardous materials with the potential for exposing humans to a lethal dose.


(4) Software controlling mechanical equipment (including vehicles) that could cause death through impact, crushing, or cutting.
(5) Any software that provides information to operators where an inaccuracy could result in death through an incorrect decision (e.g., mission control room displays).

b. Potential for serious injury. Serious injury is here defined as loss of use of a digit or limb, loss of sight in one or both eyes, loss of hearing, or exposure to a substance or radiation that could result in long-term illness. This rating is also binary. It considers only those cases where the software is the primary mechanism for controlling or monitoring the system; the presence of manual overrides and failsafe devices is not to be considered. Examples of software with potential for serious injury include software controlling milling or cutting equipment, class IV lasers, or X-ray equipment.

c. Potential for catastrophic mission failure. Can a problem in the software result in a catastrophic failure of the mission? This is a binary rating. Software controlling navigation, communications, or other critical systems whose failure would result in loss of vehicle or total inability to meet mission objectives falls into this category.

d. Potential for partial mission failure. Can a problem in the software result in a failure to meet some of the overall mission objectives? This is a binary rating. Examples of this category include software controlling one of several data collection systems, or software supporting a given experiment that is not the primary purpose of the mission.

e. Potential for loss of equipment. This is a measure of the cost (in dollars) of physical resources that are placed at risk due to a software failure. Potential collateral damage is to be included. This is exclusive of mission failure. Examples include the following:

(1) Loss of a $5 million unmanned drone due to flight control software failure. (Assuming the drone is replaceable, this would not be a mission failure.)
(2) Damage to a wind tunnel drive shaft due to a sudden change in rotation speed.

f. Potential for waste of software resource investment. This is a measure or projection of the effort (in work-years: civil service, contractor, and other) invested in the software. It shows the level of effort that could potentially be wasted if the software does not meet requirements.

g. Potential for adverse publicity. This is a measure of the potential for negative political and public-image impacts stemming from a failure of the system as a result of software failure. The unit of measure is the geographical or political level at which the failure will be common knowledge: local (Center), Agency, national, or international. The potential for adverse publicity is evaluated based on the history of similar efforts.

h. Potential effect on routine operations. This is a measure of the potential to interrupt business. There are two major components of this rating factor: scope and impact. Scope refers to who is affected; the choices are Center and Agency. The choices for impact are inconvenience and work stoppage. Examples include the following:

(1) A faulty firewall that failed to protect against a virus, resulting in a 4-hour loss of e-mail capabilities at a Center, would be a "Center inconvenience."
(2) Assuming that the old financial management software was no longer maintainable, the failure of the replacement system to pass acceptance testing and the resulting 2-year delay would be a potential "Agency work stoppage." This does not imply that workarounds could not be implemented, only that it has the potential to stop work Agency-wide.

2.2.2 Software Consequences of Failure Rating.

2.2.2.1 Consequences of failure are considered "Grave" when any of the following conditions are met:

a. Potential for loss of life – Yes.
b. Potential for loss of equipment – Greater than $100,000,000.
c. Potential for waste of resource investment – Greater than 200 work-years on software.
d. Potential for adverse publicity – International.

2.2.2.2 Consequences of failure are considered "Substantial" when any of the following conditions are met:

a. Potential for serious injury – Yes.
b. Potential for catastrophic mission failure – Yes.
c. Potential for loss of equipment – Greater than $20,000,000.
d. Potential for waste of resource investment – Greater than 100 work-years on software.
e. Potential for adverse publicity – National.
f. Potential effect on routine operations – Agency work stoppage.

2.2.2.3 Consequences of failure are considered "Significant" when any of the following conditions are met:

a. Potential for partial mission failure – Yes.
b. Potential for loss of equipment – Greater than $1,000,000.


c. Potential for waste of resource investment – Greater than 20 work-years on software.
d. Potential for adverse publicity – Agency.
e. Potential effect on routine operations – Center work stoppage or Agency inconvenience.

2.2.2.4 Consequences of failure are considered "Insignificant" when all of the following conditions are met:

a. Potential for loss of life – No.
b. Potential for serious injury – No.
c. Potential for catastrophic mission failure – No.
d. Potential for partial mission failure – No.
e. Potential for loss of equipment – Less than $1,000,000.
f. Potential for waste of resource investment – Less than 20 work-years on software.
g. Potential for adverse publicity – No more than local visibility.
h. Potential effect on routine operations – No more than a Center inconvenience.

2.3 The probability of failure for software is difficult to determine even late in the development cycle. However, Table 1 contains simple metrics on the software, the developer, and the development environment, which have proven to be indicators of future software problems. While these indicators are not precise, they provide order-of-magnitude estimates that are adequate for assessing the need for software IV&V. (The Facility and the NASA Software Working Group will further refine these indicators and their associated weighting factors as more data become available.)
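Before moving to the likelihood factors, note that the rating rules in paragraphs 2.2.2.1 through 2.2.2.4 reduce to a simple cascade of threshold checks. The following Python sketch is the author's illustration only, not NASA code: the function and parameter names are invented, but the thresholds are taken directly from paragraph 2.2.2.

```python
# Illustrative sketch (not NASA code) of the consequence-of-failure
# rating from paragraph 2.2.2.  Thresholds come straight from the text.
def consequence_rating(loss_of_life, serious_injury,
                       catastrophic_mission_failure, partial_mission_failure,
                       equipment_loss_dollars, invested_work_years,
                       publicity,     # "local", "agency", "national", "international"
                       ops_effect):   # "none", "center_inconvenience",
                                      # "agency_inconvenience", "center_stoppage",
                                      # or "agency_stoppage"
    if (loss_of_life or equipment_loss_dollars > 100_000_000
            or invested_work_years > 200 or publicity == "international"):
        return "Grave"
    if (serious_injury or catastrophic_mission_failure
            or equipment_loss_dollars > 20_000_000 or invested_work_years > 100
            or publicity == "national" or ops_effect == "agency_stoppage"):
        return "Substantial"
    if (partial_mission_failure or equipment_loss_dollars > 1_000_000
            or invested_work_years > 20 or publicity == "agency"
            or ops_effect in ("center_stoppage", "agency_inconvenience")):
        return "Significant"
    return "Insignificant"

# Example: a $5M drone lost to a flight-control failure, ~30 work-years
# invested, local visibility only.
print(consequence_rating(False, False, False, False,
                         5_000_000, 30, "local", "none"))   # -> "Significant"
```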


Table 1. Likelihood of Failure Based on Software Environment

Each factor below is given an un-weighted score of 1, 2, 4, 8, or 16 (from lowest to highest likelihood of failure); the score is multiplied by the factor's weighting factor, and the weighted scores are summed to give the total likelihood-of-failure rating. The descriptors for each factor, from lowest score to highest, are:

Software team complexity (weighting ×2): up to 5 people at one location; up to 10 people at one location; up to 20 people at one location or 10 people with external support; up to 50 people at one location or 20 people with external support; more than 50 people at one location or more than 20 people with external support.

Contractor support (weighting ×2): none; contractor with minor tasks; contractor with major tasks; contractor with major tasks critical to project success.

Organization complexity (weighting ×1)(1): one location; two locations but same reporting chain; multiple locations but same reporting chain; multiple providers with prime/sub relationship; multiple providers with associate relationship.

Schedule pressure (weighting ×2)(2): no deadline; deadline is negotiable; non-negotiable deadline.

Process maturity of software provider (weighting ×2): independent assessment of Capability Maturity Model (CMM) Level 4 or 5; independent assessment of CMM Level 3; independent assessment of CMM Level 2; CMM Level 1 with record of repeated mission success; CMM Level 1 or equivalent.

Degree of innovation (weighting ×1): proven and accepted development; proven but new to the organization; cutting edge.

Level of integration (weighting ×2): simple, stand-alone; extensive integration required.

Requirement maturity (weighting ×2): well-defined objectives with no unknowns; well-defined objectives with few unknowns; preliminary objectives; changing, ambiguous, or untestable objectives.

Software lines of code (weighting ×2)(3): less than 50K; over 500K; over 1000K.

Total: sum of the weighted scores.

The following notes and definitions apply to Table 1: (1) “Organization Complexity” is an indirect measure of the software developer’s potential communications difficulties. A single organization working from multiple locations faces a slightly greater difficulty than an organization in one location. When the software development is accomplished by multiple organizations working for a single integrator, the development is significantly complicated. If the developing organizations are coequal such as in an associate contractor relationship (or a similar relationship between government entities) then there is no integrator. Experience has shown this arrangement to be extremely difficult as no one is in charge.


(2) Under "Schedule Pressure," a deadline is negotiable if changing the deadline is possible, although it may result in slightly increased cost, schedule delays, or negative publicity. A deadline is non-negotiable if it is driven by an immovable event such as an upcoming launch window.

(3) As the problems identified in software IV&V are often mismatches between the intended use and the actual software built, "Software Lines of Code" shall include reused software and autogenerated software.

3. Risk Assessment

3.1 Combining the software consequences of failure and the likelihood of failure rating from paragraph 2 yields a risk assessment that can be used to identify the need for software IV&V. The indication of whether software IV&V is required is obtained by plotting in Figure 2 the intersection of the Consequences of Software Failure determination and the Total Likelihood of Failure determination. Application of these criteria simply determines that a project is a candidate for software IV&V, not the level of software IV&V or the resources associated with the software IV&V effort. These will be determined as a result of discussions between the project and the Facility.
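As paragraph 3.1 describes, the Table 1 total is combined with the consequence rating to drive the IV&V decision. A minimal Python sketch of the scoring step follows; it is the author's illustration only (the dictionary keys and function name are invented), with the weighting factors transcribed from Table 1 above.

```python
# Illustrative sketch of the Table 1 scoring: each factor gets an
# un-weighted score of 1, 2, 4, 8 or 16, which is multiplied by the
# factor's weighting and summed into the total likelihood rating.
WEIGHTS = {
    "team_complexity":         2,
    "contractor_support":      2,
    "organization_complexity": 1,
    "schedule_pressure":       2,
    "process_maturity":        2,
    "degree_of_innovation":    1,
    "level_of_integration":    2,
    "requirement_maturity":    2,
    "lines_of_code":           2,
}

VALID_SCORES = {1, 2, 4, 8, 16}

def likelihood_total(scores):
    """`scores` maps each factor name to its un-weighted score."""
    total = 0
    for factor, weight in WEIGHTS.items():
        score = scores[factor]
        assert score in VALID_SCORES, f"bad score for {factor}: {score}"
        total += score * weight
    return total

# Example: a mid-sized, CMM Level 2 contractor effort with a negotiable
# deadline (scores chosen for illustration only).
example = {
    "team_complexity": 4, "contractor_support": 4,
    "organization_complexity": 2, "schedule_pressure": 2,
    "process_maturity": 4, "degree_of_innovation": 2,
    "level_of_integration": 4, "requirement_maturity": 2,
    "lines_of_code": 2,
}
print(likelihood_total(example))  # 8+8+2+4+8+2+8+4+4 = 48
```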

Figure 2. Software Risk (risk matrix not reproduced)

3.2 Figure 2 shows a dark region of high risk where software consequences, likelihood of failure, or both are high. Projects having software that falls into this high-risk area shall undergo software IV&V. The exception is those projects that have already done hardware/software integration; a software IV&V would not be productive that late in the development cycle. These projects shall undergo a Software Independent Assessment (IA). An IA is a review and analysis of the project/program's software development lifecycle and products. The IA differs in scope from a full software IV&V program in that IV&V is applied over the lifecycle of the system, whereas an IA is usually a one-time review of the existing products and plans.

3.3 Figure 2 shows three gray regions of intermediate risk. Projects having software that falls into these areas shall undergo a Software IA. The Facility shall conduct the Software IA. One purpose of the Software IA is to ensure that the software development does not have project-specific risk characteristics that would warrant the performance of software IV&V. Should such characteristics be identified, the Facility will make a recommendation for software IV&V performance.
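The Figure 2 lookup itself cannot be reproduced here, because the region boundaries are defined graphically and are not given in the text. The sketch below therefore encodes only the shape of the decision from paragraphs 3.2 and 3.3, with explicitly hypothetical placeholder boundaries that are the author's own, not NASA's.

```python
# Illustrative decision logic for paragraphs 3.2/3.3.  The numeric
# boundaries below are HYPOTHETICAL placeholders: the real regions are
# defined graphically in Figure 2, which is not reproduced here.
CONSEQUENCE_ORDER = ["Insignificant", "Significant", "Substantial", "Grave"]

def ivv_decision(consequence, likelihood_total, hw_sw_integration_done=False):
    level = CONSEQUENCE_ORDER.index(consequence)
    high_risk = level >= 2 or likelihood_total >= 80      # placeholder boundary
    intermediate = level == 1 or likelihood_total >= 40   # placeholder boundary
    if high_risk:
        # Late projects get an IA instead of full IV&V (paragraph 3.2).
        return "Software IA" if hw_sw_integration_done else "Software IV&V"
    if intermediate:
        return "Software IA"
    return "No IV&V or IA required"

print(ivv_decision("Significant", 48))   # -> "Software IA"
```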



Appendix IV. – ARTICLE: Quantifying Economic Impact of Information Integrity Failures

Title: Quantifying Economic Impact of Information Integrity Failures
Author: Salimol Thomas
Publisher: Information Integrity Coalition (IIC)
Date: 14 February 2003
URL: http://www.informationintegrity.org/downloads/econImpact-executiveSummary.pdf
Date Accessed: 3 October 2006


Quantifying Economic Impact of Information Integrity Failures

Executive Summary

by Salimol Thomas

Executive Summary

Information integrity failure causes economic loss as well as loss of intangible assets: it results in loss of investment, loss of income, loss of productivity, and liability costs. The Standish Group estimates that defective software code accounted for 45 percent of computer system downtime and cost U.S. companies $100 billion in lost productivity in 2000 [1]. Indirect effects of information system failures have included diminution of brand value and even legal action by law enforcement agencies. Pervan G. P. and Phua R. [2] extensively studied the executive information systems (EIS) used in Australia and identified inadequate technology, unavailability of data, doubtful data, and complexity as the main factors responsible for EIS failures. This report is organized in two sections. Section 1 examines the various economic aspects of a lack of information integrity. Section 2 reports the factors responsible for the failure of information systems (IS).

1.0 The Cost of a Lack of Information Integrity

The following section describes the various tangible and intangible losses an organization can incur because of information integrity failures.

1.1 Loss of Investment

In the United States, billions of dollars are invested in information technology (IT) application development. However, many of these projects fail due to poor design and deployment errors. For example:

• In 1999, the failure of NASA's Mars Lander missions resulted in a total loss of $360 million. The cause of the failure was traced back to a software error [3, 4].


1.2 Loss of Income

Loss of income can occur because of:

1. Missed opportunity due to lack of valid information. For example, if the information system predicts no growth for a particular product in a particular year, the company will not anticipate any increase in demand. If this prediction is incorrect, the company loses the additional revenue.

2. Information integrity failure of the business support system. Today's business decisions and operations depend largely on information handled by information systems, and a failure or lack of integrity at any point in the system can lead to significant loss of business.

The following examples illustrate the above points:

• The failure to produce reliable software to handle baggage at the new Denver airport cost the city $1.1 million per day [5].

• Nike's third-quarter sales fell after faulty supply chain software (i2) delayed orders and caused excess inventories. Revenues for that quarter were off by more than $100 million [11].

• Hershey Foods' $115 million SAP computer system implementation had problems, crippled shipments, and was blamed for a 12 percent decrease in third-quarter sales in 1999 [6].

• eBay's business was halted for 22 hours due to a system crash, resulting in a $4 million revenue loss and a $5.7 billion drop in market capitalization [3].


1.3 Brand Image

In today's world, the image of an organization depends largely on its IT infrastructure, and any failure of these systems conveys a wrong signal to investors:

1. Organizations can lose market value because of a decrease in stock price.
2. The intangible brand image can be diminished.

The following examples illustrate the above points:

• Egghead.com's reputation was shattered when hackers were able to steal roughly 3.7 million credit card numbers by exploiting a security flaw in the company's e-commerce software servers. Egghead customers left in droves after being notified of the theft, and shares of the company's stock fell to as low as $0.50 after trading as high as $13.50 [10].

• In the days following the announcement of software problems at Nike, the company's stock dropped nearly 20 percent [14].

• H&R Block's reputation was adversely affected when a software failure allowed online clients to view other clients' tax returns [3].

• Online music retailer CDUniverse suffered brand damage, liability costs, and loss of revenue when a hacker exploited a software security flaw and stole 300,000 credit card numbers [3].

1.4 Loss of Productivity

A lack of integrity in an organization's information system may halt normal business operations and reduce productivity:

• Input errors are typically around 2 percent, and in excess of 10 percent under certain conditions, when a user interface is not designed properly. Spreadsheet errors are estimated to be 5 percent. These errors not only reduce productivity but also spread inaccurate information [13].

• The Standish Group [1] estimates that defective software code accounted for 45 percent of computer system downtime and cost U.S. companies $100 billion in lost productivity in 2000.

• The New York Stock Exchange (NYSE) came to a crashing halt for one hour on June 6, 2001, due to a system error [7].

1.5 Liability Costs

An absence of integrity not only causes loss of revenue but also invites the wrath of dissatisfied customers. For example:

• Customers of the popular tax-preparation Web site e1040.com were preparing to sue after their private information, including passwords and Social Security numbers, was displayed in plain text on the Internet due to a switching error in the site's encryption software [8].

• FoxMeyer sued SAP and Andersen Consulting for $500 million each because of SAP's software failure [3].

1.6 Federal Government Action

Inaccurate information invites the wrath of law enforcement agencies. For example:

• On August 3, 1999, the Securities and Exchange Commission (SEC) filed 26 cases against 82 defendants and respondents across the country for engaging in fraudulent microcap schemes from which they profited by over $12 million [15].


2.0 Failure Factors

Pervan G. P. and Phua R. (1996) [2] extensively studied the executive information systems (EIS) used in Australia and identified the following factors as, collectively, the leading causes of EIS failure.

Typical Failure Factors (Pervan G. P. and Phua R., 1996)

No.  Failure Factor                                                               Rating
1    Inadequate/Inappropriate Technology                                          4.75
2    Loss of Interest by Executives                                               4.25
3    Unavailability of Data                                                       4.25
4    Poorly Planned System Evolution Leading to Inability to Demonstrate
     Breakthroughs                                                                3.75
5    Doubtful Data Integrity                                                      3.75
6    Difficulty in Reaching Operational Status Due to Technical Difficulties      3.75
7    Information Systems Unable to Adapt to Changing Requirements                 3.50
8    Inability to Cost-Justify IT Systems                                         3.25
9    System is Difficult to Use                                                   3.25
10   Lack of Commitment/Support by Top Management                                 3.00
11   Organizational Resistance to Change/Technology                               3.00
12   System Objectives not Linked to Business Strategy                            2.75
13   Perception of System by Executives as Unimportant                            2.75
14   Insufficient Depth of Information                                            2.75
15   Lack of Executive Sponsorship                                                2.50
16   Insufficient/Incompetent System Support Staff                                2.50
17   Failure of System to Meet Executive Information Requirements                 2.00


3.0 References

1. Ricadela, A., Gilbert, A., "The state of the software," InformationWeek, 42-54, May 21, 2001.
2. Pervan, G.P. and Phua, R., "Executive Information Systems in Australia: Current Status and Some Historical Comparisons," Proceedings of the 29th Annual Hawaii International Conference on System Sciences, 1996.
3. Miles, P., Bryant, B., "Cigital Estimates a Half Trillion Dollars Was Lost by Businesses Due to Software Failure," http://www.cigital.com/news/halftrillion.html, Jan 24, 2001.
4. "Second Science Committee Hearing on NASA's Mars Failures," The American Institute of Physics Bulletin of Science Policy News, Number 81, July 11, 2000, http://www.aip.org/enews/fyi/2000/fyi00.081.htm
5. "The CHAOS Report," Standish Group, 1994. http://www.eee.bham.ac.uk/dsvp_gr/roxby/ee4a3/Lecture2/sld010.htm
6. English, L. P., "Plain English on Data Quality: The Information Quality Revolution," DM Review, February 2000.
7. Yasin, R., "Software Bug Halts Trading On NYSE," Reuters, June 8, 2001, http://www.internetweek.com/story/INW20010608S0002
8. Sandoval, G., "Tax site leaves customer data exposed," CNET News.com, Feb. 12, 2001, http://news.com.com/2100-1017-252457.html
9. Ricadela, A., "The State Of Software: QUALITY," InformationWeek, May 21, 2001.
10. Rosencrance, L., "Bulletin: Hacker breaks Egghead's security shell," Computerworld, December 22, 2000, http://www.computerworld.com/cwi/story/0,1199,NAV47_STO55529,00.html
11. Karpinski, R., "Don't Get 'Nike-ed'," InternetWeek (CMP Media TechWeb), March 7, 2001.
12. Fonstad, J., "Supply-chain model giving way to real-time inventory management," CNNmoney, May 24, 2001, http://money.cnn.com/2001/05/24/redherring/herring_supply/index.htm
13. Panko, R. R., "What we know about spreadsheet errors," Journal of End User Computing, Vol. 10, No. 2, 1998, pp. 15-21.
14. Wilson, T., "Supply chain debacle," InternetWeek (CMP Media TechWeb), March 1, 2001, http://www.internetweek.com/story/INW20010301S0002
15. "About Microcap Fraud," Securities and Exchange Commission, August 3, 1999, http://www.sec.gov/divisions/enforce/microcap.htm



Bibliography

Dave Zubrow. (2004) Measuring Software Product Quality: The ISO 25000 Series and CMMI [Online]. Available from < http://www.sei.cmu.edu/sema/presentations/zubrow/esepg/esepg.pdf > [Accessed 25 September 2006].

J. C. Laprie. (1992) Dependability: Basic Concept & Terminology [Online]. Available from < http://srel.ee.duke.edu/sw_ft/node3.html > [Accessed 14 September 2006].

Mark Kasunic. (2006) 2006 State of Software Measurement Practice Survey [Online]. p.37. Available from < http://www.sei.cmu.edu/sema/presentations/stateof-survey.pdf > [Accessed 25 September 2006].

NASA. (n.d.) Software Independent Verification and Validation (IV&V)/Independent Assessment (IA) Criteria [Online]. Available from < http://ivvcriteria.ivv.nasa.gov/Documents/ivvcrit.pdf > [Accessed 3 October 2006].

Ops A La Carte. (2006) Root Cause Analysis [Online]. p.1. Available from < http://www.opsalacarte.com/pdfs/courses/OALC_Root_Cause_Analysis_Seminar.pdf > [Accessed 20 September 2006].

Robert N. Charette. (2005) Why Software Fails [Online]. Available from < http://www.spectrum.ieee.org/sep05/1685 > [Accessed 14 September 2006].

Salimol Thomas. (2003) Quantifying Economic Impact of Information Integrity Failures Executive Summary [Online]. Available from < http://www.informationintegrity.org/downloads/econImpact-executiveSummary.pdf > [Accessed 3 October 2006].

Software Engineering Institute. (2005) CMMI Performance Results [Online]. Available from < http://www.sei.cmu.edu/cmmi/results.html > [Accessed 15 September 2006].

The University of Bolton. (n.d.) Using Information [Online]. Available from < http://www.bolton.ac.uk/bissto/infoskills/useinfo/index.htm > [Accessed 20 October 2006].

VA Software. (2005) VA Software Uncovers Impact of "Development Disconnect" [Online]. Available from < http://www.vasoftware.com/news/press.php/2005/1481.html > [Accessed 4 October 2006].

Wikipedia. (n.d.) Capability Maturity Model [Online]. Available from < http://en.wikipedia.org/wiki/Capability_Maturity_Model > [Accessed 15 September 2006].

Wikipedia. (n.d.) Rational Unified Process [Online]. Available from < http://en.wikipedia.org/wiki/RUP > [Accessed 15 September 2006].

Wikipedia. (n.d.) Software Factory [Online]. Available from < http://en.wikipedia.org/wiki/Software_factory > [Accessed 15 September 2006].

Wikipedia. (n.d.) Unified Modeling Language [Online]. Available from < http://en.wikipedia.org/wiki/Unified_Modeling_Language > [Accessed 15 September 2006].



Reference List

J. C. Laprie. (1992) Dependability: Basic Concept & Terminology [Online]. Available from < http://srel.ee.duke.edu/sw_ft/node3.html > [Accessed 14 September 2006].

Ops A La Carte. (2006) Root Cause Analysis [Online]. p.1. Available from < http://www.opsalacarte.com/pdfs/courses/OALC_Root_Cause_Analysis_Seminar.pdf > [Accessed 20 September 2006].

Software Engineering Institute. (2005) CMMI Performance Results [Online]. Available from < http://www.sei.cmu.edu/cmmi/results.html > [Accessed 15 September 2006].

Dave Zubrow. (2004) Measuring Software Product Quality: The ISO 25000 Series and CMMI [Online]. Available from < http://www.sei.cmu.edu/sema/presentations/zubrow/esepg/esepg.pdf > [Accessed 25 September 2006].

Mark Kasunic. (2006) 2006 State of Software Measurement Practice Survey [Online]. p.37. Available from < http://www.sei.cmu.edu/sema/presentations/stateof-survey.pdf > [Accessed 25 September 2006].

NASA. (n.d.) Software Independent Verification and Validation (IV&V)/Independent Assessment (IA) Criteria [Online]. Available from < http://ivvcriteria.ivv.nasa.gov/Documents/ivvcrit.pdf > [Accessed 3 October 2006].

Salimol Thomas. (2003) Quantifying Economic Impact of Information Integrity Failures Executive Summary [Online]. Available from < http://www.informationintegrity.org/downloads/econImpact-executiveSummary.pdf > [Accessed 3 October 2006].
