Fault Tolerant Systems

Fault-tolerance, or graceful degradation, is the property that enables a system (often computer-based) to continue operating properly in the event of the failure of (or one or more faults within) some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, whereas in a naïvely designed system even a small failure can cause total breakdown. Fault-tolerance is particularly sought after in high-availability or life-critical systems.

Fault-tolerance is not just a property of individual machines; it may also characterise the rules by which they interact. For example, the Transmission Control Protocol (TCP) is designed to allow reliable two-way communication in a packet-switched network, even in the presence of communication links which are imperfect or overloaded. It does this by requiring the endpoints of the communication to expect packet loss, duplication, reordering and corruption, so that these conditions do not damage data integrity and only reduce throughput by a proportional amount.
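To make that idea concrete, here is a small, self-contained Python simulation (a toy, not real TCP, and not taken from the text above) of how sequence numbers, acknowledgements and retransmission let two endpoints tolerate a channel that drops and duplicates packets; the loss rate and message contents are arbitrary assumptions.

```python
import random

def lossy_channel(packet, loss_rate=0.3):
    """Simulate an unreliable link that may drop or duplicate a packet."""
    if random.random() < loss_rate:
        return []                 # packet lost in transit
    if random.random() < loss_rate:
        return [packet, packet]   # packet duplicated in transit
    return [packet]

def reliable_send(messages, loss_rate=0.3):
    """Stop-and-wait delivery: retransmit each message until it is acknowledged.
    Acknowledgements are assumed to arrive; duplicates are filtered by sequence number."""
    delivered = []
    expected_seq = 0              # receiver state: next sequence number it wants
    for seq, payload in enumerate(messages):
        acked = False
        while not acked:          # keep retransmitting until something gets through
            for rseq, rdata in lossy_channel((seq, payload), loss_rate):
                if rseq == expected_seq:
                    delivered.append(rdata)   # new data: deliver it exactly once
                    expected_seq += 1
                acked = True                  # receiver acknowledges (new or duplicate)
    return delivered

print(reliable_send(["a", "b", "c"]))  # always prints ['a', 'b', 'c'] despite losses
```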

An example of graceful degradation by design is an image with transparency. The top two images are each the result of viewing the composite image in a viewer that recognises transparency. The bottom two images are the result in a viewer with no support for transparency. Because the transparency mask (centre bottom) is discarded, only the overlay (centre top) remains; the image on the left has been designed to degrade gracefully, hence it is still meaningful without its transparency information.

Data formats may also be designed to degrade gracefully. HTML, for example, is designed to be forward compatible, allowing new HTML entities to be ignored by Web browsers which do not understand them without causing the document to be unusable.

Recovery from errors in fault-tolerant systems can be characterised as either roll-forward or roll-back. When the system detects that it has made an error, roll-forward recovery takes the system state at that time and corrects it, to be able to move forward. Roll-back recovery reverts the system state back to some earlier, correct version, for example using checkpointing, and moves forward from there. Roll-back recovery requires that the operations between the checkpoint and the detected erroneous state can be made idempotent. Some systems make use of both roll-forward and roll-back recovery for different errors or different parts of one error.

Within the scope of an individual system, fault-tolerance can be achieved by anticipating exceptional conditions and building the system to cope with them, and, in general, aiming for self-stabilization so that the system converges towards an error-free state. However, if the consequences of a system failure are catastrophic, or the cost of making it sufficiently reliable is very high, a better solution may be to use some form of duplication. In any case, if the consequence of a system failure is catastrophic, the system must be able to use reversion to fall back to a safe mode. This is similar to roll-back recovery but can be a human action if humans are present in the loop.
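As a minimal sketch of roll-back recovery (not from the text above), the following Python example checkpoints state to a file before applying a batch of idempotent updates and, if an error is detected, restores the checkpoint and retries; the file name, update rule and retry limit are arbitrary assumptions.

```python
import json

CHECKPOINT = "state.ckpt"   # assumed checkpoint location

def save_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def load_checkpoint():
    with open(CHECKPOINT) as f:
        return json.load(f)

def process(state, items, retries=2):
    """Apply idempotent updates; on error, roll back to the checkpoint and retry."""
    save_checkpoint(state)                     # roll-back point
    for attempt in range(retries + 1):
        try:
            for item in items:
                state[str(item)] = item * 2    # idempotent: replaying gives the same result
            return state
        except Exception:
            state = load_checkpoint()          # roll-back recovery: restore known-good state
    raise RuntimeError("could not recover within retry limit")

print(process({}, [1, 2, 3]))
```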

Structured Systems Analysis and Design Method

Structured Systems Analysis and Design Method (SSADM) is a systems approach to the analysis and design of information systems. SSADM was produced for the Central Computer and Telecommunications Agency (now Office of Government Commerce), a UK government office concerned with the use of technology in government, from 1980 onwards.

Overview
SSADM is a waterfall method by which an information system design can be arrived at. SSADM can be thought of as representing a pinnacle of the rigorous, document-led approach to system design, and contrasts with more contemporary Rapid Application Development methods such as DSDM. SSADM is one particular implementation and builds on the work of different schools of structured analysis and development methods, such as Peter Checkland's Soft Systems Methodology, Larry Constantine's Structured Design, Edward Yourdon's Yourdon Structured Method, Michael A. Jackson's Jackson Structured Programming, and Tom DeMarco's Structured Analysis. The names "Structured Systems Analysis and Design Method" and "SSADM" are now registered trade marks of the Office of Government Commerce (OGC), which is an office of the United Kingdom's Treasury.

History

• 1980: Central Computer and Telecommunications Agency (CCTA) evaluates analysis and design methods.
• 1981: Learmonth & Burchett Management Systems (LBMS) method chosen from a shortlist of five.
• 1983: SSADM made mandatory for all new information system developments.
• 1984: Version 2 of SSADM released.
• 1986: Version 3 of SSADM released, adopted by the NCC.
• 1988: SSADM Certificate of Proficiency launched; SSADM promoted as an 'open' standard.
• 1989: Moves towards Euromethod; launch of the CASE products certification scheme.
• 1990: Version 4 launched.
• 1993: SSADM V4 Standard and Tools Conformance Scheme launched.
• 1995: SSADM V4+ announced, V4.2 launched.

SSADM Techniques
The 3 most important techniques that are used in SSADM are:

Logical Data Modeling
This is the process of identifying, modeling and documenting the data requirements of the system being designed. The data are separated into entities (things about which a business needs to record information) and relationships (the associations between the entities).

Data Flow Modeling
This is the process of identifying, modeling and documenting how data moves around an information system. Data Flow Modeling examines processes (activities that transform data from one form to another), data stores (the holding areas for data), external entities (what sends data into a system or receives data from a system), and data flows (routes by which data can flow).

Entity Behavior Modeling
This is the process of identifying, modeling and documenting the events that affect each entity and the sequence in which these events occur.
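To illustrate the distinction between entities and relationships in a Logical Data Model, here is a small Python sketch; the Customer and Order entities and their one-to-many relationship are invented examples, not part of SSADM itself.

```python
from dataclasses import dataclass

# Entities: things about which the business needs to record information.
@dataclass
class Customer:
    customer_id: int
    name: str

@dataclass
class Order:
    order_id: int
    customer_id: int   # relationship: each Order is placed by exactly one Customer
    total: float

def orders_for(customer, orders):
    """Navigate the one-to-many Customer-to-Order relationship."""
    return [o for o in orders if o.customer_id == customer.customer_id]

alice = Customer(1, "Alice")
all_orders = [Order(10, 1, 25.0), Order(11, 2, 9.5), Order(12, 1, 40.0)]
print(orders_for(alice, all_orders))   # the two orders placed by customer 1
```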

Stages
The SSADM method involves the application of a sequence of analysis, documentation and design tasks concerned with the following.

Analysis of the current system
Also known as the feasibility stage. Analyze the current situation at a high level. A Data Flow Diagram (DFD) is used to describe how the current system works and to visualize known problems. The following steps are part of this stage:

• Develop a Business Activity Model. A model of the business activity is built. Business events and business rules would also be investigated as an input to the specification of the new automated system.
• Investigate and define requirements. The objective of this step is to identify the problems associated with the current environment that are to be resolved by the new system. It also aims to identify the additional services to be provided by the new system and the users of the new system.
• Investigate current processing. This step investigates the information flow associated with the services currently provided and describes it in the form of a Data Flow Model. At this point, the Data Flow Model represents the current services with all their deficiencies. No attempt is made to incorporate required improvements or new facilities.
• Investigate current data. This step identifies and describes the structure of the system data, independently of the way the data are currently held and organized. It produces a model of data that supports the current services.
• Derive logical view of current services. The objective of this step is to develop a logical view of the current system that can be used to understand problems with the current system.

Software engineering

The Airbus A380 uses a substantial amount of software to create a "paperless" cockpit. Software engineering successfully maps and plans the millions of lines of code constituting the plane's software

Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.[1] The term software engineering first appeared in the 1968 NATO Software Engineering Conference and was meant to provoke thought regarding the "software crisis" of the time.[2][3] Since then, it has continued as a profession and field of study dedicated to creating software that is of higher quality, more affordable, maintainable, and quicker to build.

Since the field is still relatively young compared to its sister fields of engineering, there is still much debate around what software engineering actually is, and whether it conforms to the classical definition of engineering. It has grown organically out of the limitations of viewing software as just programming. Software development, a much used and more generic term, does not necessarily subsume the engineering paradigm. Although it is questionable what impact it has had on actual software development over more than 40 years,[4][5] the field's future looks bright according to Money Magazine and Salary.com, who rated "software engineering" the best job in the United States in 2006.[6]


History
Main article: History of software engineering

When the first modern digital computers appeared in the early 1940s,[7] the instructions to make them operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture" or von Neumann architecture. Thus the first division between "hardware" and "software" began, with abstraction being used to deal with the complexity of computing.

Programming languages started to appear in the 1950s, another major step in abstraction. Major languages such as Fortran, ALGOL, and COBOL were released in the late 1950s to deal with scientific, algorithmic, and business problems respectively. E.W. Dijkstra wrote his seminal paper "Go To Statement Considered Harmful"[8] in 1968, and David Parnas introduced the key concepts of modularity and information hiding in 1972[9] to help programmers deal with the ever-increasing complexity of software systems. A software system for managing the hardware, called an operating system, was also introduced, most notably by Unix in 1969. In 1967, the Simula language introduced the object-oriented programming paradigm.

These advances in software were met with more advances in computer hardware. In the mid-1970s the microcomputer was introduced, making it economical for hobbyists to obtain a computer and write software for it. This in turn led to the now famous personal computer (PC) and Microsoft Windows. The Software Development Life Cycle (SDLC) was also starting to appear as a consensus for centralized construction of software in the mid-1980s. The late 1970s and early 1980s saw the introduction of several new Simula-inspired object-oriented programming languages, including C++, Smalltalk, and Objective-C.

Open-source software started to appear in the early 1990s in the form of Linux and other software, introducing the "bazaar" or decentralized style of constructing software.[10] Then the Internet and World Wide Web hit in the mid-1990s, changing the engineering of software once again. Distributed systems gained sway as a way to design systems, and the Java programming language was introduced as another step in abstraction, having its own virtual machine. Programmers collaborated and wrote the Agile Manifesto, which favored more lightweight processes to create cheaper and more timely software.

The current definition of software engineering is still being debated by practitioners today as they struggle to come up with ways to produce software that is "cheaper, bigger, quicker".

Profession

Main article: Software engineer

While some jurisdictions, such as Alberta, Ontario,[11] and Quebec in Canada, license software engineers, most places in the world have no laws regarding the profession of software engineers. Yet there are some guides from the IEEE Computer Society and the ACM, the two main professional organizations of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge (SWEBOK, 2004 version) defines the field and gives a coverage of the knowledge practicing software engineers should have. There is also an IEEE "Software Engineering Code of Ethics".[12] In addition, there is a Software and Systems Engineering Vocabulary (SEVOCAB),[13] published online by the IEEE Computer Society.

In the UK, the British Computer Society licenses software engineers, and members of the society can also become Chartered Engineers (CEng), while in Canada software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation; however, there is no legal requirement to have these qualifications.

Employment

In 2004, the U.S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; in the same period there were some 1.4 million practitioners employed in the U.S. in all other engineering disciplines combined.[14] Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and as a result most software engineers hold computer science degrees.[15]

Most software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process; other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role; in small projects, people may fill several or all roles at the same time. Specializations include: in industry (analysts, architects, developers, testers, technical support, managers) and in academia (educators, researchers).

There is considerable debate over the future employment prospects for software engineers and other IT professionals. For example, an online futures market called the "ITJOBS Future of IT Jobs in America"[16] attempts to answer whether there will be more IT jobs, including software engineers, in 2012 than there were in 2002.

Certification

Professional certification of software engineers is a contentious issue. Some see it as a tool to improve professional practice: "The only purpose of licensing software engineers is to protect the public".[17] The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. The ACM examined the possibility of professional certification of software engineers in the late 1990s, but eventually decided that such certification was inappropriate for the professional industrial practice of software engineering.[18] As of 2006, the IEEE had certified over 575 software professionals.[19] In the U.K. the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified Members (MBCS). In Canada the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP).[20] The Software Engineering Institute offers certification on specific topics such as security, process improvement and software architecture.[21]

Most certification programs in the IT industry are oriented toward specific technologies, and are managed by the vendors of these technologies.[22] These certification programs are tailored to the institutions that would employ people who use these technologies.

Impact of globalization

Many students in the developed world have avoided degrees related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers.[23] Although government statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected.[24][25] Often one is expected to start out as a computer programmer before being promoted to software engineer. Thus, the career path to software engineering may be rough, especially during recessions. Some career counselors suggest a student also focus on "people skills" and business skills rather than purely technical skills, because such "soft skills" are allegedly more difficult to offshore.[26] It is the quasi-management aspects of software engineering that appear to be what has kept it from being affected by globalization.[27]

Education
Knowledge of programming is the main prerequisite to becoming a software engineer, but it is not sufficient. Many software engineers have degrees in computer science due to the lack of software engineering programs in higher education. However, this has started to change with the introduction of new software engineering degrees, especially in postgraduate education. A standard international curriculum for undergraduate software engineering degrees was defined by the CCSE. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers.[28] In 2004 the IEEE Computer Society produced the SWEBOK, which has become an ISO standard describing the body of knowledge covered by a software engineer. The European Commission, within the Erasmus Mundus Programme, offers a European master's degree called European Master on Software Engineering for students from Europe and also outside Europe.[29] This is a joint program (double degree) involving four universities in Europe.

Requirements management

Requirements management is the process of identifying, eliciting, documenting, analyzing, tracing, prioritizing and agreeing on requirements and then controlling change and communicating to relevant stakeholders. It is a continuous process throughout a project. A requirement is a capability to which a project outcome (product or service) should conform.


Overview
The purpose of requirements management is to ensure that the organization documents, verifies and meets the needs and expectations of its customers and internal or external stakeholders.[1] Requirements management begins with the analysis and elicitation of the objectives and constraints of the organization. Requirements management further includes planning for requirements, defining attributes for requirements, relating requirements to other information delivered against them, and managing changes to all of these.

The traceability links thus established are used in managing requirements to report back fulfillment of company and stakeholder interests in terms of compliance, completeness, coverage and consistency. Traceability also supports change management as part of requirements management, in understanding the impacts of changes through requirements or other related elements (e.g., functional impacts through relations to functional architecture), and in facilitating the introduction of these changes.[2]

Requirements management involves communication between the project team members and stakeholders, and adjustment to requirements changes throughout the course of the project.[3] To prevent one class of requirements from overriding another, constant communication among members of the development team is critical. For example, in software development for internal applications, the business has such strong needs that it may ignore user requirements, or believe that in creating use cases the user requirements are being taken care of.

Traceability
Main article: Requirements Traceability

Requirements traceability is concerned with documenting the life of a requirement. It should be possible to trace back to the origin of each requirement and every change made to the requirement should therefore be documented in order to achieve traceability. Even the use of the requirement after the implemented features have been deployed and used should be traceable[4]. Requirements come from different sources, like the business person ordering the product, the marketing manager and the actual user. These people all have different requirements for the product. Using requirements traceability, an implemented feature can be traced back to the person or group that wanted it during the requirements elicitation. This can, for example, be used during the development process to prioritize the requirement, determining how valuable the requirement is to a specific user. It can also be used after the deployment when user studies show that a feature is not used, to see why it was required in the first place.
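The following Python sketch shows one minimal way such traceability could be recorded: each requirement keeps its source, its change history and the test cases that verify it. The identifiers (R-12, TC-104) and fields are hypothetical illustrations, not taken from any particular requirements tool.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    source: str                                  # who asked for it (business, marketing, user)
    history: list = field(default_factory=list)  # every change, for backward traceability
    tests: list = field(default_factory=list)    # test cases that verify it (forward traceability)

    def change(self, new_text, reason):
        """Record the old wording and the reason before applying the change."""
        self.history.append((self.text, reason))
        self.text = new_text

req = Requirement("R-12", "Export reports as PDF", source="marketing manager")
req.tests.append("TC-104")
req.change("Export reports as PDF and CSV", reason="user feedback, iteration 3")
print(req.source, req.tests, req.history)
```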

Requirements activities
At each stage in a development process, there are key requirements management activities and methods. To illustrate, consider a standard five-phase development process with Investigation, Feasibility, Design, Construction and Test, and Release stages.

Investigation

In Investigation, the first three classes of requirements are gathered from the users, from the business and from the development team. In each area, similar questions are asked: what are the goals, what are the constraints, what are the current tools or processes in place, and so on. Only when these requirements are well understood can functional requirements be developed.

A caveat is required here: no matter how hard a team tries, requirements cannot be fully defined at the beginning of the project. Some requirements will change, either because they simply weren't extracted, or because internal or external forces at work affect the project in mid-cycle. Thus, the team members must agree at the outset that a prime condition for success is flexibility in thinking and operation.

The deliverable from the Investigation stage is a requirements document that has been approved by all members of the team. Later, in the thick of development, this document will be critical in preventing scope creep or unnecessary changes. As the system develops, each new feature opens a world of new possibilities, so the requirements specification anchors the team to the original vision and permits a controlled discussion of scope change.

While many organizations still use only documents to manage requirements, others manage their requirements baselines using software tools. These tools allow requirements to be managed in a database, and usually have functions to automate traceability (e.g., by allowing electronic links to be created between parent and child requirements, or between test cases and requirements), electronic baseline creation, version control, and change management. Usually such tools contain an export function that allows a specification document to be created by exporting the requirements data into a standard document application.

Feasibility

In the Feasibility stage, costs of the requirements are determined. For user requirements, the current cost of work is compared to the future projected costs once the new system is in place. Questions such as these are asked: "What are data entry errors costing us now?" or "What is the cost of scrap due to operator error with the current interface?" In fact, the need for the new tool is often recognized as these questions come to the attention of financial people in the organization.

Business costs would include: "What department has the budget for this?" "What is the expected rate of return on the new product in the marketplace?" "What's the internal rate of return in reducing costs of training and support if we make a new, easier-to-use system?"

Technical costs are related to software development costs and hardware costs. "Do we have the right people to create the tool?" "Do we need new equipment to support expanded software roles?" This last question is an important one. The team must inquire into whether the newest automated tools will add sufficient processing power to shift some of the burden from the user to the system in order to save people time.

The question also points out a fundamental point about requirements management. A human and a tool form a system, and this realization is especially important if the tool is a computer or a new application on a computer. The human mind excels in parallel processing and interpretation of trends with insufficient data. The CPU excels in serial processing and accurate mathematical computation. The overarching goal of the requirements management effort for a software project would thus be to make sure the work being automated gets assigned to the proper processor. For instance: "Don't make the human remember where she is in the interface. Make the interface report the human's location in the system at all times." Or "Don't make the human enter the same data in two screens. Make the system store the data and fill in the second screen as needed."

The deliverable from the Feasibility stage is the budget and schedule for the project.

Design

Assuming that costs are accurately determined and the benefits to be gained are sufficiently large, the project can proceed to the Design stage. In Design, the main requirements management activity is comparing the results of the design against the requirements document to make sure that work is staying in scope. Again, flexibility is paramount to success.

Here is a classic story of scope change in midstream that actually worked well. Ford auto designers in the early 1980s were expecting gasoline prices to hit $3.18 per gallon by the end of the decade. Midway through the design of the Ford Taurus, prices had settled at around $1.50 a gallon. The design team decided they could build a larger, more comfortable, and more powerful car if gas prices stayed low, so they redesigned the car. The Taurus launch set nationwide sales records when the new car came out, primarily because it was so roomy and comfortable to drive.

In most cases, however, departing from the original requirements to that degree does not work. So the requirements document becomes a critical tool that helps the team make decisions about design changes.

Construction and test

In the construction and testing stage, the main activity of requirements management is to make sure that work and cost stay within schedule and budget, and that the emerging tool does in fact meet requirements. A main tool used in this stage is prototype construction and iterative testing. For a software application, the user interface can be created on paper and tested with potential users while the framework of the software is being built. Results of these tests are recorded in a user interface design guide and handed off to the design team when they are ready to develop the interface. This saves their time and makes their jobs much easier.

Release

You might think that requirements management ends on product release, but that’s not entirely accurate. From that point on, the data coming in about the application’s acceptability is gathered and fed into the Investigation phase of the next generation or release. Thus the process begins again.

Tools
There exist both desktop and Web-based tools for requirements management. A Web-based requirements tool can be installed at the customer's datacenter or can be offered as an on-demand requirements management platform, which in some cases is completely free.

Computer-based systems

Complex systems in which computers play a major role. While complex physical systems and sophisticated software systems can help people to lead healthier and more enjoyable lives, reliance on these systems can also result in loss of money, time, and life when these systems fail. Much of the complexity of these systems is due to the integration of information technology into physical and human activities. Such integration dramatically increases the interdependencies among components, people, and processes, and generates complex dynamics not taken into account in systems of previous generations. Engineers with a detailed understanding both of the application domain and of computer electronics, software, human factors, and communication are needed to provide a holistic approach to system development so that disasters do not occur.

Engineering activities
The computer-based systems engineer develops a system within a system; the properties of the former have pervasive effects throughout the larger system. The computer-based system consists of all components necessary to capture, process, transfer, store, display, and manage information. Components include software, processors, networks, buses, firmware, application-specific integrated circuits, storage devices, and humans (who also process information). Embedded computer-based systems interact with the physical environment through sensors and actuators, and also interact with external computer-based systems (see illustration). The computer-based systems engineer must have a thorough understanding of the system in which the computer-based system is embedded, for example an automobile, medical diagnostic system, or stock exchange.

Model-based development
Models are necessary in systems engineering as they support interdisciplinary communication, formalize system definition, improve analysis of trade-offs and decision making, and support optimization and integration. The use of models can reduce the number of errors in the design and thus the system, reduce engineering effort, and preserve knowledge for future efforts. Maintaining models with up-to-date knowledge is a major problem, as most systems are not generated from models, although this should be an industry goal. During the later stages of system development and testing, significant schedule pressure makes it difficult to keep the models and manually developed software consistent.

Common Object Request Broker Architecture

The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) that enables software components written in multiple computer languages and running on multiple computers to work together, i.e. it supports multiple platforms.


Overview
CORBA is a mechanism in software for normalizing the method-call semantics between application objects that reside either in the same address space (application) or in a remote address space (same host, or remote host on a network). Version 1.0 was released in October 1991.

CORBA uses an interface definition language (IDL) to specify the interfaces that objects will present to the outside world. CORBA then specifies a "mapping" from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, Lisp, Ruby, Smalltalk, Java, COBOL, PL/I and Python. There are also non-standard mappings for Perl, Visual Basic, Erlang, and Tcl implemented by object request brokers (ORBs) written for those languages.

The CORBA specification dictates that there shall be an ORB through which the application interacts with other objects. In practice, the application simply initializes the ORB and accesses an internal Object Adapter, which maintains such issues as reference counting, object (and reference) instantiation policies, object lifetime policies, etc. The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce the CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.

Some IDL language mappings are "more hostile" than others. For example, due to the very nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. The C++ mapping is not trivial but accounts for all the features of CORBA, e.g. exception handling. The C mapping is stranger still (since C is not an object-oriented language) but it does make sense and handles the RPC semantics just fine.

A language mapping requires the developer (the "user" in this case) to create some IDL code that represents the interfaces to their objects. Typically, a CORBA implementation comes with a tool called an IDL compiler which converts the user's IDL code into language-specific generated code. A traditional compiler then compiles the generated code to create the linkable object files for the application. This diagram illustrates how the generated code is used within the CORBA infrastructure:

Illustration of the autogeneration of the infrastructure code from an interface defined using the CORBA IDL

This figure illustrates the high-level paradigm for remote interprocess communication using CORBA. Issues not addressed here, but that are accounted for in the CORBA specification, include data typing, exceptions, network protocols, communication timeouts, etc. For example, the server side normally has the Portable Object Adapter (POA) that redirects calls either to the local servants or (to balance the load) to other servers. Also, both the server and client parts often have interceptors, which are described below. Issues CORBA (and thus this figure) does not address, but that all distributed systems must address, include object lifetimes, redundancy/fail-over, naming semantics (beyond a simple name), memory management, dynamic load balancing, and separation of the model between display/data/control semantics.

In addition to providing users with a language- and platform-neutral remote procedure call specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.

OMG trademarks

CORBA, IIOP and OMG are the registered marks of the Object Management Group and should be used with care. However, GIOP is not a registered OMG trademark. Hence in some cases it may be more appropriate just to say that the application uses or implements the GIOP-based architecture.

CORBA Topics

Objects By Reference

This reference is either acquired through a stringified URI, a NameService lookup (similar to DNS), or passed in as a method parameter during a call. Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
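As a rough sketch of what this looks like in code, the client below uses the OMG Python language mapping as implemented by omniORBpy; the `Example` module and its `Hello` interface are hypothetical stubs that would be produced by running the IDL compiler on user-written IDL, and the IOR string is assumed to have been obtained out of band (from a file, a name service, or the command line).

```python
# Hypothetical CORBA client sketch (omniORBpy-style); Example.Hello is an
# interface assumed to be defined in user IDL and compiled to Python stubs.
import sys
from omniORB import CORBA
import Example                                   # stubs generated from Example.idl (hypothetical)

orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)     # initialize the ORB
ior = open("hello.ior").read().strip()           # stringified object reference, obtained out of band
obj = orb.string_to_object(ior)                  # turn the string into an object reference
hello = obj._narrow(Example.Hello)               # narrow to the typed interface
if hello is not None:
    print(hello.greet())                         # blocking remote invocation through the ORB
orb.destroy()
```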

Data By Value
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of objects by reference and data by value provides the means to enforce strong data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem space.

Objects by Value (OBV)

Apart from remote objects, CORBA and RMI-IIOP define the concept of the OBV and valuetypes. The code inside the methods of valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be either known a priori by both sides or dynamically downloaded from the sender. To make this possible, the record defining an OBV contains the Code Base, a space-separated list of URLs from which this code should be downloaded. An OBV can also have remote methods. OBVs may have fields that are transferred when the OBV is transferred. These fields can be OBVs themselves, forming lists, trees or arbitrary graphs. OBVs have a class hierarchy, including multiple inheritance and abstract classes.

CORBA Component Model (CCM)

The CORBA Component Model (CCM) is an addition to the family of CORBA definitions. It was introduced with CORBA 3 and describes a standard application framework for CORBA components. Though not dependent on Enterprise JavaBeans (EJB), it is a more general, language-independent form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.

The CCM has a component container where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence and transaction management. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.

Portable interceptors

Portable interceptors are the "hooks" used by CORBA and RMI-IIOP to mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors:

1. IOR interceptors mediate the creation of new references to the remote objects presented by the current server.
2. Client interceptors usually mediate the remote method calls on the client (caller) side. If the object servant exists on the same server where the method is invoked, they also mediate the local calls.
3. Server interceptors mediate the handling of the remote method calls on the server (handler) side.

The interceptors can attach specific information to the messages being sent and the IORs being created. This information can later be read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting a request to another target.

General InterORB Protocol (GIOP)
Main article: General Inter-ORB Protocol

The GIOP is an abstract protocol by which object request brokers (ORBs) communicate. Standards associated with the protocol are maintained by the Object Management Group (OMG). The GIOP architecture provides several concrete protocols:

1. Internet InterORB Protocol (IIOP) — the Internet Inter-ORB Protocol is an implementation of the GIOP for use over an internet, and provides a mapping between GIOP messages and the TCP/IP layer.
2. SSL InterORB Protocol (SSLIOP) — SSLIOP is IIOP over SSL, providing encryption and authentication.
3. HyperText InterORB Protocol (HTIOP) — HTIOP is IIOP over HTTP, providing transparent proxy bypassing.
4. and many more.

Data Distribution Service (DDS)

The Object Management Group (OMG) has a related standard known as the Data Distribution Service (DDS) standard. DDS is a publish-subscribe data distribution model, in contrast to the CORBA remotely-invoked object model.

VMCID (Vendor Minor Codeset ID)

Each standard CORBA exception includes a minor code to designate the subcategory of the exception. Minor exception codes are of type unsigned long and consist of a 20-bit “Vendor Minor Codeset ID” (VMCID), which occupies the high order 20 bits, and the minor code which occupies the low order 12 bits. Minor codes for the standard exceptions are prefaced by the VMCID assigned to OMG, defined as the unsigned long constant CORBA::OMGVMCID, which has the VMCID allocated to OMG occupying the high order 20 bits. The minor exception codes associated with the standard exceptions that are found in Table 3-13 on page 3-58 are or-ed with OMGVMCID to get the minor code value that is returned in the ex_body structure (see Section 3.17.1, “Standard Exception Definitions,” on page 3-52 and Section 3.17.2, “Standard Minor Exception Codes,” on page 3-58).
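The 20-bit/12-bit split described above is just fixed-width bit packing; the following Python sketch shows how a minor code could be composed and decomposed (the OMGVMCID value quoted here is taken as an assumption from the CORBA specification).

```python
OMGVMCID = 0x4F4D0000                 # OMG's VMCID already shifted into the high-order 20 bits

def make_minor(vmcid_base, minor):
    """OR a 12-bit minor value into a vendor's (already shifted) VMCID base."""
    return vmcid_base | (minor & 0xFFF)

def split_minor(code):
    """Recover the VMCID (high-order 20 bits) and the minor value (low-order 12 bits)."""
    return code & 0xFFFFF000, code & 0x00000FFF

code = make_minor(OMGVMCID, 2)        # e.g. standard minor code 2 of some standard exception
print(hex(code), [hex(part) for part in split_minor(code)])
# 0x4f4d0002 ['0x4f4d0000', '0x2']
```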

Within a vendor-assigned space, the assignment of values to minor codes is left to the vendor. Vendors may request allocation of VMCIDs by sending email to [email protected]. The VMCIDs 0 and 0xfffff are reserved for experimental use. The VMCID OMGVMCID (Section 3.17.1, "Standard Exception Definitions," on page 3-52) and 1 through 0xf are reserved for OMG use. (The Common Object Request Broker: Architecture and Specification, CORBA 2.3.)

Corba Location (CorbaLoc)

Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL. All CORBA products must support two OMG-defined URLs: "corbaloc:" and "corbaname:". The purpose of these is to provide a human-readable and editable way to specify a location where an IOR can be obtained. An example of corbaloc is shown below:

corbaloc::160.45.110.41:38693/StandardNS/NameServer-POA/_root

A CORBA product may optionally support the "http:", "ftp:" and "file:" formats. The semantics of these are that they provide details of how to download a stringified IOR (or, recursively, download another URL that will eventually provide a stringified IOR). Some ORBs deliver additional formats which are proprietary to that ORB.

Benefits

CORBA aims to bring to the table many benefits that no other single technology brings in one package. These benefits include language- and OS-independence, freedom from technology-linked implementations, strong data typing, a high level of tunability, and freedom from the details of distributed data transfers.

Language Independence
CORBA was designed at the outset to free engineers from the limitations of coupling their designs to a particular software language. Currently there are many languages supported by various CORBA providers, the most popular being Java and C++. There are also C-only, Smalltalk, Perl, Ada, Ruby, and Python implementations, just to mention a few.

OS Independence
CORBA's design is meant to be OS-independent. CORBA is available in Java (OS-independent), as well as natively for Linux/Unix, Windows, Sun, Mac and others.

Freedom from Technologies

One of the main implicit benefits is that CORBA provides a neutral playing field for engineers to be able to normalize the interfaces between various new and legacy systems. When integrating C, C++, Object Pascal, Java, Fortran, Python, and any other language or OS into a single cohesive system design model, CORBA provides the means to level the field and allow disparate teams to develop systems and unit tests that can later be joined together into a whole system. This does not rule out the need for basic system engineering decisions, such as threading, timing, object lifetime, etc. These issues are part of any system regardless of technology. CORBA allows system elements to be normalized into a single cohesive system model. For example, the design of a multitier architecture is made simple using Java Servlets in the web server and various CORBA servers; at the same time, C++ legacy code can talk to C or Fortran legacy code and Java database code, and can provide data to a web interface.

Strong Data Typing
CORBA provides flexible data typing, for example an "ANY" datatype. CORBA also enforces tightly coupled data typing, reducing human errors. In a situation where Name-Value pairs are passed around, it is conceivable that a server provides a number where a string was expected. The CORBA Interface Definition Language provides the mechanism to ensure that user code conforms to method names, return types, parameter types, and exceptions.

High Tune-ability
There are many implementations available (e.g. OmniORB, an open-source C++ and Python implementation) that have many options for tuning the threading and connection management features. Not all implementations provide the same features; this is up to the implementor.

Freedom From Data Transfer Details
When handling low-level connections and threading, CORBA provides a high level of detail in error conditions. This is defined in the CORBA-defined standard exception set and the implementation-specific extended exception set. Through the exceptions, the application can determine if a call failed for reasons such as "Small problem, so try again", "The server is dead" or "The reference doesn't make sense." The general rule is: not receiving an exception means that the method call completed successfully. This is a very powerful design feature.

Compression
CORBA marshals its data in a binary form and supports compression. IONA, Remedy IT and Telefónica have worked on an extension to the CORBA standard that delivers compression. This extension is called ZIOP, and it is now recommended for adoption.

Cleanroom Software Engineering
For the meaning of Cleanroom engineering as a method to avoid copyright infringement, see Clean room design.


The Cleanroom Software Engineering process is a software development process intended to produce software with a certifiable level of reliability. The Cleanroom process was originally developed by Harlan Mills and several of his colleagues, including Alan Hevner, at IBM.[1] The focus of the Cleanroom process is on defect prevention rather than defect removal. The name "Cleanroom" was chosen to evoke the cleanrooms used in the electronics industry to prevent the introduction of defects during the fabrication of semiconductors. The Cleanroom process first saw use in the mid-to-late 1980s. Demonstration projects within the military began in the early 1990s.[2] Recent work on the Cleanroom process has examined fusing Cleanroom with the automated verification capabilities provided by specifications expressed in CSP.[3]


Central principles
The basic principles of the Cleanroom process are:

Software development based on formal methods
Cleanroom development makes use of the Box Structure Method to specify and design a software product. Verification that the design correctly implements the specification is performed through team review.

Incremental implementation under statistical quality control
Cleanroom development uses an iterative approach in which the product is developed in increments that gradually increase the implemented functionality. The quality of each increment is measured against pre-established standards to verify that the development process is proceeding acceptably. A failure to meet quality standards results in the cessation of testing for the current increment and a return to the design phase.

Statistically sound testing
Software testing in the Cleanroom process is carried out as a statistical experiment. Based on the formal specification, a representative subset of software input/output trajectories is selected and tested. This sample is then statistically analyzed to produce an estimate of the reliability of the software, and a level of confidence in that estimate.
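As a minimal sketch of the statistical-testing idea (not part of the Cleanroom literature itself), the Python fragment below runs a random sample of test cases drawn from an assumed usage profile and turns the outcomes into a reliability estimate, plus the standard zero-failure confidence statement; the system under test and the sample size are placeholders.

```python
import random

def estimate_reliability(test_cases, run_case):
    """Run sampled cases and estimate reliability as the observed success rate."""
    failures = sum(0 if run_case(case) else 1 for case in test_cases)
    return 1.0 - failures / len(test_cases), failures

def confidence_no_failures(n, target_reliability):
    """If all n sampled runs pass, the confidence that true reliability >= target
    is 1 - target**n (the standard zero-failure reliability demonstration)."""
    return 1.0 - target_reliability ** n

# Placeholder usage profile and system under test: 50 random inputs, a stub that always passes.
cases = [random.randint(0, 1000) for _ in range(50)]
reliability, failures = estimate_reliability(cases, run_case=lambda x: True)
print(reliability, failures, confidence_no_failures(len(cases), 0.95))
```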

Legacy system

A legacy system is an old method, technology, computer system, or application program that continues to be used, typically because it still functions for the users' needs, even though newer technology or more efficient methods of performing a task are now available. A legacy system may include procedures or terminology which are no longer relevant in the current context, and may hinder or confuse understanding of the methods or technologies used. The term "legacy" may have little to do with the size or age of the system — mainframes run 64-bit Linux and Java alongside 1960s vintage code.

Although the term is most commonly used to describe computers and software, it may also be used to describe human behaviors, methods, and tools. For example, timber framing using wattle and daub is a legacy building construction method.


Overview
Organizations can have compelling reasons for keeping a legacy system, such as:

• The system works satisfactorily, and the owner sees no reason for changing it.
• The costs of redesigning or replacing the system are prohibitive because it is large, monolithic, and/or complex.
• Retraining on a new system would be costly in lost time and money, compared to the anticipated appreciable benefits of replacing it (which may be zero).
• The system requires close to 100 percent availability, so it cannot be taken out of service, and the cost of designing a new system with a similar availability level is high. Examples include systems to handle customers' accounts in banks, computer reservation systems, air traffic control, energy distribution (power grids), nuclear power plants, military defense installations, and systems such as the TOPS database.
• The way that the system works is not well understood. Such a situation can occur when the designers of the system have left the organization, and the system has either not been fully documented or documentation has been lost.
• The user expects that the system can easily be replaced when this becomes necessary.

NASA example
NASA's Space Shuttle program still uses a large amount of 1970s-era technology. Replacement is cost-prohibitive because of the expensive requirement for flight certification; the legacy hardware currently being used has completed the expensive integration and certification requirement for flight, but any new equipment would have to go through that entire process – requiring extensive tests of the new components in their new configurations – before a single unit could be used in the Space Shuttle program. This would make any new system that started the certification process a de facto legacy system by the time of completion.

Additionally, because the entire Space Shuttle system, including ground and launch vehicle assets, was designed to work together as a closed system, and the specifications have not changed, all of the certified systems and components still serve well in the roles for which they were designed. It was advantageous for NASA – even before the Shuttle was scheduled to be retired in 2010 – to keep using many pieces of 1970s technology rather than to upgrade those systems.

Potential problems

Legacy systems are considered to be potentially problematic by many software engineers for several reasons (for example, see Bisbal et al., 1999).

• Legacy systems often run on obsolete (and usually slow) hardware, and spare parts for such computers may become increasingly difficult to obtain.
• If legacy software runs on only antiquated hardware, the cost of maintaining the system may eventually outweigh the cost of replacing both the software and hardware, unless some form of emulation or backward compatibility allows the software to run on new hardware.
• These systems can be hard to maintain, improve, and expand because there is a general lack of understanding of the system; the staff who were experts on it have retired or forgotten what they knew about it, and staff who entered the field after it became "legacy" never learned about it in the first place. This can be worsened by lack or loss of documentation.
• Integration with newer systems may also be difficult because new software may use completely different technologies. Bridge hardware and software tends to be developed only for technologies that are popular at the same time; for technologies from different eras there is too little demand and too small a market to justify it, though some of this "glue" does get developed by vendors and enthusiasts of particular legacy technologies (often called "retrocomputing" communities).

Improvements on legacy software systems
Where it is impossible to replace legacy systems through the practice of application retirement, it is still possible to enhance them. Most such development goes into adding new interfaces to a legacy system. The most prominent technique is to provide a Web-based interface to a terminal-based mainframe application. This may reduce staff productivity due to slower response times and slower mouse-based operator actions, yet it is often seen as an "upgrade", because the interface style is familiar to unskilled users and is easy for them to use. John McCormick discusses such strategies that involve middleware.[1]

Printing improvements are problematic because legacy software systems often add no formatting instructions, or they use protocols that are not usable in modern PC/Windows printers. A print server can be used to intercept the data and translate it to a more modern code. Rich Text Format (RTF) or PostScript documents may be created in the legacy application and then interpreted at a PC before being printed.

Biometric security measures are difficult to implement on legacy systems. A workable solution is to use a telnet or http proxy server to sit between users and the mainframe to implement secure access to the legacy application.

The change being undertaken in some organizations is to switch to Automated Business Process (ABP) software which generates complete systems. These systems can then interface to the organizations' legacy systems and use them as data repositories. This approach can provide a number of significant benefits: the users are insulated from the inefficiencies of their legacy systems, and the changes can be incorporated quickly and easily in the ABP software.

Legacy support
The term legacy support is often used with reference to obsolete or legacy computer hardware, whether peripherals or core components. Operating systems with "legacy support" can detect and use legacy hardware. It is also used as a verb for what vendors do for products in legacy mode – they "support", or provide software maintenance, for older products. A "legacy" product may have some advantage over a modern product, even if not one that causes a majority of the market to favor it over the newer offering. A product is only truly "obsolete" if it has an advantage to nobody – if no person making a rational decision would choose to acquire it new.

In some cases, "legacy mode" refers more specifically to backward compatibility. The computer mainframe era saw many applications running in legacy mode. In the modern business computing environment, n-tier or 3-tier architectures are more difficult to place into legacy mode as they include many components making up a single system. Government regulatory changes must also be considered in a system running in legacy mode. Virtualization technology allows for a resurgence of modern software applications entering legacy mode.

Brownfield architecture IT has borrowed the term brownfield from the building industry, where undeveloped (and especially unpolluted) land is described as greenfield and previously developed land, which is often polluted and abandoned, is described as brownfield.[2] •

A brownfield architecture is an IT network design that incorporates legacy systems.



A brownfield deployment is an upgrade or addition to an existing IT network and uses some legacy components.

Alternative view

There is an alternate point of view, growing since the "Dot Com" bubble burst in 1999, that legacy systems are simply computer systems that are both installed and working. In other words, the term is not pejorative but the opposite. Bjarne Stroustrup, creator of the C++ language, addressed this issue succinctly: "'Legacy code' often differs from its suggested alternative by actually working and scaling."

IT analysts estimate that the cost to replace business logic is about five times that of reuse,[citation needed] and that is before counting the risks involved in wholesale replacement. Ideally, businesses would never have to rewrite most core business logic; debits must equal credits, as they always have and always will. New software may increase the risk of system failures and security breaches; in 2004, a regional airline fired its CEO after the failure of an antiquated legacy crew scheduling system.[3]

The IT industry is responding to these concerns. "Legacy modernization" and "legacy transformation" refer to the act of reusing and refactoring existing core business logic by providing new user interfaces (typically Web interfaces), sometimes through techniques such as screen scraping and service-enabled access (for example, through Web services). These techniques allow organisations to understand their existing code assets (using discovery tools), provide new user and application interfaces to existing code, improve workflow, contain costs, minimize risk, and enjoy classic qualities of service (near 100% uptime, security, scalability, and so on).[citation needed]

The re-examination of attitudes toward legacy systems is also inviting more reflection on what makes legacy systems as durable as they are. Technologists are relearning the fact that sound architecture, practiced up front, helps businesses avoid costly and risky rewrites in the first place. The most common legacy systems tend to be those which embraced well-known IT architectural principles, with careful planning and strict methodology during implementation. Poorly designed systems often do not last, both because they wear out and because their reliability or usability is low enough that no one is inclined to extend their term of service when replacement is an option. Thus, many organizations are rediscovering the value both of their legacy systems themselves and of those systems' philosophical underpinnings.
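
As a hedged sketch of the service-enabled access just mentioned (the legacy command, URL path, and port are hypothetical), the example below captures the text output of a legacy report program and exposes it over HTTP using only the Python standard library. A fuller modernization effort would parse the scraped output into a structured Web service rather than returning raw text.

    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LEGACY_REPORT_CMD = ["legacy_report", "--daily"]  # hypothetical legacy command line

    class LegacyReportHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/daily-report":
                self.send_error(404)
                return
            # "Screen scrape" the legacy program's output instead of rewriting its logic.
            raw = subprocess.run(LEGACY_REPORT_CMD, capture_output=True, text=True).stdout
            body = raw.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), LegacyReportHandler).serve_forever()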



Software engineering management Module by: Hung Vo. Summary: In this module, we introduce the concepts and application of management activities, such as planning, coordinating, measuring, monitoring, controlling, and reporting, to ensure that the development and maintenance of software is systematic, disciplined, and quantified.

Introduction Software Engineering Management can be defined as the application of management activities (planning, coordinating, measuring, monitoring, controlling, and reporting) to ensure that the development and maintenance of software is systematic, disciplined, and quantified. Software engineering management therefore addresses both the management and the measurement of software engineering. While measurement is an important aspect of all software engineering topics, it is here that the topic of measurement programs is presented. Although in one sense it should be possible to manage software engineering in the same way as any other process, there are aspects specific to software products and the software life cycle processes which complicate effective management, just a few of which are as follows: •

The perception of clients is such that there is often a lack of appreciation for the complexity inherent in software engineering, particularly in relation to the impact of changing requirements.



It is almost inevitable that the software engineering processes themselves will generate the need for new or changed client requirements.



As a result, software is often built in an iterative process rather than a sequence of closed tasks.



Software engineering necessarily incorporates aspects of creativity and discipline—maintaining an appropriate balance between the two is often difficult.



The degree of novelty and complexity of software is often extremely high.



There is a rapid rate of change in the underlying technology.

With respect to software engineering, management activities occur at three levels: organizational and infrastructure management, project management, and measurement program planning and control.

Aspects of organizational management are important in terms of their impact on software engineering. Policy management is one example: organizational policies and standards provide the framework in which software engineering is undertaken. These policies may need to be influenced by the requirements of effective software development and maintenance, and a number of software engineering-specific policies may need to be established for effective management of software engineering at an organizational level. For example, policies are usually necessary to establish specific organization-wide processes or procedures for such software engineering tasks as designing, implementing, estimating, tracking, and reporting. Such policies are essential to effective long-term software engineering management, for example by establishing a consistent basis on which to analyze past performance and implement improvements.

Another important aspect of management is personnel management: policies and procedures for hiring, training, and motivating personnel, and mentoring for career development, are important not only at the project level but also to the longer-term success of an organization. Software engineering personnel may present unique training or personnel management challenges (for example, maintaining currency in a context where the underlying technology undergoes continuous and rapid change). Communication management is also often mentioned as an overlooked but major aspect of the performance of individuals in a field where precise understanding of user needs and of complex requirements and designs is necessary. Finally, portfolio management, which is the capacity to maintain an overall vision not only of the software under development but also of the software already in use in an organization, is necessary. Furthermore, software reuse is a key factor in maintaining and improving productivity and competitiveness; effective reuse requires a strategic vision that reflects the unique power and requirements of this technique. Organizational culture and behavior, and functional enterprise management in terms of procurement, supply chain management, marketing, sales, and distribution, all have an influence, albeit indirectly, on an organization's software engineering process.

Software Engineering Management consists of both the software project management process, in its first five subareas, and software engineering measurement, in the last subarea. While these two subjects are often regarded as being separate, and indeed they do possess many unique aspects, their close relationship has led to their combined treatment in software engineering. Unfortunately, a common perception of the software industry is that it delivers products late, over budget, and of poor quality and uncertain functionality. Measurement-informed management, an assumed principle of any true engineering discipline, can help to turn this perception around. In essence, management without measurement, qualitative and quantitative, suggests a lack of rigor, and measurement without management suggests a lack of purpose or context. In the same way, however, management and measurement without expert knowledge is equally ineffectual, so we must be careful to avoid overemphasizing the quantitative aspects of Software Engineering Management (SEM). Effective management requires a combination of both numbers and experience. The following working definitions are adopted here: •

Management process refers to the activities that are undertaken in order to ensure that the software engineering processes are performed in a manner consistent with the organization’s policies, goals, and standards.



Measurement refers to the assignment of values and labels to aspects of software engineering (products, processes, and resources) and the models that are derived from them, whether these models are developed using statistical, expert knowledge or other techniques.



The software engineering project management subareas make extensive use of the software engineering measurement subarea. Software engineering management also relates closely to the following areas:



Software Requirements, where some of the activities to be performed during the Initiation and Scope definition phase of the project are described



Software Configuration Management, as this deals with the identification, control, status accounting, and audit of the software configuration along with software release management and delivery



Software Engineering Process, because processes and projects are closely related.



Software Quality, as quality is constantly a goal of management and is an aim of many activities that must be managed.

Topics for software engineering management Because Software Engineering Management is viewed here as an organizational process which incorporates the notions of process and project management, we have created a breakdown that is both topic-based and life cycle-based. However, the primary basis for the top-level breakdown is the process of managing a software engineering project. There are six major subareas: •

Initiation and scope definition, which deals with the decision to initiate a software engineering project



Software project planning, which addresses the activities undertaken to prepare for successful software engineering from a management perspective



Software project enactment, which deals with generally accepted software engineering management activities that occur during software engineering



Review and evaluation, which deals with assurance that the software is satisfactory



Closure, which addresses the post-completion activities of a software engineering project



Software engineering measurement, which deals with the effective development and implementation of measurement programs in software engineering organizations (IEEE12207.0-96)

Initiation and Scope Definition The focus of this set of activities is on the effective determination of software requirements via various elicitation methods and the assessment of the project’s feasibility from a variety of standpoints. Once feasibility has been established, the remaining task within this process is the specification of requirements validation and change procedures.

Determination and Negotiation of Requirements Software requirement methods for requirements elicitation (for example, observation), analysis (for example, data modeling, use-case modeling), specification, and validation (for example, prototyping) must be selected and applied, taking into account the various stakeholder perspectives. This leads to the determination of project scope, objectives, and constraints. This is always an important activity, as it sets the visible boundaries for the set of tasks being undertaken, and is particularly so where the novelty of the undertaking is high.

Feasibility Analysis Software engineers must be assured that adequate capability and resources are available in the form of people, expertise, facilities, infrastructure, and support (either internally or externally) to ensure that the project can be successfully completed in a timely and cost-effective manner (using, for example, a requirement-capability matrix). This often requires some “ballpark” estimation of effort and cost based on appropriate methods (for example, expert-informed analogy techniques).

Process for the Review and Revision of Requirements Given the inevitability of change, it is vital that agreement among stakeholders is reached at this early point as to the means by which scope and requirements are to be reviewed and revised (for example, via agreed change management procedures). This clearly implies that scope and requirements will not be "set in stone" but can and should be revisited at predetermined points as the process unfolds (for example, at design reviews and management reviews). If changes are accepted, then some form of traceability analysis and risk analysis should be used to ascertain the impact of those changes. A managed-change approach should also be useful when it comes time to review the outcome of the project, as the scope and requirements should form the basis for the evaluation of success.

Software Project Planning The iterative planning process is informed by the scope and requirements and by the establishment of feasibility. At this point, software life cycle processes are evaluated and the most appropriate (given the nature of the project, its degree of novelty, its functional and technical complexity, its quality requirements, and so on) is selected. Where relevant, the project itself is then planned in the form of a hierarchical decomposition of tasks, the associated deliverables of each task are specified and characterized in terms of quality and other attributes in line with stated requirements, and detailed effort, schedule, and cost estimation is undertaken. Resources are then allocated to tasks so as to optimize personnel productivity (at individual, team, and organizational levels), equipment and materials utilization, and adherence to schedule. Detailed risk management is undertaken and the “risk profile” of the project is discussed among, and accepted by, all relevant stakeholders. Comprehensive software quality management processes are determined as part of the planning process in the form of procedures and responsibilities for software quality assurance, verification and validation, reviews, and audits. As an iterative process, it is vital that the processes and responsibilities for ongoing plan management, review, and revision are also clearly stated and agreed.

Process Planning Selection of the appropriate software life cycle model (for example, spiral, evolutionary prototyping) and the adaptation and deployment of appropriate software life cycle processes are undertaken in light of the particular scope and requirements of the project. Relevant methods and tools are also selected. At the project level, appropriate methods and tools are used to decompose the project into tasks, with associated inputs, outputs, and completion conditions (for example, work breakdown structure). This in turn influences decisions on the project’s high-level schedule and organization structure.

Determine Deliverables The product(s) of each task (for example, architectural design, inspection report) are specified and characterized. Opportunities to reuse software components from previous developments or to utilize off-the-shelf software products are evaluated. The use of third parties and procured software is planned, and suppliers are selected.

Effort, Schedule, and Cost Estimation Based on the breakdown of tasks, inputs, and outputs, the expected effort range required for each task is determined using a calibrated estimation model based on historical size-effort data where available and relevant, or other methods like expert judgment. Task dependencies are established and potential bottlenecks are identified using suitable methods (for example, critical path analysis).

Bottlenecks are resolved where possible, and the expected schedule of tasks with projected start times, durations, and end times is produced. Resource requirements (people, tools) are translated into cost estimates. This is a highly iterative activity which must be negotiated and revised until consensus is reached among affected stakeholders (primarily engineering and management).
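
To make the estimation and scheduling steps above concrete, the sketch below applies a power-law size-effort model and a simple forward pass over a task graph; the coefficients and task data are invented for illustration, not calibrated values.

    # Illustrative only: model coefficients and task data are hypothetical.

    def estimate_effort(kloc, a=3.0, b=1.12):
        """Power-law size-effort model, nominally calibrated from historical data."""
        return a * kloc ** b  # person-months

    # Task name -> (duration in weeks, prerequisite tasks); listed in topological order.
    tasks = {
        "requirements": (3, []),
        "design":       (4, ["requirements"]),
        "coding":       (6, ["design"]),
        "test_plan":    (2, ["requirements"]),
        "testing":      (3, ["coding", "test_plan"]),
    }

    # Forward pass: earliest finish of each task; the maximum is the critical-path length.
    earliest_finish = {}
    for name, (duration, preds) in tasks.items():
        start = max((earliest_finish[p] for p in preds), default=0)
        earliest_finish[name] = start + duration

    print("Estimated effort: %.1f person-months" % estimate_effort(25))
    print("Critical-path length: %d weeks" % max(earliest_finish.values()))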

Resource Allocation Equipment, facilities, and people are associated with the scheduled tasks, including the allocation of responsibilities for completion. This activity is informed and constrained by the availability of resources and their optimal use under these circumstances, as well as by issues relating to personnel (for example, productivity of individuals/teams, team dynamics, organizational and team structures).

Risk Management Risk identification and analysis (what can go wrong, how and why, and what are the likely consequences), critical risk assessment (which are the most significant risks in terms of exposure, which can we do something about in terms of leverage), risk mitigation and contingency planning (formulating a strategy to deal with risks and to manage the risk profile) are all undertaken. Risk assessment methods (for example, decision trees and process simulations) should be used in order to highlight and evaluate risks. Project abandonment policies should also be determined at this point in discussion with all other stakeholders. Software-unique aspects of risk, such as software engineers’ tendency to add unwanted features or the risks attendant in software’s intangible nature, must influence the project’s risk management.
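
A minimal worked example of the exposure and leverage calculations referred to above; the probabilities, losses, and mitigation costs are invented figures used only to show the arithmetic.

    # Hypothetical figures for illustration only.
    risks = [
        # (name, probability, loss if it occurs, mitigation cost, loss after mitigation)
        ("key developer leaves",    0.2, 60_000, 5_000, 20_000),
        ("requirements churn",      0.5, 40_000, 8_000, 15_000),
        ("third-party API delayed", 0.3, 30_000, 2_000, 10_000),
    ]

    for name, p, loss, mitigation_cost, residual_loss in risks:
        exposure = p * loss                                  # expected loss if nothing is done
        residual = p * residual_loss                         # expected loss after mitigation
        leverage = (exposure - residual) / mitigation_cost   # risk reduction per unit spent
        print("%-25s exposure=%8.0f leverage=%.1f" % (name, exposure, leverage))

Risks with the highest leverage are generally the ones most worth mitigating first.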

Quality Management Quality is defined in terms of pertinent attributes of the specific project and any associated product(s), perhaps in both quantitative and qualitative terms. These quality characteristics will have been determined in the specification of detailed software requirements. Thresholds for adherence to quality are set for each indicator as appropriate to stakeholder expectations for the software at hand. Procedures relating to ongoing SQA throughout the process and for product (deliverable) verification and validation are also specified at this stage (for example, technical reviews and inspections).

Plan Management How the project will be managed and how the plan will be managed must also be planned. Reporting, monitoring, and control of the project must fit the selected software engineering process and the realities of the project, and must be reflected in the various artifacts that will be used for managing it. But, in an environment where change is an expectation rather than a shock, it is vital that plans are themselves managed. This requires that adherence to plans be systematically directed, monitored, reviewed, reported, and, where appropriate, revised. Plans associated with other management-oriented support processes (for example, documentation, software configuration management, and problem resolution) also need to be managed in the same manner.

Software Project Enactment The plans are then implemented, and the processes embodied in the plans are enacted. Throughout, there is a focus on adherence to the plans, with an overriding expectation that such adherence will lead to the successful satisfaction of stakeholder requirements and achievement of the project objectives. Fundamental to enactment are the ongoing management activities of measuring, monitoring, controlling, and reporting.

Implementation of Plans The project is initiated and the project activities are undertaken according to the schedule. In the process, resources are utilized (for example, personnel effort, funding) and deliverables are produced (for example, architectural design documents, test cases).

Supplier Contract Management Prepare and execute agreements with suppliers, monitor supplier performance, and accept supplier products, incorporating them as appropriate.

Implementation of measurement process The measurement process is enacted alongside the software project, ensuring that relevant and useful data are collected.

Monitor Process Adherence to the various plans is assessed continually and at predetermined intervals. Outputs and completion conditions for each task are analyzed. Deliverables are evaluated in terms of their required characteristics (for example, via reviews and audits). Effort expenditure, schedule adherence, and costs to date are investigated, and resource usage is examined. The project risk profile is revisited, and adherence to quality requirements is evaluated. Measurement data are modeled and analyzed. Variance analysis based on the deviation of actual from expected outcomes and values is undertaken. This may be in the form of cost overruns, schedule slippage, and the like. Outlier identification and analysis of quality and other measurement data are performed (for example, defect density analysis). Risk exposure and leverage are recalculated, and decision trees, simulations, and so on are rerun in the light of new data. These activities enable problem detection and exception identification based on exceeded thresholds. Outcomes are reported as needed and certainly where acceptable thresholds are surpassed.
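
The variance and outlier checks described above can be sketched with a few lines of arithmetic; all figures and thresholds below are invented for illustration.

    # Hypothetical tracking data used only to illustrate variance and defect-density checks.
    planned_effort_pm, actual_effort_pm = 18.0, 21.5   # person-months to date
    planned_done, actual_done = 0.60, 0.50             # planned vs. actual fraction of scope

    effort_variance = (actual_effort_pm - planned_effort_pm) / planned_effort_pm
    schedule_variance = (actual_done - planned_done) / planned_done
    defect_density = 42 / 12.5                          # defects found per KLOC delivered so far

    THRESHOLDS = {"effort": 0.10, "schedule": -0.10, "defects_per_kloc": 5.0}

    if effort_variance > THRESHOLDS["effort"]:
        print("Effort overrun of %.0f%% exceeds threshold - report exception" % (100 * effort_variance))
    if schedule_variance < THRESHOLDS["schedule"]:
        print("Schedule slippage of %.0f%% exceeds threshold - report exception" % (100 * schedule_variance))
    if defect_density > THRESHOLDS["defects_per_kloc"]:
        print("Defect density of %.1f per KLOC exceeds threshold - report exception" % defect_density)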

Control Process The outcomes of the process monitoring activities provide the basis on which action decisions are taken. Where appropriate, and where the impact and associated risks are modeled and managed, changes can be made to the project. This may take the form of corrective action (for example, retesting certain components), it may involve the incorporation of contingencies so that similar occurrences are avoided (for example, the decision to use prototyping to assist in software requirements validation), and/or it may entail the revision of the various plans and other project documents (for example, requirements specification) to accommodate the unexpected outcomes and their implications.

Reporting At specified and agreed periods, adherence to the plans is reported, both within the organization (for example to the project portfolio steering committee) and to external stakeholders (for example, clients, users). Reports of this nature should focus on overall adherence as opposed to the detailed reporting required frequently within the project team.

Review and Evaluation At critical points in the project, overall progress towards achievement of the stated objectives and satisfaction of stakeholder requirements are evaluated. Similarly, assessments of the effectiveness of the overall process to date, the personnel involved, and the tools and methods employed are also undertaken at particular milestones.

Determining Satisfaction of Requirements Since attaining stakeholder (user and customer) satisfaction is one of our principal aims, it is important that progress towards this aim be formally and periodically assessed. This occurs on achievement of major project milestones (for example, confirmation of software design architecture, software integration technical review). Variances from expectations are identified and appropriate action is taken.

Reviewing and Evaluating Performance Periodic performance reviews for project personnel provide insights as to the likelihood of adherence to plans as well as possible areas of difficulty (for example, team member conflicts). The various methods, tools, and techniques employed are evaluated for their effectiveness and appropriateness, and the process itself is systematically and periodically assessed for its relevance, utility, and efficacy in the project context. Where appropriate, changes are made and managed.

Closure The project reaches closure when all the plans and embodied processes have been enacted and completed. At this stage, the criteria for project success are revisited. Once closure is established, archival, post mortem, and process improvement activities are performed.

Determining Closure The tasks as specified in the plans are complete, and satisfactory achievement of completion criteria is confirmed. All planned products have been delivered with acceptable characteristics. Requirements are checked off and confirmed as satisfied, and the objectives of the project have been achieved. These processes generally involve all stakeholders and result in the documentation of client acceptance and any remaining known problem reports.

Closure Activities After closure has been confirmed, archival of project materials takes place in line with stakeholder-agreed methods, location, and duration. The organization's measurement database is updated with final project data, and post-project analyses are undertaken. A project post mortem is conducted so that issues, problems, and opportunities encountered during the process are analyzed, and lessons are drawn from the process and fed into organizational learning and improvement.

Software Engineering Measurement The importance of measurement and its role in better management practices is widely acknowledged, and its significance can only increase in the coming years. Effective measurement has become one of the cornerstones of organizational maturity. Key terms on software measures and measurement methods have been defined in ISO15939 on the basis of the ISO international vocabulary of metrology ISO93. Nevertheless, readers will encounter terminology differences in the literature; for example, the term "metrics" is sometimes used in place of "measures." This topic follows the international standard ISO/IEC 15939, which defines the activities and tasks necessary to implement a software measurement process and also includes a measurement information model.

Establish and Sustain Measurement Commitment •

Accept requirements for measurement. Each measurement endeavor should be guided by organizational objectives and driven by a set of measurement requirements established by the organization and the project. For example, an organizational objective might be “first-to-market with new products”. This in turn might engender a requirement that factors contributing to this objective be measured so that projects might be managed to meet this objective.



Define scope of measurement. The organizational unit to which each measurement requirement is to be applied must be established. This may consist of a functional area, a single project, a single site, or even the whole enterprise. All subsequent measurement tasks related to this requirement should be within the defined scope. In addition, the stakeholders should be identified. •

Commitment of management and staff to measurement. The commitment must be formally established, communicated, and supported by resources (see next item).



Commit resources for measurement. The organization’s commitment to measurement is an essential factor for success, as evidenced by assignment of resources for implementing the measurement process. Assigning resources includes allocation of responsibility for the various tasks of the measurement process (such as user, analyst, and librarian) and providing adequate funding, training, tools, and support to conduct the process in an enduring fashion.

Plan the Measurement Process •

Characterize the organizational unit. The organizational unit provides the context for measurement, so it is important to make this context explicit and to articulate the assumptions that it embodies and the constraints that it imposes. Characterization can be in terms of organizational processes, application domains, technology, and organizational interfaces. An organizational process model is also typically an element of the organizational unit characterization.



Identify information needs. Information needs are based on the goals, constraints, risks, and problems of the organizational unit. They may be derived from business, organizational, regulatory, and/or product objectives. They must be identified and prioritized. Then, a subset to be addressed must be selected and the results documented, communicated, and reviewed by stakeholders.



Select measures. Candidate measures must be selected, with clear links to the information needs. Measures must then be selected based on the priorities of the information needs and other criteria such as cost of collection, degree of process disruption during collection, ease of analysis, ease of obtaining accurate, consistent data.



Define data collection, analysis, and reporting procedures. This encompasses collection procedures and schedules, storage, verification, analysis, reporting, and configuration management of data (a minimal sketch of such a collection procedure appears after this list).



Define criteria for evaluating the information products. Criteria for evaluation are influenced by the technical and business objectives of the organizational unit. Information products include those associated with the product being produced, as well as those associated with the processes being used to manage and measure the project.



Review, approve, and provide resources for measurement tasks.



The measurement plan must be reviewed and approved by the appropriate stakeholders. This includes all data collection procedures, storage, analysis, and reporting procedures; evaluation criteria; schedules; and responsibilities. Criteria for reviewing these artifacts should have been established at the organizational unit level or higher and should be used as the basis for these reviews. Such criteria should take into consideration previous experience, availability of resources, and potential disruptions to projects when changes from current practices are proposed.



Resources should be made available for implementing the planned and approved measurement tasks. Resource availability may be staged in cases where changes are to be piloted before widespread deployment. Consideration should be paid to the resources necessary for successful deployment of new procedures or measures.



Acquire and deploy supporting technologies. This includes evaluation of available supporting technologies, selection of the most appropriate technologies, acquisition of those technologies, and deployment of those technologies.
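
As a minimal sketch of the collection and verification procedures planned above (the field names, verification rule, and sample values are assumptions), the following shows one way a measurement record might be captured, verified on entry, and summarized into a simple indicator.

    from dataclasses import dataclass
    from datetime import date
    from statistics import mean

    @dataclass
    class Measurement:
        project: str
        measure: str          # e.g. "defects found" or "review effort (hours)"
        value: float
        collected_on: date
        verified: bool = False

    log = []

    def collect(record):
        """Verify on entry, then store; a real procedure would also version and archive the data."""
        if record.value < 0:
            raise ValueError("measurement values must be non-negative")
        record.verified = True
        log.append(record)

    collect(Measurement("billing system", "defects found", 7, date(2008, 4, 1)))
    collect(Measurement("billing system", "defects found", 3, date(2008, 4, 8)))

    weekly_defects = [m.value for m in log if m.measure == "defects found"]
    print("Average defects per collection period: %.1f" % mean(weekly_defects))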

Perform the Measurement Process •

Integrate measurement procedures with relevant processes. The measurement procedures, such as data collection, must be integrated into the processes they are measuring. This may involve changing current processes to accommodate data collection or generation activities. It may also involve analysis of current processes to minimize additional effort and evaluation of the effect on employees to ensure that the measurement procedures will be accepted. Morale issues and other human factors need to be considered. In addition, the measurement procedures must be communicated to those providing the data, training may need to be provided, and support must typically be provided. Data analysis and reporting procedures must typically be integrated into organizational and/or project processes in a similar manner.



Collect data. The data must be collected, verified, and stored.



Analyze data and develop information products. Data may be aggregated, transformed, or recoded as part of the analysis process, using a degree of rigor appropriate to the nature of the data and the information needs. The results of this analysis are typically indicators such as graphs, numbers, or other indications that must be interpreted, resulting in initial conclusions to be presented to stakeholders. The results and conclusions must be reviewed, using a process defined by the organization (which may be formal or informal). Data providers and measurement users should participate in reviewing the data to ensure that they are meaningful and accurate, and that they can result in reasonable actions.



Communicate results. Information products must be documented and communicated to users and stakeholders.

Evaluate Measurement •

Evaluate information products. Evaluate information products against specified evaluation criteria and determine strengths and weaknesses of the information products. This may be performed by an internal process or an external audit and should include feedback from measurement users. Record lessons learned in an appropriate database.



Evaluate the measurement process. Evaluate the measurement process against specified evaluation criteria and determine the strengths and weaknesses of the process. This may be performed by an internal process or an external audit and should include feedback from measurement users.



Identify potential improvements. Such improvements may be changes in the format of indicators, changes in units measured, or reclassification of categories. Determine the costs and benefits of potential improvements and select appropriate improvement actions. Communicate proposed improvements to the measurement process owner and stakeholders for review and approval. Also communicate the lack of potential improvements if the analysis fails to identify any.


Software quality control

Software Quality Control is the set of procedures used by organizations (1) to ensure that a software product will meet its quality goals at the best value to the customer, and (2) to continually improve the organization's ability to produce software products in the future.[1] Software quality control refers to specified functional requirements as well as non-functional requirements such as supportability, performance, and usability.[2] It also refers to the ability of software to perform well in unforeseeable scenarios and to keep a relatively low defect rate. These specified procedures and outlined requirements lead to the ideas of verification and validation and of software testing. Software quality control is distinct from software quality assurance, which includes audits of the quality management system against a standard: whereas software quality control is a control of products, software quality assurance is a control of processes.

COCOMO

The COnstructive COst MOdel (COCOMO) is an algorithmic software cost estimation model developed by Barry Boehm. The model uses a basic regression formula, with parameters derived from historical project data and current project characteristics. COCOMO was first published in Barry W. Boehm's 1981 book Software Engineering Economics[1] as a model for estimating effort, cost, and schedule for software projects. It drew on a study of 63 projects at TRW Aerospace, where Boehm was Director of Software Research and Technology. The study examined projects ranging in size from 2,000 to 100,000 lines of code and programming languages ranging from assembly to PL/I. These projects were based on the waterfall model of software development, which was the prevalent software development process in 1981. References to this model typically call it COCOMO 81. In 1997 COCOMO II was developed, and it was finally published in 2001 in the book Software Cost Estimation with COCOMO II.[2] COCOMO II is the successor of COCOMO 81 and is better suited to estimating modern software development projects. It provides more support for modern software development processes and an updated project database. The need for the new model arose as software development technology moved from mainframe and overnight batch processing to desktop development, code reusability, and the use of off-the-shelf software components. This article refers to COCOMO 81. COCOMO consists of a hierarchy of three increasingly detailed and accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of software costs, but its accuracy is limited by its lack of factors to account for differences in project attributes (cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally accounts for the influence of individual project phases.


Basic COCOMO Basic COCOMO computes software development effort (and cost) as a function of program size. Program size is expressed in estimated thousands of lines of code (KLOC). COCOMO applies to three classes of software projects, listed below; a short sketch of the effort and schedule equations follows the list: •

Organic projects - "small" teams with "good" experience working with "less than rigid" requirements



Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less than rigid requirements



Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, ...)
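
The Basic COCOMO effort and schedule equations can be sketched as follows. The coefficient table reflects the commonly published Basic COCOMO 81 values (effort = a * KLOC^b person-months, development time = c * effort^d months); treat the code as an illustrative sketch rather than a reference implementation, and the 32 KLOC input as an arbitrary example.

    # Commonly published Basic COCOMO 81 coefficients (a, b, c, d) per project class.
    COEFFICIENTS = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, project_class):
        a, b, c, d = COEFFICIENTS[project_class]
        effort = a * kloc ** b        # person-months
        duration = c * effort ** d    # months of development time
        staffing = effort / duration  # average number of people
        return effort, duration, staffing

    effort, duration, staffing = basic_cocomo(32, "organic")
    print("%.1f person-months over %.1f months (about %.1f people)" % (effort, duration, staffing))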

form-based interface

A type of user interface used, for example, on the World Wide Web, to organize questions or options for the user so that they resemble a traditional paper form to be filled out by pointing to the fields and typing text, or by choosing from a list.
