Software Development Process 2

Software Development Paradigms

A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.

"A paradigm is a set of rules and regulations (written or unwritten) that does two things: 1) it establishes or defines boundaries; and 2) it tells you how to behave inside the boundaries in order to be successful." (Joel Arthur Barker)

"A shared set of assumptions. The paradigm is the way we perceive the world; water to the fish. The paradigm explains the world to us and helps us to predict its behavior..." It is a conceptual model that is used to communicate descriptions of the component parts of a theory, a policy, a belief system or a worldview, and how they interact and are interrelated.

In software engineering and project management, a paradigm is used synonymously with methodology to refer to a codified set of practices (sometimes accompanied by training materials, formal educational programs, worksheets, and diagramming tools) that may be repeatably carried out to produce software. Software engineering methodologies span many disciplines, including project management, analysis, specification, design, coding, testing, and quality assurance. All of the methodologies guiding this field are collations of these disciplines.

Processes and meta-processes

A growing body of software development organizations implements process methodologies or paradigms. Many of them are in the defense industry, which in the U.S. requires a rating based on process models to obtain contracts. ISO 12207 is a standard for describing the method of selecting, implementing and monitoring a life cycle for a project. The Capability Maturity Model (CMM) is one of the leading models; independent assessments can be used to grade organizations on how well they create software according to how they define and execute their processes. ISO 9000 describes standards for formally organizing processes with documentation. ISO 15504, also known as Software Process Improvement Capability Determination (SPICE), is a "framework for the assessment of software processes" and is gaining wide usage. The standard is aimed at setting out a clear model for process comparison. SPICE is used much like CMM and CMMI: it models processes to manage, control, guide and monitor software development. This model is then used to measure what a development organization or project team actually does during software development. The information is analyzed to identify weaknesses and drive improvement, and to identify strengths that can be continued or integrated into common practice for that organization or team.

Six Sigma is a project management methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating "defects" in manufacturing and service-related processes. The maximum permissible defect rate is 3.4 per million opportunities. However, Six Sigma is manufacturing-oriented rather than software-oriented, and further research is needed to adapt it to software development.
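As an aside, the headline figure of 3.4 defects per million opportunities follows from the usual Six Sigma convention that a process mean drifts by 1.5 sigma over the long term, leaving an effective margin of 4.5 sigma. A short, purely illustrative Python calculation reproduces it:

    import math

    def upper_tail(z):
        # One-sided upper-tail probability of the standard normal distribution.
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    # Six Sigma convention: a mean sitting 6 sigma inside the specification
    # limit is assumed to drift by 1.5 sigma long term, so the effective
    # margin is 4.5 sigma.
    dpmo = upper_tail(6.0 - 1.5) * 1_000_000
    print(f"Defects per million opportunities: {dpmo:.1f}")  # -> 3.4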


Methodology versus Method

There is a discussion in science in general about these two words: method and methodology. They are widely used as synonyms, although many authors believe it is important to draw a distinction between the two. In software engineering in particular, the discussion continues. One could argue that a (software engineering) method is a recipe, a series of steps, to build software, whereas a methodology is a codified set of recommended practices, sometimes accompanied by training materials, formal educational programs, worksheets, and diagramming tools. In this way, a (software engineering) method could be part of a methodology. Also, some authors believe that a methodology carries an overall philosophical approach to the problem. Using these definitions, software engineering is rich in methods but has fewer methodologies. Broadly, there are two mainstream types of methodologies: structured methodologies (Information Engineering, SSADM and others), which encompass many methods and software processes, and object-oriented methodologies (OOA/OOD and others).

Major Software Engineering Models

A decades-long goal has been to find repeatable, predictable software engineering processes or methodologies that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management is proving difficult.

1) Iterative and Incremental Development

Iterative and incremental development is a software development process developed in response to the weaknesses of the more traditional waterfall model. The two best-known iterative development frameworks are the Rational Unified Process and the Dynamic Systems Development Method. Iterative and incremental development is also an essential part of Extreme Programming and all other agile software development frameworks.

Life-Cycle

The basic idea behind iterative enhancement is to develop a software system incrementally, allowing the developer to take advantage of what is learned during the development of earlier, incremental, deliverable versions of the system. Learning comes from both the development and the use of the system, where possible. The key steps in the process are to start with a simple implementation of a subset of the software requirements and to iteratively enhance the evolving sequence of versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added.

The procedure itself consists of the initialization step, the iteration step, and the project control list. The initialization step creates a base version of the system. The goal of this initial implementation is to create a product to which the user can react. It should offer a sampling of the key aspects of the problem and provide a solution that is simple enough to understand and implement easily. To guide the iteration process, a project control list is created that contains a record of all tasks that need to be performed. It includes such items as new features to be implemented and areas of redesign of the existing solution. The control list is constantly revised as a result of the analysis phase.

The iteration step involves the redesign and implementation of a task from the project control list, and the analysis of the current version of the system. The goal for the design and implementation of any iteration is to be simple, straightforward, and modular, supporting redesign at that stage or as a task added to the project control list. The iterations can, in some cases, represent the major source of documentation of the system. The analysis of an iteration is based upon user feedback and the program analysis facilities available. It involves analysis of the structure, modularity, usability, reliability, efficiency, and achievement of goals. The project control list is modified in light of the analysis results. Guidelines that drive the implementation and analysis include:

• Any difficulty in designing, coding or testing a modification should signal the need for redesign or recoding.
• Modifications should fit easily into isolated and easy-to-find modules. If they do not, some redesign is needed.
• Modifications to tables should be especially easy to make. If any table modification is not quickly and easily done, redesign is indicated.
• Modifications should become easier to make as the iterations progress. If they are not, there is a basic problem such as a design flaw or a proliferation of patches.
• Patches should normally be allowed to exist for only one or two iterations. Patches may be necessary to avoid redesigning during an implementation phase.
• The existing implementation should be analyzed frequently to determine how well it measures up to project goals.
• Program analysis facilities should be used whenever available to aid in the analysis of partial implementations.
• User reaction should be solicited and analyzed for indications of deficiencies in the current implementation.
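As a rough reading of the procedure described above (not a prescribed implementation; the helper functions are hypothetical stand-ins supplied by the caller), the control-list-driven loop can be sketched as follows:

    # A minimal sketch of iterative enhancement driven by a project control
    # list. build_base_version, implement and analyze are illustrative
    # placeholders, not a defined API.
    def iterative_enhancement(initial_tasks, build_base_version, implement, analyze):
        system = build_base_version()          # initialization step
        control_list = list(initial_tasks)     # project control list
        while control_list:
            task = control_list.pop(0)         # next task to redesign or implement
            system = implement(system, task)   # iteration step
            new_tasks = analyze(system)        # user feedback, measures, redesign areas
            control_list.extend(new_tasks)     # revise the control list
        return system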

Characteristics

Using analysis and measurement as drivers of the enhancement process is one major difference between iterative enhancement and current agile software development. It provides support for determining the effectiveness of the processes and the quality of the product. It allows one to study, and therefore improve and tailor, the processes for the particular environment. This measurement and analysis activity can be added to existing agile development methods. In fact, the context of multiple iterations provides advantages in the use of measurement. Measures are sometimes difficult to understand in the absolute, but the relative changes in measures over the evolution of the system can be very informative because they provide a basis for comparison. For example, a vector of measures m1, m2, ..., mn can be defined to characterize various aspects of the product at some point in time, e.g., effort to date, changes, defects, logical, physical, and dynamic attributes, and environmental considerations. Thus an observer can tell how product characteristics like size, complexity, coupling, and cohesion are increasing or decreasing over time. One can monitor the relative change of the various aspects of the product, or can provide bounds for the measures to signal potential problems and anomalies.
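As a toy illustration of monitoring relative change (the measures and values below are invented, not drawn from any real project):

    # Hypothetical per-iteration measures of the evolving product.
    iterations = [
        {"size_kloc": 4.0, "defects": 12, "coupling": 0.30},
        {"size_kloc": 5.5, "defects": 9,  "coupling": 0.28},
        {"size_kloc": 6.1, "defects": 15, "coupling": 0.41},
    ]

    # Report the relative change of each measure between iterations; a jump
    # such as the rise in coupling here could be flagged as an anomaly.
    for prev, curr in zip(iterations, iterations[1:]):
        for name in curr:
            change = (curr[name] - prev[name]) / prev[name]
            print(f"{name}: {change:+.0%}")
        print("---")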

Several utility programs have been developed using this model, where the requirement is basically to provide the customer with a working model at an early stage of the development cycle. As new features are added, a new release is launched that has fewer bugs and more features than the previous release. Some of the typical examples of this kind of model are:

Criticism

Critics of iterative development approaches point out that these processes place what may be an unreasonable expectation upon the recipient of the software: that they must possess the skills and experience of a seasoned software developer. The approach can also be very expensive, akin to... "If you don't know what kind of house you want, let me build you one and see if you like it. If you don't, we'll tear it all down and start over." A large pile of building materials, which are now scrap, can be the final result of such a lack of up-front discipline. The problem with this criticism is that the whole point of iterative programming is that you don't have to build the whole house before you get feedback from the recipient. Indeed, in a sense conventional programming places more of this burden on the recipient, as the requirements and planning phases take place entirely before development begins, and testing only occurs after development is officially over.

2) Extreme Programming (XP)

XP is a software engineering methodology, the most prominent of several agile software development methodologies. Like other agile methodologies, Extreme Programming differs from traditional methodologies primarily in placing a higher value on adaptability than on predictability. Proponents of XP view requirements change as a constant in software development projects, and believe that being able to adapt to changing requirements at any point during the project life is a more realistic and better approach than attempting to define all requirements at the beginning of a project and then expending effort to control changes to them. XP prescribes a set of day-to-day practices for managers and developers; the practices are meant to embody and encourage particular values. Proponents believe that the exercise of these practices, which are traditional software engineering best practices taken to so-called "extreme" levels, leads to a development process that is more responsive to customer needs ("agile") than traditional methods, while creating software of similar or better quality.

History

Extreme Programming was created by Kent Beck, Ward Cunningham, and Ron Jeffries during their work on the Chrysler Comprehensive Compensation System (C3) payroll project. Kent Beck became the C3 project leader in March 1996 and began to refine the development methodology used on the project. Beck wrote a book on the methodology, Extreme Programming Explained, published in October 1999. Chrysler cancelled the C3 project in February 2000, but the methodology had caught on in the software engineering field. As of 2006, a number of software development projects continue to use Extreme Programming as their methodology.
XP created quite a buzz in the late 1990s and early 2000s, seeing adoption in a number of environments radically different from its origins. XP is still evolving, assimilating more lessons from experiences in the field. In the second edition of Extreme Programming Explained, Beck added more values and practices and differentiated between primary and corollary practices.

Goal of XP

Extreme Programming Explained describes Extreme Programming as being:
• An attempt to reconcile humanity and productivity
• A mechanism for social change
• A path to improvement
• A style of development
• A software development discipline
The main aim of XP is to lower the cost of change. In traditional system development methods (like SSADM) the requirements for the system are determined at the beginning of the development project and are often fixed from that point on. This means that the cost of changing the requirements at a later stage (a common feature of software engineering projects) will be high. XP sets out to lower the cost of change by introducing basic values, principles and practices. By applying XP, a system development project should be more flexible with respect to changes.

XP values

Extreme Programming initially recognized four values. A new value was added in the second edition of Extreme Programming Explained. The five values are:
• Communication
• Simplicity
• Feedback
• Courage
• Respect
Building software systems requires communicating system requirements to the developers of the system. In formal software development methodologies, this task is accomplished through documentation. Extreme Programming techniques can be viewed as methods for rapidly building and disseminating institutional knowledge among members of a development team. The goal is to give all developers a shared view of the system which matches the view held by the users of the system. To this end, Extreme Programming favors simple designs, common metaphors, collaboration of users and programmers, frequent verbal communication, and feedback.

Extreme Programming encourages starting with the simplest solution and refactoring to better ones. The difference between this approach and more conventional system development methods is the focus on designing and coding for the needs of today instead of those of tomorrow, next week, or next month. Proponents of XP acknowledge the disadvantage that this can sometimes entail more effort tomorrow to change the system; their claim is that this is more than compensated for by the advantage of not investing in possible future requirements that might change before they become relevant. Coding and designing for uncertain future requirements implies the risk of spending resources on something that might not be needed. Related to the "communication" value, simplicity in design and coding should improve the quality of communication. A simple design with very simple code can be easily understood by most programmers in the team.

Within Extreme Programming, feedback relates to different dimensions of the system development:

• Feedback from the system: by writing unit tests, or running periodic integration tests, the programmers have direct feedback on the state of the system after implementing changes.
• Feedback from the customer: the functional tests (aka acceptance tests) are written by the customer and the testers, who get concrete feedback about the current state of the system. This review is planned once every two or three weeks so the customer can easily steer the development.
• Feedback from the team: when customers come up with new requirements in the planning game, the team directly gives an estimation of the time it will take to implement them.
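As a small illustration of feedback from the system (the discount function and its tests are invented for this sketch), a unit test suite gives the programmer an immediate pass/fail signal after every change:

    import unittest

    def apply_discount(price, percent):
        # Hypothetical production code under test.
        return price * (1 - percent / 100)

    class DiscountTest(unittest.TestCase):
        def test_full_discount_is_free(self):
            self.assertEqual(apply_discount(80.0, 100), 0.0)

        def test_no_discount_keeps_price(self):
            self.assertEqual(apply_discount(80.0, 0), 80.0)

    if __name__ == "__main__":
        unittest.main()  # a failing test here is the system telling us to recode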

Feedback is closely related to communication and simplicity. Flaws in the system are easily communicated by writing a unit test that proves a certain piece of code will break; the direct feedback from the system tells programmers to recode that part. A customer is able to test the system periodically according to the functional requirements (aka user stories). To quote Kent Beck, "Optimism is an occupational hazard of programming, feedback is the treatment."

Several practices embody courage. One is the commandment to always design and code for today and not for tomorrow. This is an effort to avoid getting bogged down in design and requiring a lot of effort to implement anything else. Courage enables developers to feel comfortable refactoring their code when necessary. This means reviewing the existing system and modifying it so that future changes can be implemented more easily. Another example of courage is knowing when to throw code away: the courage to remove source code that is obsolete, no matter how much effort was used to create it. Also, courage means persistence: a programmer might be stuck on a complex problem for an entire day, then solve the problem quickly the next day, if only he or she is persistent.

The respect value manifests in several ways. In Extreme Programming, team members respect each other because programmers should never commit changes that break compilation, that make existing unit tests fail, or that otherwise delay the work of their peers. Members respect their work by always striving for high quality and seeking the best design for the solution at hand through refactoring.

Principles

The principles that form the basis of XP are based on the values just described and are intended to foster decisions in a system development project. The principles are intended to be more concrete than the values and more easily translated into guidance in a practical situation.

Feedback is most useful if it is given rapidly. The time between an action and its feedback is critical to learning and making changes. In Extreme Programming, unlike traditional system development methods, contact with the customer occurs in small iterations. The customer has clear insight into the system that is being developed, and can give feedback and steer the development as needed. Unit tests also contribute to the rapid feedback principle. When writing code, the unit test provides direct feedback on how the system reacts to the changes one has made. If, for instance, the changes affect a part of the system that is not in the scope of the programmer who made them, that programmer will not notice the flaw without such tests, and there is a large chance that the bug will appear when the system is in production.

Assuming simplicity is about treating every problem as if its solution were "extremely simple". Traditional system development methods say to plan for the future and to code for reusability; Extreme Programming rejects these ideas.

The advocates of Extreme Programming say that making big changes all at once does not work. Extreme Programming applies incremental changes: for example, a system might have small releases every three weeks. By making many little steps the customer has more control over the development process and the system that is being developed.

The principle of embracing change is about not working against changes but embracing them. For instance, if at one of the iterative meetings it appears that the customer's requirements have changed dramatically, programmers are to embrace this and plan the new requirements for the next iteration.

Activities

XP describes four basic activities that are performed within the software development process.

I. Coding

The advocates of XP argue that the only truly important product of the system development process is code (a concept to which they give a somewhat broader definition than might be given by others). Without code you have nothing. Coding can be drawing diagrams that will generate code, scripting a web-based system or coding a program that needs to be compiled. Coding can also be used to figure out the most suitable solution. For instance, XP would advocate that, faced with several alternatives for a programming problem, one should simply code all solutions and determine with automated tests (discussed in the next section) which solution is most suitable. Coding can also help to communicate thoughts about programming problems. A programmer dealing with a complex programming problem and finding it hard to explain the solution to fellow programmers might code it and use the code to demonstrate what he or she means. Code, say the exponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts.

II. Testing

One cannot be certain of anything unless one has tested it. Testing is not a perceived, primary need for the customer; a lot of software is shipped without proper testing and still works (more or less). In software development, XP says this means that one cannot be certain that a function works unless one tests it. This raises the question of defining what one can be uncertain about.
• You can be uncertain whether what you coded is what you meant. To test this uncertainty, XP uses unit tests. These are automated tests that test the code. The programmer will try to write as many tests as he or she can think of that might break the code being written; if all tests run successfully, the coding is complete.
• You can be uncertain whether what you meant is what you should have meant. To test this uncertainty, XP uses acceptance tests based on the requirements given by the customer in the exploration phase of release planning.
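The second kind of test is phrased in the customer's terms rather than the programmer's. A small sketch, assuming a hypothetical Store API (the tiny in-file class stands in for the real system under test):

    # Acceptance test for an invented user story:
    # "A returning customer's saved address is used at checkout."
    class Store:
        def __init__(self):
            self.addresses = {}

        def register(self, email, address):
            self.addresses[email] = address

        def checkout(self, email, items):
            # Ship to the customer's saved address.
            return {"items": items, "shipping_address": self.addresses[email]}

    def test_returning_customer_checkout():
        store = Store()
        store.register("ada@example.com", address="12 Forge St")
        order = store.checkout("ada@example.com", items=["kettle"])
        assert order["shipping_address"] == "12 Forge St"  # the customer's criterion

    test_returning_customer_checkout()
    print("Acceptance test passed.")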

III. Listening

Programmers do not necessarily know anything about the business side of the system under development. The function of the system is determined by the business side. For the programmers to find out what the functionality of the system should be, they have to listen to business. Programmers have to listen "in the large": they have to listen to what the customer needs. Also, they have to try to understand the business problem, and to give the customer feedback about his or her problem, to improve the customer's own understanding of it. Communication between the customer and programmer is further addressed in The Planning Game.

IV. Designing

From the point of view of simplicity, one could say that system development doesn't need more than coding, testing and listening. If those activities are performed well, the result should always be a system that works. In practice, this will not work. One can come a long way without designing, but at a given time one will get stuck. The system becomes too complex and the dependencies within the system cease to be clear. One can avoid this by creating a design structure that organizes the logic in the system. Good design will avoid many dependencies within a system; this means that changing one part of the system will not affect other parts.

Practices

Extreme Programming has 12 practices, grouped into four areas, derived from the best practices of software engineering:
Fine scale feedback
• Pair Programming
• Planning Game
• Test Driven Development
• Whole Team
Continuous process
• Continuous Integration
• Design Improvement
• Small Releases
Shared understanding
• Coding Standards
• Collective Code Ownership
• Simple Design
• System Metaphor
Programmer welfare
• Sustainable Pace

Controversial aspects

• Change management: The most controversial aspect of Extreme Programming is the change management aspect of the process. More formal software development processes require change requests to be analyzed and approved by a change control board (or equivalent). In Extreme Programming, the on-site customer requests the changes informally, often by verbally informing the development team.
• Unstable Requirements: Proponents of Extreme Programming claim that by having the on-site customer request changes informally, the process becomes flexible and saves the cost of formal overhead. Critics of XP claim this can lead to costly rework and project scope creep beyond what was previously agreed or funded.
• User Conflicts: Change control boards are a sign that there are potential conflicts in project objectives and constraints between multiple users. XP's expedited methodology is somewhat dependent on programmers being able to assume a unified client viewpoint so the programmer can concentrate on coding rather than documentation of compromise objectives and constraints. This also applies when multiple programming organisations are involved, particularly organisations which compete for shares of projects.
• Revision Funding: Change control boards might distinguish between discrepancy reports and change requests to determine funding. A discrepancy from stated requirements could be considered a "mistake" by developers for the work originally funded, whereas a change request could be considered a new requirement to be backed by new funding, especially for a large-scale change. The funding of software revisions could be controlled in that manner.
• Other Aspects: Other controversial aspects of Extreme Programming include:
  o Requirements are expressed as automated acceptance tests rather than specification documents.
  o Requirements are defined incrementally, rather than trying to get them all in advance.
  o Software developers are required to work in pairs.
  o There is no Big Design Up Front. Most of the design activity takes place on the fly and incrementally, starting with "the simplest thing that could possibly work" and adding complexity only when it's required by failing tests. Critics fear this would result in more redesign effort than only redesigning when requirements change.
  o A customer representative is attached to the project. This role can become a single point of failure for the project, and some people have found it to be a source of stress. Also, there is the danger of micro-management by a non-technical representative trying to dictate the use of technical software features and architecture.
• Scalability: Opponents of XP claim that XP would only work on teams of twelve or fewer people. However, it has been claimed that XP has been used successfully on teams of over a hundred developers. It is not that XP doesn't scale, just that few people have tried to scale it, and proponents of XP refuse to speculate on this facet of the process. ThoughtWorks has claimed reasonable success on distributed XP projects with up to sixty people.
• Controversy in Book: In 2003, Matt Stephens and Doug Rosenberg published a book under Apress called Extreme Programming Refactored: The Case Against XP, which questioned the value of the XP process and suggested ways in which it could be improved. This triggered a lengthy debate in articles, internet newsgroups, and web-site chat areas. The core argument of the book is that XP's practices are interdependent but that few practical organisations are willing or able to adopt all the practices; therefore the entire process fails. The book also makes other criticisms, and it draws a likeness of XP's "collective ownership" model to socialism in a negative manner.

• XP Evolution: Certain aspects of XP have changed since the book Extreme Programming Refactored (2003) was published; in particular, XP now accommodates modifications to the practices as long as the required objectives are still met. XP also uses increasingly generic terms for processes. Some argue that these changes invalidate previous criticisms; others claim that this is simply watering the process down.

Application of Extreme Programming

Controversial aspects notwithstanding, Extreme Programming remains a sensible choice for some projects. Projects suited to Extreme Programming are those that:
• Involve new or prototype technology, where the requirements change rapidly, or where some development is required to discover unforeseen implementation problems
• Are research projects, where the resulting work is not the software product itself, but domain knowledge
• Are small and more easily managed through informal methods
Projects suited for more traditional methodologies are those that:
• Involve stable technology and have fixed requirements, where it is known that few changes will occur
• Involve mission-critical or safety-critical systems, where formal methods must be employed for safety or insurance reasons
• Are large projects which may overwhelm informal communication mechanisms
• Have complex products which continue beyond the project scope to require frequent and significant alterations, where a recorded knowledge base, or documentation set, becomes a fundamental necessity to support the maintenance
Project managers must weigh project aspects against available methodologies to make an appropriate selection. However, some XP concepts can be applied outside XP, such as using Pair Programming to expedite related technical changes to the documentation set of a large project.

3) Cleanroom Software Engineering

The cleanroom process is a software development process intended to produce software with a certifiable level of reliability. The Cleanroom process was originally developed by Harlan Mills and several of his colleagues at IBM [1]. The focus of the Cleanroom process is on defect prevention, rather than defect removal. The name Cleanroom was chosen to evoke the cleanrooms used in the electronics industry to prevent the introduction of defects during the fabrication of integrated circuits. The Cleanroom process first saw use in the mid-to-late 1980s. Demonstration projects within the military began in the early 1990s [2]. Recent work on the Cleanroom process has examined fusing Cleanroom with the automated verification capabilities provided by specifications expressed in CSP [3].

Central principles

The basic principles of the Cleanroom process are:

Software development based on formal methods
Cleanroom development makes use of the Box Structure Method to specify and design a software product. Verification that the design correctly implements the specification is performed through team review.

Incremental implementation under statistical quality control
Cleanroom development uses an iterative approach, in which the product is developed in increments that gradually increase the implemented functionality. The quality of each increment is measured against pre-established standards to verify that the development process is proceeding acceptably. A failure to meet quality standards results in the cessation of testing for the current increment, and a return to the design phase.

Statistically sound testing
Software testing in the Cleanroom process is carried out as a statistical experiment. Based on the formal specification, a representative subset of software input/output trajectories is selected and tested. This sample is then statistically analyzed to produce an estimate of the reliability of the software, and a level of confidence in that estimate.
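A rough sketch of the statistical-testing idea: draw test cases from a usage profile, compare the output against an oracle, and derive a reliability estimate with a confidence bound. The system_under_test, oracle and sample_inputs names are placeholders, and the zero-failure bound used below is the standard one-sided estimate:

    import random

    def estimate_reliability(system_under_test, oracle, sample_inputs, n=1000):
        # Run n test cases drawn from the usage profile and estimate
        # reliability as the observed success rate.
        failures = 0
        for _ in range(n):
            x = random.choice(sample_inputs)       # draw from the usage profile
            if system_under_test(x) != oracle(x):  # compare with the expected result
                failures += 1
        return 1 - failures / n

    # With zero failures in n tests, the true failure probability p satisfies
    # (1 - p)^n = 1 - C at confidence C, so p <= 1 - (1 - C) ** (1 / n).
    n, C = 1000, 0.95
    print(f"95% bound on failure probability: {1 - (1 - C) ** (1 / n):.4f}")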

4) The Waterfall Model

The best-known and oldest process is the waterfall model. This is a software development model (a process for the creation of software) in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The origin of the term "waterfall" is often cited to be an article published in 1970 by W. W. Royce; ironically, Royce himself advocated an iterative approach to software development and did not even use the term "waterfall". Royce originally described what is now known as the waterfall model as an example of a method that he argued "is risky and invites failure" (see "Why people still believe in the waterfall model").


Background

In 1970 Royce proposed what is now popularly referred to as the waterfall model as an initial concept, a model which he argued was flawed (Royce 1970). His paper then explored how the initial model could be developed into an iterative model, with feedback from each phase influencing previous phases, similar to many methods used widely and highly regarded by many today. Ironically, it is only the initial model that received notice; his own criticism of this initial model has been largely ignored. The "waterfall model" quickly came to refer not to Royce's final, iterative design, but rather to his purely sequentially ordered model. This section will use this popular meaning of the phrase waterfall model. For an iterative model similar to Royce's final vision, see the spiral model.

Despite Royce's intention that the waterfall model be modified into an iterative model, use of the "waterfall model" as a purely sequential process is still popular, and, for some, the phrase "waterfall model" has since come to refer to any approach to software creation which is seen as inflexible and non-iterative. Those who use the phrase waterfall model pejoratively for non-iterative models that they dislike usually see the waterfall model itself as naive and unsuitable for a "real world" process.

Usage of the waterfall model

[Figure: The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall.]

In Royce's original waterfall model, the following phases are followed perfectly in order:
1. Requirements specification
2. Design
3. Construction (aka implementation or coding)
4. Integration
5. Testing and debugging (aka verification)
6. Installation
7. Maintenance

To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes the requirements specification, setting the requirements of the software in stone. (Example requirements for Wikipedia may be "Wikipedia allows anonymous editing of articles; Wikipedia enables people to search for information", although real requirements specifications will be much more complex and detailed.) When and only when the requirements are fully completed, one proceeds to design. The software in question is designed and a "blueprint" is drawn for implementers (coders) to follow; this design should be a plan for implementing the requirements given. When and only when the design is fully completed, an implementation of that design is made by coders. Towards the later stages of this implementation phase, disparate software components produced by different teams are integrated. (For example, one team may have been working on the "web page" component of Wikipedia and another team may have been working on the "server" component. These components must be integrated together to produce the whole system.) After the implementation and integration phases are complete, the software product is tested and debugged; any faults introduced in earlier phases are removed here. Then the software product is installed, and later maintained to introduce new functionality and remove bugs. Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. Phases of development in the waterfall model are thus discrete, and there is no jumping back and forth or overlap between them. However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations upon this process.

Arguments for the waterfall model

Time spent early on in software production can lead to greater economy later on in the software lifecycle; that is, it has been shown many times that a bug found in the early stages of the production lifecycle (such as requirements specification or design) is more economical (cheaper in terms of money, effort and time) to fix than the same bug found later on in the process. This should be obvious to some people: if a program design is impossible to implement, it is easier to fix the design at the design stage than to realise, months down the track when program components are being integrated, that all the work done so far has to be scrapped because of a broken design. This is the central idea behind Big Design Up Front (BDUF) and the waterfall model: time spent early on making sure that requirements and design are absolutely correct is very useful in economic terms (it will save much time and effort later). Thus, the thinking of those who follow the waterfall process goes, one should make sure that each phase is 100% complete and absolutely correct before proceeding to the next phase of program creation. Program requirements should be set in stone before design is started (otherwise work put into a design based on "incorrect" requirements is wasted); the program's design should be perfect before people begin work on implementing the design (otherwise they are implementing the "wrong" design and their work is wasted), et cetera.

A further argument for the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. More "agile" methodologies can de-emphasise documentation in favour of producing working code; documentation, however, can be useful as a "partial deliverable" should a project not run far enough to produce any substantial amounts of source code (allowing the project to be resumed at a later date). An argument against agile development methods, and thus partly in favour of the waterfall model, is that in agile methods project knowledge is stored mentally by team members. Should team members leave, this knowledge is lost, and substantial loss of project knowledge may be difficult for a project to recover from. Should a fully working design document be present (as is the intent of Big Design Up Front and the waterfall model), new team members or even entirely new teams should theoretically be able to bring themselves "up to speed" by reading the documents themselves. With that said, agile methods do attempt to compensate for this. For example, extreme programming (XP) advises that project team members should be "rotated" through sections of work in order to familiarise all members with all sections of the project (allowing individual members to leave without carrying important knowledge with them).

As well as the above, some prefer the waterfall model for its simple and arguably more disciplined approach. Rather than what the waterfall adherent sees as "chaos", the waterfall model provides a structured approach: the model itself progresses linearly through discrete, easily understandable and explainable phases, and is thus easy to understand; it also provides easily markable "milestones" in the development process. It is perhaps for this reason that the waterfall model is used as a beginning example of a development model in many software engineering texts and courses. It is argued that the waterfall model and Big Design Up Front in general can be suited to software projects which are stable (especially those projects with unchanging requirements, such as with "shrink wrap" software) and where it is possible and likely that designers will be able to fully predict problem areas of the system and produce a correct design before implementation is started. The waterfall model also requires that implementers follow the well-made, complete design accurately, ensuring that the integration of the system proceeds smoothly.

The waterfall model is widely used, including by such large software development houses as those employed by the United States Air Force (see "the waterfall model"), the US Department of Defense and NASA, and on many large government projects (see "the standard waterfall model"). Those who use such methods do not always formally distinguish between the "pure" waterfall model and the various modified waterfall models, so it can be difficult to discern exactly which models are being used to what extent.

Steve McConnell sees the two big advantages of the pure waterfall model as producing a "highly reliable system" and one with a "large growth envelope", but rates it as poor on all other fronts. On the other hand, he views any of several modified waterfall models (described below) as preserving these advantages while also rating as "fair to excellent" on "work[ing] with poorly understood requirements" or "poorly understood architecture" and "provid[ing] management with progress visibility", and rating as "fair" on "manag[ing] risks", being able to "be constrained to a predefined schedule", "allow[ing] for midcourse corrections", and "provid[ing] customer with progress visibility". The only criterion on which he rates a modified waterfall as poor is that it requires sophistication from management and developers. (Rapid Development, 156)

Arguments for Big Design Up Front

Joel Spolsky, a popular online commentator on software development, has argued strongly in favor of Big Design Up Front [1], the general family to which the waterfall model belongs. "Many times, thinking things out in advance saved us serious development headaches later on. ... [on making a particular specification change] ... Making this change in the spec took an hour or two. If we had made this change in code, it would have added weeks to the schedule. I can't tell you how strongly I believe in Big Design Up Front, which the proponents of Extreme Programming consider anathema. I have consistently saved time and made better products by using BDUF and I'm proud to use it, no matter what the XP fanatics claim. They're just wrong on this point and I can't be any clearer than that." However, while Joel Spolsky does argue in favour of Big Design Up Front, a methodology which many consider to be one of the faults of the waterfall model, he does not argue in favour of the purely sequentially ordered waterfall model. See "Daily Builds Are Your Friend" [2], wherein Spolsky advocates "daily builds", a practice not used within the waterfall model, though often used in modified waterfall models.
Criticism of the waterfall model

The waterfall model is, however, argued by many to be a bad idea in practice, mainly because of the belief that it is impossible to get one phase of a software product's lifecycle "perfected" before moving on to the next phases and learning from them (or at least, the belief that this is impossible for any non-trivial program). For example, clients may not be aware of exactly what requirements they want before they see a working prototype and can comment upon it; they may change their requirements constantly, and program designers and implementers may have little control over this. If clients change their requirements after a design is finished, that design must be modified to accommodate the new requirements, invalidating a good deal of effort if overly large amounts of time have been invested in "Big Design Up Front". (Thus methods opposed to the naive waterfall model, such as those used in agile software development, advocate less reliance on a fixed, static requirements document or design document.) Designers may not (or, more likely, cannot) be aware of future implementation difficulties when writing a design for an unimplemented software product. That is, it may become clear in the implementation phase that a particular area of program functionality is extraordinarily difficult to implement. If this is the case, it is better to revise the design than to persist with a design that was based on faulty predictions and does not account for the newly discovered problem areas.

In summary, the criticisms of a non-iterative development approach (such as the waterfall model) are as follows:
• Many software projects must be open to change due to external factors; the majority of software is written as part of a contract with a client, and clients are notorious for changing their stated requirements. Thus the software project must be adaptable, and spending considerable effort in design and implementation based on the idea that requirements will never change is neither adaptable nor realistic in these cases.
• Unless those who specify requirements and those who design the software system in question are highly competent, it is difficult to know exactly what is needed in each phase of the software process before some time is spent in the phase "following" it. That is, feedback from following phases is needed to complete "preceding" phases satisfactorily. For example, the design phase may need feedback from the implementation phase to identify problem design areas. The counter-argument for the waterfall model is that experienced designers may have worked on similar systems before, and so may be able to accurately predict problem areas without time spent prototyping and implementing.
• Constant testing from the design, implementation and verification phases is required to validate the phases preceding them. Constant "prototype design" work is needed to ensure that requirements are non-contradictory and possible to fulfil; constant implementation is needed to find problem areas and inform the design process; constant integration and verification of the implemented code is necessary to ensure that implementation remains on track. The counter-argument for the waterfall model here is that constant implementation and testing to validate the design and requirements is only needed if the introduction of bugs is likely to be a problem. Users of the waterfall model may argue that if designers (et cetera) follow a disciplined process and do not make mistakes, there is no need for constant work in subsequent phases to validate the preceding phases.
• Frequent incremental builds (following the "release early, release often" philosophy) are often needed to build confidence for a software production team and their client.
• It is difficult to estimate time and cost for each phase of the development process without doing some "recon" work in that phase, unless those estimating time and cost are highly experienced with the type of software product in question.
• The waterfall model brings no formal means of exercising management control over a project, and planning, control and risk management are not covered within the model itself.
• Only a certain number of team members will be qualified for each phase; thus to have "code monkeys" who are only useful for implementation work do nothing while designers "perfect" the design is a waste of resources. A counter-argument to this is that "multiskilled" software engineers should be hired over "specialized" staff anyway.

Variations of the Waterfall Model

NB: All software development models will bear at least some similarity to the waterfall model, as all software development models will incorporate at least some phases similar to those used within the waterfall model. However, the following two are the closest variations of the waterfall model.

Royce's final model

Royce's final model, his intended improvement upon his initial "waterfall model", illustrated that feedback could (should, and often would) lead from code testing to design (as testing of code uncovered flaws in the design) and from design back to requirements specification (as design problems may necessitate the removal of conflicting or otherwise unsatisfiable or undesignable requirements). In the same paper Royce also advocated large quantities of documentation, doing the job "twice if possible" (a sentiment similar to that of Fred Brooks, famous for writing The Mythical Man-Month, an influential book in software project management, who advocated planning to "throw one away"), and involving the customer as much as possible, now the basis of participatory design and of User Centred Design, a central tenet of Extreme Programming.

The "sashimi" model

The sashimi model (so called because it features overlapping phases, like the overlapping fish of Japanese sashimi) was originated by Peter DeGrace. It is sometimes referred to simply as the "waterfall model with overlapping phases" or "the waterfall model with feedback". Since phases in the sashimi model overlap, information on problem spots can be acted upon during phases that would typically "precede" others in the pure waterfall model. For example, since the design and implementation phases will overlap in the sashimi model, implementation problems may be discovered during the "design and implementation" phase of the development process. This helps alleviate many of the problems associated with the Big Design Up Front philosophy of the waterfall model.

5) The Rational Unified Process (RUP)

RUP is an iterative software development process created by the Rational Software Corporation, now a division of IBM. The RUP is an extensive refinement of the (generic) Unified Process. The RUP is not a single concrete prescriptive process, but rather an adaptable process framework, intended to be tailored by the development organizations and software project teams that will select the elements of the process that are appropriate for their needs. The Rational Unified Process (RUP) is also a software process product, originally developed by Rational Software, and now available from IBM. The product includes a hyperlinked knowledge base with sample artifacts and detailed descriptions for many different types of activities. RUP is included in the IBM Rational Method Composer (RMC) product, which allows customization of the process.

Background

The roots of the Rational Process go back to the original spiral model of Barry Boehm. Ken Hartman, one of the key RUP contributors, collaborated with Boehm on research and writing. The Rational Approach was developed at Rational Software in the 1980s and 1990s. In 1995 Rational Software acquired the Swedish company Objectory AB. The Rational Unified Process was the result of the merger of the Rational Approach and the Objectory process developed by Objectory founder Ivar Jacobson. The first result of that merger was the Rational Objectory Process, designed as an Objectory-like process but suited to wean Objectory users onto the Rational Rose tool. When that goal was accomplished, the name was changed. The first version of the Rational Unified Process, version 5.0, was released in 1998. The chief architect was Philippe Kruchten.

Design objectives

The creators and developers of the process focused on diagnosing the characteristics of different failed software projects; by doing so they tried to recognize the root causes of these failures. They also looked at the existing software engineering processes and their solutions for these symptoms. A representative list of failure causes includes the following:
• Ad hoc requirements management
• Ambiguous and imprecise communication
• Brittle architecture (architecture that does not work properly under stress)
• Overwhelming complexity
• Undetected inconsistencies in requirements, designs, and implementations
• Insufficient testing
• Subjective assessment of project status
• Failure to attack risks
• Uncontrolled change propagation
• Insufficient automation
Project failure is caused by a combination of several symptoms, though each project fails in a unique way. The outcome of their study was a system of software best practices they named the Rational Unified Process. The process was designed with the same techniques the team used to design software; it has an underlying object-oriented model, using the Unified Modeling Language (UML).

Principles and best practices

RUP is based on a set of software development principles and best practices, for instance:
1. Develop software iteratively
2. Manage requirements
3. Use component-based architecture
4. Visually model software
5. Verify software quality
6. Control changes to software

Develop software iteratively

Given the time it takes to develop large sophisticated software systems, it is not possible to define the problem and build the solution in a single step. Requirements will often change throughout a project's development, due to architectural constraints, customers' needs or a greater understanding of the original problem. Iteration allows the project to be successively refined and addresses a project's highest-risk items as the highest-priority task. Ideally each iteration ends with an executable release; this helps reduce a project's risk profile, allows greater customer feedback and helps developers stay focused. The RUP uses iterative and incremental development for the following reasons:
• Integration is done step by step during the development process, limiting it to fewer elements.
• Integration is less complex, making it more cost-effective.
• Parts are separately designed and/or implemented and can be easily identified for later reuse.
• Requirement changes are noted and can be accommodated.
• Risks are attacked early in development, since each iteration gives the opportunity for more risks to be identified.
• Software architecture is improved by repeated scrutiny.
Using iterations, a project will have one overall phase plan, but multiple iteration plans. Involvement from stakeholders is often encouraged at each milestone. In this manner, milestones serve as a means to obtain stakeholder buy-in while providing a constant measure against requirements and organizational readiness for the pending launch.

Manage requirements

Requirements management in RUP is concerned with meeting the needs of end users by identifying and specifying what they need and identifying when those needs change. Its benefits include the following:
• The correct requirements generate the correct product; the customer's needs are met.
• Necessary features will be included, reducing post-development cost.
RUP suggests that the management of requirements involves the following activities:
• Analyze the problem: agree on the problem and create the measures that will prove its value to the organization.
• Understand stakeholder needs: share the problem and value with key stakeholders and find out what their needs are surrounding the solution idea.
• Define the system: create features from needs and outline use cases, activities which show nicely the high-level requirements and the overall usage model of the system.
• Manage the scope of the system: modify the scope of what you will deliver based on results so far, and select the order in which to attack the use-case flows.
• Refine the system definition: detail use-case flows with the stakeholders in order to create a detailed Software Requirements Specification (SRS) that can serve as the contract between your team and your client and that can drive design and test activities.
• Manage changing requirements: handle incoming requirement changes once the project has begun.

Use component-based architecture

Component-based architecture creates a system that is easily extensible, is intuitively understandable and promotes software reuse. A component often relates to a set of objects in object-oriented programming. Software architecture is increasing in importance as systems become larger and more complex. RUP focuses on producing the basic architecture in early iterations. This architecture then becomes a prototype in the initial development cycle. The architecture evolves with each iteration to become the final system architecture. RUP also asserts design rules and constraints to capture architectural rules. By developing iteratively it is possible to gradually identify components, which can then be developed, bought or reused. These components are often assembled within existing infrastructures such as CORBA, COM, or Java EE.

Visually model software

Abstracting your programming from its code and representing it using graphical building blocks is an effective way to get an overall picture of a solution. Using this representation, technical resources can determine how best to implement a given set of interrelated logic. It also builds an intermediary between the business process and actual code through information technology. A model in this context is a visualization and at the same time a simplification of a complex design. RUP specifies which models are necessary and why. The Unified Modeling Language (UML) can be used for modeling use cases, class diagrams and other objects. RUP also discusses other ways to build models.

Verify software quality

Quality assessment is the most common failing point of software projects, since it is often an afterthought and sometimes even handled by a different team. RUP assists in planning quality control and assessment by building it into the entire process and involving all members of a team. No worker is specifically assigned to quality; RUP assumes that each member of the team is responsible for quality during the entire process. The process focuses on meeting the expected level of quality and provides test workflows to measure this level.

Control changes to software

Visually model software
Abstracting programming from its code and representing it with graphical building blocks is an effective way to get an overall picture of a solution. Using this representation, technical staff can determine how best to implement a given set of interrelated logic, and it provides an intermediary between the business process and the actual code. A model in this context is both a visualization and a simplification of a complex design. RUP specifies which models are necessary and why. The Unified Modeling Language (UML) can be used for modeling use cases, class diagrams and other objects. RUP also discusses other ways to build models.

Verify software quality
Quality assessment is the most common failing point of software projects, since it is often an afterthought and sometimes even handled by a different team. RUP assists in planning quality control and assessment by building it into the entire process and involving all members of the team. No worker is specifically assigned to quality; RUP assumes that each member of the team is responsible for quality during the entire process. The process focuses on meeting the expected level of quality and provides test workflows to measure this level.

Control changes to software
In all software projects, change is inevitable. RUP defines methods to control, track and monitor changes. RUP also defines secure workspaces, guaranteeing that a software engineer's system will not be affected by changes made in another workspace. This concept ties in closely with component-based architecture.

The RUP Project lifecycle

[Figure: a typical project profile showing the relative sizes of the four phases]

The RUP lifecycle has been created by assembling the content elements (described below) into semi-ordered sequences. Consequently, the RUP lifecycle is available as a work breakdown structure, which can be customized to address the specific needs of a project. The RUP lifecycle organizes tasks into phases and iterations. A project has four phases:
• Inception phase
• Elaboration phase
• Construction phase
• Transition phase

Inception phase
In this phase the business case, which includes the business context, success factors (expected revenue, market recognition, etc.) and a financial forecast, is established. To complement the business case, a basic use-case model, project plan, initial risk assessment and project description (the core project requirements, constraints and key features) are produced. After these are completed, the project is checked against the following criteria:
• Stakeholder concurrence on scope definition and cost/schedule estimates.
• Requirements understanding as evidenced by the fidelity of the primary use cases.
• Credibility of the cost/schedule estimates, priorities, risks and development process.
• Depth and breadth of any architectural prototype that was developed.
• Actual expenditures versus planned expenditures.
If the project does not pass this milestone, called the Lifecycle Objective Milestone, it can either be cancelled or repeat this phase after being redesigned to better meet the criteria.

Elaboration phase
The elaboration phase is where the project starts to take shape. In this phase the problem domain analysis is made and the architecture of the project gets its basic form. The phase must pass the Lifecycle Architecture Milestone by meeting the following criteria:

• A use-case model in which the use cases and the actors have been identified and most of the use-case descriptions are developed. The use-case model should be 80% complete.
• A description of the software architecture.
• An executable architecture that realizes architecturally significant use cases.
• A revised business case and risk list.
• A development plan for the overall project.
If the project cannot pass this milestone, there is still time for it to be cancelled or redesigned. After leaving this phase, the project transitions into high-risk operation, where changes are much more difficult and detrimental when made.

Construction phase
In this phase the main focus is on the development of components and other features of the system. This is the phase when the bulk of the coding takes place. In larger projects, several construction iterations may be developed in an effort to divide the use cases into manageable segments that produce demonstrable prototypes. This phase produces the first external release of the software. Its conclusion is marked by the Initial Operational Capability Milestone.

Transition phase
In the transition phase, the product moves from the development organization to the end user. The activities of this phase include training the end users and maintainers, and beta testing the system to validate it against the end users' expectations. The product is also checked against the quality level set in the inception phase. If it does not meet this level, or the standards of the end users, the entire cycle of this phase begins again. If all objectives are met, the Product Release Milestone is reached and the development cycle ends.

The RUP disciplines and workflows

[Figure: The Rational Unified Process]


RUP is based on a set of building blocks, or content elements, describing what is to be produced, the skills required and the step-by-step explanation of how specific development goals are achieved. The main building blocks are the following:
• Roles (who) – A role defines a set of related skills, competencies and responsibilities.
• Work products (what) – A work product represents something resulting from a task, including all the documents and models produced while working through the process.
• Tasks (how) – A task describes a unit of work assigned to a role that provides a meaningful result.
Within each iteration, the tasks are categorized into nine disciplines.
Engineering disciplines:
• Business modeling discipline
• Requirements discipline
• Analysis and design discipline
• Implementation discipline
• Test discipline
• Deployment discipline
Supporting disciplines:
• Configuration and change management discipline
• Project management discipline
• Environment discipline

Business modeling discipline
Organizations are becoming more dependent on IT systems, making it imperative that information system engineers know how the applications they are developing fit into the organization. Businesses invest in IT when they understand the competitive advantage and value added by the technology. The aim of business modeling is to establish a better understanding and communication channel between business engineering and software engineering. Understanding the business means that software engineers must understand the structure and the dynamics of the target organization (the client), the current problems in the organization and possible improvements. They must also ensure a common understanding of the target organization among customers, end users and developers. Business modeling explains how to describe a vision of the organization in which the system will be deployed, and how to use this vision as a basis to outline the process, roles and responsibilities.

Requirements discipline
The goal of the requirements discipline is to describe what the system should do and to allow the developers and the customer to agree on that description. To achieve this, analysts elicit, organize and document required functionality and constraints, and track and document trade-offs and decisions. A vision document is created, and stakeholder needs are elicited. Actors are identified, representing the users and any other system that may interact with the system being developed. Use cases are identified, representing the behavior of the system. Because use cases are developed according to the actors' needs, the system is more likely to be relevant to the users. Artifacts used in requirements work include:
• Use-case model
• Use cases
• Supplementary specifications
• Glossary
• Storyboards, as a basis for user-interface prototypes

These artifacts can be combined and packaged together to define a Software Requirements Specification (SRS).

Analysis and design discipline
The goal of analysis and design is to show how the system will be realized in the implementation phase. You want to build a system that:
• Performs, in a specific implementation environment, the tasks and functions specified in the use-case descriptions.
• Fulfills all its requirements.
• Is easy to change when functional requirements change.
Analysis and design results in a design model and, optionally, an analysis model. The design model serves as an abstraction of the source code; that is, it acts as a 'blueprint' of how the source code is structured and written. The design model consists of design classes structured into design packages and design subsystems with well-defined interfaces, representing what will become components in the implementation. It also contains descriptions of how objects of these design classes collaborate to perform use cases.

Implementation discipline
The purposes of implementation are:
• To define the organization of the code, in terms of implementation subsystems organized in layers.
• To implement classes and objects in terms of components (source files, binaries, executables and others).
• To test the developed components as units.
• To integrate the results produced by individual implementers (or teams) into an executable system.
Systems are realized through the implementation of components. The process describes how you reuse existing components, or implement new components with well-defined responsibilities, making the system easier to maintain and increasing the possibilities for reuse.

Test discipline
The purposes of the test discipline are:
• To verify the interaction between objects.
• To verify the proper integration of all components of the software.
• To verify that all requirements have been correctly implemented.
• To identify defects and ensure they are addressed before the software is deployed.
RUP proposes an iterative approach, which means that you test throughout the project. This allows you to find defects as early as possible, which radically reduces the cost of fixing them. Tests are carried out along several quality dimensions: reliability, functionality, application performance and system performance. For each of these quality dimensions, the process describes how you go through the test lifecycle of planning, design, implementation, execution and evaluation.
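As a minimal illustration of testing components as units, the hypothetical sketch below exercises a single class in isolation using JUnit 5. The Calculator class is invented for the example; RUP itself is tool-neutral.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // A hypothetical component under test.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // A unit test verifies one component in isolation, so defects are
    // found early in the iteration, before integration testing begins.
    class CalculatorTest {
        @Test
        void addSumsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }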

Deployment discipline
The purposes of deployment are to produce product releases successfully and to deliver the software to its end users. It covers a wide range of activities, including:
• Producing external releases of the software
• Packaging the software
• Distributing the software
• Installing the software
• Providing help and assistance to users
Although deployment activities are mostly centered around the transition phase, many of them need to begin in earlier phases to prepare for deployment at the end of the construction phase. The deployment and environment workflows of RUP contain less detail than the other workflows.

Configuration and change management discipline
The change management discipline in RUP deals with three specific areas:
• Configuration management
• Change request management
• Status and measurement management
Configuration management is responsible for the systematic structuring of the work products. Artifacts such as documents and models need to be under version control, and changes to them must be visible. Configuration management also keeps track of dependencies between artifacts, so that all related artifacts are updated when changes are made.
Change request management keeps track of the proposals for change among the many artifacts and versions that exist during the system development process.
Status and measurement management covers the lifecycle of change requests. Change requests have states such as new, logged, approved, assigned and complete, and attributes such as root cause, nature (defect or enhancement) and priority. These states and attributes are stored in a database so that useful reports about the progress of the project can be produced. This activity has defined procedures to be followed. Rational also has a product for maintaining change requests, called ClearQuest.
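As a rough, hypothetical sketch of the change-request record described above (RUP does not prescribe a data model, and tools such as ClearQuest define their own schemas):

    // Hypothetical model of a change request's states and attributes.
    enum ChangeRequestState { NEW, LOGGED, APPROVED, ASSIGNED, COMPLETE }

    enum ChangeRequestNature { DEFECT, ENHANCEMENT }

    class ChangeRequest {
        ChangeRequestState state = ChangeRequestState.NEW;
        ChangeRequestNature nature;
        String rootCause;
        int priority; // e.g. 1 = highest

        // State transitions are recorded so that progress reports
        // can be generated from the stored requests.
        void transitionTo(ChangeRequestState next) {
            state = next;
        }
    }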

Project management discipline
Project planning in RUP occurs at two levels. There is a coarse-grained or phase plan, which describes the entire project, and a series of fine-grained or iteration plans, which describe the iterations.

Iteration plan
An iteration plan is a fine-grained plan; there is one per iteration. There are typically two iteration plans active at any point in time:
• The current iteration plan is used to track progress in the current iteration.
• The next iteration plan is used to plan the upcoming iteration. This plan is prepared toward the end of the current iteration.

Limitations
If the users of RUP do not understand that RUP is a process framework, they may perceive it as a weighty and expensive process. RUP was not intended, envisioned or promoted to be used straight "out of the box". The IBM Rational Method Composer product was created to address this limitation and to help process engineers and project managers customize RUP for their project needs. OpenUP/Basic, the lightweight and open-source version of RUP, is another attempt to address this limitation.

7) The Agile Unified Process (AUP) is a simplified version of the Rational Unified Process (RUP). It describes a simple, easy-to-understand approach to developing business application software using agile techniques and concepts while remaining true to the RUP. The AUP applies agile techniques, including test-driven design (TDD), Agile Modeling, agile change management and database refactoring, to improve productivity.

Phases
Like the RUP, the AUP consists of four phases:
1. Inception. The goal is to identify the initial scope of the project and a potential architecture for the system, and to obtain initial project funding and stakeholder acceptance.
2. Elaboration. The goal is to prove the architecture of the system.
3. Construction. The goal is to build working software on a regular, incremental basis which meets the highest-priority needs of the project stakeholders.
4. Transition. The goal is to validate and deploy the system into the production environment.

Disciplines
Unlike the RUP, the AUP has only seven disciplines:
1. Model. The goal of this discipline is to understand the business of the organization and the problem domain being addressed by the project, and to identify a viable solution to address the problem domain.
2. Implementation. The goal of this discipline is to transform your model(s) into executable code and to perform a basic level of testing, in particular unit testing.
3. Test. The goal of this discipline is to perform an objective evaluation to ensure quality. This includes finding defects, validating that the system works as designed, and verifying that the requirements are met.
4. Deployment. The goal of this discipline is to plan for the delivery of the system and to execute the plan to make the system available to end users.
5. Configuration management. The goal of this discipline is to manage access to your project artifacts. This includes not only tracking artifact versions over time but also controlling and managing changes to them.
6. Project management. The goal of this discipline is to direct the activities that take place on the project. This includes managing risks, directing people (assigning tasks, tracking progress, etc.) and coordinating with people and systems outside the scope of the project to ensure that it is delivered on time and within budget.
7. Environment. The goal of this discipline is to support the rest of the effort by ensuring that the proper process, guidance (standards and guidelines) and tools (hardware, software, etc.) are available to the team as needed.

Philosophies
The Agile UP is based on the following philosophies:
• Your staff knows what they're doing. People aren't going to read detailed process documentation, but they will want some high-level guidance and/or training from time to time. The AUP product provides links to many of the details, if you're interested, but doesn't force them upon you.
• Simplicity. Everything is described concisely using a handful of pages, not thousands of them.
• Agility. The Agile UP conforms to the values and principles of agile software development and the Agile Alliance.
• Focus on high-value activities. The focus is on the activities which actually count, not every possible thing that could happen on a project.
• Tool independence. You can use any toolset you want with the Agile UP. The recommendation is to use the tools best suited for the job, which are often simple or even open-source tools.
You'll want to tailor the AUP to meet your own needs.

AUP iterations
The Agile Unified Process distinguishes between two types of iterations. A development release iteration results in a deployment to the quality assurance and/or demo area. A production release iteration results in a deployment to the production area. This is a significant refinement over the Rational Unified Process.

