Testing

Black box testing

Black box testing takes an external perspective of the test object to derive test cases. These tests can be functional or non-functional, though they are usually functional. The test designer selects valid and invalid inputs and determines the correct output; there is no knowledge of the test object's internal structure.

This method of test design is applicable to all levels of software testing: unit, integration, functional, system and acceptance. The higher the level, and hence the bigger and more complex the box, the more one is forced to use black box testing to simplify. While this method can uncover unimplemented parts of the specification, one cannot be sure that all existent paths are tested.

Typical black box test design techniques include:

• Equivalence partitioning
• Boundary value analysis
• Decision table testing
• Pairwise testing
• State transition tables
• Use case testing
• Cross-functional testing

Equivalence partitioning

Equivalence partitioning is a software testing technique in which test cases are designed to execute representatives from each equivalence partition, i.e. a partition of input values undergoing similar treatment. In principle, test cases are designed to cover each partition at least once. This has two goals:

1. To reduce the number of test cases to a necessary minimum.
2. To select the right test cases to cover all possible scenarios.

Although in rare cases equivalence partitioning is also applied to the outputs of a software component, typically it is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behaviour. An input has certain ranges which are valid and other ranges which are invalid. This may be best explained by the example of a function which takes a parameter "month". The valid range for the month is 1 to 12, representing January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13.

  ... -2 -1  0 | 1 .............. 12 | 13 14 15 ...
  invalid      |  valid              |  invalid
  partition 1  |  partition          |  partition 2

The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behaviour of the program. Using more or even all test cases of a partition will not find new faults in the program.
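As a concrete illustration of the month example, the sketch below (Python; the function is_valid_month and the chosen representative values are illustrative, not taken from any particular system) selects exactly one test value per partition:

    # Hypothetical function under test: reports whether a month number is valid.
    def is_valid_month(month: int) -> bool:
        return 1 <= month <= 12

    # One representative value per equivalence partition, with the expected result.
    partition_representatives = [
        ("invalid partition 1 (month <= 0)",  -3, False),
        ("valid partition (1..12)",            7, True),
        ("invalid partition 2 (month >= 13)", 20, False),
    ]

    for name, value, expected in partition_representatives:
        actual = is_valid_month(value)
        status = "ok" if actual == expected else "FAIL"
        print(f"{status}: {name}: is_valid_month({value}) -> {actual}")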

The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably. An additional effect of applying this technique is that you also find the so-called "dirty" test cases. An inexperienced tester may be tempted to use as test cases the input data 1 to 12 for the month and forget to select some out of the invalid partitions. This would lead to a huge number of unnecessary test cases on the one hand, and a lack of test cases for the dirty ranges on the other hand.

The tendency is to relate equivalence partitioning to so-called black box testing, which is strictly checking a software component at its interface, without consideration of internal structures of the software. But having a closer look at the subject, there are cases where it applies to grey box testing as well. Imagine an interface to a component which has a valid range between 1 and 12 as in the example above. However, internally the function may differentiate between the values 1 to 6 and the values 7 to 12. Depending on the input value, the software will internally run through different paths to perform slightly different actions. Regarding the input and output interfaces to the component this difference will not be noticed; however, in your grey-box testing you would like to make sure that both paths are examined. To achieve this it is necessary to introduce additional equivalence partitions which would not be needed for black box testing. For this example these would be:

  ... -2 -1  0 | 1 ..... 6 | 7 ..... 12 | 13 14 15 ...
  invalid      |    P1     |     P2     |  invalid
  partition 1  |  (valid partitions)    |  partition 2

To check for the expected results you would need to evaluate some internal intermediate values rather than the output interface. Equivalence partitioning is not a stand-alone method to determine test cases; it has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions.

Boundary-value analysis

Boundary value analysis is a software test design technique in which test cases are chosen at the boundaries between input partitions, the region where off-by-one errors are most likely to occur.

Introduction

Testing experience has shown that the boundaries of input ranges to a software component are likely to contain defects. For instance: a function that takes an integer between 1 and 12, representing a month between January and December, might contain a check for this range:
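A minimal sketch of such a range check (in Python; the function name set_month is illustrative rather than taken from a real system) might look like this:

    def set_month(month: int) -> None:
        # Reject anything outside the valid range 1..12 (January..December).
        if month < 1 or month > 12:
            raise ValueError(f"month must be between 1 and 12, got {month}")
        # ... continue processing with a valid month ...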

Applying boundary value analysis

To set up boundary value analysis test cases, the tester first determines which boundaries are at the interface of a software component. This is done by applying the equivalence partitioning technique. For the above example, the month parameter would have the following partitions:

  ... -2 -1  0 | 1 .............. 12 | 13 14 15 ...
  invalid      |  valid              |  invalid
  partition 1  |  partition          |  partition 2

To apply boundary value analysis, a test case at each side of the boundary between two partitions is selected. In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "negative" test case. A "clean" test case should lead to a valid result. A "negative" test case should lead to specified error handling such as the limiting of values, the usage of a substitute value, or a warning. Boundary value analysis can result in three test cases for each boundary; for example if n is a boundary, test cases could include n-1, n, and n+1.

A further set of boundaries has to be considered when test cases are set up. A solid testing strategy also has to consider the natural boundaries of the data types used in the program. If working with signed values, for example, this may be the range around zero (-1, 0, +1). Similar to the typical range check faults, there tend to be weaknesses in programs in this range. For example, this could be a division by zero problem where a zero value may occur although the programmer always thought the range started at 1. It could be a sign problem when a value turns out to be negative in some rare cases, although the programmer always expected it to be positive. Even if this critical natural boundary is clearly within an equivalence partition, it should lead to additional test cases checking the range around zero.

A further natural boundary is the lower and upper limit of the data type itself. For example, an unsigned 8-bit value has the range of 0 to 255. A good test strategy would also check how the program reacts to an input of -1 and 0 as well as 255 and 256.

The tendency is to relate boundary value analysis more to so-called black box testing, which is strictly checking a software component at its interfaces, without consideration of internal structures of the software. But looking closer at the subject, there are cases where it applies also to white box testing.

After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis, it is necessary to define the combinations of the test cases when there are multiple inputs to a software component.
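Assuming the month example above, a boundary-value test set might therefore probe both sides of each partition boundary and, additionally, the natural limits of a hypothetical unsigned 8-bit input; the sketch below (Python) is illustrative only:

    def is_valid_month(month: int) -> bool:
        # Hypothetical function under test.
        return 1 <= month <= 12

    # Each partition boundary contributes a "clean" and a "negative" test case.
    boundary_cases = [
        (0,  False),   # negative: just below the lower boundary
        (1,  True),    # clean: lower boundary of the valid partition
        (12, True),    # clean: upper boundary of the valid partition
        (13, False),   # negative: just above the upper boundary
    ]

    for value, expected in boundary_cases:
        assert is_valid_month(value) == expected, f"unexpected result for {value}"

    # Natural data-type boundaries for an assumed unsigned 8-bit input (0..255):
    type_limit_probes = [-1, 0, 255, 256]
    print("boundary cases behave as expected; also probe:", type_limit_probes)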

The difference between equivalence partitioning and boundary value analysis

The main difference between EP and BVA is that EP determines the number of test cases to be generated for a given scenario, whereas BVA determines the effectiveness of those generated test cases.

Equivalence partitioning: Equivalence partitioning determines the number of test cases for a given scenario.

Equivalence partitioning is a black box testing technique with the following goals:

1. To reduce the number of test cases to a necessary minimum.
2. To select the right test cases to cover all possible scenarios.

EP is applied to the inputs of a tested component. The equivalence partitions are usually derived from the specification of the component's behaviour. An input has certain ranges which are valid and other ranges which are invalid. This may be best explained by the following example of a function which has the parameter "month" of a date. The valid range for the month is 1 to 12, standing for January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges. The first invalid partition would be <= 0 and the second invalid partition would be >= 13.

  ... -2 -1  0 | 1 .............. 12 | 13 14 15 ...
  invalid      |  valid              |  invalid
  partition 1  |  partition          |  partition 2

The testing theory related to equivalence partitioning says that only one test case of each partition is needed to evaluate the behaviour of the program for the related partition. In other words, it is sufficient to select one test case out of each partition to check the behaviour of the program. Using more or even all test cases of a partition will not find new faults in the program. The values within one partition are considered to be "equivalent". Thus the number of test cases can be reduced considerably. Equivalence partitioning is not a stand-alone method to determine test cases; it has to be supplemented by boundary value analysis. Having determined the partitions of possible inputs, the method of boundary value analysis has to be applied to select the most effective test cases out of these partitions.

Boundary value analysis: Boundary value analysis determines the effectiveness of test cases for a given scenario. To set up boundary value analysis test cases, the tester first has to determine which boundaries are at the interface of a software component. This has to be done by applying the equivalence partitioning technique; boundary value analysis and equivalence partitioning are inevitably linked together. For the example of the month, a date would have the following partitions:

  ... -2 -1  0 | 1 .............. 12 | 13 14 15 ...
  invalid      |  valid              |  invalid
  partition 1  |  partition          |  partition 2

By applying boundary value analysis we can select a test case at each side of the boundary between two partitions. In the above example this would be 0 and 1 for the lower boundary as well as 12 and 13 for the upper boundary. Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give a valid operation result of the program. A "dirty" test case should lead to a correct and specified input error treatment such as the limiting of values, the usage of a substitute value, or, in the case of a program with a user interface, a warning and a request to enter correct data. Boundary value analysis can yield six test cases per parameter: n-1, n, and n+1 for the lower limit, and n-1, n, and n+1 for the upper limit.

All-pairs testing

All-pairs testing or pairwise testing is a combinatorial software testing method that, for each pair of input parameters to a system (typically, a software algorithm), tests all possible discrete combinations of those parameters. Using carefully chosen test vectors, this can be done much faster than an exhaustive search of all combinations of all parameters, by "parallelizing" the tests of parameter pairs. The number of tests is typically O(nm), where n and m are the number of possibilities for each of the two parameters with the most choices.

The reasoning behind all-pairs testing is this: the simplest bugs in a program are generally triggered by a single input parameter. The next simplest category of bugs consists of those dependent on interactions between pairs of parameters, which can be caught with all-pairs testing.[1] Bugs involving interactions between three or more parameters are progressively less common[2], whilst at the same time being progressively more expensive to find by exhaustive testing, which has as its limit the exhaustive testing of all possible inputs. Many testing methods regard all-pairs testing of a system or subsystem as a reasonable cost-benefit compromise between often computationally infeasible higher-order combinatorial testing methods and less exhaustive methods which fail to exercise all possible pairs of parameters. Because no testing technique can find all bugs, all-pairs testing is typically used together with other quality assurance techniques such as unit testing, symbolic execution, fuzz testing, and code review.
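The pair-coverage claim can be checked mechanically. The sketch below (Python; the parameters, values, and the four-test suite are invented for illustration) verifies that four tests cover every value pair of three two-valued parameters, where exhaustive testing would need eight:

    from itertools import combinations, product

    # Three hypothetical parameters, each with two possible values.
    parameters = {
        "os":      ["linux", "windows"],
        "browser": ["firefox", "chrome"],
        "locale":  ["en", "de"],
    }

    # A candidate all-pairs suite: 4 tests instead of the 2 x 2 x 2 = 8 exhaustive ones.
    suite = [
        {"os": "linux",   "browser": "firefox", "locale": "en"},
        {"os": "linux",   "browser": "chrome",  "locale": "de"},
        {"os": "windows", "browser": "firefox", "locale": "de"},
        {"os": "windows", "browser": "chrome",  "locale": "en"},
    ]

    def uncovered_pairs(parameters, suite):
        """Return every (parameter, value) pair combination no test exercises together."""
        missing = []
        for (p1, vals1), (p2, vals2) in combinations(parameters.items(), 2):
            for v1, v2 in product(vals1, vals2):
                if not any(t[p1] == v1 and t[p2] == v2 for t in suite):
                    missing.append(((p1, v1), (p2, v2)))
        return missing

    print("uncovered pairs:", uncovered_pairs(parameters, suite))  # expected: []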

State transition table

In automata theory and sequential logic, a state transition table is a table showing what state (or states in the case of a nondeterministic finite automaton) a finite semiautomaton or finite state machine will move to, based on the current state and other inputs. A state table is essentially a truth table in which some of the inputs are the current state, and the outputs include the next state, along with other outputs.

A state table is one of many ways to specify a state machine, other ways being a state diagram and a characteristic equation.


Common forms

One-dimensional state tables

Also called characteristic tables, single-dimension state tables are much more like truth tables than the two-dimensional versions. Inputs are usually placed on the left, and separated from the outputs, which are on the right. The outputs represent the next state of the machine. Here's a simple example of a state machine with two states and two combinatorial inputs:

  A B   Current State   Next State   Output
  0 0   S1              S2           1
  0 0   S2              S1           0
  0 1   S1              S2           0
  0 1   S2              S2           1
  1 0   S1              S1           1
  1 0   S2              S1           1
  1 1   S1              S1           1
  1 1   S2              S2           0

S1 and S2 would most likely represent the single bits 0 and 1, since a single bit can only have two states.
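One way to read such a table is as a lookup keyed by the inputs and the current state. A small Python sketch of the example table above (the input sequence is arbitrary) could look like this:

    # The one-dimensional state table above, keyed by (A, B, current state).
    # Each entry gives (next state, output).
    STATE_TABLE = {
        (0, 0, "S1"): ("S2", 1),
        (0, 0, "S2"): ("S1", 0),
        (0, 1, "S1"): ("S2", 0),
        (0, 1, "S2"): ("S2", 1),
        (1, 0, "S1"): ("S1", 1),
        (1, 0, "S2"): ("S1", 1),
        (1, 1, "S1"): ("S1", 1),
        (1, 1, "S2"): ("S2", 0),
    }

    def step(state, a, b):
        """Apply one step of the machine defined by STATE_TABLE."""
        return STATE_TABLE[(a, b, state)]

    state = "S1"
    for a, b in [(0, 0), (1, 1), (0, 1)]:
        state, output = step(state, a, b)
        print(f"inputs A={a} B={b} -> next state {state}, output {output}")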

Two-dimensional state tables

State transition tables are typically two-dimensional tables. There are two common forms for arranging them.

The vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates events, and the cells (row/column intersections) in the table contain the next state if an event happens (and possibly the action linked to this state transition).

  State Transition Table
             Events
  State   E1      E2      ...     En
  S1      -       Ay/Sj   ...     -
  S2      -       -       ...     Ax/Si
  ...     ...     ...     ...     ...
  Sm      -       -       ...     Az/Sk

(S: state, E: event, A: action, -: illegal transition)

The vertical (or horizontal) dimension indicates current states, the horizontal (or vertical) dimension indicates next states, and the row/column intersections contain the event which will lead to a particular next state.

  State Transition Table
               next
  current   S1      S2      ...     Sm
  S1        -       Ay/Ej   ...     -
  S2        -       -       ...     Ax/Ei
  ...       ...     ...     ...     ...
  Sm        -       -       ...     Az/Ek

(S: state, E: event, A: action, -: impossible transition)

Example

An example of a state transition table for a machine M, together with the corresponding state diagram, is given below.

  State Transition Table
           Input
  State    1      0
  S1       S1     S2
  S2       S2     S1

  [State diagram for machine M omitted]

All the possible inputs to the machine are enumerated across the columns of the table. All the possible states are enumerated across the rows. From the state transition table given above, it is easy to see that if the machine is in S1 (the first row), and the next input is character 1, the machine will stay in S1. If a character 0 arrives, the machine will transition to S2 as can be seen from the second column. In the diagram this is denoted by the arrow from S1 to S2 labeled with a 0.

For a nondeterministic finite automaton (NFA), a new input may cause the machine to be in more than one state, hence its non-determinism. This is denoted in a state transition table by a pair of curly braces { } with the set of all target states between them. An example is given below.

  State Transition Table for an NFA
           Input
  State    1      0             ε
  S1       S1     { S2, S3 }    Φ
  S2       S2     S1            Φ
  S3       S2     S1            S1

Here, a nondeterministic machine in the state S1 reading an input of 0 will cause it to be in two states at the same time, the states S2 and S3. The last column defines the legal transition of states of the special character, ε. This special character allows the NFA to move to a different state when given no input. In state S3, the NFA may move to S1 without consuming an input character. The two cases above make the finite automaton described non-deterministic.
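The NFA table translates directly into a mapping from (state, input symbol) to a set of next states, with the empty set playing the role of Φ. A Python sketch of the example table (using the string "eps" for the ε column) might be:

    # NFA transition table from the example above: (state, symbol) -> set of states.
    NFA_TABLE = {
        ("S1", "1"): {"S1"},
        ("S1", "0"): {"S2", "S3"},
        ("S1", "eps"): set(),
        ("S2", "1"): {"S2"},
        ("S2", "0"): {"S1"},
        ("S2", "eps"): set(),
        ("S3", "1"): {"S2"},
        ("S3", "0"): {"S1"},
        ("S3", "eps"): {"S1"},
    }

    def move(states, symbol):
        """All states reachable from any state in `states` on `symbol`."""
        result = set()
        for s in states:
            result |= NFA_TABLE.get((s, symbol), set())
        return result

    print(move({"S1"}, "0"))    # {'S2', 'S3'}: the nondeterministic split
    print(move({"S3"}, "eps"))  # {'S1'}: an ε-move that consumes no input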

Transformations from/to state diagram

It is possible to draw a state diagram from the table. A sequence of easy-to-follow steps is given below:

1. Draw the circles to represent the states given.
2. For each of the states, scan across the corresponding row and draw an arrow to the destination state(s). There can be multiple arrows for an input character if the automaton is an NFA.
3. Designate a state as the start state. The start state is given in the formal definition of the automaton.
4. Designate one or more states as accept state. This is also given in the formal definition.
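These steps can even be mechanized. As a rough sketch, the Python snippet below emits Graphviz DOT text for the example machine M from its transition table; the start and accepting states are assumed purely for illustration, since the excerpt does not give the formal definition:

    # Emit Graphviz DOT text for the example DFA table (machine M) above.
    TABLE = {            # (state, input) -> next state
        ("S1", "1"): "S1",
        ("S1", "0"): "S2",
        ("S2", "1"): "S2",
        ("S2", "0"): "S1",
    }
    START = "S1"          # assumed for illustration
    ACCEPTING = {"S1"}    # assumed for illustration

    lines = ["digraph M {", "  rankdir=LR;"]
    for state in sorted({s for s, _ in TABLE}):
        shape = "doublecircle" if state in ACCEPTING else "circle"
        lines.append(f"  {state} [shape={shape}];")
    lines.append(f"  start [shape=point]; start -> {START};")
    for (state, symbol), target in TABLE.items():
        lines.append(f'  {state} -> {target} [label="{symbol}"];')
    lines.append("}")
    print("\n".join(lines))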

Sanity testing

A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or calculation. In mathematics, for example, when multiplying by three or nine, verifying that the sum of the digits of the result is a multiple of 3 or 9 respectively (casting out nines) is a sanity test. In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that the system or methodology works as expected, often prior to a more exhaustive round of testing.


Mathematical

A sanity test can refer to various order-of-magnitude and other simple rule-of-thumb devices applied to cross-check mathematical calculations. For example:

• If one were to attempt to square 738 and calculated 53,874, a quick sanity check could show that this result cannot be true. Consider that 700 < 738, yet 700² = 490,000 > 53,874. Since squaring positive numbers preserves their inequality, the result cannot be true, and so the calculation was bad. The correct answer, 738² = 544,644, is more than 10 times higher than 53,874, and so the result had been off by an order of magnitude.

• In multiplication, 918 x 155 is not 142,135, since 918 is divisible by three but 142,135 is not (its digits add up to 16, not a multiple of three). Also, the product must end in the same digit as the product of the end digits, 8 x 5 = 40, but 142,135 does not end in "0" like "40", while the correct answer does: 918 x 155 = 142,290. An even quicker check is that the product of even and odd numbers is even, whereas 142,135 is odd. (A short code sketch of these digit checks follows this list.)

• When talking about quantities in physics, the power output of a car cannot be 700 kJ, since that is a unit of energy, not power (energy per unit time). See dimensional analysis.
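The digit-based checks in the multiplication example can be written down directly; the sketch below (Python) rejects the wrong product and lets the correct one pass. Like any sanity check, it can only ever reject a result, never prove it right:

    def digit_sum(n: int) -> int:
        return sum(int(d) for d in str(abs(n)))

    def product_sanity_check(a: int, b: int, claimed: int) -> bool:
        """Quick sanity checks for a claimed product a * b."""
        # Divisibility by three must be preserved.
        if (digit_sum(a) % 3 == 0 or digit_sum(b) % 3 == 0) and digit_sum(claimed) % 3 != 0:
            return False
        # The last digit must match the last digit of the product of the end digits.
        if claimed % 10 != (a % 10) * (b % 10) % 10:
            return False
        # The product of an even and an odd number must be even.
        if (a % 2 == 0 or b % 2 == 0) and claimed % 2 != 0:
            return False
        return True

    print(product_sanity_check(918, 155, 142135))  # False: fails every check
    print(product_sanity_check(918, 155, 142290))  # True: passes the quick checks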

Software development

In software development, the sanity test (a form of software testing which offers "quick, broad, and shallow testing"[1]) determines whether it is reasonable to proceed with further testing. Software sanity tests are commonly conflated with smoke tests[2]. A smoke test determines whether it is possible to continue testing, as opposed to whether it is reasonable. A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies run sanity tests and unit tests on an automated build as part of their development process.[3]

The Hello world program is often used as a sanity test for a development environment. If Hello World fails to compile or execute, the supporting environment likely has a configuration problem. If it works, the problem being diagnosed likely lies in the application itself.
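As a sketch of the idea, a software sanity test might exercise one core calculation before the expensive test suite runs; the interest calculation and the names below are hypothetical, not taken from any real application:

    # Hypothetical core calculation of the application under test.
    def monthly_interest(balance: float, annual_rate: float) -> float:
        return balance * annual_rate / 12

    def test_sanity_interest_calculation() -> None:
        # 1200.00 at 12% per year should yield 12.00 per month.
        assert abs(monthly_interest(1200.00, 0.12) - 12.00) < 1e-9

    if __name__ == "__main__":
        test_sanity_interest_calculation()
        print("sanity test passed: it is reasonable to proceed with deeper testing")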

Build automation

Build automation is the act of scripting or automating the wide variety of tasks that a software developer performs in their day-to-day activities, including:

• compiling computer source code into binary code
• packaging binary code
• running tests
• deployment to production systems
• creating documentation and/or release notes

This automated build is in contrast to a manual build process, where a person has to perform multiple, often tedious and error-prone tasks. The goal of this automation is to create a one-step process for turning source code into a working system. This is done to save time and to reduce errors.
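A sketch of that one-step idea is a script that chains the individual steps and stops at the first failure; the commands below are placeholders for whatever compiler, test runner and packaging tool a project actually uses:

    # Minimal one-step build script: compile, test, package; stop on first failure.
    import subprocess
    import sys

    STEPS = [
        ("compile", ["make", "all"]),                              # placeholder command
        ("test",    ["make", "test"]),                             # placeholder command
        ("package", ["tar", "czf", "release.tar.gz", "build/"]),   # placeholder command
    ]

    def run_build() -> None:
        for name, cmd in STEPS:
            print(f"== {name}: {' '.join(cmd)}")
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print(f"build failed at step '{name}'", file=sys.stderr)
                sys.exit(result.returncode)
        print("build succeeded")

    if __name__ == "__main__":
        run_build()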


Advantages of build automation

• Improve product quality
• Reduce boring jobs
• Eliminate dependencies on key people
• Have history of builds and releases in order to investigate issues
• Save time and money, because of the reasons listed above.[1]

Types of automation

• Commanded automation, such as a user running a script on the command line
• Scheduled automation, such as a continuous integration server running a nightly build
• Triggered automation, such as a continuous integration server running a build on every commit to a version control system

Makefile

One specific form of build automation is the automatic generation of Makefiles. This is accomplished by tools such as:

• GNU Automake
• CMake
• imake
• qmake

Other build tools, such as Apache Ant and Apache Maven, replace Makefiles with their own build descriptions rather than generating them.

Requirements of a build system

Basic requirements:

1. Frequent or overnight builds to catch problems early.[2][3][4]

Optional requirements:[5]

1. Generate release notes and other documentation such as help pages.
2. Build status reporting.
3. Test pass or fail reporting.
4. Summary of the features added/modified/deleted with each new build.

Smoke testing

Smoke testing is a term used in plumbing, woodwind repair, electronics, computer software development, and the entertainment industry. It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the assembly is ready for more stressful testing.

Smoke testing in software development

Smoke testing is done by developers before the build is released, or by testers before accepting a build for further testing. Microsoft claims[1] that after code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software.

In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense a smoke test is the process of validating code changes before the changes are checked into the larger product's official source code collection or the main branch of source code.

In software testing, a smoke test is a collection of written tests that are performed on a system prior to being accepted for further testing. This is also known as a build verification test. This is a "shallow and wide" approach to the application. The tester "touches" all areas of the application without getting too deep, looking for answers to basic questions like "Can I launch the test item at all?", "Does it open to a window?", "Do the buttons on the window do things?". The purpose is to determine whether or not the application is so badly broken that testing functionality in a more detailed way is unnecessary. These written tests can either be performed manually or using an automated tool. When automated tools are used, the tests are often initiated by the same process that generates the build itself. This is sometimes referred to as "rattle" testing, as in "if I shake it, does it rattle?"

Stress testing

In software testing, a stress test refers to tests that put a greater emphasis on robustness, availability, and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances. In particular, the goals of such tests may be to ensure the software doesn't crash in conditions of insufficient computational resources (such as memory or disk space), unusually high concurrency, or denial of service attacks. Examples:

• A web server may be stress tested using scripts, bots, and various denial of service tools to observe the performance of a web site during peak loads.
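A crude approximation of such a stress test, using only the Python standard library, fires many concurrent requests and counts failures rather than checking functional correctness; the target URL and the request counts below are placeholders:

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen
    from urllib.error import URLError

    TARGET = "http://localhost:8000/"   # placeholder URL of the system under test
    REQUESTS = 200
    CONCURRENCY = 50

    def probe(_):
        # Return True if the request succeeds, False on any error or timeout.
        try:
            with urlopen(TARGET, timeout=5) as response:
                return response.status == 200
        except (URLError, OSError):
            return False

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            results = list(pool.map(probe, range(REQUESTS)))
        failures = results.count(False)
        print(f"{REQUESTS} requests, {failures} failures at concurrency {CONCURRENCY}")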

Use case

A use case in software engineering and systems engineering is a description of a system’s behaviour as it responds to a request that originates from outside of that system. In other words, a use case describes "who" can do "what" with the system in question. The use case technique is used to capture a system's behavioral requirements by detailing scenario-driven threads through the functional requirements.


Overview

Use cases describe the system from the user's point of view. Use cases describe the interaction between a primary actor (the initiator of the interaction) and the system itself, represented as a sequence of simple steps. Actors are something or someone which exists outside the system under study, and that take part in a sequence of activities in a dialogue with the system to achieve some goal. Actors may be end users, other systems, or hardware devices. Each use case is a complete series of events, described from the point of view of the actor.[1]

According to Bittner and Spence, “Use cases, stated simply, allow description of sequences of events that, taken together, lead to a system doing something useful.”[2] Each use case describes how the actor will interact with the system to achieve a specific goal. One or more scenarios may be generated from a use case, corresponding to the detail of each possible way of achieving that goal. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert. Use cases are often co-authored by systems analysts and end users. The UML use case diagram can be used to graphically represent an overview of the use cases for a given system, and a use-case analysis can be used to develop the diagram.

Within systems engineering, use cases are used at a higher level than within software engineering, often representing missions or stakeholder goals. The detailed requirements may then be captured in SysML requirement diagrams or similar mechanisms.

History

In 1986, Ivar Jacobson, later an important contributor to both the Unified Modeling Language (UML) and the Rational Unified Process (RUP), first formulated the visual modeling technique for specifying use cases. Originally he used the terms usage scenarios and usage case, but found that neither of these terms sounded natural in English, and eventually he settled on the term use case.[3] Since Jacobson originated use case modeling many others have contributed to improving this technique, including Kurt Bittner, Alistair Cockburn, Gunnar Overgaard, and Geri Schneider.

During the 1990s use cases became one of the most common practices for capturing functional requirements. This is especially the case within the object-oriented community where they originated, but their applicability is not restricted to object-oriented systems, because use cases are not object-oriented in nature.

Use case topics

Use case focus

Each use case focuses on describing how to achieve a goal or task. For most software projects this means that multiple, perhaps dozens, of use cases are needed to define the scope of the new system. The degree of formality of a particular software project and the stage of the project will influence the level of detail required in each use case.

Use cases should not be confused with the features of the system under consideration. A use case may be related to one or more features, and a feature may be related to one or more use cases. A use case defines the interactions between external actors and the system under consideration to accomplish a goal. An actor specifies a role played by a person or thing when interacting with the system.[4] The same person using the system may be represented as different actors because they are playing different roles. For example, "Joe" could be playing the role of a Customer when using an Automated Teller Machine to withdraw cash, or playing the role of a Bank Teller when using the system to restock the cash drawer.

Use cases treat the system as a black box, and the interactions with the system, including system responses, are perceived as from outside the system. This is a deliberate policy, because it forces the author to focus on what the system must do, not how it is to be done, and avoids the trap of making assumptions about how the functionality will be accomplished.

Use cases may be described at the abstract level (business use case, sometimes called essential use case), or at the system level (system use case). The difference between these is the scope.

The business use case is described in technology-free terminology which treats the business process as a black box and describes the business process that is used by its business actors (people or systems external to the business) to achieve their goals (e.g., manual payment processing, expense report approval, manage corporate real estate). The business use case will describe a process that provides value to the business actor, and it describes what the process does. Business Process Mapping is another method for this level of business description.



The system use cases are normally described at the system functionality level (for example, create voucher) and specify the function or the service that the system provides for the user. A system use case will describe what the actor achieves by interacting with the system. For this reason it is recommended that a system use case specification begin with a verb (e.g., create voucher, select payments, exclude payment, cancel voucher). Generally, the actor could be a human user or another system interacting with the system being defined.

A use case should:

Describe what the system shall do for the actor to achieve a particular goal.



Include no implementation-specific language.



Be at the appropriate level of detail.



Not include detail regarding user interfaces and screens. This is done in user-interface design.

Degree of detail

Alistair Cockburn, in Writing Effective Use Cases, identified three levels of detail in writing use cases:[5]

Brief use case: consists of a few sentences summarizing the use case. It can be easily inserted in a spreadsheet cell, and allows the other columns in the spreadsheet to record priority, technical complexity, release number, and so on.

Casual use case: consists of a few paragraphs of text, summarizing the use case.

Fully dressed use case: is a formal document based on a detailed template with fields for various sections; and it is the most common understanding of the meaning of a use case. Fully dressed use cases are discussed in detail in the next section on use case templates.

Appropriate detail

Some software development processes do not require anything more than a simple use case to define requirements. However, some other development processes require detailed use cases to define requirements. The larger and more complex the project, the more likely that it will be necessary to use detailed use cases.

The level of detail in a use case often differs according to the progress of the project. The initial use cases may be brief, but as the development process unfolds the use cases become ever more detailed. This reflects the different requirements of the use case. Initially they need only be brief, because they are used to summarize the business requirement from the point of view of users. However, later in the process, software developers need far more specific and detailed guidance. The Rational Unified Process invites developers to write a brief use case description in the use case diagram, with a casual description as comments and a detailed description of the flow of events in a textual analysis. All those can usually be input into the use case tool (e.g., a UML Tool, SysML Tool), or can be written separately in a text editor.

Use case notation

In Unified Modeling Language, the relationships between all (or a set of) the use cases and actors are represented in a use case diagram or diagrams, originally based upon Ivar Jacobson's Objectory notation. SysML, a UML profile, uses the same notation at the system block level.

Use cases and the development process

The specific way use cases are used within the development process will depend on which development methodology is being used. In certain development methodologies, a brief use case survey is all that is required. In other development methodologies, use cases evolve in complexity and change in character as the development process proceeds. In some methodologies, they may begin as brief business use cases, evolve into more detailed system use cases, and then eventually develop into highly detailed and exhaustive test cases.

Use case templates

There is no standard template for documenting detailed use cases. There are a number of competing schemes, and individuals are encouraged to use templates that work for them or the project they are on. Standardization within each project is more important than the detail of a specific template. There is, however, considerable agreement about the core sections; beneath differing terminologies and orderings there is an underlying similarity between most use cases. Different templates often have additional sections, e.g., assumptions, exceptions, recommendations, technical requirements. There may also be industry-specific sections.

Use case name

A use case name provides a unique identifier for the use case. It should be written in verb-noun format (e.g., Borrow Books, Withdraw Cash), should describe an achievable goal (e.g., Register User is better than Registering User) and should be sufficient for the end user to understand what the use case is about. Goal-driven use case analysis will name use cases according to the actor's goals, thus ensuring use cases are strongly user centric. Two to three words is the optimum. If more than four words are proposed for a name, there is usually a shorter and more specific name that could be used.

Version

Often a version section is needed to inform the reader of the stage a use case has reached. The initial use case developed for business analysis and scoping may well be very different from the evolved version of that use case when the software is being developed. Older versions of the use case may still be current documents, because they may be valuable to different user groups.

Goal

Without a goal a use case is useless. There is no need for a use case when there is no need for any actor to achieve a goal. A goal briefly describes what the user intends to achieve with this use case.

Summary

A summary section is used to capture the essence of a use case before the main body is complete. It provides a quick overview, which is intended to save the reader from having to read the full contents of a use case to understand what the use case is about. Ideally, a summary is just a few sentences or a paragraph in length and includes the goal and principal actor.

Actors

An actor is someone or something outside the system that either acts on the system – a primary actor – or is acted on by the system – a secondary actor. An actor may be a person, a device, another system or sub-system, or time. Actors represent the different roles that something outside has in its relationship with the system whose functional requirements are being specified. An individual in the real world can be represented by several actors if they have several different roles and goals in regard to the system. Actors interact with the system and perform actions on it.

Preconditions

A preconditions section defines all the conditions that must be true (i.e., describes the state of the system) for the trigger (see below) to meaningfully cause the initiation of the use case. That is, if the system is not in the state described in the preconditions, the behavior of the use case is indeterminate. Note that the preconditions are not the same thing as the "trigger" (see below): the mere fact that the preconditions are met does NOT initiate the use case.

However, it is theoretically possible both that a use case should be initiated whenever condition X is met and that condition X is the only aspect of the system that defines whether the use case can meaningfully start. If this is really true, then condition X is both the precondition and the trigger, and would appear in both sections. But this is rare, and the analyst should check carefully that they have not overlooked some preconditions which are part of the trigger. If the analyst has erred, the module based on this use case will be triggered when the system is in a state the developer has not planned for, and the module may fail or behave unpredictably.

Triggers

A 'triggers' section describes the event that causes the use case to be initiated. This event can be external, temporal or internal. If the trigger is not a simple true "event" (e.g., the customer presses a button), but instead "when a set of conditions are met", there will need to be a triggering process that continually (or periodically) runs to test whether the "trigger conditions" are met: the "triggering event" is a signal from the trigger process that the conditions are now met.

There is varying practice over how to describe what to do when the trigger occurs but the preconditions are not met.

One way is to handle the "error" within the use case (as an exception). Strictly, this is illogical, because the "preconditions" are now not true preconditions at all (because the behavior of the use case is determined even when the preconditions are not met).



Another way is to put all the preconditions in the trigger (so that the use case does not run if the preconditions are not met) and create a different use case to handle the problem. Note that if this is the local standard, then the use case template theoretically does not need a preconditions section!

Basic course of events

At a minimum, each use case should convey a primary scenario, or typical course of events, also called "basic flow", "happy flow" and "happy path". The main basic course of events is often conveyed as a set of usually numbered steps. For example:

1. The system prompts the user to log on.
2. The user enters his name and password.
3. The system verifies the logon information.
4. The system logs the user on to the system.

Alternative paths

Use cases may contain secondary paths or alternative scenarios, which are variations on the main theme. Each tested rule may lead to an alternative path, and when there are many rules the permutation of paths increases rapidly, which can lead to very complex documents. Sometimes it is better to use conditional logic or activity diagrams to describe use cases with many rules and conditions.

Exceptions, or what happens when things go wrong at the system level, may also be described, not using the alternative paths section but in a section of their own. Alternative paths make use of the numbering of the basic course of events to show at which point they differ from the basic scenario, and, if appropriate, where they rejoin. The intention is to avoid repeating information unnecessarily.

An example of an alternative path would be: "The system recognizes cookie on user's machine", and "Go to step 4 (Main path)". An example of an exception path would be: "The system does not recognize user's logon information", and "Go to step 1 (Main path)".

According to Anthony J H Simons and Ian Graham (who openly admits he got it wrong - using 2000 use cases at Swiss Bank), alternative paths were not originally part of use cases. Instead, each use case represented a single user's interaction with the system. In other words, each use case represented one possible path through the system. Multiple use cases would be needed before designs based on them could be made. In this sense, use cases are for exploration, not documentation. An activity diagram can give an overview of the basic path and alternative paths.

Postconditions

The postconditions section describes what the change in state of the system will be after the use case completes. Postconditions are guaranteed to be true when the use case ends.

Business rules

Business rules are written (or unwritten) rules or policies that determine how an organization conducts its business with regard to a use case. Business rules are a special kind of requirement. Business rules may be specific to a use case or apply across all the use cases, or across the entire business. Use cases should clearly reference the business rules that are applicable and where they are implemented. Business rules should be encoded in line with the use case logic, and their execution may lead to different postconditions. For example, Rule 2, which says that a cash withdrawal leads to an update of the account and a transaction log, leads to a postcondition on successful withdrawal - but only if Rule 1, which says there must be sufficient funds, tests as true.

Notes

Experience has shown that however well-designed a use case template is, the analyst will have some important information that does not fit under a specific heading. Therefore all good templates include a section (e.g. "Notes to Developers") that allows less-structured information to be recorded.

Author and date

This section should list when a version of the use case was created and who documented it. It should also list and date any versions of the use case from an earlier stage in the development which are still current documents. The author is traditionally listed at the bottom, because it is not considered to be essential information; use cases are intended to be collaborative endeavors and they should be jointly owned.

Limitations of use cases

Use cases have limitations:

Use case flows are not well suited to easily capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere.



Use case templates do not automatically ensure clarity. Clarity depends on the skill of the writer(s).



There is a learning curve involved in interpreting use cases correctly, for both end users and developers. As there are no fully standard definitions of use cases, each group must gradually evolve its own interpretation. Some of the relations, such as extends, are ambiguous in interpretation and can be difficult for stakeholders to understand.



Proponents of Extreme Programming often consider use cases to be needlessly document-centric, preferring to use the simpler approach of a user story.



Use case developers often find it difficult to determine the level of user interface (UI) dependency to incorporate in a use case. While use case theory suggests that UI not be reflected in use cases, it can be awkward to abstract out this aspect of design, as it makes the use cases difficult to visualize. Within software engineering, this difficulty is resolved by applying requirements traceability through the use of a traceability matrix.



Use cases can be over-emphasized. In Object Oriented Software Construction (2nd edition), Bertrand Meyer discusses issues such as driving the system design too literally from use cases and using use cases to the exclusion of other potentially valuable requirements analysis techniques.



Use cases have received some interest as a starting point for test design.[6] Some use case literature, however, states that use case pre- and postconditions should apply to all scenarios of a use case (i.e., to all possible paths through a use case), which is limiting from a test design standpoint. If the postconditions of a use case are so general as to be valid for all possible use case scenarios, they are likely not to be useful as a basis for specifying expected behavior in test design. For example, the outputs and final state of a failed attempt to withdraw cash from an ATM are not the same as a successful withdrawal: if the postconditions reflect this, they too will differ; if the postconditions don't reflect this, then they can't be used to specify the expected behavior of tests. An alternative perspective on use case pre- and postconditions more suitable for test design based on model-based specification is discussed in [7].
