Black Box Test Case Designing

Introduction
This document provides a comprehensive view of designing software test cases. It discusses the objective, purpose, types, and format of test cases, and gives tips on writing test cases for successful testing of a software product. The test case format is suitable for manual system test cases. The test cases should be written in enough detail that they could be handed to a new team member, who would then be able to quickly start carrying out the tests and finding defects.
Who Should Read
Quality assurance engineers, test engineers, managers, or anyone who designs or performs software tests, including testers and test leads.

Eligibility of the person who designs the test cases
The person or tester who designs the test cases should be well trained, in the sense that he or she should know each and every function of the module whose test cases are to be designed. He or she needs a thorough understanding of the application, of the business reason behind the application, and of the project as a whole.
Topics Covered
1. Objective and purpose of black box testing
2. Types of black box test case design
3. Definition of a test case
4. Qualities of a good test case
5. Format of a test case, with an example
6. Features of test cases
OBJECTIVE AND PURPOSE
The purpose of Black Box Test Case Design (BBTD) is to discover circumstances under which the assessed object does not react and behave according to the requirements or specifications. The test cases in a black box test case design are derived from the requirements or the specifications. The object to be assessed is considered a black box, i.e. the assessor is not interested in the internal structure and behavior of the object to be assessed.
BLACK BOX TEST CASE DESIGNS
• Generation of equivalence classes
• Marginal value analysis
• Intuitive test case definition
• Function coverage
1. Generation of Equivalence Classes

Objective and Purpose
The objective of the generation of equivalence classes is to achieve an optimal probability of detecting errors with a minimum number of test cases.

Operational Sequence
The principle of the generation of equivalence classes is to group all input data of a program into a finite number of equivalence classes, so it can be assumed that any representative of a class will detect the same errors as any other representative of that class. The definition of test cases via equivalence classes is realized by means of the following steps:
o Analysis of the input data requirements, the output data requirements, and the conditions according to the specifications
o Definition of the equivalence classes by setting up the ranges for input and output data
o Definition of the test cases by selecting values for each class
When defining equivalence classes, two groups of equivalence classes have to be differentiated:
o valid equivalence classes
o invalid equivalence classes
For valid equivalence classes, valid input data are selected; for invalid equivalence classes, erroneous input data are selected. Even when the specification is available, the definition of equivalence classes remains a predominantly heuristic process.
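As an illustration, suppose (purely as an assumption, not part of the text above) that the input under test is a person's age, valid from 0 to 120. A minimal Python sketch of deriving one representative test value per valid and invalid equivalence class could look like this:

# Hypothetical example: partitioning an "age" input (valid range assumed 0..120)
# into valid and invalid equivalence classes and picking one representative each.

EQUIVALENCE_CLASSES = {
    "invalid_negative": (-1000, -1),    # erroneous input: below the valid range
    "valid_range":      (0, 120),       # valid input data
    "invalid_too_large": (121, 1000),   # erroneous input: above the valid range
}

def representative(low, high):
    """Pick any one value from the class; here, by assumption, the midpoint."""
    return (low + high) // 2

# One test value per class is enough, since any representative of a class is
# assumed to detect the same errors as any other representative of that class.
test_values = {name: representative(lo, hi)
               for name, (lo, hi) in EQUIVALENCE_CLASSES.items()}

if __name__ == "__main__":
    print(test_values)   # e.g. {'invalid_negative': -501, 'valid_range': 60, 'invalid_too_large': 560}

The class boundaries themselves are handled by the marginal value analysis described next.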
2. Marginal Value Analysis

Objective and Purpose
The objective of the marginal value analysis is to define test cases that can be used to discover errors connected with the handling of range margins.

Operational Sequence
The principle of the marginal value analysis is to consider the range margins when defining test cases. This analysis is based on the equivalence classes defined by means of the generation of equivalence classes. Contrary to the generation of equivalence classes, not just any representative of the class is selected as a test case, but only the representatives at the class margins. The marginal value analysis therefore represents an addition to the test case design according to the generation of equivalence classes.

3. Intuitive Test Case Definition

Objective and Purpose
The objective of the intuitive test case definition is to improve the systematically derived test cases qualitatively, and also to detect supplementary test cases.

Operational Sequence
The basis for this methodical approach is the intuitive ability and experience of human beings to select test cases according to expected errors. A regulated procedure does not exist. Apart from the analysis of the requirements and of the systematically defined test cases (if available), it is most practical to generate a list of possible errors and error-prone situations. In this connection it is possible to make use of experience with repeatedly occurring standard errors. Based on these identified errors and critical situations, the additional test cases are then defined.

4. Function Coverage

Objective and Purpose
The purpose of function coverage is to identify test cases that can be used to prove that the corresponding function is available and can be executed. In this connection the test cases concentrate on the normal behavior and the exceptional behavior of the object to be assessed.

Operational Sequence
Based on the defined requirements, the functions to be tested must be identified. Then the test cases for the identified functions can be defined.

Recommendation
With the help of a test case matrix it is possible to check whether functions are covered by several test cases. In order to improve the efficiency of the tests, redundant test cases ought to be deleted.
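As a sketch of this recommendation, a test case matrix can be kept as a simple mapping from test cases to the functions they exercise; the test case and function names below are hypothetical. Uncovered functions and functions covered by several test cases (candidates to review for redundancy) are then easy to flag:

# Hypothetical test case matrix: which test case covers which function.
coverage_matrix = {
    "TC-01": {"login", "logout"},
    "TC-02": {"login"},
    "TC-03": {"search"},
}
required_functions = {"login", "logout", "search", "export"}

covered = set().union(*coverage_matrix.values())
uncovered = required_functions - covered                       # functions with no test case at all
multiply_covered = {f for f in covered
                    if sum(f in fs for fs in coverage_matrix.values()) > 1}

print("Not covered by any test case:", uncovered)              # {'export'}
print("Covered by several test cases:", multiply_covered)      # {'login'} -> review for redundant cases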
DEFINITIONS OF TEST CASE
1. A test case is a detailed procedure that fully tests a feature or an aspect of a feature.
2. A test case is the smallest entity that is always executed as a unit, from beginning to end.
3. A good test case is one that has a high probability of finding an error.
4. A test case is a set of
   o test inputs
   o execution conditions
   o expected results
   developed for a particular objective.
QUALITIES OF A GOOD TEST CASE
• Has a high probability of catching an error
• Is not redundant
• Is the best of its breed
• Is neither too simple nor too complex
• Is reproducible
FORMAT OF TEST CASE DESIGN
Test Case Design should contain:

Test Case ID: A unique number given to the test case so that it can be identified.
Test Description: A description of what the test case is going to test.
Revision History: Each test case has to have a revision history in order to know when and by whom it was created or modified.
Function to be Tested: The name of the function to be tested.
Environment: The environment in which the test is run.
Test Setup: Anything that needs to be set up outside of the application, for example printers, the network, and so on.
Test Execution: A detailed description of every step of the execution.
Expected Results: A description of what you expect the function to do.
Actual Results: Pass / failed. If pass, what actually happened when you ran the test; if failed, a description of what you observed.

EXAMPLE
Test Case ID: B 001
Test Description: Verify B - bold formatting of the text
Revision History: 3/23/00 1.0 - Valerie - Created
Function to be Tested: B - bold formatting of the text
Environment: Win 98
Test Setup: N/A
Test Execution:
1. Open the program
2. Open a new document
3. Type any text
4. Select the text to make bold
5. Click Bold
Expected Result: Applies bold formatting to the text
Actual Result: Pass
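If the same format needs to be kept in a machine-readable form, the fields above map naturally onto a small record type. This is only a sketch, not part of the prescribed format; the field names are paraphrased from the headings above and the values are taken from the example test case B 001:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    # Fields follow the test case format described above.
    test_case_id: str
    description: str
    revision_history: str
    function_to_be_tested: str
    environment: str
    test_setup: str
    test_execution: List[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""

# The example test case B 001 expressed in this structure.
bold_test = TestCase(
    test_case_id="B 001",
    description="Verify B - bold formatting of the text",
    revision_history="3/23/00 1.0 - Valerie - Created",
    function_to_be_tested="B - bold formatting of the text",
    environment="Win 98",
    test_setup="N/A",
    test_execution=[
        "Open the program",
        "Open a new document",
        "Type any text",
        "Select the text to make bold",
        "Click Bold",
    ],
    expected_result="Applies bold formatting to the text",
    actual_result="Pass",
)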
FEATURES OF TEST CASES
Good test cases clearly state the following components:
1. No iteration -> A feature or case to be tested should not be repeated across test cases.
2. Completeness -> The test cases should cover all the features that have to be tested.
3. Detailed -> Test cases should contain detailed steps and all the requirements that are needed to test a particular function.
4. Accuracy -> Test cases should be accurate, without drawbacks such as spelling mistakes or unclear cases.
5. More cases likely to fail -> There should be cases that have a higher probability of failing than of passing. Test cases must be written for invalid and unexpected cases as well as for valid and expected cases.
6. Meaningful case -> The case should contain the proper input, action, output, and result of the case (pass / fail).
7. Short and simple language -> The case should be short rather than lengthy, and it should be written in very simple language so that any person is able to understand the scope of each case.
8. Append a unique number -> The name of each test case should be a short phrase describing a general test situation, with a unique number appended for the given test situation. For example: login-1, login-2, login-3 for three alternative ways to test logging in.
9. Use distinct test cases -> Use distinct test cases when different steps are needed to test each situation. One test case can be used when the steps are the same and only different input values are needed.
10. Number of test cases -> The advantage of having a large number of tests is that it usually increases the coverage. The disadvantage of creating a big test suite is simply that it is too big: it could take a long time to fully specify every test case that you have mapped out, and the resulting document could become too large, making it harder to maintain. So the number should be neither too small nor too large.
11. Focus on the test cases that seem most in need of additional detail. For example, select system test cases that cover:
• High priority use cases or features
• Software components that are currently available for testing (rather than specifying tests on components that cannot actually be tested yet)
• Features that must work properly before other features can be exercised (e.g., if login does not work, you cannot test anything that requires a logged-in user)
• Features that are needed for product demos or screenshots
• Requirements that need to be made clearer
12. Carefully selecting test data is as important as defining the steps of the test case. The concepts of boundary conditions and equivalence partitions are key to good test data selection. Try these steps to select test data (a code sketch follows the list):
• Determine the set of all input values that can possibly be entered for a given input parameter. For example, the age of a person might be entered as any integer.
• Define the boundary between valid and invalid input values. For example, negative ages are nonsense. You might also check for clearly unreasonable inputs; an age entered as 200 is much more likely to be a typo than a user who is actually two hundred years old.
• Review the requirements and find boundaries in the valid range that should cause the system to behave in different ways. For example, the system might treat minors differently than adults, so the boundary would be age 18.
• Choose one input value somewhere in the middle of each equivalence partition (e.g., -5, 12, and 44), one directly on each boundary (e.g., 0 and 18), and one on each side of each boundary (e.g., 1, 17, and 19). Test data values that are expected to cause errors (e.g., -5) should be tested in separate robustness test cases.
• In functional correctness test cases, make sure that you have inputs that will force the system to generate each possible type of response to valid input. And, in robustness test cases, make sure to force the system to generate each relevant error message.
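As a sketch of the age example above (the function check_age and its return values are assumptions, not part of the original text), the values selected in the middle of each partition, directly on each boundary, and on each side of each boundary can be driven through parametrized tests:

import pytest

def check_age(age: int) -> str:
    """Hypothetical system under test: classify an entered age."""
    if age < 0 or age > 150:
        return "error"
    return "minor" if age < 18 else "adult"

# Functional correctness cases: middle of the valid partitions, each boundary,
# and each side of each boundary, as described in the steps above.
@pytest.mark.parametrize("age, expected", [
    (12, "minor"), (44, "adult"),                 # middle of the valid partitions
    (0, "minor"), (18, "adult"),                  # directly on the boundaries
    (1, "minor"), (17, "minor"), (19, "adult"),   # on each side of each boundary
])
def test_age_functional(age, expected):
    assert check_age(age) == expected

# Values expected to cause errors go into separate robustness test cases.
@pytest.mark.parametrize("age", [-5, 200])
def test_age_robustness(age):
    assert check_age(age) == "error"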