Black Box Testing
Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester knows only the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. Because of this, black box testing can be considered testing with respect to the specification; no other knowledge of the program is necessary.
Black box testing focuses on functional requirements and complements white box testing. "A programmer makes mistakes by omission as well as commission." Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black box and white box testing are required.

Black box testing attempts to find:
1. incorrect or missing functions
2. interface errors
3. errors in data structures or external database access
4. performance errors
5. initialization and termination errors

The advantages of this type of testing include:
• The test is unbiased because the designer and the tester are independent of each other.
• The tester does not need knowledge of any specific programming language.
• The test is done from the point of view of the user, not the designer.
• Test cases can be designed as soon as the specifications are complete.

The disadvantages of this type of testing include:
• The test can be redundant if the software designer has already run a test case.
• The test cases are difficult to design.
• Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.

Black Box Testing Methods:
1. Equivalence Partitioning
2. Boundary Value Analysis
3. Cause Effect Graphing Techniques
4. Comparison Testing
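Before looking at the individual methods, the following minimal sketch illustrates the basic idea of testing against the specification only. It tests a hypothetical leap-year function purely from its written specification ("a year is a leap year if it is divisible by 4, except that century years must also be divisible by 400"); the function name and the small stand-in implementation are assumptions made so the example is self-contained.

import unittest

def is_leap_year(year):
    # Stand-in implementation so the sketch runs; in real black box testing
    # the tester never sees this code, only its specification.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearBlackBoxTest(unittest.TestCase):
    # Every test case below is derived from the written specification alone.
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_not_divisible_by_4_is_not_leap(self):
        self.assertFalse(is_leap_year(2023))

    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_century_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()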
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers a class of errors that might otherwise require many cases to be executed before the general error is observed. Equivalence partitioning strives to define a test case that uncovers classes of errors, thereby reducing the total number of test cases that must be developed.
– An equivalence class represents a set of valid or invalid states for an input condition.
– An input condition is a specific numeric value, a range of values, a set of related values, or a Boolean condition (true or false).
• Example: Suppose a program calls for an input that is a 5-digit integer between 10,000 and 99,999. This input condition defines the following equivalence classes:
– the equivalence partitions are <10,000, 10,000-99,999 and >99,999
– test cases would be selected from each equivalence class, e.g. 00000, 09999, 10000, 10001, 99999 and 100000
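As a minimal sketch of how these partitions translate into concrete test data, the code below encodes the three equivalence classes for the 5-digit integer example and picks one representative value from each; the helper names and the chosen representatives are assumptions made for this illustration.

# Equivalence classes for "a 5-digit integer between 10,000 and 99,999".
# One representative per class is enough under the equivalence-partitioning
# assumption that all members of a class are treated the same way.
EQUIVALENCE_CLASSES = {
    "invalid_below_range": lambda n: n < 10_000,
    "valid_in_range":      lambda n: 10_000 <= n <= 99_999,
    "invalid_above_range": lambda n: n > 99_999,
}

# Representative test values, one drawn from each class.
REPRESENTATIVES = {
    "invalid_below_range": 9_999,
    "valid_in_range":      50_000,
    "invalid_above_range": 100_000,
}

def check_partitioning():
    # Sanity check: each representative really belongs to its class.
    for name, predicate in EQUIVALENCE_CLASSES.items():
        value = REPRESENTATIVES[name]
        assert predicate(value), f"{value} is not in class {name}"
    print("Representatives:", REPRESENTATIVES)

if __name__ == "__main__":
    check_partitioning()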
If an input condition specifies
– a range, 1 valid and 2 invalid classes are defined
– a specific value, 1 valid and 2 invalid classes are defined
– a member of a set, 1 valid and 1 invalid class are defined
– a Boolean condition, 1 valid and 1 invalid class are defined

Basic idea: Partition the input domain of a program into a finite number of equivalence classes such that one can reasonably assume that a test of a representative value of each class is equivalent to a test of any other value.

The method:
Step 1: Study the program specification to find input conditions and identify the equivalence classes.
(1) For each condition Pi, identify one valid equivalence class defined by Pi and one invalid equivalence class defined by ¬Pi. E.g., if Pi is "the first symbol of the identifier must be a letter", then the valid equivalence class is defined by "it is a letter" and the invalid one is defined by "it is not a letter".
(2) For each input condition of the form Pi and Pj, identify one valid equivalence class defined by Pi and Pj, and two invalid ones defined by ¬Pi and ¬Pj, respectively. E.g., if the input condition is 1 ≤ n ≤ 5, the valid equivalence class is defined by 1 ≤ n ≤ 5, and the two invalid equivalence classes are defined by n < 1 and n > 5, respectively.
(3) If there is any reason to suspect that elements in an equivalence class are not treated in an identical manner by the program, split the equivalence class into smaller ones.

The method:
Step 2: Select the test cases.
(1) Until all valid equivalence classes have been covered by test cases, find a new test case that covers as many of the uncovered valid equivalence classes as possible.
(2) For every invalid equivalence class, find a test case that covers that invalid equivalence class only.

Boundary Value Analysis
Since a greater number of errors tend to occur at the boundaries of the input domain than in the "center", boundary value analysis (BVA) has been developed as a testing technique. BVA complements equivalence partitioning and leads to a selection of test cases that exercise bounding values. Rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.
• Boundary value analysis is a black box testing technique where test cases are designed to test the boundaries of an input domain. Studies have shown that more errors occur on the "boundary" of an input domain than in the "center".
• Boundary value analysis complements and can be used in conjunction with equivalence partitioning.
– Test at the minimum and maximum values of an input or output range, and at values just above and just below those boundaries.
– If an input or output condition specifies a number of values, test with the minimum and maximum numbers of values, and with one more and one fewer than each.
– Test at data structure boundaries.

Basic idea: Select test cases that lie directly on, above, and beneath the boundaries of input equivalence classes and output equivalence classes to explore the program behavior along the border. (This is the counterpart of domain-strategy testing.)

Boundary value analysis differs from equivalence partitioning in two respects: (a) rather than checking to see if the program will execute correctly for a representative element of an equivalence class, it attempts to determine if the program defines the equivalence class correctly, and (b) rather than selecting test cases based on input conditions only, it also requires derivation of test cases based on output conditions.

The method:
(1) If an input variable is defined in a range from LB to UB, use LB, UB, LB − δ, and UB + δ as the test cases. (Here δ represents the smallest possible change in value.)
(2) Use rule (1) for each output variable.
(3) If the input or output of a program is a sequence (e.g., a sequential file or a linear list), focus attention on the first and last elements of the sequence.
(4) Use your ingenuity to search for additional boundary values.
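The sketch below applies rule (1) to a hypothetical input variable defined on the range 1 to 12 (say, a month number), generating the on-boundary and just-outside-boundary test values; the function name and the chosen range are assumptions made for this illustration.

def boundary_values(lb, ub, delta=1):
    # Rule (1): for a variable defined in a range from LB to UB, exercise
    # LB and UB themselves plus the values just outside the range
    # (LB - delta and UB + delta). Values just inside the range
    # (LB + delta, UB - delta) are commonly added as well.
    return [lb - delta, lb, lb + delta, ub - delta, ub, ub + delta]

if __name__ == "__main__":
    # Hypothetical input condition: a month number in the range 1..12.
    print(boundary_values(1, 12))   # [0, 1, 2, 11, 12, 13]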
Cause Effect Graphing Techniques
Translation of natural language descriptions of procedures into software-based algorithms is error prone.
– Causes and effects are listed, graphed, converted to a decision table, and then to test cases.

Basic idea: A "cause" is an input condition, and an "effect" is a specific sequence of computations to be performed. A cause-effect graph is basically a directed graph that describes the logical combinations of causes and their relationship to the effects to be produced. Cause-effect graphing is a technique that aids in selecting test cases to check whether the program will produce the right effect for every possible combination of causes.

The method:
(1) Divide the program specification into pieces of workable size.
(2) Identify causes and effects in the specification.
(3) Analyze the specification to determine the logical relationship among causes and effects, and express it as a cause-effect graph.
(4) Identify syntactic or environmental constraints that make certain combinations impossible.
(5) Translate the graph into a limited-entry decision table.
(6) Select a test case for every column in the decision table.

The format of a limited-entry decision table:
condition stubs | condition entries
action stubs    | action entries
• The condition stub contains a list of conditions, one per row.
• The condition entry lists combinations of condition values in a column.
• The condition constraints express interactions between conditions.
• The action stub contains a list of actions.
• The action entry contains an "X" for each action to be performed.
• The application constraints express the conditions under which an action is to be performed.
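As a minimal sketch of steps (5) and (6), the code below represents a small, hypothetical limited-entry decision table directly as data (two conditions, two actions) and derives one test case per column; the condition and action names are invented for this illustration and are not part of the specification discussed in this document.

# A hypothetical limited-entry decision table with two causes and two effects.
# Each column (rule) gives a truth value for every condition and marks the
# actions expected for that combination, as steps (5) and (6) describe.
CONDITIONS = ["order total > 100", "customer is a member"]
ACTIONS = ["apply discount", "charge full price"]

# Rules (columns): condition entries (True/False) and action entries ("X" or blank).
RULES = [
    {"conditions": (True, True),   "actions": ("X", "")},
    {"conditions": (True, False),  "actions": ("", "X")},
    {"conditions": (False, True),  "actions": ("", "X")},
    {"conditions": (False, False), "actions": ("", "X")},
]

def derive_test_cases():
    # One test case per column: the condition values become the test inputs,
    # and the marked actions become the expected results.
    cases = []
    for i, rule in enumerate(RULES, start=1):
        inputs = dict(zip(CONDITIONS, rule["conditions"]))
        expected = [a for a, mark in zip(ACTIONS, rule["actions"]) if mark == "X"]
        cases.append({"rule": i, "inputs": inputs, "expected": expected})
    return cases

if __name__ == "__main__":
    for case in derive_test_cases():
        print(case)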
Example: A limited-entry decision table can be obtained by analyzing and completing the following program specification, from the US Army Corps of Engineers: Executive Order 10358 provides, in the case of an employee whose work week varies from the normal Monday through Friday work week, that Labor Day and Thanksgiving Day each were to be observed on the next succeeding workday when the holiday fell on a day outside the employee's regular basic work week. Now, when Labor Day, Thanksgiving Day or any of the new Monday holidays are outside an employee's basic work week, the immediately preceding workday will be his holiday when the non-workday on which the holiday falls is the second non-workday or the non-workday designated as the employee's day off in lieu of Saturday. When the non-workday on which the holiday falls is the first non-workday or the non-workday designated as the employee's day off in lieu of Sunday, the holiday observance is moved to the next succeeding workday.

How do you test code which attempts to implement this? Cause-effect graphing attempts to provide a concise representation of logical combinations and corresponding actions:
1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
2. A cause-effect graph is developed.
3. The graph is converted to a decision table.
4. Decision table rules are converted to test cases.

Comparison Testing
In some applications reliability is critical, and redundant hardware and software may be used. For redundant software, separate teams develop independent versions of the software. Each version is tested with the same test data to ensure that all provide identical output, and all versions may be run in parallel with a real-time comparison of results. Even if only one version will run in the final system, for some critical applications independent versions can be developed and checked against each other using comparison testing or back-to-back testing. When the outputs of the versions differ, each is investigated to determine if there is a defect. The method does not catch errors in the specification.
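A minimal sketch of back-to-back comparison testing is shown below: two hypothetical, independently developed versions of the same function are run on the same test data and any disagreement is flagged for investigation. Both implementations, the function names, and the test data are assumptions made for this illustration.

# Two hypothetical, independently developed versions of the same specification:
# "return the median of a non-empty list of numbers".
def median_version_a(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    return ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

def median_version_b(values):
    ordered = sorted(values)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]
    return (ordered[n // 2 - 1] + ordered[n // 2]) / 2

def back_to_back_test(test_data):
    # Run every test input through both versions and collect disagreements;
    # each disagreement must be investigated to find which version is defective.
    disagreements = []
    for case in test_data:
        a, b = median_version_a(case), median_version_b(case)
        if a != b:
            disagreements.append((case, a, b))
    return disagreements

if __name__ == "__main__":
    data = [[1], [3, 1, 2], [4, 1, 3, 2], [10, 10, 10, 10]]
    print(back_to_back_test(data) or "All versions agree on this test data")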