Software Testing


Objectives

  • To discuss the distinctions between validation testing and defect testing
  • To describe the principles of system and component testing
  • To describe strategies for generating system test cases
  • To understand the essential characteristics of tools used for test automation

The testing process

Component testing
  • Testing of individual program components;
  • Usually the responsibility of the component developer (except sometimes for critical systems);
  • Tests are derived from the developer's experience.

System testing
  • Testing of groups of components integrated to create a system or sub-system;
  • The responsibility of an independent testing team;
  • Tests are based on a system specification.

Testing phases

  • Component testing: performed by the software developer
  • System testing: performed by an independent testing team

The software testing process

Design test cases → Prepare test data → Run program with test data → Compare results to test cases

The artifacts produced at each stage are the test cases, the test data, the test results and the test reports.

Black Box Testing

  • Black box testing is also called behavioral testing.
  • It focuses on the functional requirements of the software.
  • It is not an alternative to white box testing; it uncovers different classes of errors.
  • Typical errors uncovered are incorrect functions, interface errors, errors in data structures, performance errors, and initialization or termination errors.

Equivalence partitioning

Diagram: valid and invalid input partitions are fed into the system, which produces the corresponding outputs.

Equivalence Partitioning

  • Divides the input domain into classes of data from which test cases can be derived.
  • An ideal test case uncovers a class of errors that might otherwise require many arbitrary test cases to be executed before the general error is observed.
  • Equivalence classes are evaluated for a given input condition; an equivalence class represents a set of valid or invalid states for input conditions.
  • If an input condition specifies a range, one valid and two invalid equivalence classes are defined, as in the sketch below.
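A minimal sketch of these classes in code, assuming a hypothetical validate_age function that accepts values in the range 18 to 60:

    def validate_age(age):
        # Hypothetical function under test: accepts ages from 18 to 60 inclusive
        return 18 <= age <= 60

    # One valid class (inside the range) and two invalid classes (below and above it)
    assert validate_age(35) is True    # valid equivalence class
    assert validate_age(10) is False   # invalid class: below the range
    assert validate_age(75) is False   # invalid class: above the range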

Boundary Value Analysis

  • A testing technique in which the elements at the edges of the input domain are selected and tested.
  • Using BVA, test cases are also derived from the output domain, not only from the input conditions.
  • BVA is a test case design technique that complements the equivalence partitioning technique.
  • If an input condition specifies a range bounded by the values X and Y, test cases should be designed with the values X and Y, and with values just above and just below X and Y, as in the sketch below.
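A minimal sketch, reusing the hypothetical validate_age function above with boundaries X = 18 and Y = 60:

    # Boundary value analysis around the range 18..60:
    # exercise each boundary and the values immediately outside it.
    for age, expected in [(17, False), (18, True), (19, True),
                          (59, True), (60, True), (61, False)]:
        assert validate_age(age) is expected, f"unexpected result for age {age}"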

White Box Testing

  • A testing method based on close examination of procedural details; also called glass box testing.
  • Test cases are derived by:
    • Examining all the independent paths within a module.
    • Exercising all the logical decisions on their true and false sides.
    • Executing all loops at their boundaries and within their operational bounds.
    • Exercising internal data structures to ensure their validity.

Why perform white box testing?

Three main reasons:
  • Procedural details need to be examined to detect and correct logical errors.
  • White box testing is a must for uncovering errors on logical paths.
  • Certain typographical errors may remain undetected even after syntax and type checking; such errors can be uncovered during white box testing.

Cyclomatic Complexity

  • Cyclomatic complexity is a software metric that gives a quantitative measure of the logical complexity of a program.
  • It defines the number of paths in the basis set of the program, which provides an upper bound on the number of tests that must be conducted to ensure that all statements have been executed at least once.

Methods of computing cyclomatic complexity

  • Method 1: The total number of regions in the flow graph gives the cyclomatic complexity.
  • Method 2: The cyclomatic complexity V(G) for a flow graph G is defined as
      V(G) = E - N + 2
    where E is the total number of edges and N is the total number of nodes in the flow graph.
  • Method 3: The cyclomatic complexity V(G) for a flow graph G is defined as
      V(G) = P + 1
    where P is the total number of predicate nodes contained in the flow graph.

Eg: Fragment code { if (a

Step 1: Design the flow graph for the given code fragment.

Diagram: flow graph with nodes 1 to 6.

Step 2: Count the regions and list the total edges and nodes in the flow graph.

  • The three regions are denoted R1, R2 and R3.
  • Total edges E = 7
  • Total nodes N = 6

Step 3: Apply the formulas.

  • Cyclomatic complexity = total number of regions = 3
  • CC = E - N + 2 = 7 - 6 + 2 = 3
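A minimal sketch of Method 2 in code, using the edge and node counts read off the flow graph above:

    def cyclomatic_complexity(num_edges, num_nodes):
        # V(G) = E - N + 2 for a connected flow graph
        return num_edges - num_nodes + 2

    # Counts taken from the flow graph above: E = 7, N = 6
    print(cyclomatic_complexity(7, 6))  # prints 3, matching the region count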


Structural testing

  • Sometimes called white-box testing.
  • Test cases are derived according to the program structure; knowledge of the program is used to identify additional test cases.
  • The objective is to exercise all program statements (not all path combinations).

Structural testing

Diagram: test data is derived from the tests and the component code, and running the tests produces the test outputs.

Condition Testing

  • Condition testing is used to test the logical conditions in a program module.
  • It focuses on each condition in the program.
  • Typical errors it uncovers:
    • A Boolean operator is incorrect or missing.
    • A Boolean variable is missing.
    • Boolean parentheses may be missing.
    • An error in a relational operator.
    • An error in an arithmetic expression.

Strategy:

  • The two testing strategies used in condition testing are:
    • Branch testing: in a compound condition, each true and false branch is tested, as in the sketch below.
    • Domain testing: a relational expression can be tested using three or four tests.
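A minimal sketch of branch testing on a hypothetical compound condition, driving each operand to both outcomes:

    def needs_review(amount, flagged):
        # Hypothetical compound condition with two operands
        return amount > 1000 or flagged

    assert needs_review(1500, False) is True   # first operand true
    assert needs_review(500, True) is True     # second operand true
    assert needs_review(500, False) is False   # both operands false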

Path testing

  • The method exercises every independent execution path of a program at least once.
  • The starting point for path testing is a program flow graph, where nodes represent program decisions and arcs represent the flow of control.

Steps carried out while performing path testing:

  • Step 1: Design the flow graph for the program or component.
  • Step 2: Calculate the cyclomatic complexity.
  • Step 3: Select a basis set of paths.
  • Step 4: Generate test cases for these paths.

Diagram: flow graph notation for the if-else, while and case constructs.

Example flow graph with nodes 1 to 10 and regions I to IV.

Cyclomatic complexity is computed using the formula:
  CC = E - N + 2 = 12 - 10 + 2 = 4
Using predicate nodes: nodes 2, 4 and 7 are the decision-making nodes, so
  CC = P + 1 = 3 + 1 = 4

Independent paths

The basis paths are:
  • 1, 2, 3, 4, 5, 6, 10
  • 1, 2, 3, 10
  • 1, 2, 3, 4, 7, 8, 2, …
  • 1, 2, 3, 4, 7, 9, 2, …

Test cases should be derived so that all of these paths are executed. A dynamic program analyser may be used to check that the paths have been executed.

Test coverage criteria based on data flow mechanisms

  • Testing is performed on the definitions and uses of variables in the program.
  • In this method, definition-use chains (DU chains) are required to identify the definition and use pairs from the program structure.
  • The set DEF(n) contains the variables that are defined at node n.
  • The set USE(n) contains the variables that are read or used at node n. A small sketch follows below.
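A minimal sketch of DEF and USE sets on a hypothetical three-line fragment (not taken from the slides):

    # node 1: x = int(input())   ->  DEF(1) = {x}
    # node 2: y = x * 2          ->  DEF(2) = {y}, USE(2) = {x}
    # node 3: print(y)           ->  USE(3) = {y}
    # DU chains to cover: (x: defined at 1, used at 2) and (y: defined at 2, used at 3)
    x = int(input())
    y = x * 2
    print(y)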

Strategic Approach

  • A testing strategy describes the steps of the testing process for the developer, the quality analyst and the customer.
  • It includes:
    • Test planning
    • Test case design
    • Test execution
    • Data collection
    • Effectiveness evaluation

The strategic approach for testing can be summarised as follows:

  • The process of testing begins at the component level and works toward the integration of the entire computer-based system.
  • Different testing techniques are applied at different points in time.
  • Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

Who is involved in software testing?

  • Developers
  • Testers
  • The SQA group

The software testing strategy must include low-level tests that verify that each small source code segment has been correctly implemented.

Verification and Validation

  • Verification refers to the set of activities that ensure that the software correctly implements a specific function.
  • Validation refers to a different set of activities that ensure that the software that has been built is traceable to the customer requirements.
  • According to Boehm:
    • Verification: Are we building the product right?
    • Validation: Are we building the right product?

Software testing is only one element of software quality assurance.

Verification and validation involve a large number of SQA activities, such as:
  • Formal technical reviews
  • Quality and configuration audits
  • Performance monitoring
  • Feasibility study
  • Documentation review
  • Database review
  • Algorithmic analysis
  • Development testing

The Software Testing Strategy

  • Testing begins by "testing in the small" and moves toward "testing in the large".
  • The main testing strategies are:
    • Unit testing
    • Integration testing
    • Validation testing
    • System testing

Strategic Issues:

  • Specify product requirements in a quantifiable manner before testing starts.
  • Specify testing objectives explicitly.
  • Identify categories of users for the software and develop a profile for each.
  • Develop a test plan that emphasizes rapid cycle testing.
  • Build robust software that is designed to test itself.
  • Use effective formal reviews as a filter prior to testing.
  • Conduct formal technical reviews to assess the test strategy and the test cases.

Unit Testing

  • The individual components are tested independently to ensure their quality.
  • The focus is on uncovering errors in design and implementation.
  • The tests that are conducted include the following (a sketch is given below):
    • Module interfaces are tested.
    • Local data are examined.
    • Boundary conditions are tested.
    • All error handling paths are tested.
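A minimal sketch of a unit test for the hypothetical validate_age function introduced earlier, covering normal behaviour, boundary conditions and an error handling path:

    import unittest

    def validate_age(age):
        # Hypothetical unit under test: accepts ages from 18 to 60 inclusive
        if not isinstance(age, int):
            raise TypeError("age must be an integer")
        return 18 <= age <= 60

    class ValidateAgeTest(unittest.TestCase):
        def test_valid_value(self):
            self.assertTrue(validate_age(30))

        def test_boundary_conditions(self):
            self.assertTrue(validate_age(18))
            self.assertTrue(validate_age(60))
            self.assertFalse(validate_age(17))
            self.assertFalse(validate_age(61))

        def test_error_handling(self):
            with self.assertRaises(TypeError):
                validate_age("thirty")

    if __name__ == "__main__":
        unittest.main()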

Integration testing

  • A group of dependent components is tested together to ensure the quality of their integration.
  • Involves building a system from its components and testing it for problems that arise from component interactions.
  • Top-down integration: develop the skeleton of the system and populate it with components.
  • Bottom-up integration: integrate infrastructure components first, then add functional components.
  • To simplify error localisation, systems should be integrated incrementally. A sketch of a stub used during top-down integration follows below.
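A minimal sketch of such a stub, assuming a hypothetical payment component that has not yet been integrated:

    def payment_gateway_stub(amount):
        # Stand-in for the real, not yet integrated payment component;
        # returns a canned response so higher-level components can be tested.
        return {"status": "approved", "amount": amount}

    def checkout(cart_total, pay=payment_gateway_stub):
        # Higher-level component under test, wired to the stub by default
        return pay(cart_total)["status"] == "approved"

    assert checkout(250) is True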

Regression Testing

  • Used to check for defects propagated to other modules by changes made to an existing program.
  • Used to reduce the side effects of the changes.
  • Three different classes of test cases are used:
    • A representative sample of existing test cases
    • Additional test cases focusing on functions likely to be affected by the change
    • Test cases focusing on the software components that have been changed

Smoke testing

  • An integration testing technique used for time-critical projects, where the project needs to be assessed on a frequent basis.
  • The activities carried out are:
    • Software components that have already been translated into code are integrated into a "build".
    • A series of tests is designed to expose errors in the "build".
    • The "build" is integrated with other builds, and the entire product is smoke tested daily.

Validation Testing

  • The integrated software is tested based on the requirements to ensure that the desired product is obtained.
  • The main focus is on uncovering errors in:
    • System input/output
    • System functions
    • System interfaces
    • User interfaces
    • System behaviour
    • System performance

System Testing

  • A series of tests conducted to fully exercise the computer-based system.
  • Involves integrating components to create a system or sub-system.
  • The various types of system tests are:
    • Recovery testing
    • Security testing
    • Stress testing
    • Performance testing

The main focus of such testing is to test:
  • System functions and performance
  • System reliability and recoverability
  • System installation
  • System behaviour in special conditions
  • System user operations
  • Hardware and software integration
  • Integration of external software with the system

Recovery Testing

  • Intended to check the system's ability to recover from failures.
  • The software is forced to fail, and it is then verified whether the system recovers properly.
  • For automated recovery, the checkpoint mechanisms and data recovery are verified.

Security Testing

  • Verifies that the system's protection mechanisms prevent improper penetration or data alteration.
  • Verifies that the protection mechanisms built into the system prevent intrusion.
  • The design goal is to make the cost of a penetration attempt greater than the value of the information that would be obtained.

Performance testing

  • Evaluates the run-time performance of the software, especially for real-time systems.
  • Involves testing the emergent properties of a system, such as performance and reliability.
  • In performance testing, resource utilization such as CPU load, throughput, response time and memory usage can be measured.
  • Example: a banking system.

Stress testing

  • Exercises the system beyond its maximum design load.
  • Stressing the system often causes defects to come to light.
  • Stressing the system tests its failure behaviour.
  • Stress testing checks for unacceptable loss of service or data.
  • A variation of stress testing is a technique called sensitivity testing.

Debugging

  • Debugging is the process of removing a defect.
  • It starts with the execution of test cases; the actual results are compared with the expected results.
  • Debugging attempts to find the cause of the lack of correspondence between the actual and expected results.

Diagram: test cases are executed to produce test results; debugging the results yields suspected causes and then identified causes; corrections are applied, followed by regression testing and new additional tests.

Common approaches in debugging are:

  • Brute force method: a "let the computer find the error" approach; the least efficient method of debugging.
  • Backtracking method: applicable to small programs; the source code is examined by looking backwards.
  • Cause elimination method: uses binary partitioning to reduce the number of locations where the error can exist, as in the sketch below.
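A minimal sketch of the binary partitioning idea behind cause elimination, assuming a hypothetical ordered list of changes and a check that passes only while the defective change is excluded:

    def first_bad_change(changes, passes_with):
        # Binary partitioning: repeatedly halve the range of candidate
        # changes until the one that introduces the failure is isolated.
        low, high = 0, len(changes) - 1
        while low < high:
            mid = (low + high) // 2
            if passes_with(changes[:mid + 1]):
                low = mid + 1   # defect lies in the later half
            else:
                high = mid      # defect lies at mid or earlier
        return changes[low]

    # Hypothetical usage: change "C3" is the one that breaks the system
    changes = ["C1", "C2", "C3", "C4", "C5"]
    print(first_bad_change(changes, lambda applied: "C3" not in applied))  # C3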
