Overview of Testing Fundamentals & Principles


Deepak Kumar Rout

Key Terms and Definitions



Verification:
- Are we building the product right?
- Looks at process compliance.
- Preventive in nature.
- IEEE/ANSI definition: the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.

Validation:
- Are we building the right product?
- Looks at product compliance.
- Corrective in nature, but can be preventive as well.
- IEEE/ANSI definition: the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

Verification and Validation are complementary.



Reliability:
- The probability that a given software program performs as expected for a period of time without error.



Testing:
- Examination of the behavior of a software program over a set of sample data.
- The process of executing a system with the intent of finding defects.
- Corrective in nature.
- General definition: Testing = Verification + Validation

Error: a human mistake, made either intentionally or unintentionally.
Fault: a bug that appears in a given program.
Failure: the result of running an input sequence that triggers a bug and/or produces an output different from the specified output.

- One error can result in multiple bugs; multiple errors can result in one bug.
- One bug can cause one or more failures; multiple bugs can lead to one or multiple failures.



Requirement:
- A condition or capability that is necessary for a system to meet its objectives.



Types of requirements (The IEEE Standard Glossary of Software Engineering Terminology):
- Functional (specification of system behavior)
- Design
- Implementation
- Interface
- Performance
- Physical
- Also non-functional (operational) requirements such as availability, efficiency, performance, compatibility, reliability, quality, safety, scalability, security, usability, documentation, cost, etc.



Quality Assurance (QA):
- A set of activities designed to ensure that the development and/or maintenance process is adequate for a system to meet its objectives.
- QA activities ensure that the process is defined and appropriate.
- QA is process oriented and preventive in nature.
- Quality Assurance makes sure you are doing the right things, the right way.
- Examples: process checklists and quality audits.

Quality Control (QC):
- A set of activities designed to evaluate a developed work product.
- QC activities focus on finding defects in specific deliverables.
- Testing, reviews (?) and inspection are examples of QC activities.
- QC is corrective in nature.

Useful Quotes

- "An effective way to test code is to exercise it at its natural boundaries." - Brian Kernighan
- "Program testing can be used to show the presence of bugs, but never to show their absence!" - Edsger Dijkstra
- "Beware of bugs in the above code; I have only proved it correct, not tried it." - Donald Knuth

Q&A-1

- Distinguish between Validation, Verification and Testing.
- Give examples of preventive and corrective activities.
- Distinguish between Quality Assurance and Quality Control.
- Give examples for QA and QC.
- Preventive measures are better than corrective ones, but we still need some corrective measures. Explain.

Necessity for Software Testing

- No one can write perfect code all the time.
- Errors in commercial products cause loss of revenue.
- Failures in high-availability and safety-critical systems can cause serious, irreversible damage.
- Misunderstanding user requirements can lead to the development of perfectly good wrong products.
- Eventual loss of reputation and future business.

Objectives of Testing

- Testing does not mean "finding bugs" ONLY. The objectives of software testing are to:
  - Find errors.
  - Verify requirements.
  - Make predictions about the product(s).
- Of these, the last one is pretty difficult. Why? Because it depends on several external factors in addition to the standard factors.

Some of the Testing Criteria

- Robustness: Does the software component degrade gracefully as it approaches the limits given in the specification? Neither positive nor negative testing should result in unpredictable system behavior.
- Completeness: Does the software solve the problem completely?
- Consistency: Does the software component perform consistently, i.e., does it produce the same output each time for the same input(s)?
- Usability: Is the software easy to use?
- Testability: Is the software easily testable?
- Safety: If the software component is safety critical, is it safe to use?

Cost of Testing

- Cost of test input generation (positive)
- Cost of expected output generation (positive)
- Cost of running the tests (positive)
- Cost of comparing test results with their expected outputs (positive)
- Cost of finding bugs (negative cost)
- Cost of missing bugs (positive, and can be large)
- Cost of test management such as bug reporting, bug tracking and scheduling (positive)
- Most research papers do not consider all of these factors.
- The overall cost is usually high: it can be as much as 70% to 90% of project cost, especially for projects with a poor design and development phase.
- The cost of software testing can be reduced by automation. Almost all testing activities (e.g., test input generation, expected output generation, test case reuse, test running) can be automated, but many of these techniques are still highly manual in practice.

Q&A-2

- What are the objectives of testing?
- What are the external factors and standard factors which influence testing?
- How do you test the robustness of a software product?
- Give a couple of examples each for positive and negative costs.
- Explain how/why "cost of missing bugs" is considered a positive cost.
- Should we consider "setup of the test environment" a positive or negative cost?
- What are the methods of reducing the cost of testing other than automation?

Levels of Testing

- Unit/module/component test
  - Test individual units separately.
  - Deals with finding logic errors, syntax errors, etc.
  - Verify that the component adheres to its specification.
  - Done by programmers.
  - Generally all white box.
  - Automation desirable for repeatability.
- Integration test
  - Verify component interactions to make sure they are correct.
  - Find interface defects.
  - Must carry out regression testing and smoke testing.
  - Done by programmers as they integrate code into the code base.
  - Generally white box, may be some black box.
  - Automation desirable for repeatability.
- System test
  - Verify the overall system functionality. The target computer system is also exercised, though hardware testing is not in scope.
  - Recommended that it is done by an external test group.
  - Mostly black box, so that testing is not 'corrupted' by too much knowledge.
  - Test automation desirable.




- Alpha testing (Validation)
  - Testing with select customers, within the organization.
- Beta testing (Validation)
  - Testing with select customers, external to the organization.
- Acceptance testing (Validation)
  - Generally done by the customer in their environment.

More about the Levels of Testing

- Unit Testing
  - Individual components are tested.
  - It is a path test: the focus is on a relatively small segment of code, aiming to exercise a high percentage of the internal paths.
  - Disadvantages: the tester may be biased by previous experience, and the test values may not cover all possible values.
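To make the unit level concrete, here is a minimal, hypothetical C++ sketch (the slides contain no code; the clamp function and its tests are invented for illustration). Each assertion targets one internal path of the unit.

    #include <cassert>

    // Hypothetical unit under test: clamps a value into [lo, hi].
    int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;   // path 1
        if (value > hi) return hi;   // path 2
        return value;                // path 3
    }

    int main() {
        // One test case per internal path, so all three paths are exercised.
        assert(clamp(-5, 0, 10) == 0);   // below range -> path 1
        assert(clamp(50, 0, 10) == 10);  // above range -> path 2
        assert(clamp(7, 0, 10) == 7);    // inside range -> path 3
        return 0;  // all assertions passed
    }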



- Integration Testing
  - Top-down integration test
  - Bottom-up integration test

Top-down Integration Test

- The control program is tested first; modules are integrated one at a time.
- Emphasis is on interface testing.
- Advantages:
  - No test drivers needed.
  - Interface errors are discovered early.
  - Modular features aid debugging.
- Disadvantages:
  - Test stubs are needed.
  - Errors in critical modules at low levels are found late.

Bottom-up Integration Test

- Emphasis is on module functionality and performance.
- Allows early testing aimed at proving feasibility.
- Advantages:
  - No test stubs are needed.
  - Errors in critical modules are found early.
- Disadvantages:
  - Test drivers are needed.
  - Interface errors are discovered late.

System Test

- Conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
- Falls within the scope of black-box testing.
- Takes as its input all of the "integrated" software components that have successfully passed integration testing, together with the software system itself integrated with any applicable hardware system(s).
- System testing is the first time that the entire system can be tested against the Functional Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS).
- The focus is an almost destructive attitude: test not only the design, but also the behavior and even the believed expectations of the customer.
- Intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
- Different types of testing falling under system testing: functional testing, user interface test, usability test, compatibility test, user help test, security test, performance test, sanity test, regression test, reliability test, recovery test, installation test, maintenance test, accessibility test.

Q&A-3

- What other quality control activities help in removing bugs before test activities commence?
- Explain when integration testing can become black-box testing.
- Explain why an external test group is recommended to carry out system testing.
- What is the need for further testing beyond system testing?
- Give instances of the need for top-down and bottom-up integration tests.
- Unit -> Integration -> System testing is the normal order followed. Can we change this order, or can we bypass either of the first two phases?

White-Box Testing

- A test-case design method that uses the control structure of the procedural design to derive test cases:
  - Guarantee that all independent paths within a module have been exercised at least once.
  - Exercise all logical decisions on their true and false sides.
  - Execute all loops at their boundaries and within their operational bounds.
  - Exercise internal data structures to assure their validity.
- You know the code:
  - Given knowledge of the internal workings, you thoroughly test what is happening on the inside.
  - Close examination at the procedural level of detail.
  - Logical paths through the code are tested: conditions, loops, branches.
  - It is impossible to thoroughly exercise all paths; exhaustive testing grows without bound.
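To make the white-box goals concrete, here is a minimal, hypothetical C++ sketch (not from the original slides): the four test cases exercise the loop at zero, one and many iterations, and drive the if decision to both its true and false outcomes.

    #include <cassert>
    #include <vector>

    // Hypothetical unit under test: sum of only the positive elements.
    int sumPositives(const std::vector<int>& v) {
        int sum = 0;
        for (int x : v) {          // loop: exercise 0, 1 and many iterations
            if (x > 0) sum += x;   // decision: exercise both true and false sides
        }
        return sum;
    }

    int main() {
        assert(sumPositives({}) == 0);          // loop executes zero times
        assert(sumPositives({5}) == 5);         // one iteration, decision true
        assert(sumPositives({-3}) == 0);        // one iteration, decision false
        assert(sumPositives({4, -1, 2}) == 6);  // many iterations, both sides
        return 0;
    }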

Black-Box Testing

- Focuses on the functional requirements of the software. It is not an alternative to white-box testing; rather, it complements the white-box technique by finding:
  - Runtime errors (missing function definitions, etc.)
  - Interface errors
  - Performance errors
  - Initialization and termination errors
- You know the functionality:
  - Given that you know what the software is supposed to do, you design tests that make it do what you think it should do.
  - From the outside you are testing its functionality against the specifications/requirements.
  - For software this means testing the interface: what is input to the system, what you can do from outside to change the system, and what is output from the system.
  - Test the functionality of the system by observing its external behaviour.

Function Testing (Black Box)

- Designed to exercise the product against its external specifications.
- Testers are not biased by knowledge of the program's design.
- Disadvantages:
  - The need for explicitly stated requirements.
  - Only covers a small portion of the possible test conditions.

Regression Testing

- Tests the effects of newly introduced changes on all the previously integrated code.
- The common strategy is to accumulate a comprehensive regression bucket, but also to define a subset: the full bucket is run only occasionally, while the subset is run against every spin (build).
- Disadvantage: deciding how much of a subset to use and which tests to select.
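A toy illustration of the full-bucket/subset idea (entirely hypothetical; the slides name no tooling): keep every regression test in one table, tag the per-spin subset, and choose which set to run per build.

    #include <cstdio>

    // Hypothetical regression bucket: each entry is a test function
    // plus a flag marking whether it belongs to the per-spin subset.
    struct RegressionTest {
        const char* name;
        bool (*run)();
        bool inSubset;   // true: run on every spin; false: full bucket only
    };

    bool testLogin()  { return true; }  // placeholder test bodies
    bool testReport() { return true; }
    bool testExport() { return true; }

    static const RegressionTest kBucket[] = {
        {"login",  testLogin,  true},   // critical path: in the subset
        {"report", testReport, false},
        {"export", testExport, true},
    };

    // Run either the per-spin subset or the occasional full bucket.
    int runRegression(bool fullBucket) {
        int failures = 0;
        for (const RegressionTest& t : kBucket) {
            if (!fullBucket && !t.inSubset) continue;  // skip non-subset tests
            bool ok = t.run();
            std::printf("%-8s %s\n", t.name, ok ? "PASS" : "FAIL");
            if (!ok) ++failures;
        }
        return failures;
    }

    int main() { return runRegression(/*fullBucket=*/false); }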

Stress and Load Testing

- Stress Testing
  - Aimed at examining the behavior of the system under varied stress conditions.
  - Stress can be varied gradually, with definite incremental additions, up to the maximum allowable limit.
  - Normally the system is kept under the chosen stress condition for a longer duration.
- Load Testing
  - Aimed at examining the behavior of the system under half/full load conditions, or any other specific percentage of full load.
  - The load is applied suddenly and the system's response is observed.
  - Observations may include memory leaks, % utilisation of CPU and memory, etc.
  - These tests also help in benchmarking the system capacity.
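As a rough, hypothetical sketch of the load-testing idea (all names and numbers are invented), a burst of requests can be timed to benchmark throughput:

    #include <chrono>
    #include <cstdio>

    // Hypothetical operation under load (stands in for a real request handler).
    void handleRequest() {
        volatile long sink = 0;
        for (int i = 0; i < 1000; ++i) sink += i;  // simulated work
    }

    int main() {
        const int kRequests = 100000;  // the "suddenly applied" load
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < kRequests; ++i) handleRequest();
        auto end = std::chrono::steady_clock::now();

        double seconds = std::chrono::duration<double>(end - start).count();
        // Throughput is one simple capacity benchmark; a real load test would
        // also sample CPU/memory utilisation and watch for leaks over time.
        std::printf("%d requests in %.3f s (%.0f req/s)\n",
                    kRequests, seconds, kRequests / seconds);
        return 0;
    }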

Data Flow Testing

- Tests the use of variables along different paths of program execution.
- The most common types of errors it catches are variables used before they are initialized or declared.
- Global variables cause more problems than local variables.
- Very expensive to perform; used mainly to test high-performance and high-risk applications.
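Below is a hypothetical C++ fragment of the kind data-flow analysis flags: on one execution path the variable is used before any definition of it has executed.

    #include <cstdio>

    // Data-flow defect: on the path where 'denominator == 0', 'ratio' is
    // used (returned) before any definition of it has been executed.
    double ratioOrZero(int numerator, int denominator) {
        double ratio;                      // declared but not initialized
        if (denominator != 0) {
            ratio = static_cast<double>(numerator) / denominator;  // def
        }
        return ratio;  // use: undefined value on the denominator == 0 path
    }

    int main() {
        std::printf("%f\n", ratioOrZero(3, 4));  // fine: the def reaches the use
        std::printf("%f\n", ratioOrZero(3, 0));  // bug: use without a def
        return 0;
    }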

Equivalence Partitioning

- A functional (black-box) testing criterion.
- Applicable when the inputs are independent, i.e., there are no input combinations.
- Helps to reduce the number of test cases to a minimum.
- Helps to come up with the right set of test cases to cover all possible scenarios.
- To be supplemented by Boundary Value Analysis.
- How is EP done? (See the sketch below.)
  - Divide the input space into finite partitions.
  - For each partition defined, create a set of test cases.
  - Develop test cases covering as many partitions as possible.
  - For each invalid partition, develop additional test cases.
  - Use a coverage matrix to keep track of the test cases.
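A minimal, hypothetical sketch of EP in C++ (isValidAge and the [0, 120] range are invented): the input space splits into one valid and two invalid partitions, and one representative value is tested per partition.

    #include <cassert>

    // Hypothetical function under test: accepts ages in the range [0, 120].
    bool isValidAge(int age) {
        return age >= 0 && age <= 120;
    }

    int main() {
        // Partition 1 (invalid): age < 0         -> representative value -5
        // Partition 2 (valid):   0 <= age <= 120 -> representative value 30
        // Partition 3 (invalid): age > 120       -> representative value 200
        assert(!isValidAge(-5));   // one test case stands in for all of P1
        assert( isValidAge(30));   // one test case stands in for all of P2
        assert(!isValidAge(200));  // one test case stands in for all of P3
        return 0;
    }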

Boundary Value Analysis

- An important technique to detect errors occurring at component interfaces; several errors tend to occur when components interact.
- Programmers tend to look at how to implement their code correctly, and generally overlook how to handle exceptional cases that MAY occur.
- As an example, consider an API that tests whether a point lies in a rectangle or not: the CRect class has an API bool PtInRect(POINT p) that accepts a POINT input parameter and returns a BOOL depending on the position of the point with respect to the rectangle.

Boundary Value Testing

- From a programmer's point of view the implementation is straightforward: check if the point is within the co-ordinates of the rectangle and return the appropriate value.
- Some special cases:
  - The point is ON the rectangle.
  - The point is one of the vertices itself (a special case of the above).
- What should happen in these cases? Have these cases been taken care of by the developer? Boundary testing helps uncover problems of these types, as in the sketch below.
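Here is a self-contained C++ sketch of those boundary cases. The Rect type is a simplified stand-in for MFC's CRect (not the real class); MFC documents PtInRect as counting points on the left/top edges as inside and points on the right/bottom edges as outside, and the tests below pin down exactly that convention.

    #include <cassert>

    struct Point { int x, y; };

    // Simplified stand-in for CRect::PtInRect: left/top edges are inside,
    // right/bottom edges are outside, per the documented MFC convention.
    struct Rect {
        int left, top, right, bottom;
        bool ptInRect(Point p) const {
            return p.x >= left && p.x < right &&
                   p.y >= top  && p.y < bottom;
        }
    };

    int main() {
        Rect r{0, 0, 10, 10};
        assert( r.ptInRect({5, 5}));    // interior point: clearly inside
        assert( r.ptInRect({0, 0}));    // top-left vertex: ON the rectangle
        assert( r.ptInRect({0, 5}));    // on the left edge
        assert(!r.ptInRect({10, 5}));   // on the right edge: outside by convention
        assert(!r.ptInRect({10, 10}));  // bottom-right vertex: outside
        assert(!r.ptInRect({11, 5}));   // just beyond the boundary
        return 0;
    }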

Random Testing

- Select a random input from a given domain.
- Either the input or the output domain can be used, but most of the time it is the input domain.
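A small, hypothetical random-testing sketch (reusing the invented clamp function from earlier): inputs are drawn at random from the domain and checked against a property that must hold for every input.

    #include <cassert>
    #include <cstdlib>

    // Hypothetical unit under test.
    int clamp(int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); }

    int main() {
        std::srand(42);  // fixed seed so any failure is reproducible
        for (int i = 0; i < 1000; ++i) {
            // Draw a random input from the chosen domain [-500, 499].
            int v = std::rand() % 1000 - 500;
            int r = clamp(v, 0, 100);
            // Oracle: the result must always lie within [0, 100].
            assert(r >= 0 && r <= 100);
        }
        return 0;
    }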

Positive and Negative Testing

- Positive testing is a black-box approach where the tests are chosen according to the specs; simple and complex combinations of VALID test cases and functions are used.
- In contrast, negative (stress) testing has the goal of showing how the program reacts to abnormal, and even unspecified, inputs or events: INVALID inputs are used and exception handling is taxed to the limit.
- Crash testing and unfriendly users are part of this testing; the crash test tries to bring the system down, with the environment and test cases made as abnormal as possible.

Q&A-4

- Describe the objectives of white-box and black-box testing.
- At what point in the development life cycle can one prepare the test plan for white-box and black-box testing?
- What does the regression test suite contain? When do we normally run the regression test suite?
- How do stress and load testing help in benchmarking the performance/capacity of a system?
- By making the coverage of positive testing 100%, can we avoid negative testing?
- What do we mean by test coverage, and how can we achieve 100% coverage?

V Model: Distinguishes between development and verification activities

[V-model diagram: each development phase on the left arm maps to a test planning activity on the right arm]
- Requirements Elicitation -> Acceptance Testing plan
- Analysis -> System Testing plan
- Design -> Integration Testing plan
- Implementation -> Unit Testing plan

Test Planning

A test plan typically contains:
- Test plan identifier
- Introduction
- Test items
- Features to be tested
- Features not to be tested
- Approach (test strategy)
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Approvals

Test Strategy

- Defining the purpose and the scope of test activities
- Defining the levels of testing to be carried out
- Estimating the effort, in persons, to be involved at each level
- Defining the responsibilities at each level
- Defining the input and output criteria at each level
- Defining the sequence of test execution (test priorities) in terms of test types
- Defining the suspension and re-start criteria
- Defining the test resource requirements

Test Case Definition

A test case should contain the following attributes (see the sketch below):
- Test case identity
- Title
- Pre-conditions
- Test setup
- Input parameters
- Procedure
- Expected output
- Special observations
- Mapping to requirements (optional)
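One hypothetical way to encode these attributes in code (the field names are illustrative, not prescribed by the slides):

    #include <string>
    #include <vector>

    // Illustrative record holding the test case attributes listed above.
    struct TestCase {
        std::string id;                          // test case identity, e.g. "TC-001"
        std::string title;
        std::vector<std::string> preconditions;
        std::string setup;
        std::vector<std::string> inputs;         // input parameters
        std::vector<std::string> procedure;      // ordered steps
        std::string expectedOutput;
        std::string specialObservations;
        std::vector<std::string> requirementIds; // optional mapping to requirements
    };

    int main() {
        TestCase tc{"TC-001", "Valid age accepted",
                    {"System installed"}, "Load the age entry form",
                    {"age=30"}, {"Enter age", "Submit"},
                    "Age accepted", "", {"REQ-12"}};
        return tc.id.empty() ? 1 : 0;
    }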

Test Result Summary

A test result summary should contain the following information:
- List of test cases with sequence number, TC ID, title, estimated time, actual time taken, observations made, procedure, expected output and result
- Version number and name of the build used for testing
- Version number and name of the components deployed
- Version number and name of the other related components used in setting up the test platform
- Information on the defects or MRs/CRs raised against the failed test cases

Product release types

- General Availability Release (GA)
  - A major release meant for multiple customers.
  - May contain one or more major features and/or a large number of MRs/CRs resolved.
  - Multiple components may get modified.
  - Multiple test cycles will be required.
  - The full regression test suite will be run.
- Feature Introduction Release (FIR)
  - A major release with a specific feature being implemented; meant for specific customers only.
  - Multiple components may get modified.
  - Multiple test cycles may be required.
  - The full regression test suite may be run.
- Patch/Problem Resolution Release (PRR)
  - Meant for a specific customer.
  - Involves providing fixes for a specific set of MRs/CRs.
  - May be a periodic or a planned release.
  - Normally one test cycle will be involved, with either a full or truncated regression test suite.
- Emergency Release (ER)
  - Meant for a specific customer.
  - Involves providing an urgent fix for critical MRs/CRs; done on an emergency basis when a customer raises an alarm.
  - Normally a truncated test cycle will be run, focusing on verifying the MR/CR and running a sample regression test suite.
  - Difficult to adhere to any quality process because of the time-critical nature of the release.

A typical Test Execution Cycle

1. Unit Testing
   - Check 1: If the pass criteria are met, proceed with Check 2; else go back to step 1.
   - Check 2: If the test readiness review approves, go to step 2; else wait.
2. Integration Testing
   - Check 1: Is the condition for suspension met? If yes, wait; else proceed with Check 2.
   - Check 2: If the pass criteria are met, proceed with Check 3; else go back to step 2.
   - Check 3: If the test readiness review approves, go to step 3; else wait.
3. System Testing
   - Check 1: Is the condition for suspension met? If yes, wait; else proceed with Check 2.
   - Check 2: If the pass criteria are met, proceed with Check 3; else go back to step 3.
   - Check 3: Are all testing cycles completed? If yes, go to step 4; else repeat step 3.
4. Acceptance Testing

Test Priorities based on the release type

1. Install the build on the test machine.
2. If installation is successful, carry out basic sanity checks.
3. If the sanity checks meet the pass criteria, carry out either the functionality test (for GA or FIR) or the MR/CR verification (for PRR or ER); else reject the build.
4. Carry out the stress and load tests in parallel with the functionality test, if the test environment permits parallel running of tests.
5. Carry out regression testing (full for GA/FIR, customized for PRR/ER).
6. Check if the number of test cycles has reached the maximum (for GA/FIR). If yes, declare testing completed and go to step 9; else repeat from step 3.
7. If any critical MR/CR is raised during steps 3 to 5, suspend testing if it is a test blocker; else continue with the next test cases.
8. If any new code submission is received during testing (occasionally for ER), carry out steps 1 and 2; if the pass criteria are met, continue from the point where you had stopped before receiving the new code submission.
9. Prepare the test report.

Risk Identification, Mitigation & Contingencies

- Risks are of the following nature:
  - Frequent changes to requirements
  - Ambiguous/incomplete/incorrect requirements
  - Errors in effort estimation
  - Lack of adequate resources
  - Delay in procuring critical inputs or materials
  - Loss of bandwidth due to people's absence
  - Lack of domain expertise
  - Lack of basic skills and experience
  - Delay in providing critical trainings
  - Schedule slippages in earlier phases or cycles
  - Too many critical and time-consuming bugs during testing
  - Too many code submittals
  - Inadequate/improper reviews of the test plan
- Mitigation: the steps taken in advance to prevent a risk.
- Contingency: the action to be taken upon occurrence of a risk.

Test Cycle Time Reduction

- What is meant by test cycle time reduction? An attempt to cut down the turnaround time of test execution in a given release by:
  - Implementing both test automation and process automation
  - Deploying multiple resources and running in parallel those test cases which are independent (see the sketch below)
  - Customizing the test checklist for a given release on the basis of priority
  - Separating out NOT SO IMPORTANT test cases (in the case of PRR/ER)
- What are the benefits of reducing the test cycle time?
  - Achieving quick time to market
  - Better utilisation of available resources
  - Eventual reduction of people's bandwidth, which can be channelised to other important tasks
  - Increase in test coverage and hence improved product quality
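A toy sketch of running independent test cases in parallel with C++ threads (the tests are placeholders; real test cases would need to be genuinely independent, i.e., share no state):

    #include <thread>
    #include <cassert>

    // Hypothetical independent test cases: they share no state, so they
    // can safely run in parallel to shorten the test cycle.
    void testFeatureA() { assert(1 + 1 == 2); }
    void testFeatureB() { assert(2 * 2 == 4); }
    void testFeatureC() { assert(10 % 3 == 1); }

    int main() {
        // Launch each independent test on its own thread.
        std::thread a(testFeatureA);
        std::thread b(testFeatureB);
        std::thread c(testFeatureC);

        // Wall-clock time is now roughly that of the slowest test,
        // instead of the sum of all three run back to back.
        a.join();
        b.join();
        c.join();
        return 0;
    }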

Q&A-5

- What is test planning? Why is it required?
- When is the test plan document normally prepared?
- What are the attributes of a test case?
- Why is it difficult to follow a quality process strictly in the case of emergency releases?
- Why do we need multiple test cycles in the case of major releases?
- What is the significance of conducting a Test Readiness Review (TRR) before commencing system testing?
- What does a regression test checklist contain? Why do we need to run it?
- What are the conditions that trigger the suspension of testing?
- Why do we need to manage risks?
- Why do we need to prioritise the test types?
- How can we increase test coverage when we want to reduce the test cycle time?

Responsibilities of a tester

- Participate in the reviews of requirement documents, design documents, and test strategy and test plan documents, and validate them.
- Participate in code walkthroughs if involved in integration testing.
- Contribute towards maintaining requirement traceability (traceability of requirements across SRS/FFD, HLD/LLD, test plan and test results).
- Try to achieve 100% test coverage.
- Own the responsibility of achieving the highest level of quality.
- Be responsible for proper test planning and de-risking.
- Co-ordinate closely with the development team.
- Escalate issues early and judiciously.

Q&A-6

- What are the differences between a formal and an informal review process?
- Explain what is meant by a formal review.
- What are the benefits of holding FTRs (Formal Technical Reviews)?
- Why do testers need to participate in reviews of design documents and code?
- What is the importance of good co-ordination between the test and development teams?

Software Test Metrics

- Unit Test Level
  - Defect tracking
  - Code complexity
  - Code coverage (test effectiveness ratio): statement coverage, branch/decision coverage, decision-condition coverage
- Integration Test Level
  - Error rates in design
  - Error rates in implementation
  - Error rates in test design
  - Test execution/progress metrics
  - Time/cost metrics
  - Requirements churn metrics
- System Test Level
  - Test completeness metrics
  - Defect arrival rate
  - Cumulative defects by status
  - Defect closure rate
  - Reliability prediction
  - Schedule tracking metrics
  - Staff and resource tracking metrics
- Validation Test Level
  - Defect arrival rate
  - Cumulative defects by status
  - Defect closure rate
  - Defect backlog by severity
  - Cost of quality metrics (COQ & COPQ)
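To illustrate why branch/decision coverage is a stronger effectiveness measure than statement coverage, consider this hypothetical fragment: the first test alone already executes every statement, yet never drives the decision to its false side.

    #include <cassert>

    // Hypothetical unit: doubles positive inputs, passes others through.
    int adjust(int x) {
        int result = x;
        if (x > 0)
            result = 2 * x;   // a test with x = 5 executes every statement...
        return result;
    }

    int main() {
        assert(adjust(5) == 10);  // 100% statement coverage already
        // ...but only the test below drives the decision to its false side,
        // which is what branch/decision coverage additionally demands.
        assert(adjust(-3) == -3);
        return 0;
    }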

Attitude(s) that make a Good Tester

- Independence
- Customer perspective
- Testing intended functionalities
- Testing unintended functionalities
- Professionalism

- Independent
  - Independent of the developer. Why? Developers tend to be biased towards their own mistakes.
- Customer Perspective
  - Must be able to think from a customer's perspective. Why? Ultimately the customer is the one who is going to use the product, and they bring in the revenue; so a good tester must be able to think from a customer's perspective.
- Testing Intended Functionality
  - One of the basic purposes of testing: a good tester tests each and every intended functionality to make sure that the software is exactly what the client wanted.
- Testing Unintended Functionality
  - Sometimes called break-it testing (dirty testing): the tester intentionally tries to make the code fail. Helps in detecting special cases where the code may fail.
- Professionalism
  - Adhere to the principle of doing the right thing the right way.
  - Do not be influenced by oral explanations or justifications given by developers into not reporting a defect.
  - Report a defect or bug NOT for the sake of reporting BUT for the sake of getting it rectified.
  - Do not make any assumptions in either reporting a bug or hiding it.
  - Provide adequate and relevant information regarding the defect, so that it becomes helpful for the developers to fix it within the expected time.
  - Confirm thoroughly before raising a bug.

Testing - Misconceptions

- A testing job is inferior to a development job.
- A testing job is not as challenging as a development job.
- Since the focus of testing is on finding defects, testing personnel will eventually become less liked by others.
- Testing personnel are paid less compared to development personnel.
- Testing comes as the last phase in the SDLC, and hence much time is wasted waiting for code drops.
- Career growth is not promising in the testing field.


The End

