5th Chapter Exe Test Plan

November 2019
SKILL CATEGORY 5

EXECUTING THE TEST PLAN

1. TEST CASE DESIGN

(1) Functional Test Cases

• Design specific tests for testing code
  o Program testing is functional when test data is developed from documents that specify a module's intended behavior: the actual specifications and the high- and low-level design of the code to be tested.
  o The goal is to test the specified behavior of each software feature, including its inputs and outputs.
• Functional testing independent of the specification technique
  o The behavior of a module always includes the function(s) to be computed, and sometimes runtime characteristics such as its space and time complexity.
  o Functional testing derives test data from the features of the specification.
• Functional testing based on the interface
  o Input testing
    i. Use extreme ranges of the inputs.
    ii. This procedure can generate a large quantity of data, though considering the inherent relationships among components can ameliorate the problem somewhat.
  o Equivalence partitioning
    i. Constrain the inputs to classes.
    ii. Specifications frequently partition the set of all possible inputs into classes that receive equivalent treatment; such partitioning is called equivalence partitioning.
    iii. The result of equivalence partitioning is the identification of a finite set of functions and their associated input and output results.
  o Syntax checking
    i. Every program must be tested to verify that it can handle incorrectly formatted input data; verifying this feature is called syntax checking.
• Functional testing based on the function to be computed
  o Special-value testing
    i. Selecting test data on the basis of features of the function to be computed is called special-value testing.
    ii. This procedure is particularly applicable to mathematical computations.
  o Output result coverage
    i. Use inputs that produce extreme outputs.
    ii. This ensures that modules have been checked for maximum and minimum output conditions and for all categories of error messages.
• Functional testing dependent on the specification technique
  o Algebraic
  o Axiomatic
  o State machines
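A minimal sketch of the two interface-based techniques above, assuming a hypothetical module whose only input is an integer age field valid from 0 to 120 (the validator, the field, and the ranges are all invented for illustration):

```python
# Hypothetical module under test: accepts an integer age from 0 to 120.
def valid_age(age):
    return isinstance(age, int) and 0 <= age <= 120

# Equivalence partitioning: one representative per class suffices,
# because the specification treats every member of a class the same way.
partitions = {
    "below range (invalid)": -5,
    "in range (valid)": 35,
    "above range (invalid)": 200,
}

# Input testing with extreme ranges: values at and around each boundary.
boundary_values = [-1, 0, 1, 119, 120, 121]

for label, value in partitions.items():
    print(label, "->", valid_age(value))
for value in boundary_values:
    print(value, "->", valid_age(value))
```

Together the two lists yield nine test inputs, instead of the 120-plus values a brute-force sweep of the field would require.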


  o Decision tables – the remaining specification-dependent technique

(2) Structural Test Cases

• Statement testing
  i. A test method that executes each statement in a program at least once during program testing.
• Branch testing
  i. A test method that requires each possible branch at each decision point to be executed at least once.
• Conditional testing
  i. Each clause in every condition is forced to take on each of its possible values, in combination with those of the other clauses.
• Expression testing
  i. Requires that every expression assume a variety of values during a test, in such a way that no expression can be replaced by a simpler expression and still pass the test.
• Path testing
  i. A test method satisfying the coverage criterion that each logical path through the program be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.

(3) Erroneous Test Cases

• Statistical methods
  i. Assess how faults in the program affect its failure rate in its actual operating environment.
• Error-based testing
  o Fault estimation – fault seeding is a statistical method used to assess the number and characteristics of the faults remaining in a program. Harlan Mills originally proposed this technique and called it error seeding. First, faults are seeded into a program; then the program is tested, and the number of seeded faults discovered is used to estimate the number of faults yet undiscovered. A difficulty with this technique is that the seeded faults must be representative of the yet-undiscovered faults in the program.
  o Input testing – the input of a program can be partitioned according to which inputs cause each path to be executed; these partitions are called paths. Faults that cause an input to be associated with the wrong path are called input faults; other faults are called computation faults.
  o Perturbation testing – attempts to decide what constitutes a sufficient set of paths to test.
• Fault-based testing
  o Aims at demonstrating that certain prescribed faults are not in the code. It functions well in the role of test data evaluation: test data that does not succeed in discovering the prescribed faults is not considered adequate.
  o Its variants differ in extent and breadth: local extent, finite breadth; global extent, finite breadth; local extent, infinite breadth; global extent, infinite breadth.

(4) Stress Test Cases

• Stress or volume testing needs a tool that supplements the test data. The objective is to verify that the system can perform properly when stressed, that is, when internal program or system limitations have been exceeded.
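As a sketch of stressing an internal limitation, assume a hypothetical accumulator whose documented limit is 100 line items per event (the class, the limit, and its behavior are invented): the stress test deliberately drives input volume past the limit and checks that the failure is controlled.

```python
class EventAccumulator:
    """Hypothetical program under test with a documented internal
    limitation: at most 100 line items per event."""
    MAX_LINE_ITEMS = 100

    def __init__(self):
        self.items = []

    def add_line_item(self, item):
        if len(self.items) >= self.MAX_LINE_ITEMS:
            raise OverflowError("line item limit exceeded")
        self.items.append(item)

# Stress test: push input volume past the documented limitation and
# verify the program fails in a controlled, predictable way.
acc = EventAccumulator()
failed_at = None
for i in range(150):
    try:
        acc.add_line_item(i)
    except OverflowError:
        failed_at = i
        break
print(failed_at)   # 100: the limit held and the failure was controlled
```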


• Types of limitations
  o Internal accumulation of information
  o Number of line items in an event
  o Size of accumulation fields
  o Data-related limitations
  o Field size limitations
  o Number of accounting entities
• Determining limitations
  o Identify input data used by the program
  o Identify data created by the program
  o Challenge each data element for potential limitations
  o Document limitations
  o Perform stress testing

(5) Test Scripts

• Determine testing levels
  i. Unit scripting – develop a script to test a specific unit or module.
  ii. Pseudo-concurrency scripting – develop scripts to test when two or more users are accessing the same file at the same time.
  iii. Regression scripting – determine that the unchanged portions of the system remain unchanged when the system is changed.
  iv. Stress and performance scripting – determine whether the system will perform correctly when it is stressed to its capacity.
• Develop the scripts, covering:
  i. Script through the system components
  ii. Input
  iii. Programs to be tested
  iv. Files involved
  v. On-line operating environment
  vi. Output
  vii. Manual entry of script transactions
  viii. Date setup
  ix. Secured initialization
  x. File restores
  xi. Password entry
  xii. Update
  xiii. Automated entry of script transactions
  xiv. Edits of transactions
  xv. Navigation of transactions
  xvi. Inquiry during processing
  xvii. External considerations
  xviii. Program libraries
  xix. File states and contents
  xx. Screen initialization
  xxi. Operating environment
  xxii. Security considerations
  xxiii. Complete scripts
  xxiv. Start and stop considerations
  xxv. Start; usually begins with a clear screen
  xxvi. Start; begins with a transaction code
  xxvii. Scripts end with a clear screen


  xxviii. Script contents
  xxix. Sign-on
  xxx. Setup
  xxxi. Menu navigation
  xxxii. Function
  xxxiii. Exit program
  xxxiv. Sign-off
  xxxv. Clear screen
  xxxvi. Changing passwords
  xxxvii. User identification and security rules
  xxxviii. Regrouping
  xxxix. Single-user identifications
  xl. Sources of scripting transactions
  xli. Entry of scripts
  xlii. Operations initialization of files
  xliii. Application program interface (API) communications
  xliv. Special considerations
  xlv. Single versus multiple terminals
  xlvi. Date and time dependencies
• Each scripted item should record:
  o Test item – a unique identifier of the test condition.
  o Entered by – who will enter the script.
  o Sequence – the sequence in which the actions are to be entered.
  o Action – the action or scripted item to be entered.
  o Expected result – the result expected from entering the action.
  o Operator instructions – what the operator is to do if the proper result is received, or if an improper result is returned.
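The scripted-item fields above map naturally onto a small record. The sketch below is illustrative only: an invented mock system stands in for the application, and a minimal driver plays the actions in sequence and compares actual against expected results.

```python
# One scripted step per dict, using the fields described above.
script = [
    {"test_item": "LOGIN-01", "entered_by": "tester1", "sequence": 1,
     "action": "sign-on", "expected_result": "main menu",
     "operator_instructions": "on failure, record screen and stop"},
    {"test_item": "LOGIN-02", "entered_by": "tester1", "sequence": 2,
     "action": "sign-off", "expected_result": "clear screen",
     "operator_instructions": "on failure, record screen and stop"},
]

# Mock system under test (invented): maps each action to a response.
responses = {"sign-on": "main menu", "sign-off": "clear screen"}

def run_script(script):
    """Play the scripted actions in sequence order; mark each pass/fail."""
    results = []
    for step in sorted(script, key=lambda s: s["sequence"]):
        actual = responses.get(step["action"], "error")
        verdict = "pass" if actual == step["expected_result"] else "fail"
        results.append((step["test_item"], verdict))
    return results

print(run_script(script))  # [('LOGIN-01', 'pass'), ('LOGIN-02', 'pass')]
```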

• Execute the scripts, considering:
  i. Environmental setup
  ii. Program libraries
  iii. File states and contents
  iv. Date and time
  v. Security
  vi. Multiple terminal arrival modes
  vii. Think time
  viii. Serial (cross-terminal) dependencies
  ix. Pseudo-concurrency
  x. Processing options
  xi. Stall detection
  xii. Synchronization
  xiii. Rate
  xiv. Arrival rate




• Analyze the results, examining:
  i. System components
  ii. Outputs (screens)
  iii. File content at the conclusion of testing
  iv. Status of logs
  v. Performance data (stress results)
  vi. On-screen outputs
  vii. Individual screen outputs
  viii. Multiple screen outputs
  ix. Order of outputs processing
  x. Compliance of screens to specifications
  xi. Ability to process actions
  xii. Ability to browse through data

• Maintain the scripts
  o Scripts must be maintained as the system changes: when programs, files, or screens change; when transactions are inserted, deleted, or rearranged; when fields are changed (in length or content), new, or moved; and when test cases are expanded.

(6) Use Case

• A use case is a description of how a user (or another system) uses the system being designed to perform a given task. A system is described by the sum of its use cases, and each instance or scenario of a use case will correspond to one test case.
• Build a system boundary diagram
  o An example would be a system boundary diagram for an automated teller machine (ATM) for an organization called "Best Bank" (the diagram itself is not reproduced here). A system boundary diagram identifies:
    i. Individuals/groups that manually interface with the software
    ii. Other systems that interface with the software
    iii. Libraries
    iv. Objects within object-oriented systems
• Define use cases
  o Preconditions that set the stage for the series of events that should occur for the use case
  o Results that state the expected outcomes of the above process


• Sequential narrative of the execution of the use case
• Use cases are used to:
  i. Manage (and trace) requirements
  ii. Identify classes and objects (OO)
  iii. Design and code (non-OO)
  iv. Develop application documentation
  v. Develop training
  vi. Develop test cases
• The information that must be determined to define each use case:
  o Use case name or ID – a short phrase in business terms, or an identifier, that identifies and describes the use case.
  o Actor – anything that needs to exchange information with the system.
  o Objective – a description of what the use case accomplishes, given a defined set of conditions.
  o Preconditions – the entrance criteria, or state the system must be in, for the use case to execute.
  o Results – the expected completion criteria of the use case.
  o Detailed description
  o Exceptions
• Develop test cases
  o A test case is a set of test inputs, execution conditions, and expected results developed for a particular test objective. There should be a one-to-one relationship between use case definitions and test cases. Each test case records:
    i. Test objective – the specific objective of the test case.
    ii. Test condition – one of the possible scenarios resulting from the action.
    iii. Operator action – the detailed steps the operator performs for the test condition.
    iv. Input specifications – the input necessary for the test case to execute.
    v. Output specifications – the results expected from performing the operator actions.
    vi. Pass or fail – the result of executing the test.
    vii. Comments
• Building test cases
  o Identify test resources
  o Identify conditions to be tested
  o Rank test conditions
  o Select conditions for testing
  o Determine the correct results of processing
  o Create test cases
  o Document test conditions
  o Conduct the test
  o Verify and correct

2. TEST COVERAGE

Based upon the risk and criticality associated with the application under test, the project team should establish a coverage goal during test planning. The coverage goal defines the amount of code that must be executed by the tests for the application.
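A coverage goal such as the one above reduces to a simple end-of-cycle check. The 85% goal and the statement counts below are invented for illustration; in practice both would come from the test plan and a coverage tool.

```python
def meets_coverage_goal(executed_stmts, total_stmts, goal=0.85):
    """True when the fraction of statements executed by the tests
    reaches the coverage goal set during test planning."""
    return executed_stmts / total_stmts >= goal

# Invented figures: 412 of 450 statements executed is roughly 91.6%.
print(meets_coverage_goal(412, 450))   # True
print(meets_coverage_goal(300, 450))   # False: roughly 66.7%
```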


The objective of test coverage is simply to assure that the test process has covered the application. There are many methods that can be used to define and measure test coverage, including:

• Statement coverage
• Branch coverage
• Basis path coverage
• Integration sub-tree coverage
• Modified decision coverage
• Global data coverage
• User-specified data coverage

Tools like McCabe and BattleMap support test coverage analysis, both to accelerate testing and to widen the coverage achieved by the tests. They enable the team to:

• Measure the coverage of a set of test cases
• Analyze test case coverage against system requirements
• Develop new test cases to test previously uncovered parts of the system

Even with tools to measure coverage, it is usually cost-prohibitive to design tests covering 100% of the application outside of unit testing or black-box testing methods.

3. PERFORMING TESTS

Test execution is the operation of a test cycle. Each cycle needs to be planned, prepared for, executed, and its results recorded. This section addresses the activities involved in performing tests.

• Test platforms
  o Refer to Skill Category 2.
  o For example, in testing a web-based system, the test environment needs to simulate the type of platforms that would be used in the web environment.
  o Test data and test scripts may need to run on different platforms, so the platforms must be taken into consideration in the design of the test data and test scripts.
• Test cycle strategy
  o Cycles should be planned and included in the test plan.
  o However, as defects are uncovered and change is incorporated into the software, additional test cycles may be needed.
  o Some cycles focus on the level of testing, for example unit, integration, and system testing.
  o Some cycles focus on program attributes, for example data entry, database updating and maintenance, and error processing.
• Use of tools in testing
  o Testing, like program development, generates large amounts of information and requires coordination between workers.
  o Many different levels of tooling are possible, from a word processor or spreadsheet to library support systems with report generation tools.
  o Test documentation – preparing a test plan and issuing a test analysis report are recommended. The test plan should identify test milestones and provide the testing schedule and requirements. In addition, it should include specifications, descriptions, and procedures for all tests.
  o Test drivers – when testing is performed incrementally, an untested function is combined with a tested one and the package is then tested. Such packaging can lessen the number of drivers and/or stubs that must be written.
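A minimal sketch of that incremental packaging, with all names invented: a stub stands in for a not-yet-written dependency so the unit can be driven now, and the driver feeds inputs and checks expected results.

```python
# Stub standing in for a not-yet-written dependency.
def tax_rate_stub(region):
    return 0.10   # fixed canned answer, enough to drive the unit

# Unit under test; in production it would call the real tax service.
def price_with_tax(amount, rate_lookup, region="default"):
    return round(amount * (1 + rate_lookup(region)), 2)

# Minimal test driver: feeds inputs, compares against expected results.
def driver():
    cases = [((100, tax_rate_stub), 110.0),
             ((19.99, tax_rate_stub), 21.99)]
    failures = []
    for (amount, lookup), expected in cases:
        actual = price_with_tax(amount, lookup)
        if actual != expected:
            failures.append((amount, expected, actual))
    return failures

print(driver())   # [] means every case passed
```

When the real rate service is ready, it replaces `tax_rate_stub` and the same driver re-runs unchanged, which is the point of the packaging.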


• Automated test systems and test languages
  o The testing operation is repetitive in nature, with the same code executed numerous times with different values, so efforts have been made to automate test execution. Programs that perform this function are called test drivers, test harnesses, or test systems.
• Perform tests
  o The test plan is the source of the needed information (reference: Skill Category 4).
  o The more detailed the test plan, the easier the task becomes for those executing it.
  o The roles and responsibilities should be spelled out in the test plan.
  o There should be a test readiness review BEFORE testing starts.
  o The plan should contain the procedures, environment, and tools necessary to implement and carry out test execution.
  o Test results should be recorded in a test log, either a spreadsheet or a tool.
    i. The log is used to maintain control of the test.
    ii. It should contain the test ID, test activities, start and stop times, pass or fail results, and comments.
  o Defects should be logged at this time.
  o Retest when defects have been fixed.
• Perform unit testing
  o Usually done by the developer.
  o Integration testing should be blocked by any defects found in the unit tests.
• Perform integration testing
  o Used to make sure the units fit together.
  o Used to validate the application design.
  o Used to prove that the application integrates correctly into its environment.
  o Multiple integrations may be required. For example:
    i. Test the client components, test the server components, test the network, and then integrate the client, server, and network.
    ii. Validate the links between the client and servers.
    iii. Run security controls, performance tests, and load tests on individual application components such as the database, network, and application server.
    iv. Test simple transaction completion, empty database or file conditions, output interface accuracy, and back-out situations.
• Perform system testing
  o Should start early, once a "minimal set of components" has been integrated.
  o System testing ends when the test team has determined that the system will operate successfully in production. For example:
    i. Set up the system test environment, establish the test bed, identify the system test cases, and assign the test scripts to testers for execution.
    ii. Review the test results and determine whether the problems identified are actually defects.
    iii. Record defects in the tracking system, making sure the developer responsible for fixing each defect is notified.
• When is testing complete?
















  o Test managers may use test metrics and other methods to determine whether the application is ready for deployment.
  o Other factors may include the number of open defects and their severity levels, plus the risk of putting the application into production and the risk of not putting it into production.
• General concerns
  o Software is not in a testable mode for this test level.
    i. The previous testing levels will not have been completed adequately to remove most of the defects, and the necessary functions will not have been installed, or not correctly installed, in the software.
  o There is inadequate time and resources.
    i. Because of delays in development, or failure to budget sufficient time and resources for testing, the testers will not have the time or resources necessary to test the software effectively. In many IT organizations, management relies on testing to assure that the software is ready for production before it is placed in production.
  o Significant problems will not be uncovered during testing.
    i. Unless testing is adequately planned and executed according to that plan, problems that can cause serious operational difficulties may not be uncovered. This can happen because testers at this step spend too much time uncovering defects rather than evaluating the operational performance of the application software.
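One way to make that readiness decision concrete is an exit-criteria check over the open defect counts by severity. The thresholds below are invented; in practice they would come from the test plan.

```python
# Illustrative exit criteria: no open critical defects and at most two
# open major defects; minor defects do not block deployment.
EXIT_CRITERIA = {"critical": 0, "major": 2}

def ready_for_deployment(open_defects):
    """open_defects maps a severity level to its count of open defects."""
    return all(open_defects.get(severity, 0) <= limit
               for severity, limit in EXIT_CRITERIA.items())

print(ready_for_deployment({"critical": 0, "major": 1, "minor": 7}))  # True
print(ready_for_deployment({"critical": 1, "major": 0}))              # False
```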

4. RECORDING TEST RESULTS

• A test problem is a condition that exists within the software system that needs to be addressed.
• Four attributes should be developed for all test problems:
  o Statement of condition – tells what is.
  o Criteria – tells what should be.
  o Effect – tells why the difference between what is and what should be is significant.
  o Cause – tells the reasons for the deviation. Identification of the cause is necessary as a basis for corrective action.
• Problem deviation ("what is" versus "what is desired") should document:
  o Activities involved
  o Procedures used to perform the work
  o Outputs/deliverables
  o Inputs
  o Users/customers served – who is affected
  o Deficiencies noted – what happens, and any interpretations
• Problem effect
  o The significance of the problem is judged by the effect the problem causes.
  o Efficiency, economy, and effectiveness are useful measures of effect.
• Problem cause
  o Define the problem.
  o Identify the flow of work and information leading to the condition.
  o Identify the procedures used in producing the condition.
  o Identify the people involved.


  o Recreate the circumstances to identify the cause of the condition.
  o The cause is usually nonconformity with standards, guidelines, instructions, policies, generally accepted business practices, and so on.
• Use of test results
  o Decisions need to be made as to who should receive the results of testing. Obviously, the developers whose products have been tested are the primary recipients. However, other stakeholders have an interest in the results, including:
    i. End users
    ii. The software project manager
    iii. IT quality assurance
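The four problem attributes described in this section lend themselves to a simple record whose construction refuses incomplete reports, since a missing cause leaves no basis for corrective action. The field names and sample report below are invented for illustration.

```python
REQUIRED = ("statement_of_condition", "criteria", "effect", "cause")

def make_problem_report(**attrs):
    """Build a test problem record; all four attributes are mandatory."""
    missing = [a for a in REQUIRED if not attrs.get(a)]
    if missing:
        raise ValueError("incomplete problem report, missing: %s" % missing)
    return attrs

report = make_problem_report(
    statement_of_condition="interest posted monthly",      # what is
    criteria="spec requires daily interest posting",       # what should be
    effect="customers underpaid on accrued interest",      # why it matters
    cause="batch job scheduled monthly instead of daily",  # basis for fix
)
print(sorted(report))
```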

5. DEFECT MANAGEMENT

• The primary goal is to prevent defects, or to find and fix defects as quickly as possible.
  o Defect management should be risk driven.
  o Don't spend all your time on trivial defects that no one will see and/or notice.
  o It should be integrated with the development process.
  o Where possible, the capture and analysis of defect information should be automated.
  o Defect information should be used to improve the process; this is the main reason for gathering it.
  o Flawed or imperfect processes should be altered so that they stop producing defects.
• DEFECT NAMING
  o Level 1 – name the defect
    i. Gather a sample of defects from various areas.
    ii. Identify the major phases and activities.
    iii. Sort the defects from step 1 into the phases and activities from step 2.
    iv. Categorize the defects from the sorted groups into sub-groups that have similar characteristics.
  o Level 2 – the developmental phase or activity in which the defect occurred
  o Level 3 – the category of the defect
    i. Missing
    ii. Inaccurate
    iii. Incomplete
    iv. Inconsistent
• DEFECT MANAGEMENT PROCESS

 Defect Prevention


• Minimize impact by:
  o Eliminating the risk
  o Reducing the probability of a risk becoming a problem
  o Reducing the impact if there is a problem
• Reduce the expected impact by:
  o Quality assurance
  o Training and education of the workforce
  o Training and education of customers
  o Methodology and standards
  o Defensive design
  o Defensive code

• DELIVERABLE BASELINE
  o Defects are counted against baselined deliverables: before the deliverable is baselined, an issue found in unit test is not a defect; after the deliverable is baselined, an issue found in unit test IS a defect.

• DEFECT DISCOVERY
  o Static techniques
    i. Reviews, walkthroughs, and inspections
    ii. Usually the most efficient at finding defects, and less costly
  o Dynamic techniques
    i. Testing!
  o Operational techniques
    i. Running into a system failure reveals a defect
  o Record each defect:
    i. To correct the defect
    ii. To report the status of the application
    iii. To gather statistics used to develop defect expectations in future applications
    iv. To improve the software development process
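The statistics gathered here feed estimates like the fault-seeding calculation described under erroneous test cases earlier: if testing rediscovers a known fraction of the seeded faults, the same ratio estimates how many native defects remain. The figures below are invented for illustration.

```python
def estimate_native_faults(seeded_total, seeded_found, native_found):
    """Mills-style fault-seeding estimate: assume native faults are
    detected at the same ratio as the seeded ones were, so the total
    native count is native_found scaled by seeded_total/seeded_found."""
    if seeded_found == 0:
        raise ValueError("no seeded faults found; estimate is undefined")
    return native_found * seeded_total / seeded_found

# Invented figures: 10 faults seeded, 8 rediscovered, 40 native faults
# found, giving an estimated 50 native faults (about 10 still latent).
estimated_total = estimate_native_faults(10, 8, 40)
print(estimated_total, estimated_total - 40)   # 50.0 10.0
```

The estimate is only as good as the seeding: seeded faults must be representative of the real, undiscovered ones.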


  o Severity vs. priority
    i. How bad the defect is vs. how soon it should be addressed.
    ii. Severity rules should be set early in the project and then used consistently throughout.
  o Report defects
    i. If found in test, it is usually easy to get the information to the developers.
    ii. If found in the field, hope you get enough information.
  o Acknowledge defects
    i. Developers need to acknowledge defects quickly and not try to brush them off.
  o Ways to help
    i. Instrument the code to trap the state of the environment when anomalous conditions occur.
    ii. Write code to check the validity of the system.
    iii. Analyze reported defects to discover the cause of each defect.
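Since severity and priority are independent scales, a triage queue sorts on priority first and uses severity only to break ties. The rankings and sample defects below are illustrative; as noted above, the actual rules should be fixed early in the project and applied consistently.

```python
SEVERITY = {"critical": 0, "major": 1, "minor": 2}
PRIORITY = {"fix now": 0, "fix this release": 1, "defer": 2}

defects = [
    {"id": "D-3", "severity": "minor", "priority": "fix now"},
    {"id": "D-1", "severity": "critical", "priority": "defer"},
    {"id": "D-2", "severity": "major", "priority": "fix this release"},
]

# Priority decides the order of work; severity only breaks ties,
# so a deferred critical defect sits behind an urgent minor one.
queue = sorted(defects, key=lambda d: (PRIORITY[d["priority"]],
                                       SEVERITY[d["severity"]]))
print([d["id"] for d in queue])   # ['D-3', 'D-2', 'D-1']
```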

• DEFECT RESOLUTION
  o Priorities:
    i. Critical
    ii. Major
    iii. Minor
    iv. Someone else's
