Build The Test Environment
Uploaded by api-19934187, July 2020 (PDF)

Knowledge Domain 4, Section 1

Selecting Test Tools and Techniques 

Structural vs. Functional Testing
• Functional: tests what the system does
• Structural: evaluates how the system performs



Dynamic vs. Static Testing
• Dynamic: the program test phase
• Static: the requirements and design phases



Manual vs. Automated Testing



Selecting Tools and Techniques
• Test factors appropriate to the application
• Test concerns/risks based on the development life cycle
• Tactics:
  – Choose between a structural or functional technique
  – Determine a dynamic or static method
  – Choose between a manual or automated tool

Selection flow (originally a flowchart):
• Select the test factor
• Determine the SDLC phase
• Identify the criteria to test
• Select the type of test: system structural, system functional, or unit test technique
• Select the technique
• Select the test method: static or dynamic
• Select a manual or automated tool

Structural System Testing Techniques
• Stress testing
• Execution testing
• Recovery testing
• Operations testing
• Compliance testing
• Security testing

Stress Testing Technique
• Simulates the production environment
• Uses normal or above-normal volumes of transactions
• Verifies the system is structurally able to process large volumes of data
• Verifies system capacity has sufficient resources to meet turnaround times
• Verifies users are able to perform their tasks within turnaround times

How to Use Stress Testing
• Test data generators
• Test transactions created by the test group
• Test transactions previously processed in the production environment
• Include error conditions
• Use standard documentation
• Use real-world operators
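A test data generator along these lines can be sketched in Python. The transaction fields and the error-injection scheme below are invented, purely for illustration:

```python
import random

def generate_transactions(count, error_rate=0.05, seed=42):
    """Generate synthetic transactions for a stress run; a small
    fraction are deliberately malformed to include error conditions."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    batch = []
    for i in range(count):
        txn = {"id": i, "amount": round(rng.uniform(0.01, 10_000.0), 2)}
        if rng.random() < error_rate:
            txn["amount"] = -1.0  # injected error condition
        batch.append(txn)
    return batch

# Above-normal volume: far more transactions than a typical day.
batch = generate_transactions(50_000)
bad = sum(1 for t in batch if t["amount"] < 0)
```

Feeding such a batch through the system exercises both its capacity and its error handling in one run.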

Execution Testing Technique
• Determine whether the system can meet specific performance criteria
• Determine the performance of the system structure
• Verify optimum use of hardware and software
• Determine response time to on-line requests
• Determine transaction processing turnaround time

How to Use Execution Testing
• Can be conducted in any phase of the development life cycle
• Evaluate a single aspect, a critical routine, or the ability to satisfy performance criteria
• Use hardware or software monitors
• Use simulation models
• Create quick-and-dirty programs to evaluate the performance of the completed system
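A "quick and dirty" software monitor of the kind the notes mention can be a few lines of Python; the `lookup` routine and the 10 ms criterion are hypothetical stand-ins:

```python
import time

def average_runtime(fn, *args, repetitions=1_000):
    """Return the average wall-clock time of fn(*args) in seconds."""
    start = time.perf_counter()
    for _ in range(repetitions):
        fn(*args)
    return (time.perf_counter() - start) / repetitions

def lookup(table, key):            # routine under test (illustrative)
    return table.get(key)

table = {i: i * i for i in range(10_000)}
avg = average_runtime(lookup, table, 5_000)
assert avg < 0.01                  # performance criterion: < 10 ms per request
```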

Recovery Testing Technique
• Verifies the ability to restart operations after the integrity of the application has been lost
• Verifies the recovery process and the effectiveness of its component parts
• Verifies that adequate backup data is preserved
• Backup data is stored in a secure location
• Recovery procedures are documented
• Recovery personnel are assigned and trained
• Recovery tools are developed and available

How to Use Recovery Testing
• Evaluate the adequacy of procedures, methods, tools, and techniques
• Cause a failure and test recoverability
• Test one segment at a time rather than inducing multiple failures
• Treat the test as a real-world disaster to better evaluate human reactions and recovery procedures

Operations Testing Technique
• Verify, prior to production, that operating procedures and staff can properly execute the application
• Determine the completeness of documentation
• Ensure support mechanisms are prepared and function properly
• Evaluate the completeness of operator training
• Ensure that operators can operate the system using the prepared documentation

How to Use Operations Testing
• Evaluate both the process and the execution of the process
• Evaluate operational requirements during the requirements phase
• Evaluate operational procedures during the design phase
• Perform in conjunction with other tests
• Operators should not receive outside aid during testing
• Test before placing the system in production status

Compliance Testing Technique
• Verifies that the application was developed in accordance with standards, procedures, and guidelines
• Determines that systems development and maintenance methodologies are followed
• Evaluates the completeness and reasonableness of application system documentation

How to Use Compliance Testing
• Compare each prepared document or program to the standards for it
• Use an inspection process
• Effectiveness depends on management's desire to follow procedures and enforce standards
• Staff may not want to follow poor standards

Security Testing Technique
• Verify that confidential information remains secure
• Assure third parties that their data is protected
• Determine that adequate attention is devoted to identifying security risks
• Determine that a realistic definition and enforcement of access to the system is implemented
• Determine that sufficient expertise exists to perform the tests
• Conduct tests that ensure implemented security measures function properly

How to Use Security Testing
• A highly specialized part of the test process
• Identify security risks and the potential loss associated with them
• Low risks can be tested by IT personnel; high risks require more specialized help
• Test both before and after the system is operational
• Base the extent of testing on the estimated sophistication that might be used to penetrate security

Functional System Testing Techniques
• Requirements testing
• Regression testing
• Error-handling testing
• Manual-support testing
• Intersystem testing
• Control testing
• Parallel testing

Requirements Testing Technique
• Verify that the system can perform its functions correctly and sustains correct functionality over a continuous period of time
• Ensure user requirements are implemented
• Verify that application processing complies with the organization's policies and procedures
• Include secondary user needs
• Verify that the system processes accounting information in accordance with accepted procedures
• Verify that the application processes information in accordance with government regulations

How to Use Requirements Testing
• Primarily performed through the creation of test conditions and functional checklists
• Test conditions are generalized during requirements and become more specific as the SDLC progresses
• More effective when created from user requirements
• Test through all phases of the life cycle, from requirements to operations and maintenance

Regression Testing Technique
• Retests previously tested segments to ensure proper functionality after a change is made to other parts of the application
• Assures that all aspects of the system remain functional after testing
• Determines whether systems documentation remains current
• Determines that system test data and conditions remain current

How to Use Regression Testing
• Retest unchanged segments of the system
• Rerun previously executed tests to ensure the same results
• Weigh cost against benefit to avoid effort expended for minimal payback
• Use when there is a high risk of new changes affecting unchanged areas
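Rerunning previously executed tests against stored results can be sketched as follows; the `add` routine and the baseline pairs are invented examples:

```python
# Baseline: input/expected-result pairs captured when the earlier
# version of the module passed testing.
BASELINE = {(2, 3): 5, (10, -4): 6, (0, 0): 0}

def add(a, b):
    """The (illustrative) changed module being retested."""
    return a + b

def run_regression(fn, baseline):
    """Rerun previously executed tests; report any result that changed."""
    return [(args, want, fn(*args))
            for args, want in baseline.items()
            if fn(*args) != want]

# An empty failure list means the unchanged behavior still holds.
assert run_regression(add, BASELINE) == []
```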

Error-Handling Testing Technique
• Determine the ability of the system to properly process incorrect transactions
• Determine that all reasonably expected error conditions are recognizable by the system
• Determine that procedures provide a high probability that errors will be corrected
• Determine that reasonable control is maintained over errors during the correction process

How to Use Error-Handling Testing
• Requires knowledgeable people to anticipate what can go wrong with the system
• Brainstorm what might go wrong with the application
• Organize by function to create a logical set of tests
• Test the introduction and processing of the error, the control condition, and the reentry of the corrected condition
• Testing should occur throughout the system development life cycle
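One test per brainstormed error condition, plus reentry of the corrected value, might look like this; the `process_amount` routine is a hypothetical example:

```python
def process_amount(raw):
    """Parse a transaction amount, rejecting incorrect input explicitly.
    (Illustrative routine; the error cases were brainstormed in advance.)"""
    try:
        value = float(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {raw!r}")
    if value <= 0:
        raise ValueError(f"amount must be positive: {value}")
    return value

# One test case per anticipated error condition.
for bad in ["abc", None, "-5", "0"]:
    try:
        process_amount(bad)
        raise AssertionError(f"{bad!r} was not rejected")
    except ValueError:
        pass  # the error was recognized, as required

assert process_amount("12.50") == 12.5  # corrected transaction re-enters
```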

Manual-Support Testing Technique
• Involves all functions performed by people in preparing data for, and using data from, automated applications
• Verify that manual-support procedures are documented and complete
• Determine that manual-support responsibility is assigned
• Determine that manual-support people are adequately trained
• Determine that manual support properly interfaces with the automated segment

How to Use Manual-Support Testing
• Evaluate both the adequacy and the execution of the process
• Execute in conjunction with normal systems testing
• Test the interface between people and the application
• Test throughout the systems development life cycle
• Extensive testing is best performed during the installation phase

Intersystem Testing Technique
• Ensure that the interconnection between applications functions correctly
• Determine that proper parameters and data are correctly passed between applications
• Ensure that proper coordination and timing of functions exists between application systems
• Determine that documentation for the involved systems is accurate and complete

How to Use Intersystem Testing
• Involves the operation of multiple systems in the test
• Costs may be high, especially if systems must be run through multiple iterations
• An integrated test facility can be cost-effective
• Perform tests whenever there is a change in parameters between application systems

Control Testing Technique
• Requires a negative look at the application system (like error-handling testing)
• Ensures that "what can go wrong" conditions are adequately protected against
• Looks at the totality of the system
• Should be viewed as a system within a system and tested in parallel with other system tests
• With roughly 50% of development effort going into controls, a proportionate part of testing should be allocated to them

Parallel Testing Technique
• Used to determine whether the results of a new application are consistent with those of the previous application
• Difficult to conduct as systems become more integrated
• Conduct redundant processing to ensure the new version or application performs correctly
• Demonstrate consistency or inconsistency between two versions of the same application

How to Use Parallel Testing
• Requires the same input data to be run through both versions of the application
• Can be done with the entire application or a segment
• Use when there is uncertainty about the correctness of processing in the new application
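Feeding identical inputs to both versions and comparing the results can be sketched like this; the two `total_*` routines are invented stand-ins for the old and new versions:

```python
def total_v1(amounts):
    """Previous version: explicit accumulation loop."""
    total = 0
    for a in amounts:
        total += a
    return total

def total_v2(amounts):
    """New version under test: built-in sum()."""
    return sum(amounts)

def parallel_test(old, new, input_sets):
    """Run identical inputs through both versions; collect mismatches."""
    return [(data, old(data), new(data))
            for data in input_sets
            if old(data) != new(data)]

cases = [[1, 2, 3], [], [10, -4, 6], [0, 0, 0]]
assert parallel_test(total_v1, total_v2, cases) == []  # versions agree
```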

Three Major Classes of Testing and Analysis
• Functional
• Structural
• Error-oriented
• The benefits of each class are complementary
• No single technique is comprehensive

Functional Testing and Analysis
• Goal: ensure that the major characteristics of the code are covered
• Functional analysis: verify that the code implements the specification without executing it
• Functional testing: test data is developed from the documents that specify a module's intended behavior; test each software feature of the specified behavior

Testing Independent of the Specification Technique
• Specifications detail the assumptions made about a software unit
• They describe the interface for access to the unit, and the behavior once access is given
• The interface includes the features of the inputs, outputs, and their related value spaces (domains)
• The behavior always includes the functions to be computed (semantics)
• The behavior sometimes includes run-time characteristics such as space and time complexity

Testing Based on the Interface
• Input domain testing
  – Extremal testing: test data is chosen to cover the extremes of the input domain
  – Midrange testing: selects data from the interior of the domain
• Equivalence partitioning
  – The specifications partition the set of all possible inputs into classes that receive equivalent treatment
  – Identifies a finite set of functions and their associated input and output domains
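Extremal and midrange selection over equivalence classes can be sketched as follows; the eligibility rule and its 18–65 boundaries are a hypothetical specification:

```python
# Hypothetical spec: applicants aged 18 through 65 inclusive are eligible.
def is_eligible(age):
    return 18 <= age <= 65

# Extremal testing: data chosen at the edges of each partition.
extremal_cases = {17: False, 18: True, 65: True, 66: False}
# Midrange testing: data selected from the interior of the valid class.
midrange_cases = {40: True}

for age, expected in {**extremal_cases, **midrange_cases}.items():
    assert is_eligible(age) == expected
```

One representative per class, plus both boundary neighbors, covers every equivalence class the spec defines.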

Syntax Checking
• Verification of the program's ability to parse its input and handle incorrectly formatted data
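A minimal syntax check might look like this; the date format and field name are assumptions for the sketch:

```python
import re

DATE_FORMAT = re.compile(r"^\d{4}-\d{2}-\d{2}$")  # assumed input format

def parse_date_field(raw):
    """Return the field if well formed; reject malformed input
    gracefully instead of letting it propagate into processing."""
    if not isinstance(raw, str) or not DATE_FORMAT.match(raw):
        return None
    return raw

assert parse_date_field("2020-07-15") == "2020-07-15"
assert parse_date_field("15/07/2020") is None   # wrong format, handled
assert parse_date_field("") is None
```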

Testing Based on the Function to be Computed
• Equivalence partitioning
• Special value testing
  – Selects test data based on features of the function to be computed
  – Most applicable to mathematical computations
• Output domain coverage
  – Select points that cause the extremes of the output domain to be reached
  – Ensure that modules are checked for maximum and minimum output conditions
  – Ensure that all categories of error messages have been produced
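Output domain coverage can be illustrated with a small classifier; the scoring bands and message text are invented:

```python
def classify(score):
    """Map a 0-100 score onto a small output domain (illustrative)."""
    if score < 0 or score > 100:
        return "error: out of range"
    if score >= 90:
        return "A"
    if score >= 60:
        return "pass"
    return "fail"

# Inputs are chosen so that every output value, including the extremes
# and every error-message category, is actually produced.
observed = {classify(s) for s in (-1, 0, 59, 60, 89, 90, 100, 101)}
assert observed == {"error: out of range", "fail", "pass", "A"}
```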

Testing Dependent on the Specification Technique
• Algebraic: the properties of a data abstraction are expressed by axioms or rewrite rules
• Axiomatic: the potential has not been exploited much
• State machines: testing can decide whether a program that simulates a finite automaton is equivalent to the one specified
• Decision tables: a concise method of representing equivalence partitioning
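A decision table maps condition combinations straight to test cases; the discount rule and its thresholds below are invented for illustration:

```python
# Decision table for a hypothetical order-discount rule:
# conditions (member?, order >= 100?) -> action (discount %).
DECISION_TABLE = {
    (True,  True):  15,
    (True,  False): 5,
    (False, True):  10,
    (False, False): 0,
}

def discount(is_member, total):
    return DECISION_TABLE[(is_member, total >= 100)]

# Each table row is one equivalence class; one test per row covers all rules.
assert discount(True, 250) == 15
assert discount(True, 50) == 5
assert discount(False, 100) == 10
assert discount(False, 99) == 0
```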

Structural Testing and Analysis
• Test data is developed or evaluated from the source code
• The goal is to ensure that the various characteristics of the program are covered adequately

Structural Analysis
• Complexity measures
  – Finite resources create the need to allocate them efficiently
  – Evidence suggests a small percentage of the code typically contains the largest number of errors
• Data flow analysis
  – Anomalies are flow conditions that may indicate problems
  – Examples: defining a variable twice with no intervening reference, referencing undefined variables
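A toy detector for one such anomaly (a variable defined twice with no intervening use) can be sketched over a straight-line def/use trace; the trace representation is an assumption of this sketch:

```python
def find_dd_anomalies(operations):
    """Scan a sequence of ('def'|'use', var) events and report variables
    defined twice with no intervening use (a define-define anomaly)."""
    last = {}        # last event kind seen per variable
    anomalies = []
    for kind, var in operations:
        if kind == "def" and last.get(var) == "def":
            anomalies.append(var)  # earlier definition was never used
        last[var] = kind
    return anomalies

trace = [("def", "x"), ("def", "x"),   # x redefined without being used
         ("def", "y"), ("use", "y")]
assert find_dd_anomalies(trace) == ["x"]
```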

Symbolic Execution
• Accepts three inputs: the program to interpret, symbolic input for the program, and the path to follow
• Produces two outputs: the symbolic output for the computation along the selected path, and the path condition for that path
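The two-outputs-per-path idea can be shown with a toy two-branch program; everything here (the program, the string encoding of symbolic values) is a simplification for illustration only:

```python
def symbolic_execute(symbol, path):
    """Symbolically interpret the toy program
        if x > 10: y = x - 10
        else:      y = 0
    along the chosen path, producing the symbolic output for y and
    the path condition needed to reach that path."""
    if path == "then":
        return f"{symbol} - 10", f"{symbol} > 10"
    return "0", f"not ({symbol} > 10)"

output, condition = symbolic_execute("x", "then")
assert output == "x - 10" and condition == "x > 10"
output, condition = symbolic_execute("x", "else")
assert output == "0" and condition == "not (x > 10)"
```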

Structural Testing
• Statement testing: every statement in the program is executed
• Branch testing: every branch (if…then…else) is executed
• Conditional testing: each clause in every condition takes on all possible values in combination with the other clauses
• Expression testing
• Path testing: data is selected to ensure that all paths have been executed; complete coverage is impossible to achieve
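Branch and conditional coverage can be illustrated with an invented `approve` routine containing one compound condition:

```python
def approve(amount, is_vip):
    """Illustrative routine with one compound condition."""
    if amount < 1000 or is_vip:   # two clauses in one condition
        return "approved"
    return "rejected"

# Branch testing: both outcomes of the if are executed.
# Conditional testing: each clause takes each value in combination.
cases = [
    (500,  False, "approved"),   # first clause True,  second False
    (500,  True,  "approved"),   # both clauses True
    (2000, True,  "approved"),   # first clause False, second True
    (2000, False, "rejected"),   # both False -> the else branch
]
for amount, vip, expected in cases:
    assert approve(amount, vip) == expected
```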

Error-Oriented Testing and Analysis
• Assesses the presence or absence of errors
• Three categories of techniques:
  – Statistical assessment
  – Error-based testing
  – Fault-based testing

Statistical Methods
• Employ statistical techniques to determine the operational reliability of the program
• Concerned with how faults in the program affect its failure rate in an operational environment
• May be futile, since it is not directed toward finding errors, but may be a viable alternative to structural testing

Error-Based Testing
• Fault estimation
• Domain testing
• Perturbation testing
• Fault-based testing:
  – Local extent, finite breadth
  – Global extent, finite breadth
  – Local extent, infinite breadth
  – Global extent, infinite breadth

Managerial Aspects of Unit Testing and Analysis
• Selecting techniques: goals impose different demands on technique selection
  – Nature of the product
  – Nature of the testing environment
• Control
  – Configuration control: test plan, procedures, data, and results
  – Conducting tests: the test bed

Testing Tools
• Ease the burden of test production, execution, data handling, and communication
• Tool selection is an important aspect of the test process; it affects effectiveness and efficiency
• Overview list of tools: pages 36-43
• Tools-by-test-phase table: page 44

Tool Development and/or Acquisition (Domain 4, Part 2)

Consider a "test manager" to identify, select, and acquire automated tools. The introduction of a new tool should be planned in advance to permit orderly progress. Tools obtained without adequate consideration of the resources needed to employ them effectively are seldom used to their potential and may end up useless.

• Identify the goals to be met by the tool
• Approve a detailed tool acquisition plan
• Approve procurement of tools and training
• Determine, after the tool test period, whether the goals have been met

Software Management Responsibility
• Identifying tool objectives
• Approving the acquisition plan
• Defining selection criteria
• Making the final selection of the tool

Software Engineer Responsibility
• Identifying candidate tools
• Applying the selection criteria
• Preparing a ranked list of tools
• Conducting detailed evaluations



This distribution of responsibilities reduces the chances of selecting tools that:
• Don't meet recognized needs
• Are difficult to use
• Require excessive computer resources
• Lack adequate documentation

Recommended Event Sequence

Event 1: Goals
• Should be identified in a format that permits later determination of whether they were met (Event 14)
• Identify the role headquarters staff may have and coordination requirements with other organizations
• Budget
• Completion date

Event 2: Tool Objectives
• The goals of Event 1 are translated into desired tool features and requirements based on the development and operating environment
• Constraints on tool cost and availability

Acquisition Activities for Informal Procurement
• A1: Acquisition plan
• A2: Selection criteria
• A3: Identify candidate tools
• A4: User review of candidates
• A5: Score candidates
• A6: Select tool

Acquisition Activities for Formal Procurement
• B1: Acquisition plan
• B2: Technical requirements document
• B3: User review of requirements
• B4: RFP generation — specification, statement of work, proposal evaluation criteria, and format requirements
• B5: Solicitation of proposals
• B6: Technical evaluation — should be consistent with the proposal evaluation criteria
• B7: Source selection



Event 3: Procure Tool
• Stay within budget
• Verify adequacy of licensing and contractual provisions
• Establish the vendor's responsibility for delivery, meeting test and performance requirements, and tool maintenance



Event 4: Evaluation Plan
• Based on the Event 1 goals and Event 2 objectives
• Assign responsibility for tests, reports, and other actions



Event 5: Toolsmithing Plan
• Selection of the toolsmith, responsibilities for tool adaptation, required training

Event 6: Training Plan
• Consider documentation, test cases, online diagnostics, help files, and vendor training

Event 7: Tool Received

Event 8: Acceptance Test
• Staff tests the tool in its "as received" condition
• A report on the test is issued
• Approval constitutes official acceptance



Event 9: Orientation
• Orientation for the personnel involved in tool use

Event 10: Modifications
• Carried out by the toolsmith in accordance with the toolsmithing plan
• May involve modifications to the tool, its documentation, or the operating system



Event 11: Training



Event 12: Use in the Operating Environment
• Most qualified users go first
• Minimal options, loose time constraints
• Resolve issues here before full deployment

Event 13: Evaluation Report
• Discuss how the goals and tool objectives were met
• Include user comments and toolsmith observations

Event 14: Determine Whether Goals Are Met
• The evaluation report goes to funding management
• Include attainment of technical objectives, adherence to budget, timeliness of the effort, cooperation from other departments, and recommendations for future tool acquisitions

Quality Assurance versus Quality Control (Domain 4, Part 3)

Definitions

Quality Assurance
• A planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.
• A staff function, responsible for implementing the quality policy defined through the development and continuous improvement of software development processes.

Quality Control
• The process by which product quality is compared with applicable standards, and action is taken when nonconformance is detected.
• A line function; the work is done within a process to ensure that the work product conforms to standards and/or requirements.









• Quality assurance is an activity that establishes and evaluates the processes that produce products.
• Quality control activities are focused on identifying defects in the actual products produced.
• It is possible to have quality control without quality assurance.
• Differences between the two: pages 59-60

Test Policies, Standards, and Procedures (Domain 4, Part 4)

Overview
• Test policies, standards, and procedures are the cornerstone of quality and productivity
• Standards stabilize the environment so that it can be analyzed, opportunities identified, and improvements installed

Definitions
• Policy: managerial desires and intents concerning either processes or products
• Standards: the measures used to evaluate products and identify nonconformance; the basis on which adherence to policies is measured
• Procedure: the step-by-step method followed to ensure that standards are met

Purpose of Standards
• Improve communications
• Transfer knowledge
• Enable productivity improvement
• Master technology
• Reduce the cost of doing work

Responsibility for Policy, Standards, and Procedures
• Responsibility for policy: data processing management, who define the direction
• Responsibility for standards and procedures: the workers who use the procedures and must comply with the standards, driven by management policies

Putting a Standards Program into Practice
• Components of a standards program
• Building a standards program
• Recommended standards organizational structure

Role of the Standards Committee
• Accept topics for standards
• Set priorities for the implementation of standards
• Obtain the resources necessary to develop each standard
• Approve or reject developed standards

Role of the Ad Hoc Committee
• Gain representatives from all involved areas
• Keep the committee appropriately sized
• Create the technical standard
• Coordinate and review the technical standard with the involved parties
• Review the currency of standards and procedures previously developed by the ad hoc group

Standards Manager Responsibilities
• Promote the concept of standards
• Be the driving force behind standards
• Administer the standards program defined by the committee
• Be a resource to the standards and ad hoc committees
• Ensure the involved parties are adequately trained

Standards for Standards
• Developing a standard and procedure for standards
• Defining the attributes of a standard for a standard





Professional Test Standards (Domain 4, Part 5)
• Professionals in industry need to be aware of industry standards
• Test professionals should be familiar with standards organizations:
  – ISO (International Organization for Standardization)
  – NIST (National Institute of Standards and Technology)
  – IEEE
• See page 74 for the current standards applying to the test process
