SKILL CATEGORY 6

TEST REPORTING PROCESS

Testers need to demonstrate the ability to develop testing status reports. These reports should show the status of testing measured against the test plan, documenting which tests have been performed and the status of those tests. The test reporting process collects data, analyzes that data, and supplements it with metrics, graphs, charts, and other pictorial representations that help developers and users interpret the data.

(1) PREREQUISITES TO TEST REPORTING

It is recommended that a database be established in which to store the results collected during testing. It is also suggested that the database be put online through client/server systems, so that those with a vested interest in the status of the project can readily access it for status updates.

The prerequisites to the process of reporting test results are:
o Define the test status data to be collected
o Define the test metrics to be used in reporting test results
o Define effective test metrics

1.1 DEFINE AND COLLECT TEST STATUS DATA

Processes need to be put into place to collect the data on the status of testing that will be used in reporting test results. Before these processes are built, testers need to define the data they need to collect. The four categories of data testers collect most often are:
o Test results data
o Test case results and test verification results
o Defects
o Efficiency

1.1.1 Test Results Data

 Test factors - The factors incorporated into the plan
 Business objectives - The validation that specific business objectives have been met
 Interface objectives - Validation that data/objects can be correctly passed among software components
 Functions and sub-functions - Pieces normally associated with requirements
 Units - The smallest identifiable software components
 Platform - The system environment for the application

1.1.2 Test Case Results and Test Verification Results

 Test cases – The types of tests that will be conducted during the execution of tests, based on software requirements
 Inspections – A verification of process deliverables against deliverable specifications
 Reviews – Verification that the process deliverables/phases are meeting the user's true needs

1.1.3 Defects

Defect data includes: the date the defect was uncovered, the name of the defect, the location of the defect, the severity of the defect, the type of defect, and how the defect was uncovered.

1.1.4 Efficiency

Two types of efficiency can be evaluated during testing: efficiency of the software system and efficiency of the test process. If included in the mission of software testing, the testers can measure the efficiency of both developing and operating the software system. This can involve metrics as simple as the cost to produce a function point of logic, or as complex as using measurement software.

1.2 Define Test Metrics Used in Reporting

1.2.1 Establish a Test Metrics Team

Members:

o Should have a working knowledge of quality and productivity measures
o Are knowledgeable in the implementation of statistical process control tools
o Have a working understanding of benchmarking techniques
o Know the organization's goals and objectives
o Are respected by their peers and management
o Team size should be relative to the size of the organization (at least two members)
o Should include representatives from management, development, and maintenance projects

1.2.2 Inventory Existing IT Measures

o All identified data must be validated to determine if they are valid and reliable.
o The inventory should kick off with a meeting to:
 Introduce all members
 Review the scope and objectives of the inventory process
 Summarize the inventory processes to be used
 Establish the communication channels to use
 Confirm the inventory schedule with major target dates
o The inventory should contain the following activities:
 Review all measures currently being captured and recorded
 Document all findings
 Conduct interviews

1.2.3 Develop a Consistent Set of Metrics

o To implement a common set of test metrics for reporting that enables senior management to quickly assess the status of each project, it is critical to develop a list of consistent measures spanning all project lines.

1.2.4 Define Desired Test Metrics

o Use the previous two tasks to define the metrics for the test reporting process, including:
 Description of desired output reports
 Description of common measures
 Source of common measures and associated software tools for capture
 Definition of data repositories

1.2.5 Develop and Implement the Process for Collecting Measurement Data

o Document the workflow of the data capture and reporting process
o Procure software tools to capture, analyze, and report the data
o Develop and test system and user documentation
o Beta-test the process using a small to medium-size project
o Resolve all management and project problems
o Conduct training sessions for management and project personnel on how to use the process and interrelate the reports
o Roll out the test status process

1.2.6 Monitor the Process

o Monitoring the test reporting process is very important because the metrics reported must be understood and used. It is essential to monitor the outputs of the system to ensure usage. The more successful the test reporting process, the better the chance that management will want to use it and perhaps expand the reporting criteria.

1.3 Define Effective Test Metrics

 Metric - A quantitative measure of the degree to which a system, component, or process possesses a given attribute
 Process Metric - A metric used to measure characteristics of the methods, techniques, and tools employed in developing, implementing, and maintaining the software system
 Product Metric - A metric used to measure the characteristics of the documentation and code
 Software Quality Metric - A function whose inputs are software data and whose output is a single numerical value that can be interpreted as the degree to which software possesses a given attribute that affects its quality

o The following measurements generated during testing are applicable:
 Total number of tests
 Number of tests executed to date
 Number of tests executed successfully to date
o Data concerning software defects include:
 Total number of defects corrected in each activity
 Total number of defects detected in each activity
 Average duration between defect detection and defect correction
 Average effort to correct a defect
 Total number of defects remaining at delivery

1.3.1 Objective vs. Subjective Measures

o An objective measure can be obtained by counting.
o A subjective measure must be calculated based upon a person's perception.
o Subjective measures are much more important, but people tend to gravitate toward objective measures.

1.3.2 How Do You Know a Metric Is Good?

o Reliability - If two people take the measure, will they get the same results?
o Validity - Does the metric really measure what we want to measure?
o Ease of Use and Simplicity - Functions of how easy it is to capture and use the measurement data.
o Timeliness - Is the data available while it is still relevant?
o Calibration - Can the metric be changed easily in order to better meet the other requirements?
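The raw counts listed above can be combined into simple derived status metrics. A minimal sketch in Python; the variable names and sample figures are illustrative, not taken from the text:

```python
# Sketch: derive test-status metrics from the raw counts listed above.
# All input figures below are invented sample data.

def percent(part: float, whole: float) -> float:
    """Return part/whole as a percentage, guarding against division by zero."""
    return round(100.0 * part / whole, 1) if whole else 0.0

total_tests = 400          # total number of tests planned
executed = 300             # number of tests executed to date
passed = 270               # number of tests executed successfully to date

defects_detected = 120     # total defects detected across all activities
defects_corrected = 102    # total defects corrected across all activities

status = {
    "percent executed": percent(executed, total_tests),        # 75.0
    "percent passed (of executed)": percent(passed, executed), # 90.0
    "defects remaining": defects_detected - defects_corrected, # 18
}

for name, value in status.items():
    print(f"{name}: {value}")
```

A status report built this way satisfies the objectivity criterion in 1.3.1: two people counting from the same test database will produce the same figures.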
1.3.3 Standard Units of Measure

o A measure is a single attribute of an entity.
o It is the basic building block for a measurement program.
o Measures must be defined before taking a measurement.
o Weighting factors must also be defined.
o A measurement program should have between 5 and 50 standard units of measure.
o For example, "lines of code" may mean lines of code written, executable lines of code written, or even non-compound lines of code written. If a line of code containing a compound statement were written, such as a nested IF statement two levels deep, it would be counted as two or more lines of code. In addition, organizations may wish to use weighting factors; for example, one verb might be weighted as more complex than other verbs in the same programming language.

1.3.4 Productivity vs. Quality

o Quality is an attribute of a product or service. Productivity is an attribute of a process. The two have frequently been called two sides of the same coin, because one has a significant impact on the other.
o Quality can drive productivity:
 Lower or fail to meet quality standards, and some productivity measures will likely increase.
 Improve the processes so that defects do not occur, and some productivity measures will likely increase.
 QAI prefers the latter approach.

1.3.5 Test Metric Categories

o It is useful to categorize metrics for use in status reporting.

1.3.5.1 Metrics Unique to Test

o Defect removal efficiency – the percentage of total defects occurring in a phase or activity that are removed by the end of that activity.
o Defect density – the number of defects in a particular product.
o Mean time to failure – the average operational time before a software system fails.
o Mean time to last failure – an estimate of the time it will take to remove the last defect from the software.
o Coverage metrics – the percentage of instructions or paths executed during tests.
o Test cycles – the number of testing cycles required to complete testing.
o Requirements tested – the percentage of requirements tested during testing.

1.3.5.2 Complexity Measurements

o Size of module/unit (larger modules/units are considered more complex).
o Logic complexity – the number of opportunities to branch/transfer within a single module.
o Documentation complexity – the difficulty level of reading documentation, usually expressed as an academic grade level.

1.3.5.3 Project Metrics

o Percent of budget utilized
o Days behind or ahead of schedule
o Percent of change of project scope
o Percent of project completed

1.3.5.4 Size Measurements

o KLOC – thousand lines of code, used primarily with statement-level languages.
o Function points – a defined unit of size for software.
o Pages or words of documentation

1.3.5.5 Defect Metrics

o Defects related to size of software.
o Severity of defects, such as very important, important, and unimportant.
o Priority of defects – the importance of correcting defects.
o Age of defects – the number of days a defect has been uncovered but not corrected.
o Defects uncovered in testing.
o Cost to locate a defect.

1.3.5.6 Product Measures

o Defect density – the expected number of defects that will occur in a product during development.

1.3.5.7 Satisfaction Metrics

o Ease of use
o Customer complaints
o Customer subjective assessment
o Acceptance criteria met
o User participation in software development

1.3.5.8 Productivity Metrics

o Cost of testing in relation to overall project costs
o Under budget/ahead of schedule
o Software defects uncovered after the software is placed into operational status
o Amount of testing using automated tools

(2) TEST TOOLS USED TO BUILD TEST REPORTS

Testers use many different tools to help in analyzing the results of testing, and to create the information contained in the test reports.

2.1 Pareto Charts

A Pareto chart is a type of bar chart used to view the causes of a problem in order of severity, from largest to smallest. It provides the ability to:
 Categorize items, usually by content or cause factors
 Identify the causes and characteristics that most contribute to a problem
 Decide which problem to solve, or which basic causes of a problem to work on first
 Understand the effectiveness of an improvement by comparing pre- and post-improvement charts

2.1.1 Deployment

o Define the problem clearly
o Collect data
o Sort or tally data in descending order
o Construct the chart
o Draw bars to correspond to the sorted data in descending order
o Determine the vital few causes
o Compare and select the major causes
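The deployment steps above can be sketched in code: sort the tallied causes in descending order and accumulate percentages to expose the vital few. The cause categories and counts below are invented sample data:

```python
# Sketch of a Pareto analysis: sort causes by frequency (descending) and
# compute cumulative percentages to identify the "vital few" causes.
# The cause names and counts are invented sample data.

defect_causes = {
    "requirements misunderstanding": 45,
    "coding error": 30,
    "interface mismatch": 15,
    "environment/configuration": 7,
    "documentation": 3,
}

total = sum(defect_causes.values())
cumulative = 0

# Sort descending by count, as a Pareto chart requires.
for cause, count in sorted(defect_causes.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    cum_pct = 100.0 * cumulative / total
    print(f"{cause:32s} {count:3d}  cumulative {cum_pct:5.1f}%")

# The "vital few" are the leading causes that together account for
# roughly 80% of the defects; work on those first.
```

In practice the same sorted-and-accumulated data would be drawn as bars with a cumulative-percentage line, but the ordering and accumulation shown here is the essential computation.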

2.1.2 Examples

o Problem-solving for the vital few causes and characteristics
o Defect analysis
o Cycle or delivery time reductions


o Unexpected computer processing terminations found in production
o Employee satisfaction or dissatisfaction

2.1.3 Results

o A necessary first step in continuous process improvement
o Graphically demonstrates the 20-80 rule
o Provides the ability to identify which problem or cause to work on first, by its severity or impact

2.1.4 Recommendations

o A Pareto chart is easy to understand, but it requires discipline from management, facilitators, and the teams involved.
