Testing Guidelines
Srinivas Polimera, Test Engineer
Table of Contents
1 Test Initiation
2 Knowledge Transfer
3 Test Strategy and Planning Guidelines
4 Test Design Guidelines
5 Test Execution Guidelines
6 Test Metrics Guidelines
7 Project Tracking Guidelines
8 Automation Guidelines
9 Defect Tracking Guideline
  9.1 Defect Tracking
  9.2 Logging of Defects
  9.3 Severity Levels
  9.4 Cause of the Defect
  9.5 Defect Resolution
  9.6 Process of Defect Triaging
  9.7 Process of Defect Assignment
  9.8 Process of Re-testing and Closure
  9.9 Defect Analysis
Engineering QMS-GL-14, Version 2.0
1 Test Initiation
The following are guidelines for collecting test requirements:
• Go through the Proposal, SOW, and any other pre-sales materials related to the project.
• Understand the project scope and customer expectations by reviewing resources such as the customer's website, product documents published on the website, the Product Requirements document, design documents, and any other relevant documents.
• Understand the non-functional requirements, even if they are not explicitly stated.
• Ensure that functional and non-functional requirements are testable.
• Understand other requirements such as deliverables, compliance with standards and regulations, and expectations regarding tools and test environment requirements.
• Identify a point of contact (preferably a business analyst) to understand the domain and product requirements.
• If you have experience in a similar project, gather the experience gained and lessons learnt.
2 Knowledge Transfer
• Identify generic training needs (such as database, Unix, or automation training) and project-specific training needs (domain knowledge, product-specific training, exposure to project-specific tools, etc.).
• Ensure that the business scenarios are understood.
• On a need basis, document the training / knowledge transfer sessions to help train newcomers.
3 Test Strategy and Planning Guidelines
The following guidelines should be considered during the Test Strategy and Test Planning phase:
• Study the Proposal, SOW, requirement specification, product architecture, and design documents before designing the test strategy or test plan.
• Design the test strategy around the objectives of testing. The test strategy should include the overall approach to testing, the various test types planned, the integration test strategy, and any testing techniques to be used for test design.
• Identify the possibility of using tools to improve the quality of testing.
• Ensure that an adequate number of test cycles is planned for the product, along with an optimum combination of the test matrix (operating system, application server, web server, database, etc.).
• Review the test strategy during subsequent phases as well, and revise it as needed (for example, on a change in requirements, design, architecture, or bug trend).
4 Test Design Guidelines
• Ensure that all functional requirements are covered by the test cases.
• Test design should also cover all non-functional requirements.
• Ensure that the test cases cover end-user scenarios.
• Each test case should be specific and have a clear intent. The expected results should be stated clearly to avoid any ambiguity, so that the context and purpose of the test case are well understood.
• Optimize the test cases, knowing what to combine and what not to combine in a single test case.
• Ensure that the test cases cover various types of data (in terms of both volume and complexity), including boundaries, negative data, etc.
• Test data design should consider the design specifications and the database schema.
• It is recommended to define priorities (High, Medium, Low) for test cases. The priority can be decided based on the importance of the functionality or the type of test case (positive, negative, security, etc.).
• Ensure adequate distribution of test cases for each feature and sub-feature (in the 'TestCaseAnalysis' sheet of the Test Case template).
• Ensure adequate distribution of test cases under each test type (refer to the 'TestCaseAnalysis' sheet of the Test Case template) for each feature and sub-feature.
• Ensure adequate distribution of test cases under each priority (refer to the 'TestCaseAnalysis' sheet of the Test Case template) for each feature and sub-feature.
• If required, get the test cases reviewed first by the developer and then by the business analyst.
• If automation is in the scope of the project, identify the test cases to be automated.
• Ensure history tracking for any changes made to test cases, for easy future reference.
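As an illustration of the boundary and negative-data coverage and the High/Medium/Low priorities described above, the following is a minimal sketch. The function `validate_age`, its 0..130 range, and the test-case table are hypothetical, not part of any project template:

```python
# Hypothetical system under test: accepts integer ages 0..130 inclusive.
def validate_age(age):
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    return 0 <= age <= 130

# Each test case carries a clear intent, an expected result, and a priority,
# mirroring the High/Medium/Low scheme recommended above.
TEST_CASES = [
    # (description,                     input, expected, priority)
    ("lower boundary accepted",         0,     True,     "High"),
    ("upper boundary accepted",         130,   True,     "High"),
    ("just below lower boundary fails", -1,    False,    "High"),
    ("just above upper boundary fails", 131,   False,    "High"),
    ("typical valid value accepted",    35,    True,     "Medium"),
]

def run_cases(cases):
    """Execute each case and record PASS/FAIL together with its priority."""
    results = {}
    for description, value, expected, priority in cases:
        actual = validate_age(value)
        results[description] = ("PASS" if actual == expected else "FAIL", priority)
    return results
```

Note how the boundary values (0, 130) and their immediate neighbors (-1, 131) each get their own case, so a single off-by-one defect fails exactly one clearly named test.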
5 Test Execution Guidelines
• Understand customer expectations regarding bugs (what kinds of defects matter most to the customer).
• Identify the optimal test coverage matrix based on the available time and feature priorities, and suggest it to the customer.
• It is always good to start with build verification (smoke or sanity) test cases for a new build.
• Identify test cases that cannot be executed in the available test environment (e.g. dial-up cases from India) and notify the customer in advance.
• Plan and execute ad-hoc testing on a need basis.
• Add test cases for any defects found during ad-hoc testing where the specific scenario is not covered by existing test cases.
• During regression testing, ensure that test cases that earlier failed on bugs are re-run and their results re-logged. The following tasks could be taken up:
  o If the bug is resolved as fixed, re-execute the test case and log the result as "PASS" (so for such cases there are two results logged: one "FAIL" logged when the bug was filed, and one "PASS" logged after re-executing the test case once the bug is fixed).
  o If the bug is resolved as "By design", modify the test case, remove the previous failed result, and re-log the result as "PASS" after executing the modified test case.
  o If the bug is resolved as "Won't fix", inactivate such test cases.
• Ensure that unstable/risky areas and the identified regression test cases are covered as part of regression testing.
• Recommend re-prioritization of tests based on bug history / feedback from the customer.
• Refer to the section "Defect tracking guideline" for defect reporting.
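The regression re-logging rules above can be sketched as a single decision function. This is an illustration of the rules only; the resolution names and the action/result vocabulary are taken from the bullets above, and real defect trackers have their own workflow fields:

```python
# Given the resolution of the bug linked to a previously failed test case,
# decide what happens to the test case and which result (if any) is appended.
# The "Fixed" branch assumes the re-execution actually passes; in practice the
# result logged is whatever the re-run produces.
def regression_action(resolution):
    if resolution == "Fixed":
        # Re-execute; a second result is logged alongside the original FAIL.
        return {"test_case": "re-execute", "log_result": "PASS"}
    if resolution == "By design":
        # Modify the case, drop the old FAIL, log PASS for the new version.
        return {"test_case": "modify", "log_result": "PASS"}
    if resolution == "Won't fix":
        # No point re-running: take the test case out of the active set.
        return {"test_case": "inactivate", "log_result": None}
    raise ValueError("unhandled resolution: " + resolution)
```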
6 Test Metrics Guidelines
• Refer to the Measurement Guidelines.
• Periodically (at least once per release), the Test Quality Index can be measured using the Test Quality Index template. The test manager or lead can define the frequency of measurement as part of test planning and strategy.
7 Project Tracking Guidelines
• It is recommended to review the status of the project and report it to the delivery head using the Interim Project Review summary template.
• The test manager can define the frequency of reporting as part of test planning and strategy.
• Review (or request a review of) estimates against actual time after each milestone, and review the reasons for any delay.
8 Automation Guidelines
• Ensure that the customer is involved in deciding the scope of automation and the automation tool.
• Define the automation architecture and get it approved before starting automation activity.
• Focus on data-driven test scripts rather than hard-coding values.
• Develop scripts in adherence to coding standards and avoid logic errors.
• Reuse code wherever possible.
• Provide appropriate comments in scripts, preferably a descriptive comment for every block of code.
• Conduct internal code reviews before any code is released to the customer.
• Ensure that the code is not dependent on the machine on which it runs; it should be deployable in any environment/setup.
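A minimal sketch of the data-driven approach recommended above: inputs and expected outcomes come from an external CSV file rather than being hard-coded in the script. The `login` function and the column names are hypothetical stand-ins for the application under test:

```python
import csv
import os
import tempfile

# Hypothetical system under test.
def login(username, password):
    return username == "admin" and password == "secret"

def run_data_driven(csv_path):
    """Run every row of the data file and record whether actual == expected."""
    outcomes = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            expected = row["expected"] == "true"
            actual = login(row["username"], row["password"])
            outcomes.append((row["username"], actual == expected))
    return outcomes

def demo():
    """Write a small sample data file (to a temp dir, so the script is not
    tied to one machine's paths) and run the cases against it."""
    path = os.path.join(tempfile.mkdtemp(), "login_cases.csv")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "password", "expected"])
        writer.writerow(["admin", "secret", "true"])
        writer.writerow(["admin", "wrong", "false"])
    return run_data_driven(path)
```

Adding a new test case is then an edit to the data file, not the script, which is the main payoff of the data-driven style.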
9 Defect Tracking Guideline
9.1 Defect Tracking
An internally developed Defect Tracking System is to be used for all projects unless the customer specifies a particular defect tracking system. The internal Defect Tracking System is web based and provides enough flexibility for customization to different requirements; as such, the system can be exposed to the customer if required.
• If the customer specifies a defect tracking system, ensure that an adequate number of licenses is available for the team size.
• It is always helpful to select a tool that provides for the collection of metrics data.
• The tool should be easy to use, accessible over the web, and capable of tracking defects to completion.
9.2 Logging of Defects
Defects are to be logged and monitored in the Defect Tracking System. The details recorded for each defect should include:
• A defect number that is a unique identifier.
• The version of the application with the defect.
• The area of the application causing the defect.
• The symptoms of the defect and how to reproduce it.
• The expected behavior of the application without the defect.
• The severity of the defect and the extent of its effect on the functioning of the application.
These details ensure efficient tracking of defects. Since many people work together to test and change an application, tracking is required to maintain up-to-date information. As defects can be exposed to customers, care should be taken while documenting them. The following activities apply when logging a defect:
• Any defect should be reproduced twice before logging.
• Verify similar and related areas around the identified defect so as to find the complete symptom of the defect.
• Before logging a defect, ensure that it is not already among the open defects.
• After logging the defect, add the defect ID in the test result file against the failed test case.
• If a defect is found and there is no direct test case that finds it, add a new test case to the test case file.
• While filing the bug, provide all relevant details such as screenshots, reproduction steps, etc.
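The defect record fields listed above can be sketched as a small data structure, with a logging helper that enforces the "not already among the open defects" check. Field names and the duplicate criterion (same area and symptoms) are illustrative, not a tool specification:

```python
from dataclasses import dataclass, field
import itertools

_ids = itertools.count(1)  # monotonically increasing unique identifiers

@dataclass
class Defect:
    version: str    # version of the application with the defect
    area: str       # area of the application causing the defect
    symptoms: str   # symptoms and how to reproduce the defect
    expected: str   # expected behavior without the defect
    severity: str   # Critical / Major / Minor / Cosmetic
    defect_id: int = field(default_factory=lambda: next(_ids))  # unique number

def log_defect(open_defects, candidate):
    """Log the candidate only if no open defect already reports the same
    area and symptoms; otherwise return the existing defect."""
    for d in open_defects:
        if d.area == candidate.area and d.symptoms == candidate.symptoms:
            return d  # duplicate of an open defect: do not log again
    open_defects.append(candidate)
    return candidate
```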
9.3 Severity Levels
The four levels of classification by severity are:
• Critical: Defects that cause a crash or data loss and do not allow further operation. In addition, if the system does not fulfil some basic functional requirement, the defect can be classified as Critical.
• Major: Defects that prevent the system from meeting its specification and for which there is no workaround. For example, failure of save, update, insert, or delete operations falls into this category. If a function does not perform in the intended manner or works erroneously (e.g. wrong calculations in a finance application), it would be classified as a Major defect.
• Minor: Defects that prevent the system from meeting its specification but for which there is a workaround.
• Cosmetic: Defects that do not prevent the system from functioning in any way but are visual defects. They may pertain to the UI, validations, wrong error messages, etc.
9.4 Cause of the Defect
• The Issue Type field of the defect-tracking tool captures the probable phase where the defect could have been detected earlier.
• The causes are grouped as follows:
  o Requirements: a requirements review could have caught the defect.
  o Code: a code review could have caught the defect.
  o Documents: proper documentation of the behavior would have helped.
  o Design: a design review could have caught the defect.
  o Unit Test: unit testing could have caught the defect.
This field can be left blank if the defect is at the integration/system level or too complex to have been caught by any of the above.
9.5 Defect Resolution
Defects can be resolved in various ways:
• By Design: the behavior is a feature and hence not a defect.
• Not a Defect: the issue was not actually a defect, e.g. the tester misunderstood how the system should behave.
• Duplicate: the defect is logged elsewhere, and this defect is closed so that consolidated information can be recorded in one place.
• Not Reproducible: the described behavior cannot be reproduced, and hence a fix cannot be made.
• Postponed: the fix is deferred to the next release.
9.6 Process of Defect Triaging
All Critical and Major defects will be triaged to decide their priority. Minor and Cosmetic defects will be assessed and prioritized for resolution as necessary. Once the defects to be fixed have been identified, each defect is assigned to an appropriate owner.
Bug triage meetings (sometimes called bug councils) are project meetings in which open bugs are divided into categories. The most important distinction is between bugs that will not be fixed in this release and those that will be. As in the medical usage of the term, software triage also has three categories: bugs to fix now, bugs to fix later, and bugs we'll never fix.
Triaging a bug involves:
• Making sure the bug has enough information for the developers and makes sense.
• Making sure the bug is filed in the correct place.
• Making sure the bug has sensible Severity and Priority fields.
Priority is business; Severity is technical. In triage, the team assigns the priority of the fix from a business perspective, asking "How important is it to the business that we fix this bug?" Most of the time a high-severity bug becomes a high-priority bug, but not always: there are cases where high-severity bugs are low priority, and low-severity bugs are high priority.
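The severity/priority distinction above can be made concrete with a small sketch. The rules and the business-impact labels here are invented for illustration; real triage is a judgment call in the meeting, not a lookup table:

```python
# Severity is the technical impact; priority is the business urgency decided
# in triage. The two axes are independent, as the examples below show.
def triage_priority(severity, business_impact):
    if business_impact == "blocks release":
        # Even a Cosmetic bug (e.g. a typo on the home page) can be High.
        return "High"
    if severity in ("Critical", "Major") and business_impact == "customer facing":
        return "High"
    if severity == "Critical":
        # High severity but internal-only: the fix can wait.
        return "Medium"
    return "Low"
```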
9.7 Process of Defect Assignment
Every defect that is identified as being in scope needs to be assigned to a developer or team to resolve it. Every defect has an application area associated with it, and every application area is the responsibility of one of the teams, so defects that appear in an area should be assigned to that team. If the defect is subsequently traced to a different area of the system, its area is changed and the defect is reassigned as appropriate.
9.8 Process of Re-testing and Closure
When a defect is resolved, the resolution is to be entered in the defect tracking tool. This implies that the defect has been fixed and unit tested. The details should indicate the version of the system or software in which the fix first appears, or the date and details of any non-application change that resolved the defect (for example, a change in a web server setting). The test team responsible for this part of the testing then re-tests the defect. If the defect passes re-test, the details in the defect tracking tool are updated to indicate closure of the defect; if it fails re-test, the defect status is updated to re-open. Care should be taken to ensure understanding and agreement on the expected behavior, and to ensure that adequate, appropriate unit testing and configuration control occurs.
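The resolve / re-test / close-or-reopen flow above can be sketched as a small state machine. The state and event names are illustrative; real defect tracking tools define their own workflow names:

```python
# Allowed transitions in the re-test flow: a resolved defect either closes
# on a passing re-test or re-opens on a failing one.
TRANSITIONS = {
    ("Open", "resolve"): "Resolved",          # fixed and unit tested
    ("Resolved", "retest_pass"): "Closed",    # re-test passed: close the defect
    ("Resolved", "retest_fail"): "Reopened",  # re-test failed: re-open it
    ("Reopened", "resolve"): "Resolved",      # fix attempted again
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")
```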
9.9 Defect Analysis
• The defect trend will be analyzed periodically, and priority-wise and feature-wise distributions of the defects will be derived. This data can be used to modify the test strategy for the next cycle or for regression testing.
• Review defect leakage at each stage of the development cycle: identify how many bugs were missed in earlier milestones and the reasons for it.
• Monitor the defect-tracking process and the efficiency of driving bugs to closure. Minimize the gap between defect insertion, defect reporting, and defect closure.