Re: What documents are used to write test cases for integration testing and system testing?
Generally, to write test cases we use the UCD (use case document), business rules, and the DDD (data design document). Use cases are generally derived from the SRS (software requirements specification).
For writing integration test cases we use the high-level design document (HLD), and for writing system test cases we use the URD/SRS.
Re: Describe components of a typical test plan, such as tools for interactive products and for database products, as well as cause-and-effect graphs and data-flow diagrams.
As per the IEEE 829 standard, a test plan has the following format:
1. Project ID
2. Description
3. Test items
4. Features to be tested
5. Features not to be tested
6. Test environment
7. Entry criteria
8. Approach
9. Exit criteria
10. Staff and training
11. Test deliverables
12. Date and time
13. Risk analysis
14. Approvals
Re: What are cookies and how do you test cookies?
A cookie is a small piece of information stored in a text file on the user's hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages. To test cookies: test the application by enabling and disabling cookies in your browser options. Test whether the cookies are encrypted before they are written to the user's machine. If you are testing session cookies (i.e., cookies that expire after the session ends), check login sessions and user stats after the session ends. Also check the effect on application security of deleting the cookies.
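For example, some of these cookie checks can be automated with a browser driver. The sketch below assumes Selenium WebDriver with Python and a made-up login URL, so adjust the names to your own application.

# Rough sketch of automated cookie checks, assuming Selenium WebDriver (Python bindings).
# The application URL and the "logged out" check are assumptions for illustration.
from selenium import webdriver

APP_URL = "https://example.com/login"  # hypothetical application URL

driver = webdriver.Chrome()
driver.get(APP_URL)

# 1. Inspect what the application writes to the user's machine.
for cookie in driver.get_cookies():
    # Sensitive values (session IDs, user data) should not be stored as plain text.
    print(cookie["name"], cookie.get("value"), "secure:", cookie.get("secure"))

# 2. Delete all cookies and check the effect on application security:
#    here we assume the app should drop the session and send the user back to login.
driver.delete_all_cookies()
driver.refresh()
assert "login" in driver.current_url.lower(), "Session survived after cookies were deleted"

driver.quit()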
Re: If a tester has written test cases for an application, and in the next build the same features are modified with some changes, should I write test cases for the same features to verify the new expected results, or what should I do?
In this case you will have to do regression testing. Regression testing is the testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed whenever the software or its environment is changed. You will have to write new test cases only for the new features added, and then run the new and old test cases together.
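For example, the old and new test cases can live in one suite and be run together. Here is a minimal pytest-style sketch; the billing module, its functions, and the expected values are made up for illustration.

# Minimal regression-suite sketch (pytest). The `billing` module is a hypothetical
# stand-in for the application under test and is skipped if it is not available.
import pytest

billing = pytest.importorskip("billing")  # hypothetical module under test

# Old test case, kept in the suite so regressions in unchanged areas are caught.
def test_existing_discount_rule():
    assert billing.discount(order_total=100) == 10

# New test case covering the behaviour added or changed in the new build.
def test_new_discount_rule_for_large_orders():
    assert billing.discount(order_total=1000) == 150

# Run old and new cases together, e.g.:  pytest -q tests/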
Re: What are the duties of a Quality Assurance engineer and a QA Tester? Please answer as soon as possible, thank you.
Duties of a QA Engineer: generally a QA Engineer and a QA Tester mean the same thing in real-time environments. However, when it comes to hierarchy levels, a QA Manager will have a QA Test Lead under him and QA Test Engineers under the lead. The general duties of the QA would be:
1. Collect all the documents, such as the BRD (business requirements document), technical test plan, and functional test plan, from management or the QA manager.
2. Review the above documents.
3. Moderate the FTRs (Formal Technical Reviews).
4. Write the RTM (requirements traceability matrix).
5. Design test cases.
6. Once the build is released to QA, begin testing.
7. Log the defects in the defect tracking tool.
8. Conduct client meetings, review defects with the dev team, and demo the defects to the developers if required.
The above are general duties and can be discussed further in detail.
Re: Tell us about the build process. How do you get a build? What will you do when you get a build?
The lead (feature-set lead, module lead, or project lead) will provide the Build Release Team (BRT) with the files to be used for making a build. The BRT makes the build and gives it back to the development team; the build then goes through several rounds of alpha testing (which includes unit testing) based on test cases (influenced by the test cases of the quality testing team). Once the build passes the alpha testing (n rounds), the build files are provided back to the BRT with fixes, and the BRT creates a new beta build to be forwarded to the testing team along with the build release notes (which carry information on the bug fixes, patches, quick fixes, product component version numbers, etc.). The testing team performs the smoke/sanity test on the beta build and verifies whether the major functionalities are working or not, so that they can send the build approval mail saying "the build is accepted/rejected due to X bugs (with priority/severity)". If the build is approved for further testing, the bugs mentioned in the build release notes are validated, and finally, after testing is complete (this includes the test strategy as per the test plan, e.g. functional, regression, ad hoc testing, etc.), the testing completion report for the beta build is sent.
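For example, the smoke/sanity step on a new beta build could be scripted roughly as below. The base URL and the endpoints are assumptions, not a real product; the idea is only to show "check the major functionalities, then accept or reject the build".

# Rough smoke-test sketch for a newly delivered build, using the `requests` library.
# The endpoints below are hypothetical examples of "major functionalities".
import sys
import requests

BASE_URL = "https://staging.example.com"  # assumed location of the deployed beta build

SMOKE_CHECKS = {
    "home page loads": "/",
    "login page loads": "/login",
    "search responds": "/search?q=test",
}

failures = []
for name, path in SMOKE_CHECKS.items():
    try:
        resp = requests.get(BASE_URL + path, timeout=10)
        if resp.status_code != 200:
            failures.append(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        failures.append(f"{name}: {exc}")

if failures:
    print("Build REJECTED by smoke test:")
    for failure in failures:
        print(" -", failure)
    sys.exit(1)

print("Build ACCEPTED: major functionalities respond, proceed with full testing.")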
Re: How can you map SDLC with STLC?
Yes, SDLC can be mapped to STLC with the help of the V-model. I have tried to show it:
1. Requirement specifications ----------> Acceptance TP and acceptance test cases
2. Functional design -------------------> System TP and system test cases
3. Detailed design ---------------------> Integration TP and integration test cases
4. Program specifications --------------> Unit TP and unit test cases
TP refers to Test Plan and not Time Pass :) So in each phase of the SDLC we start planning and writing test cases accordingly; this is how you map SDLC to STLC. Hope my answer was fruitful to you.
TEST PLAN
Test plan: a software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. A typical test plan covers:
- Objective
- Product overview
- Relevant documents, standards and legal requirements
- Requirements traceability
- Overall project organization and personnel contact info/responsibilities
- Assumptions and dependencies
- Risk analysis
- Testing priorities and focus
- Scope and limitations
- Test outline: decomposition of the test approach by test type, feature, functionality, module, etc.
- Test environment
- Configuration management process
- Test tools to be used, including versions, patches, etc.
- Problem tracking and resolution
- Project test metrics
- Entry and exit criteria
- Test suspension and restart criteria
- Personnel allocation and related needs
- Outside test organizations to be utilized, with purpose, responsibilities, deliverables, etc.
- Open issues
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the effort needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. A test plan states what the items to be tested are, at what level they will be tested, what sequence they are to be tested in, and how the test strategy will be applied to the testing of each item, and it describes the test environment.

A test plan should ideally be organisation-wide, being applicable to all of an organisation's software developments. The objective of each test plan is to provide a plan for verifying, by testing the software, that the software produced fulfils the functional or design statements of the appropriate software specification. In the case of acceptance testing and system testing, this generally means the functional specification. The first consideration when preparing the test plan is who the intended audience is; the audience for a unit test plan, for example, would be different, and the content would have to be adjusted accordingly.

You should begin the test plan as soon as possible. Generally it is desirable to begin the master test plan at the same time that the requirements documents and the project plan are being developed. Test planning can (and should) have an impact on the project plan. Plans written early will have to be changed during the course of development and testing, but that is important, because it records the progress of the testing and helps planners become more proficient.
Test Plan Format:
1. Test Plan ID: Unique number or name for future reference.
2. Introduction: About the project and the testing.
3. Test Items: Names of all modules in the project.
4. Features to be Tested: Names of modules for which to prepare test cases.
5. Features not to be Tested: Names of modules for which to copy test cases from the test case database.
(Items 3, 4 and 5 specify "what to test?")
6. Approach: Test factors selected by the Project Manager and the list of testing techniques to be applied.
7. Entry Criteria: Testing environment established; test cases prepared; build received from the developers.
8. Suspension Criteria: Testing environment not working (under repair); sudden changes in customer requirements; too many pending defects.
9. Exit Criteria: All modules tested; all types of tests applied; all major bugs resolved.
10. Test Environment: Required software and hardware to conduct testing on this project, including testing tools.
11. Test Deliverables: List of documents to be prepared by the test engineers.
(Items 6 to 11 specify "how to test?")
12. Staff & Training Needs: Names of the selected test engineers and the required number of training sessions.
13. Responsibilities: Work allotments to the above selected test engineers.
(Items 12 and 13 specify "who is to test?")
14. Schedule: Dates and times.
(Item 14 specifies "when to test?")
15. Risks & Mitigation: The list of risks and the solutions to overcome them.
16. Approvals: Signatures of the test lead and the project manager.
AGILE TESTING USES

Testing from the beginning of the project, and continuing to test throughout the project lifecycle, is the basis of agile testing. If we can work with the customer to help him specify his requirements in terms of tests, it makes them completely unambiguous: the tests either pass or they don't. If our coders only write code to pass tests, we can be sure of one hundred percent test coverage. Most of all, if we keep our testers, developers and customers (or customer representatives) in constant face-to-face communication with each other, we can eradicate most of the errors caused by us climbing the ladder of inference. Breaking our projects into smaller chunks of work and iterating on them will give us frequent feedback on the current state of the project.

Many teams are now using agile testing techniques to improve the quality of their products and having great success. Some investment in training is required, and changes to the workspace are necessary to allow customers, testers, and developers to work side by side, but these are a small price to pay for the advantages gained. The most difficult thing for most teams is shifting away from the perception of the test team competing with the developers, with a focus on detecting faults and preventing poor-quality products from being released. The new, agile testing paradigm is the test team collaborating with the developers to build quality in from the start and release robust products that deliver the best possible business value for the customer.
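For example, a customer requirement such as "orders over 500 ship for free" could be specified as a small executable test. The sketch below is only an illustration in pytest style; the shipping_cost function and the 500 threshold are invented for the example.

# Hypothetical example of a customer requirement captured as an unambiguous, executable test.
# The function, its behaviour, and the numbers are assumptions used purely for illustration.
def shipping_cost(order_total):
    # Production code written only to make the requirement tests pass.
    return 0 if order_total > 500 else 25

def test_orders_over_500_ship_free():
    assert shipping_cost(order_total=600) == 0

def test_orders_at_or_below_500_pay_standard_shipping():
    assert shipping_cost(order_total=500) == 25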