SYSTEM TESTING
Testing Objective

Testing is the process of executing a program with the intent of finding an error. Software testing has the following objectives:

• A good test case is one that has a high probability of finding an as-yet undiscovered error.
• A successful test is one that uncovers an as-yet undiscovered error.

The above objectives represent a dramatic change in viewpoint. They run counter to the commonly held view that a successful test is one in which no errors are found; the aim is instead to design tests that systematically uncover different classes of errors and do so with a minimum amount of time and effort. If testing is conducted successfully (according to the objectives stated above), it will uncover errors in the software. As a secondary benefit, testing demonstrates that the software functions appear to be working according to specification and that performance requirements appear to have been met. There is one thing testing cannot show: it cannot show the absence of errors; it can only show that errors are present.
Software Testing Principles

Software testing has the following principles as well:

• All tests should be traceable to user requirements.
• Tests should be planned long before testing begins.
• Software testing must proceed from small-scale testing to large-scale testing.
Software Testing Strategies

Different testing techniques are appropriate at different points in time. Testing is conducted by the developer of the software and (for large projects) by an independent testing group. Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
LAN CHAT MANIA
Figure: the testing strategy spiral, with code and design at the centre, surrounded in turn by unit testing, integration testing, validation testing, and system testing.
A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. The software testing strategy provides a roadmap for the software developer. Before these strategies are discussed, the following testing techniques are explained:
White-Box Testing

A strategy in which the software component is treated as a transparent box: test designers can peek into the box and gain knowledge about the implementation. They can use this knowledge to build test cases that cover different parts of the code and follow different execution paths. White-box testing is a test case design method that uses the control structure of the procedural design to derive test cases. White-box testing enables the tester to derive test cases that:

• Guarantee that all independent paths within a module have been exercised at least once.
• Execute all loops at their boundaries and within their operational bounds.
• Exercise all logical conditions on their true and false paths.
Black-Box Testing

A strategy in which a software component is treated as an opaque box: test designers focus on determining how well the component conforms to its published requirements, instead of worrying about implementation details. Black-box testing focuses on the functional requirements of the software. This strategy enables us to derive sets of input conditions that will fully exercise all functional requirements of a program. Black-box testing is not an alternative to white-box testing; rather, it is a complementary approach that is likely to uncover a different class of errors than the white-box method. Black-box testing is used to find the following errors:

• Incorrect or missing functions.
• Interface errors.
• Performance errors.
• Initialization and termination errors.
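A short sketch of the black-box approach, assuming a hypothetical published requirement such as "a nickname is 1 to 16 visible characters". The test cases are derived from the requirement alone, using equivalence classes and boundary values, with no reference to how the function is implemented:

```python
def validate_nickname(name):
    """Stand-in implementation of the (hypothetical) nickname requirement."""
    stripped = name.strip()
    return 1 <= len(stripped) <= 16

# Input classes and boundaries taken purely from the stated requirement.
assert validate_nickname("a")            # lower boundary: 1 character
assert validate_nickname("x" * 16)       # upper boundary: 16 characters
assert not validate_nickname("")         # missing input
assert not validate_nickname("x" * 17)   # just past the limit
assert not validate_nickname("   ")      # whitespace only, nothing visible
```

The same test cases would remain valid even if the implementation were rewritten, which is the defining property of a black-box test.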
Any strategy must be flexible enough to promote the creativity and customization that are necessary to adequately test all large software-based systems. The following are a few such strategies:
Unit Testing:

There were various components in the Project LAN CHAT that had to be tested independently to ensure correct and error-free performance. Another benefit is that this also helps locate the exact problem, whether class-wise, design-wise, or interface-wise. Unit testing is the testing of individual units of the application in isolation, for example a single class. It focuses verification efforts on the smallest unit of software design, the module. Important control paths are tested to uncover errors within the boundary of the module. This testing is basically a white-box-oriented technique, and the step can be conducted in parallel for multiple modules.
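A unit-test sketch using the standard unittest module, with a hypothetical MessageQueue class standing in for one of the chat modules (the actual project classes are not shown in this chapter). The class is tested entirely in isolation:

```python
import unittest

class MessageQueue:
    """Hypothetical stand-in for a chat module's outgoing-message buffer."""
    def __init__(self):
        self._items = []

    def push(self, msg):
        if not msg:
            raise ValueError("empty message")
        self._items.append(msg)

    def pop(self):
        # Return the oldest message, or None when the queue is empty.
        return self._items.pop(0) if self._items else None

class TestMessageQueue(unittest.TestCase):
    def test_fifo_order(self):
        q = MessageQueue()
        q.push("hello")
        q.push("world")
        self.assertEqual(q.pop(), "hello")   # first in, first out

    def test_empty_pop(self):
        self.assertIsNone(MessageQueue().pop())

    def test_rejects_empty_message(self):
        with self.assertRaises(ValueError):
            MessageQueue().push("")
```

Such a file would normally be run with `python -m unittest`; because each test builds its own queue, the module can be tested in parallel with the others.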
Integration Testing:

Project modules, after being tested in isolation, are then integrated and tested again to ensure consistency and the flow of correct information among them. Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. During integration testing, the different modules of a system are integrated using an integration plan. The objective is to take the unit-tested modules and build a program structure that matches the one indicated by the design.
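An integration sketch under the same hypothetical names: two individually tested pieces, a message encoder/decoder pair and a loopback transport stub, are wired together, and the test exercises the seam between them rather than either unit alone:

```python
def encode(sender, text):
    """Serialize a chat message for the wire (hypothetical format)."""
    return f"{sender}|{text}".encode("utf-8")

def decode(raw):
    """Inverse of encode: recover (sender, text) from wire bytes."""
    sender, _, text = raw.decode("utf-8").partition("|")
    return sender, text

class LoopbackTransport:
    """Stub transport: whatever is sent can be received, in order."""
    def __init__(self):
        self._wire = []

    def send(self, raw):
        self._wire.append(raw)

    def recv(self):
        return self._wire.pop(0)

# The integration test checks the interface between the modules:
# a message pushed through encoder -> transport -> decoder survives intact.
link = LoopbackTransport()
link.send(encode("alice", "hello"))
assert decode(link.recv()) == ("alice", "hello")
```

An interfacing bug, such as the decoder expecting a different separator than the encoder produces, would pass both unit tests yet fail exactly this kind of test.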
Validation Testing:

This is an important testing section. The project suffered from various possibilities of inconsistency in the chat when incorrect information was provided as input, so this testing methodology was adopted to ensure correct information flow. When the software has been completely assembled as a package and interfacing errors have been uncovered and corrected, a final series of tests is carried out, commonly known as validation testing. Validation can be defined as follows: "Validation succeeds when the software functions in a manner that can be reasonably expected by the user."
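A validation-style sketch: a hypothetical assembled entry point is driven the way a user would drive it, and the test asserts only on externally visible behaviour, namely that incorrect input yields a clear error message rather than inconsistent chat state:

```python
def handle_login(username, password, accounts):
    """Hypothetical top-level login entry point of the assembled package."""
    if username not in accounts:
        return "error: unknown user"
    if accounts[username] != password:
        return "error: wrong password"
    return f"welcome {username}"

accounts = {"admin": "secret"}

# What a user would reasonably expect, per the validation definition above.
assert handle_login("admin", "secret", accounts) == "welcome admin"
assert handle_login("admin", "oops", accounts).startswith("error")
assert handle_login("ghost", "x", accounts).startswith("error")
```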
System Testing:

After a system has been developed, the software is integrated with the rest of the system (e.g. hardware, information). At this time a series of system integration and validation tests is conducted. These tests fall outside the scope of system development and are mostly not performed by the developer alone. The end user is also involved in this process, and he or she is the one who points out the errors in the new system. System testing is actually a series of tests whose primary purpose is to fully exercise the computer-based system.
Testing Specification Plan

For quality control to be effective, one must test the same things the same way every time one tests. When you change your tests, your results become inconsistent. You need a test plan. A test plan is simply a high-level summary of the areas (functionality, elements, regions, etc.) one will test, how frequently one will test them, and where in the development or publication process one will test them. A test plan also needs an estimate of the duration of testing and a statement of any required resources.

The major phases of software need test plans, because the focus and emphasis of testing will change over time. Testing new software in development is very different from testing software that has been running for some time. Furthermore, any changes to the software code, incremental or major, require regression test plans.

Clearly, one needs to decide what will be tested and understand the software: the software should have a concrete explanation of the "vision" behind its creation and the hoped-for "path" for its success. If the software has no such explicit statement of direction, then codifying such a statement should be the first goal. To help define what one should test, ask yourself these types of questions:
• Why did I make this software? What is the software's purpose?
• What are the business goals, if any, behind this software?
• What has to work for this software to be effective? What has to work for this to even function as software?
• Who are the end users of this software? Can they use this software with ease?
• What is the core functionality offered by this software? Can all users at least access this core functionality?
Use the answers to these questions to decide what needs to be tested, and then develop your test cases.
Testing Metrics:

Most bugs are the result of poor design. It is important to realize that when you develop software, bugs will appear. Instead of trying to create a bug-free product, your goal should be to achieve software of good quality. A handy rule of thumb is to expect 20 bugs for every 1,000 lines of code generated. Keeping such a realistic expectation will enable you to focus on finding those bugs and fixing them. It also gives you a quantifiable sense of software quality. The easiest bugs are found in the early stages of testing, whereas the difficult ones are found at the later stages, under obscure situations.

It is important to track the number of bugs found over time. In general, such a chart should rise rapidly during the early stage and then eventually level off. The leveling off indicates that the software quality is becoming stable. If the curve just keeps rising, this indicates either that the design is very poor or that new bugs are being introduced as the old ones are being fixed.

Another metric to determine your software quality is the bug discovery trend. This technique, commonly used at Sun Microsystems, involves tracking the number of hours it takes to find the next bug; in other words, you keep track of the number of bugs found in, say, a day. The logic is that as the software quality gets better, it should become more and more difficult to find new bugs. As a result, the number of hours spent to find a new bug should keep rising over time.

Code-coverage analyzers are important tools that can provide code-coverage metrics. The way these work is that you compile your software and then run all your test cases. The code-coverage analyzer captures all the methods and statements that were executed, and from this reports what percentage of your code was used. The remaining portion of your code is considered dead code, because this code was not really tested.
Let us say the analyzer reports that 30 percent of the code went untested, and that during the testing we found 50 percent of the target number of bugs. This means that we might have about 50 percent of the bugs hiding in the untested 30 percent of the code. As you can see, we need a more rigorous testing strategy.
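The rule-of-thumb arithmetic above can be sketched in a few lines. The 5,000-line figure is an illustrative assumption, not a measured size of this project:

```python
def expected_bugs(lines_of_code, bugs_per_kloc=20):
    """Rule of thumb from the text: about 20 bugs per 1,000 lines of code."""
    return lines_of_code * bugs_per_kloc / 1000

target = expected_bugs(5000)   # assumed 5,000-line chat application
found = 50                     # bugs uncovered so far
coverage = 0.70                # fraction of code the tests exercised

assert target == 100
print(f"found {found / target:.0%} of the target {target:.0f} bugs; "
      f"{1 - coverage:.0%} of the code is still untested")
```

With these assumed numbers, half the expected bugs remain to be found, and they may well be concentrated in the untested 30 percent of the code.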
Software Bug:

A software bug is a piece of software code that is not working as expected. No matter what metrics you use for testing your software, it is important to realize that the robustness you expect from the software depends on its development phase. In other words, the robustness expected from a final product is greater than that expected from a beta release, which in turn is greater than that of an alpha release.
Project Testing Report:

The testing of "LAN CHAT MANIA" went through all the stages of black-box testing and white-box testing. In the evaluation phase the system is reviewed to see whether the objectives of the system have been accomplished or not. A major factor during system evaluation is to evaluate the system from the perspective of the user, because he or she is eventually the one who will use it. The testing of "LAN CHAT MANIA" is documented as follows:

• Traceability test matrix
• Test case description
• GUI test plan
Traceability Matrix:

Test Case ID   Test                                                        Result
Test 1         Verify that the user can access the home page.              Test has passed successfully.
Test 2         Verify that the user can access all modules.                Test has passed successfully.
Test 3         Verify that the user gets error messages on wrong entry.    Test has passed successfully.
Test 4         Verify that the admin gets an error message on entering     Test has passed successfully.
               a wrong password.
Test 5         Verify that all other links on the home page are working.   Test has passed successfully.
Test 6         Verify that the user can submit the filled form.            Test has passed successfully.
Test 7         Verify that the user can chat offline and online.           Test has passed successfully.
Test 8         Verify that offline messaging has been performed            Test has passed successfully.
               successfully.
Conclusions:

The software developed is a hypothetical idea, which of course can be implemented as well. The software is flexible enough to be modified easily for further needs. This software will serve as a product for an information system; therefore it will be customized for every change in policy. Due to the time constraints on the submission of this project, the system could not be fully evaluated, but in general it produces information that possesses the properties of accuracy, completeness, timeliness and conciseness. Some of the measurable human factors that are central in evaluation are ease of use, speed of performance and rate of errors. All the factors mentioned above do not guarantee a unique interface, and every piece of software, no matter how carefully designed and implemented, has its respective pros and cons. The ones associated with our software are mentioned below:
The Software at Its Best:

The pros of our software are as follows:

• The software is reliable because it produces accurate results and there is no probability of loss of data.
• The software is user-friendly because its design was made as user-friendly as possible, keeping in mind the diversity of its users.
• All the helping aspects were covered while developing this software, so it comes with a complete help text.
• The software also generates proper error messages for the convenience of the user. This enables users to interact more easily with the software.
• A regular schedule for database backup should be followed to avoid problems arising from software breakdown.
• The system is secure, fault-tolerant and efficient.