B4.3-R3: SOFTWARE TESTING AND QUALITY MANAGEMENT
Previous Question Papers: January 2007, July 2006, January 2006, July 2005, January 2005, July 2004, January 2004
January, 2007
Time: 3 Hours

1.
a) What are the parameters checked in performance testing of software? How is it performed?
b) What is the difference between robustness and correctness?
c) What are the steps in automating the testing process?
d) When do you decide to stop testing any further?
e) What is information cohesion?
f) What is typical about a Windows process in regard to memory allocation?
g) What is a testable design?
(7x4)

2.
a) What are the typical problems in testing web services? What are the differences between testing intranet and internet-based web services in an organization?
b) In the context of web services testing, explain the following with examples:
- Proof-of-concept testing
- Functional testing
- Regression testing
(6+12)

3.
a) What are the basic concepts behind software fault tolerance? What are design diversity and independent failure modes? Explain in detail.
b) Explain the difference between the recovery block and N-version software methods of fault tolerance.
(10+8)

4.
a) Can we directly measure the quality of software? What are the three basic sets of factors that determine software quality? How can their indicators be derived?
b) What are clean tests and dirty tests? Which one works when?
(10+8)

5.
a) What are the parameters of a program that can be tested using glass-box or white-box testing? Give examples.
b) What are mutation testing and random testing? Illustrate your answer with an example.
(8+10)

6.
a) What are the advantages of dynamic analysis over static testing? Explain the dynamic analysis process.
b) What are the categories of dynamic analyzers available? Describe their important features.
(10+8)

7.
a) Explain the general architecture of a test-data generator with a diagram. Give an example.
b) What is SPICE? What are its advantages to the various communities involved in the software industry? Compare it with the CMM for software.
July, 2006
Time: 3 Hours                                          Total Marks: 100

1.
a) Distinguish clearly between the terms fault and failure in software development.
b) A pure top-down integration testing is not just sufficient for the software testing process. Justify your answer with a suitable example.
c) What is a code walkthrough? Explain how it is useful in white-box testing.
d) Describe how cyclomatic complexity is useful in software testing.
e) Explain the differences between validation and verification. Why is validation considered a difficult process?
f) How can client-server software be effectively tested?
g) Explain why measurement of software reliability is a much harder problem than the measurement of hardware reliability.
(7x4)

2.
a) How can you determine the number of latent defects in a software product during the testing phase?
b) Identify the types of information that should be presented in the test summary report.
c) What do you understand by "code review effectiveness"? How can review effectiveness be determined?
(6+6+6)

3.
a) What do you understand by test data generation? Explain how test data can be generated automatically.
b) Among the different development phases of the life cycle, testing typically requires the maximum effort. Identify the main reasons behind the large effort necessary for this phase.
c) Design the black-box test suite for a program that accepts two strings and checks if the first string is a substring of the second string, and displays the number of times the first string occurs in the second string.
(6+6+6)

4.
a) What do you understand by static analysis of a program? What are the different types of information that are normally generated by static analysis tools? How is this information useful?
b) Explain how the different defects in a system can be classified. Why is it necessary to classify the defects into several classes?
c) How can we estimate the cost of repairing a software defect in a program?
(6+6+6)

5.
a) Usability of a software product is tested during which type of testing: unit, integration, or system testing? How is usability tested?
b) Discuss the relative merits of ISO 9001 certification and the SEI CMM-based quality assessment.
c) What do you understand by a Key Process Area (KPA) in the context of the SEI CMM? Would there be any problem if an organization tries to implement higher-level SEI CMM KPAs before achieving lower-level KPAs? Justify your answer using suitable examples.
(6+6+6)

6.
a) What is the difference between process metrics and product metrics? Give four examples of each.
b) Why is testing of real-time and embedded systems considered more difficult than testing of traditional systems? Explain a satisfactory scheme for testing real-time and embedded systems.
c) What is a coding standard? Identify the problems that might occur if the engineers of an organization do not adhere to any coding standard.
(6+6+6)

7.
a) What do you understand by stress testing? Explain, using suitable examples, how stress testing for a software product can be carried out.
b) Explain the importance of software configuration management in modern quality paradigms such as SEI CMM and ISO 9001. An organization not using any configuration management tool can qualify for which SEI CMM level(s)?
c) List four metrics that can be determined from an analysis of a program's source code and would correlate well with the reliability of the delivered software.
January, 2006
NOTE:
1. Answer question 1 and any FOUR questions from 2 to 7.
2. Parts of the same question should be answered together and in the same sequence.
Time: 3 Hours                                          Total Marks: 100
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer using one or two sentences. Irrelevant and unnecessarily long answers will be penalized.
a) The terms software verification and software validation are essentially synonyms.
b) Introduction of additional sequence-type statements in a program cannot increase its cyclomatic complexity.
c) Code walkthrough for a module is normally carried out after completion of unit testing.
d) During code review you detect errors, whereas during code testing you detect failures.
e) Branch coverage is a stronger testing technique than statement coverage.
f) Modern quality assurance paradigms are centered around carrying out thorough product testing.
g) A satisfactory way to test object-oriented programs is to test all the methods supported by the different classes individually, and then to perform adequate integration and system testing.
(7x4)
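As an illustration for statement (e) above (an editorial sketch, not part of the original paper, with an invented function name), consider this C fragment:

    int abs_value(int x)
    {
        if (x < 0)
            x = -x;       /* only statement controlled by the decision */
        return x;
    }

The single test case x = -5 executes every statement (100% statement coverage) but exercises only the true branch of the if. Branch coverage additionally requires a test such as x = 3 that takes the false branch; every suite achieving branch coverage also achieves statement coverage, but not vice versa.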
2.
a) Explain why testing techniques used for traditional procedure-oriented programs cannot effectively be used to test object-oriented programs. What additional types of tests are needed for object-oriented programs?
b) Explain the difference between code inspection and code walkthrough. Why is detection and correction of errors during inspection and walkthrough preferable to that achieved using testing?
c) Prepare a checklist that can be used for inspection of the user interface of a software product.
(6+6+6)
3.
a) What is the difference between the top-down and the bottom-up integration testing approaches? Explain your answer using an example. Why is the mixed integration testing approach preferred by many testers?
b) Design the black-box test suite for a program that accepts two strings and checks if the first string is a substring of the second string, and displays the number of times the first string occurs in the second string.
c) Explain what you understand by client-server software. What are its advantages over the traditional software architecture? How can client-server software be effectively tested?
(6+6+6)
4. Consider the following program segment.

    /* num is the number the function searches for in the presorted integer array arr */
    int bin_search(int num)
    {
        int min, max;
        min = 0;
        max = 100;
        while (min != max) {
            if (arr[(min + max) / 2] > num)
                max = (min + max) / 2;
            else if (arr[(min + max) / 2] < num)   /* this branch reconstructed; the published listing is truncated here */
                min = (min + max) / 2 + 1;
            else
                return (min + max) / 2;
        }
        return -1;
    }

a) Draw the control flow graph for this program segment.
b) Determine the cyclomatic complexity for this program. (Show the intermediate steps in your computation. Writing only the final result is not sufficient.)
c) How is the cyclomatic complexity metric useful?
(6+6+6)
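For reference, a quick sketch of the computation asked for in part (b), assuming the reconstructed listing above: the control flow graph has three predicate nodes (the while test, the if, and the else-if), so

    V(G) = number of predicate nodes + 1 = 3 + 1 = 4

which agrees with V(G) = E - N + 2 when the edges and nodes of the graph are counted directly.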
5.
a) Explain why measurement of software reliability is a much harder problem than the measurement of hardware reliability.
b) What do you understand by a reliability growth model? How is reliability growth modeling useful? Give examples of two reliability growth models.
c) Explain the importance of software configuration management in modern quality paradigms such as SEI CMM and ISO 9001. What problems might arise if a development organization does not use any configuration management tool?
(6+6+6)

6.
a) Explain two test coverage metrics for procedural code. How are these useful? Can these be used satisfactorily for object-oriented programs? Explain your answer.
b) Why is effective testing of real-time and embedded systems considered more difficult than testing traditional systems? Explain a satisfactory scheme to test real-time and embedded systems.
c) Distinguish between the static and dynamic analysis of a program. Explain at least one metric that a static analysis tool reports and one metric that a dynamic analysis tool reports. How are these metrics useful?
(6+6+6)

7.
a) What do you understand by volume testing? Explain, using a suitable example, how volume test cases can be designed and the types of defects these tests can help to detect.
b) Explain at least one defect metric and how this metric can be collected. Also explain how defects can be effectively tracked for a software product.
c) What do you understand by data flow testing? How is data flow testing performed? Is it possible to design data flow test cases manually? Explain your answer.
(6+6+6)
July, 2005
Note:
1. Answer question 1 and any FOUR questions from 2 to 7.
2. Parts of the same question should be answered together and in the same sequence.
Time: 3 Hours                                          Total Marks: 100
1.
1. What is 'Software Testing'? How is it different from debugging?
2. What makes a good software test engineer?
3. Differentiate between static testing and dynamic testing.
4. Differentiate between verification and validation.
5. Describe the characteristics of 'good code'.
6. Explain the concept of a test case and a test plan.
7. What is the normal procedure to assure the quality of an SRS document?
(7x4)

2.
1. DOEACC is planning to start online testing. It will use an automated process for recording candidate information, scheduling candidates for exams, keeping track of results and sending out certificates. Write a brief test plan for this project.
2. Software testing can be an unending process. What criteria are used to stop testing?
3. Explain stress, load and performance testing.
(6+6+6)

3.
1. What is the objective of unit and integration testing? Indicate the quality measures used to assure that unit testing is complete.
2. You are a tester for testing a large system. The system data model is very large with many attributes and there are many interdependencies within the fields. What steps would you use to test the system, and what are the effects of the steps you have taken on the test plan?
(12+6)

4.
1. Explain and give examples of the following black-box techniques: boundary value testing, equivalence testing, error guessing.
2. Suppose your company is about to roll out an e-commerce application. It is not possible to test the application on all types of browsers on all platforms and operating systems. What steps would you take in the testing environment to reduce the business and commercial risks?
(12+6)

5.
1. What are the special features of client/server environments? Discuss the various strategies employed to test such systems.
2. Compare and contrast the top-down and bottom-up approaches to testing computer programs.
(12+6)

6.
1. What is data flow testing? Illustrate its use by an example.
2. Describe the popular software quality assurance models. Compare and contrast the ISO 9000 and CMM models.
(9+9)

7.
1. How can a software quality assurance process be implemented without stifling productivity? Explain.
2. Discuss the salient features of graphical user interface testing. How is it different from WWW testing?
3. Explain the testing process for object-oriented programs.
(6+6+6)
January, 2005
Note:
1. Answer question 1 and any FOUR questions from 2 to 7.
2. Parts of the same question should be answered together and in the same sequence.
Time: 3 Hours                                          Total Marks: 100
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer using one or two sentences. Irrelevant and unnecessarily long answers will be penalized.
1. A pure top-down integration testing does not require the use of any stub modules.
Ans. False: top-down integration testing depends completely on stubs; it is precisely because the lower-level modules are replaced by stubs that the approach can be called pure top-down.
2. Use of static and dynamic program analysis tools is an effective substitute for thorough testing.
Ans. False: static and dynamic analysis tools can detect only particular classes of problems (coding-standard violations, uninitialised variables, poor coverage, and so on); they usefully complement testing but cannot replace a thorough test of the product's functionality.
3. Once McCabe's cyclomatic complexity of a program has been determined, it is very easy to identify all the linearly independent paths of the program.
Ans. False: the cyclomatic complexity only gives the number of linearly independent paths; actually identifying those paths still requires constructing and analysing the control flow graph, which can be laborious for a large program.
4. During code review you detect errors, whereas during code testing you detect failures.
Ans. True: code review examines the code itself, without executing it, and therefore finds errors, whereas testing observes the program's behaviour and detects failures. (An error is usually a programmer action or omission that results in a fault; a fault is a software defect that causes a failure; and a failure is the unacceptable departure of a program operation from the program requirements.)
5. The reliability of a software product increases almost linearly each time a defect gets detected and fixed.
Ans. False: the improvement in reliability depends on which defect is fixed; correcting a defect that lies on frequently executed paths improves reliability far more than correcting one in rarely used code, so the growth is not linear.
6. Modern quality assurance paradigms are centered around carrying out thorough product testing.
Ans. False: modern quality assurance paradigms (such as ISO 9001 and SEI CMM) are centered around improving the development process so that defects are prevented or caught early, rather than relying primarily on testing of the final product.
7. An important use of receiving an ISO 9001 certification by a software organization is that it can improve its sales efforts by advertising its products as conforming to ISO 9001.
Ans. True: ISO 9001 specifies the requirements for a quality management system, so although it certifies the organization's processes rather than individual products, an organization can legitimately use its ISO 9001 certification in its sales efforts to assure customers about the quality of its development process.
More on ISO: The International Organisation for Standardisation (ISO) is a worldwide federation of national standards bodies from some 130 countries, one from each country. The mission of ISO is to promote the development of standardisation with a view to facilitating the international exchange of goods and services, and to developing co-operation in the spheres of intellectual, scientific, technological and economic activity. ISO's work results in international agreements, which are published as International Standards.
(7x4)

2.
1. What do you understand by automatic program analysis? Give a broad classification of the different types of program analysis tools used during program development. What are the different types of information produced by each type of tool?
Ans. Program analysis means deriving information about the behaviour of a computer program; automatic means that no humans are involved in deriving it. Program analysis is carried out mainly for optimization and for validation. Automated testing and analysis tools available to programmers include the following.
Static program analysers: static analysis tools scan the source code to try to detect errors; the code does not need to be executed. They are most useful for languages that do not have strong typing. They can check for:
1. syntax errors;
2. unreachable code;
3. unconditional branches into loops;
4. undeclared variables;
5. uninitialised variables;
6. parameter type mismatches;
7. uncalled functions and procedures;
8. variables used before initialization;
9. non-usage of function results;
10. possible array-bound errors;
11. misuse of pointers.
Other categories of automated tools include code auditors, assertion processors, test-file generators, test-data generators, test verifiers, and output comparators.
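To make the list above concrete, here is a small illustrative C fragment (the function names are invented for the example) containing several of the defects a typical static analyser would flag without ever running the code:

    #include <stdio.h>

    static int helper(void);              /* uncalled function: defined below but never used */

    int risky(int n)
    {
        int value;                        /* variable read before initialization (below) */
        int arr[10];

        if (n >= 0)
            return arr[n];                /* possible array-bound error when n > 9 */

        printf("%d\n", value);            /* uninitialised variable read */
        return 0;
        value = 42;                       /* unreachable code after return */
    }

    static int helper(void) { return 1; }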
2. What is stress testing? How is it performed? Why is stress testing applicable to only certain types of systems?
Ans. Stress testing is the system testing of an integrated, black-box application that attempts to cause failures involving how its performance varies under extreme but valid conditions (e.g., extreme utilization, insufficient memory, inadequate hardware, and dependency on over-utilized shared resources). Stress testing typically involves the independent test team performing the following tasks:
- Test planning, including test reuse.
- Test design: use-case-based testing, with workload analysis to determine the maximum production workloads.
- Test implementation: developing test scripts that simulate extreme workloads.
- Test execution: regression testing and profiling.
- Test reporting.
Stress testing is typically complete when the following postconditions hold: at least one stress test suite exists for each scalability requirement, and the test suites for every scheduled scalability requirement execute successfully.
Typical examples include stress testing of an application that is: software only; a system including software, hardware, and data components; huge (e.g., in number of users, number of transactions, or amount of data); batch with no real-time requirements; soft real-time (i.e., human reaction times); hard real-time (e.g., avionics, radar, automotive engine control); embedded within another system (e.g., flight-control or cruise-control software); client/server or n-tier distributed; a research prototype that will not be placed into service; or business-critical or safety-critical.
Stress testing is applicable to only certain types of systems because it adds little for systems that are not performance-dependent and that never have to work under extreme conditions, e.g., a small utility running on a stand-alone machine.
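A minimal sketch of how a stress-test driver might hammer a single component with an extreme workload (the component name lookup_record and the workload size are assumptions made for the example, not part of the answer above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Stand-in for the component under stress; in a real stress test this would be
     * the application entry point (a request handler, a parser, etc.). */
    static int lookup_record(int key) { return key % 97; }

    int main(void)
    {
        const long iterations = 10000000L;    /* deliberately extreme workload */
        long failures = 0;
        clock_t start = clock();

        for (long i = 0; i < iterations; i++) {
            if (lookup_record(rand()) < 0)    /* random keys simulate an unpredictable load */
                failures++;
        }

        double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
        printf("%ld calls, %ld failures, %.2f s elapsed (%.0f calls/s)\n",
               iterations, failures, elapsed, iterations / elapsed);
        return 0;
    }

A real stress campaign would repeat such runs while shrinking available memory, adding concurrent clients, or over-committing shared resources, and would record where the observed throughput or failure rate becomes unacceptable.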
3. Define three metrics to measure software reliability. Do you consider these metrics entirely satisfactory to provide a measure of the reliability of a system? Justify your answer.
Ans. Reliability of the delivered code is related to the quality of all of the processes and products of software development: the requirements documentation, the code, the test plans, and the testing itself.
Software reliability work comprises three activities: error prevention; fault detection and removal; and measurements that maximize reliability, specifically measures that support the first two activities.
Reliability metrics fall into three groups: requirements reliability metrics, design and code reliability metrics, and testing reliability metrics.
Requirements reliability metrics: Requirements specify the functionality that must be included in the final software. It is critical that the requirements be written so that there is no misunderstanding between the developer and the client. There are three primary formats for the structure of a requirements specification, published by the IEEE, the DOD and NASA [7,8,9]; these specify the content of the specification outside the requirements themselves, and consistent use of such a format ensures that critical information, such as the operational environment, is not omitted. The importance of correctly documenting requirements has led the software industry to produce a significant number of aids to the creation and management of requirements documents and individual specification statements; however, very few of these aids assist in evaluating the quality of the requirements document or of the individual specification statements themselves. The SATC has developed a tool to parse requirement documents: the Automated Requirements Measurement (ARM) software scans a file containing the text of the requirements specification and searches each line for specific words and phrases that the SATC's studies indicate are indicators of the document's quality as a specification of requirements. ARM has been applied to 56 NASA requirement documents. Seven measures were developed, as shown below.
0. Lines of text - physical lines of text, as a measure of size.
1. Imperatives - words and phrases that command that something must be done or provided; the number of imperatives is used as a base requirements count.
2. Continuances - phrases that follow an imperative and introduce the specification of requirements at a lower level, for a supplemental requirement count.
3. Directives - references provided to figures, tables, or notes.
4. Weak phrases - clauses that are apt to cause uncertainty and leave room for multiple interpretations; a measure of ambiguity.
5. Incomplete - statements within the document that contain TBD (To Be Determined) or TBS (To Be Supplied).
6. Options - words that seem to give the developer latitude in satisfying the specifications but can be ambiguous.
It must be emphasized that the tool does not attempt to assess the correctness of the requirements specified; it assesses the individual specification statements and the vocabulary used to state the requirements, and it can also assess the structure of the requirements document.
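To make the idea concrete, here is a small illustrative sketch (not the actual ARM tool) of how "weak phrases" might be counted in a requirements text file; the phrase list and the input file name are assumptions made for the example:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative list of weak phrases that signal ambiguity in a requirement. */
    static const char *weak_phrases[] = {
        "as appropriate", "if practical", "to the extent possible", "TBD", "TBS"
    };

    int main(void)
    {
        FILE *fp = fopen("requirements.txt", "r");   /* assumed input file */
        char line[1024];
        int counts[5] = {0};

        if (!fp) { perror("requirements.txt"); return 1; }

        while (fgets(line, sizeof line, fp)) {
            for (int i = 0; i < 5; i++) {
                const char *p = line;
                while ((p = strstr(p, weak_phrases[i])) != NULL) {   /* count every occurrence on the line */
                    counts[i]++;
                    p += strlen(weak_phrases[i]);
                }
            }
        }
        fclose(fp);

        for (int i = 0; i < 5; i++)
            printf("%-25s %d\n", weak_phrases[i], counts[i]);
        return 0;
    }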
Design and code reliability metrics: Although there are design languages and formats, these do not lend themselves to automated evaluation and metrics collection. The SATC analyzes the code's structure and architecture to identify possible error-prone modules based on complexity, size, and modularity. It is generally accepted that more complex modules are more difficult to understand and have a higher probability of defects than less complex modules [5]; thus complexity has a direct impact on overall quality, and specifically on maintainability. While there are many different types of complexity measurements, the one used by the SATC is logical (cyclomatic) complexity, which is computed as the number of linearly independent test paths. Size is one of the oldest and most common forms of software measurement, and the size of modules is itself a quality indicator. Size can be measured by: total lines of code, counting all lines; non-comment, non-blank lines, which reduces the total by the number of blank and comment lines; and executable statements, as defined by a language-dependent delimiter.
Testing reliability metrics: Testing metrics must take two approaches to evaluate reliability comprehensively. The first approach is the evaluation of the test plan, ensuring that the system contains the functionality specified in the requirements; this activity should reduce the number of errors due to lack of expected functionality. The second approach, the one commonly associated with reliability, is the evaluation of the number of errors in the code and the rate of finding and fixing them. The SATC has developed a model that simulates the finding of errors and projects the number of remaining errors and when they will all be identified. To ensure that the system contains the specified functionality, test plans are written that contain multiple test cases; each test case is based on one system state and tests some functions that are based on a related set of requirements. The objective of an effective verification program is to ensure that every requirement is tested, the implication being that if the system passes the test, the requirement's functionality is included in the delivered system. An assessment of the traceability of the requirements to test cases is therefore needed.
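For the first part of this question, a few commonly quoted reliability metrics can be stated explicitly (standard definitions, added here for reference; t_i denotes the observed time between successive failures over n observed failures):

    MTTF  (mean time to failure)           = (t_1 + t_2 + ... + t_n) / n
    MTBF  (mean time between failures)     = MTTF + MTTR, where MTTR is the mean time to repair
    POFOD (probability of failure on demand) = failure-causing requests / total requests
    Defect density                          = known defects / KLOC (thousand lines of code)

None of these is entirely satisfactory on its own: the time-based measures depend on the operational profile under which the system is observed, and defect density measures the code rather than the user-visible failure behaviour.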
(6+6+6)

3.
1. What is the difference between the top-down and the bottom-up integration testing approaches? Explain your answer using an example. Why is the mixed integration testing approach preferred by many testers?
Ans.
Major features
- Bottom-up: allows early testing aimed at proving the feasibility and practicality of particular modules; major emphasis is on module functionality and performance; modules can be integrated in various clusters as desired.
- Top-down: the control program is tested first; modules are integrated one at a time; major emphasis is on interface testing.
Advantages
- Bottom-up: no test stubs are needed; it is easier to adjust manpower needs; errors in critical modules are found early.
- Top-down: no test drivers are needed; the control program plus a few modules forms a basic early prototype; interface errors are discovered early; modular features aid debugging.
Disadvantages
- Bottom-up: test drivers are needed; many modules must be integrated before a working program is available; interface errors are discovered late.
- Top-down: test stubs are needed; the extended early phases dictate a slow manpower buildup; errors in critical modules at low levels are found late.
Comments
- Bottom-up: at any given point, more code has been written and tested than with top-down testing; some people feel that bottom-up is a more intuitive test philosophy.
- Top-down: an early working program raises morale and helps convince management that progress is being made; it is hard to maintain a pure top-down strategy in practice.
The cost of developing drivers and stubs in the pure top-down and bottom-up methods is one reason many testers prefer the mixed (sandwich) integration approach: top-down integration is used for the upper levels of the module hierarchy and bottom-up integration for the lower levels, so that fewer stubs and drivers have to be written. (This should not be confused with big-bang integration, in which all modules are constructed and tested independently and then integrated all at once.)
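As an illustration (the module and function names are invented for the example), the following sketch shows what a stub and a driver look like in C when a module under integration depends on a lower-level routine that is not yet available:

    #include <stdio.h>

    /* Module under integration: computes a final price. It calls get_tax_rate(),
     * a lower-level module that has not been implemented yet. */
    double get_tax_rate(int region);                 /* real implementation pending */

    double final_price(double base, int region)
    {
        return base * (1.0 + get_tax_rate(region));
    }

    /* Stub: a minimal stand-in for the missing lower-level module (top-down style). */
    double get_tax_rate(int region)
    {
        (void)region;
        return 0.10;                                 /* canned value, enough to exercise final_price */
    }

    /* Driver: a throwaway main() that exercises the module (bottom-up style). */
    int main(void)
    {
        printf("price = %.2f\n", final_price(100.0, 1));   /* expect 110.00 with the stub */
        return 0;
    }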
2. Design the black-box test suite for a program that accepts two strings and checks if the first string is a substring of the second string, and displays the number of times the first string occurs in the second string.
3. Explain what you understand by client-server architecture. What are its advantages over the traditional software architecture? How can client-server software be effectively tested?
Ans. The client/server software architecture is a versatile, message-based and modular infrastructure that is intended to improve usability, flexibility, interoperability, and scalability compared with centralized, mainframe, time-sharing computing. A client is defined as a requester of services and a server as the provider of services; a single machine can be both a client and a server depending on the software configuration.
Advantages: This approach introduced a database server to replace the file server. Using a relational database management system (DBMS), user queries can be answered directly. The client/server architecture reduces network traffic by providing a query response rather than a total file transfer, and it improves multi-user updating through a GUI front end to a shared database. In client/server architectures, Remote Procedure Calls (RPCs) or Structured Query Language (SQL) statements are typically used to communicate between the client and the server.
(6+6+6)
4. Consider the following program segment.

    /* sort takes an integer array and sorts it in ascending order */
    void sort(int a[], int n)
    {
        int i, j, temp;                      /* temp added to the declaration; the published listing is garbled here */

        for (i = 0; i < n; i++)              /* loop headers reconstructed from the truncated fragment */
            for (j = i + 1; j < n; j++)
                if (a[i] > a[j]) {
                    temp = a[i];
                    a[i] = a[j];
                    a[j] = temp;
                }
    }
a) Draw the control flow graph for this program segment.
b) Determine the cyclomatic complexity for this program. (Show the intermediate steps in your computation. Writing only the final result is not sufficient.)
c) How is the cyclomatic complexity metric useful?
(6+6+6)

5.
a) What problems would you face if you were developing several versions of the same product according to a client's request and you were not using any configuration management tools?
b) Discuss the relative merits of ISO 9001 certification and the SEI CMM-based quality assessment.
c) What do you understand by performance testing? What are the different types of performance testing that should be performed on a software product?
(6+6+6)

6.
a) When during the development process is compliance with coding standards checked? List two coding standards each for (i) enhancing readability of the code, and (ii) reuse of the code.
b) For the program segment given in Question 4, design the branch coverage and statement coverage test suites. Which of these is the stronger testing? Why?
c) Distinguish between the static and dynamic analysis of a program. How are static and dynamic program analysis results useful?
(6+6+6)

7.
a) Distinguish between software verification and software validation. When are verification and validation performed during the software life cycle? Can one be used in place of the other?
Ans. Validation: are we producing the right product? Verification: are we producing the product right?
Verification is the process of checking that the software conforms to its specification, whereas validation checks that the software does what the customer actually expects. Both are performed throughout the software life cycle, and there are two complementary approaches to system checking and analysis in verification and validation (V&V). Software inspections or peer reviews analyze and check representations such as the requirements document, design diagrams and program source code; they can be used at all stages of the software engineering process and are static V&V techniques, because there is no need to run the software on a computer. The other approach is software testing, which involves running an implementation of the software and observing its operational behaviour to check that it performs as required; testing is a dynamic V&V technique and can also be used at different stages of the software engineering process. Validation testing is used to show that the software is what the customer wants (that it meets the requirements), while defect testing is intended to reveal defects in the software. Neither activity can be used in place of the other, since they answer different questions.
b) What is the difference between process metrics and product metrics? Give examples of each.
c) Briefly highlight the difference between code inspection and code walkthrough. Compare the relative merits of code inspection and code walkthrough.
Ans. A walkthrough is a co-operative, organised activity involving several participants. The participants select some test cases and simulate execution of the code by hand. One participant, usually the developer of the software being reviewed, narrates a description of the software, and the rest of the review group provides feedback throughout the presentation. In a walkthrough, the producer describes the product and asks for comments from the participants; these gatherings generally serve to inform participants about the product rather than to correct it.
Inspection, by contrast, is an analysis aimed explicitly at the discovery of commonly made errors: the code or design is examined by checking it for the presence of such errors. Inspections require a high degree of preparation by the review participants, but the benefits include a more systematic review of the software and a more controlled and less stressful meeting.
Software inspections are a disciplined engineering practice for detecting and correcting defects in software artifacts and preventing their leakage into field operations. The Moderator is responsible for ensuring that the inspection procedures are performed throughout the entire inspection process; the responsibilities include:
i. verifying the work product's readiness for inspection;
ii. verifying that the entry criteria are met;
iii. assembling an effective inspection team;
iv. keeping the inspection meeting on track;
v. verifying that the exit criteria are met.
Software inspections are a reasoning activity performed by practitioners playing the defined roles of Moderator, Recorder, Reviewer, Reader, and Producer. The major difference between walkthroughs and inspections is that an inspection process involves the collection of data that can be fed back into the quality of the development and review process. Among quality-control methods, inspection has proven extremely effective for the specific objective of product verification in many development activities. It is a structured method of quality control: it follows a specified series of steps that define what can be inspected, when it can be inspected, who can inspect it, what preparation is needed for the inspection, how the inspection is to be conducted, what data are to be collected, and what the follow-up to the inspection is. Inspections therefore give a project close procedural control and repeatability. Reviews and walkthroughs, however, have less structured procedures; they can have many purposes and formats. Reviews can be used to make decisions and resolve issues of design and development, and they can also be used as a forum for information swapping or brainstorming. Walkthroughs are used for the resolution of design or implementation issues. Both can range from formalized, following a predefined set of procedures, to completely informal, and so they lack the close procedural control and repeatability of inspections.
(6+6+6)
July, 2004
Note:
1. Answer question 1 and any FOUR questions from 2 to 7.
2. Parts of the same question should be answered together and in the same sequence.
Time: 3 Hours                                          Total Marks: 100
1.
1. Explain why it is not necessary for a program to be completely free of defects before it is delivered to its customers. To what extent can testing be used to validate that the program is fit for its purpose?
2. Discuss the differences between verification and validation, and explain why validation is a particularly difficult process.
3. Discuss the differences in testing a business-critical system, a safety-critical system, and a system whose failure would not seriously affect lives, health or business.
4. Explain the differences between testing a system based on the object-oriented and the procedure-oriented approach.
5. How is the cyclomatic complexity metric useful in the testing process?
6. Briefly list the parties who have a vested interest in software testing, and state their interests.
7. What are the two basic components of a testing strategy? Explain them briefly.
(7x4)

2.
1. Explain the various aspects involved in testing web applications.
2. Discuss in detail the security testing issues of web-based programs.
(9+9)

3.
1. What is a test plan? Discuss the features of good test plans.
2. Normally, test design methods involve a large number of test cases and it is nearly impossible to execute all of them. Describe and illustrate the various strategies and criteria employed by practitioners to reduce the number of test cases.
(6+12)

4.
1. Explain the flow graph notation. Use this notation to represent the structured programming constructs.
2. Develop a test plan for exhaustive testing of a program that computes the roots, of all possible types, of a quadratic equation.
(8+10)

5.
1. Static analysis is a technique for assessing the structural characteristics of source code. Explain this technique by taking a simple example. Bring out the utility and limitations of static analyzers.
2. Explain the concept and utility of the function point metric.
(12+6)

6.
Differentiate between black-box and white-box testing. Consider a program that reads three integer values, representing the sides of a triangle, and prints a message stating whether the triangle is scalene, isosceles or equilateral. Develop a flowchart of the program. Suggest a white-box testing methodology.
(18)

7.
1. Explain what you understand by software process quality and software product quality. How would you assure the quality of a software product?
2. A commonly used software quality measure is the number of known errors per thousand lines of product source code. Compare the usefulness of this measure for developers and users. What are the possible problems with relying on this measure as the sole expression of software quality?
(10+8)
January, 2004
Note:
1. Answer question 1 and any FOUR questions from 2 to 7.
2. Parts of the same question should be answered together and in the same sequence.
Time: 3 Hours                                          Total Marks: 100
1. State whether the following statements are TRUE or FALSE. In each case, justify your answer using one or two sentences. Irrelevant and unnecessarily long answers shall be avoided.
1. A system test plan can be prepared immediately after the requirements specification phase is complete.
Ans. True: the system test plan is derived from the SRS document, which describes the externally observable behaviour expected of the system, so it can be prepared as soon as the SRS is complete.
2. The effectiveness of a test suite in detecting errors can be determined by counting the total number of test cases present in the test suite.
Ans. False: effectiveness depends on how well the test cases exercise the different execution paths and input classes of the system, and therefore on how many defects they can expose, not on the sheer number of test cases.
3. One of the objectives of system testing is to check whether coding standards have been adhered to or not.
Ans. False: system testing checks the functionality of the complete system against its requirements; adherence to coding standards is checked during code review.
4. Error and failure are synonymous in software testing terminology.
Ans. False: an error is usually a programmer action or omission that results in a fault; a fault is a software defect that can cause a failure; and a failure is the unacceptable departure of a program operation from the program requirements.
5. Development of suitable driver and stub functions is essential for carrying out effective system testing of a product.
Ans. False: stubs and drivers are needed during unit and integration testing, when parts of the system are missing; during system testing the fully integrated system is available, so stubs and drivers are not required.
6. The main purpose of integration testing is to find design errors.
Ans. True: integration testing primarily exercises the interfaces between modules, and interfacing errors largely originate in the (module interface) design.
7. Introduction of additional sequence type statements in a program would not increase the program's cyclomatic complexity.
Ans. True: cyclomatic complexity depends only on the number of decision (branch) points; introducing a new independent path requires a conditional statement, so purely sequential statements do not increase it.
(7x4)

2.
1. Usability of a software product is tested during which type of testing: unit, integration or system testing? How is usability tested?
Ans. Usability of a software product is tested during system testing. It cannot be tested during unit testing, because a single unit does not define the whole working of the software; unit testing can only establish that an individual unit works properly in isolation. Integration testing checks whether all the units of the software work together as a system, since a unit that works correctly in isolation may cause an error when integrated with another, but it is not concerned with usability either. It is therefore during system testing that usability is examined. In usability testing we test the system for its usability in a realistic environment; for example, we check:
- whether the system behaves according to its requirements specification;
- whether the system gives the output the user wants;
- whether the system accepts only valid input and rejects incorrect input;
- whether the system's interface matches the requirements of the user;
- whether the software is fit for the purpose for which it was developed.
2. Explain the differences between testing a system based on the object-oriented and the procedure-oriented approach.
Ans. Unit testing: In the procedure-oriented approach the units for unit testing cannot always be identified easily, while in object-oriented systems the units for testing are easily identified, since the system is already divided into individual objects and classes; after testing the individual units we can proceed further. Integration testing: Integration testing is more demanding in the procedure-oriented approach, because modules have greater dependency on each other; to integrate procedure-oriented modules, more stubs and drivers are required (since the missing modules are not yet working), and a top-down approach is usually better. In contrast, under the object-oriented approach individual units integrate more easily, which facilitates a bottom-up approach. The differences can be summarized as follows.

Unit Testing
- With object-oriented software: unit testing is really integration testing; test logically related operations and data.
- With procedural software: test individual, functionally cohesive operations.

Integration Testing
- With object-oriented software: object-oriented unit testing finds no more bugs related to global data (though there could be errors associated with common global objects and classes).
- With procedural software: test an integrated, common set of units (operations and global data).

Boundary Value Testing
- With object-oriented software: of limited value if a strongly typed object-oriented language is used and proper data abstraction is used to restrict the values of attributes.
- With procedural software: used on units or integrated units and systems.

Basis Path Testing
- With object-oriented software: limited to the operations of objects; must address exception handling and concurrency issues (if applicable); the lowered complexity of objects lessens the need for this.
- With procedural software: generally performed on units.

Equivalence and Black-Box Testing
- With object-oriented software: emphasized for object-oriented software; objects are black boxes, equivalence classes are messages.
- With procedural software: used on units, and on integrated units and systems.

3. Design the black-box test suite for a function that accepts two pairs of floating point numbers representing two coordinate points. Each pair of coordinate points represents the center of a circle and a point on its circumference. The function prints whether the two circles are intersecting, one is contained within the other, or they are disjoint.
(6+4+8)

3.
1. Distinguish between software verification and software validation. When are verification and validation performed during the software life cycle? Can one be used in place of the other?
2. What is a coding standard? Give the importance of coding standards and identify the problems that might occur if the engineers of a software development organization do not adhere to any coding standard.
3. How is data flow testing performed? Is it possible to design data flow test cases manually? Explain your answer.
(6+6+6)

4. Consider the following program segment.
    /* listing cleaned up: the function name and the 'then' keywords in the published listing were adjusted to valid C */
    int find_maximum(int i, int j, int k)
    {
        int max;

        if (i > j)
            if (i > k)
                max = i;
            else
                max = k;
        else
            if (j > k)
                max = j;
            else
                max = k;

        return max;
    }
a) Draw the control flow graph for this program segment.
b) Determine the cyclomatic complexity for this program. (Show the intermediate steps in your computation. Writing only the final result is not sufficient.)
c) How is the cyclomatic complexity metric useful in the testing process?
(6+7+5)

5.
a) Explain how code inspection is carried out. What are some of the important types of defects that can be detected during code inspection?
b) Do you agree with the following statement: "Modern quality assurance paradigms are centered around carrying out thorough product testing." Justify your answer.
c) What do you understand by performance testing? What are the different types of performance testing that should be performed on a software product?
(6+6+6)

6.
a) What is the difference between black-box and white-box testing? Can one be used in place of the other?
b) For the program segment given in Question 4, design the branch coverage and statement coverage test suites. Which of these is the stronger testing?
c) Distinguish between the static and dynamic analysis of a program. How are static and dynamic program analysis results useful?
(6+6+6)
7.
a) Normally, as testing continues on a software product, more and more errors are discovered. Explain how you would decide when to stop testing.
b) What is the difference between process metrics and product metrics? Give examples of each.
c) List five salient requirements that a software development organization must comply with before it can be awarded the ISO 9001 certificate.
(6+6+6)