Software testing

Software testing is the process used to help identify the correctness, completeness, security and quality of developed computer software. With that in mind, testing can never completely establish the correctness of arbitrary computer software. In computability theory, a field of computer science, an elegant mathematical proof shows that it is impossible to solve the halting problem: the question of whether an arbitrary computer program will enter an infinite loop, or halt and produce output. In other words, testing is criticism or comparison, that is, comparing the actual value with the expected one.

There are many approaches to software testing, but effective testing of complex products is essentially a process of investigation, not merely a matter of creating and following rote procedure. One definition of testing is "the process of questioning a product in order to evaluate it", where the "questions" are things the tester tries to do with the product, and the product answers with its behavior in reaction to the tester's probing. Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product: putting the product through its paces.

The quality of the application can, and normally does, vary widely from system to system, but some of the common quality attributes include reliability, stability, portability, maintainability and usability. Refer to the ISO standard ISO 9126 for a more complete list of attributes and criteria.

Introduction

In general, software engineers distinguish software faults from software failures. In case of a failure, the software does not do what the user expects. A fault is a programming error that may or may not actually manifest as a failure; it can also be described as an error in the correctness of the semantics of a computer program. A fault becomes a failure when the exact computation conditions are met, one of them being that the faulty portion of the software actually executes on the CPU. A fault can also turn into a failure when the software is ported to a different hardware platform or a different compiler, or when the software gets extended.

Software testing may be viewed as a sub-field of software quality assurance (SQA), but it typically exists independently, and some companies have no SQA function at all. In SQA, software process specialists and auditors take a broader view of software and its development: they examine and change the software engineering process itself to reduce the number of faults that end up in the code, or to deliver faster.

Regardless of the methods used or the level of formality involved, the desired result of testing is a level of confidence in the software, so that the developers are confident the software has an acceptable defect rate. What constitutes an acceptable defect rate depends on the nature of the software. An arcade video game designed to simulate flying an airplane would presumably have a much higher tolerance for defects than software used to control an actual airliner.

A problem with software testing is that the number of defects in a software product can be very large, and the number of configurations of the product larger still. Bugs that occur infrequently are difficult to find in testing. A rule of thumb is that a system expected to function without faults for a certain length of time must have already been tested for at least that length of time. This has severe consequences for projects that aim to produce long-lived, reliable software.

A common practice is for software testing to be performed by an independent group of testers after the software product is finished and before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays. Another practice is to start software testing at the same moment the project starts and continue it as an ongoing process until the project finishes. Another common practice is for test suites to be developed during technical support escalation procedures; such tests are then maintained in regression testing suites to ensure that future updates to the software do not repeat any of the known mistakes.

It is commonly believed that the earlier a defect is found, the cheaper it is to fix. In counterpoint, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process unit tests are written first, by the programmers (often with pair programming in the extreme programming methodology). These tests fail initially, as they are expected to; then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests relegated to a partially manual build acceptance process). The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
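As a minimal sketch of the test-first idea described above, the small test below is conceived before the function it exercises; it fails until an implementation that satisfies it is written. The function clamp() and its behavior are illustrative assumptions, not something taken from this text.

    #include <assert.h>

    /* Not yet written when the test is first authored; until a working
       implementation exists, the test suite fails, as expected. */
    static int clamp(int value, int low, int high) {
        if (value < low)  return low;
        if (value > high) return high;
        return value;
    }

    /* Unit test written first, test-driven style. */
    static void test_clamp(void) {
        assert(clamp(5, 1, 10) == 5);    /* in range: unchanged      */
        assert(clamp(-3, 1, 10) == 1);   /* below range: pinned low  */
        assert(clamp(42, 1, 10) == 10);  /* above range: pinned high */
    }

    int main(void) {
        test_clamp();
        return 0;  /* an assert failure marks the build red */
    }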

White-box and black-box testing

In the terminology of testing professionals (software and some hardware), the phrases "white box" (or "glass box") and "black box" testing refer to whether the test case developer has access to the source code of the software under test, and whether the testing is done through (simulated) user interfaces or through application programming interfaces, either exposed (published) by the target or internal to it.

In white box testing the test developer has access to the source code and can write code that links into the libraries which are linked into the target software. This is typical of unit tests, which test only parts of a software system. They ensure that components used in the construction are functional and robust to some degree.

In black box testing the test engineer accesses the software only through the same interfaces that the customer or user would, or possibly through remotely controllable automation interfaces that connect another computer or another process to the target of the test. For example, a test harness might push virtual keystrokes and mouse or other pointer operations into a program through an inter-process communication mechanism, with the assurance that these events are routed through the same code paths as real keystrokes and mouse clicks.

In recent years the term grey (or gray, in the United States) box testing has come into common usage. The typical grey box tester is permitted to set up or manipulate the testing environment, such as seeding a database, and can view the state of the product after their actions, such as performing an SQL query on the database to verify the values of columns. The term is used almost exclusively of client-server testers or others who use a database as a repository of information, but it can also apply to a tester who has to manipulate XML files (a DTD or an actual XML file) or configuration files directly. It can also be used of testers who know the internal workings or algorithm of the software under test and can write tests specifically for the anticipated results.

Alpha, Beta, and Gamma testing

In software development, testing is usually required before release to the general public. The phase of development before release is known as the alpha phase, and testing during this phase is known as alpha testing. In the first stage of alpha testing, developers test the software using white box techniques. Additional inspection is then performed using black box or grey box techniques, usually by a dedicated testing team; this is often known as the second stage of alpha testing.

Once the alpha phase is complete, development enters the beta phase. Versions of the software, known as beta versions, are released to a limited audience outside of the company so that further testing can ensure the product has few faults or bugs. Sometimes beta versions are made available to the open public to increase the feedback field to a maximal number of future users. Testing during the beta phase, informally called beta testing, is generally constrained to black box techniques, although a core of test engineers is likely to continue white box testing in parallel to the beta tests. Thus the term beta test can refer to the stage of the software (closer to release than being "in alpha") or to the particular group and process at work during that stage. A tester might therefore continue to work in white box testing while the software is "in beta" (a stage) but would then not be part of "the beta test" (group/activity).

Gamma testing is a little-known informal phrase that refers derisively to the release of "buggy" (defect-ridden) products. It is not a term of art among testers, but rather an example of referential humor. Cynics have referred to all software releases as "gamma testing", since defects are eventually found in almost all commercial, commodity and publicly available software. (Some classes of embedded and highly specialized process control software are tested far more thoroughly and subjected to other forms of rigorous software quality assurance, particularly those that control "life critical" equipment where a failure can result in injury or death; see Ivars Peterson's Fatal Defect for counterexamples.)

Where alpha and beta refer to stages of the software before release (and also implicitly to the size of the testing community and the constraints on the testing methods), white box, black box, and grey box refer to the ways in which the tester accesses the target.

System testing

Most software produced today is modular. System testing is a phase of software testing in which testers look for communication flaws between modules, either information not being passed at all or incorrect information being passed. More generally, it is testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

Regression testing

Main article: regression testing

A regression test re-runs previous tests against the changed software to ensure that the changes made in the current software do not affect the functionality of the existing software. Regression testing can be performed either by hand or by software that automates the process.

Test Cases, Suites, Scripts, and Scenarios

Black box testers usually write test cases for the majority of their testing activities. A test case is usually a single step and its expected result, along with various additional pieces of information. It can occasionally be a series of steps but with one expected result or expected outcome. Optional fields include a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate them; these past results would usually be stored in a separate table.

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and it always contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests. Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario.

Most white box testers write and use test scripts in unit, system, and regression testing. Test scripts should be written for modules with the highest risk of failure and the highest impact if the risk becomes an issue. Most companies that use automated testing refer to the code that drives their tests as test scripts.

A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. It can be as simple as a diagram of a testing environment or it can be a description written in prose. The ideal scenario test has five key characteristics: it is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate. Scenarios usually differ from test cases in that test cases are single steps while scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests. See An Introduction to Scenario Testing. Scenario testing is similar to, but not the same as, session-based testing, which is more closely related to exploratory testing; the two concepts can be used in conjunction. See Adventures in Session-Based Testing and Session-Based Test Management.

A Sample Testing Cycle

Although testing varies between organizations, there is a common cycle to testing:

1. Requirements Analysis: Testing should begin in the requirements phase of the software development life cycle (SDLC).
2. Design Analysis: During the design phase, testers work with developers to determine what aspects of a design are testable and under what parameters those tests will operate.
3. Test Planning: Test strategy, test plan(s), and test bed creation.
4. Test Development: Test procedures, test scenarios, test cases, and test scripts to use in testing the software.
5. Test Execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
6. Test Reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
7. Retesting the Defects.

Not all errors or defects reported must be fixed by a software development team. Some may be caused by errors in configuring the test software to match the development or production environment. Some defects can be handled by a workaround in the production environment. Others might be deferred to future releases of the software, or the deficiency might be accepted by the business user. There are yet other defects that may be rejected by the development team (with due reason, of course) if they deem it inappropriate to be called a defect.

Code Coverage

For the main article, see Code coverage.

Code coverage is inherently a white box testing activity. The target software is built with special options or libraries and/or run under a special environment such that every function that is exercised (executed) in the program is mapped back to the function points in the source code. This process allows developers and quality assurance personnel to look for parts of a system that are rarely or never accessed under normal conditions (error handling and the like) and helps reassure test engineers that the most important conditions (function points) have been tested. Test engineers can look at code coverage test results to help them devise test cases and input or configuration sets that will increase the coverage over vital functions.

Two common forms of code coverage used by testers are statement (or line) coverage and path (or edge) coverage. Line coverage reports the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches, or code decision points, were executed to complete the test. Both report a coverage metric, measured as a percentage.

Generally, code coverage tools and libraries exact a performance, memory or other resource cost which is unacceptable for normal operation of the software, so they are only used in the lab. As one might expect, there are classes of software that cannot feasibly be subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing. There are also some sorts of defects which are affected by such tools: in particular, some race conditions or similarly real-time-sensitive operations are impossible to detect while running under code coverage environments, and conversely some of these defects are triggered only as a result of the additional overhead of the testing code.
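As a rough illustration of the line-versus-branch distinction described above, consider the small hypothetical function below. A single test with a negative input executes every line, so a line coverage tool would report 100%; a branch (edge) coverage tool would still flag the untaken "false" outcome of the if until a second case is added. The function and tests are illustrative only and not tied to any particular coverage tool.

    #include <assert.h>

    /* Clamps a raw reading to a non-negative percentage. */
    static int to_percent(int raw) {
        int value = raw;
        if (value < 0)
            value = 0;     /* executed only when raw < 0 */
        return value;
    }

    int main(void) {
        /* This single test executes every line of to_percent
           (100% line coverage) ... */
        assert(to_percent(-5) == 0);

        /* ... but only this second case exercises the "false" outcome
           of the decision point, which branch coverage would report
           as missing without it. */
        assert(to_percent(42) == 42);
        return 0;
    }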

Controversy

There is considerable controversy among testing writers and consultants about what constitutes responsible software testing. The self-declared members of the Context-Driven School of testing (http://www.context-driven-testing.com) believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation. This belief directly contradicts standards such as the IEEE 829 test documentation standard, and organizations such as the FDA who promote them. Some of the major controversies include:

Agile vs. Traditional

Starting around 1990, a new style of writing about testing began to challenge what had come before. The seminal work in this regard is widely considered to be Testing Computer Software, by Cem Kaner. Instead of assuming that testers have full access to source code and complete specifications, these writers, who included James Bach and Cem Kaner, argued that testers must learn to work under conditions of uncertainty and constant change. Meanwhile, an opposing trend toward process "maturity" also gained ground, in the form of the Capability Maturity Model. The agile testing movement (which includes but is not limited to forms of testing practiced on agile development projects) has popularity mainly in commercial circles, whereas the CMM was embraced by government and military software providers.

Exploratory vs. Scripted

Exploratory testing means simultaneous learning, test design, and test execution. Scripted testing means that learning and test design happen prior to test execution. Exploratory testing is very common, but in most writing and training about testing it is barely mentioned and generally misunderstood. Many writers consider it a dangerous practice, while some consider it a primary and essential one. Structured exploratory testing is a compromise used when the testers are familiar with the software: a deliberately vague test plan is written up, describing what functionality needs to be tested but not how, allowing the individual testers to choose the method of testing.

Manual vs. Automated

Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. Others, such as advocates of agile development, recommend automating 100% of all tests. A challenge with automation is that automated testing requires automated test oracles (an oracle is a mechanism or principle by which a problem in the software can be recognized). Such tools have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously) and in checking for intermittent errors in software. The success of automated software testing depends on complete and comprehensive test planning. Software development strategies such as test-driven development are highly compatible with the idea of devoting a large part of an organization's testing resources to automated testing. Many large software organizations perform automated testing; some have developed their own automated testing environments specifically for internal development and not for resale.
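As a small sketch of what an automated oracle can look like, the check below recognizes a problem in a sorting routine without listing expected outputs case by case: the "oracle" is simply the property that the result must be in non-decreasing order. The choice of qsort as the system under test and the sample data are illustrative assumptions.

    #include <assert.h>
    #include <stdlib.h>

    /* Comparator for the routine under test (the C library qsort). */
    static int cmp_int(const void *a, const void *b) {
        return (*(const int *)a > *(const int *)b) -
               (*(const int *)a < *(const int *)b);
    }

    /* Oracle: a correctly sorted array must be non-decreasing. */
    static int is_sorted(const int *v, size_t n) {
        for (size_t i = 1; i < n; i++)
            if (v[i - 1] > v[i]) return 0;
        return 1;
    }

    int main(void) {
        int data[] = { 9, -3, 7, 7, 0, 42 };
        size_t n = sizeof data / sizeof data[0];
        qsort(data, n, sizeof data[0], cmp_int);
        assert(is_sorted(data, n));  /* the oracle flags any ordering defect */
        return 0;
    }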

Certification

Many certification programs exist to support the professional aspirations of software testers. These include the CSQE program offered by the American Society for Quality, the CSTE/CSQA program offered by QAI, the Quality Assurance Institute, and the ISTQB certifications offered by the International Software Testing Qualifications Board (ISTQB). No certification currently offered actually requires the applicant to demonstrate the ability to test software, and no certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification.

Custodiet Ipsos Custodes

One principle in software testing is best summed up by the classical Latin question posed by Juvenal, Quis custodiet ipsos custodes? (Who watches the watchmen?), or, informally, by the "Heisenbug" concept. Heisenberg's uncertainty principle makes it clear that any form of observation is also an interaction; likewise, the act of testing can affect that which is being tested. In practical terms, the test engineer is testing software (and sometimes hardware or firmware) with other software (and hardware and firmware). The tools can have their own defects, and the process can fail in ways that are not the result of defects in the target but are artifacts of the harness.

There are metrics being developed to measure the effectiveness of testing. One method is analyzing code coverage (this is highly controversial): everyone can at least agree on which areas are not being covered at all and try to improve coverage of those areas. Finally, there is the analysis of historical find rates. By measuring how many bugs are found and comparing them to predicted numbers (based on past experience with similar projects), certain assumptions regarding the effectiveness of testing can be made. While not an absolute measurement of quality, if a project is halfway complete and no defects have been found, then changes may be needed to the procedures being employed by QA.

Regression testing

Regression testing is any type of software testing which seeks to uncover regression bugs. Regression bugs occur whenever software functionality that previously worked as desired stops working, or no longer works in the way that was previously planned. Typically regression bugs occur as an unintended consequence of program changes.

Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. Experience has shown that as software is developed, this kind of re-emergence of faults is quite common. Sometimes it occurs because a fix gets lost through poor revision control practices (or simple human error in revision control), but just as often a fix for a problem will be "fragile": if some other change is made to the program, the fix no longer works. Finally, it has often been the case that when some feature is redesigned, the same mistakes are made in the redesign that were made in the original implementation of the feature.

Therefore, in most software development situations it is considered good practice that when a bug is located and fixed, a test that exposes the bug is recorded and regularly re-run after subsequent changes to the program. Although this may be done through manual testing procedures, it is often done using automated testing tools. Such a test suite contains software tools that allow the testing environment to execute all the regression test cases automatically; some projects even set up automated systems to re-run all regression tests at specified intervals and report any regressions. Common strategies are to run such a system after every successful compile (for small projects), every night, or once a week.
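As a small illustration of recording a test that exposes a fixed bug, suppose a hypothetical date routine once returned the wrong length for February in leap years. The check below encodes that exact failing case so any later change that re-introduces the fault is caught on the next automated run; the function, names and values are assumed for illustration.

    #include <assert.h>

    /* Returns the number of days in a month (1-12) of a given year. */
    static int days_in_month(int month, int year) {
        static const int days[] = { 31, 28, 31, 30, 31, 30,
                                    31, 31, 30, 31, 30, 31 };
        int leap = (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
        if (month == 2 && leap)
            return 29;   /* the branch that was once missing */
        return days[month - 1];
    }

    int main(void) {
        /* Regression test recorded when the leap-year bug was fixed:
           February 2000 must report 29 days, not 28. */
        assert(days_in_month(2, 2000) == 29);

        /* A couple of ordinary cases kept in the same suite. */
        assert(days_in_month(2, 1900) == 28);
        assert(days_in_month(12, 2023) == 31);
        return 0;
    }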

Regression testing is an integral part of the extreme programming software development methodology. In this methodology, design documents are replaced by extensive, repeatable, and automated testing of the entire software package at every stage in the software development cycle.

Black box testing

Black box testing, also called concrete box or functional testing, is used in computer programming, software engineering and software testing to check that the outputs of a program, given certain inputs, conform to the functional specification of the program. The term black box indicates that the internal implementation of the program being executed is not examined by the tester. For this reason black box testing is not normally carried out by the programmer; in most real-world engineering firms, one group does design work while a separate group does the testing. A complementary technique, white box testing or structural testing, uses information about the structure of the program to check that it performs correctly.




Equivalence partitioning

One technique in black box testing is equivalence partitioning. Equivalence partitioning is designed to minimize the number of test cases by dividing the input space in such a way that the system is expected to behave the same way for all inputs in a given equivalence partition. Test inputs are then selected from each partition. Equivalence partitions are designed so that every possible input belongs to one and only one equivalence partition.
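As a sketch of how such partitions translate into tests, suppose a hypothetical function accepts an integer score from 0 to 100 and classifies it as pass or fail at 50. Four partitions cover every possible input (below 0, the failing band 0-49, the passing band 50-100, above 100), so one representative per partition suffices; the function, ranges and names are assumptions made for illustration only.

    #include <assert.h>

    enum grade { GRADE_INVALID = -1, GRADE_FAIL = 0, GRADE_PASS = 1 };

    static enum grade classify(int score) {
        if (score < 0 || score > 100) return GRADE_INVALID;
        return score >= 50 ? GRADE_PASS : GRADE_FAIL;
    }

    int main(void) {
        /* One representative input per equivalence partition. */
        assert(classify(-7)  == GRADE_INVALID);  /* out of range (low)   */
        assert(classify(30)  == GRADE_FAIL);     /* valid, failing band  */
        assert(classify(75)  == GRADE_PASS);     /* valid, passing band  */
        assert(classify(123) == GRADE_INVALID);  /* out of range (high)  */
        return 0;
    }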


Disadvantages

• Does not test every input
• No guidelines for choosing inputs
• Heuristic-based
• Very limited focus


Boundary value analysis

Boundary value analysis is a black box testing technique in which input values at the boundaries of the input domain are tested. It has been widely recognized that input values at the extreme ends of, and just outside of, input domains tend to cause errors in system functionality. In boundary value analysis, values at and just beyond the boundaries of the input domain are used to generate test cases that check proper functionality of the system.

As an example, for a system that accepts as input a number between one and ten, boundary value analysis would indicate that test cases should be created for the lower and upper bounds of the input domain (1, 10) and for values just outside these bounds (0, 11), as sketched in the code after this list. Boundary value analysis is an excellent way to catch common user input errors which can disrupt proper program functionality, and it complements the technique of equivalence partitioning.

Some of the advantages of boundary value analysis are:

• Very good at exposing potential user interface/user input problems
• Very clear guidelines for determining test cases
• Very small set of test cases generated

Disadvantages of boundary value analysis:

• Does not test all possible inputs
• Does not test dependencies between combinations of inputs
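A minimal sketch of the one-to-ten example above, assuming a hypothetical accepts_value() validator; the four boundary-derived cases (0, 1, 10, 11) are exactly the tests boundary value analysis calls for.

    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical validator: the system accepts numbers from 1 to 10. */
    static bool accepts_value(int n) {
        return n >= 1 && n <= 10;
    }

    int main(void) {
        assert(!accepts_value(0));   /* just below the lower bound */
        assert( accepts_value(1));   /* lower bound                */
        assert( accepts_value(10));  /* upper bound                */
        assert(!accepts_value(11));  /* just above the upper bound */
        return 0;
    }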


Smoke testing

A subset of black box testing is the smoke test. A smoke test is a cursory examination of all of the basic components of a software system to ensure that they work. Typically, smoke testing is conducted immediately after a software build is made. The term comes from electrical engineering, where in order to test electronic equipment, power is applied and the tester checks that the product does not spark or smoke.

Black box testing as a whole is testing based on an analysis of the specification of a piece of software, without reference to its internal workings; the goal is to test how well the component conforms to its published requirements.
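A rough sketch of a post-build smoke test in the spirit described above: it only checks that each major component starts at all, leaving deeper behavior to later test phases. The component names and start-up functions are hypothetical.

    #include <stdio.h>

    /* Hypothetical component start-up hooks; each returns 0 on success. */
    static int config_load(void)    { return 0; }
    static int storage_open(void)   { return 0; }
    static int network_listen(void) { return 0; }

    int main(void) {
        /* Smoke test: does every basic component come up at all? */
        if (config_load()    != 0) { puts("FAIL: config");  return 1; }
        if (storage_open()   != 0) { puts("FAIL: storage"); return 1; }
        if (network_listen() != 0) { puts("FAIL: network"); return 1; }
        puts("SMOKE OK");
        return 0;
    }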

White box testing

White box testing, clear box testing, glass box testing or structural testing is used in computer programming, software engineering and software testing to check that the outputs of a program, given certain inputs, conform to the structural specification of the program. The term white box (or glass box) indicates that testing is done with knowledge of the code used to implement the functionality under test. For this reason, a programmer is usually required to perform white box tests. Often, multiple programmers will write tests based on the same code, so as to gain varying perspectives on possible outcomes. A complementary technique, black box testing or functional testing, performs testing based on previously understood requirements (or understood functionality), without knowledge of how the code executes.

Glass Box Testing

By Crista Risley

What is black box/glass box testing? [2][4]

Black-box and glass-box are test design methods. Black-box test design treats the system as a "black box", so it does not explicitly use knowledge of the internal structure; it is usually described as focusing on testing functional requirements. Glass-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Glass box testing requires intimate knowledge of program internals, while black box testing is based solely on knowledge of the system requirements. It is evident in the software engineering literature that the primary effort to develop a testing methodology has been devoted to glass box tests, which are primarily concerned with program internals. However, since the importance of black box testing has gained general acknowledgement, a number of useful black box testing techniques have also been developed. It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they are implemented.

Glass box testing definition [5]

Glass box testing covers software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear-box testing, since even "white boxes" are considered opaque and do not really permit visibility into the code.

Synonyms for Glass box testing

• White Box testing
• Structural testing
• Clear Box testing
• Open Box Testing

Types of Glass Box testing [6]

• Static and Dynamic Analysis: static analysis techniques do not necessitate the execution of the software, whereas dynamic analysis is what is generally considered "testing", i.e. it involves running the system.
• Statement Coverage: testing performed so that every statement is executed at least once.
• Branch Coverage: running a series of tests to ensure that all branches are tested at least once.
• Path Coverage: testing all paths.
• All-definition-use-path Coverage: all paths between the definition of a variable and the use of that definition are identified and tested.

Tradeoffs of Glass box testing [3]

Advantages:

• forces the test developer to reason carefully about the implementation
• approximates the partitioning done by execution equivalence
• reveals errors in "hidden" code:
  o beneficent side-effects
  o optimizations (e.g. a charTable that changes representations when size > 100)

Disadvantages:

• expensive
• misses cases omitted from the code

Combining glass-box and black-box [3]

Combining the two is the most common approach in practice. Consider the following function:

    int abs(int x) {
        // effects: returns -x if x < 0, x otherwise
        if (x < -1) return -x;
        else return x;
    }

Black-box subdomains: x < 0, x >= 0. Glass-box subdomains: x < -1, x >= -1. Intersecting the subdomains gives: x < -1, x = -1, x >= 0. Testing these subdomains reveals the error (abs(-1) returns -1 instead of 1).

The intuition is that there is a good probability of error if either of the following occurs:

• the program executes two inputs the same way even though they have different specifications
• the program executes differently under two inputs that have the same specification
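A brief sketch of tests driven by the intersected subdomains above; the x = -1 case is the one that exposes the off-by-one comparison in the sample abs, so running this program aborts on the final assertion, which is precisely the defect being revealed.

    #include <assert.h>

    /* The faulty sample function from the text. */
    static int sample_abs(int x) {
        // effects: returns -x if x < 0, x otherwise
        if (x < -1) return -x;
        else return x;
    }

    int main(void) {
        assert(sample_abs(-5) == 5);   /* subdomain x < -1             */
        assert(sample_abs(0)  == 0);   /* subdomain x >= 0             */
        assert(sample_abs(-1) == 1);   /* subdomain x = -1: this assert
                                          fails and reveals the defect */
        return 0;
    }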

Glass-Box Testing [1]

In this testing technique, you use the code at hand to determine a test suite. Ideally, you want test data that exercises all possible paths through your code; however, this is not always possible, and you can approximate it by ensuring that each path is visited at least once.

Unit test

In computer programming, a unit test is a procedure used to validate that a particular module of source code is working properly. The idea behind unit tests is to write test cases for all functions and methods so that whenever a change causes a regression, it can be quickly identified and fixed. Ideally, each test case is separate from the others; constructs such as mock objects can assist in separating unit tests. This type of testing is mostly done by the developers and not by end users.
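A minimal sketch of the idea, assuming a tiny hand-rolled harness rather than any particular xUnit framework: each unit of code gets its own small test function, and the runner reports failures so a regression is spotted as soon as it appears. The functions under test are illustrative.

    #include <stdio.h>
    #include <string.h>

    /* Units under test (illustrative). */
    static int add(int a, int b) { return a + b; }
    static size_t word_len(const char *s) { return strlen(s); }

    static int failures = 0;
    #define CHECK(cond) \
        do { if (!(cond)) { failures++; \
             printf("FAIL %s:%d %s\n", __FILE__, __LINE__, #cond); } } while (0)

    static void test_add(void) {
        CHECK(add(2, 3) == 5);
        CHECK(add(-2, 2) == 0);
    }

    static void test_word_len(void) {
        CHECK(word_len("") == 0);
        CHECK(word_len("unit") == 4);
    }

    int main(void) {
        test_add();
        test_word_len();
        printf("%d failure(s)\n", failures);
        return failures ? 1 : 0;
    }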


Benefits

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. Unit testing provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits.

Facilitates change

Unit testing allows the programmer to refactor code at a later date and make sure the module still works correctly (i.e. regression testing). This provides the benefit of encouraging programmers to make changes to the code, since it is easy for the programmer to check that the piece is still working properly.

Simplifies integration

Unit testing helps eliminate uncertainty in the pieces themselves and can be used in a bottom-up testing style approach. By testing the parts of a program first and then testing the sum of its parts, integration testing becomes much easier.

Documentation

Unit testing provides a sort of "living document". Clients and other developers looking to learn how to use the class can look at the unit tests to determine how to use the class to fit their needs and gain a basic understanding of the API.

Separation of interface from implementation

Because some classes may have references to other classes, testing a class can frequently spill over into testing another class. A common example of this is classes that depend on a database: in order to test the class, the tester often writes code that interacts with the database. This is a mistake, because a unit test should never go outside of its own class boundary. As a result, the software developer abstracts an interface around the database connection and then implements that interface with their own mock object. This results in loosely coupled code, minimizing dependencies in the system.
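A rough sketch of that abstraction in C, using hypothetical names: the unit under test talks to storage only through a function-pointer "interface", so the test substitutes an in-memory mock and never touches a real database.

    #include <assert.h>
    #include <string.h>

    /* Interface the unit depends on, instead of a concrete database. */
    typedef struct {
        int (*lookup_balance)(const char *account_id);
    } storage_if;

    /* Unit under test: applies a fee using whatever storage it is given. */
    static int balance_after_fee(const storage_if *db, const char *id, int fee) {
        return db->lookup_balance(id) - fee;
    }

    /* Mock implementation used only by the unit test. */
    static int mock_lookup(const char *account_id) {
        return strcmp(account_id, "acct-1") == 0 ? 100 : 0;
    }

    int main(void) {
        storage_if mock = { mock_lookup };
        assert(balance_after_fee(&mock, "acct-1", 30) == 70);
        assert(balance_after_fee(&mock, "unknown", 30) == -30);
        return 0;
    }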

Limitations

Unit testing will not catch every error in the program. By definition, it only tests the functionality of the units themselves; therefore, it will not catch integration errors, performance problems or other system-wide issues. In addition, it may not be easy to anticipate all the special cases of input the program unit under study may receive in reality. Unit testing is only effective if it is used in conjunction with other software testing activities. It is unrealistic to test all possible input combinations for any non-trivial piece of software; a unit test can only show the presence of errors, it cannot show their absence.

Applications

Extreme Programming

While unit testing is often associated with Extreme Programming, it existed long before Extreme Programming was invented. The unit testing concept is part of the Extreme Programming method of software engineering. Various unit testing frameworks, based on a design by Kent Beck, have come to be known collectively as xUnit, and are available for many programming languages and development platforms. Unit testing is the building block of test-driven development (TDD). Extreme Programming and most other methods use unit tests to perform white box testing. Note that many in the Extreme Programming community favor the terms "developer testing" or "programmer testing" over the term "unit testing", since many other test activities (like function or acceptance tests) can now be done at "developer time".

Techniques

Both conventionally and as well-accepted industry practice, unit testing is conducted in an automated environment through the use of a third-party component or framework. However, one reputable organization, the IEEE, prescribes neither an automated nor a manual approach; a manual approach to unit testing may employ a step-by-step instructional document. Nevertheless, the objective in unit testing is to isolate a unit and validate its correctness. Automation is much more efficient for achieving this, and enables the many benefits listed in this article. In fact, manual unit testing is arguably a form of integration testing and thus precludes the achievement of most (if not all) of the goals established for unit testing.

To fully realize the effect of isolation, the unit or code body subjected to the unit test is executed within a framework outside of its natural environment, that is, outside of the product or calling context for which it was originally created. Testing in an isolated manner has the benefit of revealing unnecessary dependencies between the code being tested and other units or data spaces in the product. These dependencies can then be eliminated through refactoring or, if necessary, redesign. Consequently, unit testing is traditionally a motivator for programmers to create decoupled and cohesive code bodies. This practice promotes healthy habits in software development: design patterns, unit testing, and refactoring often work together so that the most ideal solution may emerge.

Language support

The D programming language offers direct support for unit testing. Due to its demands on modularity in design and implementation, unit testing is particularly suitable for object-oriented programming languages, and most of these have frameworks that help simplify the process of unit testing.

Defect tracking

In engineering, defect tracking is the process of finding defects in a product (by inspection, testing, or recording feedback from customers) and making new versions of the product that fix the defects. Defect tracking is important in software engineering, as complex software systems typically have tens or hundreds of thousands of defects, and managing, evaluating and prioritizing these defects is a difficult task. Defect tracking systems are computer database systems that store defects and help people to manage them.


Defect prevention

The objective of defect prevention is to identify defects and take corrective action to ensure they are not repeated over subsequent iterative cycles. Defect prevention can be implemented by preparing an action plan to minimize or eliminate defects, generating defect metrics, defining corrective action and producing an analysis of the root causes of the defects. Defect prevention can be accomplished through the following steps:

1. Collect defect data, with periodic reviews, using test logs from the execution phase; this data should be used to segregate and classify defects by root cause, producing defect metrics that highlight the most prolific problem areas.
2. Identify improvement strategies.
3. Escalate issues to senior management or the customer where necessary.
4. Draw up an action plan to address outstanding defects and improve the development process; this should be reviewed regularly for effectiveness and modified should it prove to be ineffective.
5. Undertake periodic peer reviews to verify that the action plans are being adhered to.
6. Produce regular reports on defects by age; if the defect age for a particular defect is high and the severity is sufficient to cause concern, focused action needs to be taken to resolve it.

Bugtracker

A bugtracker is a ticket tracking system designed especially to manage problems (software bugs) with computer programs. Typically, bug tracking software allows the user to quickly enter bugs and search on them. In addition, some allow users to specify a workflow for a bug that automates the bug's lifecycle. Most bug tracking software allows the administrator of the system to configure which fields are included on a bug, and it should be possible to categorize bugs by area of functionality, state and solution, priority and severity, et al. Having a bug tracking solution is critical for most systems; without one, bugs will eventually get lost or be poorly prioritized.

Fuzz testing

Fuzz testing is a software testing technique. The basic idea is to attach the inputs of a program to a source of random data. If the program fails (for example, by crashing or by failing built-in code assertions), then there are defects to correct. The great advantage of fuzz testing is that the test design is extremely simple and free of preconceptions about system behavior.
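A bare-bones sketch of that idea, assuming a hypothetical parse_record() under test: random bytes are fed in repeatedly, the seed is printed first so any failing input can be reproduced, and built-in assertions inside the target do the failure detection. The parser shown here is a stand-in stub; in practice the real component would be linked in instead.

    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Stand-in for the real parser under test; its internal assertion
       mimics a built-in consistency check that fuzzing can trip. */
    static void parse_record(const unsigned char *buf, size_t len) {
        if (len >= 2 && buf[0] == 'R') {
            size_t body = buf[1];        /* declared body length           */
            assert(body <= len - 2);     /* random input soon violates this */
        }
    }

    int main(void) {
        unsigned seed = (unsigned)time(NULL);
        printf("fuzz seed: %u\n", seed);  /* recorded so failures can be replayed */
        srand(seed);

        unsigned char buf[256];
        for (int iter = 0; iter < 100000; iter++) {
            size_t len = (size_t)(rand() % (int)sizeof buf);
            for (size_t i = 0; i < len; i++)
                buf[i] = (unsigned char)(rand() % 256);
            parse_record(buf, len);  /* a crash or assert here means a defect */
        }
        return 0;
    }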


Uses

Fuzz testing is often used in large software development projects that perform black box testing. These projects usually have a budget to develop test tools, and fuzz testing is one of the techniques which offers a high benefit-to-cost ratio. Fuzz testing is also used as a gross measurement of a large software system's quality. The advantage here is that the cost of generating the tests is relatively low. For example, third-party testers have used fuzz testing to evaluate the relative merits of different operating systems and application programs.

Fuzz testing is thought to enhance software security and software safety because it often finds odd oversights and defects which human testers would fail to find, and which even careful human test designers would fail to create tests for. However, fuzz testing is not a substitute for exhaustive testing or formal methods: it can only provide a random sample of the system's behavior, and in many cases passing a fuzz test may only demonstrate that a piece of software handles exceptions without crashing, rather than behaving correctly. Thus, fuzz testing can only be regarded as a proxy for program correctness rather than a direct measure, with fuzz test failures being more useful as a bug-finding tool than fuzz test passes are as an assurance of quality.

Fuzz testing originated at the University of Wisconsin in 1989 with Professor Barton Miller and the students in his graduate Advanced Operating Systems class. Their work can be found at http://www.cs.wisc.edu/~bart/fuzz/.

Fuzz testing methods

As a practical matter, developers need to reproduce errors in order to fix them. For this reason, almost all fuzz testing makes a record of the data it manufactures, usually before applying it to the software, so that if the computer fails dramatically, the test data is preserved.

Modern software has several different types of inputs:

• Event-driven inputs, usually from a graphical user interface or possibly from a mechanism in an embedded system.
• Character-driven inputs, from files or data streams.
• Database inputs, from tabular data such as relational databases.

There are several different forms of fuzz testing:

• Valid fuzz attempts to assure that the random input is reasonable, or conforms to actual production data.
• Simple fuzz usually uses a pseudo-random number generator to provide input.
• A combined approach uses valid test data with some proportion of totally random input injected.

By using all of these techniques in combination, fuzz-generated randomness can test the un-designed behavior surrounding a wider range of designed system states. Fuzz testing may use tools to simulate all of these domains.

Event-driven fuzz

Normally this is provided as a queue of data structures. The queue is filled with data structures that have random values. The most common problem with an event-driven program is that it will often simply use the data in the queue, without even crude validation. To succeed in a fuzz-tested environment, software must validate all fields of every queue entry, decode every possible binary value, and then ignore impossible requests. One of the more interesting issues with real-time event handling is that if error reporting is too verbose, simply providing error status can cause resource problems or a crash. Robust error detection systems will report only the most significant, or most recent, error over a period of time.

Character-driven fuzz

Normally this is provided as a stream of random data. The classic source in UNIX is the random data generator. One common problem with a character-driven program is a buffer overrun, when the character data exceeds the available buffer space. This problem tends to recur in every instance in which a string or number is parsed from the data stream and placed in a limited-size area. Another is that decode tables or logic may be incomplete, not handling every possible binary value.

Database fuzz

The standard database schema is usually filled with fuzz that is random data of random sizes. Some IT shops use software tools to migrate and manipulate such databases. Often the same schema descriptions can be used to automatically generate fuzz databases.

Database fuzz is controversial, because input and comparison constraints reduce the amount of invalid data in a database. However, the database is often more tolerant of odd data than its client software, and a general-purpose interface is available to users. Since major customer and enterprise management software is starting to be open source, database-based security attacks are becoming more credible.

A common problem with fuzz databases is buffer overrun. A common data dictionary, with some form of automated enforcement, is quite helpful and entirely possible; to enforce it, normally all the database clients need to be recompiled and retested at the same time. Another common problem is that database clients may not understand the binary possibilities of the database field type, or legacy software might have been ported to a new database system with different possible binary values. A normal, inexpensive solution is to have each program validate database inputs in the same fashion as user inputs; the usual way to achieve this is to periodically "clean" production databases with automated verifiers.

IEEE 829

IEEE 829-1998, also known as the 829 Standard for Software Test Documentation, is an IEEE standard that specifies the form of a set of documents for use in eight defined stages of software testing, each stage potentially producing its own separate type of document. The standard specifies the format of these documents but does not stipulate whether they all must be produced, nor does it include any criteria regarding adequate content for these documents; these are matters of judgment outside the purview of the standard. The documents are:

• Test Plan: a management planning document that shows:
  o How the testing will be done
  o Who will do it
  o What will be tested
  o How long it will take
  o What the test coverage will be, i.e. what quality level is required
• Test Design Specification: detailing test conditions and the expected results, as well as test pass criteria.
• Test Case Specification: specifying the test data for use in running the test conditions identified in the Test Design Specification.
• Test Procedure Specification: detailing how to run each test, including any set-up preconditions and the steps that need to be followed.
• Test Item Transmittal Report: reporting on when tested software components have progressed from one stage of testing to the next.
• Test Log: recording which test cases were run, who ran them, in what order, and whether each test passed or failed.
• Test Incident Report: detailing, for any test that failed, the actual versus expected result, and other information intended to throw light on why a test has failed.
• Test Summary Report: a management report providing any important information uncovered by the tests accomplished, and including assessments of the quality of the testing effort, the quality of the software system under test, and statistics derived from Incident Reports. The report also records what testing was done and how long it took, in order to improve any future test planning. This final document is used to indicate whether the software system under test is fit for purpose according to whether or not it has met acceptance criteria defined by project stakeholders.



Test case

In software engineering, a test case is a set of conditions or variables under which a tester will determine whether a requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case. Some methodologies, like RUP, recommend creating at least two test cases for each requirement: one should perform positive testing of the requirement and the other should perform negative testing. If the application is created without formal requirements, then test cases are written based on the accepted normal operation of programs of a similar class.

What characterises a formal, written test case is that there is a known input and an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition. Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This happens often when determining performance numbers for new products; the first test is taken as the baseline for subsequent test / product release cycles.

Written test cases include a description of the functionality to be tested, taken from either the requirements or use cases, and the preparation required to ensure that the test can be conducted. Written test cases are usually collected into test suites.

A variation of test cases is most commonly used in acceptance testing. Acceptance testing is done by a group of end users or clients of the system to ensure the developed system meets their requirements. User acceptance testing is usually differentiated by the inclusion of happy path or positive test cases.

Structure of test case

A test case definition consists of three main parts with subsections:

• Introduction/overview contains general information about the test case.
  o Identifier is a unique identifier of the test case for further reference, for example while describing a found defect.
  o Test case owner/creator is the name of the tester or test designer who created the test or is responsible for its development.
  o Version of the current test case definition.
  o Name of the test case should be a human-oriented title which allows one to quickly understand the test case's purpose and scope.
  o Identifier of the requirement covered by the test case. This could also be the identifier of a use case or functional specification item.
  o Purpose contains a short description of the purpose of the test and what functionality it checks.
  o Dependencies.
• Test case activity
  o Testing environment/configuration contains information about the configuration of hardware or software which must be met while executing the test case.
  o Initialization describes actions which must be performed before test case execution starts, for example opening some file.
  o Finalization describes actions to be done after the test case is performed; for example, if the test case crashes the database, the tester should restore it before other test cases are performed.
  o Actions: the steps to be carried out to complete the test.
  o Input data description.
• Expected results contains a description of what the tester should see after all test steps have been completed.

Usually a test case does not contain actual results; these should be described in a defect/issue report or in a testing report.
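A small sketch of how such a record might be held in a test repository, expressed as a C struct with the main fields described above; the field names and the sample values are illustrative only.

    #include <stdio.h>

    typedef struct {
        const char *identifier;      /* unique reference, e.g. "TC-042"  */
        const char *owner;           /* tester or test designer          */
        const char *name;            /* human-oriented title             */
        const char *requirement;     /* requirement / use case covered   */
        const char *environment;     /* configuration needed to run      */
        const char *actions;         /* step-by-step instructions        */
        const char *input_data;      /* input data description           */
        const char *expected_result; /* what the tester should observe   */
    } test_case;

    int main(void) {
        test_case tc = {
            "TC-042", "A. Tester", "Reject out-of-range percentage",
            "REQ-7.3", "Build 1.2.0, default configuration",
            "1. Start the app  2. Enter 101 in the percentage field  3. Submit",
            "percentage = 101",
            "Input is rejected with a range error message"
        };
        printf("%s: %s (covers %s)\n", tc.identifier, tc.name, tc.requirement);
        return 0;
    }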
