Glossary Of Terms

November 2019


Quality Dictionary

Acceptance Testing Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. Actual Outcome The behaviour actually produced when the object is tested under specified conditions. Adding Value Adding something that the customer wants that was not there before. Ad hoc Testing Testing carried out using no recognized test case design technique. Alpha Testing Simulated or actual operational testing at an in-house site not otherwise involved with the software developers. Arc Testing A test case design technique for a component in which test cases are designed to execute branch outcomes. Backus-Naur Form A meta language used to formally describe the syntax of a language. Basic Block A sequence of one or more consecutive, executable statements containing no branches. Basis Test Set A set of test cases derived from the code logic, which ensure that 100% branch coverage is achieved. Bebugging The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. Behaviour The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours. Benchmarking Comparing your product to the best competitors'. Beta Testing Operational testing at a site not otherwise involved with the software developers. Big-bang Testing Integration testing where no incremental testing takes place prior to all the system's components being combined to form the system. Black Box Testing Test case selection that is based on an analysis of the specification of the component without reference to its internal workings. Testing by looking only at the inputs and outputs, not at the insides of a program. Sometimes also called "Requirements Based Testing".
You don't need to be a programmer, you only need to know what the program is supposed to do and be able to tell whether an output is correct or not. Bottom-up Testing An approach to testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested. Boundary Value An input value or output value which is on the boundary between equivalence classes, or an incremental distance either side of the boundary. Boundary Value Analysis A test case design technique for a component in which test cases are designed which include representatives of boundary values. Boundary Value Coverage The percentage of boundary values of the component's equivalence classes, which have been exercised by a test case suite. Boundary Value Testing A test case design technique for a component in which test cases are designed which include representatives of boundary values.
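Boundary value analysis can be made concrete with a short sketch. The eligibility rule and its 18-65 range are invented for illustration:

```python
def is_eligible(age):
    """Hypothetical component under test: accepts ages 18-65 inclusive."""
    return 18 <= age <= 65

# Boundary value test cases: each boundary, plus one step either side of it.
cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in cases.items():
    assert is_eligible(age) == expected, f"age {age}"
```

The off-by-one values (17, 19, 64, 66) are where boundary faults, such as a misplaced < versus <=, typically show up.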

Branch A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component. Branch Condition A condition within a decision. Branch Condition Combination Coverage The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite. Branch Condition Combination Testing A test case design technique in which test cases are designed to execute combinations of branch condition outcomes. Branch Condition Coverage The percentage of branch condition outcomes in every decision that have been exercised by a test case suite. Branch Condition Testing A test case design technique in which test cases are designed to execute branch condition outcomes. Branch Coverage The percentage of branches that have been exercised by a test case suite. Branch Outcome The result of a decision (which therefore determines the control flow alternative taken). Branch Point A program point at which the control flow has two or more alternative routes. Branch Testing A test case design technique for a component in which test cases are designed to execute branch outcomes. Bring to the Table Refers to what each individual can contribute to a meeting, for example, design or brainstorming meetings. Bug A manifestation of an error in software. A fault, if encountered, may cause a failure. Bug Seeding The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. C-use A data use not in a condition. Capture/Playback Tool A test tool that records test input as it is sent to the software under test.
The input cases stored can then be used to reproduce the test at a later time. Capture/Replay Tool A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. CAST Acronym for computer-aided software testing. Cause-effect Graph A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases. Cause-effect Graphing A test case design technique in which test cases are designed by consideration of cause-effect graphs. Certification The process of confirming that a system or component complies with its specified requirements and is acceptable for operational use. Chow's Coverage Metrics The percentage of sequences of N-transitions that have been exercised by a test case suite. Code Coverage An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention. If we have a set of tests that gives Code Coverage, it simply means that, if you run all the tests, every line of the code is executed at least once by some test. Code-based Testing Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).

Compatibility Testing Testing whether the system is compatible with other systems with which it should communicate. Complete Path Testing A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables. Component A minimal software item for which a separate specification is available. Component Testing The testing of individual software components. Computation Data Use A data use not in a condition. Also called C-use. Concurrent (or Simultaneous) Engineering Integrating the design, manufacturing, and test processes. Condition A Boolean expression containing no Boolean operators. For instance, A < B is a condition but A and B is not. Condition Coverage The percentage of branch condition outcomes in every decision that have been exercised by a test case suite. Condition Outcome The evaluation of a condition to TRUE or FALSE. Conformance Criterion Some method of judging whether or not the component's action on a particular specified input value conforms to the specification. Conformance Testing The process of testing that an implementation conforms to the specification on which it is based. Continuous Improvement The PDSA process of iteration which results in improving a product. Control Flow An abstract representation of all possible sequences of events in a program's execution. Control Flow Graph The diagrammatic representation of the possible alternative control flow paths through a component. Control Flow Path A sequence of executable statements of a component, from an entry point to an exit point. Conversion Testing Testing of programs or procedures used to convert data from existing systems for use in replacement systems. Correctness The degree to which software conforms to its specification. Coverage The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite. A measure applied to a set of tests. 
The most important are called Code Coverage and Path Coverage. There are a number of intermediate types of coverage defined, moving up the ladder from Code Coverage (weak) to Path Coverage (strong, but hard to get). These definitions are called things like Decision-Condition Coverage, but they're seldom important in the real world. For the curious, they are covered in detail in Glenford Myers' book, The Art of Software Testing. Coverage Item An entity or property used as a basis for testing. Customer Satisfaction Meeting or exceeding a customer's expectations for a product or service. Data Coupling Data coupling is where one piece of code interacts with another by modifying a data object that the other code reads. Data coupling is normal in computer operations, where data is repeatedly modified until the desired result is obtained. However, unintentional data coupling is bad. It causes many hard-to-find bugs, including the "side effect" bugs caused by changing existing systems. It's the reason why you need to test all paths to do really tight testing. The best way to combat it is with Regression Testing, to make sure that a change didn't break something else. Data Definition An executable statement where a variable is assigned a value. Data Definition C-use Coverage The percentage of data definition C-use pairs in a component that are exercised by a test case suite. Data Definition C-use Pair A data definition and computation data use, where the data use uses the value defined in the data definition.
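The kind of unintentional data coupling described above can be shown in a few lines. The pricing scenario and all names are invented for illustration:

```python
# Shared data object: the channel through which the two functions are coupled.
config = {"discount_pct": 0}

def apply_holiday_pricing():
    # Writer: modifies the shared object.
    config["discount_pct"] = 10

def price(base):
    # Reader: silently depends on whatever last touched config.
    return base * (100 - config["discount_pct"]) // 100

assert price(100) == 100      # passes when run in isolation
apply_holiday_pricing()
assert price(100) == 90       # same call, different answer: the hidden coupling
```

A regression test suite that re-runs the first check after every change is exactly what catches this kind of side effect.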

Data Definition P-use Coverage The percentage of data definition P-use pairs in a component that are exercised by a test case suite. Data Definition P-use Pair A data definition and predicate data use, where the data use uses the value defined in the data definition. Data Definition-use Coverage The percentage of data definition-use pairs in a component that are exercised by a test case suite. Data Definition-use Pair A data definition and data use, where the data use uses the value defined in the data definition. Data Definition-use Testing A test case design technique for a component in which test cases are designed to execute data definition-use pairs. Data Flow Coverage Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc. Data Flow Testing Testing in which test cases are designed based on variable usage within the code. Data Use An executable statement where the value of a variable is accessed. Debugging The process of finding and removing the causes of failures in software. Decision A program point at which the control flow has two or more alternative routes. Decision Condition A condition within a decision. Decision Coverage The percentage of decision outcomes that have been exercised by a test case suite. Decision Outcome The result of a decision (which therefore determines the control flow alternative taken). Design The creation of a specification from concepts. Design-based Testing Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behaviour of algorithms). Desk Checking The testing of software by the manual simulation of its execution. Dirty Testing Testing aimed at showing software does not work. Documentation Testing Testing concerned with the accuracy of documentation.
Domain The set from which values are selected. Domain Testing A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes. Dynamic Analysis The process of evaluating a system or component based upon its behaviour during execution. Driver A throwaway little module that calls something we need to test, because the real guy who'll be calling it isn't available.

For example, suppose module A needs module X to fire it up. X isn't here yet, so we write a throwaway driver to stand in for X and call A.
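A missing caller like X is simulated by a driver (a missing callee by a stub). A minimal sketch, with the free-shipping rule and all module names invented for illustration:

```python
# Module A: the code we actually want to test (hypothetical shipping rule).
def a_qualifies_for_free_shipping(order_total):
    return order_total >= 50

# Throwaway driver standing in for the not-yet-written caller X:
# it simply fires A up with a few inputs and checks the answers.
def driver():
    assert a_qualifies_for_free_shipping(49) is False
    assert a_qualifies_for_free_shipping(50) is True
    assert a_qualifies_for_free_shipping(51) is True

driver()
```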

Emulator A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. Entry Point The first executable statement within a component. Equivalence Class A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification. An Equivalence Class (EC) of input values is a group of values that all cause the same sequence of operations to occur. In Black Box terms, they are all treated the same way according to the specs. Different input values within an Equivalence Class may give different answers, but the answers are produced by the same procedure. In Glass Box terms, they all cause execution to go down the same path. Equivalence Partition A portion of the component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification. Equivalence Partition Coverage The percentage of equivalence classes generated for the component, which have been exercised by a test case suite. Equivalence Partition Testing A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes. Error A human action that produces an incorrect result. Error Guessing A test case design technique where the experience of the tester is used to postulate what faults might occur, and to design tests specifically to expose them. Error Seeding The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. Executable Statement A statement which, when compiled, is translated into object code, which will be executed procedurally when the program is running and may perform an action on program data. 
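The equivalence class idea can be sketched concretely. The shipping rule below is invented for illustration:

```python
def shipping_cost(order_total):
    # Hypothetical spec: orders of 50 or more ship free; all others cost 5.
    return 0 if order_total >= 50 else 5

# The spec yields two equivalence classes: [0, 50) and [50, infinity).
# Every value inside a class is handled by the same procedure, so one
# representative per class suffices for equivalence partition testing.
assert shipping_cost(20) == 5    # representative of the "pays shipping" class
assert shipping_cost(80) == 0    # representative of the "ships free" class
```

Different totals in the first class give the same cost via the same path; the class boundary at 50 is where boundary value testing would then focus.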
Exercised A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element. Exhaustive Testing A test case design technique in which the test case suite comprises all combinations of input values and preconditions for component variables. Exit Point The last executable statement within a component. Facility Testing Test case selection that is based on an analysis of the specification of the component without reference to its internal workings.

Failure Deviation of the software from its expected delivery or service. Fault A manifestation of an error in software. A fault, if encountered, may cause a failure. Feasible Path A path for which there exists a set of input values and execution conditions, which causes it to be executed. Feature Testing Test case selection that is based on an analysis of the specification of the component without reference to its internal workings. Flow Charting Creating a 'map' of the steps in a process. Functional Chunk The fundamental unit of testing. Its precise definition is "The smallest piece of code for which all the inputs and outputs are meaningful at the spec level." This means that we can test it Black Box, and design the tests before the code arrives without regard to how it was coded, and also tell whether the results it gives are correct. Functional Specification The document that describes in detail the characteristics of the product with regard to its intended capability. Functional Test Case Design Test case selection that is based on an analysis of the specification of the component without reference to its internal workings. Glass Box Testing Test case selection that is based on an analysis of the internal structure of the component. Testing by looking only at the code. Sometimes also called "Code Based Testing". Obviously you need to be a programmer and you need to have the source code to do this. Incremental Integration A systematic way of putting the pieces of a system together one at a time, testing as each piece is added. We not only test that the new piece works, but we also test that it didn't break something else by running the RTS (Regression Test Set). Incremental Testing Integration testing where system components are integrated into the system one at a time until the entire system is integrated. Independence Separation of responsibilities, which ensures the accomplishment of objective evaluation.
Infeasible Path A path which cannot be exercised by any set of possible input values. Input A variable (whether stored within a component or outside it) that is read by the component. Input Domain The set of all possible inputs. Input Value An instance of an input. Inspection A group review quality improvement process for written material. It consists of two aspects: product improvement (of the document itself) and process improvement (of both document production and inspection). Installability Testing Testing concerned with the installation procedures for the system. Instrumentation The insertion of additional code into the program in order to collect information about program behaviour during program execution. Instrumenter A software tool used to carry out instrumentation. Integration The process of combining components into larger assemblies. Integration Testing Testing performed to expose faults in the interfaces and in the interaction between integrated components. Interface Testing Integration testing where the interfaces between system components are tested.

Isolation Testing Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs. LCSAJ A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence. LCSAJ Coverage The percentage of LCSAJs of a component, which are exercised by a test case suite. LCSAJ Testing A test case design technique for a component in which test cases are designed to execute LCSAJs. Logic-coverage Testing Test case selection that is based on an analysis of the internal structure of the component. Logic-driven Testing Test case selection that is based on an analysis of the internal structure of the component. Maintainability Testing Testing whether the system meets its specified objectives for maintainability. Manufacturing Creating a product from specifications. Metrics Ways to measure: e.g., time, cost, customer satisfaction, quality. Modified Condition/Decision Coverage The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. Modified Condition/Decision Testing A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome. Multiple Condition Coverage The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite. Mutation Analysis A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. 
See also Error Seeding: the process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. N-switch Coverage The percentage of sequences of N-transitions that have been exercised by a test case suite. N-switch Testing A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions. N-transitions A sequence of N+1 transitions. Negative Testing Testing aimed at showing software does not work. Non-functional Requirements Testing Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc. Operational Testing Testing conducted to evaluate a system or component in its operational environment. Oracle A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. Outcome Actual outcome or predicted outcome. This is the outcome of a test. Output A variable (whether stored within a component or outside it) that is written to by the component. Output Domain The set of all possible outputs. Output Value An instance of an output. P-use A data use in a predicate.

Partition Testing A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes. Path A sequence of executable statements of a component, from an entry point to an exit point. Path Coverage The percentage of paths in a component exercised by a test case suite. A set of tests gives Path Coverage for some code if the set goes down each path at least once. The difference between this and Code Coverage is that Path Coverage means not just "visiting" a line of code, but also includes how you got there and where you're going next. It therefore uncovers more bugs, especially those caused by Data Coupling. However, it's impossible to get this level of coverage except perhaps for a tiny critical piece of code. Path Sensitizing Choosing a set of input values to force the execution of a component to take a given path. Path Testing A test case design technique in which test cases are designed to execute paths of a component. Performance Testing Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Portability Testing Testing aimed at demonstrating the software can be ported to specified hardware or software platforms. Precondition Environmental and state conditions which must be fulfilled before the component can be executed with a particular input value. Predicate A logical expression which evaluates to TRUE or FALSE, normally to direct the execution path in code. Predicate Data Use A data use in a predicate. Predicted Outcome The behaviour predicted by the specification of an object under specified conditions. Process What is actually done to create a product. Program Instrumenter A software tool used to carry out instrumentation. Progressive Testing Testing of new features after regression testing of previous features. Pseudo-random A series which appears to be random but is in fact generated according to some prearranged sequence.
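The gap between Code Coverage and Path Coverage, and its link to Data Coupling, can be shown in a few lines. The function below is contrived for illustration:

```python
def scale(x, shift_it, invert_it):
    if shift_it:
        x = x - 5        # decision 1 writes x...
    if invert_it:
        x = 100 // x     # ...and decision 2 reads it: data coupling
    return x

# These two tests execute every statement and both outcomes of both
# branches, i.e. full statement and branch (code) coverage:
assert scale(5, True, False) == 0
assert scale(5, False, True) == 20

# But the TRUE/TRUE path was never taken, and on it x arrives as 0:
try:
    scale(5, True, True)
except ZeroDivisionError:
    pass  # the bug only a path-level test would have forced
```

Two of the four control flow paths (TRUE/TRUE and FALSE/FALSE) go untested even at 100% branch coverage; path coverage would have required all four.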
Quality Tools Tools used to measure and observe every aspect of the creation of a product. Recovery Testing Testing aimed at verifying the system's ability to recover from varying degrees of failure. Regression Testing Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made. Re-running a set of tests that used to work to make sure that changes to the system didn't break anything. It's usually run after each set of maintenance or enhancement changes, but is also very useful for Incremental Integration of a system. RTS (Regression Test Set) The set of tests used for Regression Testing. It should be complete enough so that the system is defined to "work correctly" when this set of tests runs without error. Requirements-based Testing Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security). Result Actual outcome or predicted outcome. This is the outcome of a test. Review A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. Security Testing Testing whether the system meets its specified security objectives.

Serviceability Testing Testing whether the system meets its specified objectives for maintainability. Simple Subpath A subpath of the control flow graph in which no program part is executed more than necessary. Simulation The representation of selected behavioural characteristics of one physical or abstract system by another system. Simulator A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. Six-Sigma Quality Meaning 99.99966% perfect; only 3.4 defects in a million. Source Statement An entity in a programming language, which is typically the smallest indivisible unit of execution. SPC Statistical process control; used for measuring the conformance of a product to specifications. Specification A description of a component's function in terms of its output values for specified input values under specified preconditions. Specified Input An input for which the specification predicts an outcome. State Transition A transition between two allowable states of a system or component. State Transition Testing A test case design technique in which test cases are designed to execute state transitions. Statement An entity in a programming language, which is typically the smallest indivisible unit of execution. Statement Coverage The percentage of executable statements in a component that have been exercised by a test case suite. Statement Testing A test case design technique for a component in which test cases are designed to execute statements. Static Analysis Analysis of a program carried out without executing the program. Static Analyzer A tool that carries out static analysis. Static Testing Testing of an object without execution on a computer. Statistical Testing A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
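State transition testing can be sketched with a toy two-state machine; the door example and its event names are invented for illustration:

```python
# Toy state machine: a door with two states and two events.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
}

def step(state, event):
    # State/event pairs with no defined transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

# One test case per valid transition, plus one invalid event:
assert step("closed", "open") == "opened"
assert step("opened", "close") == "closed"
assert step("closed", "close") == "closed"   # invalid: no transition fires
```

Chow's coverage metrics (N-switch coverage) generalize this by exercising sequences of transitions rather than single ones.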
Storage Testing Testing whether the system meets its specified storage objectives. Stress Testing Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Structural Coverage Coverage measures based on the internal structure of the component. Structural Test Case Design Test case selection that is based on an analysis of the internal structure of the component. Structural Testing Test case selection that is based on an analysis of the internal structure of the component. Structured Basis Testing A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage. Structured Walkthrough A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review. Stub A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. A little throwaway module that can be called to make another module work (and hence be testable).

For example, if we want to test module A but it needs to call module B, which isn't available, we can use a quick little stub for B. It just answers "hello from b" or something similar; if asked to return a number it always returns the same number, like 100. Subpath A sequence of executable statements within a component. Symbolic Evaluation A static analysis technique that derives a symbolic expression for program paths. Syntax Testing A test case design technique for a component or system in which test case design is based upon the syntax of the input. System Testing The process of testing an integrated system to verify that it meets specified requirements. Technical Requirements Testing Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc. Test Testing the product for defects. Test Automation The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Test Case A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. Test Case Design Technique A method used to derive or select test cases. Test Case Suite A collection of one or more test cases for the software under test. Test Comparator A test tool that compares the actual outputs produced by the software under test with the expected outputs for that test case. Test Completion Criterion A criterion for determining when planned testing is complete, defined in terms of a test measurement technique. Test Coverage The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite. Test Driver A program or test tool used to execute software against a test case suite. Test Environment A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
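The canned-answer stub described in the Stub entry above can be sketched as follows; the module and function names are invented for illustration:

```python
# Stub for the missing module B: fixed, canned answers only.
def b_greeting():
    return "hello from b"

def b_item_count():
    return 100   # always the same number

# Module A, the code under test, calls B's interface; the stub keeps
# A testable even though the real B doesn't exist yet.
def a_report():
    return f"{b_greeting()}: {b_item_count()} items"

assert a_report() == "hello from b: 100 items"
```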
Test Execution The processing of a test case suite by the software under test, producing an outcome. Test Execution Technique The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc. Test Generator A program that generates test cases in accordance with a specified strategy or heuristic. Test Harness A testing tool that comprises a test driver and a test comparator. Test Measurement Technique A method used to measure test coverage items.

Test Outcome Actual outcome or predicted outcome. This is the outcome of a test. Test Plan A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice. Test Procedure A document providing detailed instructions for the execution of one or more test cases. Test Records For each test, an unambiguous record of the identities and versions of the component under test, the test specification, and actual outcome. Test Script Commonly used to refer to the automated test procedure used with a test harness. Test Specification For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome. Test Target A set of test completion criteria. Testing The process of exercising software to verify that it satisfies specified requirements and to detect errors. Thread Testing A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. Top-down Testing An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. Total Quality Management (TQM) Controlling everything about a process. Unit Testing The testing of individual software components. A Unit, as we use it in these techniques, is a very small piece of code with only a few inputs and outputs.
The key thing is that these inputs and outputs may be implementation details (counters, array indices, pointers) and are not necessarily real-world objects that are described in the specs. If all the inputs and outputs are real-world things, then we have a special kind of Unit, the Functional Chunk, which is ideal for testing. Most ordinary Units are tested by the programmer who wrote them, although it is a much better practice to have them subsequently tested by a different programmer. Caution: there is no agreement out there about what a "unit" or "module" or "subsystem" etc. really is. I've heard 50,000-line COBOL programs described as "units". So the Unit defined here is, in fact, a definition. Usability Testing Testing the ease with which users can learn and use a product. Validation Determination of the correctness of the products of software development with respect to the user needs and requirements. Verification The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. Volume Testing Testing where the system is subjected to large volumes of data. Walkthrough A review of requirements, designs or code characterized by the author of the object under review guiding the progression of the review. White Box Testing Test case selection that is based on an analysis of the internal structure of the component.
