QA Dictionary
Acceptance
Testing:
Formal testing conducted to
determine whether or not a system satisfies its acceptance criteria and to enable an
end user to determine whether or not to accept the
system.
Affinity
Diagram:
A group process that takes large
amounts of language data, such as a list developed by brainstorming, and divides
it into categories.
Alpha
Testing:
Testing of a software product or
system conducted at the developer’s site by the end
user.
Audit:
An inspection/assessment activity
that verifies compliance with plans, policies, and procedures, and ensures that
resources are conserved. Audit is a staff function; it serves as the “eyes and
ears” of management.
Automated
Testing:
That part of software testing that
is assisted by software tools and does not require operator input,
analysis, or evaluation.
Beta
Testing:
Testing conducted at one or more end
user sites by the end user of a delivered software product or
system.
Black-box
Testing:
Functional testing based on requirements with no knowledge of the
internal program structure or data. Also known as closed-box
testing. Black-box testing indicates whether or not a program
meets required specifications by spotting faults of omission -- places where the
specification is not fulfilled.
Bottom-up
Testing:
An integration testing technique
that tests the low-level components first, using test drivers to stand in for the
higher-level components that have not yet been developed and to call the low-level
components under test.
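For illustration, a minimal test-driver sketch in Python; the low-level function parse_record and its behavior are hypothetical, and the driver stands in for a higher-level caller that has not yet been developed:

    # Hypothetical low-level component under test.
    def parse_record(line):
        name, value = line.split("=", 1)
        return name.strip(), value.strip()

    # Test driver: stands in for the higher-level caller that is not yet built.
    def driver():
        assert parse_record("retries = 3") == ("retries", "3")
        assert parse_record("mode=fast") == ("mode", "fast")
        print("low-level component passed driver checks")

    if __name__ == "__main__":
        driver()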
Boundary
Value Analysis:
A test data selection technique in
which values are chosen to lie along data extremes. Boundary values include
maximum, minimum, just inside/outside boundaries, typical values, and error
values.
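For illustration, a minimal sketch in Python, assuming a hypothetical requirement that an age field accept integers from 0 to 120 inclusive:

    # Hypothetical requirement: age must be an integer from 0 to 120 inclusive.
    def is_valid_age(age):
        return 0 <= age <= 120

    # Boundary values: extremes, just inside/outside the boundaries, a typical
    # value, and error values.
    test_values = {
        -1: False,    # just outside the minimum
        0: True,      # minimum
        1: True,      # just inside the minimum
        35: True,     # typical value
        119: True,    # just inside the maximum
        120: True,    # maximum
        121: False,   # just outside the maximum
    }

    for value, expected in test_values.items():
        assert is_valid_age(value) == expected, value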
Brainstorming:
A group process for generating
creative and diverse ideas.
Branch
Coverage Testing:
A test method satisfying coverage
criteria that requires each decision point at each possible branch to be
executed at least once.
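A minimal sketch in Python (the function classify is hypothetical); exercising both outcomes of the single decision point satisfies branch coverage for this routine:

    def classify(balance):
        if balance < 0:           # decision point
            return "overdrawn"    # true branch
        else:
            return "ok"           # false branch

    # Branch coverage requires both outcomes of the decision to be executed.
    assert classify(-10) == "overdrawn"
    assert classify(50) == "ok"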
Bug:
A design flaw that will result in
symptoms exhibited by some object (the object under test or some other object)
when an object is subjected to an appropriate test.
Cause-and-Effect (Fishbone)
Diagram: A tool used
to identify possible causes of a
problem by representing the relationship between some effect and its possible
cause.
Cause-effect
Graphing:
A testing technique that aids in
selecting, in a systematic way, a high-yield set of test cases that logically
relates causes to effects to produce test cases. It has a beneficial side effect
in pointing out incompleteness and ambiguities in
specifications.
Checksheet:
A form used to record data as it is
gathered.
Clear-box
Testing:
Another term for white-box testing.
Structural testing is sometimes referred to as clear-box testing, since “white boxes” are considered opaque and do not
really permit visibility into the code. This is also known as glass-box or
open-box testing.
Client:
The end user that pays for the
product received, and receives the benefit from the use of the
product.
Control
Chart:
A statistical method for
distinguishing between common and special cause variation exhibited by
processes.
Customer
(end user):
The individual or organization,
internal or external to the producing organization,
that receives the product.
Cyclomatic
Complexity:
A measure of the number of linearly
independent paths through a program module.
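A worked example, assuming the standard McCabe formula V(G) = E - N + 2P (for a single structured routine this reduces to the number of decision points plus one); the function shown is hypothetical:

    def grade(score):
        if score >= 90:      # decision 1
            return "A"
        elif score >= 75:    # decision 2
            return "B"
        else:
            return "C"

    # Two decision points, so V(G) = 2 + 1 = 3: three linearly independent
    # paths, and at least three test cases are needed to cover them.
    assert grade(95) == "A"
    assert grade(80) == "B"
    assert grade(60) == "C"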
Data
Flow Analysis:
Consists of the graphical analysis
of collections of (sequential) data definitions and reference patterns to
determine constraints that can be placed on data values at various points of
executing the source
program.
Debugging:
The act of attempting to determine
the cause of the symptoms of malfunctions detected by testing or by frenzied
user complaints.
Defect:
Operationally, it is useful to work with two definitions of a defect:
1) From the producer’s viewpoint: a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that defines the product.
2) From the end user’s viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.
Defect
Analysis:
Using defects as data for continuous
quality improvement. Defect analysis generally seeks to classify defects into
categories and identify possible causes in order to direct process improvement
efforts.
Defect
Density:
Ratio of the number of defects to
program length (a relative number).
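A small illustrative calculation with hypothetical numbers:

    # Hypothetical module: 12 defects found in 3,000 lines of code.
    defects = 12
    kloc = 3.0                        # size in thousands of lines of code
    defect_density = defects / kloc   # 4.0 defects per KLOC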
Desk
Checking:
A form of manual static analysis
usually performed by the originator. Source code documentation, etc., is
visually checked against requirements and
standards.
Dynamic
Analysis:
The process of evaluating a program
based on execution of that program. Dynamic analysis approaches rely on
executing a piece of software with selected test
data.
Dynamic
Testing:
Verification or validation performed
which executes the system’s code.
Error:
1) A
discrepancy between a computed, observed, or measured value or condition and the
true, specified, or theoretically correct value or condition; and
2) a mental mistake made by a programmer that may result in a program fault.
Error-based
Testing:
Testing where information about
programming style, error-prone language constructs, and other programming
knowledge is applied to select test data capable of detecting faults, either a
specified class of faults or all possible faults.
Evaluation:
The process of examining a system or
system component to determine the extent to which specified properties are
present.
Execution:
The process of a computer carrying
out an instruction or instructions of a computer program.
Exhaustive
Testing:
Executing the program with all
possible combinations of values for program
variables.
Failure:
The inability of a system or system
component to perform a required function within specified limits. A failure may
be produced when a fault is encountered.
Failure-directed
Testing:
Testing based on the knowledge of
the types of errors made in the past that are likely for the system under
test.
Fault:
A manifestation of an error in
software. A fault, if encountered, may cause a
failure.
Fault
Tree Analysis:
A form of safety analysis that
assesses hardware safety to provide failure statistics and sensitivity analyses
that indicate the possible effect of critical
failures.
Fault-based
Testing:
Testing that employs a test data
selection strategy designed to generate test data capable of demonstrating the
absence of a set of pre-specified faults, typically, frequently occurring
faults.
Flowchart:
A diagram showing the sequential
steps of a process or of a workflow around a product or
service.
Formal
Review:
A technical review conducted with
the end user, including the types of reviews called for in the
standards.
Function
Points:
A consistent measure of software
size based on user requirements. Data components include inputs, outputs, etc.
Environment characteristics include data
communications, performance,
reusability, operational ease, etc. Weight scale: 0 = not present, 1 = minor
influence, 5 = strong influence.
Functional
Testing:
Application of test data derived
from the specified functional requirements without regard to the final program
structure. Also known as black-box
testing.
Heuristics
Testing:
Another term for failure-directed
testing.
Histogram:
A graphical description of
individual measured values in a data set that is organized according to the
frequency or relative frequency of occurrence. A histogram illustrates the shape
of the distribution of individual values in a data set along with information
regarding the average and variation.
Hybrid
Testing:
A combination of top-down testing
combined with bottom-up testing of prioritized or available
components.
Incremental
Analysis:
Incremental analysis occurs when
(partial) analysis may be performed on an incomplete product to allow early
feedback on the development of that product.
Infeasible
Path:
Program statement sequence that can
never be executed.
Inputs:
Products, services, or information
needed from suppliers to make a process work.
Inspection:
1) A formal
evaluation technique in which software requirements, design, or code are
examined in detail by a person or group other than the author to detect faults,
violations of development standards, and other problems.
2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).
Instrument:
To install or insert devices or
instructions into hardware or software to monitor the operation of a system or
component.
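A minimal software-instrumentation sketch in Python; the probe simply records how long each call takes, and all names are hypothetical:

    import functools
    import time

    def instrumented(func):
        # Probe inserted around a function to monitor its execution time.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start
            print(f"{func.__name__} took {elapsed:.6f} s")
            return result
        return wrapper

    @instrumented
    def busy_work(n):
        return sum(i * i for i in range(n))

    busy_work(100_000)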
Integration:
The process of combining software
components or hardware components, or both, into an overall
system.
Integration
Testing:
An orderly progression of testing in
which software components or hardware components, or both, are combined and
tested until the entire system has been integrated.
Interface:
A shared boundary. An interface
might be a hardware component to link two devices, or it might be a portion of
storage or registers accessed by two or more computer
programs.
Interface
Analysis:
Checks the interfaces between
program elements for consistency and adherence to predefined rules or
axioms.
Intrusive
Testing:
Testing that collects timing and
processing information during program execution that may change the behavior of
the software from its behavior in a real environment. Usually involves
additional code embedded in the software being tested or additional processes
running concurrently with software being tested on the same
platform.
IV&V:
Independent verification and
validation is the verification and validation of a software product by an
organization that is both technically and managerially separate from the
organization responsible for developing the
product.
Life
Cycle:
The period that starts when a
software product is conceived and ends when the product is no longer available
for use. The software life cycle typically includes a requirements phase, design
phase, implementation (code) phase, test phase, installation and checkout phase,
operation and maintenance phase, and a retirement
phase.
Manual
Testing:
That part of software testing that
requires operator input, analysis, or evaluation.
Mean:
A value derived by adding several
quantities and dividing the sum by the number of these
quantities.
Measurement:
1) The act or process of measuring.
2) A figure, extent, or amount obtained by measuring.
Metric:
A measure of the extent or degree to
which a product possesses and exhibits a certain quality, property, or
attribute.
Mutation
Testing:
A method to determine test set
thoroughness by measuring the extent to which a test set can discriminate the
program from slight variants of the program.
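For illustration, a hypothetical original function and one slight variant (a mutant); a test set that exercises the boundary discriminates the two, i.e., it kills the mutant:

    # Original component.
    def is_adult(age):
        return age >= 18

    # Mutant: the relational operator is slightly altered (>= becomes >).
    def is_adult_mutant(age):
        return age > 18

    # The versions disagree on age == 18, so this test case kills the mutant.
    assert is_adult(18) is True
    assert is_adult_mutant(18) is False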
Non-intrusive
Testing:
Testing that is transparent to the
software under test; i.e., testing that does not change the timing or processing
characteristics of the software under test from its behavior in a real
environment. Usually involves additional hardware that collects timing or
processing information and processes that information on another
platform.
Operational
Requirements:
Qualitative and quantitative
parameters that specify the desired operational capabilities of a system and
serve as a basis for determining the operational effectiveness and suitability
of a system prior to deployment.
Operational
Testing:
Testing performed by the end user on
software in its normal operating environment.
Outputs:
Products, services, or information
supplied to meet end user needs.
Path
Analysis:
Program analysis performed to
identify all possible paths through a program, to detect incomplete paths, or to
discover portions of the program that are not on any
path.
Path
Coverage Testing:
A test method satisfying coverage
criteria that requires each logical path through the program to be tested. Paths through the
program often are grouped into a finite set of classes; one path from each class
is tested.
Peer
Reviews:
A methodical examination of software
work products by the producer’s peers to identify defects and areas where
changes are needed.
Policy:
Managerial desires and intents
concerning either process (intended objectives) or products (desired
attributes).
Problem:
Any deviation from defined
standards. Same as defect.
Procedure:
The step-by-step method followed to
ensure that standards are met.
Process:
The work effort that produces a
product. This includes efforts of people and equipment guided by policies,
standards, and procedures.
Process
Improvement:
To change a process to make the
process produce a given product faster, more economically, or of higher quality.
Such changes may require the product to be changed. The defect rate must be
maintained or reduced.
Product:
The output of a process; the work
product. There are three useful classes of products: manufactured products
(standard and custom), administrative/ information products (invoices, letters,
etc.), and service products (physical, intellectual, physiological, and
psychological). Products are defined by a statement of requirements; they are
produced by one or more people working in a
process.
Product
Improvement:
To change the statement of
requirements that defines a product to make the product more satisfying and
attractive to the end user (more competitive). Such changes may add to or delete
from the list of attributes and/or the list of functions defining a product.
Such changes frequently require the process to be changed. NOTE: This process
could result in a totally new product.
Productivity:
The ratio of the output of a process
to the input, usually measured in the same units. It is frequently useful to
compare the value added to a product by a process to the value of the input
resources required (using fair market values for both input and
output).
Proof
Checker:
A program that checks formal proofs
of program properties for logical correctness.
Prototyping:
Evaluating requirements or designs
at the conceptualization phase, the requirements analysis phase, or design phase
by quickly building scaled-down components of the intended system to obtain
rapid feedback of analysis and design decisions.
Qualification
Testing:
Formal testing, usually conducted by
the developer for the end user, to demonstrate that the software meets its
specified requirements.
Quality:
A product is a quality product if it
is defect free. To the producer a product is a quality product if it meets or
conforms to the statement of requirements that defines the product. This
statement is usually shortened to “quality means meets requirements.” NOTE:
Operationally, the word quality refers to products.
Quality
Assurance (QA):
The set of support activities
(including facilitation, training, measurement, and analysis) needed to provide
adequate confidence that processes are established and continuously improved in
order to produce products that meet specifications and are fit for
use.
Quality
Control (QC):
The process by which product quality
is compared with applicable standards; and the action taken when nonconformance
is detected. Its focus is defect detection and removal. This is a line function,
that is, the performance of these tasks is the responsibility of the people
working within the process.
Quality
Improvement:
To change a production process so
that the rate at which defective products (defects) are produced is reduced.
Some process changes may require the product to be
changed.
Random
Testing:
An essentially black-box testing
approach in which a program is tested by randomly choosing a subset of all
possible input values. The distribution may be arbitrary or may attempt to
accurately reflect the distribution of inputs in the application
environment.
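A minimal random-testing sketch in Python, checking a property of a hypothetical function over randomly chosen inputs drawn from an arbitrary distribution:

    import random

    # Component under test: an absolute value should never be negative.
    def my_abs(x):
        return -x if x < 0 else x

    random.seed(0)  # reproducible run
    for _ in range(1000):
        x = random.randint(-10**6, 10**6)   # randomly chosen subset of inputs
        assert my_abs(x) >= 0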
Regression
Testing:
Selective retesting to detect faults
introduced during modification of a system or system component, to verify that
modifications have not caused unintended adverse effects, or to verify that a
modified system or system component still meets its specified
requirements.
Reliability:
The probability of failure-free
operation for a specified period.
Requirement:
A formal statement of: 1) an
attribute to be possessed by the product or a function to be performed by the
product; 2) the performance standard for the attribute or function; or 3) the
measuring process to be used in verifying that the standard has been
met.
Review:
A way to use the diversity and power
of a group of people to point out needed improvements in a product or confirm
those parts of a product in which improvement is either not desired or not
needed. A review is a general work product evaluation technique that includes
desk checking, walkthroughs, technical reviews, peer reviews, formal reviews,
and inspections.
Run
Chart:
A graph of data points in
chronological order used to illustrate trends or cycles of the characteristic
being measured for the purpose of suggesting an assignable cause rather than
random variation.
Scatter Plot (correlation diagram): A graph
designed to show whether there is a
relationship between two changing factors.
Semantics:
1) The
relationship of characters or a group of characters to their meanings,
independent of the manner of their interpretation and use.
2) The relationships between symbols and their meanings.
Software
Characteristic:
An inherent, possibly accidental,
trait, quality, or property of software (for example, functionality,
performance, attributes, design constraints, number of states, lines of code, or
branches).
Software
Feature:
A software characteristic specified
or implied by requirements
documentation (for example, functionality, performance, attributes, or design
constraints).
Software
Tool:
A computer program used to help
develop, test, analyze, or maintain another computer program or its
documentation; e.g., automated design tools, compilers, test tools, and
maintenance tools.
Standards:
The measure used to evaluate
products and identify nonconformance. The basis upon which adherence to policies
is measured.
Standardize:
Procedures are implemented to ensure
that the output of a process is maintained at a desired
level.
Statement
Coverage Testing:
A test method satisfying coverage
criteria that requires each statement be executed at least
once.
Statement
of Requirements:
The exhaustive list of requirements
that define a product. NOTE: The statement of requirements should document
requirements proposed and rejected (including the reason for the rejection)
during the requirements determination process.
Static
Testing:
Verification performed without
executing the system’s code. Also called static
analysis.
Statistical Process Control: The use of
statistical techniques and tools to
measure an ongoing process for change or stability.
Structural
Coverage:
This requires that each pair of
module invocations be executed at least once.
Structural
Testing:
A testing method where the test data
is derived solely from the program structure.
Stub:
A software component that usually
minimally simulates the actions of called components that have not yet been
integrated during top-down testing.
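A minimal sketch in Python with hypothetical names; the stub returns a canned value so the calling component can be tested before the real service is integrated:

    # Stub: minimally simulates a called component that is not yet integrated.
    def fetch_exchange_rate_stub(currency):
        return 1.0   # canned value standing in for the real lookup

    # Higher-level component under test, wired to the stub.
    def convert(amount, currency, rate_lookup=fetch_exchange_rate_stub):
        return amount * rate_lookup(currency)

    assert convert(250.0, "EUR") == 250.0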
Supplier:
An individual or organization that
supplies inputs needed to generate a product, service, or information to an end
user.
Syntax:
1) The
relationship among characters or groups of characters independent of their
meanings or the manner of their interpretation and use;
2) the structure of expressions in a language; and
3) the rules governing the structure of the language.
System:
A collection of people, machines,
and methods organized to accomplish a set of specified
functions.
System
Simulation:
Another name for
prototyping.
System
Testing:
The process of testing an integrated
hardware and software system to verify that the system meets its specified
requirements.
Technical
Review:
A review that refers to content of
the technical material being reviewed.
Test
Bed:
1) An
environment that contains the integral hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test of a
logically or physically separate component.
2) A suite of test programs used in conducting the test of a component or system.
Test
Case:
The definition of test case differs
from company to company, engineer to engineer, and even project to project. A
test case usually includes an identified set of information about observable
states, conditions, events, and data, including inputs and expected
outputs.
Test
Development:
The development of anything required
to conduct testing. This may include test requirements (objectives), strategies,
processes, plans, software, procedures, cases, documentation,
etc.
Test
Executive:
Another term for test
harness.
Test
Harness:
A software tool that enables the
testing of software components; it links test capabilities to perform specific
tests, accepts program inputs, simulates missing components, compares actual
outputs with expected outputs to determine correctness, and reports
discrepancies.
Test
Objective:
An identified set of software
features to be measured under specified conditions by comparing actual behavior
with the required behavior described in the software
documentation.
Test
Plan:
A formal or informal plan to be
followed to assure the controlled testing of the product under
test.
Test
Procedure:
The formal or informal procedure
that will be followed to execute a test. This is usually a written document that
allows others to execute the test with a minimum of
training.
Testing:
Any activity aimed at evaluating an
attribute or capability of a program or system to determine that it meets its
required results. The process of exercising or evaluating a system or system
component by manual or automated means to verify that it satisfies specified
requirements or to identify differences between expected and actual
results.
Top-down
Testing:
An integration testing technique
that tests the high-level components first using stubs for lower-level called
components that have not yet been integrated and that simulate the required
actions of those components.
Unit
Testing:
The testing done to show whether a
unit (the smallest piece of software that can be independently compiled or
assembled, loaded, and tested) satisfies its functional specification or its
implemented structure matches the intended design
structure.
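For illustration, a minimal sketch using Python's standard unittest module; the unit under test, word_count, is hypothetical:

    import unittest

    # Hypothetical unit under test.
    def word_count(text):
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_simple_sentence(self):
            self.assertEqual(word_count("formal testing pays off"), 4)

        def test_empty_string(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()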
User:
The end user that actually uses the
product received.
V-Diagram
(model):
A diagram that visualizes the order of testing
activities and their corresponding phases of
development.
Validation:
The process of evaluating software
to determine compliance with specified
requirements.
Verification:
The process of evaluating the
products of a given software development activity to determine correctness and
consistency with respect to the products and standards provided as input to that
activity.
Walkthrough:
Usually, a step-by-step simulation
of the execution of a procedure, as when walking through code, line by line,
with an imagined set of inputs. The term has been extended to the review of
material that is not procedural, such as data descriptions, reference manuals,
specifications, etc.
White-box
Testing:
Testing approaches that examine the
program structure and derive test data from the program logic. This is also
known as clear-box, glass-box, or open-box testing. White-box
testing determines whether program-code structure and logic are faulty. The test is
accurate only if the tester knows what the program is supposed to do; he or she
can then see if the program diverges from its intended goal. White-box testing
does not account for errors caused by omission, and all visible code must also
be readable.