The following glossary contains terms which are useful when developing assertion based tests. Where appropriate, words in this glossary are aligned with the IEEE standard for information technology, Test Methods for Measuring Conformance to POSIX: IEEE Standard 1003.3-1991 (ISBN 1-55937-104-8), The Institute of Electrical and Electronics Engineers, Inc., 345 East 47th Street, New York, NY 10017-2394, USA.
For the vast majority of the words in this glossary, the meaning is consistent and understood throughout the information technology industry. However, some words do mean different things to different people. These words are marked with a dagger (†).
Within this glossary, words which are defined elsewhere in the text are italicised where a cross-reference is useful.
Comments on the glossary or suggestions of words for inclusion are welcome. Please send us your comments.
Assertions take one of the following forms:
The Ford motorcar is black.
When cause occurs, then effect results.
If <condition>: when cause occurs, then effect results.
If <condition>: <bold assertion>
If <optional feature is supported>: when cause occurs, then effect results.
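The forms above can be sketched as simple executable checks. This is an illustrative sketch only; the function names and the car example are assumptions, not part of any standard:

```python
# Sketch of the assertion forms as executable checks (hypothetical names).

def bold_assertion(car):
    # Bold assertion: "The Ford motorcar is black."
    return car["colour"] == "black"

def conditional_assertion(cause, effect):
    # "When cause occurs, then effect results."
    if cause():          # when the cause occurs...
        return effect()  # ...the effect must result
    return True          # the cause did not occur, so nothing is asserted

def optional_feature_assertion(feature_supported, cause, effect):
    # "If <optional feature is supported>: when cause occurs, then effect results."
    if not feature_supported:
        return True  # feature absent: the assertion is not applicable
    return conditional_assertion(cause, effect)

print(bold_assertion({"colour": "black"}))                             # True
print(optional_feature_assertion(False, lambda: True, lambda: False))  # True
```

Note how the optional-feature form degenerates to "not applicable" when the feature is absent, which is why such assertions are classified separately.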
Assertions are thus classified as follows:
For example: tests for checking colors could be of the type
Normally the initial numbering scheme used when writing assertions is based on an increment of 10 between assertions. This enables assertion writers to add new assertions without re-numbering or being forced to place assertions in inappropriate positions.
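The benefit of the increment of 10 can be sketched as follows (an illustrative example; the numbers are invented):

```python
# Assertion numbers spaced by 10 leave gaps for later insertions.
assertions = {10: "first assertion", 20: "second assertion", 30: "third assertion"}

# A new assertion belonging between 10 and 20 can take any unused number
# in the gap, so no existing assertion needs to be renumbered:
assertions[15] = "newly added assertion"

print(sorted(assertions))  # [10, 15, 20, 30]
```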
This is testing the boundary conditions.
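A boundary test can be sketched as below. The function and the limit of 255 are hypothetical, chosen only to illustrate testing just below, at, and just above a boundary:

```python
MAX_NAME_LEN = 255  # assumed limit, for illustration only

def accepts_name(name):
    # Hypothetical routine under test: accepts names up to the limit.
    return len(name) <= MAX_NAME_LEN

# Boundary tests exercise the values around the limit, where errors cluster.
print(accepts_name("a" * 254))  # True  (just below the boundary)
print(accepts_name("a" * 255))  # True  (at the boundary)
print(accepts_name("a" * 256))  # False (just above the boundary)
```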
Most capture replay tools enable the test sessions to be edited, parameterised and generalised. Almost all of the tools have a compare facility to compare the expected results from a test run with those which actually occur.
Capture replay tools are often used for regression testing and testing the user interface associated with application software. See also replay.
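The compare facility can be sketched as below. This is a minimal illustration of comparing expected with actual results, not the API of any particular capture replay tool:

```python
import difflib

def compare_results(expected_lines, actual_lines):
    # Report the differences between the expected output of a test run
    # and the output which actually occurred; an empty list means a match.
    return list(difflib.unified_diff(expected_lines, actual_lines,
                                     fromfile="expected", tofile="actual",
                                     lineterm=""))

expected = ["login ok", "menu displayed"]
actual = ["login ok", "error dialog"]

print(compare_results(expected, expected) == [])      # True: run matches
print(len(compare_results(expected, actual)) > 0)     # True: regression found
```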
The following note is adapted from the IEEE's POSIX 1003.3-1991 standard:
The term compliance was introduced to provide an efficient way to represent specific acceptable levels of conformance to an implementation of a specification (as measured by a test). Thus "compliant with" a specification means "passing" the tests associated with the specification. However, at a later stage the developers of this standard decided that a distinction between the words compliance and conformance should be eliminated as it was causing confusion.
An assertion for a conditional feature starts with: "If <optional feature is supported>:".
These are also known as positive tests. (It is easier to use the phrase positive tests, since it removes the confusion with the word conformance).
An example of this would be where a personal computer was used as a capture and replay tool to play test scripts which exercised software on a server connected to the PC using a network.
Given that distributed testing has more than one possible meaning, it is vital that this term is further defined before the development of distributed tests is considered in detail. See also distributed test and remote test.
The test developers create tests which are based on their best guess (or their own experience) regarding where errors might be found. Using experienced testers, this has been found to be an excellent way of finding errors.
For example, experience might show that on particular processors, selecting particular numbers for maths tests may be more likely to generate errors.
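A sketch of error guessing for a maths routine is shown below. The routine and the chosen values are illustrative assumptions: the values are the kind an experienced tester might guess at (zero, negative zero, very large, very small and negative inputs), not a standard list:

```python
import math

# Values experience suggests are error-prone (illustrative guesses).
suspect_values = [0.0, -0.0, 1e308, 1e-308, -1.0]

def safe_sqrt(x):
    # Hypothetical routine under test: square root with domain checking.
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)

results = []
for v in suspect_values:
    try:
        safe_sqrt(v)
        results.append("pass")
    except ValueError:
        results.append("fail-expected")  # the guessed value provoked the error path

print(results)
```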
When an assertion is declared to be an extended assertion, and the test is not written, the reason code (or number as above) is marked alongside the assertion. Extended assertions are better known as untested assertions.
The assertion id for each general assertion starts with the letters GA.
Glass box tests are also known as white box tests or validation tests.
See also rationale (which tends to be the comments sub-set of informative text).
Informative text is not tested.
This is also known as closed loop testing.
To avoid additional ambiguity the reverse sense of "may" is best expressed as "need not" (rather than "may not").
Non-normative text is provided to suggest possible techniques for one of the following:
See also rationale (which tends to be the comments sub-set of non-normative text).
Non-normative text is not tested.
Writers of documents which are to be tested are well advised to separate normative and non-normative text, so it is obvious which text describes the operation of the product (and shall be tested) and which text is intended as background information.
Normative text is identical to text describing requirements.
Traditionally, measuring the length of time a functional test takes to execute is a poor measure of performance. This is because much of the time taken to perform a test is spent checking that an element of functionality has worked correctly, rather than purely the execution of the element.
With careful design, functional tests can show useful performance data (indeed, they can show exactly which elements of functionality are executing quickly or slowly).
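One such design is to time only the element of functionality, keeping the checking outside the timed region. The sketch below uses an invented helper and a sort as a stand-in element:

```python
import time

def timed(operation):
    # Time only the element of functionality, excluding any result checking.
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed

# Hypothetical element under test: sorting a reversed list.
data = list(range(10000, 0, -1))
result, elapsed = timed(lambda: sorted(data))

# The correctness check runs after timing, so it does not distort the figure.
correct = (result == sorted(data))
print(correct)         # True
print(elapsed >= 0.0)  # True: a per-element timing from a functional test
```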
These are also known as conformance tests.
Tests which work correctly independently of their order can often be executed in a random order to ensure that no interdependencies exist within the product.
Random testing is only effective if the tests can be reproduced. For this reason, the order and data associated with random tests must be preserved.
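Preserving the order is straightforward if the shuffle is driven by a recorded seed, as in this minimal sketch (the test names and seed are invented):

```python
import random

tests = ["test_a", "test_b", "test_c", "test_d"]

# Record the seed so the exact random order can be reproduced on failure.
seed = 12345
order = tests[:]
random.Random(seed).shuffle(order)
print("seed:", seed)  # logged alongside the run

# Replaying with the same recorded seed yields the same order,
# so a failing random run can be reproduced exactly.
replay = tests[:]
random.Random(seed).shuffle(replay)
print(order == replay)  # True
```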
Rationale text is not tested.
A remote test may pass results back to a central results repository on another platform.
A required feature will result in a base assertion or an extended assertion being written.
Writers of documents which are to be tested are well advised to separate requirements and non-normative text, so it is obvious which text describes the operation of the product (and shall be tested) and which text is intended as background information.
Requirements are also known as normative text.
Best practice is to avoid the word "should" with regard to implementations (using the word shall in preference).
When the word "should" is used with reference to user operations, the text is not normally tested.
The overall approach for testing grouped elements of functionality should be included within the test suite design. Also known as tactic.
Traditionally, functional tests do not stress software. This is because it is normally possible to test functionality using very small amounts of data and a minimal number of users (thus the tests work well on a system with a small configuration).
However, with careful design, functional tests can stress a system (indeed, they can show exactly which elements of functionality are working well under stress and which are not).
The overall approach for testing grouped elements of functionality should be included within the test suite design. Also known as strategy.
For example, when conducting tests for mathematical routines, many values might be used to test a single assertion.
The results include: pass, fail, unresolved, unsupported, untested.
Test developers often add additional codes to these, including:
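The five result codes named above can be sketched as an enumeration, with room for locally added codes; the extra code shown is a hypothetical example of such a developer extension:

```python
from enum import Enum

class Result(Enum):
    # The five codes from the glossary entry above.
    PASS = "pass"
    FAIL = "fail"
    UNRESOLVED = "unresolved"    # the test could not reach a verdict
    UNSUPPORTED = "unsupported"  # an optional feature is not provided
    UNTESTED = "untested"        # no test written (extended assertion)
    # A locally added code (hypothetical example of a developer extension):
    NOTINUSE = "notinuse"

print(len(Result))        # 6
print(Result.PASS.value)  # pass
```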
The requirements are used to define a precise level of thoroughness with which the assertion is to be tested. For example, a testing requirement might require the test developer to add a test which demonstrated functionality shown in an example in customer documentation.
The POSIX specifications state that when there is a chance that an assertion may be ambiguous, incomplete or misinterpreted, the assertion is clarified by adding a testing requirement.
However, if an assertion is unclear, it should always be rewritten. Thus, ambiguity is not a reason to add text to further define the meaning of the assertion.
Validation tests are also known as glass box tests or white box tests.
White box tests are also known as glass box tests or validation tests.