Even with the best validation, it’s very hard to achieve perfect quality in software. Here are some typical residual defect rates (bugs left over after the software has shipped) per kloc (one thousand lines of source code):
- 1 – 10 defects/kloc: Typical industry software.
- 0.1 – 1 defects/kloc: High-quality validation. The Java libraries might achieve this level of correctness.
- 0.01 – 0.1 defects/kloc: The very best, safety-critical validation. NASA and companies like Praxis can achieve this level.
This can be discouraging for large systems. For example, if you have shipped a million lines of typical industry source code (1 defect/kloc), it means you missed 1000 bugs!
Here are some approaches that unfortunately don’t work well in the world of software.
Exhaustive testing is infeasible. The space of possible test cases is generally too big to cover exhaustively. Imagine exhaustively testing a 32-bit floating-point multiply operation,
a*b. There are 2^64 test cases!
Haphazard testing (“just try it and see if it works”) is less likely to find bugs, unless the program is so buggy that an arbitrarily-chosen input is more likely to fail than to succeed. It also doesn’t increase our confidence in program correctness.
Random or statistical testing doesn’t work well for software. Other engineering disciplines can test small random samples (e.g. 1% of hard drives manufactured) and infer the defect rate for the whole production lot. Physical systems can use many tricks to speed up time, like opening a refrigerator 1000 times in 24 hours instead of 10 years. These tricks give known failure rates (e.g. mean lifetime of a hard drive), but they assume continuity or uniformity across the space of defects. This is true for physical artifacts.
But it’s not true for software. Software behavior varies discontinuously and discretely across the space of possible inputs. The system may seem to work fine across a broad range of inputs, and then abruptly fail at a single boundary point. The famous Pentium division bug affected approximately 1 in 9 billion divisions. Stack overflows, out of memory errors, and numeric overflow bugs tend to happen abruptly, and always in the same way, not with probabilistic variation. That’s different from physical systems, where there is often visible evidence that the system is approaching a failure point (cracks in a bridge) or failures are distributed probabilistically near the failure point (so that statistical testing will observe some failures even before the point is reached).
Test early and often. Test as if you want to make the code fail, and don’t treat your code as something precious. Don’t leave testing until the end, when you have a big pile of unvalidated code. Leaving testing until the end only makes debugging longer and more painful, because bugs may be anywhere in your code. It’s far more pleasant to test your code as you develop it.
In test-first programming, you write tests before you even write any code. The development of a single function proceeds in this order:
- Write a specification for the function.
- Write tests that exercise the specification.
- Write the actual code. Once your code passes the tests you wrote, you’re done.
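As a sketch of this workflow, here is a hypothetical `isLeapYear` function (not from the reading), shown with its spec, the tests written from that spec, and the implementation written last. Plain `assert` statements stand in for a test framework like JUnit:

```java
public class TestFirst {
    /**
     * Step 1: the specification.
     * @param year a year in the Gregorian calendar
     * @return true if and only if year is a leap year
     */
    public static boolean isLeapYear(int year) {
        // Step 3: the implementation, written only after the tests below existed
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        // Step 2: tests derived from the spec, before any implementation
        assert isLeapYear(2004);   // divisible by 4 but not by 100
        assert !isLeapYear(1900);  // century year not divisible by 400
        assert isLeapYear(2000);   // divisible by 400
        assert !isLeapYear(2023);  // ordinary non-leap year
        System.out.println("all tests passed");
    }
}
```

Run with `java -ea TestFirst` so that the `assert` statements are checked; in practice a test framework would report each failing case individually.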
The specification describes the input and output behavior of the function. It gives the types of the parameters and any additional constraints on them (e.g.
sqrt’s parameter must be nonnegative). It also gives the type of the return value and how the return value relates to the inputs. You’ve already seen and used specifications on your problem sets in this class. In code, the specification consists of the method signature and the comment above it that describes what it does. We’ll have much more to say about specifications a few classes from now.
Writing tests first is a good way to understand the specification. The specification can be buggy, too — incorrect, incomplete, ambiguous, missing corner cases. Trying to write tests can uncover these problems early, before you’ve wasted time writing an implementation of a buggy spec.
Choosing Test Cases by Partitioning
Creating a good test suite is a challenging and interesting design problem. We want to pick a set of test cases that is small enough to run quickly, yet large enough to validate the program.
To do this, we divide the input space into subdomains, each consisting of a set of inputs. Taken together the subdomains completely cover the input space, so that every input lies in at least one subdomain. Then we choose one test case from each subdomain, and that’s our test suite.
The idea behind subdomains is to partition the input space into sets of similar inputs on which the program has similar behaviour. Then we use one representative of each set. This approach makes the best use of limited testing resources by choosing dissimilar test cases, and forcing the testing to explore parts of the input space that random testing might not reach.
We can also partition the output space into subdomains (similar outputs on which the program has similar behaviour) if we need to ensure our tests will explore different parts of the output space. Most of the time, partitioning the input space is sufficient.
Let’s look at an example.
BigInteger is a class built into the Java library that can represent integers of any size, unlike primitive types like int and long that have only limited ranges. BigInteger has a method multiply that multiplies two BigInteger values together (this is an instance method, hence the use of this):
```java
/**
 * @param val another BigInteger
 * @return a BigInteger whose value is (this * val).
 */
public BigInteger multiply(BigInteger val)
```
For example, here’s how it might be used:
```java
BigInteger a = ...;
BigInteger b = ...;
BigInteger ab = a.multiply(b);
```
This example shows that even though only one parameter is explicitly shown in the method’s declaration, multiply is actually a function of two arguments: the object you’re calling the method on (a in the example above), and the parameter that you’re passing in the parentheses (b in this example). In Python, the object receiving the method call would be explicitly named as a parameter called self in the method declaration. In Java, you don’t mention the receiving object in the parameters, and it’s called this instead of self.

So we should think of multiply as a function taking two inputs, each of type BigInteger, and producing one output of type BigInteger:

multiply : BigInteger × BigInteger → BigInteger
So we have a two-dimensional input space, consisting of all the pairs of integers (a,b). Now let’s partition it. Thinking about how multiplication works, we might start with these partitions:
- a and b are both positive
- a and b are both negative
- a is positive, b is negative
- a is negative, b is positive
There are also some special cases for multiplication that we should check: 0, 1, and -1.
- a or b is 0, 1, or -1
Finally, as a suspicious tester trying to find bugs, we might suspect that the implementor of BigInteger might try to make it faster by using long internally when possible, and only fall back to an expensive general representation (like a list of digits) when the value is too big. So we should definitely also try integers that are very big, bigger than the biggest long:
- a or b is small
- the absolute value of a or b is bigger than Long.MAX_VALUE, the biggest possible primitive integer in Java, which is roughly 2^63.
Let’s bring all these observations together into a straightforward partition of the whole (a,b) space. We’ll choose a and b independently from:
- 0
- 1
- -1
- small positive integer (greater than 1)
- small negative integer (less than -1)
- huge positive integer (bigger than Long.MAX_VALUE)
- huge negative integer (smaller than Long.MIN_VALUE)
So this will produce 7 × 7 = 49 partitions that completely cover the space of pairs of integers.
To produce the test suite, we would pick an arbitrary pair (a,b) from each square of the grid, for example:
- (a,b) = (-3, 25) to cover (small negative, small positive)
- (a,b) = (0, 30) to cover (0, small positive)
- (a,b) = (2^100, 1) to cover (large positive, 1)
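Those three sample cases might be written as follows (a sketch using plain assertions; a real suite would use a test framework, with one method per case, and cover all 49 subdomains):

```java
import java.math.BigInteger;

public class MultiplyPartitionTest {
    public static void main(String[] args) {
        // (small negative, small positive): (-3) * 25 = -75
        assert BigInteger.valueOf(-3).multiply(BigInteger.valueOf(25))
                .equals(BigInteger.valueOf(-75));

        // (0, small positive): 0 * 30 = 0
        assert BigInteger.ZERO.multiply(BigInteger.valueOf(30))
                .equals(BigInteger.ZERO);

        // (huge positive, 1): 2^100 is far beyond Long.MAX_VALUE,
        // forcing the general (non-long) representation
        BigInteger huge = BigInteger.valueOf(2).pow(100);
        assert huge.multiply(BigInteger.ONE).equals(huge);

        System.out.println("multiply partition tests passed");
    }
}
```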
The figure at the right shows how the two-dimensional (a,b) space is divided by this partition, and the points are test cases that we might choose to completely cover the partition.
Let’s look at another example from the Java library: the integer max() function, found in the Math class:
```java
/**
 * @param a an argument
 * @param b another argument
 * @return the larger of a and b.
 */
public static int max(int a, int b)
```
Mathematically, this method is a function of the following type:
max : int × int → int
From the specification, it makes sense to partition this function as:
- a < b
- a = b
- a > b
Our test suite might then be:
- (a, b) = (1, 2) to cover a < b
- (a, b) = (9, 9) to cover a = b
- (a, b) = (-5, -6) to cover a > b
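As executable assertions (a sketch using Math.max directly, since that is the library method under discussion):

```java
public class MaxPartitionTest {
    public static void main(String[] args) {
        assert Math.max(1, 2) == 2;    // covers a < b
        assert Math.max(9, 9) == 9;    // covers a = b
        assert Math.max(-5, -6) == -5; // covers a > b
        System.out.println("max partition tests passed");
    }
}
```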
Include Boundaries in the Partition
Bugs often occur at boundaries between subdomains. Some examples:
- 0 is a boundary between positive numbers and negative numbers
- the maximum and minimum values of numeric types, like Integer.MAX_VALUE and Integer.MIN_VALUE
- emptiness (the empty string, empty list, empty array) for collection types
- the first and last element of a collection
Why do bugs often happen at boundaries? One reason is that programmers often make off-by-one mistakes (like writing
<= instead of
<, or initializing a counter to 0 instead of 1). Another is that some boundaries may need to be handled as special cases in the code. Another is that boundaries may be places of discontinuity in the code’s behavior. When an
int variable grows beyond its maximum positive value, for example, it abruptly becomes a negative number (discontinuous behaviour).
It’s important to include boundaries as subdomains in your partition, so that you’re choosing an input from the boundary. Let’s redo the partition for max : int × int → int, this time including boundaries:
- relationship between a and b
  - a < b
  - a = b
  - a > b
- value of a
  - a = 0
  - a < 0
  - a > 0
  - a = minimum integer
  - a = maximum integer
- value of b
  - b = 0
  - b < 0
  - b > 0
  - b = minimum integer
  - b = maximum integer
Now let’s pick test values that cover all these classes:
- (1, 2) covers a < b, a > 0, b > 0
- (-1, -3) covers a > b, a < 0, b < 0
- (0, 0) covers a = b, a = 0, b = 0
- (Integer.MIN_VALUE, Integer.MAX_VALUE) covers a < b, a = minint, b = maxint
- (Integer.MAX_VALUE, Integer.MIN_VALUE) covers a > b, a = maxint, b = minint
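This five-case suite, written out as assertions (again a sketch using plain `assert` rather than a test framework):

```java
public class MaxBoundaryTest {
    public static void main(String[] args) {
        assert Math.max(1, 2) == 2;     // a < b, a > 0, b > 0
        assert Math.max(-1, -3) == -1;  // a > b, a < 0, b < 0
        assert Math.max(0, 0) == 0;     // a = b, a = 0, b = 0
        // a < b, a = minimum integer, b = maximum integer
        assert Math.max(Integer.MIN_VALUE, Integer.MAX_VALUE) == Integer.MAX_VALUE;
        // a > b, a = maximum integer, b = minimum integer
        assert Math.max(Integer.MAX_VALUE, Integer.MIN_VALUE) == Integer.MAX_VALUE;
        System.out.println("max boundary tests passed");
    }
}
```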
Two Extremes for Covering the Partition
After partitioning the input space, we can choose how exhaustive we want the test suite to be:
- Full Cartesian product.
Every legal combination of the partition dimensions is covered by one test case. This is what we did for the multiply example, and it gave us 7 × 7 = 49 test cases. For the max example that included boundaries, which has three dimensions with 3 parts, 5 parts, and 5 parts respectively, it would mean up to 3 × 5 × 5 = 75 test cases. In practice not all of these combinations are possible, however. For example, there’s no way to cover the combination a < b, a = 0, b = 0, because a can’t be less than b when both are zero.
- Cover each part.
Every part of each dimension is covered by at least one test case, but not necessarily every combination. With this approach, the test suite for max might be as small as 5 test cases if carefully chosen. That’s the approach we took above, which needed only 5 test cases.
Black box and White box Testing
Recall from above that the specification is the description of the function’s behavior — the types of parameters, type of return value, and constraints and relationships between them.
Black box testing means choosing test cases only from the specification, not from the implementation of the function. That’s what we’ve been doing in our examples so far: we partitioned and looked for boundaries in multiply and max without looking at the actual code for these functions.
White box testing (also called glass box testing) means choosing test cases with knowledge of how the function is actually implemented. For example, if the implementation selects different algorithms depending on the input, then you should partition according to those domains. If the implementation keeps an internal cache that remembers the answers to previous inputs, then you should test repeated inputs. The goal is to exercise each path and each algorithm the implementation can take.
When doing white box testing, you must take care that your test cases don’t require specific implementation behaviour that isn’t specifically called for by the spec. For example, if the spec says “throws an exception if the input is poorly formatted,” then your test shouldn’t check specifically for a
NullPointerException just because that’s what the current implementation does. The specification in this case allows any exception to be thrown, so your test case should likewise be general to preserve the implementor’s freedom. Tests should always respect the specification.
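For instance, suppose a hypothetical `parsePositive` method is specified only to "throw an exception if the input is poorly formatted". A spec-respecting test catches the general `Exception` the spec permits, rather than pinning down whichever subclass the current implementation happens to throw:

```java
public class SpecRespectingTest {
    /**
     * Hypothetical method under test. Spec: returns the value of s as a
     * positive integer; throws an exception if the input is poorly formatted.
     */
    public static int parsePositive(String s) {
        int n = Integer.parseInt(s); // currently throws NumberFormatException
        if (n <= 0) {
            throw new IllegalArgumentException("not positive");
        }
        return n;
    }

    public static void main(String[] args) {
        assert parsePositive("7") == 7;

        boolean threw = false;
        try {
            parsePositive("not a number");
        } catch (Exception e) {
            // Catch Exception, not NumberFormatException: the spec allows
            // any exception, so the test shouldn't constrain the implementor.
            threw = true;
        }
        assert threw;
        System.out.println("spec-respecting test passed");
    }
}
```

With JUnit, the same idea is expressed by asserting that `Exception` (not a specific subclass) is thrown.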
One way to judge a test suite is to ask how thoroughly it exercises the program. This notion is called coverage. Here are three common kinds of coverage:
- Statement coverage: is every statement run by some test case?
- Branch coverage: for every if and while statement in the program, are both the true and the false direction taken by some test case?
- Path coverage: is every possible combination of branches — every path through the program — taken by some test case?
Branch coverage is stronger (requires more tests to achieve) than statement coverage, and path coverage is stronger than branch coverage. In industry, 100% statement coverage is a common goal, but even that is rarely achieved due to unreachable defensive code (like “should never get here” assertions). 100% branch coverage is highly desirable, and safety-critical industry code has even more stringent criteria (e.g., “MC/DC”, modified condition/decision coverage). Unfortunately 100% path coverage is infeasible, requiring exponential-size test suites to achieve.
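To see the difference concretely, consider this small hypothetical function. A suite containing only `abs(5)` executes the `if` test and the final `return`, but misses the `return -x` statement, so it achieves neither full statement nor full branch coverage; adding `abs(-5)` covers both directions of the `if`:

```java
public class CoverageExample {
    public static int abs(int x) {
        if (x < 0) {
            return -x; // never executed by a suite with only nonnegative inputs
        }
        return x;
    }

    public static void main(String[] args) {
        assert abs(5) == 5;  // exercises only the false direction of the if
        assert abs(-5) == 5; // adds the true direction: full statement and branch coverage
        System.out.println("coverage example passed");
    }
}
```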
A standard approach to testing is to add tests until the test suite achieves adequate statement coverage: i.e., so that every reachable statement in the program is executed by at least one test case. In practice, statement coverage is usually measured by a code coverage tool, which counts the number of times each statement is run by your test suite. With such a tool, white box testing is easy; you just measure the coverage of your black box tests, and add more test cases until all important statements are logged as executed.
A good code coverage tool for Eclipse is EclEmma, shown on the right.
Lines that have been executed by the test suite are coloured green, and lines not yet covered are red. If you saw this result from your coverage tool, your next step would be to come up with a test case that causes the body of the while loop to execute, and add it to your test suite so that the red lines become green.