Stochastic (Exploratory) Testing
Stochastic testing occurs when we allow a reasonably well-seasoned test engineer to go with their “gut” and feel their way around the product’s performance. During the development of numerous embedded automotive products, we have seen stochastic testing elicit roughly the same number of test failures as combinatorial testing. We are not recommending that stochastic testing supplant combinatorial testing, but rather that both have a place in our overall test strategy and that they complement each other.
It is important that we have a means of recording what we do during stochastic testing. Spectacularly successful tests should be added to the existing suite of test cases and reused. We have seen situations where program managers, customers, and software engineers were aghast as the original suite of test cases metamorphosed into the ultimate horror show, sometimes growing by thousands of test cases. They may say, “the product will never see that level of stimuli.” Then the test engineer walks them out to the field application of the product and makes the fault happen in a non-laboratory environment—demonstrating that it can and will indeed happen. We expect and hope the test suite grows as the test engineers learn more about the behavior of the product. Certainly the code is not static as we proceed through the development cycle, and we should not expect the test suite to remain static either. The same concept applies to hardware testing; in most cases, our first attempts at testing are more of a learning experience than anything else.
We can use automation to help us with some of our stochastic testing. Consider an automated test fixture that exercises the features (requirements) of a product. Typically, we have the automated testing run through a defined sequence of test cases; that is, we execute the test cases in a single fixed order—for example, test case A is always followed by test case B. If our test cases are numerically identified, we can instead use a random number generator to arbitrarily sequence the order in which the test cases execute. In this way we may find interactions among the modules that cause failures in our product. When these tests are automated, we can record the sequence and any resulting erroneous performance.
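The randomized sequencing described above can be sketched as follows. This is a minimal illustration, not an implementation of any particular test fixture: the test cases A, B, and C are hypothetical stand-ins for real product tests, and the seeded random number generator lets us record and replay any failing sequence exactly.

```python
import random

def run_stochastic_sequence(test_cases, seed=None):
    """Execute test cases in a randomized order, recording the
    sequence and any failures so a failing run can be replayed
    by reusing the same seed."""
    rng = random.Random(seed)          # seeded RNG makes the order reproducible
    order = list(test_cases)
    rng.shuffle(order)                 # arbitrary sequencing via the RNG
    log = {"seed": seed, "sequence": [], "failures": []}
    for name, test in order:
        log["sequence"].append(name)   # record the executed order
        try:
            test()
        except AssertionError as exc:  # record any erroneous performance
            log["failures"].append((name, str(exc)))
    return log

# Hypothetical test cases standing in for real product tests.
def test_a(): assert True
def test_b(): assert True
def test_c(): assert 1 + 1 == 2

cases = [("A", test_a), ("B", test_b), ("C", test_c)]
result = run_stochastic_sequence(cases, seed=42)
```

Because the shuffle is driven by a recorded seed, a sequence that elicits a failure can be rerun verbatim and, if it proves valuable, promoted into the permanent test suite.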
We consider it normal for a suite of tests to continue to grow during the life cycle of product development. The specification and the requirements often change, so it makes sense to expect the test suite to adapt to product changes as well as to changes in our understanding. We would be more concerned if the test cases did not grow!