Testing and numbers

We take a time out from the configuration management discussion. We see in the news numerous companies with field quality problems, and we cannot help but think of the discussions we have had with colleagues about how many organizations handle their product testing. Testing, done right, is a leading indicator of product quality: with some effort, it is possible to "predict" the quality of the product in the field. A lagging indicator, the opposite, means you learn about the product quality only after you launch the product and have production volumes with the associated problems.
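To make the "leading indicator" idea concrete, here is a deliberately crude back-of-envelope sketch of projecting field failures from test results. All of the numbers (test hours, failure counts, fleet size, usage per unit) are hypothetical, and the sketch assumes the field population behaves like the test population, with no acceleration factors or confidence bounds.

```python
# Back-of-envelope sketch: using test results as a leading indicator of
# field quality. All figures below are hypothetical, for illustration only.

test_hours = 5_000           # total hours accumulated across all test units
test_failures = 4            # failures observed during that testing

# Point estimate of the failure rate demonstrated in test
failure_rate = test_failures / test_hours          # failures per operating hour

# Hypothetical field exposure for the first year of production
units_shipped = 20_000
hours_per_unit_per_year = 300
field_hours = units_shipped * hours_per_unit_per_year

# Projected field failures if the field behaves like the test population
projected_field_failures = failure_rate * field_hours

print(f"Demonstrated failure rate: {failure_rate:.2e} failures/hour")
print(f"Projected field failures in year one: {projected_field_failures:,.0f}")
```

Even this simple arithmetic makes the point: a handful of failures in a modest test campaign can imply thousands of field incidents once production volumes are in play, which is exactly the prediction a leading indicator gives you time to act on.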

A few of the ways the testing process can fail to deliver are listed below:

1. Insufficient time and resources spent testing
2. Informal handling of fault reports (Excel sheets hidden on somebody’s laptop); see the sketch after this list
3. Disbelief in the testing results (that could never happen)
4. Testing results treated like opinion (even specific measured data)
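
On the second item, even a minimal structured record beats a spreadsheet hidden on somebody's laptop. The sketch below only illustrates the kind of fields such a record might carry; the field names, severity levels, and example values are assumptions, not a prescription for any particular tool.

```python
# Minimal sketch of a structured fault report, as an alternative to an
# informal spreadsheet. Field names, severity levels, and values are hypothetical.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Severity(Enum):
    CRITICAL = 1   # safety or data-loss impact
    MAJOR = 2      # core function fails, no workaround
    MINOR = 3      # degraded behavior, workaround exists


@dataclass
class FaultReport:
    report_id: str                 # unique, so the report cannot quietly vanish
    date_found: date
    test_case: str                 # which test exposed the fault
    build_under_test: str          # configuration/build identifier
    description: str
    severity: Severity
    reproducible: bool
    status: str = "open"           # open / in analysis / fixed / verified
    linked_requirements: list = field(default_factory=list)


# Example usage: a single report, visible to the whole team rather than
# buried in one person's files.
report = FaultReport(
    report_id="FR-0042",
    date_found=date(2024, 3, 1),
    test_case="TC-THERMAL-07",
    build_under_test="build-1.4.2",
    description="Unit resets under sustained load at 45 C",
    severity=Severity.MAJOR,
    reproducible=True,
)
print(report)
```

The specifics matter less than the properties: every fault has an identifier, a status, and a traceable link back to the test and the build that exposed it, visible to the whole team.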

One (seemingly) ubiquitous way the process fails is insufficient time and resources applied to this critical set of activities. We see projects that allow late deliveries of the requirements and the design iterations, but still expect the testing to finish on time. Because the end date does not change, the late deliveries mean the time allowed for testing has been cut from the original plan. Projects that make this decision should have significant quality contingency money reserved.

Another common failure is to deny that a failure found in testing would ever happen in the field. Invariably, we will find the failure in the field, with the same failure modes and volumes that our testing results suggest.

My favorite failure is to dispute the test results as if they were opinions rather than the output of a thorough and well-executed test suite. Even when the failure information includes sound and relevant mathematical analyses, we see teams act as if the results were personal opinions. This scenario is not the same as the previous failure, where we at least acknowledge the failure is possible. In this case we dismiss the test results by declaring them opinion, a form of psychological denial.

Sometimes it is not possible to test every permutation of the product; to do so would take so much time that the product would be obsolete by the time testing was complete. That does not mean we can sidestep the testing process or dismiss the test results out of hand. Such a mentality sets your project and your business up for failure and potential litigation. Testing is not the last thing you do for your project; you should be testing throughout the product development cycle, learning about the capabilities of the organization as well as the product. We do well for our customers and for ourselves when we take the time to do things right!
