By Jon M. Quigley
I saw a LinkedIn post yesterday about the scope of testing under a compressed schedule. The position was to test what is new in the software and, within that new content, what is most important, judged perhaps by which failures would be worst for the client or customer. Generally, this is probably a good idea. However, this approach has some drawbacks, chief among them that it means no regression testing. Regression testing is the testing of the existing software features when we add new features to the product.
Testing only the new content is predicated on the belief that the things we have changed or added have no implication or impact on the features and functions that were already in place before this latest iteration of the software. That may not be true. If we make changes to a software module that is used by other functions, we may miss testing a change in a key interaction.
It is good to ask questions about the nature of the change. For example, does this change to the software include changes to the operating system or to some subsystem handling, say, the serial communication algorithm or another system-level software attribute? Additionally, what we intend to change when we rework software and what actually gets changed are sometimes not congruent. Somebody makes a change to a software module that is not a required change but perhaps a seemingly innocuous change requested by the customer. As if that were not enough, just adding new features to the software provides opportunities for other things in the software to get accidentally changed.
Things that can get missed when we do not perform regression testing:
- Accidental changes to software and parameters (the fat finger situation)
- Opportunistic (rather than accidental) changes to software modules that were not part of the new feature content (the customer wants a change and we accommodate)
- Changes to the system we use to build the software (new compiler – version, revision, or supplier; and development environment) that may impact the final product result
- Changes to operating system type components – for example, the communications modules in the software
- Errors in the constituent parts of the software build (What software components or modules are included in the build? Do we have the latest versions of all component parts?)
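The last item in the list above can be made mechanical. As a minimal sketch (not any particular build system's API; the manifest format and function names here are my own illustration), one can compare the modules actually present in a build directory against an expected manifest of content hashes, catching missing, stale, or unexpected parts before testing even begins:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_build_manifest(build_dir: Path, expected: dict[str, str]) -> list[str]:
    """Compare files in build_dir against an expected {filename: sha256}
    manifest. Returns a list of discrepancies: missing modules,
    content mismatches, and modules that should not be in the build."""
    problems = []
    actual = {p.name: sha256_of(p) for p in build_dir.iterdir() if p.is_file()}
    for name, digest in expected.items():
        if name not in actual:
            problems.append(f"missing module: {name}")
        elif actual[name] != digest:
            problems.append(f"content mismatch: {name}")
    for name in actual:
        if name not in expected:
            problems.append(f"unexpected module: {name}")
    return problems
```

A check like this does not replace regression testing, but it removes one class of build-composition error from the list of things testing must catch.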
Personally, unless seriously cramped for time (and even then with reservations), I prefer to always perform regression testing. In fact, regression testing is probably one of the best reasons to adopt a philosophy of test automation. Tests can be executed overnight, freeing our team to use their creativity, experience, and intuition to explore the product. I advocate regression testing even for mild changes to the product unless very specific criteria are met, for example, the software team has a history of delivering software free of events like those in the list above, such as including incorrect parts in the build (a level of maturity). I prefer performing regression testing, but then again, I work in embedded automotive software, where the consequences of a defect range from the uncomfortable to the intolerable. Better to err on the side of being risk-conservative rather than risk-tolerant.