
Testing After Deployment



JonMQuigley
05-11-2017, 01:12 PM
I read this article on LinkedIn on testing after deployment by Jan Bosch (https://www.linkedin.com/pulse/towards-testing-after-deployment-jan-bosch?trk=v-feed&trk=v-feed&lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BoPJhpRABvsvXJMWynBtdpQ%3D%3D) and it got me reflecting on my own testing experience. More often than not, it seems, product testing is crunched up against the delivery date. To be fair, the article is specific about using this technique in a continuous delivery system. That is, the product is delivered to the customer in constant, small increments, so the gap between this instantiation of the product and the last, specifically in feature content or complexity, is small. In fact, even outside of a continuous delivery system, as a tester you may find yourself testing a product that is already in the production pipeline. In my experience, the testing usually started before the software launched. The testing is prioritized to explore those areas of the product and software that could cause harm if they failed or performed poorly. Once those failures are excluded, the product can be launched with what may end up being cosmetic or minor-annoyance bugs. This happens when the project schedule has been mostly consumed by the prior activities and the testing is cramped up against the launch. This, in my experience, is where a re-evaluation of the risk-reward equation SHOULD happen.
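
To make that prioritization concrete, here is a minimal sketch of how I think about it when the schedule is crunched: score each area by likelihood of failure times impact of that failure, and spend the remaining hours on the highest scores first. The area names and numbers below are hypothetical, purely for illustration.

    # A minimal sketch of risk-based test prioritization: score each area by
    # likelihood of failure times impact, then fill the remaining test budget
    # with the highest-scoring areas first. All names and weights below are
    # hypothetical, for illustration only.

    def prioritize(areas, hours_available):
        """Order test areas by risk score and keep those that fit the budget."""
        ranked = sorted(areas, key=lambda a: a["likelihood"] * a["impact"],
                        reverse=True)
        plan, used = [], 0.0
        for area in ranked:
            if used + area["hours"] <= hours_available:
                plan.append(area)
                used += area["hours"]
        return plan

    areas = [
        {"name": "braking interface",  "likelihood": 0.3, "impact": 10, "hours": 40},
        {"name": "instrument cluster", "likelihood": 0.5, "impact": 3,  "hours": 16},
        {"name": "radio presets",      "likelihood": 0.7, "impact": 1,  "hours": 8},
    ]

    for area in prioritize(areas, hours_available=48):
        print(area["name"])

With only 48 hours left, the braking interface and radio presets fit the budget and the instrument cluster gets deferred, which is exactly the kind of trade that deserves an explicit risk-reward conversation rather than a silent one.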

I am not sure this approach would work for products such as aerospace or automotive. These systems are complex and are subjected to numerous external shocks, varying in magnitude and combination. Always being at production-level quality is certainly an aspiration, but I am not sure what NHTSA would say if you launched a product, the customer found a bug in that product that caused harm, and NHTSA came back to your company wanting to see your due diligence, and you said, "we launch first and test while the customer has it." It is true there is a previous iteration of the system to which we can restore; however, by the time we know we have a problem, there may already be a significant number of units in the field. That brings me to the fix-in-the-field part. For the most part, cars, and I suppose aircraft, are not cell phones: a new or previous software update is not simply pushed to the product overnight while people are sleeping. This field work can be expensive and would be categorized as the cost of poor quality should something significant sneak through to the customer.
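
On the restore point, here is a rough sketch of what staging an update in small batches with a rollback gate might look like, assuming an over-the-air flow where each unit keeps its prior image. The Unit class and health_check hook are hypothetical placeholders, not anything from the article.

    # A rough sketch of staging an update in small batches with a rollback
    # gate, assuming each unit keeps its previous image as a restore point.
    # Unit and health_check are hypothetical placeholders.

    class Unit:
        def __init__(self, uid, version):
            self.uid = uid
            self.version = version
            self.previous = None

    def staged_rollout(units, new_version, health_check):
        """Push new_version a batch at a time; restore the batch on failure."""
        batch_size = max(1, len(units) // 10)    # roughly 10% of the fleet
        for start in range(0, len(units), batch_size):
            batch = units[start:start + batch_size]
            for unit in batch:
                unit.previous = unit.version     # keep the restore point
                unit.version = new_version
            if not all(health_check(u) for u in batch):
                for unit in batch:
                    unit.version = unit.previous # restore the prior iteration
                return False                     # stop before wide exposure
        return True

    fleet = [Unit(i, "1.0") for i in range(100)]
    ok = staged_rollout(fleet, "1.1", health_check=lambda u: True)
    print("rolled out" if ok else "rolled back")

Even with a gate like this, the catch for vehicles is the one I describe above: by the time a batch fails its checks, real units are already running the bad image, and restoring them may mean expensive field work rather than an overnight push.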

Testing after launch is sometimes necessary, and perhaps in some industries it is actually a beneficial way to work. For the automotive world, we should get as close to this pin as we can without actually testing after launch. The speed of delivery is necessary to stay competitive. The smaller increments reduce risk and improve our chances for success, as well as providing us with a mechanism for gleaning customer feedback. We may occasionally have to launch before the testing is complete, but that should be based upon a risk-reward assessment that makes sense.
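
If it helps, here is a back-of-the-envelope version of that assessment: launch early only when the expected cost of defects escaping to the field is lower than the cost of holding the launch to finish testing. Every figure below is made up, for illustration only.

    # A back-of-the-envelope launch decision: launch early only if the
    # expected cost of escaped defects is lower than the cost of delay.
    # Every figure below is hypothetical, for illustration only.

    def launch_now(p_escape, field_cost_per_unit, units,
                   delay_cost_per_week, weeks_to_finish_testing):
        """Compare expected field-failure cost against the cost of waiting."""
        expected_field_cost = p_escape * field_cost_per_unit * units
        delay_cost = delay_cost_per_week * weeks_to_finish_testing
        return expected_field_cost < delay_cost

    # Example: minor-annoyance residual risk versus a two-week schedule slip.
    print(launch_now(p_escape=0.02, field_cost_per_unit=150, units=10_000,
                     delay_cost_per_week=75_000, weeks_to_finish_testing=2))

A calculation this simple obviously cannot capture harm to people, which is why the safety-critical areas get tested first and excluded from the trade entirely; the arithmetic only applies to the cosmetic and minor-annoyance bugs left at the end of the schedule.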