The problem arises when we believe there is one solution, one silver-bullet approach, to product testing. There are also things that can be done before testing that help, and those are related to reviews, which some call static testing. Hardware and software are connected in the product, and therefore so too is the testing. So besides testing methods we should have:

Good configuration and change management practices in handling product updates
A philosophy of incremental development and recurring testing – not just at the end of the project
A system that enables tracking of the defects found and their status (for larger companies or distributed teams this is even more necessary)

Reviews: Reviews are critiques of pre-product artifacts. To be effective, a review truly requires a critique of the design, which means technically competent people with a hyper-focus on the product, unafraid to speak out about what they see as limitations of the product. These individuals will also need some time with the technical documentation under review – drawings, product specifications and the like – as the meeting is directed at illuminating the areas of concern these technical and product experts identify. That is not to say new concerns will not be found during the review. Below is one type of review, the FMEA. Done well, we can find things we believe may be problems in time to devise experiments to determine whether they are in fact problems, then rework the specifications and design to remove the poor performance and potential customer irritation or even damage.

Requirements based testing: Testing to the documented requirements is not the final step but the first when it comes to product verification and quality improvement. Requirements based testing confirms that what was built matches the requirements and that we have not designed a product that fails to meet the product expectation as documented. It is therefore important to write the requirements in a way that is verifiable – each requirement compares to some expected and measurable outcome or output. Writing the requirements this way makes it possible to reduce the human effort of testing – hours in front of a machine testing to these requirements – by automating the testing. If requirements can be written in a measurable way, then the testing can be automated, which makes it possible to cover more test cases in a shorter time without operator fatigue or impact on the test personnel. It takes time to set this automation up, and some time to maintain it, but in the end, from experience, more test cases will be executed per unit time, ensuring the product is sufficiently stressed. Anecdotally, there is a correlation between the number of test cases performed and the quality of the product in the field. We have seen the number of quality problems post launch greatly reduced when we increased the number of tests (not just via automation, but automation frees some test talent up for experience based testing).
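
To make this concrete, a verifiable requirement translates almost directly into an automated check. The sketch below is illustrative only: it assumes a hypothetical requirement REQ-012 (an output of 5.0 V plus or minus 0.25 V) and a placeholder read_output_voltage() helper standing in for the real instrument access, written as a pytest-style test.

def read_output_voltage() -> float:
    """Placeholder for instrument access (e.g., a voltmeter query)."""
    raise NotImplementedError("replace with real instrument I/O")

def test_req_012_output_voltage():
    # The requirement states a measurable outcome, so the pass/fail
    # criterion can be encoded directly and rerun without operator fatigue.
    measured = read_output_voltage()
    assert 4.75 <= measured <= 5.25, f"REQ-012 failed: measured {measured} V"

Because the pass/fail criterion lives in the requirement itself, hundreds of such checks can run unattended overnight, which is where the gain in test cases per unit time comes from.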

It is not possible to document all of the wanted and unwanted behaviors expected from the product. That is not to say we should not document some of the failures we have seen in the past or can envision occurring when we use thought experiments to uncover failure modes and the consequences of those failures. Which leads us to another technique – the Design Failure Mode Effects Analysis (DFMEA) and, for the process or manufacturing implications on the product, the Process Failure Mode Effects Analysis (PFMEA).

DFMEA: The DFMEA is a technique used in the automotive industry to critique the design while the product is under development. Each feature of the product is considered: what the feature does, what can go wrong, and what the effects are of that thing going wrong. We go through the functions one by one and assess the probability of occurrence of the failure, the severity of its impact, and the ability to detect the problem, which together produce what is referred to as a risk priority number (RPN). The RPN essentially establishes a hierarchy for redress of the concerns: the higher the RPN, the more attention the failure demands. We will then either do detailed calculations (less preferred) or develop a special set of tests to determine whether our thoughts about the failure mode and subsequent effects are valid. For example, if after performing the first round of the DFMEA we find we have identified possible failure modes of the ignition system that could result in a severe consequence, we can devise specific tests to evoke the failure before the design is even completed, to ascertain the validity of the failure mode and our thinking about its consequences. Given what we learn in the test – for example, that the failure mode is not one we wish the customer to experience – we then rework the design so that the failure mode is removed or its severity is greatly reduced.
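
For illustration, the RPN arithmetic itself is simple: each failure mode is scored on the customary 1 to 10 scales, and RPN = severity x occurrence x detection. The sketch below, with hypothetical ignition-related entries, ranks the failure modes so the highest RPN gets attention first.

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (hazardous), per typical DFMEA scales
    occurrence: int  # 1 (remote) .. 10 (very high probability)
    detection: int   # 1 (almost certain to detect) .. 10 (cannot detect)

    @property
    def rpn(self) -> int:
        # Risk priority number: higher means the concern needs attention sooner.
        return self.severity * self.occurrence * self.detection

# Hypothetical entries for illustration only.
modes = [
    FailureMode("Ignition coil open circuit", severity=9, occurrence=3, detection=4),
    FailureMode("Connector corrosion", severity=6, occurrence=5, detection=7),
    FailureMode("Firmware watchdog miss", severity=8, occurrence=2, detection=3),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:4d}  {m.description}")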

A good book to read on this topic is: https://www.amazon.com/Power-Deduction-Failure-Effects-Analysis/dp/087389796X


Exploratory based testing: Exploratory testing is driven by the tester's intuition. The tester, perhaps after reading the specification, but relying mostly on knowledge of the product and how customers use it, sets about testing the product. These scenarios may not be included or prepared for in the product requirements or even the use cases (how the user is expected to use the product). The testing may take the form of steps the user may take in setting up the product, configuring the product, or using the product. For example, there may be buttons associated with the product that are only supposed to be pressed at certain times, and the requirements document says nothing about pressing outside of those specific times. The tester using experience based testing may press the button during a time when the button is not supposed to be actuated, view the results, and record anything interesting, while considering the overall implications for the user or the entire system.
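
Once an exploratory probe such as the button example proves interesting, it can be captured as a quick script so the observation is repeatable. The sketch below assumes a hypothetical device API (enter_state(), press_button(), get_state() are placeholders, not a real library) and simply records what happens when the button is pressed outside its allowed window.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("exploratory")

def probe_button_outside_window(device):
    """Press the button in a state where pressing is not supposed to
    happen, and record whatever results for later review."""
    device.enter_state("self_test")   # a time when the button should be ignored
    before = device.get_state()
    device.press_button()
    after = device.get_state()
    log.info("state before=%s after=%s", before, after)
    if before != after:
        log.warning("button acted outside its allowed window - investigate")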

Stochastic testing: In stochastic testing we randomize the sequence of the test cases or specific test exercises; it pairs naturally with automation of the testing. In this type of testing, the paths within the software are exercised in a variety of ways by shaking up the sequence in which the tests are conducted. This helps uncover defects associated with the sequence of activities, software module functions and function interactions within the software, as well as variable and register handling. For example, we perform tests A, B, C and find all works well. Then we perform tests B, A, B, C and a failure presents itself. This form of testing will uncover software module interaction defects as well as logical errors in the calling, resetting, and use of variables and registers within the software.
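
A minimal sketch of the idea, using three hypothetical placeholder test cases: a seeded random generator picks the sequence (repeats allowed, as in the B, A, B, C example) and logs the seed so any failing sequence can be replayed exactly.

import random

# Hypothetical placeholder test cases; in practice each would exercise the product.
def test_a(): pass
def test_b(): pass
def test_c(): pass

TESTS = {"A": test_a, "B": test_b, "C": test_c}

def run_stochastic(rounds: int = 100, steps: int = 4, seed: int = 1234) -> None:
    # A seeded generator makes any failing sequence reproducible from the log.
    rng = random.Random(seed)
    for i in range(rounds):
        sequence = [rng.choice(list(TESTS)) for _ in range(steps)]  # e.g. B, A, B, C
        print(f"round {i}: {sequence} (seed={seed})")
        for name in sequence:
            TESTS[name]()  # a failure here implicates the sequence just run

run_stochastic()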

Stress testing: From experience, one of the things we often neglect in product development, perhaps less so in manufacturing, is variation. Variation in the product – the tolerances of all of the constituent parts (capacitors, inductors, mechanical parts, the tolerance stack-up, etc.) – but also with regard to the environment. We may work from global standards or from our own research on the environment in which the product will be used, but we may not consider beyond those “known” environmental stimuli. For example, we may measure that the product is never used in ambient temperatures below 30F, and conclude (assume) that the product will never be used below that temperature. We have never seen that event happen, but then again, we have not sampled the entire world (we could not – there is not enough time). However, we can do some things to minimize the product risk due to these adjacent stimuli the product may encounter by going beyond the standards. Much as our experience based testing pushes into the unknown, stress testing does similarly. It need not be just for concerns about the impact of the environment on the product, but also for the performance of the product. Stress testing places demands beyond those specified to determine performance over a range of use beyond what is expected. We may conduct this level of testing with slow increments to the environment or use, monitoring performance until the product fails, then record and deconstruct the product to understand the nature of the failure. Was it a design issue or a manufacturing issue? Does the failure we see matter? What sort of impact would the failure have on the customer should it come to pass in the field?
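
As a sketch of the incremental approach, assuming a hypothetical chamber-control helper set_chamber_temp_f() and a hypothetical passes_functional_check() standing in for the product's functional test, we step the temperature down from the assumed 30F limit until the first failure, then pull the unit for deconstruction.

import time

def set_chamber_temp_f(temp_f: float) -> None:
    raise NotImplementedError("replace with real chamber control")

def passes_functional_check() -> bool:
    raise NotImplementedError("replace with real product functional test")

def step_stress(start_f: float = 30.0, step_f: float = 5.0, floor_f: float = -40.0,
                soak_minutes: float = 30.0) -> None:
    temp = start_f
    while temp >= floor_f:
        set_chamber_temp_f(temp)
        time.sleep(soak_minutes * 60)  # let the unit soak at this step
        if not passes_functional_check():
            print(f"first failure at {temp} F - pull the unit and deconstruct")
            return
        temp -= step_f
    print(f"no failure down to {floor_f} F - margin beyond the 30 F assumption")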