Software Testing Practices: A Guide to Quality Assurance
By Jon M Quigley
This post piles on to a post by Robert Fey on LinkedIn.
Below is a brief brainstorm of issues that can arise in product development and testing, resulting in poor outcomes. Your thoughts are welcome.
Here is a list of common failures in the context of software verification work and the mindset that “testing to specs is enough,” reflecting both the points Robert raised and insights from authoritative sources:
List of Typical Failures
- The team has a poor understanding of the verification flow, including a lack of clarity on how test planning, test cases, test data preparation, actual testing, and results analysis connect and support one another.
- Belief that testing to specifications is sufficient, ignoring that requirements or specs themselves might be incomplete, ambiguous, or incorrect, and thus passing tests may not guarantee real-world correctness.
- The project schedule does not account for the time needed to test the product (or adapt to changing circumstances), compounded by dependencies on software and hardware availability.
- Useless or redundant test cases continue to run, with no one critically assessing their value or stopping ineffective tests, leading to wasted effort and missed critical issues.
- Poor articulation of the design, with missing configuration management release notes for the product iterations.
- Late involvement of the test team leaves insufficient time to develop test tools, fixtures, and test cases.
- Excluding the test group from design reviews overlooks opportunities to refine the design and articulate it based on testing experiences.
- Hiding failure or designing tests that cannot fail, meaning tests are ineffective at revealing flaws and thus do not raise true confidence in software quality. This can be a matter of organizational politics.
- Lack of explicitness and precision in test design leads to tests that don’t actually validate the desired behavior, making test results unreliable or meaningless.
- Assuming automation (or extensive manual testing) is enough, without understanding what should and should not be automated, or which areas need human judgment and exploration.
- Critical failures slip through due to bureaucratic overhead, where test teams are bogged down by process rather than focusing on what matters most.
- Lack of transparency: limited access to test reporting tools, test results recorded in ways that obscure the findings, and defect reports written in unclear language (doublespeak).
- Failure to maintain and update test cases and test automation suites can result in obsolete tests that fail to cover new or changed functionality.
- Testing as a one-time event (“testing to pass”), rather than making tests reusable, assurance-driven, and linked to ongoing needs and feedback.
- Focusing only on “happy path” testing, rather than designing tests to find edge cases, unexpected behaviors, and robustness issues (see the first sketch after this list).
- Inadequate or incorrect test data, leading to tests that do not reflect real-world conditions or fail to uncover data-dependent bugs (see the second sketch after this list).
- Documentation or specs that are ambiguous, resulting in incomplete, misaligned, or ineffective test cases.
- Constantly changing requirements that are not communicated to the testing and verification team.
- Hardware and software entanglements that are not coordinated in timing and content to allow for appropriate testing.
- Articulating the severity of consequences for the system at large is difficult when we have no way to trace a specific failure back to system-level requirements, such as through a requirements traceability matrix (see the third sketch after this list).
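To make the “happy path” point concrete, here is a minimal sketch in Python with pytest (the post prescribes no language or framework, and the parse_age function and its limits are hypothetical). The happy-path test passes easily; the parametrized edge cases are where the informative failures tend to surface.

```python
import pytest


def parse_age(text: str) -> int:
    """Hypothetical function under test: parse a user-supplied age."""
    value = int(text)  # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value


def test_parse_age_happy_path():
    # Passes easily, but on its own says little about robustness.
    assert parse_age("42") == 42


@pytest.mark.parametrize("bad_input", ["", "  ", "abc", "-1", "151", "4.2"])
def test_parse_age_rejects_invalid_input(bad_input):
    # Edge and robustness cases: empty, whitespace, non-numeric,
    # out-of-range, and non-integer input.
    with pytest.raises(ValueError):
        parse_age(bad_input)
```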
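On test data, one way to get beyond a handful of hand-picked values is property-based testing, which generates varied inputs automatically. A minimal sketch using the Hypothesis library (the JSON round-trip property here is illustrative, not from the original post):

```python
import json

from hypothesis import given, strategies as st


@given(st.dictionaries(st.text(), st.one_of(st.integers(), st.text())))
def test_json_round_trip(record):
    # Property: serializing and then deserializing any generated record
    # returns the original data unchanged.
    assert json.loads(json.dumps(record)) == record
```

Hypothesis will generate empty strings, unusual Unicode, very large integers, and other inputs that a hand-written data set rarely includes.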
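Finally, on traceability: one lightweight approach is to tag each test with the requirement it verifies, so a failure can be traced straight back to the system-level requirement it threatens. A sketch assuming pytest; the requirement ID and the clamp_speed function are hypothetical.

```python
import pytest


# --- conftest.py: register the custom marker so pytest does not warn ---
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "req(req_id): link a test case to a requirement ID"
    )


# --- test_speed.py ---
def clamp_speed(requested_kph: float) -> float:
    """Hypothetical function under test."""
    return min(requested_kph, 130.0)


@pytest.mark.req("SYS-REQ-104")  # hypothetical ID from the traceability matrix
def test_speed_is_capped_per_requirement():
    assert clamp_speed(180.0) == 130.0
```

The marker data can then be harvested in a collection hook (for example, pytest_collection_modifyitems) to regenerate the test-to-requirement matrix automatically.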
Mindset Issues
- Believing all faults are due to the testers, when often issues stem from incomplete requirements, code changes not communicated, or time constraints for testing.
- Assuming fully tested software is “bug-free,” rather than embracing that testing proves the existence of bugs, not their absence.
- Skipping end-to-end validation (“stunt tests” that only run once), rather than building reusable, repeatable test frameworks linked to assurance.
Key Takeaway
A robust verification culture requires constant questioning of assumptions, a focus on explicit and meaningful validation, and recognition that passing tests only show conformance to whatever is tested—not that the product is complete or correct.
For more information, contact us:
The Value Transformation LLC store.
Follow us on social media at:
Amazon Author Central: https://www.amazon.com/-/e/B002A56N5E
LinkedIn: https://www.linkedin.com/in/jonmquigley/
LinkedIn (company): https://www.linkedin.com/company/value-transformation-llc
Google Scholar: https://scholar.google.com/citations?user=dAApL1kAAAAJ