Many companies rely on manual tests as a cornerstone of their quality assurance strategy. Unlike automated tests, manual tests, and exploratory tests in particular, are very flexible (you can decide to start testing anything at any time) but also very costly. We must therefore make sure that the effort we put into manual testing is spent as effectively as possible, i.e., that we maximize our chance of catching defects.
This, however, is very difficult if we don't know which parts of our codebase we have already tested. For automated tests, this transparency comes from recording code coverage. For manual tests, it is sorely lacking. Instead, we rely on gut feeling ("did I test this new feature completely yet?"). If we get it wrong, we either waste effort by needlessly re-testing the same things over and over or, even worse, unknowingly ship code to production without having tested it at all. As our study has shown, this is a problem for unstructured exploratory tests and even for highly structured, formalized manual tests.
Test Gap analysis for manual tests provides this much-needed transparency.
By recording test coverage for your manual tests as well (yes, we can do that!), aggregating it over time, and comparing it to recent changes in your codebase, we can show you where you still have Test Gaps: code that has been changed for your current release but not yet tested. Such untested changes are known to be the main cause of production defects, so you should always know where your current gaps are in order to address them effectively.
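At its core, the analysis boils down to a set difference between changed and covered code. The following minimal sketch illustrates this idea at method granularity; all class and method names are illustrative assumptions, not Teamscale's actual implementation or API.

```java
import java.util.HashSet;
import java.util.Set;

/** Minimal sketch of the core Test Gap computation (hypothetical, for illustration only). */
public class TestGapSketch {

    /**
     * A Test Gap is a method that was changed for the current release
     * but never executed by any test (manual or automated) since that change.
     */
    static Set<String> computeTestGaps(Set<String> changedMethods, Set<String> coveredMethods) {
        Set<String> gaps = new HashSet<>(changedMethods);
        gaps.removeAll(coveredMethods); // changed but not covered = Test Gap
        return gaps;
    }

    public static void main(String[] args) {
        Set<String> changed = Set.of("Billing.calculateTotal", "Billing.applyDiscount", "Login.validate");
        Set<String> covered = Set.of("Billing.calculateTotal", "Login.validate");
        System.out.println(computeTestGaps(changed, covered)); // prints [Billing.applyDiscount]
    }
}
```

In practice, the interesting part is obtaining the two input sets: the changes come from your version control system, and the coverage is recorded by a profiler attached to the test environment while testers work with the application.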
Cumulative Testing
Test Gap analysis can even handle cumulative testing, where testers run some tests on version A of your software today and, once version B is ready, continue with different tests on that new version tomorrow. Teamscale correctly aggregates the test coverage recorded for the different program versions over time and shows you, for any of them, what has been tested and where gaps still exist. It even works across different branches of your VCS and in scenarios with multiple test environments!
All of this means that you don't have to change your existing test processes at all to use Test Gap analysis!
Even better: all of this also works if you have a mixed testing strategy - partly automated and partly manual tests!
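The key idea behind this aggregation, roughly sketched: coverage recorded on an older version stays valid for a method only as long as that method has not been changed afterwards, and coverage from any source (manual sessions, automated suites, different environments) can be merged the same way. Again, the names and data model below are illustrative assumptions, not Teamscale's implementation.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of cumulative coverage aggregation across program versions. */
public class CumulativeCoverageSketch {

    // For each method: the version (modelled here as an increasing number) of its last change.
    private final Map<String, Integer> lastChangedAt = new HashMap<>();
    // For each method: the latest version on which any test (manual or automated) executed it.
    private final Map<String, Integer> lastCoveredAt = new HashMap<>();

    void recordChange(String method, int version) {
        lastChangedAt.put(method, version);
    }

    void recordCoverage(String method, int version) {
        // Coverage from multiple environments or test types merges naturally here.
        lastCoveredAt.merge(method, version, Math::max);
    }

    /** A method counts as tested only if it was covered after (or on) its last change. */
    boolean isTested(String method) {
        int changed = lastChangedAt.getOrDefault(method, 0);
        int covered = lastCoveredAt.getOrDefault(method, -1);
        return covered >= changed;
    }

    public static void main(String[] args) {
        CumulativeCoverageSketch tga = new CumulativeCoverageSketch();
        tga.recordChange("Billing.applyDiscount", 1);   // changed in version A
        tga.recordCoverage("Billing.applyDiscount", 1); // tested on version A
        System.out.println(tga.isTested("Billing.applyDiscount")); // true
        tga.recordChange("Billing.applyDiscount", 2);   // changed again in version B
        System.out.println(tga.isTested("Billing.applyDiscount")); // false: needs re-testing
    }
}
```

Modelling versions as monotonically increasing numbers is a simplification; supporting multiple VCS branches requires comparing commits along each branch's history instead.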
Agile Manual Testing with Issue Test Gap Analysis
As you can see above, we can even map code changes and manual test coverage back to the issues in your issue tracker. This allows you to decide at the level of individual features, bugs, or user stories where you already have enough coverage and where additional testing is needed. We call this Issue Test Gap analysis; the sketch below illustrates the underlying computation.
Most agile teams prefer working this way, as individual testers can focus exclusively on Test Gaps that are relevant to their assigned issues. Furthermore, Test Gap information maps easily to sprints or iterations and can be used, for example, in the sprint retrospective to regularly track the progress and quality of testing.
Ideally, an issue is only Done once there are no remaining Test Gaps.
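To make this concrete: a common convention is to reference issue IDs in commit messages, which lets an analysis attribute changed code to issues. The following hypothetical sketch derives per-issue Test Gaps this way; the commit format, issue ID pattern, and all names are assumptions for illustration, not Teamscale's API.

```java
import java.util.*;
import java.util.regex.*;

/** Hypothetical sketch: deriving per-issue Test Gaps from commits and coverage. */
public class IssueTestGapSketch {

    record Commit(String message, Set<String> changedMethods) {}

    // Matches issue references such as "TS-42" in commit messages (an illustrative convention).
    private static final Pattern ISSUE_ID = Pattern.compile("\\b[A-Z]+-\\d+\\b");

    /** Groups changed methods by the issue referenced in each commit message. */
    static Map<String, Set<String>> changesPerIssue(List<Commit> commits) {
        Map<String, Set<String>> result = new HashMap<>();
        for (Commit commit : commits) {
            Matcher matcher = ISSUE_ID.matcher(commit.message());
            while (matcher.find()) {
                result.computeIfAbsent(matcher.group(), k -> new HashSet<>())
                      .addAll(commit.changedMethods());
            }
        }
        return result;
    }

    /** The Issue Test Gap: methods changed for the issue but not covered by any test since. */
    static Set<String> issueTestGap(Set<String> changedForIssue, Set<String> coveredMethods) {
        Set<String> gap = new HashSet<>(changedForIssue);
        gap.removeAll(coveredMethods);
        return gap;
    }

    public static void main(String[] args) {
        List<Commit> commits = List.of(
            new Commit("TS-42: add discount codes", Set.of("Billing.applyDiscount", "Billing.validateCode")),
            new Commit("TS-7: fix login timeout", Set.of("Login.validate")));
        Set<String> covered = Set.of("Billing.applyDiscount", "Login.validate");
        Map<String, Set<String>> changes = changesPerIssue(commits);
        System.out.println(issueTestGap(changes.get("TS-42"), covered)); // [Billing.validateCode]
        System.out.println(issueTestGap(changes.get("TS-7"), covered));  // []
    }
}
```

An issue with an empty gap set, like TS-7 above, is a candidate for being marked Done; TS-42 still needs testing.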
Spotting Big Risks before a Release
If you would instead like an overview of your entire system to quickly spot large code areas that have been overlooked in manual testing, e.g. shortly before a big release, you can use our treemaps. Large clusters of red or orange code, as shown above, are suspicious, and you can investigate why these changes have not yet been covered by your manual tests. Since the analysis is updated regularly while you are still testing, you can react in time and schedule additional tests to close these gaps before your release.
Integration in the Testing Process
Many of our customers combine these two approaches to continuously improve the quality of their releases: they regularly use the transparency and insights that Test Gap analysis provides for their manual tests to guide their testing behaviour.
Thus, Test Gap analysis helps you avoid accidentally shipping untested code to production. It supports both your unstructured exploratory tests and your structured manual test cases by directing testers' and test writers' efforts towards untested code changes - the primary cause of production defects.