Most errors occur in recently changed areas of the code, so testers generally focus their efforts on these areas. However, our research shows that, in practice, a significant portion of code changes is still released untested, and that these untested changes cause most production defects. By closing this gap, we can increase the impact of our tests, since they then uncover more bugs.
The main reason for this discrepancy is that testers do not have sufficient insight into what has actually been changed in the code during development, nor into whether their tests have covered all of these changes. This is especially true in embedded testing, where it is often difficult to tell exactly which code is exercised in complex SIL and HIL test stages.
In this presentation, I will introduce Test Gap Analysis, an innovative method that combines static and dynamic analyses to uncover these untested changes during testing.
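To illustrate the core idea, here is a minimal sketch, not the actual implementation of any Test Gap Analysis tool: the set of methods changed since the last release (obtained from version control and static analysis) is intersected with the set of methods never executed by any test (obtained from dynamic coverage measurement); the result is the set of test gaps. All names and data below are hypothetical.

```python
def find_test_gaps(changed_methods, covered_methods):
    """Return the changed methods that no test executed (the 'test gaps')."""
    return set(changed_methods) - set(covered_methods)

# Hypothetical example data:
# methods changed since the last release (from version-control / static analysis)
changed = {"Controller.init", "Controller.update", "Sensor.calibrate"}
# methods executed during all test stages (from dynamic coverage measurement)
covered = {"Controller.init", "Sensor.read"}

print(find_test_gaps(changed, covered))
# -> {'Controller.update', 'Sensor.calibrate'}  (untested changes)
```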
First, I’ll review current research to show why testers often fail to test critical changes. Then I’ll introduce Test Gap Analysis and show how it closes this gap. Since the analysis relies on test coverage, I will show how to obtain coverage for any test stage, including SIL and HIL tests, where most teams currently lack this data. Finally, I will share practical experience from applying Test Gap Analysis across different teams and testing processes over the past years.