As successful software grows from release to release while release cycles get shorter and shorter, we have to test more and more functionality in less and less time. Test suites that have grown over many releases are often not up to this challenge, since they test too much and too little at the same time. Too much, since they contain redundant tests that cause execution and maintenance costs but provide little value over similar tests. Too little, since important functionality remains untested. To succeed in the long run, we must make these test suites both more effective (i.e., find more bugs) and more efficient (i.e., find them faster and at lower cost).
Our research community has worked on approaches to increase test effectiveness and efficiency for decades. In recent years, AI-based approaches have appeared that also promise to help us find more bugs faster.
In this talk, I present different approaches to find more bugs in less time: History analyses of the version control system show where most bugs occurred in past releases; this often uncovers process flaws that are root causes of future bugs. Test gap analysis reveals which code changes have not yet been tested; these untested changes are the most error-prone. Pareto optimization of test suites, test impact analysis, and predictive test selection identify the tests that right now have the best cost-benefit ratio. And finally, defect prediction uses AI to predict where future bugs will occur.
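To make the idea behind Pareto optimization of test suites concrete, here is a minimal sketch of one common greedy variant: repeatedly pick the test that covers the most not-yet-covered code per second of execution time, until a time budget is exhausted. The test names, durations, and covered methods below are made up for illustration; real implementations work on recorded coverage and execution data, and the talk's tools may use a different formulation.

```python
# Greedy sketch of cost-benefit test selection (illustrative only).
# Each test is (name, duration_seconds, set_of_covered_methods).

def select_tests(tests, budget_seconds):
    covered = set()
    selected = []
    remaining = list(tests)
    spent = 0.0
    while remaining and spent < budget_seconds:
        # Benefit of a test = methods it would cover for the first time,
        # normalized by its execution time.
        best = max(remaining,
                   key=lambda t: len(t[2] - covered) / t[1])
        name, duration, methods = best
        if not methods - covered or spent + duration > budget_seconds:
            break  # no remaining test adds new coverage or fits the budget
        covered |= methods
        selected.append(name)
        spent += duration
        remaining.remove(best)
    return selected

# Hypothetical suite: with a 60-second budget, the short high-yield
# tests are picked first and the expensive one is dropped.
suite = [
    ("test_login",    12.0, {"auth.login", "auth.session"}),
    ("test_checkout", 45.0, {"cart.add", "cart.pay", "auth.session"}),
    ("test_session",   5.0, {"auth.session"}),
]
print(select_tests(suite, budget_seconds=60))  # ['test_session', 'test_login']
```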
We have implemented each of these analyses, done empirical research on how well they work, and employed those that work in our own development and at our customers. For each analysis, I outline its research foundation and show how well it works in practice - while some excel, others do not work at all - to answer which ones really allow us to find more bugs in less time.