Because release cycles keep getting shorter, development and test often run in parallel and are hard to align. In practice, this causes test gaps: untested changes are deployed to production. It also causes waste: tests are run against unchanged areas of the application, which cannot contain new bugs.
In this talk, we will present so-called change-driven testing, an approach that aligns test and development more closely to avoid these problems.
Change-driven testing analyzes which (manual or automated) tests cover which code areas, and which code areas were changed when. This allows us to select the tests that best match the code changes, and it reveals remaining test gaps early. We will present both the research fundamentals and a demo of change-driven testing on our own software.
Audience
Testers, Test Managers, Developers, Architects
Key-Learnings
How tests can be selected automatically to cover code changes
How test execution can be simulated using historical data to detect missing test cases without having to execute any tests
How redundant test cases can be identified
How Test Gap analysis works
Which automated test selection approaches have which pros and cons
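One of the learnings above, identifying redundant test cases, can also be sketched from coverage data alone. This is a simplified illustration with invented names: here a test counts as redundant when everything it covers is also covered by a single other test, whereas real analyses use more nuanced criteria.

```python
def redundant_tests(coverage):
    """Tests whose covered areas are a strict subset of another test's coverage."""
    redundant = set()
    for test, areas in coverage.items():
        for other, other_areas in coverage.items():
            if other != test and areas < other_areas:
                redundant.add(test)
    return redundant

# Invented example: test_small adds no coverage beyond test_big.
coverage = {
    "test_small": {"auth.hash"},
    "test_big": {"auth.hash", "auth.check"},
    "test_other": {"report.render"},
}
print(sorted(redundant_tests(coverage)))  # ['test_small']
```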