The challenge of securing resources for testing—often called the quality dilemma—stems from the imbalance between cost and perceived value. While testing costs are easy to quantify, the benefits are harder to express in monetary terms—the metric decision-makers care about. How do we price a bug our tests successfully prevented?
At Vestas, we found a way! Our wind turbines rely on complex software, and bugs can cause malfunctions, impacting energy production. If downtime exceeds limits, Vestas incurs penalties based on lost energy. These penalties are traced back to specific components, including software, driving accountability and improvement.
By attaching a price tag to bugs, we can estimate the benefits of improving test processes if they prevent (some of) these issues.
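To make the idea concrete, here is a hypothetical back-of-the-envelope calculation of what a prevented bug is worth. All figures and the penalty model are invented for illustration; real penalty terms are contract-specific and not taken from Vestas data.

```python
# Hypothetical pricing of a prevented bug via penalty payments.
# All numbers are invented for illustration, not Vestas data.

def bug_cost(downtime_hours: float,
             rated_power_mw: float,
             capacity_factor: float,
             penalty_per_mwh: float) -> float:
    """Estimate the penalty cost of a bug that takes a turbine offline."""
    lost_energy_mwh = downtime_hours * rated_power_mw * capacity_factor
    return lost_energy_mwh * penalty_per_mwh

# Example: a bug causing 12 hours of downtime on a 4 MW turbine
# running at a 40% capacity factor, with a 50 EUR/MWh penalty rate.
print(bug_cost(downtime_hours=12, rated_power_mw=4,
               capacity_factor=0.4, penalty_per_mwh=50))  # 960.0 EUR
```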
This talk outlines Vestas’ test process for wind turbine control software, the Test Gap analysis initiative we launched a year ago, and our cost-benefit analysis of it. Finally, we compare our estimates with real-world data after Test Gap analysis went live.
While not every domain has a direct link between bugs and costs, we hope this talk provides a template for quantifying the benefits of testing.
Audience:
Test managers, testers, and everybody who needs to ask management for more resources for their tests.
Key Learnings:
- Use domain-specific data to link software bugs to monetary costs. In our case: penalty payments caused by production loss of wind turbines.
- Test Gap analysis helps prevent untested changes, which cause many bugs in practice (see the sketch after this list).
- Keep testing your hypotheses and updating your estimates as you implement your test initiative and get fresh data.
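The core idea of Test Gap analysis is simple: a test gap is code that was changed since the last release but never executed by any test. A minimal sketch follows, assuming we can obtain the set of changed methods (e.g. from version-control diffs) and the set of test-executed methods (e.g. from coverage profiling); the method names are placeholders.

```python
# Minimal sketch of Test Gap analysis. In practice, the input sets
# come from version-control diffs and test-coverage profiling; the
# method names below are placeholders.

changed_methods = {"Pitch.adjust", "Yaw.align", "Grid.sync"}
tested_methods = {"Pitch.adjust", "Brake.engage"}

# Test gaps: changed but never executed by any test.
test_gaps = changed_methods - tested_methods
print(sorted(test_gaps))  # ['Grid.sync', 'Yaw.align']
```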