Quality problems cause rework, customer dissatisfaction, and production incidents.
It's tough to summarize the impact of quality with a single indicator, so I want to share the key indicators I've used over the past six years with more than 40 software development teams.
Measuring quality
These are a few metrics that can help your team visualize how it is doing on quality (a small tracking sketch follows the list):
- Defects: bugs found by the customer
- Customer satisfaction: NPS scores or feedback from client-facing teams
- Unplanned downtime: incidents or bugs that occur either when making changes or outside of planned change windows
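These indicators are easier to discuss when they sit side by side per month. Here is a minimal sketch, assuming you can export monthly counts from your issue tracker, survey tool, and incident log; the record format, field names, and values below are all hypothetical:

```python
from collections import defaultdict

# Hypothetical monthly exports: (month, indicator, value) tuples pulled
# from an issue tracker, an NPS survey tool, and an incident log.
records = [
    ("2024-01", "defects", 7),
    ("2024-01", "nps", 42),
    ("2024-01", "downtime_incidents", 2),
    ("2024-02", "defects", 5),
    ("2024-02", "nps", 48),
    ("2024-02", "downtime_incidents", 1),
]

# Group each indicator by month so the team can chart all three over time.
by_month = defaultdict(dict)
for month, indicator, value in records:
    by_month[month][indicator] = value

for month in sorted(by_month):
    print(month, by_month[month])
```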
Sometimes companies have no system to track customer feedback, and even when they do, it can be hard to narrow the data down to the scope of a single team. In those cases, you can talk to client-facing and release teams to gather qualitative feedback.
Look for trends
Rather than looking at a single point in time, it is better to look for trends. How is your team doing over a 6-9 month period? Is quality getting better, or is it getting worse?
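One way to see the direction rather than the noise is to smooth the monthly numbers. A minimal sketch with made-up defect counts and a three-month moving average:

```python
# Defects reported per month over the last nine months (made-up numbers).
defects = [9, 7, 8, 6, 7, 5, 6, 4, 5]

# A three-month moving average smooths out month-to-month noise, so the
# direction of the trend is easier to read than any single data point.
window = 3
trend = [
    sum(defects[i : i + window]) / window
    for i in range(len(defects) - window + 1)
]
print(trend)  # values drift downward, which suggests quality is improving
```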
As with any metric, don't look at quality indicators in isolation.
Looking at "quality" metrics without looking at other metrics, such as throughput, can be misleading. For example, an increase in defects can mean the team's quality is getting worse, or it can mean the team is shipping more work while keeping the same proportion of defects.
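A quick worked example makes the distinction concrete. The figures below are hypothetical; the point is to compare the defect rate (defects divided by items shipped) rather than the raw defect count:

```python
# Hypothetical quarterly figures: raw defects rose from 6 to 9, which
# looks alarming until you normalize by how much work was shipped.
quarters = {
    "last quarter": {"defects": 6, "shipped_items": 40},
    "this quarter": {"defects": 9, "shipped_items": 60},
}

for label, q in quarters.items():
    rate = q["defects"] / q["shipped_items"]
    print(f"{label}: {q['defects']} defects, defect rate {rate:.1%}")

# Both quarters come out at a 15% defect rate: defects went up only
# because the team shipped more work, not because quality got worse.
```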
Compare your team against others
It might be interesting to see what percentage of all defects or unplanned downtime is caused by your team. If your team's problems account for a large share of the total, the team should probably focus more on quality.
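Computing that share is straightforward once you have per-team counts. A minimal sketch; the team names and numbers are made up:

```python
# Hypothetical defect counts per team over the same period.
defects_by_team = {"team_a": 12, "team_b": 5, "team_c": 3}

total = sum(defects_by_team.values())
for team, count in defects_by_team.items():
    print(f"{team}: {count / total:.0%} of all defects")

# team_a produces 60% of all defects, a share large enough to suggest
# that team_a should prioritize quality work.
```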