Software Delivery

Release Cadence Considered a Poor Quality Metric

If you’re doing DevOps and releasing frequently, you may be aware of the DORA (DevOps Research and Assessment) metrics. These have been identified as four key measures of success in delivering software changes, and are summarised as follows:

  • Deployment Frequency—How often an organisation successfully releases to production
  • Lead Time for Changes—The amount of time it takes a commit to get into production
  • Change Failure Rate—The percentage of deployments causing a failure in production
  • Time to Restore Service—How long it takes an organisation to recover from a failure in production

These are good measures, and they helpfully take away the argument over what to measure, because they are industry standard. They are also surprisingly simple to implement if you have your CI/CD tools set up sensibly.*
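As a sketch of how simple that implementation can be: assuming your CI/CD tool can export per-deployment timestamps (the record shape below is invented for illustration), all four metrics reduce to a few lines:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records exported from a CI/CD tool:
# (commit_time, deploy_time, caused_failure, restored_time)
deployments = [
    (datetime(2024, 1, 1, 9),  datetime(2024, 1, 1, 12), False, None),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 2, 15), True,  datetime(2024, 1, 2, 16)),
    (datetime(2024, 1, 3, 8),  datetime(2024, 1, 3, 9),  False, None),
    (datetime(2024, 1, 4, 11), datetime(2024, 1, 4, 14), False, None),
]
period_days = 7

# Deployment Frequency: releases to production per day over the period.
deployment_frequency = len(deployments) / period_days

# Lead Time for Changes: mean commit-to-production time.
lead_times = [deploy - commit for commit, deploy, _, _ in deployments]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change Failure Rate: share of deployments that caused a production failure.
failures = [d for d in deployments if d[2]]
change_failure_rate = len(failures) / len(deployments)

# Time to Restore Service: mean deploy-to-recovery time for failed changes.
restore_times = [restored - deploy for _, deploy, _, restored in failures]
mean_restore = sum(restore_times, timedelta()) / len(restore_times)

print(deployment_frequency, mean_lead_time, change_failure_rate, mean_restore)
```

The hard part in practice isn’t the arithmetic, it’s getting your pipeline to emit those four timestamps reliably.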

However, are they all useful? And what exactly do they show? Is it how quickly we respond to a situation, or is it more about how valuable our changes are to the customer?

I don’t consider deployment frequency to be a good measure of the effectiveness of an engineering organisation. An effective engineering organisation should be able to release at a cadence that can guarantee a level of service quality which is deemed acceptable for the customer. No more, no less.

It’s tempting to focus purely on engineering prowess, readiness and speed; however, what does this mean from a customer-experience perspective?

Use DORA as a baseline set of metrics to measure your engineering capability but don’t be tempted to consider this to be “job done”. Engineering change is only effective if it delivers value to a customer.

How you measure the delivery of value to a customer depends on your specific product and your specific customer.

* – if you’re struggling then feel free to reach out 🙂
