Guides

All you need to know about the DORA metrics, and how to measure them.

What are the four key DORA metrics? Why do they matter? How can you measure them?

Shubha Nabar

February 25, 2022

The DORA metrics are a set of metrics that measure the velocity and quality of an engineering organization's software delivery. By measuring and continuously iterating on these metrics, engineering teams can deliver better software to their customers faster, and achieve significantly better business outcomes.

Where did the DORA metrics come from?

The DORA metrics were put forth by the DevOps Research and Assessment (DORA) organization that synthesized several years of research studying engineering teams and their DevOps processes. The group publishes a yearly report called the State of DevOps Report, and was acquired by Google in 2018. In 2018 the group also published a widely acclaimed book called Accelerate on building and scaling high performing technology organizations.

Why are the DORA metrics interesting?

The DORA metrics are especially interesting because they correlate with actual business outcomes and employee satisfaction. In addition, they finally give the software engineering world a set of industry standards to benchmark against. It’s not an overwhelming set of indicators either. Turns out, just 4 key metrics are sufficient to distinguish truly elite engineering teams from mediocre ones.

As the infographic taken from the State of DevOps Report 2021 depicts, elite engineering teams differ from mediocre ones by orders of magnitude on the DORA measures. Further, there isn’t necessarily a trade-off between quality and velocity as widely assumed. Elite performers both ship more frequently and with higher quality!

So what are the DORA metrics exactly?

The DORA metrics were inspired by lean manufacturing principles. The first two metrics are measures of software delivery velocity. They are:

1. Deployment frequency (DF): “How often an organization successfully releases to production”
This metric measures the frequency at which an organization successfully releases code to production. There is some latitude in how “production” is defined, depending on a team's individual business requirements. But in essence, smaller, more frequent releases incur less risk and indicate a more predictable, consistent delivery of value to customers. Elite teams can deploy on demand, typically several times a day, while lower-performing teams resort to big-bang releases once every several months.
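As a rough sketch of the calculation (the deploy dates below are hypothetical, purely for illustration), deployment frequency can be computed from a log of successful production deploys:

```python
from datetime import date

# Hypothetical log of successful production deploy dates (illustrative only)
deploys = [date(2022, 2, 1), date(2022, 2, 1), date(2022, 2, 3),
           date(2022, 2, 7), date(2022, 2, 10)]

def deploys_per_week(deploy_dates):
    """Average number of successful deploys per week over the observed span."""
    span_days = (max(deploy_dates) - min(deploy_dates)).days + 1
    return len(deploy_dates) * 7 / span_days

print(deploys_per_week(deploys))  # 5 deploys over 10 days -> 3.5 per week
```

In practice the deploy log would come from your CI/CD system rather than a hand-written list.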

2. Lead Time: “The amount of time it takes for changes to get deployed to production”
This metric measures how long it takes on average for committed code to reach production. The metric is thus a measure of the efficiency of the DevOps toolchain and processes in an organization. Quicker deployments mean faster value delivery to customers. For elite teams, it typically takes less than an hour from when code gets checked in to when it gets deployed in production.
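A minimal sketch of this calculation, assuming you can already pair each change's commit timestamp with its production deploy timestamp (the pairs below are made up):

```python
from datetime import datetime, timedelta

# Hypothetical (commit_time, deploy_time) pairs for changes shipped to production
changes = [
    (datetime(2022, 2, 1, 9, 0),  datetime(2022, 2, 1, 9, 45)),
    (datetime(2022, 2, 1, 14, 0), datetime(2022, 2, 1, 15, 0)),
    (datetime(2022, 2, 2, 10, 0), datetime(2022, 2, 2, 10, 30)),
]

def mean_lead_time(pairs):
    """Average commit-to-deploy time across all shipped changes."""
    total = sum((deploy - commit for commit, deploy in pairs), timedelta())
    return total / len(pairs)

print(mean_lead_time(changes))  # mean of 45, 60, and 30 minutes -> 0:45:00
```

A 45-minute average like this one would put a team in elite territory for this metric.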

The next two metrics are measures of quality and stability in software delivery. They are:

3. Change Failure Rate (CFR): “The percentage of deployments that cause a failure in production”
This metric measures the quality and stability of the code that a team is shipping. It is calculated as the percentage of deployments that result in severe service degradation and require immediate remediation such as a rollback or a hotfix. For elite engineering teams, no more than 15% of their deployments result in degraded services.
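The arithmetic itself is simple; the hard part is deciding which deployments count as failures. A sketch, with hypothetical deployment records:

```python
# Hypothetical deployment outcomes: True marks a deploy that caused severe
# service degradation and needed a rollback or hotfix (illustrative only)
deploy_failed = [False, False, True, False, False,
                 False, False, True, False, False]

def change_failure_rate(failed_flags):
    """Percentage of deployments that resulted in degraded service."""
    return 100 * sum(failed_flags) / len(failed_flags)

print(change_failure_rate(deploy_failed))  # 2 failures out of 10 deploys -> 20.0
```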

4. Mean Time to Restore (MTTR): “How long it takes an organization to recover from a failure in production”
Finally, unplanned outages are inevitable. This last metric measures the time it takes to recover from them and restore service availability for the end user. Elite teams typically take less than an hour to restore degraded services.
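Given incident records with detection and resolution timestamps (the ones below are invented for illustration), the calculation is a simple average:

```python
from datetime import datetime, timedelta

# Hypothetical incidents as (detected_at, resolved_at) pairs (illustrative only)
incidents = [
    (datetime(2022, 2, 3, 11, 0), datetime(2022, 2, 3, 11, 40)),
    (datetime(2022, 2, 8, 22, 0), datetime(2022, 2, 8, 23, 20)),
]

def mean_time_to_restore(records):
    """Average time from detection to restoration across incidents."""
    total = sum((end - start for start, end in records), timedelta())
    return total / len(records)

print(mean_time_to_restore(incidents))  # (40 min + 80 min) / 2 -> 1:00:00
```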

The table below taken from the State of DevOps Report 2021 summarizes four distinct performance profiles for engineering teams, with statistically significant differences in measures among them.

How can you measure your DORA metrics?

Measuring and monitoring an organization's DORA metrics can be difficult because the underlying data needed to compute them often comes from many different systems and isn't always easy to correlate. For instance, to measure the average lead time for changes, you need to identify every change included in each production release since the previous one and average their individual lead times. This requires tracing data across your CI/CD systems, your artifact repositories, and your source control system for every application that your organization deploys. This is hard enough for one application, but as organizations grow and tooling and pipelines proliferate, it becomes a decidedly non-trivial endeavor.

At Faros AI, we put a lot of thought into making it easy for engineering teams to connect their individual data sources to our EngOps Platform. Faros then does the hard work of connecting the dots between the data sources automatically. Hooking up well-known vendors such as GitHub, Bitbucket, Jira, and Jenkins to the Faros AI Platform is as simple as clicking a button in the UI; custom home-grown systems can also be easily integrated with the Faros SDK. Faros AI munges all the data, imputes changesets, correlates incidents with deployments, and so on, to build a complete trace of every change from idea to production and beyond (and every stage in between). The result is DORA dashboards out of the box, with no change to the development process.

Continuous improvement with data

With live DORA dashboards in place, engineering organizations can start to see where they stand relative to other engineering organizations, and what the scope for improvement is in their software delivery processes. The ability to slice and dice lead time or failure recovery time by application, DevOps team, and stage helps in identifying bottlenecks in processes — whether in code review, QA, build times, or triage. At the same time, trends over time enable organizations to assess the true impact of interventions — with data. More generally, engineering organizations can finally start to take a data-informed approach to improving the efficiency and effectiveness of their operations.

See Faros AI in Action

Head on over to GitHub to get started today, or request a demo and we will be happy to set up time to walk you through the platform.
