Discover the 5th DORA metric: Rework rate. Learn what it is, why it matters in the AI era, and how to start tracking it today. Get industry benchmarks, see what good looks like, and find practical tips to reduce wasted engineering effort and boost performance.
Google Cloud has just published its annual DORA (DevOps Research and Assessment) report, with a strong focus on the impact of AI on software engineering. If you haven't seen it yet, check out our summary of key findings from the DORA Report 2025.
For years, the DORA framework has been synonymous with four key metrics: deployment frequency, lead time for changes, change failure rate, and failed deployment recovery time (MTTR). But the DORA Report 2024 marked a significant evolution of this framework and the 2025 report completed the picture.
The metrics expanded to five, adding rework rate to the mix. However, no benchmarks were published at the time. The framework was also reorganized into two new categories: throughput and instability.
Fast-forward to 2025, and the report now has benchmarks for all five DORA metrics, including rework rate. DORA benchmarks are updated every year and help teams and organizations compare against their peers and, more importantly, set realistic improvement goals and track progress over time.
This year, the DORA report also moved away from the traditional low/medium/high/elite performance designations to finer-grained, per-metric buckets.
The DORA research group had a hypothesis: Change Failure Rate (the ratio of deployments resulting in severe degradation or outage in production) works as a proxy for the amount of rework a team is asked to do. When a delivery fails, teams must fix the change, likely by introducing another deployment.
To test this theory, they added a new survey question about rework rate: "For the primary application or service you work on, approximately how many deployments in the last six months were not planned but were performed to address a user-facing bug in the application?"
By measuring rework rate explicitly and analyzing it alongside change failure rate, the research group built a more reliable picture of software delivery stability. It’s no longer just, “Did we break production?” It’s also, “How often are we compelled to ship unplanned fixes because defects slipped through?”
Those two signals, deployment instability and the subsequent churn it causes, provide a more holistic view of the impact of delivery issues.
When deployments are smooth, teams are more confident about pushing changes to production, and end users are less likely to experience issues with the application.
When deployments don’t go well, teams end up wasting precious time fixing issues, affecting team morale and delaying feature work, while end users get frustrated with a degraded experience.
Rework rate couldn't be more relevant given the rapid adoption of AI coding tools sweeping across engineering organizations.
Throughput goes up: more code, more experiments, more change velocity. But quality gates like reviews, tests, and staging checks don’t automatically scale with that pace, and you can feel the tension in the day-to-day.
Faros AI's research quantifies these concerning downstream effects: pull requests are getting larger (up 154%), review times are stretching (91% longer), and bug rates are climbing 9%.
In Stack Overflow’s 2025 Developer Survey, 84% of respondents indicated using or planning to use AI tools, yet trust in their accuracy has sagged.
The most common pain point, reported by 66% of survey respondents, is encountering AI solutions that are “almost right.” And 45% say debugging AI‑generated code is more time‑consuming. In other words, the savings you expected up front can be eaten up later by rework: time spent inspecting, fixing, and re‑deploying.
In this environment, tracking rework rate carefully becomes essential. The benchmarks were first published this year, and it will be fascinating to see how they evolve in 2026 as AI adoption continues to accelerate.
If you’re eager to get insight into your teams’ performance, you can start tracking rework rate today in Faros AI—and nowhere else! Our DORA metrics dashboards measure rework rate at a given point in time, trend it over weeks, months, and years, and break down the results by organizational unit and by application or service (see tips below) to pinpoint where instability is concentrated.
This fifth DORA metric is now included as part of our Engineering Efficiency Solution, giving you the complete picture of your software delivery performance in the AI era. Don't wait to understand how AI tools are impacting your team's stability. Contact us to start measuring all five DORA metrics now.
{{cta}}
Rework rate measures the percentage of deployments that were unplanned and performed to address user-facing bugs in your application. According to the DORA research group's definition, it's calculated by tracking deployments made specifically to fix defects that users encountered, rather than deployments that deliver new features or planned improvements.
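For illustration only, here is a minimal sketch of that calculation in Python, assuming each deployment record carries a flag marking whether it was an unplanned fix for a user-facing bug (the field name is made up for the example, not a prescribed schema):

```python
# Minimal sketch: rework rate as the share of deployments that were
# unplanned fixes for user-facing bugs, over some measurement window.
def rework_rate(deployments: list[dict]) -> float:
    if not deployments:
        return 0.0
    unplanned_fixes = sum(1 for d in deployments if d.get("unplanned_bug_fix"))
    return 100.0 * unplanned_fixes / len(deployments)

# Example: 3 of 20 deployments in the window were unplanned bug fixes -> 15.0%
window = [{"unplanned_bug_fix": i < 3} for i in range(20)]
print(f"Rework rate: {rework_rate(window):.1f}%")
```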
Faros AI automatically identifies and classifies these unplanned deployments by analyzing your deployment data, linking it to incidents and bugs from your incident management and task management systems. This gives you an accurate, data-driven view without relying on manual surveys.
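As a rough sketch of what that linkage can look like, assuming deployments and bug tickets can be joined through the commits they ship (the field names below are illustrative and not the Faros AI schema):

```python
# Illustrative classification: a deployment counts toward rework if it was not
# planned and it ships at least one commit linked to a user-facing bug ticket.
def flag_rework_deployments(deployments: list[dict], bug_fix_commits: set[str]) -> list[dict]:
    for d in deployments:
        ships_bug_fix = any(sha in bug_fix_commits for sha in d["commits"])
        d["unplanned_bug_fix"] = (not d["planned"]) and ships_bug_fix
    return deployments

# Example: one planned feature release and one hotfix for a reported bug.
deploys = [
    {"id": "deploy-101", "planned": True,  "commits": ["a1f", "b2c"]},
    {"id": "deploy-102", "planned": False, "commits": ["c3d"]},
]
flag_rework_deployments(deploys, bug_fix_commits={"c3d"})
print([d["id"] for d in deploys if d["unplanned_bug_fix"]])  # ['deploy-102']
```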
The optimal unit of analysis depends on your organization's structure, but we recommend starting at the service or application level, then rolling up to teams.
Here's why: deployments, and the user-facing bugs that force unplanned fixes, are tied to a specific application or service, so that's where the rework signal is cleanest; rolling up to teams then shows who owns the affected services and where to focus improvement efforts.
In Faros AI, you can analyze rework rate at any of these levels and easily pivot between views to understand where intervention is needed most.
The combination of both metrics gives you a complete picture: change failure rate captures how often deployments cause severe degradation or outages in production, while rework rate captures how often you have to ship unplanned deployments to fix user-facing bugs that slipped through.
This distinction is especially important in the AI era. As our data shows, AI tools are increasing PR volume and size while bug rates climb 9%. You might maintain a stable CFR through robust safeguards, but if your rework rate is climbing, you're accumulating technical friction that will eventually slow your throughput metrics (deployment frequency and lead time).
Together, these two instability metrics help you distinguish between "we ship fast and rarely break things catastrophically" and "we ship fast with consistently high quality."
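To make the contrast concrete, here is a hedged sketch that computes both instability metrics from the same deployment log; the failure and rework flags are assumed fields for the example, not a prescribed schema:

```python
# Both instability metrics from the same deployment log.
# caused_failure    -> deployment led to severe degradation or outage (feeds CFR)
# unplanned_bug_fix -> deployment existed only to fix a user-facing bug (feeds rework rate)
def instability_metrics(deployments: list[dict]) -> dict:
    total = len(deployments) or 1
    cfr = 100.0 * sum(d["caused_failure"] for d in deployments) / total
    rework = 100.0 * sum(d["unplanned_bug_fix"] for d in deployments) / total
    return {"change_failure_rate": cfr, "rework_rate": rework}

# A team can hold CFR steady while rework creeps up: few outright outages,
# but a growing share of deployments exist only to patch escaped defects.
log = (
    [{"caused_failure": False, "unplanned_bug_fix": False}] * 14
    + [{"caused_failure": True, "unplanned_bug_fix": False}] * 1
    + [{"caused_failure": False, "unplanned_bug_fix": True}] * 5
)
print(instability_metrics(log))  # {'change_failure_rate': 5.0, 'rework_rate': 25.0}
```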
AI coding tools create a paradox: individual developers write code faster, but the downstream effects can increase rework. Here's the mechanism:
Larger PRs (up 154%) mean reviewers have more cognitive load and less ability to spot subtle bugs. More PRs overall (causing 91% longer review times) means reviewers are rushed and may approve changes with less scrutiny. The combination leads to more defects reaching production, which our data confirms with a 9% increase in bug rates.
The key is to track rework rate alongside your AI adoption metrics. If you're seeing productivity gains but rework rate is climbing, you should invest in better automated testing and strengthen your quality gates.
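One simple way to watch for that drift, shown here as an illustrative sketch rather than the Faros AI implementation, is to bucket deployments by month and see whether the rework share is climbing as AI-assisted code lands:

```python
from collections import defaultdict

# Group deployments by month and compute the rework share per bucket,
# so a climbing trend is visible alongside AI adoption milestones.
def monthly_rework_trend(deployments: list[dict]) -> dict[str, float]:
    buckets: dict[str, list[bool]] = defaultdict(list)
    for d in deployments:
        buckets[d["month"]].append(bool(d["unplanned_bug_fix"]))
    return {m: 100.0 * sum(flags) / len(flags) for m, flags in sorted(buckets.items())}

# Example: rework share rising month over month warrants stronger quality gates.
history = (
    [{"month": "2025-07", "unplanned_bug_fix": i < 1} for i in range(10)]
    + [{"month": "2025-08", "unplanned_bug_fix": i < 2} for i in range(10)]
    + [{"month": "2025-09", "unplanned_bug_fix": i < 3} for i in range(10)]
)
print(monthly_rework_trend(history))  # {'2025-07': 10.0, '2025-08': 20.0, '2025-09': 30.0}
```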
The DORA Report 2025 published the first official benchmarks for rework rate. While we recommend reviewing the full report for detailed benchmarks, the key insight is that elite performers maintain significantly lower rework rates while sustaining high deployment frequency.
In Faros AI, you can compare your rework rate against these industry benchmarks and track your progress over time. Don’t panic if your current rework rate is not in the top tier! The goal is to acknowledge the problem, set realistic goals for continuous improvement, and understand the trend, especially as you adopt new tools and practices.
Absolutely! While rework rate is most powerful when viewed alongside the other DORA metrics, you can start tracking it independently. In fact, if you're currently using AI coding tools and concerned about quality, rework rate might be the single most important metric to baseline right now.
That said, we strongly encourage adopting all five DORA metrics together. They're designed as a system: throughput metrics show your speed, instability metrics reveal your quality, and the interplay between them tells you whether you're optimizing the right things.
Faros AI makes it easy to implement all five metrics at once, with automated data collection from your existing development tools—no manual surveys required.
{{cta}}