The Top 4 Sprint Metrics that Improve Developer Productivity

Four sprint metrics any engineering team can track to improve developer productivity and unlock better outcomes.

composite image of 4 charts used in the main article:
  • a say/do ratio gauge
  • a say/do ratio and unplanned work trend line chart across multiple sprints
  • a capacity target adherence bar chart
  • a scatter plot of survey responses vs. say/do ratios by team to evaluate alignment to goals


Last updated January 2026

Sprint metrics and developer productivity

Sprints are short, typically two-week cycles in which development teams aim to complete a set amount of work. Engineering teams implement sprints to break down complex projects into manageable chunks, fostering regular delivery and continuous improvement.

At the end of a sprint, teams, line managers, product owners, TPMs, coaches, and other stakeholders review sprint metrics with the goal of continuous learning. In this article, we lay out the top four sprint metrics for improving developer productivity.

Sprint metrics answer key questions

In 2026, most engineering teams measure and track many sprint metrics, several of which can be used to understand and improve developer productivity. Improving developer productivity involves addressing the inefficiencies and frictions that impact the development process, such as over-planning, context-switching, and navigating inefficient tools and workflows — all of which can undermine productivity and team satisfaction.

In retros, teams ask:

  • Did we estimate our capacity correctly?
  • Are we delivering well against our commitments?
  • Are we working on the right things?
  • What is the current morale of the team, and how may that be impacting our performance?

Four sprint metrics are best suited to answer these questions and uncover the improvements to unlock higher productivity and better outcomes.

1. Say/Do Ratio: How accurately are we estimating effort and capacity?

The say/do ratio is an essential sprint metric that measures how accurately teams estimate their capacity. Say/do ratio compares the number of story points committed at the start of a sprint (what they "say" they'll do) against what was completed by the end of it (what they "do").

The say/do ratio typically includes planned and unplanned work, so it measures how good the team is at estimating its capacity, but not whether it’s working on the right things (more on that later).

  • A high say/do ratio indicates that a team has a good grasp of its capacity and efficiency. They thought they could deliver 100 story points, and they delivered close to 100 story points.
  • A low say/do ratio indicates a team is not as good at planning to capacity. They thought they could deliver 100 story points, but only delivered 70. When this sprint metric is low, there is room to improve productivity. Teams should look into improving estimation accuracy, mitigating dependencies, and understanding the delays causing work to progress slower than anticipated.
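The arithmetic behind the metric is simple. A minimal sketch, using hypothetical sprint numbers:

```python
# Hypothetical sprint data: story points committed at planning vs. completed by sprint end.
committed_points = 100
completed_points = 70

# A say/do ratio of 1.0 means every committed point was delivered.
say_do_ratio = completed_points / committed_points
print(f"Say/do ratio: {say_do_ratio:.0%}")  # → Say/do ratio: 70%
```

In practice, the committed and completed totals would come from your issue tracker at sprint open and close.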
gauge chart showing say/do ratio for current sprint and line graph showing trend of say/do ratio over time
Current sprint performance paired with historical trends highlights planning accuracy and delivery reliability.

Since the say/do ratio measures how well a team follows through on its commitments, the healthiest say/do ratio is 1:1, meaning that for every commitment made, the team delivers. Development teams should strive to keep this ratio as close to 1:1 as possible, demonstrating they can accurately predict their capacity and balance committed work with necessary maintenance tasks and urgent requests.

It’s also beneficial to understand why committed work was not completed, so teams can anticipate and mitigate those risks in future sprints. As improvements take hold, the say/do ratio should increase sprint over sprint.

chart illustrating reasons why sprint tasks were incomplete
Why do sprint tasks go unfinished—and which reasons have the biggest impacts?

2. Planned/Unplanned Work Ratio: How are we managing our work?

As noted in the say/do ratio discussion above, during a sprint teams handle both planned and unplanned work.

  • Planned work includes work items that have been identified, prioritized, and agreed upon before the sprint starts, for example, new features, UX changes, fixing known bugs.
  • Unplanned work comprises unexpected tasks that arise during the sprint, such as urgent bug fixes, unforeseen technical issues, or emergency requests from stakeholders. Some capacity should always be reserved for unplanned work.
gauge chart showing unplanned work ratio for current sprint and line graph showing trend of unplanned work ratio over time
Engineering managers tracking the unplanned work ratio can see both a current sprint snapshot and trends over time.

In a healthy sprint, unplanned work typically accounts for less than 20% of the total. This indicates that the majority of the work completed is in line with the initial commitments, and the team is successfully absorbing unforeseen tasks.

As the balance shifts away from 80/20 in favor of unplanned work, the team is absorbing more unexpected tasks and delivering less of what it committed to the business, an indicator of decreased productivity. This suggests a need to reevaluate sprint planning, ensure alignment with business priorities, and explore underlying issues such as inadequate risk management, technical debt, or poorly communicated cross-team dependencies.
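To make the 20% guideline concrete, here is a minimal sketch with hypothetical end-of-sprint data:

```python
# Hypothetical story points completed this sprint, tagged planned vs. unplanned.
completed = {"planned": 85, "unplanned": 15}

total = sum(completed.values())
unplanned_share = completed["unplanned"] / total
print(f"Unplanned work: {unplanned_share:.0%} of completed points")
# 15% is under the 20% guideline for a healthy sprint.
```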

Teams find it helpful to view a couple of these sprint metrics combined over a historical view of sprints. They like to see the say/do ratio and unplanned work trend juxtaposed with what shipped and what slipped each sprint. Examining the trend and the underlying tasks can help the team identify recurring patterns and issues they can address, and then the impact of those changes over time.

chart combining say/do ratio and unplanned work ratio trends by sprint over time
Say/Do ratio and unplanned work trends, shown alongside shipped and slipped work, reveal how rising unplanned work impacts delivery commitments.

3. Capacity Target Adherence: Are we working on the right things?

Another important sprint metric, capacity target adherence, ensures the team is working on the right things, aligned with strategic objectives. It measures the distribution of effort across predefined categories of work.

Teams typically categorize work into buckets such as strategic projects, technical debt, bug fixes, and keep-the-lights-on (KTLO) work. It’s good practice to set an ideal reference target each sprint, e.g., 60:20:10:10. From a developer productivity standpoint, achieving the targeted distribution is an indication of high productivity, because the developers are advancing the strategic goals of the company.
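One way to check adherence is to compare each category's actual share of effort against the target split. A minimal sketch with hypothetical numbers and the 60:20:10:10 example target:

```python
# Hypothetical story points delivered per category this sprint.
actual = {"strategic": 55, "tech_debt": 15, "bug_fixes": 20, "ktlo": 10}
# Reference target split (percentages), per the 60:20:10:10 example.
target = {"strategic": 60, "tech_debt": 20, "bug_fixes": 10, "ktlo": 10}

total = sum(actual.values())
# Positive drift = over-invested vs. target; negative = under-invested.
drift = {cat: 100 * pts / total - target[cat] for cat, pts in actual.items()}
for cat, d in drift.items():
    print(f"{cat:10s} drift {d:+.0f} pp")
```

A large positive drift on bug fixes, for instance, is exactly the kind of discrepancy worth raising with a leader or stakeholder.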

bar graph comparing capacity target adherence of multiple sprints to strategic targets
Capacity target adherence shows how closely each sprint’s work distribution aligns with the ideal balance of strategic work, tech debt, bug fixes, and KTLO.

By tracking this sprint metric and reflecting on how much time was actually spent on each type of work, teams can evaluate whether their efforts are in line with strategic targets. If there is a significant discrepancy, the team should raise the issue with a leader or stakeholder to discuss the reasons and reassess priorities.

4. DevEx to DevProd Correlation: What insights can we derive from comparing survey responses to sprint outcomes?

For a deeper understanding of how developer experience correlates to sprint performance, engineering leaders learn a lot from juxtaposing the above three sprint metrics with developer survey data. Blending quantitative productivity measures and qualitative feedback creates a holistic view of team performance and well-being, offering deeper insights into the factors influencing developer efficiency, satisfaction, and engagement.

If a team conducts quick surveys at the end of every sprint, these correlations can be examined sprint by sprint. If developer surveys run only once a quarter, take a quarterly view.

scatter plot showing alignment to goals via average responses vs say/do ratio by team
Comparing average survey responses to Say/Do ratio by team reveals alignment gaps and opportunities for targeted improvement.

Senior leaders and domain leaders benefit from looking at these correlations between sprint metrics and developer surveys for the entire organization, for sub-orgs, and for individual teams. This helps distinguish systemic issues from team-specific challenges and set continuous improvement priorities.

Analyzing results across teams illuminates thriving teams, whose best practices can be shared with others to improve outcomes.

Equipped with these enriched insights, engineering leaders can make more timely and informed decisions to enhance overall developer experience and team morale, target process refinements more precisely, and better assess the impact of changes on developer productivity.

Sprint metrics that power better delivery with Faros AI

Enterprise engineering organizations looking for a holistic view of sprint health and engineering productivity turn to Faros AI. Faros AI provides detailed dashboards across teams, products, repos, apps, and services—along with personalized insights at every level—to help uncover hidden issues and identify opportunities for improvement.

Remove bottlenecks and improve processes

  • Leverage root cause analysis to uncover correlations and causality.
  • Let our AI models surface the biggest impediments to team performance.
  • Get notified of delivery risks before they happen.

Create a culture of self-improvement

  • Understand what the team worked on and the effort behind the most complex tasks.
  • Track the deployment status of code changes and definitively know what’s been shipped to production.
  • Assess task and PR hygiene to identify areas for cleanup and process improvement.

Improve agility and predictability

  • Keep delivery on track by monitoring key sprint metrics.
  • Use thresholds, automations, and alerts to keep work flowing and WIP under control.
  • Correlate developer survey results with activity and process data for deeper insight.

How Riskified runs data-driven retros with Faros AI

Elad Kochavi, an engineering team leader at Riskified, runs his retros with sprint metrics from Faros AI. According to Elad, “We now have a combined picture for all the tools we use and can do much more sophisticated analysis in place of the naive and simplified views in Jira. Our transition to data-driven retros has energized and motivated the team. They love seeing the impact of their efforts in the charts.”

image of quote from Elad

Project management tools like Jira can only take your sprint metrics so far. Faros AI takes engineering data visualization to the next level with dashboards that provide a full, context-rich picture across all your teams’ sprints:

  • A combined view of human- and machine-curated data from Jira, source control, CI, CD, code analysis, testing, defects, and incidents
  • Hierarchical drilldowns based on org structure, product groups, teams, apps, or services
  • Unlimited history
  • Velocity, throughput, quality, reliability, and predictability metrics
  • Team-tailored AI insights and recommendations

Ready to supercharge your sprint metrics in 2026?

Reach out to our experts for more details on how our advanced sprint metrics displayed on customizable dashboards can provide your teams with deeper insights.

Neely Dunlap

Neely Dunlap is a content strategist at Faros who writes about AI and software engineering.
