
Key Takeaways from the DORA Report 2025: How AI is Reshaping Software Development Metrics and Team Performance

New DORA data shows AI amplifies team dysfunction as often as capability. Key action: measure productivity by actual collaboration units, not tool groupings. Seven team types need different AI strategies. Learn diagnostic framework to prevent wasted AI investments across organizations.

Naomi Lurie · 13 min read · September 25, 2025

What DORA's survey data reveals about AI's real impact on engineering teams

In July 2025, Faros AI released groundbreaking telemetry analysis from over 10,000 developers. We found what we call "The AI Productivity Paradox": AI coding assistants dramatically boost individual output—21% more tasks completed, 98% more pull requests merged—but organizational delivery metrics stay flat.

Two months later, the much-anticipated 2025 DORA State of AI-assisted Software Development Report (hereafter the DORA Report 2025) arrived with survey data from nearly 5,000 developers worldwide to complete the picture.

Don't have time to read the full 140-page DORA Report 2025? 

This article distills the key findings and shows how they connect with recent telemetry research on AI's productivity impact.

This article covers:

  • What DORA found about AI's impact on engineering productivity
  • DORA's seven organizational capabilities that amplify or neutralize AI benefits
  • The DORA 5: Throughput and instability metrics and benchmarks
  • The DORA report’s seven new team archetypes, and why measurement precision matters
  • What end-to-end metrics reveal about where productivity gains disappear

For enterprise leaders, these insights offer both validation and a roadmap—but the window for action is closing.

Survey and telemetry: Two views of the same reality

Survey data and telemetry aren't telling different stories about AI. They reveal different sides of the same transformation.

Although both Stanford and METR research show that developers are poor estimators of their own productivity, in this case developer sentiment aligns closely with objective telemetry.

Here’s what both studies agree on: AI boosts individual-level output metrics.

The DORA Report 2025 survey data confirms what Faros AI's telemetry measured. Developers report higher individual effectiveness from AI adoption. This aligns with concrete increases in task completion (21%) and pull request volume (98%) that Faros AI observed.

Where challenges appear: The telemetry reveals organizational problems that stop these gains from translating to business value.

While individual productivity increases, Faros AI's data shows:

  • Code review time increases 91% as PR volume overwhelms reviewers
  • Pull request size grows 154%, creating cognitive overload and longer review cycles
  • Bug rates climb 9% as quality gates struggle with larger diffs and increased volume
  • Software delivery performance metrics, like the DORA metrics of lead time, deployment frequency, change failure rate, and MTTR, remain flat

| Metric | Change with AI Adoption | Impact |
|---|---|---|
| Tasks completed | +21% | Positive |
| Pull requests merged | +98% | Positive |
| Code review time | +91% | Bottleneck created |
| Pull request size | +154% | Review overload |
| Bug rate | +9% | Quality pressure |
| Organizational delivery | Flat | No business impact |

AI's impact on development metrics shows individual gains don't translate to organizational improvements

[Figure: AI's impact on throughput and workflows]
[Figure: AI's impact on PR size and quality]
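
For teams that want to reproduce this kind of comparison on their own data, here is a minimal sketch of computing before-and-after deltas from PR telemetry. The record fields and values are hypothetical stand-ins, not Faros AI's schema; a real pipeline would pull them from your Git provider and AI-assistant usage logs.

```python
from statistics import median

# Hypothetical PR records; field names are assumptions, not Faros AI's schema.
prs = [
    {"author": "dev1", "review_hours": 3.1, "lines_changed": 160, "ai_assisted": False},
    {"author": "dev1", "review_hours": 6.5, "lines_changed": 410, "ai_assisted": True},
    {"author": "dev2", "review_hours": 2.4, "lines_changed": 120, "ai_assisted": False},
    {"author": "dev2", "review_hours": 5.0, "lines_changed": 300, "ai_assisted": True},
]

def summarize(records):
    """Median review time and PR size for a set of pull requests."""
    return {
        "median_review_hours": median(r["review_hours"] for r in records),
        "median_lines_changed": median(r["lines_changed"] for r in records),
    }

before = summarize([r for r in prs if not r["ai_assisted"]])
after = summarize([r for r in prs if r["ai_assisted"]])

for key in before:
    pct = 100 * (after[key] - before[key]) / before[key]
    print(f"{key}: {before[key]} -> {after[key]} ({pct:+.0f}%)")
```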

The multitasking question: More work, no more stress

One of the most interesting findings from both studies concerns the changing cognitive load of engineers as they shift to AI-augmented workflows.

Faros AI’s telemetry quantified this shift precisely: Developers using AI interact with 9% more task contexts and 47% more pull requests daily. The study noted that traditional thresholds for healthy cognitive load may need adjustment in AI-augmented environments.

[Figure: AI's impact on developer multi-tasking and context-switching]
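
One way to approximate this kind of measure is to count the distinct task contexts (issues, pull requests) each developer touches per day, then compare the distribution before and after AI adoption. A minimal sketch, using hypothetical event data rather than Faros AI's actual pipeline:

```python
from collections import defaultdict

# Hypothetical work events: (developer, day, context) triples drawn from
# issue-tracker and code-review activity. Field names are illustrative.
events = [
    ("dev1", "2025-09-22", "TASK-101"),
    ("dev1", "2025-09-22", "TASK-102"),
    ("dev1", "2025-09-22", "PR-877"),
    ("dev2", "2025-09-22", "TASK-101"),
]

contexts = defaultdict(set)
for developer, day, context in events:
    contexts[(developer, day)].add(context)

# Daily context load: the number of distinct tasks and PRs touched per day.
for (developer, day), touched in sorted(contexts.items()):
    print(f"{developer} {day}: {len(touched)} concurrent contexts")
```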

Historically, context switching has been viewed negatively and linked to reduced focus.

Good news from the DORA Report 2025: Survey data found no correlation between AI adoption and increased burnout or friction. Stress indicators remained neutral, hovering around zero, despite the measurably increased workload complexity.

This suggests two possibilities:

First, multitasking is changing. Developers aren't just juggling more manual work. They're orchestrating AI agents across multiple workstreams. An engineer can make progress on one task while their AI assistant handles another. This fundamentally changes what "context switching" means.

Second, AI benefits offset coordination overhead. Developers feel the cognitive relief of not writing boilerplate code or searching documentation. This balances the increased complexity of managing more concurrent workstreams.

Key insight for enterprises: Increased activity doesn't automatically mean increased stress. But it does require adapted workflows and stronger coordination to prevent future burnout as adoption scales.

The AI amplifier effect and seven critical capabilities

Both studies converge on a crucial insight: AI acts as an amplifier, not a universal productivity booster.

The DORA Report 2025 states that "AI... magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones."

The telemetry validates this amplifier effect through concrete metrics. Teams with strong platform foundations see their AI gains translate to organizational improvements. Teams dealing with pre-existing constraints see their individual productivity increases absorbed by downstream bottlenecks.

This explains why the DORA AI Capabilities Model focuses on foundational capabilities rather than tool deployment strategies.

The seven capabilities that amplify AI benefits

  1. Clear and communicated AI stance - Organizational clarity on expectations and permitted tools
  2. Healthy data ecosystems - Quality, accessible, unified internal data
  3. AI-accessible internal data - Context integration beyond generic assistance
  4. Strong version control practices - Mature development workflow and rollback capabilities
  5. Working in small batches - Maintaining incremental change discipline
  6. User-centric focus - Product strategy clarity despite accelerated velocity
  7. Quality internal platforms - Technical foundations that enable scale

Three of these capabilities show particularly strong convergence with Faros AI's findings:

Strategic clarity over experimentation

Both reports show that successful AI adoption requires explicit organizational strategy, not just tool deployment.

The DORA Report 2025 emphasizes "clear and communicated AI stance"—organizational clarity about expectations, permitted tools, and policy applicability.

Faros AI identifies "grassroots adoption that lacks structure and scale" as a key barrier. Bottom-up experimentation without centralized enablement creates training overhead and inconsistent outcomes.

Organizations moving from "AI experimentation" to "AI operationalization" establish usage guidelines, provide role-specific training, build internal playbooks, and create communities of practice.

The small batch challenge

The DORA Report 2025 research shows that working in small batches amplifies AI's positive effects on product performance and reduces friction. But Faros AI's telemetry reveals AI consistently increases PR size by 154%. This tension exposes a critical implementation gap.

Successful teams are finding ways to break AI-generated work into smaller, reviewable units—staging code across multiple PRs, using AI for prototyping but manually chunking implementation, and engineering better prompts for incremental changes.

Organizations that maintain small batch discipline despite AI's tendency toward larger changes see benefits scale beyond individual developers.
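
Teams that hold this discipline often automate it. Below is a minimal sketch of a CI guardrail that fails a build when a PR grows too large, assuming diff stats from `git diff --numstat`; the 400-line threshold is an illustrative choice, not a DORA recommendation.

```python
import subprocess
import sys

MAX_LINES_CHANGED = 400  # illustrative threshold; tune per team

def pr_lines_changed(base: str = "origin/main") -> int:
    """Total added + deleted lines versus the base branch."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" for both counts
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    changed = pr_lines_changed()
    if changed > MAX_LINES_CHANGED:
        print(f"PR changes {changed} lines (limit {MAX_LINES_CHANGED}). "
              "Consider splitting it into smaller, reviewable units.")
        sys.exit(1)
    print(f"PR size OK: {changed} lines changed.")
```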

Platform prerequisites

Both studies validate that AI ROI depends fundamentally on platform maturity. The DORA Report 2025 found 90% of organizations now have platform engineering capabilities, with a direct correlation between platform quality and AI's amplification of organizational performance.

Faros AI's research identifies this as a critical differentiator: Organizations seeing measurable AI gains are doubling down on platform foundations to support rapid AI experimentation and faster flow of code through development pipelines. They're implementing AI engineering consoles to create a centralized data-driven command center for monitoring effectiveness and safety.

The convergence is clear: AI amplification requires platform maturity. Organizations struggling with basic CI/CD reliability, observability gaps, or fragmented developer experience will see AI gains absorbed by infrastructure friction.

Seven team archetypes: Why measurement precision matters

The DORA Report has long been known for the four key metrics: deployment frequency, lead time for changes, change failure rate, and mean time to recovery. The DORA Report 2024 marked a significant evolution of this framework.

What was new in 2024: the four key metrics grew into the "DORA 5," regrouped into measures of software delivery throughput and software delivery instability.

What’s new in DORA Report 2025:

  • A move away from the traditional low/medium/high/elite performance designations to per-metric performance buckets
  • Seven distinct team archetypes, identified from performance patterns across software delivery throughput, software delivery instability, team performance, product performance, individual effectiveness, valuable work, and friction and burnout

The DORA Report 2025 identifies seven distinct team archetypes:

| Archetype | Key Characteristics |
|---|---|
| Foundational Challenges | Teams in survival mode with significant process gaps |
| Legacy Bottleneck | Teams in constant reaction to unstable systems |
| Constrained by Process | Teams on a treadmill, consumed by inefficient workflows |
| High Impact, Low Cadence | Teams producing quality work, but slowly |
| Stable and Methodical | Teams delivering deliberately with high quality |
| Pragmatic Performers | Teams with impressive speed and functional environments |
| Harmonious High-Achievers | Teams in a virtuous cycle of sustainable excellence |

DORA's seven team archetypes replace traditional low/medium/high/elite performance classifications

This shift from linear performance tiers to multidimensional archetypes has profound implications for measuring AI's impact. A team's archetype determines not just how they'll adopt AI, but what benefits they'll see and what risks they'll face.
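
To make the shift from a single composite tier to per-metric buckets concrete, here is a minimal sketch that assigns a team a bucket for each metric independently. The thresholds are hypothetical placeholders, not DORA's published cut-offs.

```python
# Per-metric performance buckets. Thresholds are hypothetical placeholders,
# not DORA's published cut-offs; substitute benchmarks from the report.
HIGHER_IS_BETTER = {"deploys_per_week": True, "lead_time_days": False}
BUCKETS = {
    "deploys_per_week": [(7, "high"), (1, "medium")],
    "lead_time_days": [(1, "high"), (7, "medium")],
}

def bucket(metric: str, value: float) -> str:
    """Assign one metric to a bucket, independently of the others."""
    for threshold, label in BUCKETS[metric]:
        if HIGHER_IS_BETTER[metric]:
            if value >= threshold:
                return label
        elif value <= threshold:
            return label
    return "low"

team = {"deploys_per_week": 3, "lead_time_days": 4}
print({metric: bucket(metric, value) for metric, value in team.items()})
# {'deploys_per_week': 'medium', 'lead_time_days': 'medium'}
```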

Why one-size-fits-all AI strategies fail

The seven archetypes expose a central problem: AI reinforces existing team patterns rather than fixing them. Teams therefore need different AI approaches matched to their specific constraints and strengths.

Consider how AI affects different team types:

  • "Legacy Bottleneck" teams wrestling with aging, unstable systems find AI helps them write code faster, but their outdated infrastructure becomes an even tighter constraint. Individual productivity rises while fragile deployment pipelines and messy integrations absorb the gains.
  • "Pragmatic Performers" who usually deliver smoothly find AI creates new coordination problems. Faster code generation overwhelms their review process, and larger AI-generated changes disrupt normally fluid workflows.
  • "Harmonious High-Achievers" see AI multiply already strong collaboration. Their platform foundations and healthy work practices let AI benefits spread across the whole organization.

Aggregate performance measures would miss these differences entirely and push organizations toward a uniform AI strategy, an approach that amplifies dysfunction in struggling teams as often as it amplifies capability in strong ones. The archetype model provides the diagnostic precision needed to match AI investments to each team's actual constraints.

Three critical measurement challenges for AI adoption

The variance between these archetypes is so significant that aggregating their metrics masks the patterns needed for effective intervention. 

1. Administrative groupings don't reflect actual teams

Jira boards, GitHub teams, and department structures rarely align with actual working relationships where AI impact occurs. A GitHub team might contain people who rarely collaborate, while a cross-functional product team might span multiple repositories.

AI productivity gains happen in the context of actual collaboration, not administrative boundaries.

Without measuring at the real team level, you can't accurately assess which archetype a team represents or how AI affects their specific constraint pattern.

2. Attribution errors compound over time

When developers change teams or projects—a common occurrence—their historical data typically travels with them in most analytics platforms. This creates significant distortions.

A high-performing developer joining a struggling team artificially inflates that team's historical metrics. This makes it impossible to isolate the effects of actual interventions or accurately classify the team's archetype.
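
A straightforward guard against this distortion is time-sliced attribution: credit each unit of work to the team the developer belonged to when the work happened, not the team they belong to today. A minimal sketch, with hypothetical membership records:

```python
from datetime import date

# Hypothetical team-membership intervals, sourced from an HR system.
memberships = {
    "dev1": [
        ("team-payments", date(2024, 1, 1), date(2025, 3, 31)),
        ("team-platform", date(2025, 4, 1), date(9999, 12, 31)),
    ],
}

def team_at(developer: str, when: date) -> str | None:
    """Return the team the developer belonged to on a given date."""
    for team, start, end in memberships.get(developer, []):
        if start <= when <= end:
            return team
    return None

# Work stays attributed to the team it was done for, even after a move.
print(team_at("dev1", date(2025, 2, 10)))  # team-payments
print(team_at("dev1", date(2025, 6, 1)))   # team-platform
```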

3. Misallocated investment follows bad data

Without accurate team-level measurement mapped to these archetypes, enterprises misallocate AI investment. They might invest heavily in AI coding assistants for "Legacy Bottleneck" teams whose actual constraint is deployment pipeline fragility, while ignoring the code review capacity needs of "Pragmatic Performers" whose constraint is shifting from code generation to integration.

The solution: connect formal reporting hierarchies from HR systems with actual collaboration patterns inferred from development telemetry. This enables measurement at the real team level (the 5–12 person working groups who collaborate daily on shared deliverables), combined with archetype classification based on actual throughput and instability patterns rather than proxy organizational units.
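
One way to infer those real working groups is to build a collaboration graph, where an edge connects two people who review each other's pull requests, and then extract densely connected communities. A minimal sketch using networkx, with hypothetical review events:

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical (author, reviewer) pairs from pull request review telemetry.
review_events = [
    ("alice", "bob"), ("bob", "alice"), ("alice", "carol"),
    ("dave", "erin"), ("erin", "dave"), ("dave", "frank"),
]

G = nx.Graph()
for author, reviewer in review_events:
    # Accumulate edge weight: more shared reviews means stronger collaboration.
    weight = G.get_edge_data(author, reviewer, default={"weight": 0})["weight"]
    G.add_edge(author, reviewer, weight=weight + 1)

# Communities approximate the real 5-12 person working groups, which can then
# be reconciled against the formal reporting hierarchy from the HR system.
for group in community.greedy_modularity_communities(G, weight="weight"):
    print(sorted(group))
```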

Value Stream Management: Where AI gains evaporate

The DORA Report 2025 identifies Value Stream Management as the practice that turns AI's individual productivity gains into organizational advantage. Faros AI's telemetry demonstrates why this matters:

While developers complete:

  • 21% more tasks
  • 98% more PRs with AI assistance

Organizations see:

  • Code review time increases 91%
  • Bug rates climb 9%
  • Organizational delivery metrics remain flat

Without end-to-end visibility, teams optimize locally—making code generation faster—while the actual constraint shifts to review, integration, and deployment. Organizations investing in AI without measuring their end-to-end development processes risk accelerating into a bottleneck rather than accelerating through it.
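
End-to-end visibility starts with timing every stage of the value stream, not just code generation. The sketch below locates the slowest stage from per-PR lifecycle timestamps; the field names and data are illustrative assumptions:

```python
from datetime import datetime
from statistics import median

# Hypothetical per-PR lifecycle timestamps; a real pipeline would assemble
# these from Git, code review, CI, and deployment telemetry.
prs = [
    {"opened": "2025-09-01T09:00", "first_review": "2025-09-02T15:00",
     "merged": "2025-09-03T11:00", "deployed": "2025-09-05T10:00"},
    {"opened": "2025-09-02T10:00", "first_review": "2025-09-04T09:00",
     "merged": "2025-09-04T16:00", "deployed": "2025-09-08T09:00"},
]

STAGES = [
    ("wait_for_review", "opened", "first_review"),
    ("review_to_merge", "first_review", "merged"),
    ("merge_to_deploy", "merged", "deployed"),
]

def hours(start: str, end: str) -> float:
    delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
    return delta.total_seconds() / 3600

# The stage with the largest median duration is the candidate constraint.
for stage, start_field, end_field in STAGES:
    stage_median = median(hours(pr[start_field], pr[end_field]) for pr in prs)
    print(f"{stage}: median {stage_median:.1f}h")
```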

Finding where value gets lost: The GAINS™ Framework

Knowing that gains disappear is only the start. Organizations need to know exactly where and why it happens. The archetype model shows that "Legacy Bottleneck" teams lose value in different ways than "Constrained by Process" teams, yet aggregate metrics treat them the same.

The GAINS™ (Generative AI Impact Net Score) framework addresses this problem. It examines ten distinct dimensions to find the specific friction points for each team archetype.

  • For "Foundational Challenges" teams, GAINS shows which gaps hurt their delivery the most and which need fixing first.
  • "Legacy Bottleneck" teams learn whether AI worsens their stability problems because of weak infrastructure or missing test automation.
  • "Constrained by Process" teams see whether AI adds process overhead, or whether workflow changes could release trapped productivity.

This precise diagnosis lets organizations target specific constraints instead of applying generic solutions that often make existing constraints worse. It also creates a clear path from spotting the problem (gains disappearing in the value stream) to fixing it (precise measurement of where and why).

The path forward: From insight to impact

Both studies point to the same conclusion: The AI productivity paradox isn't permanent, but solving it requires systematic action.

The DORA Report 2025 practical recommendations provide a checklist for enterprises ready to move from AI experimentation to operationalization:

  • Clarify and socialize AI policies to reduce ambiguity around permitted tools and usage
  • Treat data as a strategic asset through investment in quality, accessibility, and unification
  • Connect AI to internal context to move beyond generic assistance to company-specific value
  • Center users' needs in product strategy to maintain focus despite accelerated velocity
  • Embrace and fortify safety nets by strengthening version control and rollback capabilities
  • Reduce work item size to maintain small batch discipline despite AI's larger change tendency
  • Invest in internal platforms to build the foundation that enables AI benefits to scale

The telemetry data adds urgency to these recommendations. Organizations have roughly 12 months to shift from experimentation to operationalization before the AI amplifier effect compounds competitive disadvantages.

Early movers are already seeing organizational-level gains translate to business outcomes. Late adopters will find their individual productivity increases absorbed by systemic dysfunction.

The convergence of survey insights and telemetry precision provides the roadmap. The question is whether enterprise leaders will act on it with the urgency and precision the data demands.

Frequently asked questions about the DORA Report 2025

What are the main findings of the DORA Report on AI?

The DORA Report 2025 found that 95% of developers now use AI tools, with over 80% reporting productivity gains. However, the research reveals that AI acts as an "amplifier" rather than a universal solution—it magnifies existing organizational strengths and weaknesses. 

The report introduces seven critical capabilities that determine whether AI benefits scale beyond individuals to organizational performance: clear AI stance, healthy data ecosystems, AI-accessible internal data, strong version control practices, working in small batches, user-centric focus, and quality internal platforms. 

Critically, the research shows no correlation between AI adoption and increased developer burnout or friction, suggesting teams are adapting successfully to AI-enhanced workflows despite handling more concurrent workstreams.

What are the seven team archetypes in the DORA Report 2025?

The DORA Report 2025 identifies seven team performance archetypes based on throughput metrics, instability metrics, and team well-being measures. These replace the traditional low/medium/high/elite classifications. 

The archetypes are: (1) Foundational Challenges—teams in survival mode with significant process gaps; (2) Legacy Bottleneck—teams constantly reacting to unstable systems; (3) Constrained by Process—teams consumed by inefficient workflows; (4) High Impact, Low Cadence—teams producing quality work slowly; (5) Stable and Methodical—teams delivering deliberately with high quality; (6) Pragmatic Performers—teams with impressive speed and functional environments; and (7) Harmonious High-Achievers—teams in a virtuous cycle of sustainable excellence. 

Each archetype experiences AI adoption differently, requiring tailored intervention strategies rather than one-size-fits-all approaches.

What is the AI Capabilities Model in the DORA Report 2025?

The DORA AI Capabilities Model identifies seven foundational organizational capabilities that amplify AI benefits rather than focusing on tool deployment alone. 

These capabilities are: (1) Clear and communicated AI stance—organizational clarity on expectations and permitted tools; (2) Healthy data ecosystems—quality, accessible, unified internal data; (3) AI-accessible internal data—context integration beyond generic assistance; (4) Strong version control practices—mature development workflows and rollback capabilities; (5) Working in small batches—maintaining incremental change discipline; (6) User-centric focus—product strategy clarity despite accelerated velocity; and (7) Quality internal platforms—technical foundations that enable scale. 

Research shows these capabilities determine whether individual productivity gains from AI translate to organizational performance improvements. Organizations lacking these foundations see AI gains absorbed by downstream bottlenecks and systemic dysfunction.

Why does the DORA Report 2025 emphasize Value Stream Management for AI adoption?

The DORA Report 2025 identifies Value Stream Management (VSM) as critical because it reveals where AI productivity gains evaporate in the development lifecycle. Without end-to-end visibility, teams optimize locally—making code generation faster—while actual constraints shift to review, integration, and deployment stages. The report describes this as "localized pockets of productivity lost to downstream chaos." 

VSM provides diagnostic frameworks to identify true constraints in the value stream, enabling organizations to invest AI resources where they create the most impact. Research shows that teams with mature measurement practices successfully translate AI gains from individual developers to team and product performance improvements, while teams lacking visibility see organizational delivery metrics remain flat despite individual productivity increases.

Note about Value Stream Management: Industry analysts view developer productivity insights platforms and Software Engineering Intelligence (SEI) as tools and capabilities that are fueling the VSM market, which is focused on improving overall business outcomes.

Want to speak with an expert? Contact our team for a consultation and demo of data-driven AI transformation.

Naomi Lurie

Naomi is head of product marketing at Faros AI.
