New DORA data shows AI amplifies team dysfunction as often as capability. Key action: measure productivity by actual collaboration units, not administrative tool groupings. Seven team archetypes need different AI strategies. Learn a diagnostic framework to prevent wasted AI investments across organizations.
In July 2025, Faros AI released groundbreaking telemetry analysis from over 10,000 developers. We found what we call "The AI Productivity Paradox": AI coding assistants dramatically boost individual output—21% more tasks completed, 98% more pull requests merged—but organizational delivery metrics stay flat.
Two months later, the much-anticipated 2025 DORA State of AI-assisted Software Development Report (hereafter the DORA Report 2025) arrived with survey data from nearly 5,000 developers worldwide to complement the picture.
Don't have time to read the full 140-page DORA Report 2025?
This article distills the key findings and shows how they connect with recent telemetry research on AI's productivity impact.
This article covers:
For enterprise leaders, these insights offer both validation and a roadmap—but the window for action is closing.
Survey data and telemetry aren't telling different stories about AI. They reveal different sides of the same transformation.
While it's true that both Stanford and METR research show developers are poor estimators of their own productivity, in this case developer sentiment aligns closely with objective telemetry.
Here’s what both studies agree on: AI boosts individual-level output metrics.
The DORA Report 2025 survey data confirms what Faros AI's telemetry measured. Developers report higher individual effectiveness from AI adoption. This aligns with concrete increases in task completion (21%) and pull request volume (98%) that Faros AI observed.
Where challenges appear: The telemetry reveals organizational problems that stop these gains from translating to business value.
{{ai-paradox}}
While individual productivity increases, Faros AI's data shows:
One of the most interesting findings from both studies concerns the changing cognitive load of engineers as they shift to AI-augmented workflows.
Faros AI’s telemetry quantified this shift precisely: Developers using AI interact with 9% more task contexts and 47% more pull requests daily. The study noted that traditional thresholds for healthy cognitive load may need adjustment in AI-augmented environments.
Historically, context switching has been viewed negatively and linked to reduced focus.
Good news from the DORA Report 2025: Survey data found no correlation between AI adoption and increased burnout or friction. Stress indicators remained neutral, hovering around zero, despite the measurably increased workload complexity.
This suggests two possibilities:
First, multitasking is changing. Developers aren't just juggling more manual work. They're orchestrating AI agents across multiple workstreams. An engineer can make progress on one task while their AI assistant handles another. This fundamentally changes what "context switching" means.
Second, AI benefits offset coordination overhead. Developers feel the cognitive relief of not writing boilerplate code or searching documentation. This balances the increased complexity of managing more concurrent workstreams.
Key insight for enterprises: Increased activity doesn't automatically mean increased stress. But it does require adapted workflows and stronger coordination to prevent future burnout as adoption scales.
Both studies converge on a crucial insight: AI acts as an amplifier, not a universal productivity booster.
The DORA Report 2025 states that "AI... magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones."
The telemetry validates this amplifier effect through concrete metrics. Teams with strong platform foundations see their AI gains translate to organizational improvements. Teams dealing with pre-existing constraints see their individual productivity increases absorbed by downstream bottlenecks.
This explains why the DORA AI Capabilities Model focuses on foundational capabilities rather than tool deployment strategies.
Three of these capabilities show particularly strong convergence with Faros AI's findings:
Both reports show that successful AI adoption requires explicit organizational strategy, not just tool deployment.
The DORA Report 2025 emphasizes "clear and communicated AI stance"—organizational clarity about expectations, permitted tools, and policy applicability.
Faros AI identifies "grassroots adoption that lacks structure and scale" as a key barrier. Bottom-up experimentation without centralized enablement creates training overhead and inconsistent outcomes.
Organizations moving from "AI experimentation" to "AI operationalization" establish usage guidelines, provide role-specific training, build internal playbooks, and create communities of practice.
The DORA Report 2025 research shows that working in small batches amplifies AI's positive effects on product performance and reduces friction. But Faros AI's telemetry reveals AI consistently increases PR size by 154%. This tension exposes a critical implementation gap.
Successful teams are finding ways to break AI-generated work into smaller, reviewable units—staging code across multiple PRs, using AI for prototyping but manually chunking implementation, and engineering better prompts for incremental changes.
Organizations that maintain small batch discipline despite AI's tendency toward larger changes see benefits scale beyond individual developers.
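For teams looking to operationalize that discipline, here is a minimal sketch of what a small-batch gate could look like, written in Python: a check that flags pull requests above an assumed size threshold and suggests splitting them. The thresholds and the `check_pr_size` helper are illustrative assumptions, not something prescribed by either report.

```python
# Hypothetical CI check that nudges authors to split oversized (often AI-generated)
# pull requests into smaller, reviewable units. Thresholds are illustrative only.

MAX_CHANGED_LINES = 400   # assumed "small batch" ceiling; tune per team
MAX_CHANGED_FILES = 20

def check_pr_size(changed_files: dict[str, int]) -> tuple[bool, str]:
    """changed_files maps file path -> lines added + deleted."""
    total_lines = sum(changed_files.values())
    if total_lines <= MAX_CHANGED_LINES and len(changed_files) <= MAX_CHANGED_FILES:
        return True, f"OK: {total_lines} changed lines across {len(changed_files)} files."
    return False, (
        f"PR touches {total_lines} lines in {len(changed_files)} files. "
        "Consider splitting it: stage refactors, generated scaffolding, and "
        "behavior changes as separate, independently reviewable PRs."
    )

if __name__ == "__main__":
    ok, message = check_pr_size({"api/handlers.py": 310, "api/models.py": 180})
    print(message)  # exceeds the illustrative threshold, so it suggests a split
```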
Both studies validate that AI ROI depends fundamentally on platform maturity. The DORA Report 2025 found 90% of organizations now have platform engineering capabilities, with a direct correlation between platform quality and AI's amplification of organizational performance.
Faros AI's research identifies this as a critical differentiator: Organizations seeing measurable AI gains are doubling down on platform foundations to support rapid AI experimentation and faster flow of code through development pipelines. They're implementing AI engineering consoles to create a centralized data-driven command center for monitoring effectiveness and safety.
The convergence is clear: AI amplification requires platform maturity. Organizations struggling with basic CI/CD reliability, observability gaps, or fragmented developer experience will see AI gains absorbed by infrastructure friction.
The DORA Report has long been known for the four key metrics: deployment frequency, lead time for changes, change failure rate, and mean time to recovery. The DORA Report 2024 marked a significant evolution of this framework.
What was new in 2024:
What’s new in DORA Report 2025:
The DORA Report 2025 identifies seven distinct team archetypes:
This shift from linear performance tiers to multidimensional archetypes has profound implications for measuring AI's impact. A team's archetype determines not just how they'll adopt AI, but what benefits they'll see and what risks they'll face.
The seven team archetypes surface a critical problem: AI reinforces existing team patterns instead of fixing them. That means teams need different AI approaches based on their specific constraints and strengths.
Consider how AI affects different team types:
Conventional performance measures would miss these differences entirely and lead companies to apply the same AI strategy everywhere. That approach deepens dysfunction in struggling teams just as often as it helps strong teams improve. The archetype model provides the precise diagnosis needed to match AI investments to each team's actual constraints.
The variance between these archetypes is so significant that aggregating their metrics masks the patterns needed for effective intervention.
1. Administrative groupings don't reflect actual teams
Jira boards, GitHub teams, and department structures rarely align with actual working relationships where AI impact occurs. A GitHub team might contain people who rarely collaborate, while a cross-functional product team might span multiple repositories.
AI productivity gains happen in the context of actual collaboration, not administrative boundaries.
Without measuring at the real team level, you can't accurately assess which archetype a team represents or how AI affects their specific constraint pattern.
2. Attribution errors compound over time
When developers change teams or projects—a common occurrence—their historical data typically travels with them in most analytics platforms. This creates significant distortions.
A high-performing developer joining a struggling team artificially inflates that team's historical metrics. This makes it impossible to isolate the effects of actual interventions or accurately classify the team's archetype.
3. Misallocated investment follows bad data
Without accurate team-level measurement mapped to these archetypes, enterprises misallocate AI investment. They might invest heavily in AI coding assistants for "Legacy Bottleneck" teams whose actual constraint is deployment pipeline fragility, while ignoring the code review capacity needs of "Pragmatic Performers" whose constraint is shifting from code generation to integration.
The solution:
Connect formal reporting hierarchies from HR systems with actual collaboration patterns inferred from development telemetry. This enables measurement at the real team level (the 5–12 person working groups who collaborate daily on shared deliverables) combined with archetype classification based on their actual throughput and instability patterns rather than proxy organizational units.
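As an illustration, the sketch below shows one way such inference could work, assuming a feed of (author, reviewer) pairs extracted from merged pull requests. The event shape, the `min_shared_reviews` threshold, and the simple connected-components grouping are hypothetical stand-ins for whatever community-detection approach a real platform would use.

```python
# Hypothetical sketch: infer "real" working groups from PR review telemetry,
# then measure and classify those groups rather than Jira boards or GitHub teams.
from collections import defaultdict

def infer_working_groups(review_events, min_shared_reviews=5):
    """review_events: iterable of (author, reviewer) pairs from merged PRs."""
    # Count how often each pair of people collaborates on reviews.
    weight = defaultdict(int)
    for author, reviewer in review_events:
        edge = tuple(sorted((author, reviewer)))
        weight[edge] += 1

    # Keep only recurring collaboration edges, then group people who are
    # transitively connected (a simple stand-in for community detection).
    adjacency = defaultdict(set)
    for (a, b), w in weight.items():
        if w >= min_shared_reviews:
            adjacency[a].add(b)
            adjacency[b].add(a)

    groups, seen = [], set()
    for person in adjacency:
        if person in seen:
            continue
        stack, group = [person], set()
        while stack:
            current = stack.pop()
            if current in group:
                continue
            group.add(current)
            stack.extend(adjacency[current] - group)
        seen |= group
        groups.append(group)

    # Groups of roughly 5-12 people are the measurement unit; much larger
    # clusters likely span several teams and should be split further.
    return [g for g in groups if len(g) <= 12]
```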
{{cta}}
The DORA Report 2025 identifies Value Stream Management as the practice that turns AI's individual productivity gains into organizational advantage. Faros AI's telemetry demonstrates why this matters:
While developers complete:
Organizations see:
Without end-to-end visibility, teams optimize locally—making code generation faster—while the actual constraint shifts to review, integration, and deployment. Organizations investing in AI without measuring their end-to-end development processes risk accelerating into a bottleneck rather than accelerating through it.
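To make that visibility concrete, here is a minimal sketch of a stage-level lead-time breakdown, assuming each work item carries first-commit, PR-opened, PR-merged, and deployment timestamps (the field names are illustrative, not a specific tool's schema). When median review and deploy hours dwarf coding hours, faster code generation alone won't move delivery metrics.

```python
# Hypothetical sketch: break lead time into stages to see where AI-accelerated
# coding gains evaporate. Timestamp fields are assumptions about your telemetry.
from datetime import datetime
from statistics import median

def stage_breakdown(work_items):
    """work_items: dicts with first_commit, pr_opened, pr_merged, deployed datetimes."""
    stages = {"coding": [], "review": [], "deploy": []}
    for item in work_items:
        stages["coding"].append((item["pr_opened"] - item["first_commit"]).total_seconds() / 3600)
        stages["review"].append((item["pr_merged"] - item["pr_opened"]).total_seconds() / 3600)
        stages["deploy"].append((item["deployed"] - item["pr_merged"]).total_seconds() / 3600)
    return {stage: round(median(hours), 1) for stage, hours in stages.items()}

items = [{
    "first_commit": datetime(2025, 9, 1, 9), "pr_opened": datetime(2025, 9, 1, 13),
    "pr_merged": datetime(2025, 9, 3, 16), "deployed": datetime(2025, 9, 5, 10),
}]
print(stage_breakdown(items))  # e.g. {'coding': 4.0, 'review': 51.0, 'deploy': 42.0}
```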
Finding out that gains disappear is just the start. Companies need to know exactly where and why it happens. The archetype model shows that "Legacy Bottleneck" teams lose value in different ways than "Constrained by Process" teams, yet conventional metrics treat them identically.
The GAINS™ (Generative AI Impact Net Score) framework addresses this gap. It examines ten distinct dimensions to pinpoint the specific friction points for each team type.
This precise diagnosis lets companies target specific problems instead of applying generic solutions that often worsen existing constraints. It also creates a clear path from spotting the problem (gains disappearing across work streams) to fixing it (precise measurement of where and why).
Both studies point to the same conclusion: The AI productivity paradox isn't permanent, but solving it requires systematic action.
The DORA Report 2025 practical recommendations provide a checklist for enterprises ready to move from AI experimentation to operationalization:
The telemetry data adds urgency to these recommendations. Organizations have roughly 12 months to shift from experimentation to operationalization before the AI amplifier effect compounds competitive disadvantages.
Early movers are already seeing organizational-level gains translate to business outcomes. Late adopters will find their individual productivity increases absorbed by systemic dysfunction.
The convergence of survey insights and telemetry precision provides the roadmap. The question is whether enterprise leaders will act on it with the urgency and precision the data demands.
{{cta}}
The DORA Report 2025 found that 95% of developers now use AI tools, with over 80% reporting productivity gains. However, the research reveals that AI acts as an "amplifier" rather than a universal solution—it magnifies existing organizational strengths and weaknesses.
The report introduces seven critical capabilities that determine whether AI benefits scale beyond individuals to organizational performance: clear AI stance, healthy data ecosystems, AI-accessible internal data, strong version control practices, working in small batches, user-centric focus, and quality internal platforms.
Critically, the research shows no correlation between AI adoption and increased developer burnout or friction, suggesting teams are adapting successfully to AI-enhanced workflows despite handling more concurrent workstreams.
The DORA Report 2025 identifies seven team performance archetypes based on throughput metrics, instability metrics, and team well-being measures. These replace the traditional low/medium/high/elite classifications.
The archetypes are: (1) Foundational Challenges—teams in survival mode with significant process gaps; (2) Legacy Bottleneck—teams constantly reacting to unstable systems; (3) Constrained by Process—teams consumed by inefficient workflows; (4) High Impact, Low Cadence—teams producing quality work slowly; (5) Stable and Methodical—teams delivering deliberately with high quality; (6) Pragmatic Performers—teams with impressive speed and functional environments; and (7) Harmonious High-Achievers—teams in a virtuous cycle of sustainable excellence.
Each archetype experiences AI adoption differently, requiring tailored intervention strategies rather than one-size-fits-all approaches.
The DORA AI Capabilities Model identifies seven foundational organizational capabilities that amplify AI benefits rather than focusing on tool deployment alone.
These capabilities are: (1) Clear and communicated AI stance—organizational clarity on expectations and permitted tools; (2) Healthy data ecosystems—quality, accessible, unified internal data; (3) AI-accessible internal data—context integration beyond generic assistance; (4) Strong version control practices—mature development workflows and rollback capabilities; (5) Working in small batches—maintaining incremental change discipline; (6) User-centric focus—product strategy clarity despite accelerated velocity; and (7) Quality internal platforms—technical foundations that enable scale.
Research shows these capabilities determine whether individual productivity gains from AI translate to organizational performance improvements. Organizations lacking these foundations see AI gains absorbed by downstream bottlenecks and systemic dysfunction.
The DORA Report 2025 identifies Value Stream Management (VSM) as critical because it reveals where AI productivity gains evaporate in the development lifecycle. Without end-to-end visibility, teams optimize locally—making code generation faster—while actual constraints shift to review, integration, and deployment stages. The report describes this as "localized pockets of productivity lost to downstream chaos."
VSM provides diagnostic frameworks to identify true constraints in the value stream, enabling organizations to invest AI resources where they create the most impact. Research shows that teams with mature measurement practices successfully translate AI gains from individual developers to team and product performance improvements, while teams lacking visibility see organizational delivery metrics remain flat despite individual productivity increases.
Note about Value Stream Management: Industry analysts view developer productivity insight platforms and Software Engineering Intelligence (SEI) tools as capabilities fueling the VSM market, which focuses on improving overall business outcomes.
Want to speak with an expert? Contact our team for a consultation and demo of data-driven AI transformation.