Translating AI-powered Developer Velocity into Business Outcomes that Matter

Discover the three systemic barriers that undermine AI coding assistant impact and learn how top-performing enterprises are overcoming them.

By Neely Dunlap


Two weeks ago, we published the AI Productivity Paradox Report 2025, a landmark study that exposes the disconnect between the adoption of AI coding assistants and their organizational impact. Developer output increases, but engineering outcomes are flat. 

We also identified common AI adoption missteps that explain this paradox, including slow uptake, uneven usage, adoption that skews to less tenured engineers, and surface‑level tool usage. 

{{ai-paradox}}

Today, we examine another angle of the report: the systemic barriers that sap productivity momentum even after AI coding assistants reach critical mass, and what top‑performing companies are doing to beat the odds.

Why AI gains stall: Three systemic barriers

Developers using AI complete 98% more code changes and 21% more tasks. But these gains evaporate at the company level, where no measurable impact, positive or negative, can be observed.

Why is this happening? Three systemic barriers keep coming up in operational fieldwork: 

Summary infographic depicting the three barriers stalling broader AI impact

1. Downstream bottlenecks cancel out upstream gains

AI accelerates code creation, but review queues, brittle test suites, and sluggish release pipelines remain stuck in yesterday's gear. By Amdahl's Law, your delivery engine only moves as fast as its slowest stage, so faster coding simply piles more work onto the choke points.
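Amdahl's Law is easy to make concrete with a quick sketch. The stage timings below are purely illustrative (they are not figures from the report), but they show why doubling coding speed barely moves end-to-end delivery when review, testing, and release stay unchanged:

```python
# Hypothetical hours per change for each pipeline stage
# (illustrative numbers, not data from the Faros report).
pipeline = {
    "coding": 8.0,
    "review": 6.0,
    "testing": 4.0,
    "release": 2.0,
}

def end_to_end_speedup(stages, stage, factor):
    """Amdahl's Law: overall speedup when one stage runs `factor`x faster."""
    baseline = sum(stages.values())
    accelerated = baseline - stages[stage] + stages[stage] / factor
    return baseline / accelerated

# Doubling coding speed (2x) yields only a 1.25x end-to-end gain,
# because the other 12 hours of the pipeline are untouched.
print(end_to_end_speedup(pipeline, "coding", 2.0))  # → 1.25
```

With these numbers, coding is 40% of the pipeline, so even an infinitely fast coding stage could never deliver more than a 1.67x overall speedup. The bottleneck stages set the ceiling.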

2. Grassroots adoption lacks structure and scale

AI adoption is still driven by bottom-up experimentation, with developer enthusiasm undermined by a lack of centralized enablement. Developers spend time navigating tools without guidance, users receive little to no formal training, and there's rarely a strategy tailored to role or experience—resulting in inconsistent outcomes and uneven utilization. Without shared best practices and strong internal communities to socialize tips and recommendations, the organization struggles to convert adoption into lasting impact.

3. Directionless deployment drains ROI

Simply handing out licenses to Copilot, Claude Code, or Cursor isn’t a strategy. Without clear goals, usage policies, and change‑management plans aligned to business priorities, AI becomes “just another tool” instead of a catalyst for transformation.

What high-performing companies do differently

Some companies are seeing greater success and higher ROI from their AI investments. Their edge stems from three mutually reinforcing practices:

A table explaining the three practices to achieve higher AI ROI

Blueprint for operationalizing AI engineering

As software teams transition from AI-assisted coding to agentic development, the complexity and autonomy of AI participation will increase. This creates new coordination demands, where code may be written, reviewed, or executed by agents working in parallel with humans.

Read the comprehensive research to discover practical steps that scale AI through the entire lifecycle, set the stage for agentic development, and ready your organization for the next phase of AI‑driven innovation.

Neely Dunlap

Neely Dunlap is a content strategist at Faros who writes about AI and software engineering.
