How to Measure AI Productivity in Software Engineering

Most AI tools don’t improve delivery. The GAINS framework helps engineering leaders measure real productivity impact across 10 transformation dimensions—from throughput to organizational efficiency.

Thierry Donneau-Golencer
June 23, 2025

Most AI investments stall in delivery. Here’s how top engineering orgs are changing that.

As generative AI becomes embedded in daily engineering workflows, one question keeps surfacing:

How do we measure real productivity gains from AI in software development?

Despite the rapid rise of coding assistants and autonomous agents, most engineering organizations struggle to quantify AI’s true impact, let alone realize it in full. Traditional metrics don’t tell the full story, and in many cases the story they tell is misleading.

That’s why leading CTOs are turning to GAINS™—the Generative AI Impact Net Score—a framework designed to benchmark AI maturity, identify organizational friction, and tie AI usage directly to engineering and business outcomes.

{{cta}}

In this article, we introduce the 10 dimensions that matter most when measuring AI productivity in software engineering—and why they’re essential for scaling impact.

What Is GAINS™? A diagnostic built for AI at scale

GAINS was developed from an extensive dataset covering more than 10,000 engineers across 1,255 teams, combining telemetry data (e.g., commits, CI/CD, incidents), deep agent activity signals, and qualitative developer feedback. The result: a single, standardized metric that captures both the technical and human dimensions of AI’s impact.
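As a thought experiment, here is a minimal sketch of how heterogeneous inputs like these could be standardized onto a common scale, assuming each raw signal is percentile-ranked against a peer benchmark. The signal names, benchmark values, and percentile approach are illustrative assumptions; the article does not specify how Faros AI actually normalizes its inputs.

    # Hypothetical normalization: percentile-rank each raw signal against
    # a peer benchmark so telemetry, agent activity, and survey data all
    # land on the same 0-100 scale. All names and numbers are illustrative.
    from bisect import bisect_right

    def percentile_score(value, benchmark):
        """Share of the benchmark population this value meets or beats."""
        ordered = sorted(benchmark)
        return 100.0 * bisect_right(ordered, value) / len(ordered)

    # One team's raw signals: telemetry, agent activity, survey feedback.
    pr_cycle_hours = 18.0       # telemetry: median PR cycle time
    agent_tasks_per_dev = 4.2   # agent activity: completed tasks per dev/week
    survey_trust = 3.8          # qualitative: "I trust AI suggestions" (1-5)

    peer_cycle_hours = [12.0, 20.0, 26.0, 35.0, 48.0]
    peer_agent_tasks = [0.5, 1.0, 2.5, 5.0, 9.0]
    peer_trust = [2.1, 2.9, 3.3, 3.9, 4.4]

    # Lower cycle time is better, so invert that signal before combining.
    velocity = 100 - percentile_score(pr_cycle_hours, peer_cycle_hours)
    usage = percentile_score(agent_tasks_per_dev, peer_agent_tasks)
    satisfaction = percentile_score(survey_trust, peer_trust)
    print(velocity, usage, satisfaction)  # -> 80.0 60.0 60.0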

Structured across ten key dimensions, from code quality and delivery velocity to agent enablement and organizational efficiency, GAINS functions as a diagnostic. Its insights serve as a strategic compass for technology leaders seeking to unlock additional value through data-backed intervention. 

With GAINS, technology leaders can:

  • Benchmark AI adoption and maturity across teams, tools, and peers
  • Quantify productivity gains and organizational efficiencies
  • Tie engineering outcomes directly to financial performance
  • Identify where AI is driving the most value, and where it’s falling short

In short, GAINS transforms AI deployment from a leap of faith into a data-driven discipline.

The 10 dimensions that define AI performance

GAINS measures performance across ten transformation dimensions that define modern engineering readiness for AI.

Ten AI transformation dimensions to measure in software engineering

These ten categories are synthesized into a single GAINS score, calculated quarterly and benchmarked across organizations; a simplified sketch of this roll-up follows the list:

  1. Adoption: Measures the spread and consistency of AI tooling and agent usage across engineering teams.
  2. Usage: Tracks how frequently and deeply AI capabilities are embedded in day-to-day engineering work.
  3. Change Management: Assesses the organization’s readiness to support and scale a hybrid human-agent workforce.
  4. Velocity: Captures how AI accelerates throughput by optimizing development and delivery workflows.
  5. Quality: Monitors AI’s impact on code maintainability and defect rates.
  6. Security: Ensures that AI contributions meet governance, compliance, and risk management standards.
  7. Flow: Evaluates the smoothness of execution, including reductions in handoffs, idle time, and context switching.
  8. Satisfaction: Reflects developer sentiment, trust in AI tools, and confidence in working alongside agents.
  9. Onboarding: Measures how quickly both new developers and AI systems can become productive contributors.
  10. Organizational Efficiency: Evaluates how well the organization's structure, roles, and platforms support scaled AI impact.
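To make the roll-up concrete, here is a minimal sketch of how ten normalized sub-scores could combine into one composite number. The dimension names come from the list above; the 0-100 scale, default equal weighting, and weighted-mean formula are illustrative assumptions rather than the actual GAINS methodology.

    # Hypothetical roll-up of ten 0-100 sub-scores into one composite score.
    DIMENSIONS = [
        "adoption", "usage", "change_management", "velocity", "quality",
        "security", "flow", "satisfaction", "onboarding", "org_efficiency",
    ]

    def gains_score(sub_scores, weights=None):
        """Weighted mean of per-dimension sub-scores (equal weights by default)."""
        weights = weights or {d: 1.0 for d in DIMENSIONS}
        total = sum(weights[d] for d in DIMENSIONS)
        return sum(sub_scores[d] * weights[d] for d in DIMENSIONS) / total

    # Example: one org's quarterly sub-scores (hypothetical values).
    q3 = {
        "adoption": 72, "usage": 64, "change_management": 55, "velocity": 68,
        "quality": 80, "security": 77, "flow": 59, "satisfaction": 70,
        "onboarding": 62, "org_efficiency": 51,
    }
    print(f"GAINS (Q3): {gains_score(q3):.1f}")  # -> GAINS (Q3): 65.8

Weighting is the natural lever here: an organization early in its rollout might weight Adoption and Change Management more heavily, then shift weight toward Velocity and Quality as usage matures.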

{{cta}}

GAINS is a diagnostic system for AI transformation

More than a score, GAINS is also an ongoing diagnostic system for AI transformation.

GAINS measures where AI is being underused, where it’s blocked, and what’s holding it back. Whether the friction lies in tooling, integration, process design, or team structure, GAINS surfaces the root causes and turns them into actionable insights.

Validated through advanced statistical modeling, GAINS correlates directly with objective engineering outcomes. Each dimension ties AI activity to business performance, quantifying what’s working and where value is being lost.

Because every point of GAINS improvement corresponds to real engineering hours saved and hard-dollar returns, GAINS becomes a financial instrument for managing your AI strategy.
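To illustrate the arithmetic, here is a back-of-the-envelope sketch of translating a GAINS improvement into dollar terms. Every conversion factor (hours saved per point, loaded hourly cost, organization size) is a hypothetical assumption for illustration; the article asserts the point-to-hours relationship but does not publish the factors.

    # Hypothetical conversion of GAINS points into quarterly dollar value.
    engineers = 500                      # assumed org size
    hours_per_point_per_eng = 1.5        # assumed hours saved/quarter per point
    loaded_cost_per_hour = 120.0         # assumed fully loaded cost, USD

    def quarterly_value(points_gained):
        hours = points_gained * hours_per_point_per_eng * engineers
        return hours * loaded_cost_per_hour

    # Example: moving a 500-engineer org from GAINS 62 to 68.
    print(f"${quarterly_value(68 - 62):,.0f} per quarter")  # -> $540,000 per quarter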

For executives and AI transformation leaders, GAINS is a tool for:

  • Building a credible business case for continued AI investment
  • Setting strategic targets for automation, orchestration, and adoption
  • Aligning engineering and finance around shared metrics of success
  • Reporting AI progress and impact transparently to boards, investors, and senior leadership

Why GAINS matters now—and what’s coming next

Generative AI is changing how software gets built—but unless organizations can measure what matters, even the best-intentioned strategies risk stalling.

GAINS gives engineering and platform leaders a new lens—one that connects AI activity to business performance, identifies bottlenecks, and prioritizes the right next moves.

Every point of GAINS improvement corresponds to real hours saved, better throughput, and measurable ROI. That’s why early adopters aren’t just deploying AI—they’re operationalizing it.

Want to know what’s working, what’s lagging, and what’s next for your AI investment?

{{cta}}

Thierry Donneau-Golencer

Thierry is Head of Product at Faros AI, where he builds solutions to empower teams and drive engineering excellence. His previous roles include AI research (Stanford Research Institute), an AI startup (Tempo AI, acquired by Salesforce), and large-scale business AI (Salesforce Einstein AI).