How to Measure AI Productivity in Software Engineering

Most AI tools don’t improve delivery. The GAINS framework helps engineering leaders measure real productivity impact across 10 transformation dimensions—from throughput to organizational efficiency.

Thierry Donneau-Golencer
June 23, 2025 · 3 min read

Most AI investments stall in delivery. Here’s how top engineering orgs are changing that.

As generative AI becomes embedded in daily engineering workflows, one question keeps surfacing:

How do we measure real productivity gains from AI in software development?

Despite the rapid rise of coding assistants and autonomous agents, most engineering organizations struggle to quantify AI’s true impact, let alone realize it. Traditional metrics don’t tell the full story—and in many cases, the story they tell is misleading.

That’s why leading CTOs are turning to GAINS™—the Generative AI Impact Net Score—a framework designed to benchmark AI maturity, identify organizational friction, and tie AI usage directly to engineering and business outcomes.


In this article, we introduce the 10 dimensions that matter most when measuring AI productivity in software engineering—and why they’re essential for scaling impact.

What Is GAINS™? A diagnostic built for AI at scale

GAINS was developed from an extensive dataset covering more than 10,000 engineers across 1,255 teams, combining telemetry data (e.g., commits, CI/CD, incidents), deep agent activity signals, and qualitative developer feedback. The result: a single, standardized metric that captures both the technical and human dimensions of AI’s impact.

Structured across ten key dimensions, from code quality and delivery velocity to agent enablement and organizational efficiency, GAINS functions as a diagnostic. Its insights serve as a strategic compass for technology leaders seeking to unlock additional value through data-backed intervention. 
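To make the idea of a single, standardized metric concrete, here is a minimal sketch of how heterogeneous inputs (telemetry counts, agent activity signals, survey responses) could be normalized into a comparable 0–100 dimension score. The schema, signal names, and benchmark values are hypothetical illustrations, not the published GAINS methodology.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One raw input feeding a GAINS-style dimension (hypothetical schema)."""
    value: float            # observed value, e.g., PRs merged per engineer per week
    benchmark_low: float    # benchmark lower bound for this signal
    benchmark_high: float   # benchmark upper bound for this signal
    higher_is_better: bool = True

def normalize(signal: Signal) -> float:
    """Map a raw signal onto a 0-100 scale relative to its benchmark range."""
    span = signal.benchmark_high - signal.benchmark_low
    score = (signal.value - signal.benchmark_low) / span if span else 0.0
    score = min(max(score, 0.0), 1.0)   # clamp to the benchmark range
    if not signal.higher_is_better:     # e.g., defect rate: lower is better
        score = 1.0 - score
    return round(score * 100, 1)

def dimension_score(signals: list[Signal]) -> float:
    """Average the normalized signals that make up one dimension (e.g., Quality)."""
    return round(sum(normalize(s) for s in signals) / len(signals), 1)

# Hypothetical "Quality" dimension built from telemetry and survey inputs
quality = dimension_score([
    Signal(value=0.8, benchmark_low=0.5, benchmark_high=1.0),                      # code review coverage
    Signal(value=12, benchmark_low=5, benchmark_high=30, higher_is_better=False),  # defects per KLOC
    Signal(value=3.9, benchmark_low=1.0, benchmark_high=5.0),                      # survey: confidence in AI-written code
])
print(quality)  # 68.2 with the placeholder inputs above
```

Min-max normalization against benchmark bounds is only one reasonable choice here; z-scores against a peer cohort would serve the same purpose.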

With GAINS, technology leaders can:

  • Benchmark AI adoption and maturity across teams, tools, and peers
  • Quantify productivity gains and organizational efficiencies
  • Tie engineering outcomes directly to financial performance
  • Identify where AI is driving the most value, and where it’s falling short

In short, GAINS transforms AI deployment from a leap of faith into a data-driven discipline.

The 10 dimensions that define AI performance

GAINS measures performance across ten transformation dimensions that define modern engineering readiness for AI.

Ten AI transformation dimensions to measure in software engineering

These ten categories are synthesized into a single GAINS score, calculated quarterly and benchmarked across organizations:

  1. Adoption: Measures the spread and consistency of AI tooling and agent usage across engineering teams.
  2. Usage: Tracks how frequently and deeply AI capabilities are embedded in day-to-day engineering work.
  3. Change Management: Assesses the organization’s readiness to support and scale a hybrid human-agent workforce.
  4. Velocity: Captures how AI accelerates throughput by optimizing development and delivery workflows.
  5. Quality: Monitors AI’s impact on code maintainability and defect rates.
  6. Security: Ensures that AI contributions meet governance, compliance, and risk management standards.
  7. Flow: Evaluates the smoothness of execution, including how effectively AI reduces handoffs, idle time, and context switching.
  8. Satisfaction: Reflects developer sentiment, trust in AI tools, and confidence in working alongside agents.
  9. Onboarding: Measures how quickly both new developers and AI systems can become productive contributors.
  10. Organizational Efficiency: Evaluates how well the organization's structure, roles, and platforms support scaled AI impact.
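As an illustration of how these ten dimension scores could roll up into one quarterly number, here is a minimal sketch. The per-dimension scores, equal weighting, and peer benchmark are placeholders, not the actual GAINS weighting.

```python
# Hypothetical per-dimension scores (0-100) for one quarter; values are placeholders
dimension_scores = {
    "adoption": 72, "usage": 64, "change_management": 55, "velocity": 61,
    "quality": 68, "security": 74, "flow": 58, "satisfaction": 70,
    "onboarding": 49, "org_efficiency": 52,
}

# Equal weighting is an assumption; a real framework may weight dimensions differently
weights = {name: 1 / len(dimension_scores) for name in dimension_scores}

composite = sum(score * weights[name] for name, score in dimension_scores.items())

peer_benchmark = 58.0  # placeholder peer median for the same quarter
print(f"GAINS-style composite: {composite:.1f} (peer benchmark: {peer_benchmark})")
print("Weakest dimensions:", sorted(dimension_scores, key=dimension_scores.get)[:3])
```

Keeping the per-dimension scores visible alongside the composite is what makes this useful as a diagnostic rather than a vanity number: the weakest dimensions point to where intervention is needed.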


GAINS is a diagnostic system for AI transformation

More than a score, GAINS is also an ongoing diagnostic system for AI transformation.

GAINS measures where AI is being underused, where it’s blocked, and what’s holding it back. Whether the friction lies in tooling, integration, process design, or team structure, GAINS surfaces the root causes and turns them into actionable insights.

Validated through advanced statistical modeling, GAINS correlates directly with objective engineering outcomes. Each dimension ties AI activity to business performance, quantifying what’s working and where value is being lost.

Because every point of GAINS improvement corresponds to real engineering hours saved and hard-dollar returns, GAINS becomes a financial instrument for managing your AI strategy.
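The hours-and-dollars claim can be made concrete with back-of-the-envelope arithmetic. Every rate in this sketch is an invented placeholder, not a Faros AI conversion factor; the point is only to show how a score delta would translate into a financial figure.

```python
# All figures below are hypothetical placeholders, not published conversion rates
engineers = 500
hours_saved_per_point_per_engineer = 1.5   # assumed hours saved per engineer, per quarter, per GAINS point
loaded_cost_per_hour = 110.0               # assumed fully loaded cost of an engineering hour (USD)

score_improvement = 4                      # e.g., the composite moved from 58 to 62 quarter over quarter
hours_saved = engineers * hours_saved_per_point_per_engineer * score_improvement
dollar_value = hours_saved * loaded_cost_per_hour

print(f"Estimated hours saved this quarter: {hours_saved:,.0f}")   # 3,000
print(f"Estimated value: ${dollar_value:,.0f}")                    # $330,000
```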

For executives and AI transformation leaders, GAINS is a tool for:

  • Building a credible business case for continued AI investment
  • Setting strategic targets for automation, orchestration, and adoption
  • Aligning engineering and finance around shared metrics of success
  • Reporting AI progress and impact transparently to boards, investors, and senior leadership

Why GAINS matters now—and what’s coming next

Generative AI is changing how software gets built—but unless organizations can measure what matters, even the best-intentioned strategies risk stalling.

GAINS gives engineering and platform leaders a new lens—one that connects AI activity to business performance, identifies bottlenecks, and prioritizes the right next moves.

Every point of GAINS improvement corresponds to real hours saved, better throughput, and measurable ROI. That’s why early adopters aren’t just deploying AI—they’re operationalizing it.

Want to know what’s working, what’s lagging, and what’s next for your AI investment?
