
AI transformation leaders rely on Faros AI to navigate critical decisions in AI adoption, impact, and ROI. And as the AI coding landscape evolves, models improve, and the tools become more powerful, new questions emerge.
We're announcing strategic additions to our industry-leading AI Transformation product to help answer your most critical questions. These include the advanced metrics every engineering leader needs to measure productivity gains from AI coding tools:
{{cta}}
This product release honors Toru Iwatani, creator of Pac-Man, whose pioneering ghost algorithms established distinct AI personalities in gaming. The four ghosts collaborate without explicit coordination, with their individual patterns naturally forming team tactics. Though not "intelligent" by today's standards, they responded dynamically to player behavior, creating an illusion of personality and adaptability that mirrors modern human-centered AI principles.
The essence of Iwatani's design—AI that feels alive, collaborative, and responsive—mirrors how modern AI systems aim to work with people, not just for them.
Let's dive in.
You've rolled out AI coding tools across your organization: GitHub Copilot, Cursor, Claude Code, Windsurf, Augment, Devin, and others. But here's the real question: Which developer usage behaviors actually move the needle on productivity metrics, like velocity and quality?
The Iwatani Release introduces rich developer behavior insights that connect tool usage patterns to engineering outcomes.
Start by answering this question: From all the AI coding assistants at their disposal, which tool do your developers prefer?
Faros AI measures adoption across all the AI coding tools in your stack, so you can instantly see where their preference lies. Usage data is available at every level of your organization, all the way down to the individual team.

Key questions AI transformation leaders should ask when analyzing AI coding assistant popularity:
Understanding tool preference is just the first step. There’s more to learn from digging deeper into the specific features developers actually use within each tool. This data reveals what developers find most valuable and where they see the biggest benefits.
For each of your AI coding tools, Faros AI measures the usage of its capabilities, which may include:

Key questions AI transformation leaders should ask when analyzing AI coding tool feature-level usage:
The main goal of AI tools is to boost engineering productivity, so it's important to figure out how often developers need to use them to see real benefits. In other words, what's the minimum usage frequency required for developers to experience clear improvements in their speed, output, and code quality?
First, Faros AI allows you to see at a glance how AI coding tool usage is progressing over time. The data is clearly visualized across a timeline, revealing the trends and inflection points driving AI adoption.

Next, Faros AI connects usage frequency with productivity and quality metrics to find the sweet spot where AI adoption creates real, measurable improvements. You can compare the impact across five categories: no usage, infrequent usage, moderate usage, frequent usage, and power usage.
In the chart examples featured below, you can see how different usage levels impact key velocity metrics.

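To make the bucketing concrete, here is a minimal sketch of how usage levels might be correlated with a velocity metric like cycle time. The thresholds and sample data are hypothetical; Faros AI's actual bucket definitions are not public.

```python
from statistics import mean

# Hypothetical weekly-interaction thresholds for each usage bucket.
BUCKETS = [
    ("no usage", 0, 0),
    ("infrequent", 1, 4),
    ("moderate", 5, 14),
    ("frequent", 15, 29),
    ("power", 30, float("inf")),
]

def bucket_for(weekly_interactions):
    """Map a developer's weekly AI interactions to a usage bucket."""
    for name, lo, hi in BUCKETS:
        if lo <= weekly_interactions <= hi:
            return name
    raise ValueError(weekly_interactions)

def cycle_time_by_bucket(devs):
    """devs: list of (weekly_interactions, avg_cycle_time_hours) pairs.
    Returns the mean cycle time per usage bucket (empty buckets omitted)."""
    grouped = {}
    for interactions, cycle_time in devs:
        grouped.setdefault(bucket_for(interactions), []).append(cycle_time)
    return {name: round(mean(times), 1) for name, times in grouped.items()}

# Illustrative data: heavier AI users shipping with shorter cycle times.
sample = [(0, 52.0), (3, 48.5), (10, 41.0), (22, 33.5), (40, 30.0), (45, 28.0)]
print(cycle_time_by_bucket(sample))
```

A dashboard like the ones shown here is essentially this aggregation, computed continuously across every team and tool.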
Key questions AI transformation leaders should ask when analyzing the correlation between adoption and impact:
{{cta}}
Engineering leaders are increasingly seeking to understand how the nature of software development is shifting as AI agents begin to play an active role in their codebases.
With major organizations like Meta and Microsoft reporting that roughly 30% of their code is now AI-generated, the question is no longer if AI is reshaping software engineering, but how much of it AI now drives.
The Iwatani Release gives engineering leaders the visibility they need to answer three key questions:
Is this code AI-generated? Everyone wants to know. Understanding AI’s footprint is critical, because it directly affects:
Now, AI transformation leaders can measure the percentage of their codebase authored by AI, with detailed views per AI tool, repository, and team.

By making AI’s contribution measurable, leaders can manage risk, maintain quality, and plan for how teams will evolve alongside their automated counterparts.
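The core of this measurement can be sketched in a few lines. The `ai_tool` attribution field below is hypothetical; in practice, attribution comes from tool telemetry rather than raw git metadata.

```python
from collections import defaultdict

def ai_share(commits):
    """commits: list of dicts with 'lines' added and an 'ai_tool' label
    (None for human-authored). Returns (overall AI %, per-tool % of codebase)."""
    total = sum(c["lines"] for c in commits)
    by_tool = defaultdict(int)
    for c in commits:
        if c["ai_tool"]:
            by_tool[c["ai_tool"]] += c["lines"]
    overall = round(100 * sum(by_tool.values()) / total, 1) if total else 0.0
    per_tool = {tool: round(100 * n / total, 1) for tool, n in by_tool.items()}
    return overall, per_tool

# Illustrative commit history for one repository.
commits = [
    {"lines": 400, "ai_tool": None},
    {"lines": 250, "ai_tool": "Copilot"},
    {"lines": 150, "ai_tool": "Claude Code"},
]
print(ai_share(commits))  # half the lines in this sample are AI-authored
```

Grouping the same data by repository or team yields the drill-down views described above.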
Here's some guidance for AI transformation leaders as they examine this data:
Now you can quantify how much productivity AI agents are adding to your organization, with drill-downs available by AI tool, repository, and team. The Iwatani Release helps you understand:
These insights let you compare the units of productivity AI is contributing, informing future capacity planning. Key questions AI transformation leaders should ask when measuring AI agent contributions:
Not all AI-generated code leads to lasting gains. Some of it introduces rework, a reflection of both the quality of AI contributions and how humans review and integrate them. Faros AI enables organizations to track rework rate, a quality signal that complements the DORA metrics, to see where AI-generated or AI-accelerated code is creating inefficiencies.
These insights allow organizations to balance speed with quality, ensuring that AI-driven development results in real productivity improvements rather than hidden waste.
{{cta}}
Once you understand AI coding assistant usage and can measure its impact, you have all the essential metrics to measure productivity gains from AI coding tools and decide which investments are worth the spend. Most importantly, you can finally answer the critical question: which AI coding assistant offers the highest ROI for your organization?
When every model, feature, and token tier carries a cost, knowing which tools truly deliver ROI becomes essential. With Faros AI’s Iwatani Release, the calculation has gotten even more sophisticated.
These are the insights that help AI transformation leaders prioritize renewals and upgrades for tools that drive measurable outcomes, identify underused or low-impact features to optimize license spend, and inform vendor negotiations with data on what’s actually working.
New models emerge, old ones get deprecated, and performance and token consumption fluctuate. Having insight into the models your developers prefer, how much they cost, and what the tradeoffs are for cheaper models is extremely helpful.
As a first step, Faros AI measures which models are used most often per tool.

Then we dig deeper: for any given tool, Faros AI shows you which model developers choose for each specific feature. These insights help you identify which models work best for different tasks and recommend the most cost-effective, high-performing options to your teams.

Most AI coding tools operate on a token-based pricing model, where every interaction consumes tokens. The more tokens used, the higher the cost.
In the Iwatani Release, Faros AI introduced token consumption and cost tracking. Claude Code is the first AI coding tool to expose token usage and cost data through its API. Faros AI ingests this data automatically, giving you unprecedented visibility into the true cost of AI-generated code. See cost per commit, cost per feature, and total spend by team and repository. Evaluate when it’s worth moving up a tier for greater token capacity at a lower effective rate.
You can also get value-per-dollar calculations to determine which AI coding assistant offers the highest ROI. Which tools provide the best bang for your buck? Faros AI calculates cost efficiency ratios, like tokens consumed per commit or cost per productivity gain, so leaders can objectively compare tools and make budget decisions based on data, not vendor promises.
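The ratios named above reduce to simple arithmetic over per-tool usage data. A minimal sketch, with illustrative field names and numbers (real inputs would come from each tool's usage API):

```python
def cost_efficiency(tool_stats):
    """tool_stats: {tool: {'tokens': total tokens, 'cost_usd': total spend,
    'commits': commits shipped with that tool's assistance}}.
    Returns per-tool tokens-per-commit and cost-per-commit for comparison."""
    report = {}
    for tool, s in tool_stats.items():
        commits = s["commits"] or 1  # guard against division by zero
        report[tool] = {
            "tokens_per_commit": round(s["tokens"] / commits),
            "cost_per_commit_usd": round(s["cost_usd"] / commits, 2),
        }
    return report

# Hypothetical monthly numbers for two tools.
stats = {
    "Claude Code": {"tokens": 9_000_000, "cost_usd": 540.0, "commits": 300},
    "Tool B": {"tokens": 4_000_000, "cost_usd": 400.0, "commits": 100},
}
print(cost_efficiency(stats))
```

In this toy example, the pricier-per-token tool still wins on cost per shipped commit, which is exactly the kind of comparison these ratios enable.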
This is the financial clarity AI transformation leaders need: clear ROI metrics that justify AI investments to CFOs and help you optimize your tool portfolio for maximum impact at minimum cost.
Note: Faros AI easily connects to and ingests data from all major AI coding tools. For tools like Claude Code, GitHub Copilot, Cursor, and Windsurf, which expose public APIs, you can connect directly with a simple API token. For other AI coding tools, such as Augment, Faros AI can still capture the necessary data through alternative integrations, ensuring you can build data-driven strategies regardless of which tools your teams use.
Having comprehensive AI adoption metrics is powerful, but what happens when high usage doesn't translate to productivity gains? When developers are actively using AI tools but cycle times haven't improved?
This adoption-impact gap is where many organizations get stuck. The issue usually isn't the tools—it's the organizational systems around them. Successful AI transformation requires strategic alignment across processes, culture, and measurement frameworks that goes beyond tool deployment.
GAINS™ bridges this gap. While the Iwatani Release delivers the metrics to measure productivity gains from AI coding tools, GAINS provides the strategic framework to act on those insights. Schedule your GAINS™ consultation today.
The latest release from Faros AI transforms how organizations measure, optimize, and invest in AI coding tools. From understanding developer behaviors to tracking agentic code contributions to making cost-informed investment decisions, Faros AI gives you the intelligence you need to lead confidently through the AI transformation.
AI is everywhere. With Faros AI, impact is too.
Ready to see how Faros AI can transform your AI strategy? Book your demo today.


