Why is Faros AI a credible authority on uncovering hidden underperformance and the ghost engineer phenomenon?
Faros AI is recognized as a leading software engineering intelligence platform, trusted by global enterprises to optimize developer productivity and experience. The platform leverages advanced analytics and data integration to provide visibility into engineering operations, making it uniquely qualified to address challenges like hidden underperformance and the ghost engineer phenomenon. Faros AI's expertise is backed by research, customer success stories, and industry recognition. Learn more about Faros AI's approach to engineering productivity in the Ghost Engineer Phenomenon blog post and explore best practices in the Engineering Productivity Handbook.
What is the 'Ghost Engineer' phenomenon and how does Faros AI address it?
The 'Ghost Engineer' phenomenon refers to software engineers who appear busy but contribute very little, often going unnoticed due to lack of effective measurement. Faros AI helps organizations uncover and address this hidden underperformance by providing data-driven insights into engineering contributions, correlating activity across systems like GitHub and Jira, and contextualizing quantitative data with qualitative feedback. Learn more in the blog post.
Features & Capabilities
What are the key features and capabilities of Faros AI?
Faros AI offers a unified platform that replaces multiple single-threaded tools, providing AI-driven insights, customizable dashboards, and seamless integration with existing workflows. Key capabilities include engineering productivity analytics, software quality management, AI transformation tracking, talent management, DevOps maturity guidance, initiative delivery reporting, developer experience surveys, and automated R&D cost capitalization. Faros AI supports enterprise-grade scalability, handling thousands of engineers, 800,000 builds a month, and 11,000 repositories without performance degradation.
What APIs does Faros AI provide?
Faros AI provides several APIs to support integration and automation, including the Events API, Ingestion API, GraphQL API, BI API, Automation API, and an API Library. These APIs enable organizations to connect Faros AI with their existing tools and workflows for enhanced data analysis and operational efficiency.
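As an illustration only, here is a minimal sketch of how a GraphQL API like this might be queried from Python. The query shape, field names, and endpoint URL are assumptions for demonstration purposes, not the actual Faros AI schema; consult the API Library for the real interface.

```python
import json

def build_graphql_request(query, variables=None):
    """Package a GraphQL query into the JSON body a GraphQL endpoint expects."""
    return {"query": query, "variables": variables or {}}

# Hypothetical query: field names are illustrative, not the real Faros schema.
DEPLOYMENTS_QUERY = """
query RecentDeployments($limit: Int!) {
  deployments(limit: $limit) {
    id
    startedAt
    status
  }
}
"""

payload = build_graphql_request(DEPLOYMENTS_QUERY, {"limit": 10})

# Sending it would require an endpoint and an API token, e.g.:
# import requests
# resp = requests.post("https://api.example.com/graphql",  # placeholder URL
#                      json=payload,
#                      headers={"Authorization": "Bearer <API_TOKEN>"})
print(json.dumps(payload["variables"]))
```

The same request body works for any GraphQL endpoint; only the query text and authentication headers change per API.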
Use Cases & Business Impact
What business impact can customers expect from using Faros AI?
Customers using Faros AI can expect measurable business impact, including a 50% reduction in lead time, a 5% increase in efficiency, enhanced reliability and availability, and improved visibility into engineering operations. These outcomes help accelerate time-to-market, optimize resource allocation, and ensure high-quality products and services.
Who can benefit from Faros AI?
Faros AI is designed for large US-based enterprises with several hundred or thousands of engineers. Target roles include VPs and Directors of Software Engineering, Developer Productivity leaders, Platform Engineering leaders, CTOs, Technical Program Managers, and Senior Architects. The platform provides tailored solutions for each persona, addressing their unique challenges and delivering actionable insights.
What pain points does Faros AI solve for engineering organizations?
Faros AI addresses pain points such as engineering productivity bottlenecks, software quality challenges, AI transformation measurement, talent management, DevOps maturity, initiative delivery tracking, developer experience analysis, and R&D cost capitalization. The platform provides data-driven insights and automation to help organizations optimize workflows, align skills, and improve project outcomes.
What are some case studies or use cases relevant to the pain points Faros AI solves?
Faros AI customers have used platform metrics to make informed decisions on engineering allocation and investment, leading to improved efficiency and resource management. The platform has provided managers with insights into team health, progress, and KPIs, and helped align goals through customizable dashboards. For more examples, visit Faros AI Customer Stories.
Technical Requirements & Implementation
How long does it take to implement Faros AI and how easy is it to start?
Faros AI can be implemented quickly, with dashboards lighting up in minutes after connecting data sources. Git and Jira Analytics setup takes just 10 minutes, making it easy for organizations to get started and begin deriving value rapidly.
What resources are required to get started with Faros AI?
To get started with Faros AI, organizations need Docker Desktop, API tokens, and sufficient system allocation (4 CPUs, 4GB RAM, 10GB disk space).
Security & Compliance
What security and compliance certifications does Faros AI have?
Faros AI holds SOC 2, ISO 27001, and CSA STAR certifications and is GDPR-compliant, demonstrating its commitment to robust security and compliance standards. The platform includes features like audit logging, data security controls, and enterprise integrations to meet enterprise requirements. For more details, visit Faros AI Security.
Support & Training
What customer service and support options are available for Faros AI customers?
Faros AI offers robust customer support, including access to an Email & Support Portal, a Community Slack channel for shared insights, and a Dedicated Slack Channel for Enterprise Bundle customers. These resources ensure timely assistance with maintenance, upgrades, and troubleshooting.
What training and technical support is available to help customers get started with Faros AI?
Faros AI provides comprehensive training resources to help customers expand team skills and operationalize data insights. Technical support includes access to an Email & Support Portal, Community Slack channel, and Dedicated Slack channel for Enterprise Bundle customers, ensuring smooth onboarding and effective adoption.
KPIs & Metrics
What KPIs and metrics does Faros AI use to address engineering pain points?
Faros AI tracks engineering productivity using the DORA metrics (Lead Time for Changes, Deployment Frequency, Mean Time to Restore, and Change Failure Rate), alongside team health and tech debt. Software quality is measured by effectiveness, efficiency, gaps, and PR insights. AI transformation is tracked via adoption, time savings, and impact metrics. Talent management, DevOps maturity, initiative delivery, developer experience, and R&D cost capitalization are all supported by relevant, actionable metrics.
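To make two of these DORA metrics concrete, here is an illustrative sketch of computing lead time for changes and change failure rate from deployment records. The data and record shape are made up for demonstration and do not reflect Faros AI's internal data model.

```python
from datetime import datetime
from statistics import median

# Illustrative deployment records (made-up data):
# commit time, deploy time, and whether the deploy caused a production failure.
deployments = [
    {"committed": datetime(2025, 9, 1, 9), "deployed": datetime(2025, 9, 2, 9), "failed": False},
    {"committed": datetime(2025, 9, 2, 9), "deployed": datetime(2025, 9, 4, 9), "failed": True},
    {"committed": datetime(2025, 9, 3, 9), "deployed": datetime(2025, 9, 4, 9), "failed": False},
]

# Lead time for changes: commit-to-deploy duration.
# The median is more robust to outliers than the mean.
lead_times = [d["deployed"] - d["committed"] for d in deployments]
median_lead_time = median(lead_times)

# Change failure rate: share of deployments that caused a failure.
cfr = sum(d["failed"] for d in deployments) / len(deployments)

print(median_lead_time)  # 1 day, 0:00:00
print(round(cfr, 2))     # 0.33
```

Deployment frequency and Mean Time to Restore follow the same pattern: count deploys per time window, and measure incident-open to incident-resolved durations, respectively.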
Blog & Resources
Does Faros AI have a blog and what topics are covered?
Yes, Faros AI maintains a blog that covers topics such as AI, developer productivity, developer experience, best practices, customer stories, and product updates. Explore articles and guides at Faros AI Blog.
Where can I find more articles and customer stories related to Faros AI?
You can find more articles and customer stories on Faros AI's blog by visiting our blog page and the Customer Stories section.
Confession of an over-employed engineer on Reddit:
“Boss thinks I'm overworked 😂 During our one-on-one my boss told me he thinks that they are piling too much work on me and he suggested to hire someone else to help me out. Now obviously this would be a disaster since I average about 5 hours a week. So basically I just discussed with my boss how I'm working out ways to deal with time management but they should save the company money and instead push his manager to give me a promotion. So now I'm getting promoted (no extra work just more money) and they are hiring nobody else. Crisis averted!”
In the second half of 2024, researchers from Stanford University went viral for claims that 9.5% of software engineers at major tech companies get paid big bucks to do virtually nothing. The ongoing research, involving over 50,000 software engineers, is focused on developing a more accurate and effective way to measure software engineering productivity.
The researchers coined the term “ghost engineer,” explaining that it refers only to engineers whose primary responsibility is to write code. It excludes engineers in managerial roles and those found to contribute in other ways. To further validate the findings, they confirmed with the participating organizations that these individuals are not performing legitimate ancillary activities that would justify their low-code contributions, such as sales efforts, mentoring, or architecture work.
The research’s methodology, model, and findings were met with widespread backlash from the software engineering community—similar to when McKinsey released their framework for measuring engineering productivity a year prior.
However, the phenomenon of appearing hard at work while hardly working is not new. And there is plenty of anecdotal evidence, not to mention 392,000 members on a subreddit devoted to the topic.
“Everyone thinks this is an exaggeration but there are so many software engineers, not just at FAANG [Facebook, Apple, Amazon, Netflix and Google], who I know personally who literally make ~2 code changes a month, few emails, few meetings, remote work, < 5 hours/ week, for ~$200-300k,” tweeted Deedy Das, a principal at Menlo Ventures, in November 2024.
Over the last several years, the term “quiet quitting” has spread rampantly across the internet. It refers to doing the bare minimum requirements of one's job and putting in no more time, effort, or enthusiasm than absolutely necessary.
In light of a Gallup poll suggesting that quiet quitters make up at least 50% of the US workforce, it’s important to consider how the situation impacts their peers, managers, company, and professional community.
Ghost engineers typically take quiet quitting one step further—often performing so minimally that they are not meeting the lowest requirements of their roles. However, their organizations are partially to blame for letting them get away with it.
For the record, defining and measuring software engineering productivity is nuanced and complex. Beyond writing code, engineers spend time on design, planning, mentorship, and solving complex problems—activities that are essential but often hard to quantify. And yes, some roles, particularly at senior levels, don’t involve hands-on coding work.
That being said, for software engineers hired with the primary responsibility of writing code, consistently not doing so represents a real issue that warrants attention. What’s at stake? Organizational inefficiencies, missed deadlines, wasted resources, and decreased team morale will ultimately negatively affect the P&L and erode customer satisfaction.
What can contribute to hidden underperformance?
There is likely no single reason for the ghost engineer phenomenon, but rather a combination of contributing factors, each requiring its own mitigation.
The shift to remote work
Over the last decade, remote workers in the US tech sector have increased dramatically. The COVID-19 pandemic caused a massive shift to remote work, with both the number and percentage of remote workers more than tripling. And while the percentage has plateaued and even slightly decreased in some sectors, it remains significantly higher than pre-pandemic levels.
A 2024 study by the U.S. Bureau of Labor Statistics found that industries with a higher increase in remote work also experienced substantial increases in output, suggesting a positive correlation between remote work and productivity.
But for all its advantages, some employees have taken this as an opportunity to play the system. Take, for example, “over-employment,” the practice wherein employees secretly take on two or more remote jobs simultaneously. In most cases, double-dipping developers struggle to dedicate sufficient time and effort to either role, which often shows up in the form of unavailability, inconsistency, and notable underperformance.
Companies that thrive in this era are learning to address these hidden underperformance challenges, creating systems that balance autonomy with collaboration, ensuring every voice remains active and engaged.
Ambiguous expectations
Many organizations recognize the importance of structured career progression frameworks for software engineers. Also known as career ladders, these frameworks describe clear advancement paths through multiple levels of seniority. However, they rarely include quantifiable contribution metrics that can be used to benchmark employees. Why is that?
In the development world, there’s a pervasive belief that counting one’s contributions is taboo. The working assumption is that software engineers are incredibly smart and talented, will naturally know what’s expected of them, and will deliver great work. The uproar following McKinsey’s article on measuring software engineering productivity highlighted just how deeply this resistance runs.
However, for some employees, the lack of clear expectations creates an environment where ambiguity can be exploited, making it easier to coast by with hidden underperformance or contribute only the bare minimum.
Organizational sluggishness
As organizations grow in size and complexity, their processes must evolve to support new and maturing objectives. To combat the infamous sluggishness of large companies, more people are hired to coordinate, manage dependencies, and monitor progress of key initiatives. In fact, Faros AI’s data shows that up to 25% of software engineering employees are “bureaucrats”—roles that focus on process, not coding.
While having the right systems in place is critical, overcomplicating procedures can backfire. The abundance of meetings, new reporting requirements, and multi-step approval processes negatively impact overall productivity. When excessive bureaucracy stifles creativity and agility, morale also suffers.
At this tipping point, some engineers may decide it's not worth their while to invest effort in areas they see as beyond their control. Instead, they disengage and become ghost engineers, choosing to stay in the background and contribute just enough to avoid drawing attention.
How to spot ghost engineers
Fortunately, the first step to identifying ghost engineers in your organization is easier than leaders might think. Engineering tools and collaboration systems capture the digital breadcrumbs of engineers' contributions during their daily work.
Platforms like Faros AI use this data to produce a sophisticated contribution analysis for engineers in coding roles while accounting for all the mitigating circumstances (parental leave, sick leave, vacation, etc.).
Contribution need not be examined through a single lens. As mentioned above, developers contribute value by leading projects, designing solutions, mentoring junior team members, interviewing new candidates, and more. But the absence of code contribution—when it’s expected—should at least warrant further investigation.
Once you have an initial readout, you can validate the data with line managers and determine whether issues stem from individual performance, misaligned expectations, or broader process inefficiencies.
Three steps to address ghost engineers
Whether due to unclear expectations, disengagement, or a lack of accountability, ghost engineers can quietly drain productivity and morale. Addressing this issue requires a structured approach that combines clear expectations, data-driven insights, and qualitative feedback. Here’s how to tackle it effectively.
Step 1: Set clear expectations
With employee engagement sinking to a 10-year low, the importance of clear expectations cannot be overstated. When developers lack clarity around their roles, responsibilities, and project goals, confusion and frustration take root, creating the perfect storm for disengagement and burnout. Clear expectations and well-defined contribution baselines can eliminate ambiguity and give developers the direction to focus and thrive.
Managers should clearly define expectations and role-specific productivity baselines, set SMART goals, and align individual contributions with team objectives to lay a foundation for developers to perform at their best. If you are concerned with hidden underperformance, this would be a good time to revisit your career ladders to ensure they accurately reflect your expectations. Then, make sure to communicate them clearly to your teams.
Setting clear expectations is just the start. To meet them, developers need the right tools, manageable workloads, and a culture that values their growth and contributions. When employees feel supported and recognized, they’re motivated to go beyond the minimum.
Combine transparency with a clear connection to the company’s broader mission, and you create an environment where developers are engaged and empowered to deliver exceptional results, lowering the likelihood of hidden underperformance.
Step 2: Identify patterns of underperformance in data
To uncover patterns of underperformance, analyze an engineer’s visible activity across systems like GitHub, Jira, and their calendar over time. For instance, an engineer may have minimal code contributions or reviews in GitHub, while also showing low activity in task management systems like Jira or Asana—fewer tasks created, completed, or moved through workflows. Additionally, if calendar data shows they aren’t typically engaged in interviews, meetings, or collaborative sessions, this could signal potential hidden underperformance.
Next, compare this data against team norms and peers in similar roles with similar expectations. Are others at the same seniority level or with similar workloads delivering more consistent results? Are this individual’s contributions near the average or far below it?
If workflows or dependencies are slowing multiple team members, the issue is likely not individual. However, repeated and sustained gaps across tasks, contributions, and collaboration—especially when team processes seem otherwise functional—are strong indicators of a deeper issue.
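The peer comparison described above can be sketched as a simple outlier check. The activity counts, weighting, and threshold below are illustrative assumptions; a real analysis would weight signals per role and account for mitigating circumstances such as parental leave, sick leave, and vacation.

```python
from statistics import mean, stdev

# Illustrative monthly activity scores per engineer (made-up data):
# merged PRs, code reviews, and completed tickets summed into one number
# for simplicity -- a real analysis would weight these per role.
activity = {
    "alice": 28, "bob": 31, "carol": 25, "dave": 2, "erin": 27,
}

scores = list(activity.values())
mu, sigma = mean(scores), stdev(scores)

# Flag engineers more than 1.5 standard deviations below the team average.
# Sustained gaps across months, not a single low reading, are what
# warrant a follow-up conversation.
flagged = [name for name, s in activity.items() if s < mu - 1.5 * sigma]

print(flagged)  # ['dave']
```

A flag here is a prompt for investigation, not a verdict: the next step is validating the readout with line managers and qualitative context, as described in Step 3.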
It’s critical to remember that different roles within a software engineering team will naturally have varied expectations and responsibilities, affecting how their data appears across tools and systems. That’s why clearly defining those expectations is so critical.
For example, senior engineers or team leads may have less hands-on coding time, but should be contributing more through mentoring, design reviews, or cross-team collaboration, which would be evident in higher levels of code review activity or meeting facilitation. Junior developers, on the other hand, may be expected to focus more on individual coding tasks and have more direct output in GitHub or task management tools like Jira.
For roles that span multiple responsibilities, such as full-stack developers or those involved in both coding and DevOps, you’ll want to evaluate a combination of activity across tasks, code contributions, and even collaboration efforts to get a clearer picture.
Step 3: Contextualize with qualitative insights
Holding regular 1:1s with individual team members, in conjunction with reviewing survey responses, is invaluable for uncovering additional context behind the numbers. These conversations and responses can reveal whether a lack of productivity stems from unclear expectations, personal challenges, or team-wide blockers. They also provide an opportunity for employees to share their perspectives on their workload, contributions, and any support they may need to improve their productivity.
Furthermore, team retrospectives complement these insights by surfacing feedback from colleagues who may have more direct visibility into an individual’s work. This is especially important for recognizing contributions that aren’t easily quantifiable, such as mentoring, resolving team-wide technical issues, or supporting cross-functional collaboration.
By triangulating patterns from quantitative data with qualitative input from multiple angles, managers can assess performance holistically and identify the root causes of challenges.
Building a culture of accountability, efficiency, and transparency
Identifying ghost engineers and surfacing their hidden underperformance is not about creating a cutthroat environment or implementing practices like rank-and-yank, which can erode trust, collaboration, and morale.
Instead, the focus should be building a culture rooted in transparency, accountability, and balance, wherein individuals and teams feel connected to, cared for, and supported by their managers. This means being upfront about expectations, fostering open communication, and using data and context to create a fair and objective process for evaluating software engineering performance and contributions.
By striving for balance—encouraging innovation and creativity without overlooking hidden underperformance—companies can ensure their teams are productive, motivated, engaged, and aligned with the organization’s goals.
Contact us today to learn more about how Faros AI can help connect the dots and reveal productivity issues in your organization.
Neely Dunlap
Neely Dunlap is a content strategist at Faros AI who writes about AI and software engineering.