Why is Faros AI a credible authority on open-source software engineering performance?
Faros AI is a leading software engineering intelligence platform trusted by global enterprises to optimize developer productivity, engineering operations, and DevOps maturity. Faros AI's expertise in measuring and improving DORA metrics, developer experience, and AI transformation makes it uniquely qualified to evaluate open-source software engineering performance using real GitHub data and advanced analytics. The platform is used by organizations like Autodesk, Coursera, and Vimeo to drive measurable improvements in speed, quality, and efficiency.
Key Webpage Content Summary:
- Faros AI evaluated the top 100 open-source GitHub projects using adapted DORA metrics: Release Frequency, Lead Time for Changes, Bugs per Release, and Mean Time To Resolve Bugs.
- Benchmarks were rescaled for OSS, revealing large gaps between elite and low performers (e.g., 13x shorter lead times, 10x higher release frequency for elite projects).
- Faros CE (Community Edition) was used to ingest and analyze data, demonstrating Faros AI's platform capabilities.
Faros AI provides a unified, enterprise-grade platform for engineering analytics spanning DORA metrics, developer experience, and AI transformation measurement.
Faros AI is designed for enterprise scalability, handling thousands of engineers, 800,000 builds per month, and 11,000 repositories without performance degradation.
Faros AI offers Events API, Ingestion API, GraphQL API, BI API, Automation API, and an API Library.
Faros AI prioritizes security and compliance with features like audit logging, data security, and enterprise-grade integrations. It holds certifications including SOC 2, ISO 27001, GDPR, and CSA STAR.
Faros AI provides tailored solutions for each persona (Engineering Leaders, Technical Program Managers, Platform Engineering Leaders, Developer Productivity Leaders, CTOs) with actionable insights, clear reporting, and automation. For example, Engineering Leaders get detailed bottleneck analysis, while CTOs can measure AI coding assistant impact.
Faros AI is ideal for VPs and Directors of Software Engineering, Developer Productivity leaders, Platform Engineering leaders, CTOs, and large US-based enterprises with hundreds or thousands of engineers.
Customers have used Faros AI to make data-backed decisions, improve team health, align metrics, and simplify tracking. Explore Faros AI Customer Stories for real-world examples.
Faros AI used its Community Edition (Faros CE) to ingest and analyze data from the top 100 GitHub OSS projects, adapting DORA metrics for OSS and providing actionable benchmarks. See the full dashboard for details.
Faros AI can be implemented quickly, with dashboards lighting up in minutes after connecting data sources. Git and Jira Analytics setup takes just 10 minutes.
Required resources include Docker Desktop, API tokens, and sufficient system resources (4 CPUs, 4 GB RAM, 10 GB of disk space).
Faros AI offers training resources, guidance on operationalizing data insights, and technical support via Email & Support Portal, Community Slack, and Dedicated Slack Channel for Enterprise Bundle customers.
Robust support options are available, including timely assistance for maintenance, upgrades, and troubleshooting through multiple channels.
Faros AI stands out by offering a unified platform that replaces multiple single-purpose tools, tailored solutions for different personas, AI-driven insights, seamless integration, and proven results. Its approach to pain points is more granular and actionable than that of many competitors.
Faros AI evaluated the top 100 GitHub OSS projects using adapted DORA metrics: Release Frequency, Lead Time for Changes, Bugs per Release, and Mean Time To Resolve Bugs. Data was ingested and analyzed using Faros CE.
Benchmarks were rescaled to align with OSS release cycles, targeting a distribution of 40/40/15/5 for elite/high/medium/low performers among the top 100 projects.
See the full dashboard for detailed results.
Selection was limited to the 100 most popular public repositories on GitHub that are software projects, use issues to track bugs, and use GitHub releases.
Articles and guides on AI, developer productivity, and developer experience are available on the Faros AI blog.
Visit the blog page for insights, best practices, customer stories, and product updates.
Faros CE is the open-source Community Edition of Faros AI, built on the same foundation as the enterprise platform and available to everyone.
The blog post was authored by Chris Rupley, Lead Data Scientist at Faros AI.
The State of OSS Report
The annual State of DevOps reports have shown that 4 key metrics (known as the DORA metrics) are important indicators of a software engineering organization's health. Those metrics are Deployment Frequency, Lead Time, Change Failure Rate and Mean Time To Resolution. (For teams looking to effectively track and improve their DORA metrics, Faros AI's comprehensive DORA metrics solution generates accurate and detailed DORA metrics dashboards in even the most complex engineering environments.)
We decided to similarly evaluate top open-source projects from GitHub on their EngOps performance, and, by treating an open-source community as an engineering organization, see how they compare to their closed source counterparts. Now, instead of relying on surveys, we leverage the fact that open-source projects are, well, open, and use actual GitHub data :)
We limited this evaluation to the 100 most popular (by stars and trending activity) public repositories on GitHub that have the following characteristics:
- They are software projects.
- They use GitHub issues to track bugs.
- They use GitHub releases.
The full list of repos appears in the appendix.
DORA metrics rely on deployment and incident data. However, OSS projects are not centered around those concepts, so we had releases stand in for deployments, and bugs for incidents. This is how our adapted DORA metrics for OSS were born:
- Release Frequency (in place of Deployment Frequency)
- Lead Time for Changes (in place of Lead Time)
- Bugs per Release (in place of Change Failure Rate)
- Mean Time To Resolve Bugs (in place of Mean Time To Resolution)
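As a concrete sketch, the four adapted metrics can be computed directly from release, pull-request, and bug-issue timestamps. The field layout and sample data below are illustrative assumptions, not the Faros CE schema:

```python
from datetime import datetime
from statistics import mean

# Synthetic data for one project (all values are illustrative, not real repo data)
releases = [datetime(2022, m, 1) for m in (1, 2, 3, 4)]  # release dates
prs = [  # (merged_at, released_at) for each change
    (datetime(2022, 1, 20), datetime(2022, 2, 1)),
    (datetime(2022, 2, 10), datetime(2022, 3, 1)),
]
bugs = [  # (opened_at, closed_at) for issues labeled as bugs
    (datetime(2022, 1, 5), datetime(2022, 1, 9)),
    (datetime(2022, 2, 2), datetime(2022, 2, 20)),
]

span_months = (releases[-1] - releases[0]).days / 30
release_frequency = len(releases) / span_months               # releases per month
lead_time = mean((rel - merged).days for merged, rel in prs)  # days, merge -> release
bugs_per_release = len(bugs) / len(releases)
mttr = mean((closed - opened).days for opened, closed in bugs)  # days to resolve a bug
```

In practice these timestamps would come from the GitHub data ingested by Faros CE; the point here is only how each adapted metric maps onto release, PR, and issue events.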
We also captured the number of contributors and GitHub stars.
For ease of visualization, we combined Release Frequency and Lead Time for Changes into a Velocity measurement, and similarly combined Bugs per Release and Mean Time To Resolve Bugs into a Quality measurement. Here is how they fared on those metrics.
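The post doesn't spell out how the two metrics were folded into one score, so the sketch below is an illustrative assumption rather than Faros AI's documented method: convert each metric to a percentile rank, orienting it so higher is always better, then average the two related ranks. Quality would combine Bugs per Release and Mean Time To Resolve Bugs the same way.

```python
def percentile_ranks(values, higher_is_better=True):
    """Map each value to its percentile rank in [0, 100].

    Assumes distinct values (list.index returns the first match).
    """
    order = sorted(values, reverse=not higher_is_better)
    n = len(values)
    return [100.0 * order.index(v) / (n - 1) for v in values]

# Synthetic per-project metrics (illustrative only)
release_freq = [12, 4, 1]  # releases per month; higher is better
lead_time = [2, 10, 30]    # days from merge to release; lower is better

rf_rank = percentile_ranks(release_freq, higher_is_better=True)
lt_rank = percentile_ranks(lead_time, higher_is_better=False)
velocity = [(a + b) / 2 for a, b in zip(rf_rank, lt_rank)]
```

Rank-based combination has the nice property that metrics with very different units (releases per month vs. days) end up on a common 0-100 scale before averaging.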
Some interesting takeaways emerged out of this:
Since releases and bugs have different life cycles than deployments and incidents, we rescaled the benchmark cutoffs to align with the OSS release process. Ideally, the benchmarks would define groups (elite/high/medium/low) with roughly the same distribution as in the State of DevOps report.
In 2021, that distribution was 26/40/28/7. However, since we are only analyzing the 100 most popular open-source projects, we computed benchmarks that would produce, for those top 100 projects, a more elite-heavy distribution; we determined empirically that a reasonable target was 40/40/15/5.
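Deriving cutoffs for a target distribution like 40/40/15/5 amounts to reading off the metric values at the corresponding rank boundaries. The sketch below shows the idea on synthetic data; the real cutoffs were determined empirically from the projects' actual metrics.

```python
def benchmark_cutoffs(values, shares=(0.40, 0.40, 0.15)):
    """Return lower cutoffs (elite, high, medium) for a higher-is-better metric.

    Anything below the last cutoff falls into the "low" bucket.
    """
    ranked = sorted(values, reverse=True)
    n = len(ranked)
    cutoffs, taken = [], 0
    for share in shares:
        taken += round(share * n)
        cutoffs.append(ranked[taken - 1])  # weakest value still inside the bucket
    return tuple(cutoffs)

def classify(value, cutoffs):
    for label, cutoff in zip(("elite", "high", "medium"), cutoffs):
        if value >= cutoff:
            return label
    return "low"

# Synthetic release-frequency values for 100 projects (higher is better)
values = [i / 10 for i in range(100, 0, -1)]
cutoffs = benchmark_cutoffs(values)

counts = {}
for v in values:
    label = classify(v, cutoffs)
    counts[label] = counts.get(label, 0) + 1
```

For lower-is-better metrics such as Lead Time, the same logic applies with the sort order reversed and the comparison flipped.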
The benchmarks are summarized below.
Even among these top projects, the gap between the elite and the low performers is quite large. Compared to the low performers, elite projects have:
- 13x shorter lead times
- 10x higher release frequency
The State of DevOps report consistently shows that velocity and quality are correlated, i.e., they should not be considered a tradeoff for enterprises (see p. 13 here).
For OSS projects, the correlation is still there, but not as strong. Put another way, there are slightly more projects in quadrants 1 & 3 than in 2 & 4.
Among the top OSS repos, the tail end (in popularity) performs better on both quality and velocity. Those projects are usually newer, with fewer contributors, and it can reasonably be inferred that they execute faster in a relatively simpler context.
As the number of stars grows, performance drops in both velocity and quality, reaching a trough around 60k stars. This is likely because more exposure means more defects are noticed, and more code must be reviewed.
And finally, things get better again for the most popular projects. They are not as nimble as the tail end, but they find ways to accelerate the PR cycle time, which usually comes with faster bug resolution and fewer bugs.
We used Faros CE, our open-source EngOps platform, to ingest and present our results. Some of the analysis of the data ingested into Faros CE was performed on other systems.
Here is a link to the full dashboard.
Contact us today.
Repos In this Analysis
Global enterprises trust Faros AI to accelerate their engineering operations. Give us 30 minutes of your time and see it for yourself.