The State of Open-Source Software
August 3, 2022
The annual State of DevOps reports have shown that 4 key metrics (known as the DORA metrics) are important indicators of a software engineering organization's health. Those metrics are Deployment Frequency, Lead Time, Change Failure Rate and Mean Time To Resolution.
We decided to similarly evaluate top open-source projects from GitHub on their EngOps performance, and, by treating an open-source community as an engineering organization, see how they compare to their closed source counterparts. Now, instead of relying on surveys, we leverage the fact that open-source projects are, well, open, and use actual GitHub data :)
We limited this evaluation to the 100 most popular (by star count) public repositories on GitHub that have the following characteristics:
- software projects only (exclude things like lists and guides)
- projects that use GitHub issues to track bugs and GitHub releases, the concept most similar to deployments in the DORA literature
(Full list in Appendix A)
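The selection filter above can be sketched in code. This is an illustrative sketch only: the field names loosely mirror what the GitHub API exposes for a repository, but the sample data and the `is_software` flag are assumptions made for the example, not the actual pipeline used for the report.

```python
# Sketch of the repo-selection criteria described above.
# Sample data and field names are illustrative assumptions.

def select_repos(repos, limit=100):
    """Keep software projects that track bugs via issues and cut releases,
    then rank by popularity (stars) and keep the top `limit`."""
    candidates = [
        r for r in repos
        if r["is_software"]         # excludes lists, guides, etc.
        and r["has_issues"]         # bugs tracked as GitHub issues
        and r["release_count"] > 0  # project uses GitHub releases
    ]
    return sorted(candidates, key=lambda r: r["stars"], reverse=True)[:limit]

sample = [
    {"name": "proj-a", "stars": 90_000, "is_software": True, "has_issues": True, "release_count": 42},
    {"name": "awesome-list", "stars": 150_000, "is_software": False, "has_issues": True, "release_count": 0},
    {"name": "proj-b", "stars": 60_000, "is_software": True, "has_issues": True, "release_count": 7},
]

print([r["name"] for r in select_repos(sample)])  # ['proj-a', 'proj-b']
```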
DORA metrics involve deployments and incident data. However, OSS projects are not centered around those concepts. Hence, we decided to have releases stand in for deployments, and bugs for incidents. This is how our adapted DORA metrics for OSS were born:
- Release Frequency
- Lead Time for Changes (measured as the time for a change to go from a PR being opened to a Release)
- Bugs per Release
- Mean Time To Resolve Bugs (measured as the duration for which bugs were open)
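All four adapted metrics can be computed from basic GitHub timestamps. The sketch below shows one plausible way to do it; the record shapes, field names, and sample values are assumptions for illustration, not the report's actual data model.

```python
from datetime import datetime

# Hypothetical records; shapes and values are illustrative assumptions.
prs = [  # (PR opened, shipped in release) pairs for merged changes
    (datetime(2022, 6, 1), datetime(2022, 6, 8)),
    (datetime(2022, 6, 3), datetime(2022, 6, 8)),
]
releases = [datetime(2022, 5, 25), datetime(2022, 6, 8)]
bugs = [  # (opened, closed) pairs for issues labeled as bugs
    (datetime(2022, 5, 20), datetime(2022, 5, 24)),
    (datetime(2022, 6, 2), datetime(2022, 6, 10)),
]

def release_frequency(releases, window_days=90):
    """Releases per week over the observation window."""
    return len(releases) / (window_days / 7)

def lead_time(prs):
    """Mean days from a PR being opened to the release containing it."""
    return sum((rel - opened).days for opened, rel in prs) / len(prs)

def bugs_per_release(bugs, releases):
    return len(bugs) / len(releases)

def mean_time_to_resolve(bugs):
    """Mean days a bug stayed open."""
    return sum((closed - opened).days for opened, closed in bugs) / len(bugs)

print(lead_time(prs))                # 6.0 days
print(bugs_per_release(bugs, releases))  # 1.0
print(mean_time_to_resolve(bugs))    # 6.0 days
```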
We also captured the number of contributors and GitHub stars.
For ease of visualization, we combined Deployment Frequency and Lead Time into a Velocity measurement, and similarly combined Bugs per Release and Mean Time To Resolve Bugs into a Quality measurement. Here is how they fared on those metrics.
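The report does not specify how each pair of metrics is folded into a single score. One simple approach, shown here purely as an assumption, is to score each project by the fraction of peers it outperforms on each metric (inverting metrics where lower is better), then average the two scores:

```python
# Assumed combination method, not the report's actual formula.

def rank_scores(values, higher_is_better=True):
    """Score each value by the fraction of peers it beats; 1.0 = best."""
    n = len(values)
    out = []
    for v in values:
        if higher_is_better:
            beaten = sum(1 for w in values if v > w)
        else:
            beaten = sum(1 for w in values if v < w)
        out.append(beaten / (n - 1))
    return out

# Velocity = mean of the release-frequency score and the lead-time score.
freq = [2.0, 0.5, 1.0]   # releases/week, higher is better
lead = [3.0, 20.0, 7.0]  # days, lower is better

velocity = [
    (f, l) and (f + l) / 2
    for f, l in zip(rank_scores(freq, True), rank_scores(lead, False))
]
print(velocity)  # [1.0, 0.0, 0.5]
```

Quality could be derived the same way from Bugs per Release and Mean Time To Resolve Bugs, with both metrics inverted since lower is better.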
Some interesting takeaways emerged out of this:
1. A New set of Benchmarks for OSS
Since releases and bugs have different life cycles than deployments and incidents, we decided to rescale the benchmark cutoffs to align with the OSS release process. Ideally, we would like benchmarks that define groups (elite/high/medium/low) with roughly the same distribution as in the State of DevOps report.
In 2021, that distribution was 26/40/28/7. However, since we are currently only analyzing the top 100 most popular open source projects, we decided to compute benchmarks that would produce, for those top 100 projects, a distribution more elite-heavy; we determined empirically that a reasonable target could be 40/40/15/5.
The benchmarks are summarized below.
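Given a target distribution like 40/40/15/5, cutoffs can be derived empirically by sorting the observed metric values and reading off the values at the group boundaries. A minimal sketch, with made-up release frequencies standing in for the real data:

```python
# Illustrative cutoff derivation; the input data is invented.

def benchmark_cutoffs(values, shares=(0.40, 0.40, 0.15, 0.05), higher_is_better=True):
    """Boundary values splitting `values` into elite/high/medium/low
    groups of the given relative sizes."""
    ranked = sorted(values, reverse=higher_is_better)
    n = len(ranked)
    cutoffs, taken = [], 0
    for share in shares[:-1]:  # the last group takes the remainder
        taken += round(share * n)
        cutoffs.append(ranked[taken - 1])
    return cutoffs  # [elite/high, high/medium, medium/low] boundaries

# 20 invented release frequencies (releases/week):
freqs = [5, 4.5, 4, 3.5, 3, 2.8, 2.5, 2, 1.8, 1.5,
         1.2, 1, 0.9, 0.8, 0.7, 0.5, 0.4, 0.3, 0.2, 0.1]
print(benchmark_cutoffs(freqs))  # [2, 0.5, 0.2]
```

With these cutoffs, a project releasing at least twice a week would land in the elite group, and one releasing less than once every five weeks in the low group.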
Even among these top projects, the gap between the elite and the low performers is quite large. Compared to the low performers, elite projects have:
- 13x shorter lead times from commit to release
- 10x higher release frequency
- 27x faster time to restore service after a failure
- 120x fewer failures per release
2. There is a positive quality/velocity relationship, but it is not strong
The State of DevOps report consistently shows that velocity and quality are positively correlated, i.e., that they should not be considered a tradeoff for enterprises (see p. 13 of that report).
For OSS projects, the correlation is still there, but not as strong. Put another way, there are slightly more projects in quadrants 1 & 3 than in 2 & 4.
3. Growing pains
Among the top OSS repos, the tail end (in popularity) performs better on both quality and velocity. Those projects are usually newer, with fewer contributors, and it is reasonable to infer that they can execute faster in a relatively simpler context.
As the number of stars grows, performance declines on both velocity and quality, reaching a trough around 60k stars. This is likely because more exposure means more defects being noticed, and more code to review.
And finally, things improve again for the most popular projects. They are not as nimble as the tail end, but they find ways to accelerate the PR cycle time, which usually comes with faster bug resolution and fewer bugs.
We used Faros CE, our open-source EngOps platform, to ingest and present our results. Some of the analysis, using the data ingested into Faros CE, was performed on other systems.
Here is a link to the full dashboard.
Curious about Faros CE?
Head on over to GitHub to get started today, and join us on our Slack channel for questions, discussions, and to meet the community.
Appendix A: Repos in This Analysis