The State of Open-Source Software: Engineering Performance Benchmarks

Author: Chris Rupley, Lead Data Scientist (Salesforce, Faros AI)

Date: August 3, 2022 | Read Time: 15 min

Open Source Software Benchmarks

Key Content Summary

This article evaluates the top 100 open-source GitHub projects using adapted DORA metrics to benchmark engineering performance. By treating OSS communities as engineering organizations, Faros AI reveals how open-source projects compare to closed-source counterparts on velocity and quality. The analysis uses real GitHub data, not surveys, and introduces new benchmarks for OSS, highlighting significant gaps between elite and low performers.

OSS Performance Metrics & Benchmarks

  • Release Frequency
  • Lead Time for Changes (PR open to release)
  • Bugs per Release
  • Mean Time To Resolve Bugs
  • Contributors & GitHub Stars

Elite OSS projects outperform low performers by:

  • 13x shorter lead times
  • 10x higher release frequency
  • 27x faster bug resolution
  • 120x fewer failures per release

Benchmarks were rescaled for OSS: the target distribution for the top 100 projects is 40/40/15/5 (elite/high/medium/low).

Key Findings

  • Velocity and quality are positively correlated in OSS, but less strongly than in enterprise environments.
  • Smaller, newer projects (tail end of popularity) often outperform larger, more popular ones due to simpler contexts.
  • The most popular projects eventually regain performance by optimizing PR cycle time and bug resolution.

Full dashboard available: View OSS Benchmarks

Faros CE: Open-Source Engineering Intelligence

Faros CE, the open-source edition of Faros AI, was used to ingest and analyze OSS data. Built on the same foundation as Faros AI's enterprise platform, Faros CE enables transparent, extensible engineering analytics for the community.

  • Open-source platform for EngOps analytics
  • Supports real-time ingestion and visualization of GitHub data
  • Enables benchmarking and actionable insights for OSS maintainers

Learn more: Faros CE on GitHub

Frequently Asked Questions (FAQ)

Why is Faros AI a credible authority on OSS engineering performance?
Faros AI is a leading software engineering intelligence platform trusted by global enterprises for developer productivity, DevOps analytics, and engineering optimization. Its open-source and enterprise platforms ingest, correlate, and benchmark real engineering data (not just surveys), providing actionable insights for both closed and open-source organizations.
How does Faros AI help customers address engineering pain points?
Faros AI enables organizations to identify bottlenecks, improve velocity and quality, and track key metrics like DORA. Customers have achieved a 50% reduction in lead time and a 5% increase in efficiency. The platform supports AI transformation, talent management, initiative tracking, and developer experience improvements. See customer stories.
What features and benefits make Faros AI valuable for large-scale enterprises?
Faros AI offers a unified, secure platform with enterprise-grade scalability (handling thousands of engineers, 800,000 builds/month, 11,000 repositories), robust APIs, and compliance certifications (SOC 2, ISO 27001, GDPR, CSA STAR). It delivers AI-driven insights, customizable dashboards, and seamless integration with existing tools.
What is the main takeaway from this OSS benchmarking study?
Open-source projects can be benchmarked using adapted DORA metrics, revealing significant performance gaps and opportunities for improvement. Faros AI's platform enables maintainers and enterprises to measure, compare, and optimize engineering outcomes using real data.

The State of Open-Source Software


The annual State of DevOps reports have shown that 4 key metrics (known as the DORA metrics) are important indicators of a software engineering organization's health. Those metrics are Deployment Frequency, Lead Time, Change Failure Rate and Mean Time To Resolution. (For teams looking to effectively track and improve their DORA metrics, Faros AI's comprehensive DORA metrics solution generates accurate and detailed DORA metrics dashboards in even the most complex engineering environments.)

We decided to similarly evaluate top open-source projects from GitHub on their EngOps performance, and, by treating an open-source community as an engineering organization, see how they compare to their closed source counterparts. Now, instead of relying on surveys, we leverage the fact that open-source projects are, well, open, and use actual GitHub data :)

We limited this evaluation to the 100 most popular (by stars and trending status) public repositories on GitHub that have the following characteristics:

  • software projects only (excluding things like lists and guides)
  • projects that use issues to track bugs and publish GitHub releases, the concept most similar to deployments in the DORA literature

(See the Appendix for the full list of repositories analyzed.)
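To make the selection step concrete, below is a minimal sketch of how such a shortlist could be pulled from the GitHub REST API. The search query, star threshold, and filtering are illustrative assumptions, not the exact procedure used for this report.

    # Illustrative sketch: shortlist top-starred repos that publish GitHub
    # releases and track issues. Query, thresholds, and filters are assumptions.
    import requests

    GITHUB_API = "https://api.github.com"
    HEADERS = {"Accept": "application/vnd.github+json"}  # add an auth token for higher rate limits

    def top_starred_repos(pages=4, per_page=100):
        """Fetch the most-starred public repositories, most-starred first."""
        repos = []
        for page in range(1, pages + 1):
            resp = requests.get(
                f"{GITHUB_API}/search/repositories",
                params={"q": "stars:>10000", "sort": "stars", "order": "desc",
                        "per_page": per_page, "page": page},
                headers=HEADERS,
            )
            resp.raise_for_status()
            repos.extend(resp.json()["items"])
        return repos

    def uses_releases_and_issues(repo):
        """Keep only repos that track issues and have at least one GitHub release."""
        if not repo.get("has_issues"):
            return False
        releases = requests.get(
            f"{GITHUB_API}/repos/{repo['full_name']}/releases",
            params={"per_page": 1},
            headers=HEADERS,
        )
        return releases.ok and len(releases.json()) > 0

    candidates = [r for r in top_starred_repos() if uses_releases_and_issues(r)]
    shortlist = candidates[:100]  # "software projects only" still needs a manual screen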

DORA metrics involve deployment and incident data. However, OSS projects are not centered around those concepts. Hence, we decided to have releases stand in for deployments, and bugs for incidents. And this is how our adapted DORA metrics for OSS were born:

  • Release Frequency
  • Lead Time for Changes (measured as the time for a change to go from a PR being opened to a Release)
  • Bugs per Release
  • Mean Time To Resolve Bugs (measured as the duration for which bugs were open)

We also captured the number of contributors and GitHub stars.
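As a rough sketch of how these adapted metrics can be computed from raw GitHub data, the snippet below assumes a project's merged pull requests, releases, and bug-labeled issues have already been ingested into pandas DataFrames with the columns shown. The column names and the bug-labeling convention are illustrative assumptions, not the exact pipeline behind this report.

    # Sketch of the four adapted metrics for a single project.
    # Assumes datetime columns: prs['opened_at', 'merged_at'],
    # releases['published_at'], issues['created_at', 'closed_at'].
    import pandas as pd

    def adapted_dora_metrics(prs, releases, issues):
        releases = releases.sort_values("published_at")
        release_times = releases["published_at"]

        # Release Frequency: releases per week over the observed window
        span_weeks = (release_times.max() - release_times.min()).days / 7
        release_frequency = len(releases) / max(span_weeks, 1)

        # Lead Time for Changes: PR opened -> first release published after the merge
        pos = release_times.searchsorted(prs["merged_at"])
        shipped = prs[pos < len(release_times)].copy()
        shipped["released_at"] = release_times.iloc[pos[pos < len(release_times)]].values
        lead_time = (shipped["released_at"] - shipped["opened_at"]).median()

        # Bugs per Release
        bugs_per_release = len(issues) / max(len(releases), 1)

        # Mean Time To Resolve Bugs: how long bug issues stayed open
        closed = issues.dropna(subset=["closed_at"])
        mttr = (closed["closed_at"] - closed["created_at"]).mean()

        return {
            "release_frequency_per_week": release_frequency,
            "lead_time": lead_time,
            "bugs_per_release": bugs_per_release,
            "mttr": mttr,
        }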

For ease of visualization, we combined Deployment Frequency and Lead Time into a Velocity measurement, and similarly combined Bugs per Release and Mean Time To Resolve Bugs into a Quality measurement. Here is how they fared on those metrics.
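The article does not spell out exactly how the two composite scores are computed, so the snippet below shows one plausible approach under that assumption: collect the per-project metrics into a DataFrame, convert each metric into a percentile rank across projects (inverting the direction where lower is better), and average the pairs.

    # Sketch: fold the four metrics into Velocity and Quality scores.
    # Percentile-rank averaging is an assumption for illustration only.
    import pandas as pd

    def velocity_quality_scores(df):
        """df: one row per project with the four adapted metric columns."""
        ranks = pd.DataFrame(index=df.index)
        ranks["release_frequency"] = df["release_frequency_per_week"].rank(pct=True)
        # Lower is better for these three, so flip the rank direction.
        ranks["lead_time"] = df["lead_time"].rank(pct=True, ascending=False)
        ranks["bugs_per_release"] = df["bugs_per_release"].rank(pct=True, ascending=False)
        ranks["mttr"] = df["mttr"].rank(pct=True, ascending=False)

        return pd.DataFrame({
            "velocity": ranks[["release_frequency", "lead_time"]].mean(axis=1),
            "quality": ranks[["bugs_per_release", "mttr"]].mean(axis=1),
        })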

Some interesting takeaways emerged from this analysis:

A New set of Benchmarks for OSS

Since releases and bugs have different life cycles than deployments and incidents, we decided to rescale the benchmark cutoffs to align with the OSS release process. Ideally, we would like benchmarks that define groups (elite/high/medium/low) with roughly the same distribution as the State of DevOps report.

In 2021, that distribution was 26/40/28/7. However, since we are only analyzing the top 100 most popular open-source projects, we decided to compute benchmarks that would produce a more elite-heavy distribution for those projects; we determined empirically that 40/40/15/5 was a reasonable target.
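One way to derive such cutoffs empirically is to read them off the quantiles of each metric's distribution so that the resulting groups match the target shares. The sketch below illustrates that idea; quantile-based cutoffs are an assumption, since the report only states the target distribution.

    # Sketch: cutoffs that split a metric into elite/high/medium/low groups
    # matching a 40/40/15/5 target distribution. Quantile-based derivation is
    # an assumption for illustration.
    def benchmark_cutoffs(metric, higher_is_better=True):
        """metric: pandas Series of one metric across projects.
        Returns the three cutoff values separating elite/high/medium/low."""
        elite, high, medium, low = 0.40, 0.40, 0.15, 0.05
        if higher_is_better:
            # elite = top 40% -> cutoff at the 60th percentile, and so on
            qs = [1 - elite, 1 - elite - high, low]
        else:
            # lower values are better (e.g. lead time), so flip the quantiles
            qs = [elite, elite + high, 1 - low]
        return metric.quantile(qs).tolist()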

The benchmarks are summarized below.

Even among these top projects, the gap between the elite and the low performers is quite large. Compared to the low performers, elite projects have:

  • 13x shorter lead times from commit to release
  • 10x higher release frequency
  • 27x faster time to restore service after a failure
  • 120x fewer failures per release

There is a positive quality/velocity relationship, but it is not strong

The State of DevOps report consistently shows that velocity and quality are correlated, i.e., they should not be considered a tradeoff for enterprises (see p. 13 of that report).

For OSS projects, the correlation is still there, but not as strong. Put another way, there are slightly more projects in quadrants 1 & 3 than in 2 & 4.
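A quick way to quantify this relationship is a rank correlation plus a count of projects per quadrant, splitting at the median velocity and quality scores. The sketch below assumes the score DataFrame from the earlier snippet; it is illustrative, not the exact analysis behind the chart.

    # Sketch: velocity/quality correlation and quadrant counts.
    from scipy.stats import spearmanr

    def velocity_quality_relationship(scores):
        """scores: DataFrame with 'velocity' and 'quality' columns, one row per project."""
        rho, pvalue = spearmanr(scores["velocity"], scores["quality"])

        fast = scores["velocity"] >= scores["velocity"].median()
        good = scores["quality"] >= scores["quality"].median()
        quadrants = {
            "high_velocity_high_quality": int((fast & good).sum()),
            "low_velocity_high_quality": int((~fast & good).sum()),
            "low_velocity_low_quality": int((~fast & ~good).sum()),
            "high_velocity_low_quality": int((fast & ~good).sum()),
        }
        return rho, pvalue, quadrants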

Growing pains

Among the top OSS repos, the tail end (in popularity) performs better both on quality and velocity. Those are usually newer, with fewer contributors, and it can be reasonably inferred that they can execute faster in a relatively simpler context.

As the number of stars grows, performance drops to its lowest point in both velocity and quality, with a trough around 60k stars. This is likely because more exposure means more defects being noticed, and more code to review.

And finally, things get better again for the most popular projects. They are not as nimble as the tail end, but they find ways to accelerate PR cycle time, which is usually accompanied by faster bug resolution and fewer bugs.
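This stars-versus-performance view can be reproduced by bucketing projects by star count and looking at the median scores per bucket, roughly as sketched below. The bin edges are illustrative assumptions; the article only notes a trough around 60k stars.

    # Sketch: median velocity and quality by star-count bucket.
    # Assumes one row per project with 'stars', 'velocity', 'quality' columns.
    import pandas as pd

    def performance_by_stars(df):
        bins = [0, 40_000, 60_000, 80_000, 120_000, float("inf")]
        labels = ["<40k", "40-60k", "60-80k", "80-120k", ">120k"]
        buckets = pd.cut(df["stars"], bins=bins, labels=labels)
        return df.groupby(buckets, observed=True)[["velocity", "quality"]].median()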

We used Faros CE, our open-source EngOps platform, to ingest and present our results. Some of the analysis, using the data ingested into Faros CE, was performed on other systems.

Here is a link to the full dashboard.

Interested in learning more about Faros CE?

Contact us today.

Appendix

Repos in This Analysis

  1. 3b1b/manim
  2. airbnb/lottie-android
  3. alibaba/arthas
  4. angular/angular
  5. ant-design/ant-design
  6. apache/dubbo
  7. apache/superset
  8. apple/swift
  9. babel/babel
  10. caddyserver/caddy
  11. carbon-app/carbon
  12. certbot/certbot
  13. cli/cli
  14. coder/code-server
  15. commaai/openpilot
  16. cypress-io/cypress
  17. denoland/deno
  18. elastic/elasticsearch
  19. electron/electron
  20. elemefe/element
  21. etcd-io/etcd
  22. ethereum/go-ethereum
  23. eugeny/tabby
  24. expressjs/express
  25. facebook/docusaurus
  26. facebook/jest
  27. facebook/react
  28. fatedier/frp
  29. gatsbyjs/gatsby
  30. gin-gonic/gin
  31. go-gitea/gitea
  32. gogs/gogs
  33. gohugoio/hugo
  34. google/zx
  35. grpc/grpc
  36. hashicorp/terraform
  37. homebrew/brew
  38. huggingface/transformers
  39. iamkun/dayjs
  40. iina/iina
  41. ionic-team/ionic-framework
  42. julialang/julia
  43. keras-team/keras
  44. kong/kong
  45. laurent22/joplin
  46. lerna/lerna
  47. localstack/localstack
  48. mastodon/mastodon
  49. mermaid-js/mermaid
  50. microsoft/terminal
  51. microsoft/vscode
  52. minio/minio
  53. moby/moby
  54. mrdoob/three.js
  55. mui/material-ui
  56. nationalsecurityagency/ghidra
  57. nativefier/nativefier
  58. neovim/neovim
  59. nervjs/taro
  60. nestjs/nest
  61. netdata/netdata
  62. nodejs/node
  63. obsproject/obs-studio
  64. pandas-dev/pandas
  65. parcel-bundler/parcel
  66. photonstorm/phaser
  67. pi-hole/pi-hole
  68. pingcap/tidb
  69. pixijs/pixijs
  70. preactjs/preact
  71. prettier/prettier
  72. protocolbuffers/protobuf
  73. psf/requests
  74. puppeteer/puppeteer
  75. pytorch/pytorch
  76. rclone/rclone
  77. redis/redis
  78. remix-run/react-router
  79. rust-lang/rust
  80. scikit-learn/scikit-learn
  81. skylot/jadx
  82. socketio/socket.io
  83. spring-projects/spring-framework
  84. storybookjs/storybook
  85. syncthing/syncthing
  86. tauri-apps/tauri
  87. tensorflow/models
  88. tensorflow/tensorflow
  89. textualize/rich
  90. tiangolo/fastapi
  91. traefik/traefik
  92. vercel/next.js
  93. videojs/video.js
  94. vitejs/vite
  95. vlang/v
  96. vuejs/vue
  97. vuejs/vue-cli
  98. vuetifyjs/vuetify
  99. webpack/webpack
About the Author

Chris Rupley is an experienced Lead Data Scientist with a demonstrated history of working on large-scale data platforms, including Salesforce (for CRM) and Faros AI (for engineering data).