May 17, 2024

Last updated January 2026

Is GitHub Copilot worth it in 2026?

In 2023, we ran an internal experiment to answer a simple question: Is GitHub Copilot worth it? At the time, the answer was a resounding yes. Developers shipped faster, throughput increased, and code quality held steady.

Fast forward to 2026, and that question is no longer as simple.

GitHub Copilot has evolved dramatically: from a code-completion tool into a multi-surface AI development agent that can plan work, modify entire repositories, review pull requests, and even ship production-ready code.

At the same time, the AI coding landscape has exploded. Tools like Cursor, Claude Code, Codex, and Cline now offer compelling alternatives, each excelling in different workflows and team setups.

In this article, we revisit our original 2023 Copilot experiment through a 2026 lens:

  • What’s changed in GitHub Copilot
  • How it compares to today’s top alternatives
  • What our data shows about its impact on speed, throughput, and quality

Finally, we’ll zoom out to help engineering leaders answer the harder organizational questions: Which AI coding tool(s) should we use, and how do we maximize our AI investments to create outcomes that matter?

GitHub Copilot news

In their most recent Octoverse report, GitHub noted:

  • The launch of the free tier of Copilot in late 2024 drove unprecedented adoption.
  • Nearly 80% of new developers used Copilot within their first week on GitHub.
  • Momentum accelerated further in March 2025 with the release of the Copilot coding agent, which helped drive record productivity, including more than 1 million pull requests created between May and September 2025.
  • Copilot code review improved developer effectiveness for 72.6% of surveyed users, highlighting its growing impact beyond code generation.

By 2026, GitHub Copilot has evolved from a code-completion tool into a full-spectrum AI development partner. It now writes, edits, reviews, summarizes, and even ships code across IDEs, pull requests, terminals, and app platforms. The table below highlights GitHub Copilot’s key features as the tool continues its shift from assistant to autonomous agent.

| Capability | What’s New / Why It Matters | Where It Works | Autonomy Level | Status |
|---|---|---|---|---|
| Inline code suggestions | Smarter, context-aware completions that anticipate your next edit, not just the next line | IDEs (VS Code, Visual Studio, JetBrains) | Assistive | GA (next-edit suggestions in preview in some IDEs) |
| Copilot Chat | A unified AI coding assistant that understands your repo, questions, and intent | IDEs, GitHub.com, Mobile, Windows Terminal | Assistive → Collaborative | GA |
| Copilot Edits (Edit mode) | Apply coordinated changes across multiple files with human-in-the-loop control | IDEs | Collaborative | GA |
| Copilot Edits (Agent mode) | Delegates multi-step coding tasks to Copilot, including file selection and terminal commands | IDEs | Agentic | GA |
| Copilot coding agent | Assign issues to Copilot and receive a ready-to-review pull request | GitHub workflows | Fully agentic | GA |
| Copilot code review | AI-generated review feedback that flags issues and suggests improvements | Pull requests | Assistive | GA (new tools in preview) |
| Pull request summaries | Automatically summarizes changes and highlights what reviewers should focus on | Pull requests | Assistive | GA |
| Text completion for PRs | Generates PR descriptions from code changes | Pull request editor | Assistive | Public preview |
| Copilot CLI | Brings Copilot to the terminal for shell help, refactors, and GitHub interactions | Terminal | Collaborative | Public preview |
| Custom instructions | Tailors Copilot’s responses to your coding standards and preferences | Copilot Chat | Assistive | GA |
| Copilot in GitHub Desktop | Generates clearer commit messages from your local changes | GitHub Desktop | Assistive | GA |
| Copilot Spaces | Grounds Copilot in curated code, docs, and specs for better answers | Copilot Spaces | Assistive | GA |
| GitHub Spark | Build and deploy full-stack apps from natural-language prompts | GitHub platform | Agentic | Public preview |
GitHub Copilot's 13 distinct capabilities as of January 2026

To stay on top of the latest GitHub product news since the publication of this article, go here.

GitHub Copilot alternatives

Today, there is no shortage of competition in the AI coding tool market. In our recent blog on the best AI coding agents for 2026, GitHub Copilot landed a spot in the top five. For many engineers, GitHub Copilot is worth it because it’s a pragmatic default: often already installed, approved, and integrated into existing company workflows. Many developers also find it frictionless, with fast inline suggestions, a strong agent mode, and a reputation for being easy to use.

Yet, there are numerous other top contenders that keep people wondering: Is GitHub Copilot worth it? Depending on your use case, there could be a better option. Within the list of front-runners, these four GitHub Copilot alternatives may be worth considering.

| Tool | How It’s Viewed | Key Strengths | Main Trade-offs |
|---|---|---|---|
| Cursor | The default AI IDE for individuals and small teams | Excellent developer flow; fast autocomplete; smooth handling of small-to-medium tasks | Struggles with large/complex changes; limited repo-wide understanding; pricing and transparency concerns |
| Claude Code | The strongest “coding brain” | Deep reasoning; debugging; architectural and complex problem-solving | High cost; requires more explicit control |
| Codex | A deliberate, agent-native platform | Reliable multi-step execution; strong repo-level understanding; good for large jobs | Lower adoption; pricing opacity; long-running agent costs |
| Cline | Favors power users seeking control | High configurability; model choice; scalable workflows | Manual setup; token management; less plug-and-play |
Copilot versus top competitors comparison summary
  • GitHub Copilot vs Cursor: Cursor is widely viewed as the default AI IDE for individual developers and small teams, often serving as the baseline against which other AI coding tools are compared. Its biggest strength is developer flow: fast autocomplete, in-editor chat, and low-friction handling of small to medium tasks like refactors, tests, and bug fixes. In discussions about Cursor vs Copilot, users frequently cite Cursor’s challenges with larger, more complex changes—such as looping behavior or limited repo-wide understanding—alongside ongoing concerns about pricing, plan changes, and overall transparency.
  • GitHub Copilot vs Claude Code: Claude Code is widely regarded as the strongest “coding brain,” valued for its deep reasoning, debugging ability, and capacity to handle architectural-level changes. In a Claude Code vs GitHub Copilot showdown, developers often trust Claude with the hardest problems—unfamiliar codebases, subtle bugs, and complex design decisions—and use it as an escalation tool when other AI coding tools fall short. While high cost and the need for more explicit control are common drawbacks, Claude consistently stands out in discussions as the best AI for coding in terms of raw intelligence and problem-solving power.
  • GitHub Copilot vs Codex: Codex re-emerged in 2025 as a serious, agent-native coding platform, increasingly discussed alongside Claude Code as a standalone tool that operates directly on real repositories rather than as an editor-bound assistant. Developers value Codex for its reliable follow-through on multi-step tasks—understanding repo structure, coordinating changes, running tests, and iterating without drifting—especially in CLI and workflow-driven setups. In Codex vs Copilot comparisons, the main caveats are lower mainstream adoption and some opacity around pricing and long-running agent costs, which means Codex is typically chosen deliberately by teams seeking a trustworthy agent for larger, more complex jobs rather than adopted by default.
  • GitHub Copilot vs Cline: Cline is a VS Code–native agent designed for developers who want control beyond what a polished AI IDE provides. It’s valued for its flexibility: letting users choose models, separate planning from execution, and balance cost versus quality. When comparing Cline vs Copilot, Cline often wins on scalability and configurability. The trade-off is added responsibility: setup requires effort, token usage must be managed manually, and results depend heavily on model choice, making Cline best suited for deliberate users rather than those seeking a one-click experience.

Is GitHub Copilot worth it? Revisiting our 2023 experiment

With AI coding tools evolving at lightning speed, it’s critical for companies to make smart, data-driven AI investment decisions. In 2023, we confirmed that developers using GitHub Copilot saw speed and throughput improvements compared with their non-augmented peers.

Methodology

To keep things fair and square, we split our team into two random cohorts, one armed with GitHub Copilot (around a third of our developers) and the other without. We made sure the cohorts were not biased in any way (e.g., that one wasn’t stacked exclusively with our most productive developers).
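A split like this can be sketched in a few lines of Python. This is a minimal illustration of a seniority-balanced random assignment, not Faros AI’s actual procedure; the roster, seniority bands, and one-third treatment fraction are assumptions:

```python
import random

# Hypothetical roster of (developer, seniority band) pairs — names and
# bands are illustrative only.
developers = [
    ("dev-01", "senior"), ("dev-02", "senior"), ("dev-03", "senior"),
    ("dev-04", "mid"), ("dev-05", "mid"), ("dev-06", "mid"),
    ("dev-07", "junior"), ("dev-08", "junior"), ("dev-09", "junior"),
]

def stratified_split(roster, treatment_fraction=1 / 3, seed=42):
    """Randomly assign a fraction of each seniority band to the Copilot
    cohort, so neither group is stacked with the most productive devs."""
    rng = random.Random(seed)
    bands = {}
    for dev, band in roster:
        bands.setdefault(band, []).append(dev)
    copilot, control = [], []
    for devs in bands.values():
        rng.shuffle(devs)
        cut = max(1, round(len(devs) * treatment_fraction))
        copilot.extend(devs[:cut])   # treatment: gets Copilot
        control.extend(devs[cut:])   # control: works as before
    return copilot, control

copilot_cohort, control_cohort = stratified_split(developers)
```

Stratifying by band (rather than shuffling the whole roster at once) is what guards against one cohort accidentally ending up senior-heavy.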

Over three months, we closely monitored various performance metrics, focusing on speed, throughput, and quality. Our goal? A clear, unbiased view of GitHub Copilot's impact.

Why these metrics? They’re tangible and measurable, and they directly impact our outcomes. They also give us a holistic picture: we don’t want to gain speed if there’s a huge price to pay in quality. Finally, they would give us a good indication of areas we might need to strengthen in our practices or processes if we decided to fully go down the GitHub Copilot route.


Results

The data was pretty revealing. The group using GitHub Copilot consistently outperformed the other cohort in terms of speed and throughput over the evaluation period (May-September 2023).

Let’s start with throughput.

Over the pilot period, the GitHub Copilot cohort gradually began to outpace the other cohort in terms of the sheer number of PRs.

Next up, we looked at speed.

We examined the Median Merge Time to see how quickly code was being merged into the codebase. The GitHub Copilot cohort’s code was consistently merged approximately 50% faster. The Copilot cohort improved relative to its previous performance and relative to the other cohort.

The most important speed metric, though, is Lead Time to production. We wanted to make sure that the acceleration in development wasn’t being negated by longer time spent in subsequent stages like Code Review or QA.

It was great to see that Lead Time decreased by 55% for the PRs generated by the GitHub Copilot cohort (similar to GitHub’s own research), with most of the time savings generated in the development (“Time in Dev”) and code review (“First Review Time”) stages.
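For readers who want to reproduce these measurements on their own data, both speed metrics reduce to simple timestamp arithmetic over PR records. The records and field names below are assumptions for illustration, not our actual schema:

```python
from datetime import datetime
from statistics import median

# Illustrative PR records — dates and field names are made up.
prs = [
    {"opened": "2023-05-01", "merged": "2023-05-03", "deployed": "2023-05-05"},
    {"opened": "2023-05-02", "merged": "2023-05-08", "deployed": "2023-05-12"},
    {"opened": "2023-05-04", "merged": "2023-05-05", "deployed": "2023-05-06"},
]

def days_between(start, end):
    """Whole days elapsed between two ISO-format dates."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).days

# Median Merge Time: PR opened -> merged.
median_merge_days = median(days_between(p["opened"], p["merged"]) for p in prs)
# Lead Time: PR opened -> deployed to production.
lead_time_days = median(days_between(p["opened"], p["deployed"]) for p in prs)
```

Using the median rather than the mean keeps one long-lived PR from dominating the cohort comparison.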

The last dimension we analyzed was code quality and code security, where we looked at three metrics: Code Coverage, Code Smells, and Change Failure Rate.

  • Code Coverage improved, which didn’t surprise us. Copilot is very good at writing tests.
  • Code Smells increased slightly but were still beneath an acceptable threshold.
  • Change Failure Rate — the most important metric together with Lead Time — held steady.

Analysis

But why did GitHub Copilot make such a noticeable difference? The engineers in our Copilot cohort said the boost was largely due to no longer starting from a blank page. It’s easier to edit an AI-generated suggestion than to start from scratch: you become an editor instead of a journalist. In addition, Copilot is great at writing unit tests quickly.

But not all AI coding assistants are created equally, and the time savings can vary greatly depending on the tool used. For example, one of our clients conducted a bakeoff between two of the leading AI coding tools on the market, and one of the tools saved three hours more per developer per week compared to the other.

Cost-benefit analysis

Now, the juicy bit: Is the performance boost worth the cost? In 2023, the answer was a solid "yes." A 55% improvement in lead time with no collateral damage to code quality is a phenomenal ROI. But, of course, every team's dynamics are different. If you're weighing the costs, consider not just the subscription fee but the potential long-term benefits in productivity and effects on code quality.
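As a rough sketch of that weighing exercise, here is the per-seat arithmetic. Every input below (seat price, hours saved, loaded engineer cost) is a placeholder assumption; substitute your own figures:

```python
# Back-of-the-envelope ROI sketch. All inputs are assumed placeholders,
# not measured Faros AI figures.
license_cost_per_dev_month = 19.0   # assumed seat price, USD/month
hours_saved_per_dev_week = 3.0      # assumed developer time savings
loaded_cost_per_hour = 75.0         # assumed fully loaded engineer cost, USD
weeks_per_month = 4.33

# Dollar value of the time saved each month, per developer.
monthly_value = hours_saved_per_dev_week * weeks_per_month * loaded_cost_per_hour
# How many times over the subscription pays for itself.
roi_multiple = monthly_value / license_cost_per_dev_month
```

With placeholders in this range the seat pays for itself many times over, which is why the per-seat question is usually easy; the fleet-level question later in this article is the hard one.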

What companies need to know about selecting AI coding tools

Since we ran our experiment in 2023, we’ve guided many companies through their evaluation of AI copilots from initial pilots to large-scale deployments. We’ve helped them select the right AI pair programming tool or agent for their organization; increase adoption to maximize developer productivity; and monitor the impacts on value (velocity) and safety (quality and security).

Yet, months and even years in, we still get asked by engineering leaders:

  • “Is GitHub Copilot worth it?”
  • “Are our other AI coding tools, like Claude Code, worth it?”
  • “How can we measure the direct outcomes of these AI tools at an individual, team, and org-wide level?”
  • “How are our AI investments directly contributing to the engineering outcomes that matter most?”

What does the research say about AI-driven productivity in engineering?

These questions are important and the answers are nuanced, as research into whether AI coding assistants really save time, money, and effort has produced mixed results. Most notably:

  1. Often, individual level improvements are present, but the gains do not translate into company-level improvements. This disconnect between individual developer experience and organizational outcomes has a name: the AI Productivity Paradox. Developers feel faster. They report higher satisfaction. But when engineering leaders look at throughput, quality, and delivery velocity, the numbers for company-wide delivery metrics often remain flat. No measurable organizational impact whatsoever. This is often due to causes such as uneven adoption patterns and shifting bottlenecks.
  2. AI acts as both "mirror and multiplier." The DORA 2025 report explains that in cohesive organizations with solid foundations, AI boosts efficiency. In fragmented ones, it highlights and amplifies weaknesses. This means AI doesn't create organizational excellence. It magnifies what already exists. Organizations with strong version control practices, quality internal platforms, and user-centric focus see compounding gains. Organizations with siloed teams, inconsistent processes, and technical debt see amplified chaos.

So, if the question is, “Should I buy one GitHub Copilot license?”, the answer is probably yes: it is safe to assume that one GitHub Copilot license for one developer is worth it.

But are 15,000 GitHub Copilot licenses worth it? That’s a different question altogether that demands a data-driven approach.

There is no avoiding the fact that there are many AI coding tools out there, and the cost/benefit analysis lives in your productivity metrics.

AI transformation tips

A robust AI transformation strategy should be grounded in rigorous comparisons across multiple AI coding assistants. Tools like Faros AI help engineering leaders see:

  • The AI coding tools most popular among developers
  • The models serving them best
  • The AI features used most frequently
  • The tool/model combos that are most cost-effective
  • The impact each tool is having on outcome metrics—so you can make the right choice
Sample visualization illustrating impact on velocity metrics with various usage levels of GitHub Copilot

Engineering leaders can combine adoption and usage metrics with impact metrics and cost analysis to determine which mix of AI coding tools is best for their organization.
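One simple way to combine those signals is a cost-effectiveness score per tool. The tool names, metrics, and formula below are hypothetical; a real evaluation would use measured adoption, impact, and spend from your own telemetry:

```python
# Hypothetical per-tool metrics — all numbers are made up for illustration.
# adoption: share of developers actively using the tool
# lead_time_gain: fractional lead-time improvement among active users
# cost_per_dev: monthly seat cost in USD
tools = {
    "Tool A": {"adoption": 0.82, "lead_time_gain": 0.30, "cost_per_dev": 19},
    "Tool B": {"adoption": 0.55, "lead_time_gain": 0.45, "cost_per_dev": 40},
}

def value_per_dollar(m):
    # Realized value scales with both adoption and per-user impact;
    # dividing by seat cost gives a crude cost-effectiveness score.
    return (m["adoption"] * m["lead_time_gain"]) / m["cost_per_dev"]

# Rank tools from most to least cost-effective under this toy model.
ranked = sorted(tools, key=lambda t: value_per_dollar(tools[t]), reverse=True)
```

The point of the toy model: a tool with bigger per-user gains can still lose to a cheaper, widely adopted one once adoption and cost enter the equation.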

Furthermore, regardless of which AI coding tool is in use, providing the right context is critical for success. Context engineering includes codifying patterns, documenting failure modes, and structuring specifications to make codebases more navigable for AI agents and humans alike, allowing for more effective collaboration and more accurate output. Yet, manually maintaining comprehensive context doesn't scale, there are no standard workflows for human-in-the-loop intervention, and we lack measurement frameworks to evaluate what actually works—so new tools are emerging in parallel to close this context gap and allow companies to finally experience real productivity gains with their AI coding tools.

To explore the best enterprise AI transformation solution on the market, reach out for a demo today.

Thomas Gerber

Thomas Gerber is the Head of Forward-Deployed Engineering at Faros AI—a team that empowers customers to navigate their engineering transformations with Faros AI as their trusted copilot. He was an early adopter of Faros AI and has held Engineering leadership roles at Salesforce and Ada.



