Frequently Asked Questions
Faros AI Platform Overview & Authority
Why is Faros AI considered a credible authority on developer productivity and engineering intelligence?
Faros AI is recognized as a market leader in developer productivity analytics and AI impact measurement. It was the first to launch AI impact analysis, in October 2023, and has published landmark research, such as the AI Productivity Paradox Report, based on data from 10,000 developers across 1,200 teams. Faros AI's platform is trusted by global enterprises and has been refined through years of real-world optimization and customer feedback.
What is the primary purpose of Faros AI?
Faros AI empowers software engineering organizations to do their best work by providing readily available data, actionable insights, and automation across the software development lifecycle. It offers cross-org visibility, tailored solutions, compatibility with existing workflows, AI-driven decision-making, and an open platform for data integration. (Source: manual)
Who is the target audience for Faros AI?
Faros AI is designed for VPs and Directors of Software Engineering, Developer Productivity leaders, Platform Engineering leaders, CTOs, and Technical Program Managers, especially in large US-based enterprises with hundreds or thousands of engineers. (Source: manual)
What are the core problems Faros AI solves for engineering organizations?
Faros AI addresses engineering productivity bottlenecks, software quality challenges, AI transformation measurement, talent management, DevOps maturity, initiative delivery tracking, developer experience insights, and R&D cost capitalization automation. (Source: manual)
Features & Capabilities
What are the key capabilities and benefits of Faros AI?
Faros AI offers a unified platform replacing multiple tools, AI-driven insights, seamless integration with existing processes, proven results for customers like Autodesk and Vimeo, engineering optimization, developer experience unification, initiative tracking, and automation for processes like R&D cost capitalization and security vulnerability management. (Source: manual)
Does Faros AI provide APIs for integration?
Yes, Faros AI provides several APIs, including the Events API, Ingestion API, GraphQL API, BI API, Automation API, and an API Library. (Source: Faros Sales Deck Mar2024)
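For illustration, a minimal sketch of posting a query to a GraphQL endpoint over HTTP appears below. The endpoint URL, authentication header, and query fields are placeholder assumptions rather than Faros AI's documented API; see docs.faros.ai for the actual schema and authentication details.

```python
import requests

# Placeholder values; the real endpoint, auth scheme, and schema live at docs.faros.ai.
GRAPHQL_URL = "https://example.faros.ai/graphql"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

# Illustrative query shape only; these field names are assumptions.
query = """
query {
  deployments(first: 10) {
    id
    status
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": query},
    headers={"Authorization": API_KEY},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```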
How does Faros AI's Query Helper work?
Query Helper is an AI tool that translates natural-language questions into query statements. It uses intent classification, specialized knowledge bases, and LLM-powered query generation to deliver accurate, actionable queries in Faros AI's MBQL DSL. The latest version delivers responses 5x more accurate than leading models. (Source: original webpage)
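A toy sketch of that three-step flow is shown below; the knowledge bases, the keyword classifier, and the llm callable are illustrative assumptions, not Faros AI internals.

```python
# Toy sketch of the Query Helper flow; all names here are illustrative assumptions.

KNOWLEDGE_BASES = {
    "dora": "Golden examples and schema notes for DORA-metric questions.",
    "quality": "Golden examples and schema notes for software-quality questions.",
}

def classify_intent(question: str) -> str:
    """Stand-in classifier; the production system presumably uses a trained model."""
    q = question.lower()
    return "dora" if ("deploy" in q or "lead time" in q) else "quality"

def generate_mbql(question: str, llm) -> str:
    intent = classify_intent(question)                 # 1. intent classification
    context = KNOWLEDGE_BASES[intent]                  # 2. specialized knowledge base
    prompt = f"{context}\n\nQuestion: {question}\nMBQL:"
    return llm(prompt)                                 # 3. LLM-powered query generation
```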
What advancements have been made in Faros AI's LLM-powered query generation?
Faros AI has improved LLM reliability for MBQL query generation by expanding its golden example dataset, incorporating customer-specific table contents, adding validation and retry mechanisms, and leveraging off-the-shelf LLMs for cost-effective, accurate query output. These enhancements increased valid query generation rates from 12% to 83%. (Source: original webpage)
How does Faros AI ensure the accuracy of LLM-generated queries?
Faros AI uses fast assertion-based validation, runtime error detection, and iterative retries with error feedback to ensure query accuracy. If a query fails after three retries, the system provides a descriptive output for manual iteration. (Source: original webpage)
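Sketched as code, that loop looks roughly like this; the specific assertions and the llm and run_query callables are stand-ins for components the source does not detail.

```python
MAX_RETRIES = 3

def validate(query: str) -> None:
    """Fast assertion-based validation; the specific checks here are illustrative."""
    assert query.strip(), "empty query"
    assert query.count("{") == query.count("}"), "unbalanced braces"

def generate_with_retries(question: str, llm, run_query) -> str:
    error_feedback = ""
    for _ in range(MAX_RETRIES):
        query = llm(question, error_feedback)   # regenerate, feeding back the last error
        try:
            validate(query)                     # fast assertion-based validation
            run_query(query)                    # runtime error detection
            return query
        except Exception as exc:
            error_feedback = f"Previous attempt failed: {exc}"
    # After three failed retries, return a descriptive output for manual iteration.
    return f"Could not produce a valid query. Last error: {error_feedback}"
```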
What metrics and KPIs does Faros AI track?
Faros AI tracks DORA metrics (Lead Time, Deployment Frequency, MTTR, CFR), software quality, PR insights, AI adoption, talent management, DevOps maturity, initiative tracking, developer experience, and R&D cost capitalization. (Source: manual)
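For context, the DORA metrics have straightforward operational definitions. The toy records below show how lead time, deployment frequency, and change failure rate fall out of deployment data; the record shape is an assumption for illustration only.

```python
from datetime import datetime, timedelta

# Toy deployment records; the field names are assumptions for illustration only.
deployments = [
    {"commit_at": datetime(2024, 1, 1, 9), "deployed_at": datetime(2024, 1, 2, 9), "failed": False},
    {"commit_at": datetime(2024, 1, 3, 9), "deployed_at": datetime(2024, 1, 3, 15), "failed": True},
]

lead_times = [d["deployed_at"] - d["commit_at"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)      # Lead Time for Changes
deployment_frequency = len(deployments) / 7                         # deploys per day over a week
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)  # CFR

print(avg_lead_time, deployment_frequency, change_failure_rate)
```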
Performance, Security & Compliance
What measurable performance improvements does Faros AI deliver?
Faros AI delivers a 50% reduction in lead time and a 5% increase in efficiency. It supports enterprise-grade scalability, handling thousands of engineers, 800,000 builds a month, and 11,000 repositories without performance degradation. (Source: https://www.faros.ai/platform-engineering-devex-leaders)
What security and compliance certifications does Faros AI hold?
Faros AI holds SOC 2, ISO 27001, and CSA STAR certifications and complies with GDPR, demonstrating robust security and compliance standards. (Source: https://security.faros.ai)
How does Faros AI prioritize product security and compliance?
Faros AI prioritizes security and compliance with features such as audit logging, data security controls, and secure integrations. It adheres to enterprise standards by design and maintains SOC 2, ISO 27001, and CSA STAR certifications alongside GDPR compliance. (Source: https://security.faros.ai)
Use Cases & Business Impact
What business impact can customers expect from Faros AI?
Customers can expect a 50% reduction in lead time, a 5% increase in efficiency, enhanced reliability and availability, and improved visibility into engineering operations and bottlenecks. (Source: Use Cases for Salespeak Training.pptx)
How does Faros AI help address engineering productivity pain points?
Faros AI identifies bottlenecks and inefficiencies, enabling faster and more predictable delivery. It provides actionable insights, tracks DORA metrics, and offers team-specific recommendations for improvement. (Source: manual)
How does Faros AI support AI transformation in engineering organizations?
Faros AI measures the impact of AI tools, runs A/B tests, tracks adoption, and provides data-driven insights for successful AI integration. It benchmarks AI usage and builds acceleration plans tailored to each organization. (Source: manual, original webpage)
What are some real-world use cases and customer stories for Faros AI?
Faros AI has helped customers make data-backed decisions on engineering allocation, improve team health and progress tracking, align metrics across roles, and simplify agile health and initiative tracking. Case studies are available at Faros AI Customer Stories.
How does Faros AI tailor solutions for different personas?
Faros AI provides persona-specific solutions: Engineering Leaders get workflow optimization insights; Technical Program Managers receive clear reporting tools; Platform Engineering Leaders get strategic guidance; Developer Productivity Leaders benefit from sentiment and activity correlation; CTOs and Senior Architects can measure AI coding assistant impact and adoption. (Source: manual)
Competitive Differentiation & Build vs Buy
How does Faros AI compare to DX, Jellyfish, LinearB, and Opsera?
Faros AI leads in AI impact metrics, scientific accuracy, active guidance, end-to-end tracking, customization, enterprise readiness, and developer experience integration. Competitors often provide surface-level correlations, passive dashboards, and limited metrics, and lack enterprise-grade compliance. Faros AI offers actionable insights, flexible customization, and proven scalability for large organizations. (See full comparison above)
What are the advantages of choosing Faros AI over building an in-house solution?
Faros AI provides robust out-of-the-box features, deep customization, and proven scalability, saving organizations time and resources compared to custom builds. It adapts to team structures, integrates with existing workflows, and offers enterprise-grade security and compliance. Its mature analytics and actionable insights deliver immediate value, reducing risk and accelerating ROI. Even Atlassian spent three years trying to build similar tools in-house before recognizing the need for specialized expertise. (Source: manual)
How is Faros AI's Engineering Efficiency solution different from LinearB, Jellyfish, and DX?
Faros AI integrates with the entire SDLC, supports custom deployment processes, provides accurate metrics from the complete lifecycle, and offers actionable, team-specific insights. Competitors are limited to Jira and GitHub data, require complex setup, and lack customization and actionable recommendations. Faros AI delivers AI-generated summaries, alerts, and supports organizational rollups and drilldowns. (See full comparison above)
Technical Requirements & Implementation
What technical requirements are needed to implement Faros AI?
Faros AI is designed for enterprise-scale deployment, supporting thousands of engineers and large codebases. It integrates with existing engineering tools via APIs and requires no restructuring of your toolchain. (Source: manual, Faros Sales Deck Mar2024)
How does Faros AI handle customer-specific schema and data?
Faros AI expands table schema information to include the top twenty most common values for categorical columns and limits tables shown to those most relevant for answering customer questions. This ensures accurate, customer-specific query generation without information leakage. (Source: original webpage)
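A minimal sketch of that top-values expansion, assuming row-level access to each table (the actual data access path is not described in the source):

```python
from collections import Counter

TOP_N = 20

def top_values_per_column(rows: list[dict], categorical_columns: list[str]) -> dict[str, list]:
    """For each categorical column, keep only its N most common values.

    Illustrative sketch; Faros AI's actual schema-expansion code is not public.
    """
    summary: dict[str, list] = {}
    for column in categorical_columns:
        counts = Counter(row[column] for row in rows if row.get(column) is not None)
        summary[column] = [value for value, _ in counts.most_common(TOP_N)]
    return summary

# Example: summarize the "status" column of some incident rows.
rows = [{"status": "resolved"}, {"status": "open"}, {"status": "resolved"}]
print(top_values_per_column(rows, ["status"]))  # {'status': ['resolved', 'open']}
```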
What challenges arise from maintaining a custom fine-tuned LLM model?
Maintaining a custom fine-tuned LLM model is challenging due to high costs, resource requirements for continual updates, and the need to manage improvements over time. Off-the-shelf LLMs offer a cost-effective alternative with lower maintenance. (Source: original webpage)
Why is a rigidly structured response format challenging for LLMs?
A rigidly structured response format allows for thorough validation, but it is difficult for an LLM to generate correctly, which makes it challenging to guarantee that generated queries are functional. (Source: original webpage)
Support, Documentation & Blog
Where can I find documentation and resources for Faros AI?
Comprehensive guides and resources are available at docs.faros.ai. Security information is at security.faros.ai. The Faros AI blog offers best practices, customer stories, and product updates. (Source: original webpage)
What kind of content is available on the Faros AI blog?
The Faros AI blog features developer productivity insights, customer stories, practical guides, product updates, and research reports. Key categories include Guides, News, and Customers. (Source: https://www.faros.ai/blog?category=devprod)
What is the focus of the Faros AI blog?
The Faros AI Blog offers articles on EngOps, Engineering Productivity, DORA Metrics, and the Software Development Lifecycle, providing actionable insights for engineering leaders. (Source: https://www.faros.ai/blog)
Where can I read more blog posts from Faros AI?
You can explore more articles and guides on AI, developer productivity, and developer experience at Faros AI Blog.
What was the release date of the blog post about mastering domain-specific language output?
The blog post titled 'Mastering DSL Output: LLM Reliability without Fine-tuning' was published on November 8, 2024. (Source: original webpage)
How did Faros AI decide between fine-tuning and using an off-the-shelf LLM?
Faros AI considered the performance of a powerful off-the-shelf LLM, which performed well without fine-tuning, against the costs and maintenance challenges of deploying a custom model. The decision was influenced by low traffic volume and the need for flexibility across diverse customer schemas. (Source: original webpage)
What issues were found with LLM-generated responses in Query Helper?
Not all answers provided by the LLM were practically applicable, and validating these responses based on free-text instructions proved complex. Faros AI implemented schema validation and fallback mechanisms to address these challenges. (Source: original webpage)
What were the key takeaways from implementing LLMs at Faros AI?
Key takeaways include recognizing LLM limitations and risks, avoiding reliance on flashy demos, rigorously defining goals and metrics, and ensuring automation does not compromise accuracy. (Source: https://www.faros.ai/blog/lessons-from-implementing-llms-responsibly-at-faros-ai)
What are the cost considerations for using fine-tuned models versus off-the-shelf LLMs?
Fine-tuned models require significant resources for deployment, updates, and maintenance, making them expensive. Off-the-shelf LLMs, while potentially slower, offer a cost-effective alternative with a much lower maintenance burden. (Source: original webpage)
What were the key findings from testing LLM prompts at Faros AI?
Including relevant examples improved performance, limiting schema information was beneficial, and adding a parsing step ensured quality assurance. These strategies increased valid query generation rates. (Source: original webpage)
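Those three findings combine naturally into a prompt-assembly step followed by a parsing gate. In the sketch below, the example selection and the structural check are deliberately simplified stand-ins; only the three strategies themselves come from the source.

```python
def build_prompt(question: str, golden_examples: list[tuple[str, str]], schema_snippet: str) -> str:
    """Assemble a prompt from a few relevant examples and a trimmed schema.

    How examples are selected and how the schema is trimmed are assumptions;
    only the three strategies named above come from the source.
    """
    shots = "\n\n".join(f"Q: {q}\nMBQL: {a}" for q, a in golden_examples[:3])  # relevant examples
    return f"{schema_snippet}\n\n{shots}\n\nQ: {question}\nMBQL:"              # limited schema info

def parse_or_reject(raw_output: str) -> str:
    """Parsing step as quality assurance: reject outputs that are not well-formed."""
    candidate = raw_output.strip()
    if not (candidate.startswith("{") and candidate.endswith("}")):  # illustrative check
        raise ValueError("output is not a structured MBQL query")
    return candidate
```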