
How to Identify Code Complexity’s Impact on Developer Productivity

Machine learning models signal when it’s time to pay down technical debt.

Neely Dunlap

September 24, 2024

Code complexity is nearly unavoidable in the modern software development landscape. As businesses innovate to satisfy rising demands, the introduction of new features gradually increases code complexity over time. If this complexity is not addressed, it escalates and compounds, increasing bugs and technical debt while decreasing developer productivity.

While tools now exist to prevent complexity from creeping in at the individual code change level, many companies still struggle to address the complexity already embedded in their codebases due to the time-consuming nature, substantial expense, and inherent risk of refactoring existing systems.

So how do you know when code complexity has become a leading contributor to lost developer productivity? When is it time to address the issue head-on and prioritize simplification?

Machine learning models may provide the answer.

Recent R&D from Faros AI in developer productivity analytics, automated issue detection, and the ranking of potential causes can highlight when code complexity is becoming a blocker.

What is code complexity?

Code complexity refers to the intricacy and sophistication of a software program, defined by the ease or difficulty of understanding, modifying, and maintaining the code. There are two main types of code complexity: cyclomatic and cognitive.

  • Cyclomatic complexity is a quantitative measurement. First introduced by Thomas J. McCabe in 1976, this metric counts the number of linearly independent paths through a program module, or, put simply, how many decisions are made in your source code. Lower scores are better: they indicate code that is easier to understand and test, less likely to produce errors, less risky to troubleshoot and modify, and hence easier to maintain (see the sketch after this list).
  • Cognitive complexity is a qualitative measurement. It assesses how difficult the code is for humans to read, understand, and maintain. Determining cognitive complexity considers factors such as nesting levels, control flow jumps, logical operations, decision points, recursion, and complex data structures to identify code that may be challenging to work with. Think of it this way: clean code reduces cognitive load, so cleaner code carries lower cognitive complexity.
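
To make the cyclomatic metric concrete, here is a minimal, hand-annotated Python sketch (the function and its names are hypothetical). For a single function, McCabe's score is the number of decision points plus one:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    is_fragile: bool

@dataclass
class Order:
    total: float
    is_express: bool
    items: list = field(default_factory=list)

def shipping_cost(order: Order) -> float:
    cost = 5.0
    if order.total > 100:        # decision 1
        cost = 0.0
    elif order.total > 50:       # decision 2
        cost = 2.5
    if order.is_express:         # decision 3
        cost += 10.0
    for item in order.items:     # decision 4 (loop condition)
        if item.is_fragile:      # decision 5
            cost += 1.0
    return cost

# Five decision points + 1 = cyclomatic complexity of 6, meaning
# six linearly independent paths must be covered to fully test it.
```

Every branch added to a function like this multiplies the paths a reviewer must reason about and a test suite must cover, which is why analyzers flag high scores.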

As both cyclomatic and cognitive complexity increase, so does the impact on developer productivity. Complex codebases are more prone to bugs and unexpected behavior, often forcing developers to divert time and energy from important feature work to debug and troubleshoot issues.

Furthermore, when codebases are overly complex, developers must spend more time and effort trying to understand the existing system, identify dependencies, and determine the safest way to make even small changes.

The cognitive burden of working with highly complex code can lead to developer fatigue and frustration, hampering their motivation and focus, while frequent context switching between different parts of a sprawling codebase slows down their ability to implement new features or enhancements efficiently.

What are the main contributors to code complexity?

Code complexity increases as software evolves. As a codebase grows, the increase in code volume naturally leads to greater complexity. More dependencies and more execution paths mean more debugging and heavier maintenance. Even the most well-written, well-organized code becomes harder to manage over time, which is why this issue is nearly unavoidable.

Aside from volume, a host of other practices and processes across the software development lifecycle can contribute to code complexity. Code complexity can arise from:

  1. Flawed Architectural and Design Decisions, such as maintaining monolithic architecture for too long, choosing inappropriate frameworks, and neglecting long-term scalability considerations.
  2. Poor Code Quality and Maintenance, such as poor code clarity and readability, skipping or inadequately performing code reviews, and improperly managing dependencies.
  3. Ineffective Project Management and Execution, such as unchecked feature creep, mismanaged talent and resources, and poor version control practices.
  4. Inadequate Documentation and Legacy Integration, such as poor documentation practices and difficulty incorporating legacy code.

When left unchecked, all of these elements can lead to long-term, systemic code complexity issues that are difficult to resolve.

What are the best practices to avoid code complexity?

To proactively manage code complexity and avoid its compounding effects, there are several best practices companies can follow.

Balance cohesion and coupling in your codebase.

Cohesion and coupling are key concepts in software design that significantly impact code complexity.

  • Cohesion refers to how closely related and focused the responsibilities of a single module, class, or function are. In simpler terms, it measures how well the elements within a module work together to achieve a single, well-defined task. High cohesion keeps related functionality together, which typically makes the code easier to understand, maintain, and test.
  • Coupling refers to the degree of dependency between different modules, classes, or functions. It measures how closely connected different parts of a system are. Low coupling means that modules or components are independent of each other, with minimal dependencies, so changes in one area are less likely to affect other areas.

The ideal scenario is to achieve high cohesion within modules while maintaining low coupling between them. This balance ensures that each module is focused and self-contained, and changes in one module have minimal impact on others. Managing these aspects effectively leads to more maintainable, less complex code.
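
As a minimal sketch of what this balance looks like in practice (all names here are hypothetical), each class below has one focused responsibility, and the service depends only on a narrow callable interface rather than a concrete delivery mechanism:

```python
from typing import Callable

class ReportFormatter:
    """High cohesion: this class only knows how to turn rows into text."""
    def format(self, rows: list[dict]) -> str:
        return "\n".join(str(row) for row in rows)

class ReportService:
    """Low coupling: depends on a send callable, not on SMTP, files, or chat."""
    def __init__(self, formatter: ReportFormatter, send: Callable[[str], None]):
        self.formatter = formatter
        self.send = send

    def publish(self, rows: list[dict]) -> None:
        self.send(self.formatter.format(rows))

# Swapping the delivery mechanism requires no change to ReportService:
ReportService(ReportFormatter(), send=print).publish([{"kpi": "lead_time"}])
```

Because ReportService never imports an email or chat library, a change to how reports are delivered cannot ripple into how they are produced, which is exactly the containment low coupling is meant to buy.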

Use static code analysis tools for PR monitoring.

Static code analysis involves examining the source code of a program to identify potential vulnerabilities, errors, or deviations from prescribed coding standards. Types of static code analysis tools include bug finders, security scanners, type checkers, complexity analyzers, dependency checkers, and duplicate code detectors—all designed to address specific dimensions of code quality, security vulnerabilities, and maintainability challenges.

Tools such as Codacy and Sonar offer immediate feedback during the development process and can be integrated and automated in two main ways:

  • within CI/CD pipelines to run checks during builds or deployments
  • with version control systems, like GitHub or GitLab, to analyze code during pull or merge requests

Whenever a PR is submitted or code is merged, these tools perform checks to ensure the new code is free of vulnerabilities and meets quality standards, helping to minimize code complexity by identifying issues early and keeping the codebase consistent.
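
As an illustration of what such a gate might look like under the hood, here is a minimal, self-contained Python sketch (the threshold and invocation are hypothetical; dedicated tools like Codacy and Sonar are far more thorough). It approximates cyclomatic complexity by counting branching AST nodes per function and exits nonzero, failing the check, when any function exceeds the threshold:

```python
import ast
import sys

THRESHOLD = 10  # hypothetical limit; fail any function above this
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def complexity(func: ast.FunctionDef) -> int:
    # Approximate McCabe: one plus the number of branching nodes.
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

def main(paths: list[str]) -> int:
    failed = False
    for path in paths:
        with open(path) as f:
            tree = ast.parse(f.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                score = complexity(node)
                if score > THRESHOLD:
                    print(f"{path}:{node.lineno} {node.name} complexity={score}")
                    failed = True
    return 1 if failed else 0  # a nonzero exit code fails the PR check

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Wired into a CI job that runs on every pull request, a script like this surfaces complexity regressions before they merge, which is the whole point of shifting these checks left.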

Set thresholds at the release level when testing the mainline.

Sometimes, such as when using a monorepo model, two separate code updates are reviewed at the same time. Each passes static code analysis on its own and is merged into the main branch, seeming completely fine in isolation.

But once they land together, new integration challenges may arise and break the mainline. Because integration-level checks typically run on the mainline rather than as part of the pull request process, the impact isn't immediately evident; it is felt when breakages surface further down the development process and add to code complexity.

To manage and prevent this, you can set up an additional step to automatically test the main branch whenever changes are made and block the release until any issues are fixed. This strategy helps control code complexity by catching integration issues early and reducing the risk of compounding problems, thus ensuring a cleaner, more reliable codebase.
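
Here is a minimal sketch of such a gate, assuming a pytest-based suite and a CI system that treats a nonzero exit code as a blocked release (both are assumptions; your stack will differ):

```python
import subprocess
import sys

def mainline_gate() -> int:
    # Run the full integration suite against the merged mainline state.
    result = subprocess.run(["pytest", "tests/integration"])
    if result.returncode != 0:
        print("Mainline is broken: blocking the release until it is fixed.")
        return 1  # nonzero exit fails the pipeline, holding the release
    print("Mainline is green: release may proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(mainline_gate())
```

In most CI systems this would simply be a job in the main-branch pipeline rather than a standalone script, but the principle is the same: no green mainline, no release.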

If you've come across this article, you're probably already aware of the high code complexity in your systems, but you've postponed addressing it to focus on customer-facing priorities.

While understandable, it is important to identify when high code complexity is degrading developer productivity to the point that it significantly affects the business (and customers) in terms of:

  • Lead time (time to market)
  • Customer satisfaction (number of bugs and support tickets, CSAT)
  • Time to resolve issues (MTTR and SLA adherence)

But how do you determine whether code complexity is the significant factor dragging down productivity when there are multiple factors at play?

Machine learning helps identify when code complexity has reached a tipping point

Devoting multiple cycles, months, or—let’s be honest—years to rearchitecting and refactoring code is not a decision made lightly. But it is necessary if it’s the number one factor impacting key performance metrics.

In the past, companies looking to understand the impact of their high code complexity turned to human data analysts to parse through complex code and make recommendations. Imagine some poor soul tasked with manually combing through mountains of code, building dozens of dashboards to track metrics for every team, comparing those metrics to factors like Jira tickets, team seniority, number of services owned, deployments per week, and every other factor of influence, and then trying to decide which of these hundreds of factors is actually causing slow lead time. Not only is this impractical, it's also a huge drain on time and money to understand code complexity's impact and potential causes in this manner.

But now, machine learning solutions, like those developed by Faros AI, offer a better way.

How do machine learning models determine code complexity’s impact?

Faros AI uses machine learning to ingest and analyze data from numerous key performance indicators, such as change failure rate, lead time for change, pull requests, cycle time, successful deployments, and incident resolution times, alongside cyclomatic complexity scores from tools like Codacy and Sonar.

This data is then examined across teams to identify significant differences and uncover potential causes for the discrepancies. Faros AI identifies correlations across conditions to pinpoint whether high code complexity is the main contributor. For example, if PR cycle times are rising rapidly and high code complexity is identified as a key factor, leaders have a concrete piece of evidence that it may be time to address the issue.

Furthermore, Faros AI’s platform can juxtapose these code complexity insights with developer survey data. If developers report coding complexity issues in surveys and this feedback aligns with the quantitative data, this combined picture gives leaders a compelling reason to consider tackling this compounding challenge and address it more effectively.
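
As a toy illustration of the general idea (this is not Faros AI's actual model, and the numbers below are invented purely for the example), you can picture ranking candidate factors by how strongly they correlate with a delivery KPI across teams:

```python
import pandas as pd

# Synthetic per-team data, fabricated for illustration only.
teams = pd.DataFrame({
    "avg_cyclomatic_complexity": [4.2, 11.8, 6.1, 14.3, 5.0, 9.7],
    "open_dependency_count":     [12, 15, 30, 14, 22, 18],
    "pr_cycle_time_days":        [1.1, 3.4, 1.8, 4.2, 1.3, 2.9],
})

# Rank candidate factors by the strength of their correlation
# with the KPI of interest (PR cycle time).
correlations = (
    teams.corr()["pr_cycle_time_days"]
         .drop("pr_cycle_time_days")
         .abs()
         .sort_values(ascending=False)
)
print(correlations)
```

A production system does far more (controlling for confounders, comparing cohorts, and ranking causes across many signals), but the output shape is the same: a prioritized list of likely contributors, with code complexity either at the top or not.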

How is new AI technology affecting code complexity?

As many engineering organizations adopt AI coding assistants, it's critical to understand their impact on code complexity. GeekWire published an article exploring findings from GitClear's research on AI copilots and their impact on code quality. The findings indicate that while AI coding assistants make adding code simpler and faster, they can also degrade quality through:

  • Increasing levels of “code churn”: Because developers can generate code more quickly, higher percentages of code are being thrown out within a couple of weeks of authoring. The resulting rapid, frequent changes increase the risk of mistakes being deployed to production.
  • Disproportionate increases in “copy/pasted code”: The rate of copy/pasted code additions significantly exceeds thoughtful updates or restructuring of existing code, and the hastily generated segments often fail to integrate cleanly into the broader project architecture. This creates ongoing, compounding challenges for the team tasked with maintaining the code thereafter.

These practices are generally seen as a negative indicator of code complexity. If your engineering organization is using AI copilots, Faros AI can illuminate this “AI-induced tech debt” and demonstrate its impact on downstream metrics. Armed with this insight, engineering leaders can take steps to mitigate these issues and promote better processes to support the ongoing health and manageability of their codebases.

Curious to discover how code complexity is affecting your KPIs and goals?

Whether or not you decide to embark on a refactoring and simplification initiative, it’s imperative you’re aware of how code complexity is affecting your development teams.

If you know it’s time to take action but you’re unsure where to start, or if you’re just curious to see how much longer you can sweep increasing code complexity under the rug (jokes), Faros AI’s engineering intelligence solutions can provide you with the answers for informed decision-making.

Request a demo to learn more.
