Anti-Patterns with Measuring Engineering Productivity

By Antoine Boulanger, 4/25/2021

Engineering is a team effort and a creative task, so factory-style definitions of productivity do not apply. Successful engineering productivity is about enabling engineers to produce high-quality work, which is done by removing the blockers that get in their way.

Fortunately, engineering managers can still approach engineering productivity problems using the same three-step process as most analytics projects:

  1. Measure the right things (understand how the complex system works)
  2. Track the metrics (using technical means to measure the right things)
  3. Take action (by implementing changes through both human and technical systems)

From our experience helping companies improve their engineering effectiveness, we've assembled the most common failure modes, especially those that may seem counterintuitive:

Measuring the Wrong Things 

Accurate measurement and modeling give you the lay of the land. It's about finding the right place to look before you delve into specific monitoring activities. Most problems at this step revolve around measuring the wrong areas or trying to implement improvements before you understand the situation.

Mistake #1: Putting Metrics before Visibility 

Before you even select specific metrics to track, you must understand how your process functions:

  • What are people working on? 
  • How do the activities fit together? 
  • Which individuals/teams are often blocked, in what ways?

You must gain visibility before you assign metrics, or you run the risk of measuring the wrong elements.

Before implementing metrics, a manager/leader should ask themselves: "How do I know what's happening in my team(s)?" If the solution relies on many human conversations or multiple different, non-automated tools, you need better visibility.

For example, if a team feels like their continuous integration system is a development bottleneck, it's tempting to jump straight to accelerating individual job run times. If the actual root cause is flaky tests, however, or a lack of good local testing options and habits, this effort won't solve the underlying problem.
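One way to check the flaky-test hypothesis before optimizing job runtimes is to look for tests that both pass and fail on the same commit. Here's a minimal sketch in Python; it assumes you've exported CI results as `(commit, test, passed)` records, a shape that's illustrative rather than any particular CI provider's format:

```python
from collections import defaultdict

def find_flaky_tests(ci_results):
    """Flag tests that both passed and failed on the same commit.

    ci_results: iterable of (commit_sha, test_name, passed) tuples,
    e.g. exported from your CI system (the shape is hypothetical).
    """
    outcomes = defaultdict(set)  # (commit, test) -> set of observed outcomes
    for commit, test, passed in ci_results:
        outcomes[(commit, test)].add(passed)
    # Both outcomes on the same commit means the failure is nondeterministic.
    return sorted({test for (commit, test), seen in outcomes.items()
                   if len(seen) == 2})

results = [
    ("abc123", "test_login", True),
    ("abc123", "test_login", False),   # same commit, both outcomes -> flaky
    ("abc123", "test_search", True),
    ("def456", "test_search", False),  # different commits -> likely a real break
]
print(find_flaky_tests(results))  # ['test_login']
```

If the flaky list is long, fixing or quarantining those tests will likely do more for the perceived CI bottleneck than shaving minutes off job runtimes.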

Once you can access the different pieces of your team's activities reliably and quickly, you'll have an accurate understanding of what levers can prompt precise outcomes.

Mistake #2: Measuring Only What's Easy 

Your code base doesn't tell the whole story. Managers tend to use git data because it's the easiest to measure, but what about other elements that may be even more impactful?

Calendar and alert data, for instance, provide an exceptional predictor of burnout and will never appear in your code data itself.

The best metrics encapsulate as much helpful information as possible. Many engineering teams exclusively measure code commits and engineer satisfaction (when they measure any data at all!). If you're interested in a more holistic viewpoint, consider incorporating upstream, human elements. 

Mistake #3: Modeling the Individual instead of the Team 

Engineering is a complex system in which hundreds of people work on different components. Success and failure are both emergent activities without single root causes. Instead of modeling performance at the level of individual engineers, look at the team level.

Attempts to isolate individual performance will always become gamified, leading to status-seeking behavior instead of increased effectiveness.

Punishing individual engineers, meanwhile, leads to a sort of "negative game" in which individuals focus on avoiding the perception of poor performance (instead of focusing on improved end results). In either case, individualized metrics cause individuals to optimize for those specific goals, regardless of the goals' impact on productivity.

Instead, the best engineering teams focus on team performance, placing the accountability onto managers to create an effective team system (which they typically do by finding and removing their team's blockers). Since engineering effectiveness is a complex system, focusing on the team level lets you drive results while accounting for human nuance. 

Mistake #4: Only Measuring in Case of Emergency 

Some teams avoid measurement entirely until something goes wrong.

The resulting "Code Red" or "All-Hands-On-Deck" initiatives and numerous incident dashboards give a false sense of control; in the end, this strategy leads to finger-pointing (and the resultant loss in team morale), and fails to improve your team's day-to-day.

If you want to maximize efficiency in all cases, you should understand your team's activities in every context: when times are good, when they're bad, and especially when they're muddled.

Mistake #5: Measuring to "Check the Box"

When a problem arises, does your organization implement a solution or do managers merely "check the boxes" to keep their jobs?

Qualitative surveys are a frequent culprit of "checking the box" without real improvement, for a few reasons:  

  • Qualitative surveys are imprecise. These surveys only gather personal approximations of happiness and productivity, elements strongly influenced by employee perceptions at the time the survey is run. Worse, they only gather data every once in a while (annually, for instance), which fails to provide a sufficiently precise picture of which elements are causing problems. 

  • Most qualitative survey results go unacted upon. Even when the data highlights important issues, it's frequently forgotten a few months later. The information isn't contextualized enough to be actionable, so it merely leads to a social hubbub of leaders promising big changes... which amounts to little more than wasted time and resources.

Is your leadership actually incentivized to improve productivity? What about the managers and individual contributors? Then, do they understand enough of the actual on-the-ground activities to implement improvements at a reasonable level of precision? 

Pitfalls in Tracking Metrics 

Once you have visibility into your team's process, it's time to track the metrics that matter. When performed properly, tracking offers insights into your team's on-the-ground activity and notifies you of concerning developments, even before they become real problems. When you're ready to measure metrics, watch out for these common mistakes: 

Mistake #6: Making Manual Measurements

When initially measuring your team's activities, you may need to export data into individual spreadsheets to understand how your engineering system works. Once you're ready to track metrics, however, that process is slow, tedious, and lacks historical context.

Tracking should enable improvements over the long run. Most one-off spreadsheets quickly become irrelevant or lost to time. And, even when they're maintained, do you really want your team's productivity to depend on a hodgepodge of individually-checked documents?

Engineering effectiveness is about the long run, so single point-in-time measurements should be used exclusively for understanding your team's habits and common practices, not for long-term tracking and improvement.
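Replacing one-off spreadsheets usually means a small scheduled job that computes the same number automatically. Here's a hedged sketch that derives median code-review turnaround from exported pull-request records; the field names are illustrative, not any real API's:

```python
from datetime import datetime
from statistics import median

def median_review_hours(prs):
    """Median time from PR opened to first review, in hours.

    prs: list of dicts with ISO-8601 'opened_at' / 'first_review_at'
    timestamps (field names are hypothetical, not a real API's).
    """
    hours = []
    for pr in prs:
        opened = datetime.fromisoformat(pr["opened_at"])
        reviewed = datetime.fromisoformat(pr["first_review_at"])
        hours.append((reviewed - opened).total_seconds() / 3600)
    return median(hours)

prs = [
    {"opened_at": "2021-04-01T09:00:00", "first_review_at": "2021-04-01T13:00:00"},  # 4h
    {"opened_at": "2021-04-02T10:00:00", "first_review_at": "2021-04-03T10:00:00"},  # 24h
    {"opened_at": "2021-04-03T08:00:00", "first_review_at": "2021-04-03T09:00:00"},  # 1h
]
print(median_review_hours(prs))  # 4.0
```

Run on a schedule (a daily cron job, say) and written to a shared dashboard, the same number that once lived in a hand-updated spreadsheet accumulates history automatically.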

Mistake #7: Re-Inventing the Wheel 

Productivity dashboards are intricate and complex. Dealing with dozens of APIs, org charts and various notions of team structures, projects and service catalogs is very hard. We've talked with more than a few companies who set out to build productivity tools in-house. They expect the process to take a single engineer something like two months... then, at six months, they add a second engineer... and so on.

Finding and tracking the correct metrics isn't easy. While your team may be fully able to build a productivity dashboard, the activity requires a significant amount of experimentation and expertise. It can easily evolve into a time sink, distracting expensive engineers from more valuable activities.

Mistake #8: Implementing "Magical" Metrics

"Magical", all-encompassing metrics may appear particularly precise, but watch out: they're typically too good to be true.

For example, some tech companies aim to evaluate the revenue impact of each individual line of code. While this sort of metric would be very helpful, for a variety of reasons, it's impossible to calculate: 

  1. Company revenue includes a whole host of activities outside of code, such as marketing, sales, and design. Crediting dollar amounts to specific lines of code misunderstands the complex synergy of these moving pieces. 
  2. Each line of code is part of both a complex codebase and culture that it draws from and contributes to in myriad nuanced ways. If it improves your product but decreases the rest of your team's efficiency, how much is that line of code really worth? 
  3. As a company and product evolve, the value of code changes over time. Sometimes, today's highly valuable code may even become tomorrow's code debt!

There is no "silver bullet" metric to maximize your engineering effectiveness. Magical metrics may look appealing at first, but you should see them for the sirens they are: mathematically meaningless, they'll lead you astray.

Actionability Issues 

Your metrics only matter when your managers can improve them. Ensure you're incorporating this actionable element into your productivity strategy or risk your productivity work stopping short of your bottom line.

Mistake #9: Assigning Metrics to the Wrong Teams 

The place where you measure is not always the best place to implement improvements.

For example, every large tech company has a centralized team responsible for testing and deploying code from multiple sources. In this context, data on deployment time isn't actionable outside of the deployment team. Holding other teams to it would be akin to blaming a car's passenger for the driver's speeding.

If a metric applies to an individual team, that team's manager is the right owner. But in the case of inherently centralized metrics, that centralized team should be in charge of their improvement, enabled in part by the ability to push additional requirements/requests to other teams. 

Mistake #10: Making Metrics Pull-Only

If a process is working well, you shouldn't have to visit your dashboard to consciously check it. Instead, your engineering effectiveness metrics should let you cruise while you're within bounds, only notifying you when you get off track.

Pull-based systems require intentional metric checking, while push-based systems operate in the background. By enabling a user to add their own goals and receive email/Slack alerts if the goals aren't met, a push system enables "set it and forget it", giving you peace of mind while you're implementing your plans. 
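The push model can be sketched in a few lines: stay silent while every metric meets its goal, and fire the notifier only on a breach. The goal thresholds and the `notify` callable below are placeholders; a real setup would wire `notify` to a Slack webhook or an email sender:

```python
def check_goals(metrics, goals, notify):
    """Push-style check: silent while metrics meet their goals,
    calls notify(...) only for metrics that drift out of bounds.

    metrics / goals: {metric_name: value} / {metric_name: max_allowed}
    notify: any callable, e.g. a Slack or email sender (placeholder here).
    """
    breaches = []
    for name, limit in goals.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            notify(f"{name} is {value}, above the goal of {limit}")
            breaches.append(name)
    return breaches

alerts = []
check_goals(
    metrics={"median_review_hours": 30, "after_hours_alerts": 2},
    goals={"median_review_hours": 24, "after_hours_alerts": 5},
    notify=alerts.append,  # stand-in for a real Slack/email notifier
)
print(alerts)  # one message, for median_review_hours only
```

Scheduled to run after each metric refresh, a check like this is what turns a dashboard you must remember to read into a system that taps you on the shoulder.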

Mistake #11: Leadership Failing to Set Implementable Goals 

Even if an engineering element is a flashing red warning sign, the right people might not be enabled or incentivized to fix it.

Most pressure comes from the top of the organization, so engineering effectiveness requires your head of engineering to articulate productivity goals as a priority. Then, even when the leadership does implement a goal (e.g. "lower code review time over the next quarter"), each team needs its own subgoals and measurements.

To improve team productivity, leaders should align the team in a direction by making it clear that they are paying attention to a specific metric from now on (e.g. the number of alerts affecting engineers after hours). Then, they should stay away from prescribing particular solutions:  engineers are smart and will figure out specific solutions on their own. 

Accelerating Engineering Effectiveness 

Leading an engineering organization in a data-driven, effective way is as complex as any business intelligence project. From our experience, the best-performing teams:

  • Decide what matters — including a clear understanding of how the different elements activate, how long they take, who performs them, and what that process looks like — before tracking or implementing changes. 
  • Track those metrics in scalable ways. 
  • Create an aligned incentive structure that drives ownership and action.

If you come across any questions along your engineering effectiveness journey, consider trying Okay; we'd love to help improve your process.