Leading and Lagging Indicators:
How to Measure the Progress of Product OKRs
Most product teams review goals in hindsight to gauge success. But what if you didn’t have to wait until completion to start evaluating, gaining clearer insight into the actions you take to reach those goals? After all, building products is about learning and iterating, which requires setting and measuring metrics that lead you towards your ultimate goal. A lag between your actions and the measurement of success hinders that course-correction.
Last Updated: Sep 21, 2023
Ideally, Objectives and Key Results (OKRs) help express what you want to achieve (outcome metrics), instead of only listing what you want to do (outputs).
Using OKRs in Product Management means that Product Goals emerge from your Product Strategy Choices and serve as a bridge to inform your actions during Product Discovery and Delivery. Naturally, most product teams focus on metrics that capture the ultimate success from one of the most important strategic themes, like monetization, user engagement, growth, satisfaction, or process quality.
But the point of setting goals shouldn’t just be to measure success at the end of a goal cycle or fiscal year (which can often be seen in OKR Theater). Instead, the most significant benefit comes from regularly checking in on your progress and adjusting your actions accordingly.
Moving towards a Learn-Measure-Build Cycle of Product Goals
To experience these benefits, product teams need to rely on metrics that respond to their actions as directly and quickly as possible, instead of just reflecting on results of the past.
This article will help you understand:
- Using Leading and Lagging Indicators in Product Management
- Differentiating Leading and Lagging Indicators Through Context
- Causation and Correlation in Lagging and Leading Indicators
- Identifying the Right Leading Indicators for Your Product
- Using Insights to Work Backward
- Adapting Leading Indicators to Outcome OKRs
- Moving beyond Outcomes ‘by the book’
- Takeaway: Prioritize Leading Indicators to Avoid Lagging Decision-Making
Using Leading and Lagging Indicators in Product Management
Individual product teams often realize that their goals matter at the company level, but the pace at which these company goals change creates a lag in how teams receive and act on feedback about their work’s impact. By the time metrics like company revenue or user growth have changed significantly, you can’t tie these lagging indicators to any of your previous efforts, and you are already working on the next initiative.
Yet there are plenty of metrics individual teams can more directly influence –– at least in theory. To productively adjust actions based on continuously measured progress, teams need metrics that lead them towards success. This is why these are called leading indicators.
Leading indicators allow you to predict the future. They are hard to create but worth it for their predictive value.
In a nutshell, the key differences between leading and lagging indicators can be summarized like this:
The Difference between Leading and Lagging Indicators
These differences become starker in the following examples. But keep in mind that they are just that –– examples. What a leading or lagging indicator looks like for you depends highly on your teams, company, and products. Even metrics like NPS or Churn, which might seem purely lagging, can mean different things at different levels of a company. The following shows the type of relationship these two metrics have in general, not what your leading and lagging indicators must be.
Product Metrics Examples for Leading and Lagging Indicators
As with so much of Product Management, there’s lots of grey in this seemingly black-and-white perspective on leading and lagging indicators. But having a clear separation of leading and lagging indicators can serve as a critical input for defining product teams’ OKRs.
Let’s unpack this.
Differentiating Leading and Lagging Indicators Through Context
Ultimately, the more precise categorization of what makes a metric leading or lagging depends on the context. Even though there are some common patterns of what types of Key Results are often rather lagging or leading, there is no simple way to determine which is which without a deep understanding of how they are used in a particular situation. This means that a given metric might be leading in one scenario, but lagging in another. Let’s look at an oversimplified example:
Imagine you’re part of the product team that works on an email service platform’s integrations –– we'll call it "Postgorilla". And, for the sake of example, let's say Postgorilla’s company-level strategy choice is to execute "Increasing our best customers' time and money investment in us."
They might do that through a Key Result like "Average Quarterly Upgrade Revenue per Customer of $250," among others. Here, an increase in the ARR of their heavy users ("Apelovers") would be the lagging indicator (the end goal) that is only measurable in hindsight, while the average quarterly upgrade revenue allows them to predict the likelihood of reaching that Key Result.
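As a rough illustration, a Key Result like this can be checked continuously with a few lines of code. The sketch below assumes a simple list of upgrade events with `customer_id` and `amount` fields; the data and field names are hypothetical, not Postgorilla’s actual schema.

```python
# Hypothetical sketch: checking a Key Result like
# "Average Quarterly Upgrade Revenue per Customer of $250"
# against a quarter's upgrade events (fields are illustrative).

upgrade_events = [
    {"customer_id": "a", "amount": 300.0},
    {"customer_id": "b", "amount": 150.0},
    {"customer_id": "a", "amount": 100.0},
    {"customer_id": "c", "amount": 250.0},
]

def avg_quarterly_upgrade_revenue(events):
    """Average upgrade revenue per customer across one quarter of events."""
    totals = {}
    for e in events:
        totals[e["customer_id"]] = totals.get(e["customer_id"], 0.0) + e["amount"]
    return sum(totals.values()) / len(totals) if totals else 0.0

kr_target = 250.0
current = avg_quarterly_upgrade_revenue(upgrade_events)
print(f"current: ${current:.2f} vs target: ${kr_target:.2f}")
```

Running a check like this weekly, rather than once per quarter, is exactly what makes the same number more leading from the company’s perspective.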
While the quarterly “Upgrade Revenue” measurement is a leading indicator on the company’s OKR level, it is a lagging indicator from a product team’s perspective.
For those working on Postgorilla’s integrations marketplace, quarterly upgrade revenue is only measurable in hindsight and is the result of multiple teams’ efforts.
So, to find their leading indicator, this integrations marketplace product team needs to answer the question of what actions they can influence directly and detect frequently. By focusing Discovery efforts on the behaviors and pains of low-MRR heavy users, they might identify more leading Outcomes that can contribute to improving the “Average Quarterly Upgrade Revenue,” like:
- “Use more of my marketing tool stack’s functionality without having to switch apps,” measured through a KR like “No. of actively used integrations per user”
- “Time spent for integration installation procedures”
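To make the first of these KRs concrete, here is a minimal sketch of how “No. of actively used integrations per user” could be computed from weekly usage events, so the team can detect change at every check-in. The event shape and field names are assumptions for illustration.

```python
# Hypothetical sketch: computing "No. of actively used integrations
# per user" from one week of usage events (data shape is illustrative).

weekly_usage = [
    {"user": "u1", "integration": "crm"},
    {"user": "u1", "integration": "ads"},
    {"user": "u2", "integration": "crm"},
]

def active_integrations_per_user(events):
    """Average number of distinct integrations each active user touched."""
    per_user = {}
    for e in events:
        per_user.setdefault(e["user"], set()).add(e["integration"])
    if not per_user:
        return 0.0
    return sum(len(s) for s in per_user.values()) / len(per_user)

print(round(active_integrations_per_user(weekly_usage), 2))  # 1.5
```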
But these are just two dimensions of context around lagging and leading indicators. What about internal-facing or platform teams? Imagine the perspective of a Partnerships Infrastructure product team within Postgorilla. From their point of view, the “No. of actively used integrations per user” is a lagging indicator.
How one team's leading measures are another team's lagging measures
Why? Because they can’t influence it directly, and it will only change as a result of multiple teams’ efforts. So, what’s leading for the customer-facing team becomes lagging for the integrations infrastructure team. Instead of latching on to the leading indicator of another team, they have to find their own. By asking the same questions, they might identify leading indicators like:
- “No. of Default-ready Integration SSOs”
- “Number of days without priority 0 Integration API outages”
This, in turn, helps them land on leading Key Results that they can influence directly.
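A metric like “Number of days without priority 0 Integration API outages” can likewise be derived from an incident log. The sketch below assumes a simple list of incidents with a `day` and a numeric `priority`; the log format is an illustrative assumption.

```python
# Hypothetical sketch: counting outage-free days in a goal period
# from an incident log (field names are assumptions).

from datetime import date

incidents = [
    {"day": date(2023, 7, 3), "priority": 0},
    {"day": date(2023, 7, 3), "priority": 1},
    {"day": date(2023, 7, 20), "priority": 0},
]

def days_without_p0_outages(incidents, start, end):
    """Count days in [start, end] with no priority-0 incident."""
    p0_days = {i["day"] for i in incidents if i["priority"] == 0}
    total_days = (end - start).days + 1
    return total_days - len({d for d in p0_days if start <= d <= end})

print(days_without_p0_outages(incidents, date(2023, 7, 1), date(2023, 7, 31)))  # 29
```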
Avoid reaching for generalized definitions when trying to land on more leading indicators for your next set of OKRs. Instead, consider the context of the metric: Can you influence it directly and measure its change continuously? If yes, it’s probably a valuable leading indicator for you. If no, reverse-engineer the lagging indicator to identify which Key Results can help guide the way –– Using Insights to Work Backward, which we will discuss soon, can be a good place to start.
For now, here are some further dimensions I'd consider when making this separation:
- How the metric is measured: Technically, an NPS measurement could be more leading than revenue, just based on the theoretical user journey. But when the metric is only evaluated once or twice a year, it becomes more of a lagging indicator, because a team can barely detect any change in it.
- The level you want to measure: Revenue and similar metrics are probably fine for company- or department-level OKRs that are only evaluated in hindsight, at the end of a cycle or throughout a year. But for a product team that wants to check whether it is on track every week or two, that doesn't work. Trade the certainty for something less certain but more responsive.
- How you want to work: Will the success of the team only be measured by the contribution to revenue, NPS, etc.? Can you enable the team to find the leading actions that will contribute to said metrics, and judge their success based on individual measurable changes in behavior?
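A side note on the NPS example above: the arithmetic itself is trivial (percentage of promoters minus percentage of detractors on a 0–10 scale), so what makes NPS leading or lagging is purely how often the scores are collected. A minimal sketch:

```python
# Standard NPS formula: % promoters (scores 9-10) minus
# % detractors (scores 0-6). The sample scores are made up.

def nps(scores):
    """Net Promoter Score from a list of 0-10 survey responses."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 6, 3]))  # 2 promoters, 2 detractors out of 5 -> 0.0
```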
It is only when you can take this more granular look at how your particular teams are able to operate that you can truly unlock the power of leading and lagging indicators in the context of your higher-level goal setting.
Causation and Correlation in Lagging and Leading Indicators
While lagging indicators like revenue growth come with the downside of being difficult to change and slow to act on, they also come with a substantial upside: Certainty. Tying actions to lagging indicators means that you are more likely to demonstrate that your efforts directly caused your results, even if this proof can only happen in hindsight.
By nature, leading indicators are less direct than lagging indicators. Because they represent more tactical changes that are easier to make in the present, they may only correlate to, not cause, those lagging indicators’ future success.
In our example, the Postgorilla integrations team arrived at more leading indicators they could use to prioritize their day-to-day work. Based on the real-life usefulness of these indicators, they could have expanded their perspective even further towards leading behaviors like this:
Leading Lagging Indicators Outcome and Output Sequence and Difference
Trying to move a lagging indicator (e.g. revenue) ties your actions to a more absolute, non-negotiable result, but the metric moves much more slowly. Working on a more responsive metric (e.g. page views), on the other hand, can't always be clearly tied to changes in a "more important" metric like revenue. You trade certainty about the metric you work on for feedback and responsiveness.
But how did you get there?
Identifying the Right Leading Indicators for Your Product
Swapping more certain lagging indicators for less certain but easier-to-respond-to leading indicators is crucial to creating an efficient “build” cycle.
In our example with Postgorilla’s integrations team, imagine the company is focused on subscription revenue during this goal cycle. And all three of the drivers of revenue –– new customers, upsells, churn reduction –– will only change significantly through contributions from multiple teams working in different departments at the end of a quarter or even a year.
Working on the integrations features within one department, Postgorilla’s team now faces two distinct challenges:
- While they know where their department’s goals fit in with the bigger picture of the company, these still only allow them to review their actions’ impact in hindsight—at the end of the quarter.
- While they could develop more responsive metrics on a team-level, these wouldn’t have much impact on company or department priorities.
This is a key conundrum that Product Management faces when identifying leading indicators for a team to act on: balancing certainty about indicators at a company level with responsiveness to team priorities.
But those tactical changes don’t always have to be metrics within your product. Using a set of guiding questions to identify leading indicators can help you move in the right direction as a product team.
Leading and Lagging Indicators Guiding Questions
A crucial mindset shift is to look beyond changes aimed at your users. Especially when you work on a product with insufficient quantitative data, considering the actions and behaviors of stakeholders or team members can produce ideas for leading indicators much more quickly. A helpful lens for broadening your perspective beyond direct users is Impact Mapping and its ‘Adjacent Actors’ categorization.
Using Insights to Work Backward
From a product team’s perspective, the core idea is to work backward when trying to identify leading indicators to act on. Start with metrics that might be lagging but are still in plain sight.
In our example, Postgorilla’s integrations product team doesn’t want to start with a customer upsell event, but something more tangible, like the active use of integrations. This still ticks most of the boxes of a lagging indicator but is closer to the team’s core actions and thereby more leading.
Postgorilla’s integrations team can now use the guiding questions to identify branches of candidate metrics that match the criteria for leading indicators.
These indicators should be:
A. Directly linkable to the team actions,
B. Predictors of future success (like the integration use, or ultimately, the customer upsell), and
C. Changing significantly throughout the goal cycle.
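One way to operationalize these three criteria is a simple screening pass over candidate metrics. In this hypothetical sketch, the candidate names and the boolean judgments are illustrative placeholders a team would fill in during the exercise, not real Postgorilla data:

```python
# Hypothetical sketch: screening candidate metrics against criteria
# A (directly linkable), B (predictive), C (changes within the cycle).

candidates = [
    {"name": "Customer upsell revenue",
     "directly_linkable": False, "predictive": True, "changes_in_cycle": False},
    {"name": "Active integrations per user",
     "directly_linkable": True, "predictive": True, "changes_in_cycle": True},
    {"name": "Integration install time",
     "directly_linkable": True, "predictive": True, "changes_in_cycle": True},
]

def leading_candidates(metrics):
    """Keep only metrics that satisfy all three criteria."""
    return [m["name"] for m in metrics
            if m["directly_linkable"] and m["predictive"] and m["changes_in_cycle"]]

print(leading_candidates(candidates))
# ['Active integrations per user', 'Integration install time']
```

The point is not the code but the conversation: the team has to argue explicitly about each boolean before a metric survives the filter.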
Identifying Leading Indicators by working backward
This diagram shows how you might work backward from a lagging indicator to discover leading indicators. The idea behind the different branches in the diagram is to give space to the various themes that might be behind several leading indicators, which might drive the same lagging indicator.
For example, one branch could be focused on the series of actions and behaviors that result from a marketing customer journey, whereas another mirrors the order of technical backend processes.
For Postgorilla’s integrations team, this exercise might lead them (pun intended) to a selection of possible leading indicators like this:
Identifying Product Management Leading Indicators by asking WHAT
This exercise provides a better overview of the different leading indicators and more context about whether pursuing one of them matches a team’s capabilities. For example, the product team would want to focus on indicators they have the most control over, as opposed to ones that depend on another team’s actions, which quickly turns them into lagging indicators. The diagram mainly identifies the kinds of metrics that could be used as predictors of future success. It’s not yet about selecting them or setting ambition levels.
After choosing an indicator, you have to understand the main problem your customers, colleagues, or stakeholders face when executing that action. That’s the problem space that will inform the kind of solutions you want to validate and build—using your chosen leading indicators to measure your work’s success.
There are many different ways this could look and play out depending on your product, team structure, and available data. The most important thing is that you stay close to the core intent of identifying directly influenceable metrics that change at a pace that allows you to course-correct within your goal cycle.
Adapting Leading Indicators to Outcome OKRs
Using OKRs already poses many challenges for product teams, which also have to manage their Product Strategy patterns and Product Discovery decisions. But the most common challenge is choosing Key Results that balance tasks or artifacts (i.e. Output OKRs) with changes in behavior that create results (i.e. Outcome OKRs).
Comparing Output and Outcome OKR Examples
It can be difficult to separate Output and Outcome OKRs if you are used to thinking about OKRs as a single, guiding metric. But finding the distinction between these can be a key way that your team makes decisions on what to tackle next (Output) and how to analyze progress (Outcome).
Leading and Lagging Indicators Matrix
An often-used exercise to turn Output Key Results into more Outcome-ish ones is to repeatedly ask “Why?”:
“Do a Competitor Analysis!” –– Why?
“Create more app ideas.” –– Why?
“Drive the engagement of our mobile users.”
...and so on.
But when also trying to capture Key Results that are more leading than lagging, it’s vital to add a second dimension, one that helps develop a shared understanding of how leading or lagging your Output or Outcome Key Results might be.
Applying Guiding Questions for identifying Leading Indicators
The visualization above is meant to guide you from one quadrant of potential Key Results to another. This matrix doesn’t offer a right-or-wrong perspective. Instead, it serves as a tool for making informed choices about what kinds of metrics work best for your team and your company to measure progress as you work. It helps you make trade-off decisions about the goals you want to achieve instead of letting dogma lead you blindly.
What would this matrix look like after one of our integrations team’s Key Result ideation sessions?
From Lagging to Leading Indicator as a Product Team
As you can see, the Postgorilla team started with a set of leading but Output-oriented Key Results. To arrive at a more Outcome-oriented flight level for their Key Results, they used the question “WHY is this important?” to move up.
And while they technically identified “2.8 Active Integrations per Customer” as an Outcome Key Result, it would have been measurable only toward the end of a cycle. So the team asked, “What actions do our customers have to do more of for our team to succeed?” This conversation generated ideas for more leading-ish yet still Outcome-oriented Key Results.
This process won’t always be as straightforward as in our example. But it illustrates the kind of thinking you can use to check what kinds of Key Results your team is discussing or measuring, and how to shift course—if you want to.
Moving beyond Outcomes ‘by the book’
Setting Outcome OKRs has great benefits for product teams: they get more autonomy to pursue the exact solution for achieving the goal on their own, and they measure progress towards a result instead of ticking off tasks.
But Outcome OKRs do not automatically lead teams toward creating and measuring success. It’s entirely possible to define Key Results that describe a behavior change but aren’t measurable within –– and don’t change throughout –– a single goal cycle.
Therefore, teams need to be aware of whether they discuss Outputs or Outcomes as ideas for Key Results AND how leading or lagging these are. After all, what’s the point of setting and measuring a goal that checks all the boxes of being an Outcome but doesn’t help you adjust your actions along the way?
Ultimately, the decision to focus on leading or lagging indicators, and the balance between Outputs and Outcomes, shouldn’t just be the result of a colorful matrix. It should be guided by your individual OKR System and help you use OKRs as a framework for setting and measuring product goals in YOUR company.
Takeaway: Prioritize Leading Indicators to Avoid Lagging Decision-Making
Remember that leading indicators are a means and not an end. Listing as many leading indicators as possible won’t automatically create product success and can be a path toward getting distracted by vanity metrics. But they can be a useful lens to measure the direct impact of your actions as a product team instead of waiting for end-of-cycle feedback on whether you succeeded.
And please don’t treat this process of deciding on and separating leading and lagging indicators as a siloed craft. Choosing indicators based on how responsive they are to your team’s actions is a practice that can be incorporated into your existing goal-setting processes and frameworks, like OKRs. Use leading indicators to enhance how you work today, so you aren’t making decisions based on feedback that arrived too late to inform your work.