Product Practice #288: How Leading Indicators are Defined by Context


One of the most undervalued aspects of Key Results is how detectable their changes are on a weekly basis throughout the cycle, so the team can adapt its actions.

A lot of teams focus on easy-to-document but hard-to-influence lagging indicators when they should seek to identify and use leading indicators to define their OKRs. And even though there are common patterns of which types of Key Results tend to be lagging or leading, another ingredient is often missing: a team’s context.

While there are universal traits that make a metric lagging or leading, it’s not always the same type of metric. A leading indicator is defined by its ability to predict future results through correlated behaviors. Let’s look at an oversimplified example:

At a company level, a SaaS company might use a quarterly measured Customer Effort Score (CES) across the product’s main activation areas to predict customer retention. In this case, retention is the lagging indicator (the end goal) that is only measurable in hindsight, and the Customer Effort Score allows them to predict how it is likely to change.

But from the perspective of an individual, customer-facing product team in the same company, the quarterly CES measurement is a lagging indicator. For them, it’s only measurable in hindsight and is the result of multiple efforts.

So, to find their leading indicator, this product team needs to answer the question of what actions they can influence directly and detect frequently. By focusing Discovery efforts on the behaviors and pains of low-CES respondents, they might identify more leading Outcomes that can contribute to improving the CES, like

  • “Invite co-workers without the need to connect Office365,” whose success could be measured through a KR like “No. of invited co-workers within x days of onboarding” (see the sketch below)
  • “Time spent inviting co-workers”
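To show what checking such a KR on a weekly basis could look like, here is a minimal Python sketch. It assumes a hypothetical event log of onboarding dates and invite dates; the field names, the 7-day window, and the sample data are made up for illustration:

```python
from datetime import date, timedelta

# Hypothetical event data: each newly onboarded account and the dates on
# which co-workers were invited. Field names are illustrative only.
accounts = [
    {"onboarded": date(2024, 5, 6), "invites_sent": [date(2024, 5, 7), date(2024, 5, 9)]},
    {"onboarded": date(2024, 5, 8), "invites_sent": []},
    {"onboarded": date(2024, 5, 10), "invites_sent": [date(2024, 5, 20)]},
]

WINDOW_DAYS = 7  # the "x days of onboarding" window the team commits to

def invites_within_window(account, window_days=WINDOW_DAYS):
    """Count co-workers invited within the window after onboarding."""
    cutoff = account["onboarded"] + timedelta(days=window_days)
    return sum(1 for d in account["invites_sent"] if d <= cutoff)

# Weekly check-in: average invited co-workers per newly onboarded account.
counts = [invites_within_window(a) for a in accounts]
print(f"Avg. co-workers invited within {WINDOW_DAYS} days: {sum(counts) / len(counts):.2f}")
```

Because the underlying events are captured continuously, the team can re-run a check like this every week and adjust its actions long before the next quarterly CES measurement comes in.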

But these are just two dimensions of context around lagging and leading indicators. What about internal-facing or platform teams? Imagine the perspective of a growth infrastructure team in that same company. From their point of view, “Time spent inviting co-workers” is a lagging indicator.

Why? Because they can’t influence it directly, and it will only change as a result of multiple teams’ efforts. So, what’s leading for the customer-facing team becomes lagging for the growth infrastructure team. But instead of latching on to the leading indicator of another team, they have to find their own. By asking the same questions, they might identify leading indicators like

  • “No. of API calls to connect to the company directory”
  • “Reduce the number of Invite Service Request Call Timeouts”

These help them land on leading Key Results that they can influence directly.

Avoid tapping into generalized definitions when trying to land on more leading indicators for your next set of OKRs. Instead, consider the context of the metric: Can you influence it directly and measure its change continuously? If yes, it’s probably a valuable leading indicator for you. If not, reverse-engineer the lagging indicator to identify which Key Results can help guide the way.

Speak soon,

Tim


Content worth your Time

Impact By The Numbers

Lagging indicators are great for keeping an eye on and evaluating the actual results produced. But because they tend to react with a delay to changes in the processes and activities that drive them, they are far less useful for informing more time-sensitive tactical decisions and actions. As a team that needs to allocate its limited resources effectively to achieve the desired impact on the company’s business results, we’d love to have something that, as early as possible, will give us an idea of the future performance of Revenue.

OKRs & B2B Companies

Let’s imagine you make a Learning Management System. Your customers are school system administrators and heads of IT for schools. Your goal is to help them get the system up and running and integrated into their various other tools. How they do that is ultimately up to them. How they make their teachers, parents and students aware of the system is also up to them. And, as much as we’d like to believe that our LMS helps make learning more effective, we have no control over the content populated within it nor the teachers teaching it. Students’ grades are also out of our control. The best we can hope to measure (other than renewed subscriptions, which are also a lagging indicator) are our customers’ behaviors like the ones mentioned above.

Exploring Figma’s Self Serve Forecast Model

The latter illustrates that today’s monetization is a lagging metric of Top of Funnel (ToF) activity from the last several months. Freemium tools can monetize users not just at signup but through engagement with the product over time.


{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
>