Sticking to dogmatic principles can make you feel good about your OKRs, but those OKRs rarely turn out to be useful. Here’s why Product OKRs should be IDLE to provide value beyond “best practices.”
It’s easy to rigorously follow instructions on HOW to write OKRs. Or to obey the idea that every OKR has to be an Outcome. But it’s much more difficult to go from these ideas to OKRs that are actually useful to Product Teams.
A more practice-informed lens to check the usefulness of your Product OKRs is based on the IDLE attributes:
I, as in INFLUENCEABLE
- A team’s actions should be able to move the KRs significantly on their own
- That means no overarching business goals that “change anyway” or depend on another team’s work
D, as in DETECTABLE
- Changes to the KR have to be visible on a weekly basis throughout the cycle to allow the team to adapt their actions
- Prioritize correlated leading indicators over causal lagging indicators
L, as in LINKED
- To avoid mirroring business as usual, the team’s OKRs have to be linked (not cascaded) to Company-level OKRs and their own Product Strategy
- If necessary, also horizontally aligned with the priorities of other teams/departments
E, as in EVIDENCE-INFORMED or EXPLORATIVE
- User Outcomes shouldn’t be guessed but rooted in actual insights through Discovery activities
- Alternatively: make them Explorative, using the cycle to gather the evidence needed for evidence-informed KRs in the future
After all, an OKR shouldn’t be defined by its technical correctness but by its usefulness in the context of your team, your product, and your company. Don’t get lost in chasing “best practices,” and check your OKRs for how they can guide your team.
PS.: If you want to learn how to put these attributes into practice, check out my Practical Product OKRs workshop I’m giving as part of the Digitale Leute Summit in October in Cologne, Germany.
Content worth your Time
Take an example: let’s say we spent three months shipping Feature X. We might then have a decision to make: should we work on X v2 next? Simply looking at X’s results in isolation isn’t enough to answer this question; we must consider it relative to other possibilities — what about improving features A, B, or C? When we take the time to regularly reflect on the impact of our work through metric reviews, we build towards a richer knowledge of what really drives our business.
The team’s higher-level retention goals should also factor in the causal levers that influence retention (such as cohort segmentation, feature usage, value proposition, value communication, natural frequency of usage, product quality, etc.) in order to focus their efforts on the influenceable ones that matter most. For example, before goal-setting around improving retention, teams should know what retention is composed of, which levers are likely to move it, and which solutions effectively pull those levers.
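To make the idea of decomposing retention concrete, here is a minimal sketch (with entirely made-up data and hypothetical field names) of slicing an aggregate retention number by one candidate lever, feature usage, to see which segment actually drives it:

```python
# Hypothetical sketch: decompose an aggregate retention number by a candidate
# lever (here: usage of "Feature X") to see which segment drives it.
# All users and numbers are invented for illustration.
users = [
    # (cohort, used_feature_x, retained_week_4)
    ("jan", True, True), ("jan", True, True), ("jan", False, False),
    ("feb", True, True), ("feb", False, False), ("feb", False, True),
]

def retention(rows):
    """Share of rows where the user was retained."""
    return sum(1 for *_, retained in rows if retained) / len(rows)

overall = retention(users)
by_feature = {
    flag: retention([u for u in users if u[1] is flag])
    for flag in (True, False)
}
print(f"overall: {overall:.0%}")
print(f"used Feature X: {by_feature[True]:.0%}, didn't: {by_feature[False]:.0%}")
```

In this toy data, every user who touched Feature X was retained, so feature adoption looks like an influenceable lever worth goal-setting around; real analysis would of course need to check the causal direction, not just the correlation.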
The danger of averages is that you may move the metric by inspiring a small subset of customers to do a lot more of something. But this may not affect enough users to improve the overall product experience.