Regarding how many hierarchical levels OKRs should cover, the same rule applies as to eating cookies past bedtime: the fewer, the better.
Intermediate organizational layers like departments, programs, or areas can be beneficial when they provide a narrowed focus “below” the company perspective (given that the organization has the critical size to justify these levels). But they can also create bureaucratic overhead when treated as gatekeepers.
What organizations don’t realize about department OKRs: Most often, the people defining and monitoring them are not the ones doing “the work”. A Head of Product doesn’t ship the feature, and a Head of Marketing doesn’t write the newsletter.
This means department OKRs should not be designed with weekly check-ins and direct task assignments in mind. In other words: they should act as a shell, not a gate.
When this is clear, (potential) department OKRs are good for two things:
- Provide teams with clarity on what their area of contribution to the company priorities can be (top-down).
- Offer an aggregated view of department priorities based on team OKRs to align with other departments (bottom-up).
Why does this distinction matter? If department OKRs look like team OKRs, they become too detailed and descriptive. They break company priorities down into Outcomes or Outputs that are too narrow, leaving the operational teams with nothing to do but execute. Instead, focus on providing direction, not a linear connection.
How to put this Theory into Practice:
- Check your department/program/area OKRs for a harmful level of detail. Is there anything left to define for teams within this construct other than tasks?
- What’s the value of an additional OKR level between the company and teams? Is it an aggregated bottom-up view or a further top-down direction? Use them accordingly.
- Avoid a 1:1 laddering of team OKRs to department OKRs. Point out the contribution, but don’t be afraid to create your own priorities.
Thank you for Practicing Product,
Content I Found Practical This Week
On the metrics side of OKRs, we try to ensure that teams are distinguishing between “input” and “output” metrics. I find these often get confused. For example, someone might say, “It’s so hard to move activation in OKRs.” But that’s because it’s a lagging output metric, influenced by lots of different inputs, which rarely changes in a single quarter. So we want teams to goal themselves around metrics they have control over. If the team has control over the metric, the question becomes more tractable: Do we believe the input metric will move the output metric?
The real goals of OKRs are the conversations they make possible. When facilitated well, these conversations form the foundation of the new learnings and actions we take as a result of the shared insights we uncover. This continuous process of shared learning over the long term is always far more valuable than achieving some arbitrary number from a random Key Result check-in could possibly be in the short term.
As tempting as it may be to sign up for your client’s success, it’s risky to set your OKRs up to reflect those goals. There are too many variables outside of your control. This goes for other types of services businesses as well. If you develop software, for example, you shouldn’t sign up for the commercial success of that software. If you provide legal services, you likely aren’t going to sign up for some percentage of successful court decisions. There are just too many things in those scenarios that could sway the results and that you can’t influence.