Product Practice #302: Impact Mapping Helps You Answer These Discovery Questions



As a structural tool, Impact Mapping helps you zoom in and out of the macro and micro of Product Discovery.

On a macro level, it helps you bring order to the insights artifacts you have:

  • IMPACT: How clear is the strategic focus? "Increase revenue" is unspecific. "+34% margins of self-service Australian agency customers" is specific.
  • ACTORS: How contextual are the segments you want to focus on? "Tim, 34, likes coffee" is irrelevant to any business goal. "Product Managers overseeing $20M ARR using Stripe" is a prioritization criterion.
  • OUTCOMES: Are the Outcomes rooted in valid user/customer/stakeholder problems? "Make users buy more things" is a BS outcome. "Understand shipping times without visiting the cart" is based on a user problem.
  • EXPERIMENTS: Do you approach the solution space with a "building" or "testing" mindset? Building an MVP is an excuse to satisfy executive impatience. Picking hard-to-scale experiments to test critical assumptions is testing before building.
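The four layers above form a simple tree: one Impact, branching into Actors, Outcomes, and Experiments. A minimal sketch of that structure (all names and values below are hypothetical illustrations borrowed from the examples in the list, not a real map):

```python
# An Impact Map as a nested data structure: IMPACT -> ACTORS -> OUTCOMES -> EXPERIMENTS.
impact_map = {
    "impact": "+34% margins of self-service Australian agency customers",
    "actors": [
        {
            "name": "Product Managers overseeing $20M ARR using Stripe",
            "outcomes": [
                {
                    "statement": "Understand shipping times without visiting the cart",
                    "experiments": [
                        "Concierge test: manually email shipping ETAs to 10 accounts",
                    ],
                }
            ],
        }
    ],
}

def describe(impact_map):
    """Flatten the map into readable, indented layer-by-layer lines."""
    lines = [f"IMPACT: {impact_map['impact']}"]
    for actor in impact_map["actors"]:
        lines.append(f"  ACTOR: {actor['name']}")
        for outcome in actor["outcomes"]:
            lines.append(f"    OUTCOME: {outcome['statement']}")
            for experiment in outcome["experiments"]:
                lines.append(f"      EXPERIMENT: {experiment}")
    return lines

for line in describe(impact_map):
    print(line)
```

Writing the map down as data rather than a drawing makes the layer questions concrete: each key that is empty is a question you have not answered yet.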

On a micro level, it guides the selection and execution of your Discovery tactics:

  • What must I learn about the most relevant actors through the research intent questions?
  • What combination of qualitative or quantitative techniques will reveal the "truth" about the ACTOR's problems?
  • How do the results of my experiments affect my choices about driving prioritized Outcomes?

Impact Mapping, like every other mapping tool, is a window into your reality at a given point in time. It points out your gaps so that you can select effective tactics, and it visualizes your decisions to bring team members and stakeholders along.


  • Have people answer the questions of the Impact Mapping layers without doing a formal framework introduction (nobody needs more frameworks).
  • Connect the dots through the map to uncover inconsistencies.
  • Prioritize your Discovery actions based on these inconsistencies and gaps to avoid following dogmatic defaults.
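"Connecting the dots" can be made mechanical: walk the map and flag every layer whose question is still unanswered. A hypothetical sketch, assuming the nested-dict shape of an impact map (keys `impact`, `actors`, `outcomes`, `experiments` are my own illustrative naming, not a standard schema):

```python
def find_gaps(impact_map):
    """Return human-readable gaps: map layers whose questions are unanswered."""
    gaps = []
    if not impact_map.get("impact"):
        gaps.append("No strategic impact defined")
    if not impact_map.get("actors"):
        gaps.append("No actors identified for the impact")
    for actor in impact_map.get("actors", []):
        if not actor.get("outcomes"):
            gaps.append(f"Actor '{actor['name']}' has no outcomes")
        for outcome in actor.get("outcomes", []):
            if not outcome.get("experiments"):
                gaps.append(f"Outcome '{outcome['statement']}' has no experiments")
    return gaps

# Example map with a deliberate inconsistency: an actor with no outcomes.
example = {
    "impact": "+34% margins of self-service agency customers",
    "actors": [
        {"name": "PMs using Stripe", "outcomes": []},
    ],
}
print(find_gaps(example))  # flags the actor with no outcomes
```

The returned gap list is exactly the prioritization input the steps above describe: each gap names the next Discovery action instead of a dogmatic default.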

That’s (almost) all, dear reader. If you enjoyed today’s issue, please share it on LinkedIn or Twitter to help more people discover it.

Thank you for Practicing Product,


Content I found Practical This Week

How Asking Works: A Crash Course in Customer Discovery Questions

Asking the wrong types of customer discovery questions can unintentionally influence the answers. Humans are psychologically geared to please others in a conversation, particularly when it is with someone we don't know (making a good first impression and all that). A single forced-choice question followed by a series of yes/no questions that confirm our preconceived biases can derail the entire interview and, worse, lead to false conclusions about our customer that take our discovery in the wrong direction.

Read it here

When NOT to run an experiment

This bucket almost goes without saying, but to run an experiment you need a control to compare your change against. If you're launching a brand-new independent product, or pivoting the product, you likely have nothing to compare this new product against (other than it not existing). In this case, you're better off setting independent success criteria (e.g. a specific retention rate, or a new-user signup threshold) rather than coming up with an awkward experiment. This is especially true if going back to the previous product is out of the question and your only path forward is through.

Read it here

How Twitch Learned to Make Better Predictions About Everything

Over multiple rounds of questions, individual confidence intervals adjust to match each person's actual level of uncertainty. Risk management expert Douglas Hubbard — a pioneer in decision science — has shown that it takes seventy questions to calibrate probability assessment such that estimates participants believe are 90% likely to occur actually occur 90% of the time. As we ask employees to make these assessments, we immediately provide the correct answers; this immediate feedback helps employees calibrate, and our staff quickly learns whether their estimates are over- or under-confident.
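The calibration idea is easy to check numerically: a well-calibrated 90% interval should contain the true value about 90% of the time. A minimal simulation sketch (the distributions and interval widths are my own illustrative assumptions, not from the Twitch article):

```python
import random

def calibration_score(intervals, actuals):
    """Fraction of actual values that fall inside the stated 90% intervals.
    A well-calibrated estimator should land near 0.90."""
    hits = sum(lo <= x <= hi for (lo, hi), x in zip(intervals, actuals))
    return hits / len(actuals)

random.seed(0)
# 70 questions, matching Hubbard's calibration-training count.
actuals = [random.gauss(100, 20) for _ in range(70)]
narrow = [(95, 105)] * 70   # overconfident: intervals far too tight
wide = [(50, 150)] * 70     # close to a true 90%+ interval for N(100, 20)

print(f"narrow intervals hit rate: {calibration_score(narrow, actuals):.2f}")
print(f"wide intervals hit rate:   {calibration_score(wide, actuals):.2f}")
```

Running this shows the overconfident (narrow) intervals capturing the truth far less than 90% of the time, which is exactly the mismatch the feedback rounds are designed to correct.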

Read it here
