From Information to Evidence:
How Context Informs Product Discovery Decisions
While Product Discovery is about gathering evidence to reduce uncertainty, there is such a thing as too much evidence. It’s all too easy for Product Teams to prioritize more experiments and interviews in search of the one “correct” insight that will unlock their Product Discovery. To stay focused and avoid getting lost in Discovery spirals, Product Teams need to consider the context of their insights.
Published by Tim Herbig in Product Discovery Resources
Reading Time: 10 minutes
Last Updated: Mar. 21, 2023

You can think of Product Discovery as driving on a highway toward a vaguely described destination without a GPS. As you drive, lots of information comes in, and you’re constantly worried about taking the wrong exit and getting lost. Taking the first exit means prematurely committing to building an unproven solution to a problem that doesn’t exist.
Does the landscape match your destination? Why are a bunch of cars suddenly taking this exit? How come all the previous exits point to the same city name? Does that mean you’re going in circles?
The opposite pitfall of Product Discovery is waiting too long: you get lost in the Product Discovery spiral, chasing more research and testing data without ever creating value for users and the business by shipping a solution.
A lot of Product Discovery inputs are simply information. But Product Teams need more than information. They need evidence.
This article will show you how to use proximity and commitment to evaluate the strength of Discovery evidence using real-life examples:
- Differentiating Information from Evidence
- Understanding Context: Proximity and Commitment
- The Quadrants of Evidence Mapping
- Case Study: From Sales Proxies to Co-creating Evidence in B2B
- Case Study: Shipping to Validate as a Starting Point
- Case Study: Using Feature Suggestions as Starting Points for Research
- Considering Context Prevents Chasing More and More Evidence
- Product Discovery Decisions Infographic
Differentiating Information from Evidence
During Discovery, teams often face the problem of what to act on. Should you change the order of the homepage because people asked customer support where their favorite feature went? Does a five-day A/B test result mean you’ve fixed the churn of mobile users? Should a competitor’s launch prompt you to ship the same feature?
Claude Shannon famously said that “information is the resolution of uncertainty.” For Product Teams, the quote has to be rephrased:
“Evidence leads to a reduction of uncertainty.”
But what separates information from evidence? Here are the brief definitions that we’ll use in this article:
Information describes raw, often context-less, incoming data and can include opinions. Product Teams collect a lot of information through the various Product Discovery activities they work through in an adaptable, non-linear way.
Evidence, on the other hand, is a piece of qualitative or quantitative data that supports or refutes an assumption. To do that, it has to be contextually relevant in relation to the Discovery goal a team is working on.
The key aspect that separates information from evidence is the context (or lack thereof). The credibility of the context turns debatable information into a reliable piece of evidence.
Product Discovery is not just about gathering more information about your customers' behaviors and attitudes but also about identifying which evidence helps you make better decisions toward reducing uncertainty. The amount of information you collect doesn’t solve that, but the context will.
Understanding Context:
Proximity and Commitment
On the one hand, we need to collect the most critical information about our customers. On the other hand, organizations need a shared understanding of what types of insights are “worth” more than others in their specific context. This is less about rigid formulas and one-size-fits-all blueprints and more about agreed-upon principles.
That’s why I like to assess the different evidence a team has gathered so far along the lines of two axes: proximity and commitment.
- How “close” were you to the source? There’s a big difference between observing behavior first-hand and hearing reported anecdotes about customer feedback or competitor moves when you don’t know what prompted any of them. The closer you are to the source of evidence, the more it should matter for your decision-making.
- How serious is the commitment behind the recorded or observed activity? “Of course, I would buy this feature” is not a real commitment to a product. Such comments are on par with submitted feature requests or C-level feature suggestions based on “aha!” moments in the shower. The more serious the commitment behind the feedback, the more it should drive your confidence up or down.
These axes allow teams to prioritize insights from various Discovery activities through a concept I like to call Evidence Mapping.
Mapped along these two axes, certain patterns emerge. For example, first-hand evidence is more reliable for decision-making than reported anecdotes. But depending on the decision you’re looking to make and the data you have on hand, the same evidence could fall into a different quadrant. As part of adopting, scaling, and iterating Product Discovery practices, teams need to invest in a shared understanding of what they value more, e.g., user interviews over competitor moves.
The Quadrants of Evidence Mapping
It’s useful to spend a little time looking at each quadrant of this grid.
First-Hand and Serious Commitment Evidence
The most reliable evidence comes from activities the team ran itself and from real decisions made by users or customers. Examples can include:
- Interviewees who actually went ahead and recommended the product to five peers via LinkedIn DMs after telling you they “like it” in the interview.
- Analytics data about the actions your most important audience segment took on the product detail page, which indicates what information they value more reliably than stack-ranked preferences in a survey.
First-Hand and Lip Service Evidence
It’s easy to mistake the evidence in the lower right quadrant for a serious commitment. “Lip service” is shorthand for any observed or recorded action that didn’t really come with consequences for the user or customer. Examples can include:
- Submitted feature requests or online reviews that don’t come with consequences for the submitters.
- Expert judgments informed by the experts’ own experience, but not necessarily by the behavior of your users.
Asking for more serious commitments, like pre-ordering or referring an idea to peers on the spot, can help you turn lip service into more reliable evidence.
Anecdotal and Serious Commitment Evidence
In the upper left quadrant, you’ll find the type of evidence whose validity is often grossly overestimated: information about what “others” are doing. Examples can include:
- Feature launches by competitors. Yes, it takes serious commitment from a competitor to ship something, but you never know what this move is built on. It could be driven by a top-down HiPPO idea, n=1 user insights, or poorly interpreted quantitative data.
- Regulatory announcements. These are definitely real and often require action, but they are rarely informed by the actual needs and behaviors of the users they are trying to serve. They mostly originate from ivory-tower decisions.
Anecdotal and Lip Service Evidence
Evidence in the lower left quadrant shouldn’t inform decisions about actions, but it may guide the additional steps required to refine anecdotal evidence into something stronger. Examples can include:
- Comments about what existing or potential users think “would be nice,” watered down along the way to align with the goals of proxy departments such as sales or support.
- A manager coming back from a family gathering over the weekend with tons of feedback about colors or where a button “should be.”
While these pieces of information may be sincerely presented, they are weak evidence.
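To make the mapping concrete, here is a minimal sketch (in Python, my own illustration rather than anything from an official framework) that treats the two axes as simple yes/no flags and places a piece of evidence in one of the four quadrants:

```python
from dataclasses import dataclass

# Illustrative sketch of Evidence Mapping; the field names and labels
# are assumptions for this example, not an official API.

@dataclass
class Evidence:
    description: str
    first_hand: bool          # proximity axis: did the team observe this itself?
    serious_commitment: bool  # commitment axis: did the action carry real consequences?

def quadrant(evidence: Evidence) -> str:
    """Place a piece of evidence in one of the four quadrants of the map."""
    if evidence.first_hand and evidence.serious_commitment:
        return "first-hand + serious commitment (most reliable)"
    if evidence.first_hand:
        return "first-hand + lip service"
    if evidence.serious_commitment:
        return "anecdotal + serious commitment (often overestimated)"
    return "anecdotal + lip service (weakest)"

examples = [
    Evidence("Interviewee referred the product to five peers via DM", True, True),
    Evidence("Submitted feature request with no strings attached", True, False),
    Evidence("Competitor shipped a similar feature", False, True),
    Evidence("Manager's weekend feedback on button placement", False, False),
]

for item in examples:
    print(f"{item.description} -> {quadrant(item)}")
```

In practice, proximity and commitment are continuums rather than booleans; the value of the exercise lies in the shared vocabulary, not in precise scoring.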
Case Studies in Using Evidence in Product Discovery
To illustrate the real-life practices required for acting on strong evidence, I sat down with Fanny Krebs-Pinto, Product Lead at Kitchen Stories, to chat about her experiences there and previously at Doctolib.
From Sales Proxies to Co-creating Evidence in B2B Product Discovery
At Doctolib, Fanny’s starting point for her product roadmap was their product-level North Star Metric: the number of online bookings received by hospitals. Since Doctolib relied on an internal sales team to acquire these hospital customers, the salespeople had a lot to say about feature ideas that the hospitals “needed.”
While the reported anecdotes from sales conversations were rather weak evidence, they provided a great starting point for the product team, especially since it was quite hard to talk to users from within the hospitals on a regular basis. Using a MECE tree (a driver breakdown that is mutually exclusive and collectively exhaustive), Fanny and her team dissected the potential drivers of their North Star Metric based on the analytics insights they already had. That painted a first-hand picture of what was going on, but without much evidence about any single idea’s actual validity. Nonetheless, they were able to identify the area of “non-online hospital Agendas, that *could* be available online,” which overlapped with the sales team’s inputs.
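To illustrate the mechanic of such a driver breakdown, here is a deliberately simplified, hypothetical sketch; the drivers and numbers are invented and do not reflect Doctolib’s actual data:

```python
# Hypothetical driver breakdown of the North Star Metric
# "online bookings received by hospitals". All numbers are invented
# purely for illustration.

online_agendas = 1_200             # hospital agendas already bookable online
convertible_offline_agendas = 800  # agendas that *could* be made available online
bookings_per_agenda = 35           # average monthly online bookings per agenda

current_bookings = online_agendas * bookings_per_agenda
potential_bookings = (online_agendas + convertible_offline_agendas) * bookings_per_agenda

print(f"Current monthly online bookings: {current_bookings:,}")
print(f"Headroom from convertible agendas: {potential_bookings - current_bookings:,}")
```

A breakdown like this validates nothing on its own, but it shows where the headroom is, which is what made the overlap with the sales team’s input a credible place to dig deeper.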
From there, the Product Team could confidently start collecting more first-hand evidence from the target customers, by facilitating co-creation workshops to further test their initial assumptions and validate potential solutions “on site” with their audience.
Shipping to Validate as a Starting Point
During Discovery, it is helpful to delay building anything for as long as possible, but shipping is sometimes the fastest way to reliable insights. The key is to distinguish shipping for the sake of it from shipping to learn. Fanny and her team stayed focused on validating assumptions, which allowed them to treat the shipping of a solution as a starting point, not the final destination.
When governments announced new vaccination regulations during the COVID-19 pandemic, the B2C Doctolib team had to move fast, without upfront qualitative research. They shipped a first iteration based on anecdotal evidence about the new regulations. In parallel, they leveraged their existing Doctor community to gather first-hand insights about the regulations.
“The speed for us to validate assumptions was the shortest I've ever experienced. Sometimes we had to validate after shipping because the government or president in France or in Germany would announce something and then you had to get something out somehow very quickly.”
With a first solution out there and quantitative data coming in, they could dive deeper into qualitative insights from actual users. People wouldn’t have to imagine “how it would feel” to book a vaccination appointment, but could go through the process “live.”
This example shows how much context informs the interpretation of evidence and decision-making in Product Discovery. Conventional playbooks might force you through the same sequence of interviews and testing over and over again. But in Discovery, these measures are the means, not the end. Discovery is about reducing uncertainty through evidence, and each situation requires different decisions and techniques to achieve that.
Using Feature Suggestions as Starting Points for Research, Not as Delivery Backlog Items
At Kitchen Stories, Fanny experienced a shift in how new ideas are prioritized and handled. Previously, the company relied on a standard prioritization framework to stack-rank all sorts of different ideas. Moving on from that approach, they established a central repository for submitting ideas in ProdPad. But that doesn’t mean the product team gets to work immediately. Instead, they embrace a theme-based roadmap: they cluster ideas by theme and then treat them as inputs for their Discovery work on a prioritized theme.
Once the work on a theme is underway, they check for existing evidence first: for example, insights stored from ongoing user interviews in Dovetail, or quantitative data. Depending on the nature of the existing evidence, they either prioritize more problem-focused research or transition to testing individual ideas through Discovery practices like prototype testing or quantitative experiments.
Here’s how Fanny summarizes their work:
“Adding an idea to a theme is not a commitment to building it. Every idea is kind of treated similarly. Some ideas have a lot of information. Others are just like, I thought about this. It could be cool. That doesn't matter so much at this time for us, because what we do with these ideas is we just use these ideas as a signal that there is a theme emerging. The ideas can still be changed a lot, but at least if someone has this idea or observed some kind of user feedback, it's a signal that something is maybe happening in that direction that we want to look into.”
By maintaining ideas as a list of signals instead of a backlog, Kitchen Stories’ Discovery team can explore the actual problem and solution space within a theme with an open mind.
Considering Context Prevents Chasing More and More Evidence
There are obvious dangers in not doing enough Product Discovery, but there are equal challenges in doing too much. Think back to the highway driving example: you don’t want to take the first exit, but you don’t want to go in circles either. So how do you know how much Product Discovery is enough?
Putting your evidence in context, by evaluating proximity and commitment, can help answer that key question. As Fanny’s B2B experience at Doctolib shows, even your weakest evidence can be more helpful than longing for the evidence you don’t have, as long as everyone involved knows how much that evidence is “worth” to you as a team.
Choosing your next Discovery activity has to be based on your current, individual state of uncertainty (which depends on the specific Discovery, industry, product lifecycle, skills, resources, company culture, etc.). As Douglas Hubbard explains in “How to Measure Anything,” teams should shift from trying to answer a specific question to reducing uncertainty based on what they already know. Then, they should use this new evidence to make decisions about the actions needed to (potentially) reduce uncertainty even further.
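Hubbard makes this concrete with the expected value of (perfect) information: further research is only worth as much as the better decision it could enable. Here is a toy calculation in that spirit; all numbers are made up purely for illustration:

```python
# Toy expected-value-of-information calculation in the spirit of
# Hubbard's "How to Measure Anything". All numbers are invented.

p_success = 0.4              # current belief that the bet will pay off
value_if_success = 500_000   # payoff if the assumption holds
cost_of_building = 200_000   # cost of building, paid either way

# Best expected value under current uncertainty: either build now,
# or don't build at all (which is worth 0).
ev_build_now = p_success * value_if_success - cost_of_building
ev_best_now = max(ev_build_now, 0)

# With perfect information, you would only build when the assumption
# holds, so you never pay for a failed bet.
ev_with_perfect_info = p_success * (value_if_success - cost_of_building)

# The difference caps what any further Discovery work can be worth.
evpi = ev_with_perfect_info - ev_best_now
print(f"Expected value of perfect information: {evpi:,.0f}")
```

If the most that perfect information could improve your decision by is small, another round of interviews or tests is unlikely to pay for itself, and that is your cue to exit the Discovery spiral and ship.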