4 Questions to Ask to Get the Most Out of Your Experiments
Published on Nov 10, 2025
by Christophe Perrin
While there is, for good reason, a lot of emphasis on designing and running quality experiments, this post assumes all of that was done well: the hypothesis was clearly defined, the setup was sound, and enough data was collected to draw a reliable conclusion.
What I want to focus on here is what comes after the experiment concludes.
Experimentation is about making informed decisions, both tactical and strategic, based on evidence. Turning results into learnings and actions is where the real value lies, and as we will see, there is more to it than simply asking ‘Is it significant?’.
To extract the full value from an experiment, you need to ask four distinct but equally important questions.
Does the evidence support the hypothesis?
What else did we learn?
What do we do with this experiment?
What will we do next?
Does the Evidence Support the Hypothesis?
This is the most fundamental question: does the data support the original hypothesis?
Do we observe the expected statistically significant effect on the primary metric?
Do the secondary metrics support this?
Assuming the hypothesis and decision criteria were clearly defined and pre-registered, this question can be answered objectively simply by looking at the experimental data. The possible answers are 'the evidence supports the hypothesis' or 'the evidence does not support the hypothesis'.
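To make this concrete, here is a minimal sketch of what applying a pre-registered decision criterion might look like for a conversion-rate experiment. The numbers, metric, and the 0.05 significance threshold are all hypothetical assumptions, not taken from the post; it uses a standard two-proportion z-test implemented with the Python standard library.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical pre-registered criterion: alpha = 0.05 on the primary metric.
z, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
answer = ("the evidence supports the hypothesis" if p < 0.05
          else "the evidence does not support the hypothesis")
```

The point is not the statistics but the discipline: because the threshold was fixed before the experiment, the answer falls out of the data with no room for post-hoc interpretation.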
Answering this question simply tells us whether the data supports the hypothesis, but this is only the beginning. There is much more to learn from an experiment, and how this knowledge is used is where the value lies.
What Else Did We Learn?
Beyond confirming or rejecting the hypothesis, what additional insights emerged from the experiment?
Did user behavior reveal unexpected patterns?
Were there differences in how specific segments responded?
Did secondary metrics suggest new questions or areas of opportunity?
Did we observe any guardrail regressions, or negative side effects which were not accounted for in the hypothesis?
Was there anything about the implementation, experience, or delivery that surfaced operational or technical insights?
Answering this question captures learnings that build institutional knowledge, inform future experiments, and sharpen your understanding of your users and product.
What Do We Do With This Experiment?
This is the immediate product decision: Should we make the tested variant the default experience, or revert to the original implementation? Do we ship it or not?
Shipping it means making the variant visible to all users going forward. Keeping current means retaining the baseline experience for everyone, including those who were exposed to the variant during the experiment.
Typically, experiments are shipped when the evidence supports the hypothesis; otherwise we default back to the baseline. But this does not have to be the case, and even when the evidence supports the hypothesis, shipping may not always be the best move. You need to consider:
Did the experiment surface new insights that might change how we interpret the results or alter our next move? For example, did secondary metrics or segment analysis reveal something surprising or worth further investigation?
Does this change align with our strategic direction? (if you have not asked yourself that question before the test)
Are there any tradeoffs to be made?
Is this the best implementation, or does it create technical or product debt?
Is the impact size large enough to justify implementation costs?
Despite the positive signals, should we refine and test again before rolling out?
Similarly, a flat, or even negative, result does not automatically mean ‘don’t ship it’. There are a few things to consider:
Is there a strategic reason to move ahead without supporting evidence?
Are there legal requirements which justify shipping this, despite the evidence?
The ship-it action you take is a decision, and it should reflect not just the data, but the broader context. What's critical, especially when the data doesn't fully support the action, is to clearly document the decision rationale. A decision rationale explains why a particular decision was made, whether it aligns with the experimental evidence or intentionally deviates from it. This builds trust and organisational memory, supports accountability, and helps others (and your future self) interpret and understand decisions in context.
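One lightweight way to capture a decision rationale is a small structured record alongside the experiment results. The fields and example values below are hypothetical assumptions, a sketch of what such a record could contain rather than a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Hypothetical structure for documenting an experiment decision rationale."""
    experiment: str
    decision: str                  # e.g. "ship", "no ship", "iterate"
    supported_by_evidence: bool    # did the data back the decision?
    rationale: str                 # why, especially when deviating from the evidence
    decided_on: date = field(default_factory=date.today)

# Example: shipping despite a flat result, for a non-statistical reason.
record = DecisionRecord(
    experiment="checkout-cta-copy-v2",
    decision="ship",
    supported_by_evidence=False,
    rationale="Flat primary metric, but the new copy is required for legal compliance.",
)
```

The exact shape matters less than the habit: every shipped (or reverted) experiment leaves behind an explicit, searchable statement of why.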
Unlike the first two questions, 'What do we do with this experiment?' is harder to answer, as it requires an understanding of the broader context in which the decision is made.
What Will We Do Next?
This is the meta-decision: how does this experiment affect our thinking about the whole hypothesis space?
Do we double down and iterate?
Do we pivot to a new approach based on what we learned?
Or is it time to end the line of experimentation?
Some ideas take several iterations to mature. Others reveal their limits quickly. The important thing is to be deliberate. Treat experiments not as isolated ‘Ship it’ decisions, but as steps in a learning journey.
Knowing when to stop exploring a hypothesis is just as valuable as knowing when to keep going.
This question should be considered independently of the ‘Ship it’ decision. You might choose to ship the current version and continue iterating if you see further potential in the idea. Alternatively, you might ship but decide not to invest further if the expected additional value doesn’t justify the cost. The decision to continue exploring or move on should be based on the opportunity ahead, not just the outcome of a single experiment.
Similarly, a no ship decision doesn't necessarily mean the end. You might choose to iterate further, for example, by exploring a more cost-effective implementation or refining the experience. Alternatively, you might decide to end the line of experimentation altogether if repeated efforts have shown no meaningful impact on customer outcomes.
Conclusion
Asking all four questions at the end of every experiment forces you to move away from the “Is it significant?” mindset and start thinking in terms of what you learned, how it informs the roadmap, and whether this line of experimentation is still worth pursuing.
