Build evidence-based understanding before designing solutions.
Use this when your team lacks confidence in user context, pain points, or behaviour. It is ideal before roadmap bets or service redesign work.
Start with Interview Discussion Guide Workshop for this situation unless only one exploratory interview is needed — a single interviewer can design their own guide without a workshop.
Session risk to manage
Key risk: The team mistakes assumptions for insights.
Choose the structure that keeps trade-offs visible and forces the room to land somewhere explicit.
Open the method first if you need to judge the format. Start a workspace when you need method fit, a session plan, and shareable follow-through in one saved thread.
Recommended route
Start with one method now, then compare a lighter or deeper route only if the room shape changes.
Recommended first
Interview Discussion Guide Workshop
Choose this when the session goal is: Needs are prioritised with evidence strength.
Trade-off: You are choosing the clearest path over broader comparison work in 2–4 hours.
If the session is working, these are the signals you should be able to point to by the end.
Needs are prioritised with evidence strength.
Pain points are tied to real user context.
A clear research follow-up or validation plan exists.
Quick fit-check
Use these questions to confirm this is the right room before you commit to the method.
What decision should this session unlock by the end of the working block?
Why it matters: If the decision is vague, the room will drift into discussion instead of converging on a usable output.
What changes: If the answer is specific, Waypoint can recommend tighter decision formats. If it stays broad, Waypoint should push you toward framing or mapping first.
How real is the constraint around limited direct user access?
Why it matters: When user access is limited, the session design shifts from planning broad research to maximising signal from constrained access. The method choice depends on whether the constraint is about volume of access or quality of existing evidence — these call for different first moves.
What changes: If access is limited to 3–5 users, JTBD interview sprint produces higher signal per conversation than a broad guide. If the constraint is mixed-quality existing research, Affinity Mapping should come before new interview planning — synthesise what exists first. If there is genuinely no user access, run a Persona Hypothesis Workshop to make assumptions explicit before any research is designed.
Will mixed-quality existing research create friction in the room?
Why it matters: Mixed-quality research means some evidence is solid and some is anecdote or inference. If the quality difference isn't surfaced explicitly, vivid but low-quality evidence — a memorable user story, a strong internal opinion — will dominate synthesis over quieter but stronger evidence.
What changes: If existing research is mixed quality, run a source-credibility check before synthesis begins — mark each piece of evidence by source type and volume. If most existing research is weak, prioritise new interviews over synthesis. If quality varies significantly, weight evidence by source before drawing conclusions.
See more fit questions
What will you do if pressure to move fast to delivery remains unresolved during the session?
Why it matters: Some risks can be parked; others require a method that produces enough evidence or ownership before the group leaves.
What changes: If it cannot stay unresolved, Waypoint should bias toward techniques that leave owners, assumptions, or evidence checks visible before the room closes.
Other viable options
Use these only if the recommended route is blocked by room shape, confidence, or stakeholder availability.
Fallback 1
Persona Hypothesis Workshop
A workshop to draft hypothesis personas based on existing evidence and team knowledge.
Output artifact: Hypothesis personas
Avoid when: Validated personas already exist and are current.
Watch for these signals in the room and use the paired fix before the session drifts.
Limited direct user access
What it looks like: The team designs research questions around features they want to build rather than around user goals — the constraint of limited access has pushed them toward validating rather than exploring, and the guide reflects what the team hopes to hear.
Fix: Capture needs evidence first and park solution ideas in a separate lane.
Mixed-quality existing research
What it looks like: One vivid user story dominates the synthesis, evidence that contradicts it is deprioritised or forgotten, and the team builds confidence on a foundation of a single data point treated as a representative pattern.
Fix: Require at least two evidence sources before treating a pain point as a priority.
Pressure to move fast to delivery
What it looks like: The team leaves the session with a list of things users said but no themes — each team member takes away a different interpretation of what the research means for the roadmap, and the synthesis work that should have happened in the session is deferred or never happens.
Fix: Run a 15-minute synthesis closeout with explicit next research actions.
Related playbooks
Named external methods that often show up around this situation
Use these when the room keeps reaching for a famous company method and you need the practical translation: what to borrow, what not to imitate, and which Waypoint move should take over next.