
Discovery Waypoint

Run tomorrow · Primary source

Day-in-the-Life Target Selection

A scoping session that runs before the observation itself — it maps every candidate user group, scores each on five signal dimensions, and produces one clearly justified target rather than a guess or a committee compromise.

Best opened when

Customer engagements where a Day-in-the-Life is planned but the right user group has not yet been agreed.

What to steal first

The five-dimension scoring canvas and the insight-gap check.

What not to copy

It is not a substitute for the Day-in-the-Life observation itself. The play closes the scoping question and produces the brief; the observation still has to happen.


What should you actually steal from this?

What to steal

Open by drawing out every candidate group — do not start with a pre-baked list. The facilitator asks: which groups keep coming up in account conversations or support queues? Which roles does the customer struggle to describe confidently? Which users are affected but rarely get spoken for in planning? Give each group a card. Do not score yet.

What to skip

Do not compress or skip the group surfacing step. Groups left off the canvas cannot be compared, and the room will anchor on the first group mentioned.

Application note

15 minutes to surface all candidate groups and build profile cards. 25 minutes to score each group individually then compare. 20 minutes to discuss scoring disagreements and run the insight-gap check. 10 minutes to commit to one target, agree recruitment criteria, and write the observation brief. If the room starts drifting, bridge back to Service Safari / Field Observation.

Closest Waypoint bridge: Service Safari / Field Observation

Core operating sequence

Play anatomy

Teams arrive at a Day-in-the-Life planning conversation knowing they want to observe someone but rarely knowing which group would produce the most useful insight. The loudest signal wins — usually whatever the last customer escalation mentioned, whoever has the most vocal internal champion, or whichever group a senior stakeholder has already named.

This play runs before the observation. Its first job is to make sure the room has actually surfaced all the candidate user groups — not just the obvious one. The facilitator draws these out through open questions: which groups keep coming up in account conversations? Which roles does the customer struggle to speak confidently about? Which users are affected but rarely represented in planning discussions? Each group that surfaces gets a card.

Once every candidate is on the table, each group is described honestly — who they are, what they are trying to do, and what friction signals exist for them. The portrait step forces specificity: is that signal confirmed from real evidence, or is it something the team suspects but has never verified?

Each group is then scored on five dimensions: pain signal strength, strategic leverage, insight gap, access feasibility, and decision feed. Scoring is fast and comparative — its value is in the comparison, not the absolute number.

The most important dimension is insight gap. A group the team already understands well enough to act on is a poor observation target regardless of how loud its pain signal is. The best DITL candidate is the group where friction is real but daily context, compensating behaviours, and root causes are still poorly understood.
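The canvas can be thought of as a small data structure. The sketch below is purely illustrative: the five dimension names come from the play, but the 1–5 scale, the class and function names, and the flagging threshold are assumptions added for clarity, not part of the method.

```python
from dataclasses import dataclass

# The five dimensions named by the play. The 1-5 scale is an
# illustrative assumption; the play only requires fast,
# comparative scoring, not a specific scale.
DIMENSIONS = [
    "pain_signal",        # strength of confirmed friction signals
    "strategic_leverage",
    "insight_gap",        # how poorly the group is understood today
    "access_feasibility",
    "decision_feed",      # feeds a real decision in the next 90 days
]

@dataclass
class CandidateGroup:
    name: str
    scores: dict  # dimension name -> score on a 1-5 scale

    def total(self) -> int:
        return sum(self.scores[d] for d in DIMENSIONS)

def rank_candidates(groups):
    """Comparative ranking: highest total first. Separately flag any
    group whose pain signal is high while its insight gap is low -
    observing that group would mostly confirm, not reveal."""
    ranked = sorted(groups, key=lambda g: g.total(), reverse=True)
    flagged = [g.name for g in ranked
               if g.scores["pain_signal"] >= 4
               and g.scores["insight_gap"] <= 2]
    return ranked, flagged
```

The point of the flag is the same as the insight-gap check in the room: a high total score does not settle the question if the pain is already well understood.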

The play closes with one named group, agreed recruitment criteria, and a brief of what specific behaviours and contexts the observer should pay attention to. That brief is the output.

Run it in the room

A clean first pass you can run

Participants
Facilitator plus 3–6 people with real knowledge of candidate user groups. 4–7 total is ideal.
Timing
15 min surfacing candidates and building profile cards. 25 min individual scoring. 20 min group discussion and insight-gap check. 10 min commitment and brief.
Prep
Ask each participant to name one or two candidate user groups in advance. Pull any existing support data, ticket summaries, or usage patterns — not to pre-load the answer but to give the portrait cards enough grounding to score honestly.
  1. Open by drawing out every candidate group — do not start with a pre-baked list. The facilitator asks: which groups keep coming up in account conversations or support queues? Which roles does the customer struggle to describe confidently? Which users are affected but rarely get spoken for in planning? Give each group a card. Do not score yet.
  2. Fill in portrait cards for each group: who they are, what they're trying to do, what friction signals exist and how confirmed they are, and roughly how many people this group represents. Two minutes per group maximum.
  3. Score each group independently on the five dimensions before discussing. Display scores simultaneously. Look first for sharp disagreements and for any group scoring low on insight gap despite strong confirmed pain.
  4. Run the insight-gap check out loud: for each high-pain group, ask whether the team already knows what the workaround is, why it exists, and how daily context drives it. If yes, the observation would mostly confirm rather than reveal — name that explicitly.
  5. Apply the decision-feed question: which group's observation feeds a real, active decision in the next 90 days? Let this sharpen the ranking rather than override the scoring entirely.
  6. Close by naming one group, agreeing recruitment criteria, and writing the observation brief together. The brief should be specific — not just a job title and vague context, but the behaviour or condition the team most needs to see.

You leave with

One named target user group, agreed recruitment criteria, and a one-paragraph observation brief ready to hand to whoever is running the Day-in-the-Life.

First failure point: The session ends with two or three groups 'still in consideration' because the room avoided the commitment step. The output is a shortlist rather than a single target, and the Day-in-the-Life never gets scheduled because nobody owns a specific brief.

What good looks like

If this is working, these are the signals you should be able to point to

  • One user group is named as the target before the session closes, with an agreed rationale the room can repeat without the facilitator.
  • At least one group initially assumed to be the obvious choice was reconsidered after the insight-gap check.
  • The observation brief names specific behaviours and conditions to watch for, not just a job title and a vague context.
  • The recruitment conversation is scheduled within two weeks of the scoping session.

How it worked there

The conditions that made it hold

This play was built for enterprise discovery teams that consistently struggle with the pre-observation question. The Day-in-the-Life method itself is well understood. The missing step is the structured conversation that turns 'we should shadow someone' into a defensible, agreed target — one the customer and the team can both explain and stand behind.

The five-dimension scoring canvas was designed to prevent two failure modes that appear repeatedly: defaulting to the most visible pain signal without checking insight gap, and skipping lower-visibility user groups whose workarounds would reveal more about actual adoption barriers than the headline group ever could.

What not to copy · Failure modes

What goes wrong when this is copied

The group with the most confirmed pain always wins. Pain signal is one of five dimensions, not the deciding factor. A group with well-documented friction and an already-funded fix is a weak DITL candidate — the observation would confirm what the team already believes. The more revealing target is often a quieter group whose friction patterns are real but whose daily context and compensating behaviours the team has never actually watched.

This is a research activity rather than a business decision. Framing the play as a 'research methodology step' invites stakeholders to treat it as something a researcher owns and delivers later. The scoring produces a commitment about where the team will invest its observational attention. That commitment belongs to the account lead, the product owner, and whoever is funding the next phase — not only to the research function.

The group surfacing step is skipped and the room scores only the group someone already had in mind. Groups that were never named cannot win.

Weak signals to watch for

  • It is not a substitute for the Day-in-the-Life observation itself. The play closes the scoping question and produces the brief; the observation still has to happen.
  • It is not a backlog-prioritisation exercise. The output is a target user group for deeper investigation, not a ranked list of problems to solve.
  • Do not compress or skip the group surfacing step. Groups left off the canvas cannot be compared, and the room will anchor on the first group mentioned.
  • Do not treat pain signal as the deciding dimension. High confirmed pain with a well-understood root cause often means a solution session is more appropriate than a day-in-the-life.

Closest Waypoint move

What to open next

Primary route

Service Safari / Field Observation

Use Service Safari once the target group and observation brief are agreed. This scoping play produces the input that the observation method needs to be purposeful.

Run this scoping play first whenever more than one user group is under consideration. Skip it only if the target is already agreed, documented, and understood by everyone running the observation.

Situation

Understand user needs and pain points

Use this situation route to frame the broader investigation context when the scoping session surfaces multiple high-scoring groups that each deserve attention.

Sources and confidence

Primary source

Reviewed by Discovery Waypoint Editorial Team · 2026-04-24