10-Question Problem Validation Checklist
A practical checklist to stress-test whether you truly understand the problem before committing your team, roadmap, and energy to a solution.
- Forces clarity on whose problem you're actually solving.
- Separates symptoms from root causes and constraints.
- Surfaces assumptions and failure modes early.
- Helps you decide what's worth building now (and what isn't).
How to use this checklist
1. Pick a single problem or feature and print this checklist.
2. Work through each question with real examples and names.
3. Mark the items you can answer with specifics, not vibes.
4. If boxes stay blank, you're not ready to build yet — you're ready to learn.
You don't “pass” or “fail” this checklist. It's an honesty tool: if you can't answer most questions concretely, the work is in discovery, not delivery.
The 10-Question Problem Validation Checklist
Below is the full checklist in a readable format. Use it to slow down on the problem, align your team, and avoid committing to work that doesn't move the needle.
1. Whose problem is this, specifically?
- □ Name the exact user segment or role. Not “our customers” — which ones?
- □ Identify the job they're trying to do. What are they actually trying to accomplish?
- □ Name at least three specific people in this segment. If you can't, you don't know whose problem this is yet.
2. How painful is it?
- □ Gather evidence from support messages, calls, and usage data. What have they told us directly?
- □ They've built workarounds: spreadsheets, manual processes, hacks they've invented.
- □ They can quantify the cost: “We lose 6 hours per week” or “This delays us 3 days every time.” (See the rough math after this list.)
- □ It comes up unprompted in multiple interviews: they bring it up without being asked.
- □ They're willing to pay to make it stop. Pain costs time, money, or sleep.
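If the quantified-cost box feels abstract, do the arithmetic. Below is a minimal back-of-the-envelope sketch in Python; every number in it is an illustrative assumption (team size, hourly rate, working weeks), not a benchmark, so swap in what your users actually report.

```python
# Back-of-the-envelope annual cost of a reported pain point.
# All inputs are illustrative assumptions -- replace with real numbers.
hours_lost_per_week = 6     # what the user reported ("we lose 6 hours per week")
affected_people = 4         # how many people hit this each week (assumption)
loaded_hourly_rate = 75.0   # fully loaded cost per hour, USD (assumption)
weeks_per_year = 48         # working weeks (assumption)

annual_cost = (hours_lost_per_week * affected_people
               * loaded_hourly_rate * weeks_per_year)
print(f"Estimated annual cost of the problem: ${annual_cost:,.0f}")
# -> Estimated annual cost of the problem: $86,400
```

If that estimate is small next to the cost of building a fix, the pain probably doesn't clear the bar yet.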
3. What are they doing today to cope?
- □ Walk through the last time this happened. “What did you do, step by step?”
- □ Document current tools and processes: what systems, spreadsheets, or rituals exist?
- □ Identify the workarounds: the more elaborate the workaround, the deeper the problem.
- □ Ask: “If this problem vanished, what would you stop doing?” Their answer tells you what they value enough to work around.
4. What measurable change would tell us the problem is gone?
- □ Define the result, not the solution. Not “users enable the feature.” What changes in their world?
- □ Measure efficiency gains: “Time spent drops from 90 minutes to 30 minutes per day.”
- □ Measure error reduction: “Manual exceptions decrease by 40%.”
- □ Measure time savings: “Process completes 3 days faster.”
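A lightweight way to keep these outcome metrics honest is to record each one with an explicit baseline, target, unit, and measurement window before any code is written. Here's a minimal sketch; the class name, fields, and numbers are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class SuccessMetric:
    """A measurable change that would tell us the problem is gone."""
    name: str
    baseline: float  # what we measure today
    target: float    # what "solved" looks like
    unit: str
    window: str      # when and for how long we'll measure

    def improvement_pct(self) -> float:
        # Percent reduction from baseline to target.
        return (self.baseline - self.target) / self.baseline * 100

# Illustrative example, echoing the efficiency item above:
daily_time = SuccessMetric(
    name="Time spent per day",
    baseline=90, target=30, unit="minutes",
    window="4 weeks after rollout",
)
print(f"{daily_time.name}: {daily_time.improvement_pct():.0f}% reduction")
# -> Time spent per day: 67% reduction
```

Writing the baseline down first also forces you to confirm you can measure it today; if you can't, that's a discovery task, not a delivery task.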
5. What are the competing constraints?
- □ Identify who else is affected. Front-desk staff, managers, owners — who touches this?
- □ Name what each stakeholder values: efficiency, control, visibility, cash flow, etc.
- □ Map the conflicts: what happens if we optimize for one and ignore another?
- □ Document the trade-offs: acknowledge what we're choosing and what we're giving up.
6. What won't we do?
- □ Set explicit scope boundaries: what versions of this problem are out of bounds?
- □ Define the no-gos: what features, integrations, or complexity are we excluding?
- □ State the appetite: how much time and how many people are we willing to invest?
- □ Name what we're not solving for: “We're solving attendance, not marketing.”
7. Where does this problem begin?
- □ Ask: “What happens right before this problem occurs?” Trace it upstream.
- □ Distinguish symptom from origin. Is this where it hurts or where it starts?
- □ Map the process. What's the full chain of events that leads here?
- □ Ask: “If we fixed this symptom, would the same problem show up somewhere else? Are we building a temporary bandage or fixing the source?”
8. What assumptions are we making?
- □ List every assumption explicitly. Get them out of your head and onto paper (a lightweight log format follows this list).
- □ Rank them: Critical, Important, Minor. Critical = if wrong, the whole thing fails.
- □ Test critical assumptions before building. Talk to users. Run small experiments.
- □ Plan to test important assumptions in beta. Don't wait until launch to learn.
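If it helps to get assumptions onto paper in a form the team can sort and review, even a tiny structured log works. A minimal sketch, assuming all you need is a statement, a severity rank, and a planned test; the example entries are hypothetical.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    CRITICAL = 3   # if wrong, the whole thing fails
    IMPORTANT = 2  # if wrong, significant rework
    MINOR = 1      # if wrong, minor impact

@dataclass
class Assumption:
    statement: str
    severity: Severity
    test: str  # how we'll check it before (or during) the build

# Hypothetical entries for illustration:
log = [
    Assumption("Front-desk staff will enter data at check-in",
               Severity.CRITICAL, "Shadow three front desks for a day"),
    Assumption("Managers review reports weekly",
               Severity.IMPORTANT, "Instrument report opens during beta"),
    Assumption("Users prefer email over SMS reminders",
               Severity.MINOR, "Ask in the next five interviews"),
]

# Critical assumptions get tested first: sort by severity, descending.
for a in sorted(log, key=lambda a: a.severity, reverse=True):
    print(f"[{a.severity.name}] {a.statement} -> test: {a.test}")
```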
9. What could make this the wrong problem?
- □ Run a pre-mortem: imagine it's six months later and the feature has failed. What happened?
- □ Ask: “What evidence would prove this is the wrong problem? What would we need to see to walk away?”
- □ Ask: “What user behavior would invalidate our hypothesis? What if they don't use it the way we expect?”
- □ Consider external factors. “What could make this problem irrelevant?”
10. If we solve this, what becomes possible next?
- □ Ask: “Does solving this make other problems easier?” Good problems compound.
- □ Identify what this unlocks: new markets, use cases, capabilities.
- □ Compare: does it compound or just check a box? One builds on itself. The other doesn't.
- □ Consider: is this a foundation or a dead end? The best problems open doors to problems you didn't know existed.
A final note
This checklist isn't about getting perfect answers. It's about asking better questions before you commit. If you can't answer most of these with specifics, it means you need more conversations, more observation, and more time in the problem space.
Don't rush past uncertainty. Sit with it. The discomfort of “I don't know yet” is cheaper than building the wrong thing.