User research is how you replace assumptions with evidence. Every product decision should trace back to something a real user said, did, or struggled with. Skipping research is the single most expensive mistake in product development: you'll build the wrong thing, then spend months fixing or discarding it.
Research Types
| Type | Purpose | Key Question | Output |
|---|---|---|---|
| Generative | Discover needs and opportunities | "What problems exist?" | Problem statements, opportunity areas |
| Evaluative | Test specific solutions | "Does this design work?" | Usability findings, design improvements |
| Qualitative | Understand the "why" | "Why do users behave this way?" | Insights, mental models, motivations |
| Quantitative | Measure the "what" | "How many? How often? How much?" | Statistics, benchmarks, conversion rates |
When to Use Each Type
| Project Phase | Research Type | Methods |
|---|---|---|
| Starting a new product | Generative + Qualitative | Interviews, field studies, diary studies |
| Defining features | Generative + Qualitative | Card sorting, concept testing, journey mapping |
| Designing solutions | Evaluative + Qualitative | Usability testing, A/B testing prototypes |
| Launched product | Evaluative + Quantitative | Analytics, surveys, A/B testing, heatmaps |
| Optimizing flows | Evaluative + Qualitative & Quantitative | Funnel analysis, usability testing, surveys |
Method Selection Matrix
| Method | Type | Sample Size | Cost | Time | Best For |
|---|---|---|---|---|---|
| User Interviews | Generative, Qual | 5-12 | Medium | 1-2 weeks | Understanding needs, motivations, pain points |
| Surveys | Evaluative, Quant | 100-1000+ | Low | 1-2 weeks | Validating findings at scale, measuring satisfaction |
| Usability Testing | Evaluative, Qual | 5-8 | Medium | 1-2 weeks | Testing designs, finding interaction problems |
| A/B Testing | Evaluative, Quant | 1000+ | Low-Medium | 2-4 weeks | Optimizing conversions, comparing options |
| Card Sorting | Generative, Qual | 15-30 | Low | 1 week | Information architecture, navigation structure |
| Tree Testing | Evaluative, Quant | 50+ | Low | 1 week | Validating navigation structure findability |
| Diary Studies | Generative, Qual | 10-15 | Medium | 2-4 weeks | Longitudinal behavior, habits, contexts |
| Field Studies | Generative, Qual | 5-10 | High | 1-3 weeks | Understanding real-world context and environment |
| Analytics Review | Evaluative, Quant | N/A | Low | Days | Understanding behavior patterns at scale |
| Heatmaps | Evaluative, Quant | 1000+ clicks | Low | 1-2 weeks | Visual attention, click patterns |
| Eye Tracking | Evaluative, Qual | 10-20 | High | 2-3 weeks | Detailed attention patterns, reading behavior |
| Competitive Analysis | Generative, Qual | 5-10 competitors | Low | 1 week | Understanding market patterns and gaps |
| Concept Testing | Evaluative, Qual | 5-10 | Low-Medium | 1 week | Testing early ideas before building |
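The quantitative methods above only pay off once the sample is large enough to separate signal from noise. As a quick sanity check for an A/B comparison, the sketch below runs a standard two-proportion z-test; this is generic statistics, not tied to any tool in the table, and the visitor and conversion counts are invented for illustration.

```python
from math import sqrt
from statistics import NormalDist

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test for comparing conversion rates of two variants."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Hypothetical numbers: variant B converts at 5.4% vs. 4.8% for variant A
p_a, p_b, z, p = ab_test_z(conversions_a=96, visitors_a=2000,
                           conversions_b=108, visitors_b=2000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.3f}")
```

If the p-value stays above your threshold (commonly 0.05), the difference you see could easily be noise and the test needs more traffic or a bigger effect.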
User Interviews
Planning Interviews
| Planning Step | Details |
|---|---|
| Define the goal | What specific questions do you need answered? "Understand how users manage their task lists" not "Learn about users" |
| Recruit participants | 5-8 people who match your target user profile. Not colleagues. Not friends. |
| Prepare a guide | 10-15 open-ended questions grouped by topic. Not a rigid script. |
| Set up recording | Get consent. Record audio/video for later review. |
| Assign roles | One interviewer (asks questions) + one note-taker (captures observations) |
| Schedule 60 minutes | 45 minutes of questions + 15 minutes buffer |
Interview Structure
1. INTRODUCTION & RAPPORT (5 min)
"Thanks for joining. I'm not testing you, I'm testing our product.
There are no wrong answers. I want to hear your honest experience."
2. BACKGROUND QUESTIONS (10 min)
"Tell me about your role."
"Walk me through a typical day."
"How long have you been doing [relevant activity]?"
3. CORE TOPIC EXPLORATION (25-30 min)
"Tell me about the last time you [did the thing we're researching]."
"What was frustrating about that?"
"What would make that easier?"
"Show me how you currently do this."
4. WRAP-UP (5 min)
"Is there anything else you'd like to share?"
"What's the one thing you'd change about [topic]?"
Asking Good Questions
| Question Type | Example | When to Use |
|---|---|---|
| Open-ended | "Tell me about a time when..." | Start of topics, let users lead |
| Follow-up | "Can you tell me more about that?" | When they mention something interesting |
| Clarifying | "What do you mean by 'complicated'?" | When they use vague or ambiguous terms |
| Behavioral | "What did you do next?" | Understanding actual behavior, not opinions |
| Contextual | "Where were you when this happened?" | Understanding environment and circumstances |
Questions to Avoid
| Bad Question Type | Example | Why It's Bad | Better Alternative |
|---|---|---|---|
| Leading | "Don't you think the checkout is confusing?" | Suggests the expected answer | "Walk me through your last checkout experience." |
| Yes/No | "Do you like the new design?" | Gives you no useful detail | "What stands out to you about this design?" |
| Hypothetical | "Would you use a feature that does X?" | People can't predict their future behavior | "When was the last time you needed to do X?" |
| Double-barreled | "How do you find and organize your files?" | Two questions disguised as one | Ask about finding and organizing separately. |
| Jargon-loaded | "How do you feel about the IA of our product?" | Users don't know your terminology | "How easy is it to find things on our site?" |
Analyzing Interview Data
- Transcribe or timestamp key moments from each interview
- Affinity mapping: Write each observation on a sticky note, group related notes into themes
- Count themes: "7 of 8 participants mentioned difficulty finding the search function"
- Look for patterns in behavior, not just stated preferences (people do ≠ people say)
- Quote directly in your findings. Stakeholders respond to user words more than researcher summaries
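Theme counting is easy to automate once observations are coded. A minimal sketch, assuming each observation has already been tagged as a (participant, theme) pair; the tags below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coded observations: (participant_id, theme)
observations = [
    ("P1", "search hard to find"), ("P2", "search hard to find"),
    ("P2", "too many notifications"), ("P3", "search hard to find"),
    ("P4", "manual data entry"), ("P5", "search hard to find"),
]

# Count how many *distinct participants* mentioned each theme,
# so one talkative participant doesn't inflate a theme.
participants_by_theme = defaultdict(set)
for participant, theme in observations:
    participants_by_theme[theme].add(participant)

total_participants = len({p for p, _ in observations})
for theme, people in sorted(participants_by_theme.items(),
                            key=lambda kv: len(kv[1]), reverse=True):
    print(f"{len(people)} of {total_participants} participants: {theme}")
```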
Surveys
When Surveys Work
| Good For | Bad For |
|---|---|
| Measuring satisfaction (NPS, CSAT) | Understanding why users are dissatisfied |
| Validating findings from qualitative research | Exploring new problem spaces |
| Reaching large sample sizes | Getting deep, nuanced insights |
| Tracking metrics over time | Replacing user interviews |
| Segmenting users by behavior or demographics | Asking about complex workflows |
Writing Good Survey Questions
| Guideline | Bad Example | Good Example |
|---|---|---|
| One question per question | "How satisfied are you with speed and reliability?" | Split into two separate questions |
| Use balanced scales | "Good, Very Good, Excellent" | "Very Poor, Poor, Fair, Good, Excellent" |
| Avoid jargon | "Rate the API documentation" | "Rate the technical guides" |
| Randomize option order | Listing multiple-choice options in the same fixed order for everyone | Randomize non-scale options to reduce position bias |
| Keep it short | 40-question survey | 10-15 questions maximum (5-7 minutes) |
| Include a free-text field | All multiple choice | "Is there anything else you'd like to share?" |
Common Survey Scales
| Scale | Use For | Range |
|---|---|---|
| Likert | Agreement/satisfaction | 1-5 or 1-7 (Strongly Disagree → Strongly Agree) |
| NPS | Loyalty/recommendation | 0-10 (How likely to recommend?) |
| SUS | Usability perception | 10 questions, 1-5 scale, scored 0-100 |
| CSAT | Satisfaction with specific interaction | 1-5 (Very Unsatisfied → Very Satisfied) |
| CES | Effort to complete task | 1-7 (Very Difficult → Very Easy) |
| SEQ | Single task difficulty | 1-7 (Very Difficult → Very Easy) |
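NPS and SUS have fixed scoring rules that are easy to miscalculate by hand. A minimal sketch of both, using the standard formulas; the response data is made up:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def sus(responses):
    """System Usability Scale for one respondent: ten 1-5 answers -> 0-100.
    Odd-numbered items are positively worded, even-numbered negatively."""
    assert len(responses) == 10
    score = 0
    for i, r in enumerate(responses, start=1):
        score += (r - 1) if i % 2 == 1 else (5 - r)
    return score * 2.5

# Hypothetical data
print(nps([10, 9, 8, 6, 10, 7, 3, 9]))      # 25.0
print(sus([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

Report NPS as a whole number (-100 to 100) and SUS as the average of per-respondent scores.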
Card Sorting
Card sorting reveals how users expect information to be organized. It's essential for designing navigation and information architecture.
Types of Card Sorting
| Type | Process | When to Use | Output |
|---|---|---|---|
| Open | Users group cards and name the groups themselves | You have no existing IA and want to discover natural groupings | Category names + groupings |
| Closed | Users sort cards into predefined categories | You have a proposed IA and want to validate it | Fit score for each category |
| Hybrid | Users sort into predefined categories but can create new ones | Validating with flexibility to discover gaps | Validation + new category ideas |
Running a Card Sort
- Create cards: Write each content item on a card (physical or digital). 30-60 cards is typical.
- Recruit participants: 15-30 for open sort, 30-50 for closed sort.
- Tools: OptimalSort, Maze, or physical sticky notes.
- Analyze results:
| Analysis Method | What It Shows |
|---|---|
| Similarity matrix | How often two items were placed together (percentage) |
| Dendrogram | Hierarchical clustering showing which items group most strongly |
| Category agreement | How many participants used the same grouping |
| Standardization grid | How each card was categorized by each participant |
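The similarity matrix is just pairwise co-occurrence: for every pair of cards, the percentage of participants who placed them in the same group. A minimal sketch, assuming each participant's sort is stored as a list of groups; the card names are hypothetical:

```python
from itertools import combinations

# Hypothetical open-sort results: one list of groups per participant
sorts = [
    [{"Invoices", "Receipts"}, {"Profile", "Password"}],
    [{"Invoices", "Receipts", "Password"}, {"Profile"}],
    [{"Invoices", "Receipts"}, {"Profile", "Password"}],
]

def similarity(sorts):
    """% of participants who placed each pair of cards in the same group."""
    cards = sorted(set().union(*[group for sort in sorts for group in sort]))
    matrix = {}
    for a, b in combinations(cards, 2):
        together = sum(any({a, b} <= group for group in sort) for sort in sorts)
        matrix[(a, b)] = 100 * together / len(sorts)
    return matrix

for pair, pct in sorted(similarity(sorts).items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]} + {pair[1]}: {pct:.0f}%")
```

The percentages this produces are what the interpretation thresholds below refer to.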
Interpreting results:
- Items sorted together by 70%+ of participants → Strongly related, keep them together
- Items sorted together by 40-69% → Somewhat related, consider grouping
- Items that split across groups → May need to appear in multiple places or need better labeling
- Groups with high agreement on names → Use those names for navigation labels
Tree Testing
Tree testing validates whether users can find things in your proposed navigation structure. It strips away visual design to test the structure alone.
Running a Tree Test
- Create a text-only tree of your proposed navigation (no styling)
- Write 8-12 tasks: "Where would you go to change your password?"
- Recruit 50+ participants
- Measure success rate and directness (found on first attempt vs. backtracked)
Success benchmarks:
| Metric | Target | What It Means |
|---|---|---|
| Direct success rate | > 70% | Users find it on the first try |
| Indirect success rate | > 80% | Users find it eventually |
| Time to find | < 30 seconds | Not too many wrong turns |
| First click accuracy | > 50% | Users' instinct leads them right |
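These benchmarks fall out of simple per-task tallies. A minimal sketch, assuming each attempt is logged with success, directness, first-click correctness, and time; the field names and data are illustrative, not from any particular tree-testing tool:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Attempt:
    succeeded: bool            # reached the correct node
    direct: bool               # no backtracking on the way there
    first_click_correct: bool  # first click was on the right branch
    seconds: float

def tree_test_metrics(attempts):
    n = len(attempts)
    return {
        "direct_success_rate": sum(a.succeeded and a.direct for a in attempts) / n,
        "indirect_success_rate": sum(a.succeeded for a in attempts) / n,
        "first_click_accuracy": sum(a.first_click_correct for a in attempts) / n,
        "median_time_s": median(a.seconds for a in attempts),
    }

# Hypothetical results for one task ("change your password")
attempts = [
    Attempt(True, True, True, 12.0), Attempt(True, False, False, 41.0),
    Attempt(False, False, True, 55.0), Attempt(True, True, True, 18.0),
]
print(tree_test_metrics(attempts))
```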
Creating User Personas
Personas are fictional representations of your key user types, based on real research data.
Persona Template
┌────────────────────────────────────────────────────────────────┐
│ [Photo] Sarah Chen, 34 │
│ Marketing Manager at a mid-size SaaS company │
│ Tech comfort: High │ Frequency: Daily user │
├────────────────────────────────────────────────────────────────┤
│ GOALS │ FRUSTRATIONS │
│ • Streamline team workflows │ • Too many disconnected tools │
│ • Track campaign ROI │ • Manual reporting wastes hrs │
│ • Prove value to leadership │ • Data stuck in silos │
├────────────────────────────────────────────────────────────────┤
│ BEHAVIORS │ CONTEXT │
│ • Checks dashboards first AM │ • Works hybrid (office 3x/wk)│
│ • Prefers visual data │ • Manages team of 5 │
│ • Delegates detailed analysis │ • Budget decision-maker │
│ • Uses mobile for quick checks │ • Reports to VP of Marketing │
├────────────────────────────────────────────────────────────────┤
│ QUOTE │
│ "I need to show results, not just activity." │
├────────────────────────────────────────────────────────────────┤
│ DESIGN IMPLICATIONS │
│ • Dashboard-first interface with key metrics visible │
│ • One-click report generation for stakeholder presentations │
│ • Mobile-friendly read-only views for on-the-go checking │
│ • Integration with existing tools (Slack, HubSpot, GA) │
└────────────────────────────────────────────────────────────────┘
Persona Best Practices
| Do | Don't |
|---|---|
| Base personas on real research data | Create personas from assumptions |
| Include 3-5 personas max | Create 10+ personas (nobody will use them) |
| Focus on goals, behaviors, and frustrations | Focus on demographics (age, income) |
| Include design implications | Make them so abstract they don't inform decisions |
| Update personas as you learn more | Treat them as permanent truth |
| Share with the entire team | Keep them locked in a research report |
Proto-Personas (When You Can't Do Full Research)
If you genuinely can't do user research (rare, but it happens), create proto-personas based on available data: support tickets, analytics, sales team knowledge, app reviews. Clearly label them as hypotheses to be validated.
Journey Mapping
A journey map documents the steps a user takes to accomplish a goal, including their thoughts, emotions, and pain points at each stage.
Journey Map Structure
| | Awareness | Consideration | Purchase | Onboarding | Regular Use |
|---|---|---|---|---|---|
| Actions | Google search, visit site, read blog | Compare plans, read reviews, start free trial | Enter CC info, complete form, confirm order | Complete setup, watch tutorial, import data | Daily login, use features, share with team |
| Thinking | "Is this legit?" | "Which plan is right?" | "Is this secure?" | "How do I get started?" | "Can I do everything I need?" |
| Feeling | 😐 Neutral | 🤔 Uncertain | 😟 Anxious | 😕 Confused | 😊 Satisfied (or 😤 Frustrated) |
| Pain points | Info hard to find | Can't compare features easily | Too many form fields | No guidance after signup | Feature X is buried |
| Opportunities | Clear value prop on landing page | Comparison table on pricing page | Streamline checkout to 3 fields | Interactive onboarding wizard | Surface key features in-context |
When to Journey Map
| Use It When | Skip It When |
|---|---|
| You need cross-team alignment on user experience | You need to test a specific screen |
| You want to identify the biggest pain points across a flow | You already know the problem and need solutions |
| You're redesigning an end-to-end experience | You're making small, incremental improvements |
| You want to prioritize where to invest design effort | You need quantitative data |
Research Ops: Practical Tips
Recruiting Participants
| Source | Pros | Cons |
|---|---|---|
| Existing users (email, in-app) | Know your product, easy to reach | May be too familiar, survivorship bias |
| User testing platforms (UserTesting, Maze) | Fast, large pool | May not match your exact user profile |
| Social media / communities | Highly targeted | Self-selected, may not be representative |
| Customer support / success teams | Know who has problems | Bias toward dissatisfied users |
| Screener surveys | Filter for exact criteria | Takes time to set up and recruit |
Research Repository
Store research findings where the team can find and reference them:
| What to Store | Format |
|---|---|
| Key findings | Short summaries with supporting evidence |
| User quotes | Direct quotes tagged by theme |
| Video clips | 30-60 second highlight clips |
| Recommendations | Specific, actionable design recommendations |
| Metadata | Date, method, participant count, researcher |
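One lightweight way to keep repository entries consistent is to store each finding as a small structured record. The schema below is one possible shape based on the table above; the field names are assumptions, not a standard, and the sample values are invented:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ResearchFinding:
    """One entry in a research repository."""
    summary: str                # short, evidence-backed key finding
    quotes: list[str]           # direct user quotes, tagged by theme
    clip_urls: list[str]        # 30-60 second highlight clips
    recommendation: str         # specific, actionable design recommendation
    method: str                 # e.g. "usability test", "interviews"
    participant_count: int
    researcher: str
    study_date: date
    tags: list[str] = field(default_factory=list)

finding = ResearchFinding(
    summary="7 of 8 participants could not find search",
    quotes=['"I assumed search would be at the top of the page."'],
    clip_urls=["https://example.com/clips/search-miss.mp4"],
    recommendation="Move search into the global header",
    method="usability test",
    participant_count=8,
    researcher="Example Researcher",
    study_date=date(2024, 3, 1),
    tags=["navigation", "search"],
)
print(finding.summary)
```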
Communicating Research Findings
| Audience | Format | Focus On |
|---|---|---|
| Executives | 1-page summary + 3-5 key findings | Business impact, top-line metrics |
| Product team | Detailed report + video clips | Specific problems and recommendations |
| Designers | Annotated findings + user quotes | Design implications and constraints |
| Developers | Technical constraints and requirements | What the solution needs to handle |
Common Mistakes
| Mistake | Impact | Fix |
|---|---|---|---|
| Asking users what they want | Users design bad solutions | Ask about their problems, not their solutions |
| Only researching before launch | Missing real-world usage issues | Research continuously throughout the product lifecycle |
| Small survey samples (< 50) | Margins of error too wide to draw reliable conclusions | Aim for 100+ survey responses; 5-8 is enough for usability tests |
| Confirmation bias in analysis | You find what you expected, miss reality | Have someone outside the project review findings |
| Not including stakeholders | Research results ignored | Invite stakeholders to observe sessions |
| Personas based on demographics | "25-34 year old urban professional" tells you nothing useful | Focus on behaviors, goals, and frustrations |
| Not sharing findings | Research sits in a doc nobody reads | Present findings, share clips, create a research repository |
Key Takeaways
- User research is not optional. Even 3-5 interviews will surface problems you never imagined.
- Use the right method for the question: interviews for "why," surveys for "how many," usability testing for "does it work."
- Ask about past behavior, not hypothetical futures. "Tell me about the last time..." not "Would you use..."
- Card sorting and tree testing should precede any major navigation redesign.
- Personas are only useful if they're based on research and include design implications.
- Journey maps expose the biggest pain points across an end-to-end experience.
- Share findings widely: 1-page summaries for executives, video clips for designers, quotes for everyone.
- Research is never done. Every release should inform the next round of research.