User Research Methods

User research is how you replace assumptions with evidence. Every product decision should trace back to something a real user said, did, or struggled with. Skipping research is the single most expensive mistake in product development: you'll build the wrong thing, then spend months fixing or discarding it.

Research Types

| Type | Purpose | Key Question | Output |
|---|---|---|---|
| Generative | Discover needs and opportunities | "What problems exist?" | Problem statements, opportunity areas |
| Evaluative | Test specific solutions | "Does this design work?" | Usability findings, design improvements |
| Qualitative | Understand the "why" | "Why do users behave this way?" | Insights, mental models, motivations |
| Quantitative | Measure the "what" | "How many? How often? How much?" | Statistics, benchmarks, conversion rates |

When to Use Each Type

| Project Phase | Research Type | Methods |
|---|---|---|
| Starting a new product | Generative + Qualitative | Interviews, field studies, diary studies |
| Defining features | Generative + Qualitative | Card sorting, concept testing, journey mapping |
| Designing solutions | Evaluative + Qualitative | Usability testing, A/B testing prototypes |
| Launched product | Evaluative + Quantitative | Analytics, surveys, A/B testing, heatmaps |
| Optimizing flows | Evaluative + Qual/Quant | Funnel analysis, usability testing, surveys |

Method Selection Matrix

| Method | Type | Sample Size | Cost | Time | Best For |
|---|---|---|---|---|---|
| User Interviews | Generative, Qual | 5-12 | Medium | 1-2 weeks | Understanding needs, motivations, pain points |
| Surveys | Evaluative, Quant | 100-1000+ | Low | 1-2 weeks | Validating findings at scale, measuring satisfaction |
| Usability Testing | Evaluative, Qual | 5-8 | Medium | 1-2 weeks | Testing designs, finding interaction problems |
| A/B Testing | Evaluative, Quant | 1000+ | Low-Medium | 2-4 weeks | Optimizing conversions, comparing options |
| Card Sorting | Generative, Qual | 15-30 | Low | 1 week | Information architecture, navigation structure |
| Tree Testing | Evaluative, Quant | 50+ | Low | 1 week | Validating findability in a navigation structure |
| Diary Studies | Generative, Qual | 10-15 | Medium | 2-4 weeks | Longitudinal behavior, habits, contexts |
| Field Studies | Generative, Qual | 5-10 | High | 1-3 weeks | Understanding real-world context and environment |
| Analytics Review | Evaluative, Quant | N/A | Low | Days | Understanding behavior patterns at scale |
| Heatmaps | Evaluative, Quant | 1000+ clicks | Low | 1-2 weeks | Visual attention, click patterns |
| Eye Tracking | Evaluative, Qual | 10-20 | High | 2-3 weeks | Detailed attention patterns, reading behavior |
| Competitive Analysis | Generative, Qual | 5-10 competitors | Low | 1 week | Understanding market patterns and gaps |
| Concept Testing | Evaluative, Qual | 5-10 | Low-Medium | 1 week | Testing early ideas before building |

User Interviews

Planning Interviews

| Planning Step | Details |
|---|---|
| Define the goal | What specific questions do you need answered? "Understand how users manage their task lists," not "Learn about users." |
| Recruit participants | 5-8 people who match your target user profile. Not colleagues. Not friends. |
| Prepare a guide | 10-15 open-ended questions grouped by topic. Not a rigid script. |
| Set up recording | Get consent. Record audio/video for later review. |
| Assign roles | One interviewer (asks questions) + one note-taker (captures observations). |
| Schedule 60 minutes | 45 minutes of questions + 15 minutes of buffer. |

Interview Structure

1. INTRODUCTION & RAPPORT (5 min)
   "Thanks for joining. I'm not testing you, I'm testing our product.
   There are no wrong answers. I want to hear your honest experience."

2. BACKGROUND QUESTIONS (10 min)
   "Tell me about your role."
   "Walk me through a typical day."
   "How long have you been doing [relevant activity]?"

3. CORE TOPIC EXPLORATION (25-30 min)
   "Tell me about the last time you [did the thing we're researching]."
   "What was frustrating about that?"
   "What would make that easier?"
   "Show me how you currently do this."

4. WRAP-UP (5 min)
   "Is there anything else you'd like to share?"
   "What's the one thing you'd change about [topic]?"

Asking Good Questions

| Question Type | Example | When to Use |
|---|---|---|
| Open-ended | "Tell me about a time when..." | Start of topics; let users lead |
| Follow-up | "Can you tell me more about that?" | When they mention something interesting |
| Clarifying | "What do you mean by 'complicated'?" | When they use vague or ambiguous terms |
| Behavioral | "What did you do next?" | Understanding actual behavior, not opinions |
| Contextual | "Where were you when this happened?" | Understanding environment and circumstances |

Questions to Avoid

| Bad Question Type | Example | Why It's Bad | Better Alternative |
|---|---|---|---|
| Leading | "Don't you think the checkout is confusing?" | Suggests the expected answer | "Walk me through your last checkout experience." |
| Yes/No | "Do you like the new design?" | Gives you no useful detail | "What stands out to you about this design?" |
| Hypothetical | "Would you use a feature that does X?" | People can't predict their future behavior | "When was the last time you needed to do X?" |
| Double-barreled | "How do you find and organize your files?" | Two questions disguised as one | Ask about finding and organizing separately. |
| Jargon-loaded | "How do you feel about the IA of our product?" | Users don't know your terminology | "How easy is it to find things on our site?" |

Analyzing Interview Data

  1. Transcribe or timestamp key moments from each interview
  2. Affinity mapping: Write each observation on a sticky note, group related notes into themes
  3. Count themes: "7 of 8 participants mentioned difficulty finding the search function"
  4. Look for patterns in behavior, not just stated preferences (people do ≠ people say)
  5. Quote directly in your findings. Stakeholders respond to user words more than researcher summaries
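
The tallying in steps 2-3 can be sketched in code. This is a minimal Python sketch; the observation list and theme labels are hypothetical, and real affinity mapping happens on a wall or whiteboard before any counting.

```python
# Hypothetical tagged observations from affinity mapping:
# (participant_id, theme) pairs. Names and themes are illustrative.
observations = [
    ("P1", "search hard to find"), ("P2", "search hard to find"),
    ("P3", "search hard to find"), ("P4", "search hard to find"),
    ("P1", "too many notifications"), ("P5", "too many notifications"),
]

def theme_counts(observations):
    """Count distinct participants per theme, so one talkative
    participant can't inflate a theme's apparent weight."""
    seen = {}
    for pid, theme in observations:
        seen.setdefault(theme, set()).add(pid)
    return {theme: len(pids) for theme, pids in seen.items()}

counts = theme_counts(observations)
# counts["search hard to find"] == 4 -> "4 of 5 participants
# mentioned difficulty finding search"
```

Counting distinct participants rather than raw mentions is the point: a theme one person repeats five times is weaker evidence than one mentioned once each by five people.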

Surveys

When Surveys Work

| Good For | Bad For |
|---|---|
| Measuring satisfaction (NPS, CSAT) | Understanding why users are dissatisfied |
| Validating findings from qualitative research | Exploring new problem spaces |
| Reaching large sample sizes | Getting deep, nuanced insights |
| Tracking metrics over time | Replacing user interviews |
| Segmenting users by behavior or demographics | Asking about complex workflows |
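
Large samples matter because a survey's margin of error shrinks with the square root of sample size. A quick sketch of the standard 95%-confidence approximation for a proportion:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for an observed proportion p
    from n responses, at 95% confidence (z = 1.96). Uses the
    worst case p = 0.5 unless you know better."""
    return z * math.sqrt(p * (1 - p) / n)

margin_of_error(100)  # ~0.098 -> about +/-10 percentage points
margin_of_error(400)  # ~0.049 -> about +/-5 percentage points
```

This is why a 50-response survey reporting "62% prefer option A" is close to uninterpretable: the true figure could plausibly be anywhere from roughly 48% to 76%.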

Writing Good Survey Questions

| Guideline | Bad Example | Good Example |
|---|---|---|
| One question per question | "How satisfied are you with speed and reliability?" | Split into two separate questions |
| Use balanced scales | "Good, Very Good, Excellent" | "Very Poor, Poor, Neutral, Good, Excellent" |
| Avoid jargon | "Rate the API documentation" | "Rate the technical guides" |
| Randomize option order | Always listing "Very Satisfied" first | Randomize to avoid position bias |
| Keep it short | 40-question survey | 10-15 questions maximum (5-7 minutes) |
| Include a free-text field | All multiple choice | "Is there anything else you'd like to share?" |

Common Survey Scales

| Scale | Use For | Range |
|---|---|---|
| Likert | Agreement/satisfaction | 1-5 or 1-7 (Strongly Disagree → Strongly Agree) |
| NPS | Loyalty/recommendation | 0-10 (How likely to recommend?) |
| SUS | Usability perception | 10 questions, 1-5 scale, scored 0-100 |
| CSAT | Satisfaction with specific interaction | 1-5 (Very Unsatisfied → Very Satisfied) |
| CES | Effort to complete task | 1-7 (Very Difficult → Very Easy) |
| SEQ | Single task difficulty | 1-7 (Very Difficult → Very Easy) |
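
The NPS and SUS rows above have fixed scoring formulas. A minimal Python sketch of both (the formulas are standard; the response data is illustrative):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a 0-10 'how likely to recommend' question."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def sus(responses):
    """System Usability Scale: 10 items answered on a 1-5 scale.
    Odd-numbered (positively worded) items score r - 1; even-numbered
    (negatively worded) items score 5 - r; the sum is scaled by 2.5
    onto a 0-100 range."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

nps([10, 9, 9, 7, 3, 2])  # 3 promoters, 2 detractors out of 6
sus([3] * 10)             # all-neutral answers score 50.0
```

Note that NPS can range from -100 to +100, and a SUS of 50 is not "half good": published benchmarks put the average SUS around 68.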

Card Sorting

Card sorting reveals how users expect information to be organized. It's essential for designing navigation and information architecture.

Types of Card Sorting

| Type | Process | When to Use | Output |
|---|---|---|---|
| Open | Users group cards and name the groups themselves | You have no existing IA and want to discover natural groupings | Category names + groupings |
| Closed | Users sort cards into predefined categories | You have a proposed IA and want to validate it | Fit score for each category |
| Hybrid | Users sort into predefined categories but can create new ones | Validating with flexibility to discover gaps | Validation + new category ideas |

Running a Card Sort

  1. Create cards: Write each content item on a card (physical or digital). 30-60 cards is typical.
  2. Recruit participants: 15-30 for open sort, 30-50 for closed sort.
  3. Tools: OptimalSort, Maze, or physical sticky notes.
  4. Analyze results:

| Analysis Method | What It Shows |
|---|---|
| Similarity matrix | How often two items were placed together (percentage) |
| Dendrogram | Hierarchical clustering showing which items group most strongly |
| Category agreement | How many participants used the same grouping |
| Standardization grid | How each card was categorized by each participant |

Interpreting results:

  • Items sorted together by 70%+ of participants → Strongly related, keep them together
  • Items sorted together by 40-69% → Somewhat related, consider grouping
  • Items that split across groups → May need to appear in multiple places or need better labeling
  • Groups with high agreement on names → Use those names for navigation labels
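
The 70% and 40-69% thresholds above apply to a similarity matrix, which can be computed directly from raw sort data. A minimal Python sketch; the card names and group labels are illustrative (tools like OptimalSort produce this for you):

```python
from itertools import combinations

def similarity_matrix(sorts):
    """sorts: one dict per participant mapping card -> group label.
    Returns the % of participants who placed each card pair in the
    same group, keyed by alphabetically ordered card pairs."""
    cards = sorted(sorts[0])
    n = len(sorts)
    return {
        (a, b): round(100 * sum(1 for s in sorts if s[a] == s[b]) / n)
        for a, b in combinations(cards, 2)
    }

# Three illustrative participants sorting three cards:
sorts = [
    {"Password": "Account",  "Email": "Account",  "Invoices": "Billing"},
    {"Password": "Security", "Email": "Security", "Invoices": "Payments"},
    {"Password": "Settings", "Email": "Account",  "Invoices": "Billing"},
]
sim = similarity_matrix(sorts)
# sim[("Email", "Password")] == 67: paired by 2 of 3 participants,
# i.e. "somewhat related" by the thresholds above.
```

Note that group *labels* can differ between participants ("Account" vs. "Security") while the pairing still counts; the matrix only cares whether two cards landed in the same group.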

Tree Testing

Tree testing validates whether users can find things in your proposed navigation structure. It strips away visual design to test the structure alone.

Running a Tree Test

  1. Create a text-only tree of your proposed navigation (no styling)
  2. Write 8-12 tasks: "Where would you go to change your password?"
  3. Recruit 50+ participants
  4. Measure success rate and directness (found on first attempt vs. backtracked)

Success benchmarks:

| Metric | Target | What It Means |
|---|---|---|
| Direct success rate | > 70% | Users find it on the first try |
| Indirect success rate | > 80% | Users find it eventually |
| Time to find | < 30 seconds | Not too many wrong turns |
| First-click accuracy | > 50% | Users' first instinct leads them the right way |
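
The success-rate metrics above can be computed from per-task attempt logs. A minimal Python sketch; the `found`/`backtracked` record schema is an assumption for illustration, not a standard export format.

```python
def tree_test_metrics(attempts):
    """attempts: one record per task attempt. The {'found', 'backtracked'}
    schema is hypothetical; map your tool's export onto it."""
    n = len(attempts)
    direct = sum(1 for a in attempts if a["found"] and not a["backtracked"])
    indirect = sum(1 for a in attempts if a["found"])
    return {"direct_success": 100 * direct / n,
            "indirect_success": 100 * indirect / n}

attempts = ([{"found": True,  "backtracked": False}] * 7 +
            [{"found": True,  "backtracked": True}]  * 2 +
            [{"found": False, "backtracked": True}]  * 1)
tree_test_metrics(attempts)
# direct success 70.0, indirect success 90.0: at the direct target,
# above the indirect one
```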

Creating User Personas

Personas are fictional representations of your key user types, based on real research data.

Persona Template

┌────────────────────────────────────────────────────────────────┐
│ [Photo]  Sarah Chen, 34                                        │
│          Marketing Manager at a mid-size SaaS company          │
│          Tech comfort: High │ Frequency: Daily user            │
├────────────────────────────────────────────────────────────────┤
│ GOALS                          │ FRUSTRATIONS                  │
│ • Streamline team workflows    │ • Too many disconnected tools │
│ • Track campaign ROI           │ • Manual reporting wastes hrs │
│ • Prove value to leadership    │ • Data stuck in silos         │
├────────────────────────────────────────────────────────────────┤
│ BEHAVIORS                      │ CONTEXT                       │
│ • Checks dashboards first AM   │ • Works hybrid (office 3x/wk) │
│ • Prefers visual data          │ • Manages team of 5           │
│ • Delegates detailed analysis  │ • Budget decision-maker       │
│ • Uses mobile for quick checks │ • Reports to VP of Marketing  │
├────────────────────────────────────────────────────────────────┤
│ QUOTE                                                          │
│ "I need to show results, not just activity."                   │
├────────────────────────────────────────────────────────────────┤
│ DESIGN IMPLICATIONS                                            │
│ • Dashboard-first interface with key metrics visible           │
│ • One-click report generation for stakeholder presentations    │
│ • Mobile-friendly read-only views for on-the-go checking       │
│ • Integration with existing tools (Slack, HubSpot, GA)         │
└────────────────────────────────────────────────────────────────┘

Persona Best Practices

| Do | Don't |
|---|---|
| Base personas on real research data | Create personas from assumptions |
| Include 3-5 personas max | Create 10+ personas (nobody will use them) |
| Focus on goals, behaviors, and frustrations | Focus on demographics (age, income) |
| Include design implications | Make them so abstract they don't inform decisions |
| Update personas as you learn more | Treat them as permanent truth |
| Share with the entire team | Keep them locked in a research report |

Proto-Personas (When You Can't Do Full Research)

If you genuinely can't do user research (rare, but it happens), create proto-personas based on available data: support tickets, analytics, sales team knowledge, app reviews. Clearly label them as hypotheses to be validated.

Journey Mapping

A journey map documents the steps a user takes to accomplish a goal, including their thoughts, emotions, and pain points at each stage.

Journey Map Structure

PHASE:     Awareness  →  Consideration  →  Purchase  →  Onboarding  →  Regular Use

ACTIONS:   Google search   Compare plans    Enter CC info   Complete setup   Daily login
           Visit site      Read reviews     Complete form   Watch tutorial   Use features
           Read blog       Start free trial  Confirm order  Import data     Share with team

THINKING:  "Is this        "Which plan       "Is this        "How do I get   "Can I do
            legit?"         is right?"        secure?"        started?"       everything
                                                                             I need?"

FEELING:   😐 Neutral      🤔 Uncertain      😟 Anxious      😕 Confused     😊 Satisfied
                                                                             (or 😤 Frustrated)

PAIN       Info hard       Can't compare     Too many        No guidance     Feature X
POINTS:    to find         features easily   form fields     after signup    is buried

OPPS:      Clear value     Comparison        Streamline      Interactive     Surface key
           prop on landing  table on          checkout to     onboarding      features
           page            pricing page      3 fields        wizard          in-context

When to Journey Map

| Use It When | Skip It When |
|---|---|
| You need cross-team alignment on user experience | You need to test a specific screen |
| You want to identify the biggest pain points across a flow | You already know the problem and need solutions |
| You're redesigning an end-to-end experience | You're making small, incremental improvements |
| You want to prioritize where to invest design effort | You need quantitative data |

Research Ops: Practical Tips

Recruiting Participants

| Source | Pros | Cons |
|---|---|---|
| Existing users (email, in-app) | Know your product, easy to reach | May be too familiar; survivorship bias |
| User testing platforms (UserTesting, Maze) | Fast, large pool | May not match your exact user profile |
| Social media / communities | Highly targeted | Self-selected, may not be representative |
| Customer support / success teams | Know who has problems | Bias toward dissatisfied users |
| Screener surveys | Filter for exact criteria | Takes time to set up and recruit |

Research Repository

Store research findings where the team can find and reference them:

| What to Store | Format |
|---|---|
| Key findings | Short summaries with supporting evidence |
| User quotes | Direct quotes tagged by theme |
| Video clips | 30-60 second highlight clips |
| Recommendations | Specific, actionable design recommendations |
| Metadata | Date, method, participant count, researcher |

Communicating Research Findings

| Audience | Format | Focus On |
|---|---|---|
| Executives | 1-page summary + 3-5 key findings | Business impact, top-line metrics |
| Product team | Detailed report + video clips | Specific problems and recommendations |
| Designers | Annotated findings + user quotes | Design implications and constraints |
| Developers | Technical constraints and requirements | What the solution needs to handle |

Common Mistakes

| Mistake | Impact | Fix |
|---|---|---|
| Asking users what they want | Users design bad solutions | Ask about their problems, not their solutions |
| Only researching before launch | Missing real-world usage issues | Research continuously throughout the product lifecycle |
| Small survey samples (< 50) | Margins of error too wide to draw conclusions | Aim for 100+ for surveys; 5-8 is enough for usability tests |
| Confirmation bias in analysis | You find what you expected and miss reality | Have someone outside the project review findings |
| Not including stakeholders | Research results get ignored | Invite stakeholders to observe sessions |
| Personas based on demographics | "25-34 year old urban professional" tells you nothing useful | Focus on behaviors, goals, and frustrations |
| Not sharing findings | Research sits in a doc nobody reads | Present findings, share clips, create a research repository |

Key Takeaways

  • User research is not optional. Even 3-5 interviews will surface problems you never imagined.
  • Use the right method for the question: interviews for "why," surveys for "how many," usability testing for "does it work."
  • Ask about past behavior, not hypothetical futures. "Tell me about the last time..." not "Would you use..."
  • Card sorting and tree testing should precede any major navigation redesign.
  • Personas are only useful if they're based on research and include design implications.
  • Journey maps expose the biggest pain points across an end-to-end experience.
  • Share findings widely: 1-page summaries for executives, video clips for designers, quotes for everyone.
  • Research is never done. Every release should inform the next round of research.