UX research is not a one-size-fits-all activity. The method you choose determines what you learn. Use the wrong method, and you’ll answer the wrong question with confidence. Use the right method, and you’ll uncover insights that actually improve the product.

Here’s a practical guide to the most common UX research methods, what they’re good for, and when to use them.

The Two Big Distinctions

Before choosing a method, understand two fundamental axes.

Attitudinal vs. Behavioral: Attitudinal methods capture what people say. Behavioral methods observe what people do. The two often differ. People say they want simplicity. They behave as if they want features. You need both.

Qualitative vs. Quantitative: Qualitative methods answer why (small sample, rich data). Quantitative methods answer how many (large sample, statistical confidence). Start with qualitative to understand the problem. Use quantitative to measure its scope.

The Method Library

User Interviews

What it is: One-on-one conversations exploring users’ experiences, motivations, and pain points.

Best for: Early discovery, understanding user goals, identifying unmet needs, exploring how people currently solve problems.

Not for: Validating design solutions (people are bad at predicting what they’ll actually use), measuring anything statistically.

Sample size: 5-10 participants per user group.

Why it works: Interviews reveal the “why” behind behavior. A survey tells you that 40% of users abandon checkout. An interview tells you they abandon because the shipping cost appears too late and feels like a betrayal.

The trap: Asking people what they want. Don’t ask that. Ask about recent experiences. Ask about frustrations. Ask about workarounds. Their answers will imply solutions without requiring them to design.

Usability Testing

What it is: Watching people attempt tasks with your product (or prototype) and noting where they succeed, struggle, or give up.

Best for: Identifying friction points, validating navigation, testing whether users understand your interface.

Not for: Measuring overall satisfaction (use a survey), predicting market adoption.

Sample size: 5-8 participants per user group. Testing with 5 people typically surfaces about 85% of the usability problems in a given task flow.

Why it works: Watching someone struggle to find the checkout button is humbling and immediately actionable. You don’t need statistical significance to know that a button is invisible.

The trap: Testing only the happy path. Users will wander. Your test should let them. Design tasks that require real decisions, not linear instructions.

Surveys and Questionnaires

What it is: Structured questions sent to a large audience to measure attitudes, behaviors, or satisfaction at scale.

Best for: Measuring customer satisfaction (CSAT, NPS), quantifying the prevalence of known issues, segmenting users by behavior or attitude.

Not for: Discovery (you don’t know what you don’t know), understanding why people behave a certain way.

Sample size: Depends on required confidence. For a population of 10,000, a sample of 370 gives 95% confidence with 5% margin of error.
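
For readers who want to check that figure, here is a minimal sketch of the usual sample-size arithmetic (Cochran’s formula with a finite population correction). The z = 1.96 and p = 0.5 values are the standard defaults for 95% confidence and maximum variance, assumed for the example rather than taken from any particular survey tool.

  import math

  def survey_sample_size(population, margin=0.05, z=1.96, p=0.5):
      # Cochran's formula for an unrestricted population, using the
      # conservative maximum-variance assumption p = 0.5.
      n0 = (z ** 2) * p * (1 - p) / margin ** 2
      # Finite population correction: a smaller population needs a
      # slightly smaller sample for the same margin of error.
      return math.ceil(n0 / (1 + (n0 - 1) / population))

  print(survey_sample_size(10_000))  # -> 370, the figure quoted above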

Why it works: Surveys provide the “how many” that qualitative research cannot. Interviews tell you that users are frustrated. Surveys tell you that 62% of users are frustrated.

The trap: Long surveys. Bad questions. Leading language. Every additional question reduces completion rate. Test your survey before sending it.

Field Studies / Contextual Inquiry

What it is: Observing users in their natural environment (home, office, factory) as they go about their real activities.

Best for: Understanding complex workflows, identifying workarounds, discovering needs users don’t articulate because they’ve become invisible.

Not for: Testing specific design solutions, quick feedback.

Sample size: 5-15 participants across relevant contexts.

Why it works: People adapt their environment to their needs in ways they never think to mention. Watching someone use a sticky note system to patch a software gap reveals a feature opportunity. Asking them about it would not.

The trap: Confirming what you already believe. Go into the field to be surprised, not validated.

A/B Testing

What it is: Showing two (or more) variants of a design to different users and measuring which performs better on a specific metric.

Best for: Optimizing existing designs, choosing between specific alternatives, validating hypotheses about what drives behavior.

Not for: Understanding why one variant won (use qualitative research afterward), exploring broad directional changes.

Sample size: Depends on expected effect size and baseline conversion rate. Small changes require enormous samples.
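
To illustrate why small changes require enormous samples, here is a rough sketch of the standard two-proportion sample-size approximation. The 5% significance level and 80% power are conventional defaults assumed for the example, not figures from this article.

  import math
  from statistics import NormalDist

  def ab_sample_size_per_variant(baseline, lift, alpha=0.05, power=0.80):
      # Approximate users needed per variant to detect an absolute `lift`
      # over a `baseline` conversion rate with a two-sided test.
      z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
      z_power = NormalDist().inv_cdf(power)
      p_mid = baseline + lift / 2
      variance = 2 * p_mid * (1 - p_mid)
      return math.ceil((z_alpha + z_power) ** 2 * variance / lift ** 2)

  # Detecting a 1-point lift on a 5% baseline takes roughly 19x the
  # traffic of detecting a 5-point lift.
  print(ab_sample_size_per_variant(0.05, 0.01))  # ~8,200 users per variant
  print(ab_sample_size_per_variant(0.05, 0.05))  # ~440 users per variant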

Why it works: A/B testing removes opinion from decision-making. “I think the green button will convert better” becomes “The green button converted 3.2% better with 95% confidence.”

The trap: Testing too many variables at once. Test one thing at a time. Run sequential experiments. Let each answer inform the next question.

Card Sorting

What it is: Users organize content topics into groups that make sense to them, revealing mental models for information architecture.

Best for: Designing navigation structures, labeling categories, understanding how users expect content to be grouped.

Not for: Testing visual design, evaluating interaction patterns.

Sample size: 15-30 participants for quantitative analysis, 5-10 for qualitative exploration.

Why it works: Your internal logic about where things belong is not your users’ logic. Card sorting reveals the gap.

The trap: Giving users too many items. Stick to 30-50 cards. Any more, and fatigue compromises results.

Diary Studies

What it is: Participants record their experiences, behaviors, or thoughts over an extended period (days to weeks) using journals, photos, or app logs.

Best for: Understanding longitudinal behaviors, capturing in-the-moment reactions, studying habits and routines.

Not for: Quick feedback, evaluating specific interface details.

Sample size: 15-30 participants, accounting for drop-off over time.

Why it works: Memory is unreliable. A diary study captures what people actually do and feel in the moment, not what they remember doing or think they would feel.

The trap: Requiring too much effort. Make recording easy. Remind participants regularly. Compensate fairly for their time.

The Research Roadmap: When to Use What

Discovery Phase (before building anything)

  • User interviews to understand problems
  • Field studies to observe real workflows
  • Diary studies for longitudinal behaviors
  • Card sorting for initial information architecture

Design Phase (while building)

  • Usability testing on prototypes (low to high fidelity)
  • A/B testing for specific design decisions
  • Surveys to validate assumptions with larger samples

Launch and Beyond (after shipping)

  • Usability testing on live product
  • Surveys for satisfaction metrics (CSAT, NPS)
  • A/B testing for ongoing optimization
  • Analytics review (not covered in the method library above, but essential)

The 5-User Myth (And Truth)

You’ve heard that testing with 5 users finds 85% of usability problems. This is true for identifying major friction points in a single task flow. It is not true for:

  • Understanding diverse user groups (test 5 from each group)
  • Measuring task completion rates (you need statistical significance)
  • Finding edge cases (you need more participants or different methods)

Use 5 users for formative testing, identifying what’s broken. Use larger samples for summative testing, measuring how broken it is.
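
The 85% figure comes from the classic problem-discovery curve, 1 - (1 - p)^n, usually quoted with an average per-user detection rate of about 31%. The sketch below simply evaluates that curve under those assumptions; it is not a measurement from any particular study of your product.

  def problems_found(n_users, detection_rate=0.31):
      # Share of usability problems expected to surface after n_users
      # sessions, assuming each user independently reveals the same
      # average fraction of the problems.
      return 1 - (1 - detection_rate) ** n_users

  for n in (1, 3, 5, 10, 15):
      print(f"{n} users -> {problems_found(n):.0%}")
  # 5 users -> ~84%, which only holds for a single task flow and a
  # homogeneous user group, per the caveats above.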

The Bottom Line

The best research method is the one that answers your specific question. Start by writing down what you need to learn. Then choose the method. Never start with a method and look for a question it answers.

And remember: research without action is theater. Every study should end with a clear set of decisions or changes. If you’re not ready to act on what you learn, don’t run the study.
