User Research Mythbusters
Common user research problems & how to solve them
👋 Hey, Nikki here! Welcome to this month’s ✨ free edition ✨ of the UXR for Product People Newsletter. Each week, I write actionable tips, tricks, and techniques about conducting effective and efficient user research for non-researchers.
Hello Curious PwDR!
If you’re not a professional researcher but have to conduct user research as part of your job, it’s easy to feel out of your depth. You’re not alone—it can be overwhelming. Whether you’re a product manager, designer, or developer, user research is crucial for making smart product decisions, but it’s not always easy to pull off.
User research doesn’t have to be a nightmare. It can be clear, straightforward, and actionable.
In this guide, I’ll break down the most common user research headaches and show you exactly how to avoid them. Let’s make user research a powerful tool, not a roadblock.
1. “Where do I start?” — Lacking a clear research goal
The biggest mistake I’ve seen from both researchers and people who do research (PwDR)? Diving into research without a specific goal. This leads to random, unhelpful insights and wasted time. If you don’t know what decision you’re trying to make, the research will feel scattered.
What to do instead:
Get laser-focused on one clear research goal. Before you even think about writing questions or recruiting participants, clarify the decision you need to make. Research should always be tied to an outcome, not just collecting information for the sake of it.
Research goals flesh out your research statement: they are the more in-depth areas you want to explore that will help you answer what you’re trying to learn. Each goal should address both what you want to learn and how you’ll study the research statement.
These goals are the things you want to be able to gather information about by the end of the study. They aren’t posed as questions, but you want to be able to “answer” them in the sense of having enough data on each one to feel comfortable making decisions. Below are some models you can use for creating research goals.
Common generative research goals:
Discover people’s current processes/decision-making about [research subject], and how they feel about the overall experience
Learn about people’s current pain points, frustrations, and barriers about [current process/current tools] and how they would improve it
Understand what [research subject] means to people (how they define it) and why it is important to them
Common evaluative research goals:
Evaluate how people are using a [product/website/app/service]
Evaluate how people are currently interacting with a [product/website/app/service]
Uncover the current tools people are using to [achieve goal], and their experience with those tools. Uncover how they would improve those tools
These definitely aren’t all the goals you could have, but these can give you a structure and jumping-off point for writing your research goals. If you’re having a hard time coming up with goals, you can ask yourself these questions:
What do we want to learn about [research topic]?
What type of experiences do we want to learn about?
What information do we want at the end of the study?
What decisions are we trying to make by the end of the study and what can help us make those decisions more confidently?
I recommend, for each study, having no more than three goals. I’ve found that going over three goals increases the scope and makes it hard to get in-depth information on each goal.
Imagine you’re planning to redesign the user dashboard of your product. Your key goal might be, “Evaluate the biggest pain points of the current dashboard,” or, more specifically, “Find out which sections of the current dashboard are most difficult to use.”
Keep it simple. Don’t try to answer too many questions in one go—stick to one or two key questions at most.
2. “I have no idea who to talk to” — Struggling to recruit participants
Finding the right users can feel like a challenge. Many non-researchers fall into the trap of interviewing people who are too close to the product (like colleagues) or gathering participants who don’t fit the ideal user profile.
Don’t overthink recruitment. You don’t need a massive pool of participants to get valuable insights. What matters is talking to the right people—those who reflect your actual users.
Recruitment steps:
Define your ideal participant. Think about who your target user is by asking yourself a few different questions:
What questions do your users need to answer to give you meaningful information?
What gaps in knowledge do you have that you need your participants to fill in?
What behaviors do you need to understand more?
What habits are you trying to target?
What are the goals the user is trying to accomplish?
Example: If you’re testing a mobile app aimed at fitness beginners, your target participant might be someone who has signed up for the app in the last month and completed 3-5 workouts.
Start with your existing network. If you have users already, reach out to them first. You don’t need to look far—use a quick survey or email to see who’s willing to participate.
Example: For a new dashboard feature, email your most active users, asking them to help improve the product by participating in a 30-minute interview.
Use recruitment tools. Platforms like User Interviews or Respondent.io let you quickly find participants who fit your profile. These tools also allow you to target users outside your immediate network who still meet your criteria.
Start small. Initially, for evaluative research, aim for 6-8 participants. This is enough to spot recurring issues without getting overwhelmed.
3. “I’m drowning in data” — Overwhelmed by too many insights
You’ve done the research—interviews, surveys, user tests—and now you have piles of data. But sorting through it feels impossible. You’re stuck wondering, “Where do I even begin?”
The key is to look for patterns, not to analyze every single comment. Focus on identifying trends that come up repeatedly.
Steps:
Go back to your key question. When reviewing your data, filter responses based on the main question you were trying to answer.
Example: If your key question was, “What’s the hardest part of our onboarding process?” comb through your interview notes looking specifically for mentions of confusion or frustration during onboarding.
Identify recurring themes. Create a list of the most common issues that users brought up. If five people mention the same pain point, that’s a strong signal.
Example: While reviewing feedback, you notice that 6 out of 10 users mentioned the onboarding form is too long. That’s a pattern worth addressing.
Ignore the outliers (for now). One or two random comments might not represent the majority. Focus on the trends that show up in multiple interviews.
Make a short bullet list of 3-5 major insights or pain points based on user feedback. This will help you stay organized and avoid data paralysis.
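If you’re comfortable with a little scripting, the pattern-finding step can be as simple as tagging each participant’s feedback with theme labels and counting. Here’s a minimal Python sketch; the participants and theme tags are invented for illustration:

```python
from collections import Counter

# Hypothetical coded interview notes: each participant's feedback has been
# tagged with one or more theme labels during review.
tagged_feedback = {
    "P1": ["onboarding form too long", "search hard to find"],
    "P2": ["onboarding form too long"],
    "P3": ["confusing navigation", "onboarding form too long"],
    "P4": ["search hard to find", "onboarding form too long"],
    "P5": ["confusing navigation"],
    "P6": ["onboarding form too long"],
}

# Count how many participants mentioned each theme (one vote per participant,
# hence set() to deduplicate within a single person's notes).
theme_counts = Counter(
    theme for themes in tagged_feedback.values() for theme in set(themes)
)

# Surface only the top few themes to avoid data paralysis.
top_themes = theme_counts.most_common(3)
for theme, count in top_themes:
    print(f"{count}/{len(tagged_feedback)} participants: {theme}")
```

The output gives you exactly the short bullet list of major pain points described above, ranked by how many people raised each one.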
4. “This is taking forever” — Research feels too time-consuming
User research is often viewed as a time-sink. And let’s face it, you’re juggling a million other tasks. You don’t have weeks to spend conducting user interviews or analyzing surveys.
I promise you that user research doesn’t have to take forever. By adopting a lean, continuous approach, you can integrate research into your existing workflows.
Steps:
Start with small studies. Focus on quick wins. Instead of a full-blown user study, run a usability test or pick one major question to answer in 7-10 interviews. I call this the MVR — minimum viable research. We still need to make it viable, but let’s make it doable as well.
Example: Test one feature with 5 users this week instead of waiting to test the whole product with 10 users.
Use micro-surveys. To gather ongoing feedback, add 1-2 simple questions at key points in the user journey. Consider using one of these quick and easy user research surveys.
Example: After a user completes a task in your app, ask, “How easy or difficult was this process for you?” on a scale of 1-7.
Incorporate research into existing processes. Add research questions to existing processes like QA testing or user feedback sessions. This allows you to gather data without taking extra time out of your day.
Example: If you’re running user acceptance tests before launching a new feature, tack on a few user research questions like “How easy or difficult was this feature to use?” or “What was the most confusing or frustrating part of this task?”
Research is best when it’s ongoing. Don’t try to do everything at once—start small and build momentum.
5. “What if my research is biased?” — Fear of leading questions
One of the trickiest aspects of research is asking questions that don’t bias the results. As a non-researcher, it’s easy to unintentionally lead users toward the answers you want to hear.
Craft neutral, open-ended questions to get unbiased feedback. Avoid framing your questions in a way that leads users to a specific conclusion.
Steps:
Avoid yes/no questions. These often lead to one-word answers and can reinforce your own biases. Instead, ask open-ended questions that encourage users to explain their thought process.
Example: Instead of “Did you find the feature helpful?” ask, “What did you think about the new feature? Can you walk me through your experience using it?”
Don’t validate your assumptions. Avoid questions that seek validation, like “This process is easy, right?” Instead, ask questions that allow users to highlight any problems themselves.
Example: Rather than asking, “Is this faster than the old checkout process?” ask, “How did this checkout process compare to your previous experiences?”
Test your questions. Run your questions by someone neutral—a colleague or friend—before using them in research. Ask them if the questions feel biased or leading.
I highly recommend checking out this article to help you with forming open-ended and unbiased questions.
Always ask users to explain their reasoning. The “why” behind their answers is where the real insights are hidden.
6. “I don’t have enough participants” — Small sample sizes
It’s common to feel like your research isn’t valid if you don’t have enough participants. But you don’t need a huge sample to gain valuable insights.
The great thing about qualitative research (this includes things like qualitative usability testing and interviews) is that we are not focused on statistical significance. We are looking to achieve theoretical saturation. These are hugely different from each other.
Theoretical saturation means we have heard the same key concepts repeatedly in our sample and aren’t really learning anything new. This can naturally be achieved with a smaller sample size.
Steps:
Start with 5 participants. As Nielsen Norman Group points out, you can uncover 85% of usability issues with just five users of the same segment*. You don’t need hundreds of participants to find valuable patterns.
Example: If you’re testing the onboarding flow of a SaaS platform, recruit five new users within the same segment to go through it. Ask them to narrate their thoughts as they complete each step.
Iterate after each round. Conduct your research in small waves. After each round, make improvements and then test again with a new batch of participants.
Example: After five users test a new feature, fix the most common pain points, then run another small test with 3-5 more users to validate the changes.
Mix qualitative and quantitative methods. If you’re concerned about having a small number of interview participants, back up your findings with survey data. Short surveys can give you broader insights without requiring a large time investment.
Example: Conduct nine in-depth user interviews, then send a 3-question survey to 100 users to see if the trends hold across a larger group.
Small, iterative research can be even more powerful than one-time, large-scale studies. This approach allows you to refine and improve continuously.
*Sample sizes are per segment. If you are trying to speak to five people of the general population, this sample size will be too small. Make sure to segment your audience when recruiting.
7. “I can’t make sense of the feedback” — Conflicting user feedback
It happens all the time: one user loves a feature, and another hates it. Conflicting feedback can leave you feeling stuck and unsure of how to move forward.
Prioritize feedback based on frequency and impact. Not all feedback is equally important—focus on recurring pain points that directly affect the success of your product.
Steps:
Look for common themes. If multiple users mention the same issue, that’s a clear signal. Group similar pieces of feedback together and focus on those recurring themes.
Example: If 4 out of 6 users mentioned difficulty finding the navigation bar, that’s something to address. If one user mentions they dislike the color scheme, that might be a lower priority.
Focus on impact. Prioritize issues that have the biggest impact on your core user journey. If a feature is confusing but isn’t central to the user experience, it might not need immediate attention.
Example: If users are frustrated by a feature in the settings menu, but it’s rarely used, deprioritize it in favor of fixing key navigation issues.
Consider user priority. Not all feedback should be weighted equally. Consider whether the feedback is coming from new users, power users, or casual users. Focus on the feedback that aligns with your target audience.
Example: If you’re designing a product for new users, prioritize the feedback from first-time users over that from advanced users who might have different needs.
Pro Tip: It’s okay if not everyone loves your product. Focus on solving the most common and critical issues first rather than trying to please every single user.
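One lightweight way to combine frequency and impact is a simple weighted score. The sketch below is an illustrative Python example, not a formal method; the issues and the 1-3 impact scale are assumptions you’d tune to your own product:

```python
# Hypothetical issue log: how many users raised each issue (frequency)
# and how central it is to the core journey (impact, 1 = peripheral, 3 = core).
issues = [
    {"issue": "navigation bar hard to find", "mentions": 4, "impact": 3},
    {"issue": "dislikes color scheme", "mentions": 1, "impact": 1},
    {"issue": "settings feature confusing", "mentions": 3, "impact": 1},
]

# Simple priority score: frequency x impact. The weighting is an assumption;
# adjust it to match how your team defines "critical".
for item in issues:
    item["priority"] = item["mentions"] * item["impact"]

ranked = sorted(issues, key=lambda i: i["priority"], reverse=True)
for item in ranked:
    print(f'{item["priority"]:>2}  {item["issue"]}')
```

A frequently mentioned, core-journey issue rises to the top, while the lone color-scheme complaint falls to the bottom, mirroring the examples above.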
8. “Analysis paralysis” — Feeling stuck on next steps
You’ve collected a lot of feedback and uncovered some issues, but now what? It’s easy to get overwhelmed by all the data and feel unsure about what to do next.
Break down your findings into an actionable plan. First, focus on quick wins and the most critical pain points.
Steps:
Identify quick wins. Look for small, easy-to-implement changes that can have a big impact, such as simplifying a confusing label or making a button more visible.
Example: Users keep saying they can’t find the search bar. Moving the search bar to the top of the page might be a quick fix that resolves a major frustration.
Tackle the biggest pain points. After addressing quick wins, focus on the top 2-3 issues that are causing the most friction in the user journey.
Example: If users are struggling to complete the onboarding process, break down each step of onboarding and look for ways to simplify it.
Create a roadmap. You don’t need to fix everything at once. Develop a roadmap that breaks down your findings into short-term (quick fixes) and long-term (more complex changes) actions.
Example: Your roadmap might include immediate changes like simplifying copy and long-term updates like redesigning the onboarding flow.
Don’t try to fix everything at once. Focus on the biggest wins and iterate as you gather more feedback.
9. “How do I measure success?” — Metrics for user research
You’ve made changes based on user research, but how do you know if it worked? Measuring the success of your research can be tricky without clear metrics in place.
Before starting your research, set clear success metrics and track these metrics over time to gauge the impact of your changes.
Steps:
Define success from the start. Before you even begin your research, decide what success looks like. Is it increased user satisfaction, reduced time to complete a task, or fewer support tickets? Make it specific.
Example: If you’re redesigning the checkout flow, your success metric might be a reduction in cart abandonment or a faster time-to-completion.
Use standardized usability metrics. Consider using metrics like the System Usability Scale (SUS) or Single Ease Question (SEQ) to track how user satisfaction or ease of use changes over time. Read more about those surveys here.
Example: After implementing changes to a product’s navigation, you could use the SUS to measure if users find the product easier to navigate now compared to before.
Tie metrics to business outcomes. Align your success metrics with overall business goals. For instance, if you’re improving onboarding, your key metrics might be time-to-complete or conversion rates from new users.
Example: After revamping the onboarding process, you might track how long it takes users to complete onboarding and compare it to the pre-research baseline. If it decreases by 30%, that’s a clear sign of success.
Be specific with your metrics. Instead of saying, “We want to improve usability,” say, “We want to reduce task completion time by 30% and increase user satisfaction scores to 80%.”
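If you collect SUS responses before and after a change, scoring them is a fixed formula: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 to a 0-100 score. The sketch below implements that standard calculation in Python; the sample responses are made up for illustration:

```python
def sus_score(responses):
    """Score a System Usability Scale questionnaire (ten 1-5 responses).

    Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the sum is scaled by
    2.5 to produce a 0-100 score.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, ... = items 1, 3, ...
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical before/after responses for the navigation redesign example
# (one participant each, purely for illustration).
before = sus_score([3, 3, 3, 3, 3, 3, 3, 3, 3, 3])  # neutral on every item
after = sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])   # leaning positive
print(before, after)  # 50.0 75.0
```

In practice you’d average the scores across all participants in each round and compare that average against your pre-research baseline.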
Reduce the user research headaches
User research can feel overwhelming if it’s not your primary role, but it doesn’t have to be. By focusing on clear goals, small, iterative steps, and actionable insights, you can make research a powerful tool for improving your product and making smarter decisions.
Here’s a quick summary of the key steps:
Define your goal. Start every research project with a specific question tied to a product decision.
Recruit smart, not hard. Focus on quality participants, even if you only have 5 users.
Look for patterns. Analyze your data for trends, not individual outliers.
Keep your questions simple. Don’t overthink it—ask clear, neutral questions to avoid bias.
Communicate clearly. Present your findings in plain language and tie them to business outcomes.
Set success metrics. Always track the impact of your research with clear metrics that align with product and business goals.
By following these steps, you’ll not only avoid the most common user research headaches but also become more confident in conducting effective research that drives real results.
The User Research Plan Template
I hate starting from scratch, and I don’t want you to have to, either. A user research plan was the biggest thing that helped me solve most of these headaches with my teams. You can find the template to use right here.
Have you had any of these headaches? Which of the solutions are you excited to try out? Let me know how it goes in the comments.👇🏻
📚 Additional frameworks and tactics to explore
Enjoying this? Share with others or refer a friend (and get your subscription comped!) — I always appreciate it so much!
Have a curious week,
Nikki


