3 UXR Surveys to Start Establishing Baselines
Elevate your product’s usability and user satisfaction
👋 Hey, Nikki here! Welcome to this month’s ✨ free edition ✨ of the UXR for Product People Newsletter. Each week, I write actionable tips, tricks, and techniques about conducting effective and efficient user research for non-researchers.
Subscribe to get access to these posts, and every post.
Hello PwDR!
As a product manager, designer, developer, or anyone doing research, you’re responsible for making decisions that can significantly impact the user experience—and, ultimately, the success of your product. User research surveys give you the data to back up those decisions. Instead of guessing what users want or basing choices on assumptions, you’re armed with real insights from your actual users.
Here’s why this matters to you:
Having a clear baseline of usability, ease of use, and satisfaction helps you make smarter product decisions. You’ll know exactly where your product needs improvement and where it’s already performing well.
By regularly using these surveys, you can track the effectiveness of your updates. If you launch a new feature, you’ll be able to see whether it’s actually making users’ lives easier—or creating more frustration.
Hard data from surveys like SUS, SEQ, and satisfaction scores make it much easier to get stakeholder buy-in. When you can show stakeholders that a feature with low usability is dragging down the product’s success, they’ll listen.
Product professionals who consistently bring user insights to the table stand out. Showing that your decisions are based on structured user feedback, not gut instinct, enhances your credibility and positions you as a leader who prioritizes users. This is crucial for long-term career growth and advancement.
By incorporating surveys into your product development process, you’re not just checking a box—you’re actively improving the product, driving user satisfaction, and building a reputation as a data-driven product leader. These tools help you make informed decisions, advocate for users, and prove the value of your work to the business.
Used strategically, these surveys can give you a baseline for usability, ease of use, and overall satisfaction, which you can track and measure over time to improve your product. Below, I’ll walk you through three powerful surveys—when to use them, how to set them up, and how to analyze the results. These surveys can provide valuable insights that can shape your product strategy.
**All survey questions are available at the bottom of this article in a template — scroll down to simply copy and paste**
1. UMUX/UMUX-Lite OR SUS: Measuring usability
When you need a quick, reliable measure of overall product usability, turn to the UMUX (Usability Metric for User Experience), UMUX-Lite, or the SUS (System Usability Scale). These tools are designed to give you a clear, actionable understanding of how easy your product is to use.
When to use them:
After launching a new feature. You want to know if this new feature made your product harder or easier to use.
Post-release of a major product update. If you’ve made significant changes, measuring usability can help you see if you’re moving in the right direction.
Ongoing usability tracking. Establish a baseline now, and then check in periodically to track how usability evolves as you iterate on the product.
Why use UMUX or SUS?
Both UMUX and SUS are standardized, easy-to-use surveys designed to give you a quick snapshot of usability. They provide you with a score that reflects the overall usability of your product, which you can track over time.
UMUX: This focuses on how well your system meets user needs and how easy it is to use. It consists of four questions, making it more modern and succinct than SUS.
UMUX-Lite: The simplified version of UMUX boils usability down to two questions, which is ideal when you need to minimize the time users spend on feedback.
SUS: A classic survey with ten questions, SUS is widely used and provides a usability score between 0 and 100. Anything above 68 is considered “above average.”
How to set up UMUX/UMUX-Lite or SUS:
Frequency:
Run it after major releases (feature rollouts, product redesigns) to assess the immediate impact on usability.
Set a quarterly cadence for ongoing usability tracking to ensure your product is consistently easy to use over time.
Calculating sample size: I recommend using a sample size calculator. Set the confidence level to 95% and the margin of error to 5%. For example, if you have a user base of 10,000, you’d need about 370 responses for a statistically reliable sample.
If you’re short on time or resources, a sample of 30-50 users can help identify usability trends, though it may not be statistically robust.
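If you’d like to sanity-check what the calculator is doing, the usual formula behind it is Cochran’s sample size equation with a finite-population correction. Here’s a minimal Python sketch (the function name and defaults are my own):

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with a finite-population correction.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the
    most conservative assumed response proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)          # correct for finite population
    return math.ceil(n)

print(sample_size(10_000))  # 370, matching the example above
```

The same function reproduces the other figures in this article: 357 for a base of 5,000 and 377 for 20,000.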
Where to run the survey:
In-app pop-ups: Perfect for getting feedback immediately after a user has interacted with a new feature. For example, when a user completes a task or engages with a new feature, trigger the survey within the app.
Email surveys: Use email to target existing users after a release or a specific time period of using the new feature. This works well if you have users who may not log in frequently.
Post-purchase/onboarding: Trigger the survey after the completion of a major action, like onboarding or making a purchase, to assess the ease of use.
Analyze the results:
For UMUX and UMUX-Lite:
Score each question from 1 (Strongly Disagree) to 7 (Strongly Agree).
Convert the responses to a 0–100 score: positively worded items score (rating − 1) and negatively worded items score (7 − rating). Sum the adjusted values, divide by the maximum possible (24 for UMUX, 12 for UMUX-Lite), and multiply by 100. Scores above 70 suggest the product is easy to use, while anything below 70 indicates potential issues.
Track how the score changes over time—after feature launches, redesigns, or product updates.
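As a small Python sketch of that scoring, assuming the item wording used later in this article (where only question 2 is negatively worded; adjust `negative_items` if your variant differs):

```python
def umux_score(responses, negative_items=(2,)):
    """Convert 7-point UMUX responses to a 0-100 score.

    `responses` maps item number -> rating (1-7). Positively worded
    items score (rating - 1); negatively worded items score (7 - rating).
    The adjusted sum is divided by the maximum possible, then scaled to 100.
    """
    adjusted = [
        (7 - r) if item in negative_items else (r - 1)
        for item, r in responses.items()
    ]
    return sum(adjusted) / (6 * len(adjusted)) * 100

# A respondent who mostly agrees the product works well and is not frustrating:
print(umux_score({1: 6, 2: 2, 3: 6, 4: 7}))  # 87.5
```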
For SUS:
Adjust each response: for odd-numbered questions, subtract 1 from the rating; for even-numbered questions, subtract the rating from 5.
Calculate the overall SUS score by summing the ten adjusted values (0–40) and multiplying by 2.5 to get a score out of 100. A score above 68 is “above average”; anything lower is a signal that your users are struggling.
Use the score as a baseline for future product changes.
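That arithmetic is easy to script. A minimal Python sketch, assuming responses arrive in question order:

```python
def sus_score(responses):
    """Compute a 0-100 SUS score from ten ratings (1-5) in question order.

    Odd-numbered items (positive tone) score (rating - 1);
    even-numbered items (negative tone) score (5 - rating).
    The adjusted sum (0-40) is multiplied by 2.5.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    adjusted = [
        (r - 1) if i % 2 == 1 else (5 - r)
        for i, r in enumerate(responses, start=1)
    ]
    return sum(adjusted) * 2.5

print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # 80.0
```

Average the per-respondent scores across your sample to get the product’s SUS score.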
In a B2B context:
Imagine you’re building a project management tool for enterprise clients. You run the UMUX survey after launching a major feature update (e.g., adding a complex reporting tool). If your usability score drops below 70, that’s a red flag: users may be struggling to understand how to generate reports or find the tool overly complicated. Here’s what to do next:
Send follow-up questions to those who rated usability poorly. Ask them which specific parts of the reporting tool were confusing or frustrating.
If this tool is critical to the customer experience, focus your development efforts on simplifying the reporting feature or improving documentation and in-app guidance.
In a B2C context:
Let’s say you manage a consumer mobile app that helps users book fitness classes. After redesigning the booking process, you run the SUS survey. Your score comes back as 62—below the 68 average. This means users are finding it harder to book classes than before. What to do next:
Look at patterns in the feedback. You might find that users are frustrated with the calendar view or that the payment process is too confusing. Use this data to refine specific steps in the booking journey.
Track changes over time. Implement small tweaks (e.g., clearer navigation or simpler payment options) and rerun the SUS survey after each iteration to see if the score improves.
2. SEQ (Single Ease Question): Measuring task-level ease of use
The Single Ease Question (SEQ) is an incredibly simple yet powerful tool for measuring how easy or difficult a specific task is for users. Unlike the broad focus of UMUX and SUS, SEQ zooms in on individual tasks and helps you understand whether users can complete those tasks with ease.
When to use SEQ:
After asking a user to perform a specific task (like setting up an account or finding a product in your app), SEQ lets you immediately gauge how hard or easy it was for them.
Anytime you want to measure the ease of completing a key user journey, SEQ helps you pinpoint the friction points.
Use SEQ before and after making changes to a specific part of your product to measure the impact.
Why use the SEQ?
It’s just one question: “How easy or difficult was it to complete this task?” Users answer on a scale from 1 to 7, where 1 is “Very Difficult” and 7 is “Very Easy.” This simplicity makes SEQ ideal when you want fast, targeted feedback on specific parts of your product.
SEQ is particularly useful for identifying frustrating user journeys. For example, if you recently redesigned your onboarding flow, SEQ can give you instant insights into whether the change improved or worsened the experience.
How to implement the SEQ:
After a user completes a task during usability testing or post-launch, ask: “How easy or difficult was it to complete this task?”
Give users a 7-point scale to rate their experience:
1 = Very Difficult
7 = Very Easy
This can be done on a moderated or unmoderated basis.
Frequency:
Run SEQ before you launch a new feature to establish a baseline, and again post-launch to measure improvement.
Run SEQ immediately after users complete tasks in usability tests to capture their immediate reactions.
For high-impact tasks (e.g., checkout in e-commerce), run SEQ regularly (e.g., monthly or quarterly) to monitor the user experience.
Calculate sample size: SEQ is generally used for specific tasks, so aim to collect responses from 50-100 users per task. For moderated usability tests, you can start with a smaller sample of 5-15 users; research suggests that even 5 users can surface around 85% of usability issues. For larger samples, use a confidence level of 95% and a margin of error of 5%. For example, if your B2C app has 5,000 users, a sample of 357 responses would be ideal.
Where to run SEQ:
In-app pop-up: Trigger the SEQ survey immediately after users complete a task (e.g., checkout, onboarding, form submission).
Usability testing follow-up: If you’re conducting in-person or remote usability testing, ask participants to rate how easy or difficult a task was as soon as they complete it.
Email survey: For B2B products, you can also follow up with a short email survey after a key task, such as creating a report or setting up a new integration.
How to analyze SEQ results:
SEQ results tell you whether specific tasks are easy or difficult for users to complete. If your users consistently give low SEQ scores, you’ve identified a problem that needs to be addressed. Here’s how to interpret and act on the results:
Aggregate the responses for each task. Calculate the average SEQ score on a scale of 1 (Very Difficult) to 7 (Very Easy). A score of 6 or 7 means the task is easy for most users, while anything below 4 is a clear sign that users are struggling.
Look for trends. Are certain tasks consistently rated lower than others? Low scores suggest that these tasks are causing frustration or confusion.
Ideate on solutions for the lower-rated tasks. If necessary, do extra research to dig into why people are having problems with the given tasks.
Compare the SEQ scores for each task over time. If you make changes to improve a process (e.g., redesigning the checkout flow), see if the SEQ score for that task improves after the update.
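Here’s a small Python sketch of that aggregation: averaging per task and flagging the strugglers. The 5.0 cutoff is my assumption (published SEQ averages tend to sit around 5.5, so scores well below that deserve a closer look), and the task names are illustrative:

```python
from statistics import mean

def seq_summary(scores_by_task, threshold=5.0):
    """Average SEQ score (1-7) per task; flag tasks below `threshold`."""
    summary = {task: round(mean(scores), 2) for task, scores in scores_by_task.items()}
    flagged = [task for task, avg in summary.items() if avg < threshold]
    return summary, flagged

scores = {
    "find restaurant": [6, 7, 6, 5, 7],
    "customize order": [3, 4, 2, 4, 3],
}
summary, flagged = seq_summary(scores)
print(summary)   # average score per task
print(flagged)   # tasks that need attention
```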
In a B2B context:
Imagine you’re testing the ease of using a data analysis dashboard for corporate clients. You ask users to complete a critical task—generating a sales performance report—and the average SEQ score comes back at 3.4. That’s not great; it indicates users are finding it challenging to complete this task.
What to do next:
Look for the friction points. Are users having trouble locating the report function, or are they confused by too many options? Conduct additional research (like interviews or screen recordings) to identify the exact pain points.
Sometimes, simplifying labels or reordering steps in the process can significantly improve task completion. After implementing changes, run the SEQ survey again to see if the score improves.
In a B2C context:
You manage a food delivery app, and you ask users to rate the ease of placing an order with SEQ. The score comes back at 5.5—not bad, but not perfect either. While users can place orders, something about the process is making it more difficult than it should be.
What to do next:
Break down the ordering process. Use SEQ to drill into specific steps—how easy is it to find restaurants? How easy is it to add items to the cart? Maybe users are happy with restaurant search but struggle when customizing orders.
If you find users are frustrated by customizations, simplify that part of the experience. Make sure to run SEQ again after each iteration to measure the impact.
3. Product- and Task-Level Satisfaction: Getting the Bigger Picture
Once you’ve tackled usability and ease of specific tasks, it’s time to assess the broader satisfaction with both your product overall and the individual tasks users must complete. This is where Product- and Task-Level Satisfaction Surveys come in. These surveys help you identify what users dislike about your product and how satisfied they are with key features or flows.
When to use Product- and Task-Level satisfaction surveys:
If you’ve just launched a new feature or redesign, this survey helps you understand how satisfied users are with the changes.
Knowing which aspects of your product users are least satisfied with allows you to focus your efforts where they’ll have the most impact.
Before making significant changes, use satisfaction surveys to establish a baseline for user contentment with your product.
Why use Product- and Task-Level satisfaction surveys?
Understanding satisfaction goes beyond usability and ease of use. It helps you gauge how users feel about your product—whether they’re happy with the value it provides and whether they’re likely to stick around long-term.
Product- and task-level satisfaction surveys give you a big-picture view of how users feel about your product overall and how they experience specific features or tasks. Use this data to prioritize improvements and guide your development strategy.
Set up Product- and Task-Level satisfaction surveys:
Create a Product Satisfaction Survey to measure how users feel about your product as a whole. Ask the following question:
“How satisfied or dissatisfied are you with the overall experience of using [product]?”
Include both “satisfied” and “dissatisfied” in the question; the balanced wording makes it less leading and reduces bias.
Set up a Task-Level Satisfaction Survey to understand how users feel about specific tasks or features. Ask one of the following questions:
“How satisfied or dissatisfied are you with the process of [task]?”
“How satisfied or dissatisfied are you with the ease of navigating [feature]?”
“How satisfied or dissatisfied are you with the speed of [task completion]?”
For both product- and task-level satisfaction, you can use a 5-point satisfaction scale, where 1 = Very Dissatisfied, and 5 = Very Satisfied.
Frequency:
Immediately following the release of a major product feature or update.
Run satisfaction surveys every 3-6 months to keep a pulse on how users feel about the product overall.
Establish a baseline before making changes and measure again after the launch to see if satisfaction improves.
Calculate sample size: For product-level satisfaction, you want a large enough sample to accurately reflect your user base:
Use a sample size calculator with a confidence level of 95% and a margin of error of 5%. For a user base of 20,000, a sample size of 377 users will give you statistically reliable results.
For smaller, task-level surveys, aim for at least 50-100 responses, but even 30 responses can provide helpful insights, especially in smaller, focused B2B contexts.
Where to run the survey:
Use a satisfaction survey after users complete a key action (e.g., after making a purchase, submitting a form, or using a new feature).
Send periodic satisfaction surveys through email to gather more thoughtful responses. This is often ideal for B2B clients who may need time to reflect on their product experience.
In a B2C context, run satisfaction surveys after purchases or interactions to gauge how users feel about the process.
Analyze the results:
Calculate the average satisfaction score for each question (e.g., “How satisfied are you with [task]?”). Use a 5-point scale where 1 = Very Dissatisfied and 5 = Very Satisfied.
Identify problem areas. Look for features or tasks with the lowest satisfaction scores. These are the areas where users are the least happy, making them ripe for improvement. For example, if the average satisfaction score for your onboarding process is 2.8 out of 5, it’s clear that improvements are needed.
Track changes over time by running satisfaction surveys regularly—before and after major updates, feature releases, or user journey improvements. Use this data to establish baselines and see how well your product changes resonate with users. If a feature revamp bumps the satisfaction score from 3.2 to 4.5, you know you’re on the right track.
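A minimal Python sketch of that analysis: average each question and sort ascending so the least-satisfying areas surface first (question names and data are illustrative):

```python
from statistics import mean

def satisfaction_report(scores_by_question):
    """Average 5-point satisfaction per question, lowest first,
    so the areas most in need of improvement appear at the top."""
    averages = {q: round(mean(s), 2) for q, s in scores_by_question.items()}
    return sorted(averages.items(), key=lambda kv: kv[1])

report = satisfaction_report({
    "overall experience": [4, 5, 4, 3, 4],
    "onboarding": [3, 2, 3, 3, 3],
    "checkout speed": [4, 4, 5, 4, 4],
})
for question, avg in report:
    print(f"{question}: {avg}")  # onboarding surfaces first at 2.8
```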
In a B2B context:
You’re managing a CRM tool for businesses, and you run a product-level satisfaction survey after launching a new customer segmentation feature. The satisfaction score for that feature comes back as 2.7 out of 5—well below what you’d hoped. What to do next:
Prioritize fixes. If segmentation is a key feature, this low satisfaction score should prompt immediate action. Investigate further with open-ended follow-up questions like “What specifically didn’t meet your expectations?” to pinpoint the exact issues.
After you’ve made improvements (e.g., simplifying the segmentation workflow or adding better filters), re-run the satisfaction survey. If the score jumps to 4.0, you know you’re on the right track.
In a B2C context:
Imagine you run a streaming service, and you’ve added a new feature that lets users create custom playlists. You ask users how satisfied they are with the playlist creation process, and the average score comes in at 4.3 out of 5—good, but not perfect. What to do next:
Look for opportunities to refine. Even though the score is relatively high, there’s room for improvement. Follow up with users to ask what would make the playlist feature even better. Maybe they want more recommendations or better playlist sharing options.
Use the positive feedback to inform your next set of features or improvements. If playlist creation is a hit, consider adding related features, like collaborative playlists or better curation tools.
Surveys to start building your baselines
With just three simple surveys, you can start establishing baselines that help guide product decisions. Here’s a quick recap of when to use each survey and how to take action:
UMUX/UMUX-Lite or SUS
When to use it: After feature launches, product updates, or when tracking usability over time.
What to do next: Set up your survey with the standard UMUX or SUS questions. Calculate the overall score and track how it changes over time to monitor improvements or regressions in usability.
SEQ
When to use it: During usability testing or after users complete specific tasks (e.g., onboarding, checkout).
What to do next: After users complete a task, ask them how easy or difficult it was using SEQ’s single question. Use the average score to identify friction points and improve task flows.
Product- and Task-Level Satisfaction Surveys
When to use it: After major product or feature launches, or as part of an ongoing feedback loop to assess user satisfaction with specific tasks.
What to do next: Analyze average satisfaction scores to find areas where users are least happy. Use this data to prioritize product improvements and track how satisfaction changes after adjustments are made.
Bonus: survey template to get you started
Here’s a simple template you can use for each survey. Just plug them into your favorite survey tool, and you’re ready to go.
UMUX, UMUX-Lite, or SUS (pick one of the three)
Subject: We Value Your Feedback!
Body:
Hi [User’s Name],
We’re always striving to make [Product Name] better and more user-friendly. To do that, we need your input! This short survey will help us understand how well [Product Name] meets your needs and where we can improve.
It will only take about 2 minutes, and your responses will directly shape the future of our product.
Thank you for your time!
Instructions:
Please read each statement carefully and choose a rating that reflects your experience. For each statement, rate how much you agree or disagree on a scale from 1 to 7 (or 1 to 5 for SUS).
UMUX Survey Questions:
This system’s capabilities meet my requirements.
1 = Strongly Disagree, 7 = Strongly Agree
Using this system is a frustrating experience.
1 = Strongly Disagree, 7 = Strongly Agree
This system is easy to use.
1 = Strongly Disagree, 7 = Strongly Agree
I am able to complete my tasks quickly with this system.
1 = Strongly Disagree, 7 = Strongly Agree
UMUX-Lite Survey Questions:
This system’s capabilities meet my requirements.
1 = Strongly Disagree, 7 = Strongly Agree
This system is easy to use.
1 = Strongly Disagree, 7 = Strongly Agree
SUS Survey Questions:
I think that I would like to use this system frequently.
1 = Strongly Disagree, 5 = Strongly Agree
I found the system unnecessarily complex.
1 = Strongly Disagree, 5 = Strongly Agree
I thought the system was easy to use.
1 = Strongly Disagree, 5 = Strongly Agree
I would need the support of a technical person to use this system.
1 = Strongly Disagree, 5 = Strongly Agree
I found the various functions in this system were well integrated.
1 = Strongly Disagree, 5 = Strongly Agree
I thought there was too much inconsistency in this system.
1 = Strongly Disagree, 5 = Strongly Agree
I would imagine that most people would learn to use this system very quickly.
1 = Strongly Disagree, 5 = Strongly Agree
I found the system cumbersome to use.
1 = Strongly Disagree, 5 = Strongly Agree
I felt confident using the system.
1 = Strongly Disagree, 5 = Strongly Agree
I needed to learn a lot of things before I could get going with this system.
1 = Strongly Disagree, 5 = Strongly Agree
Closing Message:
Thank you for completing this survey! Your feedback is incredibly valuable and will help us improve [Product Name] for you and others. Stay tuned for updates, and feel free to reach out with any further suggestions!
SEQ Template:
Subject: Quick 1-Question Survey
Body:
Hi [User’s Name],
We hope you had a smooth experience with [Product Name]! We’re always working to improve, and we’d love to know how easy or difficult it was for you to complete your recent task.
It’s just one question and should take less than 30 seconds. Your response will help us ensure that everything runs smoothly.
Thanks for your time!
Instructions:
Please rate how easy or difficult it was for you to complete the following task on a scale of 1 (Very Difficult) to 7 (Very Easy).
SEQ Survey Question:
How easy or difficult was it to complete the task of [e.g., setting up your account, completing your purchase, using the new feature]?
1 = Very Difficult, 7 = Very Easy
Closing Message:
Thanks for letting us know! Your input is crucial to ensuring we continue to improve the experience with [Product Name]. If there’s anything else you’d like to share, feel free to hit reply to this email.
Product- and Task-Level Satisfaction Survey Template
Subject: We’d Love Your Feedback on [Feature/Product Name]
Body:
Hi [User’s Name],
We’ve been working hard to enhance [Product Name], and we’d love to hear your thoughts on how satisfied you are with your experience. This survey will take about 2-3 minutes and will help us prioritize future improvements.
Your feedback will go directly to our product team, so you can rest assured that your voice is being heard.
Instructions:
Please answer the following questions by selecting a rating from 1 to 5, where 1 means “Very Dissatisfied” and 5 means “Very Satisfied.”
Product Satisfaction Questions:
How satisfied or dissatisfied are you with the overall experience of using [Product Name]?
1 = Very Dissatisfied, 5 = Very Satisfied
Task-Level Satisfaction Questions:
How satisfied or dissatisfied are you with the ease of [Task Name] (e.g., onboarding, checkout, setting up your account)?
1 = Very Dissatisfied, 5 = Very Satisfied
How satisfied or dissatisfied are you with the speed of completing [Task Name]?
1 = Very Dissatisfied, 5 = Very Satisfied
How satisfied or dissatisfied are you with the overall process of using [Feature Name]?
1 = Very Dissatisfied, 5 = Very Satisfied
Closing Message:
Thank you so much for taking the time to provide your feedback! We’re committed to improving [Product Name] to better meet your needs, and your input helps us get there. Keep an eye out for future updates—we think you’ll love what’s coming!
Additional Notes:
Always start your survey with a personalized introduction. Mention how long the survey will take (this helps users commit) and emphasize that their feedback will shape the future of the product. You can also incentivize participation if possible (e.g., “Complete the survey for a chance to win a $50 gift card!”).
Be clear about how to rate each question (e.g., “Please rate on a scale of 1 to 5”) and provide brief, focused descriptions of each question to reduce confusion.
Show gratitude, and if possible, let users know how their feedback will impact the product (e.g., “We’ll use this feedback to prioritize new features”). Keeping the loop closed improves user satisfaction and increases survey completion rates in the future.
Implement this today
With the UMUX or SUS, SEQ, and satisfaction surveys, you’ll have a powerful toolkit that provides you with real insights into your product’s usability, task ease, and overall user satisfaction.
The key is consistency. Start by setting baselines, and then use these surveys regularly to track changes and guide your decisions. Whether you’re refining a new feature or overhauling a key user journey, these simple tools will keep you focused on what truly matters: creating a product that users love.
Now, grab these templates, set up your first surveys, and start collecting the feedback that will make your product better tomorrow than it is today.
Let me know how it goes in the comments.
📚 Additional frameworks and tactics to explore
Surveys that work ← go beyond these frameworks
How to Write (Better) Survey Questions: Systems for Collecting Clean and Meaningful Data
Enjoying this? Share with others or refer a friend (and get your subscription comped!) — I always appreciate it so much!
Have a curious week,
Nikki