User Survey Guide: If You’re Not Conducting Surveys, You’re Losing Conversions

July 26, 2017

Research is the foundation of the conversion optimization process. If we don’t know the facts behind what is happening on the site — and why — we can’t accurately pinpoint what to fix and how to fix it, no matter how much we test.

As we’ve talked about previously, research can be roughly divided into four main categories. Here we’ll focus on two qualitative ones: user testing and user surveys.

User testing is a type of qualitative research that gives us a window into our website visitors’ motivations and thinking processes. They share their impressions in their own words and through their onsite behavior, both of which are incredibly valuable sources of insight.

User surveys are different. Instead of allowing users to guide the conversation, surveys ask very specific questions. Answer options range from simple Yes/No, to “On a scale of 1 to 10,” to open-ended (a format that allows the user to give a free-form response to a particular question defined by the website owner or optimizer).

User surveys can go very wrong if you don’t structure your questions — or your expectations — properly. So let’s start with the basics.

What *is* a user survey?

When you run a user survey, you’ll pose direct questions to your visitors with the goal of discovering their perceptions of specific issues. For conversion optimization, you can also use user surveys to identify the most common sources of anxiety and friction.

To get the most useful results, pose open-ended questions, like “What do you look for when you shop for bath products?” This type of question allows users to voice concerns or opinions freely, and they may bring up points you hadn’t even considered.

How to structure user survey questions to get actionable results

How you structure your user survey depends largely on your goal. What do you want to know?

You can implement customer surveys, on-site surveys, exit surveys, in-app surveys, pop-up surveys, or email surveys. There are many different channels and times to offer a survey — so your first task is choosing the type of survey that appears at the right time, to the right audience, for the questions you want to ask.

Keep in mind that the timing of your survey will affect the types of responses you get. For example, if you use an exit survey that pops up as someone is abandoning their cart before purchasing, you’ll be sourcing answers from a very different consumer segment than if the survey popped up after a successfully completed purchase.

Before deciding what questions to ask, first consider what you hope to achieve with your survey. Whatever you hope to achieve, the goal is always to obtain actionable insights.

With that in mind, consider these questions:

Do you have any problems with the site? Yes/No

Say you pose this question and offer two possible answers. After the survey runs, you find that 63% of people answered “Yes.” But what does that tell you? 63% of visitors do have some sort of problem, but what is it? You’ll have no clue.

The question above can be used only as an introduction to the next question, which will appear to people who answer with a “Yes”. It will probably be something like, “What problems are you experiencing with the site?”

Another not-so-useful survey question:

How would you rate this website?

Neither of these questions will yield actionable insights, because they’re not specific enough. Your users’ answers might tell you that something is wrong, but give you no place to begin finding out what, or how to fix it.

To improve your website, you need specific answers that identify problems. So don’t ask users to give you a grade. Ask them something like:

What nearly stopped you from buying from this site?

This is a great question to ask in a post-purchase survey. It invites an open-ended response to a specific question, and it will yield answers that point you toward sources of friction and anxiety in the purchase process.

Good user survey design uncovers friction and anxiety

Every question you ask should directly relate to uncovering usability issues, existing friction, and sources of anxiety.

This way, you focus the survey on concepts that visitors can easily vocalize and express. However, don’t expect visitors to provide you with solutions — generally, all they can do is point out problematic spots in your website or sales funnel.

You can also use surveys to enhance your UX and improve conversions in another way: by finding out more about your audience.

When your aim is to learn about your prospects, rather than uncover what’s wrong with your website, you have to be even more careful with your questions. The questions you ask should be clear, short, and open-ended.

You must be very careful not to impose your point of view on your survey participants. In fact, you may want to ask each question twice, in two different forms, to make the results more reliable.

To find out about your audience, you can ask things like:

  • “What can you tell us about yourself (your age, gender, and any other information you feel comfortable giving)?”


  • “What is the specific problem that our product solves for you?”

Both of these questions will likely result in answers that tell you more about your target audience and help you establish personas.

Naturally, your questions will differ depending on whom you’re surveying: people who already bought your product (serve them a post-purchase survey), people who opted out of the conversion funnel (serve them an exit survey), or visitors who are just browsing your product catalog (serve them a traffic survey, asking how they heard of your site, what they expected to find, etc.).

As with survey timing, asking prospects and customers for feedback at different stages can give you multiple useful views of your website.

But if you have to choose just one type of user survey to run, you’ll probably get the most useful information from surveying visitors who did not buy or who dropped out of the conversion funnel.

How to time user surveys effectively

When surveying your customers or prospects, timing is critical. You must ask your questions as soon as possible so the experience is still fresh in their mind, and the feedback they give is closer to reality. The responses should enable you to minimize any friction that either almost stopped customers from buying, or successfully stopped them from buying.

Keep in mind that you should restrict your questions to the experience of shopping on the website. You are not conducting a customer satisfaction survey, so do not ask questions about the product itself — that comes after a customer has bought. Try to figure out what obstacles or friction the customer has overcome to buy the product.

As a rule, you want to serve your surveys in the least disruptive way possible, which is why so many surveys are sent by email. Email allows a user to choose the best time to respond.

But email surveys also run a high risk of being ignored, which is why we recommend including both a deadline and an incentive for completion. (When you use an incentive, you do run the risk of people answering your survey in a perfunctory manner just to get the incentive. Therefore, structure this type of survey with care so that you get the best possible answers while requiring as little of the user’s time as possible.)

You can also use exit user surveys to find out why individual potential customers did not complete the conversion process. The survey should pop up when the customer displays intent to exit the page (often this is triggered by mouse velocity or location). Exit-intent questions should aim to identify the reason why the person is leaving the site.
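The exit-intent trigger itself usually boils down to watching the pointer: is it near the top of the viewport (heading toward the browser chrome) and moving upward quickly? A minimal sketch of that decision, as a pure function over pointer samples; the `top_zone` and `min_speed` thresholds are illustrative assumptions, not values from any particular survey tool:

```python
def shows_exit_intent(samples, top_zone=50, min_speed=0.5):
    """Decide whether successive pointer samples suggest the visitor
    is about to leave the page.

    samples   -- list of (x, y, t) tuples: pixel position plus time in ms,
                 with y == 0 at the top of the viewport
    top_zone  -- pixel band at the top of the page, near the browser chrome
    min_speed -- minimum upward speed in px/ms to count as deliberate
    """
    if len(samples) < 2:
        return False
    (_, y1, t1), (_, y2, t2) = samples[-2], samples[-1]
    if t2 <= t1:
        return False
    upward_speed = (y1 - y2) / (t2 - t1)  # positive when moving toward the top
    return y2 <= top_zone and upward_speed >= min_speed

# Pointer jumping from y=400 to y=20 within 300 ms: likely exiting
print(shows_exit_intent([(500, 400, 0), (480, 20, 300)]))
```

In a real implementation this logic runs in the browser on `mousemove`/pointer events; the sketch just isolates the decision rule so the thresholds are easy to reason about.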

A typical exit user survey

A few exit user survey questions we recommend:

  • What information would you need about the product to purchase it?
  • What concerns did you have about the product that prevented you from purchasing?
  • What most influenced your decision not to buy?

Sometimes the answers will identify business decisions as the source of the problem, such as product cost, shipping cost, or similar. In these cases, if the number of similar answers is significant enough, you may want to reconsider the offending policy.

Another option for survey timing is to let surveys pop up for visitors as they browse your website. Use these sparingly, as they can create UX issues if they’re annoying and disruptive! Make it easy to opt out of or close the survey to limit the negative effect the interruption may have on your visitors.

In pop-up surveys, frame questions so that they help you identify user experience problems. These surveys may be triggered to fire when the user has spent some time on the website, checking out the products but neither leaving nor converting. Ask if they’re finding the information they need, for starters.

For any survey, the rule is to keep them short. Anything over 10 questions, and you’ll see a drastic drop in the quality of answers and the number of completed surveys. People just won’t stick with you for that long.

How to conduct a user survey & encourage responses

Once you have prepared your questions, you must decide how to incentivize visitors to actually answer them.

The most common incentive is to offer a discount or give access to gated, premium content. In any case, make sure you attract enough respondents.

User survey - statistical significance

Don’t run your survey for very long. Each survey needs at least 200 to 300 responses to be statistically significant. Once you have more than 200 responses, you can safely stop the survey, as it’s unlikely that more responses will result in more insights or shed light on more issues.
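To get an intuition for why a few hundred responses is usually enough, you can look at the margin of error for a surveyed proportion, which shrinks with the square root of the sample size. This back-of-the-envelope check is our addition, not from the significance table above:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a surveyed proportion.

    n -- number of responses
    p -- observed proportion (0.5 is the worst case)
    z -- z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 200, 300, 1000):
    # print the margin in percentage points
    print(n, round(margin_of_error(n) * 100, 1))
```

At 200 responses the worst-case margin is already under 7 percentage points, and going from 300 to 1,000 responses only buys you a couple of points more precision, which is why stopping early rarely costs you real insight.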

Limiting survey duration also increases relevance, cuts costs, and reduces negative effects on user experience.

Once the responses are in, the hard part of the survey process starts.

How to analyze and interpret user survey responses

After your survey, you’ll have a massive spreadsheet of text responses. The unenviable task you then face is to go through these and isolate useful insights.

That’s a tall order when you have more than a thousand responses (assuming at least 200 survey takers answering 5-6 questions each).

The best response-filtering method to find the most important insights faster is to isolate certain keywords that represent the most-mentioned concerns or concepts.

Isolating keywords will allow you to create categories of issues and sort responses accordingly. For example, you may have categories like shipping cost, trust, price, ease of use, and so on. By tallying the number of responses that belong in each category, you can rate the severity of each issue.
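The keyword-and-tally approach above can be sketched in a few lines. The category names come from the examples in this section, but the keyword lists and sample responses here are made-up assumptions you’d replace with terms from your own responses:

```python
from collections import Counter

# Hypothetical keyword lists per issue category; tune these for your survey
CATEGORIES = {
    "shipping cost": ["shipping", "delivery fee"],
    "trust": ["scam", "secure", "trust"],
    "price": ["expensive", "price", "cost"],
    "ease of use": ["confusing", "hard to find", "navigation"],
}

def categorize(responses):
    """Tally how many free-text responses mention each issue category."""
    tally = Counter()
    for text in responses:
        lowered = text.lower()
        for category, keywords in CATEGORIES.items():
            # Count each response at most once per category
            if any(kw in lowered for kw in keywords):
                tally[category] += 1
    return tally

responses = [
    "Shipping was way too expensive for such a small item",
    "The checkout felt confusing and I wasn't sure it was secure",
    "Price is higher than your competitors",
]
for category, count in categorize(responses).most_common():
    print(category, count)
```

Note that one response can land in several categories (the first sample mentions both shipping and price), which is usually what you want: the tally measures how often each issue comes up, not how many responses you have.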

Once you have this tally, try to make brief summaries of the issues, using the exact words of the respondents wherever possible. That way, you’ll have framed the issue, and you’ll be ready to prepare hypotheses for experiments.

Don’t skimp on survey analysis! Use however much time it takes to analyze the answers thoroughly and comprehensively. Try having a team divide and conquer the survey by question, analyze results individually, and then crosscheck the results.

Methods like word clouds or cluster analysis offer an efficient, effective way of structuring survey results so you can quickly spot the most serious issues.

User survey - An example of Gaussian distribution used in cluster analysis of survey results. The red word is the issue most frequently mentioned by responders.
User survey - Word clouds are another method of discovering patterns and commonalities across large amounts of data. The largest words represent the most severe issues mentioned in the most responses.

Avoid these common user survey mistakes

One of the most common mistakes is posing the wrong questions. Make sure your survey questions don’t fall into the following traps.

  • Too abstract/vague: If you ask abstract questions that do not tell you anything about your visitors, you’ll invalidate your results. Answers are only as useful as they are specific — and they have to point out real issues on the website.
  • Closed-ended questions: Yes/No questions, or even “On a scale of 1 to 10” questions, can overlook problems that the survey creator didn’t think of but that customers notice. Always give survey respondents a chance to voice their opinions in their own words.
  • “Leading the witness”: Mentioning a possible problem in a survey question can “lead” the customer and bias their answer.

Another common blunder is targeting the wrong audience.

For example, if you conduct an exit survey, make sure you leave out people who have completed their purchase. Ideally, you should target the people who have visited a product page and checked the price, or added a product to the cart but did not complete a purchase.

Or, if you are conducting a customer survey, target only the people who have actually completed the purchase.

Every user survey tool allows for audience targeting. Do it!

The analysis stage of surveying is also fraught with potential for misunderstanding and miscategorization. Survey responses should be analyzed comprehensively — because if you miss a potential issue, you are wasting the effort that went into making and running the survey.

You may also be tempted to pay attention to vocal outliers, when in fact solving the issues mentioned most frequently can do more to improve your conversions. Knowing the exact count of respondents who mention each issue is vital. Not collecting enough answers is an easily avoidable (yet all too common) mistake.

Always remember the significance table included above, and make sure you have enough responses for statistical significance. If the margin of error is too large, you can’t trust the results. For small samples, even a 10% margin of error may be too much.

And finally, making your survey too long is a mistake we see too often. It’s the easiest mistake to make, and the easiest one to avoid. Our rule of thumb: if an onsite survey has more than three questions, it’s too long. Email surveys should have no more than 10 questions.

Surveys longer than 10 questions often fall prey to respondent fatigue: people tire and start giving nonsensical answers, answers too short to be meaningful, or middle-of-the-scale answers picked without thought (the “error of central tendency”).

Keep your user surveys short and engaging. If you need to have more than 6-10 questions, make separate, shorter surveys, rather than creating one long one.

Getting to know your customers benefits you in the long run

User surveys are a great method to get to know your visitors and customers. You can learn about their motivations, thoughts, and perceptions, and access insights that quantitative research just can’t give you, AKA the “why?” behind the “what”.

Knowing why your visitors do the things they do will help you create faster, easier, more enjoyable ways for them to do it. Also, the “why” can help you improve your website and marketing copy to boost conversions, and give your users a more personalized, enjoyable experience. In the long term, you’ll establish a genuine bond with your customers.

We hope this short guide will make user surveys seem more approachable and useful. Conducting them properly will provide you with a treasure trove of insights — and we mean that literally. These insights will make you money.


Published by

Edin is a Senior CRO Consultant who is into Google Analytics and testing (any A/B testing tool, really) and likes to write about it. You can follow Edin on Twitter.