Get psyched up for research methods: Survey Design

In the upcoming posts, I will be covering a series of UXR methods with some additional psychological research embedded to really give you a 360-degree look at our most common tools. We’re kicking off with survey design.


(You may be thinking that user personas should be covered first. I’ve got good news for you! You can read about that in an earlier article, with an emphasis on avoiding unconscious bias. Hope you find it practical and useful.)

User research is what we reach for when trying to understand how people interact with a product or service. It helps us analyse their goals, behaviours, challenges, preferences, and needs. Based on the feedback and insights we gather, processes can then be optimised to create an ideal experience and design solution.

Screens showing survey creation and graphs of data, with man ticking off tasks in front of them.
Image by vectorjuice on Freepik

At its core, UX research exists to expose problems and conjure a potion to remedy the ills that befall products built without it. Its purpose is essentially to provide background context for design choices, and to ensure the product is really targeting the audience it is crafted for (rather than those who are building it).

Survey design involves much more than just coming up with a list of questions to ask. UXRs need to carefully consider the tone of the survey, the order of the questions, and the way they are worded.

According to the State of User Research 2022 report by User Interviews, 49% of the 562 user researchers they asked said they use surveys in almost every study.

Chances are you are quite familiar with this research method yourself, but there might still be some useful ground to cover that may have escaped or faded from your attention.

A breakdown of common survey tool usage. SurveyMonkey: 38%. Qualtrics: 31%. Typeform: 18%. Optimal Workshop: 10%. UserZoom: 7%.
Most frequently used survey tools, as observed in the State of User Research 2022 report by User Interviews

Survey or questionnaire?

I bet this is not the question that keeps you up at night, but I figured I should be thorough. The American Journal of Pharmaceutical Education defined the difference in a 2008 article as follows:

“For clarity, the term survey should be reserved to describe the research method whereas a questionnaire or survey instrument is the data collection tool. In other words, the terms survey and questionnaire should not be used interchangeably.”

Will you get a lifetime ban from UX if you make this mistake? Doubtful. Still, it doesn’t hurt to cover all bases.

A survey consists of a set of individual questions, the delivery of the questionnaire, and an analysis of responses. A questionnaire is merely a set of individual questions.
Image by SurveyMonkey

When should you use surveys in UX research?

The hard truth is that surveys alone won’t give you a detailed enough glimpse into your audience’s opinions. Survey results are not always accurate, so they should be used in conjunction with other forms of user research. Effectively, you should examine the same underlying questions via multiple methods and triangulate the responses into a useful conclusion.

Nonetheless, surveys are a low-cost way to collect data from varying sample sizes, so they are a great entry point for conducting research. At the very least, you can use them to explore the area you are examining to help refine what you should be focusing on more closely.

Before we get started on practical tips, allow me to introduce some possible psychological pitfalls you might encounter as you survey this research territory. (Get it? Survey the landscape? Nevermind.)

Woman pointing at a score rating scale.
Image by vectorjuice on Freepik

The Hawthorne Effect

This is the social equivalent of the “placebo effect.” The Hawthorne Effect is when study participants change their behaviour to suit what they think is expected of them. This is especially apparent if they know they are being observed.

It is also rather debated in academia, so I’ll leave it up to you whether to go along with it or not. Gustav Wickström and Tom Bendix argued that it was a mere cop-out to avoid a deeper investigation into confounding variables. (That is to say, unaccounted-for factors that can change the results of an experiment or research.)

A 2014 systematic review concluded that it does exist, though it is still surrounded by mystery. Maybe it’s more of a Bermuda Triangle scenario, after all. Or simply a research whack-a-mole.

Either way, it would seem fair that at least in the case of unsupervised surveys, users would feel more comfortable sharing their honest feedback. Be aware that they might edit their responses to fit social norms, though, so you might have to examine these contextual influences, as well.

Your next problem: attitude construction and representation

When asking about one’s attitude towards a given subject, your respondents have two options. They can go down memory lane and retrieve a previously formed attitude and experience to answer the question. Or, they can form an opinion on the spot.

We usually do the latter, as Norbert Schwarz confirmed in a 2007 article highlighting this piece of social psychology in action. His observation was that survey respondents were likelier to focus on their current, situational impressions, even when researchers hoped to analyse past feelings (in their respective studies).

It’s up to you whether you a) don’t care about past and present differences, b) want to check in with more immediate attitudes, or c) focus on past mindsets. This mental pattern is difficult to erase or evade, but it’s still helpful to keep in mind when forming questions or evaluating your results.

Man and woman gathered around document with list items.
Image by vectorjuice on Freepik

Treat the survey experience as part of UX

When undertaking this fair challenge, it is important to keep the user experience in mind. The goal is to collect accurate data while also ensuring that respondents can answer the questions easily and without confusion.

You will want to avoid creating pain points and any challenges to ensure a smooth and efficient flow through your survey. To achieve this, there are a few things to consider:

1) Avoid double-barreled questions.

This is to say, don’t mesh two questions into one. Your data will be muddied, and your user confused as to which part they should be focusing on.

For example, “do you do UX research because you enjoy it and because you think it’s important?” will be challenging for the respondent. What if they don’t enjoy it, but still think it is important?

This question only allows one answer, so they will have to puzzle over this and give you limited or “broken” feedback.
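If you maintain a larger question bank, you can even automate a first pass. Below is a minimal sketch in Python; the conjunction rule and the sample questions are my own illustrative assumptions, and flagging “and”/“or” will throw up false positives, so treat it as a prompt for human review rather than a verdict.

```python
import re

def looks_double_barreled(question: str) -> bool:
    """Rough heuristic: flag questions that join two asks with a conjunction.

    "and"/"or" also appear in perfectly fine questions, so this only
    nominates candidates for human review.
    """
    return bool(re.search(r"\b(and|or)\b", question, flags=re.IGNORECASE))

question_bank = [
    "Do you do UX research because you enjoy it and because you think it's important?",
    "Do you think UX research is important?",
]

for q in question_bank:
    if looks_double_barreled(q):
        print("Review (possible double-barreled question):", q)
```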

2) Use language that is neutral and free of bias.

I’ve covered biases here previously, but the short version is this: avoid leading or loaded questions that nudge the user towards a specific response.

Here is one quick example: “Why do you enjoy user research?”

I don’t even know if you enjoy user research. You could totally hate it, so with the above phrasing, I am basically forcing a fake-happy answer out of you.

3) Your questions should be easy to understand.

Jargon and technical terms should also be removed, as they just pose a hurdle in the interpretation of the question. No one wants to pull out a dictionary to figure out what you mean.

If you would like to go further down this rabbit-hole, Asking Questions: The Definitive Guide to Questionnaire Design could prove a good reference for learning to dodge common mistakes in survey construction. The book wasn’t published yesterday, but it’s still relevant.

4) Be sure to test the survey/questionnaire with a small group of users before deploying it more broadly.

This will help you identify any potential issues and make sure that the questions are clear enough.

If there are any corrections to be made, this gives you the opportunity to remedy errors before you end up with bigger problems.
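One handy trick during piloting: look at which questions people skip, since high skip rates often signal confusing wording. Here is a minimal sketch with invented pilot data and an arbitrary threshold of my own.

```python
from collections import Counter

# Invented pilot data: None marks a skipped question.
pilot_answers = [
    {"q1": "Yes", "q2": "No",  "q3": None},
    {"q1": "No",  "q2": None,  "q3": None},
    {"q1": "Yes", "q2": "Yes", "q3": "Maybe"},
]

skips = Counter(q for resp in pilot_answers for q, a in resp.items() if a is None)
n = len(pilot_answers)
for question, count in skips.most_common():
    if count / n >= 0.5:  # arbitrary threshold: half the pilot skipped it
        print(f"{question}: skipped by {count}/{n} respondents; consider rewording")
```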

5) Choose the right type of question for the desired data.

More on this up next, with examples to boot. I even present you with the most fun (and possibly useless) survey of your life. It’s the freebie you didn’t ask for.

Man pointing at question mark.
Image by vectorjuice on Freepik

Practical tips for designing survey questions

Not all feedback is created equal, but you can help the user in this regard. As mentioned above, by taking UX into account when designing survey questions, researchers can create more effective surveys that provide valuable insights.

With that in mind, here are some common types of questions you can utilise in your survey.

1) Open-ended questions allow respondents to answer in their own words. This can provide constructive details but can also be more time-consuming to analyse.

They are best employed when you want to explore a topic in depth and learn more about your respondents’ thoughts, feelings, and opinions. They can be used to gather qualitative data that can be applied to improve your product or service.

  • Example: “What is your favourite UX research tool?”
“What is your favourite UX research tool?” with freetext answer box.
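Since open-ended answers take longer to analyse, a quick word count can help you spot recurring themes before a proper qualitative read. A minimal sketch, with invented answers and a stop-word list of my own:

```python
import re
from collections import Counter

# Invented answers to "What is your favourite UX research tool?"
answers = [
    "I mostly use Typeform because it feels friendly",
    "Qualtrics, since it handles complex logic well",
    "Typeform for quick polls, Qualtrics for big studies",
]

stop_words = {"i", "it", "the", "for", "because", "since", "use", "mostly", "well"}
words = [
    w
    for answer in answers
    for w in re.findall(r"[a-z]+", answer.lower())
    if w not in stop_words
]
print(Counter(words).most_common(5))  # recurring tools/themes bubble up
```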

2) Close-ended questions provide respondents with a limited number of options to choose from, making them quick and easy to answer.

Close-ended questions are best suited for quantitative data. They can be used to measure things like attitudes, preferences, and behaviours.

  • Example: “Are you interested in UX research?”
Question: “Are you interested in UX research?” Answer options: Yes or no.

3) Multiple choice questions. These offer a list of options for respondents to choose from.

Multiple choice questions are fitting for measuring quantitative data. They are often used to measure levels of agreement or satisfaction, and can be convenient for comparing different options.

  • Example:

“What is your opinion on UX research?”

a) I love UX research and it’s my favourite part of the design process.

b) UX research is important, but I don’t enjoy doing it.

c) I hate UX research and think it’s a waste of time.

“What is your opinion on UX research?” with above mentioned response options.
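Tallying close-ended or multiple choice answers is as simple as it sounds. A minimal sketch, with invented responses keyed to the options above:

```python
from collections import Counter

# Invented responses keyed to options a/b/c from the question above.
responses = ["a", "b", "a", "c", "b", "a"]

counts = Counter(responses)
total = len(responses)
for option, count in counts.most_common():
    print(f"{option}: {count} ({count / total:.0%})")
```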

4) Likert-scale questions. These ask respondents to rate their level of agreement with a statement on a scale from (for example) 1 to 5.

Likert-scale questions come in handy when measuring opinions or attitudes on a given topic. For example, you might use a Likert-scale question to assess how satisfied respondents are with a new feature.

  • Example:

“On a scale from 1 to 5, how much do you agree that UX research is important to the design process?”

“On a scale from 1 to 5, how much do you agree that UX research is important to the design process?” with answers ranging from 1 to 5 where 1 is “Strongly Disagree” and 5 is “Strongly Agree.”
Disclaimer: Usually all numbers would be labelled, e.g. Strongly Disagree > Disagree > Neutral > Agree > Strongly Agree
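One analysis note: Likert responses are ordinal, so the median and the overall distribution are safer summaries than a plain mean. A minimal sketch with invented ratings:

```python
from collections import Counter
from statistics import median

# Invented ratings: 1 = Strongly Disagree ... 5 = Strongly Agree
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]

print("distribution:", sorted(Counter(ratings).items()))
print("median:", median(ratings))
print("agree (4 or 5):", f"{sum(r >= 4 for r in ratings) / len(ratings):.0%}")
```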

5) Ranking and matrix questions.

Ranking questions are best used when you want to know the order of preference for a given list of options. They are typically effective for gauging how important different factors are to respondents.

Matrix questions present a series of statements that users can rate on a scale from (for example) “disagree” to “agree.” You can customise these as you wish, just keep the lineup clear and logical.

“User surveys in UX research are…” with completion statements being: Delightful, The bane of my life, A necessary evil. Response options: Disagree, Neutral, Agree.
This is a matrix question. Ideally, you’ll do better than this! I believe in you.
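For ranking questions, a common first summary is each option’s average rank position across respondents (lower means preferred). A minimal sketch with invented rankings:

```python
# Each inner list is one respondent's order of preference (invented).
rankings = [
    ["price", "speed", "support"],
    ["speed", "price", "support"],
    ["price", "support", "speed"],
]

options = rankings[0]
avg_rank = {
    opt: sum(ranking.index(opt) + 1 for ranking in rankings) / len(rankings)
    for opt in options
}
for opt, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{opt}: average rank {rank:.2f}")
```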

Ultimately, it is important to carefully consider the goals of the survey and the problem you are trying to solve before choosing the type of question best suited for the task.

Hopefully, the above breakdown helped clarify the basics and arm you with some bonus understanding or anxiety about the elusive psychological powers that be. Stay tuned for the next instalments, where I’ll rock your world in equal measures.

One final note: GDPR is alive and well

This is just your quick reminder to adhere to GDPR (the General Data Protection Regulation), which came into effect almost exactly four years ago, confirming our shaky gut feeling that time is indeed flying.

As this very useful article by User Interviews mentions, GDPR covers everyone on European soil, so don’t forget to get consent from your research participants when processing their data.

If you keep surveys anonymous and do not process personal data (including email addresses), then you can disregard GDPR — but stay aware of your responsibilities.

Thanks for reading! ❤️

If you liked this post, follow me on Medium for more!

References & Credits:

  • Survey vs questionnaire: What’s the difference? by SurveyMonkey
  • Bradburn, N., Sudman, S., & Wansink, B. (2004). Asking questions (2nd ed.). San Francisco: Jossey Bass.
  • Conrey, F. R., & Smith, E. R. (2007). Attitude representation: Attitudes as patterns in a distributed, connectionist representational system. Social Cognition, 25(5), 718–735.
  • Draugalis, J. R., Coons, S. J., & Plaza, C. M. (2008). Best practices for survey research reports: a synopsis for authors and reviewers. American journal of pharmaceutical education, 72(1).
  • Gehlbach, H., & Brinkworth, M. E. (2011). Measure twice, cut down error: A process for enhancing the validity of survey scales. Review of general psychology, 15(4), 380–387.
  • McCambridge, J., Witton, J., & Elbourne, D. R. (2014). Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. Journal of clinical epidemiology, 67(3), 267–277. https://doi.org/10.1016/j.jclinepi.2013.08.015
  • Schmidt, W. C. (1997). World-Wide Web survey research: Benefits, potential problems, and solutions. Behavior research methods, instruments, & computers, 29(2), 274–279.
  • Schwarz, N. (2007). Cognitive aspects of survey methodology. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition, 21(2), 277–287.
  • Wickström, G., & Bendix, T. (2000). The “Hawthorne effect” — what did the original Hawthorne studies actually show? Scandinavian Journal of Work, Environment & Health, 26(4), 363–367. http://www.jstor.org/stable/40967074
  • Images by vectorjuice on Freepik
