Why pretests are crucial in UX research

And what I have learned from them

Daniela Goldgruber
UX Collective

--

In any kind of user or UX research, there is one step that my own professional experience has shown to be pivotal, and yet it is often skipped entirely: the pretest.

A female researcher sitting in front of user data
Illustration by Erica Fasoli

Pretests are exactly what they sound like: tests before the tests. Whether you’re doing user research via a survey or an in-depth interview, or you’re conducting usability tests or A/B tests to find out more about the experience, all of these can be considered tests. You’re testing whether your assumption about something was right or wrong, you’re testing how your product is perceived, you’re testing if it works. But in order to do this successfully, without any hiccups along the way (which would have a negative impact on the data you collect), I recommend conducting a test of the test, a.k.a. a pretest, first.

To many stakeholders, pretests seem like overengineering. Maybe even to you. You’ve already spent tons of energy just trying to get them to understand that products and services are designed for people, not for the companies creating them. You’ve noticed that stakeholders run the other way as soon as they hear the words ‘research’ or ‘strategy’, assuming it’s all just numbers that get you nowhere and a waste of resources. You’ve finally got them to agree that digging into the users and finding out more about their personalities, behaviour, expectations and frustrations is important for the product (hence the company, hence them). And you might be doing all this on top of everything else on your to-do list. So why overcomplicate it with a pretest of, say, a survey?

Here’s why

Pretests make sure that the time you have spent setting up your user research is not wasted. They make sure that what you intend to find out, you actually will. Just because you’ve written a question a certain way does not mean others will understand it the same way. No matter how much time is spent on the design of a survey, without testing it first the survey, and with it the entire research project, is set up to fail. Brad Nunnally and David Farkas, in their book UX Research: Practical Techniques for Designing Better Products (2016), describe pretests as a dry run or dress rehearsal of research. They help ensure that participants are likely to understand the questions and answer options, that friction in the survey experience is minimised, and hence that both the participation rate and the quality of the survey data improve. A pretest uncovers whether or not the survey works in the way the researcher intended.

In designing a questionnaire, many decisions and assumptions are made, some conscious, others not. A pretest highlights the assumptions you did not plan to make and would like to avoid. As the researcher, you need to look for problems and even anticipate them. The most common and serious problems in survey questions relate to comprehension. Understanding the cognitive process participants go through when answering questions helps you write better questions and hence collect better, more relevant data. That’s why at least one draft of the questionnaire needs to be tested before you reach the point where all the friction seems to be worked out. After all, the participants’ feedback largely determines the changes to, and hence the success of, not only the survey but the entire research project. And, while this may seem like a stretch, I have seen it ultimately determine the success of a product.

Here’s how

While there is no rule determining how many pretests to conduct or with how many participants, I’d suggest going with a minimum of 3–5 participants per pretest, if possible. There is also no right or wrong way to do this. You can opt for a think-aloud test: show the pretest participants the questions and answer options of the survey, and ask them to share whatever comes to their mind. This way, you’ll know whether a question is understood, and therefore not skipped or answered incorrectly. You’ll know whether the question you intended to ask is also perceived as such, and therefore whether the answers you’ll get are relevant. What good is any research if it doesn’t give you what you need to find out? Now, that’s a waste of resources. Understanding that you can control that outcome means using those resources wisely.

A person answering a survey question
Image from Shutterstock

Here’s one example

In the several pretests I have conducted, there were always some crucial changes to be made to the research draft (e.g. the survey). One pretest with 7 testers showed that 71% of the participants did not like open questions (except at the end of a survey) and that 57% generally skip them. Now assume that the thing you really need to find out is phrased as an open question, so that you can get as much insight from the participants as possible: potentially more than half of the survey participants will not provide you with any answer at all, simply because of how the question is written. Everything leaves an impact. While it is important when working as, say, a copywriter, UX writer, content designer or content strategist not to get lost in the details and to find a middle ground, in this first part of the job the details are essential.
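A side note on reading numbers like these: with only 7 testers, each individual moves the result by roughly 14 percentage points, so pretest percentages are indicative rather than statistically robust. A minimal sketch of where the figures above plausibly come from (the raw counts of 5 and 4 out of 7 are my inference from the rounded percentages, not something stated in the data):

```python
def pct(count: int, total: int) -> int:
    """Share of testers, rounded to a whole-number percentage."""
    return round(100 * count / total)

testers = 7
disliked_open_questions = 5  # inferred count behind the reported 71%
would_skip_them = 4          # inferred count behind the reported 57%

print(pct(disliked_open_questions, testers))  # 71
print(pct(would_skip_them, testers))          # 57
print(pct(1, testers))                        # 14: the swing one tester causes
```

The last line is the point: one participant changing their mind shifts the headline number by 14 points, which is why pretest results should guide design decisions rather than be quoted as precise statistics.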

Going back to the pretest result in the example, it quickly became obvious which changes to make to the survey, which at the time still had open questions spread throughout at the client’s request. So I created a second draft, which was also pretested (because the resources were available). Closed response options could easily be found that, while more limiting than open questions, were just as likely, if not more likely, to elicit the expected answers.

After most open questions were removed, the participants of the second pretest, a different set of individuals, were asked for their opinion. 100% agreed that they do not like open questions (except at the end), and 40% of the testers said they would skip them. This result broadly mirrors the one from the first pretest and justified removing open questions from the beginning and the middle of the survey for the client. Moreover, 100% found explanations within questions, such as examples in brackets (added after further feedback during the first pretest), very helpful; without them, they believed, they would not have understood these questions and would have picked any answer option just to move on.

Two individuals that are in a face-to-face conversation.
Image from Shutterstock

Now, these were some small, specific examples of the importance of pretests. Ultimately, though, I think this is the most valid argument:

We don’t know what people want, think, observe, understand, struggle with and do. In product design, content design and writing, we are taught never to make assumptions. So we need to ask the users. But we cannot ask them assuming they understand the questions exactly the way we do. It’s all a collaboration: products and services are for people, and people help us design them in the way that’s best for them. But we also need their help to find that out.
