Asynchronous usability tests: how to keep participants engaged?

Nam · UX Collective · 7 min read · Jun 27, 2021

UserTesting, a popular platform for both synchronous and asynchronous usability testing.

Ask a qualitative researcher what they dread most about conducting a research engagement, and there is a good chance you will hear: a disengaged participant.

Qualitative research concerns itself with the texture of an experience, be it a daily phenomenon or a digital interface. This form of learning cares deeply about how meanings are constructed and negotiated: at its core, an appreciation of the complexity of interpretations and values that a word, an object, an interaction, or an event might produce. (For a deep dive into what “qualitative” means in qualitative research, here is a great article.)

Such attention demands that research design be flexible, open-ended, and creative in order to access the many colors and registers of one’s experience: from what a participant sees and thinks to what they feel and understand. A mechanical and tedious research engagement can manifest in disengagement: a participant moving through the process with minimal interest and proactiveness, reacting to questions with the ever-dreadful yes/no answers, and offering little to no further articulation.

Asynchronous usability tests and staying true to qualitative concerns:

Most usability tests are qualitative in nature: the learning goal typically centers on understanding usability issues that users might have, the ups and downs, the confusion, and the satisfaction of an experience (see here for a comparison between qualitative and quantitative usability testing). From a research design perspective, usability testing shares similar concerns with other qualitative methods such as interviewing or contextual inquiry in how to design an engaging study.

In synchronous user testing, researchers have more opportunities to intervene and create a facilitative environment: customizing questions to better fit a participant’s context, guiding them through confusing tasks, and supplying an otherwise “boring” session with humor and a human touch.

In designing asynchronous usability tests, this can be trickier. Moving the research engagement outside of the in-person, synchronous modality means that you have fewer opportunities to give support, tinker with the ambiance, and inject the customization work necessary for open-ended qualitative research. However, you do not need a real-time human presence to be facilitative. Below are several ways researchers can design engaging usability tests while genuinely embracing qualitative concerns.

These tactics and ways of thinking can easily be adapted to synchronous usability testing. In this article, however, I will look at the method from the more “deprived” modality of asynchronicity in order to fully understand its limitations.


Firstly, the research engagement starts with the screener:

It is critical to remember that participant engagement begins with the screener. Best practices indicate the crucial need to screen participants by psychographics and behaviors rather than just demographics in order to zoom in on the most valuable group of users. Plenty of existing guides cover the basics of writing an effective screener.

Such valuable behavioral and psychographic data can provide you with lenses to customize your questions and tasks. You won’t be able to customize the script in real time, so make sure to bring in any learning you can gather during the screening process.

Take, for example, a usability test of a second-hand car comparison tool: if the screener captures a behavioral attribute such as the number of options a user considers at a time, with answers hypothetically ranging from 2 (narrow) to 5 (broad), why not design two different flows in your research script and customize the questions or tasks to reflect those “narrow” and “broad” comparison preferences?
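
As a minimal sketch of what this branching could look like if your test script were assembled in code (the field name `comparisonBreadth`, the threshold, and the task wording below are all hypothetical, not tied to any particular testing platform):

```typescript
// Minimal sketch: branch an unmoderated test script on a screener answer.
// `comparisonBreadth` and all task wording are hypothetical examples.

type Task = { prompt: string };

interface ScreenerAnswers {
  comparisonBreadth: number; // options considered at a time: 2 (narrow) to 5 (broad)
}

function buildTaskFlow(answers: ScreenerAnswers): Task[] {
  const shared: Task[] = [
    { prompt: "Find a used car that fits your budget and add it to your comparison list." },
  ];

  if (answers.comparisonBreadth <= 3) {
    // "Narrow" comparers: tasks focus on head-to-head comparison.
    return [
      ...shared,
      { prompt: "Compare your two shortlisted cars side by side, thinking aloud as you go." },
    ];
  }

  // "Broad" comparers: tasks focus on managing a larger consideration set.
  return [
    ...shared,
    { prompt: "Add three more cars, then narrow the five down to two, thinking aloud as you decide." },
  ];
}
```

On most testing platforms you would express the same idea with screener-based logic or two separate test links rather than actual code; the point is that the screener answer, not the researcher in real time, does the customizing.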

Don’t skip the warm-up and context-setting questions:

In asynchronous usability tests, especially those designed to garner quick feedback or impressions of a product, warm-up and context-setting questions are sometimes deemed unimportant and skipped to save time and processing effort.

In my opinion, these questions are even more important in such usability tests. Because users participate in the test at the time and place of their choosing rather than in a lab or on a call, these initial questions allow them to step out of whatever they were doing before and into the context relevant to the tasks at hand. In the case of high-involvement products where one’s past experience can greatly influence present choices, such as education, insurance plans, or mortgages, they are all the more crucial.

For example, instead of conditioning a task simply with “Imagine you are researching schooling options for your children,” design an “on-boarding” set of questions that allows users to conjure up their existing knowledge, thoughts, and feelings:

Could you tell us about yourself, your child(ren), and their educational needs?

Could you tell us about a recent or past experience researching schooling options for your child(ren)? Think about the sources of information you looked at, the people you turned to for ideas, and the struggles you had.

Here is another example, though not specific to usability testing, that illustrates the consequences of failing to “on-board” and induct users into the research process.

A comprehensive and colorful account of experiences requires creative and thoughtful prompts:

There have been extensive guidelines on how to write realistic and actionable usability tasks with a strong emphasis on clarity.

With asynchronous tests, besides making sure your tasks are direct and lucid, you also want to make sure that they are not repetitive and “fleeting.” Repetitive tasks are dull and numbing, especially when participants lack the opportunity to reflect on the tasks they have just completed. Creating a more dynamic sequence of prompts and consciously giving space for deep reflection allows participants to remain “alert” and critical throughout the process.

Spradley (1979) provides a way of looking at interview questions, centered around four different types: descriptive, structural, contrast, and evaluative¹. This way of thinking can easily be incorporated into task-based usability tests, allowing participants to produce a more comprehensive account even on their own.

Descriptive questions: questions designed to evoke a general account of what happens, from biographical information to specific anecdotes and life histories. e.g., How did you go about researching schooling options for your child(ren) in the past?

Structural questions: questions designed to access the organization of one’s knowledge, including the categories and mental models employed during an interaction event. e.g., What types of alternative schooling options, besides public schools, are you aware of?

Contrast questions: questions designed to allow participants to make comparisons between experiences. e.g., Compared to other school websites, what do you like or dislike about this prototype?

Evaluative questions: questions designed to allow participants to reflect on their own thoughts and feelings about a particular experience. e.g., If this were not a test, how relevant would this website be to your actual schooling research process?

There are also other ways to design more captivating tasks, such as the “scavenger hunt” or “skin in the game” tasks.

Alternating the question and task format is another tactical but effective way to bolster participant engagement. Rather than running a continuous set of open-ended tasks, interject with ranking or multiple-choice questions to stimulate a different thought process. Shuffling between long open-ended tasks and concise multiple-choice questions helps vary the tempo of the experience, alleviating the dragging, strenuous feeling that can set in during asynchronous tests. These tactics keep the session fresh and generate a variety of data for your research.
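
If it helps to see the tactic spelled out, here is a small sketch of interleaving formats, again assuming a hypothetical scripted test (the `Item` shapes and the `interleave` helper are illustrative, not a platform API):

```typescript
// Minimal sketch: alternate open-ended tasks with multiple-choice questions
// to vary the tempo of a test. Item shapes and naming are hypothetical.

type Item =
  | { kind: "open-ended"; prompt: string }
  | { kind: "multiple-choice"; prompt: string; options: string[] };

function interleave(openEnded: Item[], multipleChoice: Item[]): Item[] {
  const sequence: Item[] = [];
  const longest = Math.max(openEnded.length, multipleChoice.length);
  for (let i = 0; i < longest; i++) {
    // A long open-ended task followed by a short multiple-choice question.
    if (i < openEnded.length) sequence.push(openEnded[i]);
    if (i < multipleChoice.length) sequence.push(multipleChoice[i]);
  }
  return sequence;
}
```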

Lastly, design thoughtful wayfinding and reciprocal feedback as users progress through the test:

Without the presence of a facilitator, participants can easily feel lost. A sense of guidance and certainty is crucial for participants to access their thoughts and feelings in depth. Hence, it is important to design wayfinding that shows users where they are in the process in order to facilitate a smooth research experience. Here are some tactics, with a small sketch after the list:

  • Provide a question or task number in relation to the total number of prompts, e.g., 4/15.
  • Provide an estimate of the level of effort at the beginning, e.g., “This study will take about 15 minutes. You will be asked to perform a variety of open-ended tasks and answer multiple-choice questions.”
  • Provide feedback midway or after each open-ended task to create a sense of accomplishment and refresh user attention, e.g., “Thank you for your effort so far. You are halfway through the test!”
  • Provide answers after open-ended or difficult tasks. Where users are likely to be confused, letting them know the expected answer (without biasing subsequent tasks) prevents doubt from accumulating along the way and creates a sense of certainty.
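
Here is a small sketch combining the first and third tactics, assuming the test copy is generated programmatically; the wording and the halfway threshold are illustrative assumptions, not from any testing platform:

```typescript
// Minimal sketch: progress indicator with midway feedback. The copy and
// the halfway threshold are hypothetical examples.

function wayfindingCopy(current: number, total: number): string {
  const position = `Question ${current}/${total}`;
  if (current === Math.ceil(total / 2)) {
    // Midway feedback creates a sense of accomplishment and refreshes attention.
    return `${position}. Thank you for your effort so far. You are halfway through the test!`;
  }
  return position;
}

// e.g., wayfindingCopy(4, 15)  -> "Question 4/15"
//       wayfindingCopy(8, 15)  -> "Question 8/15. Thank you for your effort so far. ..."
```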

¹ Spradley, J.P. (1979). The Ethnographic Interview. New York: Holt, Rinehart & Winston.



Writings on research, methods, and the queer hope of knowing others. p-nam.com