The secrets and pitfalls of designing persuasive technologies

When crafting compelling experiences, the line between well-intended nudges and shadier forms of manipulation may not always be clear. Perhaps a framework for designing with ethical persuasion in mind, along with a few cautionary tales of dark patterns, can help.

Dora Cee
UX Collective

--

Understanding human behaviour comes with the power to influence users’ decision-making and underlying psychology. Still, there is a vast difference between persuasive design and (planned) malicious deception.

As we dance between ethical practices and dark patterns, checking in with ourselves from time to time can highlight unintended directions, and help us correct any wayward trajectories.

Business team brainstorming and analysing ideas.
Image by vectorjuice on Freepik

(Engineered against) behaving yourself

Technology companies usually design products around invisible (yet weighty) elements such as motivation, ability, and triggers. Derived from social scientist B.J. Fogg’s model, these cornerstones of behaviour change are often misused as anchors to keep users hooked in an infinite loop of clicking, scrolling, and interacting.

  • For example, motivation can be a desire for social connection or for staying informed.
  • Users also need the ability to perform tasks and act the way the app was designed to be used.
  • Finally, triggers are the prompts and nudges that steer behaviour, such as push notifications regularly piquing users’ interest to drive them back to the product (a rough sketch of how these three combine follows below).
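As promised, here is a minimal sketch of how these three ingredients interact in Fogg’s model. The 0-to-1 scales, the multiplicative scoring and the threshold are illustrative assumptions of mine rather than part of Fogg’s formalism; they simply capture the idea that a behaviour only fires when motivation, ability and a trigger line up.

```python
def behaviour_occurs(motivation: float, ability: float, trigger_present: bool,
                     action_threshold: float = 0.5) -> bool:
    """Toy version of Fogg's behaviour model: a behaviour only happens when a
    trigger arrives while motivation and ability are jointly high enough.
    The 0-1 scales and the 0.5 threshold are illustrative assumptions."""
    if not trigger_present:
        return False  # no prompt, no action, however motivated the user is
    return motivation * ability >= action_threshold

# A motivated user who finds the task easy acts as soon as the notification lands.
print(behaviour_occurs(motivation=0.9, ability=0.8, trigger_present=True))   # True
# The same notification fails when the task feels too hard.
print(behaviour_occurs(motivation=0.9, ability=0.2, trigger_present=True))   # False
```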

Features employing these tactics are devised to capture users’ attention and keep them engaged, but how ethical some of these practices are is another question. Even if you occasionally manage to protect yourself from the less noble schemes through sheer willpower and self-awareness, our brains simply cannot keep up with the growing capabilities of technology.

Between 1956 and 2015, processing power increased over a trillion-fold, and as Moore’s Law continues to hold, it’s no surprise our human physiology ends up lagging behind. This also means that persuasive tech algorithms keep learning more about us, and when savvy experts pair these insights with creative design ideas, user behaviour often gets hijacked.
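As a rough sanity check on that figure, a quick back-of-the-envelope calculation (assuming a constant doubling period, which is a simplification) shows what a trillion-fold increase over that window implies:

```python
import math

years = 2015 - 1956    # the window cited above
growth = 1e12          # "over 1 trillion times"

doublings = math.log2(growth)        # ~39.9 doublings
doubling_period = years / doublings  # ~1.5 years per doubling

print(f"{doublings:.1f} doublings, i.e. one roughly every {doubling_period:.1f} years")
```

That works out to a doubling roughly every year and a half, broadly in line with the 18-to-24-month cadence usually attributed to Moore’s Law.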

With this (exploitative) caveat highlighted from the start, keep it at the forefront of your mind as we go through the science of how persuasive products are often made. Then we can address where we tend to trip up, even when coming from a place of fairly pure intentions.

The eight-step design process for persuasive technology

Persuasion is a powerful tool that designers can use for good by creating smooth and frictionless experiences, thereby inviting a win-win situation for everyone. That smartwatch buzz on your arm reminding you to stand up after being sedentary for too many hours? Your health most likely approves. The app that helps you learn a new skill, whilst making it fun? Neat indeed.

Apart from the previously mentioned model, Fogg also proposed an outline to help teams design products that influence users and drive their actions — crafting a path towards certain behaviours. Sound too evil? Let’s give ourselves the benefit of the doubt for now and assume our quest for persuasion is driven by goodwill. Later on, we shall revisit our dark side in more detail to balance the scales.

The steps of this guide aren’t meant to be taken as a rigid set of rules; think of them more as milestones to observe along the way. They are also not necessarily sequential: for example, you could tackle the first two steps in reverse order, depending on what works better for your project. Altogether the framework is quite flexible and can be tailored to different contexts. Off we go then.

Woman pointing at target on screen.
Image by vectorjuice on Freepik

Step 1: Choose a small, simple behaviour to target.

Ambitious goals can become overwhelming if they aren’t broken down into smaller chunks. For a less daunting plan, try aiming for a lighter, simpler first step as a way of inching towards the larger objective. Alternatively, if the overarching goal is somewhat vague, you can use this approach to size up the larger target on the horizon.

Opting for simplicity also helps keep a team’s focus aligned, according to Fogg:

“In some situations, members of the design team may already have their own pet ideas for what they want to build, so the team collectively nods while each person adds an additional bell or whistle to the user experience, complicating the project and unwittingly setting the team up for failure.”

Step 2: Choose a receptive audience.

Needless to say, you should probably not aim to design for a user base that is resistant to your idea from the start. This is not to say you should give up altogether: once you have a solution in hand that delivers results, you can expand your target audience, but we’ll get to this later.

Another consideration is to target users who are already (somewhat) tech-savvy. Early adopters and those keen to explore or experiment will most likely be more responsive to new projects, and will also yield broader practical insights.

Step 3: Find out what prevents the target behaviour.

When users don’t act the way you’d prefer, there is generally some combination of three factors at play. You might be able to unearth the problem by asking yourself these questions:

  • Are they not motivated to complete the action?
  • Do they lack the ability to do so?
  • Is there a missing trigger that would nudge them to perform the behaviour at the right time?

Basically, this is where the behaviour model mentioned at the beginning can help pinpoint where the product or idea may be falling short.

Business team analysing different media channels.
Image by vectorjuice on Freepik

Step 4: Choose a familiar technology channel.

This part builds on the first three steps; the next smart move is to select a channel that is already familiar to your audience. Expecting users to learn a new medium would effectively introduce yet another change, which could quickly become overwhelming.

Instead, opt for a platform that addresses the problem highlighted in Step 3. As a starting point, the research suggests the following examples:

  • For increasing motivation, videos, social networks, and gamification in general can be effective solutions.
  • If you need to make an action simpler and thus increase the user’s ability, (installed) apps and clear onboarding walkthroughs can be your allies.
  • When you need to trigger behaviours, text messages, emails and notifications can work. Just don’t overdo it and spam users, otherwise you risk achieving the opposite effect and driving them away (see the sketch below for one simple guardrail).
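For that last point, one simple guardrail is to cap how many prompts any user can receive within a given window. The sketch below is purely illustrative; the cap, the in-memory log and the function name are my assumptions, not something taken from Fogg’s paper.

```python
from collections import defaultdict
from datetime import datetime, timedelta

MAX_PROMPTS_PER_DAY = 2        # illustrative cap; tune per product and audience
_sent_log = defaultdict(list)  # user_id -> timestamps of recently sent prompts

def may_send_prompt(user_id: str, now: datetime | None = None) -> bool:
    """Allow a trigger only if the user hasn't hit the daily cap."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=1)
    recent = [t for t in _sent_log[user_id] if t > cutoff]
    _sent_log[user_id] = recent
    if len(recent) >= MAX_PROMPTS_PER_DAY:
        return False           # back off instead of spamming
    _sent_log[user_id].append(now)
    return True
```

A real product would persist this state and likely adapt the cap per user, but even a crude limit helps keep triggers on the helpful side of the line.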

Step 5: Find relevant examples of persuasive technology.

Study your competitors and learn what techniques they use to win the audience over. You might not find clear-cut examples in your chosen field, so observing generic best practices in different domains could be your next best call.

Fogg recommends finding a range of examples:

“Specifically, a design team should examine at least nine examples in total: three that achieve a similar behaviour, three that reach a similar audience, and three that use the same technology channel as the design team’s.”

Step 6: Imitate successful examples & look out for their psychological elements.

Up next comes incorporating what you’ve learnt in the previous step, borrowing successful ideas from your analysis. Rather than reinventing the wheel, recognising and adapting examples can speed up laying down the foundations of your work.

Identifying the “secret sauce” in other products requires insight, as it tends to be psychological rather than purely aesthetic. (Which is not to say colours and visual elements don’t play into our psychology; the suggestion here is to look deeper.) This is where you get to innovate and weave in your own unique spin, by getting creative in how you fit these mental drivers into your product idea.

For example, you could imitate the tone, length, persuasion strategy and format of a message to prompt a target behaviour (such as sharing or recommending the product online).

Man and woman solving a puzzle in the shape of a brain.
Image by vectorjuice on Freepik

Step 7: Test and iterate quickly.

Once you are done coming up with possible tactics, a series of small, rapid tests should follow to see which persuasion method works best. Keep in mind that designing for persuasion is more difficult than designing for usability, because changing people’s behaviour is simply hard.

In other words, set the bar low for your early trials and don’t get discouraged. The point is to learn something from each test and gain insights so your next attempt performs better.

As Fogg concludes, “knowing how to prototype, test, and evaluate results quickly is the most valuable skill for designers of persuasive technology.”
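As one hedged illustration of what such a quick trial could look like (the variant names and numbers below are invented for the example), even a bare-bones comparison of two prompt variants tells you where to iterate next:

```python
# Hypothetical results from a small, rapid test of two reminder-copy variants.
variants = {
    "A (plain reminder)":   {"shown": 500, "acted": 40},
    "B (progress framing)": {"shown": 500, "acted": 65},
}

for name, stats in variants.items():
    rate = stats["acted"] / stats["shown"]
    print(f"{name}: {rate:.1%} of users performed the target behaviour")

# Proper significance testing can wait until a variant looks consistently promising;
# the goal at this stage is simply to learn something from every trial.
```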

Step 8: Expand on success.

Scaling up should be next on your agenda. This can be done by making the target behaviour more difficult, or by reaching out to new audiences and seeing how different types of users get on with the product.

A systematic approach is ideal here: vary only one or two attributes of the successes from Step 7, so you can keep track of how well the expansion performs compared to the original version.

Pitfalls and traps

One of the glaring problems with the above framework is that it focuses only on intended outcomes and forgets that messing with human behaviour can have equally significant, and often unintended, results.

For instance, a quest to change a simple behaviour could end up also shaping how users think and feel on a grander scale. Compulsions to swipe and refresh can even become addictions, partly due to the crafty social engineering tactics employed in the background. And where user and stakeholder values don’t align, it’s rare to see companies taking the hit and prioritising the user’s health over company profits.

Man and woman standing around a magnet attracting social media likes.
Image by vectorjuice on Freepik

In Toward an Ethics of Persuasive Technology, authors Daniel Berdichevsky and Erik Neuenschwander recommend sticking to a golden rule when it comes to persuasion strategies:

“The creators of a persuasive technology should never seek to persuade anyone of something they themselves would not consent to be persuaded of.”

They also highlight some other concerns to keep in mind, such as:

  • respecting and treating user privacy as they would their own,
  • providing enough disclosure on methods and intended results, and
  • never aiming to misinform users as a “means to an end.”

For example, if a gamified app lied about the user falling behind on its leaderboard and started bombarding them with prompts so they could regain their position, that would be a case of intentionally misinforming users to get them hooked.

If in doubt, perhaps the decision guide below can help you work through some questions around ethics and responsibility:

  • If the outcome is intended and ethical, the designer is praiseworthy.
  • If the outcome is intended and unethical, the designer is responsible and at fault.
  • If the outcome is unintended but reasonably predictable and ethical, the designer is not responsible.
  • If the outcome is unintended but reasonably predictable and unethical, the designer is responsible and at fault.
  • If the outcome is unintended and not reasonably predictable, whether ethical or unethical, the designer is not responsible.
Here’s a handy guide to help gauge levels of responsibility. | Berdichevsky, D., & Neuenschwander, E. (1999).
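To make the branching explicit, here is the same guide expressed as a tiny decision function. The outcome wording follows the list above; the function itself is just an illustrative encoding of mine, not something the authors provide.

```python
def designer_responsibility(intended: bool, predictable: bool, ethical: bool) -> str:
    """Rough encoding of Berdichevsky & Neuenschwander's responsibility guide."""
    if intended:
        return "praiseworthy" if ethical else "responsible and at fault"
    if predictable:
        return "not responsible" if ethical else "responsible and at fault"
    return "not responsible"  # unintended and not reasonably predictable

# An unintended but reasonably predictable unethical outcome still lands on the designer.
print(designer_responsibility(intended=False, predictable=True, ethical=False))
```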

Five shades of dark UX

This is where we start inching into the territory of dark UX patterns. To borrow from researchers Tobias Nyström and Agnis Stibe,

“a dark pattern design could be defined as: the craft of purposefully designing patterns that damage the well-being of the users.”

A list of common types of dark patterns has already been compiled on the Deceptive Design website, but these can also be grouped together into five primary categories (going by this research).

  • Nagging: Redirection of expected functionality that persists beyond one or more interactions.
  • Obstruction: Making a process more difficult than it needs to be with the intent of dissuading certain actions.
  • Sneaking: Attempting to hide, disguise, or delay the divulging of information that is relevant to the user.
  • Interface interference: Manipulation of the user interface that privileges certain actions over others.
  • Forced action: Requiring the user to perform a certain action to access functions.
A quick summary of dark design strategies. | Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018, April)

Nagging happens when a user’s task is interrupted, for instance by pop-ups blocking the interface. It serves as a distraction to thwart their focus and can also be seen when an expected function is redirected to another one.

Obstruction refers to impeding a task flow, making it more difficult in the hope that the user will abandon the action altogether. Hiding options that would, for example, disable certain tracking features, or using confusing wording to disorient the user, are two common cases.

When trying to pause all push notifications, Instagram gives the options of 15 minutes, 1 hour, 2 hours, 4 hours, 8 hours, or cancelling the action.
A case of obstruction: Instagram only lets you pause push notifications for 8 hours max if you try to disable all of them in one swipe. Your next best option is to do this manually for each category and subcategory. Fun!

Sneaking is used to hide or disguise information that is relevant to the user but would likely deter them from performing an action. This could include undisclosed costs or fallout from a choice that the user is lured into making via underhanded methods.

Opting in to updates, cookie policies and T&Cs are all presented in low-contrast, illegible text.
Next.co.uk somehow made their registration form user-friendly enough, but everything else? Oddly illegible. It ticks the box for interface interference, whilst also being sneaky.

Interface interference is the visual and interactive manipulation of interfaces to prioritise specific actions over others. Its aim is to confuse users and to make important alternatives more difficult to discover. This is a fan-favourite among UX-villains and it comes in three flavours.

  • Hidden information can range from sneaky T&Cs to information and checkboxes hiding in illegibly fine print and low-contrast text.
  • Preselection is any situation where an option is selected by default and usually reflects the company’s preference over the user’s interests.
  • Aesthetic manipulation encompasses design choices that serve to distract and misdirect the user’s attention from one thing to another (such as paid over free content).

Forced actions are those that must be taken so the user can access or continue using a specific function. They can be presented either as a required step or camouflaged as an option from which the user will supposedly benefit.

If you are keen to browse around for similar cases, the authors of this research set up UXP2 Lab, where you can find additional examples.

Morally grayscaled

After all this talk about the layers of intended and unintended consequences, you may still be left with plenty of questions around how best to proceed without turning into an evil genius/creative. Conveniently, that is the point here.

Holding onto an inquisitive mindset and questioning practices (your own as well as others’) should keep your critical thinking skills fresh, polished and sharp.

We may arguably have already lost the race against our AI friends in most regards, but how our human psychology gets treated and woven into design strategies is still in our hands. Ideally, aim not to become a puppet-master of sorts. Deal?

Thanks for reading! ⭐

If you liked this post, follow me on Medium for more!

References & Credits:

  • Berdichevsky, D., & Neuenschwander, E. (1999). Toward an ethics of persuasive technology. Communications of the ACM, 42(5), 51–58.
  • Chivukula, S. S., Brier, J., & Gray, C. M. (2018, May). Dark Intentions or Persuasion? UX Designers’ Activation of Stakeholder and User Values. In Proceedings of the 2018 ACM Conference Companion Publication on Designing Interactive Systems (pp. 87–91).
  • Fogg, B. J. (2009, April). Creating persuasive technologies: an eight-step design process. In Proceedings of the 4th international conference on persuasive technology (pp. 1–6).
  • Fogg, B. J. (2009, April). A behavior model for persuasive design. In Proceedings of the 4th international Conference on Persuasive Technology (pp. 1–7).
  • Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018, April). The dark (patterns) side of UX design. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–14).
  • Kim, T. W., & Werbach, K. (2016). More than just a game: ethical issues in gamification. Ethics and Information Technology, 18(2), 157–173.
  • Nyström, T., & Stibe, A. (2020, November). When persuasive technology gets dark? In European, Mediterranean, and Middle Eastern Conference on Information Systems (pp. 331–345). Springer, Cham.
  • Images by vectorjuice on Freepik
