Ethics in designing habit-forming products

Two methods to evaluate our motives when purposefully designing habit-forming products.

Jonathan Santiago
UX Collective


An abstract image of two sides of a brain. One side looks like it is a machine part, the other looks more creative and free.

Can habit-forming products be beneficial? Can they genuinely make the customer’s life better? And if so, can we truly create such products in a system where at the end of the day we need stakeholder buy-in and money to continue to operate?

I suggest that the answer is a resounding “yes”. And for those of us who design such habit-forming experiences, we need some guidelines and ways to routinely check our motives while creating products. How can we maintain sound ethics while purposefully moving customers towards the desired goal?

I currently work as a product designer on a mobile app averaging about 9.5 million unique users a day. We are quickly approaching a milestone of 500 million total downloads of the app. That is roughly 125 thousand downloads a day, with a well-above-average retention rate. We are featured as a case study in Nir Eyal’s best-selling book Hooked: How to Build Habit-Forming Products. Statistically, our product is quite sticky and habit-forming.

I don’t share these metrics to impress anyone, but to show just how much of a positive or negative impact we can have on the lives of countless people should we compromise ethically. I give them to let you know that on the other side of any popular digital product there are people like me: people who still struggle internally with what is good for our customer community versus what might be manipulating them toward predetermined goals or business metrics.

The good news is that I feel that we live in a climate where the desire for transparent, gimmick-free, morally sound products and business models has never been higher. And the most “profitable” products will be those that can genuinely provide the most value and good to a customer while being open about their business methods.

The Netflix documentary The Social Dilemma quickly gained popularity near the end of 2020. Of it, New York Times writer Devika Girish says, “[The Social Dilemma] explores how addiction and privacy breaches are features, not bugs, of social media platforms.” Features. Meaning they are deliberately designed, tested, and optimized to be addictive.

A list of harmful social media habits is laid out like a bingo card entitled “Are you using social media or is social media using you?” Categories include: “Fell into a deep virtual rabbit hole. Saw an ad for something I was just talking about. Got into a comment war with a stranger. Scrolled instead of sleeping, driving, or hanging out with people, etc.” The center box, which is circled, says, “Signed up to take back control at TheSocialDilemma.com.” Published by The Social Dilemma.
https://www.thesocialdilemma.com/start-a-conversation/

The Mitchells vs. the Machines was called “the first great animated movie of 2021” by Slate and currently holds a 97% critics’ score on Rotten Tomatoes. The movie openly makes fun of the everyday [over]reliance on tech and social media that the majority of us are keenly aware of. Nothing is off-limits. A parody of Apple’s beloved keynote events forms the setting where an AI called “PAL” launches the robot apocalypse. In one line, the dumbfounded founder of PAL says, “It’s almost like stealing people’s data and giving it to a hyper-intelligent AI as part of an unregulated tech monopoly was a bad thing.”

And it’s enjoyable because we can all see ourselves somewhere in the movie. The movie is not anti-tech or anti-social media, it’s just poking fun at the potential dangers of things few people were really talking about 15 years ago: “Do we really know where all this is going?”, “What, if any, safeguards are there?”

Animated GIF from The Mitchells vs. the Machines showing an AI character named “PAL” with evil intentions.
https://giphy.com/gifs/sonyanimation-HReCpImRg7EYhjjj6A

The point of these two examples is that awareness of the manipulative power of tech and social media is becoming commonplace in conversation. This is good. It means people are ready to actively pursue products that genuinely have the users’ interests in mind while shunning those that treat them as the product.

What follows are two methods I now use to evaluate whether my motives and actions as a product designer are ethically sound. This has become increasingly important knowing I’m designing for 9.5 million daily active users. Hopefully, there will be a few things you can add to your arsenal for creating usable, delightful products or content that benefits the well-being of your customers and, in turn, benefits you.


Let’s first define ethics in our context

Merriam-Webster’s first definition for “ethics” is:

“The discipline dealing with what is good and bad and with moral duty and obligation.”

Trying to define what an entire world of diverse people would consider morally good or bad would be a difficult task. I’m not qualified to speak in depth on that subject, as there are groups who have devoted years of research to ethics. I will say, though, that from my reading and general review of history, almost all societies appear to share an underlying, consistent code of ethics.

In general, it boils down to what some people call the golden rule: “Do unto others as you would have them do unto you.”

Have people always behaved to that standard? No. But even when they don’t, the fact that the standard exists is usually evident. One who steals something from another typically goes about it sneakily and does not boast about it. People rarely publish grand lies they’ve told in their memoirs or social media posts.

Even when we see great social injustices, the group carrying out the injustice has often first convinced themselves that what they are doing is not unethical in that particular circumstance. For example, a group who believes that no human should be a slave might first convince themselves that a certain group of people does not qualify as “human” due to their ethnicity or social standing. They have now created a loophole in their conscience that allows them to keep believing they are behaving in a morally good way. That same oppressive group might be seen holding a normal ethical standard among some groups of people but not others.

A country deciding to carry out violence against another country might first convince themselves that it is for the greater good and that a small bit of violence now is justified because it will prevent greater violence (and perhaps even greater unethical behavior) later.

So I hope for the sake of this discussion, we can see that the majority of us — regardless of race, nationality, religion, or gender — share some common beliefs on what constitutes morally good behavior. And much of it falls under us treating others the way we would genuinely want to be treated.

SAFEGUARD METHOD 1

What I call the “control vs empower evaluation”

If your customers want to incorporate other products, brands, or mediums into their lifestyle, does your current product 1) penalize them or 2) optimize for them?

Anytime we offer something of value that could be habit-forming, we have a responsibility to steward how the customer accesses it. Evaluate whether your product is:

  • Forcing customers to access a certain medium (www, smartphone) or channel (Twitter, Instagram, etc.) as the only way to consume your most important content.
  • Penalizing customers for not opening a digital product every day.
  • Not giving customers control over product communication.
  • Not giving customers control over product gamification.
  • Not optimizing for offline use.
  • Feature-centric instead of user-centric.
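
As a concrete illustration of empowering rather than controlling, here is a minimal sketch (in Python, with names I invented for illustration, not from any real product) of engagement settings that belong to the user, where a reminder feature defaults to respecting those settings:

```python
from dataclasses import dataclass


@dataclass
class EngagementPreferences:
    """User-owned switches for potentially habit-forming features."""
    push_notifications: bool = True
    streaks_enabled: bool = True
    daily_reminders: bool = True


def should_send_reminder(prefs: EngagementPreferences, user_opened_today: bool) -> bool:
    """Decide whether to nudge the user, consulting their settings first."""
    # Respect the user's own choices before any growth metric: no reminder
    # if they've turned reminders or notifications off.
    if not prefs.daily_reminders or not prefs.push_notifications:
        return False
    # No reminder needed if they've already opened the app today.
    return not user_opened_today
```

The specifics are made up; the point is that the “should we nudge?” check consults the user’s own preferences before any engagement metric does its work.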

Example: Social media can be a dangerous place for many people, emotionally, mentally, and physically. Some have made the choice to detach from it for very valid reasons. Are we forcing our customers to use social media to access vital content? Are we offering exclusive content or perks for following our brand on social media? If so, we should at least consider the effects this may have on a certain segment of customers.

If we’re penalizing customers for not using something that could be harmful to them in order to access content we genuinely believe is good for them… do our methods match our motives?

On the other hand, if we find a way to optimize our content for such customers, we can still provide value but do it in a way that considers the needs and vulnerabilities of the user. We are now thinking user-centered and user-first rather than feature-centered or feature-first. User-centric considers the users’ needs first and builds from there. Feature-centric usually considers what feature we think would benefit the product and finds ways to get the customers to use it — sometimes at the expense of what is best for them.

Health apps, learning apps, meditation apps, religious apps… these products can help create life-transforming habits for good. If we’ve hooked our customers on such a product, we now carry a tremendous responsibility. What started as forming a healthy habit can lead to an unhealthy dependence if our product is not optimized for the user’s personal growth.

Here’s a made-up scenario of what I mean. After months of use, a health app may help a user finally hit that goal of getting their blood pressure down to a healthy level. The user had tried many other methods, but this app was the catalyst of change for them to develop a healthy lifestyle. Now let’s say this customer decides they need to take a month-long technology fast (break) for emotional health issues. Or, perhaps they are in the Navy and will be deployed on a ship for two months and will not be able to access the online version of the health app. Does the app penalize them by causing them to lose daily rewards (gems, streaks, etc.), or does it choose to play nice with offline use or even other products, knowing that for their users to truly have a healthy lifestyle, they will eventually need to incorporate other products and offline use?
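
The streak question in that made-up scenario can be made concrete. Below is a hedged sketch, with an invented `on_pause` flag standing in for a user-declared break (a tech fast, a deployment), of streak logic that resumes rather than resets after an offline period:

```python
from datetime import date


def updated_streak(streak: int, last_active: date, today: date, on_pause: bool) -> int:
    """Continue a streak across a user-declared pause instead of resetting it."""
    gap_days = (today - last_active).days
    if gap_days == 0:
        return streak           # already counted today
    if gap_days == 1:
        return streak + 1       # consecutive day: streak grows
    # A punitive design resets here. A user-first design honors a declared
    # pause and lets the user resume right where they left off.
    return streak if on_pause else 0
```

This is one possible design, not a prescription; the contrast it illustrates is between punishing absence and accommodating the life circumstances the user told us about.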

I happen to be designing a spiritual/religious product intended to positively affect a person’s well-being. You can ask the same question about your product: “If my product/brand/content is habit-forming, am I penalizing my users for not using it specifically how I want them to, or am I optimizing for how they could consume it in a way that’s healthiest for them?”

SAFEGUARD METHOD 2

Nir Eyal’s Manipulation Matrix

Start by asking yourself the following questions: First, “Would I use the product myself?” and second, “Will the product help users materially improve their lives?”

I mentioned above that the product I’m a designer for was featured as a case study in the incredible book Hooked: How to Build Habit-forming Products. Eyal spends much of the book explaining how to create hooks based on internal triggers in a user that will become habits. He then devotes an entire chapter to the ethics behind purposefully creating habits in people. Eyal introduces a helpful diagram called the Manipulation Matrix.

Four categories displayed on a diagram: Facilitator — maker uses the product and the product improves lives; Peddler — maker does not use the product but the product improves lives; Entertainer — maker uses the product but the product does not improve lives; Dealer — maker does not use the product and the product does not improve lives.
Nir Eyal’s Manipulation Matrix

Where do you fall on the matrix? The matrix doesn’t try to answer the question, “Can I hook my users?”, but “Should I hook my users?” When we are designing our habit-forming products we can use the Manipulation Matrix for an honest self-evaluation of the motives behind what we are creating.
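
If it helps to make the self-evaluation mechanical, the matrix reduces to two yes/no questions. Here is a small sketch (the function name is mine, not Eyal’s) that maps the two answers to a quadrant:

```python
def manipulation_matrix_quadrant(maker_uses_it: bool, improves_lives: bool) -> str:
    """Map the two Manipulation Matrix questions to a quadrant label."""
    if maker_uses_it and improves_lives:
        return "Facilitator"   # uses it, and it improves lives
    if improves_lives:
        return "Peddler"       # doesn't use it, but it improves lives
    if maker_uses_it:
        return "Entertainer"   # uses it, but it doesn't improve lives
    return "Dealer"            # neither uses it nor believes it helps
```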

Would I use the product myself?

If you were your own ideal customer, would you use your own product the way the majority of your customers are expected to use it?

  • Would you play that “free” game with in-app purchases knowing the overall cost and commitment to keep playing once you’ve invested “x” amount of time?
  • Would you pay what you are charging the customer for your “ebook” knowing the value you would be getting?
  • Would you encourage your 16-year-old to use the same social network platform, at the same frequency, that you promote to your customers or clients?
  • Would you fill in that order form with your own phone number, email, or physical address knowing what kind of communication you’ll receive once the organization has your information?

So where do we fall?

The Facilitator

“Facilitator” is highlighted on the diagram. A facilitator uses the product and their product improves the lives of their users.

We use the product ourselves and believe it materially improves our customers’ lives. This is one of the best safeguards against unethical manipulation. Here, the designer or content creator can best relate to and empathize with the customer.

Yet even if our entire organization consisted of facilitators, it would be wise to have safeguards in place for the small number of people who could develop genuine addictions to our products or services. With digital technology, it’s easier than ever to track usage and be alerted if unhealthy, addictive behavior is detected. And as a facilitator with the best interests of the user in mind, what will we do with that data? Will we purposefully slow users down even if it means decreased metrics? As facilitators, we can afford to do this because we are in it for the long game. And our customers will be in it for the long game too, because they can feel that we have their best interests in mind.
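
What “being alerted” looks like is up to each team. As one hedged sketch, a product might compare median daily usage over a week against a threshold; the 180-minute figure below is purely illustrative, not a clinical standard:

```python
import statistics


def flag_unhealthy_usage(daily_minutes: list[int], threshold_minutes: int = 180) -> bool:
    """Flag a period where median daily use exceeds an illustrative threshold.

    The median resists one-off spikes (a long flight, a sick day), so it only
    flags a sustained pattern rather than a single heavy day.
    """
    return statistics.median(daily_minutes) > threshold_minutes
```

What matters more than the formula is what the team commits to doing when the flag fires, even when intervening costs engagement metrics.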

The peddler

“Peddler” is highlighted on the diagram. A peddler does not use their own product but their product improves the lives of their users.

We believe the product will improve someone’s life, but we wouldn’t use it ourselves. There is nothing immoral about being in the peddler category. I’ve been there and might find myself there again at some point. But as Nir states, “Peddlers tend to lack the empathy and insights needed to create something users truly want.”

Tip: User interviews are a great way to not only improve your product, but to build genuine empathy for the fellow humans you are creating for (resource in appendix footer).

The entertainer

“Entertainer” is highlighted on the diagram. An entertainer uses their own product but their product does not improve the lives of their users.

We use the product ourselves, but it doesn’t materially improve the lives of those who use it or consume its content. Entertainment isn’t wrong. One could even say that entertainment serves the useful purpose of letting our minds relax for a while. But if we find ourselves in this quadrant, the best thing we can do is acknowledge we are here. This will help us moderate the decisions we make. If we’re trying to turn something that doesn’t improve the user’s life into a habit, odds are our methods will be fast, constantly shifting, and attempting to capitalize on the emotional urges of our customers. Think of the business models behind pay-to-win apps, TV sitcoms, etc. Products in this category often lack staying power; users soon move on to something newer.

Tip: Be honest with yourself that it doesn’t improve the lives of others. It’s OK. This will keep your motives and methods honest.

Look around for a genuine need in your users. Can any aspect of what you are creating address that need and improve the lives of your customers?

The dealer

“Dealer” is highlighted on the diagram. A dealer does not use their own product, nor does their product improve the lives of users.

One who wouldn’t use the product themselves and doesn’t believe it offers any substantial way of improving a user’s life has a higher risk of relying on unethical choices to obtain and keep users. We should be cautious should we find ourselves here.

I’ll restate: if we find ourselves in a category other than the pure facilitator role, it doesn’t mean we aren’t, or can’t be, practicing ethical design. Just honestly assessing which quadrant we’re in is a big step toward recognizing and navigating the motivation behind our decisions.

And I see other benefits in Nir’s Manipulation Matrix. Perhaps you recognize that you’ve spent the past 10 years in the entertainment quadrant. You think about what has been motivating your decisions and how it has affected your mental and emotional health. You decide that perhaps you’ll take a break from this quadrant for a few years and reposition yourself to invest in an area you are more passionate about. Maybe you’ll even end up back in the entertainment quadrant at some point, or maybe you can pivot in that quadrant to redirect your product or mission so that you can become a facilitator.

Closing recap:

  • Do we give our customers control over features that could be habit-forming (do we empower users or attempt to control them)?
  • Are we proud of the way our product or brand is influencing the behavior of others?
  • Do we optimize for offline use, life circumstances, and the emotional health of the user?
  • If we wouldn’t use the product ourselves, and/or if we don’t believe the product materially improves the users’ lives, are we aware of how that could affect our ethical choices?

Recommended resources:

Hooked: How to Build Habit-Forming Products

Social Media & Gamification (NN/g)

User Interviews: How, When, and Why to Conduct Them (NN/g)

Center for Humane Technology

Undivided Attention (Podcast)

The UX Collective donates US$1 for each article we publish. This story contributed to World-Class Designer School: a college-level, tuition-free design school focused on preparing young and talented African designers for the local and international digital product market. Build the design community you believe in.


Designing products, directing projects, and developing people to help make the world a better place. I’m currently a product designer at YouVersion. 🤟