Should we all lose faith in UX?

UX, Surveillance Capitalism, and the “Banality of Evil”

Mahan Mehrvarz
UX Collective

--

GIF illustration of Adolf Eichmann with a word cloud looping through three sentences: 1) I was only an operative, I moved people, I didn’t care what happened next. 2) I was only a designer, I made user flows, I didn’t care what happened next. 3) I was only a carpenter, I made tables, I didn’t care what happened next.
Illustration based on a photo of Adolf Eichmann at his 1961 trial.

It is not recent news that we live in an era when everyday objects boot up, connect, and are infused with algorithms of all kinds, driving our governance, our lifestyles, and, more importantly, our social interactions. The experience design of digital products connects high-quality interactive systems to our lives. However, at the back end, behind the curtains of almost every successful digital product, neither does the data captured from users exist independent of ideas, preferences, and capital, nor are the algorithms used to process that data compatible with societal values such as privacy, autonomy, or equity.

Almost a year ago, Mark Hurst wrote an essay titled “Why I’m Losing Faith in UX”. In his writing, with which I strongly empathized, he maintains that the most talented and highly paid UX designers are hired by big tech companies to focus on exploiting users by maximizing screen time and engagement, instead of advocating for human vulnerabilities or even solving users’ daily or life problems.

UX has completely flipped now, from advocating for the user to actively working against users’ interests. To boost profits, UX has turned into user exploitation.

Eventually, other UX thinkers responded to his essay (“How To Put Faith in Design”, “Waking up from the dream of UX”, “The State of UX”, “Does UX stand for user experience or user exploitation?”, etc.). However, unlike the original piece, which is backed by several cases (if not references), the responses barely meet the criteria of a solid argument. They are, at best, the personal opinions of experienced designers/design thinkers with no clear supporting documents, background literature, or academic or non-academic references.

Although Hurst’s arguments seemed flawless and built upon his lived experience, I felt there was a lot more worth mentioning in support of them. And while I strongly buy his arguments, there is one point on which we are not on the same page: to me, the issue is more serious than what he revealed in his essay. I believe the UX community is not fully aware of the latent game behind our daily design activities, and of the amount of harm that we designers, in our lovely design sprints with colorful sticky notes, are cooking up for humanity. In this story, I try to summarize the key players/generators behind the current ethico-social concerns of UX: the UX that I think we should all be losing faith in, or doing something serious about.

Dataveillance and the Mirage of Raw Data

Firstly, as UX designers/researchers, our work is considered the primary reason for gathering data from users, because we aim to improve products by knowing more about the people who use them. Secondly, this seems to have no unintended consequences as long as nobody misuses the raw data. So we capture as much as we can, rather than as much as we need, with the idea that it is better to be safe than sorry!

However, unfortunately, neither of the above arguments is right! Considering data (human behavioral data) a raw material of evidence, or a fresh start for drawing conclusions, is itself a wrong presumption. There is not, and never has been, such a thing as raw data. The concept is not only bad but misleading. Data has a strong dependence on culture and society. Even the initial collection of data already involves intentions, assumptions, and choices that amount to a kind of pre-processing. Data does not sit out there waiting to be discovered; we have to generate it. And the moment we intend to generate it, we have to imagine its functional contribution to our purpose, and this imagination involves a certain degree of presumption. Collection, management, storage, and transmission are among the activities that each cook our data in a certain way.

My own data may once have been raw, but by the time I began any serious interpretation, I had cooked it quite well.
from “Raw Data” Is an Oxymoron
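To see how collection itself cooks the data, consider a minimal, hypothetical sketch of an analytics event schema (all field names below are invented for illustration): before a single record is stored, the schema has already decided what can count as data at all.

```python
# A hypothetical behavioral-event schema. Defining it is already an act of
# interpretation: every field encodes an assumption made before collection.
EVENT_SCHEMA = {
    "user_id": str,           # assumes behavior is worth tying to an identity
    "gender": ("M", "F"),     # assumes gender is binary, and relevant
    "dwell_time_ms": int,     # assumes attention is the quantity that matters
    "scroll_depth_pct": int,  # assumes deeper scrolling means more interest
}
# Anything outside this schema never becomes "data" at all; the record is
# pre-processed by these choices before any analysis begins.
```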

But are designers and product improvement really the primary reasons behind gathering such a variety of records, generated at extremely high velocity and resulting in a huge volume of behavioral data about each individual?

In 1986, Roger Clarke coined the term “dataveillance” to describe “the systematic monitoring, aggregating, and sorting of personal data … of one or more persons” in the service of control over society. You may think, “why should I worry if I don’t do anything wrong?” What if I tell you that in a surveillance society like most of ours, the definition of wrong is itself the locus of the question, and it shifts over time? Is there anything wrong with participating in an LGBTQ march? What about a political demonstration? Having poor health? Having a lot of debt? Being female? Or belonging to a certain ethnicity? There are so many contradictory definitions of wrong that, at some point, everyone has certainly been engaged in something “wrong.”

Knowing users for the purpose of product improvement is a theater with puppet designers. The real reason behind investing so much in gathering behavioral data is to turn it into “prediction products” within the context of a surveillance society. The future of humankind is being predicted from prejudicial, discriminatory, and prescriptive data captured during a global technology show, starring UX designers in essential supporting roles.

Algorithmic Gods and Surveillance Capitalism

Artificial intelligence, machine learning, and predictive modeling have become unquestionable mathematical gods whose success, sadly, is first and foremost aligned with private companies’ profit or with sustaining nation states, even when they discriminate, disinform, or act as vital threats to our societies.
In this regard, Cathy O’Neil introduces Weapons of Math Destruction (WMDs). She describes how our information society has created algorithmic gods, each with its own black-boxed reality that no one can really appeal to. Mathematical models rule a variety of our social interactions and governance. For example, Facebook’s own research showed that people exposed to less cheerful content produce more negative posts, and vice versa, without clearly knowing the reason for their mood. In other words, Facebook is capable of affecting how millions of people feel, what they learn, and whether they vote. Our emerging, so-called “smart governance” also uses algorithms whose mechanisms are hidden from the public. Letting people know only the results, these algorithms often punish the poor, blame those who just happen to be surrounded by criminals, and discriminate in recruitment processes. O’Neil best describes the WMDs in the following:

“The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer.”

Few people are allowed to see and work on big tech companies’ algorithms. Guillaume Chaslot, a former YouTube engineer with a Ph.D. in computer science, is one of those people, and he has repeatedly tried to inform the public that today’s algorithms do not appear to be optimizing for what is truthful, balanced, or healthy for democracy and other societal values.

Chaslot elaborated that big tech content platforms are creating fictional realities for society by deploying algorithms that maximize engagement, screen time, and attention rather than relevance and truth.

Hence, if “the earth is flat” keeps users online longer than “the earth is round”, this theory will be favored by the recommendation algorithm.
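To make that mechanic concrete, here is a minimal, hypothetical sketch of an engagement-optimized ranker; the items, scores, and weighting are invented for illustration and do not describe any real platform’s system. Note that accuracy never enters the engagement objective.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # how long the item keeps a user on-site
    accuracy: float                 # fact-check score in [0, 1]

candidates = [
    Video("The earth is round", predicted_watch_minutes=2.0, accuracy=1.0),
    Video("The earth is FLAT (they lied!)", predicted_watch_minutes=11.0, accuracy=0.0),
]

def engagement_score(v: Video) -> float:
    # The objective is time on site; truth is simply not a term in it.
    return v.predicted_watch_minutes

def humane_score(v: Video) -> float:
    # A toy alternative objective that discounts inaccurate content.
    return v.predicted_watch_minutes * v.accuracy

print(max(candidates, key=engagement_score).title)  # -> the flat-earth video
print(max(candidates, key=humane_score).title)      # -> the accurate video
```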

While we are being constantly monitored, over the last two decades our behavioral data has become a new commodity for major internet companies: a means to other enterprises’ market ends. They capture people’s personal experiences and process them through black-boxed machine learning models to create “prediction products”. This data, and the insight it gives into people’s future activities, is then sold for others’ commercial gain. Shoshana Zuboff calls this new era “the age of surveillance capitalism”. In this age, the giant internet companies’ game is to offer digital products that solve people’s problems for free, while in the shadow game it is people’s behavioral data that pays the cost.

We are not surveillance capitalism’s “customers.” Although the saying tells us “If it’s free, then you are the product,” that is also incorrect. We are the sources of surveillance capitalism’s crucial surplus: the objects of a technologically advanced and increasingly inescapable raw-material-extraction operation. Surveillance capitalism’s actual customers are the enterprises that trade in its markets for future behavior.

From Human-Centered to Human Exploitation

I have read a lot of material about human-centered design (HCD), which I believe is the essence of many of today’s UX/product design approaches. Jo Szczepanska’s piece titled “Design thinking origin story plus some of the people who made it all happen” is indeed among the best, giving an overall understanding of the topic with no commercial slant toward any institution. If you review HCD, you will see how it all started: to advocate for human nature, to solve wicked human problems, and to include users’ voices in the design process. From Buckminster Fuller’s tendency to use multidisciplinary teams to solve human habitation problems, to the Scandinavian designers who included workers’ feedback in reorganizing work environments, HCD was a new concept that put the emphasis on users’ needs rather than the designer’s intuition.

Today, if you check the UX design communities’ trends, you often hear words like inclusive, equity-focused, and accessible describing the leading directions in product design research. In alignment with this, Google speaks about the next billion users, and Facebook wants to bring the internet to highly deprived areas. However, remembering dataveillance, WMDs, and “prediction products”, we would have to be deeply asleep not to see this as a play that big tech companies are orchestrating to hide their main motivation. In their shadow game, they are only in quest of more and more behavioral data; not humanity, not accessibility, not equity!

Why would companies spend billions to enable the poorest people to access their products? For the same reason that Google, the technology giant behind the leading mobile OS (Android), focuses on producing the most low-cost smartphones (Google Pixel)! This is “aggressive mimicry”, the scientific name for the phenomenon we usually describe as “a wolf in sheep’s clothing”. What they want is more behavioral data and a wider variety of prediction products. They sell this evil idea under the flags of inclusive design, accessibility, and equity-focused movements, and we designers legitimize their intentions. We weave the “sheep’s clothing” they wear.

Annotated image of an anglerfish (labeled “Business model”) using a modified dorsal spine as a fishing rod with a lure (labeled “UX”) to attract and capture prey.
Tweet from Erika Hall (The anglerfish uses a modified dorsal spine as a fishing rod with a lure to attract and capture prey).

As practiced, UX often incorporates value-blind and harmful methodologies. When companies A/B test multiple design alternatives, it is not how well an alternative resolves a human problem that makes it the winning choice; it is simply the amount of engagement it causes. They do not show us what we want (and need), but what we can’t help looking at, clicking on, and scrolling through.
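Here is a minimal, hypothetical sketch of that selection logic (the variant names and numbers are invented): notice that the decision rule never asks whether users accomplished anything.

```python
# Hypothetical A/B results: variant -> average session minutes per user.
# "Did the user solve their problem?" is not even measured.
ab_results = {
    "variant_a_clear_cancel_button": 3.1,
    "variant_b_buried_cancel_flow": 7.8,
}

# The winner is simply whichever variant captures more attention.
winner = max(ab_results, key=ab_results.get)
print(f"Shipping {winner}")
```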

UX also involves unethical design patterns. Harry Brignull created the Dark Patterns project. He defines dark patterns as “tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something”; think of the frustration in almost every cancellation flow on almost every digital platform. The annoying fact is that all of them are carefully designed by a UX designer or team, serving the purpose of human exploitation in a completely unethical way.
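As a thought experiment, here is a hypothetical cancellation flow reduced to its logic; every retention step below is invented, but the shape will be familiar to anyone who has tried to unsubscribe from anything.

```python
# A hypothetical dark-pattern cancellation flow. Each step adds friction;
# none of it serves the user's stated goal of cancelling.
RETENTION_STEPS = [
    "Are you sure? You'll lose all your saved work.",  # loss aversion
    "Your friends will miss you!",                     # confirm-shaming
    "How about 50% off for 3 months instead?",         # diversion offer
    "Please tell us why you're leaving (required).",   # mandatory survey
    "Call us during business hours to finalize.",      # channel switch
]

def cancel_subscription(confirms_every_step: bool) -> bool:
    for step in RETENTION_STEPS:
        print(step)
        if not confirms_every_step:
            return False  # any hesitation, and the subscription survives
    return True

print("Cancelled!" if cancel_subscription(True) else "Still subscribed.")
```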

The amount of harm that many digital products are bringing to human society is irreversible and therefore a serious threat to humanity. The Ledger of Harms is an ongoing project by the Center for Humane Technology (CHT, the institute behind the documentary “The Social Dilemma”) that maintains a well-cited list of digital products’ side effects. Some are breathtaking:

66% is the increase in the risk of suicide-related outcomes among teen girls who spend more than 5 hours a day (vs. 1 hour a day) on social media.

2 minutes of exposure to a conspiracy theory video reduces people’s pro-social attitudes (such as their willingness to help others), as well as reducing their belief in established scientific facts.

The greater your level of Facebook addiction, the lower your brain volume. MRI brain scans of Facebook users demonstrated a significant reduction in gray matter in the amygdala, correlated with their level of addiction to Facebook. This pruning away of brain matter is similar to the type of cell death seen in cocaine addicts.

Ignorance & the Decline of Agency: “Eichmann in Jerusalem”

Many UX designers say that we just “give people what they want” to escape ethical responsibility for manipulating human nature. There is also a carpentry analogy, claiming that UX is only a set of skills, like carpentry: skills with zero agency and responsibility. But wait: if you were a carpenter, would you make a table for anyone, regardless of their purpose? Even if you knew that the table would serve the purpose of torturing a prisoner?

Before you prepare an answer, let us recall the concept of the “banality of evil”, from the book “Eichmann in Jerusalem” by Hannah Arendt. Adolf Eichmann was a Nazi operative responsible for organizing the transportation of millions of Jewish people to various camps in service of the Nazis’ ultimate purpose: genocide. Arendt, the German philosopher and historian, who covered Eichmann’s trial as a reporter for The New Yorker, described him as someone with no evil intention who acted with no motivation other than to eagerly advance his career in the Nazi system. In her words, he was just following orders without questioning them. Her concept of the “banality of evil” refers to the idea that evil acts are not necessarily perpetrated by evil people; they can simply be the result of dutiful people obeying orders.

Now, considering the banality of evil in the carpenter analogy, would you still make that table regardless of its ultimate purpose?

GIF illustration of Adolf Eichmann with a word cloud looping through three sentences: 1) I was only an operative, I moved people, I didn’t care what happened next. 2) I was only a designer, I made user flows, I didn’t care what happened next. 3) I was only a carpenter, I made tables, I didn’t care what happened next.
Illustration based on a photo of Adolf Eichmann at his 1961 trial.

Forget “What is Design?”, Stick to What It Shouldn’t Be!

Should we all lose faith in UX? Fabricio Teixeira and Caio Braga, in the sixth episode of their special UX Collective piece for the beginning of 2022, also admit that there is something wrong with UX. Pushing designers to act upon such concerns in the most widely read UX publication is indeed a substantial step. However, when thinking about these concerns, it is important not to fall into clichéd false arguments and comparisons. While phrases like “design is not philanthropy”, “design is business”, and “our job isn’t to save the world” are very appropriate considerations about design and designers, they have limited relevance to this discussion. On many occasions, when a dialogue about such critiques of UX is about to take shape, somebody starts saying “design is business” and “is for-profit”, like “sales and marketing”.

Let’s remember that these phrases do not represent a counterargument to the socio-ethical critiques of UX. Remember that in the “wolf in sheep’s clothing” analogy, design is the clothing; marketing is not. Remember that not being philanthropy does not justify human exploitation. Remember that there is always a choice between long-term and short-term profit. Remember that, although tricky, respecting human nature, taking responsibility for unintended platform consequences, and standing on the right side of history do not contradict business or profit. Remember Airbnb…!

Following its reactionary commercial (a response to President Trump’s travel ban), Airbnb made its nondiscrimination policy mandatory and lost over a million users to uphold human good. Listen to the full story.

Should we all lose faith in UX? Heather Wiltse wrote “Surveillance Capitalism, by Design” almost a year before “Why I’m Losing Faith in UX”. A little ahead of her time, she also questioned the role of UX design in the current state of our information society:

What, then, is the role of design in relation to the thoroughly artificial edifices and mechanisms of surveillance capitalism? Is it to rearrange pixels while possibilities for other ways of ordering economic and social systems become submerged, to keep us as users distracted while our lives are extracted and their data shadows sold to the highest bidder?…

Should we all lose faith in UX? Harry Brignull suggests creating a code of ethics for UX/UI design. Meanwhile, he seems to be a fan of the public humiliation of those who utilize what he calls dark patterns. I believe the two are highly complementary: without a radical reaction (public humiliation), a gradual semi-legal strategy (a code of ethics) will not find its way into the professional practice of UX.

Should we all lose faith in UX? Tristan Harris and the Center for Humane Technology aim to bring similar topics to the foreground of society (e.g., “The Social Dilemma” and “Your Undivided Attention”). However, although I enjoy CHT’s applied approach, they don’t seem to bring enough maturity or focus when it comes to design. For example, #OneClickSafer (their petition asking Mark Zuckerberg to remove the re-share button from posts after a certain number of shares), in my understanding, despite the humane values behind it, seemed a poor design solution to the misinformation problem.

Should we all lose faith in UX? I don’t think it is a good idea to jump to any prescription for UX, nor was my intention in writing this story to call for a specific action. Rather, I intended to say that there is a serious issue here that we, as designers, have to address: it is crucial to realize that it is not others misusing our design; our design’s very essence is becoming exploitative.

Separating the essence of products from their use is not a new controversy, though. Marshall McLuhan, in his landmark essay “The Medium is the Message”, responds to David Sarnoff’s claim that “the products of modern science are not in themselves good or bad; it is the way they are used that determines their value”:

That is the voice of the current somnambulism. Suppose we were to say, “Apple pie is in itself neither good nor bad; it is the way it is used that determines its value.” Or, “The smallpox virus is in itself neither good nor bad; it is the way it is used that determines its value.” Again, “Firearms are in themselves neither good nor bad; it is the way they are used that determines their value.” That is, if the slugs reach the right people firearms are good. If the TV tube fires the right ammunition at the right people it is good. I am not being perverse. There is simply nothing in the Sarnoff statement that will bear scrutiny, for it ignores the nature of the medium…

Perhaps if we do not use both radical criticism and constructive dialogue to bring about a gradual shift in approach, we can no longer call ourselves a socially conscious community. We either have to lose faith in UX or do something serious about the socio-ethical concerns in this field of practice. Self-criticism, continuous dialogue, and maximum awareness might shed light on the kind of position each of us will take on these concerns in the design of digital products; the ones that we give birth to.
