Ethical design is a risk management strategy

How do we convince decision-makers to make ethical design a priority?

Kate Every
UX Collective

--

Image credit: Alois Komenda via Unsplash.

Content warning: this article discusses suicide, please take care when reading.

TL;DR: It is not enough to talk about designing ethically as the “right thing to do.” Large organisations, operating in complex systems, have many competing priorities. Ultimately they are accountable to their shareholders and their bottom line. In order to appeal to decision-makers, we need to reframe ethical design as a risk management strategy.

Ethical design is a risk management strategy

When it comes to design and tech ethics, we often make the moral argument. Designing inclusively is the right thing to do. It’s our responsibility as designers to consider the impact of our work and to mitigate potential harms. The limitation of this approach to ethical design is one of scope. We often operate within a limited sphere of influence, for example, as the designer of a particular product feature. While we can, and absolutely should, use our positions to advocate for ethical design, that advocacy often has a limited impact on the wider system in which we’re operating.

It’s important to acknowledge that we exist within these much larger systems. Multinational corporations operate within a complex infrastructure of accountability to shareholders, global legislative frameworks, and wider economic systems. In reality (no matter what they say), their primary responsibility is not to the users of their products, nor to the staff who create them. When operating at this level, appealing to “the right thing to do” is often ineffective: there are too many other competing priorities.

Perhaps a better way to advocate for ethical design is to discuss its relationship to risk. Designing ethically from the outset is a way to manage risk.

I’m going to discuss some of the key risks that can be mitigated by embedding ethical design principles. The categories below are somewhat artificial; they are definitely not mutually exclusive. As in any complex system, these things intersect with one another.

Risk 1: Public safety

At this point, there are countless examples of the ways in which tech can cause significant harm to individuals, to communities, and to society at large. I have spoken about this before in terms of harm and impact. But to couch it in the language of risk, these are examples of risk to public safety.

A recent, tragic case that exemplifies the impact of tech on public safety is the suicide of 14-year-old Molly Russell. The inquest ruling was recently reported on, with a senior coroner concluding that Molly: “died from an act of self-harm while suffering from depression and the negative effects of online content” (in this case, from Pinterest and Instagram). Experts reviewed some of the self-harm content being recommended to Molly by the platforms in the period leading up to her death. One of them told the court:

“I had to see it over a short period of time and it was very disturbing, distressing… there were periods where I was not able to sleep well for a few weeks, so bearing in mind that the child saw this over a period of months I can only say that she was [affected], especially bearing in mind that she was a depressed 14-year-old. It would certainly affect her and made her feel more hopeless.”

The findings of the inquest are damning. The court has unequivocally drawn a line between the commercial and design decisions of social media companies, and the real-world impact on the life of an individual.

There is no doubt that the decisions made by these global organisations pose risks to public safety. But to shift focus: what about the risks to the organisations themselves?

Risk 2: Reputational

Cases like Molly Russell’s contribute to the mountain of bad PR for Meta (the company that owns Instagram, previously called Facebook). It follows last year’s explosive leaks from whistleblower Frances Haugen, whose documents showed that Facebook repeatedly prioritised “growth over safety.” Documents released by Haugen show that the organisation was aware that Instagram was a “toxic” place for young people and chose not to share these findings. They knew that “32% of teenage girls surveyed said when they felt bad about their bodies, Instagram made them feel worse.” They knew that “13% of UK teenagers and 6% of US users surveyed traced a desire to kill themselves to Instagram.”

Facebook is frequently discussed in the media, for all the wrong reasons. These negative stories are the consequence of strategic decisions that have not prioritised inclusive and ethical product design. Instead, they prioritise innovation and growth above all else. Working at such speed does not allow for thoughtful consideration of the damaging unintended consequences of design. We have seen from Haugen’s testimony that whilst Facebook engages in internal research (which is fundamental to designing ethically), they choose to ignore findings that would steer them in a different direction.

“There were conflicts of interest between what was good for the public and what was good for Facebook… Facebook over and over again chose to optimise for its own interests, like making more money.” — Frances Haugen

It is classic short-termist thinking: ignore evidence that might put a halt to developing the features they want to develop, so that they can maximise revenue. This might benefit the bottom line in the short term, but over time the reputational risk grows. Facebook’s reputation is at an all-time low. They’re losing support from their users, their employees, and their investors.

Could the significant risk to Facebook’s reputation be behind its decision to rebrand to “Meta” last year? The company claims that the new name better encompasses what it does, as it branches into virtual reality and plans to build a “metaverse.” But to a more cynical (or realistic?) eye, it seems to have come at a very convenient time to divert attention away from the growing bad news stories. Is this a late-stage attempt at reputational risk management?

Risk 3: Legal and regulatory

Of course, trial by media isn’t the only negative impact that can come out of unethical tech. What about when it veers from the immoral into the outright illegal?

A lot of the ways in which tech causes harm are not actually regulated against… yet. Legislation is slow-moving and highly influenced by political factors. What we have seen over the past decade is that the pace of change in tech companies far outstrips that of legal institutions. There are no legal precedents for many of the harms we now see being perpetrated by big tech companies. Cue one of my favourite quotes for summing up modern life:

The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology. — E. O. Wilson

Where there are strong regulatory frameworks, companies are starting to run into trouble. A recent investigation by the Information Commissioner’s Office (ICO), the UK’s data watchdog, has found that TikTok may have breached data protection law: the company may have processed the data of children under 13 without parental consent, and may have processed special category data without a legal basis. TikTok could end up facing a £27m penalty.

This news comes a few weeks after the announcement that Meta had been fined £349m by the Irish data watchdog. That investigation found that Instagram set the accounts of 13-to-17-year-olds to “public” by default, a design choice that exposed children’s personal data (like phone numbers and email addresses) to the public internet.

Legislation around data protection is perhaps more advanced than it is for other areas of harmful tech, but this is changing. The EU has proposed a new law, the AI Act, which aims to mitigate harmful uses of AI by outlawing AI technologies that cause physical or psychological harm. For companies that don’t consider ethical design from the outset, the regulatory risks are growing.

Risk 4: Commercial

Commercial risk isn’t really a distinct category; it is the culmination of all of the above. Organisational decisions that create risks to public safety lead to a negative impact on reputation. This in turn puts the business’s bottom line at risk. In some cases, the organisation may also face legal consequences.

In February of this year, it was reported that Facebook had seen its daily active users drop for the first time in the 18-year history of the company. Their shares dropped by more than 20%, wiping around $200bn off the company’s stock market value. Now, correlation is not causation. We cannot attribute this stock market activity directly to growing negative sentiment towards Facebook. There are other factors at play, like the growth of TikTok as a major competitor.

But it wouldn’t be too far a leap to assume that the regular reporting on Facebook’s damaging impact on society might make investors wary. They just might not want to take the risk.

Mitigating risks

Acknowledging the risks of bad design is only the beginning. To embed real change requires systemic thinking. The Center for Humane Technology has created a Framework for Changing Complex Systems, which shows the varying degrees of impact that can be triggered by intervening at different levels of the system.

Diagram: the Center for Humane Technology’s “Framework for Changing Complex Systems”, inspired by Donella Meadows’ 12 Leverage Points to Intervene in a System. It depicts a seesaw on which intervention points have progressively greater impact as they move to the right. From left to right, the leverage points are: 1. Platform Changes, 2. Internal Governance, 3. External Regulation, 4. Business Model, 5. Economic Goal, 6. Culture & Paradigm.

For Facebook to truly address the issues discussed above and start to have a net positive impact on society, a new logo isn’t going to cut it. They could start with feature and platform changes, or updates to their internal governance and team structures. But to make the biggest impact, they need to be rethinking their business model and their economic goals. I can’t see that happening any time soon.

The positive is that the wider culture is becoming more and more aware of these issues. Ultimately, changing societal sentiment might force a paradigm shift. There’s hope for us yet.

What do you think?

How do you see ‘risk’ feeding into the conversation on ethical and inclusive design?

Share your thoughts. I would love to hear from you. I’m on LinkedIn.

🎩 Hat tip to Mike Tattersall whose post on human-centred thinking as risk mitigation got me thinking.

--


Service Designer working on public services and committed to design ethics and trauma-informed practice