What is an AI-affirming future?

The role of UX design and research in AI systems

Amanda Snellinger, Ph.D.
UX Collective

--

DALL·E 2 prompt: Robot lost in a topiary maze.

There’s a lot of dissonance in the tech world these days. There is the hype of generative AI transforming work, productivity, and digital interaction as we know them. Positive disruption! Yet many, rightfully, worry about its impacts on labor markets, disinformation, exploitation, inequality, and the further unraveling of the social fabric. Negative disruption. The rolling layoffs in the tech sector may seem like a harbinger of this inevitable AI future.

Here I sketch out how user experience design and research (UXDR) [1] can play an affirming role, offering an alternative vision to the AI moral panic. Yes, the general applications of AI will continue to be integrated into the digital systems that already impact all aspects of our lives: our work, our education, our health, our economy, our communication, our media, our bureaucracy, our government, etc. The question is not if, but where and how AI will be integrated. What this ‘era of AI’ affords us is the opportunity to be intentional about what we want AI systems to do for us and how they should be instituted.

UXDR’s user-centered approach to human-computer interaction (HCI) was integral to the evolution of digital product development beyond the ‘build it and they will come’ tech ethos. These disciplines figured out how to make tech usable and engaging (though not always to our benefit). And UXDR has also been at the vanguard of establishing design and ethical frameworks for how to institute AI into our digital lives.

Let us continue being at this forefront by moving beyond the tech industry into other sectors. Doing so requires us to broaden our view of who the end user is, how we design for them, and even when AI should be incorporated into the systems that serve them.

Who’s responsible? Tech regulation and offloading AI externalities

Sam Altman, co-founder and CEO of OpenAI, has made it clear that it’s up to governments and society to determine the guardrails that should be put on AI technology. He has justified OpenAI’s iterative release of these models on the grounds that it allows us to grapple with the implications of this technology’s capabilities and determine its usage accordingly. This is reckless to some, including 1,100 technologists — some notorious and some respected — who called for a six-month pause on “training AI systems more powerful than GPT-4” to allow safety protocols and regulation to catch up. Other critically vocal technologists[2] have rebutted the moratorium plea as disingenuous considering the signatories and their motivations.[3]

Despite the polarization around AI, there is broad agreement that tech regulation is necessary. And yet, it is a partial solution that is and will continue to be woefully behind; America has a notoriously poor track record of regulating technology. And while the EU, Canada, and China are a bit ahead of us, no government or international institution is positioned to mitigate all harm through regulation alone. Nevertheless, it’s worth taking a cue from these regulatory aspirations and distilling them into actions we can take to circumvent harm.

Let’s take the Blueprint for an AI Bill of Rights as an example. In October 2022, the Office of Science and Technology Policy (OSTP) released this document. It’s not legislation nor is it a Biden administration policy. Rather, it’s meant to provide guidance for “making automated systems work for the American people.” OSTP spent a year listening to stakeholders and experts from across industry and the public sector, including communities and citizens who have been and will be impacted by AI.[4] I am impressed with the due diligence and consideration that went into this framework and I hope its spirit is instituted into law.

So, what makes these automated systems work for the people? What struck me in reading this document is that the requirements are the same ones that make digital products compelling enough for customers to use and pay for. Beyond contributing value, they must be accessible, equitable, effective, convenient, and maintained. Designing these systems requires building and tailoring for specific use cases, proactive risk assessment, and system transparency. The return on investment and performance metrics may differ; however, the process entails the same components as product development.

Intentionally designing human-centered AI systems

At its core, design is about intention. Good design provides a seamless solution for a given activity or scenario. In other words, design aims to provide solutions, not cause problems. UX design develops interaction for and within a given context.

UX research provides contextual insight into the who, what, why, and how of intentionally designed systems. When it comes to automated systems that serve a lot of people[5], particularly public ones, we must widen our aperture beyond the customer/end-user context to account for the various stakeholders affected by these systems, as well as the socio-political dynamics that may impede their accessible and equitable institution. Doing this well may, in fact, afford us an opportunity to redesign aspects of our society that no longer serve us collectively.

The AI Bill of Rights calls for continual monitoring of these automated systems. This requirement is uncontroversial since AI’s brittleness impedes accuracy and can defy user expectations. The tech industry’s answer to this constraint is reinforcement learning from human feedback (RLHF)[6]. And while RLHF is necessary to improve model output, it is a partial monitoring mechanism that should be buttressed by evaluative user research.
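
To make the “partial monitoring” point concrete, here is a minimal, hypothetical Python sketch of the kind of signal RLHF pipelines typically collect: a rater’s preference between two model outputs. The names (`PreferenceLabel`, `to_reward_training_pair`) are invented for illustration and are not drawn from any particular RLHF library.

```python
from dataclasses import dataclass
from typing import Literal, Tuple


@dataclass
class PreferenceLabel:
    """One unit of RLHF-style feedback: a rater's choice between two outputs."""
    prompt: str
    response_a: str
    response_b: str
    preferred: Literal["a", "b"]


def to_reward_training_pair(label: PreferenceLabel) -> Tuple[str, str]:
    """Convert a rater's preference into the (chosen, rejected) pair a reward model learns from."""
    if label.preferred == "a":
        return label.response_a, label.response_b
    return label.response_b, label.response_a


# Example: the captured signal is "which answer reads better in isolation."
# It says nothing about whether the system solved the user's actual problem,
# who was affected downstream, or whether adopting it beats the status quo.
# Those are the questions evaluative and discovery research are built to answer.
label = PreferenceLabel(
    prompt="Summarize this eligibility notice for a benefits applicant.",
    response_a="A plain-language summary of the applicant's next steps.",
    response_b="A summary dense with agency jargon and acronyms.",
    preferred="a",
)
chosen, rejected = to_reward_training_pair(label)
print(f"Train reward model on: chosen={chosen!r}, rejected={rejected!r}")
```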

Evaluative user research focuses on how users interact with the system and whether it effectively solves the problems it was designed to address. Because change is hard, people will not adopt a solution unless it provides more benefit than their status quo; therefore, it’s important to validate a system’s effectiveness on a regular basis.

But it’s also important to monitor AI systems’ downstream effects if we want to protect people’s rights. Discovery research should be instituted to understand AI systems’ broader impacts on how people work, learn, shop, and socialize, so that we can optimize their benefits and mitigate unintended harm.

At a high level, this is what UXDR can contribute. Let’s pivot towards a collective approach to designing all systems and processes from this human-centered design ethos. It’s in our power to shape the role of AI technology in society and create a better future for all. Let’s seize this opportunity to redefine what UXDR can achieve: not just the user experience of digital products, but whole systems designed to serve us.

In true design thinking form, I will leave you with a set of ‘how might we’ provocations:

1. How might we keep these systems from reproducing the harmful biases they were trained upon?

2. How might we ensure these systems deliver on their value proposition and actually work for people?

3. How might we circumvent downstream harm or inconvenience for the various stakeholders that AI systems are meant to serve?

4. How might we circumvent Patricia Lockwood’s prophecy, “The future of intelligence must be about search, while the future of ignorance must be about the inability to evaluate information”?

To conclude on a somewhat hopeful note: UXDR roles are not bullshit jobs that AI will replace. We have a crucial role in shaping the society we want, a society that will inevitably include AI and, unfortunately, the corporate finance capitalism that propels it. Let’s apply our expertise beyond the tech industry to help our governments and other sectors determine how we, as humanity, want to institute these AI systems.

I leave you with one last provocation. What’s an AI-affirming path forward and what is your role in it?

“Think of people. People are the answer to the problems of bits.” — Lanier, 2023

“Solidarity is the inversion of the algorithmic state of exception.” — McQuillan, 2022

The positions and opinions expressed here are those of Amanda T. Snellinger, the author, and hers alone. They are not endorsed by her employer; likely not by her father or her cat, who don’t understand this topic; and most definitely not by any large language models, which don’t understand anything in the human, ‘I get it’ sense of understanding.

Endnotes

[1] Here I am speaking from my specific expertise, UXR and anthropology. I urge others in technical and non-technical disciplines to contribute to this prefigurative design vision, because the intervention I’m proposing should be interdisciplinary and comprise diverse perspectives.

[2] Timnit Gebru takes particular umbrage at the letter for citing her co-authored article to bolster the claim that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research,” because it elides what she and her co-authors were actually asserting: that treating large language models as comparable to human intelligence is itself one of the biggest harms AI poses, causing massive downstream risk.

[3] I am not going to weigh in on this debate. My position is that AI is a political technology. I am critical of numerous aspects of model development, including: the machine learning discipline being empirically dubious from a benchmarking perspective; the structural biases inherent in the data sets used to train these models; and the lack of model transparency, particularly around GPT-4. These concerns undergird a broader critique: this ‘scientific discipline’ is premised on a problematic Western epistemology rooted in post-colonial global capitalism.

[4] You can watch the town halls here: Listening to the American People | OSTP | The White House.

[5] Civic tech and service design are crucial to extending and implementing UX principles for AI beyond digital products.

[6] I worry that this task will become “dirty work” akin to exploitative forms of labor like content moderation.


Anthropologist, UX researcher, and product strategist who mainly supports incubations in the 0-to-1 phase.