Developing UX/AI thinking for a new design world

Steven Spielberg’s movie reminded us of how far AI will bring us. UX/AI designers will remind us of our humanity in the designs of the future

Darren Yeo
UX Collective

--

Steven Spielberg’s movie reminded us of how far AI will bring us. UX/AI designers will remind us of our humanity in the designs of the future (image source: Yeo)
UX Collective Editor’s Pick
Aug 14

I remember a time when I was working with an airline cargo operations department. Despite our various attempts to find a human-centric topic within cargo operations, we quickly realized that there were very few such instances. That was understandable, since the customers are far removed from this back-office action. Besides, there were bigger problems, such as improving operational efficiency and practicing cost management. At the same time, the steering committee wished to showcase the value of design thinking: how emotional connections built through culture, collaboration, and empathy foster a better, more productive working relationship with the cargo operations personnel.

After various brainstorming sessions, we landed on the observation that every cargo package looked mundane and void of any expression. Pragmatically, the packaging was necessary to protect the contents across their journey, but for the purpose of design thinking, we saw an opportunity to inject some personality into the banal cargo packaging. Within minutes, we were able to identify items ranging from common objects (e.g., pharmaceuticals, food) to weird cargo (e.g., incubator eggs, F1 car racing parts). From those objects, we imagined the exchange between the sender and the receiver, as well as the emotions attached to each piece of cargo. What came out of the design thinking workshop, with over 80 participants, were ideas generated from the emotions attached to the cargo and its end-users.

The cargo boxes were given personalities based on human feelings.

Personification

This took place before the boom in generative AI and was based on a familiar design exercise known as personification. A literary device, personification is the act of giving a human quality or characteristic to something that is not human. This differs from another familiar concept, personalization, which is the action of designing or producing something to meet someone’s individual requirements. The latter is well known to UX designers, who link user metadata to the digital interface. In contrast, personification focuses on giving human qualities so as to create an emotional appeal when done appropriately.

Amazon incorporated its iconic arrow with a box illustration into its app logo, intentionally giving a smile to the banal cargo box. (source: Amazon)

One of the most obvious examples is how components have taken on a more organic shape over the years. Call them fillets, rounded corners, or squircles: curves are more pleasing to the eye because of their similarities to human anatomy. In modern UI, some primary buttons could easily be mistaken for a thumb if not for their contrasting color and microcopy. Even gradients could be argued as a case of personification, as skin pigmentation distributes different tones of color evenly. Amazon, for its part, incorporated its iconic arrow with a box illustration into its app logo, intentionally giving a smile to the banal cargo box.

There is, however, an even better example of a device that “breathes.” Take a moment to observe a smart speaker in action. As a person activates a trigger using their voice, a feedback response is given. Through a series of interactions, we can witness a smart speaker come to life with rhythmic patterns from its light indicators and audio chimes. All of this is a form of mimicry of human gestures and behavior, bringing an element of humanness to the product. Contrast this with a utilitarian on/off switch without any sensory feedback. Yes, it gets the job done, but it is void of any human connection. Just like cargo boxes for back-office operations.

A smart speaker comes to life with rhythmic patterns from its light indicators and audio chimes. (source: Google)

Anthropomorphism

Yet there is more than meets the eye. The smart speaker’s other defining feature is processing spoken language to carry out subsequent actions. Suddenly, it can be perceived as more human, more ‘intelligent’. This opens the door to a whole new discipline of user experience known as conversational UX, where human conversations become part of the design.

The term anthropomorphism is often confused with personification. Here is the key difference between the two:

Personification is the use of figurative language to give inanimate objects or natural phenomena humanlike characteristics in a metaphorical and representative way. Anthropomorphism, on the other hand, involves non-human things displaying literal human traits and being capable of human behavior.
Masterclass

The key distinction lies in how human traits are applied to non-human objects: personification suggests human attributes figuratively, while anthropomorphism applies them directly. Smart speakers sit in the transition between personification and anthropomorphism because users begin to imagine inanimate objects coming to life as human beings. That explains why my three-year-old daughter cried when my Google Home Mini was not listening to her voice command. She treated the inanimate object like a human, even though it didn’t look like one.

A new design world

Today, she is much older and knows the smart speaker is a digital assistant. Meanwhile, the AI horizon has continued to develop at a tremendous pace with the emergence of large language models (LLMs), raising anthropomorphism to a new level with their ability to sustain continuous natural dialogue through prompts. And although the most recognizable interface is ChatGPT, an LLM’s API allows further exploration into other forms of interfaces, such as voice assistants, robotics, and even humanoids.

A famous fable comes to mind: the story of Pinocchio, a wooden puppet who came to life. Desiring to be a real boy, Pinocchio had to learn many hard lessons about human behavior. The story draws parallels with artificial intelligence, and Steven Spielberg was on the right track when he directed the underrated yet highly emotional movie A.I. Artificial Intelligence, a project long developed by Stanley Kubrick (director of 2001: A Space Odyssey).

When will inanimate objects reach a state of extreme anthropomorphism, exhibiting their own emotions and judgment? (image source: Disney; Warner Brothers and DreamWorks)

In this story, the protagonist, David, is a prototype Mecha child given to a couple grieving over their son’s medical condition. Over time, David develops a love for the mother but is rejected after the son recovers. Abandoned, David and his robotic bear Teddy embark on a quest to find their own “Blue Fairy” so that David can become real. However, the movie ends with a sobering truth: there is no way for David to become a human being, even with the most advanced technology of the future. David spends his happiest day recreating his “mother”, allowing them a final day together before he finally falls asleep.

UX/AI

As I reflect on Spielberg’s movie, I wonder when the time will come when inanimate objects reach a state of extreme anthropomorphism, exhibiting their own emotions and judgment. At the pace of artificial intelligence, we may reach that point within our lifetime. But before that happens, humanity-centered designers with an innate knowledge of user experience and artificial intelligence need to step in. A new breed of UX/AI designers needs to join forces with other professions in similar fields to provide an ethical yet delightful human experience through anthropomorphic design.

Here are three observable instances where a UX/AI designer comes into play:

1. Dealing with the uncanny valley

First hypothesized by robotics professor Masahiro Mori, the uncanny valley effect describes how a human being’s emotional response dips drastically when an inanimate object becomes too human-like. Although the theory has its critics, who argue that younger generations (i.e., Gen Z, Gen Alpha) may be more accepting of human-like machines, there is a general consensus that mistrust arises when there is a mismatch in experience.

Mori’s uncanny valley: emotional response dips drastically as an object becomes too human-like. (source: Mori)

Try watching Spielberg’s A.I. to compare your emotional responses to the humanoid David and the robotic bear Teddy. Chances are you will experience a higher sense of creepiness in some of David’s interactions. While the uncanny valley is often associated with visual appearance, conversational interfaces like chatbots generate similar reactions through inappropriate responses. Microsoft’s early AI experiment, Tay, created one such incident when it hurled abusive tweets at people before being shut down.

Therefore, for UX/AI designers, strategizing the best aesthetic treatment is of utmost importance, but executing such a transdisciplinary practice requires the designer to have good taste. Here is an excerpt beautifully written by Caio Braga and Fabricio Teixeira:

Taste is the ability to identify quality. To understand quality we need to look critically at: materials that are fit for purpose, ergonomy that considers audience needs, effective use of affordances, usability, accessibility, harmonic color choices, aesthetic choices that elicit emotion, intentional visual hierarchy — amongst others. Taste is in the observer, quality is in the object. The concept of taste becomes more productive when framed objectively around quality, and in ways that are measurable or at least comparable.

At the same time, the ability to measure uncanniness as a UX metric is also worth developing. From Meta’s much-criticized low-quality avatar design to overly expressive CGI characters, such as those in Cats, the challenge is to find the right emotional balance without tipping over. We are likely to see a new set of UX/AI methods for testing uncanniness and other emotional responses from end-users.
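As a starting point, here is a minimal sketch of how a team might score uncanniness from post-session survey ratings. The item names, the 7-point scale, and the normalization are illustrative assumptions for this article, not an established research instrument.

```python
from statistics import mean

# Hypothetical 7-point survey items, where a higher score means a
# creepier, more mismatched impression of the AI or avatar.
ITEMS = ["eerie", "unnatural", "creepy", "mismatched_voice_and_face"]

def uncanniness_score(ratings: dict[str, list[int]]) -> float:
    """Average each item's participant scores (1-7), then average the
    item means, and normalize to a 0-1 uncanniness score."""
    item_means = [mean(ratings[item]) for item in ITEMS]
    # Shift the 1-7 scale down to 0-6, then divide to land in 0-1.
    return (mean(item_means) - 1) / 6

# Example: three participants rating a humanoid chatbot session.
scores = {
    "eerie": [5, 6, 4],
    "unnatural": [6, 6, 5],
    "creepy": [4, 5, 5],
    "mismatched_voice_and_face": [7, 6, 6],
}
print(round(uncanniness_score(scores), 2))
```

A normalized score makes runs comparable across prototypes, so a team could track whether a redesign moved the experience out of the valley.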

2. Dealing with pareidolia of consciousness

Ever thought you saw a face in everyday objects or in the natural environment? Psychologists call this phenomenon pareidolia: the tendency to perceive a meaningful pattern where none actually exists.

Pareidolia: the tendency to perceive a meaningful pattern, such as a face, where none actually exists. (image source: Taubert)

This same phenomenon also exists in AI when a user imagines consciousness inside an LLM when there is actually none. And as models continue to become more sophisticated, detection of the illusion becomes harder because conversations will feel real.

One such company to watch is Air AI, which claims to be the world’s first conversational AI tool that can engage in full-length phone calls, lasting anywhere from 10 to 40 minutes, while sounding just like a real human. In other words, there is a high chance of a person speaking to an AI bot while imagining it to be a real human.

UX/AI designers translate AI governance, standards, and principles into actual product experiences. (source: Air AI)

In such instances, the UX/AI designers should step in to create systemic solutions that benefit all parties. They are to create experiences that are in accordance with existing AI governance, standards, and principles. Thankfully, resources, such as Microsoft’s Responsible AI, are publicly available. The role of UX/AI designers is thus to translate these policies into actual product experiences.

One way of breaking pareidolia is to add a preamble informing users of any AI involvement. Conversations can also be synced to a user’s account so that users can refer back to annotations made by the AI. Such interventions create transparency and give the user agency when interacting with a more anthropomorphic AI. Lastly, to reduce the propensity for deepfakes, user authenticity can be established through the increasing use of verification badges within a user’s profile or other newer authentication methods.
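To make the preamble idea concrete, here is a minimal sketch of a guardrail that refuses to emit AI-generated messages until a disclosure preamble has been shown. The `Conversation` class and its wording are hypothetical, not any real product’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """A user-facing chat transcript with an explicit AI disclosure."""
    messages: list[str] = field(default_factory=list)
    ai_disclosed: bool = False

    def start(self) -> str:
        # Surface the preamble before any AI-generated content appears.
        self.ai_disclosed = True
        preamble = ("You are chatting with an AI assistant. "
                    "Responses are machine-generated and may contain errors.")
        self.messages.append(preamble)
        return preamble

    def add_ai_message(self, text: str) -> None:
        # Guardrail: no AI output without the disclosure shown first.
        if not self.ai_disclosed:
            raise RuntimeError("AI disclosure preamble must be shown first")
        self.messages.append(f"[AI] {text}")

convo = Conversation()
convo.start()
convo.add_ai_message("Happy to help with your booking.")
print(len(convo.messages))  # preamble + one AI reply
```

Because the transcript persists in `messages`, the same structure supports syncing the conversation to a user’s account for later reference, as described above.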

3. Dealing with AI Hallucination

Perhaps the most famous debacle in the story of Pinocchio was his growing nose with each lie he told the Blue Fairy. But we should take a closer look. Was Pinocchio lying with malicious intent or out of desperation for his situation? Or maybe he did not have the self-awareness to recognize the false information he was providing, especially since he was only about a day old?

In the world of AI, this is better known as hallucination: the LLM attempts to provide a plausible answer. The output may sound convincing because the LLM uses statistics to generate language that is grammatically and semantically correct, yet it may be factually inaccurate or even nonsensical. If the cause is largely the quality of the training data, the AI could be perceived as naive; alternatively, the user may be at fault for not providing the right parameters or doing any fact-checking. In either case, the inconsistencies may cast doubt on every output from the AI.

A classic example of ChatGPT hallucination: the chatbot fabricates a response from the URL slugs even though the URL itself is fake. (image source: wiki)

While there are methods to mitigate hallucinations, such as writing clearer and more specific prompts with examples, UX/AI designers can also incorporate user-centric features that drive improvement. We already see this in ChatGPT, where users can provide binary 👍 👎 feedback reporting whether an output was accurate. More can be done in this area: modules that let users adjust the temperature of randomness with ease, or accuracy-strength meters that display the confidence of a result.
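As a sketch of those two features together, the snippet below clamps a user-adjustable temperature slider and aggregates binary feedback into a naive accuracy-strength meter. The class, the 0 to 2 temperature range, and the meter formula are illustrative assumptions, not any specific LLM product’s API.

```python
from collections import Counter

class AIResponseControls:
    """Sketch of user-facing controls for an LLM-backed feature:
    a clamped 'randomness' (temperature) slider and 👍/👎 feedback."""

    def __init__(self, temperature: float = 0.7):
        self.temperature = self._clamp(temperature)
        self.feedback = Counter()

    @staticmethod
    def _clamp(value: float) -> float:
        # Keep the slider inside a safe 0-2 range.
        return max(0.0, min(2.0, value))

    def set_temperature(self, value: float) -> float:
        self.temperature = self._clamp(value)
        return self.temperature

    def record_feedback(self, thumbs_up: bool) -> None:
        self.feedback["up" if thumbs_up else "down"] += 1

    def accuracy_strength(self) -> float:
        """Naive confidence meter: share of thumbs-up among all votes."""
        total = sum(self.feedback.values())
        return self.feedback["up"] / total if total else 0.0

controls = AIResponseControls()
controls.set_temperature(3.5)  # out-of-range input is clamped to 2.0
for vote in (True, True, False, True):
    controls.record_feedback(vote)
print(controls.temperature, controls.accuracy_strength())
```

A production meter would draw on model-side signals rather than votes alone, but even this simple aggregate gives users a visible cue about how much to trust a result.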

With the new anthropomorphic entrants of LLMs, people may feel a shift in the digital landscape: less focus on native app or web development, and more on initiatives that fully support AI. We see this in investors raising the median pre-money valuation for generative AI more than two-fold, to $90 million, in 2023. In fact, in a recent report, one PitchBook analyst predicted that, at a 32% compound annual growth rate (CAGR), the market could reach $98.1 billion by 2026. So, does this spell the end of the digital world that we once knew?

The median pre-money valuation for generative AI rose more than two-fold to $90 million in 2023; at a 32% CAGR, the market could reach $98.1 billion by 2026. (image source: PitchBook)

Multimodal design

The answer is no, because the UX/AI designer will need to factor in multimodality. Rather than migrate away from existing applications, integration using the same AI model is highly possible. This would mean configuring AI features into existing digital web and app products while maintaining their congruency with newer, more unique applications. Whether it is voice to text, text to image, or image to application, the end result is an accumulation of know-how for creating human friendliness across multiple modes of interaction. Similar to establishing a foundational model in AI, there is a foundational model in design. Only through the harmonious interconnection of systems can we create a family of products, so to speak.

Wouldn’t it be nice to have a family of products that have an appealing emotional response? Where you are assured, even certain, of what the AI is offering to do for you? Our journey of personification and anthropomorphism starts and concludes with user experience being at the heart of it all. Not only considering the human response of the end-users but also the human qualities of the product itself. Perhaps one day we will embrace an aspect of human quality in all inanimate objects, including the design of cargo boxes.


Rethinking Design. Redesigning Thinking. Living, Breathing Experience.